January 22, 2018

hackergotchi for Serbian GNU/Linux

Serbian GNU/Linux

Serbian 2018 Openbox is available



A new release of the Serbian distribution is available for download, intended for computers with modest specifications and for everyone who likes operating systems with low resource consumption. The Openbox window manager is used, and you can learn more about configuring it here. Debian in its stable version, codenamed Stretch, was used as the base for this release. The ISO image is about 1.4 GB to download, while after installation the space used on the hard disk will be slightly over 4 GB.

Users who prefer it can have icons visible in the menu. All you need to do is find the file icons-menu.xml in the folder at the path .config/openbox and rename it to the standard menu.xml. When Openbox is restarted, the menu will look like the one in the picture. Additional screenshots can be viewed here.
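If you prefer the terminal, the rename takes a few commands; a minimal sketch, assuming the files live under ~/.config/openbox and that you want to keep the original menu as a backup:

$ cd ~/.config/openbox
$ cp menu.xml menu.xml.bak      # keep the icon-less menu, just in case
$ mv icons-menu.xml menu.xml    # switch to the menu with icons
$ openbox --reconfigure         # reload Openbox without logging out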



The applications that ship with Serbian 2018 Openbox are on the lightweight side, and the selection looks like this:

There are also tools for archiving, the clipboard, system cleaning, partition management, appearance settings and so on. On this page you will find some tips you may need, related to the Openbox window manager and the accompanying programs. A couple of the more important points: the keyboard layout is switched with the Ctrl+Shift shortcut, and the configured options are la, ћи and en. If you installed on a laptop, uncomment the fdpowermon line in the autostart file so that you can monitor the battery status.
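The battery-monitor change amounts to uncommenting a single line; a sketch, assuming the autostart file is the usual ~/.config/openbox/autostart and that the entry looks roughly like this:

# before: the entry is commented out and fdpowermon never starts
#fdpowermon &
# after: the leading '#' is removed so the battery monitor starts with the session
fdpowermon &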



If you are new to Linux, the installation process is simple and takes about ten minutes, and here you can read how to prepare the installation media. The graphical interface of the installer defaults to the Cyrillic option, while keyboard control will be in Latin. If you have not yet seen what the installation looks like, you can view it in pictures, and video material is also available. Once the freshly installed system boots, take a look in the Documents folder, where a few tips have been written down. Should someone from another language area need it, a file eng-menu.xml in English can also be found there.



22 January, 2018 10:53AM by Debian Srbija (noreply@blogger.com)

hackergotchi for Ubuntu developers

Ubuntu developers

Daniel Pocock: Keeping an Irish home warm and free in winter

The Irish Government's Better Energy Homes Scheme gives people grants from public funds to replace their boiler and install a zoned heating control system.

Having grown up in Australia, I feel it is always cold in Ireland and would be satisfied with a simple control switch with a key to make sure nobody ever turns it off, but that isn't what they had in mind for these energy efficiency grants.

Having recently stripped everything out of the house, right down to the brickwork and floorboards in some places, I'm cautious about letting any technologies back in without checking whether they are free and trustworthy.

bare home

This issue would also appear to fall under the scope of FSFE's Public Money Public Code campaign.

Looking at the last set of heating controls in the house, they have been there for decades. Therefore, I can't help wondering, if I buy some proprietary black box today, will the company behind it still be around when it needs a software upgrade in future? How many of these black boxes have wireless transceivers inside them that will be compromised by security flaws within the next 5-10 years, making another replacement essential?

With free and open technologies, anybody who is using it can potentially make improvements whenever they want. Every time a better algorithm is developed, if all the homes in the country start using it immediately, we will always be at the cutting edge of energy efficiency.

Are you aware of free and open solutions that qualify for this grant funding? Can a solution built with devices like Raspberry Pi and Arduino qualify for the grant?

Please come and share any feedback you have on the FSFE discussion list (join, reply to the thread).

22 January, 2018 09:20AM

Dustin Kirkland: Dell XPS 13 with Ubuntu -- The Ultimate Developer Laptop of 2018!


I'm the proud owner of a new Dell XPS 13 Developer Edition (9360) laptop, pre-loaded from the Dell factory with Ubuntu 16.04 LTS Desktop.

Kudos to the Dell and the Canonical teams that have engineered a truly remarkable developer desktop experience.  You should also check out the post from Dell's senior architect behind the XPS 13, Barton George.

As it happens, I'm also the proud owner of a long loved, heavily used, 1st Generation Dell XPS 13 Developer Edition laptop :-)  See this post from May 7, 2012.  You'll be happy to know that machine is still going strong.  It's now my wife's daily driver.  And I use it almost every day, for any and all hacking that I do from the couch, after hours, after I leave the office ;-)

Now, this latest XPS edition is a real dream of a machine!

From a hardware perspective, this newer XPS 13 sports an Intel i7-7660U 2.5GHz processor and 16GB of memory.  While that's mildly exciting to me (as I've long used i7's and 16GB), here's what I am excited about...

The 500GB NVMe storage and a whopping 1239 MB/sec of I/O throughput!

kirkland@xps13:~$ sudo hdparm -tT /dev/nvme0n1
/dev/nvme0n1:
Timing cached reads: 25230 MB in 2.00 seconds = 12627.16 MB/sec
Timing buffered disk reads: 3718 MB in 3.00 seconds = 1239.08 MB/sec

And on top of that, this is my first QHD+ touch screen laptop display, sporting a magnificent 3200x1800 resolution.  The graphics are nothing short of spectacular.  Here's nearly 4K of Hollywood hard "at work" :-)


The keyboard is super comfortable.  I like it a bit better than the 1st generation.  Unlike your Apple friends, we still have our F-keys, which is important to me as a Byobu user :-)  The placement of the PgUp, PgDn, Home, and End keys (as Fn + Up/Down/Left/Right) takes a while to get used to.


The speakers are decent for a laptop, and the microphone is excellent.  The webcam is placed in an odd location (lower left of the screen), but it has quite nice resolution and focus quality.


And Bluetooth and WiFi, well, they "just work".  I got 98.2 Mbits/sec of throughput over WiFi.

kirkland@xps:~$ iperf -c 10.0.0.45
------------------------------------------------------------
Client connecting to 10.0.0.45, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[ 3] local 10.0.0.149 port 40568 connected with 10.0.0.45 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.1 sec 118 MBytes 98.2 Mbits/sec

There's no external display port, so you'll need something like this USB-C-to-HDMI adapter to project to a TV or monitor.


There's 1x USB-C port, 2x USB-3 ports, and an SD-Card reader.


One of the USB-3 ports can be used to charge your phone or other devices, even while your laptop is suspended.  I use this all the time, to keep my phone topped up while I'm aboard planes, trains, and cars.  To do so, you'll need to enable "USB PowerShare" in the BIOS.  Here's an article from Dell's KnowledgeBase explaining how.


Honestly, I have only one complaint...  And that's that there is no Trackstick mouse (which is available on some Dell models).  I'm not a huge fan of the Touchpad.  It's too sensitive, and my palms are always touching it inadvertently.  So I need to use an external mouse to be effective.  I'll continue to provide this feedback to the Dell team, in the hopes that one day I'll have my perfect developer laptop!  Otherwise, this machine is a beauty.  I'm sure you'll love it too.

Cheers,
Dustin

22 January, 2018 08:51AM by Dustin Kirkland (noreply@blogger.com)

LiMux

New Work in the public sector – vote now!

At the Open Government Day 2017, the City of Munich's E- and Open Government team asked whether public administration can also work in an open, transparent, innovative and agile way. For the first time, a municipal event dealt with … Read more

The post New Work im öffentlichen Dienst – Jetzt abstimmen! first appeared on the Münchner IT-Blog.

22 January, 2018 07:53AM by Stefan Döring

hackergotchi for Qubes

Qubes

Qubes Air: Generalizing the Qubes Architecture

The Qubes OS project has been around for nearly 8 years now, since its original announcement back in April 2010 (and the actual origin date can be traced back to November 11th, 2009, when an initial email introducing this project was sent within ITL internally). Over these years Qubes has achieved reasonable success: according to our estimates, it has nearly 30k regular users. This could even be considered a great success given that 1) it is a new operating system, rather than an application that can be installed in the user’s favorite OS; 2) it has introduced a (radically?) new approach to managing one’s digital life (i.e. an explicit partitioning model into security domains); and last but not least, 3) it has very specific hardware requirements, which is the result of using Xen as the hypervisor and Linux-based Virtual Machines (VMs) for networking and USB qubes. (The term “qube” refers to a compartment – not necessarily a VM – inside a Qubes OS system. We’ll explain this in more detail below.)

For the past several years, we’ve been working hard to bring you Qubes 4.0, which features state-of-the-art technology not seen in previous Qubes versions, notably the next generation Qubes Core Stack and our unique Admin API. We believe this new platform (Qubes 4 represents a major rewrite of the previous Qubes codebase!) paves the way to solving many of the obstacles mentioned above.

The new, flexible architecture of Qubes 4 will also open up new possibilities, and we’ve recently been thinking about how Qubes OS should evolve in the long term. In this article, I discuss this vision, which we call Qubes Air. It should be noted that what I describe in this article has not been implemented yet.

Why?

Before we take a look at the long-term vision, it might be helpful to understand why we would like the Qubes architecture to further evolve. Let us quickly recap some of the most important current weaknesses of Qubes OS (including Qubes 4.0).

Deployment cost (aka “How do I find a Qubes-compatible laptop?”)

Probably the biggest current problem with Qubes OS – a problem that prevents its wider adoption – is the difficulty of finding a compatible laptop on which to install it. Then, the whole process of needing to install a new operating system, rather than just adding a new application, scares many people away. It’s hard to be surprised by that.

This problem of deployment is not limited to Qubes OS, by the way. It’s just that, in the case of Qubes OS, these problems are significantly more pronounced due to the aggressive use of virtualization technology to isolate not just apps, but also devices, as well as incompatibilities between Linux drivers and modern hardware. (While these driver issues are not inherent to the architecture of Qubes OS, they affected us nonetheless, since we use Linux-based VMs to handle devices.)

The hypervisor as a single point of failure

Since the beginning, we’ve relied on virtualization technology to isolate individual qubes from one another. However, this has led to the problem of over-dependence on the hypervisor. In recent years, as more and more top notch researchers have begun scrutinizing Xen, a number of security bugs have been discovered. While many of them did not affect the security of Qubes OS, there were still too many that did. :(

Potential Xen bugs present just one, though arguably the most serious, security problem. Other problems arise from the underlying architecture of the x86 platform, where various inter-VM side- and covert-channels are made possible thanks to the aggressively optimized multi-core CPU architecture, most spectacularly demonstrated by the recently published Meltdown and Spectre attacks. Fundamental problems in other areas of the underlying hardware have also been discovered, such as the Row Hammer Attack.

This leads us to a conclusion that, at least for some applications, we would like to be able to achieve better isolation than currently available hypervisors and commodity hardware can provide.

How?

One possible solution to these problems is actually to “move Qubes to the cloud.” Readers who are allergic to the notion of having their private computations running in the (untrusted) cloud should not give up reading just yet. Rest assured that we will also discuss other solutions not involving the cloud. The beauty of Qubes Air, we believe, lies in the fact that all these solutions are largely isomorphic, from both an architecture and code point of view.

Example: Qubes in the cloud

Let’s start with one critical need that many of our customers have expressed: Can we have “Qubes in the Cloud”?

As I’ve emphasized over the years, the essence of Qubes does not rest in the Xen hypervisor, or even in the simple notion of “isolation,” but rather in the careful decomposition of various workflows, devices, apps across securely compartmentalized containers. Right now, these are mostly desktop workflows, and the compartments just happen to be implemented as Xen VMs, but neither of these aspects is essential to the nature of Qubes. Consequently, we can easily imagine Qubes running on top of VMs that are hosted in some cloud, such as Amazon EC2, Microsoft Azure, Google Compute Engine, or even a decentralized computing network, such as Golem. This is illustrated (in a very simplified way) in the diagram below:

Simplified Qubes-as-a-Service scenario

It should be clear that such a setup automatically eliminates the deployment problem discussed above, as the user is no longer expected to perform any installation steps herself. Instead, she can access Qubes-as-a-Service with just a Web browser or a mobile app. This approach may trade security for convenience (if the endpoint device used to access Qubes-as-a-Service is insufficiently protected) or privacy for convenience (if the cloud operator is not trusted). For many use cases, however, the ability to access Qubes from any device and any location makes the trade-off well worth it.

We said above that we can imagine “Qubes running on top of VMs” in some cloud, but what exactly does that mean?

First and foremost, we’d want the Qubes Core Stack connected to that cloud’s management API, so that whenever the user executes, say, qvm-create (or, more generally, issues any Admin API call, in this case admin.vm.Create.*) a new VM gets created and properly connected in the Qubes infrastructure.

This means that most (all?) Qubes Apps (e.g. Split GPG, PDF and image converters, and many more), which are built around qrexec, should Just Work (TM) when run inside a Qubes-as-a-Service setup.
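To make that a bit more concrete, this is roughly what such a call looks like on a local Qubes 4.0 system today (the template and qube names below are only illustrative); under Qubes Air, the same command would be translated by the Core Stack into whatever the hosting Zone's management API expects:

# creates a new qube; under the hood this becomes an admin.vm.Create.* Admin API call
$ qvm-create --class AppVM --template fedora-26 --label red work-mail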

Now, what about the Admin and GUI domains? Where would they go in a Qubes-as-a-Service scenario? This is an important question, and the answer is much less obvious. We’ll return to it below. First, let’s look at a couple more examples that demonstrate how Qubes Air could be implemented.

Example: Hybrid Mode

Some users might decide to run a subset of their qubes (perhaps some personal ones) on their local laptops, while using the cloud only for other, less privacy-sensitive VMs. In addition to privacy, another bonus of running some of the VMs locally would be much lower GUI latency (as we discuss below).

The ability to run some VMs locally and some in the cloud is what I refer to as Hybrid Mode. The beauty of Hybrid Mode is that the user doesn’t even have to be aware (unless specifically interested!) in whether a particular VM is running locally or in the cloud. The Admin API, qrexec services, and even the GUI, should all automatically handle both cases. Here’s an example of a Hybrid Mode configuration:

Qubes hybrid scenario

Another benefit of Hybrid Mode is that it can be used to host VMs across several different cloud providers, not just one. This allows us to solve the problem of over-dependence on a single isolation technology, e.g. on one specific hypervisor. Now, if a fatal security bug is discovered that affects one of the cloud services hosting a group of our VMs, the vulnerability will not automatically affect the security of our other groups of VMs, since the other groups may be hosted on different cloud services, or not in the cloud at all. Crucially, different groups of VMs may be run on different underlying containerization technologies and different hardware, allowing us to diversify our risk exposure against any single class of attack.

Example: Qubes on “air-gapped” devices

This approach even allows us to host each qube (or groups of them) on a physically distinct computer, such as a Raspberry PI or USB Armory. Despite the fact that these are physically separate devices, the Admin API calls, qrexec services, and even GUI virtualization should all work seamlessly across these qubes!

Qubes with some qubes running on "air-gapped" devices

For some users, it may be particularly appealing to host one’s Split GPG backend or password manager on a physically separate qube. Of course, it should also be possible to run normal GUI-based apps, such as office suites, if one wants to dedicate a physically separate qube to work on a sensitive project.

The ability to host qubes on distinct physical devices of radically different kinds opens up numerous possibilities for working around the security problems with hypervisors and processors we face today.

Under the hood: Qubes Zones

We’ve been thinking about what changes to the current Qubes architecture, especially to the Qubes Core Stack, would be necessary to make the scenarios outlined above easy (and elegant) to implement.

There is one important new concept that should make it possible to support all these scenarios with a unified architecture. We’ve named it Qubes Zones.

A Zone is a concept that combines several things together:

  • An underlying “isolation technology” used to implement qubes, which may or may not be VMs. For example, they could be Raspberry PIs, USB Armory devices, Amazon EC2 VMs, or Docker containers.

  • The inter-qube communication technology. In the case of qubes implemented as Xen-based VMs (as in existing Qubes OS releases), the Xen-specific shared memory mechanism (so called Grant Tables) is used to implement the communication between qubes. In the case of Raspberry PIs, Ethernet technology would likely be used. In the case of Qubes running in the cloud, some form of cloud-provided networking would provide inter-qube communication. Technically speaking, this is about how Qubes’ vchan would be implemented, as the qrexec layer should remain the same across all possible platforms.

  • A “local copy” of an Admin qube (previously referred to as the “AdminVM”), used mainly to orchestrate VMs and make policing decisions for all the qubes within the Zone. This Admin qube can be in either “Master” or “Slave” mode, and there can only be one Admin qube running as Master across all the Zones in one Qubes system.

  • Optionally, a “local copy” of GUI qube (previously referred to as the “GUI domain” or “GUIVM”). As with the Admin qube, the GUI qube runs in either Master or Slave mode. The user is expected to connect (e.g. with the RDP protocol) or log into the GUI qube that runs in Master mode (and only that one), which has the job of combining all the GUI elements exposed via the other GUI qubes (all of which must run in Slave mode).

  • Some technology to implement storage for the qubes running within the Zone. In the case of Qubes OS running Xen, the local disk is used to store VM images (more specifically, in Qubes 4.0 we use Storage Pools by default). In the case of a Zone composed of a cluster of Raspberry PIs or similar devices, the storage could be a bunch of micro-SD cards (each plugged into one Raspberry PI) or some kind of network storage.

Qubes Zones is the key new concept for Qubes Air

So far, this is nothing radically new compared to what we already have in Qubes OS, especially since we have nearly completed our effort to abstract the Qubes architecture away from Xen-specific details – an effort we code-named Qubes Odyssey.

What is radically different is that we now want to allow more than one Zone to exist in a single Qubes system!

In order to support multiple Zones, we have to provide transparent proxying of qrexec services across Zones, so that a qube need not be aware that another qube from which it requests a service resides in a different Zone. This is the main reason we've introduced multiple "local" Admin qubes – one for each Zone. Slave Admin qubes also act as bridges that allow the Master Admin qube to manage the whole system (e.g. request the creation of new qubes, connect and set up storage for qubes, and set up networking between qubes).

Under the hood: qubes’ interfaces

Within one Zone, there are multiple qubes. Let me stress that the term “qube” is very generic and does not imply any specific technology. It could be a VM under some virtualization system. It could be some kind of a container or a physically separate computing device, such as a Raspberry PI, Arduino board, or similar device.

While a qube can be implemented in many different ways, there are certain features it should have:

  1. A qube should implement a vchan endpoint. The actual technology on top of which this will be implemented – whether some shared memory within a virtualization or containerization system, TCP/IP, or something else – will be specific to the kind of Zone it occupies.

  2. A qube should implement a qrexec endpoint, though this should be very straightforward if a vchan endpoint has already been implemented. This ensures that most (all?) of the qrexec services, which are the basis for most of the integration, apps, and services we have created for Qubes, should Just Work(TM) (see the sketch after this list).

  3. Optionally, for some qubes, a GUI endpoint should also be implemented (see the discussion below).

  4. In order to be compatible with Qubes networking, a qube should expect one uplink network interface (to be exposed by the management technology specific to that particular Zone), and (optionally) multiple downlink network interfaces (if it is to work as a proxy qube, e.g. VPN or firewalling qube).

  5. Finally, a qube should expect two kinds of volumes to be exposed by the Zone-specific management stack:

    • one read-only, which is intended to be used as a root filesystem by the qube (the management stack might also expose an auxiliary volume for implementing copy-on-write illusion for the VM, like the volatile.img we currently expose on Qubes),
    • and one read-writable, which is specific to this qube, and which is intended to be used as home directory-like storage. This is, naturally, to allow the implementation of Qubes templates, a mechanism that we believe brings not only a lot of convenience but also some security benefits.
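As a rough illustration of point 2 above, this is how a qrexec service call looks from inside a qube on current Qubes OS releases; the goal of Qubes Air is for the very same invocation to keep working when the target qube lives in another Zone. The qube and service names here are only examples:

# ask the qube named "sys-clock" to run its qubes.GetDate service;
# whether the call is allowed is decided by the central qrexec policy
$ qrexec-client-vm sys-clock qubes.GetDate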

GUI virtualization considerations

Since the very beginning, Qubes was envisioned as a system for desktop computing (as opposed to servers). This implied that GUI virtualization was part of the core Qubes infrastructure.

However, with some of the security-optimized management infrastructure we have recently added to Qubes OS, i.e. Salt stack integration (which significantly shrinks the attack surface on the system TCB compared to more traditional “management” solutions), the Qubes Admin API (which allows for the fine-grained decomposition of management roles), and deeply integrated features such as templates, we think Qubes Air may also be useful in some non-desktop applications, such as the embedded appliance space, and possibly even on the server/services side. In this case, it makes perfect sense to have qubes not implement GUI protocol endpoints.

However, I still think that the primary area where Qubes excels is in securing desktop workflows. For these, we need GUI virtualization/multiplexing, and the qubes need to implement GUI protocol endpoints. Below, we discuss some of the trade-offs involved here.

The Qubes GUI protocol is optimized for security. This means that the protocol is designed to be extremely simple, allowing only for very simple processing on incoming packets, thus significantly limiting the attack surface on the GUI daemon (which is usually considered trusted). The price we pay for this security is the lack of various optimizations, such as on-the-fly compression, which other protocols, such as VNC and RDP, naturally offer. So far, we've been able to get away with these trade-offs, because in current Qubes releases the GUI protocol runs over Xen shared memory. DRAM is very fast (i.e. has low latency and super-high bandwidth), and the implementation on Xen smartly makes use of page sharing rather than memory copying, so that it achieves near-native speed (of course with the limitation that we don't expose GPU functionalities to VMs, which might limit the experience in some graphical applications anyway).

However, when qubes run on remote computers (e.g. in the cloud) or on physically separate computers (e.g. on a cluster of Raspberry PIs), we face the potential problem of graphics performance. The solution we see is to introduce a local copy of the GUI qube into each zone. Here, we make the assumption that there should be a significantly faster communication channel available between qubes within a Zone than between Zones. For example, inter-VM communication within one data center should be significantly faster than between the user's laptop and the cloud. The Qubes GUI protocol is then used between qubes and the local GUI qube within a single zone, but a more efficient (and more complex) protocol is used to aggregate the GUI into the Master GUI qube from all the Slave GUI qubes. Thanks to this combined setup, we still get the benefit of a reasonably secure GUI. Untrusted qubes still use the Qubes secure GUI protocol to communicate with the local GUI qube. However, we also benefit from the greater efficiency of remote access-optimized protocols such as RDP and VNC to get the GUI onto the user's device over the network. (Here, we make the assumption that the Slave GUI qubes are significantly more trustworthy than other non-privileged qubes in the Zone. If that's not the case, and if we're also worried about an attacker who has compromised a Slave GUI qube to exploit a potential bug in the VNC or RDP protocol in order to attack the Master GUI qube, we could still resort to the fine-grained Qubes Admin API to limit the potential damage the attacker might inflict.)

Digression on the “cloudification” of apps

It’s hard not to notice how the model of desktop applications has changed over the past decade or so, where many standalone applications that previously ran on desktop computers now run in the cloud and have only their frontends executed in a browser running on the client system. How does the Qubes compartmentalization model, and more importantly Qubes as a desktop OS, deal with this change?

Above, we discussed how it’s possible to move Qubes VMs from the user’s local machine to the cloud (or to physically separate computers) without the user having to notice. I think it will be a great milestone when we finally get there, as it will open up many new applications, as well as remove many obstacles that today prevent the easy deployment of Qubes OS (such as the need to find and maintain dedicated hardware).

However, it’s important to ask ourselves how relevant this model will be in the coming years. Even with our new approach, we’re still talking about classic standalone desktop applications running in qubes, while the rest of the world seems to be moving toward an app-as-a-service model in which everything is hosted in the cloud (e.g. Google Docs and Microsoft Office 365). How relevant is the whole Qubes architecture, even the cloud-based version, in the app-as-a-service model?

I’d like to argue that the Qubes architecture still makes perfect sense in this new model.

First, it’s probably easy to accept that there will always be applications that users, both individual and corporate, will prefer (or be forced) to run locally, or at least on trusted servers. At the same time, it’s very likely that these same users will want to embrace the general, public cloud with its multitude of app-as-a-service options. Not surprisingly, there will be a need for isolating these workloads from interfering with each other.

Some examples of payloads that are better suited as traditional, local applications (and consequently within qubes), are MS Office for sensitive documents, large data-processing applications, and… networking and USB drivers and stacks. The latter things may not be very visible to the user, but we can’t really offload them to the cloud. We have to host them on the local machine, and they present a huge attack surface that jeopardizes the user’s other data and applications.

What about isolating web apps from each other, as well as protecting the host from them? Of course, that’s the primary task of the Web browser. Yet, despite vendors’ best efforts, browser security measures are still being circumvented. Continued expansion of the APIs that modern browsers expose to Web applications, such as WebGL, suggests that this state of affairs may not significantly improve in the foreseeable future.

What makes the Qubes model especially useful, I think, is that it allows us to put the whole browser in a container that is isolated by stronger mechanisms (simply because Qubes does not have to maintain all the interfaces that the browser must) and is managed by Qubes-defined policies. It’s rather natural to imagine, e.g. a Chrome OS-based template for Qubes (perhaps even a unikernel-based one), from which lightweight browser VMs could be created, running either on the user’s local machine, or in the cloud, as described above. Again, there will be pros and cons to both approaches, but Qubes should support both – and mostly seamlessly from the user’s and admin’s points of view (as well the Qubes service developer’s point of view!).

Summary

Qubes Air is the next step on our roadmap to making the concept of “Security through Compartmentalization” applicable to more scenarios. It is also an attempt to address some of the biggest problems and weaknesses plaguing the current implementation of Qubes, specifically the difficulty of deployment and virtualization as a single point of failure. While Qubes-as-a-Service is one natural application that could be built on top of Qubes Air, it is certainly not the only one. We have also discussed running Qubes over clusters of physically isolated devices, as well as various hybrid scenarios. I believe the approach to security that Qubes has been implementing for years will continue to be valid for years to come, even in a world of apps-as-a-service.

22 January, 2018 12:00AM

January 21, 2018

hackergotchi for VyOS

VyOS

Setting up GRE/IPsec behind NAT

In the previous posts of this series we've discussed setting up "plain" IPsec tunnels from behind NAT.

The transparency of plain IPsec, however, is more often a curse than a blessing. Truly transparent IPsec is only possible between publicly routed networks, and tunnel mode creates a strange mix of the two approaches: you do not have a network interface associated with the tunnel, but the setup is not free of routing issues either, and it's often hard to test from the router itself whether the tunnel actually works.

GRE/IPsec (or IPIP/IPsec, or anything else) offers a convenient solution: for all intents and purposes it's a normal network interface, and it makes it look like the networks are connected with a wire. You can easily ping the other side, use the interface in firewall and QoS rulesets, and set up dynamic routing protocols in a straightforward way. However, NAT creates a unique challenge for this setup.

The canonical and the simplest GRE/IPsec setup looks like this:

interfaces {
  tunnel tun0 {
    address 10.0.0.2/29
    local-ip 192.0.2.10
    remote-ip 203.0.113.20
    encapsulation gre
  }
}
vpn {
  ipsec {
    site-to-site {
      peer 203.0.113.20 {
        tunnel 1 {
          protocol gre
        }
        local-address 192.0.2.10

It creates a policy that encrypts any GRE packets sent to 203.0.113.20. Of course it's not going to work with NAT because the remote side is not directly routable.

Let's see how we can get around that. Suppose you are setting up a tunnel between routers called East and West. The way around it is pretty simple, even if not exactly intuitive, and boils down to this:

  1. Set up an additional address on a loopback or dummy interface on each router, e.g. 10.10.0.1/32 on East and 10.10.0.2/32 on West.
  2. Set up GRE tunnels that use 10.10.0.1 and 10.10.0.2 as local-ip and remote-ip respectively.
  3. Set up IPsec tunnels that use 10.10.0.1 and 10.10.0.2 as local-prefix and remote-prefix respectively.

This way, when traffic is sent through the GRE tunnel on East, the GRE packets will use 10.10.0.1 as the source address, which will match the IPsec policy. Since 10.10.0.2/32 is specified as the remote-prefix of the tunnel, the IPsec process will set up a kernel route to it, and the GRE packets will reach the other side.

Let's look at the config:

interfaces {
  dummy dum0 {
    address 10.10.0.1/32
  }
  tunnel tun0 {
    address 10.0.0.1/29
    local-ip 10.10.0.1
    remote-ip 10.10.0.2
    encapsulation gre
  }
}
vpn {
  ipsec {
    site-to-site {
      peer @west {
        connection-type respond
        tunnel 1 {
          local {
            prefix 10.10.0.1/32
          }
          remote {
            prefix 10.10.0.2/32
          }

This approach also has a property that may make it useful even in publicly routed networks, if you are going to use the GRE tunnel for sensitive but unencrypted traffic (I've seen that in legacy applications): unlike the canonical setup, the GRE tunnel stops working when the IPsec SA goes down, because the remote end becomes unreachable. The canonical setup will continue to work even without IPsec and may expose the GRE traffic to eavesdropping and MitM attacks.
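Once both sides are configured, a quick way to check the result from the router itself might look like this (operational mode; addresses taken from the examples above, assuming the far end of the tunnel network is 10.0.0.2):

show vpn ipsec sa              # the SA for the peer should be listed as up
show interfaces tunnel tun0    # the GRE interface, its addresses and state
ping 10.0.0.2                  # the other end of the tunnel network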

This concludes the series of posts about IPsec and NAT. Next Friday I'll find something else to write about. ;)

21 January, 2018 05:34PM by Daniil Baturin

How to setup an IPsec connection between two NATed peers: using id's and RSA keys

In the previous post from this series, we've discussed setting up an IPsec tunnel from a NATed router to a non-NATed one. The key point is that in the presence of NAT, the non-NATed side cannot identify the NATed peer by its public address, so a manually configured id is required.

What if both peers are NATed, though? Suppose you are setting up a tunnel between two EC2 instances. They are both NATed, and this creates its own unique challenges: neither of them knows its public address, nor can either identify its peer by public address. So we need to solve two problems.

In this post, we'll setup a tunnel between two routers, let's call them "east" and "west". The "east" router will be the initiator, and "west" will be the responder.

21 January, 2018 05:34PM by Daniil Baturin

VyOS builds now use the deb.debian.org load-balanced mirror

If there is one good thing about the package server migration and restructuring, it is that it prompted a revamp of the associated part of the build scripts.

Since the start, the default Debian mirror was set to nl.debian.org for a completely arbitrary reason. This, of course, was suboptimal for most users who are far from the Netherlands, and while the mirror is easy enough to change in the ./configure options, a better out-of-the-box experience wouldn't hurt.

Danny ter Haar (fromport) suggested that we change it to deb.debian.org, which is load balanced, and I think that's a good idea. There's a small chance that it will redirect you to a dead mirror, but if you run into any issues, you can always set the mirror by hand.

21 January, 2018 05:34PM by Daniil Baturin

VyOS builds and HTTPS: build works again, HTTP still needs testing

We have restored VyOS builds. The nightly build should work as expected today, and you can build an image by hand as well if you want. This is not exactly the end of the story for us, since we still need to finish some reconfiguration of Jenkins to accommodate the new setup, but nothing dramatic should happen to the ISO builds any time soon, or so we hope.

HTTPS, however, is another story. It still doesn't work for me, and I'm not sure whether APT itself is to blame or something in our build setup. Since this is not a pressing issue, I'm not going to put much effort into it right now, but if you have a build setup, please check whether it works for you. If it doesn't work for anyone, then we can write it off as an APT issue.

21 January, 2018 05:34PM by Daniil Baturin

Follow-up: VyOS builds and HTTPS

We've made HTTPS on the dev.packages.vyos.net host optional and restored the real directory index (provided by Apache HTTP Server's mod_autoindex) instead of using DirectoryLister, which proved a bit problematic with APT.

Since we had to change the default repository URL anyway, I also took the chance to finally make it configurable rather than hardcoded (T519). You can now specify a custom URL with the --vyos-mirror="$URL" option. It defaults to the plain HTTP URL right now, for the reason stated below.
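For example, something along these lines (the URL is purely illustrative; substitute whichever mirror you actually want, alongside your usual configure options):

$ ./configure --vyos-mirror="http://dev.packages.vyos.net/repositories/current/"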

I have also found a way to make live-build include the apt-transport-https package at the bootstrap stage and enable it to use HTTPS servers for building images. However, for some reason it doesn't work for me: apt says it cannot fetch the package index, while fetching that file with curl works just fine from the same host. I'm not sure what the issue may be. If you can verify that it works (or doesn't work) for you, or you know how to make it work, please comment on T422.

Actual builds "still" don't work, but for a completely unrelated reason: the mdns-repeater package needed for the recently merged mDNS repeater feature is not yet in the repository. We will fix that shortly, now that the builds otherwise work.

21 January, 2018 05:34PM by Daniil Baturin

VyOS builds and HTTPS

For a while some people kept asking why we do not enable HTTPS on the servers with ISOs and repositories. Now we have enabled it, but it turned out it's not all that simple: since APT doesn't support HTTPS by default and the apt-transport-https package is not installed by the bootstrap stage of the live-build process, the dev.packages.vyos.net server is now unusable and image builds are broken.

We are looking for solutions. If you know how to make live-build use apt-transport-https for the bootstrap stage, please tell us. If we don't find anything today, we'll try to make HTTPS optional, and if that turns out to be impractical in VestaCP (which we are using for the web frontend server), we'll just disable it there.

Sorry for the inconvenience!

21 January, 2018 05:34PM by Daniil Baturin

No new features in Perl and shell and no old style templates from May 2018

Now that the Python library for accessing the running config and the generator of old style templates from new style XML command definitions are known to be functional, it's time to set a cutoff date for the old style code. We decided to arbitrarily set it to May 2018.

From May 1st, no new code written in the old style will be accepted. We will accept (and make ourselves) fixes to the old code when a bug's severity is high enough to affect VyOS operation, but all new features have to be written in the new style.

If you have a new feature that is already almost done, consider completing it before May. If you are planning a new feature, consider learning about the new style first.

If you are late to the party, please read these blog posts:

For those who are not, I'll reiterate the main points:

  • Contributors who know Perl and are still willing to write it are getting progressively harder to find, and many people say they would contribute if the code weren't in Perl.
  • The reasons people abandoned Perl are more than compelling: the lack of proper type checking, exception handling, and many other features of post-1960s language design does not help reliability or ease of maintenance at all.
  • Old style handwritten templates are very hard to maintain, since both the data and, often, the logic are spread across dozens, if not hundreds, of files in deeply nested directories. Separation of logic from data and a way to keep all command definitions of a feature in a single, observable file are required to make templates maintainable.
  • With old style templates, there's no way to verify them without installing them on VyOS and trying them by hand. XML is the only common language that has ready-to-use tools for verifying its syntax and semantics: this is already integrated into our build system, and a build with malformed command definitions will fail.

I'll write a tutorial about writing features in the new style shortly; stay tuned. Meanwhile, you can look at the implementation of cron in 1.2.0:

  • https://github.com/vyos/vyos-1x/blob/current/src/conf-mode/vyos-update-crontab.py (conf mode script)
  • https://github.com/vyos/vyos-1x/blob/current/interface-definitions/cron.xml (command definitions)


21 January, 2018 05:34PM by Daniil Baturin

Migration and restructuring of the (dev).packages.vyos.net hosts

The original host where the packages.vyos.net and dev.packages.vyos.net web servers used to live has had serious I/O performance issues ever since a RAID failure, and the situation hasn't improved much. After we started seeing very noticeable latency even with bash completion (to say nothing of very slow download speeds for our users and mirror maintainers), we decided it was time to move elsewhere.

We do not blame the hosting provider; on the contrary, we are grateful that they hosted it for us at no cost for so long despite its bandwidth consumption, and that they offered it to us at a point in the VyOS project's life when it had exactly zero budget for anything. But it seems now it's time to move on.

The dev.packages.vyos.net host is used for the package repositories of the current branch and for nightly builds, as well as for experimental images we make available for testing. It has already been migrated to the new host and is operational, even though some things are not working as well as they should, including a temporary (we hope) loss of IPv6 access to it and somewhat untidy directory links generated by the (otherwise very nice) DirectoryLister script.

We are sorry that the migration took longer than it should have and that we didn't communicate it to the community. The dev.packages.vyos.net host stayed inaccessible for a whole day because of a miscommunication: we had migrated the data, but I assumed that Yuriy (syncer) would set up the web server, while he assumed I would. Our maintenance procedures could definitely use some improvement.

Note that apart from the physical data migration, we have also done some directory restructuring. We hope it's now more logical and will not have any impact on anyone, but if it does, we can set up redirects to make old direct links work again.

The packages.vyos.net server will stay on the old machine for a bit, as we also need to migrate the rsync setup that the mirrors are using. We are still not sure whether to use downloads.vyos.io as the central server for mirrors too.

As for building VyOS images, we'll adjust the build scripts to use the new paths by default, and images will be buildable again.

21 January, 2018 05:34PM by Daniil Baturin

Why IPsec behind 1:1 NAT is so problematic and what you can do about it

Not so long ago, the only scenario in which issues with IPsec and NAT could arise was a remote access setup: routers invariably had real public addresses, and operators of router-to-router IPsec were incredibly unlikely to run into those issues. A router behind NAT was seen as a pathological case.

I believe it's still a pathological case, but as platforms such as Amazon EC2, which use one-to-one NAT to simplify IP address management, rose in popularity and people started setting up their routers there, it became the new normal. This new reality became a constant source of confusion for beginner network admins and for people who simply are not VPN specialists. Attempts to set up a VPN as if there were no NAT, with peer external IP addresses and pre-shared keys as is commonly done, simply fail to work.

Sometimes I'm asked why that is, if the NAT in question is 1:1 and no other protocol refuses to work behind it without reconfiguration. The reasons lie inside the IPsec protocols themselves, and 1:1 NAT is not much better in this regard than any other kind (even though it's somewhat better than the situation where there are multiple clients behind NAT).

IPsec pre-dates NAT by over a decade, and was explicitly designed with end-to-end connectivity in mind. It was also designed as a way to add built-in security to the IP protocol stack rather than as a new protocol within the stack, so it made heavy use of those underlying assumptions. That's why workarounds had to be added when the assumption of end-to-end connectivity became incorrect.

Let's look into the details and then see how IPsec can be configured to work around those issues. Incidentally, the solutions are equally applicable to machines with dynamic addresses as well as those behind a 1:1 NAT, since the root cause of the issues is not knowing beforehand what your outgoing address will be next time.

AH and ESP

We all (hopefully) remember that there are three criteria of information security: availability, confidentiality, and integrity. Availability (when required) in the IP stack is provided by reliable transmission protocols such as TCP and SCTP. IPsec was supposed to provide the other two.

AH (Authentication Header) was designed to ensure packet integrity. For that, it calculates a hash over the entire packet with all its headers, including the source and destination addresses. Since NAT changes the addresses, it invalidates the checksum, and NATed packets would be considered corrupted. So AH is unconditionally incompatible with NAT.

ESP only protects the payload of the packet (whether it's some protocol data, or an entire tunnelled packet), so it, luckily, can work behind NAT. The UDP encapsulation of ESP packets was introduced to allow the traffic to pass through incorrect NAT implementations that may discard non-TCP or UDP protocols, and for correct implementations this should be irrelevant (I've seen a number of SOHO routers that couldn't handle typical L2TP/IPsec connections despite that UDP encapsulation — but that's another story).

Before we can start sending ESP packets, we need to establish a "connection" (security association) though. In practice, most IPsec connections are negotiated via the IKE protocol rather than statically configured, and that's where IKE issues come into play.

IKE and pre-shared keys

To begin with, the original IKE was specified to use UDP/500 for both the source and destination port. Since NAT may change the source port, the specification had to be adjusted to allow other source ports. Even more issues arise when there are multiple clients behind NAT and traffic has to be multiplexed, but we'll limit the discussion to 1:1 NAT, where canonical IKE might work. Let's assume the IKE exchange has gotten that far.

The vast majority of IPsec setups in the wild use pre-shared keys. I've configured connections to vendors and partners on behalf of my employers and customers countless times, and I believe I've seen an x.509-based setup once or twice. In a setup with pre-shared keys, routers need to know which keys to use for which peers. How do they know? Suppose you've configured a tunnel with peer 192.0.2.10 and key "qwerty". The other side actually has the address 172.16.0.1, NATed to 192.0.2.10. Will the source address of the peer be used for the key lookup? Not quite.

In the IKE/ISAKMP exchange, there are identifiers. It's the pairs of identifiers and pre-shared keys that must be unique to reliably tell one peer from another and use correct keys for every peer. There are several types of identifiers specified by the ISAKMP protocol: IP (v4 or v6) address, subnet, FQDN, user and FQDN pair (user@example.com), and raw byte string.

By default, most implementations (including strongSwan in VyOS) will use the IP address of the outgoing interface as the identifier, and it will be embedded in the IKE packet. If the host is behind NAT, that address is a private address, 172.16.0.1. When the packet passes through the NAT, the payload will obviously remain unchanged, and when it arrives, the responder will try to find the pair 172.16.0.1:qwerty in its configuration rather than the 192.0.2.10:qwerty that it has configured, and will send NO_PROPOSAL_CHOSEN to the NATed peer.

The solutions

So, how do you get around those issues? Whether or not IKE is configured to use NAT detection, you'll need to use peer identification methods that do not rely on known, fixed IP addresses.

The simplest approach is to use a manually configured peer identifier. This approach works when one side is NATed and the other is not. The only thing you need to remember is that VyOS (right now, at least) requires that FQDN identifiers be prepended with the "@" character. They also do not need to match any real domain name of your machine. You also need to set the local-address on the NATed side to "any".

Here's an example:

Non-NATed side

vpn {
  ipsec {
    site-to-site {
       peer @foo {
         authentication {
             mode pre-shared-secret
             pre-shared-secret qwerty
         }
         default-esp-group Foo
         ike-group Foo
         local-address 203.0.113.1
         tunnel 1 {
             local {
                 prefix 10.103.0.0/24
             }
             remote {
                 prefix 10.104.0.0/24
             }
         }
      }
    }
  }
}

NATed side

vpn {
  ipsec {
    site-to-site {
         peer 203.0.113.1 {
             authentication {
                 id @foo
                 mode pre-shared-secret
                 pre-shared-secret qwerty
             }
             default-esp-group Foo
             ike-group Foo
             local-address any
             tunnel 1 {
                 local {
                     prefix 10.104.0.0/24
                 }
                 remote {
                     prefix 10.103.0.0/24
                 }
             }
         }
     }
  }
}

21 January, 2018 05:34PM by Daniil Baturin

The Meltdown and Spectre vulnerabilities

Everyone is talking about the Meltdown and Spectre vulnerabilities now. If you are late to the party, read https://meltdownattack.com/

Of course we are aware of them and have taken time to assess the risks for VyOS. Since both vulnerabilities can only be exploited locally, the risk for a typical VyOS installation is very low: if an untrusted person has managed to log in to your router, you are already in deep trouble, and unauthorized access to OS memory is arguably the least of your concerns, since even operator-level users can make traffic dumps.

The fix will not be included in 1.1.9. Since the fix comes with a performance penalty of up to 30%, we will make it optional in 1.2.x if feasible.

21 January, 2018 05:34PM by Daniil Baturin

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: Your first robot: Introduction to the Robot Operating System [2/5]

This article originally appeared on Kyle Fazzari’s blog and is the second installment in a series

This is the second blog post in this series about creating your first robot with ROS and Ubuntu Core. In the previous post we walked through all the hardware necessary to follow this series, and introduced Ubuntu Core, the operating system for IoT devices. We installed it on our Raspberry Pi, and used it to go through the CamJam worksheets. In this post, I’m going to introduce you to the Robot Operating System (ROS), and we’ll use it to move our robot. We’ll be using ROS throughout the rest of the series. Remember that this is also a video series, feel free to watch the video version of this post:

What is the Robot Operating System?

At its simplest, ROS is a set of open-source libraries and tools meant to ease development of robots. It also provides an infrastructure for connecting various robotic components together. For example, if you happened to go through all of the CamJam worksheets (particularly #9), you’ve written a single Python script that’s responsible for a bunch of things: controlling the motors, reading from the line detector, reading from the ultrasonic sensor, etc. What if we threw a wireless controller into the mix? That script quickly gets complicated, and if you happen to want to swap one component out for another, you need to rewrite the whole thing; i.e. these logically different components are tightly coupled together since they’re in the same script.

ROS provides a communication infrastructure that allows you to extract different logic into their own modules, and have them communicate with each other in a standard way, as depicted in the picture above. For example, if you wanted to switch ultrasonic sensors, and rewrite the “distance” module, you could do that without having to touch any of the other modules as long as the new distance module talked the same way as the old one.
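Once ROS is installed (as we'll do in the next step), you can watch this decoupling in action from a shell with the rostopic tool: anything can publish or subscribe to a named topic without the modules on the other end knowing or caring who they are talking to. The topic name and message type below are purely illustrative:

$ rostopic list                                   # show all topics currently known to the ROS master
$ rostopic echo /distance                         # watch messages from a hypothetical distance module
$ rostopic pub /distance std_msgs/Float32 42.0    # inject a test message by hand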

This will all make more sense once we dive in, so let’s get started, shall we?

Step 1: Install ROS on the Raspberry Pi

ROS has three currently-supported releases: Indigo Igloo, Kinetic Kame, and Lunar Loggerhead. Ubuntu Core series 16 (the one we’re using) is Ubuntu Xenial, which limits our options to Kinetic and Lunar. Lunar is technically newer and shinier, but like Ubuntu, ROS has Long-Term Support (LTS) releases that are supported for an extended period of time, and Kinetic is their most recent LTS. As a result, we’ll use Kinetic here.

SSH into your Pi, and get into your classic shell:

$ sudo classic

Let’s follow ROS Kinetic’s install guide. ROS maintains its own repository of Debian packages, which we need to add to our system:

$ sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'

We then need to add that repository’s keys to the list of keys we’ll accept (this verifies that packages in that repository really do come from ROS):

$ sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-key 421C365BD9FF1F717815A3895523BAEEB01FA116

Now we’ll re-index all the repositories we have configured, since we just added one:

$ sudo apt update

Now let’s install ROS. As you’ll see on the install guide, there are a bunch of meta packages available (packages that exist solely to pull in other packages). Let’s install the smallest, bare-bones one, ros-kinetic-ros-base, which will take up around 700MB. We’ll also install g++, the C++ compiler (it’s still required, even though we’re writing Python):

$ sudo apt install g++ ros-kinetic-ros-base

At this point, ROS is successfully installed, but none of its tools are available to run. That’s because ROS installs itself into what it calls a “workspace”, and provides a shell script that activates that workspace. We can make sure we activate that workspace upon login by adding it to the .bashrc file in our home directory:

$ echo "source /opt/ros/kinetic/setup.bash" >> ~/.bashrc
$ source ~/.bashrc

Now you should be able to run roscore without troubles:

$ roscore
<snip>
SUMMARY
========
PARAMETERS
* /rosdistro: kinetic
* /rosversion: 1.12.12
NODES
auto-starting new master
process[master]: started with pid [4987]
ROS_MASTER_URI=http://localhost.localdomain:11311/
setting /run_id to 1db9f4c6-e044-11e7-9931-b827eba43643
process[rosout-1]: started with pid [5000]
started core service [/rosout]

Go ahead and quit that by pressing CTRL+C.

Step 2: Get to know ROS

One of the reasons I like ROS so much (and, I think, one of the reasons it’s so popular) is that their introductory documentation is fantastic. They have a phenomenal set of tutorials that take you from knowing absolutely nothing to feeling more or less comfortable with the entire system. Each one is easily digestible in a few minutes. Because they are so good, rather than try and duplicate their hard work here, you should just start at the beginning and go through them at least until you complete #13, “Examining the Simple Publisher and Subscriber”. Note that there are two parallel tutorial tracks, one that uses C++ and one that uses Python. We’ll be using Python through this series, so you don’t need to worry about the C++ ones unless they interest you.

Step 3: Setup Python 2

Now that we’ve gained some familiarity with ROS, it’s almost time to make our robot move using it. However, there’s something we need to do first. Back in CamJam worksheet #1, they mention the following:

“When the Raspberry Pi was first released, some of the important Python libraries were only available for Python 2.7. However, almost every library, and all the ones used in these worksheets, are available for Python 3.2. It has been decided that all code for this EduKit will be developed for Python 3.2.”

~ CamJam worksheet #1

That’s all fine and dandy, and I agree with it, but unfortunately the Python bindings for ROS are only officially supported on Python 2, so we need to use Python 2 from now on instead of Python 3. Don’t worry, all the code from the worksheets should still work, but it means we need to install the Python 2 version of RPi.GPIO (we only have the Python 3 version right now):

$ sudo apt install python-dev python-pip python-setuptools
$ pip install RPi.GPIO

Step 4: Create ROS package for our robot

Alright, let’s have some fun! We’re going to rewrite the code we wrote for the CamJam Worksheet #7 using ROS. We’ll add some message handling to it, so, using ROS, we can command the robot to move forward, turn left, turn right, etc.

The first step is to create a new workspace. You learned how to do this in the first ROS tutorial. I’m calling mine “edukit_bot_ws”; if you call yours something else, remember to adjust the directions accordingly:

$ mkdir -p ~/edukit_bot_ws/src
$ cd ~/edukit_bot_ws/src
$ catkin_init_workspace

Now let’s create a new package in that workspace. I’ll call mine “edukit_bot”, and it has three dependencies: rospy (the Python bindings for ROS), std_msgs (the standard ROS messages, e.g. numbers, strings, etc.), and python-rpi.gpio (RPi.GPIO, which we use for GPIO access):

$ cd ~/edukit_bot_ws/src
$ catkin_create_pkg edukit_bot rospy std_msgs -s python-rpi.gpio

Step 5: Write the ROS node

Time to write some code. First, create a new Python script in the ROS package’s src/ directory:

$ touch ~/edukit_bot_ws/src/edukit_bot/src/driver_node

The CamJam worksheets don’t discuss this, but if we set the script to be executable, we can run it directly instead of calling it like python path/to/script.py. Let’s do that:

$ chmod a+x ~/edukit_bot_ws/src/edukit_bot/src/driver_node

Open that script in a text editor, and make it look like this (note that the entire package used in this post is available for reference):

#!/usr/bin/env python

import rospy
from std_msgs.msg import String

import RPi.GPIO as GPIO

# Set the GPIO modes
GPIO.setmode(GPIO.BCM)
GPIO.setwarnings(False)

# Set variables for the GPIO motor pins
pinMotorAForwards = 10
pinMotorABackwards = 9
pinMotorBForwards = 8
pinMotorBBackwards = 7

# How many times to turn the pin on and off each second
Frequency = 20
# How long the pin stays on each cycle, as a percent (here, it's 30%)
DutyCycle = 30
# Setting the duty cycle to 0 means the motors will not turn
Stop = 0

# Set the GPIO Pin mode to be Output
GPIO.setup(pinMotorAForwards, GPIO.OUT)
GPIO.setup(pinMotorABackwards, GPIO.OUT)
GPIO.setup(pinMotorBForwards, GPIO.OUT)
GPIO.setup(pinMotorBBackwards, GPIO.OUT)

# Set the GPIO to software PWM at 'Frequency' Hertz
pwmMotorAForwards = GPIO.PWM(pinMotorAForwards, Frequency)
pwmMotorABackwards = GPIO.PWM(pinMotorABackwards, Frequency)
pwmMotorBForwards = GPIO.PWM(pinMotorBForwards, Frequency)
pwmMotorBBackwards = GPIO.PWM(pinMotorBBackwards, Frequency)

# Start the software PWM with a duty cycle of 0 (i.e. not moving)
pwmMotorAForwards.start(Stop)
pwmMotorABackwards.start(Stop)
pwmMotorBForwards.start(Stop)
pwmMotorBBackwards.start(Stop)

# Turn all motors off
def StopMotors():
    pwmMotorAForwards.ChangeDutyCycle(Stop)
    pwmMotorABackwards.ChangeDutyCycle(Stop)
    pwmMotorBForwards.ChangeDutyCycle(Stop)
    pwmMotorBBackwards.ChangeDutyCycle(Stop)

# Turn both motors forwards
def Forwards():
    pwmMotorAForwards.ChangeDutyCycle(DutyCycle)
    pwmMotorABackwards.ChangeDutyCycle(Stop)
    pwmMotorBForwards.ChangeDutyCycle(DutyCycle)
    pwmMotorBBackwards.ChangeDutyCycle(Stop)

# Turn both motors backwards
def Backwards():
    pwmMotorAForwards.ChangeDutyCycle(Stop)
    pwmMotorABackwards.ChangeDutyCycle(DutyCycle)
    pwmMotorBForwards.ChangeDutyCycle(Stop)
    pwmMotorBBackwards.ChangeDutyCycle(DutyCycle)

# Turn left
def Left():
    pwmMotorAForwards.ChangeDutyCycle(Stop)
    pwmMotorABackwards.ChangeDutyCycle(DutyCycle)
    pwmMotorBForwards.ChangeDutyCycle(DutyCycle)
    pwmMotorBBackwards.ChangeDutyCycle(Stop)

# Turn Right
def Right():
    pwmMotorAForwards.ChangeDutyCycle(DutyCycle)
    pwmMotorABackwards.ChangeDutyCycle(Stop)
    pwmMotorBForwards.ChangeDutyCycle(Stop)
    pwmMotorBBackwards.ChangeDutyCycle(DutyCycle)

# Message handler
def CommandCallback(commandMessage):
    command = commandMessage.data
    if command == 'forwards':
        print('Moving forwards')
        Forwards()
    elif command == 'backwards':
        print('Moving backwards')
        Backwards()
    elif command == 'left':
        print('Turning left')
        Left()
    elif command == 'right':
        print('Turning right')
        Right()
    elif command == 'stop':
        print('Stopping')
        StopMotors()
    else:
        print('Unknown command, stopping instead')
        StopMotors()

rospy.init_node('driver')

rospy.Subscriber('command', String, CommandCallback)

rospy.spin()
print('Shutting down: stopping motors')
StopMotors()
GPIO.cleanup()

Lots of that should look familiar, but let’s break it down into chunks.

#!/usr/bin/env python

import rospy
from std_msgs.msg import String

The very first line of this file is called a shebang. Since we’ve marked this file as executable on its own, this defines the interpreter that will execute this program. In this case, we’re telling it that it needs the python command.

We then import rospy, which includes the ROS Python bindings, and we import the String message from the ROS std_msgs. We’ll use both of these a little later in the program.

import RPi.GPIO as GPIO

# Set the GPIO modes
GPIO.setmode(GPIO.BCM)
GPIO.setwarnings(False)

# <snip you don't need to see all this again...>

# Turn Right
def Right():
    pwmMotorAForwards.ChangeDutyCycle(DutyCycle)
    pwmMotorABackwards.ChangeDutyCycle(Stop)
    pwmMotorBForwards.ChangeDutyCycle(Stop)
    pwmMotorBBackwards.ChangeDutyCycle(DutyCycle)

This entire section was lifted virtually verbatim from the CamJam Worksheet #7. It’s explained there, so I won’t repeat the explanation here.

# Message handler
def CommandCallback(commandMessage):
    command = commandMessage.data
    if command == 'forwards':
        print('Moving forwards')
        Forwards()
    elif command == 'backwards':
        print('Moving backwards')
        Backwards()
    elif command == 'left':
        print('Turning left')
        Left()
    elif command == 'right':
        print('Turning right')
        Right()
    elif command == 'stop':
        print('Stopping')
        StopMotors()
    else:
        print('Unknown command, stopping instead')
        StopMotors()

Here’s a new, ROS-specific part. The CommandCallback function is created to handle a String message. It simply takes a look at the data (i.e. the string itself) contained within the message, and takes action appropriately. For example, if the string is the word “forwards” it moves the robot forward by calling the Forwards function created in the worksheet. Similarly, if the word is “left” it turns the robot left by calling the Left function. If the command isn’t one of the recognized words, the function does the safest thing it can: stop.

rospy.init_node('driver')

rospy.Subscriber('command', String, CommandCallback)

rospy.spin()
print('Shutting down: stopping motors')
StopMotors()
GPIO.cleanup()

Here’s the main part of the program. First of all, we initialize the node and give it a name (“driver”). This begins communication with the ROS master. We then subscribe to a topic named “command”, and we specify that we expect that topic to be a String message. We then provide our CommandCallback function to request that it be called when a new message comes in on that topic.

Then we call rospy.spin() which blocks and waits for messages to come in. Once the node is asked to quit (say, with a CTRL+C), that function will exit, at which time we ensure that the motors have been stopped. We don’t want the robot running away from us!

We’re done with our workspace for now, so let’s build it:

$ cd ~/edukit_bot_ws
$ catkin_make

Step 6: Move the robot using ROS

At this point, we have a ROS node created that will drive our robot as requested in a “command” message. It uses GPIO though, which still requires sudo. Instead of trying to get our workspace working using sudo, let’s temporarily change the permissions of GPIO so that we don’t need sudo (this will be reset upon reboot):

$ sudo chmod a+rw /dev/gpiomem

/dev/gpiomem is a device representing the memory dedicated to GPIO (i.e. not other, more important/dangerous memory). As a result, this operation is relatively safe, particularly compared to doing the same to e.g. /dev/mem.

Alright, let’s test this out! You’ll need to open three terminals for this, each one using the classic shell (remember, run sudo classic to enter the classic shell). We first need the ROS master, as without it publishers and subscribers can’t find one another. So in one terminal, run the ROS master:

$ roscore

In another terminal, make sure you activate our newly-built workspace, and run our “driver” node:

$ cd ~/edukit_bot_ws
$ source devel/setup.sh
$ rosrun edukit_bot driver_node

Finally, in the third terminal, we’ll start giving our commands to make the robot move. First of all, take note of the topics that either have publishers or subscribers:

$ rostopic list
/command
/rosout
/rosout_agg

Notice the /command topic. This is the topic on which our “driver” node is listening, because of the Subscriber we configured. It’s expecting a String message there, so let’s send it a command, say, to move forward:

$ rostopic pub -1 /command std_msgs/String "forwards"
publishing and latching message for 3.0 seconds

You should notice the “driver” node say that it’s moving forward, and then your robot’s wheels should start rotating forward! Try sending any of the strings we handled in CommandCallback (“left”, “backwards”, etc.), and commands that you know are invalid to ensure that it stops safely.
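
If you want to string a few movements together without typing each command by hand, a small shell loop over the same rostopic interface works as well. This is just a sketch; the two-second pause between commands is arbitrary, and it should be run from a classic shell with the workspace sourced as above:

$ for cmd in forwards left stop; do rostopic pub -1 /command std_msgs/String "$cmd"; sleep 2; done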

Congratulations, you’re quickly learning ROS! In the next post in this series, we’ll break free of the CamJam worksheets and strike out on our own. We’ll introduce the wireless controller, and begin working on making our robot remotely-controlled using ROS.

21 January, 2018 03:19AM

January 20, 2018

David Tomaschik: socat as a handler for multiple reverse shells

I was looking for a new way to handle multiple incoming reverse shells. My shells needed to be encrypted and I preferred not to use Metasploit in this case. Because of the way I was deploying my implants, I wasn’t able to use separate incoming port numbers or other ways of directing the traffic to multiple listeners.

Obviously, it’s important to keep each reverse shell separated, so I couldn’t just have a listener redirecting all the connections to STDIN/STDOUT. I also didn’t want to wait for sessions serially – obviously I wanted to be connected to all of my implants simultaneously. (And allow them to disconnect/reconnect as needed due to loss of network connectivity.)

As I was thinking about the problem, I realized that I basically wanted tmux for reverse shells. So I began to wonder if there was some way to connect openssl s_server or something similar to tmux. Given the limitations of s_server, I started looking at socat. Despite its versatility, I’ve actually only used it once or twice before this, so I spent a fair bit of time reading the man page and the examples.

I couldn’t find a way to get socat to talk directly to tmux in a way that would spawn each connection as a new window (file descriptors are not passed to the newly-started process in tmux new-window), so I ended up with a strange workaround. I feel a little bit like Rube Goldberg inventing C2 software (and I need to get something more permanent and featureful eventually, but this was a quick and dirty PoC), but I’ve put together a chain of socat to get a working solution.

My implementation works by having a single socat process receive the incoming connections (forking on incoming connection), and executing a script that first starts a socat instance within tmux, and then another socat process to copy from the first to the second over a UNIX domain socket.

Yes, this is 3 socat processes. It’s a little ridiculous, but I couldn’t find a better approach. Roughly speaking, the communications flow looks a little like this:

TLS data <--> socat listener <--> script stdio <--> socat <--> unix socket <--> socat in tmux <--> terminal window

Getting it started is fairly simple. Begin by generating your SSL certificate. In this case, I’m using a self-signed certificate, but obviously you could go through a commercial CA, Let’s Encrypt, etc.

openssl req -newkey rsa:2048 -nodes -keyout server.key -x509 -days 30 -out server.crt
cat server.key server.crt > server.pem

Now we will create the script that is run on each incoming connection. This script needs to launch a tmux window running a socat process copying from a UNIX domain socket to stdio (in tmux), and then connecting another socat between the stdio coming in to the UNIX domain socket.

#!/bin/bash

SOCKDIR=$(mktemp -d)
SOCKF=${SOCKDIR}/usock

# Start tmux, if needed
tmux start
# Create window
tmux new-window "socat UNIX-LISTEN:${SOCKF},umask=0077 STDIO"
# Wait for socket
while test ! -e ${SOCKF} ; do sleep 1 ; done
# Use socat to ship data between the unix socket and STDIO.
exec socat STDIO UNIX-CONNECT:${SOCKF}

The while loop is necessary to make sure that the last socat process does not attempt to open the UNIX domain socket before it has been created by the new tmux child process.

Finally, we can launch the socat process that will accept the incoming requests (handling all the TLS steps) and execute our per-connection script:

socat OPENSSL-LISTEN:8443,cert=server.pem,reuseaddr,verify=0,fork EXEC:./socatscript.sh

This listens on port 8443, using the certificate and private key contained in server.pem, performs a fork() on accepting each incoming connection (so they do not block each other) and disables certificate verification (since we’re not expecting our clients to provide a certificate). On the other side, it launches our script, providing the data from the TLS connection via STDIO.

At this point, an incoming TLS connection connects, and is passed through our processes to eventually arrive on the STDIO of a new window in the running tmux server. Each connection gets its own window, allowing us to easily see and manage the connections for our implants.
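
If you want to sanity-check the whole chain before deploying anything, you can connect a throwaway test client from another terminal on the same machine (verification is disabled below only because the certificate generated earlier is self-signed):

socat STDIO OPENSSL:localhost:8443,verify=0

Whatever you type into the test client should appear in a freshly created tmux window, and anything typed in that window should come back to the client.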

20 January, 2018 08:00AM

January 19, 2018

Ubuntu Insights: Tutorial: Continuous delivery of snaps with Circle CI

Bullet-proof continuous delivery of software is crucial to the health of your community. More than a way to avoid running manual tests, it also enables your early adopters to test code and give feedback on it as soon as it lands. You may already be using build.snapcraft.io to do this for snaps, but in some cases your existing tooling needs to prevail.

Enabling Circle CI for snaps

In this tutorial, you will learn how to use Circle CI to build a snap and push it automatically to the edge channel of the Snap Store every time you make a change to your master branch in GitHub.

What you’ll learn

  • How to build your snap in Circle CI
  • How to push your snap to the store automatically from Circle CI
  • How to use snapcraft to enable all this, with a few simple commands
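
If you just want a feel for what the pipeline does before taking the tutorial, the job essentially boils down to commands along these lines, run in a snapcraft-capable build environment. Treat this as a rough sketch only: the SNAPCRAFT_LOGIN_FILE variable name and the exact snapcraft flags are assumptions on my part and may differ from what the tutorial uses.

# build the snap
snapcraft
# authenticate with credentials previously created with snapcraft export-login
# and stored (base64-encoded) in a protected CI environment variable
echo "$SNAPCRAFT_LOGIN_FILE" | base64 --decode > snapcraft.login
snapcraft login --with snapcraft.login
# push the snap and release it to the edge channel
snapcraft push *.snap --release=edge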

Take The Tutorial

19 January, 2018 07:15PM

Joe Barker: Configuring MSMTP On Ubuntu 16.04 (Again)

This post exists as a copy of what I had on my previous blog about configuring MSMTP on Ubuntu 16.04; I’m posting it as-is for posterity, and have no idea if it’ll work on later versions. As I’m not hosting my own Ubuntu/MSMTP server anymore I can’t see any updates being made to this, but if I ever do have to set this up again I’ll create an updated post! Anyway, here’s what I had…

I previously wrote an article around configuring msmtp on Ubuntu 12.04, but as I hinted at in a previous post that sort of got lost when the upgrade of my host to Ubuntu 16.04 went somewhat awry. What follows is essentially the same post, with some slight updates for 16.04. As before, this assumes that you’re using Apache as the web server, but I’m sure it shouldn’t be too different if your web server of choice is something else.

I use msmtp for sending emails from this blog to notify me of comments and upgrades etc. Here I’m going to document how I configured it to send emails via a Google Apps account, although this should also work with a standard Gmail account too.

To begin, we need to install 3 packages:
sudo apt-get install msmtp msmtp-mta ca-certificates
Once these are installed, a default config is required. By default msmtp will look at /etc/msmtprc, so I created that using vim, though any text editor will do the trick. This file looked something like this:

# Set defaults.
defaults
# Enable or disable TLS/SSL encryption.
tls on
tls_starttls on
tls_trust_file /etc/ssl/certs/ca-certificates.crt
# Setup WP account's settings.
account <ACCOUNT_NAME>
host smtp.gmail.com
port 587
auth login
user <EMAIL_ADDRESS>
password <PASSWORD>
from <FROM_ADDRESS>
logfile /var/log/msmtp/msmtp.log

account default : <ACCOUNT_NAME>

Any of the uppercase items (i.e. <ACCOUNT_NAME>, <EMAIL_ADDRESS>, <PASSWORD>, <FROM_ADDRESS>) are placeholders that need replacing with values specific to your configuration. The exception to that is the log file, which can of course be placed wherever you wish to log any msmtp activity/warnings/errors to.

Once that file is saved, we’ll update the permissions on the above configuration file — msmtp won’t run if the permissions on that file are too open — and create the directory for the log file.

sudo mkdir /var/log/msmtp
sudo chown -R www-data:adm /var/log/msmtp
sudo chmod 0600 /etc/msmtprc

Next I chose to configure logrotate for the msmtp logs, to make sure that the log files don’t get too large and to keep the log directory a little tidier. To do this, we create /etc/logrotate.d/msmtp and configure it with the following file. Note that this is optional; you may choose not to do this, or you may configure the logs differently.

/var/log/msmtp/*.log {
rotate 12
monthly
compress
missingok
notifempty
}

Now that the logging is configured, we need to tell PHP to use msmtp by editing /etc/php/7.0/apache2/php.ini and updating the sendmail path from
sendmail_path =
to
sendmail_path = "/usr/bin/msmtp -C /etc/msmtprc -a <ACCOUNT_NAME> -t"
Here I did run into an issue where, even though I specified the account name, it wasn’t sending emails correctly when I tested it. This is why the line account default : <ACCOUNT_NAME> was placed at the end of the msmtp configuration file. To test the configuration, ensure that the PHP file has been saved, run sudo service apache2 restart, then run php -a and execute the following:

mail ('personal@email.com', 'Test Subject', 'Test body text');
exit();

Any errors that occur at this point will be displayed in the output, which should make diagnosing problems relatively easy. If all is successful, you should now be able to use PHP’s sendmail (which WordPress, at the very least, uses) to send emails from your Ubuntu server using Gmail (or Google Apps).
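
You can also take PHP out of the equation entirely and test msmtp straight from the shell, which is a handy first step when debugging. Run it with sudo (or as a user that can read /etc/msmtprc), and use the same account placeholder as in the configuration above:

printf 'To: personal@email.com\nSubject: msmtp test\n\nTest body text\n' | sudo msmtp -a <ACCOUNT_NAME> personal@email.com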

I make no claims that this is the most secure configuration, so if you come across this and realise it’s grossly insecure or something is drastically wrong please let me know and I’ll update it accordingly.

19 January, 2018 10:30AM

hackergotchi for Univention Corporate Server

Univention Corporate Server

Collaborative Cloud Workspace Based on Univention Corporate Server

In this success story, a team which had only worked together online in the past required a stable IT infrastructure of its own with classic, collaborative elements as well as a cloud workspace. Following successful realization of the project, the cooperation can now continue in a virtualized environment with UCS.

In this article, I will be explaining in more detail which hardware and software were employed in the implementation of this project.

Team of 10 in a virtual space

The 10-strong team at Chefarztabrechnung24 GmbH is a long-term billing partner for chief physicians, consultants, and hospitals. In the past, it employed simple forms of data exchange and communication tools such as IMAP mail. In early 2017, the group decided to develop its own IT infrastructure.

The new infrastructure needed to offer typical elements of on-premise environments:

  • Collaborative applications such as e-mail, calendar, and address books
  • Access via web frontend
  • Outlook
  • Mobile synchronization of smartphones via ActiveSync app
  • Shared access to file structures organized on network drives

Data privacy-relevant information such as patient data, the duties performed by the physicians, and the diagnoses made should not be saved on these systems.

Starting situation and plans for the future

At that time, CAA24 was operating without any set premises or locations. However, premises were planned for the future, and so it was important for it to be possible to integrate the headquarters into the new infrastructure and perform the administration. For financial reasons, there was no intention to set up and operate a standard private cloud including the procurement of the corresponding hardware. Instead, the requisite instances should be operated securely on the public cloud of a trustworthy provider.

In this respect, the team wished to operate instances on the basis of the virtualization software KVM/QEMU. Furthermore, the idea was to complement the instances with a private, local network and administrate them via a web frontend. Into the bargain, the instances should ideally also offer Ceph-based storage. Last, but by no means least, it should be possible to change the virtual hardware components at runtime.

Selection of provider and applications

After evaluating a number of different providers, the team selected Filoo GmbH, which operates a data center with a location in one of Germany’s most state-of-the-art data center campuses, Telehouse’s ISO:27001-controlled and certified data center in Frankfurt.

As a solution, they decided to go with our suggestion of Univention Corporate Server in combination with the groupware Zimbra. This combination means that all the central aspects – including the creation of new users, groups, and network drives – can be easily accessed and centrally administrated via Univention Management Console. UCS’ flexible concept and integrated replication mechanisms inherently support the mounting and administration of additional instances.

Realization of the shared cloud workspace

We created three virtual instances in the data center so as to allow us to implement the desired features. The key component of this scenario is a dedicated UCS domain controller master for the provision of central services such as LDAP, Kerberos, DNS/DHCP, etc. The domain controller was set up with the DNS zone corp.caa24.de and the workgroup CAA24 for internal use. To keep the setup simple, a network was created with a private and a public network segment, secured by a 3-zone firewall and an OpenVPN server. One virtual CPU, 2 GB of vRAM, and a 10 GB vDisk are sufficient in this case for the instance to perform well, and the resources can be increased at runtime.

The second instance was also set up with Univention Corporate Server, in the role of a domain controller slave with the file server component, connected via the private network and joined to the domain. The third instance runs Ubuntu Server LTS and the Zimbra groupware. It is set up with the top-level domain caa24.de and securely connected via a public network interface. The inboxes were automatically migrated to the new platform with the help of the imapsync tool. File-based backups are performed via the rsnapshot tool, which backs up the corporate data it holds every two hours.

The team members’ newly purchased company ThinkPad laptops were equipped with an OpenVPN client for secure mobile access. At the CAA24 team’s request, the familiar Windows 7 Professional desktop was retained. Once prepared, the ThinkPads were sent to the members of the team, who are based in Stuttgart among other locations, and on delivery each team member was offered telephone support and instructed in their use.

This environment supports a team working at different locations via a traditional, collaborative workspace provided on the cloud. Complete control is retained, the system can be centrally administrated, and it is also possible to integrate additional components.

What is the situation like today?

The selected hardware and software components have proven their worth and allow problem-free shared working. The resources are sufficient for the virtual instances and have not required any expansions to date. The central administration of identities has proven practical.

In the meantime, there are now premises in Berlin with permanent IT workstations. The site was connected via VPN in an offline-capable manner using a low-energy, router-sized server from Thomas-Krenn AG running Univention Corporate Server in the role of a domain controller slave with the file server component, and was integrated into the domain. Both file servers are synchronized bidirectionally by means of the osync project. The workstations and network devices are centrally administered in the domain, use roaming profiles, and can also be administered remotely. The Debian/Ubuntu desktops employed are joined to the domain by means of the realmd project.

Further plans

The next project in the pipeline is the wish to provide the desktops with applications and updates centrally, which, if all goes to plan, will be made possible through the use of Ansible.

I hope that this user story has given you some new ideas, and I would be delighted if it has. If you have any questions or comments, please feel free to use the comments field or contact me directly via our website.

 

The post Collaborative Cloud Workspace Based on Univention Corporate Server first appeared on Univention.

19 January, 2018 09:00AM by Maren Abatielos

hackergotchi for ARMBIAN

ARMBIAN

Odroid HC2


Ubuntu server – LTS kernel (stable)
Command line interface – server usage scenarios. Download as .torrent (recommended).

Debian server – LTS kernel (stable)
Command line interface – server usage scenarios. Download as .torrent (recommended).

Other download options and archive

Known issues

Legacy kernel images

Based on a review from @tkaiser,

https://forum.armbian.com/index.php?/topic/4983-odroid-hc1/

  1. Use next kernel,
  2. don’t think about USB3 issues any more (that’s a XU4 problem),
  3. don’t expect serial console problems (that was an Armbian problem), and
  4. please reconsider ‘rootfs on HDD’ (on an SSD that’s great; HDDs are too slow, and moving the rootfs onto them prevents them from sleeping).
  • eMMC install might be broken if you don’t have a recent u-boot on your eMMC card – you must update it. Add run copy_uboot_sd2emmc to your boot.ini and boot from the SD card with the eMMC attached. This is a one-time job – remove that command from boot.ini afterwards,
  • serial console is broken.

Mainline kernel images
TBD

Hardkernel kept everything 100% compatible with the ODROID XU4.

With the HC1 we’re talking about Hardkernel’s 4.9 kernel or mainline.

Unverified issues

Derived from the comment chain in the review:

  1. Mine is not connecting at gigabit – fast ethernet only. Same cable (+few others), same PSU, same SD card, any kernel, while XU4 runs at Gigabit w/o problem

    Possibly a cable quality issue, as a cable swap caused the problem to disappear

Notes

  1. Based on the XU4 template with exceptions

  2. To avoid ‘using up’ write cycles on the SD media, once everything is working, move to a read-only (RO) root filesystem and ‘chain boot’ into the 2.5 in SATA drive (either SSD or spinning), which does not have the same ‘wear’ issues.

Boot message chain

Made with armbianmonitor -u; the output of a test run with an external RTL8153 behind the internal USB3 hub on an XU4: http://sprunge.us/cOJP

Quick start | Documentation

Preparation

Make sure you have a good & reliable SD card and a proper power supply. Archives can be uncompressed with 7-Zip on Windows, Keka on OS X and 7z on Linux (apt-get install p7zip-full). RAW images can be written with Etcher (all OS).

Boot

Insert SD card into a slot and power the board. (First) boot (with DHCP) takes up to 35 seconds with a class 10 SD Card and cheapest board.

Login

Login as root on HDMI / serial console or via SSH and use password 1234. You will be prompted to change this password at first login. Next you will be asked to create a normal user account that is sudo enabled (beware of default QWERTY keyboard settings at this stage).

Tested hardware

Legacy kernel, 32 GB SD card

19 January, 2018 05:47AM by igorpecovnik

January 18, 2018

hackergotchi for ArcheOS

ArcheOS

Could open photogrammetry be an alternative to 3D Orthodontics?


Cícero Moraes
3D designer

Graziane Olimpo
SD, Master in Orthodontics


The effectiveness of intraoral scanners is undeniable. They are the gold standard and the reference for any professional who wishes to use new 3D technologies to plan orthodontic procedures. However, while these technologies meet the needs of the market, they also stand out for their high cost. Taken as a whole, few professionals have constant, unrestricted access to these tools.

The purpose of this material is to offer an affordable alternative to those who want to enter the world of 3D graphics, but do not have the means to enjoy powerful and expensive machines.

The objective is not to criticize those prices, which are quite coherent given the investment involved and the results delivered; besides, the world is very big and there is room for everyone. What we are doing is simply showing a way to achieve results compatible, within due limitations, with those produced by the market references.

If you have any problems with the embedded text below, please read it here: https://docs.google.com/document/d/1SYJUNS2fihm-BpTt4mkqAdeDv0QQWffaO3e1vdr334s/edit?usp=sharing


18 January, 2018 07:42PM by cogitas3d (noreply@blogger.com)

Cumulus Linux

Getting started with Linux: the basics – part 2

In part 1 of our series about getting started with Linux, we learned how to download Linux, whether you should use the CLI or the GUI, how to get a SSH client, how to login to Linux and how to get help. In this post, you’ll learn how to know what type of Linux you are using and how to navigate the Linux file system.

How do I know what type of Linux I am using?

Because there are so many different types of Linux, you want to be sure you know what distribution and version you are using (for the sake of searching the right documentation on the Internet, if nothing else). Keep in mind a couple of different commands that identify your Linux version.

The uname command shows the basic type of operating system you are using, like this:

david@debian:~$ uname -a
Linux debian 3.16.0-4-686-pae #1 SMP Debian 3.16.43-2 (2017-04-30) i686 GNU/Linux

And the hostnamectl command shows you the hostname of the Linux server as well as other system information, like the machine ID, virtualization hypervisor (if used), operating system and Linux kernel version. Here’s an example:

david@debian:~$ hostnamectl
Static hostname: debian
Icon name: computer-vm
Chassis: vm
Machine ID: 0eb625ef6e084c9181b9db9c6381d8ff
Boot ID: 8a3ef6ecdfcf4218a6102b613e41f9ee
Virtualization: vmware
Operating System: Debian GNU/Linux 8 (jessie)
Kernel: Linux 3.16.0-4-686-pae
Architecture: x86

As shown above, this host is running Linux. More specifically, the host is running Debian GNU/Linux version 8 (codename jessie) with a Linux 3.16 kernel on an x86 CPU architecture. Among other things, you can also see that this Linux installation is running on a virtual machine with VMware as the hypervisor. Cool, huh?

Where do I find things?

An operating system has a file system that, similar to a filing cabinet, allows you to store and retrieve data. Most file systems use the concept of directories—also called folders—and files that are stored inside the directories. Everything in Linux, even hardware, is represented in this folder and file structure.

If you’re new to Linux, you might be wondering how the Linux file system compares to something familiar like the Microsoft Windows file system. In Windows, you may be used to drive letters (like the C: drive) being used as the highest point of a storage volume. Linux represents the highest level of the volume differently. The Linux file system can span multiple physical drives, which are all a part of the same tree. The highest point of the Linux file system is the “/,” or “root,” with all other directories branching down the tree from there, as shown in Figure 1.

Interaction with and navigation of the Linux file system is done up and down the tree with commands such as:

  • pwd: Display the directory you’re currently in (short for print working directory).
  • ls: List out files that are present in the folder.
  • cd: Change directory.
  • rm: Remove files.
  • mkdir and rmdir: Make and remove folders or directories, respectively.

Figure 1. The typical Linux file system

Let’s do a quick exercise. First, by using the pwd command, we can see what directory we’re currently in.

david@debian:~$ pwd
/home/david

Next, to change to the root directory, you can use the cd command.

david@debian:~$ cd /

To get a simple list of files, you can use the ls command. This will display a very concise list of the files and folders that exist in the current directory.

david@debian:/$ ls
bin boot dev etc home initrd.img lib lost+found media mnt opt proc root run sbin srv sys tmp usr var vmlinuz

But, in most cases, you probably want more information than just a simple list of files. Linux uses command line flags or switches to extend what a command can do. For example, to list out all the files and folders in the current directory, along with full details about each one, you would type ls -la. This long listing format then shows you each file and directory, as well as the permissions and access rights for each object, the name of the user that owns the object (root), the name of the group that owns the object (again, root), the file size, and the date and time that the object was last modified. Here’s what this output looks like for the root folder on my test system:

david@debian:/$ ls -la
total 88
drwxr-xr-x 21 root root 4096 May 15 11:50 .
drwxr-xr-x 21 root root 4096 May 15 11:50 ..
drwxr-xr-x 2 root root 4096 May 15 12:11 bin
drwxr-xr-x 3 root root 4096 May 15 15:53 boot
drwxr-xr-x 18 root root 3200 Jul 14 01:52 dev
drwxr-xr-x 134 root root 12288 Jul 14 01:55 etc
drwxr-xr-x 3 root root 4096 May 15 15:53 home
lrwxrwxrwx 1 root root 33 May 15 11:50 initrd.img -> /boot/initrd.img-3.16.0-4-686-pae
drwxr-xr-x 19 root root 4096 May 17 00:41 lib
drwx------ 2 root root 16384 May 15 11:49 lost+found
drwxr-xr-x 3 root root 4096 May 15 11:49 media
drwxr-xr-x 2 root root 4096 May 15 11:49 mnt
drwxr-xr-x 2 root root 4096 May 15 11:49 opt
dr-xr-xr-x 150 root root 0 Jul 14 01:52 proc
drwx------ 2 root root 4096 May 16 14:29 root
drwxr-xr-x 23 root root 880 Jul 14 01:57 run
drwxr-xr-x 2 root root 4096 May 17 00:41 sbin
drwxr-xr-x 2 root root 4096 May 15 11:49 srv
dr-xr-xr-x 13 root root 0 Jul 14 01:52 sys
drwxrwxrwt 13 root root 4096 Jul 14 02:02 tmp
drwxr-xr-x 10 root root 4096 May 15 11:49 usr
drwxr-xr-x 12 root root 4096 May 15 12:12 var
lrwxrwxrwx 1 root root 29 May 15 11:50 vmlinuz -> boot/vmlinuz-3.16.0-4-686-pae

One of the most common questions from new Linux users who are at a command line is, “What are the applications available to me, and how do I run them?” As mentioned previously, most user tools are found in the directories /bin and /usr/bin and system tools are typically located in /sbin and /usr/sbin. For example, tools like cp (to copy a file), ps (for process status) and cat (to display the contents of a file) are all found in /bin. The great thing is you don’t need to go into any of these directories to run these types of tools because these directories are included in your $PATH variable by default.

The $PATH variable includes all the locations that are searched when you run a command in the CLI. Because the /bin directories are in your path, when you execute the name of any of these sample tools, they will be found. Here’s what your $PATH variable might look like (shown by using the echo command to show the $PATH variable):

david@debian:~$ echo $PATH
/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games

You can execute applications or commands simply by typing the name of the command if the application’s location is in your $PATH. If that application is not in one of the folders listed in your $PATH, you have to do one of the following:

  • Navigate to the folder where the application is found and tell Linux that you want to execute the application in that folder, like this:

david@debian:~$ cd /opt/app/bin
david@debian:~$ ./myapp

(The “dot slash” refers to the current folder, with the full command saying “in the current directory, execute ‘my app.’”)

  • Specify the full path of the application when you execute it, like this:

david@debian:~$ /opt/app/bin/myapp
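
There is also a third option: add the application’s folder to your $PATH so that it is found automatically. This only lasts for the current shell session; add the same line to your ~/.bashrc or ~/.profile to make it permanent:

david@debian:~$ export PATH=$PATH:/opt/app/bin
david@debian:~$ myapp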

A useful command for determining which executable will be run, and from what directory, is the which command. Follow which with the name of a command to see the location of the executable that will actually be run.
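
For example, on this Debian system:

david@debian:~$ which ls
/bin/ls

Here /bin/ls is the executable that runs when you type ls, because /bin appears in the $PATH shown earlier.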

Besides the standard types of Linux tools, there are tens of thousands of applications you can install into Linux in just a few commands. Linux distributions offer package managers that help you search online package or application repositories and then download and install just about any application you might want. Package managers also make it easy to update your packages to get the latest version. Examples of package managers are apt, dpkg, rpm, and yum. The package manager that is available to you will be determined by the Linux distribution that you have installed. Linux running on Android mobile devices also has its own package manager (similar to the Apple “App Store”).

On Debian and Ubuntu systems, you can run apt list --installed to get a list of the packages that are already installed, like this:
david@debian:~$ apt list --installed
accountsservice/stable,now 0.6.37-3+b1 i386 [installed,automatic]
acl/stable,now 2.2.52-2 i386 [installed]
acpi/stable,now 1.7-1 i386 [installed]
acpi-support-base/stable,now 0.142-6 all [installed]
(Output truncated)

Any apt list command will result in very long output, so you may consider piping it to the “less” pager tool, like this:

apt list | less

This will show you the output page by page and allow you to press the space bar after each page to see the next page.
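
Installing something new is just as simple. For example, to search for and then install the htop process viewer (any package in the repositories works the same way):

david@debian:~$ apt search htop
david@debian:~$ sudo apt install htop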

Next Steps

To learn more about getting started with the Linux OS, keep an eye out for part 3 of this series, coming soon!

Want to learn more about Linux in the data center? Download the full ebook titled the “Gorilla Guide to… Linux Networking 101” or visit our learn section to discover more!

The post Getting started with Linux: the basics – part 2 appeared first on Cumulus Networks Blog.

18 January, 2018 07:29PM by Kelsey Havens

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: Canonical brings Slack to the snap ecosystem

The digital workspace will now be available to all Linux users

London, UK – 18th January 2018 – Canonical, the company behind Ubuntu, today announced the first iteration of Slack as a snap, bringing collaboration to open source users.

Slack is an enterprise software platform that allows teams and businesses of all sizes to communicate effectively. Slack works seamlessly with other software tools within a single integrated environment, providing an accessible archive of an organisation’s communications, information and projects.

In adopting the universal Linux app packaging format, Slack will open its digital workplace up to an ever-growing community of Linux users, including those using Linux Mint, Manjaro, Debian, ArchLinux, OpenSUSE, Solus, and Ubuntu.

Designed to connect us to the people and tools we work with every day, the Slack snap will help Linux users be more efficient and streamlined in their work. And an intuitive user experience remains central to the snaps’ appeal, with automatic updates and rollback features giving developers greater control in the delivery of each offering.

“Slack is helping to transform the modern workplace, and we’re thrilled to welcome them to the snaps ecosystem”, said Jamie Bennett, VP of Engineering, Devices & IoT at Canonical. “Today’s announcement is yet another example of putting the Linux user first – Slack’s developers will now be able to push out the latest features straight to the user. By prioritising usability, and with the popularity of open source continuing to grow, the number of snaps is only set to rise in 2018.”

Snaps are containerised software packages, designed to work perfectly and securely within any Linux environment, across desktop, the cloud, and IoT devices. Thousands of snaps have been launched since the first in 2016, benefiting from automated updates, added security, and rollback features that let applications revert to the previous working version in the event of a bug.

Slack is available to download as a snap by running snap install slack.

-ENDS-

About Canonical
Canonical is the company behind Ubuntu, the leading OS for cloud operations. Most public cloud workloads use Ubuntu, as do most new smart gateways, switches, self-driving cars and advanced robots. Canonical provides enterprise support and services for commercial users of Ubuntu. Established in 2004, Canonical is a privately held company.

18 January, 2018 04:54PM

hackergotchi for Serbian GNU/Linux

Serbian GNU/Linux

PDF edition - Serbian Linux

To mark the fifth release of the Serbian GNU/Linux distribution, a PDF publication has been prepared, aimed at giving users a fuller picture of this open source project. The handbook is intended above all for users who run other operating systems and do not have a complete overview of what is happening in the world of free software, as well as for all users who would like to have an operating system in the Serbian language and Cyrillic script on their computer. Users who have downloaded, or will download, the new release of the operating system can find the PDF in the Књиге (Books) folder.

18 January, 2018 04:54PM by Debian Srbija (noreply@blogger.com)

hackergotchi for Ubuntu developers

Ubuntu developers

Sean Davis: MenuLibre 2.1.4 Released

The wait is over. MenuLibre 2.1.4 is now available for public testing and translations! With well over 100 commits, numerous bug fixes, and a lot of polish, the best menu editing solution for Linux is ready for primetime.

What’s New?

New Features

  • New Test Launcher button to try out new changes without saving (LP: #1315875)
  • New Sort Alphabetically button to instantly sort subdirectories (LP: #1315536)
  • Added ability to create subdirectories in system-installed paths (LP: #1315872)
  • New Parsing Errors tool to review invalid launcher files
  • New layout preferences! Budgie, GNOME, and Pantheon users will find that MenuLibre uses client side decorations (CSD) by default, while other desktops will use the more traditional server side decorations with a toolbar. Users are able to set their preference via the commandline.
    • -b, --headerbar to use the headerbar layout
    • -t, --toolbar to use the toolbar layout

General

  • The folder icon is now used in place of applications-other for directories (LP: #1605905)
  • DBusActivatable and Hidden keys are now represented by switches instead of text entries
  • Additional non-standard but commonly used categories have been added
  • Support for the Implements key has been added
  • Cinnamon, EDE, LXQt, and Pantheon have been added to the list of supported ShowIn environments
  • All file handling has been replaced with the better-maintained GLib KeyFile library
  • The Version key has been bumped to 1.1 to comply with the latest version of the specification

Bug Fixes

  • TypeError when adding a launcher and nothing is selected in the directory view (LP: #1556664)
  • Invalid categories added to first launcher in top-level directory under Xfce (LP: #1605973)
  • Categories created by Alacarte not respected, custom launchers deleted (LP: #1315880)
  • Exit application when Ctrl-C is pressed in the terminal (LP: #1702725)
  • Some categories cannot be removed from a launcher (LP: #1307002)
  • Catch exceptions when saving and display an error (LP: #1444668)
  • Automatically replace ~ with full home directory (LP: #1732099)
  • Make hidden items italic (LP: #1310261)

Translation Updates

  • This is the first release with complete documentation for every translated string in MenuLibre. This allows translators to better understand the context of each string when they adapt MenuLibre to their language, and should lead to more and better quality translations in the future.
  • The following languages were updated since MenuLibre 2.1.3:
    • Brazilian Portuguese, Catalan, Croatian, Danish, French, Galician, German, Italian, Kazakh, Lithuanian, Polish, Russian, Slovak, Spanish, Swedish, Ukrainian

Screenshots

Downloads

The latest version of MenuLibre can always be downloaded from the Launchpad archives. Grab version 2.1.4 from the below link.

https://launchpad.net/menulibre/2.1/2.1.4/+download/menulibre-2.1.4.tar.gz

  • SHA-256: 36a6350019e45fbd1219c19a9afce29281e806993d4911b45b371dac50064284
  • SHA-1: 498fdd0b6be671f4388b6fa77a14a7d1e127e7ce
  • MD5: 0e30f24f544f0929621046d17874ecf0

18 January, 2018 11:50AM

hackergotchi for ZEVENET

ZEVENET

Building Highly Available Data Centers and Automatic Disaster Recovery

Global Service Load Balancing, also known as GSLB, is a load balancing technique that makes it possible to build fault tolerance, automatic disaster recovery, load balancing across WAN servers and improved latency among different data center sites using the distributed DNS protocol. For that reason, we've compiled a technical article to show how easy it is to build a GSLB service with Zevenet Load Balancer...

Source

18 January, 2018 10:29AM by Zevenet

hackergotchi for VyOS

VyOS

1.1.8 release is available for download

1.1.8, the major minor release, is available for download from https://downloads.vyos.io/?dir=release/1.1.8 (mirrors are syncing up).

It breaks the semantic versioning convention: while the version number implies a bugfix-only release, it actually includes a number of new features. This is because the 1.2.0 number is already assigned to the Jessie-based release that is still in beta, and not shipping features that have been in the codebase for a while (a few of them already in production for some users) would feel quite wrong, especially considering the long delay between releases. Overall it's pretty close in scope to the original 1.2.0 release plan before Debian Squeeze was EOLd and we had to switch our effort to getting rid of the legacy that was keeping us from moving to a newer base distro.

You can find the full changelog here.

The release is available for both 64-bit and 32-bit machines. The i586-virt flavour, however, was discontinued since a) according to web server logs and user comments, there is no demand for it, unlike a release for 32-bit physical machines b) hypervisors capable of running on 32-bit hardware went extinct years ago. The current 32-bit image is built with paravirtual drivers for KVM/Xen, VMware, and Hyper-V, but without PAE, so you shouldn't have any problem running it on small x86 boards and testing it on virtual machines.

We've also made a 64-bit OVA that works with VMware and VirtualBox.

Security

Multiple vulnerabilities in OpenSSL, dnsmasq, and hostapd were patched, including the recently found remote code execution in dnsmasq.

Bugfixes

Some notable bugs that were fixed include:

  • Protocol negation in NAT not working correctly (it had exactly opposite effect and made the rule match the negated protocol instead)
  • Inability to reconfigure L2TPv3 interface tunnel and session ID after interface creation
  • GRUB not getting installed on RAID1 members
  • Lack of USB autosuspend causing excessive CPU load in KVM guests
  • VTI interfaces not coming back after tunnel reset
  • Cluster failing to start on boot if network links take too long to get up

New features

User/password authentication for OpenVPN client mode

A number of VPN providers (and some corporate VPNs) require that you use user/password authentication and do not support x.509-only authentication. Now this is supported by VyOS:

set interfaces openvpn vtun0 authentication username jrandomhacker
set interfaces openvpn vtun0 authentication password qwerty
set interfaces openvpn vtun0 tls ca-cert-file /config/auth/ca.crt
set interfaces openvpn vtun0 mode client
set interfaces openvpn vtun0 remote-host 192.0.2.1

Bridged OpenVPN servers no longer require subnet settings

Before this release, OpenVPN would always require subnet settings, so if one wanted to set up an L2 OpenVPN bridged to another interface, they'd have to specify a mock subnet. Not anymore: now if the device-type is set to "tap" and bridge-group is configured, subnet settings are not required.

New OpenVPN options exposed in the CLI

A few OpenVPN options that formerly would have to be configured through openvpn-option are now available in the CLI:

set interfaces openvpn vtun0 use-lzo-compression
set interfaces openvpn vtun0 keepalive interval 10
set interfaces openvpn vtun0 keepalive failure-count 5

Point to point VXLAN tunnels are now supported

In earlier releases, it was only possible to create multicast, point to multipoint VXLAN interfaces. Now the option to create point to point interfaces is also available:
set interfaces vxlan vxlan0 address 10.1.1.1/24
set interfaces vxlan vxlan0 remote 203.0.113.50
set interfaces vxlan vxlan0 vni 10

AS-override option for BGP

The as-override option that is often used as an alternative to allow-as-in is now available in the CLI:

set protocols bgp 64512 neighbor 192.0.2.10 as-override

as-path-exclude option for route-maps

The option for removing selected ASNs from AS paths is available now:
set policy route-map Foo rule 10 action permit
set policy route-map Foo rule 10 set as-path-exclude 64600

Buffer size option for NetFlow/sFlow

The default buffer size was often insufficient for high-traffic installations, which caused pmacct to crash. Now it is possible to specify the buffer size option:
set system flow-accounting buffer-size 512 # megabytes
There are a few more options for NetFlow: source address (can be either IPv4 or IPv6) and maximum number of concurrent flows (on high-traffic machines, setting it too low can cause NetFlow data loss):
set system flow-accounting netflow source-ip 192.0.2.55
set system flow-accounting netflow max-flows 2097152

VLAN QoS mapping options

It is now possible to specify VLAN QoS values:
set interfaces ethernet eth0 vif 42 egress-qos 1:6
set interfaces ethernet eth0 vif 42 ingress-qos 1:6

Ability to set custom sysctl options

There are lots of sysctl options in the Linux kernel and it would be impractical to expose them all in the CLI, since most of them only need to be modified under special circumstances. Now you can set a custom option if you need to:
set system sysctl custom $key value $value
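
For example, to ignore broadcast ICMP echo requests via sysctl (just an illustration; any valid key/value pair follows the same pattern):

set system sysctl custom net.ipv4.icmp_echo_ignore_broadcasts value 1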

Custom client ID for DHCPv6

It is now possible to specify a custom client ID for the DHCPv6 client:
set interfaces ethernet eth0 dhcpv6-options duid foobar

Ethernet offload options

Under "set interfaces ethernet ethX offload-options" you can find a number of options that control NIC offload.
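
For example, generic receive offload could be toggled roughly like this (a hedged illustration from memory; use tab completion on your system to confirm the exact option names your NIC supports):

set interfaces ethernet eth0 offload-options generic-receive on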

Syslog level "all"

Now you can specify options for the *.* syslog pattern, for example:

set system syslog global facility all level notice

Unresolved or partially resolved issues

Latest ixgbe driver updates are not included in this release.

The issue with VyOS losing parts of its BGP config, when update-source is set to an address belonging to a dynamic interface such as VTI and the interface takes a long time to come up and acquire its address, was resolved in its literal wording, but it's not guaranteed that the BGP session will come up on its own in this case. It's recommended to set the update-source to an address of an interface that is available right away on boot, for example a loopback or dummy interface.

The issue with changing the routing table number in PBR rules is not yet resolved. The recommended workaround is to delete the rule and re-create it with the new table number, e.g. by copying its commands from 'run show configuration commands | match "policy route "'.

Acknowledgements

I would like to say thanks to everyone who contributed and made this release possible, namely: Kim Hagen, Alex Harpin, Yuya Kusakabe, Yuriy Andamasov, Ray Soucy, Nikolay Krasnoyarski, Jason Hendry, Kevin Blackham, kouak, upa, Logan Attwood, Panagiotis Moustafellos, Thomas Courbon, and Ildar Ibragimov (hope I didn't forget anyone).

A note on the downloads server

The original packages.vyos.net server is still having IO performance problems and won't handle the traffic spike associated with a release well. We've set up the https://downloads.vyos.io server on our new host specially for release images and will later migrate the rest of the old server, including the package repositories and the rsync setup.

18 January, 2018 06:21AM by Daniil Baturin

January 17, 2018

hackergotchi for Ubuntu developers

Ubuntu developers

The Fridge: Ubuntu 17.04 (Zesty Zapus) reached End of Life on January 13, 2018

Ubuntu announced its 17.04 (Zesty Zapus) release almost 9 months ago, on
April 13, 2017.  As a non-LTS release, 17.04 has a 9-month support cycle
and, as such, will reach end of life on Saturday, January 13th.

At that time, Ubuntu Security Notices will no longer include information or
updated packages for Ubuntu 17.04.

The supported upgrade path from Ubuntu 17.04 is via Ubuntu 17.10.
Instructions and caveats for the upgrade may be found at:

https://help.ubuntu.com/community/Upgrades

Ubuntu 17.10 continues to be actively supported with security updates and
select high-impact bug fixes.  Announcements of security updates for Ubuntu
releases are sent to the ubuntu-security-announce mailing list, information
about which may be found at:

https://lists.ubuntu.com/mailman/listinfo/ubuntu-security-announce

Development of a complete response to the highly-publicized Meltdown and
Spectre vulnerabilities is ongoing, and due to the timing with respect to
this End of Life, we will not be providing updated Linux kernel packages for
Ubuntu 17.04.  We advise users to upgrade to Ubuntu 17.10 and install the
updated kernel packages for that release when they become available.

For more information about Canonical’s response to the Meltdown and
Spectre vulnerabilities, see:

https://insights.ubuntu.com/2018/01/04/ubuntu-updates-for-the-meltdown-spectre-vulnerabilities/

Since its launch in October 2004 Ubuntu has become one of the most highly
regarded Linux distributions with millions of users in homes, schools,
businesses and governments around the world.  Ubuntu is Open Source
software, costs nothing to download, and users are free to customise or
alter their software in order to meet their needs.

https://lists.ubuntu.com/archives/ubuntu-announce/2018-January/000227.html

Originally posted to the ubuntu-announce mailing list on Fri Jan 5 22:23:25 UTC 2018 
by Steve Langasek, on behalf of the Ubuntu Release Team

17 January, 2018 01:32PM

Ubuntu Insights: Spectre Mitigation Updates Available for Testing in Ubuntu Proposed


Canonical holds Ubuntu to the highest standards of security and quality.  This week we published candidate Ubuntu kernels providing mitigation for CVE-2017-5715 and CVE-2017-5753 (ie, Spectre / Variants 1 & 2) to their respective -proposed pockets for Ubuntu 17.10 (Artful), 16.04 LTS (Xenial), and 14.04 LTS (Trusty).  We have also expanded mitigation to cover s390x and ppc64el.

You are invited to test and provide feedback for the following updated Linux kernels.  We have also rebased all derivative kernels such as the public cloud kernels (Amazon, Google, Microsoft, etc) and the Hardware Enablement (HWE) kernels.

Updates for Ubuntu 12.04 ESM are in progress, and will be available for Canonical’s Ubuntu Advantage customers.  UA customers should reach out to Canonical support for access to candidate kernels.

We intend to promote the candidate kernels to the -security/-updates pocket for General Availability (GA) on Monday, January 22, 2018.

There is a corresponding intel-microcode update for many Intel CPUs, as well as an eventual amd64-microcode update, that will also need to be applied in order to fully mitigate Spectre.  In the interest of full disclosure, we understand from Intel that there are currently known issues with the intel-microcode binary:

Canonical QA and Hardware Certification teams are engaged in extensive automated and manual testing of these kernels and the Intel microcode updates on Ubuntu certified hardware and Ubuntu certified public clouds. The primary focus is on regression testing and security effectiveness. We are actively investigating Google’s “Retpoline” toolchain-based approach, which requires rebuilding Ubuntu binaries but reduces the performance impact of the mitigation.

For your reference, the following links explain how to enable Ubuntu’s Proposed repositories, and how to file Linux kernel bugs:
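
As a rough sketch, enabling the -proposed pocket from the command line on 16.04 LTS looks something like the following; adjust the series name for your release, and note that linux-generic is simply the generic kernel metapackage:

echo "deb http://archive.ubuntu.com/ubuntu/ xenial-proposed restricted main multiverse universe" | \
  sudo tee /etc/apt/sources.list.d/xenial-proposed.list
sudo apt update
sudo apt install -t xenial-proposed linux-generic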

The most current information will continue to be available at:

@Canonical

17 January, 2018 12:50PM

LiMux

Munich as a Smart City – now also as an app

The München SmartCity App is now available for download. On behalf of the City of Munich, the operating company of the official city portal muenchen.de (Portal München Betriebs-GmbH & Co. KG) and the Münchner Verkehrsgesellschaft (MVG) developed the existing muenchen.de apps … Read more

The post Munich as a Smart City – now also as an app first appeared on the Münchner IT-Blog.

17 January, 2018 09:25AM by Stefan Döring

January 16, 2018

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: LXD Weekly Status #30

 

Introduction

The main highlight for this week was the inclusion of the new proxy device in LXD, thanks to the hard work of some University of Texas students!

The rest of the time was spent fixing a number of bugs, working on various bits of kernel work, getting the upcoming clustering work to go through our CI process and preparing for a number of planning meetings that are going on this week.

Upcoming conferences and events

Ongoing projects

The list below is feature or refactoring work which will span several weeks/months and can’t be tied directly to a single Github issue or pull request.

Upstream changes

The items listed below are highlights of the work which happened upstream over the past week and which will be included in the next release.

LXD

LXC

LXCFS

  • Nothing to report

Distribution work

This section is used to track the work done in downstream Linux distributions to ship the latest LXC, LXD and LXCFS as well as work to get various software to work properly inside containers.

Ubuntu

  • Nothing to report (build farm was offline)

Snap

  • Nothing to report (build farm was offline)

16 January, 2018 10:28PM

Cumulus Linux

NetDevOpEd: Abstracting configuration management

I was talking to a banking customer in Northern Europe the other day and they asked me about configuration management. They had many different vendors with different management methods in their infrastructure and wanted to know how they could speed up management.
This specific customer had an outsourced infrastructure. They picked what hardware they wanted to run, but then paid a managed services company to deploy the infrastructure in a colocation facility and perform day-to-day operations.

The issue arose in the speed of deployment. When they launched a new application that required a new service in their data center, the application engineers would need to contact the network team in this bank. The network team would then open up a ticket with the managed services company to provision VLANs and open up ports on their firewalls to allow access to the application. The issue was that this process took up to one week to complete.

This bank contacted us with the hope we could help them unify their management under one framework, so that they could insource the firewall configuration to accelerate their application deployments. They asked me about automated management best practices.
Normally when I have this conversation, we have to address three topics:

  1. Management pane
  2. Multivendor support
  3. Level and scope of automation

The management pane is probably the most important question. How do you want to interact with this multivendor management solution? In the simplest form, I always tend to lead with an open source automation tool and programmatic code that is executed via a CLI. An example of this is using Ansible or Puppet as the automation tool and using playbooks or manifests as the programmatic code. Ansible is an open source and free solution that can be downloaded via APT on Ubuntu machines. Playbooks are YAML format files that provide a series of instructions.
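
As a rough illustration of what such programmatic code looks like, here is a minimal playbook written and run from the shell; the host name, inventory file, and task are invented examples rather than a recommended network configuration:

# write a minimal playbook that ensures a package is installed on a host called leaf01
cat > site.yml <<'EOF'
---
- hosts: leaf01
  become: true
  tasks:
    - name: ensure the NTP package is installed
      apt:
        name: ntp
        state: present
EOF
# run it against an inventory file named 'hosts'
ansible-playbook -i hosts site.yml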

This format requires your operations team to learn a more CI/CD based workflow to operate the network. Specifically, configurations will be stored in a centralized code management repository like Github or Gitlab. And any changes will need their own pull request via a unique branch. This enforces more of a software development style workflow, which has great perks. But if your network operations team isn’t trained in these skills, they’ll have to spend some time learning them.
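
A rough sketch of that workflow, with the repository and branch names invented for illustration:

git clone git@gitlab.example.com:netops/network-configs.git
cd network-configs
git checkout -b add-vlan-200
# edit playbooks or configuration files, then:
git add .
git commit -m "Add VLAN 200 for the new application"
git push -u origin add-vlan-200
# open a merge/pull request from this branch for review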

On the opposite end of the management spectrum from a CLI-based solution is a GUI-based solution. Right now, there are a few players in this market, including Apstra and Ansible Tower, who provide a front-end GUI to manage your infrastructure. This is a more traditional approach, and it segues nicely into the next topic.

The second factor is the multivendor support of a unified management solution. It is much more difficult to create a vendor-agnostic solution when the underlying vendor has a proprietary management API. I always make the case that the native Linux implementation of the Cumulus Linux network operating system makes it perfect for multivendor application management. But the more closed vendors you have in your infrastructure, the more difficult it becomes for a management solution to support all vendors and maintain feature velocity.

The challenge mainly comes from how a management application takes abstracted configuration contexts and translates them into vendor specific language. Building on my previous post about Linux as the language of the data center, using an open solution such as Linux helps manage all these solutions more fluidly.

Most companies address this challenge in one of two ways. One method is making it the vendor or community’s responsibility to code and maintain plugins. The other is bringing it in-house and owning the modules and plugins that are used to manage each unique vendor in your infrastructure.

The last topic that arises from this management question is the level of integration. There is an industry dream where configuring one part of your infrastructure will automatically configure other parts of the infrastructure. An example for this bank customer would be for the firewalls to dynamically provision open ports at the edge of the network based on the application that is launched, removing the whole user interrupted workflow. I’ve found that most customers want this, but are unwilling to be the first customer to deploy something this connected.

Let’s bring this back to my original conversation with this customer. Ultimately, I was able to whittle down their requirements to only wanting to manage their firewall ACL policies in-house so they can react faster to port availability. All their firewalls came from only two vendors, and they were looking for a short-term tactical solution rather than a long-term strategic solution.

Though it’s hard to hear this when talking to a customer, I understand that time and people are fleeting resources, so not every problem can be solved in the most strategic or optimal way. I explained all these important points to the customer and they opted to start by seeing whether Ansible will integrate with their firewall solution, as they are already using Ansible in their application and server space. I told them that’s a good start and wished them the best on their journey.

The important message I wanted to get across was that they needed to consider their workflow and the interactivity with the underlying solutions when picking a management solution. Management is more about workflow than it is about technology. The technology primarily drives the workflow. And, once you start diving deeper into the solution, you’ll realize that open solutions such as Cumulus Linux are much easier to integrate into a unified management solution than closed solutions.

Eager to learn even more about configuration management? Watch our how-to video series about centralized configuration management to get an in-depth lesson on how it can optimize your networking configurations.

The post NetDevOpEd: Abstracting configuration management appeared first on Cumulus Networks Blog.

16 January, 2018 09:55PM by Rama Darbha

hackergotchi for Tails

Tails

Tails report for December, 2017

Releases

Code

Documentation and website

User experience

New download page

We completely redesigned our download page. This work was triggered by the rewrite of Tails Verification (our browser extension) into Web Extensions but the result improves the overall experience.

Tails Verification is also now available for Chrome.

UX work on VeraCrypt in GNOME

We designed the user experience for our work on the integration of VeraCrypt in GNOME.

  1. We analyzed the results of our online survey.

  2. We did some paper prototypes of the interactions and tested them with users.

Read more about our UX work in our dedicated report.

Hot topics on our help desk

  1. Printing to PDF with Tor Browser does not work anymore

Infrastructure

  • We have converted two sources of email sent by cron with Icinga 2 monitoring checks that are easier to track and fine-tune (#11598, #12455).

  • We have fixed a longstanding bug that made the UX of our CI system confusing when deleting a branch in Git (#15069).

  • We have set up a local email server on every Jenkins node that runs automated ISO tests, as a first step towards making our automatic Thunderbird tests more robust (#12277).

  • We have tuned our servers to get a little bit better performance out of our CI system (#15054).

  • We have extended the blueprint about upgrading the hardware for automated tests.

Outreach

Past events

  • 34c3: A fair number of Tails developers and contributors attended the 34th Chaos Communication Congress (34c3) in Leipzig. During this event, we set up a table to meet with interested parties: users, contributors, or anyone interested. We were able to reach both newcomers and long-time users of Tails, answer questions, help, exchange stickers, give away USB keys and t-shirts in exchange for donations, get feedback, or just chat. It was, of course, also a great opportunity to meet people in person, discuss, work, and have fun both within the project and with friend projects.

Upcoming events

Translation

All the website

  • de: 54% (2846) strings translated, 7% strings fuzzy, 48% words translated
  • fa: 39% (2076) strings translated, 10% strings fuzzy, 41% words translated
  • fr: 91% (4786) strings translated, 1% strings fuzzy, 88% words translated
  • it: 36% (1919) strings translated, 5% strings fuzzy, 32% words translated
  • pt: 23% (1216) strings translated, 9% strings fuzzy, 19% words translated

Total original words: 56302

Core pages of the website

  • de: 76% (1454) strings translated, 13% strings fuzzy, 77% words translated
  • fa: 33% (640) strings translated, 11% strings fuzzy, 33% words translated
  • fr: 97% (1848) strings translated, 2% strings fuzzy, 96% words translated
  • it: 71% (1342) strings translated, 13% strings fuzzy, 71% words translated
  • pt: 41% (784) strings translated, 15% strings fuzzy, 41% words translated

Total original words: 17235

Metrics

  • Tails has been started more than 637276 times this month. This makes 20557 boots a day on average.
  • 9215 downloads of the OpenPGP signature of Tails ISO from our website.
  • 66 bug reports were received through WhisperBack.

16 January, 2018 06:43PM

hackergotchi for Ubuntu developers

Ubuntu developers

Kubuntu General News: Plasma 5.12 LTS beta available in PPA for testing on Artful & Bionic

Adventurous users, testers and developers running Artful 17.10 or our development release Bionic 18.04 can now test the beta version of Plasma 5.12 LTS.

An upgrade to the required Frameworks 5.42 is also provided.

As with previous betas, this is experimental and is only suggested for people who are prepared for possible bugs and breakages.

In addition, please be prepared to use ppa-purge to revert changes, should the need arise at some point.
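
For example, reverting this beta PPA (added below) would look something like:

sudo apt install ppa-purge
sudo ppa-purge ppa:kubuntu-ppa/beta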

Read more about the beta release at: https://www.kde.org/announcements/plasma-5.11.95.php

If you want to test then:

sudo add-apt-repository ppa:kubuntu-ppa/beta

and then update packages with

sudo apt update
sudo apt full-upgrade

A Wayland session can be made available at the SDDM login screen by installing the package plasma-workspace-wayland. Please note the information on Wayland sessions in the KDE announcement.
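
For example:

sudo apt install plasma-workspace-wayland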

Note: Due to Launchpad builder downtime and maintenance for the Meltdown/Spectre fixes, limiting us to amd64/i386 architectures, these builds may be superseded by a rebuild once the builders are back to normal availability.

The primary purpose of this PPA is to assist in testing for bugs and the quality of the upcoming final Plasma 5.12 LTS release, due for release by KDE on 6th February.

It is anticipated that Kubuntu Bionic Beaver 18.04 LTS will ship with Plasma 5.12.4, the latest point release of 5.12 LTS available at release date.

Bug reports on the beta itself should be reported to bugs.kde.org.

Packaging bugs can be reported as normal to: Kubuntu PPA bugs: https://bugs.launchpad.net/kubuntu-ppa

Should any issues occur, please provide feedback on our mailing lists [1] or IRC [2]

1. Kubuntu-devel mailing list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
2. Kubuntu IRC channels: #kubuntu & #kubuntu-devel on irc.freenode.net

16 January, 2018 04:21PM

Ubuntu Insights: Monitor your Kubernetes Cluster

This article originally appeared on Kevin Monroe’s blog

Keeping an eye on logs and metrics is a necessary evil for cluster admins. The benefits are clear: metrics help you set reasonable performance goals, while log analysis can uncover issues that impact your workloads. The hard part, however, is getting a slew of applications to work together in a useful monitoring solution.

In this post, I’ll cover monitoring a Kubernetes cluster with Graylog (for logging) and Prometheus (for metrics). Of course, it’s not just a matter of wiring 3 things together. In fact, it’ll end up looking like this:

As you know, Kubernetes isn’t just one thing — it’s a system of masters, workers, networking bits, etc(d). Similarly, Graylog comes with a supporting cast (apache2, mongodb, etc), as does Prometheus (telegraf, grafana, etc). Connecting the dots in a deployment like this may seem daunting, but the right tools can make all the difference.

I’ll walk through this using conjure-up and the Canonical Distribution of Kubernetes (CDK). I find the conjure-up interface really helpful for deploying big software, but I know some of you hate GUIs and TUIs and probably other UIs too. For those folks, I’ll do the same deployment again from the command line.

Before we jump in, note that Graylog and Prometheus will be deployed alongside Kubernetes and not in the cluster itself. Things like the Kubernetes Dashboard and Heapster are excellent sources of information from within a running cluster, but my objective is to provide a mechanism for log/metric analysis whether the cluster is running or not.

The Walk Through

First things first, install conjure-up if you don’t already have it. On Linux, that’s simply:

sudo snap install conjure-up --classic

There’s also a brew package for macOS users:

brew install conjure-up

You’ll need at least version 2.5.2 to take advantage of the recent CDK spell additions, so be sure to sudo snap refresh conjure-up or brew update && brew upgrade conjure-up if you have an older version installed.

Once installed, run it:

conjure-up

You’ll be presented with a list of various spells. Select CDK and press Enter.

At this point, you’ll see additional components that are available for the CDK spell. We’re interested in Graylog and Prometheus, so check both of those and hit Continue.

You’ll be guided through various cloud choices to determine where you want your cluster to live. After that, you’ll see options for post-deployment steps, followed by a review screen that lets you see what is about to be deployed:

In addition to the typical K8s-related applications (etcd, flannel, load-balancer, master, and workers), you’ll see additional applications related to our logging and metric selections.

The Graylog stack includes the following:

  • apache2: reverse proxy for the graylog web interface
  • elasticsearch: document database for the logs
  • filebeat: forwards logs from K8s master/workers to graylog
  • graylog: provides an api for log collection and an interface for analysis
  • mongodb: database for graylog metadata

The Prometheus stack includes the following:

  • grafana: web interface for metric-related dashboards
  • prometheus: metric collector and time series database
  • telegraf: sends host metrics to prometheus

You can fine-tune the deployment from this review screen, but the defaults will suit our needs. Click Deploy all Remaining Applications to get things going.

The deployment will take a few minutes to settle as machines are brought online and applications are configured in your cloud. Once complete, conjure-up will show a summary screen that includes links to various interesting endpoints for you to browse:

Exploring Logs

Now that Graylog has been deployed and configured, let’s take a look at some of the data we’re gathering. By default, the filebeat application will send both syslog and container log events to graylog (that’s /var/log/*.log and /var/log/containers/*.log from the kubernetes master and workers).

Grab the apache2 address and graylog admin password as follows:

juju status --format yaml apache2/0 | grep public-address
    public-address: <your-apache2-ip>
juju run-action --wait graylog/0 show-admin-password
    admin-password: <your-graylog-password>

Browse to http://<your-apache2-ip> and login with admin as the username and <your-graylog-password> as the password. Note: if the interface is not immediately available, please wait as the reverse proxy configuration may take up to 5 minutes to complete.

Once logged in, head to the Sources tab to get an overview of the logs collected from our K8s master and workers:

Drill into those logs by clicking the System / Inputs tab and selecting Show received messages for the filebeat input:

From here, you may want to play around with various filters or set up Graylog dashboards to help identify the events that are most important to you. Check out the Graylog Dashboard docs for details on customizing your view.

Exploring Metrics

Our deployment exposes two types of metrics through our grafana dashboards: system metrics include things like cpu/memory/disk utilization for the K8s master and worker machines, and cluster metrics include container-level data scraped from the K8s cAdvisor endpoints.

Grab the grafana address and admin password as follows:

juju status --format yaml grafana/0 | grep public-address
    public-address: <your-grafana-ip>
juju run-action --wait grafana/0 get-admin-password
    password: <your-grafana-password>

Browse to http://<your-grafana-ip>:3000 and login with admin as the username and <your-grafana-password> as the password. Once logged in, check out the cluster metric dashboard by clicking the Home drop-down box and selecting Kubernetes Metrics (via Prometheus):

We can also check out the system metrics of our K8s host machines by switching the drop-down box to Node Metrics (via Telegraf)

The Other Way

As alluded to in the intro, I prefer the wizard-y feel of conjure-up to guide me through complex software deployments like Kubernetes. Now that we’ve seen the conjure-up way, some of you may want to see a command line approach to achieve the same results. Still others may have deployed CDK previously and want to extend it with the Graylog/Prometheus components described above. Regardless of why you’ve read this far, I’ve got you covered.

The tool that underpins conjure-up is Juju. Everything that the CDK spell did behind the scenes can be done on the command line with Juju. Let’s step through how that works.

Starting From Scratch

If you’re on Linux, install Juju like this:

sudo snap install juju --classic

For macOS, Juju is available from brew:

brew install juju

Now set up a controller for your preferred cloud. You may be prompted for any required cloud credentials:

juju bootstrap

We then need to deploy the base CDK bundle:

juju deploy canonical-kubernetes

Starting From CDK

With our Kubernetes cluster deployed, we need to add all the applications required for Graylog and Prometheus:

## deploy graylog-related applications
juju deploy xenial/apache2
juju deploy xenial/elasticsearch
juju deploy xenial/filebeat
juju deploy xenial/graylog
juju deploy xenial/mongodb
## deploy prometheus-related applications
juju deploy xenial/grafana
juju deploy xenial/prometheus
juju deploy xenial/telegraf

Now that the software is deployed, connect them together so they can communicate:

## relate graylog applications
juju relate apache2:reverseproxy graylog:website
juju relate graylog:elasticsearch elasticsearch:client
juju relate graylog:mongodb mongodb:database
juju relate filebeat:beats-host kubernetes-master:juju-info
juju relate filebeat:beats-host kubernetes-worker:juju-info
## relate prometheus applications
juju relate prometheus:grafana-source grafana:grafana-source
juju relate telegraf:prometheus-client prometheus:target
juju relate kubernetes-master:juju-info telegraf:juju-info
juju relate kubernetes-worker:juju-info telegraf:juju-info

At this point, all the applications can communicate with each other, but we have a bit more configuration to do (e.g., setting up the apache2 reverse proxy, telling prometheus how to scrape k8s, importing our grafana dashboards, etc):

## configure graylog applications
juju config apache2 enable_modules="headers proxy_html proxy_http"
juju config apache2 vhost_http_template="$(base64 <vhost-tmpl>)"
juju config elasticsearch firewall_enabled="false"
juju config filebeat \
  logpath="/var/log/*.log /var/log/containers/*.log"
juju config filebeat logstash_hosts="<graylog-ip>:5044"
juju config graylog elasticsearch_cluster_name="<es-cluster>"
## configure prometheus applications
juju config prometheus scrape-jobs="<scraper-yaml>"
juju run-action --wait grafana/0 import-dashboard \
  dashboard="$(base64 <dashboard-json>)"

Some of the above steps need values specific to your deployment. You can get these in the same way that conjure-up does (a short sketch of plugging them in follows the list):

  • <vhost-tmpl>: fetch our sample template from github
  • <graylog-ip>: juju run --unit graylog/0 'unit-get private-address'
  • <es-cluster>: juju config elasticsearch cluster-name
  • <scraper-yaml>: fetch our sample scraper from github; substitute appropriate values for K8S_PASSWORD and K8S_API_ENDPOINT
  • <dashboard-json>: fetch our host and k8s dashboards from github
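
As mentioned above, here is a rough sketch of feeding two of those values back into the configuration from the shell, using the application names from the commands above:

GRAYLOG_IP=$(juju run --unit graylog/0 'unit-get private-address')
juju config filebeat logstash_hosts="${GRAYLOG_IP}:5044"
ES_CLUSTER=$(juju config elasticsearch cluster-name)
juju config graylog elasticsearch_cluster_name="${ES_CLUSTER}"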

Finally, you’ll want to expose the apache2 and grafana applications to make their web interfaces accessible:

## expose relevant endpoints
juju expose apache2
juju expose grafana

Now that we have everything deployed, related, configured, and exposed, you can login and poke around using the same steps from the Exploring Logs and Exploring Metrics sections above.

The Wrap Up

My goal here was to show you how to deploy a Kubernetes cluster with rich monitoring capabilities for logs and metrics. Whether you prefer a guided approach or command line steps, I hope it’s clear that monitoring complex deployments doesn’t have to be a pipe dream. The trick is to figure out how all the moving parts work, make them work together repeatably, and then break/fix/repeat for a while until everyone can use it.

This is where tools like conjure-up and Juju really shine. Leveraging the expertise of contributors to this ecosystem makes it easy to manage big software. Start with a solid set of apps, customize as needed, and get back to work!

Give these bits a try and let me know how it goes. You can find enthusiasts like me on Freenode IRC in #conjure-up and #juju. Thanks for reading!

 

16 January, 2018 02:56PM

hackergotchi for VyOS

VyOS

VyOS mission statement

Last year the VyOS project turned four. Perhaps it's time to update the mission statement I made back in 2013. This one doesn't exactly contradict it, but it needs to include the most important lesson we've learnt: if there is essentially no influx of committed maintainers, as opposed to occasional contributors, the emphasis should be on finding ways to build and retain the maintainers team and enable them to put more time and effort into the project, and thus the commercial services should be part of the mission statement.

If anyone fears that VyOS will lose anything on the freedom front, please do not worry. There will be a post about a release model change that has to do with reconciling the conflicting goals of people who want the latest features even at the cost of stability and people who want stability, and with creating an incentive for commercial users to give back to the project; none of this comes at the cost of compromising software freedom.

So, here's the updated mission statement:

VyOS is a successor of the original Vyatta

VyOS started as a Vyatta Core fork when it became clear that Vyatta Core was abandoned and no one else was going to continue it as a free, open source software product. The development team and a substantial part of the user community still consist of long-time Vyatta Core users, and the primary reason why VyOS came into existence is to not let the Vyatta Core effort die.

Now that Brocade vRouter is no longer sold and supported, VyOS can also serve as an upgrade path for its former users, though of course we cannot provide a zero-effort upgrade path for them.

VyOS is a free and open source platform

We believe that software freedom is very important and every user should be free to inspect the code, modify it, contribute to it, build customized images.

We will always keep the entire codebase of VyOS under free licenses. We will always keep the entire toolchain for building VyOS images open source and publicly accessible. We will try to make it as easy to use as possible. We will collaborate with vendors to support latest hardware, virtualization and cloud platforms.

VyOS is an affordable and accessible platform for a diverse user base

We know that price is a serious contributing factor that makes people turn to software-based routers. We also know that many VyOS users are networking enthusiasts and small businesses from around the world who cannot afford, or are not willing to use, expensive hardware, whether commodity or specialized.

We will never make VyOS require any specific hardware to be fully functional. We will try to make VyOS usable on a wide range of hardware, from small devices to carrier-grade servers.

We are open for collaboration with hardware/software manufacturers who want to adopt VyOS

Maintainers’ work needs to be compensated

While we hope that VyOS remains a community-driven project, an important point is that community contributors typically only do the work they need themselves, while the maintainers have to maintain even features they do not use and do work that is necessary but offers no short-term reward for anyone, such as code refactoring and test writing. This work needs to be compensated.

Besides, a large number of VyOS users are for-profit companies that use VyOS to reduce their costs. The maintainers are committed to keeping VyOS free as in freedom, but will also commercialize the project to get compensated for their work and to build the team of people required to keep development sustainable.

16 January, 2018 12:08PM by Daniil Baturin

hackergotchi for Purism PureOS

Purism PureOS

Librem 5 Phone Progress Report – The First of Many More to Come!

First, let me apologize for the silence. It was not because we went into hibernation for the winter, but because we were so busy in the initial preparation and planning of a totally new product while simultaneously orienting an entirely new development team. Since we are more settled into place now, we want to change this pattern of silence and provide regular updates. Purism will be giving weekly news update posts regarding the Librem 5 phone project, alternating progress reports on two fronts:

  • from the technology development point of view (the hardware, kernel, OS, etc.);
  • from the design department (UI+UX), and our collaboration with GNOME and KDE.

To kick off this new update process, today’s post will discuss the organizational and technological progress of the Librem 5 project since November 2017.

New Team

Just after our successful pre-order crowdfunding campaign (by the way, thank you to all of the backers!) we started to reach out to people who had applied for jobs related to the Librem 5 project. We had well over 100 applicants who showed great passion for the project and had excellent resumes.

Our applicants came from all over the world, with some of the most diverse backgrounds I have ever seen. Todd Weaver (CEO) and I did more than 80 interviews with applicants over a two-week period. In the end, we had to narrow it down to the 15 people to whom we would make offers; it was with great regret that we had to turn down so many stellar applicants, but we had to make decisions in a timely fashion and unfortunately the budget isn’t unlimited. During the weeks that followed, we negotiated terms with our proposed team members and started to roll new people into the team (with all that involves in an organizational setting). All of the new team members are now on board as of January 2018. They are not yet shown on our team page, but we will add them soon and make an announcement to present all the individuals who have recently joined our team.

As amazing as our community is, we also received applications from individuals who are so enthusiastic about our project that they want to help us as volunteers! We will reach out to them shortly, now that the core team is in place and settled.

There are so many people to thank for the successful jump start of our phone development project! It was amazing for us to see how much energy and interest we were able to spark with our project. We want to give a big thank you to everyone for reaching out to us and we really appreciate every idea and applicant.

CPU / System On Chip

Block Diagram i.MX6

During our early phase we used an NXP i.MX6 SOC (System On Chip) to begin software evaluation, and the results were pretty promising. This was why we listed the i.MX6 in the campaign description. The most important feature of the i.MX6 was that it is one of only a handful of SOCs supported by a highly functional free software GPU driver set, the Etnaviv driver. The Etnaviv driver has been included in the mainline Linux kernel for quite some time and the matching MESA support has evolved nicely. Shortly after our announcement we were contacted by one of the key driving forces behind the Etnaviv development effort, which provides us with valuable insight into this complex topic.

 

Block Diagram i.MX8

Further work with the i.MX6 showed us that it still uses quite a lot of power, so when put under load it would drain a battery quickly as well as warm up the device.

NXP had been talking about a new family of SOCs, the i.MX8, which would feature a new silicon processor and an updated architecture. The release of the i.MX8 had been continuously postponed. Nevertheless, once we realized that the i.MX6 might be too power hungry, the i.MX8 became appealing to us. Hardware prototype operations are always tricky because you have to plan for emerging technologies that you meld with existing parts or materials. Components from original manufacturers sometimes never get released, are discontinued, or their availability from the factory grinds to a halt for reasons beyond our control. This is the point of engaging in prototype development: we suffer the slings and arrows so that we can provide you, the customer, with the best possible end product. We have been in active communication with all of our suppliers, preparing a development plan that is beneficial for all of us.

At CES in Las Vegas, NXP announced the product release dates for their new SOC, the i.MX8M, along with a set of documentation. This is currently the most likely candidate for use in the Librem 5. We are very excited about this timely announcement! At Embedded World in Nürnberg, Germany, NXP will announce details and a roadmap. We will be attending and will discuss the i.MX8M for the Librem 5 with NXP directly.

We have also decided to use AARCH64, a.k.a. “ARM64”, for the phone software builds as soon as we have i.MX8 hardware. A build server for building ARM64 is now in place and the PureOS development team is beginning to work with the Librem 5 development team on the build process. Adding a second architecture for the FSF endorsed PureOS—that will run on Librem laptops as well as the Librem 5 phone—is a major undertaking that will benefit all future Librem 5 phone development.

Prototype Display for Development Boards

Since the i.MX8 is not yet easily available, and in order not to unnecessarily slow development progress, we need similar hardware to start developing software. We switched to an i.MX6 Quad Plus board, which should provide GPU speed similar to what we will find in the i.MX8M. From our contact among the Etnaviv developers we know that they are working hard on i.MX8M support, so we can expect Etnaviv to be working on it within the year.

One of the big tasks of our software and design teams, working with our partners (GNOME, KDE, Matrix, Nextcloud, and Monero), will be to create a proper User Interface (UI) and User Experience (UX) for a phone screen. The challenges are that the screen will be between 5″ and 5.5″ diagonally, with a resolution of up to full HD (1920×1080), and a functional touchscreen! The amazing teams developing GNOME and KDE/Plasma have already done a great job laying the groundwork technologies and setting up this kind of interface to build, develop, and test with. With such great partners and development teams we are confident that we can successfully integrate the freedom, privacy, and security of PureOS with phone hardware to provide a beautiful user experience.

To help with development, we are already in the process of sourcing components to attach 5.5″ full HD displays to our development boards. Our development boards are already booting a mainline kernel into a Wayland UI nicely. We are evaluating similar displays from several manufacturers. We found a supplier for a matching adapter logic board (HDMI to MIPI). Our hardware engineer has already designed an additional adapter for interfacing the display’s touchscreen so that we will realistically have a 5.5″ full HD screen with touch capability on our development boards.

Potential Manufacturing Sites

The overall plan is to have a custom device manufacturing process setup somewhere where we can manufacture our own devices. Since November of last year we have been intensively researching and evaluating potential manufacturing partners. So far we have been in contact with over 80 potential fabricators and are in the process of negotiating capabilities and terms. No decision has been made yet. We have some promising prospects from all over the world, including Asia, Europe and the USA, and we plan on visiting some of these sites in person possibly by the end of February or March.

Cooperative Relationships

Now that the development team is in place we will be reaching out to our partners. Our UI/UX design team along with phone dev team are working with the GNOME UI/UX team to develop a path forward for mobile interfaces. We will also reach out to others who have partnered with us during the campaign, such as the KDE/Plasma team, Matrix, Nextcloud, Monero and many more.

Conclusion

I hope that the fog has been lifted, and we have answered questions you might have or assuaged fears of our silence. We hope that you enjoyed this first dive into our development process. We here at Purism are all very excited about the Librem 5 phone project, as we are passionate about all of our products, with the phone holding a special place in our hearts and those of the Free Software community. That’s what makes us different from companies rolling out “yet another Android phone”, swapping color palettes or removing headphone jacks under the guise of “innovation”…

See you next week for more news on the Librem 5 project!

16 January, 2018 12:00PM by Nicole Faerber

hackergotchi for SolydXK

SolydXK

New SolydXK ISOs released!

It is time again to release the new SolydXK ISOs!

All SolydXK ISOs are fully updated, including the latest kernel release with the Meltdown vulnerability patch. The ISOs come with a system configuration tool called “SolydXK System Settings”. The following is a list of features added since the 201707 releases:

  • Device Driver Manager (DDM) has been integrated.
  • Debian Plymouth Manager has been integrated.
  • Add new partitions to Fstab.
  • Safely remove old kernel packages.

After installation you can choose additional packages from the Welcome Screen, but unfortunately I had to remove the business application LetoDMS (document management system) as an installable option from the Welcome Screen. It installs just fine, but I haven’t been able to get it to work. I’ve removed the package from our own repository, but if you need an open source DMS, I recommend taking a look at SeedDMS.

For our German users I’ve decided to officially maintain the German localized ISOs of SolydX 64-bit and SolydK 64-bit. You can download them from the Localized Editions page.

Not all mirrors have synchronized yet, but I’ll add them as soon as they become available. The Community editions (32-bit) and the Enthusiast’s editions (based on Debian Buster) will follow later, and I will post about them as soon as they are released.

Download pages

SolydX: https://solydxk.com/downloads/solydx/
SolydK: https://solydxk.com/downloads/solydk/
Localized: https://solydxk.com/downloads/localized-editions/

Statistics of 2017

To start the new year I’d also like to share some fun statistics of 2017 with you.

Site visits (Top 10)
United States: 32289
Germany: 14569
Turkey: 10275
Japan: 9566
United Kingdom: 9531
Italy: 9396
Canada: 7928
Spain: 7400
France: 5866
Brazil: 5077

Downloads
SolydX 64: 12927
SolydX 32: 1319
SolydX localized: 366
SolydX EE: 955

SolydK 64: 4931
SolydK 32: 314
SolydK localized: 110
SolydK EE: 370

Forum
Posts: 3515
Unique posters: 193

Well, The Netherlands didn’t even make it into the top 10 of site visits. It seems that I have work to do to let my fellow countrymen know that a Dutch operating system exists 😉

Enjoy these new ISOs and visit us at our forum: https://forums.solydxk.com

16 January, 2018 11:43AM by Schoelje

hackergotchi for Ubuntu developers

Ubuntu developers

Benjamin Mako Hill: OpenSym 2017 Program Postmortem

The International Symposium on Open Collaboration (OpenSym, formerly WikiSym) is the premier academic venue exclusively focused on scholarly research into open collaboration. OpenSym is an ACM conference which means that, like conferences in computer science, it’s really more like a journal that gets published once a year than it is like most social science conferences. The “journal”, in this case, is called the Proceedings of the International Symposium on Open Collaboration and it consists of final copies of papers which are typically also presented at the conference. Like journal articles, papers that are published in the proceedings are not typically published elsewhere.

Along with Claudia Müller-Birn from the Freie Universität Berlin, I served as the Program Chair for OpenSym 2017. For the social scientists reading this, the role of program chair is similar to being an editor for a journal. My job was not to organize keynotes or logistics at the conference—that is the job of the General Chair. Indeed, in the end I didn’t even attend the conference! Along with Claudia, my role as Program Chair was to recruit submissions, recruit reviewers, coordinate and manage the review process, make final decisions on papers, and ensure that everything makes it into the published proceedings in good shape.

In OpenSym 2017, we made several changes to the way the conference has been run:

  • In previous years, OpenSym had tracks on topics like free/open source software, wikis, open innovation, open education, and so on. In 2017, we used a single track model.
  • Because we eliminated tracks, we also eliminated track-level chairs. Instead, we appointed Associate Chairs or ACs.
  • We eliminated page limits and the distinction between full papers and notes.
  • We allowed authors to write rebuttals before reviews were finalized. Reviewers and ACs were allowed to modify their reviews and decisions based on rebuttals.
  • To assist in assigning papers to ACs and reviewers, we made extensive use of bidding. This means we had to recruit the pool of reviewers before papers were submitted.

Although each of these things have been tried in other conferences, or even piloted within individual tracks in OpenSym, all were new to OpenSym in general.

Overview

Statistics
  • Papers submitted: 44
  • Papers accepted: 20
  • Acceptance rate: 45%
  • Posters submitted: 2
  • Posters presented: 9
  • Associate Chairs: 8
  • PC Members: 59
  • Authors: 108
  • Author countries: 20

The program was similar in size to the ones in the last 2-3 years in terms of the number of submissions. OpenSym is a small but mature and stable venue for research on open collaboration. This year was also similar, although slightly more competitive, in terms of the conference acceptance rate (45%—it had been slightly above 50% in previous years).

As in recent years, there were more posters presented than submitted because the PC found that some rejected work, although not ready to be published in the proceedings, was promising and advanced enough to be presented as a poster at the conference. Authors of posters submitted 4-page extended abstracts for their projects which were published in a “Companion to the Proceedings.”

Topics

Over the years, OpenSym has established a clear set of niches. Although we eliminated tracks, we asked authors to choose from a set of categories when submitting their work. These categories are similar to the tracks at OpenSym 2016. Interestingly, a number of authors selected more than one category. This would have led to difficult decisions in the old track-based system.

[Figure: distribution of papers across topics, with a breakdown by accept/poster/reject]

The figure above shows a breakdown of papers in terms of these categories as well as indicators of how many papers in each group were accepted. Papers in multiple categories are counted multiple times. Research on FLOSS and Wikimedia/Wikipedia continues to make up a sizable chunk of OpenSym’s submissions and publications. That said, these now make up a minority of total submissions. Although Wikipedia and Wikimedia research made up a smaller proportion of the submission pool, it was accepted at a higher rate. Also notable is the fact that 2017 saw an uptick in the number of papers on open innovation. I suspect this was due, at least in part, to the involvement of General Chair Lorraine Morgan (she specializes in that area). Somewhat surprisingly to me, we had a number of submissions about Bitcoin and blockchains. These are natural areas of growth for OpenSym but have never been a big part of work in our community in the past.

Scores and Reviews

As in previous years, review was single blind in that reviewers’ identities are hidden but authors’ identities are not. Each paper received between 3 and 4 reviews plus a metareview by the Associate Chair assigned to the paper. All papers received 3 reviews, but ACs were encouraged to call in a 4th reviewer at any point in the process. In addition to the text of the reviews, we used a -3 to +3 scoring system where papers that are seen as borderline will be scored as 0. Reviewers scored papers using full-point increments.

[Figure: scores for each paper submitted to OpenSym 2017 (average and distribution)]

The figure above shows scores for each paper submitted. The vertical grey lines reflect the distribution of scores where the minimum and maximum scores for each paper are the ends of the lines. The colored dots show the arithmetic mean for each score (unweighted by reviewer confidence). Colors show whether the papers were accepted, rejected, or presented as a poster. It’s important to keep in mind that two papers were submitted as posters.

Although Associate Chairs made the final decisions on a case-by-case basis, every paper that had an average score of less than 0 (the horizontal orange line) was rejected or presented as a poster, and most (but not all) papers with positive average scores were accepted. Although a positive average score seemed to be a requirement for publication, negative individual scores weren’t necessarily showstoppers. We accepted 6 papers with at least one negative score. We ultimately accepted 20 papers—45% of those submitted.

Rebuttals

This was the first time that OpenSym used a rebuttal or author response and we are thrilled with how it went. Although they were entirely optional, almost every team of authors used it! Authors of 40 of our 46 submissions (87%!) submitted rebuttals.

  Lower    Unchanged    Higher
  6        24           10

The table above shows how average scores changed after authors submitted rebuttals. The table shows that rebuttals’ effect was typically neutral or positive. Most average scores stayed the same but nearly two times as many average scores increased as decreased in the post-rebuttal period. We hope that this made the process feel more fair for authors and I feel, having read them all, that it led to improvements in the quality of final papers.

Page Lengths

In previous years, OpenSym followed most other venues in computer science by allowing submission of two kinds of papers: full papers which could be up to 10 pages long and short papers which could be up to 4. Following some other conferences, we eliminated page limits altogether. This is the text we used in the OpenSym 2017 CFP:

There is no minimum or maximum length for submitted papers. Rather, reviewers will be instructed to weigh the contribution of a paper relative to its length. Papers should report research thoroughly but succinctly: brevity is a virtue. A typical length of a “long research paper” is 10 pages (formerly the maximum length limit and the limit on OpenSym tracks), but may be shorter if the contribution can be described and supported in fewer pages— shorter, more focused papers (called “short research papers” previously) are encouraged and will be reviewed like any other paper. While we will review papers longer than 10 pages, the contribution must warrant the extra length. Reviewers will be instructed to reject papers whose length is incommensurate with the size of their contribution.

The following graph shows the distribution of page lengths across papers in our final program.

[Figure: histogram of paper lengths for final accepted papers]

In the end 3 of 20 published papers (15%) were over 10 pages. More surprisingly, 11 of the accepted papers (55%) were below the old 10-page limit. Fears that some have expressed that page limits are the only thing keeping OpenSym from publishing enormous rambling manuscripts seem to be unwarranted—at least so far.

Bidding

Although I won’t post any analysis or graphs, bidding worked well. With only two exceptions, every single assigned review went to someone who had bid “yes” or “maybe” for the paper in question, and the vast majority went to people who had bid “yes.” However, this comes with one major proviso: people who did not bid at all were marked as “maybe” for every single paper.

Given a reviewer pool whose diversity of expertise matches that of your pool of authors, bidding works fantastically. But everybody needs to bid. The only problems with reviewers we had were with people who had failed to bid. It might be that reviewers who don’t bid are less committed to the conference, more overextended, more likely to drop things in general, etc. It might also be that reviewers who fail to bid get poor matches, which cause them to become less interested, willing, or able to do their reviews well and on time.

Having used bidding twice as chair or track-chair, my sense is that bidding is a fantastic thing to incorporate into any conference review process. The major limitations are that you need to build a program committee (PC) before the conference (rather than finding the perfect reviewers for specific papers) and you have to find ways to incentivize or communicate the importance of getting your PC members to bid.

Conclusions

The final results were a fantastic collection of published papers. Of course, it couldn’t have been possible without the huge collection of conference chairs, associate chairs, program committee members, external reviewers, and staff supporters.

Although we tried quite a lot of new things, my sense is that nothing we changed made things worse and many changes made things smoother or better. Although I’m not directly involved in organizing OpenSym 2018, I am on the OpenSym steering committee. My sense is that most of the changes we made are going to be carried over this year.

Finally, it’s also been announced that OpenSym 2018 will be in Paris on August 22-24. The call for papers should be out soon and the OpenSym 2018 paper deadline has already been announced as March 15, 2018. You should consider submitting! I hope to see you in Paris!

This Analysis

OpenSym used the gratis version of EasyChair to manage the conference, which doesn’t allow chairs to export data. As a result, data used in this postmortem was scraped from EasyChair using two Python scripts. Numbers and graphs were created using a knitr file that combines R visualization and analysis code with markdown to create the HTML directly from the datasets. I’ve made all the code I used to produce this analysis available in this git repository. I hope someone else finds it useful. Because the data contains sensitive information on the review process, I’m not publishing the data.


This blog post was originally posted on the Community Data Science Collective blog.

16 January, 2018 03:38AM

hackergotchi for Xanadu developers

Xanadu developers

Life after Flash: multimedia for the open Web

This is a translation of the original article published on the Mozilla Hacks blog. Translation by juliabis. Flash brought video, animation, interactive sites and, yes, ads to billions of users for more than a decade, but now … Continue reading

16 January, 2018 02:40AM by sinfallas

How to shrink (compact) the disk of a VirtualBox virtual machine containing Windows

When we use virtual machines with dynamic disks, they inevitably grow, and even if we delete files inside them, that space remains marked as used; when manipulating the disk images, this extra space can cause some … Continue reading
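
For the impatient, a minimal sketch of the host-side compaction step; the VDI file name is an example, and free space inside the Windows guest should be zeroed first (e.g. with Sysinternals sdelete -z):

VBoxManage modifymedium disk "windows10.vdi" --compact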

16 January, 2018 02:36AM by sinfallas

January 15, 2018

hackergotchi for Purism PureOS

Purism PureOS

Purism patches Meltdown and Spectre variant 2, both included in all new Librem laptops

Purism has released a patch for Meltdown (CVE-2017-5754, aka variant 3) as part of PureOS, and includes this latest PureOS image as part of all new Librem laptop shipments. Purism is also providing a microcode update for Intel processors to address Spectre variant 2 (CVE-2017-5715).

Securing an existing PureOS installation

Applying the patch for Meltdown

Running Software Update will upgrade the PureOS Linux kernel package and associated dependencies to the patched 4.14.12 version.
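
If you prefer a terminal to the graphical Software Update tool, the equivalent on PureOS is roughly:

sudo apt update
sudo apt full-upgrade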

This update will require a reboot to apply, after which you should see that you are running 4.14.12. You can use the uname command to check:

user@librem-13v2:~$ uname -a
Linux librem-13v2 4.14.0-3-amd64 #1 SMP Debian 4.14.12-2 (2018-01-06) x86_64 GNU/Linux

Applying the patch for Spectre

Unfortunately, at the moment, patching Spectre variant 2 requires applying a proprietary CPU microcode update from Intel. Since PureOS contains only free software, this means that Purism customers owning Librem laptops will need to add a new repository from Purism (completely independent of the PureOS software repositories) to download and apply this microcode update.

First, create a file called /etc/apt/sources.list.d/purism.list that contains the following lines:

# non-free Purism repo for microcode
deb http://deb.puri.sm/pureos/ green contrib non-free

You will need root permissions to edit this file so if you would like to do this with a graphical editor hit Alt-F2 and type: sudo gedit /etc/apt/sources.list.d/purism.list or if you want to use a text editor in a terminal type sudo nano /etc/apt/sources.list.d/purism.list (no text editor holy wars please!)

Then, add the Purism repository key to your APT keyring:

wget -O - https://deb.puri.sm/pureos/key/purism-nonfre-repo.gpg.key | sudo apt-key add -

Once you have added the key, use apt-key finger to verify that you have the following key, and that its fingerprint matches:

$ apt-key finger
...
pub   rsa4096 2018-01-14 [SC] [expires: 2028-01-12]
      CC2B 0E61 FE48 7DCD 96FA  632C 64CD 8D1B DE94 49B1
uid           [ unknown] Purism non-free package repository (Signing key for the Purism non-free repository for PureOS) <deb@deb.puri.sm>

Then use apt to install the intel-microcode package:

$ sudo apt update
$ sudo apt install intel-microcode

Version 20180108.1 or newer contains the patch for Spectre variant 2. Like with the Meltdown patch, this will require a reboot to take effect.
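
After rebooting, and assuming your kernel exposes the sysfs vulnerabilities interface, you can get a rough view of the mitigation status with:

grep . /sys/devices/system/cpu/vulnerabilities/*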

New installations are secured

Downloadable PureOS installation images have been updated accordingly, and anyone downloading and installing the latest images will be protected against Meltdown. If you reinstall PureOS, you will still need to perform the above steps in the “Applying the Patch for Spectre” section as PureOS doesn’t include the non-free Intel microcode package.

As for Purism customers, all new laptop shipments include Meltdown and Spectre patches, as they will have the latest PureOS image (that includes the Meltdown patch) preloaded and will also have the Spectre variant 2 patch applied. All existing Purism customers will need to follow the above steps to make sure they are protected against both Meltdown and Spectre variant 2.

15 January, 2018 11:00PM by Kyle Rankin

hackergotchi for SparkyLinux

SparkyLinux

Extras for Sparkers

Some of you have asked me a few times about Sparky gadgets.
There is finally good news – we have already received Sparky stickers.

We ordered the first, small batch of very special stickers with the Sparky logo.
They are not made just for fun: they are reflective and waterproof, so they can be used outside, especially at night.

The stickers can be placed on computers, laptops, smartphones, tablets, bikes, pots, and whatever else you’d like.
The sticker size is about 80 x 80 mm.

Anybody who supports the Sparky project with 10 Euros (or more) will get a Sparky sticker as our big thank-you.

Sparky stickers

After sending your donation, send your full name and address via the contact form so it can be checked against the supporters’ data.
There is no problem sending a sticker anywhere – we can ship it wherever you live on Earth.

Depending on the stickers’ popularity, we can think about other gadgets, such as different stickers, fridge magnets, small mascots, etc.

As you have probably seen, over the last two months our server has been (and still is) down once or twice per day.
This happens because the number of Sparkers and Linux users keeps growing, while the income from Google ads and donations does not.

The point is that the VPS has limited resources… and there is only one way to fix the situation…
We can expand the VPS to a bigger one, but it costs 300 PLN (about 75 Euros) more per year, which we can’t spend from our private wallet.

So your help will be priceless – consider a donation now, and get an unusual Sparky sticker!

P.S.
A big thank you to our 3 supporters; your support helped us pay an extra bill for additional services around our VPS.

 

15 January, 2018 10:02PM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

Lubuntu Blog: Lubuntu 17.04 has reached End of Life

The Lubuntu Team announces that as a non-LTS release, 17.04 has a 9-month support cycle and, as such, reached end of life on Saturday, January 13, 2018. Lubuntu will no longer provide bug fixes or security updates for 17.04, and we strongly recommend that you update to 17.10, which continues to be actively supported with […]

15 January, 2018 08:32AM

Daniel Pocock: RHL'18 in Saint-Cergue, Switzerland

RHL'18 was held at the centre du Vallon à St-Cergue, the building in the very center of this photo, at the bottom of the piste:

People from various free software communities in the region attended for a series of presentations, demonstrations, socializing and skiing. This event is a lot of fun and I would highly recommend that people look out for the next edition. (Subscribe to rhl-annonces on lists.swisslinux.org for a reminder email.)

Ham radio demonstration

I previously wrote about building a simple antenna for shortwave (HF) reception with software defined radio. That article includes links to purchase all the necessary parts from various sources. Everything described in that article, together with some USB sticks running Debian Hams Live (bootable ham radio operating system), some rolls of string and my FT-60 transceiver, fits comfortably into an OSCAL tote bag like this:

It is really easy to take this kit to an event anywhere, set it up in 10 minutes and begin exploring the radio spectrum. Whether it is a technical event or a village fair, radio awakens curiosity in people of all ages and provides a starting point for many other discussions about technological freedom, distributing stickers and inviting people to future events. My previous blog contains photos of what is in the bag and a video demo.

Open Agriculture Food Computer discussion

We had a discussion about progress building an Open Agriculture (OpenAg) food computer in Switzerland. The next meeting in Zurich will be held on 30 January 2018, please subscribe to the forum topic to receive further details.

Preparing for Google Summer of Code 2018

In between eating fondue and skiing, I found time to resurrect some of my previous project ideas for Google Summer of Code. Most of them are not specific to Debian, several of them need co-mentors, please contact me if you are interested.

15 January, 2018 08:02AM

Nathan Haines: Introducing the Ubuntu Free Culture Showcase for 18.04

Ubuntu’s changed a lot in the last year, and everything is leading up to a really exciting event: the release of 18.04 LTS! This next version of Ubuntu will once again offer a stable foundation for countless humans who use computers for work, play, art, relaxation, and creation. Among the various visual refreshes of Ubuntu, it’s also time to go to the community and ask for the best wallpapers. And it’s also time to look for a new video and music file that will be waiting for Ubuntu users on the install media’s Examples folder, to reassure them that their video and sound drivers are quite operational.

Long-term support releases like Ubuntu 18.04 LTS are very important, because they are downloaded and installed ten times more often than every single interim release combined. That means that the wallpapers, video, and music that are shipped will be seen ten times more than in other releases. So artists, select your best works. Ubuntu enthusiasts, spread the word about the contest as far and wide as you can. Everyone can help make this next LTS version of Ubuntu an amazing success.

All content must be released under a Creative Commons Attribution-ShareAlike or Creative Commons Attribution license. (The Creative Commons Zero waiver is okay, too!) Each entrant must only submit content they have created themselves, and all submissions must adhere to the Ubuntu Code of Conduct.

The winners will be featured in the Ubuntu 18.04 LTS release this April!

There are a lot of details, so please see the Ubuntu Free Culture Showcase wiki page for details and links to where you can submit your work from now through March 15th. Good luck!

15 January, 2018 08:00AM

January 14, 2018

hackergotchi for Maemo developers

Maemo developers

With sufficient thrust, pigs fly just fine


14 January, 2018 11:34PM by Philip Van Hoof (pvanhoof@gnome.org)

hackergotchi for Ubuntu developers

Ubuntu developers

Simon Raffeiner: What a GNU C Compiler Bug looks like

Back in December a Linux Mint user sent a strange bug report to the darktable mailing list. Apparently the GNU C Compiler (GCC) on his system exited with an unexpected error message, breaking the build process.

14 January, 2018 05:35PM

hackergotchi for OSMC

OSMC

The new OSMC Remote

It's been just over two years since we announced the new OSMC RF Remote, a minimalist way to control all OSMC devices.

It's been very popular, and we've been reluctant to change something that is so well received. However, we have been monitoring your feedback and noticed a couple of areas where we could improve it.

Since early December, we've been shipping our new remote controller as standard.

The new remote controller features:

  • Volume control buttons, which replace the Fast Forward and Rewind buttons
  • Out of the box support for Mac and Windows Kodi installations without any driver installation
  • Improved battery life: users can now expect a standard CR2032 cell to deliver an average of eight months’ battery life

The remote is now in stock and available in our Store, and some of our resellers have started to pick up this new model. It will be available at a reduced price until 31st January.

14 January, 2018 05:26PM by Sam Nazarko

January 13, 2018

hackergotchi for Ubuntu developers

Ubuntu developers

Sebastian Dröge: How to write GStreamer Elements in Rust Part 1: A Video Filter for converting RGB to grayscale

This is part one of a series of blog posts that I’ll write in the next weeks, as previously announced in the GStreamer Rust bindings 0.10.0 release blog post. Since the last series of blog posts about writing GStreamer plugins in Rust ([1] [2] [3] [4]) a lot has changed, and the content of those blog posts has only historical value now, documenting the journey of experimentation towards what exists now.

In this first part we’re going to write a plugin that contains a video filter element. The video filter can convert from RGB to grayscale, either output as 8-bit per pixel grayscale or 32-bit per pixel RGB. In addition there’s a property to invert all grayscale values, or to shift them by up to 255 values. In the end this will allow you to watch Big Buck Bunny, or anything else really that can somehow go into a GStreamer pipeline, in grayscale. Or encode the output to a new video file, send it over the network via WebRTC or something else, or basically do anything you want with it.

Big Buck Bunny – Grayscale

This will show the basics of how to write a GStreamer plugin and element in Rust: the basic setup for registering a type and implementing it in Rust, and how to use the various GStreamer API and APIs from the Rust standard library to do the processing.

The final code for this plugin can be found here, and it is based on the 0.1 version of the gst-plugin crate and the 0.10 version of the gstreamer crate. At least Rust 1.20 is required for all this. I’m also assuming that you have GStreamer (at least version 1.8) installed for your platform, see e.g. the GStreamer bindings installation instructions.
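
If you are unsure whether your environment meets these requirements, one quick way to check the installed versions (assuming pkg-config and the GStreamer development files are present) is:

rustc --version
pkg-config --modversion gstreamer-1.0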

Table of Contents

  1. Project Structure
  2. Plugin Initialization
  3. Type Registration
  4. Type Class & Instance Initialization
  5. Caps & Pad Templates
  6. Caps Handling Part 1
  7. Caps Handling Part 2
  8. Conversion of BGRx Video Frames to Grayscale
  9. Testing the new element
  10. Properties
  11. What next?

Project Structure

We’ll create a new cargo project with cargo init --lib --name gst-plugin-tutorial. This will create a basically empty Cargo.toml and a corresponding src/lib.rs. We will use this structure: lib.rs will contain all the plugin-related code, separate modules will contain any GStreamer plugins that are added.

The empty Cargo.toml has to be updated to list all the dependencies that we need, and to define that the crate should result in a cdylib, i.e. a C library that does not contain any Rust-specific metadata. The final Cargo.toml looks as follows

[package]
name = "gst-plugin-tutorial"
version = "0.1.0"
authors = ["Sebastian Dröge <sebastian@centricular.com>"]
repository = "https://github.com/sdroege/gst-plugin-rs"
license = "MIT/Apache-2.0"

[dependencies]
glib = "0.4"
gstreamer = "0.10"
gstreamer-base = "0.10"
gstreamer-video = "0.10"
gst-plugin = "0.1"

[lib]
name = "gstrstutorial"
crate-type = ["cdylib"]
path = "src/lib.rs"

We’re depending on the gst-plugin crate, which provides all the basic infrastructure for implementing GStreamer plugins and elements. In addition we depend on the gstreamer, gstreamer-base and gstreamer-video crates for various GStreamer API that we’re going to use later, and the glib crate to be able to use some GLib API that we’ll need. GStreamer is building upon GLib, and this leaks through in various places.

With the basic project structure set up, we should be able to compile the project with cargo build now, which will download and build all dependencies and then create a file called target/debug/libgstrstutorial.so (or .dll on Windows, .dylib on macOS). This is going to be our GStreamer plugin.

To allow GStreamer to find our new plugin and make it available in every GStreamer-based application, we could install it into the system- or user-wide GStreamer plugin path or simply point the GST_PLUGIN_PATH environment variable to the directory containing it:

export GST_PLUGIN_PATH=`pwd`/target/debug

If you now run the gst-inspect-1.0 tool on the libgstrstutorial.so, it will not yet print all the information it can extract from the plugin; for now it will just complain that this is not a valid GStreamer plugin. Which is true, as we didn’t write any code for it yet.

Plugin Initialization

Let’s start editing src/lib.rs to make this an actual GStreamer plugin. First of all, we need to add various extern crate directives to be able to use our dependencies and also mark some of them #[macro_use] because we’re going to use macros defined in some of them. This looks like the following

extern crate glib;
#[macro_use]
extern crate gstreamer as gst;
extern crate gstreamer_base as gst_base;
extern crate gstreamer_video as gst_video;
#[macro_use]
extern crate gst_plugin;

Next we make use of the plugin_define! macro from the gst-plugin crate to set up the static metadata of the plugin (and make the shared library recognizable by GStreamer as a valid plugin), and to define the name of our entry point function (plugin_init) where we will register all the elements that this plugin provides.

plugin_define!(
    b"rstutorial\0",
    b"Rust Tutorial Plugin\0",
    plugin_init,
    b"1.0\0",
    b"MIT/X11\0",
    b"rstutorial\0",
    b"rstutorial\0",
    b"https://github.com/sdroege/gst-plugin-rs\0",
    b"2017-12-30\0"
);

This is unfortunately not very beautiful yet due to a) GStreamer requiring this information to be statically available in the shared library, not returned by a function (starting with GStreamer 1.14 it can be a function), and b) Rust not allowing raw strings (b"blabla") to be concatenated with a macro like the std::concat macro (so that the b and \0 parts could be hidden away). Expect this to become better in the future.

The static plugin metadata that we provide here is

  1. name of the plugin
  2. short description for the plugin
  3. name of the plugin entry point function
  4. version number of the plugin
  5. license of the plugin (only a fixed set of licenses is allowed here, see)
  6. source package name
  7. binary package name (only really makes sense for e.g. Linux distributions)
  8. origin of the plugin
  9. release date of this version

In addition we’re defining an empty plugin entry point function that just returns true

fn plugin_init(plugin: &gst::Plugin) -> bool {
    true
}

With all that given, gst-inspect-1.0 should print exactly this information when running on the libgstrstutorial.so file (or .dll on Windows, or .dylib on macOS)

gst-inspect-1.0 target/debug/libgstrstutorial.so

Type Registration

As a next step, we’re going to add another module rgb2gray to our project, and call a function called register from our plugin_init function.

mod rgb2gray;

fn plugin_init(plugin: &gst::Plugin) -> bool {
    rgb2gray::register(plugin);
    true
}

With that our src/lib.rs is complete, and all following code is only in src/rgb2gray.rs. At the top of the new file we first need to add various use-directives to import various types and functions we’re going to use into the current module’s scope

use glib;
use gst;
use gst::prelude::*;
use gst_video;

use gst_plugin::properties::*;
use gst_plugin::object::*;
use gst_plugin::element::*;
use gst_plugin::base_transform::*;

use std::i32;
use std::sync::Mutex;

GStreamer is based on the GLib object system (GObject). C (just like Rust) does not have built-in support for object-oriented programming, inheritance, virtual methods and related concepts, and GObject makes these features available in C as a library. Without language support this is a quite verbose endeavour in C, and the gst-plugin crate tries to expose all this in a (as much as possible) Rust-style API while hiding all the details that do not really matter.

So, as a next step we need to register a new type for our RGB to Grayscale converter GStreamer element with the GObject type system, and then register that type with GStreamer to be able to create new instances of it. We do this with the following code

struct Rgb2GrayStatic;

impl ImplTypeStatic<BaseTransform> for Rgb2GrayStatic {
    fn get_name(&self) -> &str {
        "Rgb2Gray"
    }

    fn new(&self, element: &BaseTransform) -> Box<BaseTransformImpl<BaseTransform>> {
        Rgb2Gray::new(element)
    }

    fn class_init(&self, klass: &mut BaseTransformClass) {
        Rgb2Gray::class_init(klass);
    }
}

pub fn register(plugin: &gst::Plugin) {
    let type_ = register_type(Rgb2GrayStatic);
    gst::Element::register(plugin, "rsrgb2gray", 0, type_);
}

This defines a zero-sized struct Rgb2GrayStatic that is used to implement the ImplTypeStatic<BaseTransform> trait on it for providing static information about the type to the type system. In our case this is a zero-sized struct, but in other cases this struct might contain actual data (for example if the same element code is used for multiple elements, e.g. when wrapping a generic codec API that provides support for multiple decoders and then wanting to register one element per decoder). By implementing ImplTypeStatic<BaseTransform> we also declare that our element is going to be based on the GStreamer BaseTransform base class, which provides a relatively simple API for 1:1 transformation elements like ours is going to be.

ImplTypeStatic provides functions that return a name for the type, and functions for initializing/returning a new instance of our element (new) and for initializing the class metadata (class_init, more on that later). We simply let those functions proxy to associated functions on the Rgb2Gray struct that we’re going to define at a later time.

In addition, we also define a register function (the one that is already called from our plugin_init function) and in there first register the Rgb2GrayStatic type metadata with the GObject type system to retrieve a type ID, and then register this type ID to GStreamer to be able to create new instances of it with the name “rsrgb2gray” (e.g. when using gst::ElementFactory::make).
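
As a rough illustration of that last point (this is application-side code, not part of the plugin, and assumes the gstreamer crate API of that era, where ElementFactory::make takes a factory name plus an optional instance name), creating an instance of the new element once the plugin is on GST_PLUGIN_PATH could look like this:

extern crate gstreamer as gst;

fn main() {
    gst::init().unwrap();

    // "rsrgb2gray" is the name registered above via gst::Element::register()
    let filter = gst::ElementFactory::make("rsrgb2gray", None)
        .expect("rsrgb2gray not found; is GST_PLUGIN_PATH set?");

    // `filter` can now be added to a gst::Pipeline and linked like any other element
    let _ = filter;
}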

Type Class & Instance Initialization

As a next step we declare the Rgb2Gray struct and implement the new and class_init functions on it. In the first version, this struct is almost empty but we will later use it to store all state of our element.

struct Rgb2Gray {
    cat: gst::DebugCategory,
}

impl Rgb2Gray {
    fn new(_transform: &BaseTransform) -> Box<BaseTransformImpl<BaseTransform>> {
        Box::new(Self {
            cat: gst::DebugCategory::new(
                "rsrgb2gray",
                gst::DebugColorFlags::empty(),
                "Rust RGB-GRAY converter",
            ),
        })
    }

    fn class_init(klass: &mut BaseTransformClass) {
        klass.set_metadata(
            "RGB-GRAY Converter",
            "Filter/Effect/Converter/Video",
            "Converts RGB to GRAY or grayscale RGB",
            "Sebastian Dröge <sebastian@centricular.com>",
        );

        klass.configure(BaseTransformMode::NeverInPlace, false, false);
    }
}

In the new function we return a boxed (i.e. heap-allocated) version of our struct, containing a newly created GStreamer debug category of name “rsrgb2gray”. We’re going to use this debug category later for making use of GStreamer’s debug logging system for logging the state and changes of our element.

In the class_init function we, again, set up some metadata for our new element. In this case these are a description, a classification of our element, a longer description and the author. The metadata can later be retrieved and made use of via the Registry and PluginFeature/ElementFactory API. We also configure the BaseTransform class and define that we will never operate in-place (producing our output in the input buffer), and that we don’t want to work in passthrough mode if the input/output formats are the same.

Additionally we need to implement various traits on the Rgb2Gray struct, which will later be used to override virtual methods of the various parent classes of our element. For now we can keep the trait implementations empty. There is one trait implementation required per parent class.

impl ObjectImpl<BaseTransform> for Rgb2Gray {}
impl ElementImpl<BaseTransform> for Rgb2Gray {}
impl BaseTransformImpl<BaseTransform> for Rgb2Gray {}

With all this defined, gst-inspect-1.0 should be able to show some more information about our element already but will still complain that it’s not complete yet.

Caps & Pad Templates

Data flow of GStreamer elements is happening via pads, which are the input(s) and output(s) (or sinks and sources) of an element. Via the pads, buffers containing actual media data, events or queries are transferred. An element can have any number of sink and source pads, but our new element will only have one of each.

To be able to declare what kinds of pads an element can create (they are not necessarily all static but could be created at runtime by the element or the application), it is necessary to install so-called pad templates during the class initialization. These pad templates contain the name (or rather “name template”, it could be something like src_%u for e.g. pad templates that declare multiple possible pads), the direction of the pad (sink or source), the availability of the pad (is it always there, sometimes added/removed by the element or to be requested by the application) and all the possible media types (called caps) that the pad can consume (sink pads) or produce (src pads).

In our case we only have always pads, one sink pad called “sink”, on which we can only accept RGB (BGRx to be exact) data with any width/height/framerate and one source pad called “src”, on which we will produce either RGB (BGRx) data or GRAY8 (8-bit grayscale) data. We do this by adding the following code to the class_init function.

let caps = gst::Caps::new_simple(
            "video/x-raw",
            &[
                (
                    "format",
                    &gst::List::new(&[
                        &gst_video::VideoFormat::Bgrx.to_string(),
                        &gst_video::VideoFormat::Gray8.to_string(),
                    ]),
                ),
                ("width", &gst::IntRange::<i32>::new(0, i32::MAX)),
                ("height", &gst::IntRange::<i32>::new(0, i32::MAX)),
                (
                    "framerate",
                    &gst::FractionRange::new(
                        gst::Fraction::new(0, 1),
                        gst::Fraction::new(i32::MAX, 1),
                    ),
                ),
            ],
        );

        let src_pad_template = gst::PadTemplate::new(
            "src",
            gst::PadDirection::Src,
            gst::PadPresence::Always,
            &caps,
        );
        klass.add_pad_template(src_pad_template);

        let caps = gst::Caps::new_simple(
            "video/x-raw",
            &[
                ("format", &gst_video::VideoFormat::Bgrx.to_string()),
                ("width", &gst::IntRange::<i32>::new(0, i32::MAX)),
                ("height", &gst::IntRange::<i32>::new(0, i32::MAX)),
                (
                    "framerate",
                    &gst::FractionRange::new(
                        gst::Fraction::new(0, 1),
                        gst::Fraction::new(i32::MAX, 1),
                    ),
                ),
            ],
        );

        let sink_pad_template = gst::PadTemplate::new(
            "sink",
            gst::PadDirection::Sink,
            gst::PadPresence::Always,
            &caps,
        );
        klass.add_pad_template(sink_pad_template);

The names “src” and “sink” are pre-defined by the BaseTransform class and this base-class will also create the actual pads with those names from the templates for us whenever a new element instance is created. Otherwise we would have to do that in our new function but here this is not needed.

If you now run gst-inspect-1.0 on the rsrgb2gray element, these pad templates with their caps should also show up.

Caps Handling Part 1

As a next step we will add caps handling to our new element. This involves overriding 4 virtual methods from the BaseTransformImpl trait, and actually storing the configured input and output caps inside our element struct. Let’s start with the latter

struct State {
    in_info: gst_video::VideoInfo,
    out_info: gst_video::VideoInfo,
}

struct Rgb2Gray {
    cat: gst::DebugCategory,
    state: Mutex<Option<State>>,
}

impl Rgb2Gray {
    fn new(_transform: &BaseTransform) -> Box<BaseTransformImpl<BaseTransform>> {
        Box::new(Self {
            cat: gst::DebugCategory::new(
                "rsrgb2gray",
                gst::DebugColorFlags::empty(),
                "Rust RGB-GRAY converter",
            ),
            state: Mutex::new(None),
        })
    }
}

We define a new struct State that contains the input and output caps, stored in a VideoInfo. VideoInfo is a struct that contains various fields like width/height, framerate and the video format, and allows us to conveniently work with the properties of (raw) video formats. We have to store it inside a Mutex in our Rgb2Gray struct as this can (in theory) be accessed from multiple threads at the same time.

Whenever input/output caps are configured on our element, the set_caps virtual method of BaseTransform is called with both caps (i.e. in the very beginning before the data flow and whenever it changes), and all following video frames that pass through our element should be according to those caps. Once the element is shut down, the stop virtual method is called and it would make sense to release the State as it only contains stream-specific information. We’re doing this by adding the following to the BaseTransformImpl trait implementation

impl BaseTransformImpl<BaseTransform> for Rgb2Gray {
    fn set_caps(&self, element: &BaseTransform, incaps: &gst::Caps, outcaps: &gst::Caps) -> bool {
        let in_info = match gst_video::VideoInfo::from_caps(incaps) {
            None => return false,
            Some(info) => info,
        };
        let out_info = match gst_video::VideoInfo::from_caps(outcaps) {
            None => return false,
            Some(info) => info,
        };

        gst_debug!(
            self.cat,
            obj: element,
            "Configured for caps {} to {}",
            incaps,
            outcaps
        );

        *self.state.lock().unwrap() = Some(State {
            in_info: in_info,
            out_info: out_info,
        });

        true
    }

    fn stop(&self, element: &BaseTransform) -> bool {
        // Drop state
        let _ = self.state.lock().unwrap().take();

        gst_info!(self.cat, obj: element, "Stopped");

        true
    }
}

This code should be relatively self-explanatory. In set_caps we’re parsing the two caps into a VideoInfo and then store this in our State, in stop we drop the State and replace it with None. In addition we make use of our debug category here and use the gst_info! and gst_debug! macros to output the current caps configuration to the GStreamer debug logging system. This information can later be useful for debugging any problems once the element is running.

Next we have to provide information to the BaseTransform base class about the size in bytes of a video frame with specific caps. This is needed so that the base class can allocate an appropriately sized output buffer for us, that we can then fill later. This is done with the get_unit_size virtual method, which is required to return the size of one processing unit in specific caps. In our case, one processing unit is one video frame. In the case of raw audio it would be the size of one sample multiplied by the number of channels.

impl BaseTransformImpl<BaseTransform> for Rgb2Gray {
    fn get_unit_size(&self, _element: &BaseTransform, caps: &gst::Caps) -> Option<usize> {
        gst_video::VideoInfo::from_caps(caps).map(|info| info.size())
    }
}

We simply make use of the VideoInfo API here again, which conveniently gives us the size of one video frame already.

Instead of get_unit_size it would also be possible to implement the transform_size virtual method, which gets passed one size and the corresponding caps, plus another caps, and is supposed to return the size converted to the second caps. Depending on how your element works, one or the other can be easier to implement.

Caps Handling Part 2

We’re not done yet with caps handling though. As a very last step it is required that we implement a function that is converting caps into the corresponding caps in the other direction. For example, if we receive BGRx caps with some width/height on the sinkpad, we are supposed to convert this into new caps with the same width/height but BGRx or GRAY8. That is, we can convert BGRx to BGRx or GRAY8. Similarly, if the element downstream of ours can accept GRAY8 with a specific width/height from our source pad, we have to convert this to BGRx with that very same width/height.

This has to be implemented in the transform_caps virtual method, and looks as following

impl BaseTransformImpl<BaseTransform> for Rgb2Gray {
    fn transform_caps(
        &self,
        element: &BaseTransform,
        direction: gst::PadDirection,
        caps: gst::Caps,
        filter: Option<&gst::Caps>,
    ) -> gst::Caps {
        let other_caps = if direction == gst::PadDirection::Src {
            let mut caps = caps.clone();

            for s in caps.make_mut().iter_mut() {
                s.set("format", &gst_video::VideoFormat::Bgrx.to_string());
            }

            caps
        } else {
            let mut gray_caps = gst::Caps::new_empty();

            {
                let gray_caps = gray_caps.get_mut().unwrap();

                for s in caps.iter() {
                    let mut s_gray = s.to_owned();
                    s_gray.set("format", &gst_video::VideoFormat::Gray8.to_string());
                    gray_caps.append_structure(s_gray);
                }
                gray_caps.append(caps.clone());
            }

            gray_caps
        };

        gst_debug!(
            self.cat,
            obj: element,
            "Transformed caps from {} to {} in direction {:?}",
            caps,
            other_caps,
            direction
        );

        if let Some(filter) = filter {
            filter.intersect_with_mode(&other_caps, gst::CapsIntersectMode::First)
        } else {
            other_caps
        }
    }
}

This caps conversion happens in 3 steps. First we check if we got caps for the source pad. In that case, the caps on the other pad (the sink pad) are going to be exactly the same caps, but no matter whether the caps contained BGRx or GRAY8 they must become BGRx, as that’s the only format that our sink pad can accept. We do this by creating a clone of the input caps, then making sure that those caps are actually writable (i.e. we have the only reference to them, or a copy is going to be created) and then iterating over all the structures inside the caps and setting the “format” field to BGRx. After this, all structures in the new caps will have the format field set to BGRx.

Similarly, if we get caps for the sink pad and are supposed to convert it to caps for the source pad, we create new caps and in there append a copy of each structure of the input caps (which are BGRx) with the format field set to GRAY8. In the end we append the original caps, giving us first all caps as GRAY8 and then the same caps as BGRx. With this ordering we signal to GStreamer that we would prefer to output GRAY8 over BGRx.

In the end the caps we created for the other pad are filtered against optional filter caps to reduce the potential size of the caps. This is done by intersecting the caps with that filter, while keeping the order (and thus preferences) of the filter caps (gst::CapsIntersectMode::First).

Conversion of BGRx Video Frames to Grayscale

Now that all the caps handling is implemented, we can finally get to the implementation of the actual video frame conversion. For this we start with defining a helper function bgrx_to_gray that converts one BGRx pixel to a grayscale value. The BGRx pixel is passed as a &[u8] slice with 4 elements and the function returns another u8 for the grayscale value.

impl Rgb2Gray {
    #[inline]
    fn bgrx_to_gray(in_p: &[u8]) -> u8 {
        // See https://en.wikipedia.org/wiki/YUV#SDTV_with_BT.601
        const R_Y: u32 = 19595; // 0.299 * 65536
        const G_Y: u32 = 38470; // 0.587 * 65536
        const B_Y: u32 = 7471; // 0.114 * 65536

        assert_eq!(in_p.len(), 4);

        let b = u32::from(in_p[0]);
        let g = u32::from(in_p[1]);
        let r = u32::from(in_p[2]);

        let gray = ((r * R_Y) + (g * G_Y) + (b * B_Y)) / 65536;
        (gray as u8)
    }
}

This function works by extracting the blue, green and red components from each pixel (remember: we work on BGRx, so the first value will be blue, the second green, the third red and the fourth unused), extending them from 8 to 32 bits for a wider value range and then converting them to the Y component of the YUV colorspace (basically what your grandparents’ black & white TV would’ve displayed). The coefficients come from the Wikipedia page about YUV and are normalized to unsigned 16 bit integers so we can keep some accuracy, don’t have to work with floating point arithmetic and stay inside the range of 32 bit integers for all our calculations. As you can see, the green component is weighted more than the others, which comes from our eyes being more sensitive to green than to other colors.

Note: This is only doing the actual conversion from linear RGB to grayscale (and in BT.601 colorspace). To do this conversion correctly you need to know your colorspaces and use the correct coefficients for conversion, and also do gamma correction. See this about why it is important.

Afterwards we have to actually call this function on every pixel. For this the transform virtual method is implemented, which gets an input and an output buffer passed, and we’re supposed to read the input buffer and fill the output buffer. The implementation looks as follows, and is going to be our biggest function for this element

impl BaseTransformImpl<BaseTransform> for Rgb2Gray {
    fn transform(
        &self,
        element: &BaseTransform,
        inbuf: &gst::Buffer,
        outbuf: &mut gst::BufferRef,
    ) -> gst::FlowReturn {
        let mut state_guard = self.state.lock().unwrap();
        let state = match *state_guard {
            None => {
                gst_element_error!(element, gst::CoreError::Negotiation, ["Have no state yet"]);
                return gst::FlowReturn::NotNegotiated;
            }
            Some(ref mut state) => state,
        };

        let in_frame = match gst_video::VideoFrameRef::from_buffer_ref_readable(
            inbuf.as_ref(),
            &state.in_info,
        ) {
            None => {
                gst_element_error!(
                    element,
                    gst::CoreError::Failed,
                    ["Failed to map input buffer readable"]
                );
                return gst::FlowReturn::Error;
            }
            Some(in_frame) => in_frame,
        };

        let mut out_frame =
            match gst_video::VideoFrameRef::from_buffer_ref_writable(outbuf, &state.out_info) {
                None => {
                    gst_element_error!(
                        element,
                        gst::CoreError::Failed,
                        ["Failed to map output buffer writable"]
                    );
                    return gst::FlowReturn::Error;
                }
                Some(out_frame) => out_frame,
            };

        let width = in_frame.width() as usize;
        let in_stride = in_frame.plane_stride()[0] as usize;
        let in_data = in_frame.plane_data(0).unwrap();
        let out_stride = out_frame.plane_stride()[0] as usize;
        let out_format = out_frame.format();
        let out_data = out_frame.plane_data_mut(0).unwrap();

        if out_format == gst_video::VideoFormat::Bgrx {
            assert_eq!(in_data.len() % 4, 0);
            assert_eq!(out_data.len() % 4, 0);
            assert_eq!(out_data.len() / out_stride, in_data.len() / in_stride);

            let in_line_bytes = width * 4;
            let out_line_bytes = width * 4;

            assert!(in_line_bytes <= in_stride);
            assert!(out_line_bytes <= out_stride);

            for (in_line, out_line) in in_data
                .chunks(in_stride)
                .zip(out_data.chunks_mut(out_stride))
            {
                for (in_p, out_p) in in_line[..in_line_bytes]
                    .chunks(4)
                    .zip(out_line[..out_line_bytes].chunks_mut(4))
                {
                    assert_eq!(out_p.len(), 4);

                    let gray = Rgb2Gray::bgrx_to_gray(in_p);
                    out_p[0] = gray;
                    out_p[1] = gray;
                    out_p[2] = gray;
                }
            }
        } else if out_format == gst_video::VideoFormat::Gray8 {
            assert_eq!(in_data.len() % 4, 0);
            assert_eq!(out_data.len() / out_stride, in_data.len() / in_stride);

            let in_line_bytes = width * 4;
            let out_line_bytes = width;

            assert!(in_line_bytes <= in_stride);
            assert!(out_line_bytes <= out_stride);

            for (in_line, out_line) in in_data
                .chunks(in_stride)
                .zip(out_data.chunks_mut(out_stride))
            {
                for (in_p, out_p) in in_line[..in_line_bytes]
                    .chunks(4)
                    .zip(out_line[..out_line_bytes].iter_mut())
                {
                    let gray = Rgb2Gray::bgrx_to_gray(in_p);
                    *out_p = gray;
                }
            }
        } else {
            unimplemented!();
        }

        gst::FlowReturn::Ok
    }
}

What happens here is that we first of all lock our state (the input/output VideoInfo) and error out if we don’t have any yet (which can’t really happen unless other elements have a bug, but better safe than sorry). After that we map the input buffer readable and the output buffer writable with the VideoFrameRef API. By mapping the buffers we get access to the underlying bytes of them, and the mapping operation could for example make GPU memory available or just do nothing and give us access to a normally allocated memory area. We have access to the bytes of the buffer until the VideoFrameRef goes out of scope.

Instead of VideoFrameRef we could’ve also used the gst::Buffer::map_readable() and gst::Buffer::map_writable() API, but unlike those, the VideoFrameRef API also extracts various metadata from the raw video buffers and makes them available. For example we can directly access the different planes as slices without having to calculate the offsets ourselves, and we get direct access to the width and height of the video frame.

After mapping the buffers, we store various information we’re going to need later in local variables to save some typing. This is the width (same for input and output as we never changed the width in transform_caps), the input and output (row) stride (the number of bytes per row/line, which possibly includes some padding at the end of each line for alignment reasons), the output format (which can be BGRx or GRAY8 because of how we implemented transform_caps) and the pointers to the first plane of the input and output (which in this case also is the only plane, BGRx and GRAY8 both have only a single plane containing all the RGB/gray components).

Then based on whether the output is BGRx or GRAY8, we iterate over all pixels. The code is basically the same in both cases, so I’m only going to explain the case where BGRx is output.

We start by iterating over each line of the input and output, and do so by using the chunks iterator to give us chunks of as many bytes as the (row-) stride of the video frame is, do the same for the other frame and then zip both iterators together. This means that on each iteration we get exactly one line as a slice from each of the frames and can then start accessing the actual pixels in each line.

To access the individual pixels in each line, we again use the chunks iterator the same way, but this time to always give us chunks of 4 bytes from each line. As BGRx uses 4 bytes for each pixel, this gives us exactly one pixel. Instead of iterating over the whole line, we only take the actual sub-slice that contains the pixels, not the whole line with stride number of bytes containing potential padding at the end. Now for each of these pixels we call our previously defined bgrx_to_gray function and then fill the B, G and R components of the output buffer with that value to get grayscale output. And that’s all.

Using Rust high-level abstractions like the chunks iterators and bounds-checking slice accesses might seem like it’s going to cause quite some performance penalty, but if you look at the generated assembly most of the bounds checks are completely optimized away and the resulting assembly code is close to what one would’ve written manually (especially when using the newly-added exact_chunks iterators). Here you’re getting safe and high-level looking code with low-level performance!

You might’ve also noticed the various assertions in the processing function. These are there to give further hints to the compiler about properties of the code, and thus potentially being able to optimize the code better and moving e.g. bounds checks out of the inner loop and just having the assertion outside the loop check for the same. In Rust adding assertions can often improve performance by allowing further optimizations to be applied, but in the end always check the resulting assembly to see if what you did made any difference.

Testing the new element

Now that we have implemented almost all the functionality of our new element, we can run it on actual video data. This can be done with the gst-launch-1.0 tool, or any application using GStreamer that allows us to insert our new element somewhere in the video part of the pipeline. With gst-launch-1.0 you could run, for example, the following pipelines

# Run on a test pattern
gst-launch-1.0 videotestsrc ! rsrgb2gray ! videoconvert ! autovideosink

# Run on some video file, also playing the audio
gst-launch-1.0 playbin uri=file:///path/to/some/file video-filter=rsrgb2gray

Note that you will likely want to compile with cargo build --release and add the target/release directory to GST_PLUGIN_PATH instead. The debug build might be too slow, and generally the release builds are multiple orders of magnitude (!) faster.

Properties

The only features missing now are the properties I mentioned in the opening paragraph: one boolean property to invert the grayscale value and one integer property to shift the value by up to 255. Implementing this on top of the previous code is not a lot of work. Let’s start with defining a struct for holding the property values and defining the property metadata.

const DEFAULT_INVERT: bool = false;
const DEFAULT_SHIFT: u32 = 0;

#[derive(Debug, Clone, Copy)]
struct Settings {
    invert: bool,
    shift: u32,
}

impl Default for Settings {
    fn default() -> Self {
        Settings {
            invert: DEFAULT_INVERT,
            shift: DEFAULT_SHIFT,
        }
    }
}

static PROPERTIES: [Property; 2] = [
    Property::Boolean(
        "invert",
        "Invert",
        "Invert grayscale output",
        DEFAULT_INVERT,
        PropertyMutability::ReadWrite,
    ),
    Property::UInt(
        "shift",
        "Shift",
        "Shift grayscale output (wrapping around)",
        (0, 255),
        DEFAULT_SHIFT,
        PropertyMutability::ReadWrite,
    ),
];

struct Rgb2Gray {
    cat: gst::DebugCategory,
    settings: Mutex<Settings>,
    state: Mutex<Option<State>>,
}

impl Rgb2Gray {
    fn new(_transform: &BaseTransform) -> Box<BaseTransformImpl<BaseTransform>> {
        Box::new(Self {
            cat: gst::DebugCategory::new(
                "rsrgb2gray",
                gst::DebugColorFlags::empty(),
                "Rust RGB-GRAY converter",
            ),
            settings: Mutex::new(Default::default()),
            state: Mutex::new(None),
        })
    }
}

This should all be rather straightforward: we define a Settings struct that stores the two values, implement the Default trait for it, then define a two-element array with property metadata (names, description, ranges, default value, writability), and then store the default value of our Settings struct inside another Mutex inside the element struct.

In the next step we have to make use of these: we need to tell the GObject type system about the properties, and we need to implement functions that are called whenever a property value is set or retrieved.

impl Rgb2Gray {
    fn class_init(klass: &mut BaseTransformClass) {
        [...]
        klass.install_properties(&PROPERTIES);
        [...]
    }
}

impl ObjectImpl<BaseTransform> for Rgb2Gray {
    fn set_property(&self, obj: &glib::Object, id: u32, value: &glib::Value) {
        let prop = &PROPERTIES[id as usize];
        let element = obj.clone().downcast::<BaseTransform>().unwrap();

        match *prop {
            Property::Boolean("invert", ..) => {
                let mut settings = self.settings.lock().unwrap();
                let invert = value.get().unwrap();
                gst_info!(
                    self.cat,
                    obj: &element,
                    "Changing invert from {} to {}",
                    settings.invert,
                    invert
                );
                settings.invert = invert;
            }
            Property::UInt("shift", ..) => {
                let mut settings = self.settings.lock().unwrap();
                let shift = value.get().unwrap();
                gst_info!(
                    self.cat,
                    obj: &element,
                    "Changing shift from {} to {}",
                    settings.shift,
                    shift
                );
                settings.shift = shift;
            }
            _ => unimplemented!(),
        }
    }

    fn get_property(&self, _obj: &glib::Object, id: u32) -> Result<glib::Value, ()> {
        let prop = &PROPERTIES[id as usize];

        match *prop {
            Property::Boolean("invert", ..) => {
                let settings = self.settings.lock().unwrap();
                Ok(settings.invert.to_value())
            }
            Property::UInt("shift", ..) => {
                let settings = self.settings.lock().unwrap();
                Ok(settings.shift.to_value())
            }
            _ => unimplemented!(),
        }
    }
}

Property values can be changed from any thread at any time; that’s why the Mutex is needed here to protect our struct. And we’re using a separate mutex to be able to have it locked only for the shortest possible amount of time: we don’t want to keep it locked for the whole duration of the transform function, otherwise applications trying to set/get values would block for up to one frame.

In the property setter/getter functions we are working with a glib::Value. This is a dynamically typed value type that can contain values of any type, together with the type information of the contained value. Here we’re using it to handle an unsigned integer (u32) and a boolean for our two properties. To know which property is currently being set or retrieved, we get passed an identifier which is the index into our PROPERTIES array. We then simply match on the name of that entry to decide which property was meant.

With this implemented, we can already compile everything, see the properties and their metadata in gst-inspect-1.0 and can also set them on gst-launch-1.0 like this

# Set invert to true and shift to 128
gst-launch-1.0 videotestsrc ! rsrgb2gray invert=true shift=128 ! videoconvert ! autovideosink

If we set GST_DEBUG=rsrgb2gray:6 in the environment before running that, we can also see the corresponding debug output when the values are changing. The only thing missing now is to actually make use of the property values for the processing. For this we add the following changes to bgrx_to_gray and the transform function

impl Rgb2Gray {
    #[inline]
    fn bgrx_to_gray(in_p: &[u8], shift: u8, invert: bool) -> u8 {
        [...]

        let gray = ((r * R_Y) + (g * G_Y) + (b * B_Y)) / 65536;
        let gray = (gray as u8).wrapping_add(shift);

        if invert {
            255 - gray
        } else {
            gray
        }
    }
}

impl BaseTransformImpl<BaseTransform> for Rgb2Gray {
    fn transform(
        &self,
        element: &BaseTransform,
        inbuf: &gst::Buffer,
        outbuf: &mut gst::BufferRef,
    ) -> gst::FlowReturn {
        let settings = *self.settings.lock().unwrap();
        [...]
                    let gray = Rgb2Gray::bgrx_to_gray(in_p, settings.shift as u8, settings.invert);
        [...]
    }
}

And that’s all. If you run the element in gst-launch-1.0 and change the values of the properties you should also see the corresponding changes in the video output.

Note that we always take a copy of the Settings struct at the beginning of the transform function. This ensures that we take the mutex only for the shortest possible amount of time and then have a local snapshot of the settings for each frame.

Also keep in mind that the usage of the property values in the bgrx_to_gray function is far from optimal. It means the addition of another condition to the calculation of each pixel, thus potentially slowing it down a lot. Ideally this condition would be moved outside the inner loops and the bgrx_to_gray function would be made generic over that. See for example this blog post about “branchless Rust” for ideas on how to do that; the actual implementation is left as an exercise for the reader.
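
As a starting point for that exercise, a minimal standalone sketch (my own illustration, not code from this post) could hoist the branch out of the per-pixel loop by making the conversion generic over a small post-processing closure:

#[inline]
fn bgrx_to_gray_with<F: Fn(u8) -> u8>(in_p: &[u8], post: F) -> u8 {
    // Same BT.601 coefficients as above
    const R_Y: u32 = 19595;
    const G_Y: u32 = 38470;
    const B_Y: u32 = 7471;

    let b = u32::from(in_p[0]);
    let g = u32::from(in_p[1]);
    let r = u32::from(in_p[2]);

    post((((r * R_Y) + (g * G_Y) + (b * B_Y)) / 65536) as u8)
}

fn convert_line_gray8(in_line: &[u8], out_line: &mut [u8], shift: u8, invert: bool) {
    // The invert branch is now taken once per line instead of once per pixel;
    // each arm gets its own monomorphized copy of the per-pixel conversion.
    if invert {
        for (in_p, out_p) in in_line.chunks(4).zip(out_line.iter_mut()) {
            *out_p = bgrx_to_gray_with(in_p, |gray| 255 - gray.wrapping_add(shift));
        }
    } else {
        for (in_p, out_p) in in_line.chunks(4).zip(out_line.iter_mut()) {
            *out_p = bgrx_to_gray_with(in_p, |gray| gray.wrapping_add(shift));
        }
    }
}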

What next?

I hope the code walkthrough above was useful to understand how to implement GStreamer plugins and elements in Rust. If you have any questions, feel free to ask them here in the comments.

The same approach also works for audio filters or anything that can be handled in some way with the API of the BaseTransform base class. You can find another filter, an audio echo filter, using the same approach here.

In the next blog post in this series I’ll show how to use another base class to implement another kind of element, but for the time being you can also check the GIT repository for various other element implementations.

13 January, 2018 10:23PM

January 12, 2018

hackergotchi for VyOS

VyOS

Permission denied issues with AWS instances

Quick facts: the issue is caused by an unexpected change in the EC2 system, there is no solution or workaround yet but we are working on it.

In the last week a number of people reported an issue with newly created EC2 instances of VyOS where they could not log in to their newly created instance. At first we thought it might be an intermittent fault in AWS, since the AMI had not changed and we could not reproduce the problem ourselves, but the number of reports grew quickly, and our own test instances started showing the problem as well.

Since EC2 instances don't provide any console access, it took us a bit of time to debug. By juggling EBS volumes we finally managed to boot an affected instance with a disk image modified to include our own SSH keys.

The root cause is in our script that checks if the machine is running in EC2. We wanted to produce the AMI from an unmodified image, which required inclusion of the script that checks if the environment is EC2. Executing a script that obtains an SSH key from a remote (even if link-local) address is a security risk, since in a less controlled environment an attacker could set up a server that could inject their keys into all VyOS systems.

The key observation was that in EC2, both the system-uuid and system-serial-number fields in the DMI data always start with "EC2". We thought this was a good enough condition, and for the few years we've been providing AMIs, it indeed was.

However, Amazon changed it without warning and now the system-uuid may not start with EC2 (serial numbers still do), and VyOS instances stopped executing their key fetching script.
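
For illustration only (this is not the actual VyOS script, just a sketch of the kind of check described above), the relevant DMI fields can be inspected from a shell as root:

# requires root
uuid=$(dmidecode -s system-uuid)
serial=$(dmidecode -s system-serial-number)

# The serial number still reliably starts with "EC2"; the UUID no longer always does
case "$serial" in
    [Ee][Cc]2*) echo "running in EC2" ;;
    *)          echo "not EC2" ;;
esac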

We are working on the 1.1.8 release now, but it will go through an RC phase, while the solution to the AWS issue is needed right now. We'll contact Amazon support to see what the options are; stay tuned.

12 January, 2018 08:48PM by Daniil Baturin

hackergotchi for Ubuntu developers

Ubuntu developers

Xubuntu: Xubuntu 17.10.1 Release

Following the recent testing of a respin to deal with the BIOS bug on some Lenovo machines, Xubuntu 17.10.1 has been released. Official download sources have been updated to point to this point release, but if you’re using a mirror, be sure you are downloading the 17.10.1 version.

No changes to applications are included; however, this release does include any updates made between the original release date and now.

Note: Even with this fix, you will want to update your system to make sure you get all security fixes since the ISO respin, including the one for Meltdown, addressed in USN-3523, which you can read more about here.

12 January, 2018 05:34PM

Xubuntu: Xubuntu 17.04 End Of Life

On Saturday 13th January 2018, Xubuntu 17.04 goes End of Life (EOL). For more information please see the Ubuntu 17.04 EOL Notice.

We strongly recommend upgrading to the current regular release, Xubuntu 17.10.1, as soon as practical. Alternatively you can download the current Xubuntu release and install fresh.

The 17.10.1 release recently saw testing across all flavors to address the BIOS bug found after its release in October 2017. Updated and bug-free ISO files are now available.

12 January, 2018 02:40PM

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, December 2017

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In December, about 142 work hours have been dispatched among 12 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours did not change at 183 hours per month. It would be nice if we could continue to find new sponsors as the amount of work seems to be slowly growing too.

The security tracker currently lists 21 packages with a known CVE and the dla-needed.txt file 16 (we’re a bit behind in CVE triaging apparently). Both numbers show a significant drop compared to last month. Yet the number of DLAs released was not larger than usual (30); instead it looks like December brought us fewer new security vulnerabilities to handle, and at the same time we used this opportunity to handle lower-priority packages that had been kept on the side for multiple months.

Thanks to our sponsors

New sponsors are in bold (none this month).


12 January, 2018 02:15PM

hackergotchi for Tanglu developers

Tanglu developers

PackageKitQt 1.0.0 and 0.10.0 released!

Happy new year every one!

PackageKitQt is a Qt Library to interface with PackageKit

It’s been a while since I last did a proper PackageKitQt release, mostly because I’m focusing on other projects, but the PackageKit API itself isn’t evolving as fast as it was, so updating stuff is quite easy.

The library hadn’t seen a proper release in a while, but it got several patches from other contributors, including the Qt5 port, so 0.10.0 is for that. The 1.0.0 release matches the PackageKit 1.0.0 API, while 0.10.0 works with the older API; in PackageKit 1.0.0 some signals and methods that were useless in PackageKitQt were removed.

The 1.0.0 release also adds the complete offline updates interface, some API cleanups, missing enums, and a performance improvement in parsing package IDs.

Due to the complexity and effort needed to roll a release, from now on releases will be done with tags on GitHub, so packagers, stay tuned 🙂

https://github.com/hughsie/PackageKit-Qt/archive/v1.0.0.tar.gz

https://github.com/hughsie/PackageKit-Qt/archive/v0.10.0.tar.gz

12 January, 2018 02:07PM by dantti

hackergotchi for Purism PureOS

Purism PureOS

New Purist Services – Standard Web Services Done Ethically

The Trouble With Current Options

When you sign up for a communication service, you are typically volunteering to store your personal, unencrypted data on someone else’s remote server farm. You have no way of ensuring that your data is safe or knowing how it is being used by the owner of the server. However, online services are incredibly convenient, especially when you have multiple devices.

Limitations of typical online services:

  • Walled gardens – restricting user access to information while automating everything behind the scenes to breed consumer dependence
  • Monetization of user data – trading information about you to anyone willing to offer enough money
  • Incidental collection – weak privacy laws allow for data collection by local and foreign agencies
  • Identity theft – once an undesirable party obtains your data, either by breach or purchase, it can be used to impersonate you

Why are we Different?

We here at Purism plan to make our own suite of online apps the way we do everything else: ethically. There are many possible services that we could potentially roll out. We have decided to start with:

  • VPN – protect your browsing data from snoops, protect your location from websites
  • Storage – store backups securely
  • Chat – private 1:1 and group chat, voice and video calls
  • Email

Our goal is to release useful, ethical services. This means that our minimum product standard is:

  1. Usability, usability, usability – you don’t think about using the software, you think about why you’re using the software
  2. Libre client – typical free software benefits as well as theoretical and actual multi-platform support
  3. Libre server component – you are not tied to Purist as a service provider; you can switch to another provider or roll your own
  4. Client-side encryption – your machine performs the encryption before data is ever sent

Each service will be released one at a time, once it has been set up and properly tested. The VPN is our first priority and it is nearing completion. We are very excited to offer internet users a reasonable application alternative to Google and Apple. An alternative that cares about its customers instead of finding every possible way to profit off of them. An alternative from Purism, the Social Purpose Corporation, the company that cares about your freedom, privacy and security.

12 January, 2018 11:42AM by Jonathan Duchnowski