October 17, 2017

Ubuntu developers

Ubuntu Insights: Ubuntu Server Development Summary – 17 Oct 2017

Hello Ubuntu Server!

The purpose of this communication is to provide a status update and highlights for any interesting subjects from the Ubuntu Server Team. If you would like to reach the server team, you can find us at the #ubuntu-server channel on Freenode. Alternatively, you can sign up and use the Ubuntu Server Team mailing list.

Spotlight: Artful Release This Week!

Artful RCs are now published. The RC ISOs may not be the final versions, but please get to testing them. If all goes according to plan, Artful will be released this week!

cloud-init

  • Currently busy working on an SRU of cloud-init with the latest upstream version

curtin

  • Fixed a failure to install on disks with a pre-existing broken partition table
    (LP: #1722322)

Bug Work and Triage

IRC Meeting

Ubuntu Server Packages

Below is a summary of uploads to the development and supported releases. Current status of the Debian to Ubuntu merges is tracked on the Merge-o-Matic page. For a full list of recent merges with change logs please see the Ubuntu Server report.

Uploads to the Development Release (Artful)

cloud-init, 17.1-18-gd4f70470-0ubuntu1, smoser
docker.io, 1.13.1-0ubuntu6, mwhudson
docker.io, 1.13.1-0ubuntu5, mwhudson
libvirt, 3.6.0-1ubuntu5, paelzer
lxd, 2.18-0ubuntu6, stgraber
lxd, 2.18-0ubuntu5, stgraber
lxd, 2.18-0ubuntu4, stgraber
maas, 2.3.0~beta2-6327-gdd05aa2-0ubuntu1, andreserl
nspr, 2:4.16-1ubuntu2, mdeslaur
qemu, 1:2.10+dfsg-0ubuntu3, paelzer
qemu, 1:2.10+dfsg-0ubuntu2, paelzer
sssd, 1.15.3-2ubuntu1, tjaalton
tomcat8, 8.5.21-1ubuntu1, racb
xen, 4.9.0-0ubuntu3, mdeslaur
Total: 14

Uploads to Supported Releases (Trusty, Xenial, Yakkety, Zesty)

cloud-init, xenial, 17.1-18-gd4f70470-0ubuntu1~16.04.1, smoser
cloud-init, zesty, 17.1-18-gd4f70470-0ubuntu1~17.04.1, smoser
curtin, xenial, 0.1.0~bzr532-0ubuntu1~16.04.1, smoser
curtin, zesty, 0.1.0~bzr532-0ubuntu1~17.04.1, smoser
libseccomp, zesty, 2.3.1-2.1ubuntu2~17.04.1, tyhicks
qemu, zesty, 1:2.8+dfsg-3ubuntu2.6, paelzer
squid3, trusty, 3.3.8-1ubuntu6.10, paelzer
xen, zesty, 4.8.0-1ubuntu2.4, mdeslaur
xen, xenial, 4.6.5-0ubuntu1.4, mdeslaur
xen, trusty, 4.4.2-0ubuntu0.14.04.14, mdeslaur
Total: 10

Contact the Ubuntu Server team

17 October, 2017 07:32PM

Jorge Castro: Using kubeadm to upgrade Kubernetes

I’ve started writing for the Heptio Blog, check out my new article on Upgrading to Kubernetes 1.8 with Kubeadm.

Also if you’re looking for more interactive help with Kubernetes, make sure you check out our brand new Kubernetes Office Hours, where we livestream developers answering user questions about Kubernetes. Starting tomorrow (18 October) at 1pm and 8pm UTC, hope to see you there!

17 October, 2017 04:09PM

Purism PureOS

Deep dive into Intel Management Engine disablement

Starting today, our second generation (Skylake-based) laptops will now come with the Intel Management Engine neutralized and disabled by default. Users who already received their orders can also update their flash to disable the ME on their machines.

In this post, I will dig deeper and explain in more detail what this means exactly, and why it wasn’t done before today for the laptops that were shipping this spring and summer.

The life and times of the ME

Think of the ME as having 4 possible states:

  1. Fully operational ME: the ME is running normally like it does on other manufacturers’ machines (note that this could be a consumer or corporate ME image, which vary widely in the features they ‘provide’)
  2. Neutralized ME: the ME is neutralized/neutered by removing the most “mission-critical” components from it, such as the kernel and network stack.
  3. Disabled ME: the ME is officially “disabled” and is known to be completely stopped and non-functional
  4. Removed ME: the ME is completely removed and doesn’t execute anything at any time, at all.

In my previous blog post about taming the ME, we discussed how we neutralize the ME (note that this was on the first generation, Broadwell-based Purism laptops back then), but we’ve taken things one step further today by not only neutralizing the ME but also by disabling it. The difference between the two might not be immediately visible to some of you, so I’ll clarify below.

  • A neutralized ME is a ME image which had most of its code removed.
    • The way the ME firmware is packaged on the flash is in the form of multiple modules, and each module has a specific task, such as: hardware initialization, firmware updates, kernel, network stack, audio/video processing, HECI communication over PCI, Java virtual machine, etc. When the ME is neutralized using the me_cleaner tool, most of the modules are removed. As we’ve seen on Broadwell, that meant almost 93% of the code was removed and only 7% remained (that proportion is different on Skylake, see further below).
    • A neutralized ME means that the ME firmware will encounter an error during its regular boot cycle; it will not find some of its critical modules, so it will throw an error and fail to proceed. However, the ME remains operational, it just can’t do anything “valuable”. While it’s unable to communicate with the main CPU through HECI commands, the PCI interface to the ME processor is still active and lets us poke at the status of the ME, which shows us which error caused it to stop functioning.
  • When the ME is disabled using the “HAP” method (thanks to the ptsecurity research for discovering this trick), however, it doesn’t throw an error “because it can’t load a module”: it actually stops itself in a graceful manner, by design.

The two approaches are similar in that they both stop the execution of the ME during the hardware initialization (BUP) phase, but with the ME disabled through the HAP method, the ME stops on its own, without putting up a fight, potentially disabling things that the forceful “me_cleaner” approach, with its “unexpected error” state, wouldn’t have disabled. The PCI interface, for example, is entirely unable to communicate with the ME processor, and the status of the ME is not even retrievable.

So the big, visible difference for us between a neutralized and a disabled ME is that the neutralized ME might appear “normal” when coreboot accesses its status, or it might show that it has terminated due to an error, while a disabled ME simply doesn’t give us a status at all—so coreboot will even think that the ME partition is corrupted. Another advantage is that, from my understanding of ptsecurity’s research, a disabled ME stops its execution before a neutralized ME does, so there is at least a little bit of extra code that doesn’t get executed when the ME is disabled, compared to a neutralized ME.

Kill it with fire! Then dump it into a volcano.

In our case, we went with an ME that is both neutered and disabled. By doing so, we provide maximum security; even if the disablement of the ME isn’t functioning properly, the ME would still fail to load its mission-critical modules and would therefore be safe from any potential exploits or backdoors (unless one is found in the very early boot process of the ME).

I want to talk about neutralizing the Skylake ME, then follow up on how the ME was disabled. However, I first want you to understand the differences between the ME on Broadwell systems (ME version 10.x) and the ME on Skylake systems (ME version 11.0.x).

  • The Intel Management Engine can be seen as two things: first, the isolated processor core that runs the Management Engine is considered “the ME”, and second, the firmware that runs on the ME Core is also considered to be “the ME”. I have often used the two terms interchangeably, but to avoid confusion, I will from now on (try to) refer to them, respectively, as the ME Core and the ME Firmware; note that if I simply say the ME, then I am probably referring to the ME Firmware.
  • The ME Firmware 10.x used on Broadwell systems ran on an ARC core, while the ME Firmware 11.0.x used on Skylake systems runs on an x86 core. This means that the architecture used by the ME Core is completely different (kind of like how PowerPC and Intel Macs used different architectures, or how most mobile devices use an ARM architecture, the Broadwell ME Core used an ARC architecture). The difference between the 10.x and 11.0.x ME firmwares is therefore major, and the cores themselves are also very different. It’s a bit like comparing Arabic to Korean!
  • As the format of the ME firmware changed significantly, it took a while to figure out how to decompress the modules and understand how to remove them without breaking anything else. Nicola Corna, the author of the me_cleaner tool, was recently able to add support for Skylake machines by removing all the non-essential modules.

In my last ME-related post, I gave everyone a rundown of the modules that were in the ME 10.x firmware and which ones remained after it was neutered. So, for Skylake, here is the list of modules in a regular ME 11.0.x firmware:

-rw-r--r-- 1 kakaroto kakaroto 184320 Aug 29 16:33 bup.mod
-rw-r--r-- 1 kakaroto kakaroto  36864 Aug 29 16:33 busdrv.mod
-rw-r--r-- 1 kakaroto kakaroto  32768 Aug 29 16:33 cls.mod
-rw-r--r-- 1 kakaroto kakaroto 163840 Aug 29 16:33 crypto.mod
-rw-r--r-- 1 kakaroto kakaroto 389120 Aug 29 16:33 dal_ivm.mod
-rw-r--r-- 1 kakaroto kakaroto  24576 Aug 29 16:33 dal_lnch.mod
-rw-r--r-- 1 kakaroto kakaroto  49152 Aug 29 16:33 dal_sdm.mod
-rw-r--r-- 1 kakaroto kakaroto  16384 Aug 29 16:33 evtdisp.mod
-rw-r--r-- 1 kakaroto kakaroto  16384 Aug 29 16:33 fpf.mod
-rw-r--r-- 1 kakaroto kakaroto  45056 Aug 29 16:33 fwupdate.mod
-rw-r--r-- 1 kakaroto kakaroto  16384 Aug 29 16:33 gpio.mod
-rw-r--r-- 1 kakaroto kakaroto   8192 Aug 29 16:33 hci.mod
-rw-r--r-- 1 kakaroto kakaroto  36864 Aug 29 16:33 heci.mod
-rw-r--r-- 1 kakaroto kakaroto  28672 Aug 29 16:33 hotham.mod
-rw-r--r-- 1 kakaroto kakaroto  28672 Aug 29 16:33 icc.mod
-rw-r--r-- 1 kakaroto kakaroto  16384 Aug 29 16:33 ipc_drv.mod
-rw-r--r-- 1 kakaroto kakaroto  11832 Aug 29 16:33 ish_bup.mod
-rw-r--r-- 1 kakaroto kakaroto  24576 Aug 29 16:33 ish_srv.mod
-rw-r--r-- 1 kakaroto kakaroto  73728 Aug 29 16:33 kernel.mod
-rw-r--r-- 1 kakaroto kakaroto  28672 Aug 29 16:33 loadmgr.mod
-rw-r--r-- 1 kakaroto kakaroto  28672 Aug 29 16:33 maestro.mod
-rw-r--r-- 1 kakaroto kakaroto  28672 Aug 29 16:33 mca_boot.mod
-rw-r--r-- 1 kakaroto kakaroto  24576 Aug 29 16:33 mca_srv.mod
-rw-r--r-- 1 kakaroto kakaroto  36864 Aug 29 16:33 mctp.mod
-rw-r--r-- 1 kakaroto kakaroto  32768 Aug 29 16:33 nfc.mod
-rw-r--r-- 1 kakaroto kakaroto 409600 Aug 29 16:33 pavp.mod
-rw-r--r-- 1 kakaroto kakaroto  16384 Aug 29 16:33 pmdrv.mod
-rw-r--r-- 1 kakaroto kakaroto  24576 Aug 29 16:33 pm.mod
-rw-r--r-- 1 kakaroto kakaroto  61440 Aug 29 16:33 policy.mod
-rw-r--r-- 1 kakaroto kakaroto  12288 Aug 29 16:33 prtc.mod
-rw-r--r-- 1 kakaroto kakaroto 167936 Aug 29 16:33 ptt.mod
-rw-r--r-- 1 kakaroto kakaroto  16384 Aug 29 16:33 rbe.mod
-rw-r--r-- 1 kakaroto kakaroto  12288 Aug 29 16:33 rosm.mod
-rw-r--r-- 1 kakaroto kakaroto  49152 Aug 29 16:33 sensor.mod
-rw-r--r-- 1 kakaroto kakaroto 110592 Aug 29 16:33 sigma.mod
-rw-r--r-- 1 kakaroto kakaroto  20480 Aug 29 16:33 smbus.mod
-rw-r--r-- 1 kakaroto kakaroto  36864 Aug 29 16:33 storage.mod
-rw-r--r-- 1 kakaroto kakaroto   8192 Aug 29 16:33 syncman.mod
-rw-r--r-- 1 kakaroto kakaroto  94208 Aug 29 16:33 syslib.mod
-rw-r--r-- 1 kakaroto kakaroto  16384 Aug 29 16:33 tcb.mod
-rw-r--r-- 1 kakaroto kakaroto  28672 Aug 29 16:33 touch_fw.mod
-rw-r--r-- 1 kakaroto kakaroto  12288 Aug 29 16:33 vdm.mod
-rw-r--r-- 1 kakaroto kakaroto  98304 Aug 29 16:33 vfs.mod

And here is the list of modules in a neutered ME:

-rw-r--r-- 1 kakaroto kakaroto 184320 Oct  4 16:21 bup.mod
-rw-r--r-- 1 kakaroto kakaroto  73728 Oct  4 16:21 kernel.mod
-rw-r--r-- 1 kakaroto kakaroto  16384 Oct  4 16:21 rbe.mod
-rw-r--r-- 1 kakaroto kakaroto  94208 Oct  4 16:21 syslib.mod

The total ME size dropped from 2.5MB to 360KB, which means that 14.42% of the code remains, while 85.58% of the code was neutralized with me_cleaner.

The reason the neutering on Skylake-based systems removed less code than on Broadwell-based systems is the code in the ME’s read-only memory (ROM). What this “ROM” means is that a small part of the ME firmware is actually burned into the silicon of the ME Core. The ROM content is the first code executed by the ME Core, loaded internally from the ROM, and it has the simple task of reading the ME firmware from the flash, verifying its signature to make sure it hasn’t been tampered with, loading it into the ME Core’s memory and executing it.

  • On Broadwell, there is about 128KB of code burned in the ME Core’s ROM. That 128KB of code contains the bootloader as well as some system APIs that the other modules can use.
  • On Skylake, the ROM code was decreased to 17KB, leaving only the basic bootloader, and moving the system APIs to a module of their own inside the ME firmware.
  • This means that the total amount of code remaining, including the ROM, is 360+17KB out of 2524+17KB = 377/2541 = 14.84% for Skylake, while on Broadwell, it’s 120+128KB out of 1624+128KB = 248/1752 = 14.15% of code remaining. The difference is much smaller once we account for the code hidden in the ROM of the processor (the short calculation after this list reproduces these figures).
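
For readers who want to check the arithmetic, here is a small Python sketch that reproduces these percentages. The sizes are the approximate figures quoted in this post, not exact flash measurements:

# Back-of-the-envelope check of the "code remaining" figures above.
# All sizes are in KB and are the approximations quoted in this post.

def remaining(kept_kb, total_kb):
    """Percentage of ME code that survives neutralization."""
    return 100.0 * kept_kb / total_kb

# Skylake (ME 11.0.x): ~360 KB of modules kept on flash, plus a 17 KB boot ROM
skylake = remaining(360 + 17, 2524 + 17)

# Broadwell (ME 10.x): ~120 KB of modules kept on flash, plus a 128 KB boot ROM
broadwell = remaining(120 + 128, 1624 + 128)

print(f"Skylake:   {skylake:.1f}% of the code remains (ROM included)")
print(f"Broadwell: {broadwell:.1f}% of the code remains (ROM included)")
# Prints roughly 14.8% and 14.2%, in line with the figures above.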

The problem with the code in the ROM is that it cannot be removed because it’s inside of the processor itself and, well, it’s Read-Only Memory—it cannot be overwritten in any way, by definition. On the bright side, it is nice to see that most of the code that was previously in the ROM has now been moved to the flash in Skylake systems.

The ME firmware itself has multiple “partitions”, each containing something that the ME firmware needs. Some of those partitions contain code modules, some contain configuration files, and some contain “other data” (I don’t really know what). Either way, the ME firmware contains about a dozen different partitions, each for a specific purpose, and two of those partitions contain the majority of the code modules.
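
For the curious, here is a rough Python sketch of how those partitions can be listed from a dump of the ME region. It is based on the publicly reverse-engineered Flash Partition Table layout used by tools such as me_cleaner and unME11; the offsets below are assumptions that hold for ME 11.x images and may differ on other ME generations:

import struct
import sys

# Assumed layout (ME 11.x): the "$FPT" marker sits at offset 0x10 of the ME
# region, the number of entries at 0x14, and 0x20-byte partition entries
# start at offset 0x30 (a 4-byte name, then owner, offset and length fields).

def list_partitions(me_region_path):
    with open(me_region_path, "rb") as f:
        data = f.read()

    if data[0x10:0x14] != b"$FPT":
        raise ValueError("no $FPT marker found; this expects a bare ME region dump")

    num_entries = struct.unpack_from("<I", data, 0x14)[0]
    for i in range(num_entries):
        base = 0x30 + i * 0x20
        name = data[base:base + 4].rstrip(b"\x00").decode("ascii", "replace")
        offset, length = struct.unpack_from("<II", data, base + 0x08)
        print(f"{name:<4}  offset=0x{offset:08x}  length=0x{length:08x}")

if __name__ == "__main__":
    list_partitions(sys.argv[1])  # path to an ME region dumped with flashrom

On a typical image this prints a dozen or so four-letter names, including the code-bearing partitions mentioned above and the MFS partition that becomes important further below.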

Schrödinger’s Wi-Fi

I’ll now explain what has been done to get to this point in the project. When I was done with the coreboot port to the new Skylake machines, I tried to neutralize the ME, thinking it would be a breeze since me_cleaner claimed support for Skylake. Unfortunately, it wasn’t working as it should have, and I spent the entire hacking day at the coreboot conference trying to fix it.

The problem was that once the ME was neutralized with me_cleaner, the Wi-Fi module on the Librem became unpredictable: it would sometimes work and sometimes wouldn’t, which was confusing. I eventually realized that if I rebooted after replacing the ME, the Wi-Fi would keep the same state it was in before:

  • if I neutralized the ME and rebooted, the Wi-Fi would still work, but after powering off the machine and turning it on, the Wi-Fi would stop working;
  • if I restored a full ME (instead of a neutralized one) and rebooted, the Wi-Fi would remain dead;
  • …but if I then powered off the machine and turned it back on, the Wi-Fi would finally be restored.

I figured that it had something to do with how the PCI-Express card is initialized, and I spent quite some time trying to “enable it” from coreboot with a neutralized ME. I’ll spare you the details, but I eventually realized that I couldn’t get it to work because the PCIe device completely ignored all my commands and would simply refuse to power up. It turns out that the ME controls the ICC (Integrated Clock Controller), so without it, the clock for the PCIe device is never enabled, the Wi-Fi card can’t work, and there is nothing you can do about it because only the ME has control over the ICC registers. I tested a handful of different ME firmware versions, but surprisingly, the Wi-Fi module never worked with any of those images, even when the ME was not neutralized. Obviously, that meant the ME firmware was not properly configured, so I used the Intel FIT tool (which is used to configure ME images, allowing us to set things like PCIe lanes, which clocks to enable, and so on). Unfortunately, even when an image was configured the exact same way as the original ME image we had, the Wi-Fi would still not work, and I couldn’t figure out why.

I shelved the problem to concentrate on the release of coreboot and eventually on the SATA issues we were experiencing. The decision was made to release the Librem 13 v2 and Librem 15 v3 with a regular ME until more work was done on that front, because we couldn’t hold back shipments any longer (and because we can provide updates after shipment). Also note that at that time, support for Skylake in me_cleaner was very rough—it removed only half of the ME code, because the format of the new ME 11.x firmware wasn’t fully known yet.

A few weeks later, I saw the release of unME11 from ptresearch, and a week later, Nicola Corna pushed more complete support for Skylake in a testing branch of me_cleaner. I immediately jumped on it and tested it on our machines. Unfortunately, the Wi-Fi issue was still there. I decided to debug the cause by figuring out what me_cleaner does that could be affecting the ME firmware that way.

As I mentioned earlier in this post, the ME firmware is made up of about a dozen partitions, some of which contain code modules, and me_cleaner removes all the partitions except one, from which it then removes most of the modules, leaving only the critical ones needed for the startup of the system. Therefore, I started progressively whitelisting more modules so me_cleaner wouldn’t remove them, and testing whether that affected the Wi-Fi module. This was annoying to test because I’d have to change me_cleaner, neutralize the ME firmware, copy the image from my main PC to the Librem (and if the Wi-Fi wasn’t working, which was 99% of the time, that meant copying the files over a USB drive), flash the new image, power off, then restart the machine. I eventually restored all of the modules and it was still not working, which made me suspect the cause might be in one of the other partitions, so I gradually added one partition at a time, until the Wi-Fi suddenly worked. I had just added the “MFS” partition, so I started removing the other partitions again one at a time, while keeping the “MFS” partition, and the Wi-Fi kept working. I eventually removed all of the code modules (apart from the critical ones) while keeping the MFS partition, and the Wi-Fi was still working. So I had found my fix: I just needed to keep the “MFS” partition in the image and the Wi-Fi would work.

So many firmwares, so little time

So, what is this mysterious “MFS” partition? There’s not a lot of information about it anywhere online, other than one forum or mailing list user mentioning the MFS partition as “ME File System”. I decided to use a comparative approach.

The fun thing when comparing ME firmware images is that not only are there multiple versions (e.g. 10.x vs 11.x), but for each ME version there are multiple “flavors”, such as “consumer” or “corporate”, and there are also separate flavors for “mobile” and “desktop”.

  • When I extracted and compared all the partitions of all the variants and flavors, the only difference between a mobile and a desktop image was in the MFS partition, as every other partition shares the same hash between two flavors of the same version (a small sketch of this kind of hash comparison follows this list).
  • I then compared the various partitions between a configured and a non configured ME firmware, and noticed that what the Intel FIT tool does when you change the system’s configuration is to simply write that configuration inside of the MFS partition.
  • This means that the MFS partition, which doesn’t contain any code modules, is used for storage of configuration files used by the ME firmware. This is somewhat confirmed by the fact that the MFS partition is marked as containing data.
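
Here is a minimal sketch of that kind of comparison, assuming the partitions of two ME images have already been extracted into two directories, one file per partition (the directory names are illustrative):

import hashlib
from pathlib import Path

def digest(path):
    """SHA-256 of a file, read in chunks so large partitions are fine."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def compare(dir_a, dir_b):
    """Report which extracted partitions differ between two ME images."""
    a, b = Path(dir_a), Path(dir_b)
    for name in sorted(p.name for p in a.iterdir()):
        other = b / name
        if not other.exists():
            print(f"{name}: only present in {dir_a}")
        elif digest(a / name) == digest(other):
            print(f"{name}: identical")
        else:
            print(f"{name}: DIFFERS")  # e.g. MFS between "mobile" and "desktop"

# compare("me_11.0_mobile/partitions", "me_11.0_desktop/partitions")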

After modifying me_cleaner to add support for the Librem, which allowed us to neutralize the ME while keeping the Wi-Fi module working, I discussed with Nicola Corna how best to integrate the feature into me_cleaner. We came to the conclusion that a new option allowing users to select which partitions to keep would be a better method, so I sent a pull request that adds such a feature.

Unfortunately, while the Wi-Fi module was working with this change, adding the MFS partition back into the ME firmware also had an adverse side effect: my machine would refuse to power off, for example, and would have trouble rebooting.

  • The exact behavior was that if I powered off the machine, Linux would go through the entire power-off sequence and then stop, and I would have to force a shutdown by holding the power button for 5 seconds. As for the rebooting issue, instead of actually rebooting when Linux finished its power-off sequence, the system would freeze for a few seconds before suddenly shutting itself down forcibly, then turning itself back on 5 seconds later, on its own. This isn’t the most critical of issues, but it would be very annoying to users, and unfortunately, I couldn’t find the cause of this strange behavior. All I knew was that if I removed the MFS partition, coreboot said the ME partition was corrupted and the Wi-Fi module didn’t work, and if I kept the MFS partition, coreboot said the ME partition was valid and the Wi-Fi module worked, but the poweroff/reboot issues appeared.
  • The solution for these issues turned out to be unexpectedly simple. After another of our developers said he was ready to live with the poweroff/reboot issues and I sent him a neutralized ME for his system, I was told that his machine was working fine with no side effects at all. I didn’t know what the difference between his machine and mine was, other than the fact that my machine is a prototype and his was a “production” machine. I then tested my neutralized ME on the “production” Librem 13 unit I had on hand, and I didn’t see any side effects from neutralizing the ME firmware. I then updated my coreboot build script to add the neutralization option and asked users on our forums to test it, and everyone who tested the neutralized ME reported back success with no side effects. I then realized the problem was probably specific to the prototype machine I was using. Well, I can live with that.

Disabling the ME

The next step for me was to start reverse-engineering the ME firmware, like I had done before. This is of course a very long and arduous process that took a while and for which I don’t really have much progress to show. One thing I wanted to reverse-engineer was the MFS file system format, so I could see which configuration files are within it and start eliminating as much from it as possible. I started from the beginning, however, by reverse-engineering the entry point in the ROM. I will spare you much of the detail and the trouble of trying to understand some of the instructions, and especially some of the memory accesses. The important thing to know is that before I got too far along, ptresearch announced the discovery of a way to disable the Intel ME, and I needed to test it.

Unfortunately, enabling the HAP bit, which disables the ME Core, didn’t work on the Librem: it caused the power LED to blink very slowly, and nothing I could do would stop it until I removed the battery. I first thought the machine was stuck in a boot loop, but it was just blinking really slowly. I eventually figured out that the reason was that the “HAP” bit was not added in version 11.0.0, but rather in version 11.0.x (where x > 0). I decided to try a newer ME firmware version and the HAP bit did work with it, which confirmed that ME disablement was a feature added to the ME after the version the Librem shipped with (11.0.0.1180). So now I had a newer ME (version 11.0.18.1002) that is disabled thanks to the HAP bit, but… no Wi-Fi again.

I decided to retry using the FIT tool to configure the ME with the exact same settings as the old ME firmware. I went through every setting available to make sure it matched, and when I tried booting it again, the ME Core was disabled and the Wi-Fi module was working. Great success!

Obviously, I then needed to do plenty of testing: make sure everything was working as it should, confirm that the ME Core was disabled, test the behavior of the system with a ME firmware both disabled and neutralized, and verify that it has no side effects other than what we wanted.

My previous coreboot build script used the ME image from the local machine, but unfortunately, I can’t do that for disabling the ME, since disablement isn’t supported by the ME image most people currently have on their machines. So I updated my coreboot build script to download the new ME version from a public link (found here), and I used bsdiff to patch the ME image with the proper configuration for the Wi-Fi to work. I made sure to check that the only changes to the ME image are in the MFS partition and are configuration data, so the binary patch does not contain any binary code and we can safely distribute it.
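
The build script itself isn't reproduced in this post, but conceptually the patching step looks like the sketch below. It uses the third-party bsdiff4 Python package as a stand-in for the bsdiff/bspatch command-line tools, and the file names are illustrative:

import bsdiff4  # third-party package: pip install bsdiff4

# One-time step (maintainer side): record the configuration changes made with
# Intel's FIT tool as a binary diff against the stock ME image.
bsdiff4.file_diff("me_11.0.18.1002_stock.bin",       # image as downloaded
                  "me_11.0.18.1002_configured.bin",  # image after FIT configuration
                  "librem_mfs_config.bsdiff")

# Build-time step (user side): re-apply that diff to the freshly downloaded
# stock image before handing the result to me_cleaner and coreboot.
bsdiff4.file_patch("me_11.0.18.1002_stock.bin",
                   "me_11.0.18.1002_for_librem.bin",
                   "librem_mfs_config.bsdiff")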

Moving towards the FSP

The next step will be to continue the reverse-engineering efforts, but for now, I’ve put that on hold because ptresearch have announced that they found an exploit in the ME firmware allowing the execution of unsigned code. This exploit will be presented at the Black Hat Europe 2017 conference in December, so we’ll have to wait and see how their exploit works and what we can achieve with it before going further. Also, once ptresearch release their information, it might be possible for us to work together and share our knowledge. I am hoping that I can get some information from them on code that they have already reverse-engineered, so I don’t have to duplicate all of their efforts. I’d also like to mention that, just as last time, Igor Skochinsky has generously shared his research with us; getting data from ptresearch as well would be a tremendous help, considering how much work they have already invested in this.

Right now, I have decided to move my focus to investigating the FSP, which is another important binary that needs to be reverse-engineered and removed from coreboot. I don’t think that anyone is currently actively working on it, so hopefully, I can achieve something without duplicating someone else’s work, and we can advance the cause much faster this way. I think I will concentrate first on the PCH initialization code, then move to the memory initialization.

17 October, 2017 03:38PM by Youness Alaoui

OSMC

OSMC security update for OSMC 2017.09-1 and earlier

Platforms affected:

  • OSMC for Raspberry Pi (all models)
  • OSMC for Vero (all models)

A series of vulnerabilities [1] have been discovered in WPA2, a protocol that secures all modern protected Wi-Fi networks. An attacker within range of a victim can exploit these weaknesses using key reinstallation attacks (KRACKs). Concretely, attackers can use this novel attack technique to read information that was previously assumed to be safely encrypted.

This vulnerability has now been mitigated and a fix is included in OSMC for all supported platforms.

We recommend you update your device immediately. This can be done by going to My OSMC -> Updates -> Check for Updates. After updating, your system should report OSMC 2017.9-2 as the version in My OSMC.

Although OSMC has a monthly update cycle, we make critical bug fixes and fixes for security vulnerabilities available immediately. You can learn more about OSMC's update cycle and about keeping your system up to date here.

[1] Krackattacks.com. (2017). KRACK Attacks: Breaking WPA2.

17 October, 2017 11:59AM by Sam Nazarko

Deepin

Deepin Security Update: Urgent Fix for wpa Security Vulnerability DSA-3999-1 Affecting Wi-Fi Connections

Security updates for wpa. Vulnerability information: DSA-3999-1 (wpa security update). Security database details: Mathy Vanhoef of the imec-DistriNet research group of KU Leuven discovered multiple vulnerabilities in the WPA protocol, used for authentication in wireless networks. Those vulnerabilities apply to both the access point (implemented in hostapd) and the station (implemented in wpa_supplicant). An attacker exploiting the vulnerabilities could force the vulnerable system to reuse cryptographic session keys, enabling a range of cryptographic attacks against the ciphers used in WPA1 and WPA2. More information can be found in the researchers' paper, Key Reinstallation Attacks: Forcing Nonce Reuse in WPA2. CVE-2017-13077: ...Read more

17 October, 2017 09:01AM by melodyzou

Deepin Terminal V2.7 is Released

Deepin Terminal V2.7 is a revision release. It adds some new features, mainly optimizes details, and fixes bugs reported by users. Newly added: nine terminal windows in different colors can be opened by pressing Ctrl + Alt + 1~9, configured via the theme_terminal setting. Newly added: a run_as_login_shell option (disabled by default), so a login shell can be opened by default through the configuration files. Newly added: the vte library from git to implement the terminal control, fixing the search issue caused by vte. Optimized: no “Unknown option” prompt appears when selecting the terminal's default command-line option. Optimized ...Read more

17 October, 2017 08:49AM by jingle

LiMux

Protected: Contact List for the Open Government Tag 2017

There is no summary, as this is a protected post.

The post Protected: Contact List for the Open Government Tag 2017 first appeared on the Münchner IT-Blog.

17 October, 2017 08:47AM by Stefan Döring

Deepin

Update Record Of Applications In Deepin Store (2017-10)

Update details for October 17. Applications added: Nuclear, Textadept, DDNet, CPU-X, Caret, Extraterm, Charles, Godot, SeaMonkey. Applications updated: blender, jstock, vscode, zotero-standalone, vim.

17 October, 2017 03:20AM by longxiang

October 16, 2017

Ubuntu developers

Ubuntu Insights: LXD Weekly Status #19

Introduction

This past week, part of the team was back in New York for more planning meetings, fleshing out the details of the next 6 months, including LXC, LXD and LXCFS 3.0.

The rest of the team made good progress on some smaller feature work, started working on console attach for LXD, looked into SR-IOV passthrough of network cards and infiniband devices and fixed a good chunk of bugs in LXD and LXC.

We also launched the CFP for the containers devroom at FOSDEM 2018!

This week we’ll be pushing out stable releases for all supported LXC, LXD and LXCFS releases as well as releasing LXD 2.19.

Upcoming conferences and events

  • Open Source Summit Europe (Prague, October 2017)
  • Linux Piter 2017 (St. Petersburg, November 2017)
  • FOSDEM 2018 (Brussels, February 2018)

Ongoing projects

The list below is feature or refactoring work which will span several weeks/months and can’t be tied directly to a single Github issue or pull request.

Upstream changes

The items listed below are highlights of the work which happened upstream over the past week and which will be included in the next release.

LXD

LXC

LXCFS

  • Nothing to report

Distribution work

This section is used to track the work done in downstream Linux distributions to ship the latest LXC, LXD and LXCFS as well as work to get various software to work properly inside containers.

Ubuntu

  • Uploaded 2.18-0ubuntu4 with a selection of recent bugfixes.
  • Uploaded 2.18-0ubuntu5 with the network limits and dir backend mounting fixes.
  • Uploaded 2.18-0ubuntu6 with the follow-up dir backend fix.

Snap

  • Updated the candidate channel with the same cherry-picks as the deb package.
  • Updated the lxc wrapper in edge to use the newly added snap handling code in the CLI tool.

16 October, 2017 10:02PM

Cumulus Linux

NetDevOps: what does it even mean?

Move over “selfie” — “NetDevOps” is the hottest buzzword that everybody is talking about! It’s so popular that the term even has its own hashtag on Twitter. But when you take the word out of social media, does anyone really know what it means? Or how this perfect portmanteau can revolutionize your data center? Let’s take a moment to discuss what NetDevOps really is all about. In this post, we’ll go over the definition, the best practices, and the tech that best incorporates NetDevOps. Now, when you see #NetDevOps appear on your feed, you can tweet it out with confidence.

What does it all mean?

If you understand the basic principles of DevOps, then congratulations! You’re two-thirds of the way to grasping the concept of NetDevOps. For the uninitiated, DevOps embraces the ideology of interoperability and communication between the development and operations teams in order to break down silos and create better products. The movement also encourages automation and monitoring in order to increase efficiency and reduce error.

DevOps is certainly a great movement, but like the VCR and the DVD player, something new came along and improved upon it. This is where NetDevOps comes in. So, what exactly is NetDevOps? We asked a team of highly-qualified professionals (our in-house engineers) and this is the wisdom they gave us:

  • “NetDevOps is the process of making the running of networking gear at scale as efficient as the running of server gear at scale.”
  • “It’s a practice that is at-scale and uses automated management tools.”
  • “NetDevOps is a culture, movement, or practice that emphasizes the collaboration and communication of both network architects and operators while automating the process of network design and changes. It aims at establishing a culture and environment where building, testing, and releasing network changes can happen rapidly, frequently, and more reliably.”
  • “It’s DevOps with Net in the front.”

Okay, that last definition might be a little too obvious. But if we look at all of these explanations and average them out, we get a pretty consistent definition. Instead of only applying to software developers and IT operations, NetDevOps extends the ideology of DevOps to the network. Simple as that. If you’re already utilizing automation tools in the compute world, why not also use them in networking? Just take the concepts of DevOps, think of them in the context of the network, and you’ve got the NetDevOps meaning!

How can it help my data center?

When you start taking the concepts of DevOps and applying them to your network, the benefits start rolling in, from reduced network downtime to increased savings. But how exactly does NetDevOps make those assets a reality? Let’s break it down.

  • Demolish human silos: The whole point of NetDevOps is to encourage communication among teams in order to increase efficiency and foster collaboration so that the company can create the best possible products and services. In other words, there’s no room for the confining human silos that plague companies of any scale. When faced with issues, it’s far more efficient and pragmatic to have multiple teams working towards a solution than letting people turn a blind eye because it’s “not their problem.” This includes extending tools between the application, server and networking space. Finding the best-of-breed solution for each use case and applying it across the entire IT infrastructure helps build consistency between silos. For example, automation tooling that has already worked in the server infrastructure can be extended into the network, connectivity solutions in the network can be extended into the application, and monitoring solutions used in the application space can be extended across the entire IT infrastructure.
  • Reduce manual intervention with IaC: Let’s face it: having to log into your server and make manual changes every time there’s a change in the network is a hassle. It wastes time and increases the possibility of human error. Fortunately, NetDevOps has a solution — Infrastructure as code (IaC). IaC is the process of managing computer data centers through machine-readable definition files instead of physical intervention. While IaC is often mixed up with automation, and both are key to a successful integration of NetDevOps, it’s important to remember that IaC is different from automation. It’s a term that encompasses more than automation, in that it asks for NetDevOps practices to also be applied to the process of automation. IaC must ensure that automation scripts are free of errors, can be redeployed on multiple servers, can be rolled back, and are accessible to all teams. So, when you’re looking to incorporate NetDevOps practices to decrease manual intervention, make sure to consider IaC (a toy illustration of the definition-files idea follows this list).
  • Increase automation: Incorporating automation is one of the NetDevOps practices that can greatly improve your data center and reduce human error. What makes automation so valuable for your network is that you can unify it to deploy your network as a single node. Now, you no longer need different automation and deployment methods for the network, application, and server; they’re all brought together in one best-of-breed solution. If you’re ready to start automating and optimizing your network with NetDevOps, check out this blog post about the best practices you need to get started. Or, if you’d like a real-world example of how automation can revolutionize your datacenter, read this case study to discover how leveraging automation helped BlueJeans provision new data centers in under 30 minutes.
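
As a toy illustration of the “machine-readable definition files” idea (not tied to any particular vendor tool), the desired state of a few switch ports can live in version control as plain data and be rendered into configuration by a script:

# Toy "infrastructure as code": the desired state lives in data that can be
# reviewed, diffed and rolled back in version control, and a script renders
# it into device configuration. Interface names and options are illustrative.
desired_interfaces = {
    "swp1": {"description": "uplink to spine01", "mtu": 9216},
    "swp2": {"description": "uplink to spine02", "mtu": 9216},
    "swp10": {"description": "web rack A", "mtu": 1500},
}

def render(interfaces):
    """Render the desired state as an /etc/network/interfaces-style snippet."""
    blocks = []
    for name, opts in sorted(interfaces.items()):
        blocks.append(
            f"auto {name}\n"
            f"iface {name}\n"
            f"    # {opts['description']}\n"
            f"    mtu {opts['mtu']}\n"
        )
    return "\n".join(blocks)

if __name__ == "__main__":
    print(render(desired_interfaces))

Because the data is the single source of truth, a change shows up as a reviewable diff, the same definition can be applied to any number of switches, and rolling back is just a matter of reverting the commit.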

What kind of tech supports NetDevOps?

Now that you’ve decided to incorporate NetDevOps principles into your data center, it’s time to invest in the technology that can keep up with your new ideology. Fortunately, Cumulus has your back. If you’re ready to take advantage of all NetDevOps has to offer, here are some products to help:

  • Cumulus Linux: Our open source network operating system for bare metal switches is the first step you can take towards bringing DevOps principles into your network. Start automating with a completely open architecture that allows for easy automation. Plus, existing open source and commercial Linux applications run natively, meaning Cumulus Linux works with your existing toolsets. For more information about how Cumulus Linux can boost your network automation, check out our network automation solutions page.
  • NetQ: The perfect companion to Cumulus Linux, NetQ is our telemetry-based fabric validation system. With NetQ, network automation becomes a breeze. NetQ’s proactive, preventative, and diagnostic capabilities ensure that the network is behaving as intended by automating repetitive tasks. Read this blog post about automating troubleshooting with NetQ to understand NetQ’s unlimited capabilities. In addition to improved automation and monitoring, NetQ has unparalleled visibility that unifies the entire stack in a single view, which means delegating across adjacent teams and busting silos become much easier.

Join the NetDevOps revolution today with Cumulus Linux and NetQ. You can even try out these products and more with Cumulus in the Cloud absolutely free!

The post NetDevOps: what does it even mean? appeared first on Cumulus Networks Blog.

16 October, 2017 04:39PM by Madison Emery

Ubuntu developers

Didier Roche: Ubuntu GNOME Shell in Artful: Day 15

Since the Ubuntu Rally in New York, the Ubuntu desktop team has been full speed ahead on the latest improvements we can make to our 17.10 Ubuntu release, Artful Aardvark. Last Thursday was our Final Freeze and I think it’s a good time to reflect on some of the changes and fixes that happened during the rally and the following weeks. This list isn’t exhaustive at all, of course, and only partially covers changes in our default desktop session, which features GNOME Shell by default. For more background on our current transition to GNOME Shell in artful, you can refer back to our decisions regarding our default session experience as discussed in my blog post.

Day 15: Final desktop polish before 17.10 is out

GNOME 3.26.1

Most of you will have noticed already, but most GNOME modules have been updated to their 3.26.1 release. This means that Ubuntu 17.10 users will be able to enjoy the latest and greatest from the GNOME project. It’s been fun to follow the latest development release again, report bugs, catch regressions and follow new features.

GNOME 3.26.1 introduces, in addition to many bug fixes, improvements, and documentation and translation updates, resizeable tiling support, which is a great feature that many people will surely take advantage of! Here is the video that Georges made and blogged about while developing the feature, for those who haven’t had a look yet:

A quick Ubuntu Dock fix rampage

I’ve already praised the excellent Dash to Dock upstream here many times for their responsiveness and friendliness. A nice illustration of this occurred during the Rally. Nathan grabbed me in the Desktop room and asked if a particular dock behavior was desired (scrolling on the edge switching between workspaces). It was the first time I had heard of that feature, and since I found the behavior possibly confusing, I pointed him to the upstream bug tracker, where he filed a bug report. Even before I pinged upstream about it, they noticed the report and engaged in the discussion. We came to the conclusion that the behavior is unpredictable for most users, and the fix was quickly in, which we backported into our own Ubuntu Dock along with some other glitch fixes.

The funny part is that Chris witnessed this, and reported that particular awesome cooperation effort in a recent Linux Unplugged show.

Theme fixes and suggested actions

With our transition to GNOME Shell, we are now following GNOME upstream’s philosophy more closely and have dropped our headerbar patches. Indeed, for Unity’s vertical-space-optimization paradigm (stripping the title bar and menus from maximized applications), we previously distro-patched a lot of GNOME apps to revert the large headerbar. This isn’t the case anymore. However, it created a different class of issues: action buttons are now generally at the top and not noticeable with our Ambiance/Radiance themes.

Enabled suggested action button (can't really notice it)

We thus introduced some styling for the suggested action, which consequently makes those buttons noticeable at the top (this is how the upstream Adwaita theme implements it as well). After a lot of discussion about which color to use (we tried, of course, different shades of orange, aubergine…), and after working with Daniel from Elementary (proof!), Matthew suggested using the green color from the retired Ubuntu Touch color palette, which is the best fit we could ever have come up with ourselves. That was followed by some gradient work to make it match our theme, and some post-upload fixes for various states (thanks to Amr for reporting some bugs on them so quickly, which forced me to fix them during my flight back home :p). We hope that this change will help users get into the habit of looking for actions in the GNOME headerbars.

Enabled suggested action button

Disabled suggested action button

But that’s not all on the theme front! A lot of people were complaining about the double gradient between the shell and the title bar. For the final freeze, we just uploaded some small changes by Marco making titlebars, headerbars and GTK2 applications look a little bit better, whether focused or unfocused, and whether they have one menu or none. Another change was made in the GNOME Shell CSS to make our Ubuntu font appear a little less blurry than it was under Wayland. A long-term fix is under investigation by Daniel.

Headerbar on focused application before theme change

Headerbar on focused application with new theme fix

Title bar on focused application before theme change

Title bar on focused application with new theme fix

Title bar on unfocused application with new theme fix

Settings fixes

The Dock settings panel has evolved quite a lot since its inception.

First shot at Dock settings panel

Bastien, who has worked a lot on GNOME Control Center upstream, was kind enough to give a bunch of feedback. While some of his suggestions were too intrusive so late in the cycle, we implemented most of them. Of course, even though we live less than 3 km away from each other, we collaborated as proper geeks over IRC ;)

Here is the result:

Dock settings after suggestions

One of the best pieces of advice was to change the list backgrounds to white (we worked on that with Sébastien), making them much more readable:

Settings universal access panel before changes

Settings universal access panel after changes

Settings search Shell provider before changes

Settings search Shell provider after changes

i18n fixes in the Dock and GNOME Shell

Some (but not all!) items accessible by right-clicking on applications in the Ubuntu Dock, or even in the upstream Dash in the vanilla session, weren’t translated.

Untranslated desktop actions

After a little bit of poking around, it appeared that only the Desktop Actions were impacted (what we called “static quicklists” in the Unity world). Those were standardized, some years after we introduced them in Unity, in Freedesktop spec revision 1.1.

Debian, like Ubuntu, extracts translations from desktop files to include them in langpacks. Glib is thus distro-patched to load those translations correctly. However, the patch was never updated to make action names return localized strings, as few people use those actions. After a little bit of debugging, I fixed the patch in Ubuntu and proposed it back in the Debian Bug Tracking System. It is now merged there for the next glib release (as the bug impacts Ubuntu, Debian and all their derivatives).

We weren’t impacted by this bug previously because, when we introduced this in Unity, the actions weren’t standardized yet and glib didn’t support them; Unity was thus loading the actions directly itself. It’s nice to have fixed that bug now so that other people can benefit from it, whether they use Debian, vanilla GNOME Shell on Ubuntu, or any other combination!

Translated desktop actions
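
A quick way to check the result on a given system is to ask glib for the action names through PyGObject; with the patched glib and the right langpacks installed, the names come back translated. The desktop-file ID below is just an example:

import gi
gi.require_version("Gio", "2.0")
from gi.repository import Gio

# Pick any installed desktop file that ships Desktop Actions ("static quicklists").
app = Gio.DesktopAppInfo.new("org.gnome.Nautilus.desktop")
if app is None:
    raise SystemExit("desktop file not found; try another application ID")

for action in app.list_actions():
    # get_action_name() returns the Name of the corresponding [Desktop Action]
    # group, which should now be localized for the current language.
    print(action, "->", app.get_action_name(action))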

Community HUB

Alan announced recently the Ubuntu community hub when we can exchange between developers, users and new contributors.

When looking at this at the sprint, I decided that it could be a nice place for the community to comment on those blog posts rather than creating another silo here. Indeed, the current series of blog post have more than 600 comments, I tried to be responsive on most of them requiring some answers, but I can’t obviously scale. Thanks to some of the community who already took the time to reply to already answered questions there! However, I think our community hub is a better place for those kind of interactions and you should see below, an automated created topic on the Desktop section of the hub corresponding to this blog post (if all goes well. Of course… it worked when we tested it ;)). This is read-only, embedded version and clicking on it should direct you to the corresponding topic on the discourse instance where you can contribute and exchange. I really hope that can foster even more participation inside the community and motivate new contributors!

(edit: it seems there are still some random issues with topic creation; for the time being, I created a topic manually and you can comment on it here)

Other highlights

We also got some multi-monitor fixes, HiDPI enhancements, indicator extension improvements and many others… Part of the team worked with Jonas from Red Hat on mutter and Wayland scaling factors. It was a real pleasure to meet him and to have him tag along during the evenings and our numerous walks throughout Manhattan as well! It was an excellent sprint followed by nice follow-up weeks.

If you want to get a little taste of what happened during the Ubuntu Rally, Chris from Jupiter Broadcasting recorded some vlogs from his trip there, one of them being about the event itself:

As usual, if you are eager to experiment with these changes before they migrate to the artful release pocket, you can head over to our official Ubuntu desktop team transitions ppa to get a taste of what’s cooking!

Now, it’s almost time to release 17.10 (few days ahead!), but I will probably blog about the upgrade experience in my next and last - for this cycle - report on our Ubuntu GNOME Shell transition!

Edit: As told before, feel free to comment on our community HUB as the integration below doesn’t work for now.

16 October, 2017 04:15PM

Xanadu developers

A PC Submerged in Mineral Oil

A few days ago I bought the components to build a new PC, but when I tried to fit everything into the old case I realized that the motherboard did not fit properly, so I started looking for possible solutions and I … Continue reading

16 October, 2017 01:15PM by sinfallas

Ubuntu developers

Kubuntu General News: Updated Kubuntu 17.10 RC ISOs now available

Following on from yesterday’s 1st spin of the 17.10 RC images by the ubuntu release team, today the RC images (marked Artful Final on the QA tracker) have been re-spun and updated.

Please update your ISOs if you downloaded previous images, and test as before.

Please help us by testing as much as you have time for. Remember, in particular we need i386 testers, on “bare metal” rather than VMs if possible.

Builds are available from:

http://iso.qa.ubuntu.com/qatracker/milestones/383/builds

The CD icon to the left of each ISO name links to the download URLs and options.

Take note of the Ubuntu Community ISO testing party on Monday 16th at 15:00 UTC:

https://community.ubuntu.com/t/ubuntu-17-10-community-iso-testing/458

Please attend and participate if you are able. The #ubuntu-on-air IRC channel on irc.freenode.net can be joined via a web client found beneath the live stream on ubuntuonair.com, or of course you can join in a normal IRC client.

Happy testing,

Rik Mills

Kubuntu Developer
Kubuntu Release team

16 October, 2017 01:06PM

Deepin

Deepin System Updates (2017.10.16)

Fixed system and application bugs: updated Policykit and fixed an issue where an environment variable was invalid; packaged WeChat for Enterprise and fixed a minimization issue; fixed and updated Qianniu Work; added and updated applications in the Deepin Store.

16 October, 2017 06:12AM by longxiang

October 15, 2017

Ubuntu developers

Stuart Langridge: Charles Paget Wade and the Underthing

I got to spend a few days with Andy and his wife Gaby and their exciting new dog, Iwa. I don’t get to see them as often as I should, but since they’ve now moved rather closer to Castle Langridge we’re going to correct that. And since they’re in the Cotswolds I got to peer at a whole bunch of things. Mostly things built of yellow stone, admittedly. It is a source of never-ending pleasure that despite twenty-three years of conversation we still never run out of things to talk about. There is almost nothing more delightful than spending an afternoon over a pint arguing about what technological innovation you’d take back to Elizabethan England. (This is a harder question than you’d think. Sure, you can take your iPhone back and a solar charger, and it’d be an incredibly powerful computer, but what would they use it for? They can do all the maths that they need; it’s just slower. Maybe you’d build a dynamo and gift them electricity, but where would you get the magnets from? Imagine this interspersed with excellent beer from the Volunteer and you have a flavour of it.)

There were also some Rollright Stones, as guided by Julian Cope’s finest-guidebook-ever The Modern Antiquarian. But that’s not the thing.

The thing is Snowshill Manor. There was a bloke and his name was Charles Paget Wade. Did some painting (at which he was not half bad), did some architecting (also not bad), wrote some poetry. And also inherited a dumper truck full of money by virtue of his family’s sugar plantations in the West Indies. This money he used to assemble an exceedingly diverse collection of Stuff, which you can now go and see by looking around Snowshill. What’s fascinating about this is that he didn’t just amass the Stuff into a big pile and then donate the house to the National Trust as a museum to hold it. Every room in the house was individually curated by him; this room for these objects, that room for those, what he called “an attractive set of rooms pictorially”. There’s some rhyme and some reason — one of the upstairs rooms is full of clanking, rigid, iron bicycles, and another full of suits of samurai armour — but mostly they’re things he just felt fitted together somehow. He’s like Auri from the Kingkiller Chronicles; this room cries out for this thing to be in it. (If you’ve read the first two Kingkiller books but haven’t read The Slow Regard of Silent Things, go and read it and know more of Auri than you currently do.) There’s a room with a few swords, and a clock that doesn’t work, and a folding table, and a box with an enormously ornate lock and a set of lawn bowls, and a cabinet containing a set of spectacles and a picture of his grandmother and a ball carved from ivory inside which is a second ball carved from the same piece of ivory inside which is yet another ball. The rhyme and the reason were all in his head, I think. I like to imagine that sometimes he’d wake up in his strange bedroom with its huge carved crucifix at four in the morning and scurry into the house to carefully carry a blue Japanese vase from the Meridian Room into Zenity and then sit back, quietly satisfied that the cosmic balance was somehow improved. Or to study a lacquered cabinet for an hour and a half and then tentatively shift it an inch to the left, so it sits there just so. So it’s right. I don’t know if the order, the placing, the detail of the collection actually speaks as loudly to anyone as it spoke to him, and it doesn’t matter. You could spend the rest of your life hearing the stories about everything there and never get off the ground floor.

Take that room of samurai armour, for example. One of the remarkable things about the collection (there are so many remarkable things about the collection) is that rather a lot of it is Oriental — Japanese or Chinese, mainly — but Wade never went to China or Japan. A good proportion of the objects came from other stately homes, selling off items after the First World War — whether because none of the family were left, or for financial reasons, or maybe just that the occupants came home and didn’t want it all any more. The armour is a case in point; Wade needed some plumbing done on the house and went off to chat to a plumber’s merchant about it, where he found a box of scrap metal. Since the bloke was the Lord High Emperor of looking for objects that caught his fancy, he had a look through this discarded pile and found in it… about fifteen suits of samurai armour. (A large box, to be sure.) So he asked the merchant what the score was, and was told: oh, those, yeah, take them if you want them.

This sort of thing doesn’t happen to me all that much.

Outside that room, just hanging on the wall, is the door from a carriage; one of the ones with the large wheels, all pulled by horses. Like the cabs that Sherlock Holmes rode in, or that the Queen takes to coronations. It was monogrammed ECC, and had one of those coats of arms where you just know that the family have been around for a while because two different shields have been quartered in it and then it’s been quartered again. After some entirely baseless speculation we discovered that it was owned by Countess Cowper. She married Lord Palmerston; her brother was William Lamb, Lord Melbourne, who was another Prime Minister and had the Australian city named after him; his wife was Lady Caroline Lamb, who infamously described Byron as “mad, bad, and dangerous to know”. History is all intertwined around itself.

None of the clocks in the house work. Apparently at one point Wade had a guest over who glanced at a clock and assumed she had plenty of time to catch her train. Of course, she missed it, and on hearing from him that of course the clocks don’t tell the right time, she was not best pleased. Not sure who it was. Virginia Woolf, or someone like that.

There is too much stuff. He can’t possibly have kept it all in his head. You can’t possibly keep it all in, walking around. Visitors ought to be banned from going into more than three or four rooms; by the time you’ve got halfway through it’s just impossible to give each place the attention it deserves. There are hardly any paintings; Wade liked actual things, not drawings or representations. It’s not an art gallery. It’s a craftsmanship gallery; Wade sought out things that were made, that showed beauty or artistry or ingenuity in their construction. Objects, not drawings; stuff that demonstrates human creation at work. The house is like walking around inside his head, I think. (“Sometimes I think the asylum is a head. We’re inside a huge head that dreams us all into being. Perhaps it’s your head, Batman.”)

Next time you’re near Evesham, go visit.

15 October, 2017 10:49PM

Kubuntu General News: Kubuntu Artful Aardvark (17.10) initial RC images now available

Artful Aardvark (17.10) initial Release Candidate (RC) images are now available for testing. Help us make 17.10 the best release yet!

Note: This is an initial spin of the RC images. It is likely that at least one more rebuild will be done on Monday.

Adam Conrad from the Ubuntu release team list:

Today, I spun up a set of images for everyone with serial 20171015.

Those images are *not* final images (ISO volid and base-files are still
not set to their final values), intentionally, as we had some hiccups
with langpack uploads that are landing just now.

That said, we need as much testing as possible, bugs reported (and, if
you can, fixed), so we can turn around and have slightly more final
images produced on Monday morning. If we get no testing, we get no
fixing, so no time like the present to go bug-hunting.

… Adam

The Kubuntu team will be releasing 17.10 on October 19, 2017.

This is an initial pre-release. Kubuntu RC pre-releases are NOT recommended for:

  • Regular users who are not aware of pre-release issues
  • Anyone who needs a stable system
  • Anyone uncomfortable running a possibly frequently broken system
  • Anyone in a production environment with data or workflows that need to be reliable

Kubuntu pre-releases are recommended for:

  • Regular users who want to help us test by finding, reporting, and/or fixing bugs
  • Kubuntu, KDE, and Qt developers

Getting the Kubuntu 17.10 Initial Release Candidate:

To upgrade to Kubuntu 17.10 pre-releases from 17.04, run

sudo do-release-upgrade -d

from a command line.

Download a Bootable image and put it onto a DVD or USB Drive here:

http://iso.qa.ubuntu.com/qatracker/milestones/383/builds (the little CD icon)

See our release notes: https://wiki.ubuntu.com/ArtfulAardvark/Kubuntu

Please report any bugs on Launchpad using the commandline:

ubuntu-bug packagename

Check on IRC channels, the Kubuntu Forum, or the Kubuntu mailing lists if you don’t know the package name. Once the bug is reported on Launchpad, please link to it on the qatracker where you got your RC image. Join the community ISO testing party: https://community.ubuntu.com/t/ubuntu-17-10-community-iso-testing/458

KDE bugs (bugs in Plasma or KDE applications) are still filed at https://bugs.kde.org.

15 October, 2017 11:58AM

The Fridge: Please get to testing Artful RCs (20171015)

Adam Conrad, on behalf of the Ubuntu Release Team, has spun up a set of images for everyone with serial 20171015.

Those images are *not* final images (ISO volid and base-files are still
not set to their final values), intentionally, as we had some hiccups
with langpack uploads that are landing just now.

That said, we need as much testing as possible, bugs reported (and, if
you can, fixed), so we can turn around and have slightly more final
images produced on Monday morning.  If we get no testing, we get no
fixing, so no time like the present to go bug-hunting.

https://lists.ubuntu.com/archives/ubuntu-release/2017-October/004224.html

Originally posted to the ubuntu-release mailing list on Sun Oct 15 05:40:12 UTC 2017  by Adam Conrad on behalf of the Ubuntu Release Team

15 October, 2017 07:37AM

hackergotchi for OSMC

OSMC

Grab a Vero 4K for £99 today

We're trying to move into a larger space and grow OSMC further. We hope you can help us with this and snag a bargain in the process.

For 24 hours only, we're offering Vero 4K for just £99, delivered worldwide within five working days.

Here are some key features of Vero 4K:

  • 4K support
  • HEVC / 10-bit / HDR support
  • HD audio
  • Fast 802.11ac WiFi and built-in Bluetooth 4.0 support
  • Five years of guaranteed updates and premium support.

Grab yours while you can. The clock is ticking!

15 October, 2017 03:26AM by Sam Nazarko

October 13, 2017

hackergotchi for Purism PureOS

Purism PureOS

Purism Collaborates with Cryptocurrency Monero to Enable Mobile Payments

Purism plans to utilize Monero’s privacy respecting platform to build a cash-like, digital payment system for Librem 5 smartphone users

SAN FRANCISCO, Calif., October 13, 2017 — Purism, maker of security focused hardware and software, today announced a collaboration with Monero, the only secure decentralized currency that is private by default. Purism recently started accepting Monero for payments in its online store, and this is a continuation of the company’s support for the cryptocurrency.

As more central services like Equifax are hacked, exposing vulnerable user data in unprecedented ways that cause permanent damage to people’s privacy, it has become clear that centralized, individually identifiable, historic, and permanent digital footprints create a serious threat to digital privacy and human rights. Purism, on the heels of its successful smartphone crowdfunding campaign which has raised more than $1.5 million, is looking to address this threat by incorporating cryptocurrencies by default into its mobile phone design, beginning with Monero.

“We must proactively plan for and address digital rights issues in the here and now, because by the time we face them in the future the damage will be irreversible,” said Todd Weaver, Founder & CEO of Purism. “Collaboration with Monero allows us to offer users a much lower barrier to entry for leveraging the benefits of a cryptocurrency, and our aim is to make it incredibly simple to use your Librem 5 smartphone to make secure, cash-like payments that safeguard your private information.”

Monero’s cryptocurrency offers a fungible, decentralized, private currency that is created to be identical to centuries of physical world transaction processes, primarily that cash given for goods or services is a one-time, non-recorded, mutual transaction.

“Collaborating with Purism addresses a major pain point for Monero. The Librem 5 makes it easy for the average user to use Monero for real world transactions on a mobile platform. In addition, the Librem 5, by using Free Libre Open Source Software provides the user with the opportunity to verify to a very high level its end point security, privacy and decentralization. This is in sharp contrast to many mobile platforms where the user has to trust a proprietary implementation. I am very excited to see the Librem 5 planning to have Monero support by default,” Francisco Cabañas, Core Team Member, The Monero Project.

“Creating a future where a person can buy or sell digital goods or services and still respect their privacy, similarly to cash but on the Internet, is a long-time dream that we plan to make a reality,” says Weaver.

Integrating Monero into Purism’s Librem 5 smartphone as part of its default mobile payment system can solve the problems plaguing the online transaction space, removing banks from the transaction, removing all central storage of private user data, keeping transactions private between two parties, all backed by the strength of an immutable cryptographic blockchain ledger.

About Monero

The Monero Project is a grassroots, community-driven initiative that advocates for privacy on a global scale by producing several free libre open source software projects, with the flagship offering being Monero, a fungible and decentralized cryptocurrency. The important guiding philosophies of Monero are security (ensuring that users are able to trust Monero with their transactions, without risk of error or attack), privacy (ensuring that users can transact Monero without fear of coercion, censorship, or surveillance), and decentralization (ensuring that no single person or group can control the network or reverse transactions). The goal is to provide a level of fungibility and privacy that is analogous to that of cash for the digital world.

About Purism

Purism is a Social Purpose Corporation devoted to bringing security, privacy, software freedom, and digital independence to everyone’s personal computing experience. With operations based in San Francisco (California) and around the world, Purism manufactures premium-quality laptops, tablets and phones, creating beautiful and powerful devices meant to protect users’ digital lives without requiring a compromise on ease of use. Purism designs and assembles its hardware in the United States, carefully selecting internationally sourced components to be privacy-respecting and fully Free-Software-compliant. Security and privacy-centric features come built-in with every product Purism makes, making security and privacy the simpler, logical choice for individuals and businesses.

Media Contact

Marie Williams, Coderella / Purism
+1 415-689-4029
pr@puri.sm
See also the Purism press room for additional tools and announcements.
 

13 October, 2017 05:22PM by Jeff

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: OpenStack Development Summary – October 13, 2017

Welcome to the seventh Ubuntu OpenStack development summary!

This summary is intended to be a regular communication of activities and plans happening in and around Ubuntu OpenStack, covering but not limited to the distribution and deployment of OpenStack on Ubuntu.

If there is something that you would like to see covered in future summaries, or you have general feedback on content please feel free to reach out to me (jamespage on Freenode IRC) or any of the OpenStack Engineering team at Canonical!

OpenStack Distribution

Stable Releases

Current in-flight SRUs for OpenStack-related packages:

Ceph 10.2.9 point release

Ocata Stable Point Releases

Pike Stable Point Releases

Horizon Newton->Ocata upgrade fixes

Recently released SRUs for OpenStack-related packages:

Newton Stable Point Releases

Development Release

OpenStack Pike was released in August and is installable on Ubuntu 16.04 LTS using the Ubuntu Cloud Archive:

sudo add-apt-repository cloud-archive:pike
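From there the usual apt workflow applies; the package below is just one example of what the Pike cloud archive provides:

sudo apt update
sudo apt install nova-compute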

OpenStack Pike also forms part of the Ubuntu 17.10 release later this month; final charm testing is underway in preparation for full Artful support for the charm release in November.

We’ll be opening the Ubuntu Cloud Archive for OpenStack Queens in the next two weeks; the first uploads will be the first Queens milestones, which will coincide nicely with the opening of the next Ubuntu development release (which will become Ubuntu 18.04 LTS).

OpenStack Snaps

The main focus in the last few weeks has been on testing of the gnocchi snap, which is currently installable from the edge channel:

sudo snap install --edge gnocchi

The gnocchi snap provides the gnocchi-api (nginx/uwsgi deployed) and gnocchi-metricd services. Due to some incompatibilities between gnocchi, cradox, and python-rados, the snap is currently based on the 3.1.11 release; we hope to work through the issues with the 4.0.x release in the next week or so, as well as set up multiple tracks for this snap so you can consume a version known to be compatible with a specific OpenStack release.
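Once those per-release tracks are in place, a specific series could be selected at install time with the usual channel syntax; the track name below is purely illustrative:

sudo snap install gnocchi --channel=3.1/edge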

Nova LXD

The team is currently planning work for the Queens development cycle; pylxd has received a couple of new features: support for storage pools as provided in newer LXD versions, and streaming of image uploads to LXD, which greatly reduces the memory footprint of client applications during uploads.

OpenStack Charms

Queens Planning

Out of the recent Queens PTG, we have a number of feature specs landed in the charms specification repository. There are a few more in the review queue; if you’re interested in plans for the Queens release of the charms next year, this is a great place to get a preview and provide the team feedback on the features that are planned for development.

Deployment Guide

The first version of the new Charm Deployment Guide has now been published to the OpenStack Docs website; we have a small piece of follow-up work to complete to ensure it’s published alongside other deployment project guides, but hopefully that should wrap up in the next few days. Please give the guide a spin and log any bugs that you might find!

Bugs

Over the last few weeks there has been an increased level of focus on the current bug triage queue for the charms; from a peak of 600 open bugs two weeks ago, with around 100 pending triage, we’ve closed out 70 bugs and the triage queue is down to a much more manageable level. The recently introduced bug triage rota has helped with this effort and should ensure we keep on top of incoming bugs in the future.

Releases

In the run-up to the August charm release, a number of test scenarios which required manual execution were automated as part of the release testing activity; this automation work reduces the effort to produce the release, and means that the majority of test scenarios can be run on a regular basis. As a result, we’re going to move back to a three month release cycle; the next charm release will be towards the end of November after the OpenStack summit in Sydney.

IRC (and meetings)

As always, you can participate in the OpenStack charm development and discussion by joining the #openstack-charms channel on Freenode IRC; we also have a weekly development meeting in #openstack-meeting-4 at either 1000 UTC (odd weeks) or 1700 UTC (even weeks) – see http://eavesdrop.openstack.org/#OpenStack_Charms for more details.

13 October, 2017 01:41PM

hackergotchi for Wazo

Wazo

New partnership with Loway's Queuemetrics-Live

We are proud to announce our successful integration with Loway's QueueMetrics-Live call-center suite.

Loway's QueueMetrics-Live call-center suite is now fully integrated with Wazo IPBX.

Wazo is a unified communication platform and a full-featured IPBX based on Asterisk technology, oriented towards enterprise communications.

The QueueMetrics-Live suite collects Asterisk data and generates analytical reports for over 180 metrics, covering all the key categories of effective call center management: Reporting, Supervisor page, Agent page, Quality assessment and much more.

This integration provides professional users with a top-class solution for monitoring everything that happens in their call center, turning Wazo IPBX into a 360-degree call centre platform with reporting and analytics.

We are extremely excited about this strategic partnership with Loway and about this successful integration. By using Wazo IPBX in combination with QueueMetrics-Live you can set up a fully-featured contact centre with a very reasonable budget investment. Wazo users can now benefit from the state-of-the-art analytics and reporting system of the QueueMetrics suite with a very simple and quick integration.

Said Lorenzo Emilitri, Founder of Loway.

The QueueMetrics connector with Wazo will enhance your call center in one click. The integration is easy and gives Wazo superpowers for call center statistics.

Said Sylvain Boily, Wazo development team Leader.

The QueueMetrics call center suite is available both on premise and as a hosted cloud service. For more information about QueueMetrics, visit the official website at www.queuemetrics.com.

For more information about the Wazo project visit wazo.community.

13 October, 2017 04:00AM by The Wazo Authors

hackergotchi for Ubuntu developers

Ubuntu developers

The Fridge: Artful Aardvark (17.10) Final Freeze

Adam Conrad, on behalf of the Ubuntu Release Team, is pleased to announce that artful has entered the Final Freeze period in preparation for the final release of Ubuntu 17.10 next week.

The current uploads in the queue will be reviewed and either accepted or rejected as appropriate by pre-freeze standards, but anything from here on should fit two broad categories:

1) Release critical bugs that affect ISOs, installers, or otherwise can’t be fixed easily post-release.

2) Bug fixes that would be suitable for post-release SRUs, which we may choose to accept, reject, or shunt to -updates for 0-day SRUs on a case-by-case basis.

For unseeded packages that aren’t on any media or in any supported sets, it’s still more or less a free-for-all, but do take care not to upload changes that you can’t readily validate before release.  That is, ask yourself if the current state is “good enough”, compared to the burden of trying to fix all the bugs you might accidentally be introducing with your shiny new upload.

We will shut down cronjobs and spin some RC images late Friday or early Saturday once the archive and proposed-migration have settled a bit, and we expect everyone with a vested interest in a flavour (or two) and a few spare hours here and there to get to testing to make sure we have another uneventful release next week.  Last minute panic is never fun.

https://lists.ubuntu.com/archives/ubuntu-release/2017-October/004221.html

Originally posted to the ubuntu-release mailing list on Fri Oct 13 08:42 UTC 2017 by Adam Conrad on behalf of the Ubuntu Release Team

13 October, 2017 12:46AM

October 12, 2017

Ubuntu Insights: Kubernetes the not so easy way

This is a guest blog by Michael Iatrou

The simplest method to deploy and operate Kubernetes on Ubuntu is with conjure-up. Whether the substrate is a public cloud (AWS, Azure, GCP, etc.), a private virtualized environment (VMware), or bare metal, conjure-up will allow you to quickly deploy a fully functional, production-grade Kubernetes.

But what if you wanted to delve a bit more into the details of the process? What if you wanted to use the core tools of the conjure-up apparatus directly?

Here is the task at hand: deploy Kubernetes on a bare metal server. The control plane needs to be containerized and retain the same characteristics as a production environment (observability, scalability, upgradability, etc.). The worker nodes’ real estate needs to be elastic, allowing nodes to be added and removed on demand, without disruption of the existing services. Extra points for sane networking.

You will need a machine equipped with at least 4 CPU cores, 16GB RAM, 100GB of free disk space (preferably SSD), and one NIC. As I am writing this, I am using MAAS to deploy Ubuntu 16.04.3 on such a machine. I have also configured a Linux bridge (br0) and attached the NIC (eth0) to it, using MAAS’ network configuration capabilities. Moreover, MAAS will serve as the DHCP and DNS server.
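For reference, if you are not driving the network configuration through MAAS, an equivalent manual bridge setup on Ubuntu 16.04 might look roughly like this in /etc/network/interfaces (interface names follow this example, and the bridge-utils package is assumed to be installed):

auto eth0
iface eth0 inet manual

auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_stp off
    bridge_fd 0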

We will be using machine containers (LXD) since they provide virtual machine operations semantics, and bare metal performance. We will also leverage Juju and the Canonical Distribution of Kubernetes bundle — yes, the same bundle that we use for production deployments on public cloud and bare-metal.

Let’s SSH into our freshly deployed Xenial, using user ubuntu and update the critical components, LXD and Juju, to their latest stable versions:

$ sudo add-apt-repository ppa:juju/stable -y
$ sudo add-apt-repository ppa:ubuntu-lxc/lxd-stable -y
$ sudo apt update
$ sudo apt dist-upgrade -y
$ sudo apt install lxd juju-2.0 -y

Here are the versions of our tools:
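Exact versions will differ depending on when you run this; a quick way to check from the shell is:

$ juju version
$ lxd --version
$ lxc --version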

We can now initialize LXD:
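The exact prompts vary between LXD releases, so treat the following as a rough sketch rather than a transcript; the defaults are fine for most questions:

$ sudo lxd init
# Pick a storage backend you are comfortable with (dir or zfs both work here).
# When asked whether to create a new network bridge, answer "no":
# the containers will attach to the existing br0 instead.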

We have skipped the creation of a new network bridge, because we want our LXD machine containers to use the existing bridge (br0). We are modifying the default LXD profile accordingly:

$ lxc network attach-profile br0 default eth0

We are now ready to bootstrap our local Juju controller:

$ juju bootstrap lxd lxd-local

The juju controller is now instantiated! As part of the process, two new LXD profiles have been created:

$ lxc profile list
+-----------------+---------+
|      NAME       | USED BY |
+-----------------+---------+
| default         | 0       |
+-----------------+---------+
| juju-controller | 1       |
+-----------------+---------+
| juju-default    | 0       |
+-----------------+---------+

Let’s create a new model for our k8s deployment:

$ juju add-model kubernetes
$ juju models
Controller: lxd-local

Model        Cloud/Region         Status     Machines  Cores  Access  Last connection
controller   localhost/localhost  available         1      -  admin   just now
default      localhost/localhost  available         0      -  admin   just now
kubernetes*  localhost/localhost  available         0      -  admin   never connected

Juju will automatically switch the active model to “kubernetes”. It will also create a new LXD profile, associated with this model:

$ lxc profile list
+-----------------+---------+
|      NAME       | USED BY |
+-----------------+---------+
| default         | 0       |
+-----------------+---------+
| juju-controller | 1       |
+-----------------+---------+
| juju-default    | 0       |
+-----------------+---------+
| juju-kubernetes | 0       |
+-----------------+---------+

So, Juju not only provides isolation through models, but ensures that if a model requires customized LXD containers, no other existing or future LXD profiles will be affected.

For Kubernetes, we will customize the juju-kubernetes profile to enable privileged machine containers and add an SSH key to it. Create a new YAML file juju-lxd-profile.yaml with the following configuration:

name: juju-kubernetes
config:
  user.user-data: |
    #cloud-config
    ssh_authorized_keys:
      - @@SSHPUB@@
  boot.autostart: "true"
  linux.kernel_modules: ip_tables,ip6_tables,netlink_diag,nf_nat,overlay
  raw.lxc: |
    lxc.aa_profile=unconfined
    lxc.mount.auto=proc:rw sys:rw
    lxc.cap.drop=
  security.nesting: "true"
  security.privileged: "true"
description: ""
devices:
  aadisable:
    path: /sys/module/nf_conntrack/parameters/hashsize
    source: /dev/null
    type: disk
  aadisable1:
    path: /sys/module/apparmor/parameters/enabled
    source: /dev/null
    type: disk

Make sure that you have generated an SSH key pair for user “ubuntu”, before you execute the following one-liner:

$ sed -ri "s'@@SSHPUB@@'$(cat ~/.ssh/id_rsa.pub)'" juju-lxd-profile.yaml

Then update the juju-kubernetes LXD profile:

$ lxc profile edit "juju-kubernetes" < juju-lxd-profile.yaml
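Before deploying, the merged profile can be sanity-checked with:

$ lxc profile show juju-kubernetes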

Final step, deploy Kubernetes already!

$ juju deploy canonical-kubernetes-101

You’ve noticed that I use version 101 of the canonical-kubernetes bundle. I could just as well have omitted the version number and allowed Juju to automatically get the latest available version. It’s going to take only a few minutes (or more, if you don’t have that SSD I mentioned earlier) before everything is successfully deployed:
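While the bundle deploys, progress can be followed from another terminal; all units should eventually settle into an active/idle state:

$ watch -c juju status --color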

All done, let’s start exploring! We need kubectl and the “admin” credentials to interact with the cluster. Install the former as a snap and copy the k8s config using Juju:

$ sudo snap install kubectl --classic
kubectl 1.7.4 from 'canonical' installed
$ mkdir -p ~/.kube
$ juju scp kubernetes-master/0:config ~/.kube/config

For the k8s UI experience, get the URL and credentials using:

$ kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: REDACTED
    server: https://172.27.29.19:443
  name: juju-cluster
contexts:
- context:
    cluster: juju-cluster
    user: admin
  name: juju-context
current-context: juju-context
kind: Config
preferences: {}
users:
- name: 
  user:
    password: shannonWouldBeProud
    username: admin
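As a quick smoke test (node names and counts will differ on your deployment):

$ kubectl get nodes
$ kubectl cluster-info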

We have a fully operational Kubernetes cluster, on bare metal, with bridged networking, not very different from what conjure-up deploys. Most importantly, we got a glimpse of how Juju and LXD are used behind the scenes. Of course conjure-up offers much more functionality and evolves quickly: its upcoming release adds support for Helm and Deis… The joy is in the journey, but keep moving fast.

12 October, 2017 07:21PM

Ubuntu Podcast from the UK LoCo: S10E32 – Possessive Open Chicken - Ubuntu Podcast

This week we’ve been playing Wifiwars, discuss what happened at the Ubuntu Rally in New York, serve up some command line lurve and go over your feedback.

It’s Season Ten Episode Thirty-Two of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

sudo snap install pulsemixer
pulsemixer
  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!

  • This week’s cover image is taken from Wikimedia.

Ubuntu Rally

Trouble comes to NYC

Inside the Ubuntu Rally

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

12 October, 2017 02:00PM

Ubuntu Insights: Security Team Weekly Summary: October 12, 2017

The Security Team weekly reports are intended to be very short summaries of the Security Team’s weekly activities.

If you would like to reach the Security Team, you can find us at the #ubuntu-hardened channel on FreeNode. Alternatively, you can mail the Ubuntu Hardened mailing list at: ubuntu-hardened@lists.ubuntu.com

During the last week, the Ubuntu Security team:

  • Triaged 238 public security vulnerability reports, retaining the 75 that applied to Ubuntu.
  • Published 12 Ubuntu Security Notices which fixed 43 security issues (CVEs) across 9 supported packages.

Ubuntu Security Notices

Bug Triage

Mainline Inclusion Requests

Updates to Community Supported Packages

  • Simon Quigley (tsimonq2) provided debdiffs for trusty-artful for git (LP: #1719740)

Development

  • Reviews:
    • PR 3973/cgroup freezer in support of layouts
    • PR 3998/utilize new seccomp logging features
    • PR 3999/add detection of stale mount namespaces for layouts
    • PR 3872/preserve TMPDIR and HOSTALIASES across snap-confine invocation
    • PR 3958/add support for /home on NFS
    • PR 4008/create missing mountpoints in support of layouts
  • submitted policy-updates-xxx PR 4002
  • submitted small lttng PR 4003
  • submitted small lxd PR 4004
  • fscrypt 0.2.1 and 0.2.2 packaged
  • libseccomp patches rebased to latest

What the Security Team is Reading This Week

Weekly Meeting

More Info

12 October, 2017 12:56PM

hackergotchi for Qubes

Qubes

QSB #34: GUI issue and Xen vulnerabilities (XSA-237 through XSA-244)

Dear Qubes Community,

We have just published Qubes Security Bulletin (QSB) #34: GUI issue and Xen vulnerabilities (XSA-237 through XSA-244). The text of this QSB is reproduced below. This QSB and its accompanying signatures will always be available in the Qubes Security Pack (qubes-secpack).

View QSB #34 in the qubes-secpack:

https://github.com/QubesOS/qubes-secpack/blob/master/QSBs/qsb-034-2017.txt

Learn about the qubes-secpack, including how to obtain, verify, and read it:

https://www.qubes-os.org/security/pack/

View all past QSBs:

https://www.qubes-os.org/security/bulletins/

View the XSA Tracker:

https://www.qubes-os.org/security/xsa/

             ---===[ Qubes Security Bulletin #34 ]===---

                          October 12, 2017


   GUI issue and Xen vulnerabilities (XSA-237 through XSA-244)

Summary
========

One of our developers, Simon Gaiser (aka HW42), while working on
improving support for device isolation in Qubes 4.0, discovered a
potential security problem with the way Xen handles MSI-capable devices.
The Xen Security Team has classified this problem as XSA-237 [01], which
was published today.

At the same time, the Xen Security Team released several other Xen
Security Advisories (XSA-238 through XSA-244). The impact of these
advisories ranges from system crashes to potential privilege
escalations. However, the latter seem to be mostly theoretical. See our
commentary below for details.

Finally, Eric Larsson discovered a situation in which Qubes GUI
virtualization could allow a VM to produce a window that has no colored
borders (which are used in Qubes as front-line indicators of trust).
A VM cannot use this vulnerability to draw different borders in place of
the correct one, however. We discuss this issue extensively below.

Technical details
==================

Xen issues
-----------

Xen Security Advisory 237 [01]:

| Multiple issues exist with the setup of PCI MSI interrupts:
| - unprivileged guests were permitted access to devices not owned by
|   them, in particular allowing them to disable MSI or MSI-X on any
|   device
| - HVM guests can trigger a codepath intended only for PV guests
| - some failure paths partially tear down previously configured
|   interrupts, leaving inconsistent state
| - with XSM enabled, caller and callee of a hook disagreed about the
|   data structure pointed to by a type-less argument
| 
| A malicious or buggy guest may cause the hypervisor to crash, resulting
| in Denial of Service (DoS) affecting the entire host.  Privilege
| escalation and information leaks cannot be excluded.

Xen Security Advisory 238 [02]:

| DMOPs (which were a subgroup of HVMOPs in older releases) allow guests
| to control and drive other guests.  The I/O request server page mapping
| interface uses range sets to represent I/O resources the emulation of
| which is provided by a given I/O request server.  The internals of the
| range set implementation require that ranges have a starting value no
| lower than the ending one.  Checks for this fact were missing.
| 
| Malicious or buggy stub domain kernels or tool stacks otherwise living
| outside of Domain0 can mount a denial of service attack which, if
| successful, can affect the whole system.
| 
| Only domains controlling HVM guests can exploit this vulnerability.
| (This includes domains providing hardware emulation services to HVM
| guests.)

Xen Security Advisory 239 [03]:

| Intercepted I/O operations may deal with less than a full machine
| word's worth of data.  While read paths had been the subject of earlier
| XSAs (and hence have been fixed), at least one write path was found
| where the data stored into an internal structure could contain bits
| from an uninitialized hypervisor stack slot.  A subsequent emulated
| read would then be able to retrieve these bits.
| 
| A malicious unprivileged x86 HVM guest may be able to obtain sensitive
| information from the host or other guests.

Xen Security Advisory 240 [04]:

| x86 PV guests are permitted to set up certain forms of what is often
| called "linear page tables", where pagetables contain references to
| other pagetables at the same level or higher.  Certain restrictions
| apply in order to fit into Xen's page type handling system.  An
| important restriction was missed, however: Stacking multiple layers
| of page tables of the same level on top of one another is not very
| useful, and the tearing down of such an arrangement involves
| recursion.  With sufficiently many layers such recursion will result
| in a stack overflow, commonly resulting in Xen to crash.
| 
| A malicious or buggy PV guest may cause the hypervisor to crash,
| resulting in Denial of Service (DoS) affecting the entire host.
| Privilege escalation and information leaks cannot be excluded.

Xen Security Advisory 241 [05]:

| x86 PV guests effect TLB flushes by way of a hypercall.  Xen tries to
| reduce the number of TLB flushes by delaying them as much as possible.
| When the last type reference of a page is dropped, the need for a TLB
| flush (before the page is re-used) is recorded.  If a guest TLB flush
| request involves an Inter Processor Interrupt (IPI) to a CPU in which
| is the process of dropping the last type reference of some page, and
| if that IPI arrives at exactly the right instruction boundary, a stale
| time stamp may be recorded, possibly resulting in the later omission
| of the necessary TLB flush for that page.
| 
| A malicious x86 PV guest may be able to access all of system memory,
| allowing for all of privilege escalation, host crashes, and
| information leaks.

Xen Security Advisory 242 [06]:

| The page type system of Xen requires cleanup when the last reference
| for a given page is being dropped.  In order to exclude simultaneous
| updates to a given page by multiple parties, pages which are updated
| are locked beforehand.  This locking includes temporarily increasing
| the type reference count by one.  When the page is later unlocked, the
| context precludes cleanup, so the reference that is then dropped must
| not be the last one.  This was not properly enforced.
| 
| A malicious or buggy PV guest may cause a memory leak upon shutdown
| of the guest, ultimately perhaps resulting in Denial of Service (DoS)
| affecting the entire host.

Xen Security Advisory 243 [07]:

| The shadow pagetable code uses linear mappings to inspect and modify the
| shadow pagetables.  A linear mapping which points back to itself is known as
| self-linear.  For translated guests, the shadow linear mappings (being in a
| separate address space) are not intended to be self-linear.  For
| non-translated guests, the shadow linear mappings (being the same
| address space) are intended to be self-linear.
| 
| When constructing a monitor pagetable for Xen to run on a vcpu with, the shadow
| linear slot is filled with a self-linear mapping, and for translated guests,
| shortly thereafter replaced with a non-self-linear mapping, when the guest's
| %cr3 is shadowed.
| 
| However when writeable heuristics are used, the shadow mappings are used as
| part of shadowing %cr3, causing the heuristics to be applied to Xen's
| pagetables, not the guest shadow pagetables.
| 
| While investigating, it was also identified that PV auto-translate mode was
| insecure.  This mode was removed in Xen 4.7 due to being unused, unmaintained
| and presumed broken.  We are not aware of any guest implementation of PV
| auto-translate mode.
| 
| A malicious or buggy HVM guest may cause a hypervisor crash, resulting in a
| Denial of Service (DoS) affecting the entire host, or cause hypervisor memory
| corruption.  We cannot rule out a guest being able to escalate its privilege.

Xen Security Advisory 244 [08]:

| The x86-64 architecture allows interrupts to be run on distinct stacks.
| The choice of stack is encoded in a field of the corresponding
| interrupt descriptor in the Interrupt Descriptor Table (IDT).  That
| field selects an entry from the active Task State Segment (TSS).
| 
| Since, on AMD hardware, Xen switches to an HVM guest's TSS before
| actually entering the guest, with the Global Interrupt Flag still set,
| the selectors in the IDT entry are switched when guest context is
| loaded/unloaded.
| 
| When a new CPU is brought online, its IDT is copied from CPU0's IDT,
| including those selector fields.  If CPU0 happens at that moment to be
| in HVM context, wrong values for those IDT fields would be installed
| for the new CPU.  If the first guest vCPU to be run on that CPU
| belongs to a PV guest, it will then have the ability to escalate its
| privilege or crash the hypervisor.
| 
| A malicious or buggy x86 PV guest could escalate its privileges or
| crash the hypervisor.
| 
| Avoiding to online CPUs at runtime will avoid this vulnerability.


GUI daemon issue
-----------------

Qubes OS's GUI virtualization enforces colored borders around all VM
windows. There are two types of windows. The first type are normal
windows (with borders, titlebars, etc.). In this case, we modify the
window manager to take care of coloring the borders. The second type are
borderless windows (with the override_redirect property set to True in
X11 terminology). Here, the window manager is not involved at all, and
our GUI daemon needs to draw a border itself. This is done by drawing a
2px border whenever window content is changed beneath that area. The bug
was that if the VM application had never sent any updates for (any part
of) the border area, the frame was never drawn. The relevant code is in
the gui-daemon component [09], specifically in gui-daemon/xside.c [10]:

    /* update given fragment of window image
     * can be requested by VM (MSG_SHMIMAGE) and Xserver (XExposeEvent)
     * parameters are not sanitized earlier - we must check it carefully
     * also do not let to cover forced colorful frame (for undecoraded windows)
     */
    static void do_shm_update(Ghandles * g, struct windowdata *vm_window,
               int untrusted_x, int untrusted_y, int untrusted_w,
               int untrusted_h)
    {

        /* ... */

        if (!vm_window->image && !(g->screen_window && g->screen_window->image))
            return;
        /* force frame to be visible: */
        /*   * left */
        delta = border_width - x;
        if (delta > 0) {
            w -= delta;
            x = border_width;
            do_border = 1;
        }
        /*   * right */
        delta = x + w - (vm_window->width - border_width);
        if (delta > 0) {
            w -= delta;
            do_border = 1;
        }
        /*   * top */
        delta = border_width - y;
        if (delta > 0) {
            h -= delta;
            y = border_width;
            do_border = 1;
        }
        /*   * bottom */
        delta = y + h - (vm_window->height - border_width);
        if (delta > 0) {
            h -= delta;
            do_border = 1;
        }

        /* ... */

    }

The above code is responsible for deciding whether the colored border
needs to be updated. It is updated if both:
a) there is any window image (vm_window->image)
b) the updated area includes a border anywhere

If either of these conditions is not met, no border is drawn. Note that if
the VM tries to draw anything there (for example, a fake border in a
different color), whatever is drawn will be overridden with the correct
borders, which will stay there until the window is destroyed.

Eric Larsson discovered that this situation (not updating the border
area) is reachable -- and even happens with some real world applications
-- when the VM shows a splash screen with a custom shape. While custom
window shapes are not supported in Qubes OS, VMs do not know this. The
VM still thinks the custom-shaped window is there, so it does not send
updates of content outside of that custom shape.

We fixed the issue by forcing an update of the whole window before
making it visible:

    static void handle_map(Ghandles * g, struct windowdata *vm_window)
    {

        /* ... */

        /* added code */
        if (vm_window->override_redirect) {
            /* force window update to draw colorful frame, even when VM have not
             * sent any content yet */
            do_shm_update(g, vm_window, 0, 0, vm_window->width, vm_window->height);
        }

        (void) XMapWindow(g->display, vm_window->local_winid);
    }

This needs some auxiliary changes in the do_shm_update function, to draw
the frame also in cases when there is no window content yet
(vm_window->image is NULL).

Commentary from the Qubes Security Team
========================================

For the most part, this batch of Xen Security Advisories affects Qubes
OS 3.2 only theoretically. In the case of Qubes OS 4.0, half of them do
not apply at all. We'll comment briefly on each one:

XSA-237 - The impact is believed to be denial of service only. In addition,
          we believe proper use of Interrupt Remapping should offer a generic
          solution to similar problems, to reduce them to denial of
          service at worst.

XSA-238 - The stated impact is denial of service only.

XSA-239 - The attacking domain has no control over what information
          is leaked.

XSA-240 - The practical impact is believed to be denial of service (and does not
          affect HVMs).

XSA-241 - The issue applies only to PV domains, so the attack vector
          is largely limited in Qubes OS 4.0, which uses HVM domains
          by default. In addition, the Xen Security Team considers this
          bug to be hard to exploit in practice (see advisory).

XSA-242 - The stated impact is denial of service only. In addition, the
          issue applies only to PV domains.

XSA-243 - The practical impact is believed to be denial of service. In addition,
          the vulnerable code (shadow page tables) is build-time disabled
          in Qubes OS 4.0.

XSA-244 - The vulnerable code path (runtime CPU hotplug) is not used
          in Qubes OS.

These results reassure us that switching to HVM domains in Qubes OS 4.0
was a good decision.

Compromise Recovery
====================

Starting with Qubes 3.2, we offer Paranoid Backup Restore Mode, which
was designed specifically to aid in the recovery of a (potentially)
compromised Qubes OS system. Thus, if you believe your system might have
been compromised (perhaps because of the bugs discussed in this
bulletin), then you should read and follow the procedure described here:

https://www.qubes-os.org/news/2017/04/26/qubes-compromise-recovery/

Patching
=========

The specific packages that resolve the problems discussed in this
bulletin are as follows:

  For Qubes 3.2:
  - Xen packages, version 4.6.6-32
  - qubes-gui-dom0, version 3.2.12

  For Qubes 4.0:
  - Xen packages, version 4.8.2-6
  - qubes-gui-dom0, version 4.0.5

The packages are to be installed in dom0 via the Qubes VM Manager or via
the qubes-dom0-update command as follows:

  For updates from the stable repository (not immediately available):
  $ sudo qubes-dom0-update

  For updates from the security-testing repository:
  $ sudo qubes-dom0-update --enablerepo=qubes-dom0-security-testing

A system restart will be required afterwards.

These packages will migrate from the security-testing repository to the
current (stable) repository over the next two weeks after being tested
by the community.

If you use Anti Evil Maid, you will need to reseal your secret
passphrase to new PCR values, as PCR18+19 will change due to the new
Xen binaries.

Credits
========

The GUI daemon issue was discovered by Eric Larsson.

The PCI MSI issues were discovered by Simon Gaiser (aka HW42).

For other issues, see the original Xen Security Advisories.

References
===========

[01] https://xenbits.xen.org/xsa/advisory-237.html
[02] https://xenbits.xen.org/xsa/advisory-238.html
[03] https://xenbits.xen.org/xsa/advisory-239.html
[04] https://xenbits.xen.org/xsa/advisory-240.html
[05] https://xenbits.xen.org/xsa/advisory-241.html
[06] https://xenbits.xen.org/xsa/advisory-242.html
[07] https://xenbits.xen.org/xsa/advisory-243.html
[08] https://xenbits.xen.org/xsa/advisory-244.html
[09] https://github.com/QubesOS/qubes-gui-daemon/
[10] https://github.com/QubesOS/qubes-gui-daemon/blob/master/gui-daemon/xside.c#L1317-L1447

--
The Qubes Security Team
https://www.qubes-os.org/security/

12 October, 2017 12:00AM

October 11, 2017

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: Ubuntu Server Development Summary – 10 Oct 2017

The purpose of this communication is to provide a status update and highlights for any interesting subjects from the Ubuntu Server Team. If you would like to reach the server team, you can find us at the #ubuntu-server channel on Freenode. Alternatively, you can sign up and use the Ubuntu Server Team mailing list.

Spotlight: Artful Release Next Week

Artful is in its final week of testing. This week the final freeze will occur and release candidates will be published. If you are interested, head over to the daily ISO page and give Artful a spin!

cloud-init

  • Released cloud-init master to artful
  • Queued SRU of cloud-init:master into Xenial and Zesty
  • Re-enabled tox support for integration tests

curtin

  • Queued SRU of curtin:trunk into Xenial and Zesty
  • Cleanup of artful vmtest cases

git-ubuntu

  • Version 0.3 has been released and is now in the edge snap channel with numerous bug fixes.

Bug Work and Triage

IRC Meeting

Ubuntu Server Packages

Below is a summary of uploads to the development and supported releases. Current status of the Debian to Ubuntu merges is tracked on the Merge-o-Matic page. For a full list of recent merges with change logs please see the Ubuntu Server report.

Uploads to the Development Release (Artful)

cloud-init, 17.1-18-gd4f70470-0ubuntu1, smoser
cloud-init, 17.1-17-g45d361cb-0ubuntu1, smoser
cloud-init, 17.1-13-g7fd04255-0ubuntu1, smoser
curtin, 0.1.0~bzr532-0ubuntu1, smoser
docker.io, 1.13.1-0ubuntu5, mwhudson
libseccomp, 2.3.1-2.1ubuntu3, tyhicks
lxd, 2.18-0ubuntu4, stgraber
maas, 2.3.0~beta1-6301-gca25180-0ubuntu1, andreserl
ntp, 1:4.2.8p10+dfsg-5ubuntu3, xnox
postfix, 3.2.3-1, None
Total: 10

Uploads to Supported Releases (Trusty, Xenial, Yakkety, Zesty)

cloud-init, xenial, 0.7.9-233-ge586fe35-0ubuntu1~16.04.2, smoser
cloud-init, zesty, 0.7.9-233-ge586fe35-0ubuntu1~17.04.2, smoser
juju-core, xenial, 2.2.4-0ubuntu0.16.04.1, mwhudson
juju-core, zesty, 2.2.4-0ubuntu0.17.04.1, mwhudson
libvirt, trusty, 1.2.2-0ubuntu13.1.23, paelzer
maas, trusty, 1.9.5+bzr4599-0ubuntu1~14.04.2, andreserl
ruby1.9.1, trusty, 1.9.3.484-2ubuntu1.5, leosilvab
squid3, trusty, 3.3.8-1ubuntu6.10, paelzer
vlan, zesty, 1.9-3.2ubuntu2.17.04.3, paelzer
vlan, xenial, 1.9-3.2ubuntu1.16.04.4, paelzer
vlan, trusty, 1.9-3ubuntu10.5, paelzer
Total: 11

Contact the Ubuntu Server team

11 October, 2017 09:51PM

Andres Rodriguez: MAAS 2.3.0 beta 2 released!

Hello MAASters!

I’m happy to announce that MAAS 2.3.0 Beta 2 has now been released and it is currently available in PPA and as a snap.
PPA Availability
For those running Ubuntu Xenial who would like to use beta 2, please use the following PPA:
ppa:maas/next
Snap Availability
For those running from the snap, or who would like to test the snap, please use the Beta channel on the default track:
sudo snap install maas --devmode --beta
 

MAAS 2.3.0 (beta 2)

Issues fixed in this release

https://launchpad.net/maas/+milestone/2.3.0beta2

  • LP: #1711760    [2.3] resolv.conf is not set (during commissioning or testing)

  • LP: #1721108    [2.3, UI, HWTv2] Machine details cards – Don’t show “see results” when no tests have been run on a machine

  • LP: #1721111    [2.3, UI, HWTv2] Machine details cards – Storage card doesn’t match CPU/Memory one

  • LP: #1721548    [2.3] Failure on controller refresh seem to be causing version to not get updated

  • LP: #1710092    [2.3, HWTv2] Hardware Tests have a short timeout

  • LP: #1721113    [2.3, UI, HWTv2] Machine details cards – Storage – If multiple disks, condense the card instead of showing all disks

  • LP: #1721524    [2.3, UI, HWTv2] When upgrading from older MAAS, Storage HW tests are not mapped to the disks

  • LP: #1721587    [2.3, UI, HWTv2] Commissioning logs (and those of v2 HW Tests) are not being shown

  • LP: #1719015    $TTL in zone definition is not updated

  • LP: #1721276    [2.3, UI, HWTv2] Hardware Test tab – Table alignment for the results doesn’t align with titles

  • LP: #1721525    [2.3, UI, HWTv2] Storage card on machine details page missing red bar on top if there are failed tests

  • LP: #1722589    syslog full of “topology hint” logs

  • LP: #1719353    [2.3a3, Machine listing] Improve the information presentation of the exact tasks MAAS is running when running hardware testing

  • LP: #1719361    [2.3 alpha 3, HWTv2] On machine listing page, remove success icons for components that passed the tests

  • LP: #1721105    [2.3, UI, HWTv2] Remove green success icon from Machine listing page

  • LP: #1721273    [2.3, UI, HWTv2] Storage section on Hardware Test tab does not describe each disk to match the design

11 October, 2017 08:57PM

hackergotchi for Tanglu developers

Tanglu developers

Cutelyst 1.9.0 released!

Cutelyst, the Qt web framework, got a new release. This is a rather small release, but it has some important fixes, so I decided to roll it sooner.

The dispatcher logic got 30% faster, and parsing URL-encoded data is also a bit faster in some cases (using less memory). Context objects can now be instantiated by library users to allow, for example, getting notifications from SQL databases and forwarding them to Cutelyst actions or Views. pkg-config support has also improved a bit but still misses most modules.

Have fun https://github.com/cutelyst/cutelyst/archive/v1.9.0.tar.gz


11 October, 2017 08:47PM by dantti

hackergotchi for Purism PureOS

Purism PureOS

Over $1.6 million raised for the Librem 5 — What this means for you

This Monday, 14 days early, we have crossed a historic milestone. By helping us reach our $1.5M goal early, you have secured your future and freed yourself from the chains of privacy-stripping mobile platforms and allowed us to continue upholding your digital rights with a convenient product made “by the people and for the people”; you have proven that there is a market demand for in-depth security & privacy-focused smartphones that can withstand the test of credibility, by virtue of true community ownership and auditability of the code.

With this milestone comes not only rejoicing about our collective achievement (and the potential of an even greater achievement in weeks to come, as contributions continue to add-up), but also the assurance that the Librem 5 phone project, as a product, will happen. The dreams of a generation will finally come to reality with a convenient smartphone hardware offering that you can truly own and control.

The $1.5 million milestone allows us to do a couple of things as it relates to the production of the physical product:

  • Immediately resume negotiations with component suppliers, with a much stronger hand (with money on the table to enter contractual relationships)
  • Produce more complete prototypes to evaluate, in order to begin development now
  • Move into hardware production as soon as possible, for the development kit
  • Begin developing the base software platform with the help of the community (fully in the open, upstream-first approach) to bring the product’s software to first stage “usable state” for early adopters.
  • Move into hardware production for finalized hardware products, begin order fulfillment for those who want their devices early (and are ready to help us smooth out the rough edges from the software side, in the beginning).

This will also allow us to seek additional partnerships and investment in parallel to amplify and speed-up our project.

…let’s go above and beyond: to stretch goals!

The goals above already represent a groundbreaking step for users around the world who have been clamoring—for years—for a mobile platform they can truly trust and own. But it’s only the beginning! As we are writing this, we are already at $1.6 million and counting, but we need to push further to accomplish more.

Indeed, to make this hardware product an even more compelling offer beyond early-adopters, we should go beyond the “base platform” and make it into an “awesome user experience”, as much as possible. This is something we hope to achieve by reaching a number of stretch goals in this campaign:

  1. $4m = VoIP phone number, call-in, call-out features: what this means is that we need to reach the $4 million milestone to hire the Matrix team to implement calls to/from the POTS/PSTN, to complement the existing VoIP features.
  2. $6m = Reverse engineering faster WiFi/Bluetooth firmware
  3. $8m = Free encrypted VPN tunnel service for all backers for 1 year
  4. $10m = Run Android applications in isolation on the Librem 5

Let’s do this!

11 October, 2017 07:35PM by Jeff

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: Kernel Team Summary – October 11, 2017

October 04 through October 09

Development (Artful / 17.10)

https://wiki.ubuntu.com/ArtfulAardvark/ReleaseSchedule

Important upcoming dates:

      Final Freeze - Oct 12 (~2 days away)
      Ubuntu 17.10 - Oct 19 (~1 week away)
   

We intend to target a 4.13 kernel for the Ubuntu 17.10 release. A 4.13.4 based kernel is available for testing from the artful-proposed pocket of the Ubuntu archive.

Stable (Released & Supported)

  • Released the following security kernel updates to fix embargoed CVE-2017-1000255:

      Zesty linux 4.10.0-37.41
      Xenial linux-hwe 4.10.0-37.41~16.04.1
    
  • Current cycle: 06-Oct through 28-Oct

               06-Oct   Last day for kernel commits for this cycle.
      09-Oct - 14-Oct   Kernel prep week.
      15-Oct - 27-Oct   Bug verification & Regression testing.
               30-Oct   Release to -updates.
  • Next cycle: 27-Oct through 18-Nov

               27-Oct   Last day for kernel commits for this cycle.
      30-Oct - 04-Nov   Kernel prep week.
      05-Nov - 17-Nov   Bug verification & Regression testing.
               20-Nov   Release to -updates.
    

Misc

  • The current CVE status
  • If you would like to reach the kernel team, you can find us at the #ubuntu-kernel
    channel on FreeNode. Alternatively, you can mail the Ubuntu Kernel Team mailing
    list at: kernel-team@lists.ubuntu.com.

11 October, 2017 03:28PM

Ubuntu Insights: elementary on why snaps are right for their Linux distro

elementary is the company behind the elementary OS Linux distribution and the associated app store. Celebrating their tenth anniversary this year, elementary began in 2007 with their first release in 2011. They are currently on their 4th release (Loki) and are working towards their 5th (Juno) with Jupiter, Luna and Freya as previous releases. At the Ubuntu Rally in New York, we spoke to elementary’s founder Daniel Fore and Systems Architect, Cody Garver, to discover what made snaps the right Linux application packaging format for their distro.

How did you find out about snaps?

We were searching for a confined app format and discovered snaps. The team at Canonical has been really friendly and even invited us to be on the technical oversight board, which furthered our interest. There was really strong outreach to us and they clearly wanted us involved from the beginning, so it’s been a great developer experience. We just weren’t getting that from any of the alternatives. Taking part in the Ubuntu Rally has helped us identify a few issues and solve them along the way too, so it’s been a very useful event for us.

What was the appeal of snaps which led you to invest in them?

The confinement that snaps offers provides an extra layer of security. In some cases, third-party developers targeting our platform want to ship dependencies with their app that may turn out to be security-sensitive. Confinement means that we don’t have to spend a lot of time performing a security audit in order for developers to do this; we can both save time reviewing and keep our users safe.

How does building snaps compare to other forms of packaging you produce? How easy was it to integrate with your existing infrastructure and process?

We haven’t begun integrating snap building into our infrastructure just yet, but the simplicity of the format is really promising. At the moment we supply a lot of documentation and packaging templates to our third-party developers and still end up spending a decent amount of time helping them package their apps. Debian packaging can be confusing because it requires multiple files and folders in a very specific format and structure. Having a single snap yaml file seems much easier and makes packaging on our platform more attractive. Secondly, the format isolation allows for less opportunity to break the system or otherwise force abandonment – you can use newer libraries than the system offers without altering the base.
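For illustration, a minimal snapcraft.yaml in the spirit of the upstream GNU Hello example looks something like this (names and versions are only for demonstration):

name: hello
version: '2.10'
summary: GNU Hello, the "hello world" snap
description: |
  A small example showing the single-file snap packaging format.
grade: stable
confinement: strict

apps:
  hello:
    command: hello

parts:
  gnu-hello:
    source: http://ftp.gnu.org/gnu/hello/hello-2.10.tar.gz
    plugin: autotools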

How do you think snap packaging helps your users? Did you get any feedback from them?

We expect our users to appreciate the security model that snaps have. Being able to enable runtime permissions or restrict access to certain hardware will be well received. It is a good message on the privacy side. Users are aware of the problems that unconfined package formats cause, even if they aren’t necessarily aware that lack of confinement is the problem, and they tend to be upset with the general lack of stability that PPAs inevitably cause.

How would you improve the snap system?

One suggestion we would make is to improve the error messaging. In other words, make it more obvious for the developers on what steps need to be taken to fix any problems they encounter. There was a decent amount of discussion this week about changing some default options in the Snap yaml that will make snap file sizes much smaller. One concern that we have is the ability for large ISVs who don’t target our platform to be able to provide their own updates outside of the store and we’re interested to see where discussion goes on that.

11 October, 2017 02:00PM

Ubuntu Insights: Private Docker Registries and the Canonical Distribution of Kubernetes

This originally appeared on Tim Van Steenburgh’s blog

How do I use a private image registry with my Kubernetes cluster? How do I set up my own registry? Let’s look at how to perform these tasks on the Canonical Distribution of Kubernetes (CDK).

Using an Existing Insecure Registry

In order to connect to an insecure registry, the Docker daemon must be reconfigured and an --insecure-registry option must be added.

This can be done directly via Juju, using the command:

juju config kubernetes-worker docker-config="--insecure-registry registry.domain.com:5000"
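Once that configuration has rolled out to the workers, their Docker daemons can pull from the plain-HTTP registry as usual; the image name below is only an example:

docker pull registry.domain.com:5000/myapp:latest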

Creating a Secure CDK Registry

CDK provides an option to deploy a secure Docker registry within the cluster, and expose it via an ingress.

Note: The registry provided is not a production grade registry, and should not be used in a production context.

Requirements

To deploy and use the provided registry, you will need:

  • A DNS entry (registry.acme.com) pointing at the ingress of the cluster (directly, via DNS round robin or with a load balancer)
  • A valid TLS certificate and key for registry.acme.com (registry.crt and registry.key)
  • A set of usernames and passwords stored in a file for htpasswd authentication (format: username:password, one user per line)
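
For example, a htpasswd.cleartxt file might look like this (the users shown are purely hypothetical):

alice:s3cretpassword
bob:anotherpassword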

Given an htpasswd.cleartxt file filled with users, the following loop will generate an encoded version of it:

# Hash each username:password pair using the xmartlabs/htpasswd image
# and append the result to htpasswd.enc
while read line
do
  USER=$(echo ${line} | cut -f1 -d':')
  PASS=$(echo ${line} | cut -f2 -d':')
  docker run \
    --rm \
    xmartlabs/htpasswd \
    ${USER} ${PASS} \
    | tee -a htpasswd.enc
done < htpasswd.cleartxt
# Strip any empty lines left over in the encoded file
sed -i "/^$/d" htpasswd.enc

Deployment

To deploy the registry, run:

juju run-action kubernetes-worker/0 registry \
  domain=registry.acme.com \
  htpasswd="$(base64 -w0 htpasswd.enc)" \
  htpasswd-plain="$(base64 -w0 htpasswd.cleartxt)" \
  tlscert="$(base64 -w0 registry.crt)" \
  tlskey="$(base64 -w0 registry.key)" \
  ingress=true
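
Once the action completes, clients can log in with one of the htpasswd users and push images as usual. A quick sketch, where the user and image names are hypothetical:

docker login registry.acme.com -u alice
docker tag nginx:latest registry.acme.com/nginx:latest
docker push registry.acme.com/nginx:latest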

Tear down

To tear down the registry, run:

juju run-action kubernetes-worker/0 registry \
  delete=true \
  ingress=true

Storage

The registry provided by CDK will use a /srv/registry hostPath to store the images. This means that in case of a rescheduling of the registry (failure, overload…), if the new pod is scheduled on a different host, you will lose your images.

Alternatively, you can use a network mount such as NFS on all workers to benefit from a single point of storage for the images.

Ingress Configuration

The CDK registry action assumes that the running ingress is nginx and will change its configuration to increase the client_max_body_size from 1MB to 1GB. This is done via a patch, so it will not overwrite other configuration keys.

If you are using another ingress, deploy with ingress=false and make sure your ingress supports image uploads (typical images are ~300MB, and typical CUDA images are 1 to 4GB).

Alternatives

If you want a similar setup but with flexibility on storage and management via native Kubernetes tools, you will find a derived work delivered as a Helm chart at https://github.com/madeden/charts

If you’d like to follow along more closely with CDK development, you can do so in the following places:

Until next time!

11 October, 2017 12:57PM

hackergotchi for ARMBIAN

ARMBIAN

Orange Pi PC2


Ubuntu server – mainline kernel
 
Command line interface – server usage scenarios.

Testing

Ubuntu desktop – mainline kernel
 
Server and light desktop usage scenarios.

Testing

other download options and archive

Known issues

All currently available OS images for H5 boards are experimental

  • don't use them for anything productive; use them only to give constructive feedback to the developers
  • shutdown might result in a reboot instead, or the board might not fully power off (cut power physically)

Notes

The following features should work with the mainline kernel:

  • Ethernet
  • DVFS
  • THS
  • DRM/KMS HDMI display driver with audio (2ch / stereo only) and CEC support

Refer to the status matrix for mainline kernel support status

Features that do not work:

  • CVBS (composite video) output
  • Proper shutdown – switching off the power is recommended
  • Suspend/resume

Features that do not work and will not be added anytime soon:

  • Hardware accelerated video decoding (Cedrus)
  • Mali driver
  • CSI camera input

Desktop

Quick start | Documentation

Preparation

Make sure you have a good & reliable SD card and a proper power supply. Archives can be uncompressed with 7-Zip on Windows, Keka on OS X and 7z on Linux (apt-get install p7zip-full). RAW images can be written with Etcher (all OS).
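
On Linux you can also write the image from the command line instead of Etcher. A minimal sketch, assuming the downloaded archive name and that /dev/sdX is your SD card (double-check the device before running dd):

7z x Armbian_*.7z
sudo dd if=Armbian_*.img of=/dev/sdX bs=1M status=progress
sync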

Boot

Insert SD card into a slot and power the board. (First) boot (with DHCP) takes up to 35 seconds with a class 10 SD Card and cheapest board.

Login

Login as root on HDMI / serial console or via SSH and use password 1234. You will be prompted to change this password at first login. Next you will be asked to create a normal user account that is sudo enabled (beware of default QWERTY keyboard settings at this stage).

Tested hardware

11 October, 2017 03:06AM by igorpecovnik

October 10, 2017

Cumulus Linux

Choosing your chassis: a look at different models

Simplicity, scalability, efficiency, flexibility — who doesn’t want to be able to use those words when talking about their data center? As more and more companies adopt web-scale networking and watch their growth rapidly increase, the need for an equally scalable and powerful solution becomes apparent. Fortunately, Cumulus Networks has a solution. We believe in listening to what our customers want and providing them with what they need; that’s why we support the Facebook Backpack for 64 to 128 ports of 100gig connectivity and the Edge-Core OMP800 for 256 ports of 100gig connectivity. So, what exactly is so great about these chassis? Let’s take a closer, more technical look.

The topology

When designing and building out new data centers, customers have universally agreed on spine and leaf networks as the way to go. Easy scale out by adding more leafs when server racks are added and more manageable oversubscription by adding more spines makes this design an obvious choice. We at Cumulus have built some of the largest data centers in the world out of one-rack-unit switches: 48 port leafs and 32 port spines. These web-scale data centers build “three-tier” spine and leaf networks using spine and leaf “pods” and an additional layer of “superspines” to connect the pods together.

chassis switch
(A three-tier spine and leaf network with superspines at the top and an example of a single spine and leaf pod)

On the other side of the spectrum you have customers building less than 16 racks of compute, easily building two-tier spine and leaf networks with two leafs per rack.

If you have more than 16 racks but don’t want to go through the trouble of designing and cabling up a layer of super spines, there haven’t been many options. Now that Cumulus supports the Facebook Backpack and the Edge-Core OMP800, customers can build much larger pods of at least 128 racks, with two leafs per rack. All of the front panel ports support any mix of interface speeds, from single 100g, 2×50, 1×40, 4×25 or 4x10g breakouts.

chassis switch
(The EdgeCore OMP800 256 port chassis on the left and the Facebook Backpack 128 port chassis on the right)

Both the Backpack and OMP800 use a spine and leaf network inside the chassis, exactly like a spine-superspine network design, but without all the cabling complexity. What both Facebook and Edge-Core have done is taken each leaf and turned it into a line card, and taken each spine and turned it into a fabric card. Inside the chassis switch, everything acts like a single, standalone switch. There is no hidden fabric protocol or secret communication channel between the cards. It’s all ethernet with a fully routed, layer 3 backplane.

chassis switch
(The super spines of the three tier spine and leaf network become the fabric cards within the chassis and the pod spines become the line cards of the chassis)

The line card

On both chassis, each line card contains two Broadcom Tomahawk ASICs, each controlling 16 front panel ports. This is the same Tomahawk ASIC in the 32 port, 100gig switches already offered by Cumulus today. The remaining 16 ports on the line card connect into the fabric, creating 32 100gig ports per ASIC: 16 front panel ports that connect to leafs and 16 internal fabric ports. Each ASIC has its own, dedicated CPU. With an ASIC and CPU, each half of the line card acts as a fully independent 32 port switch, running its own copy of Cumulus Linux with its own configuration. This means you can upgrade or reboot half of a line card without impacting the other half.

chassis switch

(This is a line card from the OMP800. The two yellow boxes are the two Tomahawk ASICs, one for each half of the card. At the bottom are the 32 QSFP ports. The connectors at the top provide connectivity to the fabric modules. The two black heatsinks at the top and bottom of the image in the center are the two CPUs, one for each ASIC and each half of the line card)

The Fabric

Inside the fabric, the Backpack and OMP800 do things a little differently. The Backpack has four fabric modules, each with a single Tomahawk ASIC. The OMP800 also has four fabric modules, but puts two Tomahawk ASICs on each module. Just like the line cards, the fabric modules have their own, dedicated CPU per ASIC. Again, each ASIC runs its own full version of Cumulus Linux with its own configuration.

chassis switch
(This is the OMP800 fabric module. The two fabric ASICs are highlighted. There are no QSFP ports, since all connections are internal to the chassis. Just like the line cards, there are two black CPU heatsinks, near the top of the image).

The fabric itself, on both chassis, is fully non-blocking, with 1:1 oversubscription. On the Backpack, there are four ethernet connections from each line card ASIC to each fabric card. On the OMP800, since there are twice as many fabric ASICs, there are only two connections from each line card ASIC to each fabric ASIC.

The Cumulus touch

The idea of multiple connections and a spine and leaf style topology within a chassis is nothing new, but what’s different with the chassis supported by Cumulus is that each element in the chassis is a fully independent switch. Each line card has normal ethernet connections to each fabric card and they just speak BGP between them!

Logging into an OMP800, line card 1, ASIC 1, we see 32 ethernet interfaces: swp1-16 and fp0-15. Like all Cumulus switches the “swp” ports are the front panel ports; fp0-15 are the internal ethernet ports that connect the line card to the fabric cards. We can see each fabric card (labeled “fc”) as an LLDP peer and 16 way ECMP for a route across the fabric ports.

chassis switch
(Here we see 16 LLDP peers, one to each ASIC on each fabric card.)

chassis switch
(The routing output on the OMP800 shows 16 equal cost routes across each of the fp interfaces.)

Because the backplane is just ethernet, it’s easy to look at interface counters, troubleshoot issues between line cards and fabric cards and understand exactly what link or fabric card will be used for a given packet. No more taking screenshots of the support engineer running secret commands on how to troubleshoot your chassis. You could even SPAN a line card’s fabric ports or ERSPAN any port on the fabric. As a former TAC engineer, I think that’s pretty cool.

chassis switch
(Each fp has its own Rx, Tx and Error counters. You can see how each individual connection is doing and even shut down a specific port if there were problems)
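
Because the fabric ports are ordinary Linux netdevs, the usual Cumulus Linux tooling applies on a line-card ASIC. A rough sketch of the kind of checks shown in the screenshots above (not output captured from a real chassis):

net show lldp                                   # fabric card ASICs appear as regular LLDP neighbours
net show route                                  # shows the 16-way ECMP across the fp interfaces
cat /sys/class/net/fp0/statistics/rx_packets    # raw per-port counters, like any Linux interface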

CPU and physical specs

I mentioned earlier that each ASIC has its own CPU, meaning you can upgrade or reboot one ASIC at a time without impacting the rest of the system. On traditional chassis all of the line cards and all of the fabric cards share a single, central CPU on the supervisor module. On these traditional chassis architectures, that central supervisor owns and manages the whole system. If you lose your supervisor you lose the system. If you want to upgrade you hope that a "hitless upgrade" feature like ISSU works, and that's only if your upgrade path is supported. I'm sure we've all seen the feature or major upgrade path that isn't supported with ISSU.

In contrast, with a stand alone CPU I can upgrade half of a line card or any part of the fabric without worrying. Since it’s just ethernet and just BGP inside, there is no need for a fragile system like ISSU to make everything play nice across software versions.

Finally, looking at the physical aspects, the OMP800 provides 256 ports of connectivity in 10 rack units, saving 14 rack units of space. Having a chassis instead of 32 port switches also saves 256 cables and 512 optics and over 30% in power draw. The savings on optics alone, we all know, will make the chassis pay for itself!

If you’re considering building more than 16 racks of compute, the new Cumulus open networking chassis seem like a no brainer. Try out Cumulus technology for free with Cumulus in the Cloud and get your data center revolution started.

The post Choosing your chassis: a look at different models appeared first on Cumulus Networks Blog.

10 October, 2017 10:53PM by Pete Lumbis

hackergotchi for Ubuntu developers

Ubuntu developers

Canonical Design Team: Designing product page templates for ubuntu.com

During our user testing sessions on ubuntu.com, we often receive feedback from users about content on the site (“I can’t find this”, “I’d like more of that” or “I want to know this”). Accumulated feedback like this contributed to our decision here on the Web team to find a more standardised way of designing our product landing pages. We have two main motivations for doing this work:

1) To make our users' lives easier. The www.ubuntu.com site has a long legacy of bespoke page design, which has resulted in an inconsistent content strategy across some of our pages. In order to evaluate and compare our products effectively, our users need consistent information delivered in a consistent way.

2) To make our lives easier. Here at Canonical, we don't have huge teams to write copy, make videos or create content for our websites. Because of this, our product pages need to be quick and easy to design, build and maintain – which they will be if they all follow a standardised set of guidelines.

After a process of auditing the current site content, researching competitors, and refining a few different design routes, we reached a template that we all agreed was better than what we currently had in most cases. Here are some annotated photos of the process.

Web pages printed out with post-it notes

First we completed a thorough content audit of existing ubuntu.com product pages. Here the coloured post-it notes denote different types of content.

Flip-chart of hand-written list of components for a product page

Our audit of the site resulted in this unprioritized ‘short-list’ of possible types of content  to be included on a product page.

Early wireframe sketch 1Early wireframe sketch 2Early wireframe sketch 3

Some examples of early wireframe sketches.

Here is an illustrated wireframe of the new template. I use it as a guideline for our stakeholders, designers and developers to follow when creating new product pages or enhancing existing ones.

Diagram of a product page template for ubuntu.com

We have begun rolling out this new template across our product pages –  e.g. our server-provisioning page. Our plan is to continue to test, watch and measure the pages using this template and then to iterate on the design accordingly. In the meantime, it’s already making our lives here on the Web Team easier!

10 October, 2017 04:50PM

Sebastian Kügler: 4 reasons why the librem 5 got funded

Librem 5 Plasma MobileLibrem 5 Plasma Mobile
In the past days, the campaign to crowd-fund a privacy-focused smartphone built on top of Free software and in collaboration with its community reached its funding goal of 1.5 million US dollars. While many people doubted that the crowdfunding campaign would succeed, it is actually hardly surprising if we look what the librem 5 promises to bring to the table.

1. Unique Privacy Features: Kill-switches and auditable code

Neither Apple nor Android have convincing stories when it comes to privacy. Ultimately, they're both under the thumbs of a restrictive government, which, to put it mildly, doesn't give a shit about privacy and has created the most intrusive global spying system in the history of mankind. Thanks to the U.S., we now live in the dystopian future of Orwell's 1984. It's time to put an end to this with hardware kill switches that cut off power to the radio, microphone and camera, so phones can't be hacked into anymore to listen in on your conversations, take photos you never knew were taken and send them to people you definitely would never voluntarily share them with. All that comes with auditable code, which is something that we as citizens should demand from our government. With a product on the market supplying these features, it becomes very hard for your government to argue that they really need their staff to use iphones or Android devices. We can and we should demand this level of privacy from those who govern us and handle our data. It's a matter of trust.
Companies will find this out first, since they’re driven by the same challenges but usually much quicker to adopt technology.

2. Hackable software means choice

The librem 5 will run a mostly standard Debian system with a kernel that you can actually upgrade. The system will be fully hackable, so it will be easy for others to create modified phone systems based on the librem. This is so far unparalleled and brings the freedom the Free software world has long waited for, it will enable friendly competition and collaboration. All this leads to choice for the users.

3. Support promise

Can a small company such as Purism actually guarantee support for a whole mobile software stack for years into the future? Perhaps. The point is, even in case they fail (and I don't see why they would!), the device isn't unsupported. With the librem, you're not locked into a single vendor's ecosystem, but you buy into the support of the whole Free software community. This means that there is a very credible support story, as support doesn't have to come from a single vendor, and the workload is relatively limited in the first place. Debian (which is the base for PureOS) will be maintained anyway, and so will Plasma, as tens of millions of users already rely on it. The relatively small part of the code that is unique to Plasma Mobile (and thus isn't used on the desktop) is not that hard to maintain, so support is manageable, even for a small team of developers. (And if you're not happy with it, and think it can be done better, you can even take part.)

4. It builds and enables a new ecosystem

The Free software community has long waited for this hackable device. Many developers just love to see a platform they can build software for that follows their goals, that allows development with a proven stack. Moreover, convergence allows users to blur the lines between their devices, and advancing that goal hasn’t been on the agenda with the current duopoly.
The librem 5 will put Matrix on the map as a serious contender for communication. Matrix has rallied quite a bit of momentum to bring more modern mobile-friendly communication, chat and voice to the Free software eco-system.
Overall, I expect the librem 5 to make Free software (not just open-source-licensed, but openly developed Free software) a serious player also on mobile devices. The Free software world needs such a device, and now is the time to create it. With this huge success comes the next big challenge, actually creating the device and software.

The unique selling points of the librem 5 definitely strike a chord with a number of target groups. If you’re doubtful that its first version can fully replace your current smart phone, that may be justified, but don’t forget that there’s a large number of people and organisations that can live with a more limited feature set just fine, given the huge advantages that private communication and knowing-what’s-going-on in your device brings with it.
The librem 5 really brings something very compelling to the table and those are the reasons why it got funded. It is going to be a viable alternative to Android and iOS devices that allows users to enjoy their digital life privately. To switch off tracking, and to sleep comfortably.
Are you convinced this is a good idea? Don’t hesitate to support the campaign and help us reach its stretch goals!

10 October, 2017 02:21PM

hackergotchi for ARMBIAN

ARMBIAN

Orange Pi Zero 2+ H5


Ubuntu server – mainline kernel
 
Command line interface – server usage scenarios.

Testing

other download options and archive

Known issues

All currently available OS images for H5 boards are experimental

  • don't use them for anything productive; use them only to give constructive feedback to the developers
  • shutdown might result in a reboot instead, or the board might not fully power off (cut power physically)

Quick start | Documentation

Preparation

Make sure you have a good & reliable SD card and a proper power supply. Archives can be uncompressed with 7-Zip on Windows, Keka on OS X and 7z on Linux (apt-get install p7zip-full). RAW images can be written with Etcher (all OS).

Boot

Insert SD card into a slot and power the board. (First) boot (with DHCP) takes up to 35 seconds with a class 10 SD Card and cheapest board.

Login

Login as root on HDMI / serial console or via SSH and use password 1234. You will be prompted to change this password at first login. Next you will be asked to create a normal user account that is sudo enabled (beware of default QWERTY keyboard settings at this stage).

Tested hardware

10 October, 2017 11:40AM by igorpecovnik

October 09, 2017

hackergotchi for Purism PureOS

Purism PureOS

Purism Meets Its $1.5 Million Goal for Security Focused Librem 5 Smartphone One Week After Surging Past the 50% Mark

Self-hosted crowdfunder out grosses combined funding of Purism’s previous three campaigns

SAN FRANCISCO, Calif., October 9, 2017 — Purism, the social purpose corporation which designs and produces popular privacy conscious hardware and software, has reached its $1.5 million crowdfunding goal to create the world’s first encrypted, open smartphone ecosystem that gives users complete device control, the Librem 5. After amassing incredible support from GNU/Linux enthusiasts and the Free/Open-Source community at large, forging partnerships with KDE and the GNOME Foundation in the process, Purism plans to use the remaining two weeks of the campaign to push for its stretch goals and start working on the next steps for bringing the phone to market.

Reaching the $1.5 million milestone weeks ahead of schedule enables Purism to accelerate the production of the physical product. The company plans to move into hardware production as soon as possible to assemble a developer kit as well as initiate building the base software platform, which will be publicly available and open to the developer community.

Breaking away from the iOS/Android OS duopoly, the Librem 5’s isolation-based security-focused PureOS will offer basic communication services: phone, email, messaging, voice, camera, browsing, and will expand after shipment and over time to update with more free software applications, through shared collaboration with the developer community (not “read-only open source”, but true free software collaboration). In addition to the ability to integrate with both GNOME and Plasma Mobile, the $599 Librem 5 will come equipped with hardware kill switches, a popular feature in Purism’s laptops, that allow for users to turn on and off the camera, microphone, WiFi and Bluetooth at will.

“We are thrilled that the community has supported us in making this goal a reality, and now comes the real work of bringing the Librem 5 to production and into the hands of our backers,” says Todd Weaver, Founder and CEO, Purism. “We believe we’ve demonstrated a growing interest in technologies that proactively protect and secure our digital identities, and are proud to be a part of catalyzing this movement.”

The impressive milestone has already generated celebration in the community:

About Purism

Purism is a Social Purpose Corporation devoted to bringing security, privacy, software freedom, and digital independence to everyone’s personal computing experience. With operations based in San Francisco (California) and around the world, Purism manufactures premium-quality laptops, tablets and phones, creating beautiful and powerful devices meant to protect users’ digital lives without requiring a compromise on ease of use. Purism designs and assembles its hardware in the United States, carefully selecting internationally sourced components to be privacy-respecting and fully Free-Software-compliant. Security and privacy-centric features come built-in with every product Purism makes, making security and privacy the simpler, logical choice for individuals and businesses.

Media Contact

Marie Williams, Coderella / Purism
+1 415-689-4029
pr@puri.sm
See also the Purism press room for additional tools and announcements.
 

09 October, 2017 11:59PM by Jeff

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: LXD Weekly status #18


Weekly status for the week of the 2nd to the 8th of October 2017.

Introduction

After everyone got back home from New York City, we got back to work on LXD, LXC and LXCFS.

On the LXD front, other than a large amount of bugfixes, we’ve made our online documentation available through Read The Docs. It can be found here: https://lxd.readthedocs.io

We’ve also been designing and implementing a new API to retrieve an overview of system resources (CPU and RAM) and storage pool resources (disk space and inodes). This should make it easier to interact with a remote LXD daemon sitting on an unknown physical server.
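
As a rough sketch of how this might be consumed once it lands (the /1.0/resources endpoint path is an assumption based on the design work described above, not a released API):

# Query the LXD REST API over the local unix socket
curl --unix-socket /var/lib/lxd/unix.socket lxd/1.0/resources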

On the LXC side, a new template to generate OCI based application containers has been included. And we’ve otherwise been fixing some of the bugs reported after the LXC 2.1 release.

The other main focus across all projects has been preparing the stable branches so that we can finally release bugfix releases for all the various branches of LXC, LXD and LXCFS. The branches are all now ready and we’re doing some testing on them before releasing. A call for testing is available here.

Lastly, we’re excited to announce that we’ll be running the containers devroom at FOSDEM 2018. More details to come soon.

Upcoming conferences and events

  • Open Source Summit Europe (Prague, October 2017)
  • Linux Piter 2017 (St. Petersburg, November 2017)
  • FOSDEM 2018 (Brussels, February 2018)

Ongoing projects

The list below is feature or refactoring work which will span several weeks/months and can’t be tied directly to a single Github issue or pull request.

Upstream changes

The items listed below are highlights of the work which happened upstream over the past week and which will be included in the next release.

LXD

LXC

LXCFS

  • Nothing to report

Distribution work

This section is used to track the work done in downstream Linux distributions to ship the latest LXC, LXD and LXCFS as well as work to get various software to work properly inside containers.

Ubuntu

  • Nothing to report

Snap

  • Fixed a number of issues with the “lxc” wrapper related to filesystem path handling
  • Fixed bash completion integration with a number of Linux distributions

09 October, 2017 07:08PM

hackergotchi for SparkyLinux

SparkyLinux

Payment day coming (2017)

 

Donate! Like a year ago, and the year before, Sparky needs YOUR help now!

As you probably know, SparkyLinux is a non-profit project, so it does not earn money.
And as some of you probably know, we have to pay the bills for the hosting server (VPS), domains, power (electricity), broadband (internet connection), etc., from our personal, home budget.

The time to pay for our server is coming quickly again, so help us by sending a donation now!

This year we would also like to buy a Raspberry Pi mini computer, which will be used for testing new images.
The point is to add support for single-board machines like the RPi 3 soon.
An image can be created in a chrooted/virtual environment, but it has to be tested on a real machine.

All in all, this year we need 1200 PLN for the VPS and 250 PLN for the equipment (altogether 1450 PLN, about 340 Euros) by November 9, 2017.

We have also already asked our Polish users for donations at Linuxiarze.pl. Our virtual server hosts a few web pages, all Linux related: SparkyLinux.org, Linuxiarze.pl and ArchiveOS.org.

So please donate now to keep Sparky alive.
Any donation will be very helpful.
Visit the donation page to find how to send out money.
Aneta & Paweł

 

09 October, 2017 06:14PM by pavroo

hackergotchi for Emmabuntüs Debian Edition

Emmabuntüs Debian Edition

EmmaDE-2 The Stretch Version

On October 2nd, 2017, the Emmabuntüs Collective is happy to announce the release of the new Emmabuntüs Debian Edition 2 1.00 (32 and 64 bits), based on the Debian 9.1 distribution and featuring the XFCE desktop environment. This distribution was originally designed to facilitate the reconditioning of computers donated to humanitarian organizations, starting with the Emmaüs communities [...]

09 October, 2017 02:08PM by yves

EmmaDE-2 La version Stretch

The Emmabuntüs Collective is happy to announce the release, on October 2nd, 2017, of the new Emmabuntüs Debian Edition 2 1.00 (32 and 64 bits), based on Debian 9.1 Stretch and XFCE. This distribution was designed to facilitate the reconditioning of computers donated to humanitarian organizations, originally in particular to the Emmaüs communities (hence [...]

09 October, 2017 01:50PM by yves

hackergotchi for Ubuntu developers

Ubuntu developers

James Page: Ubuntu Openstack Dev Summary – 9th October 2017

Welcome to the seventh Ubuntu OpenStack development summary!

This summary is intended to be a regular communication of activities and plans happening in and around Ubuntu OpenStack, covering but not limited to the distribution and deployment of OpenStack on Ubuntu.

If there is something that you would like to see covered in future summaries, or you have general feedback on content please feel free to reach out to me (jamespage on Freenode IRC) or any of the OpenStack Engineering team at Canonical!

OpenStack Distribution

Stable Releases

Current in-flight SRU’s for OpenStack related packages:

Ceph 10.2.9 point release

Ocata Stable Point Releases

Pike Stable Point Releases

Horizon Newton->Ocata upgrade fixes

Recently released SRU’s for OpenStack related packages:

Newton Stable Point Releases

Development Release

OpenStack Pike released in August and is install-able on Ubuntu 16.04 LTS using the Ubuntu Cloud Archive:

sudo add-apt-repository cloud-archive:pike
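
After adding the archive, packages are installed with apt as usual; nova-compute below is just one illustrative example of the many OpenStack packages available:

sudo apt update
sudo apt install nova-compute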

OpenStack Pike also forms part of the Ubuntu 17.10 release later this month; final charm testing is underway in preparation for full Artful support for the charm release in November.

We’ll be opening the Ubuntu Cloud Archive for OpenStack Queens in the next two weeks; the first uploads will be the first Queens milestones, which will coincide nicely with the opening of the next Ubuntu development release (which will become Ubuntu 18.04 LTS).

OpenStack Snaps

The main focus in the last few weeks has been on testing of the gnocchi snap, which is currently install-able from the edge channel:

sudo snap install --edge gnocchi

The gnocchi snap provides the gnocchi-api (nginx/uwsgi deployed) and gnocchi-metricd services. Due to some incompatibilities between gnocchi/cradox/python-rados, the snap is currently based on the 3.1.11 release; hopefully we will work through the issues with the 4.0.x release in the next week or so, as well as have multiple tracks set up for this snap so you can consume a version known to be compatible with a specific OpenStack release.

Nova LXD

The team is currently planning work for the Queens development cycle; pylxd has received a couple of new features – specifically support for storage pools as provided in newer LXD versions, and streaming of image uploads to LXD which greatly reduces the memory footprint of client applications during uploads.

OpenStack Charms

Queens Planning

Out of the recent Queens PTG, we have a number of feature specs landed in the charms specification repository. There are a few more in the review queue; if you're interested in plans for the Queens release of the charms next year, this is a great place to get a preview and provide the team feedback on the features that are planned for development.

Deployment Guide

The first version of the new Charm Deployment Guide has now been published to the OpenStack Docs website; we have a small piece of followup work to complete to ensure its published alongside other deployment project guides, but hopefully that should wrap up in the next few days.  Please give the guide a spin and log any bugs that you might find!

Bugs

Over the last few weeks there has been an increased level of focus on the current bug triage queue for the charms; from a peak of 600 open bugs two weeks ago, with around 100 pending triage, we've closed out 70 bugs and the triage queue is down to a much more manageable level. The recently introduced bug triage rota has helped with this effort and should ensure we keep on top of incoming bugs in the future.

Releases

In the run-up to the August charm release, a number of test scenarios which required manual execution were automated as part of the release testing activity; this automation work reduces the effort to produce the release, and means that the majority of test scenarios can be run on a regular basis. As a result, we're going to move back to a three month release cycle; the next charm release will be towards the end of November, after the OpenStack summit in Sydney.

IRC (and meetings)

As always, you can participate in the OpenStack charm development and discussion by joining the #openstack-charms channel on Freenode IRC; we also have a weekly development meeting in #openstack-meeting-4 at either 1000 UTC (odd weeks) or 1700 UTC (even weeks) – see http://eavesdrop.openstack.org/#OpenStack_Charms for more details.

EOM

 


09 October, 2017 10:44AM

hackergotchi for Deepin

Deepin

Deepin has Added New Mirror Sites, Including Zaragoza University and Nanjing University

Today, deepin has added two new mirror sites: Zaragoza University and Nanjing University. As deepin is now widely used all over the world, we will keep adding mirror sites so that deepin users everywhere can get a high quality user experience, and in particular a high quality Linux desktop experience.

Spain – Zaragoza University
http://matojo.unizar.es/deepin/
http://matojo.unizar.es/deepin-cd/

China – Nanjing University
https://mirrors.nju.edu.cn/deepin/
https://mirrors.nju.edu.cn/deepin-cd/
http://mirrors.nju.edu.cn/deepin/
http://mirrors.nju.edu.cn/deepin-cd/

We also welcome more mirror sites and open-source communities to provide mirror services for deepin; contact us at: support@deepin.org

09 October, 2017 05:05AM by jingle

hackergotchi for Wazo

Wazo

Sprint Review 17.14

Hello Wazo community! Here comes the release of Wazo 17.14!

We are looking for beta testers for the Wazo Zapier plugin.

Security update

Asterisk: Asterisk 14.6.2 has been included in Wazo 17.14.

New features in this sprint

Wazo Client: The Wazo Client 17.14.1 has been released and can be used to replace the previous CTI client.

Webhooks: Webhooks can be triggered on a per user basis for chat, agent login/logouts and device status changes.

Ongoing features

Plugin management: We want developers to be able to write and share Wazo plugins easily. For this, we need a central place where users can browse plugins and developers can upload them. That's what we call the Plugin Market. Right now, the market already serves the list of available plugins, but it is very static. Work will now be done to add a front end to our plugin market, allowing users to browse, add and modify plugins.

Webhooks: Webhooks allow Wazo to notify other applications about events that happen on the telephony server, e.g. when a call arrives, when it is answered, hung up, when a new contact is added, etc. Webhooks are working correctly, but they still need some polishing: performance tweaking, handling HTTP authentication, listing the history of webhooks triggered, allowing users to setup their own webhooks in order to connect with Zapier, for example. We're also thinking of triggering scripts instead of HTTP requests, making Wazo all the more flexible when interconnecting with other tools.

Performance: We are making changes to the way xivo-ctid-ng handles messages from Asterisk, to be able to handle more simultaneous calls.


The instructions for installing Wazo or upgrading Wazo are available in the documentation.

For more details about the aforementioned topics, please see the roadmap linked below.

See you at the next sprint review!

Sources:

09 October, 2017 04:00AM by The Wazo Authors

October 08, 2017

hackergotchi for Ubuntu developers

Ubuntu developers

Daniel Pocock: A step change in managing your calendar, without social media

Have you been to an event recently involving free software or a related topic? How did you find it? Are you organizing an event and don't want to fall into the trap of using Facebook or Meetup or other services that compete for a share of your community's attention?

Are you keen to find events in foreign destinations related to your interest areas to coincide with other travel intentions?

Have you been concerned when your GSoC or Outreachy interns lost a week of their project going through the bureaucracy to get a visa for your community's event? Would you like to make it easier for them to find the best events in the countries that welcome and respect visitors?

In many recent discussions about free software activism, people have struggled to break out of the illusion that social media is the way to cultivate new contacts. Wouldn't it be great to make more meaningful contacts by attending a more diverse range of events rather than losing time on social media?

Making it happen

There are already a number of tools (for example, Drupal plugins and Wordpress plugins) for promoting your events on the web and in iCalendar format. There are also a number of sites like Agenda du Libre and GriCal who aggregate events from multiple communities where people can browse them.

How can we take these concepts further and make a convenient, compelling and global solution?

Can we harvest event data from a wide range of sources and compile it into a large database using something like PostgreSQL or a NoSQL solution or even a distributed solution like OpenDHT?

Can we use big data techniques to mine these datasources and help match people to events without compromising on privacy?

Why not build an automated iCalendar "to-do" list of deadlines for events you want to be reminded about, so you never miss the deadlines for travel sponsorship or submitting a talk proposal?

I've started documenting an architecture for this on the Debian wiki and proposed it as an Outreachy project. It will also be offered as part of GSoC in 2018.

Ways to get involved

If you would like to help this project, please consider introducing yourself on the debian-outreach mailing list and helping to mentor or refer interns for the project. You can also help contribute ideas for the specification through the mailing list or wiki.

Mini DebConf Prishtina 2017

This weekend I've been at the MiniDebConf in Prishtina, Kosovo. It has been hosted by the amazing Prishtina hackerspace community.

Watch out for future events in Prishtina, the pizzas are huge, but that didn't stop them disappearing before we finished the photos:

08 October, 2017 05:36PM

Sebastian Kügler: Diving Langedijk

There's hardly a better way to spend a Sunday than diving, even in early fall when the weather gets a little colder and rainier. We went to Zeeland, on the Dutch coast, to a dive spot named Langedijk for two shallow shore dives. The water was a somewhat brisk 14°C, but our drysuits kept us toasty even through the long dives.

SteurgarnaalSteurgarnaal
Fluwelen zwemkrabFluwelen zwemkrab
WeduweroosWeduweroos
PitvisPitvis
ZakpijpZakpijp
botervisbotervis
KreeftKreeft

08 October, 2017 05:11PM

October 06, 2017

Scarlett Clark: KDE at #UbuntuRally in New York! KDE Applications snaps!

#UbuntuRally New York

KDE at #UbuntuRally New York

I was happy to attend Ubuntu Rally last week in New York with Aleix Pol to represent KDE.
We were able to accomplish many things during this week, and that is a result of having direct contact with Snap developers.
So a big thank you out to Canonical for sponsoring me. I now have all of KDE core applications,
and many KDE extragear applications in the edge channel looking for testers.
I have also made a huge dent in the massive KDE PIM snap!
I hope to have this done by week end.
Most of our issue list made it onto TO-DO lists 🙂
So from KDE perspective, this sprint was a huge success!

06 October, 2017 08:50PM

Ubuntu Podcast from the UK LoCo: S10E31 – Plausible Dull Story - Ubuntu Podcast

This week we’ve been playing Wasteland 2 and switching back to Firefox. We also discuss Amber Rudd (dullard UK home secretary) not needing to understand encryption, Mycroft 2 is vertical, Firefox is going Quantum, Uber being banned in London and a new Linux laptop from Google.

It’s Season Ten Episode Thirty-One of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

Let’s Talk about the Ubuntu 18.04 LTS Roadmap

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

06 October, 2017 02:00PM

Raphaël Hertzog: My Free Software Activities in September 2017

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

This month I was allocated 12h but I only spent 10.5h. During this time, I continued my work on exiv2. I finished reproducing all the issues and then went on doing code reviews to confirm that vulnerabilities were not present when the issue was not reproducible. I found two CVE where the vulnerability was present in the wheezy version and I posted patches in the upstream bug tracker: #57 and #55.

Then another batch of 10 CVE appeared and I started the process over… I’m currently trying to reproduce the issues.

While doing all this work on exiv2, I also uncovered a failure to build on the package in experimental (reported here).

Misc Debian/Kali work

Debian Live. I merged 3 live-build patches prepared by Matthijs Kooijman and added an armel fix to cope with the rename of the orion5x image into the marvell one. I also uploaded a new live-config to fix a bug with the keyboard configuration. Finally, I also released a new live-installer udeb to cope with a recent live-build change that broke the locale selection during the installation process.

Debian Installer. I prepared a few patches on pkgsel to merge a few features that had been added to Ubuntu, most notably the possibility to enable unattended-upgrades by default.

More bug reports. I investigated my problem with non-booting qemu images (when they are built by vmdebootstrap in a chroot managed by schroot, cf #872999) much further, and while we have much more data, it's not yet clear why it doesn't work. But we have a working work-around…

While investigating issues seen in Kali, I opened a bunch of reports on the Debian side:

  • #874657: pcmanfm: should have explicit recommends on lxpolkit | polkit-1-auth-agent
  • #874626: bin-nmu request to complete two transitions and bring back some packages in testing
  • #875423: openssl: Please re-enable TLS 1.0 and TLS 1.1 (at least in testing)

Packaging. I sponsored two uploads (dirb and python-elasticsearch).

Debian Handbook. My work on updating the book mostly stalled. The only thing I did was to review the patch about wireless configuration in #863496. I must really get back to work on the book!

Thanks

See you next month for a new summary of my activities.

No comment | Liked this article? Click here. | My blog is Flattr-enabled.

06 October, 2017 08:30AM

Ubuntu Insights: Kubernetes on Ubuntu VMs

Recently /u/Elezium asked the following question on Reddit: Tools to deploy k8s on-premise on top of Ubuntu. This is a question that a lot of people have answered using a combination of MAAS/VMWare/OpenStack for on-premise multi-node Kubernetes. If you're looking for something with more than two or three machines, those resources are bountiful.

However, the question came to “How do I do Kubernetes on an existing Ubuntu VM”. This is different from LXD, which is typically a good solution — though without a bunch of networking modifications it won’t be reachable from outside that VM.

So, how do you make a single — or even small handful — of VMs run Kubernetes in a production fashion? You could do it all by hand, but we're well beyond the point where doing things from scratch in a non-repeatable fashion is reasonable, let alone desirable.

First, get two VMs. This is probably the easiest part: I'm going to use a simple VM running the latest 16.04 Ubuntu Server — though you could use the Desktop or Cloud flavor of Ubuntu. I'll also be doing these steps from a Mac terminal, but you could do this from an Ubuntu or Windows machine; the steps are the same.

Once you have two VMs running with at least 1 core and 1 GB RAM (ideally 2 core and 2 GB RAM each) you’ll need to make sure you have a few things set. First, make sure you can connect to the two VMs over SSH. This is important as the tools we’ll be using are for remote setups.

From your host machine verify you can connect to the IP address of each VM. In my setup, the IP addresses are 172.16.94.129 and 172.16.94.130. Make sure you replace these with the addresses for your machines.

ssh ubuntu@172.16.94.129
ssh ubuntu@172.16.94.130

If you created a different user, replace ubuntu with that username. You should get a successful connection and may be prompted for a password. Now that you have successfully connected, close the connection by typing exit; this will return you to your host machine and we can continue!

If you haven’t already, make sure you have conjure-up installed on your machine. In short, that’s either going to be brew install conjure-up or snap install conjure-up for MacOS and Linux respectively. To verify everything worked issue the following two commands:

conjure-up --version
juju version

The output should be equal to or greater than 2.2 for both conjure-up and juju.

Now we’re going to bootstrap Juju, creating a controller for which we can deploy software to. Normally, this is when you’d just run conjure-up kubernetes but since we’re working in such a unique case — deploying scale out software into a single machine — we’re going to do a lot of the steps conjure-up does, only manually.

To do this, we need to know the IP address of the VM and the username for a sudo user on that machine. Again the user is typically ubuntu and the IP address is from earlier.

juju bootstrap manual/ubuntu@172.16.94.129 marcos-blog

You may get prompted several times for a password, which are all passwords for your VM user.

Once the bootstrap is complete, issue a juju status to verify that you have an empty model. This is an abbreviated instruction for manual bootstrapping, a lot more details are available in the full Juju documentation.

Juju uses models like namespaces for deployment. If you issue a juju status, you'll notice the default model name is default and there are no machines, applications, or units deployed. However, there is a machine, because we used it for bootstrapping. We are tip-toeing into dangerous territory, as you shouldn't use the controller (the machine we bootstrapped) to deploy software. However, that means we would need more VMs!

If you can create more VMs, I’d suggest adding another machine to this deployment and avoid doing this switch to the controller. To do this, skip this step and continue below. If you only have/want two VMs, continue with this step.

If you issue juju models you'll notice there is a controller model in addition to the default model. If we switch to that model with juju switch controller and issue another juju status, you'll see that there are no applications and no units, but one machine — and it's our VM from earlier!

 

Now that we have a model with a machine we can get to work. What we’re going to do is manually place a few components, then let the process take care of the rest!

We’ll need to add our other VMs. During this step you can add as many VMs as you’d like, the process is the same. In the following sections I’ll address how to scale out the components beyond this very small deployment.

In order to do so, we’ll need to add the other machines so Juju knows where we want to put our components. To do this, run the following command for each additional machine we’ve not yet told Juju about:

juju add-machine ssh:<user>@<ip>

As with before, replace <user> and <ip> with the proper values from your setup.

Run juju status to verify you have all machines added and registered.

Kubernetes is made up of a handful of components: etcd, easyrsa, kubernetes-master, kubernetes-worker, and flannel. When you complete a deployment of Kubernetes using conjure-up these components are installed, configured, and connected for you. Conjure-up uses Juju as the driver for these instructions, and we're doing this manual deployment ourselves with the Juju pieces directly.

First, we need to deploy EasyRSA and ETCD onto the machine. However, we don’t want to just smash them together, we’ll use LXD to separate and isolate these components.

juju deploy ~containers/easyrsa --to lxd:0
juju deploy ~containers/etcd --to 0

 

 

Depending on your networking, this will take a few moments to create the LXD containers and set up the software. Eventually you'll end up with a state where etcd is blocked. You don't need to wait for this to complete before issuing the following commands:

juju deploy ~containers/kubernetes-master --to 0
juju deploy ~containers/kubernetes-worker --to 1

This will place these two components directly on the machines (the master on machine 0 and the worker on machine 1). We're not going to use LXD for these components since they wouldn't be routable from outside the VMs without messing with the network configuration. As such, we're deploying directly --to the machines, so the components will be accessible through the VMs' IP addresses.

After a few moments, you’ll find something like the following in juju status

As you can see, there are still items executing. We could wait for these to complete, but if you're as impatient as I am, then be thankful we live in an asynchronous world and press forward! The final step is to glue all these components together (and deploy the SDN). To do that, we'll take the kubernetes-core bundle, which is a super lightweight Kubernetes cluster, and deploy that now. It'll skip over any component you've already deployed, add any components not yet deployed, and execute all the required relationships.

juju deploy kubernetes-core

The output for this is pretty verbose, and should look something like the following:

This is to be expected. We see in several places Juju skips over components we've already deployed, adds things (flannel) that we're missing, and finally adds all the relations for these components. This is how we resolve the etcd "blocked" message about its missing certificate authority. You'll notice that etcd:certificates is connected to easyrsa:client, which will provide certs for etcd!

Eventually, after running juju status for a few mins you should end up with the following. A completely deployed Kubernetes cluster.

From this point forward, we'll need to get the credentials for the cluster. This is done automatically for you with conjure-up. With this method you'll just need to issue the following:

juju scp kubernetes-master/0:config ~/.kube/config

If you already have a Kubernetes config file, choose another path, like ~/.kube/config.cdk and make sure you use export KUBECONFIG=$HOME/.kube/config.cdk to use the new configuration file.
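
With the config in place, a quick sanity check with standard kubectl commands confirms the cluster is reachable:

kubectl get nodes
kubectl cluster-info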

For the final touch, I wanted to show how to scale this up. Ideally, you’d want to use a public cloud, private cloud (VMWare, OpenStack), or MAAS for bare metal. The manual provider is just that — very manual. That said, if you have more VMs you can add them and scale the applications to spread across them. I’m going to add another machine and use it for both etcd and kubernetes-worker.

juju add-machine ssh:ubuntu@172.16.94.131
juju add-unit -n2 etcd --to 1,2
juju add-unit kubernetes-worker --to 2

The result will be a three node etcd and two nodes for Kubernetes workloads. Again, juju status will show you the status of the cluster at any time. Eventually everything will converge on active and idle. Once this is done you've scaled out the deployment. From here you can continue to add VMs, redeploy everything again, or actually start using Kubernetes!
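
As a simple smoke test, you can deploy something and expose it via a NodePort. A minimal sketch using standard kubectl commands (the nginx deployment is just an example):

kubectl run nginx --image=nginx --port=80
kubectl expose deployment nginx --type=NodePort
kubectl get svc nginx   # note the NodePort, then browse to http://<worker-ip>:<nodeport>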

06 October, 2017 01:47AM

October 05, 2017

Ross Gammon: My FOSS activities for August & September 2017

I am writing this from my hotel room in Bologna, Italy before going out for a pizza. After a successful Factory Acceptance Test today, I might also allow myself to celebrate with a beer. But anyway, here is what I have been up to in the FLOSS world for the last month and a bit.

Debian

  • Uploaded gramps (4.2.6) to stretch-backports & jessie-backports-sloppy.
  • Started working on the latest release of node-tmp. It needs further work due to new documentation being included etc.
  • Started working on packaging the latest goocanvas-2.0 package. Everything is ready except for producing some autopkgtests.
  • Moved node-coffeeify experimental to unstable.
  • Updated the Multimedia Blends Tasks with all the latest ITPs etc.
  • Reviewed doris for Antonio Valentino, and sponsored it for him.
  • Reviewed pyresample for Antonio Valentino, and sponsored it for him.
  • Reviewed a new parlatype package for Gabor Karsay, and sponsored it for him.

Ubuntu

  • Successfully did my first merge using git-ubuntu for the Qjackctl package. Thanks to Nish for patiently answering my questions, reviewing my work, and sponsoring the upload.
  • Refreshed the gramps backport request to 4.2.6. Still no willing sponsor.
  • Tested Len’s rewrite of ubuntustudio-controls, adding a CPU governor option in particular. There are a couple of minor things to tidy up, but we have probably missed the chance to get it finalised for Artful.
  • Tested the First Beta release of Ubuntu Studio 17.10 Artful and wrote the release notes. Also drafted my first release announcement on the Ubuntu Studio website which Eylul reviewed and published.
  • Refreshed the ubuntustudio-meta package and requested sponsorship. This was done by Steve Langasek. Thanks Steve.
  • Tested the Final Beta release of Ubuntu Studio 17.10 Artful and wrote the release notes.
  • Started working on a new Carla package, starting from where Víctor Cuadrado Juan left it (ITP in Debian).

05 October, 2017 07:35PM

Cumulus Linux

Cumulus content roundup: October

Welcome back to the Cumulus content roundup! This month, we think it’s time to get our hands dirty and play around with the latest technology. From video tutorials to how-to blogs to thought-provoking articles, this issue brings together all the resources you need to start experimenting with new configurations, networking practices and more. So, what are you waiting for? Let’s get off of the couch (or stay on the couch, if that’s where you work) and start upgrading that datacenter!

What’s new from Cumulus:

Web-scale networking how-to videos: This month, we launched a series of how-to videos to show you the ropes of web-scale networking. What’s the difference between configuring an IP address with Cisco or Cumulus Linux? How do I automate my datacenter? Watch our tutorials to answer these questions!

Cumulus Express — proof that our customers’ success is our success: Announcing Cumulus Express has done great things for Cumulus Networks, but the greatest asset we have is listening to our customers. Read on to see how paying attention to what people want returns the best rewards.

Contain yourself! Best practices for container networking: This webinar covers everything that you need to know about container architecture, the challenges they can pose, and how Cumulus and Mesos work together to overcome those challenges. Plus, it features special guest Edward Hsu, VP of Product Marketing at Mesosphere! Watch it here to learn more.

Automating network troubleshooting with NetQ + Ansible: When it comes to troubleshooting, automation is the way to go. And when you have great technology like NetQ and Ansible working together, automating your datacenter is a no-brainer. Read this blog post to learn how it’s done.

A networking expert on how to experiment with containers using Mesosphere and Cumulus Linux: When you combine Cumulus in the Cloud and Mesosphere, you get the perfect environment to test and play around with containers. This blog post has one of our Sr. Consulting Engineers take you through the steps so you can test out the tech for yourself. Check it out here.

Feeling inspired to learn more about what Cumulus can do for you? Take a look at our learn center, resources page, and solutions section and get your web-scale revolution going!

News from the Web

Top five cloud computing trends of 2017: Cloud computing has played a major role in reshaping how companies conduct business. Several organizations are in the process of migrating to the cloud to achieve scalability, cost-efficiency and enhanced application performance. Read this article to find out what these top trends are.

Linux Foundation to hold global Open Source Networking events, looks to foster local provider, vendor collaboration: As an effort to drive service provider and vendor collaboration around open networking, the Linux Foundation has launched its Open Source Networking Days (OSN) series of events. Learn more about the events here.

Time for DevOps to give some love to networking: The enterprise is starting to come to grips with one of the main problems with DevOps: most of the technologies released so far are bringing more agility to the development side of the model than the operational side. Find out how NetDevOps solves this problem.

The post Cumulus content roundup: October appeared first on Cumulus Networks Blog.

05 October, 2017 06:12PM by Madison Emery

hackergotchi for Ubuntu developers

Ubuntu developers

Jorge Castro: Thoughts on the first Kubernetes Steering Election

The first steering committee election for Kubernetes is now over. Congratulations to Aaron Crickenberger, Derek Carr, Michelle Noorali, Phillip Wittrock, Quinton Hoole and Timothy St. Clair, who will be joining the newly formed Kubernetes Steering Committee.

If you’re unfamiliar with what the SC does, you can check out their charter and backlog. I was fortunate to work alongside Paris Pittman on executing this election, hopefully the first of many “PB&J Productions”.

To give you some backstory on this, the Kubernetes community has been bootstrapping its governance over the past few years, and executing a proper election as stated in the charter was an important first step. Therefore it was critical for us to run an open election correctly.

Thankfully we can stand on the shoulders of giants. OpenStack and Debian are just two examples of projects with well formed processes that have stood the test of time. We then produced a voter’s guide to give people a place where they could find all the information they needed and the candidates a spot to fill in their platform statements.

This morning I submitted a pull request with our election notes and steps so that we can start building our institutional knowledge on the process, and of course, to share with whomever is interested.

Also, a big shout out to Cornell University for providing CIVS as a public service.

05 October, 2017 03:41PM

Sebastian Kügler: Plasma Convergence, technically

A Plasma Phone

In one of my latest blog posts, I explained what convergence is, how Plasma benefits from it, and why we consider it a goal for Plasma. This time around, I'll explain the how: how it works across the stack and how we implemented it. Naturally, this article dives a lot deeper, technically, than my previous one.

Convergence plays a role at different levels of the software stack. In this more technical article, I'll look at those layers one by one, from boot/kernel and middleware to UI controls, overall layout and input methods. After reading this article, you'll understand how Plasma makes it possible to use the same software on a range of devices, which parts are different, and where code sharing makes sense, and thus happens.
Keep in mind that convergence, at least for Plasma, doesn't mean that we ship a lowest-common-denominator UI so it “kind of” runs on all things computer, but that it provides a toolbox to build customized UIs that take advantage of the specific characteristics of a given target device.

Lower Levels and Packaging

Plasma — same code, different devices

One aspect of convergence is, of course, the deployment side. This goes beyond the kernel and bootloader, which need to be compiled differently for ARM devices and for x86 devices; the rest of the stack is by now largely the same. We are now using the same set of packages and CI for both mobile and desktop builds. In fact most packages are the same, and the difference between a device set up for mobile use cases and one set up for the desktop is the selection of packages and what gets started by default. Everything is integrated to a very large degree and lots of work is shared, which means timely updates across the device spectrum we serve.

Controls

Plasma Mobile

When it comes to user interface controls, such as buttons, text fields, etc., convergence is mostly a solved problem. Touch input is possible, Qt nowadays even ships a virtual keyboard (which Plasma uses, for example, for password input in the lock screen), and buttons react to touch events as well. QtQuick-based user interfaces often work quite well with both keyboard/mouse and touch input; in fact, touch is one of the design goals of QtQuick.

Not everything is perfect yet, however: especially text selection and keyboard control of QtQuick-based UIs often still require custom code, meaning it needs more development and maintenance time to get right. QWidget-based UIs are still a bit ahead of the game here, though often the benefits of also being able to deploy an app on touch devices (such as the many Android devices out there!) make QtQuick an attractive technology to use. We see more and more QtQuick-based applications as this technology matures for desktop use-cases as well.

Widgets

Convergent Plasma Widgets

Plasma is made of widgets. Even in a standard Plasma desktop, everything is a widget: The menu in the bottom left is a widget, the task manager is, the system tray is a widget, and there are widgets inside the system tray for notifications, battery, sound, network, etc. Plasma is widgets.
These widgets can be used on any device of course, but it doesn't always make sense. Some of these widgets are very specific to desktops. The task manager (that thing you use to switch windows, usually located in the center of the bottom panel) doesn't really make sense on a mobile device. A mobile device, which needs larger touch areas, is better served by something more akin to a full-screen window switcher (and that is in fact what we use for Plasma Mobile). Other widgets, such as the network connections widget or the battery and brightness widgets, are perfectly suitable for mobile devices as well. Plasma's architecture allows us to re-use the components that need no or only little change across devices. That means we can concentrate on the missing bits for each device, and that in turn means we can deliver a feature-rich and consistent UI across devices much more easily, while making sure the specific characteristics of a given form factor are used to their fullest extent.
Again, by sharing the components that make sense to share, we can deliver higher-quality features for a given device with less effort, and thus more quickly.

Shell and Look & Feel

Look & Feel setup for Plasma

Plasma can dynamically load different so-called shell packages. The shell package defines the overall layout of the workspace environment. On the desktop, it specifies that there's a fullscreen wallpaper background with a folder view, a panel at the bottom, and the widgets that are loaded into that panel: application launcher, task manager, system tray and clock, for example.
The shell package is different for each device, as this defines the overall workflow, which is highly dependent on the type of device.

To take differences between devices even further, Plasma has the concept of “Look and Feel” packages, which allow further specialization of how a device, well, looks and feels. There's the widget style and the wallpaper, of course. The Look and Feel package also defines interaction patterns, such as whether a settings interface should use “instant apply” when a setting is changed, or whether it should present an “Apply and Okay” button for the user to save settings. Mobile devices typically use instant apply, while desktop interfaces (at least Plasma's) use the “Apply and Okay” concept throughout. For Plasma UIs, this can be changed dynamically. Plasma's Look and Feel feature is not just useful for convergence; it also allows, for example, switching between a traditional default Plasma setup and a workspace that closely resembles Unity. These “Look and Feel” packages are available through the KDE store, so they're easy to install and share. There's even a cool tool that allows you to create your own Look and Feel packages, very much like themes.
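
To make this a bit more concrete, a Look and Feel package is essentially just a directory with a metadata file and a contents/ folder. The sketch below is only an illustration (the package name is made up and the exact set of files varies between packages), but it shows roughly where the pieces described above live:

	org.example.phonelook/          (hypothetical package name)
	    metadata.desktop            (package id, name, author)
	    contents/
	        defaults                (which widget style, Plasma theme, etc. to use)
	        lockscreen/             (QML defining the lock screen)
	        splash/                 (QML shown while the workspace starts)
	        previews/               (thumbnails shown in the settings UI)

The contents/defaults file is where a phone-oriented package can, for example, pick a different widget style or Plasma theme, while the shell package described above decides which widgets get loaded in the first place.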

Apps

Subsurface Mobile, Linux, Android and iOS from the same code-base

Finally, at the application level, we see more and more convergent applications. Kirigami, a high-level toolkit that supplies components for consistent, touch- and keyboard/mouse-friendly application navigation and layout, makes it very easy to create applications with responsive UIs that adapt well to screen size and density and that are flexible in their input methods. This doesn't just work between laptops and phones, but allows creating one app that works equally well on desktops, laptops, phones and tablets.
Kirigami complements Plasma's convergence feature on the application side, and we recommend it for most newly developed apps. With Qt and QtQuick being a viable target for Android devices, it increases the possible target audience by a very large degree. As an example, Subsurface Mobile, an application for scuba divers, uses Kirigami and works on Linux desktops, Android and iOS, all from the same code-base.

Make it happen…

If you like the idea of convergence, why not join KDE and help us work on Plasma? Perhaps you'd love to see Plasma on a mobile phone? In that case, consider backing the crowdfunding campaign for the Librem 5 so we can build a convergent phone!

05 October, 2017 03:20PM

hackergotchi for Purism PureOS

Purism PureOS

Initial Touch and Web Browsing experiments on Librem 5 prototyping boards

When it comes to prototyping the Librem 5, we are working hard and making progress on several fronts. As you have seen in yesterday's testing update blog post, we are working on development hardware in order to start laying the software development groundwork. Today, I'm sharing the results of a quick experiment with web and touch on a prototype board.

So far we have managed to get a pure mainline Linux kernel 4.13.5 to work on the development hardware, which is based on an i.MX6DL (dual core, 1GHz), although we're still evaluating the i.MX8 in parallel. For this test setup I had a 10″ 1280×800 LVDS display attached, with a temporary capacitive touchscreen. The aforementioned blog post has some pictures of the boards. Why mainline? Because we want to be as upstream and mainline as possible. And we want to be blob-free too, so if a current mainline Linux kernel is working then this is very good—and it is!

As the base OS for this test we used Debian unstable for ARM hardfloat (armhf). We need the unstable branch since we need to have Mesa >17.1 for proper 3D support. Why Debian? Because PureOS is based on Debian and again we want to be as close to upstream and mainline as possible, so we try this first. Any deviation from upstream would mean additional support effort, which we want to avoid as much as possible—not just because of the effort involved, but also because we want to create a device that neatly fits into an overall vision of a free and open source IT world.

The 3D acceleration support is provided by the Etnaviv kernel driver (mainline since 4.6), with the userspace Etnaviv support provided by Mesa. Using this combination we can run Wayland in accelerated mode, giving nice UI performance for 2D applications as well. For testing UI work we are currently basing everything on Wayland with the Weston compositor. You can see some more screenshots in yesterday's post.

A lot of desktop infrastructure still depends on X11 and some X server to run, even if it is only running in the background.

  • Due to its very long history, the X11 protocol and some features of the X server have been in use everywhere in desktop environments and applications, and it will take some time for these to be replaced with newer Wayland-based or other code. But this effectively means that we cannot yet run, for example, a full GNOME on the development boards.
  • On regular PCs this is usually circumvented by running Xwayland in the background, which provides X server functions and uses Wayland as its drawing surface. For some reason—I am still unsure about the root cause—Xwayland does not work on our setup yet. First of all, Xwayland does not support GLES but only pure OpenGL, while the Etnaviv Mesa driver only supports GLES, so Xwayland falls back to software rendering—and then it crashes. So for now, I am stuck with Wayland + Weston.

This is not totally bad since it gives us a good idea of what is already working properly with Wayland and what isn’t.

I also wanted to see how more complex applications (like a web browser) behave in this environment… But all the browsers I tried either used ancient versions of WebKitGTK (and thus GTK2 without Wayland support) or they depended on X11 for some reason. I tried Epiphany, netsurf, Firefox and Chromium (Chromium just silently crashed). So what does a good Haeckse do in this case? Write her own 🙂

Using WebKit2GTK, this is pretty easy and straightforward (my code is below, and as you can see it's pretty short! But be warned, this was just a quick'n'dirty test!), and it allowed me to successfully load a webpage, as you can see in the screenshot above. On the left is a terminal window where I started the application, and on the right is the application window displaying the Purism website—ta-da! As I mentioned earlier, we also have a touchscreen attached to the board's display and, although you can't tell just by looking at the screenshot, I can scroll the web view with a finger, and two-finger pinch zooming works as well. Excellent!

Here’s the code for my ultra-mini webkit browser (which I compiled on the development board itself):

#include <gtk/gtk.h>
#include <glib/gprintf.h>
#include <webkit2/webkit2.h>
#include <string.h>	/* needed for strlen() */


typedef struct {
	GtkWidget *win;
	GtkWidget *entry;
	GtkWidget *webview;
} appdata; 


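/* Callback for the URL entry: when the user presses Enter, load the typed URI in the web view. */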
void uri_entered (GtkEntry *entry, gpointer user_data)
{
appdata *data = (appdata *)user_data;
const gchar *entry_text;

	entry_text = gtk_entry_get_text(GTK_ENTRY(data->entry));
	if (entry_text != NULL && strlen(entry_text) != 0)
		webkit_web_view_load_uri(WEBKIT_WEB_VIEW(data->webview), entry_text);
}


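/* Log each load state change and keep the URL entry in sync when the page starts loading or gets redirected. */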
static void web_view_load_changed (WebKitWebView  *web_view,
                                   WebKitLoadEvent load_event,
                                   gpointer        user_data)
{
appdata *data = (appdata *)user_data;

    switch (load_event) {
    case WEBKIT_LOAD_STARTED:
        g_printf("load started '%s'\n", webkit_web_view_get_uri (web_view));
	gtk_entry_set_text(GTK_ENTRY(data->entry), webkit_web_view_get_uri (web_view));
        break;
    case WEBKIT_LOAD_REDIRECTED:
        g_printf("load redirected '%s'\n", webkit_web_view_get_uri (web_view));
	gtk_entry_set_text(GTK_ENTRY(data->entry), webkit_web_view_get_uri (web_view));
        break;
    case WEBKIT_LOAD_COMMITTED:
        g_printf("load committed '%s'\n", webkit_web_view_get_uri (web_view));
        break;
    case WEBKIT_LOAD_FINISHED:
        g_printf("load finished '%s'\n", webkit_web_view_get_uri (web_view));
        break;
    }
}


int main(int argc, char** argv)
{
GtkWidget *vb;
GtkEntryBuffer *eb;
appdata mdata;

	gtk_init(&argc, &argv);
	mdata.win = gtk_window_new(GTK_WINDOW_TOPLEVEL);
	gtk_window_set_default_size (GTK_WINDOW(mdata.win), 640, 480);
	g_signal_connect (mdata.win, "delete_event", G_CALLBACK (gtk_main_quit), NULL);

	vb = gtk_box_new(GTK_ORIENTATION_VERTICAL, 0);
	gtk_container_add(GTK_CONTAINER(mdata.win), GTK_WIDGET(vb));

	eb = gtk_entry_buffer_new(NULL, -1);
	mdata.entry = gtk_entry_new_with_buffer(eb);
	gtk_entry_set_text(GTK_ENTRY(mdata.entry), "https://www.puri.sm");
	g_signal_connect(G_OBJECT(mdata.entry), "activate", G_CALLBACK(uri_entered), &mdata);

	gtk_box_pack_start(GTK_BOX(vb), mdata.entry, FALSE, FALSE, 0);

	mdata.webview = webkit_web_view_new();
	g_signal_connect(mdata.webview, "load-changed", G_CALLBACK(web_view_load_changed), &mdata);
	gtk_box_pack_start(GTK_BOX(vb), mdata.webview, TRUE, TRUE, 0);

	gtk_widget_show_all(mdata.win);

	webkit_web_view_load_uri(WEBKIT_WEB_VIEW(mdata.webview), "https://www.puri.sm");

	gtk_main();
}

And here is a matching Makefile:

CC = gcc
CFLAGS = -g -O2 -Wall

# for GTK3
CFLAGS += `pkg-config --cflags gtk+-wayland-3.0`
LDFLAGS += `pkg-config --libs gtk+-wayland-3.0`

# for WebKit GTK3
CFLAGS += `pkg-config --cflags webkit2gtk-4.0`
LDFLAGS += `pkg-config --libs webkit2gtk-4.0`

OBJS = miniwebkit.o
PRG = miniwebkit

all: $(PRG)

$(PRG): $(OBJS)
	$(CC) -o $(PRG) $(OBJS) $(LDFLAGS)

clean:
	rm -f $(OBJS) $(PRG)

Pretty easy, isn’t it?

05 October, 2017 03:16PM by Nicole Faerber

hackergotchi for Univention Corporate Server

Univention Corporate Server

Use of Univention Corporate Server: Our 3rd Party Charts

Thousands of organizations around the world use Univention Corporate Server every day, and the number increases week after week. One reason among others is that the Univention App Center contains, in addition to many UCS modules and extensions, dozens of professional enterprise applications from various vendors which can be easily integrated and maintained via the App Center. We therefore monitor very closely which of these apps are really used and to what extent. Today, I would like to share some of these insights with you.

Our most important App is the Active Directory component of UCS. Its distribution keeps increasing, even though its share decreases slightly in comparison to other Apps. Today, it is installed on almost every second active UCS system. It is remarkable how the open source software Samba, on which our App is based, has become a unique solution for the provision of Active Directory domain services for thousands of companies, authorities, and schools.

Third-party Apps

More than half of the organizations that use UCS now also use at least one of the enterprise applications in the App Center that are provided by a third-party vendor. Have a quick look at our “3rd Party Charts” to see which Apps these are:

Rank   App
 1     ownCloud
 2     Kopano
 3     OX App Suite
 4     Nextcloud
 5     Bareos Backup Server
 6     OpenVPN4UCS
 7     Horde Groupware Webmail Edition
 8     opsi – Client Management
 9     OpenProject
10     Collabora Online

Secure File Sharing is One of the Most Important Application Areas of Open Source Solutions

Looking at this top ten chart, and in particular at the positions of the file sharing solutions ownCloud (#1) and Nextcloud (#4), we can see that the uncomplicated provision and synchronization of data is one of the two top application areas for many UCS users. The core functions of these two Apps lie precisely in this area.

Collaboration and Groupware Play a Decisive Role in Companies

The second, equally important, area of use is collaboration and groupware: the two most important providers of these solutions are Kopano (#2) and Open-Xchange with OX App Suite (#3), followed by Horde Groupware (#7), which is a pure community solution.

Open Source Apps are also Important for the Operation of IT Infrastructures

Three other top ten Apps are aimed at the easy and secure operation of IT infrastructures, just like UCS itself: Bareos Backup Server (#5) is a popular and fast-growing backup solution, OpenVPN4UCS (#6) makes it possible to easily manage secure private networks with UCS, and “opsi – Client Management” (#8) is primarily used for software distribution and the automatic setup and maintenance of Windows-based clients.

Project Management and Online Office Matter, Too

Two other fast-growing Apps have made it into the top ten: OpenProject (#9), project management software as its name indicates, and Collabora Online (#10). Collabora is installed together with either ownCloud or Nextcloud and allows editing text documents, spreadsheets, and presentations directly in the browser – similar to Microsoft Office 365 – with multiple users at the same time. A similar extension for the Open-Xchange app is also available in the App Center.

The combination of these Apps and UCS provides a growing number of organizations with a secure, stable, easy-to-use IT platform under their own control. Using UCS, organizations can easily combine cloud services and decide for themselves whether and for what purpose these are used.

Would You Like to Know More?

We would like to know if these charts are of interest to you and whether we should publish them regularly. Please leave your comment below!

The post Use of Univention Corporate Server: Our 3rd Party Charts appeared first on Univention.

05 October, 2017 01:55PM by Peter Ganten

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu LoCo Council: Call for nominations to the LoCo Council

Hello All,

As you may know, LoCo Council members serve a two-year term. Because
of this, we are facing the difficult task of replacing existing
members and restaffing the council as a whole. A special thanks to
all the existing members for the great contributions they have made
while serving with us on the LoCo Council.

So with that in mind, we are writing this to ask for volunteers to
step forward and nominate themselves or another contributor for the
five open positions. The LoCo Council is defined on our wiki page.

Wiki: https://wiki.ubuntu.com/LoCoCouncil

Team Agenda: https://wiki.ubuntu.com/LoCoCouncilAgenda

Typically, we meet up once a month on IRC to go through items on the
team agenda; we have also started to hold Google Hangouts (the time
for hangouts may vary depending on the availability of the members).
This involves approving new LoCo Teams, re-approving Approved LoCo
Teams, resolving issues within teams, approving LoCo Team mailing
list requests, and anything else that comes along.

We have the following requirements for Nominees:

  • Be an Ubuntu member
  • Be available during typical meeting times of the council
  • Insight into the culture(s) and typical activities within teams is a plus

Here is a description of the current LoCo Council:

They are current Ubuntu Members with a proven track record of activity
in the community. They have shown themselves over time to be able to
work well with others, and display the positive aspects of the Ubuntu
Code of Conduct. They should be people who can judge contribution
quality without emotion while engaging in an interview/discussion that
communicates interest, a welcoming atmosphere, and which is marked by
humanity, gentleness, and kindness.

If this sounds like you, or a person you know, please e-mail the LoCo
Council with your nomination(s) using the following e-mail address:
loco-council<at>lists.ubuntu.com.

Please include a few lines about yourself, or whom you’re nominating,
so we can get a good idea of why you/they’d like to join the council,
and why you feel that you/they should be considered. If you plan on
nominating another person, please let them know, so they are aware.

We welcome nominations from anywhere in the world, and from any LoCo
team. Nominees do not need to be a LoCo Team Contact to be nominated
for this post. We are however looking for people who are active in
their LoCo Team.

The time frame for this process is as follows:

Nominations initially opened: Friday 1st September, 2017

Nominations will close: Wednesday 25th October 2017

We will then forward the nominations to the Community Council,
requesting that they review them at their next meeting and make
their selections.

05 October, 2017 12:25PM

Ubuntu Insights: Webinar: 10-step plan to roll out Cloud devops

Sign up for our new webinar about the 10 steps you need to take to roll out Cloud devops.

Join us to learn about the benefits of hybrid cloud and how to deal with the technical and operational pitfalls involved in migrating to native cloud.

05 October, 2017 11:30AM