March 28, 2017

Ubuntu developers

The Fridge: Ubuntu Weekly Newsletter Issue 503

Welcome to the Ubuntu Weekly Newsletter. This is issue #503 for the weeks March 13 – 26, 2017, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Simon Quigley
  • OerHeks
  • Chris Guiver
  • Darin Miller
  • Alan Pope
  • Valorie Zimmerman
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

28 March, 2017 02:29AM

March 27, 2017

Stéphane Graber: USB hotplug with LXD containers

USB devices in containers

It can be pretty useful to pass USB devices to a container. Be that some measurement equipment in a lab or maybe more commonly, an Android phone or some IoT device that you need to interact with.

Similar to what I wrote recently about GPUs, LXD supports passing USB devices into containers. Again, similarly to the GPU case, what’s actually passed into the container is a Unix character device, in this case, a /dev/bus/usb/ device node.

This restricts USB passthrough to those devices and software which use libusb to interact with them. For devices which use a kernel driver, the module should be installed and loaded on the host, and the resulting character or block device should then be passed to the container directly.

Note that for this to work, you’ll need LXD 2.5 or higher.

Example (Android debugging)

As an example which quite a lot of people should be able to relate to, let’s run an LXD container with the Android debugging tools installed, accessing a USB-connected phone.

This would for example allow you to have your app’s build system and CI run inside a container and interact with one or multiple devices connected over USB.

First, plug your phone over USB, make sure it’s unlocked and you have USB debugging enabled:

stgraber@dakara:~$ lsusb
Bus 002 Device 003: ID 0451:8041 Texas Instruments, Inc. 
Bus 002 Device 002: ID 0451:8041 Texas Instruments, Inc. 
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 021: ID 17ef:6047 Lenovo 
Bus 001 Device 031: ID 046d:082d Logitech, Inc. HD Pro Webcam C920
Bus 001 Device 004: ID 0451:8043 Texas Instruments, Inc. 
Bus 001 Device 005: ID 046d:0a01 Logitech, Inc. USB Headset
Bus 001 Device 033: ID 0fce:51da Sony Ericsson Mobile Communications AB 
Bus 001 Device 003: ID 0451:8043 Texas Instruments, Inc. 
Bus 001 Device 002: ID 072f:90cc Advanced Card Systems, Ltd ACR38 SmartCard Reader
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

Spot your phone in that list, in my case, that’d be the “Sony Ericsson Mobile” entry.

Now let’s create our container:

stgraber@dakara:~$ lxc launch ubuntu:16.04 c1
Creating c1
Starting c1

And install the Android debugging client:

stgraber@dakara:~$ lxc exec c1 -- apt install android-tools-adb
Reading package lists... Done
Building dependency tree 
Reading state information... Done
The following NEW packages will be installed:
 android-tools-adb
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 68.2 kB of archives.
After this operation, 198 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu xenial/universe amd64 android-tools-adb amd64 5.1.1r36+git20160322-0ubuntu3 [68.2 kB]
Fetched 68.2 kB in 0s (0 B/s) 
Selecting previously unselected package android-tools-adb.
(Reading database ... 25469 files and directories currently installed.)
Preparing to unpack .../android-tools-adb_5.1.1r36+git20160322-0ubuntu3_amd64.deb ...
Unpacking android-tools-adb (5.1.1r36+git20160322-0ubuntu3) ...
Processing triggers for man-db (2.7.5-1) ...
Setting up android-tools-adb (5.1.1r36+git20160322-0ubuntu3) ...

We can now attempt to list Android devices with:

stgraber@dakara:~$ lxc exec c1 -- adb devices
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
List of devices attached

Since we’ve not passed any USB device yet, the empty output is expected.

Now, let’s pass the specific device listed in “lsusb” above:

stgraber@dakara:~$ lxc config device add c1 sony usb vendorid=0fce productid=51da
Device sony added to c1

And try to list devices again:

stgraber@dakara:~$ lxc exec c1 -- adb devices
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
List of devices attached 
CB5A28TSU6 device

To get a shell, you can then use:

stgraber@dakara:~$ lxc exec c1 -- adb shell
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
E5823:/ $

LXD USB devices support hotplug by default, so unplugging the device and plugging it back in on the host will have it removed from and re-added to the container.

The “productid” property isn’t required; you can set only the “vendorid”, so that any device from that vendor will be automatically attached to the container. This can be very convenient when interacting with a number of similar devices, or devices which change productid depending on what mode they’re in.

stgraber@dakara:~$ lxc config device remove c1 sony
Device sony removed from c1
stgraber@dakara:~$ lxc config device add c1 sony usb vendorid=0fce
Device sony added to c1
stgraber@dakara:~$ lxc exec c1 -- adb devices
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
List of devices attached 
CB5A28TSU6 device

The optional “required” property turns off the hotplug behavior, requiring the device to be present for the container to be allowed to start.
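
For example (a sketch re-using the device from above; the earlier entry is removed first since the name is already taken), making the phone mandatory at start time would look like this:

stgraber@dakara:~$ lxc config device remove c1 sony
stgraber@dakara:~$ lxc config device add c1 sony usb vendorid=0fce productid=51da required=true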

More details on USB device properties can be found here.

Conclusion

We are surrounded by a variety of odd USB devices, a good number of which come with possibly dodgy software, requiring a specific version of a specific Linux distribution to work. It’s sometimes hard to accommodate those requirements while keeping a clean and safe environment.

LXD USB device passthrough helps a lot in such cases, so long as the USB device uses a libusb based workflow and doesn’t require a specific kernel driver.

If you want to add a device which does use a kernel driver, locate the /dev node it creates, check if it’s a character or block device and pass that to LXD as a unix-char or unix-block type device.
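
As a hedged illustration, a USB serial adapter that the host kernel exposes as /dev/ttyUSB0 (a hypothetical path; use whatever node your device actually creates) could be passed like this:

stgraber@dakara:~$ lxc config device add c1 serial unix-char path=/dev/ttyUSB0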

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

27 March, 2017 05:23PM

Ubuntu Insights: Job Concurrency in Kubernetes: LXD & CPU pinning to the rescue

A few days ago, someone shared with me a project to run video transcoding jobs in Kubernetes.

In her tests, run on a default Kubernetes installation on bare metal servers with 40 cores and 512GB of RAM, she allocated 5 full CPU cores to each of the transcoding pods, then scaled up to 6 concurrent tasks per node, thus loading the machines at 75% (on paper). The expectation was that these jobs would run at the same speed as a single task.

The result was slightly underwhelming: as concurrency went up, performance on individual tasks went down. At maximum concurrency, they actually observed a 50% decrease in single-task performance.
I did some research to understand this behavior. It is referenced in several Kubernetes issues such as #10570 and #171, and more generally turns up via a Google search. The documentation itself sheds some light on how the default scheduler works and why performance can be impacted by concurrency on intensive tasks.

There are different methods to allocate CPU time to containers:

  • CPU pinning: each container gets a dedicated set of cores

CPU Pinning
If the host has enough CPU cores available, allocate 5 “physical cores” that will be dedicated to this pod/container

  • Temporal slicing: the host’s N cores collectively represent an amount of compute time, which you allocate to containers in slices. 5% of CPU time means that for every 100ms, 5ms of compute are dedicated to the task.

Time Slicing
Temporal slicing: each container gets allocated randomly on all nodes

Obviously, pinning CPUs can be interesting for some specific workloads, but it has a big scaling problem, for the simple reason that you cannot run more pods than you have cores in your cluster.

As a result, Docker defaults to the second one, which also ensures you can have less than 1 CPU allocated to a container.

This has an impact on performance, one that also shows up in HPC or any other CPU-intensive task.

Can we mitigate this risk? Maybe. Docker provides the cpuset option at the engine level, but it is not leveraged by Kubernetes. LXD containers, however, can be pinned to physical cores via cpusets, in an automated fashion, as explained in this blog post by @stgraber.
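
For reference, here is what pinning looks like when done by hand at the Docker engine level. This is just a minimal sketch using the standard --cpuset-cpus flag, not something Kubernetes will configure for you:

# Pin a throwaway container to the first five physical cores;
# nproc honours the affinity mask, so it should report 5.
docker run --rm --cpuset-cpus="0-4" ubuntu:16.04 nproc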

This opens 2 new options for scheduling our workloads:

  • Slice up our hosts into several LXD Kubernetes workers and see if pinning CPUs for the workers can help us;
  • Include a “burst” option with the native Kubernetes resource primitives, and see if that can help maximise compute throughput in our cluster.

Let’s see how these compare!

TL;DR

You don’t all have the time to read the whole thing, so in a nutshell:

  • If you always allocate less than 1 CPU to your pods, concurrency doesn’t impact CPU-bound performance;
  • If you know in advance your max concurrency and it is not too high, then adding more workers with LXD and CPU pinning them always gets you better performance than native scheduling via Docker;
  • The winning strategy is always to super provision CPU limits to the max so that every bit of performance is allocated instantly to your pods.

Note: these results are in AWS, where there is a hypervisor between the metal and the units. I am waiting for hardware with enough cores to complete the task. If you have hardware you’d like to throw at this, be my guest and I’ll help you run the tests.

The Plan

In this blog post, we will do the following:

  • Set up various Kubernetes clusters: pure bare metal, pure cloud, and in LXD containers with strict CPU allocation.
  • Design a minimalistic Helm chart to easily create parallelism
  • Run benchmarks to scale concurrency (up to 32 threads/node)
  • Extract and process logs from these runs to see how concurrency impacts performance per core

Requirements

For this blog post, it is assumed that:

  • You are familiar with Kubernetes
  • You have notions of Helm charting or of Go Templates, as well as using Helm to deploy stuff
  • Having preliminary knowledge of the Canonical Distribution of Kubernetes (CDK) is a plus, but not required.
  • You have downloaded the code for this post:
git clone https://github.com/madeden/blogposts
cd blogposts/k8s-transcode

Methodology

Our benchmark is a transcoding task. It uses an ffmpeg workload, designed to minimize encoding time by exhausting all the allocated compute resources as fast as possible. We use a single video for the encoding, so that all transcoding tasks can be compared. To minimize bottlenecks other than pure compute, we use a relatively low bandwidth video, stored locally on each host.

The transcoding job is run multiple times, with the following variations:

  • CPU allocation from 0.1 to 7 CPU Cores
  • Memory from 0.5 to 8GB RAM
  • Concurrency from 1 to 32 concurrent threads per host
  • (Concurrency * CPU Allocation) never exceeds the number of cores of a single host

We measure for each pod how long the encoding takes, then look at correlations between that and our variables.

Charting a simple transcoder

Transcoding with ffmpeg and Docker

When I want to do something with a video, the first thing I do is call my friend Ronan. He knows everything about everything for transcoding (and more)!

So I asked him something pretty straightforward: I want the most CPU intensive ffmpeg transcoding one liner you can think of.

He came back (in less than 30 minutes) with not only the one-liner but also a very neat Docker image for it; kudos to Julien for making this. Altogether you get:


docker run --rm -v $PWD:/tmp jrottenberg/ffmpeg:ubuntu \
  -i /tmp/source.mp4 \
  -stats -c:v libx264 \
  -s 1920x1080 \
  -crf 22 \
  -profile:v main \
  -pix_fmt yuv420p \
  -threads 0 \
  -f mp4 -ac 2 \
  -c:a aac -b:a 128k \
  -strict -2 \
  /tmp/output.mp4

The key to this setup is -threads 0, which tells ffmpeg that it’s an all-you-can-eat buffet.
For test videos, HD Trailers or the Sintel trailers are great sources. I’m using a 1080p mp4 trailer as the source.
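
If you want to reproduce this locally, the same Sintel trailer used later in this post can be fetched as the source file:

wget https://download.blender.org/durian/trailer/sintel_trailer-1080p.mp4 -O source.mp4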

Helm Chart

Transcoding maps directly to the notion of Job in Kubernetes. Jobs are batch processes that can be orchestrated very easily, and configured so that Kubernetes will not restart them when the job is done. The equivalent to Deployment Replicas is Job Parallelism.

To add concurrency, I initially used this notion. It proved a bad approach, making it more complicated than necessary to analyze the output logs. So I built a chart that creates many (numbered) jobs, each running a single pod, so I can easily track them and their logs.


{{- $type := .Values.type -}}
{{- $parallelism := .Values.parallelism -}}
{{- $cpu := .Values.resources.requests.cpu -}}
{{- $memory := .Values.resources.requests.memory -}}
{{- $requests := .Values.resources.requests -}}
{{- $multiSrc := .Values.multiSource -}}
{{- $src := .Values.defaultSource -}}
{{- $burst := .Values.burst -}}
---
{{- range $job, $nb := until (int .Values.parallelism) }}
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ $type | lower }}-{{ $parallelism }}-{{ $cpu | lower }}-{{ $memory | lower }}-{{ $job }}
spec:
  parallelism: 1
  template:
    metadata:
      labels:
        role: transcoder
    spec:
      containers:
      - name: transcoder-{{ $job }}
        image: jrottenberg/ffmpeg:ubuntu
        args: [
          "-y",
          "-i", "/data/{{ if $multiSrc }}source{{ add 1 (mod 23 (add 1 (mod $parallelism (add $job 1)))) }}.mp4{{ else }}{{ $src }}{{ end }}",
          "-stats",
          "-c:v",
          "libx264",
          "-s", "1920x1080",
          "-crf", "22",
          "-profile:v", "main",
          "-pix_fmt", "yuv420p",
          "-threads", "0",
          "-f", "mp4",
          "-ac", "2",
          "-c:a", "aac",
          "-b:a", "128k",
          "-strict", "-2",
          "/data/output-{{ $job }}.mp4"
        ]
        volumeMounts:
        - mountPath: /data
          name: hostpath
        resources:
          requests:
{{ toYaml $requests | indent 12 }}
          limits:
            cpu: {{ if $burst }}{{ max (mul 2 (atoi $cpu)) 8 | quote }}{{ else }}{{ $cpu }}{{ end }}
            memory: {{ $memory }}
      restartPolicy: Never
      volumes:
      - name: hostpath
        hostPath:
          path: /mnt
---
{{- end }}

The values.yaml file that goes with this is very very simple:


# Number of // tasks
parallelism: 8
# Separator name
type: bm
# Do we want several input files
# if yes, the chart will use source${i}.mp4 with up to 24 sources
multiSource: false
# If not multi source, name of the default file
defaultSource: sintel_trailer-1080p.mp4
# Do we want to burst. If yes, resource limit will double request.
burst: false
resources:
  requests:
    cpu: "4"
    memory: 8Gi
  max:
    cpu: "25"

That’s all you need. Of course, all sources are in the repo for your usage, you don’t have to copy paste this.

Creating test files

Now we need to generate a LOT of values.yaml files to cover many use cases. The reachable values will vary depending on your context. My home cluster has 6 workers with 4 cores and 32GB RAM each, so I used:

  • 1, 6, 12, 18, 24, 48, 96 and 192 concurrent jobs (up to 32/worker)
  • reverse that for the CPUs (from 3 to 0.1 in case of parallelism=192)
  • 1 to 16GB RAM

In the cloud, I had 16 core workers with 60GB RAM, so I did the tests only on 1 to 7 CPU cores per task.

I didn’t do anything clever here, just a few bash loops to generate all my tasks. They are in the repo if needed.
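
For reference, a minimal sketch of such a loop (the real scripts live in the repo; the file names here follow the values-${para}-${TYPE}-${cpu}-${memory}.yaml pattern used by the run script later on):

TYPE=bm
mkdir -p values
for para in 1 6 12 18 24 48 96 192; do
  for cpu in 0.1 0.5 1 2 3; do
    for memory in 1 2 4; do
      # one values file per combination, fed later to helm install --values
      cat > values/values-${para}-${TYPE}-${cpu}-${memory}.yaml <<EOF
parallelism: ${para}
type: ${TYPE}
multiSource: false
defaultSource: sintel_trailer-1080p.mp4
burst: false
resources:
  requests:
    cpu: "${cpu}"
    memory: ${memory}Gi
EOF
    done
  done
done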

Deploying Kubernetes

MAAS / AWS

The method to deploy on MAAS is the same as I described in my previous blog post about a DIY GPU cluster. Once you have MAAS installed and Juju configured to talk to it, you can adapt and use the bundle file in src/juju/ via:

juju deploy src/juju/k8s-maas.yaml

For AWS, use the k8s-aws.yaml bundle, which specifies c4.4xlarge as the default instance type. When the deployment is done, download the source video to each worker, fetch the configuration for kubectl, and initialize Helm with:

juju show-status kubernetes-worker-cpu --format json | \
jq --raw-output '.applications."kubernetes-worker-cpu".units | keys[]' | \
xargs -I UNIT juju ssh UNIT "sudo wget https://download.blender.org/durian/trailer/sintel_trailer-1080p.mp4 -O /mnt/sintel_trailer-1080p.mp4"
juju scp kubernetes-master/0:config ~/.kube/config
helm init

Variation for LXD

LXD on AWS is a bit special, because of the network. It breaks some of the primitives that are frequently used with Kubernetes such as the proxying of pods, which have to go through 2 layers of networking instead of 1. As a result,

  • kubectl proxy doesn’t work out of the box
  • more importantly, helm doesn’t work because it consumes a proxy to the Tiller pod by default
  • However, transcoding doesn’t require network access but merely a pod doing some work on the file system, so that is not a problem.

The least expensive path I found to resolve the issue is to deploy a specific node that is NOT in LXD but a “normal” VM or machine. This node is labeled as a control plane node, and we modify the deployments for tiller-deploy and kubernetes-dashboard to force them onto it. Making this node small enough ensures no transcoding ever gets scheduled on it.

I could not find a way to fully automate this, so here is a sequence of actions to run:

juju deploy src/juju/k8s-lxd-c-.yaml

This deploys the whole thing and you need to wait until it’s done for the next step. Closely monitor juju status until you see that the deployment is OK, but flannel doesn’t start (this is expected, no worries).

Then the LXD profile for each LXD node must be adjusted to allow nested containers. In the near future (roadmapped for Juju 2.3), Juju will gain the ability to declare the profiles it wants to use for LXD hosts. But for now, we need to do that manually:

NB_CORES_PER_LXD=4 #This is the same number used above to deploy
for MACHINE in 1 2
do
./src/bin/setup-worker.sh ${MACHINE} ${NB_CORES_PER_LXD}
done

If you’re watching juju status, you will see that flannel suddenly starts working. All good! Now download the configuration for kubectl and initialize Helm with:


juju scp kubernetes-master/0:config ~/.kube/config
helm init

We need to identify the worker that is not an LXD container, then label it as our control plane node:

kubectl label $(kubectl get nodes -o name | grep -v lxd) controlPlane=true
kubectl label $(kubectl get nodes -o name | grep lxd) computePlane=true

Now this is where it becomes manual: we need to edit, successively, rc/monitoring-influxdb-grafana-v4, deploy/heapster-v1.2.0.1, deploy/tiller-deploy and deploy/kubernetes-dashboard, to add

nodeSelector:
  controlPlane: "true"

in the pod template spec of each manifest. Use

kubectl edit -n kube-system rc/monitoring-influxdb-grafana-v4
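
If you prefer to avoid the interactive editor, the same nodeSelector can in principle be applied non-interactively with kubectl patch; this is a sketch for the tiller deployment only (repeat for the other objects, using rc instead of deploy for the InfluxDB/Grafana replication controller):

kubectl patch -n kube-system deploy tiller-deploy \
  -p '{"spec":{"template":{"spec":{"nodeSelector":{"controlPlane":"true"}}}}}'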

After that, the cluster is ready to run!

Running transcoding jobs

Starting jobs

We have a lot of tests to run, and we do not want to spend too long managing them, so we build a simple automation around them:

cd src
TYPE=aws
CPU_LIST="1 2 3"
MEM_LIST="1 2 3"
PARA_LIST="1 4 8 12 24 48"
for cpu in ${CPU_LIST}; do
  for memory in ${MEM_LIST}; do
    for para in ${PARA_LIST}; do
      [ -f values/values-${para}-${TYPE}-${cpu}-${memory}.yaml ] && \
        { helm install transcoder --values values/values-${para}-${TYPE}-${cpu}-${memory}.yaml
          sleep 60
          while [ "$(kubectl get pods -l role=transcoder | wc -l)" -ne "0" ]; do
           sleep 15
          done
        }
     done
  done
done

This will run the tests about as fast as possible. Adjust the variables to fit your local environment.

First approach to Scheduling

Without any tuning or configuration, Kubernetes does a decent job of spreading the load over the hosts. Essentially, all jobs being equal, it spreads them round-robin across all nodes. Below is what we observe for a concurrency of 12.

NAME READY STATUS RESTARTS AGE IP NODE
bm-12-1-2gi-0-9j3sh 1/1 Running 0 9m 10.1.70.162 node06
bm-12-1-2gi-1-39fh4 1/1 Running 0 9m 10.1.65.210 node07
bm-12-1-2gi-11-261f0 1/1 Running 0 9m 10.1.22.165 node01
bm-12-1-2gi-2-1gb08 1/1 Running 0 9m 10.1.40.159 node05
bm-12-1-2gi-3-ltjx6 1/1 Running 0 9m 10.1.101.147 node04
bm-12-1-2gi-5-6xcp3 1/1 Running 0 9m 10.1.22.164 node01
bm-12-1-2gi-6-3sm8f 1/1 Running 0 9m 10.1.65.211 node07
bm-12-1-2gi-7-4mpxl 1/1 Running 0 9m 10.1.40.158 node05
bm-12-1-2gi-8-29mgd 1/1 Running 0 9m 10.1.101.146 node04
bm-12-1-2gi-9-mwzhq 1/1 Running 0 9m 10.1.70.163 node06

The same spread is observed for larger concurrencies as well, and at 192 we see 32 jobs per host in every case. Some screenshots of KubeUI and Grafana from my tests:

Jobs in parallel: KubeUI showing 192 concurrent pods
Half a day of testing: compute cycles at different concurrencies
LXD fencing CPUs: LXD pinning Kubernetes workers to CPUs
K8s full usage: ouch! About 100% on the whole machine

Collecting and aggregating results

Raw Logs

This is where it becomes a bit tricky. We could use an ELK stack and extract the logs there, but I couldn’t find a way to make it really easy to measure our KPIs.
Looking at what Docker does in terms of logging, you need to go on each machine and look into /var/lib/docker/containers//-json.log
Here we can see that each job generates exactly 82 lines of log, but only some of them are interesting:

  • First line: gives us the start time of the log
{"log":"ffmpeg version 3.1.2 Copyright (c) 2000-2016 the FFmpeg developers\n","stream":"stderr","time":"2017-03-17T10:24:35.927368842Z"}
  • Line 13: name of the source
{"log":"Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/data/sintel_trailer-1080p.mp4':\n","stream":"stderr","time":"2017-03-17T10:24:35.932373152Z"}
  • Last line: end of transcoding timestamp
{"log":"[aac @ 0x3a99c60] Qavg: 658.896\n","stream":"stderr","time":"2017-03-17T10:39:13.956095233Z"}

For advanced performance geeks, line 64 also gives us the transcode speed per frame, which can help profile the complexity of the video. For now, we don’t really need that.

Mapping to jobs

The raw log is named only with a Docker UUID, which does not help us much in understanding which job it relates to. Kubernetes gracefully creates links in /var/log/containers/ mapping the pod names to the Docker UUIDs:

bm-1–0.8–1gi-0-t8fs5_default_transcoder-0-a39fb10555134677defc6898addefe3e4b6b720e432b7d4de24ff8d1089aac3a.log

So here is what we do:
  1. Collect the list of logs on each host:
for i in $(seq 0 1 ${MAX_NODE_ID}); do
  [ -d stats/node0${i} ] || mkdir -p stats/node0${i}
  juju ssh kubernetes-worker-cpu/${i} "ls /var/log/containers | grep -v POD | grep -v 'kube-system'" > stats/node0${i}/links.txt
  juju ssh kubernetes-worker-cpu/${i} "sudo tar cfz logs.tgz /var/lib/docker/containers"
  juju scp kubernetes-worker-cpu/${i}:logs.tgz stats/node0${i}/
  cd stats/node0${i}/
  tar xfz logs.tgz --strip-components=5 -C ./
  rm -rf config.v2.json host* resolv.conf* logs.tgz var shm
  cd ../..
done

2. Extract the important log lines (adapt per environment for the number of nodes…)

ENVIRONMENT=lxd
MAX_NODE_ID=1
echo "Host,Type,Concurrency,CPU,Memory,JobID,PodID,JobPodID,DockerID,TimeIn,TimeOut,Source" | tee ../db-${ENVIRONMENT}.csv
for node in $(seq 0 1 ${MAX_NODE_ID}); do
  cd node0${node}
  while read line; do
    echo "processing ${line}"
    NODE="node0${node}"
    CSV_LINE="$(echo ${line} | head -c-5 | tr '-' ',')" # note: it's -c-6 for logs from bare metal or aws, -c-5 for lxd
    UUID="$(echo ${CSV_LINE} | cut -f8 -d',')"
    JSON="$(sed -ne '1p' -ne '13p' -ne '82p' ${UUID}-json.log)"
    TIME_IN="$(echo $JSON | jq --raw-output '.time' | head -n1 | xargs -I {} date --date='{}' +%s)"
    TIME_OUT="$(echo $JSON | jq --raw-output '.time' | tail -n1 | xargs -I {} date --date='{}' +%s)"
    SOURCE=$(echo $JSON | grep from | cut -f2 -d"'")
    echo "${NODE},${CSV_LINE},${TIME_IN},${TIME_OUT},${SOURCE}" | tee -a ../../db-${ENVIRONMENT}.csv
  done < links.txt
  cd ..
done

Once we have all the results, we load them into a Google Spreadsheet and look into the results…

Results Analysis

Impact of Memory

Once the allocation is above what is necessary for ffmpeg to transcode a video, memory is, to a first approximation, a non-impacting variable. At a second level of detail, we can see a slight increase in performance, in the range of 0.5 to 1%, between 1 and 4GB allocated. Nevertheless, this factor was not taken into account.

Influence of RAM
RAM does not impact performance (or only marginally)

Impact of CPU allocation & Pinning

Regardless of the deployment method (AWS or Bare Metal), there is a change in behavior when allocating less or more than 1 CPU “equivalent”.

Being below or above the line

Running CPU allocation under 1 gives the best consistency across the board. The graph shows that the variations are contained, and what we see is an average variation of less than 4% in performance.


Running jobs with a CPU request below 1: low CPU per pod gives low influence of concurrency

Interestingly, the heatmap shows that the worst performance is reached when (Concurrency * CPU count) ~ 1. I don't know how to explain that behavior. Ideas?

Heat map, CPU lower than 1
If the total CPU is about 1, the performance is the worst.

Being above the line

As soon as you allocate more than one CPU, concurrency directly impacts performance. Regardless of the allocation, there is an impact, with a concurrency of 3.5 leading to about a 10 to 15% penalty. Using more workers with fewer cores increases the impact, up to 40-50% at high concurrency.

As the graphs show, not all concurrencies are made equal. The graphs below show duration as a function of concurrency for various setups.

AWS with or without LXD, 2 cores / job
AWS with or without LXD, 4 cores and 5 cores / job

When concurrency is low and the performance is well profiled, slicing hosts up with LXD CPU pinning is always a valid strategy.

By default, LXD CPU-pinning in this context will systematically outperform the native scheduling of Docker and Kubernetes. It seems a concurrency of 2.5 per host is the point where Kubernetes allocation becomes more efficient than forcing the spread via LXD.

However, unbounding CPU limits for the jobs will let Kubernetes use everything it can at any point in time, and result in an overall better performance.

When using this last strategy, the performance is the same regardless of the number of cores requested for the jobs. The below graph summarizes all results:

AWS: duration as a function of concurrency
All results: unbounding CPU cores homogenizes performance

Impact of concurrency on individual performance

Concurrency impacts performance. The below table shows the % of performance lost because of concurrency, for various setups.

Performance penalty as a function of concurrency
Performance is impacted by 10 to 20% when concurrency is 3 or more

Conclusion

In the context of transcoding or any other CPU-intensive task:

  • If you always allocate less than 1 CPU to your pods, concurrency doesn’t impact CPU-bound performance; Still, be careful about the other aspects. Our use case doesn’t depend on memory or disk IO, yours could.
  • If you know in advance your max concurrency and it is not too high, then adding more workers with LXD and CPU pinning them always gets you better performance than native scheduling via Docker. This has other interesting properties, such as dynamic resizing of workers with no downtime, and very fast provisioning of new workers. Essentially, you get a highly elastic cluster for the same number of physical nodes. Pretty awesome.
  • The winning strategy is always to super provision CPU limits to the max so that every bit of performance is allocated instantly to your pods. Of course, this cannot work in every environment, so be careful when using this, and test if it fits with your use case before applying in production.

These results are in AWS, where there is a hypervisor between the metal and the units. I am waiting for hardware with enough cores to complete the task. If you have hardware you’d like to throw at this, be my guest and I’ll help you run the tests.

Finally and to open up a discussion, a next step could also be to use GPUs to perform this same task. The limitation will be the number of GPUs available in the cluster. I’m waiting for some new nVidia GPUs and Dell hardware, hopefully I’ll be able to put this to the test.

There are some unknowns that I wasn’t able to sort out. I made the result dataset of ~3000 jobs open here, so you can run your own analysis! Let me know if you find anything interesting!

27 March, 2017 04:46PM

Ubuntu Insights: Bare Metal Server Provisioning is Evolving the HPC Market

Globe picture: Andrius Aleksandravičius

In the early days of High Performance Computing (HPC), ‘Big Data’ was just called ‘Data’ and organizations spent millions of dollars to buy mainframes or large data processing/warehousing systems just to gain incremental improvements in the manipulation of information. Today, IT Pros and systems administrators are under more pressure than ever to make the most of these legacy bare metal hardware investments. However, with more and more compute workloads moving to the public cloud, and the natural pressure to do more with less, IT pros are finding it difficult to find balance with existing infrastructure and the new realities of the cloud.  Until now, these professionals have not found the balance needed to achieve more efficiency while using what they already have in-house.

Businesses have traditionally made significant investments in hardware. However, as the cloud has disrupted traditional business models, IT Pros needed to find a way to combine the flexibility of the cloud with the power and security of their bare metal servers or internal hardware infrastructure. Canonical’s MAAS (Metal as a Service) solution allows IT organizations to discover, commission, and (re)deploy bare metal servers within most operating system environments like Windows, Linux, etc. As new services and applications are deployed, MAAS can be used to dynamically re-allocate physical resources to match workload requirements. This means organizations can deploy both virtual and physical machines across multiple architectures and virtual environments, at scale.

MAAS improves the lives of IT Pros!

MAAS was designed to make complex hardware deployments faster, more efficient, and more flexible. One of the key areas where MAAS has found significant success is in High Performance Computing (HPC) and Big Data. HPC relies on aggregating computing power to solve large data-centric problems in subjects like banking, healthcare, engineering, business, science, etc. Many large organizations are leveraging MAAS to modernize their OS deployment toolchain (a set of tool integrations that support development, deployment, and operations tasks) and lower server provisioning times.

These organizations found their tools were outdated, which prevented them from deploying large numbers of servers. Server deployments were slow, monolithic, and could not integrate with tools, drivers, and APIs. By deploying MAAS they were able to speed up their server deployment times as well as integrate with their orchestration platform and configuration management tools like Chef, Ansible, and Puppet, or software modeling solutions like Canonical’s Juju.

For example, financial institutions are using MAAS to deploy Windows servers in their data centre during business hours to support applications and employee systems. Once the day is done, they use MAAS to redeploy the data centre server infrastructure as Ubuntu Servers and perform batch processing and transaction settlement for the day’s activities. In the traditional HPC world, these processes would take days or weeks to perform, but with MAAS, these organizations are improving their efficiency and reducing infrastructure costs by using existing hardware, while gaining the ability to close out the day’s transactions faster and more efficiently, giving financial executives more time to spend with their families and bragging rights at cocktail parties.

HPC is just one great use case for MAAS where companies can recognize immediate value from their bare metal hardware investments. Over the next few weeks we will go deeper into the various use cases for MAAS, but in the meantime, we invite you to try MAAS for yourself on any of the major public clouds using Conjure Up.

If you would like to learn more about MAAS or see a demo, contact us directly.

27 March, 2017 04:29PM

Ubuntu Insights: Video tutorial: learn how to install MAAS

This short video offers step-by-step instructions on how to install MAAS (Metal as a Service) to your machine. Before you start you’ll need:

  • One small server for MAAS and at least one server which can be managed with a BMC.
  • It is recommended to have the MAAS server provide DHCP and DNS on a network the managed machines are connected to.


Metal as a Service (MAAS) gives you automated server provisioning and easy network setup for your physical servers for amazing data centre operational efficiency — on premise, open source and supported.

If you want to learn more about MAAS or schedule a demo with one of our experts, please get in touch.

27 March, 2017 02:34PM

Stephan Adig: SREs Needed (Berlin Area)

SREs Needed (Berlin Area)

We are looking for skilled people for SRE / DevOPS work.

So without further ado, here is the job offering :)

SRE / DevOps

Do you want to be part of an engineering team that focuses on building solutions that maximize the use of emerging technologies to transform our business and achieve superior value and scalability? Do you want a career opportunity that combines your skills as an engineer and passion for video gaming? Are you fascinated by the technologies behind the internet and cloud computing? If so, join us!

As a part of Sony Computer Entertainment, Gaikai is leading the cloud gaming revolution, putting console-quality video games on any device, from TVs to consoles to mobile devices and beyond.

Our SRE's focus is on three things: overall ownership of production, production code quality, and deployments.

The successful candidate will be self-directed and able to participate in the decision-making process at various levels.

We expect our SREs to have opinions on the state of our service, and provide critical feedback during various phases of the operational lifecycle. We are engaged throughout the S/W development lifecycle, ensuring the operational readiness and stability of our service.

Requirements

Minimum of 5+ years' working experience in a Software Development and/or Linux Systems Administration role.
Strong interpersonal, written and verbal communication skills.
Available to participate in a scheduled on-call rotation.

Skills & Knowledge

Proficient as a Linux Production Systems Engineer, with experience managing large scale Web Services infrastructure.
Development experience in one or more of the following programming languages:

  • Python (preferred)
  • Bash, Java, Node.js, C++ or Ruby

In addition, experience with one or more of the following:

  • NoSQL at scale (eg Hadoop, Mongo clusters, and/or sharded Redis)
  • Event Aggregation technologies. (eg. ElasticSearch)
  • Monitoring & Alerting, and Incident Management toolsets
  • Virtual infrastructure (deployment and management) at scale
  • Release Engineering (Package management and distribution at scale)
  • S/W Performance analysis and load testing (QA or SDET experience: a plus)

Location

  • Berlin, Germany

Who is hiring?

  • Gaikai / Sony Interactive Entertainment

When you are on LinkedIn, you can directly go and apply for this job.
If you want to (you are not obliged to), you can name me as a referral.

27 March, 2017 12:19PM

SparkyLinux

Switch Sparky testing to stable

This short tutorial shows you how to switch a Sparky installation based on Debian testing “Stretch” to the upcoming Debian stable “Stretch”.

It’s for users who prefer the rock-solid stability of Debian stable over the newer packages provided by Debian testing. You do not have to do this if you want to stay on the testing branch.

1. Change the Debian repository from “testing” to “stretch” (use “stretch”, not “stable”!):
sudo nano /etc/apt/sources.list
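
If you prefer a one-liner over editing the file by hand, the following sed call does the same thing (a sketch; double-check the result, as it blindly replaces every occurrence of the word):

sudo sed -i 's/testing/stretch/g' /etc/apt/sources.list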

2. Change Sparky repository from “testing”:
sudo nano /etc/apt/sources.list.d/sparky-testing.list
deb http://sparkylinux.org/repo/ testing main
deb-src http://sparkylinux.org/repo/ testing main

to “stable” (keep editing the “sparky-testing.list” file!):
deb http://sparkylinux.org/repo/ stable main
deb-src http://sparkylinux.org/repo/ stable main

Do not create a new “sparky-stable.list” manually!

3. Change pinning:
sudo nano /etc/apt/preferences.d/sparky
from:
Package: *
Pin: release o=SparkyLinux,a=testing
Pin-Priority: 1001

and:
Package: *
Pin: release o=SparkyLinux,a=stable
Pin-Priority: -10

to:
Package: *
Pin: release o=SparkyLinux,a=testing
Pin-Priority: 500

and:
Package: *
Pin: release o=SparkyLinux,a=stable
Pin-Priority: 1001

4. Refresh package list:
sudo apt-get update

5. Upgrade/install ‘sparky-apt’ package (it has to come from Sparky ‘stable’ repos):
sudo apt-get install sparky-apt

This operation downgrades the ‘sparky-apt’ and ‘sparky-core’ packages to version 4~xxxxxxxx.
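
To double-check that the packages now come from the Sparky stable repository, apt-cache policy can be used (a quick sanity check, not part of the original steps):

apt-cache policy sparky-apt sparky-core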

6. Refresh package list again:
sudo apt-get update

7. Upgrade the system:
sudo apt-get dist-upgrade

That’s all; your Sparky installation is now ready for the upcoming Debian stable “Stretch”.


 

27 March, 2017 11:23AM by pavroo

Ubuntu developers

Timo Aaltonen: Mesa 17.0.2 for 16.04 & 16.10

Hi, Mesa 17.0.2 backports can now be installed from the updates ppa. Have fun testing, and feel free to file any bugs you find using ‘apport-bug mesa’.
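
For anyone who has not enabled it yet, and assuming the "updates ppa" refers to the X-SWAT updates PPA (an assumption on my part; check the PPA page if unsure), enabling it and pulling in the new Mesa looks like:

sudo add-apt-repository ppa:ubuntu-x-swat/updates
sudo apt-get update && sudo apt-get dist-upgrade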


27 March, 2017 08:13AM

March 26, 2017

Nathan Haines: Winners of the Ubuntu 17.04 Free Culture Showcase

Spring is here and the release of Ubuntu 17.04 is just around the corner. I've been using it for two weeks and I can't say I'm disappointed! But one new feature that never disappoints me is the appearance of the community wallpapers that were selected from the Ubuntu Free Culture Showcase!

Every cycle, talented artists around the world create media and release it under licenses that encourage sharing and adaptation. For Ubuntu 17.04, 96 images were submitted to the Ubuntu 17.04 Free Culture Showcase photo pool on Flickr, where all eligible submissions can be found.

But now the results are in, the top choices have been voted on by members of the Ubuntu community, and I'm proud to announce the winning images that will be included in Ubuntu 17.04:

A big congratulations to the winners, and thanks to everyone who submitted a wallpaper. You can find these wallpapers (along with dozens of other stunning wallpapers) today at the links above, or in your desktop wallpaper list after you upgrade or install Ubuntu 17.04 on April 13th.

26 March, 2017 08:35AM

March 25, 2017

ARMBIAN

Tinkerboard

There are no legacy or mainline builds for this board yet.

Preparation

Make sure you have a good & reliable SD card and a proper power supply. Archives can be uncompressed with 7-Zip on Windows, Keka on OS X and 7z on Linux (apt-get install p7zip-full). RAW images can be written with Etcher (all OS).

How to boot?

Insert the SD card into the slot and power the board. The first boot takes around 3 minutes; then it might reboot, and you will need to wait another minute to log in. This delay is because the system creates a 128MB emergency SWAP and expands the SD card filesystem to its full capacity. A worst-case scenario boot (with DHCP) takes up to 35 seconds.

How to login?

Log in as root on the HDMI / serial console or via SSH, using the password 1234. You will be prompted to change this password at first login. Next you will be asked to create a normal user account that is sudo enabled (beware of the default QWERTY keyboard settings at this stage).

How to connect to your router via WIFI?

Required condition: a board with onboard or supported 3rd party wireless adapter on USB

nmtui-connect YOUR_ROUTER_SSID

Nightly desktop 4.4.55
Warning: nightly downloads are automated untested builds and no end user support is provided for them!
Updating from nightly repository?

sed -i "s/apt/beta/" /etc/apt/sources.list.d/armbian.list
apt-get update
apt-get upgrade
When a bug is found?

Warning: you are entering developers area and things can be completely broken. Don’t use this in production.
default, vendor provided / legacy (3.4.x – 4.4.x)
To make sure you won’t run into conflicts within newly installed packages, remove them all before upgrade:

 aptitude remove ~nlinux-dtb ~nlinux-u-boot ~nlinux-image ~nlinux-headers 
aptitude remove ~nlinux-firmware ~narmbian-firmware ~nlinux-$(lsb_release -cs)-root

Proceed with install:

apt-get install linux-image-rockchip linux-headers-rockchip 
apt-get install linux-u-boot-tinkerboard-default linux-$(lsb_release -cs)-root-tinkerboard 
apt-get install armbian-firmware sunxi-tools swconfig a10disp
reboot
dev – development (4.x)
To make sure you won’t run into conflicts within newly installed packages, remove them all before upgrade:

 aptitude remove ~nlinux-dtb ~nlinux-u-boot ~nlinux-image ~nlinux-headers 
aptitude remove ~nlinux-firmware ~narmbian-firmware ~nlinux-$(lsb_release -cs)-root

Proceed with install:

apt-get install linux-image-dev-rockchip linux-dtb-dev-rockchip linux-headers-dev-rockchip 
apt-get install linux-u-boot-tinkerboard-dev linux-$(lsb_release -cs)-root-dev-tinkerboard 
apt-get install armbian-firmware sunxi-tools swconfig a10disp
reboot

25 March, 2017 06:39AM by igorpecovnik

Ubuntu developers

Simon Quigley: What I've Been Up To

It's been a long time since I've blogged about anything. I've been busy with lots of different things. Here's what I've been up to.

Lubuntu

First off, let's talk about Lubuntu. A couple of different actions (or lack thereof) have happened.

Release Management

Walter Lapchynski recently passed on the position of Lubuntu Release Manager to me. He has been my mentor ever since I joined Ubuntu/Lubuntu in July of 2015, and I'm honored to be able to step up to this role.

Here's what I've done as Release Manager from then to today:

Sunsetted Lubuntu PowerPC daily builds for Lubuntu Zesty Zapus.

This was something we had been looking at for a while, and it just happened to happen immediately after I became Release Manager. It wasn't really our hand pushing this forward, per se. The Ubuntu Technical Board voted to end the PowerPC architecture in the Ubuntu archive for Zesty before this, and I thought it was a good time to carry this forward for Lubuntu.

Helped get Zesty Zapus Beta 1 out the door for Ubuntu and for Lubuntu as well.

Discussed Firefox and ALSA's future in Ubuntu.

When Firefox 52 was released into xenial-updates, it broke Firefox's sound functionality for Lubuntu 16.04 users, as Lubuntu 16.04 LTS uses ALSA, and despite what a certain Ubuntu site says, this was because ALSA support was disabled in the default build of Firefox, not completely removed. I won't get into it (I don't want to start a flame war), but this wasn't really something Lubuntu messed up, as the original title (and content) of the article ("Lubuntu users are left with no sound after upgrading Firefox") implied.

I recently brought this up for discussion (I didn't know the part I just mentioned when I sent the email linked above), and for the time being it will be re-enabled in the Ubuntu build. As we continue to update to future Firefox releases, this will result in bitrot, so eventually we will need to switch away from Firefox.

I'm personally against switching to Chromium, as it's not lightweight and it's a bit bloated. I have also recently started using Firefox, and it's been a lot faster for me than Chromium was. But, that's a discussion for another day, and within the next month, I will most likely bring it up for discussion on the lubuntu-devel mailing list. I'll probably write another blog post when I send that email, but we'll see.

Got Zesty Zapus Final Beta/Beta 2 out the door for Lubuntu.

LXQt

Lubuntu 17.04 will not ship with LXQt.

That's basically the bottom line here. ;)

I've been working to start a project on Launchpad that will allow me to upload packages to a PPA and have it build an image from that, but I'm still waiting to hear back on a few things for that.

You may be asking, "So why don't we have LXQt yet?" The answer to that question is, I've been busy with a new job, school, and many other things in life and haven't gotten the chance to heavily work on it much. I have a plan in mind, but it all depends on my free time from this point on.

That being said, if you want to get involved, please don't be afraid to send an email to the Lubuntu developers mailing list. We're all really friendly, and we'll be very willing to get you started, no matter your skill level. This is exactly the reason why LXQt is taking so long. It's because I'm pretty much the only one working on this specific subproject, and I don't have all the time in the world.

Donations

While this isn't specifically highlighting any work I've done in this area, I'd like to provide some information on this.

Lubuntu has been looking for a way to accept donations for a long time. Donations to Lubuntu would help fund:

  • Work on Lubuntu (obviously).
  • Work on upstream projects we use and install by default (LXQt, for example, in the future).
  • Travel to conferences for Lubuntu team members.
  • Much more...

A goal that I specifically have with this is to be as transparent as possible about any donations we receive and specifically where they go. But, we have to get it set up first.

While I am still a minor in the country I reside in and (most likely) cannot make any legal decisions about funds (yet), Walter has been looking for a lawyer to help sort out something along the lines of a "Lubuntu Foundation" (or something like that) to manage the funds in a way that doesn't give only one person control. So if you know a lawyer (or are one) that would be willing to help us set that up, please contact me or Walter when you can.

Ubuntu Weekly Newsletter

Before Issue 500 of the Ubuntu Weekly Newsletter, Elizabeth K. Joseph (Lyz) was in the driver's seat of the team. She helped get every release out on time every week without fail (with the exception of two-week issues, but that's irrelevant right now). Before I go on, I just want to say a big thank you to Lyz. Your contributions were really appreciated. :)

She had taken the time to show me not only how to write summaries in the very beginning, but how to edit, and even publish a whole issue. I'm very grateful for the time she spent with me, and I can't thank her enough.

Fast forward to 501, I ended up stepping up to get the email sent to summary writers and ultimately the whole issue published. I was nervous, as I had never published an issue on my own (Lyz and I had always split the tasks), but I successfully pressed the right buttons and got it out. Before publishing, I had some help from Paul White (his last issue contributing, thank you as well) and others to get summaries done and the issue edited.

Since then, I've pretty much stepped up to fill in the gaps for Lyz. I wouldn't necessarily consider anything official yet, but for now, this is where I'll stay.

But, it's tough to get issues of UWN out. I have a new respect for everything Lyz did and all of the hard work she put into each issue. This is a short description of what happens each week:

  • Collect issues during the week, put it on the Google Document.
  • On Friday, clean up the doc and send out to summary writers.
  • Over the weekend, people write summaries.
  • On Sunday, it's copied to the wiki, stats are added, and it's sent out to editors.
  • On Monday, it's published.

Wash, rinse, and repeat.

It's incredibly easy to write summaries. In fact, the email was just sent out earlier to summary writers. If you want to take a minute or two (that's all it takes for contributing a summary) to help us out, hop on to the Google Document, refer to the style guidelines linked at the top, and help us out. Then, when you're done, put your name on the bottom if you want to be credited. Every little bit helps!

Other things

About this website

  • I think I can finally implement a comments section so people can leave easy feedback. This is a huge step forward, given that I write the HTML for this website completely from scratch.
  • I wrote a hacky Python script that I can use for writing blog posts. I can just write everything in Markdown, and it will do all the magic bits. I manually inspect it, then just git add, git commit, and git push it.
  • I moved the website to GitLab, and with the help of Thomas Ward, got HTTPS working.

For the future

  • I've been inspired by some of the Debian people blogging about their monthly contributions to FLOSS, so I'm thinking that's what I'll do. It'll be interesting to see what I actually do in a month's time... who knows what I'll find out? :)

So that's what I've been up to. :)

25 March, 2017 01:27AM by Simon Quigley (tsimonq2@ubuntu.com)

March 24, 2017

Nish Aravamudan: [USD #1] Ubuntu Server Dev git Importer

This is the first in a series of posts about the Ubuntu Server Team’s git importer (usd). There is a lot to discuss: why it’s necessary, the algorithm, using the tooling for doing merges, using the tooling for contributing one-off fixes, etc. But for this post, I’m just going to give a quick overview of what’s available and will follow-up in future posts with those details.

The importer was first announced here and then a second announcement was made here. But both those posts are pretty out-of-date now… I have written a relatively current guide to merging which does talk about the tooling here, and much of that content will be re-covered in future blog posts.

The tooling is browse-able here and can be obtained via

git clone https://git.launchpad.net/usd-importer

This will provide a usd command in the local repository’s bin directory. That command resembles git in that it is the launching point for interacting with imported trees, both for importing them and for using them:

usage: usd [-h] [-P PARENTFILE] [-L PULLFILE]
 build|build-source|clone|import|merge|tag ...

Ubuntu Server Dev git tool

positional arguments:
 build|build-source|clone|import|merge|tag
 
 build - Build a usd-cloned tree with dpkg-buildpackage
 build-source - Build a source package and changes file
 clone - Clone package to a directory
 import - Update a launchpad git tree based upon the state of the Ubuntu and Debian archives
 merge - Given a usd-import'd tree, assist with an Ubuntu merge
 tag - Given a usd-import'd tree, tag a commit respecting DEP14

...

More information is available at https://wiki.ubuntu.com/UbuntuDevelopment/Merging/GitWorkflow.

You can run usd locally without arguments to view the full help.
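
As a rough illustration of the intended flow (hedged: the exact argument forms may differ, so check the per-subcommand help), cloning an imported package and building it might look like:

# "nginx" is just an example source package; the clone target directory name is hypothetical
usd clone nginx nginx-git
cd nginx-git
usd build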

Imported trees currently live here. This will probably change in the future as we work with the Launchpad team to integrate the functionality. As you can see, we have 411 repositories currently (as of this post) and that’s a consequence of having the importer running automatically. Every 20 minutes or so, the usd-cron script checks if there are any new publishes of source packages listed in usd-cron-packages.txt in Debian or Ubuntu and runs usd import on them, if so.

I think that’s enough for the first post! Just browsing the code and the imported trees is pretty interesting (running gitk on an imported repository gives you a very interesting visual of Ubuntu development). I’ll dig into details in the next post (probably of many).


24 March, 2017 11:40PM

Nish Aravamudan: (USBSD #1: Goals) Inaugural Ubuntu Server Bug Squashing Day!

As posted on the ubuntu-server mailing list, we had our first Ubuntu Server Bug Squashing Day (USBSD) on Wednesday, March 22, 2017. While we may not have had a large community showing, the event was still a success and there is momentum to make this a regular event going forward (more on that below…). This post is about the goals behind USBSD.

[Throughout the following I will probably refer to users by their IRC nicks. When I know their real names, I will try and use them as well at least once so real-person association is available.]

The intent of the USBSD is two-fold:

  1. The Server Team has a triage rotation for all bugs filed against packages in main, which is purely an attempt to provide adequate responses to ‘important’ bugs: ensuring we have ‘good’ bug reports that are actionable and then putting them onto the Server Team’s queue (via subscribing ~ubuntu-server). The goal of triage is not to solve the bugs; it’s simply to respond and put them on the ‘to-fix’ list (which is visible here). But we don’t want that list to just grow without bound (what good is it to respond to a bug but never fix it?), so we need to dedicate some time to working to get a bug to closure (or at least to the upload/sponsorship stage).
  2. Encourage community-driven ownership of bug-fixes and packages. While Robie Basak (rbasak), Christian Ehrhardt (cpaelzer), Josh Powers (powersj) and myself (nacc) all work for Canonical on the Server Team on the release side of things (meaning merges, bug-fixes, etc), there simply is not enough time in each cycle for the four of us alone to address every bug filed. And it’s not to say that the only developers working on the packages an Ubuntu Server user cares about are the four of us. But from a coordination perspective, for every package in main that is ‘important’ to Ubuntu Server, we are often at least involved. I do not want to diminish by any means any contribution to Ubuntu Server, but it does feel like the broader community contributions have slowed down with recent releases. That might be a good thing ™ in that packages don’t have as many bugs, or it might just be that bugs are getting filed and no one is working on them. By improving our tooling and processes around bugs, we can lower barriers to entry for new contributors and ideally grow ownership and knowledge of packages relevant to Ubuntu Server.

That is a rather long-winded introduction to the goals. Did we meet them?

To the first point, it was a positive experience for those of us working on bugs on the day to have a dedicated place to coordinate and discuss solutions (on IRC at FreeNode/#ubuntu-server as well as on the Etherpad we used [requires authentication and membership in the ~ubuntu-etherpad Launchpad team]). And I believe a handful of bugs were driven to completion.

To the second point, I was not pinged much at all (if at all) during the US business day on USBSD #1. That was a bit disappointing. But I saw that cpaelzer helped a few different users with several classes of bugs and that was awesome to wake up to! He also did a great job of documenting his bugwork/interactions on the Etherpad.

Follow-on posts will talk about ways we can improve and hopefully document some patterns for bugwork that we discover via USBSDs.

In the meanwhile, we’re tentatively scheduling USBSD #2 for April 5, 2017!


24 March, 2017 11:14PM

Emmabuntüs Debian Edition

Release Emmabuntüs Debian Edition 1.02 with Lilo and UEFI included!

On March 20, 2017, the Emmabuntüs Collective is happy to announce the release of the new Emmabuntüs Debian Edition 1.02 (32 and 64 bits), based on Debian 8.7 distribution and featuring the XFCE desktop environment. This distribution was originally designed to facilitate the reconditioning of computers donated to humanitarian organizations, starting with the Emmaüs communities (which is where the distribution’s name obviously comes from), [...]

24 March, 2017 10:20PM by shihtzu

Release of Emmabuntüs Debian Edition 1.02 with Lilo and UEFI included!

The Emmabuntüs Collective is happy to announce the release, on March 20, 2017, of the new Emmabuntüs Debian Edition 1.02 (32 and 64 bits), based on Debian 8.7 and XFCE. This distribution was designed to facilitate the reconditioning of computers donated to humanitarian organizations, originally the Emmaüs communities (which is where its name comes from), to [...]

24 March, 2017 10:09PM by shihtzu

hackergotchi for Ubuntu developers

Ubuntu developers

Valorie Zimmerman: Laptop freezing -- figuring out the issues

Hi all, I have an awesome laptop I bought from my son, a hardcore gamer. So used, but also very beefy and well-cared-for. Lately, however, it has begun to freeze, by which I mean: the screen is not updated, and no keyboard inputs are accepted. So I can't even REISUB; the only cure is the power button.

I like to leave my laptop running overnight for a few reasons -- to get IRC posts while I sleep, to serve *ubuntu ISO torrents, and to run Folding@Home.

Attempting to cure the freezing, I've updated my graphics driver, rolled back to an older kernel, removed my beloved Folding@Home application, turned on the fan overnight, all to no avail. After adding lm-sensors and such, it didn't seem likely to be overheating, but I'd like to be sure about that.

Lately I turned off screen dimming at night and left a konsole window on the desktop running `top`. This morning I found a freeze again, with nothing apparent in the top readout:


So I went looking on the internet and found this super post: Using KSysGuard: System monitor tool for KDE. The first problem was that when I hit Control+Escape, I could not see the System Load tab he mentioned or any way to create a custom tab. However, when I started KSysGuard from the command line, it matched the screenshots in the blog.

Here is my custom tab:


So tonight I'll leave that on my screen along with konsole running `top` and see if there is any more useful information.

24 March, 2017 09:55PM by Valorie Zimmerman (noreply@blogger.com)

Jono Bacon: My Move to ProsperWorks CRM and Feature Requests

As some of you will know, I am a consultant that helps companies build internal and external communities, collaborative workflow, and teams. Like any consultant, I have different leads that I need to manage, I convert those leads into opportunities, and then I need to follow up and convert them into clients.

Managing my time is one of the most critical elements of what I do. I want to maximize my time to be as valuable as possible, so optimizing this process is key. Thus, the choice of CRM has been an important one. I started with Insightly, but it lacked a key requirement: integration.

I hate duplicating effort. I spend the majority of my day living in email, so when a conversation kicks off as a lead or opportunity, I want to magically move that from my email to the CRM. I want to be able to see and associate conversations from my email in the CRM. I want to be able to see calendar events in my CRM. Most importantly, I don’t want to be duplicating content from one place to another. Sure, it might not take much time, but the reality is that I am just going to end up not doing it.

Evaluations

So, I evaluated a few different platforms, with a strong bias toward SalesforceIQ. The main attraction there was the tight integration with my email. The problem with SalesforceIQ is that it is expensive, it has limited integration beyond email, and it gets significantly more expensive when you want more control over your pipeline and reporting. SalesforceIQ has the notion of “lists” where each is kind of like a filtered spreadsheet view. On the basic plan you get one list, but beyond that you have to move up a plan, which gets you more lists but also becomes much more expensive.

As I courted different solutions I stumbled across ProsperWorks. I had never heard of it, but there were a number of features that I was attracted to.

ProsperWorks

Firstly, ProsperWorks really focuses on tight integration with Google services. Now, a big chunk of my business is using Google services. ProsperWorks integrates with Gmail, but also Google Calendar, Google Docs, and other services.

They ship a Gmail plugin which makes it simple to click on a contact and add them to ProsperWorks. You can then create an opportunity from that contact with a single click. Again, this is from my email: this immediately offers an advantage to me.

ProsperWorks CRM

Yep, that’s not my Inbox. It is an image yanked off the Internet.

When viewing each opportunity, ProsperWorks will then show associated Google Calendar events and I can easily attach Google Docs documents or other documents there too. The opportunity is presented as a timeline with email conversations listed there, but then integrated note-taking for meetings, and other elements. It makes it easy to summarize the scope of the deal, add the value, and add all related material. Also, adding additional parties to the deal is simple because ProsperWorks knows about your contacts as it sucks it up from your Gmail.

While the contact management piece is less important to me, it is also nice that it automatically brings in related accounts for each contact, such as Twitter, LinkedIn, pictures, and more. Again, this all reduces the time I need to spend screwing around in a CRM.

Managing opportunities across the pipeline is simple too. I can define my own stages and then it basically operates like Trello and you just drag cards from one stage to another. I love this. No more selecting drop down boxes and having to save contacts. I like how ProsperWorks just gets out of my way and lets me focus on action.

…also not my pipeline view. Thanks again Google Images!

I also love that I can order these stages based on “inactivity”. Because ProsperWorks integrates email into each opportunity, it knows how many inactive days there has been since I engaged with an opportunity. This means I can (a) sort my opportunities based on inactivity so I can keep on top of them easily, and (b) I can set reminders to add tasks when I need to follow up.

ProsperWorks CRM

The focus on inactivity is hugely helpful when managing lots of concurrent opportunities.

As I was evaluating ProsperWorks, there was one additional element that really clinched it for me: the design.

ProsperWorks looks and feels like a Google application. It uses material design, and it is sleek and modern. It doesn’t just look good, but it is smartly designed in terms of user interaction. It is abundantly clear that whoever does the interaction and UX design at ProsperWorks is doing an awesome job, and I hope someone there cuts this paragraph out and shows it to them. If they do, you kick ass!

Of course, ProsperWorks does a load of other stuff that is helpful for teams, but I am primarily assessing this from a sole consultant’s perspective. In the end, I pulled the trigger and subscribed, and I am delighted that I did. It provides a great service, is more cost efficient than the alternatives, provides an integrated solution, and the company looks like they are doing neat things.

Feature Requests

While I dig ProsperWorks, there are some things I would love to encourage the company to focus on. So, ProsperWorks folks, if you are reading this, I would love to see you focus on the following. If some of these already exist, let me know and I will update this post. Consider me a resource here: happy to talk to you about these ideas if it helps.

Wider Google Calendar integration

Currently the gcal integration is pretty neat. One limitation though is that it depends on a gmail.com domain. As such, calendar events where someone invites my jonobacon.com email don’t automatically get added to the opportunity (and dashboard). It would be great to be able to associate another email address with an account (e.g. a gmail.com and a jonobacon.com address) so when calendar events have either or both of those addresses they are sucked into opportunities. It would also be nice to select which calendars are viewed: I use different calendars for different things (e.g. one calendar for booked work, one for prospect meetings, one for personal items, etc.). Feature Request Link

It would also be great to have ProsperWorks be able to ease scheduling calendar meetings in available slots. I want to be able to talk to a client about scheduling a call, click a button in the opportunity, and ProsperWorks will tell me four different options for call times, I can select which ones I am interested in, and then offer these times to the client, where they can pick one. ProsperWorks knows my calendar, this should be doable, and would be hugely helpful. Feature Request Link

Improve the project management capabilities

I have a dream. I want my CRM to also offer simple project management capabilities. ProsperWorks does have a ‘projects’ view, but I am unclear on the point of it.

What I would love to see is simple project tracking which integrates (a) the ability to set milestones with deadlines and key deliverables, and (b) Objectives and Key Results (OKRs). This would be huge: I could agree on a set of work complete with deliverables as part of an opportunity, and then with a single click be able to turn this into a project where the milestones would be added and I could assign tasks, track notes, and even display a burndown chart to see how on track I am within a project. Feature Request Link

This doesn’t need to be a huge project management system, just a simple way of adding milestones, their child tasks, tracking deliverables, and managing work that leads up to those deliverables. Even if ProsperWorks just adds simple Evernote functionality where I can attach a bunch of notes to a client, this would be hugely helpful.

Optimize or Integrate Task Tracking

Tracking tasks is an important part of my work. The gold standard for task tracking is Wunderlist. It makes it simple to add tasks (not all tasks need deadlines), and I can access them from anywhere.

I would love ProsperWorks to either offer that simplicity of task tracking (hit a key, whack in a title for a task, and optionally add a deadline instead of picking an arbitrary deadline that it nags me about later), or integrate with Wunderlist directly. Feature Request Link

Dashboard Configurability

I want my CRM dashboard to be something I look at every day. I want it to tell me what calendar events I have today, which opportunities I need to follow up with, what tasks I need to complete, and how my overall pipeline is doing. ProsperWorks does some of this, but doesn’t allow me to configure this view. For example, I can’t get rid of the ‘Invite Team Members’ box, which is entirely irrelevant to me as an individual consultant. Feature Request Link

So, all in all, nice work, ProsperWorks! I love what you are doing, and I love how you are innovating in this space. Consider me a resource: I want to see you succeed!

UPDATE: Updated with feature request links.

The post My Move to ProsperWorks CRM and Feature Requests appeared first on Jono Bacon.

24 March, 2017 05:13PM

Costales: New (lovely) uNav for Ubuntu Phone

After a few improvements in uNav I'm so proud of the current version, especially of the latest useful feature.

But an image will explain it better: you choose your transport mode, and the orange line is the public transport route :))

New uNav 0.67
Enjoy the freedom on your Ubuntu Phone or tablet!

24 March, 2017 03:03PM by Marcos Costales (noreply@blogger.com)

Luis de Bethencourt: C++ Cheat Sheet

I spend most of my time writing and reading C code, but every once in a while I get to play with a C++ project and find myself doing frequent reference checks to cppreference.com. I wrote myself the most concise cheat sheet I could that still shaved off the majority of those quick checks. Maybe it helps other fellow programmers who occasionally dabble with C++.

class ClassName {
  int priv_member;  // private by default
protected:
  int protect_member;
public:
  ClassName();  // constructor
  int get_priv_mem();  // just prototype of func
  virtual ~ClassName() {} // destructor
};

int ClassName::get_priv_mem() {  // define via scope
  return priv_member;
}

class ChildName : public ClassName, public CanDoMult {
public:
  ChildName() {
    protect_member = 0;
  } ...
};

class Square {
  friend class Rectangle; ... // can access private members
};


Containers: container_type<int>
 list -> linked list
  front(), back(), begin(), end(), {push/pop}_{front/back}(), insert(), erase()
 deque -> double ended queue
  [], {push/pop}_{front/back}(), insert(), erase(), front(), back(), begin()
 queue/stack -> adaptors over deque
  push(), pop(), size(), empty()
  front(), back() <- queue
  top() <- stack
 unordered_map -> hashtable
  [], at(), begin(), end(), insert(), erase(), count(), empty(), size()
 vector -> dynamic array
  [], at(), front(), back(), {push/pop}_back, insert(), erase(), size()
 map -> tree
  [], at(), insert(), erase(), begin(), end(), size(), empty(), find(), count()

 unordered_set -> hashtable just keys
 set -> tree just keys
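
To make the sheet concrete, here is a small, self-contained usage example of a few of the containers listed above (my addition, not part of the original cheat sheet):

#include <iostream>
#include <map>
#include <string>
#include <unordered_map>
#include <vector>

int main() {
    std::vector<int> v {3, 1, 4};
    v.push_back(1);                                        // dynamic array: amortized O(1) append
    std::cout << v.front() << " " << v.back() << "\n";     // prints "3 1"

    std::unordered_map<std::string, int> ages;             // hashtable
    ages["ada"] = 36;
    ages.insert({"grace", 45});
    std::cout << ages.at("ada") << "\n";                   // prints "36"

    std::map<int, std::string> tree {{2, "two"}, {1, "one"}};  // tree, iterates in key order
    for (const auto &kv : tree)
        std::cout << kv.first << "=" << kv.second << " ";  // prints "1=one 2=two"
    std::cout << "\n";
    return 0;
}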

24 March, 2017 01:18PM

hackergotchi for Univention Corporate Server

Univention Corporate Server

Digitalization Package for Schools with Integrated E-Mail and Collaboration Functions

As a leading provider of Open Source-based IT infrastructure solutions, we have now “bundled” a new digitalization package for schools, in the knowledge that only secure, flexible, and user-friendly solutions enable the modernization of IT infrastructure in education that politicians so desperately desire. To this end, we have comprehensively expanded our special solution tailored to the requirements of education authorities, UCS@school, with groupware functions and the web-based office suite from Open-Xchange. The new software package was also on show at this year’s CeBIT.

It supports schools in their digitalization, and we are delighted that the city of Basel, Switzerland, is already implementing it in a pilot project for 20,000 users.

Trustworthy and Cost-Effective School IT in Practice

“Trustworthy and cost-effective school IT is one of the most important technical requirements for the modernization of the education sector – especially if pupils and teaching staff also want to work with their own devices. There are already sophisticated Open Source solutions available, which aid with learning and offer protected spaces for teachers and students,” said Peter Ganten, Managing Director of Univention. “At CeBIT, we demonstrated how this functions in practice and how education authorities can make the software and identity management in their education facilities more professional at the same time.”

Our Cooperation with Open-Xchange

Our cooperation with Open-Xchange allows us to take the smooth and flexible operation of IT infrastructures in schools to a whole new level. Effective immediately, in addition to professional IT management, education authorities will also be able to make e-mail, collaboration, and office software available to all schools. In addition, they also have the choice between on-premises, cloud, and hybrid environments, can integrate Open Source software flexibly, and can incorporate proprietary offerings such as Microsoft Office 365 and Google G Suite without any problems.

The new digitalization package satisfies all the major requirements of school administrative staff, teachers, and pupils at the same time – and it ensures sensitive data are protected too. Education authorities can install additional software at any time via the integrated App Center so as to provide it to their schools in next to no time.

“We support the digital sovereignty of schools and education authorities: With an Open Source framework, education authorities are in a position to decide for themselves where and how they operate services and want to save their data – without having to forgo familiar offerings like Microsoft Office 365 or special software,” said Peter Ganten. “With our solution, we offer users freedom of choice and facilitate the management of IT infrastructures as well as the apps and programs used.”

Statement from Basel

“In Basel, we want to offer our approximately 20,000 members of teaching staff and students functions for e-mail, calendars, and school-specific applications such as allocation of homework and room planning. To this end, we evaluated a variety of different solutions,” said Markus Bäumler of the IT Media department at the Pedagogical Center in Basel (PZ.BS). “Ultimately, we decided on Univention’s digitalization package of UCS@school with Open-Xchange because it satisfies all of our requirements, offers excellent integration in UCS’ central identity management system, and convinced us completely with its price-performance ratio for licenses and extensive support offering.”

Statement from Open-Xchange

“Univention Corporate Server is our recommended operating system platform for educational facilities, authorities, and companies who want to operate the OX App Suite and manage it themselves – either in their own computer center or on a private cloud operated by a trusted service provider,” explained Rafael Laguna, CEO of Open-Xchange. “UCS@school allows IT administrators to ensure reliably at any time that sensitive and personal data are saved in accordance with the strict requirements of German and European data privacy legislation.”

The Expansion of UCS@school

Teachers and students today require an offering with which they can access a wide range of web services and school-specific solutions as well as contents securely from anywhere and with any device – be it for timetables, homework, room bookings, the secure exchange of information and data or protected discussion rooms.

visual ucs@school

The functions of our digitalization package at a glance:

  • Centrally managed IT and identity management which can be used from anywhere with single sign-on
  • Easy-to-use interface for administrators, teachers, and pupils
  • Secure use and integration of mobile devices in the school network
  • Services and expansions such as Open-Xchange or ownCloud allow encrypted e-mail communication, calendars, and address books as well as file exchange and editing of documents.
  • App Center with more than 90 integrated expansions for in-house operation and cloud applications
  • Intelligent rights concept for access to digital learning platforms, IT services, and digital media, functions for the automatic assignment of roles for individuals (teaching staff, pupils, administrators) and groups (schools, classes) for the purpose of setting up mailing lists, for example, or sharing certain folders or documents with individual classes or pupils
  • Data import of pupil data from administration software and simple data maintenance at changeover of school year
  • Self-services for the schools to be able to create new users or reset passwords
  • Compliance with strict German data privacy provisions, with the result that all of the users’ data are saved on the facility’s own system and not on third-party servers
  • Secure use of private smartphones and tablets (BYOD)
  • Interfaces for the integration of learning management systems or school administration software

    visual ucs@school EN

Der Beitrag Digitalization Package for Schools with Integrated E-Mail and Collaboration Functions erschien zuerst auf Univention.

24 March, 2017 12:06PM by Alice Horstmann

hackergotchi for Ubuntu developers

Ubuntu developers

Jo Shields: Mono repository changes, beginning Mono vNext

Up to now, Linux packages on mono-project.com have come in two flavours – RPM built for CentOS 7 (and RHEL 7), and .deb built for Debian 7. Universal packages that work on the named distributions, and anything newer.

Except that’s not entirely true.

Firstly, there have been “compatibility repositories” users need to add, to deal with ABI changes in libtiff, libjpeg, and Apache, since Debian 7. Then there’s the packages for ARM64 and PPC64el – neither of those architectures is available in Debian 7, so they’re published in the 7 repo but actually built on 8.

A large reason for this is difficulty in our package publishing pipeline – apt only allows one version-architecture mix in the repository at once, so I can’t have, say, 4.8.0.520-0xamarin1 built on AMD64 on both Debian 7 and Ubuntu 16.04.

We’ve been working hard on a new package build/publish pipeline, which can properly support multiple distributions, based on Jenkins Pipeline. This new packaging system also resolves longstanding issues such as “can’t really build anything except Mono” and “Architecture: All packages still get built on Jo’s laptop, with no public build logs”.

So, here’s the old build matrix:

Distribution: Architectures
Debian 7: ARM hard float, ARM soft float, ARM64 (actually Debian 8), AMD64, i386, PPC64el (actually Debian 8)
CentOS 7: AMD64

And here’s the new one:

Distribution: Architectures
Debian 7: ARM hard float (v7), ARM soft float, AMD64, i386
Debian 8: ARM hard float (v7), ARM soft float, ARM64, AMD64, i386, PPC64el
Raspbian 8: ARM hard float (v6)
Ubuntu 14.04: ARM hard float (v7), ARM64, AMD64, i386, PPC64el
Ubuntu 16.04: ARM hard float (v7), ARM64, AMD64, i386, PPC64el
CentOS 6: AMD64, i386
CentOS 7: AMD64

The compatibility repositories will no longer be needed on recent Ubuntu or Debian – just use the right repository for your system. If your distribution isn’t listed… sorry, but we need to draw a line somewhere on support, and the distributions listed here are based on heavy analysis of our web server logs and bug requests.

You’ll want to change your package manager repositories to reflect your system more accurately, once Mono vNext is published. We’re debating some kind of automated handling of this, but I’m loath to touch users’ sources.list without their knowledge.

CentOS builds are going to be late – I’ve been doing all my prototyping against the Debian builds, as I have better command of the tooling. Hopefully no worse than a week or two.

24 March, 2017 10:06AM

hackergotchi for Maemo developers

Maemo developers

Making something that is ‘undoable editable’ with Qt

Among the problems we’ll face: we want asynchronous APIs that are undoable; we want to be able to switch between read-only, undoable editing and non-undoable editing; and QML doesn’t really work well with QFuture. At least not yet. We want an interface that is easy to talk to from QML, yet we want to switch between complicated behaviors.

We will also want synchronous mode and asynchronous mode. Because I just invented that requirement out of thin air.

Ok, first the “design”. We see a lot of behaviors, for something that can do something. The behaviors will perform for that something, the actions it can do. That is the strategy design pattern, then. It’s the one about ducks and wing fly behavior and rocket propelled fly behavior and the ostrich that has a can’t fly behavior. For undo and redo, we have the command pattern. We have this neat thing in Qt for that. We’ll use it. We don’t reinvent the wheel. Reinventing the wheel is stupid.

Let’s create the duck. I mean, the thing-editor, as I will use “Thing” for the thing that is being edited. We want copy (sync is sufficient), paste (must be async), and edit (must be async). We could also have insert and delete, but those APIs would be just like edit. Paste is usually similar to insert, of course. Except that it can be a combined delete and insert when overwriting content. The command pattern allows you to make such combinations. Not the purpose of this example, though.

Enough explanation. Let’s start! The ThingEditor is like the flying Duck in strategy. This is going to be more or less the API that we will present to the QML world. It could be your ViewModel, for example (ie. you could let your ThingViewModel subclass ThingEditor).

class ThingEditor : public QObject
{
    Q_OBJECT

    Q_PROPERTY ( ThingEditingBehavior* editingBehavior READ editingBehavior
                 WRITE setEditingBehavior NOTIFY editingBehaviorChanged )
    Q_PROPERTY ( Thing* thing READ thing WRITE setThing NOTIFY thingChanged )

public:
    explicit ThingEditor( QSharedPointer<Thing> &a_thing,
            ThingEditingBehavior *a_editBehavior,
            QObject *a_parent = nullptr );

    explicit ThingEditor( QObject *a_parent = nullptr );

    Thing* thing() const { return m_thing.data(); }
    virtual void setThing( QSharedPointer<Thing> &a_thing );
    virtual void setThing( Thing *a_thing );

    ThingEditingBehavior* editingBehavior() const { return m_editingBehavior.data(); }
    virtual void setEditingBehavior ( ThingEditingBehavior *a_editingBehavior );

    Q_INVOKABLE virtual void copyCurrentToClipboard ( );
    Q_INVOKABLE virtual void editCurrentAsync( const QString &a_value );
    Q_INVOKABLE virtual void pasteCurrentFromClipboardAsync( );

signals:
    void editingBehaviorChanged ();
    void thingChanged();
    void editCurrentFinished( EditCurrentCommand *a_command );
    void pasteCurrentFromClipboardFinished( EditCurrentCommand *a_command );

private slots:
    void onEditCurrentFinished();
    void onPasteCurrentFromClipboardFinished();

private:
    QScopedPointer<ThingEditingBehavior> m_editingBehavior;
    QSharedPointer<Thing> m_thing;
    QList<QFutureWatcher<EditCurrentCommand*> *> m_editCurrentFutureWatchers;
    QList<QFutureWatcher<EditCurrentCommand*> *> m_pasteCurrentFromClipboardFutureWatchers;
};

For the implementation of this class, I’ll only provide the non-obvious pieces. I’m sure you can do that setThing, setEditingBehavior and the constructor yourself. I’m also providing it only once, and also only for the EditCurrentCommand. The one about paste is going to be exactly the same.

void ThingEditor::copyCurrentToClipboard ( )
{
    m_editingBehavior->copyCurrentToClipboard( );
}

void ThingEditor::onEditCurrentFinished( )
{
    QFutureWatcher<EditCurrentCommand*> *resultWatcher
            = static_cast<QFutureWatcher<EditCurrentCommand*>*> ( sender() );
    emit editCurrentFinished ( resultWatcher->result() );
    if (m_editCurrentFutureWatchers.contains( resultWatcher )) {
        m_editCurrentFutureWatchers.removeAll( resultWatcher );
    }
    delete resultWatcher;
}

void ThingEditor::editCurrentAsync( const QString &a_value )
{
    QFutureWatcher<EditCurrentCommand*> *resultWatcher
            = new QFutureWatcher<EditCurrentCommand*>();
    connect ( resultWatcher, &QFutureWatcher<EditCurrentCommand*>::finished,
              this, &ThingEditor::onEditCurrentFinished, Qt::QueuedConnection );
    resultWatcher->setFuture ( m_editingBehavior->editCurrentAsync( a_value ) );
    m_editCurrentFutureWatchers.append ( resultWatcher );
}

For QUndo we’ll need a QUndoCommand. For each undoable action we indeed need to make such a command. You could add more state and pass it to the constructor. It’s common, for example, to pass Thing, or the ThingEditor or the behavior (this is why I used QSharedPointer for those: as long as your command lives in the stack, you’ll need it to hold a reference to that state).

class EditCurrentCommand: public QUndoCommand
{
public:
    explicit EditCurrentCommand( const QString &a_value,
                                 QUndoCommand *a_parent = nullptr )
        : QUndoCommand ( a_parent )
        , m_value ( a_value ) { }
    void redo() Q_DECL_OVERRIDE {
       // Perform action goes here
    }
    void undo() Q_DECL_OVERRIDE {
      // Undo what got performed goes here
    }
private:
    const QString m_value; // stored by value; a reference member would dangle once the caller's string goes away
};

You can (and probably should) also make this one abstract (and/or a so called pure interface), as you’ll usually want many implementations of this one (one for every kind of editing behavior). Note that it leaks the QUndoCommand instances unless you handle them (ie. storing them in a QUndoStack). That in itself is a good reason to keep it abstract.

class ThingEditingBehavior : public QObject
{
    Q_OBJECT

    Q_PROPERTY ( ThingEditor* editor READ editor WRITE setEditor NOTIFY editorChanged )
    Q_PROPERTY ( Thing* thing READ thing NOTIFY thingChanged )

public:
    explicit ThingEditingBehavior( ThingEditor *a_editor,
                                   QObject *a_parent = nullptr )
        : QObject ( a_parent )
        , m_editor ( a_editor ) { }

    explicit ThingEditingBehavior( QObject *a_parent = nullptr )
        : QObject ( a_parent ) { }

    ThingEditor* editor() const { return m_editor.data(); }
    virtual void setEditor( ThingEditor *a_editor );
    Thing* thing() const;

    virtual void copyCurrentToClipboard ( );
    virtual QFuture<EditCurrentCommand*> editCurrentAsync( const QString &a_value, bool a_exec = true );
    virtual QFuture<EditCurrentCommand*> pasteCurrentFromClipboardAsync( bool a_exec = true );

protected:
    virtual EditCurrentCommand* editCurrentSync( const QString &a_value, bool a_exec = true );
    virtual EditCurrentCommand* pasteCurrentFromClipboardSync( bool a_exec = true );

signals:
    void editorChanged();
    void thingChanged();

private:
    QPointer<ThingEditor> m_editor;
    bool m_synchronous = true;
};

That setEditor, the constructor, etc: these are too obvious to write here. Here are the non-obvious ones:

void ThingEditingBehavior::copyCurrentToClipboard ( )
{
    // Clipboard handling would go here; intentionally left empty in this example.
}

EditCurrentCommand* ThingEditingBehavior::editCurrentSync( const QString &a_value, bool a_exec )
{
    EditCurrentCommand *ret = new EditCurrentCommand ( a_value );
    if ( a_exec )
        ret->redo();
    return ret;
}

QFuture<EditCurrentCommand*> ThingEditingBehavior::editCurrentAsync( const QString &a_value, bool a_exec )
{
    QFuture<EditCurrentCommand*> resultFuture =
            QtConcurrent::run( QThreadPool::globalInstance(), this,
                               &ThingEditingBehavior::editCurrentSync,
                               a_value, a_exec );
    if (m_synchronous)
        resultFuture.waitForFinished();
    return resultFuture;
}

And now we can make the whole thing undoable by making an undoable editing behavior. I’ll leave a non-undoable editing behavior mostly as an exercise to the reader (ie. just perform redo() on the QUndoCommand, don’t store it in a QUndoStack, and delete the instance once nothing needs it anymore); a minimal sketch follows below.
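
For completeness, here is a minimal sketch of what that non-undoable behavior could look like, assuming the classes above; the class name and the ownership note are mine, not part of the original post:

class NonUndoableThingEditingBehavior : public ThingEditingBehavior
{
    Q_OBJECT
public:
    explicit NonUndoableThingEditingBehavior( QObject *a_parent = nullptr )
        : ThingEditingBehavior ( a_parent ) { }

protected:
    EditCurrentCommand* editCurrentSync( const QString &a_value, bool a_exec = true ) Q_DECL_OVERRIDE
    {
        Q_UNUSED(a_exec)
        // Execute immediately; the command is never pushed onto a QUndoStack,
        // so whoever receives it (e.g. the slot connected to editCurrentFinished)
        // owns it and should delete it when done.
        return ThingEditingBehavior::editCurrentSync( a_value, true );
    }

    EditCurrentCommand* pasteCurrentFromClipboardSync( bool a_exec = true ) Q_DECL_OVERRIDE
    {
        Q_UNUSED(a_exec)
        return ThingEditingBehavior::pasteCurrentFromClipboardSync( true );
    }
};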

Note that if m_synchronous is false, then all access to m_undoStack, as well as the undo and redo methods of your QUndoCommands, must be made thread-safe. Thread-safety is not the purpose of this example, though.

class UndoableThingEditingBehavior : public ThingEditingBehavior
{
    Q_OBJECT
public:
    explicit UndoableThingEditingBehavior( ThingEditor *a_editor,
                                           QObject *a_parent = nullptr );
protected:
    EditCurrentCommand* editCurrentSync( const QString &a_value, bool a_exec = true ) Q_DECL_OVERRIDE;
    EditCurrentCommand* pasteCurrentFromClipboardSync( bool a_exec = true ) Q_DECL_OVERRIDE;
private:
    QScopedPointer<QUndoStack> m_undoStack;
};

EditCurrentCommand* UndoableThingEditingBehavior::editCurrentSync( const QString &a_value, bool a_exec )
{
    Q_UNUSED(a_exec)
    EditCurrentCommand *undoable = ThingEditingBehavior::editCurrentSync( a_value, false );
    m_undoStack->push( undoable );
    return undoable;
}

EditCurrentCommand* UndoableThingEditingBehavior::pasteCurrentFromClipboardSync( bool a_exec )
{
    Q_UNUSED(a_exec)
    EditCurrentCommand *undoable = ThingEditingBehavior::pasteCurrentFromClipboardSync( false );
    m_undoStack->push( undoable );
    return undoable;
}

24 March, 2017 09:42AM by Philip Van Hoof (pvanhoof@gnome.org)

hackergotchi for Ubuntu developers

Ubuntu developers

Lubuntu Blog: Lubuntu Zesty Zapus Final Beta has been released!

Lubuntu Zesty Zapus Final Beta (soon to be 17.04) has been released! We have a couple papercuts listed in the release notes, so please take a look. A big thanks to the whole Lubuntu team and contributors for helping pull this release together. You can grab the images from here: http://cdimage.ubuntu.com/lubuntu/releases/zesty/beta-2/

24 March, 2017 03:16AM

The Fridge: Ubuntu 17.04 (Zesty Zapus) Final Beta released

The Ubuntu team is pleased to announce the final beta release of the Ubuntu 17.04 Desktop, Server, and Cloud products.

Codenamed "Zesty Zapus", 17.04 continues Ubuntu’s proud tradition of integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution. The team has been hard at work through this cycle, introducing new features and fixing bugs.

This beta release includes images from not only the Ubuntu Desktop, Server, and Cloud products, but also the Kubuntu, Lubuntu, Ubuntu GNOME, UbuntuKylin, Ubuntu MATE, Ubuntu Studio, and Xubuntu flavours.

We’re also pleased with this release to welcome Ubuntu Budgie to the family of Ubuntu community flavours.

The beta images are known to be reasonably free of showstopper CD build or installer bugs, while representing a very recent snapshot of 17.04 that should be representative of the features intended to ship with the final release expected on April 13th, 2017.

Ubuntu, Ubuntu Server, Cloud Images

Zesty Final Beta includes updated versions of most of our core set of packages, including a current 4.10 kernel, and much more.

To upgrade to Ubuntu 17.04 Final Beta from Ubuntu 16.10, follow these instructions:

The Ubuntu 17.04 Final Beta images can be downloaded at:

Additional images can be found at the following links:

As fixes will be included in new images between now and release, any daily cloud image from today or later (i.e. a serial of 20170323 or higher) should be considered a beta image. Bugs found should be filed against the appropriate packages or, failing that, the cloud-images project in Launchpad.

The full release notes for Ubuntu 17.04 Final Beta can be found at:

https://wiki.ubuntu.com/ZestyZapus/ReleaseNotes

Kubuntu

Kubuntu is the KDE based flavour of Ubuntu. It uses the Plasma desktop and includes a wide selection of tools from the KDE project.

The Final Beta images can be downloaded at:

More information on Kubuntu Final Beta can be found here:

Lubuntu

Lubuntu is a flavor of Ubuntu that aims to be lighter, less resource-hungry and more energy-efficient by using lightweight applications and LXDE, The Lightweight X11 Desktop Environment, as its default GUI.

The Final Beta images can be downloaded at:

More information on Lubuntu Final Beta can be found here:

Ubuntu Budgie

Ubuntu Budgie is a community-developed desktop, integrating the Budgie Desktop Environment with Ubuntu at its core.

The Final Beta images can be downloaded at:

More information on Ubuntu Budgie Final Beta can be found here:

Ubuntu GNOME

Ubuntu GNOME is a flavor of Ubuntu featuring the GNOME desktop environment.

The Final Beta images can be downloaded at:

More information on Ubuntu GNOME Final Beta can be found here:

UbuntuKylin

UbuntuKylin is a flavor of Ubuntu that is more suitable for Chinese users.

The Final Beta images can be downloaded at:

More information on UbuntuKylin Final Beta can be found here:

Ubuntu MATE

Ubuntu MATE is a flavor of Ubuntu featuring the MATE desktop environment.

The Final Beta images can be downloaded at:

More information on UbuntuMATE Final Beta can be found here:

Ubuntu Studio

Ubuntu Studio is a flavor of Ubuntu that provides a full range of multimedia content creation applications for each key workflow: audio, graphics, video, photography and publishing.

The Final Beta images can be downloaded at:

More information about Ubuntu Studio Final Beta can be found here:

Xubuntu

Xubuntu is a flavor of Ubuntu that comes with Xfce, which is a stable, light and configurable desktop environment.

The Final Beta images can be downloaded at:

More information about Xubuntu Final Beta can be found here:

Regular daily images for Ubuntu, and all flavours, can be found at:

Ubuntu is a full-featured Linux distribution for clients, servers and clouds, with a fast and easy installation and regular releases. A tightly-integrated selection of excellent applications is included, and an incredible variety of add-on software is just a few clicks away.

Professional technical support is available from Canonical Limited and hundreds of other companies around the world. For more information about support, visit http://www.ubuntu.com/support

If you would like to help shape Ubuntu, take a look at the list of ways you can participate at: http://www.ubuntu.com/community/participate

Your comments, bug reports, patches and suggestions really help us to improve this and future releases of Ubuntu. Instructions can be found at: https://help.ubuntu.com/community/ReportingBugs

You can find out more about Ubuntu and about this beta release on our website, IRC channel and wiki.

To sign up for future Ubuntu announcements, please subscribe to Ubuntu’s very low volume announcement list at:

Originally posted to the ubuntu-announce mailing list on Thu Mar 23 22:00:58 UTC 2017 by Adam Conrad on behalf of the Ubuntu Release Team

24 March, 2017 02:32AM

Kubuntu General News: Kubuntu 17.04 Beta 2 released for testers

Today the Kubuntu team is happy to announce that Kubuntu Zesty Zapus (17.04) Beta 2 has been released. With this Beta 2 pre-release, you can see and test what we are preparing for 17.04, which we will release on April 13, 2017.

Kubuntu 17.04 Beta 2

 

NOTE: This is Beta 2 Release. Kubuntu Beta Releases are NOT recommended for:

* Regular users who are not aware of pre-release issues
* Anyone who needs a stable system
* Anyone uncomfortable running a possibly frequently broken system
* Anyone in a production environment with data or work-flows that need to be reliable

Getting Kubuntu 17.04 Beta 2:
* Upgrade from 16.10: run `do-release-upgrade -d` from a command line.
* Download a bootable image (ISO) and put it onto a DVD or USB Drive : http://cdimage.ubuntu.com/kubuntu/releases/zesty/beta-2/

Release notes: https://wiki.ubuntu.com/ZestyZapus/Beta2/Kubuntu

24 March, 2017 12:08AM

March 23, 2017

hackergotchi for VyOS

VyOS

VyOS 2.0 development digest #9: socket communication functionality, complete parser, and open tasks

Socket communication

A long-awaited (by me, anyway ;) milestone: VyConf is now capable of communicating with clients. This allows us to write a simple non-interactive client. Right now the only supported operation is "status" (a keepalive of sorts), but the list will be growing.

I guess I should talk about the client before going into technical details of the protocol. The client will be way easier to use than what we have now. Two main problems with the CLI tools from VyOS 1.x are that my_cli_bin (the command used by set/delete operations) requires a lot of environment setup, and that cli-shell-api is limited in scope. Part of the reason for this is that my_cli_bin is used in the interactive shell. Since the interactive shell of VyConf will be a standalone program rather than a bash completion hack, we are free to make the non-interactive client more idiomatic as a shell command, closer in user experience to git or s3cmd.

This is what it will look like:


SESSION=$(vycli setupSession)
vycli --session=$SESSION configure
vycli --session=$SESSION set "system host-name vyos"
vycli --session=$SESSION delete "system name-server 192.0.2.1"
vycli --session=$SESSION commit
vycli --session=$SESSION exists "service dhcp-server"
vycli --session=$SESSION returnValue "system host-name"
vycli --session=$SESSION --format=json show "interfaces ethernet"

As you can see, first, the top level words are subcommands, much like "git branch". Since the set of top level words is fixed anyway, this doesn't create new limitations. Second, the same client can execute both high level set/delete/commit operations and low level exists/returnValue/etc. methods. Third, the only thing it needs to operate is a session token (I'm thinking that unless it's passed in the --session option, vycli should try to get it from an environment variable, but we'll see, let me know what you think about this issue). This way contributors will get an easy way to test the code even before the interactive shell is complete; and when VyOS 2.0 is usable, shell scripts and people fond of working from bash rather than the domain-specific shell will have access to all system functions, without worrying about intricate environment variable setup.

The protocol

As I already said in the previous post, VyConf uses Protobuf for serialized messages. Protobuf doesn't define any framing, however, so we have to come up with something. The most popular options are delimiters and length headers. The issue with delimiters is that you have to make sure they do not appear in user input, or you risk losing a part of the message. Some programs choose to escape delimiters, others rely on unusual sequences, e.g. the backend of OPNSense uses three null bytes for it. Since Protobuf is a binary protocol, no sequence is unusual enough, so length headers look like the best option. VyConf uses 4-byte headers in network order, followed by a Protobuf message. This is easy enough to implement in any language, so it shouldn't be a problem when writing bindings for other languages.
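
As a rough illustration of that framing (a sketch of mine in C++, not the actual VyConf client code), writing a message boils down to prepending a 4-byte big-endian length to the serialized Protobuf payload:

#include <arpa/inet.h>   // htonl
#include <cstdint>
#include <string>

// Frame an already-serialized Protobuf message for the VyConf socket:
// a 4-byte length header in network (big-endian) byte order, then the payload.
std::string frame_message(const std::string &payload)
{
    uint32_t header = htonl(static_cast<uint32_t>(payload.size()));
    std::string framed(reinterpret_cast<const char *>(&header), sizeof header);
    framed.append(payload);
    return framed;
}

Reading is the mirror image: read exactly 4 bytes, ntohl() them to get the payload length, then read that many bytes and hand them to the Protobuf parser.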

The code

There is a single client library that can be used by both the non-interactive client and the interactive shell. It will also serve as the OCaml bindings package for VyConf (Python and other languages will need their own bindings, but with Protobuf, most of it can be autogenerated).

Parser improvements

Inactive and ephemeral nodes

The curly config parser is now complete. It supports the inactive and ephemeral properties. This is what a config with those will look like:

protocols {
  static {
    /* Inserted by a fail2ban-like script */
    #EPHEMERAL route 192.0.2.78/32 {
      blackhole;
    }
    /* Disabled by admin */
    #INACTIVE route 203.0.113.128/25 {
      next-hop 203.0.113.1;
    }
  }
}

While I'm not sure if there are valid use cases for it, nodes can be inactive and ephemeral at the same time. Deactivating an ephemeral node that was created by a script, perhaps? Anyway, since both are a part of the config format that the "show" command will produce, we get to support both in the parser too.

Multi nodes

By multi nodes I mean nodes that may have more than one value, such as "address" in interfaces. As you remember, I suggested and implemented a new syntax for such nodes:

interfaces {
  ethernet eth0 {
    address [
      192.0.2.1/24;
      192.0.2.2/24;
    ];
  }
}

However, the parser now supports the original syntax too, that is:

interfaces {
  ethernet eth0 {
    address 192.0.2.1/24;
    address 192.0.2.2/24;
  }
}

I didn't intend to support it originally, but it was another edge case that prompted me to add it. For config read operations to work correctly, every path in the tree must be unique. The high level Config_tree.set function maintains this invariant, but the parser gets to use lower level primitives that do not, so if a user creates a config with duplicate nodes, e.g. by careless pasting, the config tree that the parser returns will have them too, so we get to detect such situations and do something about it. Configs with duplicate tag nodes (e.g. "ethernet eth0 { ... } ethernet eth0 { ... }") are rejected as incorrect since there is no way to recover from this. Multiple non-leaf nodes with distinct children (e.g. "system { host-name vyos; } system { name-server 192.0.2.1; }") can be merged cleanly, so I've added some code to merge them by moving children of subsequent nodes under the first one and removing the extra nodes afterwards. However, since in the raw config there is no real distinction between leaf and non-leaf nodes, in the case of leaf nodes that code would simply remove all but the first. I've extended it to also move values into the first node, which amounts to support for the old syntax, except that node comments and inactive/ephemeral properties will be inherited from the first node. Then again, this is how the parser in VyOS 1.x behaves, so nothing is lost.

While the show command in VyOS 2.0 will always use the new syntax with curly brackets, the parser will not break the principle of least astonishment for people used to the old one. Also, if we decide to write a migration utility for converting 1.x configs to 2.0, we'll be able to reuse the parser, after adding semicolons to the old config with a simple regular expression, perhaps.

Misc

Node names and unquoted values now can contain any characters that are not reserved, that is, anything but whitespace, curly braces, square brackets, and semicolons.

What's next?

Next I'm going to work on adding low level config operations (exists/returnValue/...) and set commands so that we can do some real life tests.

There's a bunch of open tasks if you want to join the development:

T254 is about preventing nodes with reserved characters in their names early in the process, at "set" time. There's a rather nasty bug in VyOS 1.1.7 related to this: you can pass a quoted node name with spaces to set, and if there is no validation rule attached to the node, as is the case with "vpn l2tp remote-access authentication local-users", the node will be created. It will fail to parse correctly after you save and reload the config. We'll fix it in 1.2.0 of course, but we also need to prevent it from ever appearing in 2.0 too.

T255 is about adding the curly config renderer. While we can use the JSON serializer for testing right now, the usual format is also just easier on the eyes, and it's a relatively simple task too.

23 March, 2017 11:11PM by Daniil Baturin

Donations and other ways to support VyOS

Hello, community!

We got many requests about how to donate, so we decided to open this possibility to those who asked.

After all, donations are direct support to the project, and we constantly need support of all types.

As was mentioned before, you can contribute in many ways:

But if you would like to contribute via donation you are welcome to do so!

Raised money will be used for project needs like:

  • Documentation development
  • Tutorials and training courses creation
  • Artwork creation
  • Travels of project maintainers to relevant events 
  • Event organization
  • Videos
  • Features development 
  • Popularization of VyOS
  • Servers
  • Lab
  • Software
  • Hardware

Of course, that is not a complete list of the needs the project has, but it covers the most visible ones.

Find below the most convenient ways to donate.

If you need an invoice, please drop me an email or ping me on chat.


Thank you!


Bitcoin: 1PpUa61FytNSWhTbcVwZzfoE9u12mQ65Pe

PayPal Subscription

PayPal One time donation

23 March, 2017 11:09PM by Yuriy Andamasov

hackergotchi for Ubuntu developers

Ubuntu developers

Jono Bacon: Community Leadership Summit 2017: 6th – 7th May in Austin

The Community Leadership Summit is taking place on the 6th – 7th May 2017 in Austin, USA.

The event brings together community managers and leaders, projects, and initiatives to share and learn how we build strong, engaging, and productive communities. The event takes place the weekend before OSCON in the same venue, the Austin Convention Center. It is entirely FREE to attend and welcomes everyone, whether you are a community veteran or just starting out your journey!

The event is broken into three key components.

Firstly, we have an awesome set of keynotes this year:

Secondly, the bulk of the event is an unconference where the attendees volunteer session ideas and run them. Each session is a discussion where the topic is discussed, debated, and we reach final conclusions. This results in a hugely diverse range of sessions covering topics such as event management, outreach, social media, governance, collaboration, diversity, building contributor programs, and more. These discussions are incredible for exploring and learning new ideas, meeting interesting people, building a network, and developing friendships.

Finally, we have social events on both evenings where you can meet and network with other attendees. Food and drinks are provided by data.world and Mattermost. Thanks to both for their awesome support!

Join Us

The Community Leadership Summit is entirely FREE to attend. If you would like to join, we would appreciate if you could register (this helps us with expected numbers). I look forward to seeing you there in Austin on the 6th – 7th May 2017!

The post Community Leadership Summit 2017: 6th – 7th May in Austin appeared first on Jono Bacon.

23 March, 2017 04:40PM

Ubuntu Podcast from the UK LoCo: S10E03 – Aloof Puny Wren - Ubuntu Podcast

We discuss website owners filing bugs with Mozilla, GitLab acquiring Gitter, Moodle remote code execution, Windows 10 adverts, KDE Slimbook, 32-bit PowerPC EOL in Ubuntu, a new Vala release and the merger of Sonar GNU/Linux and Vinux.

It’s Season Ten Episode Three of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

23 March, 2017 03:00PM

Ubuntu Insights: Huawei and Canonical Integrate OpenStack and CloudFabric

Huawei Extends its Cooperation with Canonical with the Integration of CloudFabric Data Center Network Solution and Ubuntu Cloud Solutions

Hannover, Germany, March 23, 2017 – Huawei and Canonical today announced that they are expanding their cooperation in enterprise and telecom clouds and have completed the integration of the CloudFabric Cloud Data Center Network Solution and Canonical’s Ubuntu OpenStack. The joint solution integrates the Agile Controller, Huawei’s SDN controller, with Ubuntu OpenStack to improve the efficiency of deploying and maintaining multiple data center networks. A large number of controller nodes can be deployed in minutes to interoperate with the cloud platform quickly. Enterprises or telecom cloud platforms that are using or plan to use the Ubuntu OpenStack platform can directly connect their OpenStack platform with the Agile Controller to enable quick, flexible service deployment and integration in multiple data centers.

Canonical’s OpenStack Interoperability Lab in Boston builds more than 3000 OpenStack clouds every month to test and verify the interoperability of different hardware, SDN and software combinations, helping customers to integrate and deploy their cloud platforms and SDN solutions in a secure environment. This new joint initiative between Huawei and Canonical includes the integration of Huawei Agile Controller with Juju, Canonical’s service modelling tool, that provides the ability to quickly deploy complex workloads including OpenStack with various SDN controllers. The combination of Huawei Agile Controller and Ubuntu OpenStack with Juju tooling enables the rapid efficient scaling and operation of complex application services while minimizing the need for manual intervention.

“We are honored to expand our strategic relationship with Huawei. Ubuntu OpenStack and Juju integration with the Huawei Agile Controller enhances customer data center management capability, especially when it comes to operating large-scale data center deployments easily. Our collaboration with Huawei delivers even simpler and more efficient automated data center solutions to our customers,” said John Zannos, Vice President of Cloud Alliances and Ecosystem at Canonical.

Huang He, General Manager of the Huawei SDN Controller Domain, said: “Openness is a key factor of a data center network solution. Huawei Agile Controller has passed interoperability certification with multiple providers of commercial OpenStack versions. The successful integration with Canonical reflects the deepened cooperation with cloud platform providers. This joint solution achieved not only automated network device configuration and service orchestration, but also the quick installation and deployment of the controller system itself. This further improves the data center operation efficiency.”

By cooperating with Canonical, Huawei makes another step toward an all-cloud network management ecosystem. Huawei is continuing its efforts to promote commercial SDN deployments and create an open, cooperative, win-win SDN ecosystem. The alliance of Huawei and Canonical benefits enterprise and telecom users by improving network management efficiency and is significant to the development of the entire ecosystem.

For further information please contact pr@canonical.com

23 March, 2017 02:18PM

Ubuntu Insights: Out of date software leaves you vulnerable

Two weeks ago, Der Spiegel wrote an article highlighting that out of date software on private clouds was leaving government and political party information vulnerable to being hacked. Given that the targeting of political organisations is currently such a hot topic, it is somewhat of a surprise how widespread this issue appears to be. After discovering the size and scope of the problem through their own investigations, Nextcloud decided to take a proactive approach, helping to raise organisations’ awareness and address potential vulnerabilities.

The large number of insecure servers came to light as a result of a tool that Nextcloud was developing. Given their findings, Nextcloud took the somewhat unusual industry step to proactively work with Computer Emergency Response Teams in various countries to notify affected people of the risks, in an effort to help keep their data as secure as possible.

The Der Spiegel article and Nextcloud’s response, which chose transparency over secrecy and followed security best practices, are a must-read for everyone in the industry and a timely reminder to us all of the importance of updating our software on a regular basis.

As mentioned in Nextcloud’s blog response, they have now released the Nextcloud Private Cloud Security Scanner as a quick and simple tool to enable users to regularly check their servers and ensure their software is always up to date. However, the ideal scenario is for software updates to happen automatically, reducing the risk of a security threat as a result, especially for smaller organisations and consumers, which often lack the technical know-how to keep their systems up to date. This is a feature that’s built into snaps, the universal Linux application packaging format, which is why Nextcloud uses snaps to distribute their software as part of their Nextcloud Box offering. Users of the box will get automated updates of their Nextcloud software whenever a new release is made available in the store. As a matter of fact, the Nextcloud Box is built on Ubuntu Core, the version of Ubuntu entirely built out of snaps. This means that the entire software on the box is seamlessly updated without administrator involvement, and it literally takes no effort to keep your storage secure.

23 March, 2017 10:55AM

Ted Gould: Applications under systemd

When we started to look at how to confine applications enough to allow an app store where anyone can upload applications, we knew that AppArmor could do the filesystem and IPC restrictions, but we needed something to manage the processes. There are kernel features that work well for this, but we didn’t want to reinvent the management of them, and we realized that Upstart already did this for other services in the system. That drove us to decide to use Upstart for managing application processes as well. In order to have a higher-level management and abstraction interface, we started a small library called upstart-app-launch and we were off. Times change and so do init daemons, so we renamed the project ubuntu-app-launch, expecting to move it to systemd eventually.

Now we’ve finally fully realized that transition and ubuntu-app-launch runs all applications and untrusted helpers as systemd services.

bye, bye, Upstart. Photo from: https://pixabay.com/en/goodbye-waving-boy-river-boat-705165/

For the most part, no one should notice anything different. Applications will start and stop in the same way. Even users of ubuntu-app-launch shouldn’t notice a large difference in how the library works. But people tinkering with the system will notice a few things. Probably the most obvious is that application log files are no longer in ~/.cache/upstart. Now the log files for applications are managed by journald which, as we get all the desktop services ported to use systemd, will mean that you can see integrated events from multiple processes. So if Unity8 is rejecting your connection you’ll be able to see that next to the error from your application. This should make debugging your applications easier. You’ll also be able to redirect messages off a device in real time, which will help with debugging your application on a phone or tablet.

For those who are more interested in details we’re using systemd’s transient unit feature. This allows us to create the unit on the fly with multiple instances of each application. Under Upstart we used a job with instances for each application, but now that we’re taking on more typical desktop style applications we needed to be able to support multi-instance applications, which would have been hard to manage with that approach. We’re generating the service name using this pattern:

ubuntu-app-launch--$(application type)--$(application id)--$(time stamp).service

The time stamp is used to make a unique name for applications that are multi-instance. For applications that ask us to maintain a single instance for them the time stamp is not included.
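For example, you can poke at these transient units with the standard systemd tools (a rough sketch; the application unit name below is hypothetical and simply follows the pattern above):

# List the application units currently running in the user session
systemctl --user list-units 'ubuntu-app-launch--*'

# Read the journald log for one specific application instance
journalctl --user-unit 'ubuntu-app-launch--application-legacy--com.example.calc--1490000000.service'

# Or follow the whole user journal to see events from several processes together
journalctl --user -f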

Hopefully that’s enough information to get you started playing around with applications running under systemd. And if you don’t care to, you shouldn’t even notice this transition.

23 March, 2017 05:00AM

hackergotchi for Maemo developers

Maemo developers

Perfection

Perfection has been reached not when there is nothing left to add, but when there is nothing left to take away.


23 March, 2017 12:17AM by Philip Van Hoof (pvanhoof@gnome.org)

March 22, 2017

hackergotchi for Ubuntu developers

Ubuntu developers

Jonathan Riddell: Chef Intermediate Training

I did a day’s training at the FLOSS UK conference in Manchester on Chef. Anthony Hodson came from Chef (a company with over 200 employees) to provide this intermediate training, which covered writing recipes using test-driven development.  Thanks to Chef, Anthony and FLOSS UK for providing it cheap.  Here are some notes for my own interest and anyone else who cares.

Using chef generate we started a new cookbook called http.
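That step looks roughly like this with the ChefDK command-line tools (a sketch; the cookbook name http is the one used in the training):

chef generate cookbook http   # scaffolds the cookbook, including its .kitchen.yml
cd http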

This cookbook contains a .kitchen.yml file.  Test Kitchen is a Chef tool to run tests on Chef recipes.  ‘kitchen list’ will show the machines it’s configured to run.  The default uses VirtualBox and CentOS/Ubuntu; this can be changed to Docker or whatever.  ‘kitchen create’ will make them, ‘kitchen converge’ to deploy, ‘kitchen login’ to log into the VM, ‘kitchen verify’ to run the tests.  ‘kitchen test’ will destroy, then set up and verify, which takes a bit longer.
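Put together, a typical Test Kitchen loop over this cookbook would be something like the following (a sketch listing the standard kitchen subcommands described above):

kitchen list                 # show the configured test instances
kitchen create               # boot the virtual machines
kitchen converge             # apply the cookbook to them
kitchen login <instance>     # log into one instance (names come from 'kitchen list')
kitchen verify               # run the InSpec tests
kitchen test                 # destroy, then create, converge and verify in one go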

Write the test first.  If you’re not sure what the test should be, write stub/placeholder statements for what you do know, then work out the code.

ChefSpec (an RSpec-based language) provides the in-memory unit tests for recipes; it’s quicker and does finer-grained tests than the Kitchen tests (which use InSpec and do black-box tests on the final result).  Run it with ‘chef exec rspec ../default-spec.rb’; rspec shows a * for a stub.

Beware: if a test passes first time, it might be a false positive.

Ohai is a standalone tool (also run by the chef client) which detects the node attributes and passes them to the chef client.  We didn’t get onto this as it was for a follow-on day.

Pry is a Ruby debugger.  It’s a Gem and part of chefdk.

To debug recipes, use pry in the recipe; it drops you into a debug prompt for checking that the values are what you think they are.

I still find deploying Chef a nightmare. It won’t install in the normal way on my preferred Scaleway server because they’re ARM; by default it needs a Chef server, but you can just use chef-client with --local-mode, and then there’s chef solo, chef zero and knife solo, which all do things that I haven’t quite got my head round.  All interesting to learn anyway.
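For reference, a server-less run of the http cookbook with local mode would look roughly like this (a sketch; --local-mode is the flag mentioned above, and it assumes you run it from a chef-repo that contains the cookbook):

sudo chef-client --local-mode --runlist 'recipe[http]'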

 


22 March, 2017 03:57PM

Canonical Design Team: Bigger, brighter, better – MWC 2017

Our stand occupied the same space as last year with a couple of major
changes this time around – the closure of a previously adjacent aisle
resulting in an increase in overall stand space (from 380 to 456 square
metres). With the stand now open on just two sides, this presented the
design team with some difficult challenges:

  • Maximising sight lines and impact upon approach
  • Utilising our existing components – hanging banners, display units,
    alcoves, meeting rooms – to work effectively within a larger space
  • Directing the flow of visitors around the stand

Design solution

Some key design decisions and smaller details:

  • Rotating the hanging fabric banners 90 degrees and moving them
    to the very front of the stand
  • Repositioning the welcome desk to maximise visibility from
    all approaches
  • Improved lighting throughout – from overhead banner illumination
    to alcoves and within all meeting rooms
  • Store room end wall angled 45 degrees to increase the initial sight line
  • Raised LED screens for increased visibility
  • Four new alcoves with discrete fixings for all 10x alcove screens
  • Bespoke acrylic display units for AR helmets and developer boards
  • Streamlined meeting room tables with new cable management
  • Separate store and staff rooms

Result

With thoughtful planning and attention to detail, our brand presence
at this year’s MWC was the strongest yet.

Initial design sketches

Plan and sight line 3D render

Design intent drawings

3D lettering and stand graphics

22 March, 2017 01:19PM

hackergotchi for Cumulus Linux

Cumulus Linux

The first and only NOS to support LinkedIn’s Open19 project

Today we are excited to announce our support of Open19, a project spearheaded by LinkedIn. Open19 simplifies and standardizes the 19-inch rack form factor and increases interoperability between different vendors’ technology. Built on the principles of openness, Open19 allows many more suppliers to produce servers that will interoperate and will be interchangeable in any rack environment.

We are thrilled to be the first and only network operating system supporting Open19 for two reasons. First, this joint solution offers complete choice throughout the entire stack — increasing interoperability and efficiency. We believe the ease of use of this new technology helps expand the footprint of web-scale networking and makes it even more accessible and relevant.

The second reason is that we are continually dedicated to innovation within the open community, and this is one more way we can support that mission. We believe that disaggregation is not only the future but the present (read more about why we think disaggregation is here to stay). When a company like LinkedIn jumped into the disaggregate ring, we knew we wanted to be a part of it.

What is Open19?

The primary component, Brick Cage, is a passive mechanical cage that fits in any EIA 19-inch rack, allowing increased interoperability. Brick Cage comes in 12RU or 8RU form-factors with 2RU modularity.

The Open19 platform is based on standard building blocks with the following specifications:

  • Standard 19-inch 4 post rack
  • Brick cage
  • Brick (B), Double Brick (DB), Double High Brick (DHB)
  • Power shelf—12 volt distribution, OTS power modules
  • Optional Battery Backup Unit (BBU)
  • Optional Networking switch (ToR)
  • Snap-on power cables/PCB—200-250 watts per brick
  • Snap-on data cables—up to 100G per brick
  • Provides linear growth on power and bandwidth based on brick size

As a standardized open solution, Open19 promises 3 to 5 times faster rack level integration, which will result in reduced time to market.

 

Open19 and Cumulus Networks

 

The purpose of Open19:

  • Create an open standard that can fit any 19” rack environment for server, storage and networking
  • Optimize base rack cost
    • Reduce commons by 50%
  • Enable fast rack integration
    • 2-3x faster integration time
  • Build an ecosystem that will consolidate requirements and volumes
    • High adoption level
  • Create a solution that will have applicability for large, medium, and small scale data centers

How Cumulus Networks fits in:

Cumulus Linux is the first and only open network operating system to support the Open19 switch. With shared benefits of ease of adoption and customization while maximizing economics, Open19 and Cumulus Linux help customers realize web-scale networking while standardizing on the EIA 19-inch rack to increase interoperability.

What to do next:

If you’re interested in an Open19 Brick cage featuring Cumulus Linux, contact our knowledgeable sales team. And to give Cumulus Linux a spin at zero cost, try Cumulus VX.

If you’d like to learn more about the joint solution, check out this solution brief.

The post The first and only NOS to support LinkedIn’s Open19 project appeared first on Cumulus Networks Blog.

22 March, 2017 11:00AM by Kelsey Havens

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: Webinar: How to ensure the ongoing security compliance of Ubuntu 12.04

Many enterprises still run Ubuntu 12.04 LTS but updates will end soon.

Ubuntu 12.04 LTS users are encouraged to upgrade to 14.04 LTS or 16.04 LTS. For some this is easy but for others, particularly for larger deployments, upgrading can be complex.

Watch this on-demand webinar to learn:

  • How Ubuntu 12.04 LTS users will be impacted after April 28th, 2017
  • Upgrading strategies for 12.04 LTS systems to 14.04 LTS or 16.04 LTS
  • How to extend security maintenance for 12.04 LTS with Ubuntu Advantage

There’s also an interesting Q&A discussion at the end.

Watch the webinar

22 March, 2017 09:53AM

Ubuntu Insights: Distributing a ROS system among multiple snaps

This is a guest post by Kyle Fazzari, Engineer at Canonical. If you would like to contribute a guest post, please contact ubuntu-devices@canonical.com

One of the key tenets of snaps is that they bundle their dependencies. The fact that they’re self-contained helps their transactional-ness: upgrading or rolling back is essentially just a matter of unmounting one snap and mounting the other. However, historically this was also one of their key downsides: every snap must be standalone. Fortunately, snapd v2.0.10 saw the addition of a content interface that could be used by a producer snap to make its content available for use by a consumer snap. However, that interface was very difficult to utilize when it came to ROS due to ROS’s use of workspaces for both building and running. At long last, support is landing in Snapcraft for building a ROS system that is distributed among multiple snaps, and I wanted to give you a preview of what that will look like.

Why would you want to do that?
Like I said, snaps bundling their dependencies is typically a good thing, and this applies to ROS-based snaps as well. Having an entire ROS system in a single snap that updates transactionally is awesome, and useful for most deployment cases. However, there are some use-cases where this breaks down.

For example, say I’m manufacturing an unmanned aerial vehicle. I want to sell it in such a state that it’s only capable of being piloted via remote control. This is done with a ROS system, which in a simple world would be made up of:

  • One node to act as a driver for the RC radio
  • One node to drive the motors
  • Launch file to connect the two

You get the idea. In addition to that basic platform, I want my users to be able to buy add-on packs. For example, perhaps the vehicle includes a GPS sensor (as well as basic pose sensors). I’d like to sell an add-on pack that adds a very basic “fly here” autopilot, or perhaps a “follow me” mode. That’s another ROS system, perhaps something like:

  • One node to act as a driver for the GPS
  • One node (or perhaps a few) to act as a driver for the pose sensors
  • One node to plan a path
  • One node to take the path and turn it into motor controls
  • A launch file to bring up this system

If we build both of these snaps to be standalone, we quickly run into issues:

  • Lots of duplication between them, as the autopilot snap will need to include most of the base behavior snap
  • They both include (and will try to launch) their own roscore
  • The duplicated nodes in each snap will try to access the same hardware. This is a race condition: the first one up will win, the second will die. Or, depending on the hardware interface, they’ll both control it. That’s fun.

Using content sharing, we can actually make the autopilot snap depend upon and utilize the base behavior snap.

Alright, what does this look like?

Let’s simplify our previous example into two snaps: a “ros-base” snap that includes the typical stuff: roscore, roslaunch, etc., and a “ros-app” snap that includes packages that actually do something, specifically the classic talker/listener example. A quick reminder: this will only be possible in Snapcraft v2.28 or later.

Create ros-base
To create the base snap, create a snap/snapcraft.yaml file with the following contents:

name: ros-base
version: '1.0'
grade: stable
confinement: strict
summary: ROS Base Snap
description: Contains roscore and basic ROS utilities.

slots:
  # This is how we make a part of this snap readable by other snaps.
  # Consumers will need to access the PYTHONPATH as well as various libs
  # contained in this snap, so share the entire $SNAP, not just the ROS
  # workspace.
  ros-base:
    content: ros-base-v1
    interface: content
    read: [/]
 

parts:
  ros-base:
    plugin: catkin
    rosdistro: kinetic
    include-roscore: true
    catkin-packages: [] 

That’s it. Run snapcraft on it, and after a little time you’ll have your base snap (the “provider” snap regarding content sharing). This particular example doesn’t do a whole lot by itself, so let’s move on to our ros-app snap (the “consumer” snap regarding content sharing).

Create ros-app

The starting point for ros-app is the current standalone ROS demo. We’ll use the exact same ROS workspace, but we’ll add a few more things and tweak the YAML a bit.

The recommended way to build a “consumer” snap (assuming it has a build-time dependency on the content shared from the “producer” snap, which ros-app does indeed have on ros-base) is to create a tarball of the producer’s staging area, and use it as a part to build the consumer.

Concretely, we can tar up the staging area of ros-base and use it to build ros-app, but then filter it out of the final ros-app snap (so as to not duplicate the contents of ros-base).

So let’s do that now. cd into the directory containing the now-built ros-base snap, tar up its staging area, then move it off into the ros-app area:

$ tar cjf ros-base.tar.bz2 stage/
$ mv ros-base.tar.bz2 /path/to/ros-app

Now, in /path/to/ros-app alter the snap/snapcraft.yaml to look something like this:

name: ros-app
version: '1.0'
grade: stable
confinement: strict
summary: ROS App Snap
description: Contains talker/listener ROS packages and a .launch file.

plugs:
  # Mount the content shared from ros-base into $SNAP/ros-base
  ros-base:
    content: ros-base-v1
    interface: content
    target: /ros-base

apps:
  launch-project:
    command: run-system
    plugs: [network, network-bind, ros-base]

parts:
  # The `source` here is the tarred staging area of the ros-base snap.
  ros-base:
    plugin: dump
    source: ros-base.tar.bz2
    # This is only used for building-- filter it out of the final snap.
    prime: [-*]

  # This is mostly unchanged from the standalone ROS example. Notable
  # additions are:
  #  - Using Kinetic now (other demo is Indigo)
  #  - Specifically not including roscore
  #  - Making sure we're building AFTER our underlay
  #  - Specifying the build- and run-time paths of the underlay
  ros-app:
    plugin: catkin
    rosdistro: kinetic
    include-roscore: false
    underlay:
      # Build-time location of the underlay
      build-path: $SNAPCRAFT_STAGE/opt/ros/kinetic

      # Run-time location of the underlay
      run-path: $SNAP/ros-base/opt/ros/kinetic
    catkin-packages:
      - talker
      - listener
    after: [ros-base]

  # We can't just use roslaunch now, since that's contained in the
  # underlay. This part will tweak the environment a little to
  # utilize the underlay.
  run-system:
    plugin: dump
    stage: [bin/run-system]
    prime: [bin/run-system]

  # We need to create the $SNAP/ros-base mountpoint for the content
  # being shared.
  mountpoint:
    plugin: nil
    install: mkdir $SNAPCRAFT_PART_INSTALL/ros-base 

Other than the ROS workspace in src/ (which remains unchanged from the other demo so we won’t discuss it here), we need to create a bin/run-system executable that looks something like this:

#!/bin/bash

# The content shared by ros-base is mounted at $SNAP/ros-base
ros_base="$SNAP/ros-base"

# ROS in ros-base builds its own PYTHONPATH, but we need the underlay's
# Python modules here as well (paths here follow the kinetic underlay
# layout used above).
export PYTHONPATH="$ros_base/usr/lib/python2.7/dist-packages:$PYTHONPATH"

# The underlay also contains libs we need. Their path includes the
# architecture triplet; I'm only adding amd64 here, but one could of
# course add logic to support any arch snapd gave us in $SNAP_ARCH.
case "$SNAP_ARCH" in
    amd64)
        arch_triplet="x86_64-linux-gnu"
        ;;
    *)
        echo "Unsupported arch: $SNAP_ARCH"
        exit 1
        ;;
esac
export LD_LIBRARY_PATH="$ros_base/usr/lib/$arch_triplet:$LD_LIBRARY_PATH"

# Finally, use roslaunch (contained in the underlay) to bring up the system
roslaunch listener talk_and_listen.launch

Why is this needed? Because the Catkin plugin can only do so much for you. The ros-base snap includes various python modules and libs outside of its ROS workspace that ros-app needs, so we extend the PYTHONPATH and LD_LIBRARY_PATH to utilize them.

From there, it’s as easy as running roslaunch (which by the way is contained in ros-base).

Run snapcraft on this, and after a few minutes (fairly quick, since it’s re-using the base’s staging area to build) you’ll have a ros-app snap.

So now I have two ROS snaps. Now what?

You now have your ROS system split between multiple snaps. The first step is to install both snaps:

$ sudo snap install --dangerous ros-base_1.0_amd64.snap
ros-base 1.0 installed
$ sudo snap install --dangerous ros-app_1.0_amd64.snap
ros-app 1.0 installed

Now take a look at snap interfaces:

$ snap interfaces
Slot                      Plug
ros-base:ros-base         -
:alsa                     -
:avahi-observe            -
...
-                         ros-app:ros-base

You’ll see that ros-base:ros-base is an available slot, and ros-app:ros-base is an available plug. This interface is currently not connected, so content sharing is not yet taking place. Let’s connect them:

$ sudo snap connect ros-app:ros-base ros-base:ros-base

Taking another look at snap interfaces you can see they're now connected:

$ snap interfaces
Slot                      Plug
ros-base:ros-base         ros-app
:alsa                     -
:avahi-observe            -
...
<snip>

And now you can launch this ROS system you now have distributed between two snaps:

$ ros-app.launch-project
<snip>
NODES
  /
    listener (listener/listener_node)
    talker (talker/talker_node)
<snip>
process[talker-2]: started with pid [10649]
process[listener-3]: started with pid [10650]
[ INFO] [1487121136.757225517]: Hello world 0
[ INFO] [1487121136.860879281]: Hello world 1
[ INFO] [1487121136.960885723]: Hello world 2
[ INFO] [1487121137.057481265]: Hello world 3
[INFO] [1487121137.058298]: I heard Hello world 3
<snip>

Conclusion

Multiple ROS users have mentioned that the fact that a ROS snap must be completely self-contained is a problem. Typically it interferes either with their workflow or with their business plan. We’ve heard you! We can’t pretend that the snap world of isolated blobs and the ROS world of workspaces merge perfectly, but the content interface takes a big step toward blending these two worlds, and the new features in Snapcraft’s Catkin plugin hopefully make it as easy as possible to utilize.

I personally look forward to seeing what you do with this!

Original guest post can be found here

22 March, 2017 08:00AM

Elizabeth K. Joseph: Your own Zesty Zapus

As we quickly approach the release of Ubuntu 17.04, Zesty Zapus, coming up on April 13th, you may be thinking of how you can mark this release.

Well, thanks to Tom Macfarlane of the Canonical Design Team you have one more goodie in your toolkit, the SVG of the official Zapus! It’s now been added to the Animal SVGs section of the Official Artwork page on the Ubuntu wiki.

Zesty Zapus

Download the SVG version for printing or using in any other release-related activities from the wiki page or directly here.

Over here, I’m also all ready with the little “zapus” I picked up on Amazon.

Zesty Zapus toy

22 March, 2017 04:01AM

hackergotchi for Blankon developers

Blankon developers

Sokhibi: Free Digital Edition of the Inkscape Book

This month marks roughly two years since the author published and distributed the printed edition of the Inkscape book. After various considerations, today the author is releasing that book as a free digital edition (ebook).

The digital edition is deliberately given away for free so that members of the public who do not yet have spare money to buy the printed edition can still learn to use Inkscape. That said, the author will gladly accept a small voluntary donation from readers. Donations can be made in money or phone credit; instructions for donating are in the foreword (Kata Pengantar) of the book being shared.



Download the free digital edition of the Inkscape book


Readers may share this digital edition with anyone who needs it, online or offline, and are welcome to mirror it on their own websites or servers. If you would like your mirror listed on this blog, please post it in the blog comments.

That is all, thank you.
Semarang, February 1st, 2017

22 March, 2017 01:45AM by Istana Media (noreply@blogger.com)

March 21, 2017

hackergotchi for Ubuntu developers

Ubuntu developers

Stéphane Graber: NVidia CUDA inside a LXD container

LXD logo

GPU inside a container

LXD supports GPU passthrough but this is implemented in a very different way than what you would expect from a virtual machine. With containers, rather than passing a raw PCI device and have the container deal with it (which it can’t), we instead have the host setup with all needed drivers and only pass the resulting device nodes to the container.

This post focuses on NVidia and the CUDA toolkit specifically, but LXD’s passthrough feature should work with all other GPUs too. NVidia is just what I happen to have around.

The test system used below is a virtual machine with two NVidia GT 730 cards attached to it. Those are very cheap, low performance GPUs, that have the advantage of existing in low-profile PCI cards that fit fine in one of my servers and don’t require extra power.
For production CUDA workloads, you’ll want something much better than this.

Note that for this to work, you’ll need LXD 2.5 or higher.

Host setup

Install the CUDA tools and drivers on the host:

wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
sudo apt update
sudo apt install cuda

Then reboot the system to make sure everything is properly setup. After that, you should be able to confirm that your NVidia GPU is properly working with:

ubuntu@canonical-lxd:~$ nvidia-smi 
Tue Mar 21 21:28:34 2017       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.39                 Driver Version: 375.39                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GT 730      Off  | 0000:02:06.0     N/A |                  N/A |
| 30%   30C    P0    N/A /  N/A |      0MiB /  2001MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GT 730      Off  | 0000:02:08.0     N/A |                  N/A |
| 30%   26C    P0    N/A /  N/A |      0MiB /  2001MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0                  Not Supported                                         |
|    1                  Not Supported                                         |
+-----------------------------------------------------------------------------+

And can check that the CUDA tools work properly with:

ubuntu@canonical-lxd:~$ /usr/local/cuda-8.0/extras/demo_suite/bandwidthTest
[CUDA Bandwidth Test] - Starting...
Running on...

 Device 0: GeForce GT 730
 Quick Mode

 Host to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)	Bandwidth(MB/s)
   33554432			3059.4

 Device to Host Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)	Bandwidth(MB/s)
   33554432			3267.4

 Device to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)	Bandwidth(MB/s)
   33554432			30805.1

Result = PASS

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

Container setup

First lets just create a regular Ubuntu 16.04 container:

ubuntu@canonical-lxd:~$ lxc launch ubuntu:16.04 c1
Creating c1
Starting c1

Then install the CUDA demo tools in there:

lxc exec c1 -- wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
lxc exec c1 -- dpkg -i cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
lxc exec c1 -- apt update
lxc exec c1 -- apt install cuda-demo-suite-8-0 --no-install-recommends

At which point, you can run:

ubuntu@canonical-lxd:~$ lxc exec c1 -- nvidia-smi
NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.

Which is expected as LXD hasn’t been told to pass any GPU yet.

LXD GPU passthrough

LXD allows for pretty specific GPU passthrough, the details can be found here.
First let’s start with the most generic one, just allow access to all GPUs:

ubuntu@canonical-lxd:~$ lxc config device add c1 gpu gpu
Device gpu added to c1
ubuntu@canonical-lxd:~$ lxc exec c1 -- nvidia-smi
Tue Mar 21 21:47:54 2017       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.39                 Driver Version: 375.39                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GT 730      Off  | 0000:02:06.0     N/A |                  N/A |
| 30%   30C    P0    N/A /  N/A |      0MiB /  2001MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GT 730      Off  | 0000:02:08.0     N/A |                  N/A |
| 30%   27C    P0    N/A /  N/A |      0MiB /  2001MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0                  Not Supported                                         |
|    1                  Not Supported                                         |
+-----------------------------------------------------------------------------+
ubuntu@canonical-lxd:~$ lxc config device remove c1 gpu
Device gpu removed from c1

Now just pass whichever is the first GPU:

ubuntu@canonical-lxd:~$ lxc config device add c1 gpu gpu id=0
Device gpu added to c1
ubuntu@canonical-lxd:~$ lxc exec c1 -- nvidia-smi
Tue Mar 21 21:50:37 2017       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.39                 Driver Version: 375.39                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GT 730      Off  | 0000:02:06.0     N/A |                  N/A |
| 30%   30C    P0    N/A /  N/A |      0MiB /  2001MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0                  Not Supported                                         |
+-----------------------------------------------------------------------------+
ubuntu@canonical-lxd:~$ lxc config device remove c1 gpu
Device gpu removed from c1

You can also specify the GPU by vendorid and productid:

ubuntu@canonical-lxd:~$ lspci -nnn | grep NVIDIA
02:06.0 VGA compatible controller [0300]: NVIDIA Corporation GK208 [GeForce GT 730] [10de:1287] (rev a1)
02:07.0 Audio device [0403]: NVIDIA Corporation GK208 HDMI/DP Audio Controller [10de:0e0f] (rev a1)
02:08.0 VGA compatible controller [0300]: NVIDIA Corporation GK208 [GeForce GT 730] [10de:1287] (rev a1)
02:09.0 Audio device [0403]: NVIDIA Corporation GK208 HDMI/DP Audio Controller [10de:0e0f] (rev a1)
ubuntu@canonical-lxd:~$ lxc config device add c1 gpu gpu vendorid=10de productid=1287
Device gpu added to c1
ubuntu@canonical-lxd:~$ lxc exec c1 -- nvidia-smi
Tue Mar 21 21:52:40 2017       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.39                 Driver Version: 375.39                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GT 730      Off  | 0000:02:06.0     N/A |                  N/A |
| 30%   30C    P0    N/A /  N/A |      0MiB /  2001MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GT 730      Off  | 0000:02:08.0     N/A |                  N/A |
| 30%   27C    P0    N/A /  N/A |      0MiB /  2001MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0                  Not Supported                                         |
|    1                  Not Supported                                         |
+-----------------------------------------------------------------------------+
ubuntu@canonical-lxd:~$ lxc config device remove c1 gpu
Device gpu removed from c1

Which adds them both as they are exactly the same model in my setup.

But for such cases, you can also select using the card’s PCI ID with:

ubuntu@canonical-lxd:~$ lxc config device add c1 gpu gpu pci=0000:02:08.0
Device gpu added to c1
ubuntu@canonical-lxd:~$ lxc exec c1 -- nvidia-smi
Tue Mar 21 21:56:52 2017       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.39                 Driver Version: 375.39                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GT 730      Off  | 0000:02:08.0     N/A |                  N/A |
| 30%   27C    P0    N/A /  N/A |      0MiB /  2001MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0                  Not Supported                                         |
+-----------------------------------------------------------------------------+
ubuntu@canonical-lxd:~$ lxc config device remove c1 gpu 
Device gpu removed from c1

And lastly, lets confirm that we get the same result as on the host when running a CUDA workload:

ubuntu@canonical-lxd:~$ lxc config device add c1 gpu gpu
Device gpu added to c1
ubuntu@canonical-lxd:~$ lxc exec c1 -- /usr/local/cuda-8.0/extras/demo_suite/bandwidthTest
[CUDA Bandwidth Test] - Starting...
Running on...

 Device 0: GeForce GT 730
 Quick Mode

 Host to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)	Bandwidth(MB/s)
   33554432			3065.4

 Device to Host Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)	Bandwidth(MB/s)
   33554432			3305.8

 Device to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)	Bandwidth(MB/s)
   33554432			30825.7

Result = PASS

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

Conclusion

LXD makes it very easy to share one or multiple GPUs with your containers.
You can either dedicate specific GPUs to specific containers or just share them.

There is none of the overhead involved with the usual PCI-based passthrough, and only a single instance of the driver is running, with the containers acting just like normal host user processes would.

This does however require that your containers run a version of the CUDA tools which supports whatever version of the NVidia drivers is installed on the host.

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

21 March, 2017 10:08PM

hackergotchi for Univention Corporate Server

Univention Corporate Server

Release Candidate of UCS 4.2 Now Available

Today, we have published the Release Candidate of UCS 4.2. The highlight of the release is the new, freely configurable online portal, which you can flexibly adapt to your needs and those of your organization. We have also made a number of less obvious changes: we have updated the distribution base of UCS to Debian 8 (Jessie), and a large part of the Debian packages are now available natively. Hence, we can provide important security and product updates much faster than ever before.

We will release UCS 4.2 at the beginning of April 2017. Everyone who is curious to know more about UCS 4.2 can see a live demonstration of the Release Candidate at CeBIT in hall 3 / booth D36-620 until Friday this week.

Freedom of Choice and Self-Determination with UCS

“We promote digital self-determination and transparency. Organizations can use UCS to determine where and how they want to run services and store their data, without sacrificing services such as Microsoft Office 365 or G Suite from Google”, says Peter Ganten, founder and managing director of our company. “With our solution, we offer users freedom of choice and, above all, make it easier to manage their IT infrastructure as well as the apps and cloud services they use. At the same time, we are building a bridge between the various applications and client systems, such as Windows, Linux and Mac, with our cross-platform identity management. The sovereignty over their data remains in the users’ own hands, which is of enormous importance in public institutions like schools and authorities.”

New online portal for users and administrators

With the new UCS 4.2, we have thoroughly developed the operating concept and the user interface presents itself more tidily than ever before. A new, centralized online portal now also allows end users to quickly access available applications via a self-service. And administrators are provided with all the tools for professional IT management. The portal is freely configurable and can be adapted to the individual needs of your organization, be it an authority, school, or large company. If you are a user of an earlier version, you can also benefit from improved usability, support for current hardware, and higher security.

Technical Updates: Debian 8, Linux Kernel 4.9 LTS, and Samba 4.6

So far we have built UCS on a specially adapted Linux based on Debian 7. With UCS 4.2, Debian 8 (Jessie) will form the new technical base of UCS. With this migration, we will use original Debian packages for UCS. Only certain packages with components that are changed or more up-to-date than those in Debian, such as Samba or OpenLDAP, will still be built by ourselves. Your organization thus benefits from even faster security updates as well as higher performance through Linux Kernel 4.9 with Long Term Support (LTS).

With Samba 4.6, we are one of the first manufacturers to use this newly released version in a productive environment, which improves the performance of Active Directory as well as the exchange of printer drivers with Windows 10 clients.

As a direct installation or Docker image: More than 80 business apps available

The “Dockerization” of over 80 applications in our App Center continues. As an app developer, you can either provide us with your application as a single Docker container, or you can provide your software as Debian packages and we will take care of converting them into Docker-based applications.

Availability of UCS 4.2

UCS 4.2 will be available from April 2017.

Additional information

Further information on the UCS 4.2 Release Candidate can be found in our forum.

If you would like to get a live demonstration of UCS 4.2, feel free to arrange a meeting with us.

Test UCS Release Candidate UCS 4.2 Now

Der Beitrag Release Candidate of UCS 4.2 Now Available erschien zuerst auf Univention.

21 March, 2017 02:51PM by Alice Horstmann

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: When Products and Digital Signage speak the same language

This is a guest post by Dominique Guinard, Co-founder & CTO at EVRYTHNG. If you would like to contribute a guest post, please contact ubuntu-devices@canonical.com

Digital signage is booming. From stores to offices and public buildings, screens are now commonplace. This is a domain our partner Screenly masters, managing 10,000 screens and counting. Their secret sauce? Simplicity! Their system is plug and play, making it possible to deploy a screen within minutes. Screenly’s system is built on the popular Raspberry Pi platform and runs the new Ubuntu Core operating system, a cutting-edge operating system tailored to deploying apps in the real world.

A few weeks ago, Ubuntu, Screenly and EVRYTHNG sat down together to see if we could work on disrupting the digital signage world with a simple, yet very actionable solution to connect digital screens and products in store.

EVRYTHNG is already busy giving millions of products a digital life with a number of leading brands. Imagine if these products could communicate with digital signage without the need for any app to be installed, but instead simply by scanning the EVRYTHNG identities on the products from the Web.

There are plenty of scenarios in which a product and a screen could tell great stories: Are these shoes available in my size? Is this shirt 100% waterproof? What would I look like in this jacket? What’s best to eat with this wine?

The result is an integrated solution that we presented at Mobile World Congress 2017. Simply scan an item from the Web or tap an NFC tag, and off you go. You get the stock inventory on the screen in front of you, a video describing the product on the main screen and some related content on your phone.

How does it work? Products that are digitally enabled with EVRYTHNG get a unique URL each, such as https://tn.gg/HLqc3H8j. This URL can be serialized in a QR code, in an NFC tag or via image recognition. All of these formats (and many more) are supported via our scanning tool, SCANTHNG. SCANTHNG is also available as a Web SDK, meaning that consumers don’t need to install an app to interact with products. Instead, they can scan from a Web page on their phone.

Next, the image is sent to the EVRYTHNG platform, where the product is uniquely identified. The Reactor™ in our platform then programmatically decides what to do. In this case, the user is redirected to a landing page about the product, and the Screenly API is sent the product identifier, stock inventory and any other information that will be used to display the interactive information on the screens.

Such a system can be put in place within minutes thanks to the three platforms: Screenly, Ubuntu Core and EVRYTHNG. It also illustrates the power of products #BornDigital™ with Web capabilities: They can trigger experiences in the real world by combining their data and services on the Web!

Original guest post can be found here

21 March, 2017 09:00AM

David Tomaschik: Useful ARM References

I started playing the excellent IOARM wargame on netgarage. No, don’t be expecting spoilers, hints, or walk-throughs, I’m not that kind of guy. This is merely a list of interesting reading I’ve discovered to help me understand the ARM architecture and ARM assembly.

21 March, 2017 07:00AM

hackergotchi for OSMC

OSMC

OSMC's March update is here with Kodi 17.1

Just over a month ago, we released Kodi Krypton for all OSMC compatible devices. Since then, we've been working tirelessly to fix bugs, improve performance and listen to your feedback.

OSMC's March update is here, and it comes with a lot of improvements, including the new Kodi Krypton 17.1 release. February has been a jam-packed month, seeing the launch of Vero 4K and the new Raspberry Pi Zero with WiFi.

Here's what's new:

Bug fixes

  • Fix an issue where CEC may not work correctly on Vero 4K
  • Fix an issue that may prevent Vero 2 and Vero 4K from booting correctly on a small number of 4K displays
  • Fix an issue where enabling Bluetooth on Vero 4K required a reboot before working correctly
  • Fix visual issues with some screens on the OSMC skin
  • Fix choppy playback with some videos on Vero (late 2014)
  • Fix an issue with DNS TTL and CLASS field length
  • Fix an issue with handling of ClearProperty method
  • Fix an issue with subnet assignment for tethering
  • Fix an issue with search domain reconfiguration
  • Fix an issue with Bluetooth adapter handling
  • Fix an issue with DHCP handling and OFFER stage
  • Fix an issue with DNS proxy response handling
  • Fix an issue with nameservers after DHCP renewal
  • Fix an issue with DHCP request and IP link local
  • Fix an issue with DHCP timer removal handling
  • Fix an issue with missing MTU option from DHCP
  • Fix an issue with passphrases and space characters
  • Fix an issue with passphrases after WPS provisioning
  • Fix an issue with memory leak and wpa_supplicant
  • Fix an issue with memory leak and service ordering
  • Fix an issue preventing some TP-Link WiFi adapters from working correctly
  • Fix an issue where the OSMC Updater may only occupy a quarter of the screen on a 4K display
  • Fix an issue where the DVBLink PVR add-on may cause Kodi to crash after a period of activity
  • Fix an issue where Kodi may crash when playback is paused if experimental A2DP support is installed
  • Fix a power issue which may prevent keyboards from working on Vero 4K
  • Fix an issue on Raspberry Pi 0/1 where duplicate JustBoom DAC configuration file could prevent OSMC from updating
  • Fix an issue where the OSMC logo may look distorted in My OSMC
  • Fix an issue that prevented some remote controllers from working in Kodi after updating to Kodi Krypton
  • Fix an issue where skin settings may be lost after rebooting OSMC
  • Fix a variety of CEC issues on Raspberry Pi

Improving the user experience

  • Added support for Pi Zero W, including WiFi and BT
  • Improved Bluetooth support on Raspberry Pi
  • Added support for driving BT via alternate UART on Raspberry Pi
  • Improved CPU performance on Vero 4K
  • Improved GUI performance on Vero 2 and Vero 4K
  • Added support for Docker and AUFS on Vero 4K
  • Added support for a variety of USB TV tuners on Vero 4K
  • Added support for Hyperion on Vero 4K
  • Add option to disable fanart in the OSMC Skin
  • Improved OSMC skin performance
  • Add click to seek support to the OSMC skin
  • Add support for custom backgrounds to the OSMC skin
  • Improve playback performance on Vero 2 and Vero 4K
  • Add support for WiFi fast-reconnect and band-steering
  • Added support for NOOBS mass storage device booting
  • Improvements to Bluetooth management via My OSMC
  • Improved official OSMC 802.11ac WiFi adapter performance and configuration
  • Added support for additional network filesystems on Vero 4K
  • Improved WiFi performance on Vero 4K
  • Added support for more DVB hardware on Vero 4K
  • Improved boot time on NOOBS install by lazy-mounting /boot partition
  • Improved Kodi add-on performance, particularly with add-ons using long lists
  • Improved Kodi game controller support
  • Estuary skin improvements: including fixes for PVR
  • Improved support for playback of HEVC content on Raspberry Pi

Miscellaneous

  • OSMC buildsystem: ensure that a swap file can always be created when necessary

Wrap up

To get the latest and greatest version of OSMC, simply head to My OSMC -> Updater and check for updates manually on your existing OSMC set up. Of course — if you have updates scheduled automatically you should receive an update notification shortly.

If you enjoy OSMC, please follow us on Twitter or like us on Facebook and consider making a donation if you would like to support further development.

If you'd like to watch 10-bit H265 content or you've a penchant for 4K content, be sure to check out our new Vero 4K.

Enjoy!

21 March, 2017 12:05AM by Sam Nazarko

hackergotchi for Qubes

Qubes

Xen Security Advisory (XSA) Tracker

We’re pleased to announce the new Xen Security Advisory (XSA) Tracker. This tracker clearly shows whether the security of Qubes OS is (or was) affected by any given XSA in a simple “Yes” or “No” format. Since Qubes OS uses Xen for virtualization, we know that many of our users follow new XSA announcements. However, we also understand that most of our users aren’t Xen experts and may not be able to easily determine whether an XSA affects the security of Qubes. We know that this uncertainty can be unsettling, so our aim with the XSA Tracker is to remove any doubt by communicating this information clearly and directly to users, as we already do with Qubes Security Bulletins (QSBs).

It’s worth noting that Qubes has typically not been affected by new XSAs. At present, it has been over six years since the first XSA was published on March 14, 2011. Since that time, 203 XSAs have been published (excluding unused XSA numbers and currently embargoed XSAs). However, only 17 (8.37%) of these XSAs have affected the security of Qubes OS. These statistics will continue to be updated on the Tracker page as new XSAs are published.

21 March, 2017 12:00AM

March 20, 2017

hackergotchi for Ubuntu developers

Ubuntu developers

Dustin Kirkland: Ubuntu and The Classroom Connection



Over ten years ago, my Ubuntu journey began.

On October 7th, 2006, I drove with my wife, Kimberly Kirkland, to help set up her new classroom in Elgin, Texas.  This was her very first job as a teacher -- 4th grade, starting about a month into the school year as the school added a classroom to their crowded schedule at the very last minute.

After hanging a few posters on the wall, I found 4 old, broken iMac G3's, discarded in the closet.  With a couple of hours on my hands, I pulled each one apart and put together two functional computers.  But with merely 128MB of RAM, rotary hard disks, and a 32-bit PowerPC processor, MacOS 9 wasn't even remotely functional.

Now, I've been around Linux since 1997, but always Red Hat Linux.  In fact, I had spent most of the previous year (2005) staffed by IBM on site at Red Hat in Westford, MA, working on IBM POWER machines.

I had recently heard of this thing called Edubuntu -- a Linux distribution with games and tools and utilities specifically tailored for the classroom -- which sounded perfect for Kim's needs!

After a bit of fiddling with xorg.conf, I eventually got Ubuntu running on the machine.

In fact, it was shortly after that, when I first setup my Launchpad account (2006-10-11) and posted my first comment, with patch, and workaround instructions to Bug #22976 on 2006-12-14:


About a year later, I applied for a job with Canonical and started working on the Ubuntu Server team in February of 2008.  It's been a wonderful decade, helping bring Ubuntu, Linux, free, and open source software to the world.  And in a way, it sort of all started for me in my wife's first classroom.

But there's another story in here, a much more important story, actually.  And it's not my story, it's my wife's. It's Kimberly's story, as a brand new public school teacher...

You see, she was a 20-something year old, recently out of college and with her very first job in the public school system.  She wouldn't even see her first paycheck for another 6 weeks.  And she was setting up her classroom with whatever hand-me-downs, donations, or yard sale items she could find.  And I'm not talking about the broken computers.  I'm talking about the very basics.  Pens, pencils, paper, books, wall hangings -- the school supplies that most of us take for granted entirely.

Some schools and school districts are adequately funded, and can provide for their students, teachers, and classrooms.  Many parents are able to send their kids to school with the supplies requested on their lists -- glue, scissors, folders, bags, whatever.

But so, so, so many are not.  Schools that don't provide supplies.  And parents that can't afford it.  Thousands of kids in every school district in the world empty handed.

Do you know who makes up the slack?

Teachers.

Yes, our dearly beloved, underpaid, overworked, underappreciated, teachers.  They bring the extra pencils, tissues, staplers, and everything else in between, that their kids need.  And it's everywhere, all across the country, our teachers pick up that slack.  And I know this because it's not just my wife, not just Texas.  My mom and dad are both school teachers in Louisiana.  We know teachers all over the world where this is the case.  Teachers spend hundreds -- sometimes thousands -- of their own hard earned dollars to help their students in need and make their classrooms more suitable for learning.

I'm super proud to say that my wife Kim has spent the last year studying and researching education-focused, local and national charities, learning how they work and who they help.

Understanding the landscape, Kim has co-founded a non-profit organization based here in Austin, Texas -- Classroom Connection -- to collect funds to help distribute school supplies to teachers for their students in need.


After a successful GoFundMe campaign, Classroom Connection is now fully operational.  You can contribute in any of three ways:

Our kids are our future.  And their success starts with a rich education experience.  We can always do a little more to secure that future.  Which brings us back to Ubuntu, the philosophy:
"I am who I am, because of who we all are."
Thanks,
:-Dustin

20 March, 2017 09:03PM by Dustin Kirkland (noreply@blogger.com)

Jonathan Riddell: Planet KDE: Now with Added Twits

Twitter seems ever dominant and important for communication. Years ago I added a microblogging feed to Planet KDE, but that still needed people to add themselves, and being all idealistic I added support for anything with an RSS feed, assuming people would use the more-free identi.ca. But identi.ca went away, and Twitter I think removed their RSS ability but got ever more important and powerful. For the relaunched theme a couple of years ago we added some Twitter feeds, but they were hidden away and little used.

So today I’ve made them show by default and available down the side.  There’s one which is for all feeds with a #kde tag and one with @kdecommunity feed. You can hide them by clicking the Microblogging link at the top. Let me know what you think.

Update: my Bootstrap CSS failed: on medium-sized monitors it moved all the real content down below the Twitter feeds rather than floating it to the side, so I’ve moved them to the bottom instead of the side.  Anyone who knows Bootstrap better than me able to help fix?

I’ve also done away with the planetoids. zh.planetkde.org, fr.planetkde.org, pim.planetkde.org and several others. These were little used and when I asked representatives from the communities about them they didn’t even know they existed. Instead we have categories which you can see with the Configure Feed menu at the top to select languages.

I allowed the <embed> tag, which allows for embedding YouTube videos and other bits.  Don’t abuse it folks 🙂

Finally Planet KDE moved back to where it belongs: kde.org. Because KDE is a community, it should not be afraid of its community.

Let me know of any issues or improvements that could be made.


20 March, 2017 06:00PM

Ubuntu Insights: Three flaws at the heart of IoT security

This blog has been syndicated from SCMagazine UK, contributed by Thibaut Rouffineau – head of devices marketing.

According to the latest estimates by Gartner, the total number of connected devices will reach 6.4 billion by the end of this year. From connected homes, to autonomous vehicles, to futuristic smartdust, the Internet of Things has finally moved beyond the realm of theoretical concept and into our day-to-day lives.

As the presence of IoT devices has become more apparent however, so too has its Achilles heel – security. In the last six months alone, we’ve seen some of the largest DDoS attacks in history, all of which have been achieved through a vast network of infiltrated IoT devices. Given the scale of these attacks, it’s important to understand exactly how the Internet of Things is being infiltrated, what the existing issues are within the IoT, and ultimately, how best to fix them.

With this in mind, here are three of the biggest flaws that currently sit at the very heart of IoT security, along with a few tips for how developers, retailers and even governments can come together to make the internet of things safer for everyone:

1. The IoT product lifespan is too short
Through the combination of low barriers to entry and the huge potential for future products and applications, the Internet of Things represents a very attractive market for the business community. The result has been an IoT gold rush, with many independent developers and existing device manufacturers jumping on the bandwagon in an attempt to get their share of this exciting new sector.
Unfortunately, every gold rush has its losers. With so many companies rushing into a relatively new space – where many of the business models remain untested – it seems only natural to expect a reasonable number of false-starts along the way.

According to estimates from Canonical, over two-thirds of new IoT ventures are doomed to fail, with many projects surviving no longer than 18 months. When these businesses ultimately fail, their various IoT devices are left without ongoing support and vital security updates. The result has been an entire ecosystem of outdated and ultimately unsecured IoT devices just waiting to be hacked.

2. Nobody has taken ownership of the IoT
Across the various production stages of the average IoT device, it’s not always clear who should be responsible for ensuring that an end product is kept secure. Disconnects between different companies involved in the production process mean that, in many cases, security is treated as “someone else’s problem”. This is not helped by the fact that security during the development and maintenance cycles is almost always seen as a cost centre, with different departments passing the buck further down the line rather than taking on responsibility and absorbing the additional costs.

The result of this mentality is potential security holes being left open at all stages of the design process, with physical vulnerabilities being built into hardware, undocumented backdoors being incorporated within the operating system, and a lack of updates opening further vulnerabilities at the application level. To address this, rather than pushing responsibility further down the chain, all stages of the design process must start to incorporate some consideration for the end security of a device.

3. Lack of standardisation in IoT updates
According to research from Canonical, 40 percent of consumers have never performed an update on their connected devices. Given this fact, and that most users simply don’t know how to update IoT devices themselves, security patches must be delivered automatically in a consistent and reliable way.

This is especially true for those devices that do not provide users with an external user interface – something that is becoming increasingly true across the Internet of Things. In addition to providing automatic, centrally-managed updates, IoT device manufacturers must also find ways to roll those updates back as and when required. In several instances, faulty software updates have led to IoT devices being made less secure. In these instances, centralised rollback mechanisms are vital to ensure the long-term security of an IoT device.

While all of these flaws sit at the very heart of IoT security, they are just the tip of a much larger iceberg.

As recent events have shown, the Internet of Things is suffering from numerous vulnerabilities and potential security threats, from botnets and hackers, to spyware and cyber-attacks. To solve this issue, such concerns must be addressed from the ground up at all stages of the IoT. Governments need to provide a sensible level of regulation to limit the ‘gold rush’ mentality of new IoT firms. IoT device manufacturers must also consider the role of security throughout all stages of their designs. Developers themselves need to start incorporating more intelligent and automated update systems, relying on standardised operating systems and centralised software updates rather than numerous bespoke OSs. Even consumers must play their part, thinking carefully about the products they buy and the approaches they take to ensuring maximum security for their own home networks.

IoT security is not an issue that will be fixed overnight, but by incorporating security concerns from IoT infrastructure right through to post-purchase support we can help to make the Internet of Things safer, more reliable and ultimately more secure in 2017.

Original source from SCMagazine here

20 March, 2017 02:17PM

hackergotchi for Maemo developers

Maemo developers

Media Source Extensions upstreaming, from WPE to WebKitGTK+

A lot of good things have happened to the Media Source Extensions support since my last post, almost a year ago.

The most important piece of news is that the code upstreaming has kept going forward at a slow, but steady pace. The amount of code Igalia had to port was pretty big. Calvaris (my favourite reviewer) and I considered that the regular review tools in WebKit bugzilla were not going to be enough for a good exhaustive review. Instead, we did a pre-review in GitHub using a pull request on my own repository. It was an interesting experience, because the change set was so large that it had to be (artificially) divided in smaller commits just to avoid reaching GitHub diff display limits.

394 GitHub comments later, the patches were mature enough to be submitted to bugzilla as child bugs of Bug 157314 – [GStreamer][MSE] Complete backend rework. After some more comments in bugzilla, they were finally committed during Web Engines Hackfest 2016:

Some unforeseen regressions in the layout tests appeared, but after a couple more commits, all the mediasource WebKit tests were passing. There are also some other tests imported from W3C, but I kept them skipped because webm support is needed for many of them. I’ll focus again on that set of tests in due time.

Igalia is proud of having brought the MSE support up to date to WebKitGTK+. Eventually, this will improve the browser video experience for a lot of users using Epiphany and other web browsers based on that library. Here’s how it enables the usage of YouTube TV at 1080p@30fps on desktop Linux:

Our future roadmap includes bugfixing and webm/vp9+opus support. This support is important for users from countries enforcing patents on H.264. The current implementation can’t be included in distros such as Fedora for that reason.

As mentioned before, part of this upstreaming work happened during Web Engines Hackfest 2016. I’d like to thank our sponsors for having made this hackfest possible, as well as Metrological for giving upstreaming the importance it deserves.

Thank you for reading.

 


20 March, 2017 11:55AM by Enrique Ocaña González (eocanha@igalia.com)

hackergotchi for Ubuntu developers

Ubuntu developers

Thorsten Wilms: “Hobby harder; it’ll stunt!” RC-car T-shirt design

Backstory

In 2016, RC-car company Arrma released the Outcast, calling it a stunt truck. That label lead to some joking around in the UltimateRC forum. One member had trouble getting his Outcast to stunt. Utrak said “The stunt car didn’t stunt do hobby to it, it’ll stunt “. frystomer went: “If it still doesn’t stunt, hobby harder.” and finally stewwdog was like: “I now want a shirt that reads ‘Hobby harder, it’ll stunt’.” He wasn’t alone, so I created a first, very rough sketch.

Process

After a positive response, I decided to make it look like more of a stunt in another sketch:

Meanwhile, talk went to onesies and related practical considerations. Pink was also mentioned, thus I suddenly found myself confronted with a mental image that I just had to get out:

To find the right alignment and perspective, I created a Blender scene with just the text and boxes and cylinders to represent the car. The result served as template for drawing the actual image in Krita, using my trusty Wacom Intuos tablet.

Result


This design is now available for print on T-shirts, other apparel, stickers and a few other things, via Redbubble.


Filed under: Illustration, Planet Ubuntu Tagged: Apparel, Blender, Krita, RC, T-shirt

20 March, 2017 11:07AM

Jorge Castro: Lessons Learned from joining the Kubernetes community

Software can be complex and daunting, even more so in distributed systems. So when you or your company decide you’re going to give it a shot, it’s easy to get enamored with the technology and not think about the other things you and your team are going to need to learn to make participating rewarding for everyone.

When we first launched the Canonical Distribution of Kubernetes, our team was new, and while we knew how Kubernetes worked and what ops expertise we were going to bring to market right away, we found the sheer size of the Kubernetes community outright intimidating. Not the people, of course, they’re great; it’s the learning curve that can seem really large. So we decided to just dive in head first and then write about our experiences. While some of the things I mention here work great for individuals, if you have a team of individuals working on Kubernetes at your company, I hope some of these tips will be useful to you. This is by no means an exhaustive list; I’m still finding new things every day.

Find your SIGs

Kubernetes is divided into a bunch of Special Interest Groups (SIGs). You can find a list here. Don’t be alarmed! Bookmark this page; I use it as my starting-off point any time we need to find something out in more detail than we could find in the docs or the public list. On this page you’ll find contact information for the leads and, more importantly, when those SIGs meet. Meetings are open to the public and (usually) recorded. Find someone on your team to attend these meetings regularly. This is important for a few reasons:

  • k8s moves fast; if you don’t follow the area you care about, you can miss important information about a feature that matters to you.
  • It’s high bandwidth. Since SIGs meet regularly, you won’t find the long, drawn-out technical discussions on the mailing lists that you would on a project that only uses lists; these discussions move much faster when people talk face to face.
  • You get to meet people and put faces to names.
  • People get to see you and recognize your name (and optionally, your face). This will help you later on if you’re stuck and need help or if you want to start participating more.
  • Each team has a Slack channel and Google group (mailing list), so I prefer to sit in those channels as well; important information is usually announced there, along with meeting reminders and links to the important documents for that SIG.

There’s a SIG just for contributor experience

SIG-contribex - As it turns out, there’s an entire SIG working on improving the contributor experience. I found this SIG relatively late after we started; you’ll find that asking questions here will save you time in the long run. Even if you’re not asking questions yourself, you can learn how the mechanics of the project work just by listening in on the conversation.

So many things in /community

https://github.com/kubernetes/community - This should be one of your first starting points, if not the starting point. I put the SIGs above this because I’ve found most people are initially interested in one key area, and you can go to that SIG directly to get started and then come back to this. That doesn’t mean it isn’t important; if I get lost in something, this is usually the place I start looking. Try to get everyone on your team to have at least a working understanding of the concepts here, and of course don’t forget the CLA and Code of Conduct.

There’s a YouTube Channel

https://www.youtube.com/c/KubernetesCommunity - I found this channel to be very useful for “catching up”. Many SIGs publish their meetings relatively quickly, and putting the channel on in the background can help you keep track of what’s going on.

If you don’t have the time to dig into all the SIG meetings, you can concentrate on the community meeting, which is held weekly and summarises many of the things happening around the different SIGs. The community meetings also have demos, so it’s interesting to see how the ecosystem is building tools around k8s; if you can only make one meeting a week, this is probably the one to go to.

The Community Calendar and meetings

This sounds like advanced common sense but there’s a community calendar of events.

Additionally, I found that adding the SIG meetings to our team calendar helps. We like to rotate people around meetings so that they can get experience in what is happening around the project, and to ensure that, worst case, if someone can’t make a meeting, someone else is there to take notes. If you’re getting started, do yourself a favor and volunteer to take notes at a SIG meeting. You’ll need to pay closer attention to the material, and for me it helps me understand concepts better when I have to write them down in a way that makes sense for others.

We also found it useful not to flood one meeting with multiple people. If it’s something important, sure, but if you just want to keep an eye on what’s going on there, you can send just one person and then have that person give everyone a summary at your team standup or whatever. There are so many meetings that you don’t want to fall into the trap of having people sitting in meetings all day instead of getting things done.

OWNERS

Whatever area you’re working on, go up the tree and eventually you’ll find an OWNERS file that lists who owns/reviews that section of the code or docs or whatever. I use this as a little checklist when I join the SIG meetings to keep track of who is who. When I eventually went to a SIG meeting at KubeCon, it was nice to meet the people who will be reviewing my work or that I’ll be having a working relationship with.

Find a buddy

At some point you’ll be sitting in slack and you’ll see some poor person who is asking the same sorts of questions you were. That’s one of the first places you can start to help, just find someone and start talking to them. For me it was “Hey I noticed you sit in SIG-onprem too, you doing bare metal? How’s it going for you?”

It’s too big!

This used to worry me because the project is so large I figured I would never understand the entire thing. That’s ok. It’s totally fine to not know every single thing that’s going on, that’s why people have these meetings and summaries in the first place, just concentrate on what’s important to you and the rest will start to fall into place.

But we only consume Kubernetes, why participate?

One of the biggest benefits of consuming an open source project is taking advantage of the open development process. At some point something you care about will be discussed in the community, and when it is, you should take advantage of the economies of scale that having so many people working on something gives you. Even if you’re only using k8s on a beautifully set up public cloud from a vendor where you don’t have to worry about the gruesome details, your organization can still learn from all the work that is happening around the ecosystem. I learn about new tools and tips every single day, and even if your participation is “read-only”, you’ll find that there’s value in sharing expertise with peers.

Ongoing process

This post is already too long, so I’ll just have to keep posting more as I keep learning more. If you’ve got any tips to share, please leave a comment, or write a post and send me a link to link to.

20 March, 2017 10:16AM

March 19, 2017

hackergotchi for Tails

Tails

Call for testing: 3.0~beta3

You can help Tails! The third beta for the upcoming version 3.0 is out. We are very excited and cannot wait to hear what you think about it :)

What's new in 3.0~beta3?

Tails 3.0 will be the first version of Tails based on Debian 9 (Stretch). As such, it upgrades essentially all included software.

Other changes since Tails 3.0~beta2 include:

  • Important security fixes!

  • Upgrade to current Debian 9 (Stretch).

  • Tails Greeter:

    • Make the "Formats" settings in Tails Greeter take effect (it was introduced in Tails 3.0~alpha1 but has been broken since then).
    • Add keyboard shortcuts:
      • Alt key for accelerators in the main window
      • Ctrl+Shift+A for setting an administrator password
      • Ctrl+Shift+M for MAC spoofing settings
      • Ctrl+Shift+N for Tor network settings
  • Remove I2P. (This will happen in Tails 2.12 as well.)

  • Reintroduce the X11 guest utilities for VirtualBox (clipboard sharing and shared folders should work again).

  • Upgrade X.Org server and the modesetting driver in hope it will fix crashes when using some Intel graphics cards.

  • Automate the migration from KeePassX databases generated on Tails 2.x to the format required by KeePassX 2.0.x.

Technical details of all the changes are listed in the Changelog.

How to test Tails 3.0~beta3?

We will provide security updates for Tails 3.0~beta3, just like we do for stable versions of Tails.

But keep in mind that this is a test image. We tested that it is not broken in obvious ways, but it might still contain undiscovered issues.

But test wildly!

If you find anything that is not working as it should, please report to us on tails-testers@boum.org.

Bonus points if you first check if it is a known issue of this release or a longstanding known issue.

Get Tails 3.0~beta3

To upgrade, an automatic upgrade is available from 3.0~beta2 to 3.0~beta3.

If you cannot do an automatic upgrade, you can install 3.0~beta3 by following our usual installation instructions, skipping the Download and verify step.

Tails 3.0~beta3 ISO image OpenPGP signature

Known issues in 3.0~beta3

19 March, 2017 07:00PM

hackergotchi for Ubuntu developers

Ubuntu developers

Forums Council: New Ubuntu Member via forums contributions.

Please welcome our newest Member, Paddy Landau.

Paddy has been a long time contributor to the forums, having helped others with their Ubuntu issues for almost 9 years.

Paddy’s application thread can be viewed here, his wiki page here and his launchpad account here.

Congratulations from the Forums Council!

If you have been a contributor to the forums and wish to apply for Ubuntu Membership, please follow the process outlined here.


19 March, 2017 11:20AM

David Tomaschik: GOT and PLT for pwning.

So, during the recent 0CTF, one of my teammates was asking me about RELRO and the GOT and the PLT and all of the ELF sections involved. I realized that though I knew the general concepts, I didn’t know as much as I should, so I did some research to find out some more. This is documenting the research (and hoping it’s useful for others).

All of the examples below will be on an x86 Linux platform, but the concepts all apply equally to x86-64. (And, I assume, other architectures on Linux, as the concepts are related to ELF linking and glibc, but I haven’t checked.)

High-Level Introduction

So what is all of this nonsense about? Well, there’s two types of binaries on any system: statically linked and dynamically linked. Statically linked binaries are self-contained, containing all of the code necessary for them to run within the single file, and do not depend on any external libraries. Dynamically linked binaries (which are the default when you run gcc and most other compilers) do not include a lot of functions, but rely on system libraries to provide a portion of the functionality. For example, when your binary uses printf to print some data, the actual implementation of printf is part of the system C library. Typically, on current GNU/Linux systems, this is provided by libc.so.6, which is the name of the current GNU Libc library.

In order to locate these functions, your program needs to know the address of printf to call it. While this could be written into the raw binary at compile time, there’s some problems with that strategy:

  1. Each time the library changes, the addresses of the functions within the library change, so when libc is upgraded you’d need to rebuild every binary on your system. While this might appeal to Gentoo users, the rest of us would find it an upgrade challenge to replace every binary every time libc received an update.
  2. Modern systems using ASLR load libraries at different locations on each program invocation. Hardcoding addresses would render this impossible.

Consequently, a strategy was developed to allow looking up all of these addresses when the program is run and providing a mechanism to call these functions from libraries. This is known as relocation, and the hard work of doing this at runtime is performed by the linker, aka ld-linux.so. (Note that every dynamically linked program will be linked against the linker; this is actually set in a special ELF section called .interp.) The linker is actually run before any code from your program or libc, but this is completely abstracted from the user by the Linux kernel.
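
To see this kind of runtime lookup done explicitly, here is a minimal C sketch (illustrative only, not part of the binary we trace below) that resolves puts by hand with dlopen/dlsym. It is the programmer-driven cousin of what ld.so does lazily behind the PLT, and it assumes a glibc system and linking with -ldl.

// Minimal sketch: resolve a libc symbol by hand at runtime.
// Build (assumption): gcc -o dyn dyn.c -ldl
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
  void *libc = dlopen("libc.so.6", RTLD_LAZY);   /* open the shared library */
  if (!libc) { fprintf(stderr, "%s\n", dlerror()); return 1; }

  /* Ask for the address of puts, much as the dynamic linker would. */
  int (*my_puts)(const char *) = (int (*)(const char *))dlsym(libc, "puts");
  if (!my_puts) { fprintf(stderr, "%s\n", dlerror()); return 1; }

  my_puts("resolved at runtime");                /* call through the pointer */
  dlclose(libc);
  return 0;
}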

Relocations

Looking at an ELF file, you will discover that it has a number of sections, and it turns out that relocations require several of these sections. I’ll start by defining the sections, then discuss how they’re used in practice.

.got
This is the GOT, or Global Offset Table. This is the actual table of offsets as filled in by the linker for external symbols.
.plt
This is the PLT, or Procedure Linkage Table. These are stubs that look up the addresses in the .got.plt section, and either jump to the right address, or trigger the code in the linker to look up the address. (If the address has not been filled in to .got.plt yet.)
.got.plt
This is the GOT for the PLT. It contains the target addresses (after they have been looked up) or an address back in the .plt to trigger the lookup. Classically, this data was part of the .got section.
.plt.got
It seems like they wanted every combination of PLT and GOT! This just seems to contain code to jump to the first entry of the .got. I’m not actually sure what uses this. (If you know, please reach out and let me know! In testing a couple of programs, this code is not hit, but maybe there’s some obscure case for this.)

TL;DR: Those starting with .plt contain stubs to jump to the target, those starting with .got are tables of the target addresses.

Let’s walk through the way a relocation is used in a typical binary. We’ll include two libc functions: puts and exit and show the state of the various sections as we go along.

Here’s our source:

// Build with: gcc -m32 -no-pie -g -o plt plt.c

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
  puts("Hello world!");
  exit(0);
}

Let’s examine the section headers:

There are 36 section headers, starting at offset 0x1fb4:

Section Headers:
  [Nr] Name              Type            Addr     Off    Size   ES Flg Lk Inf Al
  [12] .plt              PROGBITS        080482f0 0002f0 000040 04  AX  0   0 16
  [13] .plt.got          PROGBITS        08048330 000330 000008 00  AX  0   0  8
  [14] .text             PROGBITS        08048340 000340 0001a2 00  AX  0   0 16
  [23] .got              PROGBITS        08049ffc 000ffc 000004 04  WA  0   0  4
  [24] .got.plt          PROGBITS        0804a000 001000 000018 04  WA  0   0  4

I’ve left only the sections I’ll be talking about; the full program has 36 sections!

So let’s walk through this process with the use of GDB. (I’m using the fantastic GDB environment provided by pwndbg, so some UI elements might look a bit different from vanilla GDB.) We’ll load up our binary and set a breakpoint just before puts gets called and then examine the flow step-by-step:

pwndbg> disass main
Dump of assembler code for function main:
   0x0804843b <+0>:	lea    ecx,[esp+0x4]
   0x0804843f <+4>:	and    esp,0xfffffff0
   0x08048442 <+7>:	push   DWORD PTR [ecx-0x4]
   0x08048445 <+10>:	push   ebp
   0x08048446 <+11>:	mov    ebp,esp
   0x08048448 <+13>:	push   ebx
   0x08048449 <+14>:	push   ecx
   0x0804844a <+15>:	call   0x8048370 <__x86.get_pc_thunk.bx>
   0x0804844f <+20>:	add    ebx,0x1bb1
   0x08048455 <+26>:	sub    esp,0xc
   0x08048458 <+29>:	lea    eax,[ebx-0x1b00]
   0x0804845e <+35>:	push   eax
   0x0804845f <+36>:	call   0x8048300 <puts@plt>
   0x08048464 <+41>:	add    esp,0x10
   0x08048467 <+44>:	sub    esp,0xc
   0x0804846a <+47>:	push   0x0
   0x0804846c <+49>:	call   0x8048310 <exit@plt>
End of assembler dump.
pwndbg> break *0x0804845f
Breakpoint 1 at 0x804845f: file plt.c, line 7.
pwndbg> r
Breakpoint *0x0804845f
pwndbg> x/i $pc
=> 0x804845f <main+36>:	call   0x8048300 <puts@plt>

Ok, we’re about to call puts. Note that the address being called is local to our binary, in the .plt section, hence the special symbol name of puts@plt. Let’s step through the process until we get to the actual puts function.

pwndbg> si
pwndbg> x/i $pc
=> 0x8048300 <puts@plt>:	jmp    DWORD PTR ds:0x804a00c

We’re in the PLT, and we see that we’re performing a jmp, but this is not a typical jmp. This is what a jmp to a function pointer would look like. The processor will dereference the pointer, then jump to the resulting address.
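
As a C analogy (my own illustration, not code from this binary), a jmp through a .got.plt slot behaves just like a call through a function pointer kept in a table:

// Tiny analogy: an indirect call through a table slot.
#include <stdio.h>

int (*got_slot)(const char *) = puts;  /* stands in for the .got.plt entry */

int main(void) {
  got_slot("Hello world!");            /* load the slot, then jump through it */
  return 0;
}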

Let’s check the dereference and follow the jmp. Note that the pointer is in the .got.plt section as we described above.

pwndbg> x/wx 0x804a00c
0x804a00c:	0x08048306
pwndbg> si
0x08048306 in puts@plt ()
pwndbg> x/2i $pc
=> 0x8048306 <puts@plt+6>:	push   0x0
   0x804830b <puts@plt+11>:	jmp    0x80482f0

Well, that’s weird. We’ve just jumped to the next instruction! Why has this occurred? Well, it turns out that because we haven’t called puts before, we need to trigger the first lookup. It pushes the slot number (0x0) on the stack, then calls the routine to look up the symbol name. This happens to be the beginning of the .plt section. What does this stub do? Let’s find out.

pwndbg> si
pwndbg> si
pwndbg> x/2i $pc
=> 0x80482f0: push   DWORD PTR ds:0x804a004
   0x80482f6: jmp    DWORD PTR ds:0x804a008

Now, we push the value of the second entry in .got.plt, then jump to the address stored in the third entry. Let’s examine those values and carry on.

pwndbg> x/2wx 0x804a004
0x804a004:  0xf7ffd918  0xf7fedf40

Wait, where is that pointing? It turns out the first one points into the data segment of ld.so, and the 2nd into the executable area:

0xf7fd9000 0xf7ffb000 r-xp    22000 0      /lib/i386-linux-gnu/ld-2.24.so
0xf7ffc000 0xf7ffd000 r--p     1000 22000  /lib/i386-linux-gnu/ld-2.24.so
0xf7ffd000 0xf7ffe000 rw-p     1000 23000  /lib/i386-linux-gnu/ld-2.24.so

Ah, finally, we’re asking for the information for the puts symbol! These two addresses in the .got.plt section are populated by the linker/loader (ld.so) at the time it is loading the binary.

So, I’m going to treat what happens in ld.so as a black box. I encourage you to look into it, but exactly how it looks up the symbols is a little bit too low level for this post. Suffice it to say that eventually we will reach a ret from the ld.so code that resolves the symbol.

pwndbg> x/i $pc
=> 0xf7fedf5b:  ret    0xc
pwndbg> ni
pwndbg> info symbol $pc
puts in section .text of /lib/i386-linux-gnu/libc.so.6

Look at that, we find ourselves at puts, exactly where we’d like to be. Let’s see how our stack looks at this point:

pwndbg> x/4wx $esp
0xffffcc2c: 0x08048464  0x08048500  0xffffccf4  0xffffccfc
pwndbg> x/s *(int *)($esp+4)
0x8048500:  "Hello world!"

Absolutely no trace of the trip through .plt, ld.so, or anything but what you’d expect from a direct call to puts.

Unfortunately, this seemed like a long trip to get from main to puts. Do we have to go through that every time? Fortunately, no. Let’s look at our entry in .got.plt again, disassembling puts@plt to verify the address first:

pwndbg> disass 'puts@plt'
Dump of assembler code for function puts@plt:
   0x08048300 <+0>:	jmp    DWORD PTR ds:0x804a00c
   0x08048306 <+6>:	push   0x0
   0x0804830b <+11>:	jmp    0x80482f0
End of assembler dump.
pwndbg> x/wx 0x804a00c
0x804a00c:	0xf7e4b870
pwndbg> info symbol 0xf7e4b870
puts in section .text of /lib/i386-linux-gnu/libc.so.6

So now, a call puts@plt results in an immediate jmp to the address of puts as loaded from libc. At this point, the overhead of the relocation is one extra jmp. (Ok, and dereferencing the pointer, which might cause a cache load, but I suspect the GOT is very often in L1 or at least L2, so very little overhead.)

How did the .got.plt get updated? That’s why a pointer to the beginning of the GOT was passed as an argument back to ld.so. ld.so did magic and inserted the proper address in the GOT to replace the previous address which pointed to the next instruction in the PLT.
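
To make the lazy-binding dance concrete, here is a toy model (purely illustrative; glibc’s real resolver is far more involved): each slot starts out pointing at a resolver stub, the first call patches the slot with the real address, and every later call goes straight through.

// Toy lazy binding: the first call detours through a resolver that patches
// the slot; subsequent calls are a single indirection to the real target.
#include <stdio.h>

typedef void (*slot_fn)(void);

static void real_hello(void) { puts("hello from the real target"); }

static slot_fn got[1];                    /* stands in for .got.plt */

static void resolver_stub(void) {
  puts("resolver: looking up the symbol and patching the slot");
  got[0] = real_hello;                    /* what ld.so does to the real GOT */
  got[0]();                               /* then continue into the target */
}

int main(void) {
  got[0] = resolver_stub;                 /* initial state, set up front */
  got[0]();                               /* first call: goes via the resolver */
  got[0]();                               /* second call: straight to target */
  return 0;
}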

Pwning Relocations

Alright, well now that we think we know how this all works, how can I, as a pwner, make use of this? Well, pwning usually involves taking control of the flow of execution of a program. Let’s look at the permissions of the sections we’ve been dealing with:

Section Headers:
  [Nr] Name              Type            Addr     Off    Size   ES Flg Lk Inf Al
  [12] .plt              PROGBITS        080482f0 0002f0 000040 04  AX  0   0 16
  [13] .plt.got          PROGBITS        08048330 000330 000008 00  AX  0   0  8
  [14] .text             PROGBITS        08048340 000340 0001a2 00  AX  0   0 16
  [23] .got              PROGBITS        08049ffc 000ffc 000004 04  WA  0   0  4
  [24] .got.plt          PROGBITS        0804a000 001000 000018 04  WA  0   0  4

Key to Flags:
  W (write), A (alloc), X (execute), M (merge), S (strings), I (info),

We’ll note that, as is typical for a system supporting NX, no section has both the Write and eXecute flags enabled. So we won’t be overwriting any executable sections, but we should be used to that.

On the other hand, the .got.plt section is basically a giant array of function pointers! Maybe we could overwrite one of these and control execution from there. It turns out this is quite a common technique, as described in a 2001 paper from team teso. (Hey, I never said the technique was new.) Essentially, any memory corruption primitive that will let you write to an arbitrary (attacker-controlled) address will allow you to overwrite a GOT entry.
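
Here is a deliberately simplified simulation of the idea (my own sketch against a fake table, not a real GOT): give an attacker a write-what-where primitive aimed at a function-pointer slot, and the next call through that slot runs code of their choosing.

// Simulated GOT overwrite: an arbitrary write into a function-pointer table
// redirects the next call made through that table.
#include <stdio.h>
#include <string.h>

typedef void (*slot_fn)(void);

static void benign(void)   { puts("benign function"); }
static void hijacked(void) { puts("attacker-chosen code runs instead"); }

static slot_fn fake_got[1] = { benign };  /* stands in for a .got.plt slot */

/* Stand-in for a write-what-where memory corruption primitive. */
static void arbitrary_write(void *where, const void *what, size_t len) {
  memcpy(where, what, len);
}

int main(void) {
  fake_got[0]();                                     /* normal call path */
  slot_fn evil = hijacked;
  arbitrary_write(&fake_got[0], &evil, sizeof evil); /* the "GOT" overwrite */
  fake_got[0]();                                     /* control flow hijacked */
  return 0;
}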

Mitigations

So, since this exploit technique has been known for so long, surely someone has done something about it, right? Well, it turns out yes, there’s been a mitigation since 2004. Enter relocations read-only, or RELRO. It in fact has two levels of protection: partial and full RELRO.

Partial RELRO (enabled with -Wl,-z,relro):

  • Maps the .got section as read-only (but not .got.plt)
  • Rearranges sections to reduce the likelihood of global variables overflowing into control structures.

Full RELRO (enabled with -Wl,-z,relro,-z,now):

  • Does the steps of Partial RELRO, plus:
  • Causes the linker to resolve all symbols at link time (before starting execution) and then remove write permissions from .got.
  • .got.plt is merged into .got with full RELRO, so you won’t see this section name.

Only full RELRO protects against overwriting function pointers in .got.plt. It works by causing the linker to immediately look up every symbol in the PLT and update the addresses, then mprotect the page to no longer be writable.
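
As a rough illustration of what that buys you (again a sketch of mine, with an mmap’d page standing in for the GOT): resolve the pointers up front, then mprotect() the page read-only. Calls keep working through the table, while any later overwrite attempt faults instead of hijacking control.

// Rough full-RELRO analogue: resolve, then drop write permission with
// mprotect() so the pointer table can no longer be tampered with.
// Build (assumption): gcc -o relro_demo relro_demo.c
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

typedef int (*slot_fn)(const char *);

int main(void) {
  long pagesz = sysconf(_SC_PAGESIZE);
  /* One writable page standing in for the GOT while symbols get resolved. */
  slot_fn *slots = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (slots == MAP_FAILED) { perror("mmap"); return 1; }

  slots[0] = puts;                   /* "resolve all symbols at load time" */
  slots[0]("resolved while the table is still writable");

  /* The full-RELRO step: the table becomes read-only from here on. */
  if (mprotect(slots, pagesz, PROT_READ) != 0) { perror("mprotect"); return 1; }

  slots[0]("calls still work through the read-only table");
  /* slots[0] = NULL;   <-- this overwrite would now SIGSEGV */
  return 0;
}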

Summary

The .got.plt is an attractive target for printf format string exploitation and other arbitrary write exploits, especially when your target binary lacks PIE, causing the .got.plt to be loaded at a fixed address. Enabling Full RELRO protects against these attacks by preventing writing to the GOT.

References

19 March, 2017 07:00AM

hackergotchi for VyOS

VyOS

VyOS 1.2.0 repository re-structuring

In preparation for the new 1.2.0 (jessie-based) beta release, we are re-populating the package repositories. The old repositories are now archived; you can still find them in the /legacy/repos directory on dev.packages.vyos.net

The purpose of this is two-fold. First, the old repo got quite messy, and Debian people (rightfully!) keep reminding us about it, but it would be difficult to do a gradual cleanup. Second, since the CI server has moved, and so did the build hosts, we need to test how well the new procedures are working. And, additionally, it should tell us if we are prepared to restore VyOS from its source should anything happen to the packages.vyos.net server or its contents.

For perhaps a couple of days, there will be no new nightly builds, and you will not be able to build ISOs yourself, unless you change the repo path in ./configure options by hand. Stay tuned.

19 March, 2017 04:59AM by Daniil Baturin

hackergotchi for Ubuntu developers

Ubuntu developers

Bryan Quigley: RSS Reading – NewsBlur

Bye Tiny

Some recent hacking attempts at my site had convinced me to reduce the number of logins I had to protect on my personal site.   That’s what motivated a move from the -still- awesome Tiny Tiny RSS that I’ve been using since Google Reader ended.   I only follow 13 sites and maintaining my own install simply doesn’t make sense.

* None of the hacking attempts appeared to be targeting Tiny Tiny RSS ~ but then again I’m not sure if I would have noticed if they were.

Enter NewsBlur

My favorite site for finding software alternatives quickly settled on a few obvious choices.  Then I noticed that one of them was both open source and hosted on their own servers with a freemium model.

It was NewsBlur

I decided to try it out and haven’t looked back.  The interface is certainly different from Tiny (and after 3 years I was very used to Tiny), but I haven’t really thought about it after the first week.   The only item I found a bit difficult to use was arranging folders ~ I’d really prefer drag and drop.   I only needed to do it once, so not a big deal.

The free account has some limitations, such as a limit on the number of feeds (64), a limit on how fast they update, and no ability to save stories.   The premium account is only $24 a year, which seems very reasonable if you want to support this service or need those features.  As of this writing there were about 5800 premium and about 5800 standard users, which seems like a healthy ratio.

Some security notes: the site gets an A on SSLLabs.com, but they do have HSTS explicitly turned off.   I’m guessing they can’t enable HSTS because they need to serve pictures directly off of other websites that are HTTP only.

NewsBlur’s code is on Github, including how to set up your own NewsBlur instance (it’s designed to run on 3 separate servers) or for testing/development.   I found it particularly nice that the guide the site operator will follow if NewsBlur goes down is public.  Now, that’s transparency!

They have a bunch of other advanced features (still in free version) that I haven’t even tried yet, such as:

  • finding other stories you would be interested in (Launch Intel)
  • subscribing to email newsletters to view in the feed
  • Apps for Android, iPhone and suggested apps for many other OSes
  • Global sharing on NewsBlur
  • Your own personal (public in free version) blurblog to share stories and your comments on them

Give NewsBlur a try today.  Let me know if you like it!

I’d love to see more of this nice combination of hosted web service (with paid & freemium version) and open source project.  Do you have a favorite project that follows this model?   Two others that I know of are Odoo and draw.io.

19 March, 2017 03:34AM

hackergotchi for Blankon developers

Blankon developers

Sokhibi: Syncing the Clock Across Two Different Operating Systems (part 2)

This post is an update of an earlier post titled Menyamakan Jam pada Dua Sistem Operasi Berbeda (Syncing the Clock Across Two Different Operating Systems). In the examples in that post the author was still using BlankOn 8 Rote, whereas in this post the author uses BlankOn X Tambora.

Systemd
For system management, BlankOn Linux version 9 and earlier used SysVinit (SysV init), often called system init, but starting with BlankOn X Tambora it uses systemd. The latest BlankOn Linux uses systemd so that system and daemon initialisation, management and tracking are faster.

For an explanation of systemd, read the wiki page: https://en.wikipedia.org/wiki/Systemd

Booting the Operating System
When the computer is first switched on, the boot process starts with the BIOS (Basic Input/Output System) software on the motherboard, which checks and initialises the hardware; the BIOS then runs the bootloader (BlankOn uses GRUB). The bootloader accesses the Master Boot Record (MBR) or the GUID Partition Table (GPT) on the attached storage media to load and start the Linux kernel.

Whoops..... there I go talking about systemd instead

The author assumes that the reader has already read and understood the article Cara Menyamakan Jam pada Dua Sistem Operasi Berbeda (How to Sync the Clock Across Two Different Operating Systems), so this post should not be confusing.

To sync the clock on an operating system that already uses systemd, follow these steps:

  • Log in to the other operating system
  • Set the clock in the other operating system => Save
  • Restart the computer
  • Log in to BlankOn X Tambora
  • Open a Terminal
  • Check the time/date with the command: $ timedatectl 
  • Then generate adjtime with the command $ sudo timedatectl set-local-rtc 1 
  • Enter the root password.
  • Done

That concludes this short post, Syncing the Clock Across Two Different Operating Systems (part 2); I hope it is useful.

19 March, 2017 03:08AM by Istana Media (noreply@blogger.com)

March 18, 2017

hackergotchi for rescatux

rescatux

Super Grub2 Disk 2.02s8 downloads







Recommended download (Floppy, CD & USB in one) (Valid for i386, x86_64, i386-efi and x86_64-efi):

Super Grub2 Disk 2.01 rc2 Main Menu






 

 

 

 

 

EFI x86_64 standalone version:

EFI i386 standalone version:

CD & USB in one downloads:

About other downloads: as this is the first time I have built Super Grub2 Disk from source code in this way (well, probably not the first time, but the first time in ages), I have not been able to build these other downloads: coreboot, i386-efi, i386-pc, ieee1275, x86_64-efi, standalone coreboot, standalone i386-efi, standalone ieee1275. bfree has helped on this matter and with his help we might have those builds in future releases. If you want such builds, drop a mail on the mailing list so that we are aware of that need.

 

Source code:

Everything (All binary releases and source code):

Hashes

In order to check the former downloads you can either check the download directory page for this release

or you can check checksums right here:

MD5SUMS

6d2cf16a731798ec6b14e8893563674b  super_grub2_disk_2.02s8_source_code.tar.gz
294417a0e5b58adc6f80ef8ee9b3783e  super_grub2_disk_hybrid_2.02s8.iso
b74cac92e6f74316dd794e98e5ba92e6  super_grub2_disk_i386_efi_2.02s8.iso
7345fdcfad9fa9cd7af6e88d737bc167  super_grub2_disk_i386_pc_2.02s8.iso
0d21930618d36f04826251e51d9f921e  super_grub2_disk_standalone_i386_efi_2.02s8.EFI
ba803db5593b0e5cc50908644fd02fba  super_grub2_disk_standalone_x86_64_efi_2.02s8.EFI
02bed470193e14ca7578ef045f8cc45a  super_grub2_disk_x86_64_efi_2.02s8.iso
cce6f2fae3c7b8507b7635417cc6426e  super_grub2_disk_2.02s8.zip

SHA1SUMS

70f22db153c4d5275c6668b6746b2d2902b7ef40  super_grub2_disk_2.02s8_source_code.tar.gz
cf652d1fe21743180f555a34e89f95763310aaf3  super_grub2_disk_hybrid_2.02s8.iso
31c8a20049bc22cf6e6dd6e4f8c3cb95ef9a71e4  super_grub2_disk_i386_efi_2.02s8.iso
5d47e42a967b9b13809956b21d86f77bcead73c9  super_grub2_disk_i386_pc_2.02s8.iso
cad195239ec05234d4f8d801e087cf8277f7cb53  super_grub2_disk_standalone_i386_efi_2.02s8.EFI
3c1a8b1f86349b66241706adeeef31ac14d415f1  super_grub2_disk_standalone_x86_64_efi_2.02s8.EFI
a942195ca18ffa74874bbde909fe3aa6200dad7c  super_grub2_disk_x86_64_efi_2.02s8.iso
d119ed8ae06475e689106ec9c48e19970f14517b  super_grub2_disk_2.02s8.zip

SHA256SUMS

2a7b845358c6fe5dfc62ea7f0b1d927793fc8b5ee6725760e79ab1bbfd10ffeb  super_grub2_disk_2.02s8_source_code.tar.gz
83e37560147cbe730f95e6a70b2670a68049016cd2460849bbeb8a0b9c1a62be  super_grub2_disk_hybrid_2.02s8.iso
42a9cad65a183a8e924b74ef8c2e1fe6a00fc490021380d99ca817b11c97ea0a  super_grub2_disk_i386_efi_2.02s8.iso
badb54279d5f35a866c043921b56a1583f6d1ee88a80fd3790931cc0aa315709  super_grub2_disk_i386_pc_2.02s8.iso
5196bec9f32132f1855876e3cce8ba3400f6ce7d1fd8d78c7458b2ac4c7c716a  super_grub2_disk_standalone_i386_efi_2.02s8.EFI
55f094295bc22b775009ce2531a6ec99f2dbbd1380d38a155e9f8eb6cce420f3  super_grub2_disk_standalone_x86_64_efi_2.02s8.EFI
f1c0dbb02fb42f2262937f571a2707a718677c2b8cd6a6fb7c899a8d89fffe20  super_grub2_disk_x86_64_efi_2.02s8.iso
c50d2e13e5b03b8abec64f8036767170e99a9944e04b3c6ffb0f6df27940e236  super_grub2_disk_2.02s8.zip


18 March, 2017 09:05PM by adrian15

Super Grub2 Disk 2.02s8 released

Super Grub2 Disk 2.02s8 stable is here.

Super GRUB2 Disk is a live cd that helps you to boot into most any Operating System (OS) even if you cannot boot into it by normal means.

A new stable release

The former Super Grub2 Disk stable release was version 2.02s7, released in February 2017 (1 month ago). New features and changes since the previous stable version are:

  • Updated the grub 2.02 build to tag 2.02~rc2. This is the release candidate for the final stable 2.02 upstream Grub release. Please use this build to give them (upstream Grub) feedback on this version. It’s advised to ask here before reporting to them, so that we can rule out the bug being a Super Grub2 Disk specific one.
  • Thanks to a suggestion from Necrosporus, the default starfield theme is no longer included, so images are now smaller. E.g. the hybrid image size is now 19.3 MB, while the former version’s image size was 22 MB.
Super Grub2 Disk 2.02s5 – Detect and show boot methods in action

Below we cover the complete Super Grub2 Disk feature set (with a demo video), where you can download it, the thank you / hall of fame, and some thoughts about Super Grub2 Disk development.

Please do not forget to read our howtos so that you have step-by-step guides (how to make a CDROM or a USB, how to boot from it, etc.) on how to use Super Grub2 Disk and, if needed, Rescatux.

Super Grub2 Disk 2.02s4 main menu

Tour

Here is a little video tour to discover most of the Super Grub2 Disk options. The rest of the options you will have to discover by yourself.

Features

Most of the features here will let you boot into your Operating Systems. The rest of the options improve Super Grub2 Disk’s operating system autodetection (enable RAID, LVM, etc.) or deal with minor aspects of the user interface (colours, language, etc.).

  • Change the language UI
  • Translated into several languages
    • Spanish / Español
    • German / Deutsch
    • French / Français
    • Italian / Italiano
    • Malay / Bahasa Melayu
    • Russian

Super Grub2 Disk 2.01 rc2 Spanish Main Menu

  • Detect and show boot methods option to detect most Operating Systems

Super Grub2 Disk 2.01 beta 3 – Everything menu making use of grub.cfg extract entries option functionality

  • Enable all native disk drivers *experimental* to detect most Operating Systems also in special devices or filesystems
  • Boot manually
    • Operating Systems
    • grub.cfg – Extract entries

      Super Grub2 Disk 2.01 beta 3 grub.cfg Extract entries option
    • grub.cfg – (GRUB2 configuration files)
    • menu.lst – (GRUB legacy configuration files)
    • core.img – (GRUB2 installation (even if mbr is overwritten))
    • Disks and Partitions (Chainload)
    • Bootable ISOs (in /boot-isos or /boot/boot-isos
    • Extra GRUB2 functionality
      • Enable GRUB2’s LVM support
      • Enable GRUB2’s RAID support
      • Enable GRUB2’s PATA support (to work around BIOS bugs/limitation)
      • Mount encrypted volumes (LUKS and geli)
      • Enable serial terminal
    • Extra Search functionality
      • Search in floppy ON/OFF
      • Search in CDROM ON/OFF
  • List Devices / Partitions
  • Color ON /OFF
  • Exit
    • Halt the computer
    • Reboot the computer

Supported Operating Systems

Excluding overly customised kernels from university students, Super Grub2 Disk can autodetect and boot almost every Operating System. Some examples are written here so that Google bots can see them, and also to reassure the final user searching for his own special (according to him) Operating System.

  • Windows
    • Windows 10
    • Windows Vista/7/8/8.1
    • Windows NT/2000/XP
    • Windows 98/ME
    • MS-DOS
    • FreeDOS
  • GNU/Linux
    • Direct Kernel with autodetected initrd
      Super Grub2 Disk – Detect any Operating System – Linux kernels detected
      • vmlinuz-*
      • linux-*
      • kernel-genkernel-*
    • Debian / Ubuntu / Mint
    • Mageia
    • Fedora / CentOS / Red Hat Enterprise Linux (RHEL)
    • openSUSE / SuSE Linux Enterpsise Server (SLES)
    • Arch
    • Any many, many, more.
      • FreeBSD
        • FreeBSD (single)
        • FreeBSD (verbose)
        • FreeBSD (no ACPI)
        • FreeBSD (safe mode)
        • FreeBSD (Default boot loader)
      • EFI files
      • Mac OS X/Darwin 32bit or 64bit
Super Grub2 Disk 2.00s2 rc4 Mac OS X entries (Image credit to: Smx)

Support for different hardware platforms

Before this release we only had the hybrid version aimed at regular PCs. Now, with the new EFI-based machines appearing, you also have the EFI standalone versions among others. What we don’t support is booting when Secure Boot is enabled.

  • Most any PC thanks to hybrid version (i386, x86_64, i386-efi, x86_64-efi) (ISO)
  • EFI x86_64 standalone version (EFI)
  • EFI i386 standalone version (EFI)
  • Additional Floppy, CD and USB in one download (ISO)
    • i386-pc
    • i386-efi
    • x86_64-efi

Known bugs

  • Non English translations are not completed
  • Enable all native disk drivers *experimental* crashes Virtualbox randomly

Supported Media

  • Compact Disk – Read Only Memory (CD-ROM) / DVD
  • Universal Serial Bus (USB) devices
  • Floppy (1.98s1 version only)

Downloads







Recommended download (Floppy, CD & USB in one) (Valid for i386, x86_64, i386-efi and x86_64-efi):

Super Grub2 Disk 2.01 rc2 Main Menu






 

 

 

 

 

EFI x86_64 standalone version:

EFI i386 standalone version:

CD & USB in one downloads:

About other downloads: as this is the first time I have built Super Grub2 Disk from source code in this way (well, probably not the first time, but the first time in ages), I have not been able to build these other downloads: coreboot, i386-efi, i386-pc, ieee1275, x86_64-efi, standalone coreboot, standalone i386-efi, standalone ieee1275. bfree has helped on this matter and with his help we might have those builds in future releases. If you want such builds, drop a mail on the mailing list so that we are aware of that need.

 

Source code:

Everything (All binary releases and source code):

Hashes

In order to check the former downloads you can either check the download directory page for this release

or you can check checksums right here:

MD5SUMS

6d2cf16a731798ec6b14e8893563674b  super_grub2_disk_2.02s8_source_code.tar.gz
294417a0e5b58adc6f80ef8ee9b3783e  super_grub2_disk_hybrid_2.02s8.iso
b74cac92e6f74316dd794e98e5ba92e6  super_grub2_disk_i386_efi_2.02s8.iso
7345fdcfad9fa9cd7af6e88d737bc167  super_grub2_disk_i386_pc_2.02s8.iso
0d21930618d36f04826251e51d9f921e  super_grub2_disk_standalone_i386_efi_2.02s8.EFI
ba803db5593b0e5cc50908644fd02fba  super_grub2_disk_standalone_x86_64_efi_2.02s8.EFI
02bed470193e14ca7578ef045f8cc45a  super_grub2_disk_x86_64_efi_2.02s8.iso
cce6f2fae3c7b8507b7635417cc6426e  super_grub2_disk_2.02s8.zip

SHA1SUMS

70f22db153c4d5275c6668b6746b2d2902b7ef40  super_grub2_disk_2.02s8_source_code.tar.gz
cf652d1fe21743180f555a34e89f95763310aaf3  super_grub2_disk_hybrid_2.02s8.iso
31c8a20049bc22cf6e6dd6e4f8c3cb95ef9a71e4  super_grub2_disk_i386_efi_2.02s8.iso
5d47e42a967b9b13809956b21d86f77bcead73c9  super_grub2_disk_i386_pc_2.02s8.iso
cad195239ec05234d4f8d801e087cf8277f7cb53  super_grub2_disk_standalone_i386_efi_2.02s8.EFI
3c1a8b1f86349b66241706adeeef31ac14d415f1  super_grub2_disk_standalone_x86_64_efi_2.02s8.EFI
a942195ca18ffa74874bbde909fe3aa6200dad7c  super_grub2_disk_x86_64_efi_2.02s8.iso
d119ed8ae06475e689106ec9c48e19970f14517b  super_grub2_disk_2.02s8.zip

SHA256SUMS

2a7b845358c6fe5dfc62ea7f0b1d927793fc8b5ee6725760e79ab1bbfd10ffeb  super_grub2_disk_2.02s8_source_code.tar.gz
83e37560147cbe730f95e6a70b2670a68049016cd2460849bbeb8a0b9c1a62be  super_grub2_disk_hybrid_2.02s8.iso
42a9cad65a183a8e924b74ef8c2e1fe6a00fc490021380d99ca817b11c97ea0a  super_grub2_disk_i386_efi_2.02s8.iso
badb54279d5f35a866c043921b56a1583f6d1ee88a80fd3790931cc0aa315709  super_grub2_disk_i386_pc_2.02s8.iso
5196bec9f32132f1855876e3cce8ba3400f6ce7d1fd8d78c7458b2ac4c7c716a  super_grub2_disk_standalone_i386_efi_2.02s8.EFI
55f094295bc22b775009ce2531a6ec99f2dbbd1380d38a155e9f8eb6cce420f3  super_grub2_disk_standalone_x86_64_efi_2.02s8.EFI
f1c0dbb02fb42f2262937f571a2707a718677c2b8cd6a6fb7c899a8d89fffe20  super_grub2_disk_x86_64_efi_2.02s8.iso
c50d2e13e5b03b8abec64f8036767170e99a9944e04b3c6ffb0f6df27940e236  super_grub2_disk_2.02s8.zip

Changelog (since former 2.00s2 stable release)

Changes since 2.02s7 version:

  • Use grub-2.02-rc2 upstream grub2 tag
  • Default theme starfield is no longer included. This will make images smaller.
  • (Devel) Make sure normal isos and standalone images have hash files without its full path.
  • (Devel) File hashes generation has been rewritten to work from the single supergrub-mkcommon generate_filename_hashes function
  • (Devel) Now MD5SUMS, SHA1SUMS and SHA256SUMS files are generated as part of the official build.

Changes since 2.02s6 version:

  • Updated grub 2.02 build to tag: 2.02~rc1

Changes since 2.02s5 version:

  • Added Russian language
  • Improved Arch Linux initramfs detection
  • Added i386-efi build support
  • Added i386-efi to the hybrid iso
  • Grub itself is translated when a language is selected
  • Added loopback.cfg file (non officially supported)
  • (Devel) sgrub.pot updated to latest strings
  • (Devel) Added grub-build-004-make-check so that we ensure the build works
  • (Devel) Make sure linguas.sh is built when running ‘grub-build-002-clean-and-update’
  • (Devel) Updated upstream Super Grub2 Disk repo on documentation
  • (Devel) Move core supergrub menu under menus/sgd
  • (Devel) Use sg2d_directory as the base super grub2 disk directory variable
  • (Devel) New supergrub-sourcecode script that creates current git branch source code tar.gz
  • (Devel) New supergrub-all-zip-file script: Makes sure a zip file of everything is built.
  • (Devel) supergrub-meta-mkrescue: Build everything into releases directory in order to make source code more clean.
  • (Devel) New supergrub-official-release script: Build main files, source code and everything zip file from a single script in order to ease official Super Grub2 Disk releases.

Changes since 2.02s4 version:

  • Stop trying to chainload devices under UEFI and improve the help people get in the case of a platform mismatch
  • (Devel) Properly support source based built grub-mkfont binary.
  • New options were added to chainload directly either /ntldr or /bootmgr thanks to ntldr command. They only work in BIOS mode.

Changes since 2.02s3 version:

  • Using upstream grub-2.02-beta3 tag as the new base for Super Grub2 Disk’s grub.
  • Major improvement in Windows OS detection (based on BCD) Windows Vista, 7, …
  • Major improvement in Windows OS detection (based on ntldr) Windows XP, 2000, …

Changes since 2.02s2 beta 1 version:

  • (Devel) grub-mkstandalone was deleted because we no longer use it
  • Updated (and added) Copyright notices for 2015
  • New option: ‘Disks and Partitions (Chainload)’ adapted from Smx work
  • Many files were rewritten so that they only loop between devices that actually need to be searched into.
    This enhancement will make Super Grub2 Disk faster.
  • Remove Super Grub2 Disk own devices from search by default. Added an option to be able to enable/disable the Super Grub2 Disk own devices search.

2.02s2 beta 1 changelog:

  • Updated grub 2.02 build to commit: 8e5bc2f4d3767485e729ed96ea943570d1cb1e45
  • Updated documentation for building Super Grub2 Disk
  • Improvement on upstream grub (d29259b134257458a98c1ddc05d2a36c677ded37 – test: do not stop after first file test or closing bracket) will probably make Super Grub2 Disk run faster.
  • Added new grub build scripts so that Super Grub2 Disk uses its own built versions of grub and not the default system / distro / chroot one.
  • Ensure that Mac OS X entries are detected ok thanks to Users dir. This is because Grub2 needs to emulate Mac OS X kernel so that it’s detected as a proper boot device on Apple computers.
  • Thanks to upstream grub improvement now Super Grub2 Disk supports booting in EFI mode when booted from a USB device / hard disk. Actually SG2D was announced previously to boot from EFI from a USB device while it only booted from a cdrom.

2.02s1 beta 1 changelog:

  • Added new option: “Enable all native disk drivers” so that you can try to load: SATA, PATA and USB hard disks (and their partitions) as native disk drives. This is experimental.
  • Removed no longer needed options: “Enable USB” and “Enable PATA”.
  • “Search floppy” and “Search cdrom” options were moved into “Extra GRUB2 functionality menu”. At the same time “Extra Search functionality” menu was removed.
  • Added new straight-forward option: “Enable GRUB2’s RAID and LVM support”.
  • “List devices/partitions” was renamed to “Print devices/partitions”.
  • “Everything” option was renamed to “Detect and show boot methods”.
  • “Everything +” option was removed to avoid confusions.
  • Other minor improvements in the source code.
  • Updated translation files. Now most translations are pending.
  • Updated INSTALL instructions.

Finally you can check all the detailed changes at our GIT commits.

If you want to translate into your language please check TRANSLATION file at source code to learn how to translate into your language.

Thank you – Hall of fame

I want to thank in alphabetical order:

  • The upstream Grub crew. I’m subscribed to both help-grub and grub-devel and I admire the work you do there.
  • Necrosporus for his insistence on making Super Grub2 Disk smaller.

The person who writes this article is adrian15 .

And I cannot forget about thanking bTactic, the company where I work and that hosts our site.

Some thoughts about Super Grub2 Disk development

Super Grub2 Disk development ideas

I think we won’t improve Super Grub2 Disk too much. We will try to stick to official Grub2 stable releases, unless a new feature that is not included in an official Grub2 stable release is needed in order to give Super Grub2 Disk additional useful functionality.

I have added some scripts to the Super Grub2 Disk build so that writing these pieces of news is more automatic and less prone to errors. Check them out in the git repo, as you will not find them in the 2.02s8 source code.

Old idea: I don’t know when but I plan to readapt some scripts from os-prober. That will let us detect more operating systems. Not sure when though. I mean, it’s not something that worries me because it does not affect too many final users. But, well, it’s something new that I hadn’t thought about.

Again, please send us feedback on what you think is missing from Super Grub2 Disk.

Rescatux development

I want to focus on Rescatux development in the next months so that we have a stable release before the end of 2017. Now I need to finish adding UEFI features (mostly finished), fix the scripts that generate the Rescatux source code (difficult) and write a lot of documentation.

(adrian15 speaking)

Getting help on using Super Grub2 Disk

More information about Super Grub2 Disk


18 March, 2017 09:01PM by adrian15