July 22, 2017

hackergotchi for Xanadu

Xanadu

June 2017 Updates

Today we begin a series of monthly posts meant to show users the changes made to the distribution that will become available via update. This time the list will be a bit long, since it will include all the … Continue reading

22 July, 2017 01:40AM by sinfallas

July 21, 2017

hackergotchi for Ubuntu developers

Ubuntu developers

Dustin Kirkland: Ubuntu 18.04 LTS Desktop Default Application Survey

Back in March, we asked the HackerNews community, “What do you want to see in Ubuntu 17.10?”: https://ubu.one/AskHN

A passionate discussion ensued, the results of which are distilled into this post: http://ubu.one/thankHN

In fact, you can follow that link, http://ubu.one/thankHN, and see our progress so far this cycle.  We already have beta code in 17.10 available for your testing for several of those:

And several others have excellent work in progress, and will be complete by 17.10:

In summary -- your feedback matters!  There are hundreds of engineers and designers working for *you* to continue making Ubuntu amazing!

Along with the switch from Unity to GNOME, we’re also reviewing some of the desktop applications we package and ship in Ubuntu.  We’re looking to crowdsource input on your favorite Linux applications across a broad set of classic desktop functionality.

We invite you to contribute by listing the applications you find most useful in Linux, in order of preference. To help us parse your input, please copy and paste the following bullets with your preferred apps in Linux desktop environments.  You’re welcome to suggest multiple apps; please just list them in priority order (e.g. Web Browser: Firefox, Chrome, Chromium).  If some of your functionality has moved entirely to the web, please note that too (e.g. Email Client: Gmail web, Office Suite: Office 365 web).  If the software isn’t free/open source, please note that (e.g. Music Player: Spotify client, non-free).  If I’ve missed a category, please add it in the same format.  If your favorites aren’t packaged for Ubuntu yet, please let us know, as we’re creating hundreds of new snap packages for Ubuntu desktop applications, and we’re keen to learn which key snaps we’re missing.

  • Web Browser: ???
  • Email Client: ???
  • Terminal: ???
  • IDE: ???
  • File manager: ???
  • Basic Text Editor: ???
  • IRC/Messaging Client: ???
  • PDF Reader: ???
  • Office Suite: ???
  • Calendar: ???
  • Video Player: ???
  • Music Player: ???
  • Photo Viewer: ???
  • Screen recording: ???

In the interest of opening this survey as widely as possible, we’ve cross-posted this thread to HackerNews, Reddit, and Slashdot.  We very much look forward to another friendly, energetic, collaborative discussion.

Or, you can fill out the survey here: https://ubu.one/apps1804

Thank you!
On behalf of @Canonical and @Ubuntu

21 July, 2017 10:05PM by Dustin Kirkland (noreply@blogger.com)

Ubuntu Insights: Ubuntu Server Development Summary – 21 July 2017

The purpose of this communication is to provide a status update and highlights for any interesting subjects from the Ubuntu Server Team. If you would like to reach the server team, you can find us at the #ubuntu-server channel on Freenode. Alternatively, you can sign up and use the Ubuntu Server Team mailing list.

cloud-init

  • cloud-init now supports python 3.6
  • modify Depends such that cloud-init no longer brings ifupdown into an image (LP: #1705639)
  • IPv6 Networking and Gateway fixes (LP: #1694801, #1701097)
  • Other networking fixes (LP: #1695092, #1702513)
  • Numerous CentOS networking commits (LP: #1682014, #1701417, #1686856, #1687725)

Git Ubuntu

  • Added initial linting tool (LP: #1702954)

Bug Work and Triage

IRC Meeting

Ubuntu Server Packages

Below is a summary of uploads to the development and supported releases. Current status of the Debian to Ubuntu merges is tracked on the Merge-o-Matic page. For a full list of recent merges with change logs please see the Ubuntu Server report.

Uploads to the Development Release (Artful)

asterisk, 1:13.14.1~dfsg-2ubuntu2, vorlon
billiard, 3.5.0.2-1, None
cloud-init, 0.7.9-221-g7e41b2a7-0ubuntu3, smoser
cloud-init, 0.7.9-221-g7e41b2a7-0ubuntu2, smoser
cloud-init, 0.7.9-221-g7e41b2a7-0ubuntu1, smoser
cloud-init, 0.7.9-212-g865e941f-0ubuntu1, smoser
cloud-init, 0.7.9-210-ge80517ae-0ubuntu1, smoser
freeradius, 3.0.15+dfsg-1ubuntu1, nacc
libvirt, 3.5.0-1ubuntu2, paelzer
multipath-tools, 0.6.4-5ubuntu1, paelzer
multipath-tools, 0.6.4-3ubuntu6, costamagnagianfranco
multipath-tools, 0.6.4-3ubuntu5, jbicha
nginx, 1.12.1-0ubuntu1, teward
ocfs2-tools, 1.8.5-2, None
openldap, 2.4.44+dfsg-8ubuntu1, costamagnagianfranco
puppet, 4.10.4-2ubuntu1, nacc
python-tornado, 4.5.1-2.1~build1, costamagnagianfranco
samba, 2:4.5.8+dfsg-2ubuntu4, mdeslaur
spice, 0.12.8-2.1ubuntu0.1, mdeslaur
tgt, 1:1.0.71-1ubuntu1, paelzer
Total: 20

Uploads to Supported Releases (Trusty, Xenial, Yakkety, Zesty)

freeipmi, yakkety, 1.4.11-1.1ubuntu4~0.16.10, dannf
golang-1.6, xenial, 1.6.2-0ubuntu5~16.04.3, mwhudson
heimdal, zesty, 7.1.0+dfsg-9ubuntu1.1, sbeattie
heimdal, yakkety, 1.7~git20150920+dfsg-4ubuntu1.16.10.1, sbeattie
heimdal, xenial, 1.7~git20150920+dfsg-4ubuntu1.16.04.1, sbeattie
heimdal, trusty, 1.6~git20131207+dfsg-1ubuntu1.2, sbeattie
heimdal, xenial, 1.7~git20150920+dfsg-4ubuntu1.16.04.1, sbeattie
heimdal, trusty, 1.6~git20131207+dfsg-1ubuntu1.2, sbeattie
heimdal, yakkety, 1.7~git20150920+dfsg-4ubuntu1.16.10.1, sbeattie
heimdal, zesty, 7.1.0+dfsg-9ubuntu1.1, sbeattie
iscsitarget, xenial, 1.4.20.3+svn502-2ubuntu4.4, apw
iscsitarget, xenial, 1.4.20.3+svn502-2ubuntu4.3, smb
iscsitarget, trusty, 1.4.20.3+svn499-0ubuntu2.3, smb
iscsitarget, trusty, 1.4.20.3+svn499-0ubuntu2.3, smb
iscsitarget, xenial, 1.4.20.3+svn502-2ubuntu4.3, smb
maas, xenial, 2.2.0+bzr6054-0ubuntu2~16.04.1, andreserl
maas, yakkety, 2.2.0+bzr6054-0ubuntu2~16.10.1, andreserl
maas, zesty, 2.2.0+bzr6054-0ubuntu2~17.04.1, andreserl
mysql-5.5, trusty, 5.5.57-0ubuntu0.14.04.1, mdeslaur
mysql-5.5, trusty, 5.5.57-0ubuntu0.14.04.1, mdeslaur
mysql-5.7, zesty, 5.7.19-0ubuntu0.17.04.1, mdeslaur
mysql-5.7, xenial, 5.7.19-0ubuntu0.16.04.1, mdeslaur
mysql-5.7, xenial, 5.7.19-0ubuntu0.16.04.1, mdeslaur
mysql-5.7, zesty, 5.7.19-0ubuntu0.17.04.1, mdeslaur
nagios-images, zesty, 0.9.1ubuntu0.1, nacc
ntp, yakkety, 1:4.2.8p8+dfsg-1ubuntu2.2, paelzer
postfix, yakkety, 3.1.0-5ubuntu1, vorlon
samba, zesty, 2:4.5.8+dfsg-0ubuntu0.17.04.4, sbeattie
samba, yakkety, 2:4.4.5+dfsg-2ubuntu5.8, sbeattie
samba, xenial, 2:4.3.11+dfsg-0ubuntu0.16.04.9, sbeattie
samba, trusty, 2:4.3.11+dfsg-0ubuntu0.14.04.10, sbeattie
samba, xenial, 2:4.3.11+dfsg-0ubuntu0.16.04.9, sbeattie
samba, trusty, 2:4.3.11+dfsg-0ubuntu0.14.04.10, sbeattie
samba, yakkety, 2:4.4.5+dfsg-2ubuntu5.8, sbeattie
samba, zesty, 2:4.5.8+dfsg-0ubuntu0.17.04.4, sbeattie
spice, zesty, 0.12.8-2ubuntu1.1, mdeslaur
spice, xenial, 0.12.6-4ubuntu0.3, mdeslaur
spice, trusty, 0.12.4-0nocelt2ubuntu1.5, mdeslaur
spice, xenial, 0.12.6-4ubuntu0.3, mdeslaur
spice, trusty, 0.12.4-0nocelt2ubuntu1.5, mdeslaur
spice, zesty, 0.12.8-2ubuntu1.1, mdeslaur
sssd, xenial, 1.13.4-1ubuntu1.6, slashd
walinuxagent, zesty, 2.2.14-0ubuntu1~17.04.1, sil2100
walinuxagent, yakkety, 2.2.14-0ubuntu1~16.10.1, sil2100
walinuxagent, xenial, 2.2.14-0ubuntu1~16.04.1, sil2100
walinuxagent, trusty, 2.2.14-0ubuntu1~14.04.1, sil2100
xen, zesty, 4.8.0-1ubuntu2.2, mdeslaur
xen, yakkety, 4.7.2-0ubuntu1.3, mdeslaur
xen, xenial, 4.6.5-0ubuntu1.2, mdeslaur
xen, trusty, 4.4.2-0ubuntu0.14.04.12, mdeslaur
xen, xenial, 4.6.5-0ubuntu1.2, mdeslaur
xen, trusty, 4.4.2-0ubuntu0.14.04.12, mdeslaur
xen, yakkety, 4.7.2-0ubuntu1.3, mdeslaur
xen, zesty, 4.8.0-1ubuntu2.2, mdeslaur
Total: 54

Contact the Ubuntu Server team

21 July, 2017 06:42PM

hackergotchi for SolydXK

SolydXK

Portuguese ISOs available for download

Lufilte has released the Portuguese 64-bit versions of SolydX and SolydK.

You can download them from our site: https://solydxk.com/downloads/localized-editions/

Thanks Lufilte, for your great work!

21 July, 2017 02:07PM by Schoelje

hackergotchi for Ubuntu developers

Ubuntu developers


Jorge Castro: TLDRing your way to a Kubernetes Bare Metal cluster

Alex Ellis has an excellent tutorial on how to install Kubernetes in 10 minutes. It is a summarized version of what you can find in the official documentation. Read those first; this is an even shorter version with my choices mixed in.

We’ll install 16.04 on some machines; I’m using three. I chose Weave rather than sending you to a choose-your-own-network page, as you have other stuff to learn before you form an opinion on a networking overlay. We’re also in a lab environment, so we assume things like all machines being on the same network.

Prep the Operating System

First let’s take care of the OS. I set up automatic updates, ensure the latest kernel is installed, and then ensure we’re all up to date, whatever works for you:

sudo -s
dpkg-reconfigure unattended-upgrades
apt install linux-generic-hwe-16.04
apt update
apt dist-upgrade
reboot

Prep each node for Kubernetes:

This just installs Docker and adds the Kubernetes repo; we’ll be root for these steps:

sudo -s
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main  
EOF

apt update
apt install -qy docker.io kubelet kubeadm kubernetes-cni

On the master:

Pick a machine to be a master, then on that one:

kubeadm init

Then follow the directions to copy your config file to your user account. Only a few of the remaining commands need sudo, so you can safely exit the root shell and continue as your user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Let’s install the network, and then allow workloads to be scheduled on the master (for a lab we want to use all our hardware for workloads!):

kubectl apply -f https://git.io/weave-kube-1.6
kubectl taint nodes --all node-role.kubernetes.io/master-

On each worker node:

On each machine you want to be a worker, join the cluster (your token and IP will be different; the output of kubeadm init tells you exactly what to run):

sudo kubeadm join --token 030b75.21ca2b9818ca75ef 192.168.1.202:6443 

You might need to tack on --skip-preflight-checks (see #347); sorry for the inconvenience.

Ensuring your cluster works

It shouldn’t take long for the nodes to come online, just check em out:

$ kubectl get nodes
NAME       STATUS     AGE       VERSION
dahl       Ready      45m       v1.7.1
hyperion   NotReady   16s       v1.7.1
tediore    Ready      32m       v1.7.1

$ kubectl cluster-info
Kubernetes master is running at https://192.168.1.202:6443
KubeDNS is running at https://192.168.1.202:6443/api/v1/namespaces/kube-system/services/kube-dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'
$

OK, your cluster is rocking. Now:

Set up your laptop

I don’t like to be ssh’ed into my cluster unless I’m doing maintenance, so now that we know stuff is working let’s copy the kubernetes config from the master node to our local workstation. You should know how to copy files around systems already, but here’s mine for reference:

 sudo scp /etc/kubernetes/admin.conf jorge@ivory.local:/home/jorge/.kube/config

I don’t need the entire Kubernetes repo on my laptop, so we’ll just install the snap for kubectl and check that I can access the server:

  sudo snap install kubectl --classic
  kubectl get nodes
  kubectl cluster-info

Don’t forget to turn on autocompletion!

Deploy your first application

Let’s deploy the Kubernetes dashboard:

   kubectl create -f https://git.io/kube-dashboard
   kubectl proxy

Then hit up http://localhost:8001/ui.

That’s it, enjoy your new cluster!

Joining the Community

kubeadm is brought to you by SIG Cluster Lifecycle, they have regular meetings that anyone can attend, and you can give feedback on the mailing list. I’ll see you there!

21 July, 2017 11:00AM

Dustin Kirkland: ThankHN: A Thank-You Note to the HackerNews Community, from Ubuntu


A huge THANK YOU to the entire HackerNews community, from the Ubuntu community!  Holy smokes...wow...you are an amazing bunch!  Your feedback in the thread, "Ask HN: What do you want to see in Ubuntu 17.10?" is almost unbelievable!

We're truly humbled by your response.

I penned this thread, somewhat on a whim, from the Terminal 2 lounge at London Heathrow last Friday morning before flying home to Austin, Texas.  I clicked "submit", closed my laptop, and boarded an 11-hour flight, wondering if I'd be apologizing to my boss and colleagues later in the day, for such a cowboy approach to Product Management...

When I finally signed onto the in-flight WiFi some 2 hours later, I saw this post at the coveted top position of HackerNews page 1, with a whopping 181 comments (1.5 comments per minute) in the first two hours.  Impressively, it was only 6am on the US west coast by that point, so SFO/PDX/SEA weren't even awake yet.  I was blown away!

This thread is now among the most discussed threads in the history of HackerNews, with some 1115 comments and counting at the time of this blog post.

2530 comments 3125 points 2016-06-24 UK votes to leave EU dmmalam
2215 comments 1817 points 2016-11-09 Donald Trump is the president-elect of the U.S. introvertmac
1448 comments 1330 points 2016-05-31 Moving Forward on Basic Income dwaxe
1322 comments 1280 points 2016-10-18 Shame on Y Combinator MattBearman
1215 comments 1905 points 2015-06-26 Same-Sex Marriage Is a Right, Supreme Court Rules imd23
1214 comments 1630 points 2016-12-05 Tell HN: Political Detox Week – No politics on HN for one week dang
1121 comments 1876 points 2016-01-27 Request For Research: Basic Income mattkrisiloff
*1115 comments 1333 points 2017-03-31 Ask HN: What do you want to see in Ubuntu 17.10? dustinkirkland
1090 comments 1493 points 2016-10-20 All Tesla Cars Being Produced Now Have Full Self-Driving Hardware impish19
1088 comments 2699 points 2017-03-07 CIA malware and hacking tools randomname2
1058 comments 1188 points 2014-03-16 Julie Ann Horvath Describes Sexism and Intimidation Behind Her GitHub Exit dkasper
1055 comments 2589 points 2017-02-28 Ask HN: Is S3 down? iamdeedubs
1046 comments 2123 points 2016-09-27 Making Humans a Multiplanetary Species [video] tilt
1030 comments 1558 points 2017-01-31 Welcome, ACLU katm
1013 comments 4107 points 2017-02-19 Reflecting on one very, very strange year at Uber grey-area
1008 comments 1990 points 2014-04-10 Drop Dropbox PhilipA

Rest assured that I have read every single one, and many of my colleagues have followed along closely as well.

In fact, to read and process this thread, I first attempted to print it out -- but cancelled the job before it fully buffered, when I realized that it's 105 pages long!  Here's the PDF (1.6MB), if you're curious, or want to page through it on your e-reader.

So instead, I wrote the following Python script, using the HackerNews REST API, to download the thread from Google Firebase into a JSON document, and import into MongoDB, for item-by-item processing.  Actually, this script will work against any HackerNews thread, and it recursively grabs nested comments.  Next time you're asked to write a recursive function on a whiteboard for a Google interview, hopefully you'll remember this code!  :-)

$ cat ~/bin/hackernews.py
#!/usr/bin/python3

import json
import requests
import sys

#https://hacker-news.firebaseio.com/v0/item/14002821.json?print=pretty

def get_json_from_url(item):
    url = "https://hacker-news.firebaseio.com/v0/item/%s.json" % item
    data = json.loads(requests.get(url=url).text)
    #print(json.dumps(data, indent=4, sort_keys=True))
    if "kids" in data and len(data["kids"]) > 0:
        for k in data["kids"]:
            data[k] = json.loads(get_json_from_url(k))
    return json.dumps(data)


data = json.loads(get_json_from_url(sys.argv[1]))
print(json.dumps(data, indent=4, sort_keys=False))

It takes 5+ minutes to run, so you can just download a snapshot of the JSON blob from here (768KB), or if you prefer to run it yourself...

$ hackernews.py 14002821 | tee 14002821.json
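The long runtime comes from fetching each item serially, one HTTP round trip at a time. As a rough sketch (my own variation, not part of the original script), the same recursive download can be parallelized with a thread pool; `fetch_item` here stands in for the Firebase GET and can be stubbed out for offline testing:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_item(item_id):
    # One Firebase GET per item -- the same endpoint the script above uses.
    # requests is imported lazily so the tree logic can be tested offline.
    import requests
    url = "https://hacker-news.firebaseio.com/v0/item/%s.json" % item_id
    return requests.get(url).json()

def get_tree(item_id, fetch=fetch_item, workers=16):
    """Fetch an item and all of its nested replies, with the direct
    children fetched in parallel.  Replies land under their integer ids,
    matching the layout the script above produces."""
    data = fetch(item_id)
    kids = data.get("kids") or []
    if kids:
        with ThreadPoolExecutor(max_workers=workers) as pool:
            subtrees = pool.map(lambda i: get_tree(i, fetch, workers), kids)
            for k, subtree in zip(kids, subtrees):
                data[k] = subtree
    return data
```

Each nesting level gets its own small pool, which is fine for a sketch like this; for a very deep thread you'd want a shared work queue instead.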

First some raw statistics...

  • 1109 total comments
  • 713 unique users contributed a comment
  • 211 users contributed more than 1 comment
    • 42 comments/replies contributed by dustinkirkland (that's me)
    • 12 by vetinari
    • 11 by JdeBP
    • 9 by simosx and jnw2
  • 438 top level comments
    • 671 nested/replies
  • 415 properly formatted uses of "Headline:"
    • Thank you!  That was super useful in my processing of these!
  • 519 mentions of Desktop
  • 174 mentions of Server
  • 69 + 64 mentions of Snaps and Core
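Those counts fall out of a simple recursive walk over the nested JSON. Here's a minimal sketch of the tallying, assuming the child-keyed layout the download script produces (note the integer ids become digit strings once the blob is re-read from disk):

```python
def count_comments(item):
    """Count every reply nested anywhere under an item."""
    total = 0
    for key, value in item.items():
        # Replies sit under their numeric ids alongside the item's own
        # fields ("by", "text", "kids", ...), so numeric keys mark children.
        if str(key).isdigit() and isinstance(value, dict):
            total += 1 + count_comments(value)
    return total

def count_top_level(story):
    """Top-level comments are just the story item's direct children."""
    return sum(1 for key, value in story.items()
               if str(key).isdigit() and isinstance(value, dict))
```

Run against the 14002821.json blob, `count_comments` on the story item should reproduce the total above, and subtracting `count_top_level` gives the nested-reply count.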
I'll try to summarize a few of my key interpretations of the trends, having now processed the entire discussion.  Sincere apologies in advance if I've (a) misinterpreted a theme, (b) skipped your favorite theme, or (c) conflated concepts.  If any of these are the case, please post your feedback in the HackerNews thread associated with this post :-)

First, grouped below are some of the Desktop themes, with some fuzzy, approximate "weighting" by the number of pertinent discussions/mentions/vehemence.
  • Drop Mir/Unity for Wayland/Gnome (351 weight) [Beta available, 17.10]
    • Release/GA Unity 8 (15 weight)
    • Easily, the most heavily requested, major change in this thread was for Ubuntu to drop Mir/Unity in favor of Wayland/Gnome.  And that's exactly what Mark Shuttleworth announced in an Ubuntu Insights post here today.  There were a healthy handful of Unity 8 fans calling for its GA, and more than a few HackerNews comments lamenting the end of Unity in this thread.
  • Improve HiDPI, 4K, display scaling, multi-monitor (217 weight) [Beta available, 17.10]
    • For the first time in a long time, I feel like a laggard in the technology space!  I own a dozen or so digital displays but not a single 4K or HiDPI monitor.  So while I can't yet directly relate, the HackerNews community is keen to see better support for multiple, high resolution monitors and world class display scaling.  And I suspect you're just a short year or so ahead of much of the rest of the world.
  • Make track pad, touch gestures great (129 weight) [Beta available, 17.10]
    • There's certainly an opportunity to improve the track pad and touch gestures in the Ubuntu Desktop "more Apple-like".
  • Improve Bluetooth, WiFi, Wireless, Network Manager (97 weight) [Beta available, 17.10]
    • This item captures some broad, general requests to make Bluetooth and Wireless more reliable in Ubuntu.  It's a little tough to capture an exact work item, but the relevant teams at Canonical have received the feedback.
  • Better mouse settings, more options, scroll acceleration (89 weight) [Beta available, 17.10]
    • Similar to the touch/track pad request, there was a collection of similar feedback suggesting better mouse settings out-of-the-box, and more fine grained options. 
  • Better NVIDIA, GPU support (87 weight) [In-progress, 17.10]
    • NVIDIA GPUs are extensively used in both Ubuntu Desktops and Servers, and the feedback here was largely around better driver availability, more reliable upgrades, and CUDA package access.  For my part, I'm personally engaged with the high end GPU team at NVIDIA and we're actively working on a couple of initiatives to improve GPU support in Ubuntu (both Desktop and Server).
  • Clean up Network Manager, easier VPN (71 weight) [Beta available, 17.10]
    • There were several requests around both Network Manager, and a couple of excellent suggestions with respect to easier VPN configuration and connection.  Given the recent legislation in the USA, I for one am fully supportive of helping Ubuntu users do more than ever before to protect their security and privacy, and that may entail better VPN support.
  • Easily customize, relocate the Unity launcher (53 weight) [Deprecated, 17.10]
    • This thread made it abundantly clear that it's important to people to be able to move, hide, resize, and customize their launcher (Unity or Gnome).  I can certainly relate, as I personally prefer my launcher at the bottom of the screen.
  • Add night mode, redshift, f.lux (42 weight)  [Beta available, 17.10]
    • This request is one of the real gems of this whole exercise!  This seems like a nice, little, bite-sized feature that we may be able to include with minimal additional effort.  Great find.
  • Make WINE and Windows apps work better (10 weight)
    • If Microsoft can make Ubuntu on Windows work so well, why can't Canonical make Windows on Ubuntu work?  :-)  If it were only so easy...  For starters, the Windows Subsystem for Linux "simply" needs to implement a bunch of Linux syscalls, whose source is entirely available.  So there's that :-)  Anyway, this one is really going to be a tough one for us to move the needle on...
  • Better accessibility for disabled users, children (9 weight)
    • As a parent, and as a friend of many Ubuntu users with special needs, this is definitely a worthy cause.  We'll continue to try and push the envelope on accessibility in the Linux desktop.
  • LDAP/ActiveDirectory integration out of the box (7 weight)
    • This is actually a regular request of Canonical's corporate Ubuntu Desktop customers.  We're generally able to meet the needs of our enterprise customers around LDAP and ActiveDirectory authentication.  We'll look at what else we can do natively in the distro to improve this.
  • Add support for voice commands (5 weight)
    • Excellent suggestion.  We've grown so accustomed to "Okay Google...", "Alexa...", "Siri..."  How long until we can, "Hey you, Ubuntu..."  :-)
Grouped below are some themes, requests, and suggestions that generally apply to Ubuntu as an OS, or specifically as a cloud or server OS.
  • Better, easier, safer, faster, rolling upgrades (153 weight)
    • The ability to upgrade from one release of Ubuntu to the next has long been one of our most important features.  A variety of requests have identified a few ways that we should endeavor to improve: snapshots and rollbacks, A/B image based updates, delta diffs, easier with fewer questions, super safe rolling updates to new releases.  Several readers suggested killing off the non-LTS releases of Ubuntu and only releasing once a year, or every 2 years (which is the LTS status quo).  We're working on a number of these, with much of that effort focused on Ubuntu Core.  You'll see some major advances around this by Ubuntu 18.04 LTS.
  • Official hardware that just-works, Nexus-of-Ubuntu (130 weight)
    • This is perhaps my personal favorite suggestion of this entire thread -- for us to declare a "Nexus-of-each-Ubuntu-release", much like Google does for each major Android release.  Hypothetically, this would be an easily accessible, available, affordable hardware platform, perhaps designed in conjunction with an OEM, to work perfectly with Ubuntu out of the box.  That's a new concept.  We do have the Ubuntu Hardware Certification Programme, where we clearly list all hardware that's tested and known to work well with Ubuntu.  And we do work with major manufacturers on some fantastic desktops and laptops -- the Dell XPS and System76 both immediately come to mind.  But this suggestion is a step beyond that.  I'm set to speak to a few trusted partners about this idea in the coming weeks.
  • Lighter, smaller, more minimal (113 weight) [Beta Available, 17.10]
    • Add x-y-z-favorite-package to default install (105 weight)
    • For every Ubuntu user that wants to remove stuff from Ubuntu, to make it smaller/faster/lighter/secure, I'll show you another user who wants to add something else to the default install :-)  This is a tricky one, and one that I'm always keen to keep an eye on.  We try very hard to strike a delicate balance between minimal-but-usable.  When we have to err, we tend (usually, but not always) to err on the side of usability.  That's just the Ubuntu way.  That said, we're always evaluating our Ubuntu Server, Cloud, Container, and Docker images to ensure that we minimize (or at least justify) any bloat.  We'll certainly take another hard look at the default package sets at both 17.10 and 18.04.  Thanks for bringing this up and we'll absolutely keep it in mind!
  • More QA, testing, stability, general polish (99 weight) [In-progress, 17.10]
    • The word "polish" is used a total of 24 times, with readers generally asking for more QA, more testing, more stability, and more "polish" to the Ubuntu experience.  This is a tough one to quantify.  That said, we have a strong commitment to quality, and CI/CD (continuous integration, continuous development) testing at Canonical.  As your Product Manager, I'll do my part to ensure that we invest more resources into Ubuntu quality.
  • Fix /boot space, clean up old kernels (92 weight) [In-progress, 17.10]
    • Ouch.  This is such an ugly, nasty problem.  It personally pissed me off so much, in 2010, that I created a script, "purge-old-kernels".  And it personally pissed me off again so much in 2014, that I jammed it into the Byobu package (which I also authored and maintain), for the sole reason to get it into Ubuntu.  That being said, that's the wrong approach.  I've spoken with Leann Ogasawara, the amazing manager and team lead for the Ubuntu kernel team, and she's committed to getting this problem solved once and for all in Ubuntu 17.10 -- and ideally getting those fixes backported to older releases of Ubuntu.
  • ZFS supported as a root filesystem (84 weight)
    • This was one of the more surprising requests I found here, and another real gem.  I know that we have quite a few ZFS fans in the Ubuntu community (of which, I'm certainly one) -- but I had no idea so many people want to see ZFS as a root filesystem option.  It makes sense to me -- integrity checking, compression, copy-on-write snapshots, clones.  In fact, we have some skunkworks engineering investigating the possibility.  Stay tuned...
  • Improve power management, battery usage (73 weight)
    • Longer batteries for laptops, lower energy bills for servers -- an important request.  We'll need to work closer with our hardware OEM/ODM partners to ensure that we're leveraging their latest and greatest energy conservation features, and work with upstream to ensure those changes are integrated into the Linux kernel and Gnome.
  • Security hardening, grsecurity (72 weight)
    • More security!  There were several requests for "extra security hardening" as an option, and the grsecurity kernel patch set.  The grsecurity Linux kernel is a heavily modified, patched Linux kernel that adds a ton of additional security checks and features at the lowest level of the OS.  But the patch set is huge -- and it's not upstream in the Linux kernel.  It also only applies against the last LTS release of Ubuntu.  It would be difficult, though not necessarily impossible, to offer grsecurity support in the Ubuntu archive.  As for "extra security hardening", Canonical is working with IBM on a number of security certification initiatives, around FIPS, CIS Benchmarks, and DISA STIG documentation.  You'll see these becoming available throughout 2017.
  • Dump Systemd (69 weight)
    • Fun.  All the people fighting for Wayland/Gnome, and here's a vocal minority pitching a variety of other init systems besides Systemd :-)  So frankly, there's not much we can do about this one at this point.  We created, and developed, and maintained Upstart over the course of a decade -- but for various reasons, Red Hat, SUSE, Debian, and most of the rest of the Linux community chose Systemd.  We fought the good fight, but ultimately, we lost graciously, and migrated Ubuntu to Systemd.
  • Root disk encryption, ext4 encryption, more crypto (47 weight) [In-progress, 17.10]
    • The very first feature of Ubuntu, that I created when I started working for Canonical in 2008, was the Home Directory Encryption feature introduced in late 2008, so yes -- this feature has been near and dear to my heart!  But as one of the co-authors and co-maintainers of eCryptfs, we're putting our support behind EXT4 Encryption for the future of per-file encryption in Ubuntu.  Our good friends at Google (hi Mike, Ted, and co!) have created something super modern, efficient, and secure with EXT4 Encryption, and we hope to get there in Ubuntu over the next two releases.  Root disk encryption is still important, even more now than ever before, and I do hope we can do a bit better to make root disk encryption easier to enable in the Desktop installer.
  • Fix suspend/resume (24 weight)
    • These were a somewhat general set of bugs or issues around suspend/resume not working as well as it should.  If these are a closely grouped set of corner cases (e.g. multiple displays, particular hardware), then we should be able to shake these out with better QA, bug triage, and upstream fixes.  That said, I remember when suspend/resume never worked at all in Linux, so pardon me while I'm a little nostalgic about how far we've come :-)  Okay...now, yes, you're right.  We should do better.
  • New server installer (19 weight) [Beta available, 17.10]
    • Well aren't you in for a surprise :-)  There's a new server installer coming soon!  Stay tuned.
  • Improve swap space management (12 weight)
    • Another pet peeve of mine -- I feel you!  So I filed this blueprint in 2009, and I'm delighted to say that as of this month (8 years later), Ubuntu 17.04 (Zesty Zapus) will use swap files, rather than swap partitions, by default.  Now, there's a bit more to do -- we should make these a bit more dynamic, tune the swappiness sysctl, etc.  But this is a huge step in the right direction!
  • Reproducible builds (7 weight)
    • Ensuring that builds are reproducible is essential for the security and the integrity of our distribution.  We've been working with Debian upstream on this over the last few years, and will continue to do so.
Ladies and gentlemen, again, a most sincere "thank you", from the Ubuntu community to the HackerNews community.  We value openness -- open source code, open design, open feedback -- and this last week has been a real celebration of that openness for us.  We appreciate the work and effort you put into your comments, and we hope to continue our dialog throughout our future together, and most importantly, that Ubuntu continues to serve your needs and occasionally even exceed your expectations ;-)

Cheers,
:-Dustin

21 July, 2017 10:56AM by Dustin Kirkland (noreply@blogger.com)

Jono Bacon: Clarification: Snappy and Flatpak

Recently, I posted a piece about distributions consolidating around a consistent app store. In it I mentioned Flatpak as a potential component and some people wondered why I didn’t recommend Snappy, particularly due to my Canonical heritage.

To be clear (and to clear up my inarticulate phrasing): I am a fan of both Snappy and Flatpak: they are both important technologies solving important problems and they are both driven by great teams. To be frank, my main interest and focus in my post was the notion of a consolidated app store platform as opposed to what the specific individual components would be (other people can make a better judgement call on that). Thus, please don’t read my single-line mention of Flatpak as any criticism of Snappy. I realize that this may have been misconstrued as me suggesting that Snappy is somehow not up to the job, which was absolutely not my intent.

Part of the reason I mentioned Flatpak is that I feel there is a natural center of gravity forming around the GNOME Shell and platform, which many distros are shipping. Within the context of that platform I have seen Flatpak commonly mentioned as a component, hence why I mentioned it. Of course, there is no reason why Snappy couldn’t be that component too, and the Snappy team have been doing great work. I was also under the impression (entirely incorrectly) that Snappy is focusing more on the cloud/server market. It has become clear that the desktop is very much within the focus and domain of Snappy, and I apologize for the confusion.

So, to clear up any potential confusion (I can be an inarticulate clod at times), I am a big fan of Snappy, big fan of Flatpak, and an even bigger fan of a consolidated app store that multiple distros use. My view is simple: competition is healthy, and we have two great projects and teams vying to make app installation and management on Linux easier. Viva la desktop!

The post Clarification: Snappy and Flatpak appeared first on Jono Bacon.

21 July, 2017 12:26AM

July 20, 2017

The Fridge: Ubuntu 16.10 (Yakkety Yak) End of Life reached on July 20 2017

This is a follow-up to the End of Life warning sent earlier this month to confirm that as of today (July 20, 2017), Ubuntu 16.10 is no longer supported. No more package updates will be accepted to 16.10, and it will be archived to old-releases.ubuntu.com in the coming weeks.

The original End of Life warning follows, with upgrade instructions:

Ubuntu announced its 16.10 (Yakkety Yak) release almost 9 months ago, on October 13, 2016. As a non-LTS release, 16.10 has a 9-month support cycle and, as such, the support period is now nearing its end and Ubuntu 16.10 will reach end of life on Thursday, July 20th.

At that time, Ubuntu Security Notices will no longer include information or updated packages for Ubuntu 16.10.

The supported upgrade path from Ubuntu 16.10 is via Ubuntu 17.04. Instructions and caveats for the upgrade may be found at:

https://help.ubuntu.com/community/ZestyUpgrades

Ubuntu 17.04 continues to be actively supported with security updates and select high-impact bug fixes. Announcements of security updates for Ubuntu releases are sent to the ubuntu-security-announce mailing list, information about which may be found at:

https://lists.ubuntu.com/mailman/listinfo/ubuntu-security-announce

Since its launch in October 2004 Ubuntu has become one of the most highly regarded Linux distributions with millions of users in homes, schools, businesses and governments around the world. Ubuntu is Open Source software, costs nothing to download, and users are free to customise or alter their software in order to meet their needs.

Originally posted to the ubuntu-announce mailing list on Thu Jul 20 23:23:31 UTC 2017 by Adam Conrad, on behalf of the Ubuntu Release Team

20 July, 2017 11:35PM

Ubuntu Insights: Testing the future of Juju with snaps

Juju 2.3 is under heavy development, and one thing we all want when we're working on the next big release of our software product is to get feedback from users. Are you solving the problems your user has? Are there bugs in the corner cases that a user can find before the release? Are the performance improvements you made working for everyone like you expect? The more folks that test the software before it's out, the better off your software will be!

With the recent calls for testing out the Cross Model Relations and Storage Improvements coming in Juju 2.3, I think it'd be good to point out how we can leverage the power of channels in snaps to test out the upcoming features in Juju.

To get Juju via snaps, you can search the snap store and install it like so:

$ snap find juju
$ sudo snap install --classic juju

This then drops the Juju binary in the /snap/bin directory.

$ /snap/bin/juju --version
2.2.2-zesty-amd64

That's great that we've got the latest stable version of Juju. Let's see what other versions we can get access to.

Let's try to use the new storage flag on the deploy command that Andrew points out in his blog post.

$ /snap/bin/juju deploy --attach-storage
ERROR flag provided but not defined: --attach-storage

Bummer! That isn't in the stable release of Juju yet. Note that it calls out the flag as not being defined. Let's see if we can get access to a more bleeding edge Juju.

$ snap info juju
name:      juju
summary:   "juju client"
publisher: canonical
contact:   http://jujucharms.com
description: |
  Through the use of charms, juju provides you with shareable, re-usable, and
  repeatable expressions of devops best practices.
commands:
  - juju
tracking:    stable
installed:   2.2.2 (2142) 25MB classic
refreshed:   2017-07-13 16:20:52 -0400 EDT
channels:                                      
  stable:    2.2.2                      (2142) 25MB classic
  candidate: 2.2.2                      (2142) 25MB classic
  beta:      2.2.3+2.2-9909aa4          (2180) 43MB classic
  edge:      2.3-alpha1+develop-1f3f66e (2187) 43MB classic

There we can see that the edge channel has an upcoming 2.3-alpha release. Let's switch to it and test out what's coming in Juju 2.3.

$ sudo snap refresh --edge juju
juju (edge) 2.3-alpha1+develop-1f3f66e from 'canonical' refreshed

$ /snap/bin/juju --version
2.3-alpha1-zesty-amd64

Now let's check out that command Andrew was talking about with the storage feature in Juju 2.3.

$ /snap/bin/juju deploy --attach-storage
ERROR flag needs an argument: --attach-storage

There we go, now we've got access to the upcoming storage features in Juju 2.3 and we can provide great feedback to the dev team.

After we're done testing and providing that feedback we can easily switch back to using the stable release for our normal work.

$ sudo snap refresh --stable juju
juju 2.2.2 from 'canonical' refreshed

Give it a try, check out the latest in the upcoming 2.3 work and file bugs, send feedback, and be ready to leverage the great work that much sooner.

20 July, 2017 04:37PM

Ubuntu Insights: Webinar: Speed up your software development lifecycle with Kubernetes

Live webinar

2nd August, 4pm UTC | Your timezone

For a complete cloud-native application lifecycle, Kubernetes needs some tools to “close the loop”. The promise of this new way of doing things is that you’ll speed up your software delivery with microservices and devops teams. But how should you really do that? Join this webinar to find out!

4 things you’ll learn

  • Why you should try the Canonical Distribution of Kubernetes container solution and how to install it.
  • How to deploy, troubleshoot and manage your application using Weave Cloud Prometheus monitoring.
  • What is Prometheus monitoring, how it works, how to instrument your application using it and how you can get the most out of it.
  • How to speed up your software development lifecycle – ship features faster, fix problems faster.

This webinar will include a live demo and Q&A. It will also become available to watch on-demand after the live broadcast.

Presenters

  • Marco Ceppi, Product Strategy at Canonical
  • Luke Marsden, Head of Developer Experience at Weaveworks

Register now

20 July, 2017 03:58PM

Ubuntu Insights: Run Django applications on the Canonical Distribution of Kubernetes

Introduction

Canonical’s IS department is responsible for running most of the company’s internal and external services. This includes infrastructure like Landscape and Launchpad, as well as third party software such as internal and external wikis, WordPress blogs and Django sites.

One of the most important functions of our department is to help improve our products by using them in production, providing feedback and submitting patches. We are also constantly looking for ways to help our development teams to run their software in easier and more efficient ways. Kubernetes offers self-healing, easy rollouts/rollbacks and scale-out/scale-back, so we decided to take the Canonical Distribution of Kubernetes for a spin.

Deploying the Canonical Distribution of Kubernetes

Inside Canonical, all new services are deployed with Juju and Mojo on top of our OpenStack clouds. The Canonical Distribution of Kubernetes is distributed as a Juju bundle, which made it a great starting point. We turned the bundle into a Mojo spec and added additional charms like the canonical-livepatch subordinate for live kernel updates and nrpe support for monitoring. Once the spec was ready we deployed it into a clean Juju 2 model with a single command:

mojo run

To make sure we can recover quickly from disasters we also set up a Jenkins job that periodically deploys the entire Kubernetes stack in a Continuous Integration environment. This gives us confidence that the spec stays in a deployable state all of the time. Once we had Kubernetes up and running it was time to pick an application to migrate. We wanted to start with an application that we use every day so that we would discover problems quickly. We also wanted to exercise multiple Kubernetes concepts, for example CronJob resources and ExternalName services. Our choice was to start with an internal Django site which we use to help manage tickets from customers.

Application migration

The application we picked is fairly standard Django code with an Apache frontend. For this first pass we decided not to migrate the database, allowing us to avoid the need for stateful components in Kubernetes. The first step was to turn the Django application into a Docker container. To make this possible we had to update settings.py to support Kubernetes secrets. For simplicity, we chose to use environment variables and created stanzas similar to the following:

	DATABASES = {
	    "default": {
	        "ENGINE": "django.db.backends.postgresql_psycopg2",
	        "NAME": os.environ['DB_NAME'],
	        "USER": os.environ['DB_USERNAME'],
	        "PASSWORD": os.environ['DB_PASSWORD'],
	        "HOST": "db",
	        "PORT": int(os.environ['DB_PORT']),
	    }
	}
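For illustration, here is one way (not shown in the original post) that those DB_* variables could be injected into the container from a Kubernetes Secret; the secret name `app-db-credentials` and the key names are hypothetical examples:

```yaml
# Hypothetical container spec fragment; the secret name "app-db-credentials"
# is illustrative, not taken from the actual deployment.
env:
  - name: DB_NAME
    valueFrom:
      secretKeyRef:
        name: app-db-credentials
        key: DB_NAME
  - name: DB_USERNAME
    valueFrom:
      secretKeyRef:
        name: app-db-credentials
        key: DB_USERNAME
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: app-db-credentials
        key: DB_PASSWORD
  - name: DB_PORT
    value: "5432"
```

With this approach the credentials never need to be baked into the image, and the same image can be pointed at different databases per environment.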

Next we needed to find a way for the container to talk to the external database – this was easily achieved using an “ExternalName” service:

	kind: Service
	apiVersion: v1
	metadata:
	  name: database
	  namespace: default
	spec:
	  type: ExternalName
	  externalName: external-db.example.com

Then we simply copied our uwsgi configuration and ran the application like this:

CMD ["/usr/bin/uwsgi-core", "--emperor", "/path/to/config/"]
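To sketch how that CMD fits together, a minimal Dockerfile along these lines might look like the following; the base image, package list and copy paths are assumptions for illustration, not taken from the post:

```dockerfile
# Hypothetical sketch only: base image, packages and paths are illustrative.
FROM ubuntu:16.04
RUN apt-get update && \
    apt-get install -y uwsgi-core uwsgi-plugin-python3 python3-django && \
    rm -rf /var/lib/apt/lists/*
# Application code plus the uwsgi "emperor" vassal configuration
COPY . /srv/app
CMD ["/usr/bin/uwsgi-core", "--emperor", "/path/to/config/"]
```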

The next step was to create a Dockerfile for the Apache frontend. This one was slightly trickier because we wanted to use a single image for the development, staging and production deployments; however, there are small configuration differences between each one. The Kubernetes documentation suggested that ConfigMaps are normally the best way to solve such problems, and sure enough it worked! We added a new “volume” to each of the deployments:

	volumes:
	  - name: config-volume
	    configMap:
	      name: config-dev

Which we then mounted inside the container:

	volumeMounts:
	  - name: config-volume
	    mountPath: /etc/apache2/conf-k8s

And finally we included this in the main apache configuration:

Include /etc/apache2/conf-k8s/*.conf

The ConfigMap contains ACLs and also ProxyPass rules appropriate for the deployment. For example, in development we point at the “app-dev” backend like this:

ProxyPass / uwsgi://app-dev:32001/ 
ProxyPassReverse / uwsgi://app-dev:32001/ 
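Putting the pieces together, the ConfigMap behind the `config-dev` volume above could be defined roughly as follows; the `proxy.conf` key name is our invention (any `*.conf` key would be picked up by the Include directive):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-dev
  namespace: default
data:
  # Mounted as /etc/apache2/conf-k8s/proxy.conf via the volume above
  proxy.conf: |
    ProxyPass / uwsgi://app-dev:32001/
    ProxyPassReverse / uwsgi://app-dev:32001/
```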

With all of the above changes completed, we had the development environment running successfully on Kubernetes!

Further improvements

Of course we wanted to make code updates easier, so we did not stop there. We decided to use Jenkins for Continuous Integration and created two jobs. The first one takes a branch as an argument, runs tests and if they are successful, builds a Docker image and deploys it to our development environment. This allows us to quickly verify changes in an environment that’s set up in exactly the same way as production. Once a developer is happy with changes they submit a merge proposal in Launchpad, which gets reviewed and merged as normal. This is where the second Jenkins job comes in – it starts automatically on trunk change, runs tests, builds a Docker image and deploys it to our staging environment. Due to the nature of the application, we still want a final human sign off before we push to production, but once that’s done it’s a quick push-button process to go live.

What’s next?

We are investigating Kubernetes liveness probes to see if they can improve the Django container’s failure detection. We also want to get more familiar with Kubernetes concepts and operations because we would like to offer it to the internal development teams and migrate more applications run by our department to Kubernetes.

Was it worth it?

Absolutely! Some of the biggest wins we captured:

  • We provided lots of feedback to the Kubernetes charm developers which helped them make improvements
  • We submitted multiple charm patches, including juju `actions`, sharing our operational experience with the community
  • We are in a very good position to start offering Kubernetes to Canonical internal development teams
  • We gained experience in migrating applications to Kubernetes, which we will use as we move more services to Kubernetes

20 July, 2017 02:06PM

Ubuntu Podcast from the UK LoCo: S10E20 – Wry Mindless Ice - Ubuntu Podcast

We discuss tormenting Mycroft, review the Dell Precision 5520, give you some USB resetting command line lurve and go over your feedback.

It’s Season Ten Episode Twenty of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

20 July, 2017 02:00PM

The Fridge: Ubuntu Weekly Newsletter Issue 513

Welcome to the Ubuntu Weekly Newsletter. This is issue #513 for the weeks of July 3 – 17, 2017, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Simon Quigley
  • Chris Guiver
  • Athul Muralidhar
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

20 July, 2017 01:38PM

Jonathan Riddell: Akademy 2017

Time to fly off to the sun to meet KDE friends old and new and plan out the next year of freedom fighting. See you in Almería!

 


20 July, 2017 11:11AM

hackergotchi for VyOS

VyOS

VyOS 1.2.0 repository re-structuring

In preparation for the new 1.2.0 (jessie-based) beta release, we are re-populating the package repositories. The old repositories are now archived; you can still find them in the /legacy/repos directory on dev.packages.vyos.net

The purpose of this is two-fold. First, the old repo got quite messy, and Debian people (rightfully!) keep reminding us about it, but it would be difficult to do a gradual cleanup. Second, since the CI server has moved, and so did the build hosts, we need to test how well the new procedures are working. And, additionally, it should tell us if we are prepared to restore VyOS from its source should anything happen to the packages.vyos.net server or its contents.

For perhaps a couple of days, there will be no new nightly builds, and you will not be able to build ISOs yourself, unless you change the repo path in ./configure options by hand. Stay tuned.

20 July, 2017 07:43AM by Daniil Baturin

LiMux

The city’s EoGov as a best practice at the Anwenderforum

Under the patronage of the Bavarian State Parliament, the 9th Bavarian Anwenderforum took place on 28 and 29 June 2017. In the magnificent Maximilianeum, participants from public administration and industry discussed questions of IT deployment as well as current developments. … Read more

The post The city’s EoGov as a best practice at the Anwenderforum appeared first on the Münchner IT-Blog.

20 July, 2017 05:28AM by Stefan Döring

hackergotchi for Ubuntu developers

Ubuntu developers

Benjamin Mako Hill: Testing Our Theories About “Eternal September”

Graph of subscribers and moderators over time in /r/NoSleep. The image is taken from our 2016 CHI paper.

Last year at CHI 2016, my research group published a qualitative study examining the effects of a large influx of newcomers to the /r/nosleep online community in Reddit. Our study began with the observation that most research on sustained waves of newcomers focuses on the destructive effect of newcomers and frequently invokes Usenet’s infamous “Eternal September.” Our qualitative study argued that the /r/nosleep community managed its surge of newcomers gracefully through strategic preparation by moderators, technological systems to rein in norm violations, and a shared sense of protecting the community’s immersive environment among participants.

We are thrilled that, less than a year after the publication of our study, Zhiyuan “Jerry” Lin and a group of researchers at Stanford have published a quantitative test of our study’s findings! Lin analyzed 45 million comments and upvote patterns from 10 Reddit communities that experienced a massive inundation of newcomers like the one we studied on /r/nosleep. Lin’s group found that these communities retained their quality despite a slight dip during the initial growth period.

Our team discussed doing a quantitative study like Lin’s at some length and our paper ends with a lament that our findings merely reflected, “propositions for testing in future work.” Lin’s study provides exactly such a test! Lin et al.’s results suggest that our qualitative findings generalize and that sustained influx of newcomers need not doom a community to a descent into an “Eternal September.” Through strong moderation and the use of a voting system, the subreddits analyzed by Lin appear to retain their identities despite the surge of new users.

There are always limits to research projects, quantitative and qualitative alike. We think Lin’s paper complements ours beautifully; we are excited that Lin built on our work, and we’re thrilled that our propositions seem to have held up!

This blog post was written with Charlie Kiene. Our paper about /r/nosleep, written with Charlie Kiene and Andrés Monroy-Hernández, was published in the Proceedings of CHI 2016 and is released as open access. Lin’s paper was published in the Proceedings of ICWSM 2017 and is also available online.

20 July, 2017 12:12AM

July 19, 2017

Brian Murray: Clarification and changes to release upgrades

I’ve recently made some changes to how do-release-upgrade, called by update-manager when you choose to upgrade releases, behaves and thought it’d be a good time to clarify how things work and the changes made.

When do-release-upgrade is called it reads a meta-release file from changelogs.ubuntu.com to determine what releases are supported and to which release to upgrade. The exact meta-release file used changes depending on what arguments, --proposed or --devel-release, are passed to do-release-upgrade. The meta-release file is used to determine which tarball to download and use to actually perform the upgrade. So if you are upgrading from Ubuntu 17.04 to Artful then you are actually using the ubuntu-release-upgrader code from Artful.

One change implemented some time ago was support for the release upgrade process to skip unsupported releases if you are running a supported release. For example, when Ubuntu 16.10 (Yakkety Yak) becomes end of life and you upgrade from Ubuntu 16.04 (Xenial Xerus) with “Prompt=normal” (found in /etc/update-manager/release-upgrades) then Ubuntu 16.10 will be skipped and you will be upgraded to Ubuntu 17.04 (Zesty Zapus). This ensures that you are running a supported release and helps to test the next LTS upgrade path, i.e. from Ubuntu 16.04 to Ubuntu 18.04. Similarly, when Ubuntu 17.04 becomes end of life an upgrade from Ubuntu 16.04, with “Prompt=normal”, will upgrade you to Ubuntu 17.10.
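For reference, the prompting behaviour is controlled by a single setting in /etc/update-manager/release-upgrades, along these lines:

```ini
# /etc/update-manager/release-upgrades
[DEFAULT]
# Valid options for the prompting behaviour:
#   never  - never check for a new release
#   normal - check for each new release (skipping unsupported ones)
#   lts    - check only for new LTS releases
Prompt=normal
```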

I’ve also just modified the documentation regarding the ‘-d’ switch for ubuntu-release-upgrader and update-manager to make it clear that ‘-d’ is for upgrading from the latest supported release (Ubuntu 17.04 right now) to the development release of Ubuntu. The documentation used to incorrectly imply that any release could be updated to the development release, something that would be an unsafe upgrade path. Additionally, the meta-release-development and meta-release-lts-development files were modified to only contain information about releases relevant to the upgrade path. So meta-release-lts-development is currently empty and meta-release-development only contains information about Ubuntu 17.04 and Artful Aardvark, which will become Ubuntu 17.10.

I hope this makes things a bit clearer!

19 July, 2017 09:52PM

Ubuntu Insights: Kernel Team Summary – July 19, 2017

This newsletter is here to provide a status update from the Ubuntu Kernel Team. There will also be highlights provided for any interesting subjects the team may be working on. If you would like to reach the kernel team, you can find us at the #ubuntu-kernel channel on FreeNode. Alternatively, you can mail the Ubuntu Kernel Team mailing list at: kernel-team@lists.ubuntu.com

Highlights

  • Updated artful to v4.11.10
  • Updated unstable to v4.12.1
  • stress-ng 0.08.09 released:
    • http://smackerelofopinion.blogspot.co.uk/2017/07/new-features-landing-stress-ng-v00809.html
  • Xen Security updates released (Z/Y/X/T)
  • fwts 17.07.00 released: https://wiki.ubuntu.com/FirmwareTestSuite/ReleaseNotes/17.07.00
  • The following kernels were promoted to -updates and -security:
    • Zesty: 4.10.0-28.32
    • Yakkety: 4.8.0-59.64
    • Trusty: 3.13.0-125.174
    • zesty/linux-raspi2: 4.10.0-1011.14
    • yakkety/linux-raspi2: 4.8.0-1043.47
  • The Xenial kernel has been re-spun to fix a bug found in the SMB3 encryption for CIFS and to add two follow-up fixes for CVE-2017-1000364. The following kernels are in the process of being promoted to -proposed for testing as of now:
    • Xenial: 4.4.0-87.110
    • xenial/linux-raspi2: 4.4.0-1064.72
    • xenial/linux-snapdragon: 4.4.0-1066.71
    • xenial/linux-aws: 4.4.0-1025.34
    • xenial/linux-gke: 4.4.0-1021.21
    • trusty/linux-lts-xenial: 4.4.0-87.110~14.04.1
  • Yakkety EOL: As Yakkety reaches end of life this Thursday (Jul 20), the kernel update from the last cycle (4.8.0-59.64) will be the last one published by the Stable Team.

 

Devel Kernel Announcements

We intend to target a 4.13 kernel for the Ubuntu 17.10 release. The artful kernel is now based on Linux 4.11. The Ubuntu 17.10 Kernel Freeze is Thurs Oct 5, 2017.    

Stable Kernel Announcements

Current cycle: 14-July through 05-August

14-Jul   Last day for kernel commits for this cycle
17-Jul – 22-Jul   Kernel prep week.
23-Jul – 04-Aug   Bug verification & Regression testing.
07-Aug   Release to -updates.

Next cycle: 04-Aug through 26-Aug

 

04-Aug   Last day for kernel commits for this cycle
07-Aug – 12-Aug   Kernel prep week.
13-Aug – 25-Aug   Bug verification & Regression testing.
28-Aug   Release to -updates.

Status: CVEs

The current CVE status can be reviewed at the following: http://people.canonical.com/~kernel/cve/pkg/ALL-linux.html

19 July, 2017 05:49PM

Ubuntu Insights: Achieving ROI trumps security as the IoT industry’s biggest challenge

While security concerns continue to grab headlines, business benefits and ROI are the top-ranked challenges for IoT professionals today

London, 19 July 2017 – Despite over 23,000 articles* being written about IoT security in the last 12 months, it’s ensuring a return on investment that represents the biggest challenge for IoT professionals in 2017. That’s according to a new ‘Defining IoT Business Models’ research whitepaper from Canonical – the company behind Ubuntu Core, the IoT operating system.

The report, which surveyed over 360 IoT professionals including developers, vendors, and enterprise users, highlights that – despite a widespread focus on IoT security – 53% believe “quantifying ROI and providing a clear use case” is their most immediate IoT challenge. As an industry concern, this places defining the return on investment above both the lack of available infrastructure (40%) and the need for improved device security (45%).

Despite the IoT industry now being valued at more than $900 billion [1], it’s clear that many businesses still don’t understand how to turn a profit or gain genuine business benefits from the internet of things. According to Canonical’s research, 34% of IoT professionals also believe that “quantifying the business benefits” of the internet of things should be their number one priority to encourage greater IoT adoption.

Biggest immediate challenges faced by IoT professionals

  • Quantifying ROI – 53%
  • Device security and privacy – 45%
  • Lack of IoT infrastructure – 40%
  • Lack of budget/investment in IoT – 34%
  • Ensuring integration with the wider ecosystem – 29%
  • Device management / long-term support – 26%
  • Resistance from within the organisation – 25%
  • Ensuring regular updates are installed – 12%

Commenting on these findings, Mike Bell, EVP of IoT and Devices at Canonical said, “The early internet of things was something of a gold rush, with vendors and developers jumping in to secure their share of an exciting and rapidly growing new market. Unfortunately, many of these businesses simply didn’t understand or evaluate how the IoT was going to deliver value – and apparently – the majority still don’t.

“As we move towards 2018, businesses are looking for new ways to ensure that their investments in the IoT are driving financial growth and that their business models will remain sustainable in the years to come. At the forefront of this is a change in the way that businesses monetise the internet of things. Where once, people planned to monetise the IoT through device sales, we are now increasingly moving towards a software defined business model for IoT. With IoT specific operating systems, such as Ubuntu Core, allowing users to install new functionality onto their products, a growing number of businesses are now relying on IoT app stores to generate new revenues and increase their ROI. Through this new model, IoT users – whether corporate, industrial or consumer – can receive the latest features without having to buy a whole new device. This not only provides greater security and functionality for consumers, but also provides a long-term revenue source. With this in mind, businesses shouldn’t need to see ROI as their biggest challenge for the IoT, if anything, it should be their biggest opportunity.”

To download Canonical’s Defining IoT Business Models research whitepaper click here.

ENDS

NOTES TO EDITORS

  1. McKinsey & Company, 2016

Methodology

The Defining IoT Business Models report incorporates original research, commissioned by Canonical and conducted by independent industry publication IoTNow. The research surveyed 361 people from IoTNow’s database of registered IoT professionals. *As part of this research, Canonical also ran a media analysis using the Meltwater news monitoring tool. This found that 23,000 English language articles and news stories have been published on the topic of IoT security between June 2016 and June 2017.

About Canonical

Canonical is the company behind Ubuntu, the leading OS for cloud operations. Most public cloud workloads use Ubuntu, as do most new smart gateways, switches, self-driving cars and advanced robots. Canonical provides enterprise support and services for commercial users of Ubuntu.

Established in 2004, Canonical is a privately held company.

For further information please visit https://www.ubuntu.com/internet-of-things 

19 July, 2017 12:59PM

Ubuntu Insights: Ubuntu Artful Desktop July Shakedown – call for testing

We’re mid-way through the Ubuntu Artful development cycle, with the 17.10 release rapidly approaching on the horizon. Now is a great time to start exercising the new GNOME goodness that’s landed on our recent daily images! Please download the ISO, test it out on your own hardware, and file bugs where appropriate.

If you’re lucky enough to find any new bugs, please tag them with ‘julyshakedown’, so we can easily find them from this testing session.

We recently switched the images to GDM as the login manager instead of LightDM, and GNOME Shell is now the default desktop, replacing Unity. These would be great parts of the system to exercise this early in the cycle. It’s also a good time to test out the Ubuntu on Wayland session to see how it performs in your use cases.

Get started

Suggested tests

This early in the cycle we’re not yet recommending full ISO testing, but some exploratory tests on a diverse range of set-ups would be appropriate. There’s enough new and interesting stuff in these ISOs to make it worthwhile giving everything a good exercise. Here are some examples of things you might want to run through to get started.

Ubuntu on Wayland

  • Logging in using the ‘Ubuntu on Wayland’ session for your normal day to day activities
  • Suspend & resume and check everything still functions as expected
  • Attach to, and switch between wired and wireless networks you have nearby
  • Connect any bluetooth devices you have, especially audio devices, and make sure they work as expected
  • Plug in external displays if you have them, and ensure they work as usual

Reporting issues

The Ubuntu Desktop Team are happy to help you with these ISO images. The team are available in #ubuntu-desktop on freenode IRC. If nobody is about in your timezone, you may need to wait until the European work day to find active developers.

Bugs are tracked in Launchpad, so you’ll need an account there to get started.

If you report defects that occur only when running a wayland session please add the tag ‘wayland’ to the bug report.

Remember to use the ‘julyshakedown’ tag on your bugs so we can easily find them!

Known issues

There is a known issue with using Bluetooth audio devices from the greeter. This means that people won’t be able to use screenreaders over Bluetooth at the greeter. Once in the session this should all work as normal though.

Issues specific to wayland:

We look forward to receiving your feedback, and results!

🙂

19 July, 2017 09:11AM

July 18, 2017

Ubuntu Insights: Things to consider when building a robot with open source

So you’re considering (or are in the process of) bringing a robot, using open source software, to market. It’s based on Linux. Maybe you’re using the Robot Operating System (ROS), or the Mission Oriented Operating Suite (MOOS), or yet another open-source middleware that’s helping you streamline development. As development nears something useful, you start receiving some pressure about desired returns on this thing. You might be asked ‘When can we start selling it?’ This is where you reach a crossroads.

You can do one of two things:

  1. Start shipping essentially what you have now
  2. Step back, and treat going to production as an entirely new problem to solve, with new questions to answer

You don’t need to look very far to see examples of people who settled on (1). In fact, the IoT market is flooded with them. With the rush to get devices to market, it’s not at all rare to find devices left with hard-coded credentials, development keys, various security vulnerabilities, and no update path.

Think of Mirai, the botnet that mounted a DDoS attack with traffic surpassing 1Tbps, bringing down some of the biggest websites on the Internet. It’s made up primarily of IoT devices. Does it use super cool black magic developed in a windowless lab (or basement) to overwhelm the devices’ defenses and become their master? Nope, default (and often hard-coded) credentials. Did the manufacturers of these devices react quickly and release updates to all these devices in order to secure them? No, many of them don’t have an update method at all. They recalled them instead.

Rather than rushing to market, take a step back. You can save yourself and your company a lot of pain simply by thinking through a few points.

For example, how is your software updated? You must be able to answer this question. Your software is not perfect. In a few weeks you’ll find out that, when your autonomous HMMWV is used in California, it thinks that little sagebrush is an oak. Or you accidentally included your SSH keys.

How is the base OS updated? Perhaps this is still part of your product, and answering the above question answers this one as well. But perhaps your OS comes from another vendor. How do you get updates from them to your customers? This is where security vulnerabilities can really bite you: a kernel that is never updated, or a severely out-of-date openssl.

Once you have updates figured out, how is your robot recovered if an update goes sideways? My go-to example for this is a common solution to the previous problem: automatic security updates. That’s a great practice for servers and desktops and things that are obviously computers, because most people realize that there’s an acceptable way to turn those off, and it’s not to hold the power button for 5 seconds. Robotic systems (and IoT systems in general) have a bit of an issue in that sometimes they’re not viewed as computers at all. If your robot is behaving oddly, chances are it will be forcefully powered off. If it was behaving oddly because it was installing a kernel update real quick, well, now you have a robotic paperweight with a partially installed kernel. You need to be able to deal with this type of situation.
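
One common mitigation is to make each update step atomic, so that a power cut mid-update leaves either the old version or the new version on disk, never a half-written one. A minimal sketch of that pattern in Python (illustrative only, not any particular robot framework's API — the filename is invented):

```python
import os
import tempfile

def atomic_install(path, new_contents):
    """Replace the file at `path` so that a power cut leaves either
    the old file or the new file intact, never a partial write."""
    dirname = os.path.dirname(os.path.abspath(path))
    # Write the new version to a temporary file in the same directory...
    fd, tmp = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(new_contents)
            f.flush()
            os.fsync(f.fileno())   # ...force it all the way to disk...
        os.replace(tmp, path)      # ...then swap it in atomically.
    except BaseException:
        os.unlink(tmp)
        raise
```

Real systems go further (A/B partitions, transactional updates as used by snaps), but the principle is the same: never modify the only working copy in place.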

Finally, what is your factory process? How do you install Linux, ROS (or whatever middleware you’re using), and your own stuff on a device you’re about to ship? Small shops might do it by hand, but that doesn’t scale and is error-prone. Others might make a custom seeded distro ISO, but that’s no small task and it’s not easy to maintain as you update your software. Still others use Chef or some other automation tool with a serious learning curve, and before long you realize that you dumped a significant amount of engineering effort into something that should have been easy.

All of these questions are important. If you find yourself not having clear answers to any of them, you should join our webinar, where we discuss how to build a commercial robot with open source. We’ll help you think through these questions, and be available to answer any more you have!

18 July, 2017 02:58PM

Ubuntu Insights: How modelling helps you avoid getting a stuck OpenStack

Lego model of an Airbus A380-800. Airbus run OpenStack

A “StuckStack” is a deployment of OpenStack that, usually for technical but sometimes for business reasons, cannot be upgraded without significant disruption, time and expense. In the last post on this topic we discussed how many of these clouds became stuck, and how the decisions made at the time were consistent with much of the prevailing wisdom of the day. Now, with OpenStack seven years old, a recent explosion of growth in container orchestration systems, and more businesses making use of cloud platforms both public and private, OpenStack is under pressure.

No magic solution

If you are still searching for a solution to upgrade your existing StuckStack in place without issues, then I have bad news for you: there are no magic solutions and you are best focusing your energy on building a standardised platform that can be operated efficiently and upgraded easily.

The low-cost airline industry has shown that whilst flyers may aspire to a best-of-breed experience, sitting in first or business class sipping champagne with plenty of space to relax, most will choose the cheapest seat because ultimately the value equation doesn’t warrant paying more. Workloads are the same. Long term, workloads will run on the platform where it is most economic to run them, as the business gains little from running on premium-priced hardware or software.

Amazon, Microsoft, Google and other large-scale public cloud players know this, which is why they have built highly efficient data centres and used models to build, operate and scale their infrastructure. Enterprises have long followed a policy of using best-of-breed hardware and software infrastructure that is designed, built, marketed, priced, sold and implemented as a first-class experience. The reality may not always have lived up to the promise, but that hardly matters now, as the cost model cannot survive in today’s world. Some organisations have tried to tackle this by switching to free software alternatives without changing their own behaviour, and thus find that they have merely moved cost from software acquisition to software operation. The good news is that the techniques used by the large operators, who place efficient operations above all else, are now available to organisations of all types.

What is a software model?

Whilst software applications have for many years been composed of many objects, processes and services, in recent years it has become far more common for applications to be made up of many individual services that are highly distributed across servers in a data centre, and across different data centres themselves.

A simple representation of OpenStack Services

Many services means many pieces of software to configure, manage and keep track of over many physical machines. Doing this at scale in a cost-efficient way requires a model of how all the components are connected and how they map to physical resources. To build the model we need to have a library of software components, a means of defining how they connect with one another and a way to deploy them onto a platform, be it physical or virtual. At Canonical we recognised this several years ago and built Juju, a generic software modelling tool that enables operators to compose complex software applications with flexible topologies, architectures and deployment targets from a catalogue of hundreds of common software services.

Juju modelling OpenStack Services

In Juju, software services are defined in something called a Charm. Charms are pieces of code, typically written in python or bash that give information about the service – the interfaces declared, how the service is installed, what other services it can connect to etc.

Charms can be simple or complex depending on the level of intelligence you wish to give them. For OpenStack, Canonical, with help from the upstream OpenStack community, has developed a full set of Charms for the primary OpenStack services. The Charms represent the instructions for the model, such that it can be deployed, operated, scaled and replicated with ease. The Charms also define how to upgrade themselves, including, where needed, the sequence in which to perform the upgrade and how to gracefully pause and resume services when required. By connecting Juju to a bare-metal provisioning system such as Metal As A Service (MAAS), the logical model of OpenStack can be deployed to physical hardware. By default, the Charms will deploy services in LXC containers, which gives greater flexibility to relocate services as required based on the cloud’s behaviour. Config is defined in the Charms or injected at deploy time by a third-party tool such as Puppet or Chef.

There are two distinct benefits to this approach: first, by creating a model we have abstracted each of the cloud services from the underlying hardware; second, we have the means to compose new architectures through iterations using standardised components from a known source. This consistency is what enables us to deploy very different cloud architectures using the same tooling, safe in the knowledge that we will be able to operate and upgrade them easily.

With hardware inventory being managed with a fully automated provisioning tool and software applications modelled, operators can scale infrastructure much more efficiently than using legacy enterprise techniques or building a bespoke system that deviates from core. Valuable development resources can be focused on innovating in the application space, bringing new software services online faster rather than altering standard, commodity infrastructure in a way which will create compatibility problems further down the line.

In the next post I’ll highlight some of the best practises for deploying a fully modelled OpenStack and how you can get going quickly. If you have an existing StuckStack then whilst we aren’t going to be able to rescue it that easily, we will be able to get you on a path to fully supported, efficient infrastructure with operations cost that compares to public cloud.

Upcoming webinar

If you are stuck on an old version of OpenStack and want to upgrade your OpenStack cloud easily and without downtime, watch our on-demand webinar with live demo of an upgrade from Newton to Ocata.

Contact us

If you would like to learn more about migrating to a Canonical OpenStack cloud, get in touch.

18 July, 2017 01:38PM

hackergotchi for Netrunner

Netrunner

Netrunner Rolling 2017.07 released

The Netrunner Team is happy to announce the release of Netrunner Rolling 2017.07 – 64bit ISO. That means 18 months after its last release, Netrunner is now available again on the two biggest non-commercial, community-driven distributions: Debian and Arch/Manjaro. Like with the Debian version, our plan is to release an updated install medium regularly 3-4 […]

18 July, 2017 12:34PM by Netrunner Team

hackergotchi for Ubuntu developers

Ubuntu developers

Colin King: New features landing stress-ng V0.08.09

The latest release of stress-ng V0.08.09 incorporates new stressors and a handful of bug fixes. So what is new in this release?
  • memrate stressor to exercise and measure memory read/write throughput
  • matrix yx option to swap order of matrix operations
  • matrix stressor size can now be up to 8192 x 8192
  • radixsort stressor (using the BSD library radixsort) to exercise CPU and memory
  • improved job script parsing and error reporting
  • faster termination of rmap stressor (this was slow inside VMs)
  • icache stressor now calls cacheflush()
  • anonymous memory mappings are now private allowing hugepage madvise
  • fcntl stressor exercises the 4.13 kernel F_GET_FILE_RW_HINT and F_SET_FILE_RW_HINT
  • stream and vm stressors have new madvise options
The new memrate stressor performs 64/32/16/8 bit reads and writes to a large memory region.  It will attempt to get some statistics on the memory bandwidth for these simple reads and writes.  One can also specify the read/write rates in terms of MB/sec using the --memrate-rd-mbs and --memrate-wr-mbs options, for example:

stress-ng --memrate 1 --memrate-bytes 1G \
    --memrate-rd-mbs 1000 --memrate-wr-mbs 2000 -t 60
stress-ng: info: [22880] dispatching hogs: 1 memrate
stress-ng: info: [22881] stress-ng-memrate: write64: 1998.96 MB/sec
stress-ng: info: [22881] stress-ng-memrate: read64: 998.61 MB/sec
stress-ng: info: [22881] stress-ng-memrate: write32: 1999.68 MB/sec
stress-ng: info: [22881] stress-ng-memrate: read32: 998.80 MB/sec
stress-ng: info: [22881] stress-ng-memrate: write16: 1999.39 MB/sec
stress-ng: info: [22881] stress-ng-memrate: read16: 999.66 MB/sec
stress-ng: info: [22881] stress-ng-memrate: write8: 1841.04 MB/sec
stress-ng: info: [22881] stress-ng-memrate: read8: 999.94 MB/sec
stress-ng: info: [22880] successful run completed in 60.00s (1 min, 0.00 secs)

...the memrate stressor will attempt to limit the memory rates but due to scheduling jitter and other memory activity it may not be 100% accurate.  By careful setting of the size of the memory being exercised with the --memrate-bytes option one can exercise the L1/L2/L3 caches and/or the entire memory.

By default, the matrix stressor will perform matrix operations with optimal memory access.  The new --matrix-yx option will instead perform matrix operations in y, x rather than x, y order, causing more cache stalls on larger matrices.  This can be useful for exercising cache misses.
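
The cache behaviour this option exercises is easy to reproduce outside stress-ng. Here is a rough Python sketch (not stress-ng's code; absolute timings will vary by machine): both traversal orders compute the same result, but the y, x order jumps between rows on every access instead of walking memory sequentially.

```python
import time

N = 512
matrix = [[1.0] * N for _ in range(N)]

def sum_xy(m):
    # x, y order: walk each row sequentially (cache friendly)
    total = 0.0
    for row in m:
        for v in row:
            total += v
    return total

def sum_yx(m):
    # y, x order: the row index varies fastest, so consecutive
    # accesses land in different rows' storage (cache hostile)
    total = 0.0
    for y in range(N):
        for x in range(N):
            total += m[x][y]
    return total

for fn in (sum_xy, sum_yx):
    start = time.perf_counter()
    result = fn(matrix)
    print(fn.__name__, result, f"{time.perf_counter() - start:.4f}s")
```

On large matrices the y, x variant incurs far more cache misses, which is exactly what the stressor option is designed to provoke.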

To complement the heapsort, mergesort and qsort memory/CPU exercising sort stressors I've added the BSD library radixsort stressor to exercise sorting of hundreds of thousands of small text strings.
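
As a reminder of what the BSD routine does under the hood (this is an illustration, not stress-ng's or the BSD library's code), here is a minimal LSD radix sort for equal-length byte strings:

```python
def radixsort(strings, width):
    """LSD radix sort for byte strings all of length `width`."""
    for pos in range(width - 1, -1, -1):
        buckets = [[] for _ in range(256)]   # one bucket per byte value
        for s in strings:
            buckets[s[pos]].append(s)
        # Concatenating buckets in order is a stable pass on this byte.
        strings = [s for bucket in buckets for s in bucket]
    return strings

words = [b"cab", b"abc", b"bca", b"bab", b"aaa"]
print(radixsort(words, 3))   # → [b'aaa', b'abc', b'bab', b'bca', b'cab']
```

Because each pass is a bucket scatter rather than a comparison loop, radix sort stresses memory in a very different pattern to heapsort, mergesort and qsort.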

Finally, while exercising various hugepage kernel configuration options I was inspired to make stress-ng's mmaps work better with hugepage madvise hints, so where possible all anonymous memory mappings are now private to allow hugepage madvise to work.  The stream and vm stressors also have new madvise options to allow one to choose hugepage, nohugepage or normal hints.

No big changes as per normal, just small incremental improvements to this all-purpose stress tool.

18 July, 2017 11:20AM by Colin Ian King (noreply@blogger.com)

David Tomaschik: Hacker Summer Camp 2017 Planning Guide

My hacker summer camp planning posts are among the most-viewed on my blog, and I was recently reminded I hadn’t done one for 2017 yet, despite it being just around the corner!

Though many tips will be similar, feel free to check out the two posts from last year as well:

If you don’t know, Hacker Summer Camp is a nickname for 3 information security conferences in one week in Las Vegas every July/August. This includes Black Hat, BSides Las Vegas, and DEF CON.

Black Hat is the most “corporate” of the 3 events, with a large area of vendor booths, great talks (though not all are super-technical) and a very corporate/organized feel. If you want a serious, straight-edge security conference, Black Hat is for you. Admission is several thousand dollars, so most attendees are either self-employed and writing it off, or paid by their employer.

BSides Las Vegas is a much smaller (~1000 people) conference, that’s heavily community-focused. With tracks intended for those new to the industry, getting hired, and a variety of technical talks, it has something for everyone. It also has my favorite CTF: Pros vs Joes. You can donate for admission, or get in line for one of ~450 free admissions. (Yes, the line starts early. Yes, it quickly sells out.)

DEF CON is the biggest of the conferences. (And, in my opinion, the “main event”.) I think of DEF CON as the Burning Man of hacker conferences: yes, there’s tons of talks, but it’s also a huge opportunity for members of the community to show off what they’re doing. It’s also a huge party at night: tons of music, drinking, pool parties. At DEF CON, there is more to do than can be done, so you’ll need to pick and choose.

Hopefully you already have your travel plans (hotel/airfare/etc.) sorted. It’s a bit late for me to provide advice there this year. :)

What To Do

Make sure you do things. You only get out of Hacker Summer Camp what you put into it. You can totally just go and sit in conference rooms and listen to talks, but you’re not going to get as much out of it as you otherwise could.

Black Hat has excellent classes, so you can get into significantly more depth than a 45 minute talk would allow. If you have the opportunity (they’re expensive), you should take one.

If you’re not attending Black Hat, come over to BSides Las Vegas. They go on in parallel, so it’s a good opportunity for a cheaper option and for a more community feel. At BSides, you can meet some great members of the community, hear some talks in a smaller intimate setting (you might actually have a chance to talk to the speaker afterwards), and generally have a more laid-back time than Black Hat.

DEF CON is entirely up to you: go to talks, or don’t. Go to villages and meet people, see what they’re doing, get hands on with things. Go to the vendor area and buy some lockpicks, WiFi pineapples, or more black t-shirts. Drink with some of the smartest people in the industry. You never know who you’ll meet. Whatever you choose, you can have a blast, but you need to make sure you manage your energy. I’ve made myself physically sick by trying to do it all – just accept that you can’t and take it easy.

I’m particularly excited to check out the IoT village again this year. (As regular readers know, I have a soft spot for the Insecurity of Things.) Likewise, I look forward to seeing small talks in the villages.

Whatever you do, be an active participant. I’ve personally spent too much time not participating: not talking, not engaging, not doing. You won’t get the most out of this week by being a wallflower.

Digital Security

DEF CON has a reputation for being the most dangerous network in the world, but I believe that title depends on how you look at it. In my experience, it’s a matter of quality vs quantity. While I have no doubt that the open WiFi at DEF CON probably has far more than its fair share of various hijinks (sniffing, ARP spoofing, HTTPS downgrades, fake APs, etc.), I genuinely don’t anticipate seeing high-value 0-days being deployed on this network. Using an 0-day on the DEF CON network is going to burn it: someone will see it and your 0-day is over. Some of the best malware reversers and forensics experts in the world are present; I don’t anticipate someone using a high-quality bug in modern software on this network and wasting it like that.

Obviously, I can’t make any guarantees, but the following advice approximately matches my own threat model. If you plan to connect to shady networks or CTF-type networks, you probably want to take additional precautions. (Like using a separate laptop, which is the approach I’m taking this year.)

That being said, you should take reasonable precautions against more run of the mill attacks:

  • Use Full Disk Encryption (in case your device gets lost/stolen)
  • Be fully updated on a modern OS (putting off patches? might be the time to fix that)
  • Don’t use open WiFi
  • Turn off any radios you’re not using (WiFi, BT)
  • Disable 3G downgrade on your phone if you can (LTE only)
  • Don’t accept updates offered while you’re in Vegas
  • Don’t run random downloads :)
  • Run a local firewall dropping all unexpected traffic

Using a current, fully patched iOS or Android device should be relatively safe. ChromeOS is a good choice if you just need internet from a laptop-style device. Fully patched Windows/Linux/OS X are probably okay, but you have somewhat larger attack surface and less protection against drive-by malware.

Your single biggest concern on any network (DEF CON or not) should be sending plaintext over the network. Use a VPN. Use HTTPS. Be especially wary of phishing. Use 2-Factor. (Ideally U2F, which is cryptographically designed to be unphishable.)
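
If you script anything over a hostile network, make sure your tooling actually verifies TLS. As a quick sanity check, Python's default context already enables both certificate and hostname verification — the properties that defeat the downgrade and fake-AP tricks above:

```python
import ssl

# A default context requires a valid certificate chain AND a matching
# hostname; code that disables either is what MITM attacks feed on.
ctx = ssl.create_default_context()
print("verifies certs:", ctx.verify_mode == ssl.CERT_REQUIRED)
print("checks hostname:", ctx.check_hostname)
```

If you ever see `verify_mode=CERT_NONE` or `check_hostname=False` in a script you're about to run over con WiFi, fix that first.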

Personal Security & Safety

This is Vegas. DEF CON aside, watch what you’re doing. There are plenty of pick pockets, con men, and general thieves in Las Vegas. They’re there to prey on tourists, and whether you’re there for a good time or for a con, you’re their prey. Keep your wits about you.

Check ATMs for skimmers. (This is a good life pro tip.) Don’t use the ATMs near the con. If you’re not confident you can spot a skimmer, bring enough cash in advance and lock it in your in-room safe.

Does your hotel use RFID-based door locks? May I suggest RFID-blocking sleeves?

Planning to drink? (I am.) Make sure you drink water too. Vegas is super-hot, and dehydration will make you very sick (or worse). I try to drink 1/2 a liter of water for every drink I have, but I rarely meet that goal. It’s still a good goal to have.

FAQ

Are you paranoid?

Maybe. I get paid to execute attacks and think like an attacker, so it comes with the territory. I’m going to an event to see other people who do the same thing. I’m not convinced the paranoia is unwarranted.

Will I get hacked?

Probably not, if you spend a little time preparing.

Should I go to talks?

Are they interesting to you? Go to talks if they’re interesting and timely. Note that most talks are recorded and will be posted online a couple of months after the conferences (or can be bought sooner from Source of Knowledge). A notable exception is that SkyTalks are not recorded. And don’t try to record them yourself – you’ll get bounced from the room.

What’s the 3-2-1 rule?

3 hours of sleep, 2 meals, and 1 shower. Every day. I prefer 2 showers myself – Vegas is pretty hot.

18 July, 2017 07:00AM

Stephen Michael Kellat: Clipped Wings

Well, scratch the last plan I had.1 I will not be able to go to OggCamp 17 as I had planned. Due to a member of my immediate family having been put on the docket for open heart surgery, family wants me to stay on-continent and within six counties of Northeast Ohio if at all possible.2 I do not have a date yet for when that family member will be going into surgery but recovery will be tough.

Yes, I was looking forward to the trip to be able to meet up with everybody in-person. Other relations have indicated to me that, if there is an event next year, they may help me plan for travel. Continuing uncertainty about the status of my job due to proposed cutbacks by the departmental offices in their budget submission to the Congress has not helped things either. I am certainly not happy about this but I will have to soldier on.

Efforts to unravel some of the mysteries behind Outernet continue. Eventually I will be able to put together some sort of paper. My preference was to have presented something at the gathering in Canterbury. I will be having to review plans instead. Learning more about LaTeX may prove useful, I suppose.

The trip was going to be nicely timed before the Fall 2017 semester started at Lakeland Community College. I missed one class during the Spring 2017 semester due to workload constraints at my day job.3 If I had taken that class I could have graduated. If I can talk the program director into getting the capstone offered off-cycle in the fall I may be able to graduate from the program by December 2017.

Somehow the work as an evangelist at West Avenue Church of Christ is continuing.4 It is hard preaching to residents at a nursing home. Shut-in populations still deserve to have the opportunity to hear the Word if they so choose, though.

If you want to talk about anything contained here, I don't have comments on this blog. Use something like Telegram to contact me at https://t.me/smkellat or via Mastodon/GNUSocial/StatusNet/Fediverse at https://quitter.se/alpacaherder/. I've been off IRC for too long so I cannot be found on freenode at the moment. Others have more special, rather direct ways of reaching me.

Have a beautiful day!


  1. It is reasonable to ask exactly which plan at this time as there are so many, though... 

  2. Northern Ireland has 5,460 square miles. Ashtabula, Lake, Geauga, Cuyahoga, Trumbull, and Portage counties come to a mere 2,935 square miles. One is just slightly over half the size of the other. 

  3. It is rare to be in a workplace where you actually have "All Hands On Deck" called and that is an actual operating condition but I digress. 

  4. Eventually the congregation may get a website. That is triaged low. Getting audio issues sorted out in the sanctuary is a higher priority problem right now. 

18 July, 2017 03:00AM

July 17, 2017

hackergotchi for SparkyLinux

SparkyLinux

Lutris

There is a new application available for Sparkers/gamers: Lutris.

Lutris is an open gaming platform for Linux. It helps you install and manage your games in a unified interface.
Our goal is to support every game which runs on Linux, from native to Windows games (via Wine) to emulators and browser games. The desktop application and the website are libre software, your contributions are welcome!

Features:
– Manage your Linux games, Windows games, emulated console games and browser games
– Launch your Steam games
– Community-written installers to ease up your games’ installation
– More than 20 emulators installed automatically or in a single click, providing support for most gaming systems from the late 70’s to the present day
– Download and play libre and freeware games

Installation:
sudo apt update
sudo apt install lutris

or via Sparky APTus Gamer.

Lutris


17 July, 2017 08:43PM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

Alan Pope: Ubuntu Artful Desktop July Shakedown

Ubuntu Artful Desktop July Shakedown

We’re mid-way through the Ubuntu Artful development cycle, with the 17.10 release rapidly approaching on the horizon. Now is a great time to start exercising the new GNOME goodness that’s landed on our recent daily images! Please download the ISO, test it out on your own hardware, and file bugs where appropriate.

If you’re lucky enough to find any new bugs, please tag them with ‘julyshakedown’, so we can easily find them from this testing session.

Ubuntu Artful Desktop

We recently switched the images to GDM as the login manager instead of LightDM, and GNOME Shell is now the default desktop, replacing Unity. These would be great parts of the system to exercise this early in the cycle. It’s also a good time to test out the Ubuntu on Wayland session to see how it performs in your use cases.

Get started

Suggested tests

This early in the cycle we’re not yet recommending full ISO testing, but some exploratory tests on a diverse range of set-ups would be appropriate. There’s enough new and interesting stuff in these ISOs to make it worthwhile giving everything a good exercise. Here are some examples of things you might want to run through to get started.

Ubuntu on Wayland

  • Logging in using the ‘Ubuntu on Wayland’ session for your normal day to day activities
  • Suspend & resume and check everything still functions as expected
  • Attach to, and switch between wired and wireless networks you have nearby
  • Connect any bluetooth devices you have, especially audio devices, and make sure they work as expected
  • Plug in external displays if you have them, and ensure they work as usual

Reporting issues

The Ubuntu Desktop Team are happy to help you with these ISO images. The team are available in #ubuntu-desktop on freenode IRC. If nobody is about in your timezone, you may need to wait until the European work day to find active developers.

Bugs are tracked in Launchpad, so you’ll need an account there to get started.

If you report defects that occur only when running a wayland session please add the tag ‘wayland’ to the bug report.

Remember to use the 'julyshakedown' tag on your bugs so we can easily find them!

Known issues

There is a known issue with using Bluetooth audio devices from the greeter.  This means that people won’t be able to use screenreaders over Bluetooth at the greeter.  Once in the session this should all work as normal though.

Issues specific to wayland:

We look forward to receiving your feedback, and results!

:)

17 July, 2017 07:00AM

hackergotchi for Wazo

Wazo

Sprint Review 17.10

Hello Wazo community! Here comes the release of Wazo 17.10!

New features in this sprint

Plugins: The Wazo plugin system is still young and did not implement any compatibility restriction across Wazo versions. Newer plugins may become incompatible with older Wazo versions as they are released or older plugins may be incompatible with newer Wazo versions. We have added a restriction to forbid installing an incompatible plugin on your Wazo. Since there is no such restriction in Wazo 17.08 and 17.09, plugins installed on those versions may not work properly.

Plugins: Plugins are now shown in two sections: official plugins that are developed by the Wazo development team and community plugins that are written by the Wazo community.

Plugins: We also improved the search box for plugins, so that it is easier to find plugins that you don't already know about.

Ongoing features

Plugin management: There is still a lot to be done to the plugin management service. e.g. dependency, upgrade, HA, ...

Webhooks: We are adding a new way of interconnecting Wazo with other software: webhooks. Outgoing webhooks allow Wazo to notify other applications about events that happen on the telephony server, e.g. when a call arrives, when it is answered, hung up, when a new contact is added, etc. Incoming webhooks also allow Wazo to be notified of events happening on other applications, e.g. a new customer was added in your CRM, a new employee was added in your directory, etc. Unfortunately, there is no magic and the application in question must be able to send or receive webhooks so that Wazo can talk with it. See also this blog post (sorry, it's in French) about Wazo and webhooks.
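
None of the following is Wazo's actual API (the endpoint and payload fields are invented for illustration), but the shape of an incoming webhook is worth seeing: the other application simply POSTs a JSON event to a URL you expose. A bare-bones receiver in Python:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body that the sending application POSTs to us.
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length))
        print("received event:", event.get("type"))
        self.send_response(204)   # acknowledge with no body
        self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging

# To run standalone:
#   HTTPServer(("", 8000), WebhookHandler).serve_forever()
```

An outgoing webhook is the mirror image: the telephony server makes the POST, and your CRM or directory plays the role of the receiver above.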


The instructions for installing Wazo or upgrading Wazo are available in the documentation.

For more details about the aforementioned topics, please see the roadmap linked below.

See you at the next sprint review!

Sources:

17 July, 2017 04:00AM by The Wazo Authors

July 16, 2017

hackergotchi for Xanadu

Xanadu

Nuevo canal de Telegram xanadudevlog

Con la finalidad de cumplir a totalidad con nuestro contrato social el cual dice que “todos los medios de comunicación usados por la comunidad se mantendrán públicos” hemos creado un canal en Telegram llamado @xanadudevlog en el cual se publicara … Sigue leyendo

16 July, 2017 11:31PM by sinfallas

Repositorio de material para imprimir y compartir

Con la intensión de ayudar a la difusión de nuestra distribución Xanadu GNU/Linux, se ha creado un repositorio en github donde se colocara material tanto en formato PDF como sus fuentes con el propósito de darlo como obsequio en eventos, … Sigue leyendo

16 July, 2017 11:16PM by sinfallas

hackergotchi for SolydXK

SolydXK

SolydXK 9 released!

In the past three weeks we have been testing, improving, developing and exercising parts of our vocabulary that our mothers didn’t even know we had, but finally we are satisfied with the result. It is time to release the new SolydX and SolydK version 9.

Changes

  • New themes for SolydX and SolydK. You can choose a light or dark theme.
  • SolydXK System now has a GUI where you can encrypt partitions (and your USB flash drive), localize your system, select the fastest repositories, hold back packages and clean up your system. The encryption part of this application is functioning but still in beta. Use at your own risk!
  • The backport repository is now disabled by default but can be enabled in the new SolydXK System application.
  • The solydx/k-info packages were integrated into the solydx/k-system-adjustments packages and are now obsolete.

Discontinued applications

  • Updatemanager is replaced by applications from Debian’s repository with similar functionalities and the new SolydXK System GUI.
  • SolydXK Softwaremanager has been replaced with an application from Debian’s repository.
  • Lightdm Manager was replaced with lightdm-gtk-greeter-settings
  • Device Driver Manager (DDM)
  • SolydXK Conky
  • (SolydK) XKSudo
  • (SolydK) kcm-ufw
  • (SolydX) User Manager was replaced with gnome-system-tools
  • (SolydX) Sambashare was replaced with gadmin-samba

Upgrading existing systems
There are two ways to move your SolydXK-8 system to SolydXK-9: a fresh install or an in-place upgrade. It is preferred to download the ISO and do a fresh install (back up your data first!), but if you prefer to upgrade your existing system we have prepared a script to aid you in the process:

After downloading, unpack the archive, make the script executable and run it.
Important: check the output of the apt commands. All systems are configured differently and you might remove packages that you really don’t want to be removed!

If you do not want to upgrade just yet, don’t worry! After the release of SolydXK-9 I will keep the solydxk-8 repository available for another year. Firefox and Thunderbird will be updated in the solydxk-8 repository but they are not tested with SolydXK-8. Again, use them at your own risk!

SolydX RPI
Unfortunately, I have to stop developing SolydX RPI. It was fun doing it but it simply took too much time and effort and I still couldn’t get it to function to the standards I want for SolydXK.

Localized versions
The Dutch version is ready for download. The rest will follow, and I will let you know when they become available.

Community Editions
Frank is working hard on the 32-bit and Enthusiast’s Editions. When they are ready I will let you know.

Downloads
You can find more information, and download the ISOs on our product pages:

For any questions or issues, please visit our forum: http://forums.solydxk.com/

16 July, 2017 09:04PM by Schoelje

hackergotchi for Ubuntu developers

Ubuntu developers

Carla Sella


Packaging up a Go app as a snap


After building my first Python snap I was asked to try to build a Go snap.
There is a video about building Go snaps with snapcraft here, so after watching it I gave Kurly a try.

"Kurly is an alternative to the widely popular curl program and is designed to operate in a similar manner to curl, with select features. Notably, kurly is not aiming for feature parity, but common flags and mechanisms particularly within the HTTP(S) realm are to be expected."

First of all I got familiar with the code and got it on my PC:

$ git clone https://github.com/davidjpeacock/kurly.git

I entered the kurly directory:

$ cd kurly

I created a snap directory and entered it:

$ mkdir snap
$ cd snap

I created a snapcraft.yaml file with the go plugin (plugin: go):

name: kurly
version: master
summary: kurly is an alternative to the widely popular curl program.
description: |
  kurly is designed to operate in a similar manner to curl, with select features. Notably, kurly is not aiming for feature parity, but common flags and mechanisms particularly within the HTTP(S) realm are to be expected.

confinement: devmode

apps:
  kurly:
     command: kurly

parts:
  kurly:
     source: .                                              
     plugin: go
     go-importpath: github.com/davidjpeacock/kurly

The go-importpath keyword is important: it tells snapcraft to place the checked-out source at that import path inside GOPATH. This is required for absolute imports and path checking to work.
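Concretely, this is a rough sketch of the layout the go plugin ends up with (a simulation for illustration, not snapcraft's literal implementation):

```shell
# Simulate the layout the go plugin creates: the project source must sit
# at $GOPATH/src/<go-importpath> for absolute imports to resolve.
export GOPATH="$(mktemp -d)"
mkdir -p "$GOPATH/src/github.com/davidjpeacock"
ln -s "$PWD" "$GOPATH/src/github.com/davidjpeacock/kurly"
ls "$GOPATH/src/github.com/davidjpeacock"   # prints: kurly
```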


I went back to the root of the project and launched the snapcraft command to build the snap:

$ cd ..
$ snapcraft

Once snapcraft has finished building you will find a kurly_master_amd64.snap file in the root directory of the project.

I installed the kurly snap in devmode to test it and see if it worked well unconfined, so that I could then run it in confined mode and add the plugs needed for the snap to work properly:

$ sudo snap install --dangerous --devmode kurly_master_amd64.snap


If you run:

$ snap list

you will see the kurly snap installed in devmode:

Name    Version  Rev   Developer  Notes
core    16-2     2312  canonical  -
kurly   master   x1               devmode

Now I tried kurly out a bit to see if it worked, for instance:

$ kurly -v https://httpbin.org/ip
$ kurly -R -O -L http://cdimage.debian.org/debian-cd/current/amd64/iso-cd/debian-8.7.1-amd64-netinst.iso

OK, it worked, so now I tried installing it in confined mode, changing the snapcraft.yaml file accordingly (confinement: strict).

I ran snapcraft again and installed the snap:

$ snapcraft
$ sudo snap install --dangerous kurly_master_amd64.snap

You can see from the snap list command that the app is no longer installed in devmode:

$ snap list

Name    Version  Rev   Developer  Notes
core    16-2     2312  canonical  -
kurly   master   x2               -

I tried out kurly again and got some errors:

$ kurly -v https://httpbin.org/ip
> GET /ip HTTP/1.1
> User-Agent [Kurly/1.0]
> Accept [*/*]
> Host [httpbin.org]
*Error: Unable to get URL; Get https://httpbin.org/ip: dial tcp: lookup httpbin.org: Temporary failure in name resolution

From the error I could tell that kurly needs the network plug (plugs: [network]), so I changed the snapcraft.yaml file like this:

name: kurly
version: master
summary: kurly is an alternative to the widely popular curl program.
description: |
  kurly is designed to operate in a similar manner to curl, with select features. Notably, kurly is not aiming for feature parity, but common flags and mechanisms particularly within the HTTP(S) realm are to be expected.

confinement: strict

apps:
  kurly:
     command: kurly
     plugs: [network]

parts:
  kurly:
     source: .
     plugin: go
     go-importpath: github.com/davidjpeacock/kurly


I ran snapcraft and installed the kurly snap again:

$ snapcraft
$ sudo snap install --dangerous kurly_master_amd64.snap

But when I ran a kurly command to download a file I got another error:

$ kurly -R -O -L http://cdimage.debian.org/debian-cd/current/amd64/iso-cd/debian-8.7.1-amd64-netinst.iso
*Error: Unable to create file 'debian-8.7.1-amd64-netinst.iso' for output

Kurly could not write the file to my home directory, so I added the home plug to the snapcraft.yaml file, ran snapcraft and installed the snap again.
This time kurly worked fine.

So here's the final snapcraft.yaml file, ready for a PR on GitHub:
 
name: kurly
version: master
summary: kurly is an alternative to the widely popular curl program.
description: |
  kurly is designed to operate in a similar manner to curl, with select features. Notably, kurly is not aiming for feature parity, but common flags and mechanisms particularly within the HTTP(S) realm are to be expected.

confinement: strict

apps:
  kurly:
     command: kurly
     plugs: [network, home]

parts:
  kurly:
     source: .
     plugin: go
     go-importpath: github.com/davidjpeacock/kurly

That's it.
The Go snap is done!



 

16 July, 2017 01:36PM by Carla Sella (noreply@blogger.com)

July 15, 2017

hackergotchi for SparkyLinux

SparkyLinux

Sparky 5.0

There are new live/install iso images of SparkyLinux 5.0 “Nibiru” available to download.
Sparky 5 follows the rolling release model and is based on the Debian testing branch “Buster”.

From Wikipedia:

In Babylonian astronomy, Nibiru (in cuneiform spelled dné-bé-ru or MULni-bi-rum) refers to the equinox and the astronomical objects associated with it…
…Nibiru was considered the seat of the summus deus who shepherds the stars like sheep, in Babylon identified with Marduk.

The Sparky “Home” edition provides a fully featured operating system with the lightweight desktops LXQt, MATE and Xfce.

As usual, Sparky MinimalGUI (Openbox) and MinimalCLI (text based) let you install the base system with a minimal set of applications and a desktop of your choice, via the Sparky Advanced Installer.

Changes between version 4.5 and 5.0:
– full system upgraded from Debian testing repos as of July 14, 2017
– Linux kernel 4.11.6 as default (4.12.x is available in Sparky ‘unstable’ repo)
– new theme “Sparky5”
– new theme of LXQt edition
– new default wallpaper created by our community member “barti”
– new set of wallpapers of the Nature category, with a few nice landscapes from Poland
– Calamares 3.1.1 as the default system installer
– new tool for checking and displaying notification on your desktop about available updates

Other changes:
– added new repos (not active): wine-staging.com
– email client Icedove replaced by Thunderbird
– changed http to https protocol of all Sparky services, including repository; updating the ‘sparky-apt’ package fixes it automatically
– added two new live system boot options:
1. toram – lets you load the whole live system into RAM (if you have enough);
2. text mode – if there is any problem with the normal or failsafe boot, this option runs Sparky in text mode and lets you install it using the Advanced Installer.

The Sparky edition based on the Openbox window manager (MinimalGUI) has gained three keyboard shortcuts:
– Super+t -> terminal emulator
– Super+r -> gexec
– Super+q -> logout window

No system reinstallation is required.

If you have Sparky up to 4.5 installed on your hard drive, simply do a full system upgrade:
sudo apt-get update
sudo apt-get dist-upgrade

If you hit any problems, run:
sudo dpkg --configure -a
sudo apt-get install -f

New iso images of the rolling edition can be downloaded from the download/rolling page.

SparkyLinux LXQt

 

15 July, 2017 10:28PM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

Carla Sella


My first Snap 


I have been testing for Ubuntu for quite a while, so I decided to change things up a bit and give packaging apps a go. So here I am, writing about how I managed to create my first Python snap.

Snapcraft is a new way to package apps, so I thought it would be nice to learn about it. I went to the Snapcraft site https://snapcraft.io/ and found out that with Snapcraft you can:
"Package any app for every Linux desktop, server, cloud or device, and deliver updates directly".

"A snap is a fancy zip file containing an application together with its dependencies, and a description of how it should safely run on your system, especially the different ways it should talk to other software.
Snaps are designed to be secure, sandboxed, containerised applications isolated from the underlying system and from other applications. Snaps allow the safe installation of apps from any vendor on mission critical devices and desktops."

So if you have an app that is too new for the Ubuntu archive, you can get it into the Snap store and install it on Ubuntu or any other Linux distribution that supports snaps.

I started by getting in touch with the guys in the Snapcraft channel on Rocket.Chat: https://rocket.ubuntu.com/channel/snapcraft, who told me how to start.

First of all I read the "Snap a Python App" tutorial and then applied what I learned to Lbryum, a lightweight lbrycrd client and a fork of the Electrum bitcoin client.

I couldn't believe how easy it was. I am not a developer, but I know how to code and I know a bit of Python.

First of all you need to get familiar with the code of the app you want to snap, so I got the Lbryum code from GitHub:

$ sudo apt install git
$ git clone https://github.com/lbryio/lbryum.git


Once I got familiar with the code I installed Snapcraft:

$ sudo apt install snapcraft

I generated a Snapcraft project in the lbryum root directory with:

$ snapcraft init

If everything works, you will get this output:

Created snap/snapcraft.yaml.
Edit the file to your liking or run `snapcraft` to get started
 
Now if you check the content of the project's directory, it has been populated with a "snap" folder containing the snapcraft.yaml file, which I modified to create the Lbryum snap:


name: lbryum
version: 'master'
summary: Lightweight lbrycrd client
description: |
  Lightweight lbrycrd client, a fork of the Electrum bitcoin client
grade: stable
confinement: devmode

apps:
  lbryum:
    command: lbryum

parts:
  lbryum:
    source: .
    plugin: python
 
Here is the documentation where you can find the meaning of the fields in the snapcraft.yaml file (the fields are quite self-explanatory).



To find out what plugs or parts your app needs, you have to run snapcraft and debug until you find everything that's needed, so I tried to build at this stage to make sure I had the basic definition correct.

I ran this command from the root of the lbryum-snap directory:

$ snapcraft prime

Obviously I had some errors that made me make some changes to the snapcraft.yaml file. I found out that the app needs Python 2, so I added "python-version: python2", and I specified to use the requirements.txt file of the Lbryum project for the packages needed during install (requirements: requirements.txt):
 
name: lbryum
version: 'master'
summary: Lightweight lbrycrd client
description: |
  Lightweight lbrycrd client, a fork of the Electrum bitcoin client
grade: stable
confinement: devmode

apps:
  lbryum:
    command: lbryum

parts:
  lbryum:
    source: .
    plugin: python
    requirements: requirements.txt
    python-version: python2

I ran:

$ snapcraft clean

and

$ snapcraft prime

again.



Success!!!! :)

Ok, so now I tried the snap with:

$ sudo snap try --devmode prime/
$ lbryum daemon start
$ lbryum version
$ lbryum commands

Then I played around with it a bit to see if the snap worked well.

Now before shipping the snap or opening a PR on GitHub, we need to turn confinement on and see if the snap works or if it needs further changes to the snapcraft.yaml file.

So I changed the confinement from devmode to strict (confinement: strict) and then ran:

$ snapcraft

I got this output:

Skipping pull lbryum (already ran)
Skipping build lbryum (already ran)
Skipping stage lbryum (already ran)
Skipping prime lbryum (already ran)
Snapping 'lbryum' -                                                           
Snapped lbryum_master_amd64.snap

I installed the snap:

$ sudo snap install --dangerous lbryum_master_amd64.snap

When I ran lbryum I started getting a lot of errors that made me understand that lbryum needs network access to work, so I added the network plug (plugs: [network]):

name: lbryum
version: 'master'
summary: Lightweight lbrycrd client
description: |
  Lightweight lbrycrd client, a fork of the Electrum bitcoin client
grade: stable
confinement: strict

apps:
  lbryum:
    command: lbryum
    plugs: [network]

parts:
  lbryum:
    source: .
    plugin: python
    requirements: requirements.txt
    python-version: python2

I ran:

$ snapcraft

again and installed the snap again:

$ sudo snap install --dangerous lbryum_master_amd64.snap
$ lbryum daemon start
$ lbryum version
$ lbryum commands

Works!

Fine, so I opened a PR on GitHub proposing my snapcraft.yaml file so that they could use it to create a Lbryum snap.



If you need to debug your snap for finding what is wrong there is also a debugging tool for debugging confined apps:

https://snapcraft.io/docs/build-snaps/debugging


That's it. End of my first snap adventure :).

15 July, 2017 04:16PM by Carla Sella (noreply@blogger.com)

July 14, 2017

Cumulus Linux

Cumulus content round up

To help you stay in the know on all things data center networking, we’ve gathered some of our favorite content from both our own publishing house and from around the web. We hope this helps you stay up to date on both your own skills and on data center networking trends. If you have any suggestions for next time, let us know in the comment section!

Our fav new pieces at Cumulus Networks

BGP in the data center: Are you leveraging everything BGP has to offer? Probably not. This practical report peels away the mystique of BGP to reveal an elegant and mature, simple yet sophisticated protocol. Author Dinesh Dutt, Chief Scientist at Cumulus Networks, covers BGP operations as well as enhancements that greatly simplify its use so that practitioners can refer to this report as an operational manual. Download the guide.

Magic Quadrant report:  Cumulus Networks has been named a “Visionary” in the Data Center Networking category for 2017 Gartner Magic Quadrant. With 96% of their survey respondents finding open networking to be a relevant buying criterion and with the adoption of white-box switching to reach 22% by 2020, it’s clear that disaggregation is the answer for forward-looking companies. Read the report to learn about the latest data center networking trends.

Customer ebook: We put together a dynamic ebook that organizes our customer experiences by business benefit. Choose the benefit that means the most to your organization and learn how our customers leveraged Cumulus Linux to get there. Start clicking around.

We have loads of educational resources for you to learn about Linux, open networking and Cumulus Networks. Head to our learn center, resource page or solutions section for more information on data center networking trends and topics.

Our fav pieces around the web

Why web-scale is the future: While you may associate web-scale networking with cloud giants like Facebook, Google, and Amazon, it’s not just an architecture for the large scale enterprises anymore. The industry has looked at data centers like theirs and asked the question: “What are they doing that we can mimic at a smaller scale?” Read on to find out.

Learn the basics of Docker Compose: Linux.com put together a series of posts covering how to install, use and manage Docker for a variety of purposes. This final article in the series looks at Docker Compose, which is a tool you can use to create multi-container applications with just one command. Up your container skills here.

Outdated IT certifications: A snapshot: Secretly behind on your certifications? Wondering if you even still need them? (Cumulus Linux doesn’t require any certification, after all.) Turns out you’re not alone. Get the stats here and then sign up for a Cumulus training camp to update your open networking skills.

The post Cumulus content round up appeared first on Cumulus Networks Blog.

14 July, 2017 10:32PM by Kelsey Havens

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: Ubuntu Server Development Summary – 14 Jul 2017

Hello Ubuntu Server!

The purpose of this communication is to provide a status update and highlights for any interesting subjects from the Ubuntu Server Team. If you would like to reach the server team, you can find us at the #ubuntu-server channel on Freenode. Alternatively, you can sign up and use the Ubuntu Server Team mailing list.

cloud-init and curtin

cloud-init

  • Updated integration test pylxd version to 2.2.4
  • Investigate Python 3.6 test failures (LP: #1703697)
  • Fixed integration test due to apt warning (LP: #1702717)
  • Fix GCE unit test from leaking (LP: #1703935)

curtin

  • Fix detection of RHEL and CentOS
  • Correctly have yum use proxy if set
  • Enable CentOS 6 under vmtest

Git Ubuntu

  • Enabled ‘git ubuntu submit’ (LP: #1696775) to propose a merge request for review automatically.

Bug Work and Triage

IRC Meeting

Ubuntu Server Packages

Below is a summary of uploads to the development and supported releases. Current status of the Debian to Ubuntu merges is tracked on the Merge-o-Matic page. For a full list of recent merges with change logs please see the Ubuntu Server report.

Uploads to the Development Release (Artful)

billiard, 3.5.0.2-1, None
libapache2-mod-auth-pgsql, 2.0.3-6.1ubuntu1, costamagnagianfranco
libvirt, 3.5.0-1ubuntu1, paelzer
libvirt, 2.5.0-3ubuntu11, corey.bryant
libyaml, 0.1.7-2ubuntu3, costamagnagianfranco
nginx, 1.12.0-1ubuntu1, teward
ocfs2-tools, 1.8.5-1ubuntu1, nacc
python-django, 1:1.11.3-1ubuntu1, vorlon
python-tornado, 4.5.1-2.1~build1, costamagnagianfranco
ssh-import-id, 5.7-0ubuntu1, kirkland
Total: 10

Uploads to Supported Releases (Trusty, Xenial, Yakkety, Zesty)

freeipmi, yakkety, 1.4.11-1.1ubuntu4~0.16.10, dannf
freeipmi, zesty, 1.4.11-1.1ubuntu4~0.17.04, dannf
freeipmi, xenial, 1.4.11-1.1ubuntu4~0.16.04, dannf
golang-1.6, xenial, 1.6.2-0ubuntu5~16.04.3, mwhudson
iscsitarget, trusty, 1.4.20.3+svn499-0ubuntu2.3, smb
iscsitarget, xenial, 1.4.20.3+svn502-2ubuntu4.3, smb
libseccomp, trusty, 2.1.1-1ubuntu1~trusty4, mvo
libvirt, xenial, 1.3.1-1ubuntu10.11, corey.bryant
libvirt, zesty, 2.5.0-3ubuntu5.3, corey.bryant
libvirt, zesty, 2.5.0-3ubuntu5.2, paelzer
lxcfs, zesty, 2.0.7-0ubuntu1~17.04.2, serge-hallyn
lxcfs, yakkety, 2.0.7-0ubuntu1~16.10.2, serge-hallyn
lxcfs, xenial, 2.0.7-0ubuntu1~16.04.2, serge-hallyn
maas, xenial, 2.2.0+bzr6054-0ubuntu2~16.04.1, andreserl
maas, yakkety, 2.2.0+bzr6054-0ubuntu2~16.10.1, andreserl
multipath-tools, xenial, 0.5.0+git1.656f8865-5ubuntu2.5, cyphermox
nagios-images, zesty, 0.9.1ubuntu0.1, nacc
nginx, zesty, 1.10.3-1ubuntu3.1, sbeattie
nginx, yakkety, 1.10.1-0ubuntu1.3, sbeattie
nginx, xenial, 1.10.3-0ubuntu0.16.04.2, sbeattie
nginx, trusty, 1.4.6-1ubuntu3.8, sbeattie
ntp, zesty, 1:4.2.8p9+dfsg-2ubuntu1.2, paelzer
ntp, yakkety, 1:4.2.8p8+dfsg-1ubuntu2.2, paelzer
ntp, xenial, 1:4.2.8p4+dfsg-3ubuntu5.6, paelzer
ntp, trusty, 1:4.2.6.p5+dfsg-3ubuntu2.14.04.12, racb
php-defaults, xenial, 35ubuntu6.1, nacc
pptpd, xenial, 1.4.0-7ubuntu0.2, stgraber
qemu, zesty, 1:2.8+dfsg-3ubuntu2.3, paelzer
walinuxagent, trusty, 2.2.14-0ubuntu1~14.04.1, sil2100
walinuxagent, xenial, 2.2.14-0ubuntu1~16.04.1, sil2100
walinuxagent, yakkety, 2.2.14-0ubuntu1~16.10.1, sil2100
walinuxagent, zesty, 2.2.14-0ubuntu1~17.04.1, sil2100
Total: 32

Contact the Ubuntu Server team

14 July, 2017 07:58PM

Ubuntu Insights: Ubuntu Desktop Weekly Update: July 14, 2017

GNOME

GDM has now replaced LightDM. We’re working on the transition between display managers to make sure that users are seamlessly transitioned to the new stack. We’re doing regular automated upgrade tests to make sure everything keeps working, but we’re keen to get your bug reports.

We’ve spent time cleaning up the desktop seeds and demoted 70+ packages. This has freed up a little space on the ISO and makes things generally easier to manage.

Good news: transparent terminals under Wayland now work properly, thanks to a patch from Owen Taylor at Fedora. SRUs for previous releases are underway.

Snaps

We’ve been packaging more GNOME apps as Snaps using the gnome-3-24 platform Snap. By utilising the content interface in Snaps, we can share the common libraries between GNOME apps which means the apps themselves are smaller and the maintenance of the core libraries can happen in one place and be shared by all the Snaps using it. We’ll be publishing a how-to guide and some demos next week.
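As a hedged sketch of how that sharing works (the snap names and target path below are assumptions for illustration, not the exact Ubuntu packaging), an app snap consumes a platform snap through the content interface roughly like this in its snapcraft.yaml:

```yaml
# Hypothetical consumer-side declaration; names are illustrative.
plugs:
  gnome-3-24-platform:
    interface: content
    target: $SNAP/gnome-platform
    default-provider: gnome-3-24
```

At run time the provider snap's files appear under the target path inside the app snap, so the shared libraries ship once rather than in every app.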

The Libre Office 5.3.4 snap has been promoted to the stable channel. Thanks for the feedback and testing.

Video & Audio

We’ve proposed an upstream fix for gstreamer-vaapi to work towards accelerated video playback.

We’ve also narrowed down a graphical corruption issue in Totem down to a bug in Clutter and we’re working on a fix.

Daniel’s fun fact for the week: Modern Atom chips (Cherry Trail, Apollo Lake) and cheap notebook chips (Braswell) can play 4K H.265 without breaking a sweat. Even on a 2-watt CPU. Unfortunately they usually come with low quality screens and never HDMI 2.0.

We’ve landed some important fixes to audio in Artful this week, and users of Bluetooth and USB speakers should see a significant improvement in usability – for example, switching to the device automatically on connection and preferring the high-quality A2DP Bluetooth profile over the low-quality HSP/HFP one. There is an important caveat/bug though: because of the way GDM and PulseAudio interact, you can’t use a Bluetooth audio device with a screen reader at the greeter. Once you’re logged in, everything should work again, and for users who don’t need a screen reader, the A2DP profile is now available once you’re logged in. We’re working on a proper fix for this with upstreams. If you’re using a screen reader at the greeter, I’d like to hear from you.

Updates

Ubuntu 16.10 Yakkety Yak reaches end-of-life at the end of July.

14 July, 2017 07:48PM

July 13, 2017

Jono Bacon: Consolidating the Linux Desktop App Story: An Idea

When I joined Canonical in 2006, the Linux desktop world operated in a very upstream way. All distributions used the Linux kernel, all used X, and the majority shipped either GNOME, KDE, or both.

The following years mixed things up a little. As various companies pushed for consumer-grade Linux-based platforms (e.g. Ubuntu, Fedora, Elementary, Android etc), the components in a typical Linux platform diversified. Unity, Mir, Wayland, Cinnamon, GNOME Shell, Pantheon, Plasma, Flatpak, Snappy, and others entered the fray. This was a period of innovation, but also endless levels of consternation: people bickering left, right, and center, about which of these components were the best choices.

This is normal in technology, both the innovation and the flapping of feathers in blog posts and forums. As is also normal, when the dust settled a natural set of norms started to take shape.

Today, I believe we face an opportunity to consolidate around some key components, not just to go faster, but to also avoid the mistakes of the past.

App Stores are Hard

Historically, one of the difficulties with shipping a Linux desktop was differentiation.

I remember this vividly in my days at Canonical. People always praised Ubuntu for two main reasons: (1) you could get the exciting new technology in Ubuntu first, and (2) shit just worked.

While the latter was and is always key, the former was always going to have a short shelf life. While enthusiasts are willing to upgrade their desktops every six months, businesses and non-nerds are not, so Ubuntu needed to have a way to differentiate.

The result of course was Unity, Scopes, and the Ubuntu Software Center (and associated developer program). Here’s the thing though: building an app store is relatively simple, but building the ecosystem which makes developers want to get their applications in that store is really hard.

Pictured: An app store that is almost finished.

Most app developers and ISVs don’t care about your product, they care about the size of the market you can expose their product to. They also care about a logical revenue model and simplicity in delivering their app on your store: they want to make money without jumping through hoops.

Building all of this requires huge amounts of work, including engineering, developer engagement, on-boarding, and business development. We took a pretty good swing at it in Ubuntu and it was hard, and Microsoft poured millions of dollars into it for their phone and even that didn’t work.

The moral of this story is that differentiation is important, but we have to be realistic in what it takes to differentiate at this level. I think if we want the Linux desktop to grow, we have to strike the right balance between differentiation (giving people a reason to use your product) and consistency (not re-inventing the wheel).

Now, critics will say that they knew this all along and that Ubuntu should never have focused on Unity, Scopes, etc. I don’t believe it is as clear-cut as those critics might think: few Linux platforms (if any?) had taken a serious whack at building a consumer-grade app and developer experience. We tried, it was not successful, and instead of digging up the past I would rather ensure we can inform the future.

The good news is that I think we have a better opportunity for this than ever before.

Building a Standard Linux Desktop Core

What I want to see is that the various distributions put at the core of their platform a central app repository that is based on Flatpak, complete with the ecosystem pieces (e.g. an interface for developers to upload their apps, scripts for scanning packages for security issues, tools to edit app store pages, a payments mechanism to support the purchasing of apps etc).

All distributions would then use this platform instead of trying to reinvent the wheel, but they could customize their own app store experience and filter apps in different ways. For example, a GNOME-based distribution may only want to pull in GTK-based apps, another distro may only want to support free software apps, another distro may only want apps written in a certain language. This way, no-one is forced into the same policy about what apps they ship: the shared app platform is a big bucket that you can pull the right pieces from.
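As a toy sketch of that filtering idea (the catalogue format and every field in it are entirely invented for illustration), each distro would pull from one shared catalogue and apply its own policy:

```shell
# Invented shared catalogue; a real one would live on the central platform.
cat > catalogue.json <<'EOF'
[{"name": "gedit", "toolkit": "gtk", "license": "free"},
 {"name": "kate",  "toolkit": "qt",  "license": "free"},
 {"name": "slack", "toolkit": "gtk", "license": "non-free"}]
EOF

# A free-software-only distro filters the same catalogue differently than
# a GTK-only distro would -- the bucket is shared, the policy is not.
python3 -c '
import json
apps = json.load(open("catalogue.json"))
print([a["name"] for a in apps if a["license"] == "free"])
'
```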

This would have a number of benefits:

  • We consolidate resources around a central platform.
  • From my experience, app developers and ISVs are freaked out about the Linux world due to all the different platforms. This would provide a singular way of addressing Linux as a platform.
  • We provide a single set of usage data to app developers and ISVs (instead of an individual distro’s stats for downloads, we can show all distros that use the system for download stats). This is an important marketing benefit.
  • Better security: updates can be delivered to multiple different distributions.

Now, of course, this will require some collaboration and there will be some elephants in the room to figure out.

Yep, it is the elephant in the room. Bad dum tish.

One major elephant will be whether this platform supports non-free software. To be completely blunt, unless we support non-free apps (e.g. Slack, Steam, Photoshop, Spotify etc), it will never break into the wider consumer market. People judge their platforms based upon whether they can use the things they like and irrespective of the rights and wrongs in the world, most people depend on or want non-free apps. Of course, I wish we could have a free software and open source technology world like the rest of you, but I think we need to be realistic.

This wouldn’t matter though: distros with a focus on free software can merely filter only the apps that are free software for their users. For another distro that is open to non-free apps, they can also benefit from the same platform.

This approach will offer huge value for companies investing in the Linux desktop too: reduced engineering costs (and expanded innovation), differentiation in how you present and offer apps, and the benefit of likely more app devs and ISVs wanting to ship apps (thus making the platform more valuable).

A Good Start

The good news is that today I feel we have a bunch of the key pieces in place to support this kind of initiative, including:

  • GNOME Software – a simple and powerful store for browsing and installing software.
  • Flatpak – Flatpak is a simple and efficient packaging format for delivering applications (I am recommending Flatpak instead of Snappy as Snappy seems to be more focused on the cloud and server side of things these days, and Flatpak isn’t well suited to cloud/server).
  • Wayland – Wayland is a modern display server.

If we took these pieces, brought them under the banner of something such as FreeDesktop, and built support from the various distros (e.g. Ubuntu, Fedora, Endless, Debian, Elementary etc), I think it would be a phenomenally valuable initiative and would really optimize the success of the Linux desktop.

I would love to hear your thoughts on this, share them in the comments. Good idea? Bad idea? Somewhere in-between?

UPDATE: It seems I inadvertently left the impression in this post that I was not supporting Snappy as a potential component here. Please see this post for a clarification.

The post Consolidating the Linux Desktop App Story: An Idea appeared first on Jono Bacon.

13 July, 2017 07:24PM

Ubuntu Insights: Ubuntu Foundations Development Summary: July 13, 2017

This newsletter is to provide a status update from the Ubuntu Foundations Team. There will also be highlights provided for any interesting subjects the team may be working on.

If you would like to reach the Foundations team, you can find us at the #ubuntu-devel channel on freenode.

Highlights

  • Ubuntu is published in the Windows Store and is installable on Windows 10 insiders preview builds. It is currently phased and will be generally available to insiders soon.
  • Deprecation of PV images for Amazon EC2 in Artful – 17.10
  • The Skype Linux client previously available in the partner archive is EOL and has been removed from partner; see the upstream website for information: https://www.skype.com/en/download-skype/skype-for-linux/ (LP: #1701746)
  • 16.04 images are now available on Rackspace OnMetal

The State of the Archive

  • OCaml 4.04 transition in progress
  • The initial ghc migration for the cycle has completed, another round is now in progress

Upcoming Ubuntu Dates

  • 16.10 EoL this month!
  • 16.04.3 point release is scheduled for August 3, 2017

Weekly Meeting

IRC Log: http://ubottu.com/meetingology/logs/ubuntu-meeting/2017/ubuntu-meeting.2017-07-06-15.04.moin.txt

13 July, 2017 06:14PM

Dustin Kirkland: Back on The Changelog, talking Ubuntu 12.04 ESM, Ubuntu on Windows, and Snaps!


I met up with the excellent hosts of the The Changelog podcast at OSCON in Austin a few weeks back, and joined them for a short segment.

That podcast recording is now live!  Enjoy!


The Changelog 256: Ubuntu Snaps and Bash on Windows Server with Dustin Kirkland
Listen on Changelog.com



Cheers,
Dustin

13 July, 2017 04:52PM by Dustin Kirkland (noreply@blogger.com)

James Page: Ubuntu OpenStack Dev Summary – 13th July 2017

Welcome to the fourth Ubuntu OpenStack development summary!

This summary is intended to be a regular communication of activities and plans happening in and around Ubuntu OpenStack, covering but not limited to the distribution and deployment of OpenStack on Ubuntu.

If there is something that you would like to see covered in future summaries, or you have general feedback on content please feel free to reach out to me (jamespage on Freenode IRC) or any of the OpenStack Engineering team at Canonical!

OpenStack Distribution

Stable Releases

We still have a few SRUs in flight from the June SRU cadence:

Swift: swift-storage processes die if rsyslog is restarted (Kilo, Mitaka)
https://bugs.launchpad.net/ubuntu/trusty/+source/swift/+bug/1683076

Ocata Stable Point Releases
https://bugs.launchpad.net/ubuntu/+bug/1696139

Hopefully those should flush through to updates in the next week; in the meantime we’re preparing to upload fixes for:

Keystone: keystone-manage mapping_engine federation rule testing
https://bugs.launchpad.net/ubuntu/+bug/1655182

Neutron: router host binding id not updated after failover
https://bugs.launchpad.net/ubuntu/+bug/1694337

Development Release

The first Ceph Luminous RC (12.1.0) has been uploaded to Artful and will be backported to the Ubuntu Cloud Archive for Pike soon.

OpenStack Pike b3 is due towards the end of July; we’ve done some minor dependency updates to support progression towards that goal. It’s also possible to consume packages built from the tip of the upstream git repository master branches using:

sudo add-apt-repository ppa:openstack-ubuntu-testing/pike

Packages are automatically built for Artful and Xenial.

OpenStack Snaps

Refactoring to support the switch back to strict mode snaps has been completed. Corey posted last week on ‘OpenStack in a Snap’ so we’ll not cover too much in this update; have a read to get the full low down.

Work continues on snapstack (the CI test tooling for OpenStack snap validation and testing), with changes landing this week to support Class-based setup/cleanup for the base cloud and a logical step/plan method for creating tests.

The move of snapstack to a Class-based setup/cleanup approach for the base cloud enables flexibility, as the base cloud required to test a snap can easily be updated. By default this will provide a snap’s tests with a default OpenStack base cloud; however, this can now easily be manipulated to add or remove services.

The snapstack code has also been updated to use a step/plan method for creating tests. These objects provide a simple and logical process for creating tests. The developer can now define the snap being tested, and its scripts/tests, in a step object. Each base snap and its scripts/tests are also defined in individual step objects. All of these steps are then put together into a plan object, which is executed to kick off the deployment and tests.
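As a rough illustration of what such a step/plan arrangement can look like (hypothetical classes and names for illustration only, not snapstack's actual API):

```python
# Hypothetical sketch of a step/plan test arrangement (not snapstack's
# real API): each Step bundles a snap with its setup scripts and tests,
# and a Plan runs the steps in order.

class Step:
    def __init__(self, snap, scripts=(), tests=()):
        self.snap = snap
        self.scripts = list(scripts)
        self.tests = list(tests)

    def run(self, log):
        # Record the actions this step would perform.
        log.append('install {}'.format(self.snap))
        for script in self.scripts + self.tests:
            log.append('run {}'.format(script))

class Plan:
    def __init__(self, steps):
        self.steps = steps

    def __call__(self):
        log = []
        for step in self.steps:
            step.run(log)
        return log

# A plan for testing a keystone snap on top of a minimal base cloud:
plan = Plan([
    Step('rabbitmq-server'),
    Step('keystone', scripts=['keystone.sh'], tests=['test_keystone.py']),
])
```

The point of the shape is that base-cloud steps can be added or removed without touching the snap-under-test's own step.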

For more details on snapstack you can check out the snapstack code here.

Nova LXD

The refactoring of the VIF plugging codebase to provide support for Linuxbridge and Open vSwitch + the native OVS firewall driver has landed for Pike; this corrects a number of issues in the VIF plugging workflow between Neutron and Nova(-LXD) for these specific tenant networking configurations.

The nova-lxd subteam have also done some much needed catch-up on pull requests for pylxd (the underlying Python binding for LXD that nova-lxd uses); pylxd 2.2.4 is now up on pypi and includes fixes for improved forward compatibility with new LXD releases and support for passing network timeout configuration for API calls.

Work is ongoing to add support for LXD storage pools into pylxd.

OpenStack Charms

New Charms

Work has started on the new Gnocchi and GlusterFS charms; these should be up and consumable under the ‘openstack-charmers-next’ team on the charm store in the next week.

Gnocchi will support deployment with MySQL (for indexing), Ceph (for storage) and Memcached (for coordination between Gnocchi metricd workers). We’re taking the opportunity to review and refresh the telemetry support across all of the charms, ensuring that the charms are using up-to-date configuration options and are fully integrated for telemetry reporting via Ceilometer (with storage in Gnocchi). This includes adding support for the Keystone, Rados Gateway and Swift charms. We’ll also be looking at the Grafana Gnocchi integration and hopefully coming up with some re-usable sets of dashboards for OpenStack resource metric reporting.

Deployment Guide

Thanks to help from Graham Morrison in the Canonical docs team, we now have a first cut of the OpenStack Charms Deployment Guide – you can take a preview look in its temporary home until we complete the work to move it up under docs.openstack.org.

This is very much a v1, and the team intends to iterate on the documentation over time, adding coverage for things like high-availability and network space usage both in the charms and in the tools that the charms rely on (MAAS and Juju).

IRC (and meetings)

As always, you can participate in the OpenStack charm development and discussion by joining the #openstack-charms channel on Freenode IRC; we also have a weekly development meeting in #openstack-meeting-4 at either 1000 UTC (odd weeks) or 1700 UTC (even weeks) – see http://eavesdrop.openstack.org/#OpenStack_Charms for more details.

EOM


13 July, 2017 03:19PM

Ubuntu Podcast from the UK LoCo: S10E19 – Inconclusive Squalid Driving - Ubuntu Podcast

We discuss playing Tomb Raider, OEMs “making distros” is so hot right now, RED make a smartphone from the future, Skype gets an update and users hate it, Gangnam style loses its YouTube crown.

It’s Season Ten Episode Nineteen of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

13 July, 2017 02:00PM

July 12, 2017

hackergotchi for Maemo developers

Maemo developers

Because …

A QEventLoop is a heavy dependency. Not every worker thread wants to require all its consumers to have one. This renders QueuedConnection not always suitable. I get that signals and slots are a useful mechanism, also for thread-communications. But what if your worker thread has no QEventLoop yet wants to wait for a result of what another worker-thread produces?

QWaitCondition is often what you want. Don’t be afraid to use it. Also, don’t be afraid to use QFuture and QFutureWatcher.

Just be aware that the guys at Qt have not yet decided what the final API for the asynchronous world should be. The KIO guys discussed making a QJob and/or a QAbstractJob. Because QFuture is result (of T) based (and waits and blocks on it, using a condition). And a QJob (derived from what currently KJob is), isn’t or wouldn’t or shouldn’t block (such a QJob should allow for interactive continuation, for example — “overwrite this file? Y/N”). Meanwhile you want a clean API to fetch the result of any asynchronous operation. Blocked waiting for it, or not. It’s an uneasy choice for an API designer. Don’t all of us want APIs that can withstand the test of time? We do, yes.

Yeah. The world of programming is, at some level, complicated. But I’m also sure something good will come out of it. Meanwhile, form your asynchronous APIs on the principles of QFuture and/or KJob: return something that can be waited for.
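That principle translates beyond Qt: return a handle the caller can block on or subscribe to, as they choose. A small Python sketch of the idea, with concurrent.futures standing in for QFuture (the names here are illustrative, not anything from Qt):

```python
from concurrent.futures import ThreadPoolExecutor

# "Return something that can be waited for": the worker returns a future,
# and the caller decides whether to block on it or attach a callback.
executor = ThreadPoolExecutor(max_workers=1)

def produce(value):
    return value * 2  # stand-in for an expensive worker-thread computation

future = executor.submit(produce, 21)

# Caller's choice: block for the result...
blocking_result = future.result()

# ...or continue working and get notified on completion instead.
results = []
other = executor.submit(produce, 5)
other.add_done_callback(lambda f: results.append(f.result()))
executor.shutdown(wait=True)
```

Either consumer style works against the same API, which is exactly what makes a future-shaped return value durable.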

Sometimes a prediction of how it will be like is worth more than a promise. I honestly can’t predict what Thiago will approve, commit or endorse. And I shouldn’t.

 


12 July, 2017 09:41PM by Philip Van Hoof (pvanhoof@gnome.org)

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: Storage management in LXD 2.15

 


For a long time LXD has supported multiple storage drivers. Users could choose between zfs, btrfs, lvm, or plain directory storage pools but they could only ever use a single storage pool. A frequent feature request was to support not just a single storage pool but multiple storage pools. This way users would for example be able to maintain a zfs storage pool backed by an SSD to be used by very I/O intensive containers and another simple directory based storage pool for other containers. Luckily, this is now possible since LXD gained its own storage management API a few versions back.

Creating storage pools

A new LXD installation comes without any storage pool defined. If you run lxd init LXD will offer to create a storage pool for you. The storage pool created by lxd init will be the default storage pool on which containers are created.

asciicast

Creating further storage pools

Our client tool makes it really simple to create additional storage pools. In order to create and administer new storage pools you can use the lxc storage command. So if you wanted to create an additional btrfs storage pool on a block device /dev/sdb you would simply use lxc storage create my-btrfs btrfs source=/dev/sdb. But let’s take a look:

asciicast

Creating containers on the default storage pool

If you started from a fresh install of LXD and created a storage pool via lxd init LXD will use this pool as the default storage pool. That means if you’re doing a lxc launch images:ubuntu/xenial xen1 LXD will create a storage volume for the container’s root filesystem on this storage pool. In our examples we’ve been using my-first-zfs-pool as our default storage pool:

asciicast

Creating containers on a specific storage pool

But you can also tell lxc launch and lxc init to create a container on a specific storage pool by simply passing the -s argument. For example, if you wanted to create a new container on the my-btrfs storage pool you would do lxc launch images:ubuntu/xenial xen-on-my-btrfs -s my-btrfs:

asciicast

Creating custom storage volumes

If you need additional space for one of your containers, for example to store additional data, the new storage API will let you create storage volumes that can be attached to a container. This is as simple as doing lxc storage volume create my-btrfs my-custom-volume:

asciicast

Attaching custom storage volumes to containers

Of course this feature is only helpful because the storage API lets you attach those storage volumes to containers. To attach a storage volume to a container you can use lxc storage volume attach my-btrfs my-custom-volume xen1 data /opt/my/data:

asciicast

Sharing custom storage volumes between containers

By default LXD will make an attached storage volume writable by the container it is attached to. This means it will change the ownership of the storage volume to the container’s id mapping. But storage volumes can also be attached to multiple containers at the same time. This is great for sharing data among multiple containers. However, this comes with a few restrictions. In order for a storage volume to be attached to multiple containers they must all share the same id mapping. Let’s create an additional container xen-isolated that has an isolated id mapping. This means its id mapping will be unique to this LXD instance such that no other container has the same id mapping. Attaching the same storage volume my-custom-volume to this container will now fail:

asciicast

But let’s make xen-isolated have the same mapping as xen1 and let’s also rename it to xen2 to reflect that change. Now we can attach my-custom-volume to both xen1 and xen2 without a problem:

asciicast

Summary

The storage API is a very powerful addition to LXD. It provides a set of essential features that are helpful in dealing with a variety of problems when using containers at scale. This short introduction hopefully gave you an impression of what you can do with it. There will be more to come in the future.


This blog was originally featured at Brauner's Blog

12 July, 2017 06:53PM

Brian Murray: Using the Ubuntu Error Tracker for SRUs

The Ubuntu Error Tracker is really good at presenting information about the versions of packages affected by a crash. Additionally, it has current information about crashes in stable releases of Ubuntu as well as in the development release. Consequently, it can be a great resource for verifying that a crash is fixed in the development release or in stable releases.

As a member of the Stable Release Updates team I am excited to see an SRU which includes a bug report either generated from a crash in the Ubuntu Error Tracker (identifiable by either the bug description or errors.ubuntu.com being in the list of subscribers) or with a link to a crash in the Ubuntu Error Tracker.

One example of this is a systemd-resolved crash: while the bug report was not created by the bug bridge, it does contain a link to a bucket in the Error Tracker. Using the bucket in the Error Tracker we were able to confirm that the new version of the package did not appear there and consequently was no longer experiencing the same crash.

Two crashes about libgweather, bug 1688208 and bug 1695567, are less perfect examples because libgweather ended up causing gnome-shell to crash and the Error Tracker buckets for these crashes only show the version of gnome-shell. But fortunately apport gathers information about the package’s (gnome-shell in this case) dependencies and as the maintainer of the Error Tracker I can query its database. Using that ability I was able to confirm, by querying individual instances in the bucket, that the new version of libgweather did in fact fix both crashes.

So whether you are fixing crashes in the development release of Ubuntu or in a stable release, keep in mind that it’s possible to use the Error Tracker to verify that your fix works.

12 July, 2017 05:48PM

Ubuntu Insights: Top 10 snaps in June

From fast and secure file sharing to Kubernetes charts management, from email clients to classic text editors for your Pi, our selection of top snaps for June has something for everyone.

If the term “snaps” doesn’t ring a bell, they are a new way for developers to package their apps, bringing many advantages over traditional package formats such as .deb, .rpm, and others. They are secure, isolated and allow apps to be rolled back should an issue occur. They also aim to work on any distribution or platform, from IoT devices to servers, desktops and mobile devices. Snaps really are the future of Linux application packaging and we’re excited to showcase some great examples of these each month.

Our June selection

1. Wekan

Lauri Ojansivu

Wekan is a self-hosted collaborative Kanban board. Whether you’re maintaining a personal todo list, planning your holidays with some friends, or working in a team on your next revolutionary idea, Kanban boards are an unbeatable tool to keep things organised. After installing Wekan, open a web browser at http://localhost:8080 to create your admin account.

2. Wormhole

Snapcrafters

Wormhole is a command-line tool which makes it possible to get arbitrary-sized files and directories (or short pieces of text) from one computer to another quickly and securely, using password-authenticated key exchange. The two endpoints are identified by using identical “wormhole codes”: in general, the sending machine generates and displays the code, which must then be typed into the receiving machine.
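The codes themselves are short and human-typable: a channel number plus a couple of dictionary words. A toy Python generator for strings of that shape — illustrative only, with a made-up word list; the real tool uses the PGP word list and gets its security from the PAKE handshake, not from the code's length:

```python
import secrets

# Toy generator for wormhole-style codes: "<channel>-<word>-<word>".
# Illustrative only -- magic-wormhole's actual codes use the PGP word
# list, and security comes from the PAKE protocol, not the code itself.
WORDS = ['absurd', 'breakaway', 'crossover', 'drumbeat', 'endorse',
         'freedom', 'glossary', 'hamlet', 'indulge', 'klaxon']

def make_code(channel_max=16, num_words=2):
    channel = secrets.randbelow(channel_max) + 1
    words = [secrets.choice(WORDS) for _ in range(num_words)]
    return '-'.join([str(channel)] + words)

code = make_code()
```

Because both sides type the same short code, a human can relay it over the phone or a chat message, and the protocol does the heavy cryptographic lifting.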

3. Helm

Joe Borg

If you are a K8s deployer, there is no need to explain Helm, the Kubernetes charts manager that allows you to find and use popular software packaged as Kubernetes charts, share your own charts and create reproducible builds of your apps.

4. Pinano

V Bota

The GNU nano text editor, now available as a snap for your armhf boards. When you are using a very lightweight OS (such as Ubuntu Core) on a board, you’ll be glad to have this familiar editor at hand.

  • Install Pinano from the command-line (armhf only):

    sudo snap install pinano

5. Hiri

Milorad Pop-Tosic

Hiri is a Linux alternative to Microsoft Outlook, compatible with Office365 and Exchange, that syncs emails, calendars, folders and everything else. It features assignable tasks, a dashboard to make you aware of how frequently you check your email, reminders and due dates management.

6. Plex Media Server

David Fialho

Plex organizes all of your personal media so you can easily access and enjoy it through a web UI over the network. Install the Media Server, open a web browser at http://localhost:32400/web to create an admin account, and it will start indexing and sorting your media files for easy access.

7. Slack-Term

Alan Pope ㋛

Slack-Term is a terminal client for Slack. You should certainly have a look at the README of the project, as you will need to set up a config file with your Slack credentials.

8. Gitter (client)

Snapcrafters

Gitter is a developer-focused chat and networking platform that helps to manage, grow and connect communities through messaging, content and discovery. The most notable feature of Gitter is its ability to create rooms based on GitHub and GitLab projects, integrating notifications for issues, PRs, etc. This snap is the desktop client that you can use to connect to any Gitter group.

9. Mattermost (client)

Snapcrafters

Mattermost is workplace messaging for web, PCs and phones. MIT-licensed with hundreds of contributors, localised in 11 languages, secure, configurable, and scalable from teams to enterprise. This snap is the desktop client that you can use to connect to any Mattermost server.

10. q

Nathan Handler

q is a command line tool that allows direct execution of SQL-like queries on CSVs/TSVs (and any other tabular text files). It treats ordinary files as database tables, and supports all SQL constructs, such as WHERE, GROUP BY, JOINs etc. It supports automatic column name and column type detection, and provides full support for multiple encodings.

This last one feels truly magical if you are handling heavy spreadsheets and need more horse-power behind your searches!
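Conceptually, you can reproduce the core trick with nothing but the Python standard library: load the CSV into an in-memory SQLite table and run ordinary SQL against it. This is a sketch of the idea, not q's actual implementation (q also auto-detects delimiters, column types and encodings):

```python
import csv
import io
import sqlite3

# Sketch of the idea behind q: load CSV rows into an in-memory SQLite
# table, then run ordinary SQL against it. Sample data is made up.
DATA = """name,department,salary
ada,engineering,120
grace,engineering,130
edsger,research,110
"""

def query_csv(csv_text, sql):
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, body = rows[0], rows[1:]
    con = sqlite3.connect(':memory:')
    con.execute('CREATE TABLE t ({})'.format(', '.join(header)))
    con.executemany(
        'INSERT INTO t VALUES ({})'.format(', '.join('?' * len(header))),
        body)
    return con.execute(sql).fetchall()

result = query_csv(
    DATA, "SELECT department, COUNT(*) FROM t GROUP BY department")
```

Once the file is a table, everything SQL offers — WHERE, GROUP BY, JOINs — comes for free.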

Three more snaps for DevOps

This month, it’s not a top 10, but a top 13, because three new exciting snaps for DevOps have landed in the store:

11. AWS CLI

Amazon Web Services

  • Install from the command-line:

    sudo snap install aws-cli --classic

12. Heroku

Jeff Dickey

  • Install from the command-line:

    sudo snap install heroku

13. Azure CLI

Scott Moser

  • Install from the command-line:

    sudo snap install azure-cli --classic

12 July, 2017 04:12PM by Ubuntu Insights (david.calle@canonical.com)

Ubuntu Insights: Robot development made easy with Husarion CORE2-ROS running Ubuntu

This is the first in a series of two posts by guest blogger; Dominik Nowak, CEO at Husarion.

We’ve seen many breakthroughs happening in the IT industry over the last decade. Arguably the most meaningful one on the consumer side was the adoption of smartphones and mobile development. What’s the next big thing, now that the smartphones are so common and, let’s face it, slightly boring? We say: robotics.

Business knows that well with many manufacturing lines running solely by robots. The consumer and service side, though, have yet to see a massive breakthrough. We believe it’s all a matter of accessibility and lowering the barrier to entry for developers. There just have to be good, simple tools to quickly prototype and develop robots. To test new ideas and empower engineers, so they can solve many of the issues humanity still faces. Issues that are trickier to tackle than a tap in an app.

Building robots is a challenging task that the Husarion team is trying to make easier. Husarion is a robotics company working on a rapid development platform for robots. The company’s products are the CORE2 robotic controller and a cloud platform to manage all CORE2-based robots. CORE2 is the second generation of Husarion’s robotics controller and it’s now available here.

CORE2 combines a real-time microcontroller board and a single board computer running Ubuntu. Ubuntu is the most popular Linux distribution not only for desktops, but also for embedded hardware in IoT & robotics applications.

 

The CORE2 controller comes in two configurations. The first one, with an ESP32 Wi-Fi module, is dedicated to robotic applications that need low power consumption and real-time, secure remote control. The second one, called CORE2-ROS, basically integrates two boards in one:
– A board with real-time microcontroller running firmware using the Real-Time Operating System (RTOS) and integrating interfaces for motors, encoders and sensors
– A single board computer (SBC) running Linux with ROS packages (Robot Operating System) and other software tools.

The ‘real-time’ board does the low-level job. It contains the efficient STM32F4 series microcontroller, which is great for driving motors, reading encoders, communicating with sensors and controlling the whole mechatronic or robotic system. In most applications, the CPU load does not exceed a few percent, and real-time operation is guaranteed by a dedicated programming framework based on RTOS. We have also assured compatibility with Arduino libraries. The majority of the tasks are processed in the microcontroller peripherals, such as timers, communication interfaces, ADCs etc., with strong support from interrupts and DMA channels. In short, it is not a job for a single-board computer, which has other tasks to handle.

On the other hand, it’s almost obvious that the modern and advanced robotic applications can no longer be based only on microcontrollers, for a few reasons:
– Autonomous robots need a lot of processing power to perform navigation, image and sound recognition, moving in a swarm etc.,
– Writing advanced software efficiently requires standardisation – SBCs are becoming more and more popular in industry, and the software written for them increasingly resembles software written for PCs
– SBCs are getting less expensive every year
– Husarion believes that combining these two worlds is very beneficial in robotics.

The CORE2-ROS controller is available in two configurations, with either a Raspberry Pi 3 or an ASUS Tinker Board. CORE2-ROS runs Ubuntu, Husarion dev & management tools and ROS packages.

In the next post, find out why Husarion decided to use Ubuntu.

12 July, 2017 02:33PM

Sean Davis: Development Release: Exo 0.11.4

After quite some time, the first release candidate for the Exo 0.12.x series is ready for some serious testing!

What’s New in Exo 0.11.4?

This release completes the GTK+ 3 port and can now be used for GTK+ 2 or 3 Xfce application development.

New Features

Bug Fixes

  • Removed --disable-debug flag from make distcheck (Xfce #11556)

Icons

  • Replaced non-standard gnome-* icons
  • Replaced non-existent “missing-image” icon

Deprecations

  • Dropped gdk_window_process_updates for GTK+ 3.22
  • Replaced gdk_pixbuf_new_from_inline usage
  • Replaced gdk_screen_* usage
  • Replaced gtk_style_context_get_background_color usage
  • Removed warnings for gtk_dialog_get_action_area and GioScheduler

Translation Updates

Arabic, Catalan, Chinese (China), Danish, Dutch, French, German, Hebrew, Indonesian, Korean, Lithuanian, Portuguese (Brazil), Russian, Spanish, Swedish

Downloads

The latest version of Exo can always be downloaded from the Xfce archives. Grab version 0.11.4 from the below link.

http://archive.xfce.org/src/xfce/exo/0.11/exo-0.11.4.tar.bz2

  • SHA-256: 54fc6d26eff4ca0525aed8484af822ac561cd26adad4a2a13a282b2d9f349d84
  • SHA-1: 49e0fdf6899eea7aa1050055c7fe2dcddd0d1d7a
  • MD5: 7ad88a19ccb4599fd46b53b04325552c

12 July, 2017 10:34AM

Ubuntu Insights: TechnoSec and Ubuntu Core help DE.OL transition to a smart factory

The industrial internet of things market (IIoT) is estimated to reach US$ 195.47bn by 2022 according to Markets and Markets. A number of companies in this sector recognise the need to modernise their operations – DE.OL, an Italian company who design and manufacture hydraulic cylinders are one of these. Working with TechnoSec, an Italian IIoT start up specialising in M2M technologies, and Ubuntu, DE.OL were able to transition to a smart factory.

Highlights:

  • DE.OL transformed its predictive maintenance approach to optimise efficiency but without having to replace legacy machinery.
  • In one instance, DE.OL averted an oil pump breakdown which would have cost the company €12,000 in downtime and repair bills had they not adopted TechnoSec’s IIoT platform and Ubuntu Core.
  • Ubuntu Core decreases time to deployment for TechnoSec and opens them up to a community of developers to share knowledge and fix bugs.

Learn more by downloading the case study using the form below.

12 July, 2017 10:25AM

Didier Roche: Ubuntu Make as a classic snap: getting a 16.04 snap

This is a suite of blog posts explaining how we snapped Ubuntu Make, which is a complex software case study with deep interactions with the system. For more background on this, please refer to our previous blog post giving a quick introduction to the topic.

Creating the snap skeleton

The snap skeleton was pretty easy to create. Galileo from our community took a first stab at it. We can notice multiple things:

12 July, 2017 09:27AM

Sebastian Kügler: Plasma at Akademy


As every year, I will be going to KDE’s yearly world summit, Akademy. This year, it will take place in Almería, Spain. In our presentation “Plasma: State of the Union“, Marco and I will talk about what’s going on in your favorite workspace, what we’ve been working on, what cool features are coming to you, and what our plans for the future are. Topics we will cover range from Wayland, web browser integration, UI design and mobile to release and support planning. Our presentation will take place on Saturday at 11:05, right after the keynote held by Robert Kaye. If you can’t make it to Spain next week, there will likely be video recordings, which I will post here as soon as they’re widely available.

¡Hasta luego!

12 July, 2017 07:18AM

Leo Arias: An errbot snap for simplified chatops

I'm a Quality Assurance Engineer. A big part of my job is to find problems, then make sure that they are fixed and automated so they don't regress. If I do my job well, then our process will identify new and potential problems early without manual intervention from anybody in the team. It's like trying to automate myself, everyday, until I'm no longer needed and have to jump to another project.

However, as we work in the project, it's unavoidable that many small manual tasks accumulate on my hands. This happens because I set up the continuous integration infrastructure, so I'm the one who knows more about it and have easier access, or because I'm the one who requested access to the build farm so I'm the one with the password, or because I configured the staging environment and I'm the only one who knows the details. This is a great way to achieve job security, but it doesn't lead us to higher quality. It's a job half done, and it's terribly boring to be a bottleneck and a silo of information about testing and the release process. All of these tasks should be shared by the whole team, as with all the other tasks in the project.

There are two problems. First, most of these tasks involve delicate credentials that shouldn't be freely shared with everybody. Second, even if the task itself is simple and quick to execute, it's not very simple to document how to set up the environment to be able to execute them, nor how to make sure that the right task is executed at the right moment.

Chatops is how I like to solve all of this. The idea is that every task that requires manual intervention is implemented in a script that can be executed by a bot. This bot joins the communication channel where the entire team is present, and it will execute the tasks and report about their results as a response to external events that happen somewhere in the project infrastructure, or as a response to the direct request of a team member in the channel. The credentials are kept safe, they only have to be shared with the bot and the permissions can be handled with access control lists or membership to the channel. And the operative knowledge is shared with all the team, because they are all listening in the same channel with the bot. This means that anybody can execute the tasks, and the bot assists them to make it simple.
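Stripped to its essence, the pattern is a dispatcher that maps chat commands to task functions and checks the caller against an access list before running anything. A toy sketch of that idea — command names and the ACL scheme are made up for illustration; a real bot framework handles the chat protocol and credential storage for you:

```python
# Toy chatops dispatcher: commands map to functions, and an access list
# decides who may trigger each one. Illustrative names only.
class ChatOpsBot:
    def __init__(self):
        self.commands = {}  # name -> (function, allowed users or None)

    def command(self, name, allowed=None):
        def register(func):
            self.commands[name] = (func, allowed)
            return func
        return register

    def handle(self, user, line):
        name, _, args = line.partition(' ')
        if name not in self.commands:
            return 'unknown command: {}'.format(name)
        func, allowed = self.commands[name]
        if allowed is not None and user not in allowed:
            return '{}: permission denied'.format(user)
        return func(args)

bot = ChatOpsBot()

@bot.command('!deploy', allowed={'elopio'})
def deploy(args):
    return 'deploying {}...'.format(args or 'master')

@bot.command('!status')
def status(args):
    return 'all green'
```

Because every command runs through the same gate, the credentials live only with the bot, while everybody in the channel sees what was run and what came back.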

In snapcraft we started writing our bot not so long ago. It's called snappy-m-o (Microbe Obliterator), and it's written in python with errbot. We, of course, packaged it as a snap so we have automated delivery every time we change its source code, and the bot is also autoupdated in the server, so in the chat we are always interacting with the latest and greatest.

Let me show you how we started it, in case you want to get your own. But let's call this one Baymax, and let's make a virtual environment with errbot, to experiment.

drawing of the Baymax bot

$ mkdir -p ~/workspace/baymax
$ cd ~/workspace/baymax
$ sudo apt install python3-venv
$ python3 -m venv .venv
$ source .venv/bin/activate
$ pip install errbot
$ errbot --init

The last command will initialize this bot with a super simple plugin, and will configure it to work in text mode. This means that the bot won't be listening on any channel, you can just interact with it through the command line (the ops, without the chat). Let's try it:

$ errbot
[...]
>>> !help
All commands
[...]
!tryme - Execute to check if Errbot responds to command.
[...]
>>> !tryme
It works !
>>> !shutdown --confirm

tryme is the command provided by the example plugin that errbot --init created. Take a look at the file plugins/err-example/example.py; errbot is just lovely. To define your own plugin you just need a class that inherits from errbot.BotPlugin, with the commands as methods decorated with @errbot.botcmd. I won't dig into how to write plugins, because errbot has amazing documentation on plugin development. You can also read the plugins we have in our snappy-m-o: one for triggering autopkgtests on GitHub pull requests, and another for subscribing to the results of the pull request tests.
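To get a feel for what that decorator pattern is doing, here is a minimal, self-contained sketch in plain Python of the same mechanism: methods marked as commands are collected into a table, and incoming !command messages are dispatched to them. This mimics errbot's shape but is not errbot's real implementation; all names below are illustrative.

```python
def botcmd(func):
    """Mark a method as a chat command (mimics errbot's @botcmd)."""
    func._is_command = True
    return func

class BotPlugin:
    """Base class that collects all decorated methods into a command table."""
    def commands(self):
        return {name: getattr(self, name)
                for name in dir(self)
                if getattr(getattr(self, name), '_is_command', False)}

class Example(BotPlugin):
    @botcmd
    def tryme(self, msg, args):
        return 'It works !'

def dispatch(plugin, message):
    """Route a '!command args' chat message to the matching method."""
    if not message.startswith('!'):
        return None  # not addressed to the bot
    name, _, args = message[1:].partition(' ')
    command = plugin.commands().get(name)
    return command(message, args) if command else 'Command not found'

print(dispatch(Example(), '!tryme'))  # prints: It works !
```

In the real errbot, the base class additionally handles configuration, storage, and the chat backend, which is why a plugin only needs to declare its commands.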

Let's change the config of Baymax to put it in an IRC chat:

$ pip install irc

And in the config.py file, set the following values:

BACKEND = 'IRC'
BOT_IDENTITY = {
    'nickname' : 'baymax-elopio',  # Nicknames need to be unique, so append your own.
                                   # Remember to replace 'elopio' with your nick everywhere
                                   # from now on.
    'server' : 'irc.freenode.net',
}
CHATROOM_PRESENCE = ('#snappy',)

Run it again with the errbot command, but this time join the #snappy channel on irc.freenode.net and type !tryme in there. It works! :)

screenshot of errbot on IRC

So, this is very simple, but let's package it now to start with the good practice of continuous delivery before it gets more complicated. As usual, it just requires a snapcraft.yaml file with all the packaging info and metadata:

name: baymax-elopio
version: '0.1-dev'
summary: A test bot with errbot.
description: Chat ops bot for my team.
grade: stable
confinement: strict

apps:
  baymax-elopio:
    command: env LC_ALL=C.UTF-8 errbot -c $SNAP/config.py
    plugs: [home, network, network-bind]

parts:
  errbot:
    plugin: python
    python-packages: [errbot, irc]
  baymax:
    source: .
    plugin: dump
    stage:
      - config.py
      - plugins
    after: [errbot]

And we need to change a few more values in config.py to make sure that the bot is relocatable, that we can run it in the isolated snap environment, and that we can add plugins after it has been installed:

import os

BOT_DATA_DIR = os.environ.get('SNAP_USER_DATA')
BOT_EXTRA_PLUGIN_DIR = os.path.join(os.environ.get('SNAP'), 'plugins')
BOT_LOG_FILE = BOT_DATA_DIR + '/err.log'

One final try, this time from the snap:

$ sudo apt install snapcraft
$ snapcraft
$ sudo snap install baymax*.snap --dangerous
$ baymax-elopio

And go back to IRC to check.

The last thing is to push the source code we have just written to a GitHub repo and enable continuous delivery in build.snapcraft.io. Then go to your server and install the bot with sudo snap install baymax-elopio --edge. Now, every time somebody from your team makes a change in the master repo on GitHub, the bot on your server will be automatically updated with those changes within a few hours, without any work on your side.

If you are into chatops, make sure that every time you do a manual task, you also plan for some time to turn that task into a script that can be executed by your bot. And get ready to enjoy tons and tons of free time, or just keep going through those 400 open bugs, whichever you prefer :)

12 July, 2017 04:31AM

July 11, 2017

hackergotchi for Maemo developers

Maemo developers

Meet the new Q2 2017 Maemo Community Council

Dear Maemo community, I have the great honor of introducing the new Community Council for the upcoming Q2/2017 period.

**The members of the new council are (in alphabetical order):**

  • Juiceme (Jussi Ohenoja)
  • Mosen (Timo Könnecke)
  • Sicelo (Sicelo Mhlongo)

The voting results can be seen on the [voting page]

I want to thank warmly all the members of the community who participated in this most important action of choosing a new council for us!

The new council shall meet on the #maemo-meeting IRC channel next Tuesday, 18.07, at 20:00 UTC for the formal handover with the outgoing council.

Jussi Ohenoja, On behalf of the outgoing Maemo Community Council


11 July, 2017 08:17PM by Jussi Ohenoja (juice@swagman.org)

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: MAAS Development Summary – July 3rd – 14th

Hello MAASters!

The purpose of this update is to keep our community engaged and informed about the work the team is doing. We’ll cover important announcements, work in progress for the next release of MAAS, and bug fixes in released MAAS versions.

MAAS 2.3 (current development release)

  • Completed Django 1.11 transition
      • MAAS 2.3 snap will use Django 1.11 by default.
      • Ubuntu package will use Django 1.11 in Artful+
  • Network beaconing & better network discovery
      • MAAS now listens for [unicast and multicast] beacons on UDP port 5240. Beacons are encrypted and authenticated using a key derived from the MAAS shared secret. Upon receiving certain types of beacons, MAAS will reply, confirming to the sender that an existing MAAS on the network has the same shared key. In addition, records are kept about which interface each beacon was received on, and what VLAN tag (if any) was in use on that interface. This allows MAAS to determine which interfaces observed the same beacon (and thus must be on the same fabric). This information can also be used to determine whether [what would previously have been assumed to be] a separate fabric is actually an alternate VLAN in an existing fabric.
      • The maas-rack send-beacons command is now available to test the beacon protocol. (This command is intended for testing and support, not general use.) The MAAS shared secret must be installed before the command can be used. By default, it will send multicast beacons out all possible interfaces, but it can also be used in unicast mode.
      • Note that while IPv6 support is planned, support for receiving IPv6 beacons in MAAS is not yet available. The maas-rack send-beacons command, however, is already capable of sending IPv6 beacons. (Full IPv6 support is expected to make beacons more flexible, since IPv6 multicast can be sent out on interfaces without a specific IP address assignment, and without resorting to raw sockets.)
      • Improvements to rack registration are now under development, so that users will see a more accurate representation of fabrics upon initial installation or registration of a MAAS rack controller.
  • Bug fixes
    • LP: #1701056: Show correct information for a device details page as a normal user
    • LP: #1701052: Do not show the controllers tab as a normal user
    • LP: #1683765: Fix format when devices/controllers are selected to match those of machines
    • LP: #1684216: Update button label from ‘Save selection’ to ‘Update selection’
    • LP: #1682489: Fix Cancel button on add user dialog, which caused the user to be added anyway
    • LP: #1682387: “Unassigned” should be “(Unassigned)”
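Back to the beaconing feature above: this summary doesn't show MAAS's actual wire format, but the general technique it describes (authenticating a beacon with a key derived from the shared secret, rather than with the secret itself) can be sketched in a few lines of Python. Everything here — the derivation label, the JSON body, the function names — is hypothetical illustration, not MAAS's real implementation:

```python
import hashlib
import hmac
import json

# Hypothetical value; on a real MAAS install the shared secret lives in
# /var/lib/maas/secret.
SHARED_SECRET = bytes.fromhex('deadbeefcafe0123')

def derive_beacon_key(secret):
    # Derive a beacon-specific key so the shared secret itself
    # never touches the wire.
    return hmac.new(secret, b'beacon', hashlib.sha256).digest()

def make_beacon(payload):
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(derive_beacon_key(SHARED_SECRET), body,
                   hashlib.sha256).hexdigest()
    return {'body': body, 'tag': tag}

def verify_beacon(beacon):
    expected = hmac.new(derive_beacon_key(SHARED_SECRET), beacon['body'],
                        hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking tag bytes through timing.
    return hmac.compare_digest(expected, beacon['tag'])

beacon = make_beacon({'type': 'solicitation', 'ifname': 'eth0'})
print(verify_beacon(beacon))  # True when both sides share the same secret
```

A receiver holding a different shared secret derives a different key, so the tag check fails and the beacon is ignored — which is exactly how MAAS can confirm that two controllers on the same wire belong to the same deployment.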

MAAS 2.2.1

This past week the team also focused on preparing and QA’ing the new MAAS 2.2.1 point release, which was released on Friday, June 30th. For more information about the bug fixes, please visit https://launchpad.net/maas/+milestone/2.2.1.

MAAS 2.2.1 is available in:

  • ppa:maas/stable

11 July, 2017 05:53PM

hackergotchi for Xanadu developers

Xanadu developers

List of web tunnels (web proxies) for viewing blocked pages

Sometimes we find ourselves in an environment where, for whatever reason, it is not possible to access a website and we have no way to make any change to the system that would let us bypass the block, which is why … Continue reading

11 July, 2017 04:30PM by sinfallas

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: Ubuntu Core: Making a factory image with private snaps

This is a follow-up to the ROS prototype to production on Ubuntu Core series to answer a question I received: “What if I want to make an image for the factory, but don’t want to make my snaps public?” This question is of course not robotics-specific, and neither is its answer. In this post we’ll cover two ways to do this.

Before we start, you’ll need a little bit of an Ubuntu Core imaging background. If you followed the ROS prototype to production series (part 5 specifically) you already have the required background, but if you didn’t, check out the tutorial for creating your own Ubuntu Core image.

Assuming you’re up to speed and know what I’m talking about when I say “model definition” or “model assertion,” let’s get started on a few different methods for creating an Ubuntu Core image with private snaps.

Method 1: Don’t put your snap in the store at all

It really doesn’t get simpler. Take a look at this example model definition, amd64-model.json:

{
 "type": "model",
 "series": "16",
 "model": "custom-amd64",
 "architecture": "amd64",
 "gadget": "pc",
 "kernel": "pc-kernel",
 "authority-id": "4tSgWHfAL1vm9l8mSiutBDKnnSQBv0c8",
 "brand-id": "4tSgWHfAL1vm9l8mSiutBDKnnSQBv0c8",
 "timestamp": "2017-06-23T21:03:24+00:00",
 "required-snaps": ["kyrofa-test-snap"]
}

Let’s go ahead and turn that into a model assertion:

$ cat amd64-model.json | snap sign -k my-key-name > amd64.model
You need a passphrase to unlock the secret key for
user: "my-key-name"
4096-bit RSA key, ID 0B79B865, created 2016-01-01
...

Now you have your model assertion: amd64.model. If you hand that to ubuntu-image right now you’ll run into a problem:

$ sudo ubuntu-image -c stable amd64.model 
Fetching core
Fetching pc-kernel
Fetching pc
Fetching kyrofa-test-snap
error: cannot find snap "kyrofa-test-snap": snap not found
COMMAND FAILED: snap prepare-image --channel=stable amd64.model /tmp/tmp6p453gk9/unpack

The snap named kyrofa-test-snap isn’t actually in the store. But this is important to note: the model definition (and thus the assertion) only contains a list of snap names. If you have a snap locally with that name, even if it’s not in the store, you can tell ubuntu-image to use it to satisfy that name in the assertion with the --extra-snaps option:

$ sudo ubuntu-image -c stable \
         --extra-snaps /path/to/kyrofa-test-snap_0.1_amd64.snap \
         amd64.model
Fetching core
Fetching pc-kernel
Fetching pc
Copying "/path/to/kyrofa-test-snap_0.1_amd64.snap" (kyrofa-test-snap)
kyrofa-test-snap already prepared, skipping
WARNING: "kyrofa-test-snap" were installed from local snaps
disconnected from a store and cannot be refreshed subsequently!
Partition size/offset need to be a multiple of sector size (512).
The size/offset will be rounded up to the nearest sector.

There. You now have an Ubuntu Core image (named pc.img) with your snap preinstalled, without the snap ever needing to be in the store. This works, but it has a big disadvantage which ubuntu-image points out with a warning: preinstalling a snap that isn’t connected to the store means you have no way to update it once devices are flashed with this image. Your only update mechanism would be to ship new images to be flashed.
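The name-matching step described above — the assertion carries only snap names, and a local .snap file whose name matches can satisfy one — can be illustrated with a tiny Python sketch. This is not ubuntu-image's actual code, and the file paths are made up; it just shows the resolution rule:

```python
import os

def plan_snap_sources(required_snaps, extra_snap_paths):
    """Illustration only -- not ubuntu-image's real logic. Map each
    required snap name to a supplied local file, else to the store."""
    # A snap file name like 'kyrofa-test-snap_0.1_amd64.snap' begins with
    # the snap name, up to the first underscore.
    local = {os.path.basename(path).split('_')[0]: path
             for path in extra_snap_paths}
    return {name: local.get(name, '<store>') for name in required_snaps}

plan = plan_snap_sources(
    ['pc', 'pc-kernel', 'kyrofa-test-snap'],
    ['/path/to/kyrofa-test-snap_0.1_amd64.snap'])
print(plan['kyrofa-test-snap'])  # /path/to/kyrofa-test-snap_0.1_amd64.snap
print(plan['pc'])                # <store>
```

Any name with no matching local file still has to be resolved from the store, which is why the earlier run without --extra-snaps failed with "snap not found".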

Method 2: Use a brand store

When you create a store account and visit dashboard.snapcraft.io, you’re viewing your snaps in the standard Ubuntu store. If you install snapd fresh on your system, this is the store it uses by default. While you can release snaps privately on the Ubuntu store, you can’t preinstall them in an image, because only you (and the collaborators you’ve added) have access to them. The only way to make an image in this case would be to make the snaps publicly available, which defeats the whole purpose of this post.

For this use-case, we have what are called brand stores. Brand stores are still hosted in the Ubuntu store, but they’re a custom, curated version of it, meant to be specific to a given company or device. They can inherit (or not) from the standard Ubuntu store, and be open to all developers or locked down to a specific group (which is what we want in our case, to keep things private).

Note that this is a paid feature. You need to request a brand store. Once your request has been granted, you’ll see your new store by visiting “stores you can access” under your name.

There you’ll see the various stores to which you have access. You’ll have at least two: the normal Ubuntu store, and your new brand store. Select the brand store. While you’re here, record your store ID: you’ll need it in a moment.

From there, registering names and uploading snaps works the same way, but now they go into your brand store instead of the standard one, and assuming you keep it unlisted, those snaps are not available to external users. The only caveat today is that at least the first upload of the snap needs to be done via the web interface. After that, you can continue to use Snapcraft as normal.

So how does this change things? My “kyrofa-store” inherits snaps from the Ubuntu store, and also contains a “kyrofa-branded-test-snap” published into the stable channel. This snap isn’t available in the Ubuntu store, as you can see if you search for it:

$ snap find kyrofa-branded
The search "kyrofa-branded" returned 0 snaps

But using the store ID we recorded earlier, we can make a model assertion that pulls from the brand store instead of the Ubuntu store. We just need to add the “store” key to the JSON document, making it look like this:

{
  "type": "model",
  "series": "16",
  "model": "custom-amd64",
  "architecture": "amd64",
  "gadget": "pc",
  "kernel": "pc-kernel",
  "authority-id": "4tSgWHfAL1vm9l8mSiutBDKnnSQBv0c8",
  "brand-id": "4tSgWHfAL1vm9l8mSiutBDKnnSQBv0c8",
  "timestamp": "2017-06-23T21:03:24+00:00",
  "required-snaps": ["kyrofa-branded-test-snap"],
  "store": "ky<secret>ek"
}

Sign it just as we did in Method 1, and we can create an Ubuntu Core image with our private, brand-store snap preinstalled as simply as:

$ sudo ubuntu-image -c stable amd64.model
Fetching core
Fetching pc-kernel
Fetching pc
Fetching kyrofa-branded-test-snap
Partition size/offset need to be a multiple of sector size (512).
The size/offset will be rounded up to the nearest sector.

Now, like at the end of Method 1, you have a pc.img ready for the factory. However, with this method, the snaps in the image are all coming from the store, which means they will automatically update as usual.

Conclusion

These are the only two options for doing this today. When I started writing this post I thought there was a third (keeping one’s snap private and creating an image with it), but that turns out not to be the case.

Note that we’ve also received numerous requests for some sort of on-premises/enterprise store, and while such a product is not yet available, the store team is working on it. Once this is available, I’ll write a new post about it.

I hope this proves useful!

Original post can be found here.

11 July, 2017 03:00PM

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, June 2017

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In June, about 161 work hours have been dispatched among 11 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours increased slightly with one new bronze sponsor and another silver sponsor is in the process of joining.

The security tracker currently lists 49 packages with a known CVE, and the dla-needed.txt file lists 54. The number of open issues is close to last month’s.

Thanks to our sponsors

New sponsors are in bold.


11 July, 2017 02:49PM