February 22, 2017


Ubuntu developers

Ubuntu Insights: Why does software-defined everything matter?

Does anybody still think of a phone as a way of just making calls?

According to Informate Mi, less than 10% of the time you spend on a phone is spent making calls. That’s because phones have evolved from single-function devices to malleable pocket computers whose entire purpose is defined by the apps that they run. In reality, a smartphone is a software-defined device. What’s much less commonly known is that the world of telco is also becoming software-defined.

‘Software-defined everything’ represents a step change for the telco industry. The entire industry is moving away from organising and thinking about its networks and services as a bunch of boxes with fixed functions, towards thinking about them as stacks of interacting software.

Look at some of the key themes at MWC this year… 5G, for example. Many people see it as just another iteration in the 1G, 2G, 3G, 4G sequence, where what matters is the additional bandwidth for the end user. But behind the scenes a drastic redesign of the telco mobile network is underway, where fixed-function networking equipment laid out in a static, predefined architecture is being replaced by mini data centres of generic servers whose function is responsive to the needs of the network. 5G is really about the software-defined telco network.

Another key theme is IoT (Internet of Things). Many believe M2M (the ancestor of IoT) has been part of MWC since time immemorial, so why make a fuss about it all of a sudden? Once again the answer is software. M2M was simple, with unidirectional exchanges of data, reflecting the simple nature of the software being run on M2M devices – images were sent down to a digital signage box and telemetry data was sent from an industrial gateway to a monitoring server. But today things are very different. The software run by all these devices has evolved drastically, which has changed the very simple nature of these exchanges. For example, as well as displaying advertisements, a digital signage screen might count the people that pass it or act as a wifi hotspot. IoT is really about software-defined smart devices.

Autonomous cars, another big theme this year, are yet another example of the software-defined nature of things to come. In-car maps were followed by telemetry data coming from cars, which was then complemented by in-car hotspots and infotainment. Today, complex self-driving or navigation support systems offer a level of artificial intelligence never seen before in vehicles.

If you’re in Barcelona next week for MWC2017 drop by our booth at Hall P3 – 3K31 to learn how we see Ubuntu at the centre of our software-defined future.

22 February, 2017 04:01PM


Cumulus Linux

Announcing EVPN for scalable virtualized networks


When we set out to build new features for Cumulus Linux, we ask ourselves two questions: 1) How can we make network operators’ jobs easier? And 2) How can we help businesses use web-scale IT principles to build powerful, efficient and highly-scalable data centers? With EVPN, we believe we nailed both.

Why EVPN?

Many data centers today rely on layer 2 connectivity for specific applications and IP address mobility. However, an entire layer 2 data center can bring challenges such as large failure domains, spanning tree complexities, difficulty troubleshooting, and scale challenges, as only 4094 VLANs are supported.

Therefore, modern data centers are moving to a layer 3 fabric, which means running a routing protocol, such as BGP or OSPF, between the leaf and spine switches. In order to provide layer 2 connectivity between hosts and VMs on different racks, as well as maintain multi-tenant separation, a layer 2 overlay solution such as VXLAN is deployed. However, VXLAN does not define a control plane to learn and exchange MAC addresses. Therefore, VXLAN cannot, by itself, deal efficiently with host mobility and fast convergence. Practical deployments of VXLAN without a control plane often use a controller.

Traditional bridging and L2VPN solutions like VPLS and native VXLAN retrieve host location via the data plane, meaning they learn the location of hosts based on the data frames that pass through the switch. When the bridge receives a frame on a switchport, it takes note of the frame’s source MAC address and adds it, along with the receiving port, to the switch’s MAC table. This MAC address table directs traffic to the correct destination. See below.

[EVPN Figure 1: MAC addresses learned via the data plane]
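On a Linux-based switch you can peek at this data-plane-learned table yourself. As a small illustration (the bridge name br0 is just an assumption for the example), the iproute2 bridge tool lists the learned entries:

# show the MAC addresses the bridge has learned, and the port each was seen on
bridge fdb show br br0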

The same data plane learning happens over the MPLS (or other) tunnel as well, since there is no control plane protocol to exchange the MAC addresses learned on the local switch ports with the remote PEs or VXLAN Virtual Tunnel End Points (VTEPs). Learning solely via the data plane has its own set of limitations, such as limited redundancy, no per-flow load balancing, lack of traffic engineering, and slow convergence.

How does EVPN help me?

Cumulus Networks EVPN is a next-generation control-plane solution for VXLAN tunnels that uses the BGP routing protocol to provide high scale, redundancy, traffic engineering, multi-tenant separation, and fast convergence for host and VM mobility — all while interoperating between vendors. BGP is a well-known, mature routing protocol that powers the Internet. It is very robust and scalable. The Multiprotocol BGP (MP-BGP) extensions allow BGP to carry all kinds of information by introducing support for address-families and sub-address-families. All of the above characteristics make BGP very attractive for use as a VXLAN control plane.

In early 2017, Cumulus Quagga will natively contain the new BGP address-family that was created by the IETF for EVPN — the EVPN address-family, as seen below.

[EVPN Figure 2: the MP-BGP EVPN address-family]

The MP-BGP EVPN address-family advertises MAC addresses learned on a switch to the remote VTEPs while providing for multi-tenant separation and overlapping addresses. Advertising MAC addresses via BGP allows remote switches to dynamically learn the location of hosts or VMs without requiring data packets to traverse the VXLAN tunnel first. After the MAC address is learned on one switch, all peered switches will learn it as well, which can reduce broadcast traffic in the network and allow fast convergence when a host or VM moves to a different rack.

For example, in a setup with three leafs and three hosts, host 1’s MAC address would be learned by leaf01, host 2’s address by leaf02, and host 3’s address by leaf03. This often happens as soon as the link between the leaf and the host comes up, through Gratuitous ARP (GARP). At this point, each leaf advertises its locally learned MAC addresses to the other leafs via BGP. Now, all of the leafs have all of the hosts’ MAC addresses and their locations without requiring a data packet to traverse the switch first.

[EVPN Figure 3: three leafs advertising locally learned MAC addresses via BGP]
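To give a flavour of the configuration involved, here is a minimal Quagga/FRR-style sketch of activating the EVPN address-family on a leaf (the AS number and interface name are made up for illustration, and the exact syntax may differ in the shipping Cumulus Quagga):

router bgp 65001
 neighbor swp51 interface remote-as external
 !
 address-family l2vpn evpn
  neighbor swp51 activate
  advertise-all-vni
 exit-address-family

With advertise-all-vni, the locally learned MAC addresses for the local VNIs are advertised to the BGP peers, which is what lets the leafs in the figure learn about each other’s hosts.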

Using the above setup as an example, the command sh evpn vni 10100 mac outputs all the MAC addresses, and their locations, associated with that VNI, or VXLAN tunnel. For example, the EVPN table for leaf01 shows:

[Output of sh evpn vni 10100 mac on leaf01]


Whereas the table for leaf02 similarly shows:

[Output of sh evpn vni 10100 mac on leaf02]

If host 1 then moves to rack 3, leaf03 will learn of the new location and immediately update the other two leafs about the move. It is not necessary to wait for MAC table timeouts or for a new data packet sent by the host to achieve data-center-wide convergence.

Can you summarize the benefits of deploying EVPN?

Cumulus EVPN provides many benefits to a data center, including:

  • Controller-less VXLAN: No controller is needed with EVPN, as it enables VTEP peer discovery through BGP.
  • Scale and Robustness: EVPN uses the standard BGP routing protocol for the control plane. BGP is a mature, well-known protocol that powers the Internet. For data centers that already run BGP, this involves just adding another address-family.
  • Fast convergence/mobility: The BGP EVPN address family includes features to track host moves across the datacenter, allowing for very fast convergence.
  • Multi-vendor interoperable: Since EVPN is a standard, it will be interoperable with other vendors that adhere to the standard.
  • Support for Active/Active VXLAN: Cumulus EVPN supports host redundancy to switch pairs with an MLAG configuration.
  • Multi-tenancy: Cumulus EVPN supports VXLAN tunnel separation.

The post Announcing EVPN for scalable virtualized networks appeared first on Cumulus Networks Blog.

22 February, 2017 04:00PM by Diane Patton


Ubuntu developers

Simon Raffeiner: Basic management of an Ubuntu Core installation

Part 1: Setting up an “All-Snap” Ubuntu Core image in a QEMU/KVM virtual machine Part 2: Basic management of an Ubuntu Core installation Part 3: Mir and graphics on Ubuntu Core Part 4: Confinement After I’ve set up an “All-Snap” Ubuntu Core virtual machine in the last post, let’s see what I can do with it. Logging in After […]

22 February, 2017 03:43PM

Ubuntu Insights: An Ubuntu snap-based solution for enterprises to control their data

Many smaller enterprises today are unable to take full advantage of the range of services associated with standard public cloud offerings. Privacy and control are so important to their operation that a publicly provided service will not work for tech-focused, smart enterprises that place privacy and control among their most important requirements.

The successful Nextcloud box, which runs Ubuntu Core, has shown the level of interest that exists in full control and self-hosting. Document sharing and editing, communications and messaging, email, and code management are further examples of services that work well as part of a self-hosting solution – especially for small businesses that are sensitive to the privacy of their data. Ubuntu has a ready-made ecosystem of services which will run on a device such as a smart NAS or a router, benefiting enterprises that wish to control and host everything on-site. At MWC this year, we will be demonstrating a new solution for enterprises who want to take advantage of this trend in self-hosted services.

We have taken the very best available from the Ubuntu snap ecosystem and created a solution designed specifically for small enterprises, delivering a range of high quality self-hosted services which are engineered for privacy. This enables a small business to easily deploy a full suite of relevant, self-hosted services for internal staff to access from any desktop browser sitting on the private LAN. The solution is built to be fully extensible and can easily evolve to include other services which exist within the Ubuntu snap ecosystem.

As well as demonstrating the product on our stand at MWC next week, it will also be available as an image containing all the featured snaps, so anyone can download and run the solution on compatible hardware.

Included in the image is a snap-based offering with Nextcloud file and data management, Rocket.Chat messaging and communication, the Collabora Online collaborative document editing suite, the Wekan workflow tool, code management and distribution from Gogs, video conferencing using Spreed, and a fully self-hosted email solution and UI using iRedMail. All the services are open source and all are integrated using Ubuntu Core for added security, with the wider benefit of extending the solution to include any service that exists as a snap. And importantly, the solution comes with a customisable front-end GUI for admin control and service management.
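Several of the featured services are also installable individually today on any snap-enabled system. For example, assuming the snaps published in the store, Nextcloud is a single command away:

sudo snap install nextcloud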

Frank Karlitschek, Managing Director at Nextcloud, said: “This new solution from Ubuntu is the perfect next step on from the Nextcloud box and brings a fantastic blend of snaps to make it super easy for enterprises to experiment with self-hosted services.”

Michael Meeks, Managing Director at Collabora Productivity, said: “Collabora is the driving force behind bringing LibreOffice to every modern device with a browser, so we’re thrilled to contribute an Ubuntu snap of Collabora Online to this solution. By seamlessly integrating the collaborative editing of complex documents we give huge scope for enterprises to reduce cost as well as to simplify their productivity provision.”

Visit us at the Ubuntu Booth in Hall P3 – 3K31 at Mobile World Congress

22 February, 2017 02:25PM

Ubuntu Insights: Webinar: Get Cloud-ready Servers in Minutes with MAAS

Learn how to take full advantage of existing hardware investments by maximising hardware efficiency.

Data center operators are quickly becoming more automated and need to have the flexibility to leverage their hardware infrastructure more efficiently. Canonical’s Metal as a Service (MAAS) solution enables operators to deploy physical hardware at the speed of cloud.

Date: Wednesday 15th March 2017
Time: 17.00 GMT / 12.00 EST / 9.00 PT
Speakers: Dariush Marsh-Mossadeghi, Consulting Architect and Chris Wilder, Cloud Content.

3 things you’ll learn in this webinar

  • How leading companies are using MAAS to improve the efficiency of hybrid cloud deployments
  • How to deploy a cloud-ready data centre quickly and efficiently
  • MAAS capabilities, and best practices for server provisioning

Register for webinar

More info about the webinar

The webinar will also include demos on how administrators can use MAAS to discover existing hardware and automate many management tasks, including:

  • Installing, configuring, and monitoring bare metal hardware
  • Installing and upgrading firmware, patches, and updates
  • Automating server utilisation and re-utilisation based on need
  • Discovering each server’s compute, network, and storage capabilities
  • Powering servers on and off as needed

By automating these functions MAAS eliminates the extensive manual process required for traditional server operations and allows organisations to become more operationally efficient. IT customers need the flexibility to take advantage of the opportunities the cloud offers without ripping and replacing their entire infrastructure. This is why new architectures and business models are emerging. Canonical’s MAAS is a mature solution that helps organisations take full advantage of their cloud and legacy hardware investments.

Register for the webinar

Talk to a scale-out expert

22 February, 2017 11:54AM

Ubuntu Insights: LXD on Debian (using snapd)


Introduction

So far all my blog posts about LXD have been assuming an Ubuntu host with LXD installed from packages, as a snap or from source.

But LXD is perfectly happy to run on any Linux distribution which has the LXC library available (version 2.0.0 or higher), a recent kernel (3.13 or higher) and some standard system utilities (rsync, dnsmasq, netcat, various filesystem tools, …).

In fact, you can find packages in the following Linux distributions (let me know if I missed one):

We have also had several reports of LXD being used on CentOS and Fedora, where users built it from source using the distribution’s liblxc (or in the case of CentOS, from an external repository).

One distribution we’ve seen a lot of requests for is Debian. A native Debian package has been in the works for a while now and the list of missing dependencies has been shrinking quite a lot lately.

But there is an easy alternative that will get you a working LXD on Debian today!
Use the same LXD snap package as I mentioned in a previous post, but on Debian!

Requirements

  • A Debian “testing” (stretch) system
  • The stock Debian kernel without apparmor support
  • If you want to use ZFS with LXD, then the “contrib” repository must be enabled and the “zfsutils-linux” package installed on the system (see the commands below)
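For the ZFS case, one way to do that preparation looks like this (a sketch assuming a stock sources.list; adjust to your setup):

# add the "contrib" component to the existing entries, then install the ZFS tools
sudo sed -i 's/ main$/ main contrib/' /etc/apt/sources.list
sudo apt update
sudo apt install zfsutils-linux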

Installing snapd and LXD

Getting the latest stable LXD onto an up to date Debian testing system is just a matter of running:

apt install snapd
snap install lxd

If you’ve never used snapd before, you’ll have to either log out and log back in to update your PATH, or just update your existing one with:

. /etc/profile.d/apps-bin-path.sh

And now it’s time to configure LXD with:

root@debian:~# lxd init
Name of the storage backend to use (dir or zfs) [default=dir]:
Create a new ZFS pool (yes/no) [default=yes]?
Name of the new ZFS pool [default=lxd]:
Would you like to use an existing block device (yes/no) [default=no]?
Size in GB of the new loop device (1GB minimum) [default=15]:
Would you like LXD to be available over the network (yes/no) [default=no]?
Would you like stale cached images to be updated automatically (yes/no) [default=yes]?
Would you like to create a new network bridge (yes/no) [default=yes]?
What should the new bridge be called [default=lxdbr0]?
What IPv4 subnet should be used (CIDR notation, “auto” or “none”) [default=auto]?
What IPv6 subnet should be used (CIDR notation, “auto” or “none”) [default=auto]?
LXD has been successfully configured.

And finally, you can start using LXD:

root@debian:~# lxc launch images:debian/stretch debian
Creating debian
Starting debian

root@debian:~# lxc launch ubuntu:16.04 ubuntu
Creating ubuntu
Starting ubuntu

root@debian:~# lxc launch images:centos/7 centos
Creating centos
Starting centos

root@debian:~# lxc launch images:archlinux archlinux
Creating archlinux
Starting archlinux

root@debian:~# lxc launch images:gentoo gentoo
Creating gentoo
Starting gentoo

And enjoy your fresh collection of Linux distributions:

root@debian:~# lxc list
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
|   NAME    |  STATE  |         IPV4          |                     IPV6                      |    TYPE    | SNAPSHOTS |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| archlinux | RUNNING | 10.250.240.103 (eth0) | fd42:46d0:3c40:cca7:216:3eff:fe40:7b1b (eth0) | PERSISTENT | 0         |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| centos    | RUNNING | 10.250.240.109 (eth0) | fd42:46d0:3c40:cca7:216:3eff:fe87:64ff (eth0) | PERSISTENT | 0         |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| debian    | RUNNING | 10.250.240.111 (eth0) | fd42:46d0:3c40:cca7:216:3eff:feb4:e984 (eth0) | PERSISTENT | 0         |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| gentoo    | RUNNING | 10.250.240.164 (eth0) | fd42:46d0:3c40:cca7:216:3eff:fe27:10ca (eth0) | PERSISTENT | 0         |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+
| ubuntu    | RUNNING | 10.250.240.80 (eth0)  | fd42:46d0:3c40:cca7:216:3eff:fedc:f0a6 (eth0) | PERSISTENT | 0         |
+-----------+---------+-----------------------+-----------------------------------------------+------------+-----------+

Conclusion

The availability of snapd on other Linux distributions makes it a great way to get the latest LXD running on your distribution of choice.

There are still a number of problems with the LXD snap which may or may not be a blocker for your own use. The main ones at this point are:

  • All containers are shut down and restarted on upgrades
  • No support for bash completion

If you want non-root users to have access to the LXD daemon, simply make sure that a “lxd” group exists on your system and add whoever you want to manage LXD into that group, then restart the LXD daemon.
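In command form, that amounts to something like this (the snap’s service name is an assumption; adjust if yours differs):

# create the group if it doesn't exist yet, add your user, then restart LXD
sudo groupadd --system lxd
sudo usermod -aG lxd $USER
sudo systemctl restart snap.lxd.daemon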

Extra information

The snapd website can be found at: http://snapcraft.io

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Original article

22 February, 2017 08:04AM


VyOS

VyOS 1.2.0 repository re-structuring

In preparation for the new 1.2.0 (jessie-based) beta release, we are re-populating the package repositories. The old repositories are now archived; you can still find them in the /legacy/repos directory on dev.packages.vyos.net

The purpose of this is two-fold. First, the old repo got quite messy, and Debian people (rightfully!) keep reminding us about it, but it would be difficult to do a gradual cleanup. Second, since the CI server has moved, and so did the build hosts, we need to test how well the new procedures are working. And, additionally, it should tell us if we are prepared to restore VyOS from its source should anything happen to the packages.vyos.net server or its contents.

For perhaps a couple of days, there will be no new nightly builds, and you will not be able to build ISOs yourself, unless you change the repo path in ./configure options by hand. Stay tuned.

22 February, 2017 05:39AM by Daniil Baturin

February 21, 2017


Univention Corporate Server

Nextcloud for Private Clouds New in the App Center


Last week, at the didacta conference in Stuttgart, we announced that you can now install Nextcloud from the Univention App Center.

Many users have been asking us about this, as the need for a private cloud technology is great and Nextcloud offers the most security-focused and reliable solution on the market.

About the Nextcloud app

If your organization would like the benefits of easy sharing and collaboration on data online as offered by various cloud vendors but needs to stay in control, ensuring no unauthorized access, Nextcloud is what you are looking for. Nextcloud offers a unique-in-the-industry fully open source solution for on-premise data handling and communication with an uncompromising focus on security and privacy.

We worked with Univention to improve the integration, making it easy for users to set up. Users of the UCS server will be automatically integrated in Nextcloud, receiving their own place with storage to sync and share files. Admins can also just log in and get to work, for example using our File Access Control app to add rules that make sure files don’t get shared where they should not, or connecting external storage.

I can only recommend that system administrators have a look at Nextcloud apps like our audio/video call integration and our full-text search.

In the future it would be nice to integrate online document editing; you might also be interested in our integration with Microsoft Outlook.

We currently offer these features to customers – you can get a support contract from us directly or via a partner to make sure your Nextcloud server runs as smooth as possible.

Features

Nextcloud brings together universal access to data with next-generation secure communication and collaboration capabilities under direct control of IT, integrated with existing compliant infrastructure. Its open, modular architecture, emphasis on security and advanced federation capabilities enable modern enterprises to leverage their existing assets within and across the borders of their organization.

Some of its most interesting features are:

  • Easy web UI and clients for Android, iOS, Windows, Mac and Linux
  • Easy collaboration and sharing of files internally and externally, with an optionally enforceable password and expiration date
  • Audio/video chat built in, optional collaborative office document editing, Outlook integration and more
  • External storage support like Windows Network Drive, FTP, WebDAV, NFS and more
  • Many passive and active security features like two-factor authentication, brute force protection and CSP 3.0 as well as an audit log
  • Full admin control over sharing with File Access Control rules like “DOCX can only be downloaded from within the local network”, or actions executed when certain conditions are met (like a tag being set)

Learn more about all of its features!

Strong Security Measures for Private Cloud

With two-factor authentication, brute force protection and many other security protections built in, the solution provides a wide range of security measures.

Comparison

Compared to other private cloud solutions, Nextcloud offers stronger security measures, with two-factor authentication, brute force protection and many other security protections built in. It is fully open source and the most actively developed file sync and share project.

Learn about migrating from other solutions to Nextcloud.

Target Groups

Nextcloud typically targets organizations from 50 users to tens of millions of users in industries including education, government, legal and financial services, and manufacturing.

For more information, visit nextcloud.com or follow @Nextclouders on Twitter.

Integration

The software is well integrated into UCS and provides a simple installation. The automatic configuration includes:

  • The administrator is automatically a Nextcloud admin
  • Users are enabled to access Nextcloud by default
  • Users and groups can be enabled/disabled in the User & Group settings
  • User quota can be configured in the User settings
  • All users and groups benefit from the Nextcloud LDAP schema
  • The web server is fully configured and sits behind the stock UCS web server, which acts as a proxy and handles all the TLS magic

You can install this new solution on your UCS server directly from the Univention App Center.

Der Beitrag Nextcloud for Private Clouds New in the App Center erschien zuerst auf Univention.

21 February, 2017 03:13PM by Frank Karlitschek


Ubuntu developers

Ubuntu Insights: Webinar: Qt on Ubuntu Core

Are you interested in developing Qt on Ubuntu Core? Learning how to package Qt apps as snaps? And the solutions they bring to the world of Digital Signage?

If so, join us for a webinar. You will learn the following:

  • Introduction to Ubuntu and Qt in digital signage
  • Why use Qt for digital signage
  • Packaging Qt apps as snaps
  • Dealing with hardware variants and GPUs in Ubuntu Core

Date: Wednesday 22nd February 2017
Time: 17:00 – 18:00 (GMT)
Speakers: Nils Christian Roscher-Nielsen: Product Manager (The Qt Company), Pat McGowan: Director of Developer Tools and Apps (Canonical)

Sign up to the webinar

More info on speakers:

Nils Christian Roscher-Nielsen, Product Manager, The Qt Company
Nils is a Qt Product Manager, responsible for Qt Lite as well as a focus on customer relations and content development, after many years as a technical sales engineer. He also serves as a Qt evangelist at tradeshows and conferences. In his role, Nils is responsible for evaluating The Qt Company’s product offering, driving the long term roadmap creation, managing the technology evaluation stage, and serving as a key technical adviser and product advocate for Qt. He has worked closely with Qt for the past eight years at Trolltech, Nokia, Digia and now The Qt Company. He holds an M.Sc. degree in Engineering Cybernetics from the Norwegian University of Science and Technology (NTNU). Nils is currently based in Oslo, Norway.

Pat McGowan; Director of Developer Tools and Apps, Canonical

John Kourentis, VP Sales and Partnerships, Canonical

21 February, 2017 01:49PM

Harald Sitter: Plasma in a Snap?

…why not!

Shortly before FOSDEM, Aleix Pol asked if I had ever put Plasma in a Snap. While I was a bit perplexed by the notion itself, I also found this a rather interesting idea.

So, the past couple of weeks I spent a bit of time here and there on trying to see if it is possible.

[Photo: a Plasma session running from the snap]

It is!

But let’s start at the beginning. Snap is one of the Linux bundle formats that are currently very much en vogue. Basically, whatever is necessary to run an application is put into a self-contained archive from which the application then gets run. The motivation is to isolate application building and delivery from the operating system building and delivery. Or in short, you do not depend on your Linux distribution to provide a package; as long as the distribution can run the middleware for the specific bundle format, you can get a bundle from the source author and it will run. As an added bonus these bundles usually also get confined. That means that whatever is inside can’t access system files or other programs unless permission for this was given in some form or fashion.

Putting Plasma, KDE’s award-winning desktop workspace, in a snap is interesting for all the same reasons it is interesting for applications. Distributing binary builds would be less of a hassle, testing is more accessible and confinement in various ways can lessen the impact of security issues in the confined software.

With the snap format specifically Plasma has two challenges:

  1. The snapped software is mounted in a changing path that is different from the installation directory.
  2. Confining Plasma is a bit tricky because of how many actors are involved in a Plasma session and some of them needing far-reaching access to system services.

As it turns out problem 1, in particular, is biting Plasma fairly hard. Not exactly a great surprise, after all, relocating (i.e. changing paths of) an installed Plasma isn’t exactly something we’ve done in the past. In fact, it goes further than that as ultimately Plasma’s dependencies need to be relocatable as well, which for example Xwayland is not.

But let’s talk about the snapping itself first. For the purposes of this proof of concept, I simply recycled KDE neon‘s deb builds. Snapcraft, the build tool for snaps, has built-in support for installing debs into a snap, so that is a great timesaver to get things off the ground as it were. Additionally, I used the Plasma Wayland stack instead of the X11 stack. Confinement makes lots more sense with Wayland compared to X11.

Relocatability

Relocatability is a tricky topic. A lot of times one compiles fixed paths into the binary because it is easy to do and it is somewhat secure. Notably, depending on the specific environment at the time of invocation, one could be tricked into executing a malicious binary in $PATH instead of the desired one. Explicitly specifying the path is a well-understood safeguard against this sort of problem. Unfortunately, it also means that you cannot move your installed tree anywhere but where it was installed. The relocatable and safe solution is slightly more involved in terms of code, as you need to resolve what you want to invoke relative to your own location; it being more code, and also not exactly trivial to get right, is why oftentimes one opts to simply hard-compile paths. This is a problem in terms of packing things into a relocatable snap though. I had to apply a whole bunch of hacks to either resolve binaries from PATH or resolve their location relatively. None of these are particularly useful patches but here ya go.
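In shell terms, the difference boils down to something like the following sketch (not one of the actual patches; the helper path is made up):

# instead of a hard-compiled path such as /usr/lib/plasma/some-helper,
# resolve the helper relative to this script's own location:
HERE="$(dirname "$(readlink -f "$0")")"
exec "$HERE/../lib/plasma/some-helper" "$@"

Because everything is looked up relative to $HERE, the tree keeps working no matter where the snap ends up being mounted.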

Session

Once all relocatability issues were out of the way I finally had an actual Plasma session. Weeeh!

Confinement

Confining Plasma as a whole is fairly straightforward, albeit a bit of a drag since it’s basically a matter of figuring out what is or isn’t required to make things fly. A lot of logouts and logins is what it takes. Fortunately, snaps have a built-in mechanism to expose DBus session services offered by them. A full blown Plasma session has an enormous amount of services it offers on DBus, from the general purpose notification service to the special interest Plasma Activity service. Being able to expose them efficiently is a great help in tweaking confinement.

Not everything is about DBus though! Sometimes a snap needs to talk to a system service, and obviously, a workspace as powerful as Plasma would need to talk to a bunch of them. Doing advanced access control needs to be done in snapd (the thing that manages installed snaps). Snapd’s interfaces control what is and is not allowed for a snap. To get Plasma to start and work with confinement, a bunch of holes need to be poked in the confinement that are outside the scope of the existing interfaces. KWin, in particular, is taking the role of a fairly central service in the Plasma Wayland world, so it needs far-reaching access so it can do its job. Unfortunately, interfaces currently can only be built within snapd’s source tree itself. I made an example interface which covers most of the relevant core services but unless you build a snapd this won’t be particularly easy to try 😉

Summary

All in all, Plasma is easily bundled up once one gets relocatability problems out of the way. And thanks to the confinement control snap and snapd offer, it is also perfectly possible to restrict the workspace through confinement.

I did not at all touch on integration issues however. Running the workspace from a confined bundle is all nice and dandy but not very useful since Plasma won’t have any applications it can launch as they either live on the system or in other snaps. A confined Plasma would know about neither right now.

There is also the lingering question of whether confining like this makes sense at all. Putting all of Plasma into the same snap means this one snap will need lots of permissions and interaction with the host system. At the same time it also means that keeping confinement profiles up to date would be a continuous feat as there are so many things offered and used by this one snap.

One day perhaps we’ll see this in production quality. Certainly not today 🙂


21 February, 2017 12:25PM


Univention Corporate Server

Nextcloud for Private Clouds New in the App Center


We are happy to offer Nextcloud, a 2016 fork of ownCloud, as another web-based file sharing and collaboration solution for private clouds in our App Center. With this software, you can easily exchange data across the web and collaborate, with an absolute focus on security and privacy, as it provides full data control to prevent unauthorized access.

The software is also completely open source based.

Nextcloud’s Features

The following list of features shows Nextcloud’s open, modular architecture with a focus on security and enhanced collaboration capabilities:

  • Easy web UI and clients for Android, iOS, Windows, Mac and Linux
  • Easy collaboration and sharing of files internally and externally, with an optionally enforceable password and expiration date
  • Audio/video chat built in, optional collaborative office document editing, Outlook integration and more
  • External storage support like Windows Network Drive, FTP, WebDAV, NFS and more
  • Many passive and active security features like two-factor authentication, brute force protection and CSP 3.0 as well as an audit log
  • Full admin control over sharing with File Access Control rules like “DOCX can only be downloaded from within the local network”, or actions executed when certain conditions are met (like a tag being set)

Learn more about all of its features!

Strong Security Measures for Private Cloud

With two-factor authentication, brute force protection and many other security protections built in, the solution provides a wide range of security measures.

Target Groups

Nextcloud typically targets organizations from 50 users to tens of millions of users in industries including education, government, legal and financial services, and manufacturing.

Integration

The software is well integrated into UCS and provides a simple installation. The automatic configuration includes:

  • The administrator is automatically a Nextcloud admin
  • Users are enabled to access Nextcloud by default
  • Users and groups can be enabled/disabled in the User & Group settings
  • User quota can be configured in the User settings
  • All users and groups benefit from the Nextcloud LDAP schema
  • The web server is fully configured and sits behind the stock UCS web server, which acts as a proxy and handles all the TLS magic

You can install this new solution on your UCS server directly from the Univention App Center.

Der Beitrag Nextcloud for Private Clouds New in the App Center erschien zuerst auf Univention.

21 February, 2017 10:48AM by Maren Abatielos


Ubuntu developers

Ubuntu Insights: DAQRI showcases Ubuntu powered Augmented Reality helmet at MWC

This is a guest post by Fabrice Etienne from DAQRI®, the leading enterprise augmented reality company. If you would like to contribute a guest post, please contact ubuntu-devices@canonical.com

DAQRI® are going to be on the Ubuntu booth at Mobile World Congress to showcase the DAQRI Smart Helmet®. The DAQRI Smart Helmet, powered by an Ubuntu AR application, can be used in industrial settings and brings to life all the data generated by the new world of Industrial Internet of Things…

For those unfamiliar, DAQRI Smart Helmet is an advanced augmented reality helmet powered by a 6th generation Intel Core m7 processor for highly performant multimedia and AR. Its high-speed, wide-angle camera pairs with a dedicated processor for AR applications. The transparent display has been ruggedized for industrial environments and its high brightness makes it suitable for indoor and outdoor use. An integrated RGB camera and stereo infrared cameras with an infrared light projector work together to allow the helmet to infer depth. The absolute scale thermal camera offers persistent passive thermal monitoring of industrial equipment. By overlaying data onto the display, thermal anomalies can be quickly identified.

DAQRI Smart Helmet was purpose-built for industrial use. Delivering on the promise of the Industrial Internet of Things, the helmet can decentralize your control room through its data visualization capabilities. As heat is a danger to both workers and equipment, the thermal vision provided by the helmet gives your workers an edge in staying aware of potential dangers and improving maintenance and monitoring. Guided work instructions take manuals off the bookshelf and into the 21st Century by laying them out directly in your view. Through these augmented instructions, workers will understand processes quickly, spend less time on each step, and make fewer errors. If more hands-on assistance is required, workers can bring a Remote Expert into their point of view to give timely expertise and mobilize your company’s intelligence globally.

Want the opportunity to see DAQRI Smart Helmet in person?

You can at Mobile World Congress by visiting the Ubuntu booth at Hall P3 – 3K31. We’ll see you at MWC in Barcelona, February 27th through March 2nd.

To learn more about DAQRI, please visit www.daqri.com.

DAQRI® is the world’s leading enterprise augmented reality (AR) company with a vision to bring AR everywhere. The company’s flagship product, DAQRI Smart Helmet®, improves safety and efficiency for industrial workers. DAQRI was founded in 2010 by Brian Mullins, and is headquartered in Los Angeles with offices in the UK, Ireland and Austria. Current DAQRI products that deliver on the promise of bringing AR everywhere include: DAQRI Smart Helmet, DAQRI Smart Glasses™, DAQRI Qube™, and DAQRI Smart Hud™.

21 February, 2017 10:00AM


Clonezilla live

Stable Clonezilla live (2.5.0-25) Released

This release of Clonezilla live (2.5.0-25) includes major enhancements and bug fixes.

ENHANCEMENTS and CHANGES
* The underlying GNU/Linux operating system was upgraded. This release is based on the Debian Sid repository (as of 2017/Feb/20).
* Linux kernel was updated to 4.9.6-3.
* Language file sk_SK was updated. Thanks to Ondrej Dzivy Balucha.
* Language file es_ES was revised. Thanks to Pablo Hinojosa Nava and Juan Ramón Martínez.
* Packages sshpass, keychain, nmap, monitoring-plugins-basic, and bicon were added.
* An option "-noabo" was added to ocs-sr so that the image can be accessed by others, not only the owner.
* Added a local boot menu to the uEFI Clonezilla live.

BUG FIXES
* Detect if the terminal supports color output before using color output in the terminal. Thanks to TF for asking about this. Ref: https://sourceforge.net/p/clonezilla/discussion/Clonezilla_live/thread/ec46f3a3/
* When creating a partition table on a disk, Clonezilla should wait for partition names with the extra p, like nvme0n1p1, to take effect. Thanks to Bruno Vila Vilariño for reporting this issue.
* Fixed a failure when using a serial number containing a space. Thanks to Matt Broadstone for providing the patch. Ref: https://sourceforge.net/p/clonezilla/bugs/266/
* Package whiptail was updated by applying the patch from upstream, which improves handling of long strings in the whiptail menu: https://git.fedorahosted.org/cgit/newt.git/commit/?id=10bbfd2837eb5ad87416ed2a648231a2a9b7c6fc

21 February, 2017 07:30AM by Steven Shiau


Ubuntu developers

The Fridge: Ubuntu Weekly Newsletter Issue 499

Welcome to the Ubuntu Weekly Newsletter. This is issue #499 for the week February 13 – 19, 2017, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Simon Quigley
  • Chris Guiver
  • Jim Connett
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

21 February, 2017 12:57AM

February 20, 2017

Sam Hewitt: Icons by Me

Time for some self-promotion! I bought the domain iconsbysam.com some time ago to eventually create a site to showcase some of the icon design work I’ve done. I finally got around to doing just that, do check it out:

Icons by Sam

20 February, 2017 07:00PM


SparkyLinux

Linux kernel 4.10.0


The first stable Linux kernel of the 4.10.x line, 4.10.0, has just landed in the Sparky “unstable” repository.

Sparky’s Linux kernel is available in the Sparky “unstable” repository, so enable it to upgrade (if you already have an older version installed) or to make a fresh installation:
https://sparkylinux.org/wiki/doku.php/repository

Follow the Wiki page https://sparkylinux.org/wiki/doku.php/linux_kernel to install the latest Sparky Linux kernel.

Then reboot your machine for the changes to take effect.

To quickly remove an older version of the Linux kernel, simply run the APTus -> Remove -> Uninstall Old Kernel script.


20 February, 2017 03:43PM by pavroo


Ubuntu developers

Ubuntu Insights: IOTA: IoT revolutionized with a Ledger

Ever since the introduction of digital money, the world has quickly come to realize how dire and expensive the consequences of centralized systems are. Not only are these systems incredibly expensive to maintain, they are also “single points of failure” which expose a large number of users to unexpected service interruptions, fraudulent activities and vulnerabilities that can be exploited by malicious hackers.

Thanks to Blockchain, which was first introduced through Bitcoin in 2009, the clear benefits of a decentralized and “trustless” transactional settlement system became apparent. No longer should expensive trusted third parties be used for handling transactions, instead, the flow of money should be handled in a direct, Peer-to-Peer fashion. This concept of a Blockchain (or more broadly, a distributed ledger) has since then become a global phenomenon attracting billions of dollars in investments to further develop the concept.

Even though the potential of this paradigm shift of using Blockchains and transitioning towards decentralized system architectures is known to many, the technical limitations of the Blockchain architecture are hindering further growth and adoption in key areas such as the Internet of Things, where millions of devices need to be able to transact with each other. A single Bitcoin transaction today costs on average $0.83 to make, which in turn means that micro-payments are rendered infeasible. Furthermore, the scalability of the entire network is limited to roughly 7 TPS (Transactions Per Second) – in comparison, Visa on average handles 2000 TPS. These intrinsic properties have made real world deployment of blockchain, where high throughput of transactions and scale matter, impossible.

The team behind IOTA, which has been working on new Blockchain architectures and consensus protocols since 2011, has spent the past two years developing a completely new architecture, built from scratch, that resolves these inherent blockchain limitations while staying true to blockchain’s core principles. With the invention of the Tangle, which is a completely new open source permissionless distributed ledger architecture, the team has introduced a secure, scalable and lightweight transactional settlement solution with zero fees to the industry. With a specific focus on the Internet of Things and Machine-to-Machine payments, the platform is positioned to become the standard layer for real time settlements and data integrity.

Collaboration with Canonical

The founders of IOTA started exploring machine-oriented microbilling to resolve one of the biggest pain points for connectivity in the realm of the Internet of Things, and recognized that Canonical’s unique position and experience in this market provided much-needed expertise to bring it to life. Now Canonical is working with the IOTA Foundation on use cases that showcase new, machine-oriented business models and micro-billing solutions for the telecommunications market. With the global telecommunication market expected to reach more than $1.5 Trillion in revenue by 2020, IOTA and Canonical are uniquely positioned to solve one of the biggest problems that telecom companies face today: how to reimagine billing for machines.

The telecommunications market has changed radically over the last decade, with OTT (Over-the-Top) players such as Netflix, Skype, Whatsapp and other service providers increasingly threatening important revenue streams of telco operators. To counter this disruptive competition, stakeholders need to look out for new markets and reshape their service offerings.

The Internet of Things offers exactly this rapidly expanding landscape for communication service providers (CSPs) to explore new revenue and growth opportunities. With IoT connections expected to reach more than 1 billion by 2020, the need for a secure, cheap and scalable micro-billing mechanism is apparent to everyone.

What to expect at Mobile World Congress

At Mobile World Congress you can expect to see a live demo of IOTA’s novel micro-billing solution. In collaboration with Canonical and Lime Micro, the telecommunication world will be introduced to the next generation of decentralized, trustless and machine-focused micro-billing. Combined with the concept of a software-defined radio, we will showcase what new revenue opportunities for CSPs and independent application developers alike will look like.

If you are interested to learn more about IOTA and how it can apply to your industry, reach out to the team at the Canonical/Ubuntu booth in Hall P3 – 3K31 at Mobile World Congress. You can also reach out to them via Twitter or email (contact@iotatoken.com). In a second blog post we will take a deep dive into how IOTA works and how to actually use it in combination with snaps.

20 February, 2017 02:02PM

Stuart Langridge: Dividends and director withdrawals

Everyone running their own business except me probably already knows this. But, three years in, I think I’ve finally actually understood in my own mind the difference between a dividend and a director withdrawal. My accountant, Crunch1, have me record both of them when I take money out of the company, and I didn’t really get why until recently. When I finally got it, I wrote myself a note that I could go back to and read when I get confused again, and I thought I’d publish that here so others can see it too.

(Important note: this is not financial advice. If my understanding here differs from your understanding, trust yourself, or your accountant. I’m also likely glossing over many subtleties, etc, etc. If you think this is downright wrong, I’d be interested in hearing. If you think it’s over-simplified, you’re doubtless correct.)


A dividend is a promise to pay you X money.

A director withdrawal is you taking that money out.

So when a pound comes in, you can create a dividend to say: we’ll pay Stuart 80p.

When you take the money out, you record a director withdrawal of 80p.

Dividends are IOUs. Withdrawals are you cashing the IOU in.

So when the “director’s loan account is overdrawn”, that means: you have recorded dividends of N but have recorded director withdrawals of more than N, i.e., you’ve taken out more than the company wants to pay you. This may be because you are owed the amount you took, and recorded director withdrawals for all that but forgot to do a dividend for it, or because you’ve taken more than you’re allowed.

When creating a new dividend (in Crunch) it will (usefully) say what the maximum dividend you can take is; that should be the maximum takeable while still leaving enough money in the account to pay the tax bill.

In the Pay Yourself dashboard (in Crunch) it’ll say “money owed to Stuart”; that’s money that’s been promised with a dividend but not taken out with a withdrawal. (Note: this may be because you forgot to do a withdrawal for money you’ve taken! In theory it would mean money promised with a dividend but not taken, but maybe you took it and just didn’t do a withdrawal to record that you took it. Check.)

  1. who are really handy, online, and are happy to receive emails in which I ask stupid questions over and over again: if you need an accountant too, this referral link will get us both some money off

20 February, 2017 10:02AM

Ubuntu Insights: Running Kubernetes inside LXD


Introduction

For those who haven’t heard of Kubernetes before, it’s defined by the upstream project as:

Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.
It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon 15 years of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community.

It is important to note the “applications” part in there. Kubernetes deploys a set of single application containers and connects them together. Those containers will typically run a single process and so are very different from the full system containers that LXD itself provides.

This blog post will be very similar to one I published last year on running OpenStack inside a LXD container. Similarly to the OpenStack deployment, we’ll be using conjure-up to setup a number of LXD containers and eventually run the Docker containers that are used by Kubernetes.

Requirements

This post assumes you’ve got a working LXD setup, providing containers with network access and that you have at least 10GB of space for the containers to use and at least 4GB of RAM.

Outside of configuring LXD itself, you will also need to bump some kernel limits with the following commands:

sudo sysctl fs.inotify.max_user_instances=1048576  
sudo sysctl fs.inotify.max_queued_events=1048576  
sudo sysctl fs.inotify.max_user_watches=1048576  
sudo sysctl vm.max_map_count=262144
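
These settings do not survive a reboot. If you want them to persist, one option is to also append them to /etc/sysctl.conf:

cat <<EOF | sudo tee -a /etc/sysctl.conf
fs.inotify.max_user_instances=1048576
fs.inotify.max_queued_events=1048576
fs.inotify.max_user_watches=1048576
vm.max_map_count=262144
EOF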

Setting up the container

Similarly to OpenStack, the conjure-up deployed version of Kubernetes expects a lot more privileges and resource access than LXD would typically provide. As a result, we have to create a privileged container, with nesting enabled and with AppArmor disabled.

This means that not many of LXD’s security features will still be in effect for this container. Depending on how you feel about this, you may choose to run this on a different machine.

Note that all of this however remains better than instructions that would have you install everything directly on your host machine. If only by making it very easy to remove it all in the end.

lxc launch ubuntu:16.04 kubernetes -c security.privileged=true -c security.nesting=true -c linux.kernel_modules=ip_tables,ip6_tables,netlink_diag,nf_nat,overlay -c raw.lxc=lxc.aa_profile=unconfined
lxc config device add kubernetes mem unix-char path=/dev/mem

Then we need to add a couple of PPAs and install conjure-up, the deployment tool we’ll use to get Kubernetes going.

lxc exec kubernetes -- apt-add-repository ppa:conjure-up/next -y
lxc exec kubernetes -- apt-add-repository ppa:juju/stable -y
lxc exec kubernetes -- apt update
lxc exec kubernetes -- apt dist-upgrade -y
lxc exec kubernetes -- apt install conjure-up -y

And the last setup step is to configure LXD networking inside the container.
Answer with the default for all questions, except for:

  • Use the “dir” storage backend (“zfs” doesn’t work in a nested container)
  • Do NOT configure IPv6 networking (conjure-up/juju don’t play well with it)
lxc exec kubernetes -- lxd init

And that’s it for the container configuration itself, now we can deploy Kubernetes!

Deploying Kubernetes with conjure-up

As mentioned earlier, we’ll be using conjure-up to deploy Kubernetes.
This is a nice, user friendly, tool that interfaces with Juju to deploy complex services.

Start it with:

lxc exec kubernetes -- sudo -u ubuntu -i conjure-up
  • Select “Kubernetes Core”
  • Then select “localhost” as the deployment target (uses LXD)
  • And hit “Deploy all remaining applications”

This will now deploy Kubernetes. The whole process can take well over an hour depending on what kind of machine you’re running this on. You’ll see all services getting a container allocated, then getting deployed and finally interconnected.

Once the deployment is done, a few post-install steps will appear. This will import some initial images, setup SSH authentication, configure networking and finally giving you the IP address of the dashboard.

Interact with your new Kubernetes

We can ask juju to deploy a new Kubernetes workload, in this case 5 instances of “microbot”:

ubuntu@kubernetes:~$ juju run-action kubernetes-worker/0 microbot replicas=5
Action queued with id: 1d1e2997-5238-4b86-873c-ad79660db43f

You can then grab the service address from the Juju action output:

ubuntu@kubernetes:~$ juju show-action-output 1d1e2997-5238-4b86-873c-ad79660db43f
results:
 address: microbot.10.97.218.226.xip.io
status: completed
timing:
 completed: 2017-01-13 10:26:14 +0000 UTC
 enqueued: 2017-01-13 10:26:11 +0000 UTC
 started: 2017-01-13 10:26:12 +0000 UTC

Now actually using the Kubernetes tools, we can check the state of our new pods:

ubuntu@kubernetes:~$ ./kubectl get pods
NAME READY STATUS RESTARTS AGE
default-http-backend-w9nr3 1/1 Running 0 21m
microbot-1855935831-cn4bs 0/1 ContainerCreating 0 18s
microbot-1855935831-dh70k 0/1 ContainerCreating 0 18s
microbot-1855935831-fqwjp 0/1 ContainerCreating 0 18s
microbot-1855935831-ksmmp 0/1 ContainerCreating 0 18s
microbot-1855935831-mfvst 1/1 Running 0 18s
nginx-ingress-controller-bj5gh 1/1 Running 0 21m

After a little while, you’ll see everything’s running:

ubuntu@kubernetes:~$ ./kubectl get pods
NAME READY STATUS RESTARTS AGE
default-http-backend-w9nr3 1/1 Running 0 23m
microbot-1855935831-cn4bs 1/1 Running 0 2m
microbot-1855935831-dh70k 1/1 Running 0 2m
microbot-1855935831-fqwjp 1/1 Running 0 2m
microbot-1855935831-ksmmp 1/1 Running 0 2m
microbot-1855935831-mfvst 1/1 Running 0 2m
nginx-ingress-controller-bj5gh 1/1 Running 0 23m

At which point, you can hit the service URL with:

ubuntu@kubernetes:~$ curl -s http://microbot.10.97.218.226.xip.io | grep hostname
 <p class="centered">Container hostname: microbot-1855935831-fqwjp</p>

Running this multiple times will show you different container hostnames as you get load balanced between one of those 5 new instances.
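For example, a quick loop (using the same service address as above) should print a few different hostnames:

ubuntu@kubernetes:~$ for i in 1 2 3; do curl -s http://microbot.10.97.218.226.xip.io | grep hostname; done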

Conclusion

Similar to OpenStack, conjure-up combined with LXD makes it very easy to deploy rather complex, big software in a very self-contained way.

This isn’t the kind of setup you’d want to run in a production environment, but it’s great for developers, demos and whoever wants to try those technologies without investing into hardware.

Extra information

The conjure-up website can be found at: http://conjure-up.io
The Juju website can be found at: http://www.ubuntu.com/cloud/juju

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Original article

20 February, 2017 08:00AM

February 19, 2017

Thierry Carrez: Using proprietary services to develop open source software

It is now pretty well accepted that open source is a superior way of producing software. Almost everyone is doing open source these days. In particular, the ability for users to look under the hood and make changes results in tools that are better adapted to their workflows. It reduces the cost and risk of finding yourself locked in with a vendor in an unbalanced relationship. It contributes to a virtuous circle of continuous improvement, blurring the lines between consumers and producers. It enables everyone to remix and invent new things. It adds to our common human knowledge.

And yet

And yet, a lot of open source software is developed on (and with the help of) proprietary services running closed-source code. Countless open source projects are developed on GitHub, or with the help of Jira for bugtracking, Slack for communications, Google Docs for document authoring and sharing, Trello for status boards. That sounds a bit paradoxical and hypocritical -- a bit too much "do what I say, not what I do". Why is that? If we agree that open source has so many tangible benefits, why are we so willing to forfeit them with the very tooling we use to produce it?

But it's free!

The argument usually goes like this: those platforms may be proprietary, but they offer great features, and they are provided free of charge to my open source project. Why on Earth would I go through the hassle of setting up, maintaining, and paying for infrastructure to run less featureful solutions? Or why would I pay for someone to host it for me? The trick is, as the saying goes, when the product is free, you are the product. In this case, your open source community is the product. In the worst case scenario, the personal data and activity patterns of your community members will be sold to 3rd parties. In the best case scenario, your open source community is recruited by force into an army that furthers the network effect and makes it even more difficult for the next open source project to not use that proprietary service. In all cases, you, as a project, decide not to bear the direct cost, but ask each and every one of your contributors to pay for it indirectly instead. You force all of your contributors to accept the ever-changing terms of use of the proprietary service in order to participate in your "open" community.

Recognizing the trade-off

It is important to recognize the situation for what it is. A trade-off. On one side, shiny features, convenience. On the other, a lock-in of your community through specific features, data formats, proprietary protocols or just plain old network effect and habit. Each situation is different. In some cases the gap between the proprietary service and the open platform will be so large that it makes sense to bear the cost. Google Docs is pretty good at what it does, and I find myself using it when collaborating on something more complex than etherpads or ethercalcs. At the opposite end of the spectrum, there is really no reason to use Doodle when you can use Framadate. In the same vein, Wekan is close enough to Trello that you should really consider it as well. For Slack vs. Mattermost vs. IRC, the trade-off is more subtle. As a sidenote, the cost of lock-in is a lot reduced when the proprietary service is built on standard protocols. For example, GMail is not that much of a problem because it is easy enough to use IMAP to integrate it (and possibly move away from it in the future). If Slack was just a stellar opinionated client using IRC protocols and servers, it would also not be that much of a problem.

Part of the solution

Any simple answer to this trade-off would be dogmatic. You are not impure if you use proprietary services, and you are not wearing blinders if you use open source software for your project infrastructure. Each community will answer that trade-off differently, based on their roots and history. The important part is to acknowledge that nothing is free. When the choice is made, we all need to be mindful of what we gain, and what we lose. To conclude, I think we can all agree that all other things being equal, when there is an open-source solution which has all the features of the proprietary offering, we all prefer to use that. The corollary is, we all benefit when those open-source solutions get better. So to be part of the solution, consider helping those open source projects build something as good as the proprietary alternative, especially when they are pretty close to it feature-wise. That will make solving that trade-off a lot easier.

19 February, 2017 01:00PM

Stuart Langridge: The Quiet Voice

It’s harder to find news these days. On the one hand, there’s news everywhere you turn. Shrieking at you. On the other, we’re each in a bubble. Articles are rushed out to get clicks; everything’s got a political slant in one direction or another. This is not new. But it does feel like it’s getting worse.

It’s being recognised, though. Buzzfeed have just launched a thing called “Outside Your Bubble”, an admirable effort to “give our audience a glimpse at what’s happening outside their own social media spaces”; basically, it’s a list of links to views for and against at the bottom of certain articles. Boris Smus just wrote up an idea to add easily-digestible sparkline graphs to news articles which provide context to the numbers quoted. There have long been services like Channel 4’s FactCheck and AllSides which try to correct errors in published articles or give a balanced view of the news. Matt Kiser’s WTF Just Happened Today tries to summarise, and there are others.

(Aside: I am bloody sure that there’s an xkcd or similar about the idea of the quiet voice, where when someone uses a statistic on telly, the quiet voice says “that’s actually only 2% higher than it was under the last president” or something. But I cannot for the life of me find it. Help.)

So here’s what I’d like.

I want a thing I can install. A browser extension or something. And when I view an article, I get context and viewpoint on it. If the article says “Trump’s approval rating is 38%”, the extension highlights it and says “other sources say it’s 45% (link)” and “here’s a list of other presidents’ approval ratings at this point in their terms” and “here’s a link to an argument on why it’s this number”. When the article says “the UK doesn’t have enough trade negotiators to set up trade deals” there’s a link to an article claiming that that isn’t a problem and explaining why. If it says “NHS wait times are now longer than they’ve ever been” there’s a graph showing what those response times are, and linking to a study showing that NHS funding is dropping faster than response times are. An article saying that X billion is spent on foreign aid gets a note on how much that costs each taxpayer, what proportion of the budget it is, how much people think it is. It provides context, views from outside your bubble, left and right. You get to see what other people think of this and how they contextualise it; you get to see what quoted numbers mean and understand the background. It’s not political one way or the other; it’s like a wise aunt commentator, the quiet voice that says “OK, here’s what this means” so you’re better informed of how it’s relevant to you and what people outside your bubble think.

Now, here’s why it won’t work.

It won’t work because it’s a hysterical amount of effort and nobody has a motive to do it. It has to be almost instant; there’s little point in brilliantly annotating an article three days after it’s written when everyone’s already read it. It’d be really difficult for it to be non-partisan, and it’d be even more difficult to make people believe it was non-partisan even if it was. There’s no money in it — it’s explicitly not a thing that people go to, but lives on other people’s sites. And there aren’t browser extensions on mobile. The Washington Post offer something like this with their service to annotate Trump’s tweets, but extending it to all news articles everywhere is a huge amount of work. Organisations with a remit to do this sort of thing — the newly-spun-off Open News from Mozilla and the Knight Foundation, say — don’t have the resources to do anything even approaching this. And it’s no good if you have to pay for it. People don’t really want opposing views, thoughts from outside their bubble, graphs and context; that’s what’s caused this thing to need to exist in the first place! So it has to be trivial to add; if you demand money nobody will buy it. So I can’t see how you pay the army of fact checkers and linkers you need to run this. It can’t be crowd sourced; if it were then it wouldn’t be a reliable annotation source, it’d be reddit, which would be disastrous. But it’d be so useful. And once it exists they can produce a thing which generates printable PDF annotations and I can staple them inside my parents’ copy of the Daily Mail.

19 February, 2017 12:17PM

Sridhar Dhanapalan: A Complete Literacy Experience For Young Children

From the “I should have posted this months ago” vault…

When I led technology development at One Laptop per Child Australia, I maintained two golden rules:

  1. everything that we release must ‘just work’ from the perspective of the user (usually a child or teacher), and
  2. no special technical expertise should ever be required to set-up, use or maintain the technology.

In large part, I believe that we were successful.

Once the more obvious challenges have been identified and cleared, some more fundamental problems become evident. Our goal was to improve educational opportunities for children as young as possible, but proficiently using computers to input information can require a degree of literacy.

Sugar Labs have done stellar work in questioning the relevance of the desktop metaphor for education, and in coming up with a more suitable alternative. This proved to be a remarkable platform for developing a touch-screen laptop, in the form of the XO-4 Touch: the icons-based user interface meant that we could add touch capabilities with relatively few user-visible tweaks. The screen can be swivelled and closed over the keyboard as with previous models, meaning that this new version can be easily converted into a pure tablet at will.

Revisiting Our Assumptions

Still, a fundamental assumption has long gone unchallenged on all computers: the default typeface and keyboard. It doesn’t at all represent how young children learn the English alphabet or literacy. Moreover, at OLPC Australia we were often dealing with children who were behind on learning outcomes, and who were attending school with almost no exposure to English (since they speak other languages at home). How are they supposed to learn the curriculum when they can barely communicate in the classroom?

Looking at a standard PC keyboard, you’ll see that the keys are printed with upper-case letters. And yet, that is not how letters are taught in Australian schools. Imagine that you’re a child who still hasn’t grasped his/her ABCs. You see a keyboard full of unfamiliar symbols. You press one, and on the screen pops up a completely different looking letter! The keyboard may be in upper-case, but by default you’ll get the lower-case variants on the screen.

A standard PC keyboard

Unfortunately, the most prevalent touch-screen keyboard on the market isn’t any better. Given the large education market for its parent company, I’m astounded that this has not been a priority.

The Apple iOS keyboard

Better alternatives exist on other platforms, but I still was not satisfied.

A Re-Think

The solution required an examination of how children learn, and the challenges that they often face when doing so. The end result is simple, yet effective.

The standard OLPC XO mechanical keyboard (above) versus the OLPC Australia Literacy keyboard (below)

This image contrasts the standard OLPC mechanical keyboard with the OLPC Australia Literacy keyboard that we developed. Getting there required several considerations:

  1. a new typeface, optimised for literacy
  2. a cleaner design, omitting characters that are not common in English (they can still be entered with the AltGr key)
  3. an emphasis on lower-case
  4. upper-case letters printed on the same keys, with the Shift arrow angled to indicate the relationship
  5. better use of symbols to aid instruction

One interesting user story with the old keyboard that I came across was in a remote Australian school, where Aboriginal children were trying to play the Maze activity by pressing the opposite arrows that they were supposed to. Apparently they thought that the arrows represented birds’ feet! You’ll see that we changed the arrow heads on the literacy keyboard as a result.

We explicitly chose not to change the QWERTY layout. That’s a different debate for another time.

The Typeface

The abc123 typeface is largely the result of work I did with John Greatorex. It is freely downloadable (in TrueType and FontForge formats) and open source.

After much research and discussions with educators, I was unimpressed with the other literacy-oriented fonts available online. Characters like ‘a’ and ‘9’ (just to mention a couple) are not rendered in the way that children are taught to write them. Young children are also susceptible to confusion over letters that look similar, including mirror-images of letters. We worked to differentiate, for instance, the lower-case L from the upper-case i, and the lower-case p from the lower-case q.

Typography is a wonderfully complex intersection of art and science, and it would have been foolhardy for us to have started from scratch. We used as our base the high-quality DejaVu Sans typeface. This gave us a foundation that worked well on screen and in print. Importantly for us, it maintained legibility at small point sizes on the 200dpi XO display.

On the Screen

abc123 is a suitable substitute for DejaVu Sans. I have been using it as the default user interface font in Ubuntu for over a year.
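
If you would like to try it on your own system, installing a TrueType font for a single user is straightforward; a minimal sketch (the abc123.ttf filename is an assumption, adjust it to the file you downloaded):

# Copy the font into the per-user font directory and rebuild the font cache
mkdir -p ~/.local/share/fonts
cp abc123.ttf ~/.local/share/fonts/
fc-cache -f ~/.local/share/fonts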

It looks great in Sugar as well. The letters are crisp and easy to differentiate, even at small point sizes. We made abc123 the default font for both the user interface and in activities (applications).

The abc123 font in Sugar’s Write activity, on an XO laptop screen

Likewise, the touch-screen keyboard is clear and simple to use.

The abc123 font on the XO touch-screen keyboard, on an XO laptop screen

The end result is a more consistent literacy experience across the whole device. What you press on the hardware or touch-screen keyboard will be reproduced exactly on the screen. What you see on the user interface is also what you see on the keyboards.

19 February, 2017 07:24AM

February 18, 2017

LMDE

Monthly News – February 2017

We’ve got a lot of news to cover this month and many exciting details to share with you. Before we get started, I’d like to take a minute to thank the people who help our project grow. Many thanks to all our sponsors and to all the people who send donations to us; thank you for funding us. Special thanks also to the administration team for their work on the forums this month, to the many artists who joined and participate in the design team, and of course to our developers for the fantastic work we do together.

Upcoming releases

The new stable ISOs for LMDE 2 “Betsy” should be released this week.

Cinnamon Spices

Work continues in the design team on revamping the authentication, comments and rating systems to make the website compatible with the Facebook, Google and Github APIs.

The development team continues to review and improve the Cinnamon spices. Obsolete applets/desklets/themes/extensions are being removed and buggy ones are being fixed on a daily basis. Some themes which were extremely popular in the past but which hadn’t been updated for years (some of them since 2012) were updated to work with Cinnamon 3.2.

We’re getting very close to a fully functional collection of spices and thanks to the integration with Github and the automated delivery system we don’t expect spices to lag behind Cinnamon in the future anymore. Any changes required for spices to be compatible with an upcoming Cinnamon release can now be implemented directly by the development team, so spices can and should support future versions of Cinnamon even before they are released.

Bluetooth

Bluetooth is going to be much better in Linux Mint 18.2.

Here is what the new Blueberry user interface looks like:

As you can see, a stack switcher was added in the toolbar and new settings were added to the application:

OBEX file transfers are now supported out of the box, so you can send files very easily over Bluetooth to your computer from any remote device.

An option was also added so you can change the Bluetooth name of your computer. That name usually defaults to your hostname or to “mint-0” and many people don’t know how to change it via the command line.
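
For reference, here is one traditional way to do it from a terminal (a sketch; hci0 and the new name are assumptions, and with Blueberry this is no longer necessary):

# Rename the first Bluetooth adapter using the classic BlueZ tools
sudo hciconfig hci0 name 'my-computer'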

Last but not least, in addition to its cross-desktop system tray, Blueberry now provides a Cinnamon applet which uses symbolic icons and looks similar to other status applets, such as the power, sound or network applets. When this applet is present, the tray icon is hidden.

Xed

A lot of work went into Xed, the generic text editor.

“Word wrap” was made more accessible and added to the menu, so you can enable/disable that function without going into the Xed preferences.

You can also select a few lines and sort them by pressing F10, or using “Edit -> Sort Lines”.

You can now zoom in and out with the menu, keyboard shortcuts or even the mouse wheel to modify the size of the text.

The search now supports regular expressions.

You can now switch between tabs with the mouse wheel.

Python extensions are now supported and porting Gedit 3 extensions to Xed is very easy.

And as you might have noticed in the screenshot above, Xed features really exciting visual improvements. For instance, it comes with smart side and bottom bars which automatically adjust to the loaded content and which you can hide or show with a click of a button.

The ability to prefer dark themes was added, so if you’re using Mint-Y-Darker for instance, you can select whether your text editor should be light or dark.

Xplayer

The media player, Xplayer, also received improvements to its user interface.

All the controls and the seeker bar were placed on the same line and the statusbar was removed to make the application more compact.

You can now control the playback speed with the same keyboard shortcuts as in MPV, so you can make your own slow motion replays, or watch lengthy matches in about half the time it would take.

Subtitle files are now loaded automatically, but subtitles are now hidden by default. You can switch them ON or OFF, or cycle through subtitle tracks by pressing “S” on the keyboard.

You can also cycle through audio/language tracks by pressing “L” on the keyboard.

The OSD (on-screen display) was fixed and now shows the audio track or subtitle track or playback speed you selected, or your position in the movie when seeking forward or backward.

Many bugs were fixed and just like in Xed, the ability to prefer dark themes was added.

Sponsorships:

Linux Mint is proudly sponsored by:

Platinum Sponsors:
Private Internet Access
Gold Sponsors:
Linux VPS Hosting
Silver Sponsors:

Acunetix
Sucuri
Bronze Sponsors:
Vault Networks *
AYKsolutions Server & Cloud Hosting
7L Networks Toronto Colocation *
Goscomb
BGASoft Inc
David Salvo
Milton Security Group
Sysnova Information Systems
Community Sponsors:

Donations in January:

A total of $9,670 were raised thanks to the generous contributions of 483 donors:

$1337 (2nd donation), Shawn C aka “citypw
$108, Oliver Z.
$108, Paul S. E. aka “Paul”
$100 (13th donation), Anon.
$100 (4th donation), Billy Bob Roach
$100 (2nd donation), Nathalie W.
$100 (2nd donation), Bruce H.
$100, William W.
$100, Philip T.
$100, Harold H.
$100, Don Jr.
$77 (19th donation), Wolfgang P.
$75, Roger R.
$75, Fabiano P.
$65 (2nd donation), Jonas H.
$60, Frank R.
$60, James L.
$59, Thomas Ö.
$54 (5th donation), Claude M.
$54, Josef S.
$54, Arnold D.
$54, Stefan P.
$54, Fernando M. R.
$54, Xtant Logic Ltd aka “Xtant Audio
$53 (2nd donation), Jorge R. R.
$50 (24th donation), Go Live Lively
$50 (8th donation), Andrew M.
$50 (6th donation), Robert H. B.
$50 (5th donation), Christopher D.
$50 (3rd donation), José W. F. J.
$50 (2nd donation), George M.
$50 (2nd donation), Tod D.
$50 (2nd donation), Fred W.
$50 (2nd donation), Robert E. H.
$50, Juei C. C.
$50, Tom D.
$50, Allen G.
$50, George V. R.
$50, Mark F.
$50, Steven Hodder
$50, Charles W.
$50, Craig D.
$50, Roderick W.
$44.8, Systemutvikler R. S.
$40.78 (2nd donation), Steve W.
$40 (4th donation), Tomas S.
$40, Kamil R.
$40, Arvid R.
$38, Ingolf B.
$35 (4th donation), Joe K.
$35 (3rd donation), Jeff S.
$35, Toby L.
$35, Ursula C.
$32 (83rd donation), Olli K.
$32 (3rd donation), Lars-gunnar S.
$32 (2nd donation), Ian W.
$32 (2nd donation), Tommaso P.
$32 (2nd donation), Mark W.
$32, Michael H.
$32, Arnd S.
$32, Christian M.
$30 (10th donation), Geoff_P
$30 (3rd donation), Jason H
$30 (3rd donation), Tony V. aka “Troot
$30 (3rd donation), Bruce N.
$30, Mark D.
$28 (2nd donation), Dirk S.
$27 (5th donation), Roger D. P. aka “Linux Users Group Monitor Niel
$27, Andre C.
$27, Matthias H.
$27, Thierry B.
$27, Roman L.
$25 (66th donation), Ronald W.
$25 (11th donation), Jaan S.
$25 (7th donation), Larry I.
$25 (5th donation), Jeffery J.
$25 (4th donation), Michael W.
$25 (3rd donation), John C.
$25 (2nd donation), Michael K. S.
$25 (2nd donation), Ian P.
$25 (2nd donation), Stephen M.
$25, Ricardo G.
$25, Juan D.
$25, Michael C.
$25, Blair N.
$25, Graham D.
$25, Andrei P.
$25, Pacific Autotronic Systems, LLC
$25, David W.
$25, Frank R. J.
$23, Andreas E.
$22 (9th donation), Doriano G. M.
$22 (7th donation), Theo Stauffer aka “Theo”
$22 (7th donation), Alessandro P.
$22 (7th donation), Pentti T.
$22 (4th donation), Michael S.
$22 (3rd donation), Matthew Butler aka “goldberg@mint”
$22 (3rd donation), U. Flad aka “Duc Racer”
$22 (3rd donation), Alan R.
$22 (2nd donation), John A.
$22 (2nd donation), Stephan B.
$22 (2nd donation), Nurettin G.
$22 (2nd donation), Tony L. aka “tone39”
$22 (2nd donation), Zahari D. K.
$22, George P.
$22, Marcel H.
$22, Mark G.
$22, Andreas M.
$22, Carsten K.
$22, Craig M.
$22, Henricus V. L.
$22, Tamas K.
$22, Clemens H.
$22, Wolfgang H.
$22, Nikolaus N.
$22, Bruno C.
$22, Didik S.
$22, Peter L.
$22, Olivier J.
$22, Bruno Z.
$22, Tommi R.
$22, Tor A. N.
$22, Xavier Holzl aka “XavierHolzl”
$22, Monika M.
$22, T. H.
$22, Patrick H.
$22, Alberto M. H.
$22, Richard D.
$22, Carlos L. D. C.
$22, Stefan N.
$22, Erno I.
$22, Juan R.
$22, TOnline LDA
$21, Andrzej O. aka “Mintyman”
$20 (5th donation), James A.
$20 (4th donation), Ray P.
$20 (4th donation), Dave G.
$20 (4th donation), Greg W.
$20 (3rd donation), Larry P.
$20 (3rd donation), Peter R.
$20 (3rd donation), Vaughan B.
$20 (2nd donation), Stratis G.
$20 (2nd donation), Petra T.
$20 (2nd donation), Shakeel A.
$20 (2nd donation), Steven J.
$20 (2nd donation), Edward C.
$20, Barbara B.
$20, Jonathan M.
$20, Daniel O.
$20, Lyle O.
$20, Bill M.
$20, Precision P.
$20, Rodney T.
$20, Heath P.
$20, Zeshan B.
$20, Batuhan B.
$20, Michael K.
$20, Alexander Z.
$20, phaendal
$20, David T. aka “Crimson”
$20, Glenn C.
$20, David C.
$20, Zhichang Y.
$20, Lance B.
$20, Charles G.
$20, Dirk H.
$20, Stephen M.
$20, TJ Nelson
$20, James C.
$20, Michael H.
$20, Ashley S.
$20, Jeffrey J.
$20, Andrew S.
$20, Willows A. S. M. C.
$20, Greta G.
$20, Casey Melcher
$20, Qicai Z.
$19, Alejandro S.
$18.5 (6th donation), Marcin Ziółkowski aka Mario Nesta
$17, Lucian B.
$16 (5th donation), Carsten Wehner
$16 (2nd donation), Klaus K.
$16, Patrick S.
$16, Domenico M.
$16, Rob T.
$16, John H.
$16, Hannah V.
$15 (4th donation), Dental SEO Services
$15 (4th donation), Stephen C.
$15 (3rd donation), Vero Beach Dentist
$15 (3rd donation), Tyler B.
$15 (2nd donation), Kirk W.
$15, Mauricio López aka “damonh”
$15, Benjamin P.
$15, Caroline R.
$15, Delaney C.
$13.37, Smiling Cactus Gifts, LLC
$13 (9th donation), Anonymous
$13, Lucas B.
$13, Theofanis-Emmanouil T.
$13, Sigrid K.
$13, Alexandru D.
$12 (70th donation), Tony C. aka “S. LaRocca”
$12 (16th donation), Jobs Hiring Near Me
$12 (9th donation), Stefan M. H.
$12 (5th donation), Raymond M. (retired)
$11 (8th donation), Hans P.
$11 (7th donation), Queenvictoria
$11 (5th donation), JCSenar – linuxirun.com
$11 (4th donation), Tomi P.
$11 (4th donation), Francois B. aka “Makoto
$11 (4th donation), Marcin Bojko
$11 (3rd donation), Lance M.
$11 (3rd donation), Vladimir I.
$11 (3rd donation), Rajesh Nair aka “Nair”
$11 (2nd donation), Jean C. A.
$11 (2nd donation), Ueli L.
$11 (2nd donation), Sven-uwe U.
$11 (2nd donation), Alessandro L.
$11 (2nd donation), Alexandre W.
$11 (2nd donation), Luc B.
$11 (2nd donation), Jan S.
$11 (2nd donation), Franz W.
$11 (2nd donation), Andreas P.
$11 (2nd donation), Alexander Lang
$11, Günter L.
$11, Rosenberger
$11, M B. R. aka “embien”
$11, Trent R.
$11, Heinz L.
$11, Stamatis G.
$11, Christian F.
$11, Edel H.
$11, Teodar G. K.
$11, Timo R.
$11, Christos T.
$11, Frans S.
$11, Håkan K.
$11, Alejandro S. A.
$11, Karel B.
$11, Sybren S.
$11, Nezamaev D.
$11, Attila V.
$11, Bernd T.
$11, Andre H.
$11, Roland K.
$11, Jozsef T.
$11, Florian B.
$11, Morgane R.
$11, Hermanus V.
$11, Bruno V.
$11, Petri A.
$11, Slavo
$11, Gabor S.
$11, Uwe K.
$11, Sven B.
$11, Carsten S.
$11, Hans-werner B.
$11, Dirk S.
$11, Niko C.
$11, Grégoire H.
$11, Gerhard S.
$11, Csaba N.
$11, Arkadijs K.
$11, Christoph D.
$11, Dimitrios P.
$11, Birger T.
$10 (14th donation), Thomas C.
$10 (10th donation), Larry J.
$10 (8th donation), Antoine T.
$10 (8th donation), Hormis K.
$10 (7th donation), Michel C.
$10 (6th donation), Curtis M.
$10 (6th donation), Frank K.
$10 (5th donation), Car Rentals Near Me
$10 (5th donation), Richard L. S.
$10 (5th donation), Paul O.
$10 (4th donation), Car Rentals Near Me
$10 (4th donation), anonymous aka “victorsk”
$10 (3rd donation), Agenor R.
$10 (3rd donation), Stephen C.
$10 (3rd donation), G&A
$10 (3rd donation), Egil J.
$10 (3rd donation), Gaston B.
$10 (2nd donation), William B. Z.
$10 (2nd donation), Sergio D. M. F.
$10 (2nd donation), Allen G.
$10 (2nd donation), Martín P. D. L. G.
$10 (2nd donation), Lars Händler
$10 (2nd donation), John W.
$10 (2nd donation), Paul K.
$10 (2nd donation), Zbigniew D.
$10 (2nd donation), Terrance G.
$10 (2nd donation), Davor T.
$10 (2nd donation), Norman E.
$10, Roman K.
$10, Errol M.
$10, Lauri L. J.
$10, Dirk D.
$10, Michael G.
$10, Roy D.
$10, Willem V. S.
$10, DomDagen aka “DomDagen”
$10, Kevin K.
$10, Ed L.
$10, Fauz E
$10, Santiago A.
$10, Paul H.
$10, Stephen F.
$10, Kovalev A.
$10, Robert M.
$10, Peter B.
$10, Headphonesrepair.com
$10, Steven J.
$10, Dale G. J.
$10, Michael B.
$10, Massimo I.
$10, Michał S. aka “Ribald
$10, Chuck Carey
$10, Marcio P.
$10, Donald P.
$10, Tolstenko Ilya
$10, Bob W.
$10, Andereh H.
$10, Yoichi N.
$10, Greg K.
$10, Cody T.
$10, Jeff H.
$10, 高山 公一郎
$10, Zhidkov A.
$10, Gary P.
$10, Alastair M.
$10, Kasper W.
$10, Keith H.
$10, Alfred F.
$10, Lowell D.
$10, Steven M.
$10, Brian W.
$8 (2nd donation), Helmut S.
$8, Sergey M.
$7.5 (2nd donation), L L.
$7.25, Andrzej P.
$7 (7th donation), CV Smith
$7 (2nd donation), Conrado G.
$7 (2nd donation), Von L.
$7 (2nd donation), Andre J.
$6 (2nd donation), Oliver Q. aka “oqv”
$6, Mark W.
$6, Monika L.
$5 (29th donation), LM aka “LinuxMint
$5 (10th donation), Eugene T.
$5 (7th donation), Snorri Gylfason
$5 (6th donation), Risikolebensversicherung-Vergleich
$5 (5th donation), Korneliusz M. aka “audiokor
$5 (5th donation), Cathi I.
$5 (4th donation), Dean A. aka “LinuxGeek”
$5 (4th donation), Olaf B.
$5 (3rd donation), Gabriele S.
$5 (3rd donation), Paweł B.
$5 (3rd donation), Dirk M.
$5 (3rd donation), Andre Cardoso
$5 (3rd donation), Arnold
$5 (3rd donation), kuponiarnia.pl
$5 (3rd donation), Vladimir U.
$5 (2nd donation), Bhavinder Jassar
$5 (2nd donation), Dmitry P.
$5 (2nd donation), Pau S. F.
$5 (2nd donation), Rodolfo B.
$5 (2nd donation), Michael C.
$5 (2nd donation), Kārlis M.
$5 (2nd donation), Crossword Solver
$5 (2nd donation), Richard A.
$5, Robert G.
$5, John D. aka “siliconjohn”
$5, Weyman S.
$5, Sourav B. aka “rmad17
$5, Matthew O.
$5, aka “Cachafaz”
$5, Edward S.
$5, Fabian Peter Hammerle
$5, Mattias W.
$5, Voicu R.
$5, Jesus F. E. F.
$5, Richard K.
$5, Julius K.
$5, Christoph C.
$5, Giorgio F. L.
$5, Andre A.
$5, Adrien R.
$5, Hans H.
$5, Gyu H. O. aka “karistuck
$5, Hans-peter P.
$5, John C. M.
$5, Jose A. S. S.
$5, Damian C.
$5, Arturas C.
$5, Juan P.
$5, Samuel aka “LEGOlord208
$5, Mika J.
$5, Emanuele S.
$5, Jozo M.
$5, Yehuda D. aka “uda
$5, Khaled M.
$5, Bartłomiej L.
$5, Otto Skultety aka “ottos”
$5, Simonetta E.
$5, Olivier R.
$5, Jamie P.
$5, Jerzy D.
$5, Robert T.
$5, Reuben R.
$5, Ivan K.
$5, Krzysztof G.
$5, Francisco G. P.
$5, Vadimír P.
$5, Sasha S.
$5, Ivan B.
$4, Agnieszka Z.
$3.78 (6th donation), Matthew B.
$3.75 (7th donation), Matthew B.
$3.5 (2nd donation), Another Canadian happily enjoying Linux
$3 (26th donation), Kouji K. aka “杉林晃治
$3 (9th donation), elogbookloan
$3 (5th donation), Marko Jagodić
$3 (3rd donation), Rajalaptop
$3, Oscar C. G.
$3, https://msfindom.com
$3, Gumersindo M. D.
$3, 1 CUP AWESOME
$3, Zhbanov O.
$3, Michal L.
$3, Clara G. G.
$3, Joywebstudio
$3, Aldonin S.
$2.5, Suhartas
$2.5, Catalin C. aka “Canizares”
$2 (3rd donation), Mansur S.
$2 (2nd donation), Timothy B.
$2 (2nd donation), AlephAlpha
$2, Зайцев П.
$2, GoCamp24
$2, Zakharov M.
$2, Shishio’s Place
$40.2 from 40 smaller donations

If you want to help Linux Mint with a donation, please visit http://www.linuxmint.com/donors.php

Rankings:

  • Distrowatch (popularity ranking): 2785 (1st)
  • Alexa (website ranking): 4134

18 February, 2017 12:28PM by Clem

February 17, 2017

hackergotchi for Ubuntu developers

Ubuntu developers

The Fridge: Ubuntu 16.04.2 LTS Released

The Ubuntu team is pleased to announce the release of Ubuntu 16.04.2 LTS (Long-Term Support) for its Desktop, Server, and Cloud products, as well as other flavours of Ubuntu with long-term support.

Like previous LTS series, 16.04.2 includes hardware enablement stacks for use on newer hardware. This support is offered on all architectures except for 32-bit powerpc, and is installed by default when using one of the desktop images. Ubuntu Server defaults to installing the GA kernel; however, you may select the HWE kernel from the installer bootloader.

As usual, this point release includes many updates, and updated installation media has been provided so that fewer updates will need to be downloaded after installation. These include security updates and corrections for other high-impact bugs, with a focus on maintaining stability and compatibility with Ubuntu 16.04 LTS.

Kubuntu 16.04.2 LTS, Xubuntu 16.04.2 LTS, Mythbuntu 16.04.2 LTS, Ubuntu GNOME 16.04.2 LTS, Lubuntu 16.04.2 LTS, Ubuntu Kylin 16.04.2 LTS, Ubuntu MATE 16.04.2 LTS and Ubuntu Studio 16.04.2 LTS are also now available. More details can be found in their individual release notes:

https://wiki.ubuntu.com/XenialXerus/ReleaseNotes#Official_flavours

Maintenance updates will be provided for 5 years for Ubuntu Desktop, Ubuntu Server, Ubuntu Cloud, Ubuntu Base, and Ubuntu Kylin. All the remaining flavours will be supported for 3 years.

To get Ubuntu 16.04.2

In order to download Ubuntu 16.04.2, visit:

http://www.ubuntu.com/download

Users of Ubuntu 14.04 will be offered an automatic upgrade to 16.04.2 via Update Manager. For further information about upgrading, see:

https://help.ubuntu.com/community/XenialUpgrades

As always, upgrades to the latest version of Ubuntu are entirely free of charge.
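
If you prefer to upgrade from a terminal rather than through Update Manager, a rough sketch (this assumes update-manager-core is installed and Prompt=lts is set in /etc/update-manager/release-upgrades):

# Bring 14.04 fully up to date first, then start the LTS-to-LTS upgrade
sudo apt-get update && sudo apt-get dist-upgrade
sudo do-release-upgrade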

We recommend that all users read the 16.04.1 release notes, which document caveats and workarounds for known issues, as well as more in-depth notes on the release itself. They are available at:

https://wiki.ubuntu.com/XenialXerus/ReleaseNotes

If you have a question, or if you think you may have found a bug but aren’t sure, you can try asking in any of the following places:

Help Shape Ubuntu

If you would like to help shape Ubuntu, take a look at the list of ways you can participate at:

http://www.ubuntu.com/community/get-involved

About Ubuntu

Ubuntu is a full-featured Linux distribution for desktops, laptops, clouds and servers, with a fast and easy installation and regular releases. A tightly-integrated selection of excellent applications is included, and an incredible variety of add-on software is just a few clicks away.

Professional services including support are available from Canonical and hundreds of other companies around the world. For more information about support, visit:

http://www.ubuntu.com/support

More Information

You can learn more about Ubuntu and about this release on our website listed below:

http://www.ubuntu.com/

To sign up for future Ubuntu announcements, please subscribe to Ubuntu’s very low volume announcement list at:

http://lists.ubuntu.com/mailman/listinfo/ubuntu-announce

Originally posted to the ubuntu-announce mailing list on Fri Feb 17 01:45:12 UTC 2017 by Adam Conrad

17 February, 2017 11:01PM

hackergotchi for SparkyLinux

SparkyLinux

Extra 0.0.1

There is a new, small application available in Sparky repos, for Enlightenment lovers: Extra 0.0.1.

Extra is a app which allows you to install elementary themes on your computer.
The app is based on elementary and uses efl libraries to download the theme.
It does so by fetching themes from extra.enlightenment.org.

Installation:
sudo apt-get update
sudo apt-get install extra

Extra

There is also a short entry about Extra in Sparky Wiki:
https://sparkylinux.org/wiki/doku.php/extra

17 February, 2017 10:01PM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

Simon Raffeiner: Setting up an “All-Snap” Ubuntu Core image in a QEMU/KVM virtual machine

Part 1: Setting up an “All-Snap” Ubuntu Core image in a QEMU/KVM virtual machine Part 2: Basic management of an Ubuntu Core installation Part 3: Mir and graphics on Ubuntu Core Part 4: Confinement You’ve probably heard a lot about Snappy and Ubuntu Core in the past couple of months. Since the whole ecosystem is slightly becoming “tryable”, […]

17 February, 2017 09:48PM

hackergotchi for rescatux

rescatux

Super Grub2 Disk 2.02s7 downloads

Recommended download (Floppy, CD & USB in one) (Valid for i386, x86_64, i386-efi and x86_64-efi):

Super Grub2 Disk 2.01 rc2 Main Menu

EFI x86_64 standalone version:

EFI i386 standalone version:

CD & USB in one downloads:

About other downloads. As this is the first time I develop Super Grub2 Disk out of source code (well, probably not the first time, but the first time in ages) I have not been able to build these other downloads: coreboot, i386-efi, i386-pc, ieee1275, x86_64-efi, standalone coreboot, standalone i386-efi, standalone ieee1275. bfree has helped on this matter and with his help we might have those builds in next releases. If you want such builds, drop a mail to the mailing list so that we are aware of that need.

Source code:

Everything (All binary releases and source code):

Hashes

In order to check the former downloads you can either check the download directory page for this release

or you can check checksums right here:

MD5SUMS

232097ea540cde61d9caf72ac0910bf6  super_grub2_disk_2.02s7_source_code.tar.gz
f7963b543da22b83287d7ca3e9b99576  super_grub2_disk_hybrid_2.02s7.iso
15c2bf2ab0e56b1b730ffd702afc2983  super_grub2_disk_i386_efi_2.02s7.iso
1f91cb552d0e7f66e0e1ea8375071cfc  super_grub2_disk_i386_pc_2.02s7.iso
5abeb72308b19c509c509341e7c1f9f2  super_grub2_disk_standalone_i386_efi_2.02s7.EFI
f9b416f63063bd5c9a472e95799e749b  super_grub2_disk_standalone_x86_64_efi_2.02s7.EFI
619af11e95d10a7bfa03b07c6f1003ce  super_grub2_disk_x86_64_efi_2.02s7.iso

SHA1SUMS

cfe1fc2d9e9ce2cfc5e23047cc63f745c5d1a23e  super_grub2_disk_2.02s7_source_code.tar.gz
64da45d1df8455726bdcb81d85b5543f41ee8a76  super_grub2_disk_hybrid_2.02s7.iso
830392a0fd9efe934cb9b2bfa4e26a53a04e842e  super_grub2_disk_i386_efi_2.02s7.iso
ecd2d58e47da70ab4694e4ba9afb4d9214cb9c5c  super_grub2_disk_i386_pc_2.02s7.iso
3b0b21e634a7ce3c459bfad992106d722b4b46b8  super_grub2_disk_standalone_i386_efi_2.02s7.EFI
ac7328f1bc7cc87e59679aa35fb55a655b5ca5b4  super_grub2_disk_standalone_x86_64_efi_2.02s7.EFI
8e2b7160390de34478a755fc17fbd5a3b8c64099  super_grub2_disk_x86_64_efi_2.02s7.iso

SHA256SUMS

7528bf6fb3470e6282bc574f28f4062b2308fbc21b039508d15cba407ddedd88  super_grub2_disk_2.02s7_source_code.tar.gz
6b871fc4df2708cddced3d702671d16855a8f9e800d6c34c53106a89973faf83  super_grub2_disk_hybrid_2.02s7.iso
82ffa6fd4c0dbe1561f15e732480a1b7f4beab4da27a0f15c657cacf0e20cc8c  super_grub2_disk_i386_efi_2.02s7.iso
19eb36408275a6a8945a28b5b32b4999e250d2e9a0244b19f2d96de7edaefa5e  super_grub2_disk_i386_pc_2.02s7.iso
42d9774f58f896a98351b9ac793021a7b06d98d2e53b49adbc284986c6f9a9cb  super_grub2_disk_standalone_i386_efi_2.02s7.EFI
05176f79e61c28456b713225043eb69b98bddf26287d8e7e153c6b19d829faf0  super_grub2_disk_standalone_x86_64_efi_2.02s7.EFI
03153f2a8d3f912cfcc893ad5b940384cb7a44abcc91f7395eeccbf86b94154d  super_grub2_disk_x86_64_efi_2.02s7.iso
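
To verify a download, compute its checksum locally and compare it with the values above; for example, for the hybrid ISO (run in the directory where you saved the file):

# The printed hash should match the SHA256SUMS entry above
sha256sum super_grub2_disk_hybrid_2.02s7.iso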


17 February, 2017 08:04PM by adrian15

Super Grub2 Disk 2.02s7 released

Super Grub2 Disk 2.02s7 stable is here.

Super GRUB2 Disk is a live cd that helps you to boot into most any Operating System (OS) even if you cannot boot into it by normal means.

A new stable release

The former Super Grub2 Disk stable release was the 2.02s6 version, released in January 2017 (1 month ago). New features and changes since the previous stable version are:

  • Updated grub 2.02 build to tag: 2.02~rc1. This is the release candidate for the final stable 2.02 upstream Grub release. Please use this build to give them (upstream Grub) feedback on this version. It’s advised to ask here before reporting to them, so that we can rule out the bug being a Super Grub2 Disk specific one.
Super Grub2 Disk 2.02s5 – Detect and show boot methods in action

We are going to look at the complete Super Grub2 Disk feature set with a demo video, where you can download it, the thank you – hall of fame, and some thoughts about Super Grub2 Disk development.

Please do not forget to read our howtos so that you can have step-by-step guides (how to make a CD-ROM or a USB, how to boot from it, etc.) on how to use Super Grub2 Disk and, if needed, Rescatux.
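
For instance, writing the hybrid ISO to a USB stick usually boils down to something like this (a sketch; replace /dev/sdX with your actual USB device, and note that this erases everything on it):

# Write the hybrid image to the whole USB device, not to a partition
sudo dd if=super_grub2_disk_hybrid_2.02s7.iso of=/dev/sdX bs=4M status=progress
sync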

Super Grub2 Disk 2.02s4 main menu

Tour

Here is a little video tour to discover most of the Super Grub2 Disk options. The rest of the options you will have to discover by yourself.

Features

Most of the features here will let you boot into your Operating Systems. The rest of the options will improve Super Grub2 Disk’s operating system autodetection (enable RAID, LVM, etc.) or will deal with minor aspects of the user interface (colours, language, etc.).

  • Change the language UI
  • Translated into several languages
    • Spanish / Español
    • German / Deutsch
    • French / Français
    • Italian / Italiano
    • Malay / Bahasa Melayu
    • Russian
Super Grub2 Disk 2.01 rc2 Spanish Main Menu
  • Detect and show boot methods option to detect most Operating Systems
Super Grub2 Disk 2.01 beta 3 – Everything menu making use of grub.cfg extract entries option functionality
  • Enable all native disk drivers *experimental* to detect most Operating Systems also in special devices or filesystems
  • Boot manually
    • Operating Systems
    • grub.cfg – Extract entries

      Super Grub2 Disk 2.01 beta 3 grub.cfg Extract entries option
    • grub.cfg – (GRUB2 configuration files)
    • menu.lst – (GRUB legacy configuration files)
    • core.img – (GRUB2 installation (even if mbr is overwritten))
    • Disks and Partitions (Chainload)
    • Bootable ISOs (in /boot-isos or /boot/boot-isos)
  • Extra GRUB2 functionality
    • Enable GRUB2’s LVM support
    • Enable GRUB2’s RAID support
    • Enable GRUB2’s PATA support (to work around BIOS bugs/limitation)
    • Mount encrypted volumes (LUKS and geli)
    • Enable serial terminal
  • Extra Search functionality
    • Search in floppy ON/OFF
    • Search in CDROM ON/OFF
  • List Devices / Partitions
  • Color ON /OFF
  • Exit
    • Halt the computer
    • Reboot the computer

Supported Operating Systems

Excluding overly customised kernels from university students, Super Grub2 Disk can autodetect and boot almost every Operating System. Some examples are written here so that Google bots can see them, and also to reassure the final user who searches for his own special (according to him) Operating System.

  • Windows
    • Windows 10
    • Windows Vista/7/8/8.1
    • Windows NT/2000/XP
    • Windows 98/ME
    • MS-DOS
    • FreeDOS
  • GNU/Linux
    • Direct Kernel with autodetected initrd
      Super Grub2 Disk – Detect any Operating System – Linux kernels detected
      • vmlinuz-*
      • linux-*
      • kernel-genkernel-*
    • Debian / Ubuntu / Mint
    • Mageia
    • Fedora / CentOS / Red Hat Enterprise Linux (RHEL)
    • openSUSE / SuSE Linux Enterprise Server (SLES)
    • Arch
    • And many, many more.
  • FreeBSD
    • FreeBSD (single)
    • FreeBSD (verbose)
    • FreeBSD (no ACPI)
    • FreeBSD (safe mode)
    • FreeBSD (Default boot loader)
  • EFI files
  • Mac OS X/Darwin 32bit or 64bit
Super Grub2 Disk 2.00s2 rc4 Mac OS X entries (Image credit to: Smx)

Support for different hardware platforms

Before this release we only had the hybrid version aimed at regular PCs. Now, with the new EFI-based machines arriving, you have the EFI standalone versions among others. What we don’t support is booting when Secure Boot is enabled.

  • Most any PC thanks to hybrid version (i386, x86_64, i386-efi, x86_64-efi) (ISO)
  • EFI x86_64 standalone version (EFI)
  • EFI i386 standalone version (EFI)
  • Additional Floppy, CD and USB in one download (ISO)
    • i386-pc
    • i386-efi
    • x86_64-efi

Known bugs

  • Non-English translations are not complete
  • Enable all native disk drivers *experimental* crashes Virtualbox randomly

Supported Media

  • Compact Disk – Read Only Memory (CD-ROM) / DVD
  • Universal Serial Bus (USB) devices
  • Floppy (1.98s1 version only)

Downloads

Recommended download (Floppy, CD & USB in one) (Valid for i386, x86_64, i386-efi and x86_64-efi):

Super Grub2 Disk 2.01 rc2 Main Menu


EFI x86_64 standalone version:

EFI i386 standalone version:

CD & USB in one downloads:

About other downloads. As this is the first time I develop Super Grub2 Disk out of source code (well, probably not the first time, but the first time in ages) I have not been able to build these other downloads: coreboot, i386-efi, i386-pc, ieee1275, x86_64-efi, standalone coreboot, standalone i386-efi, standalone ieee1275. bfree has helped on this matter and with his help we might have those builds in next releases. If you want such builds, drop a mail to the mailing list so that we are aware of that need.

Source code:

Everything (All binary releases and source code):

Hashes

In order to check the former downloads you can either check the download directory page for this release

or you can check checksums right here:

MD5SUMS

232097ea540cde61d9caf72ac0910bf6  super_grub2_disk_2.02s7_source_code.tar.gz
f7963b543da22b83287d7ca3e9b99576  super_grub2_disk_hybrid_2.02s7.iso
15c2bf2ab0e56b1b730ffd702afc2983  super_grub2_disk_i386_efi_2.02s7.iso
1f91cb552d0e7f66e0e1ea8375071cfc  super_grub2_disk_i386_pc_2.02s7.iso
5abeb72308b19c509c509341e7c1f9f2  super_grub2_disk_standalone_i386_efi_2.02s7.EFI
f9b416f63063bd5c9a472e95799e749b  super_grub2_disk_standalone_x86_64_efi_2.02s7.EFI
619af11e95d10a7bfa03b07c6f1003ce  super_grub2_disk_x86_64_efi_2.02s7.iso

SHA1SUMS

cfe1fc2d9e9ce2cfc5e23047cc63f745c5d1a23e  super_grub2_disk_2.02s7_source_code.tar.gz
64da45d1df8455726bdcb81d85b5543f41ee8a76  super_grub2_disk_hybrid_2.02s7.iso
830392a0fd9efe934cb9b2bfa4e26a53a04e842e  super_grub2_disk_i386_efi_2.02s7.iso
ecd2d58e47da70ab4694e4ba9afb4d9214cb9c5c  super_grub2_disk_i386_pc_2.02s7.iso
3b0b21e634a7ce3c459bfad992106d722b4b46b8  super_grub2_disk_standalone_i386_efi_2.02s7.EFI
ac7328f1bc7cc87e59679aa35fb55a655b5ca5b4  super_grub2_disk_standalone_x86_64_efi_2.02s7.EFI
8e2b7160390de34478a755fc17fbd5a3b8c64099  super_grub2_disk_x86_64_efi_2.02s7.iso

SHA256SUMS

7528bf6fb3470e6282bc574f28f4062b2308fbc21b039508d15cba407ddedd88  super_grub2_disk_2.02s7_source_code.tar.gz
6b871fc4df2708cddced3d702671d16855a8f9e800d6c34c53106a89973faf83  super_grub2_disk_hybrid_2.02s7.iso
82ffa6fd4c0dbe1561f15e732480a1b7f4beab4da27a0f15c657cacf0e20cc8c  super_grub2_disk_i386_efi_2.02s7.iso
19eb36408275a6a8945a28b5b32b4999e250d2e9a0244b19f2d96de7edaefa5e  super_grub2_disk_i386_pc_2.02s7.iso
42d9774f58f896a98351b9ac793021a7b06d98d2e53b49adbc284986c6f9a9cb  super_grub2_disk_standalone_i386_efi_2.02s7.EFI
05176f79e61c28456b713225043eb69b98bddf26287d8e7e153c6b19d829faf0  super_grub2_disk_standalone_x86_64_efi_2.02s7.EFI
03153f2a8d3f912cfcc893ad5b940384cb7a44abcc91f7395eeccbf86b94154d  super_grub2_disk_x86_64_efi_2.02s7.iso

Changelog (since former 2.00s2 stable release)

Changes since 2.02s6 version:

  • Updated grub 2.02 build to tag: 2.02~rc1

Changes since 2.02s5 version:

  • Added Russian language
  • Improved Arch Linux initramfs detection
  • Added i386-efi build support
  • Added i386-efi to the hybrid iso
  • Grub itself is translated when a language is selected
  • Added loopback.cfg file (not officially supported)
  • (Devel) sgrub.pot updated to latest strings
  • (Devel) Added grub-build-004-make-check so that we ensure the build works
  • (Devel) Make sure linguas.sh is built when running ‘grub-build-002-clean-and-update’
  • (Devel) Updated upstream Super Grub2 Disk repo on documentation
  • (Devel) Move core supergrub menu under menus/sgd
  • (Devel) Use sg2d_directory as the base super grub2 disk directory variable
  • (Devel) New supergrub-sourcecode script that creates current git branch source code tar.gz
  • (Devel) New supergrub-all-zip-file script: Makes sure a zip file of everything is built.
  • (Devel) supergrub-meta-mkrescue: Build everything into releases directory in order to make source code more clean.
  • (Devel) New supergrub-official-release script: Build main files, source code and everything zip file from a single script in order to ease official Super Grub2 Disk releases.

Changes since 2.02s4 version:

  • Stop trying to chainload devices under UEFI and improve the help people get in the case of a platform mismatch
  • (Devel) Properly support source based built grub-mkfont binary.
  • New options were added to chainload directly either /ntldr or /bootmgr thanks to ntldr command. They only work in BIOS mode.

Changes since 2.02s3 version:

  • Using upstream grub-2.02-beta3 tag as the new base for Super Grub2 Disk’s grub.
  • Major improvement in Windows OS detection (based on BCD) Windows Vista, 7, …
  • Major improvement in Windows OS detection (based on ntldr) Windows XP, 2000, …

Changes since 2.02s2 beta 1 version:

  • (Devel) grub-mkstandalone was deleted because we no longer use it
  • Updated (and added) Copyright notices for 2015
  • New option: ‘Disks and Partitions (Chainload)’ adapted from Smx work
  • Many files were rewritten so that they only loop between devices that actually need to be searched into.
    This enhancement will make Super Grub2 Disk faster.
  • Remove Super Grub2 Disk own devices from search by default. Added an option to be able to enable/disable the Super Grub2 Disk own devices search.

2.02s2 beta 1 changelog:

  • Updated grub 2.02 build to commit: 8e5bc2f4d3767485e729ed96ea943570d1cb1e45
  • Updated documentation for building Super Grub2 Disk
  • Improvement on upstream grub (d29259b134257458a98c1ddc05d2a36c677ded37 – test: do not stop after first file test or closing bracket) will probably make Super Grub2 Disk run faster.
  • Added new grub build scripts so that Super Grub2 Disk uses its own built versions of grub and not the default system / distro / chroot one.
  • Ensure that Mac OS X entries are detected ok thanks to Users dir. This is because Grub2 needs to emulate Mac OS X kernel so that it’s detected as a proper boot device on Apple computers.
  • Thanks to upstream grub improvement now Super Grub2 Disk supports booting in EFI mode when booted from a USB device / hard disk. Actually SG2D was announced previously to boot from EFI from a USB device while it only booted from a cdrom.

2.02s1 beta 1 changelog:

  • Added new option: “Enable all native disk drivers” so that you can try to load: SATA, PATA and USB hard disks (and their partitions) as native disk drives. This is experimental.
  • Removed no longer needed options: “Enable USB” and “Enable PATA”.
  • “Search floppy” and “Search cdrom” options were moved into “Extra GRUB2 functionality menu”. At the same time “Extra Search functionality” menu was removed.
  • Added new straight-forward option: “Enable GRUB2’s RAID and LVM support”.
  • “List devices/partitions” was renamed to “Print devices/partitions”.
  • “Everything” option was renamed to “Detect and show boot methods”.
  • “Everything +” option was removed to avoid confusions.
  • Other minor improvements in the source code.
  • Updated translation files. Now most translations are pending.
  • Updated INSTALL instructions.

Finally you can check all the detailed changes at our GIT commits.

If you want to translate Super Grub2 Disk into your language, please check the TRANSLATION file in the source code to learn how.

Thank you – Hall of fame

I want to thank in alphabetical order:

  • The upstream Grub crew. I’m subscribed to both help-grub and grub-devel and I admire the work you do there.

The person who wrote this article is adrian15.

And I cannot forget to thank bTactic, the company where I work and which hosts our site.

Some thoughts about Super Grub2 Disk development

Super Grub2 Disk development ideas

There are two main improvements that can be made to Super Grub2 Disk in the next releases:

  • Fix the md5sum files associated with the iso files so that they don’t contain the full path.

Old idea: I don’t know when but I plan to readapt some scripts from os-prober. That will let us detect more operating systems. Not sure when though. I mean, it’s not something that worries me because it does not affect too many final users. But, well, it’s something new that I hadn’t thought about.

Again, please send us feedback on what you think is missing in Super Grub2 Disk.

Rescatux development

I want to focus on Rescatux development in the next months so that we have a stable release before the end of 2017. Now I need to finish adding UEFI features, fix the scripts that generate the Rescatux source code (difficult) and write a lot of documentation.

(adrian15 speaking)

Getting help on using Super Grub2 Disk

More information about Super Grub2 Disk


17 February, 2017 07:59PM by adrian15

hackergotchi for Ubuntu developers

Ubuntu developers

Kubuntu General News: Kubuntu 16.04.2 LTS Update Available

The second point release update to our LTS release 16.04 is out now. This contains all the bugfixes added to 16.04 since its first release in April. Users of 16.04 can run the normal update procedure to get these bugfixes. In addition, we suggest adding the Backports PPA to update to Plasma 5.8.5. Read more about it: http://kubuntu.org/news/plasma-5-8-5-bugfix-release-in-xenial-and-yakkety-backports-now/

Warning: 14.04 LTS to 16.04 LTS upgrades are problematic, and should not be attempted by the average user. Please install a fresh copy of 16.04.2 instead. To prevent messages about upgrading, change Prompt=lts to Prompt=normal or Prompt=never in the /etc/update-manager/release-upgrades file. As always, make a thorough backup of your data before upgrading.
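
For example, one non-interactive way to make that change (a sketch):

# Stop Update Manager from offering LTS-to-LTS upgrade prompts
sudo sed -i 's/^Prompt=lts/Prompt=normal/' /etc/update-manager/release-upgrades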

See the Ubuntu 16.04.2 release announcement and Kubuntu Release Notes.

Download 16.04.2 images.

17 February, 2017 05:42PM

Ubuntu Insights: Snapcraft 2.27 has been released

Hello snapcrafters!

We are pleased to announce the release snapcraft 2.27:
https://launchpad.net/snapcraft/+milestone/2.27

Contributions

This release has seen some contributions from outside of the snapcraft core team, so we want to give a shout-out to these folks; here’s a team thank you to:

  • Colin Watson
  • John Lenton
  • Kit Randel
  • Loïc Minier
  • Marco Trevisan
  • elespike

New in this release

Faster iteration

This release brings in many features to speed up development and iteration. The biggest under-the-covers improvement is that caching of stage-packages works correctly again, so successive pull steps that include a repeated set of stage-packages will be a breeze.

The other improvement is that delta uploads are now possible. This is currently disabled but can be toggled by a feature flag in the environment; just set DELTA_UPLOADS_EXPERIMENTAL=1 and enjoy the benefits. The tentative plan is for this to be the default in snapcraft 2.28.
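
Trying it out only requires that flag in the environment of the push; a minimal sketch (the snap file name here is just an example):

# Enable experimental delta uploads for this invocation only
DELTA_UPLOADS_EXPERIMENTAL=1 snapcraft push my-snap_0.1_amd64.snap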

classic confinement

Improvements have been made to the experimental classic confinement build setup to make it more robust and reliable. These improvements allow building classic confined snaps that work across a wider set of OS releases (particularly those with differing glibc versions). An early adopter of this work is conjure-up, which now sports Trusty Tahr support. Learn more about conjure-up by visiting http://conjure-up.io/

python plugin

The python plugin has also received some attention with regard to classic confinement. Most importantly, it no longer leaks any plugin-specific variables into the environment.

Another improvement is that the plugin is now capable of detecting an already staged interpreter and using that instead of providing one itself. This lets you choose your own interpreter (which is important for classic confined snaps until the core snap implements use of --library-path for ld).
Making use of your own interpreter is really easy, as it uses the common language already implemented in snapcraft (the plugin is just now smarter); here’s a snippet:

parts:
  my-python-app:
    source: ...
    plugin: python
    after: [python]
  python:
    source: https://www.python.org/ftp/python/3.6.0/Python-3.6.0.tar.xz
    plugin: autotools
    configflags: [--prefix=/usr]
    build-packages: [libssl-dev]
    prime:
      - -usr/include

And with that you get to use python 3.6.0 in your snap!

CI builds

Prior to snapcraft 2.27 it was not possible to build on non-snapd-enabled environments, as the core snap needs to be available on the system where the classic confined snap is to be built. From this version onwards it should be possible to build classic confined snaps either with cleanbuild or on the Launchpad builders, as snapcraft is hinted about the environment and sets up core accordingly.

Building on other lxd remotes

A simple but useful feature is offloading builds to different machines; with that in mind, one can now offload cleanbuild executions onto other lxd remotes. It is as simple as:

snapcraft cleanbuild --remote my-remote

To create my-remote just follow the setup instructions on https://linuxcontainers.org/lxd/getting-started-cli/#multiple-hosts
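
In rough terms those instructions boil down to the following (a sketch; the remote name, address and password are placeholders):

# On the machine that will run the builds: expose LXD over the network
lxc config set core.https_address "[::]:8443"
lxc config set core.trust_password some-password
# On your workstation: register that machine as a remote and target it
lxc remote add my-remote 192.168.1.50
snapcraft cleanbuild --remote my-remote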

Setting up environment

No more wrapper scripts just to set up the environment on entry; this is now tied into an app entry in apps. Here’s a quick example:

apps:
  vim:
    command: bin/vim
    environment:
      VIMRUNTIME: $SNAP/share/vim/vim80

Releasing to channel tracks

Releasing to tracks worked out of the box; this is a user experience improvement on the result one sees when trying to do so.

If you are wondering what tracks are, here’s a simple explanation: they are like a Long Term Support channel added to your regular stability level channels (i.e. stable, candidate, beta, edge). This is useful for cases where some users need to stick to a major version number, as in the case of etcd where some might want to stick to 2.3 while others are happy with tracking latest (which is an implicit track).

From a snap developer's point of view, here's how to push and release to edge on the 0.2 track:

$ snapcraft push hello_0.3_amd64.snap --release 0.2/edge
Pushing 'hello_0.3_amd64.snap' to the store.
Uploading hello_0.3_amd64.snap [==============================================] 100%
Ready to release!
Revision 3 of 'hello' created.
Arch   Track  Series  Channel    Version  Revision
amd64  0.2    16      stable     -        -
                      candidate  -        -
                      beta       -        -
                      edge       0.3      3

And here’s how you would release:

$ snapcraft release hello 3 0.2/beta
Arch   Track  Series  Channel    Version  Revision
amd64  0.2    16      stable     -        -
                      candidate  -        -
                      beta       0.3      3
                      edge       0.3      3

The ‘0.2/beta’ channel is now open.
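
And from a user's point of view, following that track is presumably just a channel argument away (a sketch, reusing the 0.2 track from above):

snap install hello --channel=0.2/beta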

Others

For the full list of things available in 2.27, feel free to check https://launchpad.net/snapcraft/+milestone/2.27

Final Notes

To get the source for this release, check it out at https://github.com/snapcore/snapcraft/releases/tag/2.27

A great place to collaborate and discuss features, bugs and ideas on snapcraft is the snapcraft@lists.snapcraft.io mailing list or the snapcraft channel on Rocket Chat: https://rocket.ubuntu.com/channel/snapcraft

To file bugs, please go to https://bugs.launchpad.net/snapcraft/+filebug.

Happy snapcrafting!
— Sergio and the team

17 February, 2017 03:39PM

Dustin Kirkland: HOWTO: Automatically import your public SSH keys into LXD Instances

Just another reason why LXD is so awesome...

You can easily configure your own cloud-init configuration into your LXD instance profile.

In my case, I want cloud-init to automatically run ssh-import-id kirkland, to fetch my keys from Launchpad. Alternatively, I could use gh:dustinkirkland to fetch my keys from GitHub.
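
For reference, this is the same import you can run by hand on any machine (a small sketch; ssh-import-id treats lp: as the default prefix):

ssh-import-id kirkland           # fetch keys from Launchpad
ssh-import-id gh:dustinkirkland  # fetch keys from GitHub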

Here's how!

First, edit your default LXD profile (or any other, for that matter):

$ lxc profile edit default

Then, add the config snippet, like this:

config:
  user.vendor-data: |
    #cloud-config
    users:
      - name: root
        ssh-import-id: gh:dustinkirkland
        shell: /bin/bash
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
name: default

Save and quit in your interactive editor, and then launch a new instance:

$ lxc launch ubuntu:x
Creating amazed-manatee
Starting amazed-manatee

Find your instance's IP address:

$ lxc list
+----------------+---------+----------------------+----------------------------------------------+------------+-----------+
|      NAME      |  STATE  |         IPV4         |                     IPV6                     |    TYPE    | SNAPSHOTS |
+----------------+---------+----------------------+----------------------------------------------+------------+-----------+
| amazed-manatee | RUNNING | 10.163.22.135 (eth0) | fdce:be5e:b787:f7d2:216:3eff:fe1c:773 (eth0) | PERSISTENT | 0         |
+----------------+---------+----------------------+----------------------------------------------+------------+-----------+

And now SSH in!

$ ssh ubuntu@10.163.22.135
$ ssh -6 ubuntu@fdce:be5e:b787:f7d2:216:3eff:fe1c:773

Enjoy!
:-Dustin

17 February, 2017 03:35PM by Dustin Kirkland (noreply@blogger.com)

Ubuntu Insights: MWC17: The Future of Wireless Networks

The telecom industry is not as buoyant as it was some years back. Telecom operators' revenues are under pressure due to innovations from over-the-top players. Costs are spiralling out of control because of 4G/5G deployments, fibre to the premises, social networking data explosions, 4K video streaming, IoT and more. And time to market has always been measured in months, not days or hours.

What if all of this can be changed for the better? What if costs can be reduced exponentially? What if time to market can be expressed in minutes? What if telecom startups can help create thousands of new ideas and solutions that are generating new revenues? What if we can make telecom innovation the new “sexy” trend for 2017?

Impossible?

In the beginning of 2016 it looked impossible that software defined radio would be something that excited people. The collaboration between Lime Micro and Canonical changed that. The LimeSDR is the first software defined radio that can be programmed via open source apps, called snaps, that anybody can download from an app store. Many thousands of developers have received, or will shortly receive, their LimeSDR. They will be able to create all types of protocols and share them with the community: LTE, LoRa, Bluetooth, ZigBee and many more, or even invent their own protocols. Generation Y, the millennials, are discovering that wireless innovation is fun.

To make sure these new diamonds of wireless innovation are not lost upon us, we need to provide them with a market. That market will be created via the launch of open source production-ready base stations with app stores. We really liked how the last crowdfunding campaign created a community of innovators. That is why, after Mobile World Congress, we will launch the first telecom production-ready hardware crowdfunding campaign, called LimeNet.

Why open source the design for base stations?

As stated before, telecom operators have costs spiralling out of control. Base stations need to become dramatically cheaper, because with future protocols like 5G we will have exponentially more of them. Not only does the price of a base station need to go down, but also the total cost of ownership. Everything, from who deploys, maintains and supports base stations to how and where they do so, will be put into question.

Why app stores on base stations?

The first reason is to let you decide what software you want to use. We are open sourcing the hardware, but we want to see both open source and commercial software compete. The value is in software-defining base stations. Just like on your mobile phone, some apps will be free while others will be paid for or have in-app purchases.

If telecom innovators can make money by selling solutions to both telecom operators and their customers, then more new revenue-generating solutions will be launched. Installing these solutions via apps from an app store makes it an easy and quick process. In minutes you can go from nothing to a working solution that automatically integrates with other apps and back-end systems.

What about security and manageability?

The number one Cloud operating system in the world is Ubuntu. Canonical has taken the same Ubuntu that is being used by Netflix, Uber, AirBnB, Snapchat and many others and shrunk it down to Ubuntu Core. We introduced lots of changes to make running third-party apps, called Snaps, secure and transactionally upgradeable. This means that if something goes wrong you can roll back to the previous working version. You can implement DevOps for Devices and continuously roll out new updates and functionalities in a controlled way. Any time a security issue arises, it can be easily patched. Snaps are contained, hence bugs or exploits don’t affect the other snaps or the operating system.

What about telecom software?

At MWC we will showcase LTE stacks from companies like Amarisoft and Eurecom/OpenAirInterface, as well as EPC solutions from Quortus. Telecom solutions will no longer need a lengthy RFP process. You just download the Snap from the Brand Store, test it, and you are ready for roll-out. Procurement of software should be based on features, quality and fitness for purpose, and the process should be measured in days at most, not months or sometimes years. In a world of integrations in minutes, you will be able to change your mind. To allow everybody to run a complete 4G network, Eurecom and Canonical have enabled an open software ecosystem for 4G-ready networking, powered by OpenAirInterface and Canonical's model-driven NFV solution, that can be deployed as network apps on any cloud and easily integrated into the new base station with a snap.

Where can I get the LTE-ready open source apps?

Today, OpenAirInterface develops an ecosystem for open source software/hardware development for the core network (EPC) and access network (EUTRAN) of 3GPP cellular networks. It offers a 5G cellular stack based on commercial off-the-shelf (COTS) hardware that can be used as legacy packages, Juju Charms, and Ubuntu Core Snaps.

Will telecom operators be the only enterprises buying and running base stations?

The answer will be “definitely NOT”! We will be showcasing solutions from Telet Research and Soracom that allow others to run base stations and telecom infrastructure as well. In a software defined world, we can make deployment of private mobile infrastructure as simple as rolling out WiFi. With the arrival of unlicensed and licensed shared access (LSA) spectrum, small cells can be remotely configured as a managed service, just as you can buy cloud compute and storage. IoT SIM cards and IoT-specific value added services, capable of operating on private and existing mobile networks, will be available for purchase in quantities as small as one. Hotels and homes that currently have poor or non-existent mobile coverage will be able to guarantee perfect coverage, even if their telecom operator doesn't. Meeting rooms underground should have perfect coverage. Rural communities should be able to deploy their own networks; industrial consortiums as well. Networks don't have to be for mobile; they can be for any type of smart device.

Multi Operator Neutral Host (MONeH) solutions offer a highly advantageous business model; they are quicker and less expensive to set up, yet manage to provide coverage for multiple operators in areas where conventional macro network builds simply are not cost effective or appropriate (such as in Areas of Outstanding Natural Beauty). These solutions are not limited to just mobile services; they can also offer Fixed Wireless Broadband and 5G IoT services on the same SDR-based small cells.

IoT-Ready and New Revenue Generating

Soracom will showcase IoT SIMs that can go into low cost NB-IoT or LTE-M type of devices such as the 5-network FiPy from Pycom.

What about using custom protocols for new types of devices? Spur is a great example of how a hotel, bank or any consumer-facing business that runs its own base station could install a Spur Snap to also get immediate feedback on service quality.

The traditional innovation killer: OSS integrations

In a telecom world where every service needs to be integrated into billing, call centre support, inventory management, workflow management and lots of other systems, an app store which allows you to launch thousands of new services each year needs a new way of thinking as well.

Supporting devices with lots of different app solutions from many vendors requires IoT cloud-native support platforms. RevTwo will be demoing theirs: the best of cloud, mobile and IoT all in one support platform.

Billing has traditionally been very challenging as well, particularly at the edge of the network. Most billing systems are centralised, expensive, and hard to scale and protect from tampering. IOTA's next-generation Blockchain solution resolves this and allows billing systems to be built in a distributed manner, in which adding more base stations makes the complete system more scalable, resilient, tamper-proof and, above all, free of fees. Each base station will be part of a distributed ledger. Unlike traditional Blockchain, IOTA can do fast transaction handling without fees, endure glitchy connectivity to the main net, and scale, which they will demo at the booth.

Sometimes things break in a network or have to be upgraded, and you will have to dispatch people or take automatic repair actions. To show you how this works, the effortless Salesforce IoT Cloud integration and solutions will be demoed.

What will open source base stations look like?

In a software defined world the answer can be: “Totally Different”! SocialVend will be demoing what the new base stations will look like when you combine them with their vendmini™. Experience Social Telecom Vending at MWC, in which a vending machine becomes a base station, provides you with SIMs, allows you to top up your balance, and via an app store can do a million things more.

Come and see us at MWC2017 in Hall 3
Come and see the future of wireless networks at the Ubuntu booth in Hall P3 – 3K31. Book a meeting with our executive team.

17 February, 2017 10:00AM

Lubuntu Blog: Lubuntu 16.04.2 Has Been Released!

Thanks to all the hard work from our contributors, we are pleased to announce that Lubuntu 16.04.2 LTS has been released! What is Lubuntu? Lubuntu is an official Ubuntu flavor based on the Lightweight X11 Desktop Environment (LXDE). The project’s goal is to provide a lightweight yet functional distribution. Lubuntu specifically targets older machines with […]

17 February, 2017 04:12AM

February 16, 2017

hackergotchi for Emmabuntüs Debian Edition

Emmabuntüs Debian Edition

YES, JERRY CAN! (Jerrycan)

Friday, 25 May 2012 was a day unlike any other for the free software community of Côte d'Ivoire. The city of Yamoussoukro, commonly called Yakro by Ivorians, experienced one of the most memorable events in the history of coworking. “The night of sharing”, as the coworkers decided to name it, was [...]

16 February, 2017 07:47PM by shihtzu

hackergotchi for Tanglu developers

Tanglu developers

Cutelyst 1.4.0 released, C100K ready.

Yes, it’s not a typo.

Thanks to the last batch of improvements, and with the great help of jemalloc, cutelyst-wsgi can do 100k requests per second using a single thread/process on my i5 CPU. Without jemalloc the rate was around 85k req/s.

This, together with the epoll event loop, can really scale your web application. Initially I thought that the option to replace Qt's default glib event loop (on Unix) brought no gain, but after increasing the connection count it handles things a lot better. With 256 connections, the glib event loop gets to 65k req/s while the epoll one stays at 90k req/s, a lot closer to the numbers seen when only 32 connections are tested.
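
As a rough sketch of combining the two (the jemalloc path is specific to Ubuntu 16.04 and the application name is a placeholder; check cutelyst-wsgi --help for the exact flags):

# Preload jemalloc and serve a Cutelyst application over HTTP
LD_PRELOAD=/usr/lib/x86_64-linux-gnu/libjemalloc.so.1 \
    cutelyst-wsgi --application ./libmyapp.so --http-socket localhost:3000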

Besides these lovely numbers, Matthias Fehring added a new memcached session backend and a change to finally get translations working on Grantlee templates. cutelyst-wsgi got --socket-timeout, --lazy, many fixes, removal of deprecated Qt API usage, and Unix signal handling seems to be working properly now.

Get it! https://github.com/cutelyst/cutelyst/archive/r1.4.0.tar.gz

Hang on FreeNode #cutelyst IRC channel or Google groups: https://groups.google.com/forum/#!forum/cutelyst

Have fun!


16 February, 2017 07:20PM by dantti

hackergotchi for SparkyLinux

SparkyLinux

Enlightenment 0.21.6

 

An update to Enlightenment 0.21.6 is ready in the Sparky repository now.

Upgrade as before:
sudo apt-get update
sudo apt-get dist-upgrade

If you would like to make a fresh installation, run:
sudo apt-get update
sudo apt-get install enlightenment

 

16 February, 2017 05:29PM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: Lime Microsystems and Canonical announce LimeNET crowdfunding

  • Sequel to the hugely successful first round of crowdfunding, which raised $1 million for the open source, 5G and IoT capable SDR with app stores
  • Second round will provide a market for round-one developers to sell their apps on the first open source mobile and IoT base station with app stores
  • New revenues for telco will be combined with reducing costs and time to market
  • Companies will demonstrate LimeSDR hardware and software at Ubuntu’s stand at MWC. Book today for a briefing on both companies’ transformative vision for the telco industry
  • Register at Limenet.net to be notified when the crowdfunding campaign goes live

London, UK and Guildford, UK, 16 February 2017: Lime Microsystems and Canonical have today announced the upcoming launch of the second round of crowdfunding for LimeSDR, the flexible, next-generation, open source Software Defined Radio. The new campaign, called LimeNET, is intended for use primarily as a mobile and IoT base station. LimeNET base stations hold the potential to completely transform the way telco networks run by shifting the emphasis and value away from proprietary hardware to open hardware with app stores on top.

To be notified of when the campaign is live please register here.

Confronted with flat revenues, spiralling infrastructure costs and massively escalating data demands, the telco industry is facing a crisis point. It needs exponentially more cost-effective solutions, as well as new revenue streams, and needs to find them quickly. Operators face a simple choice; either revise their business models, or lose market share to new incumbents.

Lime Micro and Canonical are looking to turn the mobile telephony business model on its head. Telco hardware is expensive, slow to develop, and has proven a ‘brake' on innovation in the industry. By ‘open sourcing' Lime Microsystems' 5G and IoT capable SDR base station design, Lime and Canonical are looking to effectively ‘commoditise' network hardware and shift the value centre towards software.

LimeSDR-based base stations can run not only cellular standards from 2G to 5G and IoT protocols like LoRa, Sigfox, NB-IoT, LTE-M, Weightless and others, but any type of wireless protocol. Open source base stations allow R&D departments to try out new ideas around industrial IoT, content broadcasting and much more. Commoditised base stations allow any enterprise to run its own base station and get spectrum from its operator as a service. Base stations can take on new form factors as well, such as being embedded into vending machines or attached to drones.

“It’s clear that existing telco business models are quickly running out of steam,” commented Maarten Ectors, VP IoT, Next-Gen Networks & Edge Cloud, Canonical, “and that operators need to find new revenue streams. Together with Lime Microsystems, we’re looking to initiate a ‘herding’ behaviour that will usher in the age of the largely software-enabled telco network. Through its open sourced SDR design Lime will encourage a wide range of manufacturers to produce more cost-effective base stations. And, following enormous interest in our first crowdfunding initiative, we already have the critical mass of developers required to deliver the significant software innovation the industry requires.”

“This kind of model is, without a doubt, where the industry needs to go,” commented Ebrahim Bushehri, CEO, Lime Microsystems. “There are several reasons why Canonical’s heavy commitment in this project over the past couple of years has been so important. For one, Canonical shares our vision of an entirely software-enabled future for telco and IoT networks. Secondly, Canonical’s efficient, hyper-secure IoT OS Ubuntu Core is the perfect platform to enable this vision. Thirdly, this collaboration has helped us to gather the critical mass of developers required to kick-start the programme.”

Over 3,600 developers are currently involved in efforts to create apps, called Snaps, for LimeSDR, with several free and paid-for apps having already appeared on the open community LimeSDR App Store, as well as Lime’s invite-only app store, LimeNET.

Hailed as the ‘Arduino for telco engineers’1, LimeSDR’s first funding round met with considerable success. LimeSDR, together with Ubuntu Core, has been heralded as ‘democratising a critical part of telecoms networks’2, opening up the world of wireless to a wide range of developer-innovation. As a result of LimeSDR’s considerable potential to transform the industry the project attracted a number of large corporate telco sponsors, including BT’s EE, and eventually achieved nearly double its revenue target of $500,000.

Other applications for LimeSDR include use as an IoT gateway, an aviation transponder, a utility meter, in media streaming and broadcasting, radio astronomy, RADAR, drone command and control, and more.

Mobile World Congress 2017 location:
To find out more about the second round of LimeSDR crowdfunding visit the Ubuntu Booth in Hall P3 – 3K31 at Mobile World Congress to experience what is possible with open source base stations with app stores.

– ENDS –

1 – Computer Weekly
2 – Tadsblog

About Lime Microsystems
Lime Microsystems specialises in field programmable RF (FPRF) transceivers and SDR boards for the next generation of wireless broadband systems.

These products offer an unprecedented level of configurability and allow system designers to create wireless communication networking equipment that can be set and reconfigured to run on any wireless communications frequency and mobile standard.

Lime’s technology has been adopted by organisations around the world for a wide range of applications from consumer communications equipment — femtocells and repeaters — to software defined radio devices for military and emergency services. Applications include comms infrastructure, disaster relief networks, M2M technology and test / verification systems.

The company is renowned internationally for its analogue, mixed-mode and RF design as well as its expertise in end-system applications. Its technology enables a single platform design to be implemented anywhere in the world, regardless of the countless local standard / frequency variants.

Lime works closely with industry partners to optimise RF and baseband solutions to ensure the ecosystem for the entire end equipment design is in place. Its partnerships help customers achieve high performance with lower device and manufacturing costs, less design resource and optimised inventory.

Established in 2005, Lime Microsystems is a privately held company.

For further information please visit http://www.limemicro.com/

About Canonical
Canonical is the company behind Ubuntu, the leading OS for cloud operations. Most public cloud workloads use Ubuntu, as do most new smart gateways, switches, self-driving cars and advanced robots. Canonical provides enterprise support and services for commercial users of Ubuntu.

Established in 2004, Canonical is a privately held company.

For further information please visit https://www.ubuntu.com/

Media Contacts:
EMEA:
Alex Warren/Rachel Nulty
Wildfire
+44 208 408 8000
canonical@wildfirepr.com

US
March Communications
ubuntu@marchcomms.com
+1 (617) 960-9900.

16 February, 2017 03:04PM

hackergotchi for Serbian GNU/Linux

Serbian GNU/Linux

Serbian design on your KDE

Since several similar questions have come in from KDE users about how to set up this design on their desktops, a short guide follows. You need to add the following repository to your existing sources:

deb http://master.dl.sf.net/project/serbian2014/Archive/repo/ stretch main

Add the key with the following command:

wget -O - https://sf.net/projects/serbian2014/files/Archive/repo/key/deb.gpg.key | apt-key add -

Once you have updated the package index, install the appropriate packages:

sddm-theme-serbian    (login screen)

plasma-theme-serbian    (splash screen, desktop theme, colour scheme, wallpaper)

serbian-icon-theme    (icon set)
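
For example, all three can be installed in one go after refreshing the package index:

sudo apt-get update
sudo apt-get install sddm-theme-serbian plasma-theme-serbian serbian-icon-theme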

If you want the same window decoration, you need to have the kwin-decoration-oxygen package installed.

This guide applies to KDE 5 users of the Debian, Kubuntu and Mint distributions, as well as all their derivatives.

16 February, 2017 09:37AM by Debian Srbija (noreply@blogger.com)

hackergotchi for Ubuntu developers

Ubuntu developers

Valorie Zimmerman: Folding, origami, and Folding@Home

A few months ago, I started Folding@Home on the Ubuntu Folding team. I really enjoy checking my standings each night before I go to bed. What is Folding@Home? https://folding.stanford.edu/home/about-us/. Has Folding@Home actually done anything useful? Check Reddit and see what you think.

Team 45104 Rankings. See http://wiki.ubuntu.com/FoldingAtHomeTeamUbuntu if you are interested in competing while contributing. It seems like interest has fallen off in the past year or so, which is a bit sad. On the other hand, it makes climbing up the standings easier!

I was reminded to make this post while watching NOVA tonight on PBS, about Origami. There are so many new applications to this ancient art of folding paper in art, in mathematics, physics and material science, and even biology. You can see it online if PBS is not available to you.

PS: right now, I have 921,667 points, which puts me in the top 180 in TeamUbuntu (#179 to be precise).

16 February, 2017 06:36AM by Valorie Zimmerman (noreply@blogger.com)

February 15, 2017

Ubuntu Insights: Deploying Kubernetes on Bare Metal

Fast forward to 6 minutes, 42 seconds to begin the demo.

In this demo Marco Ceppi deploys a fully functional Kubernetes cluster on 10 nodes.

If you’re interested in bare metal Kubernetes, we invite you to join us and other contributors in the sig-onprem community.

Not sure how to get started? Join us in our Getting started with the Canonical Distribution of Kubernetes webinar on the 22nd of February.

15 February, 2017 06:45PM

Simos Xenitellis: Summary of @DellCarePRO Ubuntu Basics Webinar (Feb 2017)

Last week there was a webinar from @DellCarePRO titled Ubuntu Basics Webinar.

Today the webinar video Ubuntu Basics Webinar has been posted online, and here is the summary.

Introduction

Ubuntu Certified hardware page

 

If your Dell laptop comes with Ubuntu, you can get the installation ISO (Recovery Image) from dell.com.

Ubuntu installation as dual-boot

Installing Ubuntu.

Ubuntu installed.

Explaining: The Menu Bar

 

Explaining: Dash

 

Explaining: Ubuntu Software Center

 

Explaining: Keyboard shortcuts

 

Explaining: Software and Updates

 

Explaining: Multiple Monitor configuration

Talk by Barton George

Presenting Barton George and Project Sputnik. Barton George headed an internal effort in Dell to get Ubuntu on a high-end laptop, with a budget of just $40,000 and six months to deliver.

 

Funding came from the Dell Innovation Fund, with the aim of establishing whether an Ubuntu laptop would work.

 

Contrary to other efforts, this one was for a high-end offering. It would involve the community and take its feedback on board in order to change perceptions.

 

Very well received: Ars Technica, O'Reilly Radar, TechCrunch, The Wall Street Journal.

 

Positive feedback from the Twittersphere.

 

Expansion from the initial XPS 13 with Ubuntu to a new 6th-gen Intel laptop, along with a whole line of Latitude Ubuntu laptops, and an all-in-one Ubuntu desktop.

There was emphasis that the initial $40,000 fund, used to investigate whether an Ubuntu laptop would be a viable product, returned many multiples of that in profit to Dell.

15 February, 2017 05:11PM

Ubuntu Insights: GPUs & Kubernetes for Deep Learning — Part 1/3

Ubuntu & Kubernetes

A few weeks ago I shared a side project about building a DIY GPU cluster for k8s, to play with Kubernetes with a proper ROI vs. AWS g2 instances.

This was spectacularly interesting when AWS was lagging behind with old nVidia K20s cards (which are not supported anymore on the latest drivers). But with the addition of the P series (p2.xlarge, 8xlarge and 16xlarge) the new cards are K80s with 12GB RAM, outrageously more powerful than the previous ones.

Baidu just released a post on the Kubernetes blog about their PaddlePaddle setup, but they only focused on CPUs. I thought it would be interesting to look at a setup of Kubernetes on AWS with some GPU nodes added, then exercise a Deep Learning framework on it. The docs say it is possible…

This post is the first of a sequence of 3: Setup the GPU cluster (this blog), Adding Storage to a Kubernetes Cluster (right afterwards), and finally run a Deep Learning training on the cluster (working on it, coming up post MWC…).

The plan

In this blog, we will:

  1. Deploy k8s on AWS in a development mode (no HA, colocating etcd, the control plane and PKI)
  2. Deploy 2x nodes with GPUs (p2.xlarge and p2.8xlarge instances)
  3. Deploy 3x nodes with CPU only (m4.xlarge)
  4. Validate GPU availability

Requirements

For what follows, it is important that:

  • You understand Kubernetes 101
  • You have admin credentials for AWS
  • If you followed the other posts, you know we’ll be using the Canonical Distribution of Kubernetes, hence some knowledge about Ubuntu, Juju and the rest of Canonical’s ecosystem will help.

Foreplay

  • Make sure you have Juju installed.

On Ubuntu,

sudo apt-add-repository ppa:juju/stable
sudo apt update
sudo apt install -yqq juju

for other OSes, look up the official docs

Then, to connect to the AWS cloud with your credentials, read this page.
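
If you have not registered credentials yet, a minimal sketch looks like this (the credential name matches the one used in the bootstrap command below; the keys are placeholders):

juju add-credential aws
# Enter credential name: canonical
# Using auth-type "access-key".
# Enter access-key: AKIAXXXXXXXXXXXXXXXX
# Enter secret-key: ****************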

  • Finally, clone this repo to have access to all the sources:
git clone https://github.com/madeden/blogposts ./
cd blogposts/k8s-gpu-cloud

OK! Let’s start GPU-izing the world!

Deploying the cluster

Bootstrap

As usual, start with the bootstrap sequence. Just be careful: p2 instances are only available in us-west-2, us-east-1 and eu-west-2, as well as the us-gov regions. I have experienced issues running p2 instances on the EU side, hence I recommend using a US region.

juju bootstrap aws/us-east-1 --credential canonical --constraints "cores=4 mem=16G root-disk=64G"
# Creating Juju controller “aws-us-east-1” on aws/us-east-1
# Looking for packaged Juju agent version 2.1-rc1 for amd64
# Launching controller instance(s) on aws/us-east-1…
# - i-0d48b2c872d579818 (arch=amd64 mem=16G cores=4)
# Fetching Juju GUI 2.3.0
# Waiting for address
# Attempting to connect to 54.174.129.155:22
# Attempting to connect to 172.31.15.3:22
# Logging to /var/log/cloud-init-output.log on the bootstrap machine
# Running apt-get update
# Running apt-get upgrade
# Installing curl, cpu-checker, bridge-utils, cloud-utils, tmux
# Fetching Juju agent version 2.1-rc1 for amd64
# Installing Juju machine agent
# Starting Juju machine agent (service jujud-machine-0)
# Bootstrap agent now started
# Contacting Juju controller at 172.31.15.3 to verify accessibility…
# Bootstrap complete, “aws-us-east-1” controller now available.
# Controller machines are in the “controller” model.
# Initial model “default” added.

Deploying instances

Once the controller is ready we can start deploying services. In my previous posts, I used bundles which are shortcuts to deploy complex apps.

If you are already familiar with Juju you can run juju deploy src/k8s-gpu.yaml and jump to the end of this section. For the others interested in getting into the details, this time we will deploy manually, and go through the logic of the deployment.

Kubernetes is made up of 5 individual applications: Master, Worker, Flannel (network), etcd (cluster state storage DB) and easyRSA (PKI to encrypt communication and provide x509 certs).

In Juju, each app is modeled by a charm, which is a recipe for how to deploy it.

At deployment time, you can give constraints to Juju, either very specific (instance type) or loose (# of cores). With the latter, Juju will elect the cheapest instance matching your constraints on the target cloud.

First thing to do, is deploy the applications:

juju deploy cs:~containers/kubernetes-master-11 --constraints "cores=4 mem=8G root-disk=32G"
# Located charm "cs:~containers/kubernetes-master-11".
# Deploying charm "cs:~containers/kubernetes-master-11".
juju deploy cs:~containers/etcd-23 --to 0
# Located charm "cs:~containers/etcd-23".
# Deploying charm "cs:~containers/etcd-23".
juju deploy cs:~containers/easyrsa-6 --to lxd:0
# Located charm "cs:~containers/easyrsa-6".
# Deploying charm "cs:~containers/easyrsa-6".
juju deploy cs:~containers/flannel-10
# Located charm "cs:~containers/flannel-10".
# Deploying charm "cs:~containers/flannel-10".
juju deploy cs:~containers/kubernetes-worker-13 --constraints "instance-type=p2.xlarge" kubernetes-worker-gpu
# Located charm "cs:~containers/kubernetes-worker-13".
# Deploying charm "cs:~containers/kubernetes-worker-13".
juju deploy cs:~containers/kubernetes-worker-13 --constraints "instance-type=p2.8xlarge" kubernetes-worker-gpu8
# Located charm "cs:~containers/kubernetes-worker-13".
# Deploying charm "cs:~containers/kubernetes-worker-13".
juju deploy cs:~containers/kubernetes-worker-13 --constraints "instance-type=m4.2xlarge" -n3 kubernetes-worker-cpu
# Located charm "cs:~containers/kubernetes-worker-13".
# Deploying charm "cs:~containers/kubernetes-worker-13".

Here you can see an interesting property of Juju that we never approached before: naming the services you deploy. We deployed the same kubernetes-worker charm three times, twice with GPUs and once without. This gives us a way to group instances of a certain type, at the cost of duplicating some commands.

Also note the revision numbers in the charms we deploy. Revisions are not directly tied to versions of the software they deploy. If you omit them, Juju will elect the latest revision, like Docker does with images.

Adding the relations & exposing software

Now that the applications are deployed, we need to tell Juju how they are related. For example, the Kubernetes master needs certificates to secure its API. Therefore, there is a relation between the kubernetes-master:certificates and easyrsa:client.

This relation means that once the 2 applications are connected, some scripts will run to query the EasyRSA API to create the required certificates, then copy them to the right location on the k8s master.

These relations then create statuses in the cluster, to which charms can react.

Essentially, at a very high level, think of Juju as a pub-sub implementation of application deployment. Every action inside or outside of the cluster posts a message to a common bus, and charms can react to these and perform additional actions, modifying the overall state… and so on until equilibrium is reached.

Let’s add the relations:

juju add-relation kubernetes-master:certificates easyrsa:client
juju add-relation etcd:certificates easyrsa:client
juju add-relation kubernetes-master:etcd etcd:db
juju add-relation flannel:etcd etcd:db
juju add-relation flannel:cni kubernetes-master:cni
for TYPE in cpu gpu gpu8
do 
 juju add-relation kubernetes-worker-${TYPE}:kube-api-endpoint kubernetes-master:kube-api-endpoint
 juju add-relation kubernetes-master:cluster-dns kubernetes-worker-${TYPE}:kube-dns
 juju add-relation kubernetes-worker-${TYPE}:certificates easyrsa:client
 juju add-relation flannel:cni kubernetes-worker-${TYPE}:cni
 juju expose kubernetes-worker-${TYPE}
done
juju expose kubernetes-master

Note the expose commands at the end.

These are instructions for Juju to open up a firewall in the cloud for specific ports of the instances. Some are predefined in charms (Kubernetes Master API is 6443, Workers open up 80 and 443 for ingresses) but you can also force them if you need (for example, when you manually add stuff in the instances post deployment).

Adding CUDA

CUDA does not have an official charm yet (coming up very soon!!), but there is my demoware implementation which you can find on GitHub. It has been updated for this post to CUDA 8.0.61 and drivers 375.26.

Make sure you have the charm tools available, then clone and build the CUDA charm:

sudo apt install charm charm-tools
# Exporting the ENV
mkdir -p ~/charms ~/charms/layers ~/charms/interfaces
export JUJU_REPOSITORY=${HOME}/charms
export LAYER_PATH=${JUJU_REPOSITORY}/layers
export INTERFACE_PATH=${JUJU_REPOSITORY}/interfaces
# Build the charm
cd ${LAYER_PATH}
git clone https://github.com/SaMnCo/layer-nvidia-cuda cuda
charm build cuda

This will create a new folder called builds in JUJU_REPOSITORY, and another called cuda in there.

Now you can deploy the charm

juju deploy --series xenial $HOME/charms/builds/cuda
juju add-relation cuda kubernetes-worker-gpu
juju add-relation cuda kubernetes-worker-gpu8

This will take a fair amount of time, as CUDA takes very long to install (CDK takes about 10 minutes, and CUDA alone probably 15 minutes).
Nevertheless, at the end the status should show:

juju status
Model    Controller     Cloud/Region   Version
default  aws-us-east-1  aws/us-east-1  2.1-rc1
App                     Version  Status       Scale  Charm              Store       Rev  OS      Notes
cuda                             active           2  cuda               local         2  ubuntu  
easyrsa                 3.0.1    active           1  easyrsa            jujucharms    6  ubuntu  
etcd                    2.2.5    active           1  etcd               jujucharms   23  ubuntu  
flannel                 0.7.0    active           6  flannel            jujucharms   10  ubuntu  
kubernetes-master       1.5.2    active           1  kubernetes-master  jujucharms   11  ubuntu  exposed
kubernetes-worker-cpu   1.5.2    active           3  kubernetes-worker  jujucharms   13  ubuntu  exposed
kubernetes-worker-gpu   1.5.2    active           1  kubernetes-worker  jujucharms   13  ubuntu  exposed
kubernetes-worker-gpu8  1.5.2    active           1  kubernetes-worker  jujucharms   13  ubuntu  exposed
Unit                       Workload     Agent      Machine  Public address  Ports           Message
easyrsa/0*                 active       idle       0/lxd/0  10.0.0.122                      Certificate Authority connected.
etcd/0*                    active       idle       0        54.242.44.224   2379/tcp        Healthy with 1 known peers.
kubernetes-master/0*       active       idle       0        54.242.44.224   6443/tcp        Kubernetes master running.
  flannel/0*               active       idle                54.242.44.224                   Flannel subnet 10.1.76.1/24
kubernetes-worker-cpu/0    active       idle       4        52.86.161.22    80/tcp,443/tcp  Kubernetes worker running.
  flannel/4                active       idle                52.86.161.22                    Flannel subnet 10.1.79.1/24
kubernetes-worker-cpu/1*   active       idle       5        52.70.5.49      80/tcp,443/tcp  Kubernetes worker running.
  flannel/2                active       idle                52.70.5.49                      Flannel subnet 10.1.63.1/24
kubernetes-worker-cpu/2    active       idle       6        174.129.164.95  80/tcp,443/tcp  Kubernetes worker running.
  flannel/3                active       idle                174.129.164.95                  Flannel subnet 10.1.22.1/24
kubernetes-worker-gpu8/0*  active       idle       3        52.90.163.167   80/tcp,443/tcp  Kubernetes worker running.
  cuda/1                   active       idle                52.90.163.167                   CUDA installed and available
  flannel/5                active       idle                52.90.163.167                   Flannel subnet 10.1.35.1/24
kubernetes-worker-gpu/0*   active       idle       1        52.90.29.98     80/tcp,443/tcp  Kubernetes worker running.
  cuda/0*                  active       idle                52.90.29.98                     CUDA installed and available
  flannel/1                active       idle                52.90.29.98                     Flannel subnet 10.1.58.1/24
Machine  State    DNS             Inst id              Series  AZ
0        started  54.242.44.224   i-09ea4f951f651687f  xenial  us-east-1a
0/lxd/0  started  10.0.0.122      juju-65a910-0-lxd-0  xenial  
1        started  52.90.29.98     i-03c3e35c2e8595491  xenial  us-east-1c
3        started  52.90.163.167   i-0ca0716985645d3f2  xenial  us-east-1d
4        started  52.86.161.22    i-02de3aa8efcd52366  xenial  us-east-1e
5        started  52.70.5.49      i-092ac5367e31188bb  xenial  us-east-1a
6        started  174.129.164.95  i-0a0718343068a5c94  xenial  us-east-1c
Relation      Provides                Consumes                Type
juju-info     cuda                    kubernetes-worker-gpu   regular
juju-info     cuda                    kubernetes-worker-gpu8  regular
certificates  easyrsa                 etcd                    regular
certificates  easyrsa                 kubernetes-master       regular
certificates  easyrsa                 kubernetes-worker-cpu   regular
certificates  easyrsa                 kubernetes-worker-gpu   regular
certificates  easyrsa                 kubernetes-worker-gpu8  regular
cluster       etcd                    etcd                    peer
etcd          etcd                    flannel                 regular
etcd          etcd                    kubernetes-master       regular
cni           flannel                 kubernetes-master       regular
cni           flannel                 kubernetes-worker-cpu   regular
cni           flannel                 kubernetes-worker-gpu   regular
cni           flannel                 kubernetes-worker-gpu8  regular
cni           kubernetes-master       flannel                 subordinate
kube-dns      kubernetes-master       kubernetes-worker-cpu   regular
kube-dns      kubernetes-master       kubernetes-worker-gpu   regular
kube-dns      kubernetes-master       kubernetes-worker-gpu8  regular
cni           kubernetes-worker-cpu   flannel                 subordinate
juju-info     kubernetes-worker-gpu   cuda                    subordinate
cni           kubernetes-worker-gpu   flannel                 subordinate
juju-info     kubernetes-worker-gpu8  cuda                    subordinate
cni           kubernetes-worker-gpu8  flannel                 subordinate

Let us see what nvidia-smi gives us:

juju ssh kubernetes-worker-gpu/0 sudo nvidia-smi
Tue Feb 14 13:28:42 2017       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.26                 Driver Version: 375.26                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K80           On   | 0000:00:1E.0     Off |                    0 |
| N/A   33C    P0    81W / 149W |      0MiB / 11439MiB |     95%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

On the more powerful 8xlarge,

juju ssh kubernetes-worker-gpu8/0 sudo nvidia-smi
Tue Feb 14 13:59:24 2017       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.26                 Driver Version: 375.26                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K80           On   | 0000:00:17.0     Off |                    0 |
| N/A   41C    P8    31W / 149W |      0MiB / 11439MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla K80           On   | 0000:00:18.0     Off |                    0 |
| N/A   36C    P0    70W / 149W |      0MiB / 11439MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  Tesla K80           On   | 0000:00:19.0     Off |                    0 |
| N/A   44C    P0    57W / 149W |      0MiB / 11439MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  Tesla K80           On   | 0000:00:1A.0     Off |                    0 |
| N/A   38C    P0    70W / 149W |      0MiB / 11439MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   4  Tesla K80           On   | 0000:00:1B.0     Off |                    0 |
| N/A   43C    P0    57W / 149W |      0MiB / 11439MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   5  Tesla K80           On   | 0000:00:1C.0     Off |                    0 |
| N/A   38C    P0    69W / 149W |      0MiB / 11439MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   6  Tesla K80           On   | 0000:00:1D.0     Off |                    0 |
| N/A   44C    P0    58W / 149W |      0MiB / 11439MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   7  Tesla K80           On   | 0000:00:1E.0     Off |                    0 |
| N/A   38C    P0    71W / 149W |      0MiB / 11439MiB |     39%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

Aaaand yes!! We have our 8 GPUs as expected so 8x 12GB = 96GB Video RAM!

At this stage, we only have them enabled on the hosts. Now let us add GPU support in Kubernetes.

Adding GPU support in Kubernetes

By default, CDK will not activate GPUs when starting the API server and the Kubelets. We need to do that manually (for now).

Master update

On the master node, update /etc/default/kube-apiserver to add:

# Security Context
KUBE_ALLOW_PRIV="--allow-privileged=true"

before restarting the API server. This can be done programmatically with:

juju show-status kubernetes-master --format json | \
    jq --raw-output '.applications."kubernetes-master".units | keys[]' | \
    xargs -I UNIT juju ssh UNIT "echo -e '\n# Security Context \nKUBE_ALLOW_PRIV=\"--allow-privileged=true\"' | sudo tee -a /etc/default/kube-apiserver && sudo systemctl restart kube-apiserver.service"

So now the Kube API will accept requests to run privileged containers, which are required for GPU workloads.

Worker nodes

On every worker, update /etc/default/kubelet to add the GPU flag, so it looks like:

# Security Context
KUBE_ALLOW_PRIV="--allow-privileged=true"
# Add your own!
KUBELET_ARGS="--experimental-nvidia-gpus=1 --require-kubeconfig --kubeconfig=/srv/kubernetes/config --cluster-dns=10.1.0.10 --cluster-domain=cluster.local"

before restarting the service.

This can be done with

for WORKER_TYPE in gpu gpu8
do
    juju show-status kubernetes-worker-${WORKER_TYPE} --format json | \
        jq --raw-output '.applications."kubernetes-worker-'${WORKER_TYPE}'".units | keys[]' | \
        xargs -I UNIT juju ssh UNIT "echo -e '\n# Security Context \nKUBE_ALLOW_PRIV=\"--allow-privileged=true\"' | sudo tee -a /etc/default/kubelet"
    juju show-status kubernetes-worker-${WORKER_TYPE} --format json | \
        jq --raw-output '.applications."kubernetes-worker-'${WORKER_TYPE}'".units | keys[]' | \
        xargs -I UNIT juju ssh UNIT "sudo sed -i 's/KUBELET_ARGS=\"/KUBELET_ARGS=\"--experimental-nvidia-gpus=1\ /' /etc/default/kubelet && sudo systemctl restart kubelet.service"
done

Testing our setup

Now we want to know if the cluster actually has GPU enabled. To validate, run a job with an nvidia-smi pod:

kubectl create -f src/nvidia-smi.yaml

Then wait a little bit and run the log command:

kubectl logs $(kubectl get pods -l name=nvidia-smi -o=name -a)
Tue Feb 14 14:14:57 2017       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.26                 Driver Version: 375.26                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla K80           Off  | 0000:00:17.0     Off |                    0 |
| N/A   47C    P0    56W / 149W |      0MiB / 11439MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla K80           Off  | 0000:00:18.0     Off |                    0 |
| N/A   39C    P0    70W / 149W |      0MiB / 11439MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   2  Tesla K80           Off  | 0000:00:19.0     Off |                    0 |
| N/A   48C    P0    57W / 149W |      0MiB / 11439MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   3  Tesla K80           Off  | 0000:00:1A.0     Off |                    0 |
| N/A   41C    P0    70W / 149W |      0MiB / 11439MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   4  Tesla K80           Off  | 0000:00:1B.0     Off |                    0 |
| N/A   47C    P0    58W / 149W |      0MiB / 11439MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   5  Tesla K80           Off  | 0000:00:1C.0     Off |                    0 |
| N/A   40C    P0    69W / 149W |      0MiB / 11439MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   6  Tesla K80           Off  | 0000:00:1D.0     Off |                    0 |
| N/A   48C    P0    59W / 149W |      0MiB / 11439MiB |      0%      Default |
+-------------------------------+----------------------+----------------------+
|   7  Tesla K80           Off  | 0000:00:1E.0     Off |                    0 |
| N/A   41C    P0    72W / 149W |      0MiB / 11439MiB |    100%      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

What is interesting here is that the pod sees all the cards, even though we only shared the /dev/nvidia0 char device. At runtime, though, we would have problems.

If you want to run multi-GPU containers, you need to share all the char devices, like we do in the second yaml file (nvidia-smi-8.yaml).
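
For reference, here is a minimal sketch of what such a pod spec could look like. This is an illustration under assumptions, not the actual contents of src/nvidia-smi.yaml; the image is a placeholder, and in practice the driver binaries and libraries may also need to be mounted from the host:

apiVersion: v1
kind: Pod
metadata:
  name: nvidia-smi
  labels:
    name: nvidia-smi
spec:
  restartPolicy: Never
  containers:
    - name: nvidia-smi
      image: nvidia/cuda:8.0-runtime   # placeholder; needs nvidia-smi available
      command: ["nvidia-smi"]
      securityContext:
        privileged: true               # allowed now that the API server permits it
      volumeMounts:
        - name: nvidia0
          mountPath: /dev/nvidia0      # share the first GPU char device only
  volumes:
    - name: nvidia0
      hostPath:
        path: /dev/nvidia0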

Conclusion

We reached the first milestone of our 3 part journey: the cluster is up & running, GPUs are activated, and Kubernetes will now welcome GPU workloads.

If you are a data scientist or running Kubernetes workloads that could benefit from GPUs, this already gives you an elegant and very fast way of managing your setups. But usually in this context you also need storage available between the instances, whether it is to share the dataset or to exchange results.

Kubernetes offers many options to connect storage. In the second part of the blog, we will see how to automate adding EFS storage to our instances, then put it to good use with some datasets!

In the meantime, feel free to contact me if you have a specific use case in the cloud and want to discuss operational details. I would be happy to help you set up your own GPU cluster and get you started for the science!

Tearing down

Whenever you feel like it, you can tear down this cluster. These instances can be pricey, hence powering them down when you do not use them is not a bad idea.

juju kill-controller aws/us-east-1

This will ask for confirmation then destroy everything… But now, you are just a few commands and a coffee away from rebuilding it, so that is not a problem.

15 February, 2017 03:02PM

hackergotchi for Blankon developers

Blankon developers

i15n: Aku

Translation of BlankOn Authentication

Resources: django.po

15 February, 2017 03:01PM

BlankOn Malang: Open Source Migration Training at the Lamongan Health Office

On 28 and 29 October 2011, the Lamongan Open Source and Linux Community (KOSLA) and the Lamongan District Health Office held a training on migrating to open source. After deliberation, the operating system chosen was BlankOn Linux. There are a few reasons why the Lamongan Health Office and KOSLA chose BlankOn Linux. Ubuntu is indeed good, but because new releases come so quickly (every 6 months), the installed operating system soon feels dated. ...

15 February, 2017 03:01PM

Ahmad Haris: My Radix Royal Edition

This time I will discuss one of the guitars from my collection. This guitar is made in Indonesia. As luck would have it, I got to own a special edition from this well-known brand.

Radix Guitars is a famous guitar brand in Indonesia, and has been around for a long time. I myself once owned one of the early editions, until it eventually changed hands to a friend.

Some time later, through good fortune and the blessing of friendship, Mr. Toien (the owner of Radix Guitars) built me a guitar to my taste. I took the shape of the Radix Royal. Here is what it looks like:

Radix Guitars – Royal Special

What Sets It Apart

This edition is indeed different from the original model; the differences lie in the knobs, the tremolo, the 3-way switch and the headstock.

Headstock

The bridge installed on this guitar is a SyncroniZR; I happened to have had this part for a long time, and it deserved to be used.

After this guitar arrived (it came together with the guitar of Andre Tiranda of Siksa Kubur, a green Radix Dogma), I briefly tried out my new guitar. At first glance, it's OK. In terms of physical aesthetics, it's really cool.

The next day, I needed to replace the existing pair of pickups. Actually, I had been asked from the start to send in a pair of pickups, but never got around to it, so the guitar came with the standard pickups. Why did I have to replace them? Purely a matter of taste: I am used to Seymour Duncan SH-1 and SH-4 pickups. With those I can establish a reference for whether the sound and feel of this guitar meet my expectations. My expectation was for this guitar to have a Brown Sound character.

While replacing them I ran an experiment: besides owning a pair of Seymour Duncan SH-1 and SH-4 pickups (the green ones), I also own a Stranough Demonium, which supposedly sounds similar to the SH-4. So I tried them together: the SH-1 in the neck, the Stranough Demonium in the bridge.

SH-1 + Demonium

Build Quality

The first impression is that it is really cool, neat and delightful. The flame (veneer) top is gorgeous. The back is a natural finish without a glossy coating, so your hand feels the wood of the guitar.

Body

Back of the body

One example of its tidiness is the open back (the tremolo spring cover area). Very neat and smooth. Below I include a comparison.

Back of the Radix

Back of the JEM77WDP

Conclusion

In my opinion this guitar is extremely enjoyable. It plays well and the sound truly matches my expectations. Several close friends of mine tried it as well, and all of them agreed; one of them has already ordered one.

That's it for now; if anything is missing, you can ask and I will update this content.


15 February, 2017 08:05AM

hackergotchi for Ubuntu developers

Ubuntu developers

David Tomaschik: BSidesSF 2017

BSidesSF 2017 was, by far, the best yet. I’ve been to the last 5 or so, and had a blast at almost every one. This year, I was super busy – gave a talk, ran a workshop, and I was one of the organizers for the BSidesSF CTF. I’ve posted the summary and slides for my talk and I’ll update the video link once it gets posted.

I think it’s important to thank the BSidesSF organizers – they did a phenomenal job with an even bigger venue and I think everyone loved it. It was clearly a success, and I can only imagine how much work it takes to plan something like this.

It’s also important to note that our perennial venue, DNA Lounge (except that one year we don’t talk about), is having some money problems. Apparently you can’t spend more than you bring in each year. This is the venue that, in addition to hosting BSidesSF, also hosts Cyberdelia. This is a venue that allows all kinds of independent art and events to thrive in one of the most expensive cities in the country. I encourage you to reach out and go to a show, buy some pizza, or just donate to their Patreon. If my encouragement is not enough, how about some from Razor and Blade?

Again, big thanks to BSidesSF and DNA Lounge for such a successful event!

15 February, 2017 08:00AM

hackergotchi for Cumulus Linux

Cumulus Linux

Facebook’s chassis, Disaggregate & a webinar you don’t want to miss

Several weeks ago, we let out some big news about Backpack, Facebook’s chassis, running Cumulus Linux. If you missed the news, check it out here.

As part of our launch, we attended Facebook’s exclusive event, Disaggregate, to talk about all things open networking. Our CTO and cofounder, JR Rivers, gave a stellar presentation covering a short history of open networking (“I watched the Googles of the world grow up”), how ONIE was born and why Cumulus Linux was created to help an industry evolve, scale and build better networks. You can watch the full presentation here.

We also manned our station at the event, answering questions about our integration with Backpack and even demoing Cumulus Linux on the product. Some of our takeaways from the event included:

  • There was a nice mix of customers. From hyper-scale to SaaS, traditional organizations to state and local governments — all types of organizations were represented. Some were there to learn about the benefits of open networking while others were looking for ways to scale their footprint.
  • Most discussions with customers were about their ability to build and/or use their custom applications that traditional lock-in vendors don’t support, which gave them freedom and choice. Others were interested in how Backpack can help them scale while reducing complexity.
  • A number of vendors had representation including Barefoot Networks and Apstra.

The launch and event meant a lot to us. Our support of Backpack signifies a new era of open networking where we can bring complete freedom of choice and technology to the data center network, allowing organizations of all sizes to access the benefits of web-scale networking.

To talk more about the integration with Backpack, our thoughts on the chassis and how this helps businesses move to web-scale networking, we’re hosting a webinar featuring our very own JR Rivers. If you want to hear about open networking with Cumulus Linux and Backpack straight from the source, you don’t want to miss this.

facebook chassis at disaggregate
Pack your backpack: Chassis is in session
This live webinar has already aired! Watch the recording.

Originally recorded: February 16, 2017
10am PST
Featuring JR Rivers

The post Facebook’s chassis, Disaggregate & a webinar you don’t want to miss appeared first on Cumulus Networks Blog.

15 February, 2017 01:29AM by Kelsey Havens

February 14, 2017

hackergotchi for Ubuntu developers

Ubuntu developers

Julian Andres Klode: jak-linux.org moved / backing up

In the past two days, I moved my main web site jak-linux.org (and jak-software.de) from a very old contract at STRATO over to something else: The domains are registered with INWX and the hosting is handled by uberspace.de. Encryption is provided by Let’s Encrypt.

I requested the domain transfer from STRATO on Monday at 16:23, received the auth codes at 20:10, and the .de domain was transferred completely at 20:36 (about 20 minutes, discounting my overhead). The .org domain I had to ACK, which I did at 20:46, and at 03:00 I received the notification that the transfer was successful (I think there was some registrar ACKing involved there). So the whole transfer took about 10 1/2 hours, or 7 hours from when I retrieved the auth code. I think that’s quite a good time 🙂

And, for those of you who don’t know: uberspace is a shared hoster that basically just gives you an SSH shell account, directories to drop files into for the http server, and various tools to add subdomains, certificates, and virtual users to the mailserver. You can also run your own custom-built software and open ports in their firewall. That’s quite cool.

I’m considering migrating the blog away from wordpress at some point in the future – having a more integrated experience is a bit nicer than having my web presence split over two sites. I’m unsure if I shouldn’t add something like cloudflare there – I don’t want to overload the servers (but I only serve static pages, so how much load is this really going to get?).

in other news: off-site backups

I also recently started doing offsite backups via borg to a server operated by the wonderful rsync.net. For those of you who do not know rsync.net: You basically get SSH to a server where you can upload your backups via common tools like rsync, scp, or you can go crazy and use git-annex, borg, attic; or you could even just plain zfs send your stuff there.
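
For anyone who hasn’t used borg against such a host, a minimal sketch looks something like this; the account name, hostname, repository path and source directory are all made-up placeholders, and depending on the borg version installed server-side you may also need borg’s --remote-path option:

# one-time: create an encrypted repository on the remote host
borg init --encryption=repokey user1234@usw-s001.rsync.net:backups

# create an archive named after this host and the current time
borg create --stats user1234@usw-s001.rsync.net:backups::'{hostname}-{now}' ~/Documents

# list the archives stored in the repository
borg list user1234@usw-s001.rsync.net:backups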

The normal price is $0.08 per GB per month, but there is a special borg price of $0.03 (that price does not include snapshotting or support, really). You can also get a discounted normal account for $0.04 if you find the correct code on Hacker News, or other discounts for open source developers, students, etc. – you just have to send them an email.

Finally, I must say that uberspace and rsync.net feel similar in spirit. Both heavily emphasise the command line, and don’t really have any fancy click stuff. I like that.


Filed under: General

14 February, 2017 11:52PM

Sina Mashek: RX 4XX: Getting AMDGPU to work in Manjaro Linux

RX460 Woes

I finally found a solution to getting my RX460 to work in Manjaro. Since I did something a little different, I am recording it here for myself and anybody else who finds this post.

While this is written using Manjaro, it should work with Arch and other distributions (some files might be in a different location, though). It should also work with any supported card.

I use nano to edit the files below. Use whatever you are most comfortable with, but remember to run it with sudo.

The Process

First, you will need these (pacman will fail if you don’t have an up-to-date repository):

sudo pacman -S "xorg-server>=1.18" "linux>=4.9" xf86-video-amdgpu xf86-input-libinput

Then, edit /etc/default/grub, changing:

GRUB_CMDLINE_LINUX=""

to:

GRUB_CMDLINE_LINUX="amdgpu.exp_hw_support=1"

After this, run:

sudo update-grub

Then edit /etc/modprobe.d/radeon_blacklist.conf (this might create a new file) and add:

blacklist radeon

Next, inside of /etc/X11/xorg.conf.d/90-mhwd.conf, delete everything in the file and add (confession: I didn’t actually empty out 90-mhwd.conf):

Section "Device"
    Identifier    "RX460"
    Driver        "amdgpu"
EndSection

Finally, I restarted and booted into the latest linux kernel (4.10.0-1-MANJARO) with amdgpu support!
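
To double-check that the new driver is actually in use after rebooting, these generic checks (my own suggestion, not part of the original steps) should do:

# which kernel driver is bound to the GPU
lspci -k | grep -A 3 -i vga

# amdgpu initialization messages from the boot log
dmesg | grep -i amdgpu

# confirm amdgpu is loaded and radeon is not
lsmod | grep -e amdgpu -e radeon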

I hope this helps others!


The post RX 4XX: Getting AMDGPU to work in Manjaro Linux appeared first on Sina Mashek.

14 February, 2017 06:21PM

Dustin Kirkland: Kubernetes InstallFest at ContainerWorld -- Feb 21, 2017!


We at Canonical have been super busy fine tuning your experience with Kubernetes, Docker, and LXD on Ubuntu!

Amazingly, you're merely two commands away from standing up a fully functional, minimal Kubernetes cluster on any Ubuntu 16.04 LTS system...

$ sudo snap install --classic conjure-up
$ conjure-up kubernetes-core

Or, if you're feeling more enterprisey and want the full experience, try:

$ conjure-up canonical-kubernetes

I hope to meet some of you at ContainerWorld in Santa Clara next week.  Marco Ceppi and I are running a Kubernetes installfest workshop on Tuesday, February 21, 2017, from 3pm - 4:30pm.  I can guarantee that every single person who attends will succeed in deploying their own Kubernetes cluster to a public cloud (AWS, Azure, or Google), or to their Ubuntu laptop or VM.

Also, I'm giving a talk entitled, "Using the Right Container Technology for the Job", on Wednesday, February 22, 2017 from 1:30pm - 2:10pm.

Finally, I invite you to check out this 30-minute podcast with David Daly, from DevOpsChat, where we talked quite a bit about Containers and Kubernetes and the experience we're working on in Ubuntu...


Cheers,
:-Dustin

14 February, 2017 04:19PM by Dustin Kirkland (noreply@blogger.com)

Ubuntu Insights: The Nextcloud Box at MWC

This is a guest post by Frank Karlitschek, Founder of the Nextcloud box. If you would like to contribute a guest post, please contact ubuntu-devices@canonical.com

At the Mobile World Congress this February, Nextcloud will showcase a device built to allow users to bring their data back under their control in a very literal sense: the Nextcloud Box.

The success of, and attention from, launching the Nextcloud Box, which runs Ubuntu Core, demonstrated just how much interest exists in using inexpensive hardware to bring together innovative, open source hosted solutions for people who want full privacy and control of their data, and this interest will extend to benefit from the wider Ubuntu ecosystem around IoT.

Nextcloud is a project providing a privacy and security focused alternative to cloud services like those from Apple, Google and Dropbox. It has developed a self-hosting solution that delivers many benefits, like file syncing and sharing, document editing, audio-video calls and much more, under an open source license.

The Nextcloud Box builds on the 1TB USB PiDrive by WDLabs with a Nextcloud case around it. The user has to supply the box with a Raspberry Pi, while the included MicroSD card contains Canonical’s Snappy Ubuntu with a Nextcloud snap. This powerful combination delivers secure, fully automatic unattended updates to your software, making this the perfect device for users who want a private cloud without the fuss of maintaining it!
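
For a sense of how little work that combination leaves to the user, installing Nextcloud on any snap-enabled Ubuntu system is a single command; this is only an illustration, as the Box’s image ships preconfigured:

# install the Nextcloud snap; updates then arrive automatically
sudo snap install nextcloud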

The project started from a conversation at the end of 2015 between Nextcloud founder Frank Karlitschek and some folks from WDLabs, the business growth incubator of storage solutions company Western Digital Corporation. WDLabs provides innovative storage solutions for DIY devices like the Raspberry Pi series, devices which are very flexible and low-cost.

The plan was to develop an inexpensive, easy to use device that would allow people to run their own private cloud service at home. After discussing and working on some prototypes, it became clear we needed a very easy, reliable and secure solution underlying Nextcloud. This is where Canonical came in: Snaps provide an excellent way to distribute updates in a way that is both secure and does not risk breaking end user devices. Within a few months, a device was put together with full automatic updates, delivering Nextcloud software in an even easier way to users.

Announcement video with Frank Karlitschek (Nextcloud), Jane Silber (Canonical) and Joe Lee (WDLabs).

You can find a bit of the history of the Nextcloud Box in this blog post and find out how Snaps were helpful in delivering an easy to use solution in this webinar.

At MWC, at the Ubuntu booth, Nextcloud founder Frank will be present to demonstrate the Nextcloud Box and discuss its development. You can book a meeting to learn more about Nextcloud, the Nextcloud Box and IoT technology.

14 February, 2017 04:08PM

Michael Lustfield: Secure Password Vault Using Yubikey

Up until recently, my passwords were stored in a rather precarious manner. For my birthday, I decided it would be a nice gift to myself to perform a complete password refresh. This involved taking inventory of every password there was any record or memory of and resetting each to a unique, cryptographically random password of random length (between ~25 and ~200 characters). Now I have reason to keep these passwords secure!

My Delay

Most people that know me would be surprised to learn I never needed a password vault. It was possible to avoid using a password vault by memorizing different algorithms. This worked well because an employer and year/quarter could be fed into the algorithm to produce work-centric time-based passwords.

This comes with some obvious issues. The first, and likely biggest, is that I’m not able to memorize an algorithm that wouldn’t reveal a good portion of the pattern after ~5 cracked passwords.

The previous solution involved coming up with a weak and easy algorithm as well as a strong and difficult alternative. It also involved replacing each after a few years of use. Unfortunately, forgetting the old ones didn’t fit into the equation.

The Vault

The first step is deciding on a tool to use for the password vault. After doing a review and audit of various tools, I settled on KeePassX. Although it uses the same database format as KeePass2, I trust this tool significantly more. Every person considering a solution for storing this much private data should do their own research in order to trust their decision.

When doing the password refresh, no "current" password was moved to the vault. Instead, new passwords were generated and services updated with the new password before erasing old records.

In Comes LUKS

It should be obvious that a very strong password should be set on the keepass database. Maybe less obvious is that it would be rather silly to give keepass our full trust. Despite having reviewed the source code and knowing smarter people have already done the same, it's still a good idea to provide an extra layer of protection. Remember, this is data that should be kept very secure.

Being familiar with LUKS, I saw it as the obvious tool for this job. LUKS can encrypt a tiny file-backed volume that can be backed up just like any other file.

LUKS also provides the ability to store headers in a separate file. The headers include the eight available key slots as well as other data required to unlock the encrypted volume. Headers can get a bit large but they are static so they become virtually non-existent with differential backups. The encrypted volume only needs to fit your password database and only needs to be large enough to accommodate growth. This will be the size consumed for any differential backup that includes the encrypted volume.

To build a playground structure similar to mine:

mkdir -p ~/.luks/{crypts,headers,mnt}

To build files for encryption:

dd if=/dev/urandom of=~/.luks/headers/vault bs=1MB count=2
dd if=/dev/urandom of=~/.luks/crypts/vault bs=200KB count=1
mkdir ~/.luks/mnt/vault

It's recommended to use --use-random to ensure a stronger entropy pool. When creating the LUKS volumes, use a memorable and secure password. This will later be removed and kept as a backup.

Making cryptography:

sudo cryptsetup luksFormat ~/.luks/crypts/vault \
    --header ~/.luks/headers/vault \
    -s 512 --align-payload=0 --use-random

Now that the encryption stuff has been configured, some sysadmin stuff needs to be performed. This is pretty basic so explanation will be skipped.

It's a root thing:

cryptsetup open ~/.luks/crypts/vault \
    --header ~/.luks/headers/vault vault
mkfs.ext2 -I 128 /dev/mapper/vault
mount /dev/mapper/vault ~/.luks/mnt/vault
chown $user:$user ~/.luks/mnt/vault

Closing it up (also root):

umount ~/.luks/mnt/vault
cryptsetup close vault

Yubikey Encryption

The only reasonably secure way to trust the yubikey seems to be with the challenge-response / hmac-sha1 option. This seems to accept an input password up to 64 characters long, combine it with a secret, and produce a 40 character long hash.

This was actually a pretty big concern for me because [0-9a-f]{40} wouldn't take a computer too terribly much time to crack. After some thinking, it became quite obvious that the simple solution was using the yubikey hash as a portion of the complete password rather than the whole thing.
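
To see the shape of the problem for yourself, you can reproduce an HMAC-SHA1 response with openssl; this is purely an illustration with a made-up challenge and secret, not the yubikey’s actual internal key:

# HMAC-SHA1 output is always 40 hex characters, i.e. [0-9a-f]{40}
echo -n 'my challenge' | openssl dgst -sha1 -hmac 'made-up secret'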

Pro-tip: Most of the tools I reviewed that used a yubikey as an authentication factor only utilized this return value. That includes the 'yubikey-luks' package in a few package repositories. Most tools didn't even include a sane option for decryption.

Configuring the Yubikey:

  1. Install and launch "yubikey-personalization-gui"
  2. Select Challenge-Response tab, then HMAC-SHA1
  3. Configuration Slot: 2
  4. Configuration Protection: <strongly recommended | but not serial>
  5. Require User Input: <recommend yes | this means touching key>
  6. Click Generate, then Write Configuration

If there's any intention of using the key as a permanent resident, it would be wise to reset slot 1 and ensure it does not respond to contact (user input).

Password Derivatives

To produce a strong password for LUKS (the encrypted volume), the algorithm used should produce a key that is variable in both length and character set. As unlikely as it is that the yubikey is storing entered passwords and caching generated hashes, the yubikey firmware is now closed source and there’s absolutely zero proof that isn’t happening. This is paranoia, but addressing the silly fear is quite easy.

My first algorithm looked much like this:

# static salt mixed into the challenge
salt='71'
# read the passphrase silently, with a 20 second timeout
read -sp '' -t 20 vault_key
len="${#vault_key}"
# final key = passphrase minus its first 5 chars, plus the yubikey's
# HMAC-SHA1 response to a sha256 digest built from pieces of the
# passphrase and the salt
luks_pass="${vault_key:5}$(/usr/bin/ykchalresp -2 \
    "$(sha256sum <<<"${vault_key:0:8}$salt${vault_key:$(($len - 5)):4}" | cut -d ' ' -f 1)")"
# sudo cryptsetup open [...]
unset vault_key luks_pass

# sample_in:  YouAreCorrectHorse,ThatIs@BatteryStaple!
# sample_out: eCorrectHorse,ThatIs@BatteryStaple!ac3bc63c4949f8c902ea49a7d9409f506c79bcdc

If able, coming up with a more secure algorithm than this would be a good idea. If using this sample, at least change the salt. Having the script verify the checksums of the binaries it calls would also be an excellent idea.

If the configuration was set to require user input, processing will stop at the "luks_pass=" line and the yubikey will begin blinking green. Once the key has been touched it will emit solid green until the hash is generated and returned.

Pro-tip: sha512sum produces a string too large for ykchalresp (64 limit)

Adding Factors

Knowing the final derived password means the original plain password can finally be retired. If there is no backup of the headers file, this would be an excellent time to make the copy and stick it away in a safe.

To add the yubikey-derived key:

sudo cryptsetup luksAddKey ~/.luks/headers/vault
# first enter the old (current) password
# enter the derived password
# enter it a second time

To delete the old key:

sudo cryptsetup luksKillSlot ~/.luks/headers/vault 0
# note: slot 0 is the first used and will have the plain password
#       this can be verified using luksDump
# enter the old password (for this slot)

Up to eight key slots are available for storing decryption keys. The same process used above can be repeated to add additional devices, the only exception being that no keys will be deleted.

Vault Access

Now that every copy/paste record of that key has been scrubbed, clipboard included, all that’s left is to build a convenient script to make accessing the vault a bit less painful.

I have included a very simple script to use as a starting point for your venture.
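
If that attachment ever goes missing, a minimal open/use/close wrapper along these lines captures the idea; the key derivation is the sample algorithm from above, while the keepassx invocation and the database filename are my own assumptions:

#!/bin/bash
# open the encrypted vault, run the password manager, then close it again
set -e

crypt=~/.luks/crypts/vault
header=~/.luks/headers/vault
mnt=~/.luks/mnt/vault

read -sp 'Vault password: ' vault_key; echo
len="${#vault_key}"
# same derivation as the sample algorithm above (salt '71')
luks_pass="${vault_key:5}$(/usr/bin/ykchalresp -2 \
    "$(sha256sum <<<"${vault_key:0:8}71${vault_key:$(($len - 5)):4}" | cut -d ' ' -f 1)")"
unset vault_key

# unlock and mount; --key-file=- reads the passphrase from stdin
printf '%s' "$luks_pass" | sudo cryptsetup open "$crypt" vault \
    --header "$header" --key-file=-
unset luks_pass
sudo mount /dev/mapper/vault "$mnt"

# hypothetical database path; adjust to your own layout
keepassx "$mnt/passwords.kdbx" || true

sudo umount "$mnt"
sudo cryptsetup close vault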

Final Thoughts

It would be nice to build a very strong and universal algorithm.

Most attacks that could hijack this derived password would also imply the attacker has already made it far enough into the system to grab a copy of the keepass file after the volume was mounted. If the intrusion is ever detected, ample time will be available to do another password refresh using a new password vault and encrypted volume.

Attachments:

image0 access_vault

14 February, 2017 06:00AM

Sridhar Dhanapalan: Interview with Australian Council for Computers in Education Learning Network

Adam Holt and I were interviewed last night by the Australian Council for Computers in Education Learning Network about our not-for-profit work to improve educational opportunities for children in the developing world.

We talked about One Laptop per Child, OLPC Australia and Sugar Labs. We discussed the challenges of providing education in the developing world, and how that compares with the developed world.

Australia poses some of its own challenges. As a country that is 90% urbanised, the remaining 10% are scattered across vast distances. The circumstances of these communities often share both developed and developing world characteristics. We developed the One Education programme to accommodate this.

These lessons have been developed further into Unleash Kids, an initiative that we are currently working on to support the community of volunteers worldwide and take the movement to the next level.

14 February, 2017 05:47AM

Sridhar Dhanapalan: Creating an Education Programme

OLPC Australia had a strong presence at linux.conf.au 2012 in Ballarat, two weeks ago.

I gave a talk in the main keynote room about our educational programme, in which I explained our mission and how we intend to achieve it.

Even if you saw my talk at OSDC 2011, I recommend that you watch this one. It is much improved and contains new and updated material. The YouTube version is above, but a higher quality version is available for download from Linux Australia.

The references for this talk are on our development wiki.

Here’s a better version of the video I played near the beginning of my talk:

I should start by pointing out that OLPC is by no means a niche or minor project. XO laptops are in the hands of 8000 children in Australia, across 130 remote communities. Around the world, over 2.5 million children, across nearly 50 countries, have an XO.

Investment in our Children’s Future

The key point of my talk is that OLPC Australia have a comprehensive education programme that highly values teacher empowerment and community engagement.

The investment to provide a connected learning device to every one of the 300 000 children in remote Australia is less than 0.1% of the annual education and connectivity budgets.

For low socio-economic status schools, the cost is only $80 AUD per child. Sponsorships, primarily from corporates, allow us to subsidise most of the expense (you too can donate to make a difference). Also keep in mind that this is a total cost of ownership, covering the essentials like teacher training, support and spare parts, as well as the XO and charging rack.

While our principal focus is on remote, low socio-economic status schools, our programme is available to any school in Australia. Yes, that means schools in the cities as well. The investment for non-subsidised schools to join the same programme is only $380 AUD per child.

Comprehensive Education Programme

We have a responsibility to invest in our children’s education — it is not just another market. As a not-for-profit, we have the freedom and the desire to make this happen. We have no interest in vendor lock-in; building sustainability is an essential part of our mission. We have no incentive to build a dependency on us, and every incentive to ensure that schools and communities can help themselves and each other.

We only provide XOs to teachers who have been sufficiently enabled. Their training prepares them to constructively use XOs in their lessons, and is formally recognised as part of their professional development. Beyond the minimum 15-hour XO-certified course, a teacher may choose to undergo a further 5-10 hours to earn XO-expert status. This prepares them to be able to train other teachers, using OLPC Australia resources. Again, we are reducing dependency on us.

Certifications

Training is conducted online, after the teacher signs up to our programme and they receive their XO. This scales well to let us effectively train many teachers spread across the country. Participants in our programme are encouraged to participate in our online community to share resources and assist one another.

Online training process

We also want to recognise and encourage children who have shown enthusiasm and aptitude, with our XO-champion and XO-mechanic certifications. Not only does this promote sustainability in the school and give invaluable skills to the child, it reinforces our core principle of Child Ownership. Teacher aides, parents, elders and other non-teacher adults have the XO-basics (formerly known as XO-local) course designed for them. We want the child’s learning experience to extend to the home environment and beyond, and not be constrained by the walls of the classroom.

There’s a reason why I’m wearing a t-shirt that says “No, I won’t fix your computer.” We’re on a mission to develop a programme that is self-sustaining. We’ve set high goals for ourselves, and we are determined to meet them. We won’t get there overnight, but we’re well on our way. Sustainability is about respect. We are taking the time to show them the ropes, helping them to own it, and developing our technology to make it easy. We fundamentally disagree with the attitude that ordinary people are not capable enough to take control of their own futures. Vendor lock-in is completely contradictory to our mission. Our schools are not just consumers; they are producers too.

As explained by Jonathan Nalder (a highly recommended read!), there are two primary notions guiding our programme. The first is that the nominal $80 investment per child is just enough for a school to take the programme seriously and make them a stakeholder, greatly improving the chances for success. The second is that this is a schools-centric programme, driven from grassroots demand rather than being a regime imposed from above. Schools that participate genuinely want the programme to succeed.

Programme cycle

Technology as an Enabler

Enabling this educational programme is the clever development and use of technology. That’s where I (as Engineering Manager at OLPC Australia) come in. For technology to be truly intrinsic to education, there must be no specialist expertise required. Teachers aren’t IT professionals, and nor should they be expected to be. In short, we are using computers to teach, not teaching computers.

The key principles of the Engineering Department are:

  • Technology is an integral and seamless part of the learning experience – the pen and paper of the 21st century.
  • To eliminate dependence on technical expertise, through the development and deployment of sustainable technologies.
  • Empowering children to be content producers and collaborators, not just content consumers.
  • Open platform to allow learning from mistakes… and easy recovery.

OLPC have done a marvellous job in their design of the XO laptop, giving us a fantastic platform to build upon. I think that our engineering projects in Australia have been quite innovative in helping to cover the ‘last mile’ to the school. One thing I’m especially proud of is our insistence on openness. We turn traditional systems administration practice on its head to completely empower the end-user. Technology that is deployed in corporate or educational settings is typically locked down to make administration and support easier. This takes control completely away from the end-user. They are severely limited in what they can do, and if something doesn’t work as they expect then they are totally at the mercy of the admins to fix it.

In an educational setting this is disastrous — it severely limits what our children can learn. We learn most from our mistakes, so let’s provide an environment in which children are able to safely make mistakes and recover from them. The software is quite resistant to failure, both at the technical level (being based on Fedora Linux) and at the user interface level (Sugar). If all goes wrong, reinstalling the operating system and restoring a journal (Sugar user files) backup is a trivial endeavour. The XO hardware is also renowned for its ruggedness and repairability. Less well-known are the amazing diagnostics tools, providing quick and easy indication that a component should be repaired/replaced. We provide a completely unlocked environment, with full access to the root user and the firmware. Some may call that dangerous, but I call that empowerment. If a child starts hacking on an XO, we want to hire that kid 🙂

Evaluation

My talk features the case study of Doomadgee State School, in far-north Queensland. Doomadgee have very enthusiastically taken on board the OLPC Australia programme. Every one of the 350 children aged 4-14 has been issued with an XO, as part of a comprehensive professional development and support programme. Since commencing in late 2010, the percentage of Year 3 pupils at or above national minimum standards in numeracy has leapt from 31% in 2010 to 95% in 2011. Other scores have also increased. Think what you may about NAPLAN, but nevertheless that is a staggering improvement.

In federal parliament, Robert Oakeshott MP has been very supportive of our mission:

Most importantly of all, quite simply, One Laptop per Child Australia delivers results in learning from the 5,000 students already engaged, showing impressive improvements in closing the gap generally and lifting access and participation rates in particular.

We are also engaged in longitudinal research, working closely with respected researchers to have a comprehensive evaluation of our programme. We will release more information on this as the evaluation process matures.

Join our mission

Schools can register their interest in our programme on our Education site.

Our Prospectus provides a high-level overview.

For a detailed analysis, see our Policy Document.

If you would like to get involved in our technical development, visit our development site.

Credits

Many thanks to Tracy Richardson (Education Manager) for some of the information and graphics used in this article.

14 February, 2017 05:35AM

Ubuntu Insights: Network management with LXD (2.3+)

LXD logo

Introduction

When LXD 2.0 shipped with Ubuntu 16.04, LXD networking was pretty simple. You could either use that “lxdbr0” bridge that “lxd init” would have you configure, provide your own, or just use an existing physical interface for your containers.

While this certainly worked, it was a bit confusing because most of that bridge configuration happened outside of LXD in the Ubuntu packaging. Those scripts could only support a single bridge and none of this was exposed over the API, making remote configuration a bit of a pain.

That was all until LXD 2.3 when LXD finally grew its own network management API and command line tools to match. This post is an attempt at an overview of those new capabilities.

Basic networking

Right out of the box, LXD 2.3 comes with no network defined at all. “lxd init” will offer to set one up for you and attach it to all new containers by default, but let’s do it by hand to see what’s going on under the hood.

To create a new network with a random IPv4 and IPv6 subnet and NAT enabled, just run:

stgraber@castiana:~$ lxc network create testbr0
Network testbr0 created

You can then look at its config with:

stgraber@castiana:~$ lxc network show testbr0
name: testbr0
config:
 ipv4.address: 10.150.19.1/24
 ipv4.nat: "true"
 ipv6.address: fd42:474b:622d:259d::1/64
 ipv6.nat: "true"
managed: true
type: bridge
usedby: []

If you don’t want those auto-configured subnets, you can go with:

stgraber@castiana:~$ lxc network create testbr0 ipv6.address=none ipv4.address=10.0.3.1/24 ipv4.nat=true
Network testbr0 created

Which will result in:

stgraber@castiana:~$ lxc network show testbr0
name: testbr0
config:
 ipv4.address: 10.0.3.1/24
 ipv4.nat: "true"
 ipv6.address: none
managed: true
type: bridge
usedby: []

Having a network created and running won’t do you much good if your containers aren’t using it.
To have your newly created network attached to all containers, you can simply do:

stgraber@castiana:~$ lxc network attach-profile testbr0 default eth0

To attach a network to a single existing container, you can do:

stgraber@castiana:~$ lxc network attach testbr0 my-container default eth0

Now, let’s say you have openvswitch installed on that machine and want to convert that bridge to an OVS bridge; just change the driver property:

stgraber@castiana:~$ lxc network set testbr0 bridge.driver openvswitch

If you want to do a bunch of changes all at once, “lxc network edit” will let you edit the network configuration interactively in your text editor.

Static leases and port security

One of the nice things about having LXD manage the DHCP server for you is that it makes managing DHCP leases much simpler. All you need is a container-specific nic device and the right property set.

root@yak:~# lxc init ubuntu:16.04 c1
Creating c1
root@yak:~# lxc network attach testbr0 c1 eth0
root@yak:~# lxc config device set c1 eth0 ipv4.address 10.0.3.123
root@yak:~# lxc start c1
root@yak:~# lxc list c1
+------+---------+-------------------+------+------------+-----------+
| NAME |  STATE  |        IPV4       | IPV6 |    TYPE    | SNAPSHOTS |
+------+---------+-------------------+------+------------+-----------+
|  c1  | RUNNING | 10.0.3.123 (eth0) |      | PERSISTENT | 0         |
+------+---------+-------------------+------+------------+-----------+

And same goes for IPv6 but with the “ipv6.address” property instead.

Similarly, if you want to prevent your container from ever changing its MAC address or forwarding traffic for any other MAC address (such as nesting), you can enable port security with:

root@yak:~# lxc config device set c1 eth0 security.mac_filtering true

DNS

LXD runs a DNS server on the bridge. On top of letting you set the DNS domain for the bridge (“dns.domain” network property), it also supports 3 different operating modes (“dns.mode”):

  • “managed” will have one DNS record per container, matching its name and known IP addresses. The container cannot alter this record through DHCP.
  • “dynamic” allows the containers to self-register in the DNS through DHCP. So whatever hostname the container sends during the DHCP negotiation ends up in DNS.
  • “none” is for a simple recursive DNS server without any kind of local DNS records.

The default mode is “managed” and is typically the safest and most convenient as it provides DNS records for containers but doesn’t let them spoof each other’s records by sending fake hostnames over DHCP.
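
Switching modes only takes the two properties mentioned above; a quick sketch, with an invented domain name:

stgraber@castiana:~$ lxc network set testbr0 dns.domain lxd.example
stgraber@castiana:~$ lxc network set testbr0 dns.mode dynamic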

Using tunnels

On top of all that, LXD also supports connecting to other hosts using GRE or VXLAN tunnels.

A LXD network can have any number of tunnels attached to it, making it easy to create networks spanning multiple hosts. This is mostly useful for development, test and demo uses, with production environment usually preferring VLANs for that kind of segmentation.

So say you want a basic “testbr0” network running with IPv4 and IPv6 on host “edfu” and want to spawn containers using it on host “djanet”. The easiest way to do that is with a multicast VXLAN tunnel. This type of tunnel only works when both hosts are on the same physical segment.

root@edfu:~# lxc network create testbr0 tunnel.lan.protocol=vxlan
Network testbr0 created
root@edfu:~# lxc network attach-profile testbr0 default eth0

This defines a “testbr0” bridge on host “edfu” and sets up a multicast VXLAN tunnel on it for other hosts to join. In this setup, “edfu” will be the one acting as a router for that network, providing DHCP, DNS and so on; the other hosts will just be forwarding traffic over the tunnel.

root@djanet:~# lxc network create testbr0 ipv4.address=none ipv6.address=none tunnel.lan.protocol=vxlan
Network testbr0 created
root@djanet:~# lxc network attach-profile testbr0 default eth0

Now you can start containers on either host and see them getting IP from the same address pool and communicate directly with each other through the tunnel.

As mentioned earlier, this uses multicast, which usually won’t do you much good when crossing routers. For those cases, you can use VXLAN in unicast mode or a good old GRE tunnel.

To join another host using GRE, first configure the main host with:

root@edfu:~# lxc network set testbr0 tunnel.nuturo.protocol gre
root@edfu:~# lxc network set testbr0 tunnel.nuturo.local 172.17.16.2
root@edfu:~# lxc network set testbr0 tunnel.nuturo.remote 172.17.16.9

And then the “client” host with:

root@nuturo:~# lxc network create testbr0 ipv4.address=none ipv6.address=none tunnel.edfu.protocol=gre tunnel.edfu.local=172.17.16.9 tunnel.edfu.remote=172.17.16.2
Network testbr0 created
root@nuturo:~# lxc network attach-profile testbr0 default eth0

If you’d rather use vxlan, just do:

root@edfu:~# lxc network set testbr0 tunnel.nuturo.id 10
root@edfu:~# lxc network set testbr0 tunnel.nuturo.protocol vxlan

And:

root@nuturo:~# lxc network set testbr0 tunnel.edfu.id 10
root@nuturo:~# lxc network set testbr0 tunnel.edfu.protocol vxlan

The tunnel id is required here to avoid conflicting with the already configured multicast vxlan tunnel.

And that’s how you make cross-host networking easily with recent LXD!

Conclusion

LXD now makes it very easy to define anything from a simple single-host network to a very complex cross-host network for thousands of containers. It also makes it very simple to define a new network just for a few containers or add a second device to a container, connecting it to a separate private network.
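
To make that last point concrete, here is a hedged sketch of giving an existing container a second interface on its own private network; the network name, subnet and container name are invented for the example:

stgraber@castiana:~$ lxc network create private0 ipv4.address=10.42.0.1/24 ipv4.nat=false ipv6.address=none
Network private0 created
stgraber@castiana:~$ lxc network attach private0 my-container eth1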

While this post goes through most of the different features we support, there are quite a few more knobs that can be used to fine tune the LXD network experience.
A full list can be found here: https://github.com/lxc/lxd/blob/master/doc/configuration.md

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Original article

14 February, 2017 03:53AM

hackergotchi for Ubuntu

Ubuntu

Ubuntu Weekly Newsletter Issue 498

Welcome to the Ubuntu Weekly Newsletter. This is issue #498 for the week February 6 – 12, 2017, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Paul White
  • Chris Guiver
  • Jim Connett
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution 3.0 License BY SA Creative Commons License

14 February, 2017 03:46AM by lyz

February 13, 2017

hackergotchi for SparkyLinux

SparkyLinux

pmrp

There is a new app available in Sparky repos: pmrp.

pmrp (Poor Man’s Radio Player) is an Internet radio player script written in bash.

Features :
– 350 hand-picked radio stations.
– Music, news, talk-shows, interviews, comedy, plays and much much more.
– Easy menu system to browse-navigate between different radio stations.
– Now playing information.
– Very low memory footprint.
– No configuration required. Ready to play from the word go.

The script was created by ‘hakerdefo’ and released under the Public Domain Mark http://creativecommons.org/publicdomain/mark/1.0

Installation:
sudo apt-get update
sudo apt-get install pmrp

It works in a text console so launch it with the command:
pmrp
or from the desktop menu -> Multimedia -> pmrp

pmrp

 

13 February, 2017 11:39PM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, January 2017

A Debian LTS logoLike each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In January, about 159 work hours have been dispatched among 13 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours increased slightly thanks to Exonet joining us.

The security tracker currently lists 37 packages with a known CVE, and the dla-needed.txt file lists 36. The situation is roughly similar to last month, even though the number of open issues increased slightly.

Thanks to our sponsors

New sponsors are in bold.


13 February, 2017 05:33PM

Ubuntu Insights: Eclipse foundation: IoT developer survey

The third annual IoT Developer Survey, hosted by Eclipse IoT, has just been launched. In previous years it has provided interesting insights into how developers are building IoT solutions; for example, did you know that 73% of people use Linux to develop their IoT projects? Canonical is pleased to support this initiative once again!

Take the survey below and share with friends if you feel they’d be interested!

Take the survey

13 February, 2017 04:49PM