September 22, 2017


Univention Corporate Server

Let’s Encrypt – Making the Internet That Little Bit Securer Every Day

a free, automated, open source certificate authority

Launched in 2014 by the Electronic Frontier Foundation, the University of Michigan, and Mozilla, the Let’s Encrypt project recently issued its 100 millionth automated, free certificate and is thus truly living up to its slogan of “Encrypt the entire web”.

In the aftermath of the Snowden revelations of 2013, Let’s Encrypt declared its goal of making SSL/TLS certificates available to everyone on the Internet and promoting free encryption on the web. Since then, the project has won over a wide range of notable companies and acquired sponsors such as Akamai and Cisco as well as, most recently, Netflix.
The project aims to keep the administrative efforts required of the user as low as possible. With minimal configuration, it should be possible to put an HTTPS server in a position to request certificates autonomously.
The new Let’s Encrypt App for UCS is also based on this maxim and is covered in more detail later in this article.

How Let’s Encrypt Works – Automated and Transparent

One decisive aspect of the design of Let’s Encrypt’s infrastructure is a completely automated process for the creation and verification of a requested certificate and complete transparency with regard to these transactions. This makes it possible to view who requested a certificate for which domain at any time so as to avoid abuse.

The technology behind it – ACME-based keys and challenges for the clients

From a technological perspective, Let’s Encrypt is based on the ACME protocol. ACME stands for Automated Certificate Management Environment and was designed for Let’s Encrypt by the Internet Security Research Group (ISRG). It is based on JSON and HTTPS and has already been implemented in clients of various forms.

Image Source: www.letsencrypt.org

The ACME client generates an authorized key pair for the respective domain in cooperation with the Let’s Encrypt project’s servers. This key pair is then entitled to request or revoke a certificate for the domain. Once the key pair is registered with Let’s Encrypt, the client generates a certificate signing request (CSR), signs it with its key, and sends it to Let’s Encrypt. The Let’s Encrypt servers then select a challenge that the client has to complete. For example, this might be: save file “x” on your web server with the content “y”. As soon as the client reports that it has completed the challenge, Let’s Encrypt checks whether it has done so successfully (in this case, whether file “x” can be found on the requested domain with the content “y”).
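The challenge step can be sketched in shell. This is a deliberately minimal illustration, not a real ACME client; the token “x”, the key authorization “y”, and the temporary directory standing in for the web root are all hypothetical:

```shell
# Hypothetical values handed out by the ACME server for this challenge.
TOKEN="x"
KEYAUTH="y"

# Temporary directory standing in for the web server's document root.
WEBROOT="$(mktemp -d)"

# HTTP-01 style validation: publish the expected content at a well-known path.
mkdir -p "$WEBROOT/.well-known/acme-challenge"
printf '%s' "$KEYAUTH" > "$WEBROOT/.well-known/acme-challenge/$TOKEN"

# The CA then fetches http://<domain>/.well-known/acme-challenge/<token>
# and verifies that the content matches before issuing the certificate.
cat "$WEBROOT/.well-known/acme-challenge/$TOKEN"
```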

Wide Range of Validation Options for Added Flexibility

If the validation is successful, the client receives its certificate, which can then be included in services such as the Apache web server or the Dovecot IMAP server.

Although validation via a specific file on the web server of the requesting host is by far the most widely used and implemented method, there is also the option of validation via DNS, among others. However, this is not supported by all clients and requires the manual creation of a TXT record in the DNS zone of the domain for which a certificate is requested. One client which supports this form of validation is Let’s Encrypt’s official “certbot”, for example.
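As a sketch of the DNS variant (example.com is a placeholder domain), the challenge value must be published in a TXT record under a fixed label derived from the domain:

```shell
# Placeholder domain for which a certificate is requested.
DOMAIN="example.com"

# DNS validation expects a TXT record at this well-known label.
RECORD="_acme-challenge.${DOMAIN}"
echo "$RECORD"

# With certbot this looks roughly like (not executed here):
#   certbot certonly --manual --preferred-challenges dns -d "$DOMAIN"
# certbot then prints the TXT value to publish; propagation can be checked with:
#   dig +short TXT "$RECORD"
```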

Internally, Let’s Encrypt works with a root certificate, which signs two intermediate certificates. One of these two intermediates then signs all the automatically issued certificates.
Once a certificate has been initially issued, the majority of clients work with mechanisms such as Cron to retrieve a new certificate automatically following a schedule. This is because the certificates issued by Let’s Encrypt are generally only valid for 90 days so as to render (long-term) misuse of keys and certificates more difficult and promote automation of the process among users.
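Scheduled renewal of this kind usually boils down to a single cron entry. The fragment below is purely illustrative (the timing and the certbot command are assumptions; distribution packages often install an equivalent cron job or systemd timer for you):

```shell
# Illustrative /etc/crontab fragment: attempt renewal twice a day;
# certbot only replaces certificates that are close to expiry.
0 3,15 * * *   root   certbot renew --quiet
```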

Integration of a Let’s Encrypt Client in UCS for the Creation of Secure Certificates

In the scope of the Cool Solutions, Univention offers a largely automated integration of a Let’s Encrypt client, which has recently been made available in the App Center. Following the installation of the “Let’s Encrypt” app in UCS 4.2, the app settings in the App Center can be used to enter the desired domains and configure the use of the certificate in Apache, Dovecot, and Postfix. Clicking on “Save Changes” starts a script which retrieves the certificate and configures it in the respective services. And that’s it – setup is complete! A Cron job set up during the installation ensures that a new certificate is retrieved every 30 days so that there is always a valid certificate available in the system. The app uses the client implementation “acme-tiny” and validates the domain with the help of the classic method of a special file which is provided to the Let’s Encrypt servers via the Apache web service.

Outlook: A Little Bit More Security for the Web Every Day

Although Let’s Encrypt has already contributed significantly to the “fully encrypted web” with 100 million certificates, the project is by no means coming to a standstill. For example, support for wildcard certificates was announced just recently and should be available from January 2018. This and further developments, such as the IPv6 support incorporated in mid-2016, will make free certificates available to more and more people and render the web a little bit more secure day by day. In February of this year, the EFF reported that half of the traffic on the world wide web is now encrypted. In view of the continuously growing number of domains encrypted with Let’s Encrypt, a healthy optimism with regard to this development is therefore thoroughly justified. Particularly in light of the steady expansion of Internet surveillance by internationally operating intelligence services and the monopolization of mass-market services by corporations such as Facebook, open-source, democratizing projects such as Let’s Encrypt are essential for the continued existence of a free and fair Internet.

The post Let’s Encrypt – Making the Internet That Little Bit Securer Every Day appeared first on Univention.

22 September, 2017 10:40AM by Irina Feller

September 21, 2017


Ubuntu developers

Ubuntu Podcast from the UK LoCo: S10E29 – Adamant Terrible Hammer - Ubuntu Podcast

This is Le Crossover Ubuntu Mashup Podcast thingy, recorded live at UbuCon Europe in Paris, France.

It’s Season Ten Episode Twenty-Nine of the Ubuntu Podcast! Alan Pope, Martin Wimpress, Marius Quabeck, Max Kristen, Rudy and Tiago Carrondo are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

21 September, 2017 08:10PM


Purism PureOS

Make your own Librem 5 concept art.


A few days ago, a very talented Librem 5 enthusiast asked me for some HD material to create his own Librem 5 concept art, so I have put together a couple of blank renders of the handset, along with the logos in SVG format.

All this design is currently a work in progress and I believe in collaborative efforts. I believe in the people’s power. I believe in the fact that we don’t own Creativity. We just own the pleasure of expressing it. I see Creativity as a global positive energy that vibrates and grows through all of us. We should never restrict its freedom of movement. Freely collaborating and sharing with the world is the essence of the Free Software movement and is what Purism is made of.

In that regard, I thought I would make those files public for anyone to freely join the fun.
So, if you feel like expressing your artistic skills and your vision of what a smartphone made to respect its users and software freedom could be, feel free to do so!

Download the Librem 5 Concept Pack

Enjoy! 🙂

21 September, 2017 07:40PM by François Téchené


Ubuntu developers

Ubuntu Insights: Microsoft and Canonical Increase Velocity with Azure Tailored Kernel

By Leann Ogasawara, Director of Kernel Engineering

Ubuntu has long been a popular choice for Linux instances on Azure.  Our ongoing partnership with Microsoft has brought forth great results, such as the support of the latest Azure features, Ubuntu underlying SQL Server instances, bash on Windows, Ubuntu containers with Hyper-V Isolation on Windows 10 and Windows Servers, and much more.

Canonical, together with the team at Microsoft Azure, is now delighted to announce that as of September 21, 2017, Ubuntu Cloud Images for Ubuntu 16.04 LTS on Azure have been enabled with a new Azure tailored Ubuntu kernel by default.  The Azure tailored Ubuntu kernel will receive the same level of support and security maintenance as all supported Ubuntu kernels for the duration of the Ubuntu 16.04 LTS support life.

The kernel itself is provided by the linux-azure kernel package. The most notable highlights for this kernel include:

  • InfiniBand and RDMA capability for Azure HPC to deliver optimized performance of compute-intensive workloads on Azure A8, A9, H-series, and NC24r.
  • Full support for Accelerated Networking in Azure.  Direct access to the PCI device provides gains in overall network performance offering the highest throughput and lowest latency for guests in Azure.  Transparent SR-IOV eliminates configuration steps for bonding network devices.  SR-IOV for Linux in Azure is in preview but will become generally available later this year.
  • NAPI and Receive Segment Coalescing for 10% greater throughput on guests not using SR-IOV.
  • 18% reduction in kernel size.
  • Hyper-V socket capability — a socket-based host/guest communication method that does not require a network.
  • The very latest Hyper-V device drivers and feature support available.

The ongoing collaboration between Canonical and Microsoft will also continue to produce upgrades to newer kernel versions providing access to the latest kernel features, bug fixes, and security updates.  Any Ubuntu 16.04 LTS image brought up from the Azure portal after September 21st will be running on this Azure tailored Ubuntu kernel.

How to verify which kernel is used:

$ uname -r

4.11.0-1011-azure
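As a trivial scripted variant of the same check (the release string is hard-coded here for illustration; on a live instance you would use the output of uname -r), the Ubuntu kernel flavour is simply the suffix after the last dash:

```shell
# Kernel release string as shown above; on a live instance: KREL="$(uname -r)"
KREL="4.11.0-1011-azure"

# The Ubuntu kernel flavour is the suffix after the last dash.
FLAVOUR="${KREL##*-}"
echo "$FLAVOUR"
```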

 

Instances using the Azure tailored Ubuntu kernel will, of course, be supportable through Canonical’s Ubuntu Advantage service, available for purchase on our online shop or through sales@canonical.com in three tiers:

  • Essential: designed for self-sufficient users, providing access to our self-support portal as well as a variety of Canonical tools and services.
  • Standard: adding business-hours web and email support on top of the contents of Essential, as well as a 2-hour to 2-business-day response time (severity 1-4).
  • Advanced: adding 24×7 web and email support on top of the contents of Essential, as well as a 1-hour to 1-business-day response time (severity 1-4).

The Azure tailored Ubuntu kernel will not support the Canonical Livepatch Service at the time of this announcement, but investigation is underway to evaluate delivery of this service in the future.

If, for now, you prefer livepatching at scale over the above performance improvements, it is possible to revert to the standard kernel, using the following commands:

 

$ sudo apt install linux-virtual linux-cloud-tools-virtual

$ sudo apt purge linux*azure

$ sudo reboot

 

As we continue to collaborate closely with various Microsoft teams on public cloud, private cloud, containers and services, you can expect further boosts in performance, simplification of operations at scale, and enablement of new innovations and technologies.

21 September, 2017 04:00PM

Ante Karamatić: Name Rejected

After 8-9 days and a follow-up email, today I received notice of what is happening with my application. I am passing it on in full:

on 12.09.2017 the reservation was sent to TS Zagreb (e-tvrtka), and the documentation and the RZ form were sent by post to Hitro.hr Zagreb
The paper documentation was submitted to the court on 13.09.2017. The name reservation did not go through. The notice was picked up from the court on 18.09.2017 (Hitro.hr – Zagreb)
The notice arrived by post at Hitro.hr – Šibenik today (21.09.2017). I called you on your mobile so that you could pick up the confirmation, but no one is answering.
I am therefore informing you that you can pick up the notice at HITRO.HR Šibenik.

So, eTvrtka is one big nothing; a plain lie and a fraud. Documents are still being sent around by post. To be clear, this is not the fault of the clerks, who were accommodating. This is a problem of how the state, or rather the Government, is organized. The clerks are victims here just as much as those of us who are trying to create something.

So, the name was rejected.

In the Republic of Croatia it takes 10 days to find out whether you can start a company with a particular name. In other countries such things do not even exist; companies are set up within a single day. If we want to be fertile ground for entrepreneurship, hitro.hr should be abolished (it is completely pointless) and modern technology introduced; algorithms can check names, and the whole thing should be nothing more than a web page. No protocols, no payments, no standing in line.

21 September, 2017 03:47PM


Ubuntu

Ubuntu Community Council 2017 election under way!

The Ubuntu Community Council election has begun, and ballots have been sent out to all Ubuntu Members. Voting closes September 27th at end of day UTC.

The following candidates are standing for 7 seats on the council:

Please contact the community-council@lists.ubuntu.com list if you are an Ubuntu Member but did not receive a ballot. Voting instructions were sent to the public address defined in Launchpad, or your launchpad_id@ubuntu.com address if not. Please also make sure you check your spam folder first.

We’d like to thank all the candidates for their willingness to serve in this capacity, and the members for their considered votes.

Originally posted to the ubuntu-news-team mailing list on Tue Sep 12 14:22:49 UTC 2017 by Mark Shuttleworth

21 September, 2017 03:28PM by José Antonio Rey


Ubuntu developers

Ubuntu Insights: Kubernetes Snaps: The Quick Version

This article originally appeared on George Kraft’s blog

When we built the Canonical Distribution of Kubernetes (CDK), one of our goals was to provide snap packages for the various Kubernetes clients and services: kubectl, kube-apiserver, kubelet, etc.

While we mainly built the snaps for use in CDK, they are freely available to use for other purposes as well. Let’s have a quick look at how to install and configure the Kubernetes snaps directly.

The Client Snaps

This covers: kubectl, kubeadm, kubefed

Nothing special to know about these. Just snap install and you can use them right away:

$ sudo snap install kubectl --classic
kubectl 1.7.4 from 'canonical' installed
$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.4", GitCommit:"793658f2d7ca7f064d2bdf606519f9fe1229c381", GitTreeState:"clean", BuildDate:"2017-08-17T08:48:23Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

The Server Snaps

This covers: kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy

Example: kube-apiserver

We will use kube-apiserver as an example. The other services generally work the same way.

Install with snap install

This creates a systemd service named snap.kube-apiserver.daemon. Initially, it will be in an error state because it’s missing important configuration:

$ systemctl status snap.kube-apiserver.daemon
● snap.kube-apiserver.daemon.service - Service for snap application kube-apiserver.daemon
   Loaded: loaded (/etc/systemd/system/snap.kube-apiserver.daemon.service; enabled; vendor preset: enabled)
   Active: inactive (dead) (Result: exit-code) since Fri 2017-09-01 15:54:39 UTC; 11s ago
   ...

Configure kube-apiserver using snap set:

sudo snap set kube-apiserver \
  etcd-servers=https://172.31.9.254:2379 \
  etcd-certfile=/root/certs/client.crt \
  etcd-keyfile=/root/certs/client.key \
  etcd-cafile=/root/certs/ca.crt \
  service-cluster-ip-range=10.123.123.0/24 \
  cert-dir=/root/certs

Note: Any files used by the service, such as certificate files, must be placed within the /root/ directory to be visible to the service. This limitation allows us to run a few of the services in a strict confinement mode that offers better isolation and security.

After configuring, restart the service and you should see it running:

$ sudo service snap.kube-apiserver.daemon restart
$ systemctl status snap.kube-apiserver.daemon
● snap.kube-apiserver.daemon.service - Service for snap application kube-apiserver.daemon
   Loaded: loaded (/etc/systemd/system/snap.kube-apiserver.daemon.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2017-09-01 16:02:33 UTC; 6s ago
   ...

Configuration

The keys and values for snap set map directly to arguments that you would normally pass to the service. You can view a list of arguments by invoking the service directly, e.g. kube-apiserver -h.

For configuring the snaps, drop the leading dashes and pass them through snap set. For example, if you want kube-apiserver to be invoked like this

kube-apiserver --etcd-servers https://172.31.9.254:2379 --allow-privileged

You would configure the snap like this:

snap set kube-apiserver etcd-servers=https://172.31.9.254:2379 allow-privileged=true

Note, also, that we had to specify a value of true for allow-privileged. This applies to all boolean flags.
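For illustration, the key=value to command-line mapping can be sketched in a few lines of shell. This is a simplified stand-in for what the snap’s configure hook does, not its actual code; the settings string is taken from the examples above:

```shell
# Simplified sketch: turn snap-set style key=value pairs into the
# corresponding service arguments (values are from the example above).
settings="etcd-servers=https://172.31.9.254:2379 allow-privileged=true"

args=""
for kv in $settings; do
    key="${kv%%=*}"   # part before the first '='
    val="${kv#*=}"    # part after the first '='
    if [ "$val" = "true" ]; then
        args="$args --$key"        # boolean flags become bare switches
    else
        args="$args --$key $val"
    fi
done

echo "kube-apiserver$args"
```

Running this prints the same invocation shown above: kube-apiserver --etcd-servers https://172.31.9.254:2379 --allow-privileged.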

Going deeper

Want to know more? Here are a couple good things to know:

If you’re confused about what snap set ... is actually doing, you can read the snap configure hooks in

/snap/<snap-name>/current/meta/hooks/configure

to see how they work.

The configure hook creates an args file here:

/var/snap/<snap-name>/current/args

This contains the actual arguments that get passed to the service by the snap:

$ cat /var/snap/kube-apiserver/current/args 
--cert-dir "/root/certs"
--etcd-cafile "/root/certs/ca.crt"
--etcd-certfile "/root/certs/client.crt"
--etcd-keyfile "/root/certs/client.key"
--etcd-servers "https://172.31.9.254:2379"
--service-cluster-ip-range "10.123.123.0/24"

Note: While you can technically bypass snap set and edit the args file directly, it’s best not to do so. The next time the configure hook runs, it will obliterate your changes. This can occur not only from a call to snap set but also during a background refresh of the snap.

The source code for the snaps can be found here: https://github.com/juju-solutions/release/tree/rye/snaps/snap

We’re working on getting these snaps added to the upstream Kubernetes build process. You can follow our progress on that here: https://github.com/kubernetes/release/pull/293

If you have any questions or need help, you can either find us at #juju on freenode, or open an issue against https://github.com/juju-solutions/bundle-canonical-kubernetes and we’ll help you out as soon as we can.

21 September, 2017 01:46PM

Ubuntu Insights: Security Team Weekly Summary: September 21, 2017

The Security Team weekly reports are intended to be very short summaries of the Security Team’s weekly activities.

If you would like to reach the Security Team, you can find us at the #ubuntu-hardened channel on FreeNode. Alternatively, you can mail the Ubuntu Hardened mailing list at: ubuntu-hardened@lists.ubuntu.com

During the last week, the Ubuntu Security team:

  • Triaged 489 public security vulnerability reports, retaining the 152 that applied to Ubuntu.
  • Published 6 Ubuntu Security Notices which fixed 122 security issues (CVEs) across 5 supported packages.

Bug Triage

Mainline Inclusion Requests

Development

  • validate license and deprecate aliases in the review tools
  • reviews
    • broadcom-asic-control updates PR 3898
    • bootstrap.c of snap-confine calling snap-update-ns PR 3621
    • s390x and i386 socket snap-seccomp test failures fix (PR 3900)
    • network interface update PR 3898
    • ‘mount host system fonts in desktop interface’ PR 3889
    • ‘enable partial apparmor support’ PR 3814
    • ‘run secondary-arch tests via gcc-multilib’ PR 3901
    • apparmor profile changes for snap-confine calling snap-update-ns PR 3621
  • implement/submit PR 3919 for miscellaneous policy updates xxix
  • implement/submit PR 3921 for miscellaneous policy updates xxix for 2.28
  • policy update for org.freedesktop.DBus ListNames() PR 3928

  • regression and manual testing of LSM stacking with AppArmor and SELinux

  • fscrypt 0.2.1 packaged
  • upload apparmor 2.11.0-2ubuntu17 for systemd stub resolver
  • send up patch to upstream apparmor to drop /var/run alternation in favor of /run

What the Security Team is Reading This Week

Weekly Meeting

More Info

21 September, 2017 01:22PM

Scarlett Clark: KDE: Randa 2017! KDE Neon Snappy and more

Another successful Randa meeting! I spent most of my days working on snappy packaging for KDE core applications, and I have most of them done!

Snappy Builds on KDE Neon

We need testers! Please see Using snappy to get started.

In the evenings I worked on getting all my appimage work moved into the KDE infrastructure so that the community can take over.

I learned a great deal about accessibility and have been formulating ways to improve KDE neon in this area.

Randa meetings are crucial to the KDE community for developer interaction, brainstorming, and bringing great new things to KDE.
I encourage all of you to please consider a donation at https://www.kde.org/fundraisers/randameetings2017/

21 September, 2017 12:54PM

September 20, 2017


Kali Linux

Kali Linux 2017.2 Release

We are happy to announce the release of Kali Linux 2017.2, available now for your downloading pleasure. This release is a roll-up of all updates and fixes since our 2017.1 release in April. In tangible terms, if you were to install Kali from your 2017.1 ISO, after logging in to the desktop and running ‘apt update && apt full-upgrade’, you would be faced with something similar to this daunting message:

1399 upgraded, 171 newly installed, 16 to remove and 0 not upgraded.
Need to get 1,477 MB of archives.
After this operation, 1,231 MB of additional disk space will be used.
Do you want to continue? [Y/n]

That would make for a whole lot of downloading, unpacking, and configuring of packages. Naturally, these numbers don’t tell the entire tale so read on to see what’s new in this release.

New and Updated Packages in Kali 2017.2

In addition to all of the standard security and package updates that come to us via Debian Testing, we have also added more than a dozen new tools to the repositories, a few of which are listed below. There are some really nice additions so we encourage you to ‘apt install’ the ones that pique your interest and check them out.

  • hurl – a useful little hexadecimal and URL encoder/decoder
  • phishery – phishery lets you inject SSL-enabled basic auth phishing URLs into a .docx Word document
  • ssh-audit – an SSH server auditor that checks for encryption types, banners, compression, and more
  • apt2 – an Automated Penetration Testing Toolkit that runs its own scans or imports results from various scanners, and takes action on them
  • bloodhound – uses graph theory to reveal the hidden or unintended relationships within Active Directory
  • crackmapexec – a post-exploitation tool to help automate the assessment of large Active Directory networks
  • dbeaver – powerful GUI database manager that supports the most popular databases, including MySQL, PostgreSQL, Oracle, SQLite, and many more
  • brutespray – automatically attempts default credentials on discovered services

On top of all the new packages, this release also includes numerous package updates, including jd-gui, dnsenum, edb-debugger, wpscan, watobo, burpsuite, and many others. To check out the full list of updates and additions, refer to the Kali changelog on our bug tracker.

Ongoing Integration Improvements

Beyond the new and updated packages in this release, we have also been working towards improving the overall integration of packages in Kali Linux. One area in particular is in program usage examples. Many program authors assume that their application will only be run in a certain manner or from a certain location. For example, the SMBmap application has a binary name of ‘smbmap’ but if you were to look at the usage example, you would see this:

Examples:

$ python smbmap.py -u jsmith -p password1 -d workgroup -H 192.168.0.1
$ python smbmap.py -u jsmith -p 'aad3b435b51404eeaad3b435b51404ee:da76f2c4c96028b7a6111aef4a50a94d' -H 172.16.0.20
$ python smbmap.py -u 'apadmin' -p 'asdf1234!' -d ACME -h 10.1.3.30 -x 'net group "Domain Admins" /domain'

If you were a novice user, you might see these examples, try to run them verbatim, find that they don’t work, assume the tool doesn’t work, and move on. That would be a shame, because smbmap is an excellent program, so we have been working on fixing these usage discrepancies to help improve the overall fit and finish of the distribution. If you run ‘smbmap’ in Kali 2017.2, you will now see this output instead:

Examples:

$ smbmap -u jsmith -p password1 -d workgroup -H 192.168.0.1
$ smbmap -u jsmith -p 'aad3b435b51404eeaad3b435b51404ee:da76f2c4c96028b7a6111aef4a50a94d' -H 172.16.0.20
$ smbmap -u 'apadmin' -p 'asdf1234!' -d ACME -h 10.1.3.30 -x 'net group "Domain Admins" /domain'

We hope that small tweaks like these will help reduce confusion for both veterans and newcomers, and it’s something we will continue working towards as time goes on.

Learn More About Kali Linux

In the time since the release of 2017.1, we also released our first book, Kali Linux Revealed, in both physical and online formats. If you are interested in going far beyond the basics, really want to learn how Kali Linux works, and how you can leverage its many advanced features, we encourage you to check it out. Once you have mastered the material, you will have the foundation required to pursue the Kali Linux Certified Professional certification.

Kali ISO Downloads, Virtual Machines and ARM Images

The Kali Rolling 2017.2 release can be downloaded via our official Kali Download page. This release, we have also updated our Kali Virtual Images and Kali ARM Images downloads. As always, if you already have Kali installed and running to your liking, all you need to do in order to get up-to-date is run the following:

apt update
apt dist-upgrade
reboot

We hope you enjoy this fine release as much as we enjoyed making it!

20 September, 2017 11:38PM by dookie


Ubuntu developers

Jamie Strandboge: Easy ssh into libvirt VMs and LXD containers

Finding your VMs and containers via DNS resolution so you can ssh into them can be tricky. I was talking with Stéphane Graber today about this and he reminded me of his excellent article: Easily ssh to your containers and VMs on Ubuntu 12.04.

These days, libvirt has the `virsh dominfo` command and LXD has a slightly different way of finding the IP address.

Here is an updated `~/.ssh/config` that I’m now using (thank you Stéphane for the update for LXD):

Host *.lxd
    #User ubuntu
    #StrictHostKeyChecking no
    #UserKnownHostsFile /dev/null
    ProxyCommand nc $(lxc list -c s4 $(echo %h | sed "s/\.lxd//g") %h | grep RUNNING | cut -d' ' -f4) %p
 
Host *.vm
    #StrictHostKeyChecking no
    #UserKnownHostsFile /dev/null
    ProxyCommand nc $(virsh domifaddr $(echo %h | sed "s/\.vm//g") | awk -F'[ /]+' '{if (NR>2 && $5) print $5}') %p

You may want to uncomment `StrictHostKeyChecking` and `UserKnownHostsFile` depending on your environment (see `man ssh_config` for details).

With the above, I can ssh in with:

$ ssh foo.vm uptime
16:37:26 up 50 min, 0 users, load average: 0.00, 0.00, 0.00
$ ssh bar.lxd uptime
21:37:35 up 12:39, 2 users, load average: 0.55, 0.73, 0.66

Enjoy!


Filed under: canonical, ubuntu, ubuntu-server

20 September, 2017 09:39PM

Serge Hallyn: Namespaced File Capabilities

Namespaced file capabilities

As of this past week, namespaced file capabilities are available in the upstream kernel. (Thanks to Eric Biederman for many review cycles and for the final pull request)

TL;DR

Some packages install binaries with file capabilities, and fail to install if you cannot set the file capabilities. Such packages could not be installed from inside a user namespace. With this feature, that problem is fixed.

Yay!

What are they?

POSIX capabilities are pieces of root’s privilege which can be individually used.

File capabilities are POSIX capability sets attached to files. When files with associated capabilities are executed, the resulting task may end up with privilege even if the calling user was unprivileged.

What’s the problem

In single-user-namespace days, POSIX capabilities were completely orthogonal to userids. You can be a non-root user with CAP_SYS_ADMIN, for instance. This can happen by starting as root, setting PR_SET_KEEPCAPS through prctl(2), dropping the capabilities you don’t want, and changing your uid.  Or, it can happen when a non-root user executes a file with file capabilities.  In order to attach such a capability to a file, you require the CAP_SETFCAP capability.

User namespaces had several requirements, including:

  1. an unprivileged user should be able to create a user namespace
  2. root in a user namespace should be privileged against its resources
  3. root in a user namespace should be unprivileged against any resources which it does not own.

So in a post-user-namespace age, an unprivileged user can “have privilege” with respect to files they own. However, if we allow them to write a file capability on one of their files, then they can execute that file as an unprivileged user on the host, thereby gaining that privilege. This violates the third user namespace requirement and is therefore not allowed.

Unfortunately – and fortunately – some software wants to be installed with file capabilities. On the one hand that is great, but on the other hand, if the package installer isn’t able to handle the failure to set file capabilities, then package installs are broken. This was the case for some common packages – for instance httpd on centos.

With namespaced file capabilities, file capabilities continue to be orthogonal with respect to userids mapped into the namespace. However, the capabilities are tagged as belonging to the host uid mapped to the container’s root id (0).  (If uid 0 is not mapped, then file capabilities cannot be assigned.)  This prevents the namespace owner from gaining privilege in a namespace against which they should not be privileged.

 

Disclaimer

The opinions expressed in this blog are my own views and not those of Cisco.


20 September, 2017 03:37PM

Ubuntu Insights: Kernel Team Summary – September 20, 2017

September 13 through September 18

Development (Artful / 17.10)

https://wiki.ubuntu.com/ArtfulAardvark/ReleaseSchedule

Important upcoming dates:

      Final Beta - Sept 28 (~1 week away)
      Kernel Freeze - Oct 5 (~2 weeks away)
      Final Freeze - Oct 12 (~3 weeks away)
      Ubuntu 17.10 - Oct 19 (~4 weeks away)
   

We intend to target a 4.13 kernel for the Ubuntu 17.10 release. A 4.13.1 based kernel is available for testing from the artful-proposed pocket of the Ubuntu archive. As a reminder, the Ubuntu 17.10 Kernel Freeze is Thurs Oct 5, 2017.

Stable (Released & Supported)

  • All kernels have been re-spun to include a fix for high priority CVE-2017-1000251.

  • SRU cycle completed successfully and the following kernel updates have been released:

      trusty             3.13.0-132.181
      trusty/lts-xenial  4.4.0-96.119~14.04.1
      xenial             4.4.0-96.119
      xenial/snapdragon  4.4.0-1076.81
      xenial/raspi2      4.4.0-1074.82
      xenial/aws         4.4.0-1035.44
      xenial/gke         4.4.0-1031.31
      xenial/gcp         4.10.0-1006.6
      zesty              4.10.0-35.39
      zesty/raspi2       4.10.0-1018.21
    
    • The following kernel snap updates have been released in the snap store:
      gke-kernel          4.4.0.1031.32
      aws-kernel          4.4.0.1035.37
      dragonboard-kernel  4.4.0.1076.68
      pi2-kernel          4.4.0.1074.74
      pc-kernel           4.4.0.96.101
      
  • Current cycle: 15-Sep through 07-Oct

               15-Sep  Last day for kernel commits for this cycle.
      18-Sep - 23-Sep  Kernel prep week.
      24-Sep - 06-Oct  Bug verification & Regression testing.
               09-Oct  Release to -updates.
    
  • Next cycle: 06-Oct through 28-Oct

               06-Oct  Last day for kernel commits for this cycle.
      09-Oct - 14-Oct  Kernel prep week.
      15-Oct - 27-Oct  Bug verification & Regression testing.
               30-Oct  Release to -updates.

Misc

20 September, 2017 02:00PM

Didier Roche: Ubuntu GNOME Shell in Artful: Day 13

Now that GNOME 3.26 is released and available in Ubuntu artful, and the final GNOME Shell UI is confirmed, it’s time to adapt our default user experience to it. Let’s discuss how we worked with the Dash to Dock upstream on the transparency feature. For more background on our current transition to GNOME Shell in artful, you can refer back to our decisions regarding our default session experience as discussed in my blog post.

Day 13: Adaptive transparency for Ubuntu Dock

The excellent new GNOME Shell 3.26 release thus ships with dynamic panel transparency by default. If no window is next to the top panel, the bar itself is translucent; if any window is next to it, the panel becomes opaque. This feature is highlighted in the GNOME 3.26 release notes. As we already discussed in a previous blog post, this means that the Ubuntu Dock default opacity level doesn’t fit very well with the transparent top panel on an empty desktop.

Previous default Ubuntu Dock transparency

Even though there were some discussions within GNOME about keeping or reverting this dynamic transparency feature, we reached out to the Dash to Dock developers during the 3.25.9x period to be prepared. Some excellent discussions then started on the pull request, which was already rolling full speed ahead.

The first idea was to have fully independent dynamic transparency: one status for the top panel, and another one for the Dock itself. However, this gives a somewhat weird user experience after playing with it a little bit:

We can feel there is too much flickering when both parts of the UI behave independently. The idea I raised upstream was thus to consider all Shell UI (which is, in the Ubuntu session, the top panel and Ubuntu Dock) as a single entity: their opacity status is linked, as one UI element. François agreed and had the same idea in mind before implementing it. The result is way more natural:

These behaviors are implemented as options in the Dash to Dock settings panel, and we simply set this last one as the default in Ubuntu Dock.

We made sure that this option is working well with the various dock settings we expose in the Settings application:

In particular, you can see that intelli-hide is working as expected: the dock opacity changes while the Dock is vanishing, and when forcing it to show up again, it uses the maximum opacity that we set.

The default with no application next to the panel or dock now looks rather good:

Default empty Ubuntu artful desktop

The best part is the following: as we are getting closer to release, and there is still a little bit of work upstream to get everything merged in Dash to Dock itself (for options and settings UI which don’t impact Ubuntu Dock), Michele has prepared a cleaned-up branch that we can cherry-pick from directly in our ubuntu-dock branch, and which they will keep compatible with master for us! Now that the Feature Freeze and UI Freeze exceptions have been approved, the Ubuntu Dock package is currently building in the artful repository alongside other fixes and some shortcuts improvements.

As usual, if you are eager to experiment with these changes before they migrate to the artful release pocket, you can head over to our official Ubuntu desktop team transitions ppa to get a taste of what’s cooking!

It’s really a pleasure to work with the Dash to Dock upstream, and I’m using this blog opportunity to thank them again for everything they do and the cooperation they make easy for our use case.

20 September, 2017 11:25AM

hackergotchi for VyOS

VyOS

VyOS development digest #10

At last, there is some news. In the order of immediate importance...

1.1.8 release plan

There has been some uncertainty over this issue, and it wasn't clear whether we would be able to make a 1.1.8 release at all after squeeze's death, but recently Kim and I got squeeze builds to work again, and this finally enables us to make one.

What's certain is that bugfixes from 1.2.0 are going to make it there. What's not yet certain is which features we should cherry-pick. OpenVPN user/password auth, for example, is definitely safe and well tested enough to bring to 1.1.8.

1.2.0 development status

1.1.8, of course, is nothing more than a maintenance release. But we are way closer to a full feature release now, especially with the work done by two awesome contributors, namely Christian Poessinger and Jules Taplin. Among recent contributions are multiple fixes to IPsec operational and configuration mode (in particular, "show vpn ipsec sa" works properly now), correct deletion of VTI interfaces, and there's also work being done on integrating mDNS repeater.

1.2.0, Python, and code rewrites

This was already discussed in http://blog.vyos.net/vyos-2-dot-0-development-digest-number-7-python-coding-guidelines-for-config-scripts-in-1-dot-2-0-and-config-parser-for-vyconf and http://blog.vyos.net/vyos-2-dot-0-development-digest-number-5-doing-1-dot-2-x-and-2-dot-0-development-in-parallel

By now, the Python library is "beta" rather than "alpha" and it has already been used to rewrite the cron ("set system task-scheduler") scripts by Tania Dzyubenko and me.

The library is now a proper Python package and is installed as the vyos.config module. You can use it for VyOS scripting as well as for code rewrites.

It has also been moved out of the vyatta-cfg package. The package where the new rewritten code goes is https://github.com/vyos/vyos-1x

You can find the rewritten cron script here: https://github.com/vyos/vyos-1x/blob/current/src/conf-mode/vyos-update-crontab.py
As you can see, it's architecturally quite different from the older scripts. You can find the guidelines it's written according to in the wiki: https://wiki.vyos.net/wiki/Python_coding_guidelines

The architecture boils down to this: all VyOS config reads are confined to one function that converts it into an abstract representation, the rest of the logic is split into separate "verify", "generate", and "apply" stages that, accordingly, verify config correctness, generate configuration files, and apply them to the live system.
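A minimal sketch of that structure, with the config read stubbed out as a plain dict (a real script would use the vyos.config module, and the task-scheduler details here are simplified for illustration):

```python
# Sketch of a conf-mode script in the style described above.  get_config()
# is the only function that would touch the VyOS config system.

def get_config():
    # In a real script: read "system task-scheduler" via vyos.config.
    return {"task": "logrotate", "interval": "bad-interval",
            "command": "/usr/sbin/logrotate"}

def verify(config):
    # Check correctness before anything touches the live system.
    if not config["interval"].rstrip("mhd").isdigit():
        raise ValueError("interval must be a number followed by m, h, or d")

def generate(config):
    # Render the target configuration file (a crontab line here).
    minutes = config["interval"].rstrip("mhd")
    return "*/{} * * * * root {}\n".format(minutes, config["command"])

def apply(crontab_line):
    # Apply to the live system; in this sketch we just print.
    print(crontab_line, end="")

if __name__ == "__main__":
    c = get_config()
    try:
        verify(c)
    except ValueError as e:
        # Because verify runs first, nothing has been changed yet.
        print("Commit aborted:", e)
    else:
        apply(generate(c))
```

Because verify() runs before generate() and apply(), an invalid config aborts the commit before any change reaches the live system, which is exactly what enables the dry-run behavior described above.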

I'll re-iterate the reasons for these changes:
  • Testability: if only one place in the code really needs VyOS to work, the rest can be tested on developers' workstations and build hosts, by hand as well as with automated unit and integration tests
  • Easier syntax changes: a redesigned syntax can translate to the same abstract representation (or a modified version of it), so there will be no need to weed out hundreds of instances of the old syntax all over the scripts
  • Transactional commits: if the config correctness checking stage is clearly separated, once all scripts are rewritten in this manner, it will be possible to implement commit dry-run and abort commits if an error is detected before any change to the live system is made, thus greatly increasing the system's robustness

Scripts written in this manner will be reusable in VyOS 2.0 once it's ready, with little change, thus ensuring a gradual rather than radical rewrite.

2.0 style command definitions in VyOS 1.2.0

If you look into the vyos-1x package, you will notice that there are no command "templates". That's right.

As you remember, the future VyOS 2.0 and its config backend will not be using the old style command "templates" (bunches of directories with node.def files). There is no way to get rid of them in VyOS 1.x, but we still can abstract them away, thus enabling a more gradual rewrite in this area too.

There are multiple problems with those old style templates. They are notoriously hard to navigate even for experienced developers and are a repellent to newcomers. They are equally hard to syntax-check, and the only real way to find out whether they have any chance of working is to install a package on a test VyOS instance and try them by hand.
And last but not least, they allow embedded shell scripts that further spread the logic all over and make debugging even harder than it already is.

New style templates are in XML. Before anyone says "why not JSON": tell me whether JSON has a widely accepted schema language with a working implementation (I'm aware of some attempts, but...). XML has been machine-verifiable for almost two decades already.
XML interface definitions for VyOS come with a RelaxNG schema (https://github.com/vyos/vyos-1x/blob/current/schema/interface_definition.rng), which is a slightly modified schema from VyConf (https://github.com/vyos/vyconf/blob/master/data/schemata/interface_definition.rnc) so the definitions will be reusable with very minimal changes.

The great thing about schemas is not only that people can know the complete grammar for certain, but also that definitions can be automatically verified. The scripts/build-command-templates script that converts the XML definitions to old style templates also validates them against the schema, so a bad definition will cause the package build to fail. I do agree that the format is verbose, but there is no other format right now that would allow this. Besides, a specialized XML editor can alleviate the verbosity issue.
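To give a feel for the format, here is a rough sketch of walking such a definition with Python's standard library. The XML content is a trimmed, hypothetical fragment (not the actual cron definition), and real validation happens against the RelaxNG schema at build time rather than in ad-hoc code like this:

```python
import xml.etree.ElementTree as ET

# A trimmed, hypothetical interface definition in the new XML style.
xml_doc = """
<interfaceDefinition>
  <node name="system">
    <children>
      <node name="task-scheduler">
        <properties><help>Task scheduler settings</help></properties>
        <children>
          <tagNode name="task">
            <properties><help>Scheduled task</help></properties>
          </tagNode>
        </children>
      </node>
    </children>
  </node>
</interfaceDefinition>
"""

def list_commands(element, path=()):
    """Walk the definition tree, yielding (command path, help text)."""
    for node in element:
        if node.tag in ("node", "tagNode", "leafNode"):
            name = node.attrib["name"]
            help_el = node.find("./properties/help")
            yield (" ".join(path + (name,)),
                   help_el.text if help_el is not None else "")
            children = node.find("children")
            if children is not None:
                yield from list_commands(children, path + (name,))

for cmd, help_text in list_commands(ET.fromstring(xml_doc)):
    print(cmd, "-", help_text)
```

Because the whole grammar lives in one declarative tree, tooling can render it as old style node.def templates, check it against the schema, or extract completion help, all from the same source.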

Right now that script is complete enough to produce the templates for cron, but there's still work to be done. For example, it doesn't support the "allowed:" statement used for command completion. Any testing and patches are greatly appreciated!


20 September, 2017 11:19AM by Daniil Baturin

hackergotchi for Ubuntu developers

Ubuntu developers

James Page: OpenStack Charms @ Denver PTG

Last week, a number of the OpenStack Charms team and I had the pleasure of attending the OpenStack Project Teams Gathering in Denver, Colorado.

The first two days of the PTG were dedicated to cross-project discussions, with the last three days focused on project-specific discussion and work in dedicated rooms.

Here’s a summary of the charm related discussion over the week.

Cross Project Discussions

Skip Level Upgrades

This topic was discussed at the start of the week, in the context of supporting upgrades across multiple OpenStack releases for operators.  What was immediately evident was this was really a discussion around ‘fast-forward’ upgrades, rather than actually skipping any specific OpenStack series as part of a cloud upgrade.  Deployments would still need to step through each OpenStack release series in turn, so the discussion centred around how to make this much easier for operators and deployment tools to consume than it has been to-date.

There was general agreement on the principle that all steps required to upgrade a service between series should be supported whilst the service is offline – i.e. all database migrations can be completed without the services actually running. This would allow multiple upgrade steps to be completed without having to start services up on interim steps. Note that a lot of projects already support this approach, but it has never been agreed as a general policy as part of the ‘supports-upgrade‘ tag, which was one of the actions resulting from this discussion.
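The fast-forward pattern being discussed can be sketched as follows; the release names are real OpenStack series, while the migration command is stubbed out for illustration:

```python
def fast_forward_upgrade(current, target, releases, migrate):
    """Step through each intermediate release's offline DB migration,
    with the services stopped for the whole sequence."""
    path = releases[releases.index(current) + 1 : releases.index(target) + 1]
    print("stopping services")
    for release in path:
        migrate(release)   # e.g. run that release's db sync, services down
    print("starting services on", target)

releases = ["mitaka", "newton", "ocata", "pike"]
fast_forward_upgrade("mitaka", "pike", releases,
                     lambda r: print("offline db migration for", r))
```

The key point the sketch captures is that services start only once, on the final release, rather than on every interim step.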

In the context of the OpenStack Charms, we already follow something along these lines for minimising the amount of service disruption in the control plane during OpenStack upgrades; with implementation of this approach across all projects, we can avoid having to start up services on each series step as we do today, further optimising the upgrade process delivered by the charms for services that don’t support rolling upgrades.

Policy in Code

Most services in OpenStack rely on a policy.{json,yaml} file to define the policy for role based access into API endpoints – for example, what operations require admin level permissions for the cloud. Moving all policy default definitions to code rather than in a configuration file is a goal for the Queens development cycle.

This approach will make adapting policies as part of an OpenStack Charm based deployment much easier, as we only have to manage the delta on top of the defaults, rather than having to manage the entire policy file for each OpenStack release.  Notably Nova and Keystone have already moved to this approach during previous development cycles.
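The delta-on-defaults idea can be sketched in a few lines; the policy names and rules below are made up for illustration:

```python
# In-code defaults, as shipped by the service (illustrative names only).
DEFAULT_POLICY = {
    "compute:create": "rule:admin_or_owner",
    "compute:delete": "rule:admin_or_owner",
    "compute:get_all_tenants": "rule:admin_api",
}

def effective_policy(defaults, overrides):
    """Overlay the operator-supplied delta onto the in-code defaults."""
    merged = dict(defaults)
    merged.update(overrides)
    return merged

# The deployment tool now manages only this delta, not the whole file:
overrides = {"compute:get_all_tenants": "rule:admin_or_support"}
print(effective_policy(DEFAULT_POLICY, overrides)["compute:get_all_tenants"])
# -> rule:admin_or_support
```

Because the charm carries only the overrides, the defaults can change between OpenStack releases without the charm having to re-template the entire policy file.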

Deployment (SIG)

During the first two days, some cross-deployment-tool discussions were held on a variety of topics; of specific interest for the OpenStack Charms was the discussion around health/status middleware for projects, so that the general health of a service can be assessed via its API – this would cover in-depth checks such as access to database and messaging resources, as well as access to other services that the checked service might depend on – for example, can Nova access Keystone’s API for authentication of tokens, etc. There was general agreement that this was a good idea, and it will be proposed as a community goal for the OpenStack project.

OpenStack Charms Devroom

Keystone: v3 API as default

The OpenStack Charms have optionally supported Keystone v3 for some time. The Keystone v2 API is officially deprecated, so we discussed the approach for switching the default API deployed by the charms going forward; in summary:

  • New deployments should default to the v3 API and associated policy definitions
  • Existing deployments that get upgraded to newer charm releases should not switch automatically to v3, limiting the impact of services built around v2 based deployments already in production.
  • The charms already support switching from v2 to v3, so v2 deployments can upgrade as and when they are ready to do so.

At some point in time, we’ll have to automatically switch v2 deployments to v3 on OpenStack series upgrade, but that does not have to happen yet.

Keystone: Fernet Token support

The charms currently only support UUID based tokens (since PKI was dropped from Keystone). The preferred format is now Fernet, so we should implement this in the charms – we should be able to leverage the existing PKI key management code to an extent to support Fernet tokens.

Stable Branch Life-cycles

Currently the OpenStack Charms team actively maintains two branches – the current development focus in the master branch, and the most recent stable branch – which right now is stable/17.08.  At the point of the next release, the stable/17.08 branch is no longer maintained, being superseded by the new stable/XX.XX branch.  This is reflected in the promulgated charms in the Juju charm store as well.  Older versions of charms remain consumable (albeit there appears to be some trimming of older revisions which needs investigating). If a bug is discovered in a charm version from an inactive stable branch, the only course of action is to upgrade to the latest stable version for fixes, which may also include new features and behavioural changes.

There are some technical challenges with regard to consumption of multiple stable branches from the charm store – we discussed using a different team namespace for an ‘old-stable’ style consumption model, which is not that elegant but would work.  Maintaining more branches means more resource effort for cherry-picks and reviews, which is not feasible with the current amount of time the development team has for these activities, so no change for the time being!

Service Restart Coordination at Scale

tl;dr no one wants enabling debug logging to take out their rabbits

When running the OpenStack Charms at scale, parallel restarts of daemons for services with large numbers of units (we specifically discussed hundreds of compute units) can generate a high load on the underlying control plane infrastructure, as daemons drop and re-connect to message and database services, potentially resulting in service outages. We discussed a few approaches to mitigate this specific problem, but ended up focusing on how we could implement a feature which batches restarts of services into chunks based on a user-provided configuration option.
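The chunking itself is straightforward; the following sketch assumes a hypothetical batch-size option (the real option name and semantics are defined in the proposed specification):

```python
def batches(units, batch_size):
    """Split units into chunks so restarts hit the control plane
    gradually instead of all at once."""
    for i in range(0, len(units), batch_size):
        yield units[i:i + batch_size]

units = ["nova-compute/{}".format(n) for n in range(10)]
# "batch_size" stands in for the user-provided charm config option.
for batch in batches(units, batch_size=4):
    print("restarting:", ", ".join(batch))
```

Each batch of daemons drops and re-connects to RabbitMQ and the database before the next batch starts, bounding the reconnection load at any moment.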

You can read the full details in the proposed specification for this work.

We also had some good conversation around how unit level overrides for some configuration options would be useful – supporting the use case where a user wants to enable debug logging for a single unit of a service (maybe its causing problems) without having to restart services across all units to support this.  This is not directly supported by Juju today – but we’ll make the request!

Cross Model Relations – Use Cases

We brainstormed some ideas about how we might make use of the new cross-model relation features being developed for future Juju versions; some general ideas:

  • Multiple Region Cloud Deployments
    • Keystone + MySQL and Dashboard in one model (supporting all regions)
    • Each region (including region specific control plane services) deployed into a different model and controller, potentially using different MAAS deployments in different DC’s.
  • Keystone Federation Support
    • Use of Keystone deployments in different models/controllers to build out federated deployments, with one lead Keystone acting as the identity provider to other peon Keystones in different regions or potentially completely different OpenStack Clouds.

We’ll look to use the existing relations for some of these ideas, so as the implementation of this feature in Juju becomes more mature we can be well positioned to support its use in OpenStack deployments.

Deployment Duration

We had some discussion about the length of time taken to deploy a fully HA OpenStack Cloud onto hardware using the OpenStack Charms and how we might improve this by optimising hook executions.

There was general agreement that scope exists in the charms to improve general hook execution time – specifically in charms such as RabbitMQ and Percona XtraDB Cluster which create and distribute credentials to consuming applications.

We also need to ensure that we’re tracking any improvements made with good baseline metrics on charm hook execution times on reference hardware deployments so that any proposed changes to charms can be assessed in terms of positive or negative impact on individual unit hook execution time and overall deployment duration – so expect some work in CI over the next development cycle to support this.

As a follow up to the PTG, the team is looking at whether we can use the presence of a VIP configuration option to signal to the charm to postpone any presentation of access relation data to the point after which HA configuration has been completed and the service can be accessed across multiple units using the VIP.  This would potentially reduce the number (and associated cost) of interim hook executions due to pre-HA relation data being presented to consuming applications.

Mini Sprints

On the Thursday of the PTG, we held a few mini-sprints to get some early work done on features for the Queens cycle; specifically we hacked on:

Good progress was made in most areas with some reviews already up.

We had a good turnout with 10 charm developers in the devroom – thanks to everyone who attended and a special call-out to Billy Olsen who showed up with team T-Shirts for everyone!

We have some new specs already up for review, and I expect to see a few more over the next two weeks!

EOM


20 September, 2017 10:51AM

LiMux

„Digitale Städte“ Network: Focus on Public Relations

In September, the „Digitale Städte“ (“Digital Cities”) network met, this time at the invitation of the City of Munich administration. In this network, representatives of the cities of Bochum, Dortmund, Freiburg, Hamburg, Hannover, Köln, Leipzig, Münster, Norderstedt, Stuttgart and Ulm discussed the latest … Read more

The post „Digitale Städte“ Network: Focus on Public Relations first appeared on the Münchner IT-Blog.

20 September, 2017 10:22AM by Stefan Döring

hackergotchi for Ubuntu developers

Ubuntu developers

Ante Karamatić: Crickets

 

Waiting

hitro.hr says that a name reservation is handled within 3 (three) working days. The request was submitted on Tuesday, September 12. Today is September 20. All you can hear is crickets.

20 September, 2017 08:05AM

Mohamad Faizul Zulkifli: APRX On Ubuntu Repository

Good news! I just noticed that the aprx package is already listed in the Ubuntu repository.



Aprx is a software package designed to run on any POSIX platform (Linux/BSD/Unix/etc.) and act as an APRS Digipeater and/or Internet Gateway. Aprx is able to support most APRS infrastructure deployments, including single stand-alone digipeaters, receive-only Internet gateways, full RF-gateways for bi-directional routing of traffic, and multi-port digipeaters operating on multiple channels or with multiple directional transceivers.

For more info visit:-



If you want to know more about aprs and ham radio visit:-







20 September, 2017 08:01AM by 9M2PJU (noreply@blogger.com)

hackergotchi for Ubuntu

Ubuntu

Ubuntu Weekly Newsletter 519

Welcome to the Ubuntu Weekly Newsletter. This is issue #519 for the weeks of September 5 – 18, 2017, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Simon Quigley
  • Chris Guiver
  • Athul Muralidhar
  • Alan Diggs (Le Schyken, El Chicken)
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

20 September, 2017 04:11AM by tsimonq2

hackergotchi for Ubuntu developers

Ubuntu developers

Kubuntu General News: Call for design: Artful Banner for Kubuntu.org website

Kubuntu 17.10 — code-named Artful Aardvark — will be released on October 19th, 2017. We need a new banner for the website, and invite artists and designers to submit designs to us based on the Plasma wallpaper and perhaps the mascot design.

The banner should be a 1500×385 SVG.

Please submit designs to the Kubuntu-devel mail list.

20 September, 2017 02:25AM

September 19, 2017

Cumulus Linux

Troubleshooting your Docker Swarm container network with Host Pack

Say you are a network engineer, and you were recently told your company will be building applications using a distributed/microservices architecture with containers moving forward. You know how important this is for the developers — it gives them tremendous flexibility to develop and deploy money-making applications. However, what does this mean for the network? It can be much more technically challenging to plan, operate and manage a network with containers than a traditional network. The containers may need to talk with each other and to the outside world, and you won’t even know IF they exist, let alone WHERE they exist! Yet the network engineer is responsible for the containers’ connectivity and high availability, so troubleshooting your Docker Swarm container network efficiently is imperative.

Since the containers are deployed inside a host — on a virtual ethernet network — they can be invisible to network engineers. Orchestration tools such as Docker Swarm, Apache Mesos or Kubernetes make it very easy to spin up and take down containers from various hosts on a network, and they may even do this without human intervention. Many containers are also ephemeral and the traffic patterns between the servers hosting containers can be very dynamic and constantly shifting throughout the network.

troubleshooting with Docker Swarm

Cumulus Networks understands this challenge and is stepping up to the plate to help network engineers and operations teams by providing an unparalleled solution. Cumulus now offers Host Pack, which provides container visibility with Docker Swarm using the same technology we use in NetQ. Host Pack, which gives you container service visibility through NetQ, provides the tools network and application engineers need to design, update, manage and troubleshoot your Docker Swarm container network. NetQ with Host Pack provides a robust end-to-end network and a holistic network view that even includes diagnostic technology that’s like having your own time machine — imperative for a rapidly changing environment.

How does Host Pack work with NetQ technology to provide container service visibility?

Docker Swarm is an orchestration tool that deploys and manages containers that are part of a service, including adding and destroying containers as needed and providing the load balancing between them. By tapping into what Docker Engine is doing and what Docker Swarm deploys, we can see exactly where the important applications exist. This helps us, as network engineers, design, plan and troubleshoot. Further, if we can keep a history and easily keep track of changes, then this can help with troubleshooting our dynamic network that once worked or evaluating a past scenario. This is where Host Pack is invaluable.

troubleshooting with Docker Swarm
Host Pack allows the NetQ agent to be placed on each host. Using the above example, Docker Swarm starts an apache service. A Swarm manager node creates 10 instances of an apache container, and Swarm determines where to run them in the cluster (a cluster is all the servers managed by Docker Swarm) based on availability, attaching them to an internal virtual network on the hosting server. More information on this behavior can be found in the Host Pack with Docker Swarm validated design guide.
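Swarm's default "spread" placement, which the example above relies on, can be approximated by always handing the next replica to the least-loaded node. This naive sketch ignores constraints and node availability, and the server names are just taken from the example:

```python
def place_tasks(nodes, replicas):
    """Naive spread scheduling: each replica lands on the node that is
    currently running the fewest tasks (first node wins ties)."""
    load = {node: 0 for node in nodes}
    for _ in range(replicas):
        node = min(load, key=load.get)   # least-loaded node so far
        load[node] += 1
    return load

print(place_tasks(["server01", "server02", "server03", "server04"], 10))
# -> {'server01': 3, 'server02': 3, 'server03': 2, 'server04': 2}
```

This is why ten replicas across four servers end up distributed roughly evenly, and why losing one server causes Swarm to re-place its tasks on the remaining nodes.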

The NetQ agent on the host, included with Host Pack, communicates with the NetQ telemetry server (TS) about its newly installed containers. NetQ also taps into Docker Swarm and keeps track of all the servers, services and activity per cluster.

Since the NetQ agents are also placed on the switches, the TS keeps track of all the network events together. This allows us the holistic view mentioned above — all the way from the switches down to the service and container level.

So how important is it, really?

Let’s say you just received a call early in the morning saying that performance to a crucial container application seems to have degraded. As a network engineer, you use NetQ to check the interfaces, BGP and the routing table. All seems fine. Maybe something happened to one or more of the containers? Before making that assumption and calling (and begging) the application team to check it out, probably starting a ping-pong effect, you can look it all up yourself in a few easy steps and get it working before everyone else comes into the office and needs access to the application.

Troubleshooting your Docker Swarm container network is easy. With one command, run from anywhere in the network including the TS, we see that we have four servers in this Docker Swarm cluster. Which ones are manager nodes and worker nodes are also displayed:

troubleshooting with Docker Swarm

With Host Pack adding visibility to the container services, you can easily see which services are active on the network, and which ports are exposed for them. We can see Docker Swarm has currently 2 services up, and the crucial application is called apache_crucial. We can also use the port mapping information to check to be sure the firewalls are correct to let the port pass through. Notice I am performing this command directly from a spine switch, but it can be done anywhere in the network, including the servers.

Troubleshooting with Docker Swarm

Next, let’s check which leafs the crucial application connects through. (if we cannot find the correct command or service, TAB is always useful!).

troubleshooting with Docker Swarm

We can see the service, apache_crucial, on the left. This service deploys five containers, 1-5. It is clear in this case they are distributed between three servers. The above graph also depicts which switch they are connected to. We see the crucial application is now residing off of leaf03 and leaf04! That is not right — those are older leafs with slower ports, and all the servers under them are old too. The old servers are meant for non-crucial applications!

Let’s check which containers are connected to a specific switchport on a leaf as shown below. We also know swp1 on leaf03 is a slow port.

troubleshooting with Docker Swarm

Something happened to deploy some crucial containers to a server under the wrong leafs. When did this happen?

Let’s check the most recent changes to our crucial service as we know it was working earlier. As always, we can pipe any bash command into NetQ. The below bash command limits the output to the top 20 lines:

troubleshooting with Docker Swarm

We can see in this case that a container on our crucial service was on server01 but got deleted about an hour ago (look at the DBState column), and other containers for that service were added to server03 and server04 instead to maintain the level of service. Someone must have accidentally added server03 and server04 to the wrong swarm. Server01 must have had an incident about an hour ago, and caused Swarm to move the containers off server01 to server03 and server04.

Host Pack also has the capability to perform diagnostics (we like to call this the “time machine”) which is also seen in NetQ. We can see information about the service as it was minutes, hours or days ago. So, just looking at the summary now vs two hours ago, we can see four containers stopped running on server01 and moved to the other servers, also indicating something may have happened to server01.

So, let’s get those containers back on server01, and we will be home free.

Can I use Host Pack to predetermine the impact a change can make?

What’s better than resolving open issues really fast? Ensuring that outages don’t happen in the first place! As one example, let’s say we need to remove leaf02 from service but are concerned with what will happen to our money making applications. We can easily see what containers will be impacted and the effect on them with one simple command as seen below.

troubleshooting with Docker Swarm

Everything shown in green means there would be no impact, yellow means a partial performance hit, and red means it would be down if we were to take leaf02 out of service. So we see that the service would be affected: 40% (2) of the containers’ connectivity would be affected. However, there would be only a 50% performance hit to those 40% of the containers (a 20% total bandwidth loss), since we are running Host Pack with Layer 3 connectivity to dual leafs. Now we know to move those containers first. This information greatly helps us make an informed decision when planning network upgrades and changes, and when we can perform them.
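The percentages quoted above compose as a simple product; a quick check using the numbers from the example:

```python
total_containers = 5
affected = 2                    # containers connected through leaf02
fraction_affected = affected / total_containers      # 0.4, i.e. 40%
per_container_hit = 0.5         # one of two L3 uplinks lost per container
total_loss = fraction_affected * per_container_hit   # 0.4 * 0.5 = 0.2

print("{:.0%} of total service bandwidth lost".format(total_loss))
# -> 20% of total service bandwidth lost
```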

This is awesome, what now?

I have shown you only a few features within NetQ and Host Pack. There are many, many more outside of specifically troubleshooting your Docker Swarm container network. To learn more about our features, check out the NetQ User’s Guides. Watch the tech video, schedule a demo, download a demo using a virtual topology or try out Cumulus in the Cloud to see more.

 

The post Troubleshooting your Docker Swarm container network with Host Pack appeared first on Cumulus Networks Blog.

19 September, 2017 09:17PM by Diane Patton

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: Ubuntu Server Development Summary – 19 Sep 2017

Hello Ubuntu Server!

The purpose of this communication is to provide a status update and highlights for any interesting subjects from the Ubuntu Server Team. If you would like to reach the server team, you can find us at the #ubuntu-server channel on Freenode. Alternatively, you can sign up and use the Ubuntu Server Team mailing list.

Spotlight: KVM Integration test backend

Cloud-init recently added changes to our integration testing backend to support KVM images in addition to LXC. This will allow us to exercise more complex network and storage scenarios and ensure proper behavior in those environments.

cloud-init and curtin

cloud-init

  • integration tests: Enable NoCloud KVM platform as a test suite. Launches KVM via a wrapper around qemu-system and sets a custom SSH port for communication with the test images.
  • Chef users can now pin a desired omnibus_version they wish to install in #cloud-config: chef: omnibus_version: X.Y.Z
  • Add cloud-init collect-logs command-line utility to tar and gzip the cloud-init logs needed to file bugs
  • Apport integration so ubuntu-bug cloud-init works on Artful (to be SRU’d)
  • Fix Bug #1715690 and Bug #1715738 – Cloud-init config modules are now skipped if the distribution doesn’t match the module’s supported distros. Modules can be forced to run by setting ‘unverified_module’.
  • LP: #1675063 VMWare datasource now reacts to user network customization and renders network configuration version 1 from the datasource.
  • LP: #1717147 Fix a CentOS 7 regression which ignored dhclient lease files beginning with dhclient-. The datasource now handles both ‘dhclient-’ and ‘dhclient.’ prefixes.
  • LP: #1717611 walinuxagent sometimes fails to deliver certificates during service upgrade. Cloud-init now waits 15 minutes instead of 1 for the wireserver to become healthy.
  • LP: #1717598 Fix GCE cloud instance user-data processing.
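As a sketch, the omnibus pin mentioned in the list above would appear in user-data roughly like this (the version number is purely illustrative):

```yaml
#cloud-config
chef:
  omnibus_version: "12.3.0"  # illustrative version; pin whichever omnibus release you need
```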

curtin

  • bzr r.526 – Ensure iscsi service handles shutdown properly. The following iscsi configurations are validated as passing shutdown (plain, lvm over iscsi, and raid over iscsi)

Bug Work and Triage

IRC Meetings

Ubuntu Server Packages

Below is a summary of uploads to the development and supported releases. Current status of the Debian to Ubuntu merges is tracked on the Merge-o-Matic page. For a full list of recent merges with change logs please see the Ubuntu Server report.

SRU bug closure highlights

  • LP: #1713990 – virt-install fails on s390x with options --location and --extra-args
  • LP: #1393842 – Cannot create a default RHEL7 VM in virt-manager
  • LP: #1099947 – nut-server fails to execute udevadm for UPS solutions due to permissions issues.

Uploads to the Development Release (Artful)

apache2, 2.4.27-2ubuntu3, mdeslaur
cloud-init, 0.7.9-283-g7eb3460b-0ubuntu1, smoser
cloud-init, 0.7.9-281-g10f067d8-0ubuntu1, smoser
cloud-init, 0.7.9-280-ge626966e-0ubuntu1, smoser
docker.io, 1.13.1-0ubuntu3, vorlon
dpdk, 17.05.2-0ubuntu1, paelzer
groovy, 2.4.8-2, None
haproxy, 1.7.9-1ubuntu1, chiluk
lxc, 2.1.0-0ubuntu1, stgraber
maas, 2.3.0~alpha3-6250-g58f83f3-0ubuntu1, andreserl
nagios-nrpe, 3.2.0-4ubuntu2, brian-murray
ndg-httpsclient, 0.4.3-1, None
nmap, 7.60-1ubuntu1, doko
simplestreams, 0.1.0~bzr450-0ubuntu1, smoser
sysstat, 11.5.7-1ubuntu1, paelzer
tomcat8, 8.5.16-1, None
walinuxagent, 2.2.17-0ubuntu1, sil2100
Total: 17

Uploads to Supported Releases (Trusty, Xenial, Yakkety, Zesty)

apache2, zesty, 2.4.25-3ubuntu2.3, mdeslaur
apache2, xenial, 2.4.18-2ubuntu3.5, mdeslaur
apache2, trusty, 2.4.7-1ubuntu4.18, mdeslaur
bind9, zesty, 1:9.10.3.dfsg.P4-10.1ubuntu5.2, mdeslaur
bind9, xenial, 1:9.10.3.dfsg.P4-8ubuntu1.8, mdeslaur
bind9, trusty, 1:9.9.5.dfsg-3ubuntu0.16, mdeslaur
cloud-init, xenial, 0.7.9-233-ge586fe35-0ubuntu1~16.04.2, smoser
cloud-init, zesty, 0.7.9-233-ge586fe35-0ubuntu1~17.04.2, smoser
cloud-init, zesty, 0.7.9-233-ge586fe35-0ubuntu1~17.04.1, smoser
cloud-init, xenial, 0.7.9-233-ge586fe35-0ubuntu1~16.04.1, smoser
libvirt, trusty, 1.2.2-0ubuntu13.1.22, paelzer
nut, xenial, 2.7.2-4ubuntu1.2, paelzer
nut, trusty, 2.7.1-1ubuntu1.2, paelzer
qemu, zesty, 1:2.8+dfsg-3ubuntu2.4, mdeslaur
qemu, xenial, 1:2.5+dfsg-5ubuntu10.15, mdeslaur
qemu, trusty, 2.0.0+dfsg-2ubuntu1.35, mdeslaur
walinuxagent, trusty, 2.2.17-0ubuntu1~14.04.1, sil2100
walinuxagent, xenial, 2.2.17-0ubuntu1~16.04.1, sil2100
walinuxagent, zesty, 2.2.17-0ubuntu1~17.04.1, sil2100
Total: 19

Contact the Ubuntu Server team

19 September, 2017 09:16PM

Ubuntu Insights: Results of the Ubuntu Desktop Applications Survey


 

This article originally appeared on Dustin Kirkland’s blog

I had the distinct honor to deliver the closing keynote of the UbuCon Europe conference in Paris a few weeks ago.  First off — what a beautiful conference and venue!  Kudos to the organizers who really put together a truly remarkable event.  And many thanks to the gentleman (Elias?) who brought me a bottle of his family’s favorite champagne, as a gift on Day 2 🙂  I should give more talks in France!

In my keynote, I presented the results of the Ubuntu 18.04 LTS Default Desktops Applications Survey, which was discussed at length on HackerNews, Reddit, and Slashdot.  With the help of the Ubuntu Desktop team (led by Will Cooke), we processed over 15,000 survey responses and in this presentation, I discussed some of the insights of the data.

The team is now hard at work evaluating many of the suggested applications, for those of you that aren’t into the all-Emacs spin of Ubuntu 😉

Moreover, we’re also investigating a potential approach to make the Ubuntu Desktop experience a bit like those Choose-Your-Own-Adventure books we loved when we were kids, where users have the opportunity to select their preferred application (or stick with the distro default) for a handful of categories during installation.

Marius Quabeck recorded the session and published the audio and video of the presentation here on YouTube:

 


You can download the slides here, or peruse them below:



Let’s talk about the Ubuntu 18.04 LTS Roadmap! from Dustin Kirkland

19 September, 2017 06:48PM

hackergotchi for ARMBIAN

ARMBIAN

NanoPi M3


Ubuntu server – 4.11.12 kernel
 
Command line interface – server usage scenarios.

Testing

Ubuntu desktop – 4.11.12 kernel
 
Server and light desktop usage scenarios.

Testing

other download options and archive

Desktop

Quick start | Documentation

Preparation

Make sure you have a good & reliable SD card and a proper power supply. Archives can be uncompressed with 7-Zip on Windows, Keka on OS X and 7z on Linux (apt-get install p7zip-full). RAW images can be written with Etcher (all OS).

Boot

Insert the SD card into the slot and power the board. First boot (with DHCP) takes up to 35 seconds with a class 10 SD card, even on the cheapest board.

Login

Login as root on HDMI / serial console or via SSH and use password 1234. You will be prompted to change this password at first login. Next you will be asked to create a normal user account that is sudo enabled (beware of default QWERTY keyboard settings at this stage).

19 September, 2017 06:47PM by igorpecovnik

hackergotchi for Purism PureOS

Purism PureOS

GNOME Foundation Partners with Purism to Support Its Efforts to Build the Librem 5 Smartphone

Orinda, CA/San Francisco, September 19, 2017 – The GNOME Foundation has provided its endorsement and support of Purism’s efforts to build the Librem 5, which, if successful, will be the world’s first free and open smartphone with end-to-end encryption and enhanced user protections. The Librem 5 is a hardware platform the Foundation is interested in advancing as a GNOME/GTK phone device. The GNOME Foundation is committed to partnering with Purism to create hackfests, tools and emulators, and to build awareness around moving GNOME/GTK onto the Librem 5 phone.

As part of the collaboration, if the campaign is successful the GNOME Foundation plans to enhance GNOME Shell and general performance of the system with Purism to enable features on the Librem 5.

Various GNOME technologies are used extensively in embedded devices today, and GNOME developers have experienced some of the challenges that face mobile computing specifically with the Nokia 770, N800 and N900, the One Laptop Per Child project’s XO laptop and FIC’s Neo1973 mobile phone.

“Having a Free/Libre and Open Source software stack on a mobile device is a dream-come-true for so many people, and Purism has the proven team to make this happen. We are very pleased to see Purism and the Librem 5 hardware be built to support GNOME.” — Neil McGovern, Executive Director, GNOME Foundation

“Purism is excited to work with many communities and organizations to advance the digital rights of people. Getting endorsement from GNOME Foundation for the Librem 5 hardware gets us all one-step closer to a phone that avoids the handcuffs of Android and iOS.” — Todd Weaver, Founder & CEO, Purism

About the GNOME Foundation

The GNOME Project was started in 1997 by two then-university students, Miguel de Icaza and Federico Mena Quintero. Their aim: to produce a free (as in freedom) desktop environment. Since then, GNOME has grown into a hugely successful enterprise. Used by millions of people across the world, it is the most popular desktop environment for GNU/Linux and UNIX-type operating systems. The desktop has been utilized in successful, large-scale enterprise, and public deployments, and the project’s developer technologies are utilized in a large number of popular mobile devices.

The GNOME Foundation is an organization committed to supporting the advancement of GNOME, comprised of hundreds of volunteer developers and industry-leading companies. The Foundation is a member directed, 501(c)(3) non-profit organization that provides financial, organizational, and legal support to the GNOME project. The GNOME Foundation is supporting the pursuit of software freedom through the innovative, accessible, and beautiful user experience created by GNOME contributors around the world. More information about GNOME and the GNOME Foundation can be found at GNOME.org and Foundation.GNOME.org. Become a friend of GNOME at GNOME.org/friends/

For further comments and information, contact the GNOME press contact team at gnome-press-contact@gnome.org.

About Purism

Purism is a Social Purpose Corporation devoted to bringing security, privacy, software freedom, and digital independence to everyone’s personal computing experience. With operations based in San Francisco (California) and around the world, Purism manufactures premium-quality laptops, tablets and phones, creating beautiful and powerful devices meant to protect users’ digital lives without requiring a compromise on ease of use. Purism designs and assembles its hardware in the United States, carefully selecting internationally sourced components to be privacy-respecting and fully Free-Software-compliant. Security and privacy-centric features come built-in with every product Purism makes, making security and privacy the simpler, logical choice for individuals and businesses.

Media Contact

Marie Williams, Coderella / Purism
+1 415-689-4029
pr@puri.sm
See also the Purism press room for additional tools and announcements.
 

Please see also our stance on GNOME vs KDE as it relates to the Librem 5 campaign.

19 September, 2017 06:30PM by Jeff

hackergotchi for ARMBIAN

ARMBIAN

NanoPi K2


Ubuntu server – mainline kernel
 
Command line interface – server usage scenarios.

Experimental

Ubuntu desktop – legacy kernel
 
Multimedia and desktop usage scenarios.

Testing

other download options and archive

  • mainline kernel images: HDMI output not yet implemented
  • troubles with HDMI-to-VGA converters; not many screen resolutions are supported

Desktop

Quick start | Documentation

Preparation

Make sure you have a good & reliable SD card and a proper power supply. Archives can be uncompressed with 7-Zip on Windows, Keka on OS X and 7z on Linux (apt-get install p7zip-full). RAW images can be written with Etcher (all OS).

Boot

Insert the SD card into the slot and power the board. First boot (with DHCP) takes up to 35 seconds with a class 10 SD card, even on the cheapest board.

Login

Login as root on HDMI / serial console or via SSH and use password 1234. You will be prompted to change this password at first login. Next you will be asked to create a normal user account that is sudo enabled (beware of default QWERTY keyboard settings at this stage).

19 September, 2017 03:26PM by igorpecovnik

NanoPi Neo Plus2


Ubuntu server – mainline kernel
 
Command line interface – server usage scenarios.

Experimental

other download options and archive

Known issues

All currently available OS images for H5 boards are experimental

  • don’t use them for anything productive, only to give constructive feedback to developers
  • shutdown might result in a reboot instead, or the board doesn’t really power off (cut power physically)

Desktop

Quick start | Documentation

Preparation

Make sure you have a good & reliable SD card and a proper power supply. Archives can be uncompressed with 7-Zip on Windows, Keka on OS X and 7z on Linux (apt-get install p7zip-full). RAW images can be written with Etcher (all OS).

Boot

Insert the SD card into the slot and power the board. First boot (with DHCP) takes up to 35 seconds with a class 10 SD card, even on the cheapest board.

Login

Login as root on HDMI / serial console or via SSH and use password 1234. You will be prompted to change this password at first login. Next you will be asked to create a normal user account that is sudo enabled (beware of default QWERTY keyboard settings at this stage).

19 September, 2017 10:37AM by igorpecovnik

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: Webinar: OpenStack Pike is here, what’s new?

Sign up for our new webinar about the Canonical OpenStack Pike release. Join us to learn about the new features and how to upgrade from Ocata to Pike using OpenStack Charms.

 

19 September, 2017 10:04AM

September 18, 2017

Kubuntu General News: Plasma 5.11 beta available in unofficial PPA for testing on Artful

Adventurous users and developers running the Artful development release can now also test the beta version of Plasma 5.11. This is experimental and can possibly kill kittens!

Bug reports on this beta go to https://bugs.kde.org, not to Launchpad.

The PPA comes with a WARNING: Artful will ship with Plasma 5.10.5, so please be prepared to use ppa-purge to revert changes. Plasma 5.11 will ship too late for inclusion in Kubuntu 17.10, but should be available via backports soon after release day, October 19th, 2017.

Read more about the beta release: https://www.kde.org/announcements/plasma-5.10.95.php

If you want to test on Artful: sudo add-apt-repository ppa:kubuntu-ppa/beta && sudo apt-get update && sudo apt full-upgrade -y

The purpose of this PPA is testing, and bug reports go to bugs.kde.org.

18 September, 2017 11:12PM

Dustin Kirkland: Results of the Ubuntu Desktop Applications Survey


I had the distinct honor to deliver the closing keynote of the UbuCon Europe conference in Paris a few weeks ago.  First off -- what a beautiful conference and venue!  Kudos to the organizers who really put together a truly remarkable event.  And many thanks to the gentleman (Elias?) who brought me a bottle of his family's favorite champagne, as a gift on Day 2 :-)  I should give more talks in France!

In my keynote, I presented the results of the Ubuntu 18.04 LTS Default Desktops Applications Survey, which was discussed at length on HackerNews, Reddit, and Slashdot.  With the help of the Ubuntu Desktop team (led by Will Cooke), we processed over 15,000 survey responses and in this presentation, I discussed some of the insights of the data.

The team is now hard at work evaluating many of the suggested applications, for those of you that aren't into the all-Emacs spin of Ubuntu ;-)

Moreover, we're also investigating a potential approach to make the Ubuntu Desktop experience a bit like those Choose-Your-Own-Adventure books we loved when we were kids, where users have the opportunity to select their preferred application (or stick with the distro default) for a handful of categories during installation.

Marius Quabeck recorded the session and published the audio and video of the presentation here on YouTube:


You can download the slides here, or peruse them below:


Cheers,
Dustin

18 September, 2017 10:34PM by Dustin Kirkland (noreply@blogger.com)

Ubuntu Insights: MAAS 2.3.0 Alpha 3 release!

This article originally appeared on Andres Rodriguez’s blog

MAAS 2.3.0 (alpha3)

New Features & Improvements

Hardware Testing (backend only)

MAAS has now introduced an improved hardware testing framework. This new framework allows MAAS to test individual components of a single machine, as well as providing better feedback to the user for each of those tests. This feature has introduced:

  • Ability to define a custom testing script with a YAML definition – Each custom test can be defined with YAML that will provide information about the test. This information includes the script name, description, required packages, and other metadata about what information the script will gather. This information can then be displayed in the UI.
  • Ability to pass parameters – Adds the ability to pass specific parameters to the scripts. For example, in upcoming beta releases, users would be able to select which disks they want to test if they don’t want to test all disks.
  • Running tests individually – Improves how hardware tests are run per component. This allows MAAS to run tests against any individual component (such as a single disk).
  • Adding additional performance tests
    • Added a CPU performance test with 7z.
    • Added a storage performance test with fio.

Please note that individual results for each of the components are currently only available over the API. Upcoming beta releases will include various UI improvements that will allow the user to better surface and interact with these new features.

Rack Controller Deployment in Whitebox Switches

MAAS now has the ability to install and configure a MAAS rack controller once a machine has been deployed. As of today, this feature is only available when MAAS detects that the machine is a whitebox switch. As such, all MAAS certified whitebox switches will be deployed with a MAAS rack controller. Currently certified switches include the Wedge 100 and the Wedge 40.

Please note that this feature makes use of the MAAS snap to configure the rack controller on the deployed machine. Since snap store mirrors are not yet available, the machine will need access to the internet to be able to install the MAAS snap.

Improved DNS Reloading

This new release introduces various improvements to the DNS reload mechanism. This allows MAAS to be smarter about when to reload DNS after changes have been automatically detected or made.

UI – Controller Versions & Notifications

MAAS now surfaces the version of each running controller, and notifies users of any version mismatch between the region and rack controllers. This helps administrators identify mismatches when upgrading a multi-node MAAS cluster, such as an HA setup.

Issues fixed in this release

  • #1702703    Cannot run maas-regiond without /bin/maas-rack
  • #1711414    [2.3, snap] Cannot delete a rack controller running from the snap
  • #1712450    [2.3] 500 error when uploading a new commissioning script
  • #1714273    [2.3, snap] Rack Controller from the snap fails to power manage on IPMI
  • #1715634    ‘tags machines’ takes 30+ seconds to respond with list of 9 nodes
  • #1676992    [2.2] Zesty ISO install fails on region controller due to postgresql not running
  • #1703035    MAAS should warn on version skew between controllers
  • #1708512    [2.3, UI] DNS and Description Labels misaligned on subnet details page
  • #1711700    [2.x] MAAS should avoid updating DNS if nothing changed
  • #1712422    [2.3] MAAS does not report form errors on script upload
  • #1712423    [2.3] 500 error when clicking the ‘Upload’ button with no script selected
  • #1684094    [2.2.0rc2, UI, Subnets] Make the contextual menu language consistent across MAAS
  • #1688066    [2.2] VNC/SPICE graphical console for debugging purpose on libvirt pod created VMs
  • #1707850    [2.2] MAAS doesn’t report cloud-init failures post-deployment
  • #1711714    [2.3] cloud-init reporting not configured for deployed ubuntu core systems
  • #1681801    [2.2, UI] Device discovery – Tooltip misspelled
  • #1686246    [CLI help] set-storage-layout says Allocated when it should say Ready
  • #1621175    BMC acc setup during auto-enlistment fails on Huawei model RH1288 V3

For full details please visit https://launchpad.net/maas/+milestone/2.3.0alpha3

18 September, 2017 06:35PM

Ubuntu Insights: LXD: Weekly Status #15

Introduction

This week has been pretty quiet as far as upstream changes go, since half the team was attending the Open Source Summit, the Linux Plumbers Conference and the Linux Security Summit in Los Angeles, California.

We got to talk with other container runtime maintainers, kernel developers and users, having a lot of very productive discussions that should lead to a number of exciting features going forward.

Outside of that, we’ve been focusing on tweaks to the LXD snap, having it work on more platforms and better handle module loading. LXD 2.18 will work properly for Solus 3 users and we’re almost ready with Fedora 26, OpenSUSE 42.3 and OpenSUSE Tumbleweed too.

LXD 2.18 is scheduled to be released tomorrow (Tuesday 19th of September).

Upcoming conferences and events

  • Open Source Summit Europe (Prague, October 2017)
  • Linux Piter 2017 (St. Petersburg – November 2017)

Ongoing projects

The list below is feature or refactoring work which will span several weeks/months and can’t be tied directly to a single Github issue or pull request.

Upstream changes

The items listed below are highlights of the work which happened upstream over the past week and which will be included in the next release.

LXD

LXC

LXCFS

Distribution work

This section is used to track the work done in downstream Linux distributions to ship the latest LXC, LXD and LXCFS as well as work to get various software to work properly inside containers.

Ubuntu

  • Nothing to report this week

Snap

  • Call “modprobe” outside of the snap environment when module loading is needed.
  • Added support for Solus 3 to our CI environment.

18 September, 2017 05:18PM

hackergotchi for SparkyLinux

SparkyLinux

Manokwari Desktop

There is a new desktop available for Sparkers: Manokwari.

Manokwari is a desktop shell for GNOME 3 with a Gtk+ and HTML5 frontend. Manokwari is used by the BlankOn Linux distribution.

The recommended way to install the Manokwari desktop on a Sparky host:
sudo apt update
sudo apt install sparky-desktop-manokwari

or via APTus 0.3.x with ‘sparky-desktop’ >= 20170918

Successfully installed and tested on Sparky 4 and 5.

Manokwari with menu
Manokwari desktop on Sparky with system’s menu.

Manokwari with system panel
Manokwari desktop on Sparky with system’s config panel.

Used: Bromo theme, Ultra-flat-icons and Sparky’s default wallpaper.

Known issues:
– the desktop doesn’t start in a virtual machine, only on a physical one.

Have fun !

 

18 September, 2017 03:00PM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

Andres Rodriguez: MAAS 2.3.0 Alpha 3 release!

MAAS 2.3.0 (alpha3)

New Features & Improvements

Hardware Testing (backend only)

MAAS has now introduced an improved hardware testing framework. This new framework allows MAAS to test individual components of a single machine, as well as providing better feedback to the user for each of those tests. This feature has introduced:

  • Ability to define a custom testing script with a YAML definition – Each custom test can be defined with YAML that will provide information about the test. This information includes the script name, description, required packages, and other metadata about what information the script will gather. This information can then be displayed in the UI.

  • Ability to pass parameters – Adds the ability to pass specific parameters to the scripts. For example, in upcoming beta releases, users would be able to select which disks they want to test if they don’t want to test all disks.

  • Running tests individually – Improves how hardware tests are run per component. This allows MAAS to run tests against any individual component (such as a single disk).

  • Adding additional performance tests

    • Added a CPU performance test with 7z.

    • Added a storage performance test with fio.

Please note that individual results for each of the components are currently only available over the API. Upcoming beta releases will include various UI improvements that will allow the user to better surface and interact with these new features.

Rack Controller Deployment in Whitebox Switches

MAAS now has the ability to install and configure a MAAS rack controller once a machine has been deployed. As of today, this feature is only available when MAAS detects that the machine is a whitebox switch. As such, all MAAS certified whitebox switches will be deployed with a MAAS rack controller. Currently certified switches include the Wedge 100 and the Wedge 40.

Please note that this feature makes use of the MAAS snap to configure the rack controller on the deployed machine. Since snap store mirrors are not yet available, the machine will need access to the internet to be able to install the MAAS snap.

Improved DNS Reloading

This new release introduces various improvements to the DNS reload mechanism. This allows MAAS to be smarter about when to reload DNS after changes have been automatically detected or made.

UI – Controller Versions & Notifications

MAAS now surfaces the version of each running controller, and notifies users of any version mismatch between the region and rack controllers. This helps administrators identify mismatches when upgrading a multi-node MAAS cluster, such as an HA setup.

Issues fixed in this release

  • #1702703    Cannot run maas-regiond without /bin/maas-rack
  • #1711414    [2.3, snap] Cannot delete a rack controller running from the snap
  • #1712450    [2.3] 500 error when uploading a new commissioning script
  • #1714273    [2.3, snap] Rack Controller from the snap fails to power manage on IPMI
  • #1715634    ‘tags machines’ takes 30+ seconds to respond with list of 9 nodes
  • #1676992    [2.2] Zesty ISO install fails on region controller due to postgresql not running
  • #1703035    MAAS should warn on version skew between controllers
  • #1708512    [2.3, UI] DNS and Description Labels misaligned on subnet details page
  • #1711700    [2.x] MAAS should avoid updating DNS if nothing changed
  • #1712422    [2.3] MAAS does not report form errors on script upload
  • #1712423    [2.3] 500 error when clicking the ‘Upload’ button with no script selected.
  • #1684094    [2.2.0rc2, UI, Subnets] Make the contextual menu language consistent across MAAS
  • #1688066    [2.2] VNC/SPICE graphical console for debugging purpose on libvirt pod created VMs
  • #1707850    [2.2] MAAS doesn’t report cloud-init failures post-deployment
  • #1711714    [2.3] cloud-init reporting not configured for deployed ubuntu core systems
  • #1681801    [2.2, UI] Device discovery – Tooltip misspelled
  • #1686246    [CLI help] set-storage-layout says Allocated when it should say Ready
  • #1621175    BMC acc setup during auto-enlistment fails on Huawei model RH1288 V3

For full details please visit:

https://launchpad.net/maas/+milestone/2.3.0alpha3

18 September, 2017 02:24PM

LMDE

Monthly News – September 2017

Before we go through the news I’d like to thank you for your donations, your help and your generosity.

Codename for Linux Mint 18.3

Linux Mint 18.3 will be called “Sylvia”.

Backup Tools

The feedback you gave us last month helped us further improve our backup tool and identify the need for a system restore utility.

We talked to Tony George, the developer behind Timeshift.

Timeshift is an excellent tool which focuses on creating and restoring system snapshots. It’s a great companion to mintBackup which focuses on personal data.

The two applications will be installed by default and complement each other in Linux Mint 18.3.

We’re currently working with Tony to improve translations and desktop integration for Timeshift, add window progress support to it, and improve its support for HiDPI.

If you want to help translate it please head over to https://translations.launchpad.net/linuxmint/latest/+pots/timeshift.

On GitHub, Timeshift is available at https://github.com/teejee2008/timeshift.

System Reports

At the end of the last development cycle I mentioned the idea of a tool which would bring information to users and help them troubleshoot issues. This is an ambitious project and we’re still not sure it will land in the next release, at least not fully…

I say not fully because this tool has received its codename (“mintReport”), because we started implementing it, and because one of its features is now completely ready and will be shipped with Linux Mint 18.3.

That feature is the gathering of crash reports.

Using apport as a backend, a report is made whenever an application crashes.

MintReport lists these reports and generates stack traces for them:

Non-experienced users rarely know how to produce a stack trace and that information is crucial to developers when they’re not able to reproduce a bug.

This tool will make it much easier for anyone to produce these traces.

It also suggests the installation of debugging symbols (-dbg packages) when these are missing and warns in case of mismatches.

Linux Mint 18.3 will ship with mintReport and debugging symbols by default.

Cinnamon improvements

HiDPI will be enabled by default in Cinnamon 3.6.

The configuration module for cinnamon spices (applets, desklets, extensions, themes) was completely revamped:

Nemo extensions are now able to pass the name of their configuration tool to Nemo in order to get a “Configure” button in the Nemo plugins dialog:

This makes it easier to integrate extensions properly and not clutter the application menu.

Other improvements

The Driver Manager was given better HiDPI support and better detection of CPUs and microcode packages.

Synaptic dialogs (used by the Software Sources, Language Settings and the Update Manager) received support for window progress.

The toolbar of the PDF reader, Xreader, was improved. The history buttons were replaced with navigation buttons (history can still be browsed via the menubar). The two zoom buttons were switched, and a zoom reset button was added to make Xreader consistent with other Xapps. As we speak, Xreader is also gaining the ability to detect your screen size, so that 100% zoom means what you see on the screen is exactly the size the document would have on paper.

In Xplayer, the media player, the fullscreen window was improved to look cleaner and to be more consistent with the player’s window mode.

Nemo-preview received support for animated GIFs.

Translations for Nemo extensions, cinnamon-session and cinnamon-settings-daemon are now handled by cinnamon-translations (and thus will be greatly improved).

Sponsorships:

Platinum Sponsors:
Private Internet Access
Gold Sponsors:
Linux VPS Hosting
Silver Sponsors:

Acunetix
Sucuri
Bronze Sponsors:
Vault Networks *
AYKsolutions Server & Cloud Hosting
7L Networks Toronto Colocation *
Goscomb
BGASoft Inc
David Salvo
Thomas K
Community Sponsors:

Donations in August:

A total of $9,151 was raised thanks to the generous contributions of 429 donors:

$218, Benedikt R.
$200 (5th donation), Radomír Č.
$200 (2nd donation), B
$109 (2nd donation), DvW informatiemanagement en advies
$109 (2nd donation), Volker Meyer
$109, Cyril M.
$109, Jens H.
$109, Sandro P.
$109, Frederic S.
$109, Norbert J.
$109, Martin L.
$109, Mohammad A.
$109, Till J.
$100, Dave J. aka “cube”
$100, Ean P.
$100, Sparo V.
$100, Friedrich A. C.
$100, Carl M.
$82, James T. aka “jamest”
$80 (2nd donation), Pierre-elie H.
$54 (2nd donation), J. F. .
$54, Sylvain G. aka “Chewie”
$54, JosuKa
$54, Francis M.
$54, Jan B.
$54, Miguel V.
$54, Yannick B.
$54, Tim B.
$54, Juergen G.
$54, Nejdet C.
$54, Claudia K.
$54, Angel M. A.
$50 (20th donation), Anthony C. aka “ciak”
$50 (16th donation), Robert D B.
$50 (4th donation), Thomas T. aka “FullTimer1489”
$50 (3rd donation), TehGhodTrole
$50 (3rd donation), John C.
$50 (2nd donation), Jesper D.
$50, Graeme H.
$50, Rajeswari S.
$50, Kenneth D. V.
$50, Colin B.
$50, Dan O.
$50, Nick L.
$50, Robert S.
$50, Eric M.
$50, Edward J.
$50, Erwin D.
$50, Janice G.
$50, Charles S.
$50, Robert O.
$50, Lawrence T.
$50, Gregory S.
$50, Frederic G.
$50, Richard P.
$50, Thomas P.
$50, Eric S.
$50, Sebastian aka “Seb”
$44, Rik K.
$40 (2nd donation), Roy V. K.
$40 (2nd donation), Robert F. aka “robfish”
$40 (2nd donation), Soumyashant Nayak
$38 (33rd donation), Mark W.
$38 (2nd donation), Bjarte O.
$36 (2nd donation), Milan V.
$35 (3rd donation), Borisov G. aka “method”
$35, Norman S. M.
$33 (90th donation), Olli K.
$33 (2nd donation), Sachindra Prosad Saha aka “Love you grand dad”
$33 (2nd donation), Lars N.
$33, Anestis P.
$33, Regina M.
$33, Ayman A.
$33, Osvaldo F.
$30 (4th donation), Johannes B.
$30 (4th donation), Anonymous User
$30 (3rd donation), Caleb P.
$30 (2nd donation), Paul S.
$30, יוסף כהן
$30, Etienne B.
$30, Michel D.
$30, Devon B.
$30, Dirk P.
$30, Dennis B.
$27.75, Steven M.
$27 (5th donation), Peter M.
$27 (2nd donation), Frederik M.
$27 (2nd donation), Mario K.
$27 (2nd donation), Ralf D.
$27, Martin S.
$27, Alexander H.
$27, Kalheinz S.
$27, Andreas Lahrmann
$25 (73rd donation), Ronald W.
$25 (11th donation), Kwan L.
$25 (5th donation), Widar H.
$25 (5th donation), Charles W.
$25 (3rd donation), Michael M.
$25 (2nd donation), Roberto O. L.
$25 (2nd donation), Tommy T.
$25, Percy Winterburn aka “Percy”
$25, Gareth J.
$25, Pieter V. D. R.
$25, Gordon M.
$25, Wesley A. S.
$25, John R.
$25, Martin L.
$25, Carl S.
$25, Wayne K.
$25, Sky High, Inc.
$25, Leash
$25, Imagez A.
$25, Tommy T.
$22 (9th donation), Derek R.
$22 (5th donation), Rüdiger K.
$22 (5th donation), Henrik H.
$22 (4th donation), www.dogpanions.co.uk
$22 (3rd donation), Paul N.
$22 (3rd donation), Ralf O.
$22 (3rd donation), Vesa K.
$22 (2nd donation), Frank C.
$22 (2nd donation), Mr S. J. W.
$22 (2nd donation), Nicolaas V. D. R.
$22 (2nd donation), nobody
$22 (2nd donation), Alberto A.
$22, Bernhard J.
$22, Niels K. S.
$22, Borut K.
$22, Erich K.
$22, J-luc P.
$22, Dick B.
$22, Ovidiu F.
$22, Heinz B.
$22, Bernard F.
$22, Mark N.
$22, Mathias W.
$22, Nico D.
$22, Tony M.
$22, Gerard D.
$20 (29th donation), Curt Vaughan aka “curtvaughan”
$20 (28th donation), Go Live Lively
$20 (26th donation), Utah B.
$20 (17th donation), Larry J.
$20 (11th donation), Jeffery J.
$20 (6th donation), Jason H
$20 (6th donation), Alistair G.
$20 (3rd donation), George M.
$20 (3rd donation), Allen G.
$20 (3rd donation), T. P. .
$20 (2nd donation), Donald M.
$20 (2nd donation), Daniel O.
$20 (2nd donation), Dayton L.
$20 (2nd donation), Michael E.
$20 (2nd donation), Michael M.
$20 (2nd donation), Stijn K.
$20 (2nd donation), Larry R.
$20, Luc C.
$20, RRKMAT
$20, Christian L.
$20, Richard B.
$20, Greg R. H.
$20, Shawn M.
$20, A M. K.
$20, Harold P. A.
$20, Jonathan O.
$20, Robert T.
$20, Raymond B.
$20, Matthew M.
$20, Muntasir S.
$20, Franco C. M.
$20, Jose M.
$20, Irvin H.
$20, Edward F.
$20, Juha M.
$20, J B. P.
$20, Robert P.
$20, Gordon S.
$20, Tony L.
$20, Michael D.
$20, Robert Z.
$20, Mladen M.
$16 (8th donation), Johann J.
$16 (5th donation), Linux Hardware Guide
$16 (4th donation), Radomír Č.
$16 (2nd donation), Joss S.
$16, Aleksander V.
$16, Wilfried J.
$16, William S.
$16, Gérard T.
$16, Jacques L.
$15, Gerald F.
$15, Мальцев М.
$15, gmq
$15, Patrick F.
$15, gmq
$15, Jose D. C.
$15, Ralph W.
$15, Andrew P.
$15, Hans G. H.
$15, Louy R. T.
$13 (16th donation), Anonymous
$13, Iain C.
$12 (77th donation), Tony C. aka “S. LaRocca”
$12 (23rd donation), JobsHiringnearMe
$12 (7th donation), Johann J.
$12, ROBBICOM.de
$12, Carsten F.
$11 (9th donation), Gerard C.
$11 (8th donation), Queenvictoria
$11 (6th donation), Andreas M.
$11 (4th donation), W. Georgi
$11 (4th donation), Manuel C. aka “Manel”
$11 (3rd donation), Nigel B.
$11 (2nd donation), Yann S.
$11 (2nd donation), Philip E.
$11 (2nd donation), Bengt J.
$11 (2nd donation), Andre H.
$11 (2nd donation), Andrew R.
$11 (2nd donation), Paul B.
$11 (2nd donation), Yann S.
$11 (2nd donation), Max P.
$11, Kai Berk Özer
$11, Francesco R.
$11, Gilbert G.
$11, Lee
$11, Thomas J.
$11, Jean-Christophe H. aka “Jic”
$11, Marc B.
$11, Konstantin M.
$11, Fm K.
$11, Philip C.
$11, Dirk S.
$11, Florian S.
$11, Alberto T.
$11, Franklin P.
$11, Günther S.
$11, Janne P.
$11, Peter R. S.
$11, John B.
$11, Thomas P.
$11, Thomas B.
$11, Jacek M.
$11, Lois S.C.
$10 (21st donation), Thomas C.
$10 (12th donation), Frank K.
$10 (12th donation), Paul O.
$10 (12th donation), Christopher R.
$10 (11th donation), HotelsNearbyMe
$10 (8th donation), Dinu P.
$10 (7th donation), Lance M.
$10 (6th donation), Gary P.
$10 (5th donation), Nelson I.
$10 (5th donation), Bartosz Wierucki
$10 (5th donation), Terrance G.
$10 (4th donation), Agenor Marrero
$10 (4th donation), Jason D.
$10 (3rd donation), Don Bhrayan
$10 (3rd donation), Gary L.
$10 (2nd donation), Doyle B.
$10 (2nd donation), Felipe L. L. aka “Neubius”
$10 (2nd donation), Tyler B.
$10 (2nd donation), Horacio R.
$10 (2nd donation), Antone H.
$10 (2nd donation), Hussain H.
$10 (2nd donation), Michael B.
$10 (2nd donation), Ray M.
$10 (2nd donation), Thomas H.
$10, Betty T.
$10, Miguel P.
$10, Vishal G.
$10, Hans G. H.
$10, Alan T.
$10, Danyle V.
$10, Josme A. D. S.
$10, Frank W.
$10, Christopher P.
$10, Pedro G.
$10, Jeff F.
$10, Shoot Around Corners
$10, Michael S.
$10, Marino W.
$10, André S.
$10, Ian M.
$10, Tobias K.
$10, John L.
$10, Christopher J.
$10, Дроник В.
$10, Cesar L. P.
$10, Sandel C.
$10, Michael W.
$10, RHEV
$10, Christopher G.
$10, Tommaso S.
$10, Roberto C.
$10, Chi F. T.
$10, Gustavo A. B.
$10, Ba T. N.
$10, Eric P.
$10, George G.
$9.99, Eric G.
$9, Victor H.
$8 (3rd donation), Udo M.
$7, Martin K.
$5 (16th donation), Eugene T.
$5 (9th donation), Jim A.
$5 (8th donation), Guillaume G. aka “Tidusrose”
$5 (8th donation), Kouji aka “杉林晃治”
$5 (7th donation), Kouji aka “杉林晃治”
$5 (7th donation), Bhavinder Jassar
$5 (6th donation), Aliki K.
$5 (6th donation), John M.
$5 (5th donation), Paul S.
$5 (5th donation), Lazada Philippines Voucher
$5 (5th donation), Blazej P. aka “bleyzer”
$5 (4th donation), NAGY Attila aka “GuBo”
$5 (4th donation), Michael J. N. J.
$5 (4th donation), J. S. .
$5 (3rd donation), Bongoville
$5 (3rd donation), RexAlan
$5 (3rd donation), James F.
$5 (3rd donation), Keith K.
$5 (3rd donation), Russell S.
$5 (3rd donation), Laurent M aka “lolomeis”
$5 (3rd donation), Jimmy M.
$5 (3rd donation), Stefan B.
$5 (2nd donation), Wiktor M. aka “wikuś”
$5 (2nd donation), Thomas D. Y.
$5 (2nd donation), Russell S.
$5 (2nd donation), Leandro Cortese aka “AriX”
$5 (2nd donation), Matteo A.
$5 (2nd donation), Johan H.
$5 (2nd donation), Jerzy D.
$5 (2nd donation), Mik aka “mikstico”
$5 (2nd donation), Arkadiusz T.
$5 (2nd donation), Alexander R.
$5, Michael P.
$5, Ramla L. M.
$5, Peter G.
$5, Robert R.
$5, Pablo Santos
$5, Andrzej P.
$5, Erly T. O.
$5, Simon S.
$5, Mateusz K.
$5, Jens B.
$5, Jake M.
$5, Hannes G.
$5, Star Tipster
$5, Ricardo S. C.
$5, Jean M.
$5, Stefano C.
$5, Gianni S.
$5, Dennis
$5, Edwin S.
$5, Felix G. F. G.
$5, Robert O.
$5, Ian R.
$5, Pietro F.
$5, rptev
$5, Wiryanto Y.
$5, cheval a vendre
$5, Pilar G. C.
$5, Luca P.
$5, Ingo J.
$5, Sara A. C.
$5, Verpackungsdruck aka “Klischeeherstellung”
$5, David H.
$5, Kurt W.
$5, Vitor P. S.
$4 (7th donation), David Y.
$3.95 (11th donation), Matthew B.
$3 (3rd donation), SEO Sunshine Coast
$3 (2nd donation), Maxime H.
$3 (2nd donation), Gianluigi M.
$3, Martin
$3, Diane R.
$3, Marius S.
$3, Marcelo S. P.
$3, Ширяев К.
$3, Nicolas P.
$2.5 (3rd donation), Wojtek N.
$62.88 from 48 smaller donations

If you want to help Linux Mint with a donation, please visit http://www.linuxmint.com/donors.php

Rankings:

  • Distrowatch (popularity ranking): 2679 (1st)
  • Alexa (website ranking): 4169

18 September, 2017 12:06PM by Linux Mint

hackergotchi for ARMBIAN

ARMBIAN

Le Potato


Ubuntu server – mainline kernel
 
Command line interface – server usage scenarios.

Experimental

Ubuntu desktop – legacy kernel
 
Multimedia and desktop usage scenarios.

Experimental

other download options and archive

  • Troubles with HDMI-to-VGA converters; not many screen resolutions are supported

Desktop

Quick start | Documentation

Preparation

Make sure you have a good & reliable SD card and a proper power supply. Archives can be uncompressed with 7-Zip on Windows, Keka on OS X and 7z on Linux (apt-get install p7zip-full). RAW images can be written with Etcher (all OS).

Boot

Insert the SD card into the slot and power the board. The first boot (with DHCP) takes up to 35 seconds with a class 10 SD card on the cheapest board.

Login

Log in as root on the HDMI / serial console or via SSH, using the password 1234. You will be prompted to change this password at first login. Next you will be asked to create a normal, sudo-enabled user account (beware of the default QWERTY keyboard layout at this stage).

18 September, 2017 08:13AM by igorpecovnik

hackergotchi for Ubuntu developers

Ubuntu developers

David Tomaschik: Getting Started in Offensive Security

Please note that this post, like all of those on my blog, represents only my views, and not those of my employer. Nothing in here implies official hiring policy or requirements.

I’m not going to pretend that this article is unique or has magic bullets to get you into the offensive security space. I also won’t pretend to speak for others in that space or in other areas of information security. It’s a big field, and it turns out that a lot of us have opinions about it. Mubix maintains a list of posts like this so you can see everyone’s opinions. I highly recommend the post “So You Want to Work in Security” by Parisa Tabriz for a view that’s not specific to offensive security. (Though there’s a lot of cross-over.)

My personal area of interest – some would even say expertise – is offensive application security, which includes activities like black box application testing, reverse engineering (but not, generally, malware reversing), penetration testing, and red teaming. I also do whitebox code review and various other things, but mostly I attack things using the same tools and techniques that an illicit attacker would. Of course, I do this in the interest of securing those systems and learning from the experience to help engineer stronger and more robust systems.

I do a lot of work with recruiting and outreach in our company, so I’ve had the chance to talk to many people about what I think makes a good offensive security engineer. After a few dozen times and much reflection, I decided to write out my thoughts on getting started. Don’t believe this is all you need, but it should help you get started.

A Strong Sense of Curiosity and a Desire to Learn

This isn’t a field or a speciality that you get into after a few courses and can stop there. To be successful, you’ll have to constantly keep learning. To keep learning like that, you have to want to keep learning. I spend a lot of my weekends and evenings playing with technology because I want to understand how it works (and consequently, how I can break it). There are a lot of ways to learn things that are relevant to this field:

  • Reddit
  • Twitter (follow a bunch of industry people)
  • Blogs (perhaps even mine…)
  • Books (my favorites in the resources section)
  • Courses
  • Attend Conferences (Network! Ask people what they’re doing!)
  • Watch Conference Videos
  • Hands on Exercises

Everyone has a different learning style, so you’ll have to learn what works for you. I learn best by doing (hands-on) and somewhat by reading. Videos are just inspiration for me to look more into something. Twitter and Reddit are the starting grounds to find all the other resources.

I see an innate passion for this field in most of the successful people I know. Many of us would do this even if we weren’t paid (and do some of it in our spare time anyway). You don’t have to spend every waking moment working, but you do have to keep moving forward or get left behind.

Understanding the Underlying System

To identify, understand, and exploit security vulnerabilities, you have to understand the underlying system. I’ve seen “penetration testers” who don’t know that paths on Linux/Unix systems start with / and use / as the path separator. Watching someone try to exploit a potential LFI with \etc\passwd is just painful. (Hint: it doesn’t work.)
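A quick way to convince yourself why that payload fails: the stdlib `posixpath` module applies Unix path rules regardless of the host OS.

```python
import posixpath

# On Linux/Unix, absolute paths begin with "/" and use "/" as the separator.
print(posixpath.isabs("/etc/passwd"))    # True: a real absolute path

# Backslashes are ordinary filename characters on Unix, so a
# Windows-style payload is just a weirdly named relative path
# and never escapes the web root.
print(posixpath.isabs(r"\etc\passwd"))   # False
```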

If you’re attacking web applications, you should at least have some understanding of:

  • The HTTP Protocol
  • The Same Origin Policy
  • The programming language used
  • The operating system underneath

For non-web networked applications:

  • A basic understanding of TCP/IP (or UDP/IP, if applicable)
  • The OSI Model
  • Basic computer architecture (stack, heap, etc.)
  • Language used for implementation

You don’t have to know everything about every layer, but each item you don’t know is either something you’ll potentially miss, or something that will cost you time. You’ll learn more as you develop your skills, but there are some fundamentals that will help you get started:

  • Learn at least one interpreted and one compiled programming language.
    • Python and Ruby are good choices for interpreted languages, as most security tools are written in one of them, so you can modify and create your own tools when needed.
    • C is the classic language for demonstrating memory corruption vulnerabilities, and it doesn’t hide a lot of the underlying system, making it a good choice for a compiled language.
  • Know basic use of both Linux and Windows. Basic use includes:
    • Network configuration
    • Command line basics
    • How services are run
  • Learn a bit about x86/x86-64 architecture.
    • What are pointers?
    • What is the stack and the heap?
    • What are registers?

You don’t have to have a full CS degree (though it certainly wouldn’t hurt), but if you don’t understand how developers do their work, you’ll have a much harder time looking for and exploiting vulnerabilities. Many of the best penetration testers and security researchers have had experience as network administrators, systems administrators, or developers – this experience is incredibly useful in understanding the underlying systems.

The CIA Triad

To understand security at all, you should understand the CIA triad. This has nothing to do with the American intelligence agency, but everything to do with 3 pillars of information security: Confidentiality, Integrity, and Availability.

Confidentiality refers to allowing only authorized access to data. For example, preventing access to someone else’s email falls into confidentiality. This idea has strong parallels to the notion of privacy. Encryption is often used (and misused) in the pursuit of confidentiality. Heartbleed is an example of a well-known bug affecting confidentiality.

Integrity refers to allowing only authorized changes to state. This can be the state of data (avoiding file tampering), the state of execution (avoiding remote code execution), or some combination. Most of the “exciting” vulnerabilities in information security impact integrity. GHOST is an example of a well-known bug affecting integrity.
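As an illustration, integrity of data at rest is commonly checked with a cryptographic digest; a minimal Python sketch:

```python
import hashlib

# Record a digest of the original data.
original = b"pay alice 100"
digest = hashlib.sha256(original).hexdigest()

# Later, any unauthorized change produces a different digest.
tampered = b"pay mallory 100"
print(hashlib.sha256(tampered).hexdigest() == digest)   # False: tampering detected
print(hashlib.sha256(original).hexdigest() == digest)   # True: data unchanged
```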

Availability is, perhaps, the easiest concept to understand. This refers to the ability of a service to be accessed by legitimate users when they want to access it (and probably also at the speed they’d like).

These 3 concepts are the main areas of concern for security engineers.

Understanding Vulnerabilities

There are many ways to categorize vulnerabilities, so I won’t try to list them all, but find some and understand how they work. The OWASP Top 10 is a good start for web vulnerabilities. The Modern Binary Exploitation course from RPISEC is a good choice for understanding “Binary Exploitation”.

It’s really valuable to distinguish a bug from a vulnerability. Most vulnerabilities are bugs, but most bugs are not vulnerabilities. Bugs are accidentally-introduced misbehavior in software. Vulnerabilities are ways to gain access to a higher (or different) privilege level in an unintended fashion. Generally, a bug must violate one of the 3 pillars of the CIA triad to be classified as a vulnerability. (Though this is often subjective, see [systemd bug].)

Doing Security

At some point, it stops being about what you know and starts being about what you can do. Knowing things is useful in being able to do, but merely reciting facts is not very useful in actual offensive security. Getting hands-on experience is critical, and this is one field where you need to be careful how to do it. Please remember that, however you choose to practice, you should stay legal and observe all applicable laws.

There’s a number of different options here that build relevant skills:

  • Formal classes with hands-on components
  • CTFs (please note that most CTF challenges have little resemblance to actual security work)
  • Wargames (see CTFs, but some are closer)
  • Lab work
  • Bug bounties

Of these, lab work is the most relevant to me, but also the one requiring the most time investment to set up. Typically, a lab will involve setting up one or more practice machines with known-vulnerable software (though feel free to progress to finding unknown issues). I’ll have a follow-up post with information on building an offensive security practice lab.

Bug bounties are a good option, but to a beginner they’ll be very daunting, because much of the low-hanging fruit will be gone and there are no known vulnerabilities to practice on. Getting into bug bounties without any prior experience is likely to teach only frustration and anger.

Resources

Here are some suggested resources for getting started in Offensive Security. I’ll try to maintain them if I receive suggestions from other members of the community.

Web Resources (Reading/Watching)

Books

Courses

Lab Resources

I’ll have a follow-up about building a lab soon, but there’s some things worth looking at here:

Conclusion

This isn’t an especially easy field to get started in, but it’s the challenge that keeps most of us in it. I know I need to constantly be pushing the edge of my capabilities and of technology for it to stay satisfying. Good luck, and maybe you’ll soon be the author of one of the great resources in our community.

If you have other tips/resources that you think should have been included here, drop me a line or reach me on Twitter.

18 September, 2017 07:00AM

hackergotchi for Wazo

Wazo

Sprint Review 17.13

Hello Wazo community! Here comes the release of Wazo 17.13!

Security update

Asterisk: Asterisk 14.6.1 has been included in Wazo 17.13. It contains many security fixes including one for the RTPbleed bug.

New features in this sprint

Admin UI: The user plugin now allows the administrator to change the user's group membership from the user form.

Webhooks: Add the ability to trigger webhooks only for events regarding a given user.

Ongoing features

Plugin management: We want developers to be able to write and share Wazo plugins easily. For this, we need a central place where users can browse plugins and developers can upload them. That's what we call the Plugin Market. Right now, the market already serves the list of available plugins, but it is very static. Work will now be done to add a front end to our plugin market, allowing users to browse, add and modify plugins.

Webhooks: Webhooks allow Wazo to notify other applications about events that happen on the telephony server, e.g. when a call arrives, when it is answered or hung up, when a new contact is added, etc. Webhooks are working correctly, but they still need some polishing: performance tweaking, handling HTTP authentication, listing the history of webhooks triggered, allowing users to set up their own webhooks in order to connect with Zapier, for example. We’re also thinking of triggering scripts instead of HTTP requests, making Wazo all the more flexible when interconnecting with other tools.
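On the receiving side, the per-user filtering idea can be sketched in a few lines. This is a hypothetical illustration: the event field names (`user_uuid`, `name`) are assumptions for the sake of the example, not Wazo’s actual event schema.

```python
import json

def event_concerns_user(raw_body, user_uuid):
    """Return True if a webhook event concerns the given user.

    The field names used here are illustrative assumptions,
    not Wazo's actual payload format.
    """
    event = json.loads(raw_body or b"{}")
    return event.get("user_uuid") == user_uuid

# A receiver would run this check before acting on the event:
body = b'{"name": "call_created", "user_uuid": "1234"}'
print(event_concerns_user(body, "1234"))  # True
print(event_concerns_user(body, "5678"))  # False
```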


The instructions for installing Wazo or upgrading Wazo are available in the documentation.

For more details about the aforementioned topics, please see the roadmap linked below.

See you at the next sprint review!

Sources:

18 September, 2017 04:00AM by The Wazo Authors

September 17, 2017

hackergotchi for Ubuntu developers

Ubuntu developers

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, August 2017

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In August, about 189 work hours were dispatched among 12 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours is the same as last month.

The security tracker currently lists 59 packages with a known CVE and the dla-needed.txt file lists 60. The number of packages with open issues decreased slightly compared to last month, but we’re not yet back to the usual situation. The number of CVEs to fix per package tends to increase due to the increased usage of fuzzers.

Thanks to our sponsors

New sponsors are in bold.


17 September, 2017 08:24AM

hackergotchi for ev3dev

ev3dev

ev3dev in episode 127 of `podcast.__init__`

podcast.__init__ is the Podcast About Python and the People Who Make It Great, and Episode 127 is about Legos, robots, and Python! Tobias Macey talks with David Lechner and Denis Demidov about the ev3dev project and how you can program your Lego Mindstorms with Python!

17 September, 2017 12:00AM by @ddemidov

September 16, 2017

hackergotchi for ArcheOS

ArcheOS

Digital 3D Orthognathic Surgery with the OrtogOnBlender addon

Timelapse video of orthognatic planning steps

The first time I understood the concept of programming was in 2001, when I realized that a text file with some code was translated into a three-dimensional scene with animated elements. I was as astounded by it as when, at three years old, I realized that the water going into the refrigerator freezer became those transparent pebbles my grandparents used in the juices they prepared for the family.

Since then I have always consumed information about programming. Sometimes I would even get excited and write some code, but the lack of practical use made me confine myself to just exercising my brain on the abstractions involved.


Many years and many books later, I started teaching 3D computer graphics courses for the health field and realized that the organization of Blender’s graphical interface confused the doctors more than it helped them learn.

Faced with this difficulty on the part of the students, I motivated myself to create a methodology that would facilitate their understanding of a field that proved to be quite challenging in the didactic context: digital 3D orthognathic surgery planning.

Automatically created deformation area

Unlike the other courses I had taught so far, orthognathic surgery, when taught in the “manual mode,” where the student needs to understand every detail of the commands, is almost impossible to fit into a short course. The problem is that it is precisely short courses that are the most sought after, and not offering them translates into not generating income.

Together with Dr. Everton da Rosa, a dentist specializing in orthognathic surgery, I began to develop a series of small scripts to help students automate one task or another. I wrote one, it worked, I got excited... I wrote another, it worked, and then another... I stumbled along, and although the result was not the most elegant, at least it was functional.

Jaw rotation

Blender uses Python for its scripts, and this language is marvelous for anyone who wants to try programming, since the code is pretty clean. My excitement grew to the point that I became hooked on the small challenges that kept appearing.

Initially I intended to create a series of scripts, but I realized that I could bundle everything into an addon that would be installable in Blender. While programming and working with different objects and modes, I was intuitively understanding how everything worked, to the point of thinking of a solution, writing it without consulting references, and having the code work!

Automatic creation of vertex groups

It was what I needed to become a cyber hermit... locked in the bedroom, just thinking about how to solve the problems and expand the functionality. In an unlikely five days, though it is not finalized, I was able to put together an addon functional enough to simplify and automate a series of steps that make up the planning of orthognathic surgery.

Interestingly, as a good nerd who follows the best practices of self-taught effort, I did not ask anyone for anything: I did not post on forums or in social media groups about programming, nor did I ask for help from friends who are masters of the language. I found everything I needed in the documentation, videos on YouTube, tutorials on the internet and, best of all, in the templates provided by Blender itself! In short, I did not ask for direct help, but I got it from the work and goodwill of people who wanted to help and made excellent materials. I am very grateful to all!

Jaw after osteotomy (Cork on Blender)

I intend to make the addon available soon (as we did with Cork on Blender), after testing it with my students. It is not yet complete, since it lacks the surgical splint part (a sort of guide for fitting the teeth) and some other features, such as automatic parenting to the bone armature.

Even so, I am very satisfied, mainly for having implemented the automatic process of creating areas of influence based on the osteotomy. With it, the cut bones, when moved, make the soft tissue (skin, fat, etc.) deform. You cannot imagine the challenge of teaching this in the “manual mode.” Luckily it all came down to clicking two buttons, which is not bad!

Deformation radiated by the skull

I'm so happy and motivated that I want to go around programming and automating everything, but I also felt firsthand another situation inherent in this kind of work... code needs to be elegant and prepared for exceptions, and mine does not come close. The plan now is to polish the code and try to make it decent, worthy of being shared, used, and improved by the community and anyone interested in using it.

That's it, now I'm going because I had an insight on how to solve a problem in one part of the code... a big hug! : D

16 September, 2017 12:45PM by cogitas3d (noreply@blogger.com)

hackergotchi for Tails

Tails

Call for testing: 3.2~rc1

You can help Tails! The first release candidate for the upcoming version 3.2 is out. We are very excited and cannot wait to hear what you think about it :)

What's new in 3.2~rc1?

Significant changes since Tails 3.1 include:

  • Upgrade to Tails Installer 4.4.19, which gets rid of the splash screen, detects when Tails is already installed on the target device (and then proposes to upgrade), and generally improves the UX. We are very interested in reports about problems with this new version of Tails Installer.

  • The Root Terminal has gone through some significant back-end changes; please make sure it works like before (or better)!

  • Add PPPoE support; if you have a DSL or dial-up connection that uses PPPoE, please give it a try!

  • Bluetooth support is now completely disabled (details: #14655). If this makes it hard for you to use Tails, please let us know!

  • Upgrade to Linux 4.12.12, which improves hardware support, e.g. better support for the NVIDIA Maxwell series of graphics cards.

  • Upgrade to Thunderbird 52.3.0. Ideally it should work exactly like before, or better.

Technical details of all the changes are listed in the Changelog.

How to test Tails 3.2~rc1?

Keep in mind that this is a test image. We tested that it is not broken in obvious ways, but it might still contain undiscovered issues.

But test wildly!

If you find anything that is not working as it should, please report to us on tails-testers@boum.org.

Bonus points if you first check if it is a known issue of this release or a longstanding known issue.

Upgrade from 3.1

  1. Start Tails 3.1 on a USB stick installed using Tails Installer and set an administration password.

  2. Run this command in a Terminal to select the "alpha" upgrade channel and start the upgrade:

    echo TAILS_CHANNEL=\"alpha\" | sudo tee -a /etc/os-release && \
         tails-upgrade-frontend-wrapper
    

    and enter the administration password when asked for the "password for amnesia".

  3. After the upgrade is installed, restart Tails and choose Applications ▸ Tails ▸ About Tails to verify that you are running Tails 3.2~rc1.

Download and install

You can install 3.2~rc1 by following our usual installation instructions, skipping the Download and verify step.

Tails 3.2~rc1 ISO image OpenPGP signature
Tails 3.2~rc1 torrent

Known issues in 3.2~rc1

  • GNOME screen keyboard (which replaced Florence in this version) is not working: pressing its on-screen buttons does nothing (#14675).

Longstanding known issues

What's coming up?

Tails 3.2 is scheduled on September 26.

Have a look at our roadmap to see where we are heading.

We need your help and there are many ways to contribute to Tails (donating is only one of them). Come talk to us!

16 September, 2017 10:34AM

September 15, 2017

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: Ubuntu Desktop Weekly Update: September 15, 2017

A fairly short update this week as we’re in bug fixing mode ahead of final beta in a couple of weeks.

GNOME

This week saw the release of GNOME 3.26, and we’re ready to ship it in 17.10. This will bring new versions of the core applications and new features as described in the GNOME release notes.

If you’ve been running 17.10 for a while, you will have already been using 3.25, the development branch of 3.26, and so will already be familiar with it.

We’ve also been working on adding support for progress bars and urgent notifications to the Dash to Dock extension, and we ported the Dash to Dock settings to the new Control Center layout for 3.26.

We’ve been working with the GNOME community on documentation to help people transitioning from Unity to GNOME, and we tracked down and fixed a GDM bug that was selecting the wrong session at login. Patches are upstream.

Snaps

Our patches to add PolicyKit support have been cherry picked for snapd 2.28. This will allow you to install Snaps without having to login to Ubuntu One.

We have built new Snaps for gnome-characters and gnome-logs.

Updates

  • Chromium 61.0.3163.79 was promoted to the stable channel and will be tested and published soon.
  • Chromium dev channel is updated to 62.0.3202.9.


15 September, 2017 06:47PM

hackergotchi for Purism PureOS

Purism PureOS

GNOME & KDE: The Purism Librem 5 phone is building a shared platform, not walled gardens

You might have heard about our Librem 5 phone campaign, which we recently launched and which has now crossed the $300,000 milestone. If you are reading this particular blog post, it is most likely because you are a member of the great GNOME/KDE/freedesktop community, and if you were expecting the Librem 5 to be only “a GNOME phone” that excludes others, you will be happy to know that Purism is working with both KDE e.V. and the GNOME Foundation, and will continue to do so.

As a matter of fact, to the question “Will you be running GNOME, Plasma, or your own custom UI?”, our campaign page’s FAQ stated, from the beginning:

“We will be working with both GNOME/GTK and KDE/Plasma communities, and have partnered with the foundations behind them for the middleware layer. PureOS currently is GNOME-based and our great experience with working with GNOME as an upstream as well as GNOME’s OS and design-centric development model; however we will also test, support, and develop with KDE and the KDE community, and of course we will support Qt for application development. We will continue to test GNOME and Plasma, and should have a final direction within a month after funding success. Whatever is chosen, Purism will be working with both communities in an upstream-first fashion.”

As a point of clarification, Purism is supporting GNOME/GTK and will continue to do so; Purism is also supporting KDE/Plasma and will continue; forming partnerships with these great communities is a way to establish our long-term commitment to those goals.

Likewise, Purism will ship PureOS by default on the Librem 5, but will support and work with other GNU/Linux distributions wishing to take advantage of this device.

The Librem 5 is about users reclaiming their rights to freedom, privacy and security on their mobile communication devices (also known as pocket computer, smartphone, etc.) with a platform that they love and trust. It is not about creating walled gardens, erecting barriers and division in the free desktop community, and reigniting the Desktop Wars of the past:

We are planning to empower users to run GNOME, KDE, or whatever else they see fit, on their GNU+Linux phone—just like we can have both GNOME and KDE on the same desktop/laptop today. The fact that we are going to be making an integrated convenient product that may or may not be a vanilla or heavily modified version of one of these two desktops as the “official recommended turnkey product choice for customers” takes away nothing from the value of these environments or from the ability to run and tinker with whatever Free and Open-Source software you see fit on your device—a device that you can truly own.

What we are providing here is a reference platform that is not Android, for both GNOME and KDE communities—we just so happen to need to provide it as a turnkey usable product for less tech-savvy customers as well, while doing it 100% in the open, upstream-first, like a true Free Software project should be. Right now, the exact set of software technologies we will base our “integrated product” on—whether closely based on KDE, or GNOME—is something we are still evaluating and will decide along the way. There is no “us” vs “them” here. The two projects are in different states of advancement when it comes to mobile and touch technologies, and both communities have their specificities, expertise, and strengths. No matter which project we pick as the basis to invest most of our technical resources in, both projects will win:

  • Even if one project is not chosen as the reference product user interface, it gains a hardware reference platform that community members can standardize on and improve however they see fit.
  • This is not the nineties. GNOME and KDE have had a healthy collaboration relationship for the better part of a decade now!
  • We light up a competitive fire again in the hearts of contributors in both communities—and beyond. We can now fight for a platform we truly own, from the backend and middleware to the graphical user interface. No more proprietary UIs, no more “fork everything in middleware!”
  • We will still provide support to developers and testers across the board, everybody is welcome.

From a higher perspective, we believe this campaign is vital to the relevance of Free Software and the viability of GNU+Linux (vs Android+Linux) beyond the desktop, and to protect ourselves from pervasive surveillance and data capitalism. We hope you will see it in this light as well.

15 September, 2017 01:34PM by Jeff

hackergotchi for Univention Corporate Server

Univention Corporate Server

New: UCS 4.2 App Appliances in Own Corporate Branding

Our App Center team has been busy as usual, releasing four Apps from the Univention App Center as App Appliances. An App Appliance bundles UCS and an App in a virtual machine. The Appliances are available for the virtualization and cloud formats KVM, VMware, and VirtualBox. In addition to the pre-configured App, they also contain a pre-configured UCS system and a management system for administrating the App itself and its users. App Appliances are thus a particularly easy way to start an App without having to install it via the Univention App Center integrated in UCS.

App manufacturers, in turn, find it particularly attractive to be able to choose their own corporate branding for their Appliance. And it only takes very little effort to do so via the App Provider Portal. Thanks to this feature, areas such as the boot splash, the welcome screen, the setup, and the portal of the UCS system show the colors and logo of the respective App.

Corporate branding in Appliances, exemplified by the OpenProject, ownCloud and SuiteCRM Appliances

Our App Center team has also recently expanded the UCS portal for the UCS 4.2 Appliances: To help users find their way more quickly the first time they access the portal, an overlay shows them the first steps. The displayed text can be configured, so the App creator can explain the first steps with the Appliance to the user, making it even easier to get started.

Univention Portal with Kopano overlay

Find the App Appliances Kopano, OpenProject, ownCloud, and SuiteCRM in the App Catalog.


Fancy more input? These blog articles might also interest you:

If you have any questions, please feel free to use our comment function below or subscribe to our newsletter for regular updates.

Subscribe to Newsletter

The post New: UCS 4.2 App Appliances in Own Corporate Branding appeared first on Univention.

15 September, 2017 07:00AM by Nico Gulden

hackergotchi for Deepin

Deepin

Deepin Security Update——Urgently Fixed BlueBorne Security vulnerability CVE-2017-1000250 in Bluetooth implementations

Armis Labs revealed a new attack vector endangering major mobile, desktop, and IoT operating systems, including Android, iOS, Windows, and Linux, and the devices using them. The new vector is dubbed “BlueBorne”, as it spreads through the air (airborne) and attacks devices via Bluetooth. Armis has also disclosed eight related zero-day vulnerabilities, four of which are classified as critical. BlueBorne allows attackers to take control of devices, access corporate data and networks, penetrate secure “air-gapped” networks, and spread malware laterally to adjacent devices. Armis reported these vulnerabilities to the responsible actors, and is working with them as patches are being ...Read more

15 September, 2017 06:15AM by melodyzou

Deepin System Updates (2017.09.15)

Updated PulseAudio to version 10.0:

  • Supported more hardware; AirPlay hardware is now supported;
  • Newly inserted USB sound cards or newly connected Bluetooth devices are now selected as the default automatically, without manual settings by users;
  • Improved memory of hot-swap device configurations;
  • Supported the GNU Hurd kernel;
  • Supported 32-bit applications on 64-bit systems in padsp.

Fixed system and application bugs:

  • Deepin security update fixing the critical Bluetooth protocol vulnerability BlueBorne (CVE-2017-1000250);
  • Reverted the Synaptics configuration file to the previous version, to fix non-working touchpads on some specific models;
  • Updated the Flash plug-in packages for Firefox, Chrome, and Opera ...Read more

15 September, 2017 04:01AM by longxiang

hackergotchi for Qubes

Qubes

Thank You for Supporting Qubes!

Dear Qubes Community,

When we reflect on how the Qubes userbase has grown over the past few years, we are humbled by the number of people who have chosen to join us in entrusting the security of their digital lives to Qubes. We recognize the immense responsibility this places on us. This sense of duty is what drives our work to make Qubes as secure as it can be.

We are further humbled by the many generous donations that have been made this year. Qubes is protecting real people around the world in ever greater numbers, and many of you have shown your appreciation by giving back to the project. We are truly grateful for your support. Thank you.

Top Donors of 2017

We’d like to take this opportunity to thank the top donors of 2017 (so far!):

  • 50,000 EUR from the VPN service Mullvad!
  • 10 BTC from an anonymous donor!
  • 10,000 USD from zby, angel investor!
  • 1,000 USD recurring annual donation from Eric Grosse!

Thank you to these donors and to everyone who has donated to the Qubes Decentralized Bitcoin Fund and the Qubes Open Collective! Your donations continue to fund work on Qubes OS. Thanks to your support, we’ve just released Qubes 4.0-rc1, and we’re getting ever closer to a stable release!

Our Work Continues

Today, Qubes safeguards tens of thousands of users around the globe in their work and personal lives, including every member of the Qubes Team. But the path here has been a long and difficult one, in terms of both the great dedication required of the team and the monetary costs that Invisible Things Lab has borne, and continues to bear, so that the project could continue throughout the years.

Without a doubt, it’s all been worth it. Qubes is our passion. It’s part of our lives. We’re gratified and exhilarated to see Qubes bringing real value to people around the world, and we’re more determined than ever to make Qubes the best free and open-source secure operating system it can be – for everyone. We know that many of you feel the same way we do. If Qubes is important to you, please consider joining us in supporting its ongoing development. Everyone’s support is valuable to us, no matter how large or how small. Together, we can ensure that Qubes is around to protect us all for a long time to come.

Sincerely,
The Qubes OS Team

15 September, 2017 12:00AM

September 14, 2017

hackergotchi for VyOS

VyOS

Change is coming to VyOS project

People often ask us the same questions, such as if we know about Debian 6 EOL, or when 1.2.0 will be released, or when this or that feature will be implemented. The short answer, for all of those: it depends on you. Yes, you, the VyOS users.

Here’s what it takes to run an open source project of this type. There are multiple tasks, and they all have to be done:

  • Emergency fixes and security patches

  • Routine bug fixes, cleanups, and refactoring

  • Development of new features

  • Documentation writing

  • Testing (including writing automated tests)


All those tasks need hands (ideally, connected to a brain). Emergency bug fixes and security patches need a team of committed people who can do the job on short notice, which is attainable in two ways: either there are people for whom it’s their primary job, or the team of committed people is large enough that someone has spare time at any given moment.

Cleanups and refactoring also need a team of committed people, because these are things that no one benefits from in the short run; they are about making life easier for contributors and improving the sustainability of the project, keeping it from becoming an unmanageable mess. Development of new features needs people who are personally interested in those features and have the expertise to integrate them in the right way. It’s perfect if they also maintain their code, but if they simply hand documented and maintainable code to the maintainers team, that’s good enough.

Now, the sad truth is that the VyOS project has none of those. The commitment to using it among its users greatly exceeds the commitment to contributing to it. While we don’t know for certain how many people are using VyOS, we have at least some data. At the moment, there are 600 users of the official AMI on AWS. There were 11k+ users on the user guide page last month, and that number has been growing constantly since I took up the role of community manager of the VyOS project. We are also aware of companies that have around 1k VyOS instances, and of companies that rely on VyOS in their business operations in one way or another. But still, if we talk about consumers vs. contributors, we see a ratio of 99% consumers to 1% contributors.


My original idea was to raise awareness of the VyOS project by introducing a new website, refreshing the forum look, activating social media channels, and introducing modern collaboration tools to make participation in the project easier, opening new ways for users and companies to participate and contribute. A bigger user base, after all, means a larger pool of people and companies who can contribute to the project. We also launched commercial support with the idea that if companies that use VyOS for their businesses can’t, or just don’t want to, participate in the project directly, they may be willing to support the project by purchasing support subscriptions.

10 months later, I can admit that I was partially wrong. While the consumer user base is growing rapidly, I can’t say the same about contributors, and that is a pity. Sure, we got a few new contributors: some of them contribute occasionally, others are more active, and some old contributors are back (thank you for joining/re-joining VyOS!). We are also working with several companies that are showing interest in VyOS as a platform and contribute to the project commercially and via human resources, and that is great. However, it’s not enough at this scale.


At this point, I started thinking that the current situation can hardly be considered fair, and doesn’t really make sense.


These are just some of the questions that come to my mind frequently:


  • Why do those who contribute literally nothing to the project get the same as others who spend their time and resources on it?

  • Why do companies like ALP Group use VyOS in their business and publicly claim that they will return improvements upstream, when they are not actually returning anything? Why do some people think that they can come to IRC/chat and demand something without contributing anything?

  • Why do the cloud providers that use VyOS for their businesses not bother to support the project in any way?


I would like to remind you of the basic principles of the VyOS philosophy established from its start:


VyOS is a community driven project!

VyOS always will be an open source project!

VyOS will not have any commercial or any other special versions!


However, if we all want VyOS to be a great project, we all need to adhere to those principles, otherwise, nothing will happen. Community driven means that the driving force behind improvements should be those interested in them. Open source means we can’t license a proprietary component from a third party if existing open source software does not provide the feature you need. Finally, free for everyone means we all share responsibility for the success or failure of the project.


I’m happy and proud to be part of the VyOS community, and I really consider it my duty to help the project and the community grow. I’m doing what I can, and I expect that if you also care about the project, you will participate too.


We all can contribute to the project, no matter whether you are a developer, a network engineer, or neither of these.


There are many tasks that can be done by individuals with zero programming involved:


  • Documentation (documenting new features, improving existing wiki pages, or rewriting old documentation for Vyatta Core)

  • Support community in forums/IRC/chat (we have English and localized forums, and you can request a channel in your native language like our Japanese community did)

  • Feature requests (well-described use cases from which the whole community can benefit; note that a good feature request should make it easier for developers to implement it: just saying you want MPLS is not quite the same as researching existing open source implementations, trying them out, making sample configs that contributors with coding skills can use for reference, drafting a CLI design, and so on!)

  • Testing & Bug reports

  • Design discussions, such as those in the VyOS 2.0 development digest


If you work at a company that uses VyOS for business needs, please consider talking with your CEO/CTO about:

  • Providing full- or part-time workers to accomplish the tasks listed above

  • Providing paid accounts in common clouds for development needs

  • Providing hardware and licenses for the laboratory (we need quite a lot of hardware to support all of the hypervisors, and the same is true of licenses for interworking testing)

  • Buying commercial support & services



In January, we’d like to have a meeting with all current contributors to discuss what we can do to increase participation in the project.

Meanwhile, I would like to ask you to share this blog post with all whom it may concern.

All VyOS users (especially companies that use VyOS in their business) should be aware that it is time to start participating in the project if you want to keep using VyOS and rely on it in the future.


Brace yourself.

Change is coming!

14 September, 2017 07:12PM by Yuriy Andamasov

Cumulus Linux

Your guide to selecting pluggable transceiver modules

The high cost of vendor-locked optics has spawned a lot of ingenuity over the years, as ‘non-approved’ manufacturers build the same optics to the same spec and try to get them to work as a low-cost alternative to those from the preferred ODMs. But the whitebox revolution has now leveled the playing field. Lower cost whitebox hardware can work with low-cost or high-cost optics, without discrimination based on manufacturer brand. In the case of a data center deployment, the cost savings of using lower cost optics can translate to millions of dollars.

As long as the box manufacturer and the optics manufacturer both build to industry standards — both formal and informal ones — optics from any manufacturer should be able to work on any box. Most of the implementation details are specified by standards. However, that doesn’t guarantee that you can pick any module on the internet, order a thousand units and have a successful deployment.

At Cumulus Networks, we do everything we can to ensure a smooth, easy deployment. We believe that one of the critical benefits of disaggregation is that it provides you with the ability to choose whatever hardware and software best suit your business needs.

When it comes to hardware, it is important to consider things like compatibility and interoperability well before it is time to deploy. We’ve put together this quick guide to use alongside our hardware compatibility list so that when you’re ready to go web-scale with Cumulus Linux, deployment will be a breeze.

Pluggable transceiver module considerations

Although Ethernet has become commonplace and simple to implement, in this new world of high-speed Ethernet networking the underlying physical infrastructure and technologies have become more and more complex. This section of the post covers the standard considerations for pluggable transceiver modules. After we set the playing field, I’ll go into the details of how Cumulus Networks approaches the subject.

Here is a quick overview of the four-dimensional pluggable transceiver module technology matrix to help make the differences clear.

pluggable transceiver module
Additionally, there are other dimensions to the matrix that affect interoperability between modules and switches:

  • Module/port compatibility: The port has to support the module requirements. When an LR optical module is inserted, the port hardware must detect it and supply more power than for an SR module. Similarly, high speed technologies like Infiniband and FiberChannel come in SFP/QSFP form factors, but an Infiniband module may not also support Ethernet, causing compatibility problems.
  • Tx/Rx and control signalling: Innovations in the translation of the received signal from the SFP/QSFP back into an understandable and uncorrupted pattern of bits in the switch port have created several different electronics implementations inside the switch hardware. In our hardware compatibility list, there are various transmit and receive hardware technologies that accomplish this translation. Above that layer, Cumulus Networks currently supports ten different primary switching ASICs that influence the way that the port is configured and controlled. That adds two more dimensions to the Ethernet technology matrix that needs to be tested.
  • Manufacturer implementations of the standards: Each switch and module manufacturer has particular hardware implementations that might influence compatibility with other manufacturer implementations.
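As a purely illustrative sketch of the module/port compatibility point above (none of these types or values come from Cumulus Linux; the attributes and numbers are hypothetical), a port can only run a module whose form factor, speed, power draw, and protocol it actually supports:

```python
from dataclasses import dataclass

@dataclass
class Module:
    form_factor: str    # e.g. "SFP+", "QSFP28"
    speed_gbps: int     # e.g. 10, 100
    power_watts: float  # LR optics typically draw more power than SR
    protocol: str       # "ethernet", "infiniband", ...

@dataclass
class Port:
    form_factor: str
    max_speed_gbps: int
    max_power_watts: float
    protocols: tuple

def compatible(port: Port, module: Module) -> bool:
    """A module only works if the port meets every one of its requirements."""
    return (port.form_factor == module.form_factor
            and module.speed_gbps <= port.max_speed_gbps
            and module.power_watts <= port.max_power_watts
            and module.protocol in port.protocols)
```

An InfiniBand-only module in an Ethernet-only port fails the last check, which is exactly the kind of compatibility problem described above.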

Because of the complexity of this matrix, nearly all traditional switch vendors support only a specific set of optics. Often the approved optics are branded with the vendor’s own name. Some switch vendors go further and use vendor locking: software checks that guarantee only their supported optics will work.

As I’ll explain shortly, at Cumulus Networks we verify our products using a plethora of optics to ensure our software configures the switch properly. We believe in the benefits of disaggregation, and we do not limit our customers to only the hardware we have tested. That would not only go completely against what we stand for, it would also be a hindrance to many.
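To make the idea of such a software check concrete: SFP modules describe themselves in an EEPROM laid out per SFF-8472 (the vendor name, for instance, is a 16-byte space-padded ASCII field starting at byte 20 of the A0h page, the same data `ethtool -m <port>` decodes). The sketch below is a simplified, hypothetical reconstruction of what a vendor-lock check amounts to, not Cumulus’s or any switch vendor’s actual code:

```python
VENDOR_NAME_OFFSET = 20   # SFF-8472 A0h page: vendor name field
VENDOR_NAME_LENGTH = 16   # 16 ASCII bytes, padded with spaces

def vendor_name(eeprom: bytes) -> str:
    """Extract the space-padded ASCII vendor name from an A0h page dump."""
    raw = eeprom[VENDOR_NAME_OFFSET:VENDOR_NAME_OFFSET + VENDOR_NAME_LENGTH]
    return raw.decode("ascii", errors="replace").strip()

def passes_vendor_lock(eeprom: bytes, approved: set) -> bool:
    """What a vendor-locked NOS effectively does: refuse unknown brands."""
    return vendor_name(eeprom) in approved
```

A locked switch refusing a third-party module is simply this comparison failing; an unlocked one skips it and relies on the standards compliance discussed above.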

The Cumulus approach to pluggable transceiver modules

Cumulus embraces the power of choice, supporting many open networking switches and facilitating interoperability with any validated optics or pluggables. We conduct significant testing of our switch platforms using standard optics, but also give customers the freedom to choose any validated brand. Here are some of our hardware programs and policies to encourage both efficiency and choice:

The HCL provides a list of recommended pluggables for each platform. That list identifies the Cumulus Linux versions that support each recommended pluggable. The HCL list includes pluggables qualified by both UNH and Cumulus Networks Engineering.

  • If a pluggable is on the HCL but not listed as recommended for use with a particular platform, then Cumulus has found that this specific combination should not be used.
  • We recommend using hardware that is listed on our hardware compatibility list. However, some customers have successfully deployed with modules validated elsewhere. In these specific cases, the customers took on rigorous pre-deployment lab testing that included the hardware and software configurations that match the production deployment. If a customer decides to use a brand not on our verified list as seen in the HCL, we have a detailed set of deployment guidelines including a detailed test plan to help customers self-certify their particular modules in the switches that they plan to deploy.
  • We also offer a turnkey solution, Cumulus Express, that includes approved hardware and pluggables right in the box. It’s a completely plug and play experience so you do not have to worry about a thing.

With the fast-changing nature and proliferation of low-cost manufacturers, it’s close to impossible to test every single low-cost vendor. That said, at Cumulus Networks we do our best to ensure customers have all the information they need to choose hardware that best fits their needs. Our testing is focused on a variety of platforms that represent each switching ASIC and port hardware type (Tx/Rx and control signalling dimensions mentioned above), using a subset of ‘name-brand’ and ‘low-cost’ modules.

The beauty of disaggregation is that the choice is yours

In nearly all cases, whitebox switches and whitebox optics just work right out of the box. If you are concerned about compatibility issues derailing your deployment, check out our list of compatible pluggables and hardware on our hardware compatibility list.

Of course if you do not want to worry about compatibility, there is a streamlined way to go about deployment. Cumulus Express not only comes with Cumulus Linux installed and licensed, but it also comes with compatible hardware. Everything you need to deploy is there in the box — making deployment a breeze. Learn more about Cumulus Express.

The post Your guide to selecting pluggable transceiver modules appeared first on Cumulus Networks Blog.

14 September, 2017 06:36PM by Mike Brown

hackergotchi for Ubuntu developers

Ubuntu developers

Costales: Snap Pill: Create a snap application in 5 minutes

In just 5 minutes you will package your first snap application. It couldn’t be easier!
Up for the challenge? ;) Let’s go!

video tutorial


Based on the talk by Alan Pope and Martin Wimpress at the 2nd European Ubucon:

snap! snap! snap!

14 September, 2017 05:17PM by Marcos Costales (noreply@blogger.com)

Ubuntu Podcast from the UK LoCo: S10E28 – Momentous Sparkling Jellyfish - Ubuntu Podcast

This week we’ve been adding LED lights to a home studio, we announce the winner of the Entroware Apollo competition, serve up some GUI love and go over your feedback.

It’s Season Ten Episode Twenty-Eight of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

Entroware Competition Winner!

Congratulations to Dave Hingley for creating The Ubuntu Guys comic which was scripted in 20 minutes.

All entries

In no particular order, here are all the entries:

Roger Light

Neil McPhail

Sorry Neil, it’s 2017 and we still can’t edit tweets!

Andy Partington

Joe Ressington

Paul Gault

Robert Rijkhoff

Gentleman Punter

Ivan Pejić

Mattias Wernér

Masoud Abkenar

Johan Seyfferdt

Ovidiu Serban

Ryan Warnock

Dave Hingley

Ian Phillips

Brain Walton

Martin Tallosy

Lucy Walton

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

14 September, 2017 03:30PM

hackergotchi for Purism PureOS

Purism PureOS

Purism and KDE to Work Together on World’s First Truly Free Smartphone

Berlin/San Francisco, September 14, 2017 — Purism and KDE are partnering to help KDE adapt Plasma Mobile to Purism’s Librem 5 smartphone.

KDE develops Plasma Mobile, a free, open and full-featured graphical environment for mobile devices. Plasma Mobile has been tested on several off-the-shelf devices. However, most smartphones include hardware that requires proprietary software to work. This clashes with KDE’s principles of freedom and openness. It also makes building difficult, since many details of the hardware are kept secret, preventing complete access to all the components.

Purism, the manufacturer that builds high-quality, top-of-the-range, freedom-respecting devices, is currently running a crowdfunding campaign which will allow the company to build the first fully free and open smartphone: the Librem 5.

The shared vision of freedom, openness and personal control for end users has brought KDE and Purism together in a common venture. Both organisations agree that cooperating will help bring a truly free and open source smartphone to the market. KDE and Purism will work together to make this happen.

“Building a Free Software and privacy-focused smartphone has been a dream of the KDE community for a long time. We created Plasma to not just run on desktops and laptops, but for the whole spectrum of devices,” says Lydia Pintscher, President of KDE e.V. “Partnering with Purism will allow us to ready Plasma Mobile for the real world and integrate it seamlessly with a commercial device for the first time. The Librem 5 will make Plasma Mobile shine the way it deserves.”


“KDE has created an evolved, completely free platform in Plasma Mobile,” says Todd Weaver, CEO of Purism. “We feel that Plasma Mobile will become a serious contender that may break the current duopoly and bring a full-featured, fully free/libre and open source mobile operating system to the market. We look forward to trying out Plasma Mobile on our test hardware and working with KDE’s community.”

About KDE

KDE is an international community of developers, designers, writers, translators and users that work together to achieve a world in which everyone has control over their digital life and enjoys freedom and privacy. KDE produces Plasma, an advanced and friendly desktop and a graphical environment for mobile devices. KDE also fosters and sponsors the creation of hundreds of apps for Linux, Windows, MacOS, Android and many other platforms; as well as frameworks, libraries and utilities that help developers create applications faster and easier.

About Purism

Purism is a Social Purpose Corporation devoted to bringing security, privacy, software freedom, and digital independence to everyone’s personal computing experience. With operations based in San Francisco (California) and around the world, Purism manufactures premium-quality laptops, tablets and phones, creating beautiful and powerful devices meant to protect users’ digital lives without requiring a compromise on ease of use. Purism designs and assembles its hardware in the United States, carefully selecting internationally sourced components to be privacy-respecting and fully Free-Software-compliant. Security and privacy-centric features come built-in with every product Purism makes, making security and privacy the simpler, logical choice for individuals and businesses.

Media Contact

Marie Williams, Coderella / Purism
+1 415-689-4029
pr@puri.sm
See also the Purism press room for additional tools and announcements.

Please see also our stance on GNOME vs KDE as it relates to the Librem 5 campaign.

14 September, 2017 03:00PM by Jeff

hackergotchi for Ubuntu developers

Ubuntu developers

Sebastian Kügler: Help us create a privacy-focused Free software smartphone!

The news is out! KDE and Purism are working together on a Free software smartphone featuring Plasma Mobile. Purism is running a crowdfunding campaign right now, and if that succeeds, with the help of KDE, the plan is to deliver a smartphone based on Plasma Mobile in January 2019.

Why do I care?

Data collection and eavesdropping have become very common problems. Not only are governments (friendly and less friendly) spying on us and collecting information about our private lives; companies are doing so as well. There is a lot of data about the average user stored in databases around the world, enough not only to impersonate you, but also to steal from you, to hold your data hostage, and to make your life a living hell. There is hardly any effective control over how this data is secured, and the more data is out there, the more interesting a target it is to criminals. Do you trust random individuals with your most private information? You probably don’t, and this is why you should care.

Protect your data

Mockup of a Plasma Mobile based phone

The only way to regain control before bad things happen is to make sure as little data as possible gets collected. Yet most electronic products out there do the exact opposite. Worse, the smartphone market is a duopoly of two companies, neither of which has the protection of its users as a goal. It’s just different flavors of bad.

There’s a hidden price to the cheap services of the Googles and Facebooks of this world, and that is the collection of data, which is then sold to third parties. Hardly any user is aware of the problems surrounding that.

KDE has set out to provide users an alternative. Plasma Mobile was created to give users a choice to regain control. We’re building an operating system, transparently, based on the values of Free software and we build it for users to take back control.

Purism and KDE

In the past week, we’ve worked with Purism, a Social Purpose Corporation devoted to bringing security, privacy, software freedom, and digital independence to everyone’s personal computing experience, to create a mobile phone that allows users to regain control.
Purism has started a crowdfunding campaign to collect the funds to make the dream of a security- and privacy-focused phone a reality.

Invest in your future

By supporting this campaign, you not only invest in your own future and become an early adopter of the first wave of privacy-protecting personal communication devices, but you also prove that there is a market for products that act in the best interest of their users.

Support the crowdfunding campaign, and help us protect you.

14 September, 2017 02:02PM

Ubuntu Insights: Security Team Weekly Summary: September 14, 2017

The Security Team weekly reports are intended to be very short summaries of the Security Team’s weekly activities.

If you would like to reach the Security Team, you can find us at the #ubuntu-hardened channel on FreeNode. Alternatively, you can mail the Ubuntu Hardened mailing list at: ubuntu-hardened@lists.ubuntu.com

During the last week, the Ubuntu Security team:

  • Triaged 234 public security vulnerability reports, retaining the 109 that applied to Ubuntu.
  • Published 6 Ubuntu Security Notices which fixed 15 security issues (CVEs) across 5 supported packages.

Ubuntu Security Notices

Bug Triage

Mainline Inclusion Requests

Updates to Community Supported Packages

  • Gianfranco Costamagna provided a debdiff for xenial for check-all-the-things (LP: #1597245)

  • Simon Quigley (tsimonq2) provided a debdiff for xenial for karchive (LP: #1712948)

  • James Cowgill (jcowgill) provided debdiffs for xenial and zesty for mbedtls (LP: #1714640)

Call for Testing

  • Updates for WordPress are available in the security-proposed PPA and are just waiting for some testing before being published. Jump into #ubuntu-hardened on Freenode and ping the security team member on community duty if you are interested in helping test this community supported package.

Development

What the Security Team is Reading This Week

Weekly Meeting

More Info

14 September, 2017 01:31PM

hackergotchi for Univention Corporate Server

Univention Corporate Server

Shed Light on the “IT jungle” with a Domain Controller

The professional structure of domains and the use of domain controllers bring order to IT infrastructures. This is especially important when organizations are growing rapidly, as professional domain management allows their IT to grow dynamically. Otherwise, the infrastructure becomes a kind of “patchwork carpet” of many small solutions and unorganized resources, some of which act independently of each other, may interfere with each other, and thus require a high level of maintenance. Not to mention the overhead of maintaining user accounts in several places and the risks associated with data replication, data protection, and system reliability.

In the following article, we first explain briefly what a domain is and then describe the tasks of a domain controller. Finally, we take a practical look at how the concept of domains and domain controllers has been implemented in Univention Corporate Server.

The Concept of a Domain

A domain is a conceptual entity that is characterized by a common security and trust context. This means that the members of the domain know and trust each other. External systems and users do not have access to the resources and services provided within the domain, such as computers, files, etc.

Structure of a Domain

Members of a domain can be, for example, users and groups, but also client computers and server systems. The core component of such a domain is the information about who is a member and how this member can authenticate, i.e., prove its membership.

The Domain Controller: Definition and Task

The management of this information is done by at least one server system, which is a member of the domain and, by virtue of this role, is designated a domain controller. It controls and manages the domain and the information belonging to it. In small environments, one domain controller is often sufficient. In medium and large environments, several domain controllers are generally used for reasons of fault tolerance and load distribution. All data is automatically synchronized between these domain controllers (keyword: replication).

In such a domain, various services are offered for which authentication is necessary; that is, users and computers must prove that they are members of the domain before they are granted access. Examples are file and print services. Various methods can be used for authentication, such as LDAP authentication, RADIUS, or Kerberos (see below).

Domains Need Names

A domain also always has a name. Computers such as clients and server systems that are members of the domain have a so-called Fully Qualified Domain Name (FQDN), which is composed of the host name and the domain name:

Domain name: intranet.example.org
FQDN:        ucs-01.intranet.example.org
             ^----^ ^------------------^
           Host name      Domain name

Through this FQDN, systems and services in the network can be identified and located.
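The split between host name and domain name can be sketched with plain shell parameter expansion (the values below are the illustrative ones from the diagram above):

```shell
# Split an FQDN into its host name and domain name parts
FQDN="ucs-01.intranet.example.org"
HOST="${FQDN%%.*}"    # everything before the first dot -> ucs-01
DOMAIN="${FQDN#*.}"   # everything after the first dot  -> intranet.example.org
echo "Host: $HOST, Domain: $DOMAIN"
```

On a domain member, `hostname -f` typically prints this FQDN directly.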

Domain Services for Authentication

UCS creates exactly such a domain, manages user and computer data, controls access rights, and provides various supporting services for authentication.

This includes:

  • OpenLDAP as directory service – using, for example, the LDAP base dc=intranet,dc=example,dc=org
  • Kerberos – using, for example, the Kerberos realm INTRANET.EXAMPLE.ORG
  • DNS – required by Kerberos in particular, but also by many other services, for working name resolution – for example with the DNS zone intranet.example.org
  • SAML – for web-based Single Sign-On
  • Optional:
    • RADIUS – for example for WLAN
    • Active Directory – for example with the LDAP base DC=intranet,DC=example,DC=org
    • NetBIOS / WINS – for example INTRANET

These services, which can in principle also be operated independently, are pre-configured and interconnected in UCS in such a way that they operate consistently within the same domain. As a result, UCS provides an optimal basis for operating a heterogeneous IT environment in which various systems and services can be connected via the domain functionality.
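The naming conventions in the list above can be sketched in a few lines of shell: the LDAP base and the Kerberos realm are both conventionally derived from the DNS domain name (the values here are the illustrative ones used throughout this article; in practice, UCS computes them during domain setup):

```shell
# Derive the conventional LDAP base and Kerberos realm from a DNS domain name
DOMAIN="intranet.example.org"
LDAP_BASE="dc=$(echo "$DOMAIN" | sed 's/\./,dc=/g')"          # dc=intranet,dc=example,dc=org
REALM="$(echo "$DOMAIN" | tr '[:lower:]' '[:upper:]')"        # INTRANET.EXAMPLE.ORG
echo "LDAP base:      $LDAP_BASE"
echo "Kerberos realm: $REALM"
```

This is why the LDAP base, Kerberos realm, and NetBIOS name in the list all carry the same domain identity in different notations.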

Further information on UCS’s authentication services can be found at Login and Authentication Services.

We’d be happy if this article brought you some light into the “domain jungle”.

The post Shed Light on the “IT jungle” with a Domain Controller first appeared on Univention.

14 September, 2017 12:14PM by Michael Grandjean

hackergotchi for Ubuntu developers

Ubuntu developers

Didier Roche: Ubuntu GNOME Shell in Artful: Day 12

We’ll focus today on our advanced user base. We, of course, try to keep our default user experience as comprehensible as possible for the wider public, but we want as well to think about our more technical users by fine tuning the experience… and all of this, obviously, while changing our default session to use GNOME Shell. For more background on our current transition to GNOME Shell in artful, you can refer back to our decisions regarding our default session experience as discussed in my blog post.

Day 12: Alt-tab behavior for advanced users.

Some early feedback we got (probably from people used to Unity or other desktop environments) is that application switching via Alt-Tab isn’t something that comes naturally to them. However, we still think that this model fits better in general; this was a shared conclusion of both GNOME Shell and Unity. People who disagree can still install one of the many GNOME Shell extensions for this.

When digging a little deeper, we see that the typical class of users complaining about that model is power users, who have multiple windows of the same application (typically terminals) and want to quickly switch between:

  • the last two windows of the current application
  • the focused windows of the last two applications

The first case is covered by [Alt] + [key above Tab] (and that won’t change even for ex-Unity users)1. However, the second case isn’t.

That can lead to a frustrating experience if you have a window (typically a browser) sitting in the background to read documentation and a terminal on top. If you want to quickly switch back and forth to your terminal (having multiple terminal windows), you end up with:

Note that I started from one terminal window and a partially covered browser window, and ended up, after two quick alt-tabs, with two terminal windows covering the browser application.

We want to experiment again with quick alt-tab. A quick alt-tab is an alt-tab released before the switcher appears (the switcher doesn’t appear right away, to avoid flickering). In that case, we can switch between the last focused windows of the last two applications; in the previous example, that would put us back in the initial state. However, if you wait long enough for the switcher to be displayed, you are in “application mode”, where you switch between applications, and the default (if you don’t go into the thumbnail preview) is to raise all windows of the selected application. That forced us to slightly increase the delay before the switcher appears.

That was the default of our previous user experience, and we never received bug reports about that behavior, which suggests it fits both power users and more traditional users (who mostly have one window per application and so are not impacted, or who use the application switcher itself rather than quick alt-tab, and the switcher’s behavior doesn’t change). We proposed a patch and opened a discussion upstream to see how we can converge on this idea, which might evolve and be refined in a later release to apply only to terminals, as the discussion seems to suggest. For now, as usual, this change is limited to the ubuntu session and doesn’t impact the vanilla GNOME one.

Here is the result:

Another slight inconsistency was [Alt] + [key above Tab] in the switcher itself. Some GNOME Shell releases ago, entering the window preview mode in the switcher let you select a particular window instance, but still raised all other windows of the selected application; the selected window was simply placed on top. Later, this behavior was changed to raise only the selected window.

While having [Alt] + [Tab] followed by [Alt] + [key above Tab] select the second window of the current app directly made sense in the first approach (after all, selecting the first window had the same effect as selecting the whole app), now that only the selected window is raised, it makes sense to select the first window of the current application on the initial key press. Furthermore, that’s already the behavior of pressing the [down] arrow key in the alt-tab switcher. For Ubuntu Artful, this behavior is likewise only available in the ubuntu session; however, upstream is considering the patch, and you might see it in the next GNOME release.

As usual, if you are eager to experiment with these changes before they migrate to the artful release pocket, you can head over to our official Ubuntu desktop team transitions ppa to get a taste of what’s cooking!

We hope to find a way to satisfy both advanced and casual users, striking the right balance between the two use cases. Focusing on a wider audience doesn’t necessarily mean we can’t keep the logical flow compatible with other types of users.


  1. Note that I didn’t write [Alt] + [~] because any sane keyboard layout would have a ² key above tab. :)

14 September, 2017 10:25AM

hackergotchi for ev3dev

ev3dev

Announcing ev3dev-jessie-2017-09-14 Release

This is a stable maintenance release. We’ve pulled in the updates and security fixes from Debian 8.9 and the most recent ev3dev kernel.

There is a fix that should make Bluetooth start more reliably on the EV3 (previously, Bluetooth could show as “Unavailable” until the EV3 was rebooted).

The only other notable change is for those with the color screen hack: a crash at boot is fixed.

Most of our work has been focused on ev3dev-stretch. It is nearing beta quality, so you might want to check it out.

Download

You can find SD card images on our download page.

14 September, 2017 12:00AM by @dlech

September 13, 2017

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: Kernel Team Summary – September 13, 2017

September 05 through September 12

Development (Artful / 17.10)

https://wiki.ubuntu.com/ArtfulAardvark/ReleaseSchedule

Important upcoming dates:

      Final Beta - Sept 28 (~2 weeks away)
      Kernel Freeze - Oct 5 (~3 weeks away)
      Final Freeze - Oct 12 (~4 weeks away)
      Ubuntu 17.10 - Oct 19 (~5 weeks away)
   

We intend to target a 4.13 kernel for the Ubuntu 17.10 release. A 4.13 based kernel is available for testing from the artful-proposed pocket of the Ubuntu archive. As a reminder, the Ubuntu 17.10 Kernel Freeze is Thurs Oct 5, 2017.

Stable (Released & Supported)

  • The Xenial kernel and its derivatives were re-spun to address http://bugs.launchpad.net/bugs/1715636 “Xenial update to 4.4.78 stable release broke Address Sanitizer”.

  • The following kernels were promoted to -proposed for testing:

      zesty   4.10.0-34.38
      xenial  4.4.0-95.118
      trusty  3.13.0-131.180
    
    
      zesty/raspi2       4.10.0-1017.20
      xenial/raspi2      4.4.0-1072.80
      xenial/snapdragon  4.4.0-1074.79
      xenial/aws         4.4.0-1033.42
      xenial/gke         4.4.0-1029.29
      xenial/gcp         4.10.0-1005.5
      xenial/hwe         4.10.0-34.38~16.04.1
      trusty/lts-xenial  4.4.0-94.117~14.04.1
    
  • Current cycle: 25-Aug through 16-Sep

              25-Aug  Last day for kernel commits for this cycle.
     28-Aug - 02-Sep  Kernel prep week.
     03-Sep - 15-Sep  Bug verification & Regression testing.
              18-Sep  Release to -updates.
    
    
    
  • Next cycle: 15-Sep through 07-Oct

               15-Sep  Last day for kernel commits for this cycle.
      18-Sep - 23-Sep  Kernel prep week.
      24-Sep - 06-Oct  Bug verification & Regression testing.
               09-Oct  Release to -updates.
    

Misc

13 September, 2017 07:46PM