June 30, 2016

hackergotchi for Ubuntu developers

Ubuntu developers

Daniel Holbach: Snappy Playpen event next Tuesday

Next Tuesday, 5th July, we will hold our next Snappy Playpen event. As always, we will work together on snapping software for our repository on GitHub. Whatever app, service or piece of software you bring is welcome.

The focus of last week was ironing out issues and documenting what we currently have. Some outcomes of this were:

We want to continue this work, but add a new side to it: upstreaming our work. It is great that we get snaps working, but it is much better if the upstream project in question can take ownership of the snaps themselves. Having a snapcraft.yaml in their source tree will make this a lot easier. To kick off this work, we have started some documentation on how best to do that and are tracking this effort.
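A minimal snapcraft.yaml carried in an upstream source tree might look like this sketch (the project name, part name and command are hypothetical; the fields follow the conventions of snapcraft 2.x):

```yaml
# Hypothetical minimal snapcraft.yaml for an upstream project
name: hello-upstream       # snap name, as registered in the store
version: '1.0'
summary: Example upstream snap
description: A minimal example of a snapcraft.yaml kept in the upstream tree.
confinement: strict
parts:
  hello:
    plugin: autotools      # pick the plugin matching the project's build system
    source: .
apps:
  hello:
    command: hello
```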

You are all welcome at the event and we look forward to working together with you. Coordination happens in #snappy on Freenode and on Gitter. We will make sure all our experts are around to help you if you have questions.

Looking forward to seeing you there!

30 June, 2016 03:39PM

Ubuntu Podcast from the UK LoCo: S09E18 – Suspicious Package

It’s Episode Eighteen of Season Nine of the Ubuntu Podcast! Alan Pope, Mark Johnson, Laura Cowen and Martin Wimpress are connected and speaking to your brain.

We’re here again!

In this week’s show:

  • We discuss the snap packaging format.

  • We also discuss going to Download Festival and discovering Open Store.

  • We share a Command Line Lurve: Clonezilla, which is an amazing way to copy bits from one hard disk to another.

  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!

    • Andy Smith at the brilliant Bitfolk upgraded our VPS data transfer allowance without us even asking! Go buy your VPS from them!
    • Entroware have released another beast of a Laptop, worth looking into
    • David Wolski told us how to monitor progress using dd itself. Here are the examples he gave in the show:
      sudo dd if=raspbian.img of=/dev/sdb bs=512 status=progress
      pv bigfile.iso | md5sum
    • Asa emailed with similar tips. Here are those examples too:
      killall -USR1 dd
      watch -n 5 "killall -USR1 dd"
  • This week’s cover image is taken from Wikimedia.
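The killall -USR1 trick above works because GNU dd prints its I/O statistics to stderr when it receives SIGUSR1. A self-contained sketch of the mechanism, copying /dev/zero to /dev/null so no real disk is touched:

```shell
# Start a long-running copy in the background, with stderr captured;
# /dev/zero -> /dev/null means nothing real is written
dd if=/dev/zero of=/dev/null bs=1M count=200000 2>dd.log &
DD_PID=$!
sleep 1
kill -USR1 "$DD_PID"     # GNU dd reacts by printing records in/out and throughput
sleep 1
kill "$DD_PID" 2>/dev/null
wait "$DD_PID" 2>/dev/null || true
cat dd.log               # the captured progress snapshot
```

The `status=progress` variant shown above does the same thing continuously, but requires a reasonably recent GNU coreutils.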

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

30 June, 2016 02:00PM


Linux Mint 18 “Sarah” MATE released!

The team is proud to announce the release of Linux Mint 18 “Sarah” MATE Edition.

Linux Mint 18 Sarah MATE Edition

Linux Mint 18 is a long term support release which will be supported until 2021. It comes with updated software and brings refinements and many new features to make your desktop even more comfortable to use.

New features:

This new version of Linux Mint contains many improvements.

For an overview of the new features please visit:

“What’s new in Linux Mint 18 MATE”.

Important info:

The release notes provide important information about known issues, as well as explanations, workarounds and solutions.

To read the release notes, please visit:

Release Notes for Linux Mint 18 MATE

System requirements:

  • 512MB RAM (1GB recommended for comfortable usage).
  • 9GB of disk space (20GB recommended).
  • 1024×768 resolution (on lower resolutions, press ALT to drag windows with the mouse if they don’t fit on the screen).


  • The 64-bit ISO can boot with BIOS or UEFI.
  • The 32-bit ISO can only boot with BIOS.
  • The 64-bit ISO is recommended for all modern computers (almost all computers sold in the last 10 years are equipped with 64-bit processors).

Upgrade instructions:

  • If you are running the BETA, click the refresh button in your Update Manager and apply any outstanding level 1 updates. Note also that samba was removed in the stable release as it negatively impacted boot speed. To remove samba, open a terminal and type “apt purge samba”.
  • It will also be possible to upgrade from Linux Mint 17.3. Upgrade instructions will be published next month after the stable release of Linux Mint 18.

Download links:

Here are the download links for the 64-bit ISO:

A 32-bit ISO image is also available at https://www.linuxmint.com/download_all.php.

Integrity and authenticity checks:

Once you have downloaded an image, please verify its integrity and authenticity.

Anyone can produce fake ISO images; it is your responsibility to check that you are downloading the official ones.
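The integrity check boils down to comparing checksums. Here is a self-contained sketch of the mechanism, using a stand-in file in place of the real download (for authenticity you would additionally verify the GPG signature on the checksum file with the distribution's signing key):

```shell
# Create a stand-in "ISO" and a checksum list, then verify it;
# the same flow applies to the real ISO and its sha256sum.txt
printf 'dummy image data' > linuxmint-18.iso
sha256sum linuxmint-18.iso > sha256sum.txt
sha256sum -c sha256sum.txt     # prints "linuxmint-18.iso: OK" when the hash matches
```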


We look forward to receiving your feedback. Thank you for using Linux Mint and have a lot of fun with this new release!

30 June, 2016 11:45AM by Clem

Linux Mint 18 “Sarah” Cinnamon released!

The team is proud to announce the release of Linux Mint 18 “Sarah” Cinnamon Edition.

Linux Mint 18 Sarah Cinnamon Edition

Linux Mint 18 is a long term support release which will be supported until 2021. It comes with updated software and brings refinements and many new features to make your desktop even more comfortable to use.

New features:

This new version of Linux Mint contains many improvements.

For an overview of the new features please visit:

“What’s new in Linux Mint 18 Cinnamon”.

Important info:

The release notes provide important information about known issues, as well as explanations, workarounds and solutions.

To read the release notes, please visit:

Release Notes for Linux Mint 18 Cinnamon

System requirements:

  • 512MB RAM (1GB recommended for comfortable usage).
  • 9GB of disk space (20GB recommended).
  • 1024×768 resolution (on lower resolutions, press ALT to drag windows with the mouse if they don’t fit on the screen).


  • The 64-bit ISO can boot with BIOS or UEFI.
  • The 32-bit ISO can only boot with BIOS.
  • The 64-bit ISO is recommended for all modern computers (almost all computers sold in the last 10 years are equipped with 64-bit processors).

Upgrade instructions:

  • If you are running the BETA, click the refresh button in your Update Manager and apply any outstanding level 1 updates. Note also that samba was removed in the stable release as it negatively impacted boot speed. To remove samba, open a terminal and type “apt purge samba”.
  • It will also be possible to upgrade from Linux Mint 17.3. Upgrade instructions will be published next month.

Download links:

Here are the download links for the 64-bit ISO:

A 32-bit ISO image is also available at https://www.linuxmint.com/download_all.php.

Integrity and authenticity checks:

Once you have downloaded an image, please verify its integrity and authenticity.

Anyone can produce fake ISO images; it is your responsibility to check that you are downloading the official ones.


We look forward to receiving your feedback. Thank you for using Linux Mint and have a lot of fun with this new release!

30 June, 2016 11:43AM by Clem

hackergotchi for ARMBIAN


Orange Pi+ 2

7z archives can be uncompressed with 7-Zip on Windows, Keka on Mac and 7z on Linux (apt-get install p7zip-full). RAW images can be written with Rufus (Windows XP/7/8/10) or dd on Linux/Mac.
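Writing a RAW image with dd follows the usual if=/of= pattern. A sketch using scratch files instead of a real SD card (on real hardware, of= would be the card device, e.g. /dev/sdX, which overwrites everything on it):

```shell
# Create a small fake image, "write" it with dd, and verify the copy
dd if=/dev/zero of=demo.img bs=1M count=4 status=none
dd if=demo.img of=card.img bs=1M status=none   # on real hardware: of=/dev/sdX
cmp demo.img card.img && echo "image written correctly"
```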

  • Debian Wheezy, Jessie or Ubuntu Trusty, Xenial based. Compiled from scratch
  • Install images are reduced to actual data size with small reserve
  • Root password is 1234. You will be prompted to change this password and to create a normal user at first login.
  • First boot takes longer (around 3 min) than usual (20 s) because it updates the package list, regenerates SSH keys and expands the partition to fit your SD card. It might reboot once automatically. The second boot also takes a little longer (around 3 min) because it creates a 128MB emergency swap space
  • Ready to compile external modules. Tested with this wireless adapter
  • Ethernet adapter with DHCP and SSH server ready on default port (22)
  • Wireless adapter with DHCP ready if present but disabled (/etc/network/interfaces, WPA2: normal connect or AP mode)
  • desktop environment upgrade ready
  • NAND, SATA, eMMC and USB install script is included (nand-sata-install)
  • Serial console enabled
  • Enabled automatic security update download for basic system and kernel. Upgrades are done via standard apt-get upgrade method
  • Login script shows: board name in large text, distribution base, kernel version, system load, uptime, memory usage, IP address, CPU temp, drive temp, ambient temp from a Temper sensor if present, SD card usage, battery condition and number of updates to install.
Performance tweaks

  • /tmp & /log = RAM, ramlog app saves logs to disk daily and on shut-down (Wheezy and Jessie w/o systemd)
  • automatic IO scheduler. (check /etc/init.d/armhwinfo)
  • journal data writeback enabled. (/etc/fstab)
  • commit=600 to flush data to the disk every 10 minutes (/etc/fstab)
  • optimized CPU frequency scaling 480-1010MHz (392-996MHz @Freescale, 600-2000MHz @Exynos & S905) with interactive governor (/etc/init.d/cpufrequtils)
  • eth0 interrupts are using dedicated core (Allwinner based boards)
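The journal tweaks above map to ext4 mount options; here is a hypothetical /etc/fstab line illustrating them (the device name is made up, and for a root filesystem the writeback journal mode may also need to be enabled on the device with `tune2fs -o journal_data_writeback`):

```
# sketch of an /etc/fstab entry with the tweaks described above:
# data=writeback relaxes journal ordering; commit=600 flushes every 10 minutes
/dev/mmcblk0p1  /  ext4  defaults,noatime,nodiratime,data=writeback,commit=600  0  1
```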
Legacy kernel


  • Ubuntu Trusty or Debian Jessie based (where Ubuntu fails)
  • HW accelerated video playback where possible (legacy kernel only)
  • MALI Open GLES on A10 / A20 / H3 (legacy kernel only)
  • Pre-installed: Firefox, LibreOffice Writer, Thunderbird
  • Lightweight XFCE desktop
  • Autologin when a normal user is created – no login manager (/etc/default/nodm)

30 June, 2016 09:22AM by igorpecovnik

June 29, 2016

hackergotchi for Ubuntu developers

Ubuntu developers

Kubuntu: Kubuntu Podcast goes Open and Unplugged

Podcast fans will know that we were struck down with lucky show thirteen. Google Hangouts crashed out twice, and we lost the live stream. We ended up half an hour late, with no Hangouts, and a makeshift YouTube live stream hooked together in record time by the #awesome Ovidiu-florin Bogdan.

The upside of this is that we were rescued again by the amazing Big Blue Button.

We have decided that we are going to move to using Big Blue Button permanently for the Podcast show, which is great news for you in the audience.

Why?

It means that you can join us on the show live. That’s right: you too can join us in the Big Blue Button conference server whilst we are making and recording the show. Maybe you just want to listen in and watch live, or perhaps ask questions and make comments in the built-in chat system.

Of course you can take it a step further, join our audio conference bridge and interact, chat, make comments and ask questions, provided you use the “Hand Up” feature to grab our attention first.

So come and join us in Room 1 of the Kubuntu Big Blue Button Conference Server. Password is welcome.

Wednesday 6th July at 19:00 UTC

To get the access details drop by IRC a few minutes before the show starts, at freenode.net #kubuntu-podcast. Or you can join IRC directly from this website via the embedded IRC client on our Podcast page.

29 June, 2016 09:32PM

Colin King: What's new in stress-ng 0.06.07?

Since my last blog post about stress-ng, I've pushed out several more small releases that incorporate new features and (as ever) a bunch more bug fixes.  I've been eyeballing gcov kernel coverage stats to find more regions in the kernel where stress-ng needs to exercise.  Also, testing on a range of hardware (arm64, s390x, etc) and a range of kernels has shaken out some bugs and helped me to improve stress-ng.  So what's new?

New stressors:
  • ioprio  - exercises ioprio_get(2) and ioprio_set(2) (I/O scheduling classes and priorities)
  • opcode - generates random object code and executes it, generating and catching illegal instructions, bus errors, segmentation faults, traps and floating point errors.
  • stackmmap - allocates a 2MB stack that is memory mapped onto a temporary file. A recursive function works down the stack and flushes dirty stack pages back to the memory mapped file using msync(2) until the end of the stack is reached (stack overflow). This exercises dirty page and stack exception handling.
  • madvise - applies random madvise(2) advise settings on pages of a 4MB file backed shared memory mapping.
  • pty - exercise pseudo terminal operations.
  • chown - trivial chown(2) file ownership exerciser.
  • seal - fcntl(2) file SEALing exerciser.
  • locka - POSIX advisory locking exerciser.
  • lockofd - fcntl(2) F_OFD_SETLK/GETLK open file description lock exerciser.
Improved stressors:
  • msg: add in IPC_INFO, MSG_INFO, MSG_STAT msgctl calls
  • vecmath: add more ops to make vecmath more demanding
  • socket: add --sock-type socket type option, e.g. stream or seqpacket
  • shm and shm-sysv: add msync'ing on the shm regions
  • memfd: add hole punching
  • mremap: add MAP_FIXED remappings
  • shm: sync, expand, shrink shm regions
  • dup: use dup2(2)
  • seek: add SEEK_CUR, SEEK_END seek options
  • utime: exercise UTIME_NOW and UTIME_OMIT settings
  • userfaultfd: add zero page handling
  • cache:  use cacheflush() on systems that provide this syscall
  • key:  add request_key system call
  • nice: add some randomness to the delay to unsync nicenesses changes
If any new features land in Linux 4.8 I may add stressors for them, but for now I suspect that's about it for the big changes to stress-ng for the Ubuntu Yakkety 16.10 release.

29 June, 2016 04:46PM by Colin Ian King (noreply@blogger.com)

Ubuntu App Developer Blog: Snapcraft 2.12: an ecosystem of parts, qmake and gulp

Snapcraft 2.12 is here and is making its way to your 16.04 machines today.

This release takes Snapcraft to a whole new level. For example, instead of defining your own project parts, you can now use and share them from a common, open, repository. This feature was already available in previous versions, but is now much more visible, making this repo searchable and locally cached.

Without further ado, here is a tour of what’s new in this release.


2.12 introduces ‘snapcraft update’, ‘search’ and ‘define’, which bring more visibility to the Snapcraft parts ecosystem. Parts are pieces of code for your app that can also help you bundle libraries, set up environment variables and handle other tedious tasks app developers are familiar with.

They are literally parts you aggregate and assemble to create a functional app. The benefit of using a common tool is that these parts can be shared among developers. Here is how you can access this repository.

  • snapcraft update: refresh the list of remote parts
  • snapcraft search: list and search remote parts
  • snapcraft define: display information and content about a remote part


To get a sense of how these commands are used, have a look at the above example, then you can dive into details and what we mean by “ecosystem of parts”.

Snap name registration

Another command you will find useful is the new ‘register’ one. Registering a snap name reserves that name in the store.

  • snapcraft register


As a vendor or upstream, you can secure snap names when you are the publisher of what most users expect to see under this name.

Of course, this process can be reverted and disputed. Here is what the store workflow looks like when I try to register an already registered name:


On the name registration page of the store, I’m going to try to register ‘my-cool-app’, which already exists.


I’m informed that the name has already been registered, but I can dispute this or use another name.


I can now start a dispute process to retrieve ownership of the snap name.

Plugins and sources

Two new plugins have been added for parts building: qmake and gulp.


The qmake plugin has been requested since the beginning of the project, and we have seen many custom versions written to fill this gap. Here is what the default qmake plugin allows you to do:

  • Pass a list of options to qmake
  • Specify a Qt version
  • Declare a list of .pro files to pass to the qmake invocation
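Put together, a part using the qmake plugin might look like this sketch (the part name and values are illustrative, and the option keys are assumptions mirroring the bullet list above rather than verified syntax):

```yaml
parts:
  my-qt-app:
    plugin: qmake
    source: .
    options: [CONFIG+=release]     # extra arguments passed to qmake
    qt-version: qt5                # which Qt version to build against
    project-files: [my-qt-app.pro]
```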


The hugely popular nodejs builder gulp is now a first-class citizen in Snapcraft. It inherits from the existing nodejs plugin and allows you to:

  • Declare a list of gulp tasks
  • Request a specific nodejs version


SVN is still a major version control system and thanks to Simon Quigley from the Lubuntu project, you can now use svn: URIs in the source field of your plugins.


Many other fixes made their way into the release, with two highlights:

  • You can now use hidden .snapcraft.yaml files
  • ‘snapcraft cleanbuild’ now creates ephemeral LXC containers and won’t clutter your drive anymore

The full changelog for this milestone is available here and the list of bugs in sight for 2.13 can be found here. Note that this list will probably change until the next release, but if you have a Snapcraft itch to scratch, it’s a good list to pick your first contribution from.

Install Snapcraft

On Ubuntu

Simply open up a terminal with Ctrl+Alt+T and run these commands to install Snapcraft from the Ubuntu archives on Ubuntu 16.04 LTS:

sudo apt update
sudo apt install snapcraft

On other platforms

Get the Snapcraft source code ›

Get snapping!

There is a thriving community of developers who can give you a hand getting started or unblock you when creating your snap. You can participate and get help in multiple ways:

29 June, 2016 03:20PM by David Callé (david.calle@canonical.com)

Kubuntu Wire: Plasma 5.6.5 and Frameworks 5.23 available in Kubuntu 16.04 Backports



1. sudo apt-add-repository ppa:kubuntu-ppa/backports
2. sudo apt update
3. sudo apt full-upgrade -y

29 June, 2016 02:43PM


Monthly News – June 2016


The Cinnamon and MATE editions of Linux Mint 18 will be announced this week. They successfully passed QA (quality testing) yesterday and they’re on their way towards a stable release.

I’d like to thank all the people who participated in testing the BETA. Your feedback helped us a lot: 1229 comments were processed, 64 bugs were fixed, boot speed was improved and new ideas were gathered for Linux Mint 18.1.

Many thanks also to all the people who fund Linux Mint. We received 419 donations in May. Your help is very important to us and we’re very proud to have your support.

Finally I’d like to thank the development team for their efforts both during the cycle and the BETA phase, and the moderation team for their diligence on the forums and the IRC.

I hope you’ll have a great time with Linux Mint 18, see you after the release 🙂


Linux Mint is proudly sponsored by:

Platinum Sponsors:
Private Internet Access
Gold Sponsors:
Linux VPS Hosting
Silver Sponsors:
Bronze Sponsors:
Vault Networks *
AYKsolutions Server & Cloud Hosting
7L Networks Toronto Colocation *
BGASoft Inc
David Salvo
Milton Security Group
Sysnova Information Systems
Community Sponsors:

Donations in May:

A total of $7882 was raised thanks to the generous contributions of 419 donors:

$250 (3rd donation), Kuberan N.
$111 (3rd donation), Tibor aka “tibbi
$111, Jasper O.
$111, Carlo O.
$111, Jean-yves B.
$100 (7th donation), Samson S. aka “Samtastic”
$100, Gerald P. aka “Gerry”
$100, Richard W.
$100, PIXELTEX (Schweiz) GmbH
$100, Karoly H.
$100, Igor F.
$89, Hervé M.
$77, Robert L. T.
$75, Kevin B.
$70 (7th donation), Doug L.
$62, Heinz G.
$60, Laon E.
$55 (2nd donation), Rolandas R. aka “Elektronas
$55 (2nd donation), Przemysław K.
$55, S S.
$55, M. S. aka “Thx for your great job, HTH!”
$55, Michel L.
$55, Francis B.
$55, Kai V.
$55, Friedrich W. J.
$55, Garry M.
$55, Francois E.
$55, Jukka S.
$50 (10th donation), Anthony C. aka “ciak”
$50 (8th donation), Peter S. aka “Pierre”
$50 (5th donation), Carl G.
$50 (2nd donation), Eric S.
$50, Manfred K.
$50, Brian B.
$50, Christopher K.
$50, James Mc
$50, Andrew H.
$50, Hugh O.
$50, David B.
$50, Gerald J. G.
$50, Martin Langsholt aka “mlan”
$50, Andrew G.
$50, Will Y.
$40, Markus H.
$40, 진구 봉
$37 (3rd donation), Philip G. aka “-PGG-”
$37 (2nd donation), Jens G.
$35, Steven E.
$35, Sue A.
$33 (75th donation), Olli K.
$33 (3rd donation), Paul A.
$33 (2nd donation), Carsten M.
$33 (2nd donation), Andrew L.
$33, Glyn J.
$33, Michael W.
$33, Bioventure Consulting GmbH
$33, Klaus K.
$33, David J.
$33, Marc B.
$33, Rudolf J.
$30 (4th donation), Marek G.
$30 (3rd donation), Bo S. Y.
$30 (2nd donation), Matthew B.
$30, Paul H.
$30, NetGorilla
$30, John E. O.
$30, Thomas G.
$30, Tomas S.
$28 (25th donation), Mark W.
$28 (8th donation), J J. V. K.
$28, Andreas T.
$28, Gjc B.
$28, Krzysztof P.
$28, Jörg P.
$28, Harry B.
$26 (2nd donation), Candelario E.
$25 (58th donation), Ronald W.
$25 (8th donation), Jaan S.
$25 (5th donation), Guillaume C.
$25 (4th donation), Darek P.
$25 (3rd donation), Steven W.
$25 (3rd donation), Platypus Products
$25 (2nd donation), Andres G.
$25 (2nd donation), Randall M.
$25 (2nd donation), Dennis C.
$25, Nick M aka “Aggiemomo”
$25, Dr. B. D.
$25, Daniel Clarke aka “Dan”
$25, David Thomas
$25, Dennis C.
$25, Floyd B.
$25, Allan G.
$25, Donald S.
$25, Otis L.
$25, Srinivas K.
$23, Jorge F.
$22 (3rd donation), Johann J. A.
$22 (2nd donation), Franz Johannes Schütz aka “Josch”
$22 (2nd donation), Tandblekningspenna för vita tänder
$22 (2nd donation), Maurice G.
$22, Jorge F. M.
$22, Nard aka “Plons”
$22, Nicola F.
$22, Jean-pierre P.
$22, Martin L.
$22, Petros T.
$22, Nurettin G.
$22, Sven M.
$22, Robert Y.
$22, Stefan K.
$22, Vladimir L.
$22, Jeremy M.
$22, Juan B. S.
$22, Franck A.
$22, Tandblekningspenna för vita tänder
$22, Klaus S.
$22, Allart P.
$22, Ute S.
$22, David C.
$22, Vaughan T.
$21, MysticAli3n-Wear
$20.2, Victor H.
$20 (15th donation), Curt Vaughan aka “curtvaughan ”
$20 (13th donation), Jt Spratley aka “Go Live Lively
$20 (8th donation), Dave I.
$20 (7th donation), Kwan L.
$20 (5th donation), Michel S.
$20 (4th donation), Doug B. aka “Doug B.”
$20 (3rd donation), Mali aka “Saltman”
$20 (2nd donation), Psychics4Today.com
$20 (2nd donation), Srikanth B.
$20 (2nd donation), Erwin K.
$20 (2nd donation), Roger S.
$20 (2nd donation), Charles W.
$20 (2nd donation), Roger B.
$20, Anton M.
$20, Lloyd D.
$20, Kelvin S.
$20, Mark D.
$20, Marc B.
$20, Joe H.
$20, Jon H.
$20, Chris Coyle
$20, Terry B.
$20, Harry N.
$20, Ray J.
$20, David L.
$20, Mark N.
$18, Murray C.
$17 (3rd donation), Georg K.
$17, Alessandro P.
$17, Pieter T.
$17, Lorenzo F.
$17, Pablo M. S.
$17, Bostjan S.
$17, Richard L.
$17, Francisco J. D. S. F.
$17, Kārlis M.
$17, Andrew H.
$17, Georg B.
$17, Francesca S. S.
$17, John H.
$17, Mehmet Y.
$15 (6th donation), David W.
$15 (5th donation), Kirill C.
$15 (3rd donation), Felippe H D de Castro
$15 (2nd donation), Dmitry S.
$15, Oscar R.
$15, Eriks T.
$15, Derek T.
$15, Gabriel B.
$15, David K.
$15, Gregory D.
$14, Rene Breitinger aka “sL1k
$14, Simon C.
$14, Guenther S.
$13, Abdulkadir H.
$12 (62nd donation), Tony C. aka “S. LaRocca”
$12 (5th donation), Stefan M. H.
$12 (2nd donation), Syed Ammar
$12 (2nd donation), Juergen F.
$12, Tony M.
$11 (8th donation), Andreas S.
$11 (4th donation), Joshua R.
$11 (4th donation), Michael P. aka “www.perron.de
$11 (4th donation), Denis Besnard
$11 (4th donation), Paul G.
$11 (3rd donation), Jofre P. C.
$11 (3rd donation), Laurent H.
$11 (3rd donation), Tomi P.
$11 (3rd donation), François L.
$11 (2nd donation), Jimmy Bouma aka “jimbo_tank”
$11 (2nd donation), Bruno B.
$11 (2nd donation), JGB S.
$11 (2nd donation), Annette T.
$11, Louis L.
$11, Claire S.
$11, Rory T.
$11, Stephan V.
$11, Herbert E.
$11, Matthias H.
$11, Giovanni R.
$11, Tangi M.
$11, Raimond L.
$11, Tibor L. S.
$11, Stephan F.
$11, Leonardo L. T.
$11, Georg G.
$11, Kuno G. aka “nuko”
$11, Oliver B.
$11, Alexandre G.
$11, Henrik T.
$11, Mark H.
$11, Johannes K.
$11, Dirk H.
$11, Jorge B. P.
$11, Fernisse M.
$11, Vadim G.
$11, Rodolfo G. aka “kwendenarmo
$11, Rainer L.
$11, Evgeni W.
$11, Ive V.
$11, Andy G.
$11, James B.
$11, Sascha M. aka “Drakon
$11, Ziad J.
$11, Simon F.
$11, Silvia B.
$10 (52nd donation), Tsuguo S.
$10 (22nd donation), Carlos W.
$10 (10th donation), Nicolás Costa de la Colina aka “NCosta”
$10 (8th donation), Mike C.
$10 (8th donation), Jobs Near Me aka “Jobs Hiring
$10 (6th donation), Thomas C.
$10 (4th donation), Hemant Patel
$10 (3rd donation), 末次 英.
$10 (3rd donation), Tomi K.
$10 (3rd donation), Doug B. aka “Doug B.”
$10 (3rd donation), Peter K.
$10 (2nd donation), Leopoldo G.
$10 (2nd donation), Paul O.
$10 (2nd donation), Mattias E.
$10 (2nd donation), Charles S.
$10 (2nd donation), Glenn S.
$10 (2nd donation), Zach G.
$10 (2nd donation), Henri-ppc
$10 (2nd donation), Charles E.
$10, David G.
$10, Chris K.
$10, Zoran H.
$10, Michael S.
$10, Peter H.
$10, Egil J.
$10, Weliton J. D. S.
$10, John K.
$10, Brian G.
$10, Pradeep B.
$10, Nitin S.
$10, Pablo G.
$10, Peter F.
$10, AxiomTech Solutions
$10, Avinandan D.
$10, Andrzej G.
$10, Abdulrahman A.
$10, Olivier D.
$10, Hsiao R.
$10, Heinz H.
$10, Don W.
$10, Bjoern M.
$10, Dennis W.
$10, Mariano E.
$10, Michael M.
$10, James S.
$10, Steven J.
$10, Сапожников Д.
$10, Soumyashant N.
$10, John M.
$10, Andrej K.
$10, Tibor B.
$10, Romad F.
$10, Andrew M.
$10, Bob Holderman
$10, Thomas F.
$10, Vladimir S.
$10, Phelps W.
$10, Paulo F. D. S. M.
$10, Sheldon H.
$10, Douglas B.
$8 (16th donation), Toronto Maple Leafs
$8 (10th donation), Kevin O. aka “Kev”
$7.5, BoldView Group
$7 (6th donation), Yevgeniy A.
$6 (27th donation), Raymond E.
$6 (9th donation), Arvis Lacis aka “arvislacis
$6 (5th donation), Jeldert P.
$6 (3rd donation), Dominic R.
$6 (2nd donation), Antonio P.
$6 (2nd donation), Niko K.
$6 (2nd donation), Paweł B.
$6 (2nd donation), Stefan M.
$6 (2nd donation), Collin H.
$6 (2nd donation), Lars-erik J.
$6 (2nd donation), Giuseppino M.
$6 (2nd donation), Nereo S. S. aka “JoGary”
$6, Alessandro Z.
$6, Simon T.
$6, David B.
$6, José J. S. M.
$6, Reza Eftekhar aka “Reza Eftekhar”
$6, Gino C.
$6, Francois B. aka “Makoto
$6, Michele M.
$6, Gisela V. S.
$6, Sebastian B.
$6, Alvaro L. A.
$6, Samuel J.
$6, Vanja B. aka “Yorkin”
$6, J.C.Navarrete
$6, Steve K.
$6, Volker H.
$6, Petar P.
$5.01 (3rd donation), John M.
$5 (26th donation), LM aka “LinuxMint
$5 (18th donation), Kouji K.
$5 (16th donation), Libertad Tecnologica
$5 (13th donation), Miljenko D. aka “ljacmi”
$5 (9th donation), Hakim
$5 (5th donation), Artur T.
$5 (4th donation), Nicholas S.
$5 (4th donation), Robert M. aka “Moho”
$5 (4th donation), John V. D.
$5 (3rd donation), Vyacheslav K. aka “veZuk”
$5 (3rd donation), Cathi I.
$5 (2nd donation), RexAlan
$5 (2nd donation), Garrett C.
$5 (2nd donation), Grzegorz I.
$5 (2nd donation), Samarth M.
$5 (2nd donation), David S.
$5 (2nd donation), Richard Farrell
$5, 赵 国华
$5, Marcos R. S.
$5, William H.
$5, Лайкин В.
$5, Craig P.
$5, Fabio M.
$5, Petrov P.
$5, John H.
$5, John B. aka “Frogging101”
$5, Oscar O. P.
$5, Von L.
$5, Luis Torres aka “DavLu”
$5, Palo Internet Marketing, LLC.
$5, Subhadeep D.
$5, Поздеев К.
$5, John R.
$5, Shang J.
$5, Antonio R. D. M.
$5, Pawel J. aka “pjanek”
$5, Lynn T.
$5, Antti H.
$5, Sucipto
$5, 王 朝松
$5, Wei L.
$5, Zbynek Vrzalik aka “Zack”
$5, Malcolm.J aka “Gunner”
$5, John M.
$5, Sherwood L.
$5, Siong H. T.
$4 (3rd donation), Serbu I. aka “ionut.linux”
$4, Anthony G.
$4, Vladimir B.
$3.47, Beau R. aka “Dr. Delphius
$3 (18th donation), Kouji K.
$3 (4th donation), elogbookloan
$3 (3rd donation), Dawid W.
$3 (2nd donation), Samuel V. G.
$3, Gavin L.
$3, Artur B.
$3, Francisco J. M.
$3, Your Homework Help
$3, Ex-SSR Computers
$3, Varun B.
$3, Mark W.
$3, Erik S.
$3, Igor B.
$3, Erik P.
$3, Roman M.
$2.5, Eric E.
$2.2 (2nd donation), Chris R.
$2.05 (3rd donation), Michael M.
$2 (11th donation), sprintcowboy
$2 (4th donation), Serguei S.
$2 (2nd donation), Iguler I.
$2, Simon L.
$2, Alejandro C. N.
$2, Rainier
$2, Bodhik
$2, John Parfrey. aka “Sparkie
$14.37 from 21 smaller donations

If you want to help Linux Mint with a donation, please visit http://www.linuxmint.com/donors.php


  • Distrowatch (popularity ranking): 2995 (1st)
  • Alexa (website ranking): 8905

29 June, 2016 12:43PM by Clem

hackergotchi for SolydXK


Localized Portuguese ISOs

Lufilte has made the Portuguese localized ISOs available as 64-bit downloads:

Thanks Lufilte for these great ISOs!

29 June, 2016 06:49AM by Schoelje

hackergotchi for VyOS


MediaWiki update complete

MediaWiki has been updated successfully, in the end. It didn't exactly go smoothly, but it appears to work normally now.

Let us know if you spot any issues.

29 June, 2016 01:06AM by Yuriy Andamasov

June 28, 2016

hackergotchi for Ubuntu developers

Ubuntu developers

Aaron Honeycutt: SELF 2016

This post has been sitting in my drafts since the 19th; I just got a bit lazy about finishing it up and posting it, sorry!

This SELF (SouthEast LinuxFest) was as great as the one before… ok, maybe a little bit better, with all the beer sponsored by my favorite VPS provider Linode, and by Google. I mean A LOT of beer!



There was also a ton of Ubuntu devices at the booth, from gaming to convergence, plus a surprise visit from the UbuntuFL LoCo penguin!


I even found a BQ M10 Ubuntu Tablet out in the wild!



We also had awesome booth neighbors: System76 and Linode!



I loved this trip from exploring the city again to making new friends!





28 June, 2016 11:31PM

hackergotchi for VyOS


MediaWiki update

Hi everyone,

We are updating the MediaWiki setup, so the vyos.net website may be inaccessible for a brief period.

Sorry for the inconvenience!

28 June, 2016 10:50PM by Yuriy Andamasov

hackergotchi for Ubuntu developers

Ubuntu developers

Bryan Quigley: When should i386 support for Ubuntu end?

Are you running i386 (32-bit) Ubuntu?   We need your help to decide how much longer to build i386 images of Ubuntu Desktop, Server, and all the flavors.

There is a real cost to support i386 and the benefits have fallen as more software goes 64-bit only.

Please fill out the survey here ONLY if you currently run i386 on one of your machines.  64-bit users will NOT be affected by this, even if you run 32-bit applications.

28 June, 2016 08:04PM

hackergotchi for SolydXK


Localized Japanese and Italian versions

Balloon has made the Japanese localized ISOs available as 64-bit downloads:
or from Balloon’s site:

Belze has made the Italian localized ISOs available as 64-bit downloads

Thanks Balloon and Belze for these great ISOs!

28 June, 2016 05:40PM by Schoelje

hackergotchi for Blankon developers

Blankon developers

Yudha HT: Debian Packaging, #BlankOn @ Freenode, 9 June 2016

Just a quick update so that some writing gets produced.

On 9 June 2016 there was a Debian packaging class by Pak Mahyudin in #blankon on chat.freenode.net. Please check the IRC log here.

I did not attend it myself because I overslept, among other things. But through the IRC log I was able to follow along. The results can be seen at http://tempel.blankon.in/2115322.

Let's look forward to Pak Mahyudin's next contribution. I myself am waiting for an explanation of how the irgsh automation works.

28 June, 2016 03:38PM

hackergotchi for Ubuntu developers

Ubuntu developers

Zygmunt Krynicki: The /etc/os-release zoo

If you've ever wanted to do something differently depending on the contents of /etc/os-release but weren't in the mood to install every common distribution under the sun, look no further. I give you the /etc/os-release zoo project.
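os-release is a shell-sourceable key=value file, so branching on it takes only a few lines. A self-contained sketch using a sample file in place of the real /etc/os-release (field names per the os-release format; the sample values are illustrative):

```shell
# Write a sample os-release file; on a real system, source /etc/os-release instead
cat > os-release.sample <<'EOF'
NAME="Ubuntu"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04 LTS"
EOF
. ./os-release.sample
case "$ID" in
  ubuntu|debian)      echo "apt family: $PRETTY_NAME" ;;
  fedora|centos|rhel) echo "rpm family: $PRETTY_NAME" ;;
  *)                  echo "other: ${PRETTY_NAME:-unknown}" ;;
esac
```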

A project like this is never complete so please feel free to contribute additional distribution bits there.

28 June, 2016 12:16PM by Zygmunt Krynicki (noreply@blogger.com)

Canonical Design Team: Juju GUI 2.0

Juju is a cloud orchestration tool which enables users to build models to run applications. You can just as easily use it to deploy a simple WordPress blog or a complex big data platform. Juju is a command line tool but also has a graphical user interface (GUI) where users can choose services from a store, assemble them visually in the GUI, build relations and configure them with the service inspector.

Juju GUI allows users to

  • Add charms and bundles from the charm store
  • Configure services
  • Deploy applications to a cloud of their choice
  • Manage charm settings
  • Monitor model health

Over the last year we’ve been working on a redesign of the Juju GUI. This redesign project focused on improving four key areas, which also acted as our guiding design principles.

1. Improve the functionality of the core features of the GUI

  • Organised similar areas of the core navigation to create a better UI model.
  • Reduced the visual noise of the canvas and the inspector to help users navigate complex models.
  • Introduced a better flow between the store and the canvas to aid adding services without losing context.
Hero before
Hero after

‹ ›

Empty state of the canvas


Hero before
Hero after

‹ ›

Integrated store


Hero before
Hero after

‹ ›

Apache charm details


2. Reduce cognitive load and pace the user

  • Reduced the amount of interaction patterns to minimise the amount of visual translation.
  • Added animation to core features to inform users of the navigation model in an effort to build a stronger concept of home.
  • Created a symbiotic relationship between the canvas and the inspector to help navigation of complex models.
Hero before
Hero after

‹ ›

Mediawiki deployment


3. Provide an at-a-glance understanding of model health

  • Prioritised the hierarchy of status so users are always aware of the most pressing issues and can discern which part of the application is affected.
  • Easier navigation to units with a negative status to aid the user in triaging issues.
  • Used the same visual patterns throughout the web app so users can spot problematic issues.
Hero before
Hero after

‹ ›

Mediawiki deployment with errors


4. Surface functions and facilitate task-driven navigation

  • Established a new hierarchy based on key tasks to create a more familiar navigation model.
  • Redesigned the inspector from the ground up to increase discoverability of inspector led functions.
  • Simplified the visual language and interaction patterns to help users navigate at-a-glance and with speed to triage errors, configure or scale out.
  • Surfaced relevant actions at the right time to avoid cluttering the UI.
Hero before
Hero after

‹ ›

Inspector home view


Hero before
Hero after

‹ ›

Inspector errors view


Hero before
Hero after

‹ ›

Inspector config view


The project has been amazing, we’re really happy to see that it’s launched and are already planning the next updates.

28 June, 2016 10:39AM

Canonical Design Team: Design in the open

As the Juju design team grew it was important to review our working process and to see if we could improve it to create a more agile working environment. The majority of employees at Canonical work distributed around the globe, for instance the Juju UI engineering team has employees from Tasmania to San Francisco. We also work on a product which is extremely technical and feedback is crucial to our velocity.

We identified the following aspects of our process which we wanted to improve:

  • We used different digital locations for storing our design outcomes and assets (Google Drive, Google Sites and Dropbox).
  • The entire company used Google Drive so it was ideal for access, but its lacklustre performance, complex sharing options and poor image viewer meant it wasn’t good for designs.
  • We used Dropbox to store iterations and final designs but it was hard to maintain developer access for sharing and reference.
  • Conversations and feedback on designs in the design team and with developers happened in email or over IRC, which often didn’t include all interested parties.
  • We would often get feedback from teams after sign-off, which would cause delays.
  • Decisions weren’t documented so it was difficult to remember why a change had been made.

Finding the right tool

I’ve always been interested in the concept of designing in the open. Benefits of the practice include being more transparent, faster and more efficient. They also give the design team more presence and visibility across the organisation. Kasia (Juju’s project manager) and I went back and forth on which products to use and eventually settled on GitHub (GH).

The Juju design team works in two week iterations and at the beginning of a new iteration we decided to set up a GH repo and trial the new process. We outlined the following rules to help us start:

  • Issues should be created for each project.
  • All designs/ideas/wireframes should be added inline to the issues.
  • All conversations should be held within GH, no more email or IRC conversations, and notes from any meetings should be added to relevant issues to create a paper trail.


As the iteration went on, feedback started rolling in from the engineering team without us requesting it. A few developers mentioned how cool it was to see how the design process unfolded. We also saw a lot of improvement in the Juju design team: it allowed us to collaborate more easily and it was much easier to keep track of what was happening.

At the end of the trial iteration, during our clinic day, we closed completed issues and uploaded the final assets to the “code” section of the repo, creating a single place for our files.

After the first successful iteration we decided to carry this on as a permanent part of our process. The full range of benefits of moving to GH are:

  • Most employees of Canonical have a GH account and can see our work and provide feedback without needing to adopt a new tool.
  • Project management and key stakeholders are able to see what we’re doing, how we collaborate, why a decision has been made and the history of a project.
  • Provides us with a single source for all conversations which can happen around the latest iteration of a project.
  • One place where anyone can view and download the latest designs.
  • A single place for people to request work.


As a result of this change our designs are more accessible, which allows developers and stakeholders to comment and collaborate with the design team, aiding our agile process. Below is an example thread where you can see how GH is used in the process. It shows how we designed the new contextual service block actions.


28 June, 2016 09:52AM

Ubuntu App Developer Blog: New Ubuntu SDK Beta Version

A few days ago we released the first Beta of the Ubuntu SDK IDE, which uses the LXD container solution to build and execute applications.

The first reports were positive, however one big problem was discovered pretty quickly:

Applications would not start on machines using the proprietary Nvidia drivers. The reason is that indirect GLX is not allowed by default with those drivers. The applications need to have access to:

  1. The glx libraries for the currently used driver
  2. The DRI and Nvidia device files

Luckily the snappy team already tackled a similar problem, so thanks to Michael Vogt (a.k.a mvo) we had a first idea how to solve it by reusing the Nvidia binaries and device files from the host by mounting them into the container.

However, it is a bit more complicated in our case, because once the devices and directories are mounted into a container they stay there permanently. This is a problem because the Nvidia binary directory carries a version number, e.g. /usr/lib/nvidia-315, which changes with the currently loaded module. After a driver change, the container would either fail to boot once the old directory on the host was gone, or use the wrong Nvidia directory if it was still present.

The situation gets worse with Optimus graphics cards, where the user can switch between an integrated and a dedicated graphics chip, which means device files in /dev can come and go between reboots.

Our solution to the problem is to check the integrity of the containers on every start of the Ubuntu SDK IDE and if problems are detected, the user is informed and asked for the root password to run automatic fixes. Those checks and fixes are implemented in the “usdk-target” tool and can be used from the CLI as well.

As a bonus this work will enable direct rendering for other graphics chips as well; however, since we do not have access to all possible chips, there might still be special cases that we could not catch.

So please report all problems to us on one of those channels:

We have released the new tool into the Tools-development PPA where the first beta was released too. However, existing containers might not be fixed completely automatically. They are best recreated, or fixed manually. To fix an existing container manually, use the maintain mode from the options menu and add the current user to the “video” group.

To get the new version of the IDE please update the installed Ubuntu SDK IDE package:

$ sudo apt-get update && sudo apt-get install ubuntu-sdk-ide ubuntu-sdk-tools

28 June, 2016 05:53AM by Benjamin Zeller (benjamin.zeller@canonical.com)

hackergotchi for Blankon developers

Blankon developers

Sokhibi: A Short Review of BlankOn X Tambora Beta-2

After roughly a year without the BlankOn developers releasing their flagship distro, today, 27 June 2016, coinciding with the month of Ramadan 1437 H, the BlankOn development team released BlankOn X Tambora Beta-2. The author copy-pastes the announcement from the official BlankOn developers mailing list below:
Assalamu'alaikum, greetings of peace to all.

Tonight, the BlankOn developers are delighted to announce the Beta 2 release of BlankOn Tambora, after it was delayed for so long because the right moment never came.
This Beta 2 release brings fixes to, among others, Manokwari and the BlankOn Installer, as well as support for hybrid Nvidia graphics cards.

Beta 2 can be downloaded from http://cdimage.blankonlinux.or.id/blankon/rilis/10.0/beta-2/. If you find bugs, you can participate actively by reporting them at http://dev.blankonlinux.or.id/report/33

Happy testing!

Those who have not yet received their THR (holiday bonus), please be patient. Or if you would rather collect it in person, come to Surabaya, next to Hitech Mall.
Earlier, after the pre-dawn meal while waiting for the Subuh prayer, the author downloaded BlankOn X Tambora Beta-2 over a modest Internet connection. Long story short, the author finally managed to download it and install it on one of his laptops; below is a screenshot of the desktop.
The author installed this version of BlankOn in order to help test it and look for bugs, of which there are probably still many. Here are some shortcomings of BlankOn X Tambora Beta-2 on the author's laptop:
  • There is no tool yet to ease the installation of applications other than using the terminal, so the author installed Synaptic via the terminal.
  • The touchpad on the author's laptop is not enabled automatically, and when trying to enable it via Settings there turned out to be no menu for it, unlike in BlankOn 8 Rote and BlankOn 9 Suroboyo.

  • When the author rebooted the laptop and logged in to BlankOn 8 Rote, the partition where BlankOn X Tambora Beta-2 is installed could not be accessed, leaving an error message as in the image below.
Besides looking for shortcomings and bugs, the author also briefly explored the new menu items shipped with BlankOn X Tambora Beta-2; here are a few that the author found:
  • Corebird, a Twitter client for the Linux desktop that runs natively with GTK+.

  • Calendar, found in the Office menu group; however, it would not open when clicked, so the author does not know what it does.

Thus this short review of BlankOn X Tambora Beta-2, which the author tested in the briefest possible time; further reviews and development news will be posted once there is enough free time to test BlankOn X Tambora Beta-2 again.

28 June, 2016 03:07AM by Istana Media (noreply@blogger.com)

hackergotchi for Ubuntu developers

Ubuntu developers

The Fridge: Ubuntu Weekly Newsletter Issue 471

28 June, 2016 02:19AM

June 27, 2016

hackergotchi for SparkyLinux


Updates 2016/06/27


3rd party updates in Sparky repository ready to go:
– Boot Repair 4ppa38
– EFL 1.17.2
– ePSXe 2.0.5 – the emulator is now available for both 32- and 64-bit systems. If you have the older (32-bit) version installed on a 64-bit system, uninstall it before installing the newer 64-bit version.
– WPS Office
– XnView 0.80

Upgrade your system as usual:
sudo apt-get update
sudo apt-get dist-upgrade

or via ‘Update Tool’ or ‘Synaptic’ if you wish.


27 June, 2016 05:45PM by pavroo

hackergotchi for AlienVault OSSIM

AlienVault OSSIM

Reverse Engineering Malware

The AlienVault Labs team does a lot of malware analysis as a part of their security research. I interviewed a couple members of our Labs team, including Patrick Snyder, Eddie Lee, Peter Ewane and Krishna Kona, to learn more about how they do it.

Here are some of the approaches and tools and techniques they use for reverse engineering malware, which may be helpful to you in your own malware hunting endeavors. Please watch the webcast they did recently with Javvad Malik on reverse engineering malware and hear details and examples of how the Labs team investigated OceanLotus, PowerWare and Linux malware in recent situations.

Approaches in reverse engineering a malware sample

  • Reverse engineer: The most obvious approach is to completely reverse engineer a piece of malware. This obviously takes a great amount of time, so other approaches are more practical.
  • Exploitation techniques: Another approach you can take is to focus on the exploitation techniques of a piece of malware. Occasionally you will see a piece of malware that is using a new exploitation technique, or is exploiting a zero-day vulnerability. In this case you may be interested only in the specific exploitation technique so you can timebox your analysis and only look at the exploitation mechanisms.
  • Obfuscation: Malware will often obfuscate itself and make itself difficult to analyze. You might come across malware that you have seen before without obfuscation. In that case you may only want to focus on reverse engineering the new parts.
  • Encryption methods: A common type of malware these days is ransomware. Ransomware essentially encrypts the victim's files and locks them up so that they can't be accessed or read. Oftentimes the authors of ransomware will make mistakes when they implement the encryption mechanisms. So if you focus your research on the encryption mechanisms you might be able to find weaknesses in their implementation and/or you might be able to find hard-coded keys or weak algorithms.
  • C&C communication: This is something that is pretty commonly done when looking at malware. Analysts often want to figure out what the communication protocol is between a piece of malware on the client's side and the server on the command and control side. The communication protocol can actually give you a lot of hints about the malware’s capabilities.
  • Attribution: Murky area - kind of like a dark art. It usually involves a lot of guesswork, knowledge of malicious hacking teams and looking at more than one piece of malware.
  • Categorization and clustering: You can reverse engineer malware from a broader point of view. This involves looking at malware in bulk and doing a broad-stroke analysis on lots of different malware, rather than doing a deep dive.
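
The encryption-methods point above can be illustrated with a toy example (a hypothetical sketch, not taken from any specific ransomware family): a repeating hard-coded XOR key is a classic implementation mistake, because a known plaintext such as a standard file header leaks the key stream.

```python
# Toy illustration of a weak "encryption" scheme sometimes found in
# ransomware: a repeating hard-coded XOR key recovered via known plaintext.
def xor(data: bytes, key: bytes) -> bytes:
    """XOR data against a repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"%PDF-1.4 known file header and more bytes"
ciphertext = xor(plaintext, b"SECRET")  # the malware's hard-coded key

# Analyst side: XORing ciphertext with a known file header (e.g. a PDF
# magic string) yields the key stream, i.e. repetitions of the key.
known = b"%PDF-1.4"
keystream = xor(ciphertext[:len(known)], known)

# With the key recovered, every "encrypted" file can be decrypted.
recovered = xor(ciphertext, keystream[:6])
```

This is exactly the kind of weakness that focusing analysis on the encryption mechanisms can surface, sometimes making victim files recoverable without paying.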


Now, let’s look at techniques that can be utilized while analyzing malware.

  • First of all, we use static analysis. This is the process of analyzing malware or binaries without actually running them. It can be as simple as looking at metadata from a file. It can range from doing disassembly or decompilation of malware code to symbolic execution, which is something like virtual execution of a binary without actually executing it in a real environment.
  • Conversely, dynamic analysis is the process of analyzing a piece of malware while running it in a live environment. In this case, you are often looking at the behavior of the malware and at the side effects of what it is doing. You run tools like Process Monitor and Sysmon to see what kinds of artifacts a piece of malware produces after it is run.
  • We also use automated analysis. Oftentimes if you are looking at malware you want to automate things just to speed up the process to save time. However, use caution, as with automated analysis sometimes things get missed because you are trying to do things generically.
  • If a piece of malware contains things like anti-debugging routines or anti-analysis mechanisms, you may want to perform a manual analysis. You need to pick the right tools for the job.


  • IDA Pro is a really good tool for analyzing various samples of malware with diverse backgrounds. It also has a good add-on called HEX Rays Decompiler, which is a tool that can convert assembly language into more easily read pseudocode. It can help you in understanding the functionality of the code more quickly than looking at assembly language. When you open a sample in IDA Pro, you see the entry point of the malware. It has a graph view as well and you can switch between both hex code and the graph view. It will give you a quick representation of the mapping of the flow of execution as well. It has an SDK you can use if you like to develop plug-ins and automate and extract some of the useful information. IDA Pro also has a Python API that you can use if you prefer Python. The tool also has debugging functionality, but mainly it is used for static reverse engineering of malware.

Here’s IDA Pro:

  • Another tool we use in our labs is Radare2. It is a free open source reversing framework.
  • There are also debuggers like The GNU Project Debugger (GDB), WinDbg and Wind River.
  • For Windows samples, PEiD, PEStudio, PE32 tools are great. You can also use these tools on executables and get some initial classification of your samples.

Here’s PEiD:

  • Other tools that we use include strings, file, and otool. They help us initially find the platform of the sample. If you look at some snapshots of these tools, they can tell you where the entry point of that sample is, what section, and if the sample is packed. They can also detect more than a hundred packers and can detect decrypters and compilers as well.

Here’s the file utility:

Generally, when we get a bunch of samples or an archive of samples from an open-source feed, we use the file utility to find out whether a file is a regular executable for Windows, OS X or Linux, or just a text file or a script.
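
As an illustration of that first-pass triage (a hedged sketch, not the Labs team's actual tooling), the magic-byte classification that the file utility performs can be reproduced in a few lines of Python:

```python
# Minimal static-triage sketch: classify a sample by its magic bytes,
# similar in spirit to what the `file` utility reports.
MAGIC = {
    b"MZ": "Windows PE executable",
    b"\x7fELF": "ELF (Linux) binary",
    b"\xfe\xed\xfa\xce": "Mach-O (32-bit)",
    b"\xfe\xed\xfa\xcf": "Mach-O (64-bit)",
    b"#!": "script with shebang",
}

def classify(data: bytes) -> str:
    """Return a coarse file-type label based on leading magic bytes."""
    for magic, label in MAGIC.items():
        if data.startswith(magic):
            return label
    return "unknown / possibly plain text"
```

For real work you would lean on libmagic (which knows hundreds of formats) rather than a hand-rolled table, but the principle is the same: cheap static signals sort a pile of samples before any deep analysis starts.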

  • Immunity Debugger is another popular debugger. If you open a packed sample in Immunity Debugger, it will give you an alert saying that the sample is packed and ask whether you want to proceed with the analysis. If you continue, you see an entry point, which is where it pushes all of the registers to the stack. You can use this debugger to step through the execution of the sample to see the unpacked sample in memory. You can continue analyzing the sample step by step and use the debugger to find the malware's activities and the effects it has on the system.

Here’s Immunity Debugger:

For capturing network traffic, we use Wireshark or TCPDump.

For monitoring activity on the system, we use Process Monitor and Regshot.

Sandboxes are another important step in reverse engineering malware, as often there are functionalities malware doesn't exhibit unless it is running in a suitable environment. One sandbox, malwr, comes from the people who built Cuckoo Sandbox. With malwr, you submit a sample and run it inside a VM. You can then run various dynamic analysis tools and static analysis tools referenced above and turn this into a nice, readable report.

Here is malwr:

  • Another Sandbox that is relatively new is Hybrid-Analysis. It is made by Payload Security and it functions very similar to malwr, but they have some of their own custom sandboxes running that may or may not be based on Cuckoo.

Here is Hybrid-Analysis:

Another major sandbox tool for identifying malware is VirusTotal. VirusTotal is owned by Google, and they arguably have the biggest repository of both malware and known file types in general. If you are looking for any particular malware, it typically shows up in VirusTotal.

Here is VirusTotal:

Another new contender is DeepViz. DeepViz is being developed very actively, with new features on a regular basis. DeepViz functions very similarly to other Sandboxes, but sometimes it is beneficial to submit the same sample to multiple sandboxes to see if the behavior matches up or if it reacts differently.

Here is DeepViz:

Which brings us to Cuckoo. Cuckoo is a malware analysis system. It contains many different tools, including some of the dynamic and static analysis tools that we mentioned earlier. Also, it is free. While other sandboxes are free, you are sharing your data by using them. If you set up Cuckoo on your own system you can keep everything localized and keep it to yourself, especially if you are analyzing something you don't want the world to know about yet.

Here is Cuckoo:

Open Threat Exchange (OTX) is another key component we use in malware analysis.

To find out more about OTX there is a documentation center. You can also see information on our forums. There is a section specifically for OTX where you can see pulses. Also, just a few weeks ago we announced some enhancements to the OTX API. If you are a blogger, please note you can now embed pulses. So if you write a blog, you can just simply embed it within so users can read it and directly download the IoCs and other information. Read more.

Connecting OTX to your USM platform helps you to manage risk better and effectively take action on threats. A free trial of AlienVault USM is available.


27 June, 2016 03:58PM

hackergotchi for Ubuntu developers

Ubuntu developers

Sergio Schvezov: The Snapcraft Parts Ecosystem

Today I am going to be discussing parts. This is one of the pillars of snapcraft (together with plugins and the lifecycle).

For those not familiar, this is snapcraft’s general purpose landing page, http://snapcraft.io/ but if you are a developer and have already been introduced to this new world of snaps, you probably want to just go and hop on to http://snapcraft.io/create/

If you go over this snapcraft tour you will notice the many uses of parts and start to wonder how to get started or think that maybe you are duplicating work done by others, or even better, maybe an upstream. This is where we start to think about the idea of sharing parts and this is exactly what we are going to go over in this post.

To be able to reproduce what follows, you’d need to have snapcraft 2.12 installed.

An overview to using remote parts

So imagine I am someone wanting to use libcurl. Normally I would write the part definition from scratch and be on with my own business, but I might be missing out on optimal switches used to configure or build the package. I would also need to research how to use the specific plugin required. So instead, I’ll see if someone has already done the work for me, hence I will,

$ snapcraft update
Updating parts list... |
$ snapcraft search curl
curl       A tool and a library (usable from many languages) for client side URL tra...

Great, there’s a match, but is this what I want?

$ snapcraft define curl
Maintainer: 'Sergio Schvezov <sergio.schvezov@ubuntu.com>'
Description: 'A tool and a library (usable from many languages) for client side URL transfers, supporting FTP, FTPS, HTTP, HTTPS, TELNET, DICT, FILE and LDAP.'

curl:
  configflags:
    - --enable-static
    - --enable-shared
    - --disable-manual
  plugin: autotools
  snap:
    - -bin
    - -lib/*.a
    - -lib/pkgconfig
    - -lib/*.la
    - -include
    - -share
  source: http://curl.haxx.se/download/curl-7.44.0.tar.bz2
  source-type: tar

Yup, it’s what I want.

An example

There are two ways to use these parts in your snapcraft.yaml, say this is your parts section

parts:
  client:
    plugin: autotools
    source: .

My client part, which uses sources that sit alongside this snapcraft.yaml, will hypothetically fail to build as it depends on the curl library I don’t yet have. There are some options here to get this going: using after in the part definition implicitly, composing, or, last but not least, just copy-pasting what snapcraft define curl returned for the part.


The implicit path is really straightforward. It only involves making the part look like:

parts:
  client:
    plugin: autotools
    source: .
    after: [curl]

This will use the cached definition of the part and may potentially be updated by running snapcraft update.


What if we like the part, but want to try out a new configure flag or source release? Well we can override pieces of the part; so for the case of wanting to change the source:

parts:
  client:
    plugin: autotools
    source: .
    after: [curl]
  curl:
    source: http://curl.haxx.se/download/curl-7.45.0.tar.bz2

And we will get to build curl but using a newer version of curl. The trick is that the part definition here is missing the plugin entry, thereby instructing snapcraft to look for the full part definition from the cache.
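
By the same token, individual configure flags could hypothetically be overridden while everything else still comes from the cache. This is a sketch, assuming the cached part uses the configflags keyword that snapcraft define curl printed:

```yaml
parts:
  client:
    plugin: autotools
    source: .
    after: [curl]
  curl:
    configflags:
      - --enable-static
      - --disable-shared
      - --disable-manual
```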


This is the path one would take to get full control over the part. It is as simple as copying the part definition we got from running snapcraft define curl into your own. For the sake of completeness, here’s how it would look:

parts:
  client:
    plugin: autotools
    source: .
    after: [curl]
  curl:
    configflags:
      - --enable-static
      - --enable-shared
      - --disable-manual
    plugin: autotools
    snap:
      - -bin
      - -lib/*.a
      - -lib/pkgconfig
      - -lib/*.la
      - -include
      - -share
    source: http://curl.haxx.se/download/curl-7.44.0.tar.bz2
    source-type: tar

Sharing your part

Now what if you have a part and want to share it with the rest of the world? It is rather simple really, just head over to https://wiki.ubuntu.com/snapcraft/parts and add it.

In the case of curl, I would write a yaml document that looks like:

origin: https://github.com/sergiusens/curl.git
maintainer: Sergio Schvezov <sergio.schvezov@ubuntu.com>
description: |
  A tool and a library (usable from many languages) for
  client side URL transfers, supporting FTP, FTPS, HTTP,
  HTTPS, TELNET, DICT, FILE and LDAP.
project-part: curl

What does this mean? Well, the part itself is not defined on the wiki, just a pointer to it with some metadata; the part is really defined inside a snapcraft.yaml living in the origin we just told it to use.

The full set of keywords is explained in the documentation; that is an upstream link to it.

The core idea is that a maintainer decides they want to share a part. Such a maintainer would add a description that provides an idea of what that part (or collection of parts) does. Then, last but not least, the maintainer declares which parts to expose to the world, as maybe not all of them should be. The main part is exposed in project-part and will carry a top-level name; the maintainer can expose more parts from snapcraft.yaml using the general parts keyword. These parts will be namespaced with the project-part.

27 June, 2016 03:57PM by Sergio Schvezov (sergiusens@gmail.com)

Canonical Design Team: New starter Davide (Project Manager) – “A working team is very precious”

Meet the newest member of the Design Team, project manager Davide Casa. He will be working with the Platform Team to keep us all in check and working towards our goals. I sat down with him to discuss his background, what he thinks makes a good project manager and what his first week was like at Canonical (spoiler alert – he survived it).


You can read Davide’s blog here, and reach out to him on Github and Twitter with @davidedc.

Tell us a bit about your background?

My background is in Computer Science (I did a 5 year degree). I also studied for an MBA in London.

Computer science is a passion of mine. I like to keep up to date with latest trends and play with programming languages. However, I never got paid for it, so it’s more like a hobby now to scratch an artistic itch. I often get asked in interviews: “why aren’t you a coder then?” The simple answer is that it just didn’t happen. I got my first job as a business analyst, which then developed into project management.

What do you think makes a good project manager?

I think the soft skills are incredibly relevant and crucial to the role. For example: gathering what the team’s previous experience of project management was, and what they expect from you, and how deeply and quickly you can change things.

Is project management perceived as a service or is there a practise of ‘thought leadership’?

In tech companies it varies. I’ve worked at Vodafone as a PM and you felt there was a possibility to practice “thought leadership”, because it is such a huge company and things have to be dealt with in large cycles. Components and designs have to be agreed on in batches, because you can’t hand-wave your way through hundreds of changes across a dozen mission-critical modules; it would be too risky. In some other companies, less so. We’ll see how it works here.

Apart from calendars, Kanban boards and post-it notes  – what else can be used to help teams collaborate smoothly?

Indeed one of the core values of Agile is “the team”. I think people underestimate the importance of cohesiveness in a team, e.g. how easy it is for people to step forward and make mistakes without fear. A cohesive team is something that is very precious, and I think that’s regularly underestimated. You can easily buy tools and licenses, which are “easy solutions” in a way. The PM should also help improve the cohesiveness of a team, for example by creating processes that people can rely on in order to avoid attrition and resolve things, and by avoiding treating everything like a special case, so things are dealt with “proportionally”.

What brings you to the Open Source world?

I like coding, and to be good coder, one must read good code. With open source the first thing you do is look around to see what others are doing and then you start to tinker with it. It has almost never been relevant for me to release software without source.

Have you got any side projects you’re currently working on?

I dabble in livecoding, which is an exotic niche of people that do live visuals and sounds with code (see our post on Qtday 2016). I am also part of the Toplap collective which works a lot on those lines too.

I also dabble in creating an exotic desktop system that runs on the web. It’s inspired by the Squeak environment, where everything is an object and is modifiable and inspectable directly within the live system. Everything is draggable, droppable and composable. For example, when a menu pops up you can change any button – both its label and the function it performs – or take any button apart and put it anywhere else on the desktop or in any open window. It all happens via “direct manipulation”. Imagine a paint application where at any time while working you can “open” any button from the toolbar and change what the actual painting operation does (John Maeda made such a paint app actually).

The very first desktop systems all worked that way. There was no concept of a big app or “compile and run again”. Something like a text editor app would just be a text box providing functions. The functions are then embodied in buttons and stuck around the textbox, and voila, you have your very own flavour of text editor brought to life. Also, in these live systems most operations are orthogonal: you can assume you can rotate images, right? Hence by the same token you can rotate anything on the screen. A whole window for example, or text. Two rotating lines and a few labels become a clock. The user can combine simple widgets together to make their own apps on the fly!

What was the most interesting thing you’ve learned in your first week here?

I learned a lot and I suspect that will never stop. The bread and butter here is strategy and design, which in other companies is only a small area of work. Here it is the core of everything! So it’ll be interesting to see how this ‘strategy’ works, and how the big thinking starts with the visuals or UX in mind and from there steers the whole platform. An exciting example of this can be seen in the Ubuntu Convergence story.

That’s the essence of open source I guess…

Indeed. And the fact that anti-features such as DRM, banners, bloatware, compulsory registrations and basic compilers that need 4GB of installation never live long in it. It’s our desktop after all, is it not?

27 June, 2016 01:10PM

BunsenLabs Linux

BunsenLabs Reviews

@johnraff and @hhh - I had a similar problem yesterday, only it was caused by my own use of the Debian Wheezy backports instead of the recommended backport procedure for BunsenLabs. I fixed this and have had no problems since.

EDIT: Previously had edited the /etc/apt/sources.list with:

# Wheezy Backport alternative
deb http://ftp.debian.org/debian wheezy-backports main

Then later, I commented this line out (with a #)

27 June, 2016 01:03PM by jalexander9

hackergotchi for Ubuntu developers

Ubuntu developers

Canonical Design Team: App Design Clinic – OwnCloud App #9

The Ubuntu App Design Clinic is back! This month members of the Design Team James Mulholland (UX Designer), Jouni Helminen (Visual Designer) and Andrea Bernabei (UX Engineer) sat down with Dan Wood, contributor to the OwnCloud app.

What is OwnCloud?

OwnCloud is an open-source, self-hosted file sync and share platform. It lets you access and sync your files, contacts, calendars and bookmarks across your devices.

You can contribute to it here.

We covered:

  • First use case – the first point of entry for the user, maybe a file manager or a possible tooltip introduction.
  • Convergent thinking – how the app looks across different surfaces.
  • Top-level navigation – using the header to display actions, such as settings.
  • Using Online Accounts to sync other accounts to the cloud.
  • Using sync frequency or instant syncing.

If you missed it, or want to watch it again, here it is:

The next App Design Clinic is yet to be confirmed. Stay tuned.


27 June, 2016 12:16PM

hackergotchi for SolydXK


New SolydXK ISOs: 201606

It is time again for the new SolydXK ISOs!

These are some of the changes:

  • Firefox ESR is now used from Debian repository instead of custom built and installed from the SolydXK repository.
  • You can now use custom mount points in the Live Installer. Double click on a partition to select a pre-defined mount point or write your custom mount point.
  • Improved command handling of SolydXK applications for the Enthusiast’s Editions.
  • The SolydXK scripts were moved from /usr/local/bin to /usr/bin.
  • Grizzler improved the /usr/bin/apt script. Run apt in terminal to see a list of commands with explanation.
  • SolydX RPI has been built from scratch and is based on Raspbian.
  • And many more smaller changes that I forgot to mention here 😉

The community editions and localized editions will follow later. I will post the release of those editions as soon as they are ready. Until then the previous versions will be available for download from our site.

You can find more information, and download the ISOs on our product pages:
SolydX: https://solydxk.com/downloads/solydx/
SolydK: https://solydxk.com/downloads/solydk/
SolydX RPI: https://solydxk.com/downloads/solydx-rpi/

For any questions or issues, please visit our forum: http://forums.solydxk.com/


27 June, 2016 11:48AM by Schoelje

hackergotchi for Ubuntu developers

Ubuntu developers

Alessio Treglia: A – not exactly United – Kingdom


Island of Ventotene – Roman harbour

There once was a Kingdom strongly United, built on the honours of the people of Wessex, of Mercia, Northumbria and East Anglia, who knew how to deal with the invasions of the Vikings from the east and of the Normans from the south, and came to unify the territory under an umbrella of common intents. Today, 48% of them, while keeping solid traditions, still know how to look forward to the future, joining horizons and commercial developments along with the rest of Europe. The remaining 52%, however, look back and cannot see anything in front of them but a desire for isolation, breaking the European dream born on the shores of the island of Ventotene in 1944, where Altiero Spinelli, Ernesto Rossi and Ursula Hirschmann wrote the “Manifesto for a free and united Europe“. An incurable fracture in the country was born in the referendum of 23 June, in which just over half of the population asked to terminate its marriage to the great European family, setting the UK back 43 years in its history.

<Read More…[by Fabio Marzocca]>

27 June, 2016 07:54AM

Paul Tagliamonte: Hello, Sense!

A while back, I saw a Kickstarter for one of the most well designed and pretty sleep trackers on the market. I fell in love with it, and it has stuck with me since.

A few months ago, I finally got my hands on one and started to track my data. Naturally, I now want to store this new data with the rest of the data I have on myself in my own databases.

I went in search of an API, but I found that the Sense API hasn't been published yet, and is being worked on by the team. Here's hoping it'll land soon!

After some subdomain guessing, I hit on api.hello.is. So, naturally, I went to take a quick look at their Android app and its network traffic, and lo and behold, there was a pretty nicely designed API.

This API is clearly an internal API, and as such, it should not be considered stable. However, I'm OK with a fragile API, so I've published a quick and dirty API wrapper for the Sense API on my GitHub.

I've published it because I've found it useful, but I can't promise the world (since I'm not a member of the Sense team at Hello!), so here are a few ground rules for this wrapper:

  • I make no claims to its stability or completeness.
  • I have no documentation or assurances.
  • I will not provide the client secret and ID. You'll have to find them on your own.
  • This may stop working without any notice, and there may even be really nasty bugs that result in your alarm going off at 4 AM.
  • Send PRs! This is a side-project for me.

This module is currently Python 3 only. If someone really needs Python 2 support, I'm open to minimally invasive patches to the codebase using six to support Python 2.7.

Working with the API:

First, let's go ahead and log in using python -m sense.

$ python -m sense
Sense OAuth Client ID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Sense OAuth Client Secret: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Sense email: paultag@gmail.com
Sense password: 
Attempting to log into Sense's API
Attempting to query the Sense API
The humidity is **just right**.
The air quality is **just right**.
The light level is **just right**.
It's **pretty hot** in here.
The noise level is **just right**.

Now, let's see if we can pull up information on my Sense:

>>> from sense import Sense
>>> sense = Sense()
>>> sense.devices()
{'senses': [{'id': 'xxxxxxxxxxxxxxxx', 'firmware_version': '11a1', 'last_updated': 1466991060000, 'state': 'NORMAL', 'wifi_info': {'rssi': 0, 'ssid': 'Pretty Fly for a WiFi (2.4 GhZ)', 'condition': 'GOOD', 'last_updated': 1462927722000}, 'color': 'BLACK'}], 'pills': [{'id': 'xxxxxxxxxxxxxxxx', 'firmware_version': '2', 'last_updated': 1466990339000, 'battery_level': 87, 'color': 'BLUE', 'state': 'NORMAL'}]}

Neat! Pretty cool. Look, you can even see my WiFi AP! Let's try some more and pull some trends out.

>>> values = [x.get("value") for x in sense.room_sensors()["humidity"]][:10]
>>> min(values)
>>> max(values)
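The same pattern scales to simple summary statistics. Here is a quick sketch in plain Python; the payload shape (a list of `{"value": ...}` dicts) mirrors the session above, but the numbers are invented for illustration:

```python
# Summarize a batch of humidity readings. The shape of `readings`
# matches what room_sensors()["humidity"] returned above; the values
# themselves are made up for this example.
readings = [{"value": v} for v in (41.2, 43.5, 40.8, 44.1, 42.0)]

values = [x.get("value") for x in readings]
lo, hi = min(values), max(values)
avg = sum(values) / len(values)

print(lo, hi, round(avg, 2))  # → 40.8 44.1 42.32
```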

I plan to keep maintaining it as long as it's needed, so I welcome co-maintainers, and I'd love to see what people build with it! So far, I'm using it to dump my room data into InfluxDB, pulling information on my room into Grafana. Hopefully more to come!

Happy hacking!

27 June, 2016 01:42AM

Dustin Kirkland: HOWTO: Host your own SNAP store!

SNAPs are the cross-distro, cross-cloud, cross-device Linux packaging format of the future.  And we're already hosting a fantastic catalog of SNAPs in the SNAP store provided by Canonical.  Developers are welcome to publish their software for distribution across hundreds of millions of Ubuntu servers, desktops, and devices.

Several people have asked the inevitable open source software question, "SNAPs are awesome, but how can I stand up my own SNAP store?!?"

The answer is really quite simple...  SNAP stores are really just HTTP web servers!  Of course, you can get fancy with branding, and authentication, and certificates.  But if you just want to host SNAPs and enable downstream users to fetch and install software, well, it's pretty trivial.
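To make that point concrete, here is a toy sketch in Python: a plain HTTP file server over a directory of .snap files. This is not the snapstore project itself, just the bare essence of the idea; the directory path in the usage comment is hypothetical.

```python
# Minimal "snap store": an HTTP server over a directory of .snap files.
# Not the real snapstore project; just the bare web-server essence.
import functools
import http.server
import socketserver

def make_store(directory, port=0):
    """Return a TCP server that serves the files in `directory`.
    port=0 picks a free port; a real store would use e.g. 5000."""
    handler = functools.partial(
        http.server.SimpleHTTPRequestHandler, directory=directory)
    return socketserver.TCPServer(("", port), handler)

# Usage sketch (path is hypothetical):
#   server = make_store("/var/snap-store", 5000)
#   server.serve_forever()
```

A client then just fetches /&lt;name&gt;.snap over HTTP; branding, search and authentication are layers on top of this.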

In fact, Bret Barker has published an open source (Apache License) SNAP store on GitHub.  We're already looking at how to flesh out his proof-of-concept and bring it into snapcore itself.

Here's a little HOWTO install and use it.

First, I launched an instance in AWS.  Of course I could have launched an Ubuntu 16.04 LTS instance, but actually, I launched a Fedora 24 instance!  In fact, you could run your SNAP store on any OS that currently supports SNAPs, really, or even just fork this GitHub repo and install it standalone.  See snapcraft.io.

Now, let's find and install a snapstore SNAP.  (Note that in this AWS instance of Fedora 24, I also had to 'sudo yum install squashfs-tools kernel-modules'.)

At this point, you're running a SNAP store (webserver) on port 5000.

Now, let's reconfigure snapd to talk to our own SNAP store, and search for a SNAP.

Finally, let's install and inspect that SNAP.

How about that?  Easy enough!


27 June, 2016 01:09AM by Dustin Kirkland (noreply@blogger.com)

June 26, 2016

Simos Xenitellis: Trying out LXD containers on Ubuntu on DigitalOcean

You can have LXD containers on your home computer, and you can also have them on your Virtual Private Server (VPS). If you have any further questions on LXD, see https://www.stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/

Here we see how to configure LXD on a VPS at DigitalOcean (yeah, referral link). We go cheap and select the 512MB RAM and 20GB disk VPS for $5/month. Containers are quite lightweight, so it’s interesting to see how many we can squeeze in. We are going to use ZFS for the storage of the containers, stored in a file rather than a block device. Here is what we are doing today:

  1. Set up LXD on a 512MB RAM/20GB diskspace VPS
  2. Create a container with a web server
  3. Expose the container service to the Internet
  4. Visit the webserver from our browser

Set up LXD on DigitalOcean


When creating the VPS, it is important to change these two options; we need 16.04 (default is 14.04) so that it has ZFS pre-installed as a kernel module, and we try out the cheapest VPS offering with 512MB RAM.

Once we create the VPS, we connect with

$ ssh root@    # change with the IP address you get from the DigitalOcean panel
The authenticity of host ' (' can't be established.
ECDSA key fingerprint is SHA256:7I094lF8aeLFQ4WPLr/iIX4bMs91jNiKhlIJw3wuMd4.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 16.04 LTS (GNU/Linux 4.4.0-24-generic x86_64)

* Documentation: https://help.ubuntu.com/

0 packages can be updated.
0 updates are security updates.

root@ubuntu-512mb-ams3-01:~# apt update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [94.5 kB]
Hit:2 http://ams2.mirrors.digitalocean.com/ubuntu xenial InRelease 
Get:3 http://security.ubuntu.com/ubuntu xenial-security/main Sources [24.9 kB]
Fetched 10.2 MB in 4s (2,492 kB/s)
Reading package lists... Done
Building dependency tree 
Reading state information... Done
13 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@ubuntu-512mb-ams3-01:~# apt upgrade
Reading package lists... Done
Building dependency tree 
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
 dnsmasq-base initramfs-tools initramfs-tools-bin initramfs-tools-core
 libexpat1 libglib2.0-0 libglib2.0-data lshw python3-software-properties
 shared-mime-info snapd software-properties-common wget
13 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 6,979 kB of archives.
After this operation, 78.8 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Processing triggers for initramfs-tools (0.122ubuntu8.1) ...
update-initramfs: Generating /boot/initrd.img-4.4.0-24-generic
W: mdadm: /etc/mdadm/mdadm.conf defines no arrays.
Processing triggers for libc-bin (2.23-0ubuntu3) ...

We update the package list and then upgrade any packages that need upgrading.

root@ubuntu-512mb-ams3-01:~# apt policy lxd
 Installed: 2.0.2-0ubuntu1~16.04.1
 Candidate: 2.0.2-0ubuntu1~16.04.1
 Version table:
 *** 2.0.2-0ubuntu1~16.04.1 500
 500 http://mirrors.digitalocean.com/ubuntu xenial-updates/main amd64 Packages
 500 http://security.ubuntu.com/ubuntu xenial-security/main amd64 Packages
 100 /var/lib/dpkg/status
 2.0.0-0ubuntu4 500
 500 http://mirrors.digitalocean.com/ubuntu xenial/main amd64 Packages

The lxd package is already installed; all the better. Nice touch 🙂

root@ubuntu-512mb-ams3-01:~# apt install zfsutils-linux
Reading package lists... Done
Building dependency tree 
Reading state information... Done
The following additional packages will be installed:
 libnvpair1linux libuutil1linux libzfs2linux libzpool2linux zfs-doc zfs-zed
Suggested packages:
 default-mta | mail-transport-agent samba-common-bin nfs-kernel-server
The following NEW packages will be installed:
 libnvpair1linux libuutil1linux libzfs2linux libzpool2linux zfs-doc zfs-zed
0 upgraded, 7 newly installed, 0 to remove and 0 not upgraded.
Need to get 881 kB of archives.
After this operation, 2,820 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
zed.service is a disabled or a static unit, not starting it.
Processing triggers for libc-bin (2.23-0ubuntu3) ...
Processing triggers for systemd (229-4ubuntu6) ...
Processing triggers for ureadahead (0.100.0-19) ...
root@ubuntu-512mb-ams3-01:~# _

We installed zfsutils-linux in order to be able to use ZFS as storage for our containers. In this tutorial we are going to use a file as storage (still a ZFS filesystem) instead of a block device. If you subscribe to the DO beta for block storage volumes, you can get a proper block device for the storage of the containers. It is currently free for beta members, and available only in the NYC1 datacenter.

root@ubuntu-512mb-ams3-01:~# df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/vda1  20G  1.1G 18G     6% /
root@ubuntu-512mb-ams3-01:~# _

We have 18GB of free disk space, so let’s allocate 15GB of it for LXD.

root@ubuntu-512mb-ams3-01:~# lxd init
Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: lxd-pool
Would you like to use an existing block device (yes/no)? no
Size in GB of the new loop device (1GB minimum): 15
Would you like LXD to be available over the network (yes/no)? no
Do you want to configure the LXD bridge (yes/no)? yes
we accept the default settings for the bridge configuration
Warning: Stopping lxd.service, but it can still be activated by:
LXD has been successfully configured.
root@ubuntu-512mb-ams3-01:~# _

What we did,

  • we initialized LXD with the ZFS storage backend,
  • we created a new pool and gave it a name (here, lxd-pool),
  • we do not have a block device, so we get a (sparse) image file that contains the ZFS filesystem,
  • we do not want to make LXD available over the network for now,
  • we want to configure the LXD bridge for the inter-networking of the containers.

Let’s create a new user and add them to the lxd group,

root@ubuntu-512mb-ams3-01:~# adduser ubuntu
Adding user `ubuntu' ...
Adding new group `ubuntu' (1000) ...
Adding new user `ubuntu' (1000) with group `ubuntu' ...
Creating home directory `/home/ubuntu' ...
Copying files from `/etc/skel' ...
Enter new UNIX password: ********
Retype new UNIX password: ********
passwd: password updated successfully
Changing the user information for ubuntu
Enter the new value, or press ENTER for the default
 Full Name []: <ENTER>
 Room Number []: <ENTER>
 Work Phone []: <ENTER>
 Home Phone []: <ENTER>
 Other []: <ENTER>
Is the information correct? [Y/n] Y
root@ubuntu-512mb-ams3-01:~# _

The username is ubuntu. Make sure you choose a strong password, since this tutorial does not cover security best practices. Many people run scripts against these VPSs that try common usernames and passwords. When you create a VPS, it is worth having a look at /var/log/auth.log for those failed attempts to get into your VPS. Here are a few lines from this VPS,

Jun 26 18:36:15 digitalocean sshd[16318]: Failed password for root from port 45863 ssh2
Jun 26 18:36:15 digitalocean sshd[16320]: Connection closed by port 49378 [preauth]
Jun 26 18:36:17 digitalocean sshd[16318]: Failed password for root from port 45863 ssh2
Jun 26 18:36:20 digitalocean sshd[16318]: Failed password for root from port 45863 ssh2

We add the ubuntu user into the lxd group in order to be able to run commands as a non-root user.

root@ubuntu-512mb-ams3-01:~# adduser ubuntu lxd
Adding user `ubuntu' to group `lxd' ...
Adding user ubuntu to group lxd
root@ubuntu-512mb-ams3-01:~# _

We are now good to go. Log in as user ubuntu and run an LXD command to list images.


Create a Web server in a container

We launch (init and start) a container named c1.


The ubuntu:x in the screenshot is an alias for Ubuntu 16.04 (Xenial), that resides in the ubuntu: repository of images. You can find other distributions in the images: repository.

As soon as the launch action was completed, I run the list action. Then, after a few seconds, I run it again. You can notice that it took a few seconds before the container actually booted and got an IP address.

Let’s enter into the container by executing a shell. We update and then upgrade the container.

ubuntu@ubuntu-512mb-ams3-01:~$ lxc exec c1 -- /bin/bash
root@c1:~# apt update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [94.5 kB]
Hit:2 http://archive.ubuntu.com/ubuntu xenial InRelease
Get:3 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [94.5 kB]
Fetched 9819 kB in 2s (3645 kB/s) 
Reading package lists... Done
Building dependency tree 
Reading state information... Done
13 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@c1:~# apt upgrade
Reading package lists... Done
Building dependency tree 
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
 dnsmasq-base initramfs-tools initramfs-tools-bin initramfs-tools-core libexpat1 libglib2.0-0 libglib2.0-data lshw python3-software-properties shared-mime-info snapd
 software-properties-common wget
13 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 6979 kB of archives.
After this operation, 3339 kB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 initramfs-tools all 0.122ubuntu8.1 [8602 B]
Processing triggers for initramfs-tools (0.122ubuntu8.1) ...
Processing triggers for libc-bin (2.23-0ubuntu3) ...

Let’s install nginx, our Web server.

root@c1:~# apt install nginx
Reading package lists... Done
Building dependency tree 
Reading state information... Done
The following additional packages will be installed:
 fontconfig-config fonts-dejavu-core libfontconfig1 libfreetype6 libgd3 libjbig0 libjpeg-turbo8 libjpeg8 libtiff5 libvpx3 libxpm4 libxslt1.1 nginx-common nginx-core
Suggested packages:
 libgd-tools fcgiwrap nginx-doc ssl-cert
The following NEW packages will be installed:
 fontconfig-config fonts-dejavu-core libfontconfig1 libfreetype6 libgd3 libjbig0 libjpeg-turbo8 libjpeg8 libtiff5 libvpx3 libxpm4 libxslt1.1 nginx nginx-common nginx-core
0 upgraded, 15 newly installed, 0 to remove and 0 not upgraded.
Need to get 3309 kB of archives.
After this operation, 10.7 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://archive.ubuntu.com/ubuntu xenial/main amd64 libjpeg-turbo8 amd64 1.4.2-0ubuntu3 [111 kB]
Processing triggers for ufw (0.35-0ubuntu2) ...

Is the Web server running? Let’s check with the ss command (preinstalled, from package iproute2)

root@c1:~# ss -tula 
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port 
udp UNCONN 0 0 *:bootpc *:* 
tcp LISTEN 0 128 *:http *:* 
tcp LISTEN 0 128 *:ssh *:* 
tcp LISTEN 0 128 :::http :::* 
tcp LISTEN 0 128 :::ssh :::*

The parameters mean

  • -t: Show only TCP sockets
  • -u: Show only UDP sockets
  • -l: Show listening sockets
  • -a: Show all sockets (makes no difference here because of the previous options; it just makes an easier word to remember, tula)

Of course, there is also lsof with the parameter -i (IPv4/IPv6).

root@c1:~# lsof -i
dhclient 240 root 6u IPv4 45606 0t0 UDP *:bootpc 
sshd 306 root 3u IPv4 47073 0t0 TCP *:ssh (LISTEN)
sshd 306 root 4u IPv6 47081 0t0 TCP *:ssh (LISTEN)
nginx 2034 root 6u IPv4 51636 0t0 TCP *:http (LISTEN)
nginx 2034 root 7u IPv6 51637 0t0 TCP *:http (LISTEN)
nginx 2035 www-data 6u IPv4 51636 0t0 TCP *:http (LISTEN)
nginx 2035 www-data 7u IPv6 51637 0t0 TCP *:http (LISTEN)

From both commands we verify that the Web server is indeed running inside the container, along with an SSH server.
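If you prefer to check this programmatically, the ss output is easy to parse. A small Python sketch, using the sample output copied from above:

```python
# Pick the listening TCP services out of `ss -tula` output.
# The sample text is copied from the session above.
sample = """\
Netid State Recv-Q Send-Q Local Address:Port Peer Address:Port
udp UNCONN 0 0 *:bootpc *:*
tcp LISTEN 0 128 *:http *:*
tcp LISTEN 0 128 *:ssh *:*
tcp LISTEN 0 128 :::http :::*
tcp LISTEN 0 128 :::ssh :::*
"""

listening = set()
for line in sample.splitlines()[1:]:   # skip the header line
    fields = line.split()
    if fields[0] == "tcp" and fields[1] == "LISTEN":
        # local address is the 5th column; service name after the last ':'
        listening.add(fields[4].rsplit(":", 1)[1])

print(sorted(listening))  # → ['http', 'ssh']
```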

Let’s change the default Web page a bit,

root@c1:~# nano /var/www/html/index.nginx-debian.html


Expose the container service to the Internet

Now, if we try to visit the public IP of our VPS, we obviously notice that there is no Web server there. We need to expose the container to the world, since the container only has a private IP address.

The following iptables line exposes the container service at port 80. Note that we run this as root on the VPS (root@ubuntu-512mb-ams3-01:~#), NOT inside the container (root@c1:~#).

iptables -t nat -I PREROUTING -i eth0 -p TCP -d --dport 80 -j DNAT --to-destination

Adapt the public IP of your VPS and the private IP of your container (10.x.x.x) accordingly. Since we are exposing a web server, the port is 80.

We have not made this firewall rule persistent, as that is outside our scope; see the iptables-persistent package for how to do so.
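Since the rule varies only in the two addresses, it can be templated. Here is a sketch in Python; the IPs below are documentation placeholders, not real addresses:

```python
# Render the DNAT rule for a given VPS public IP and container IP.
# 203.0.113.10 and 10.0.3.100 are placeholder addresses; substitute
# your own VPS public IP and the container's private 10.x.x.x address.
def dnat_rule(public_ip, container_ip, port=80):
    return ("iptables -t nat -I PREROUTING -i eth0 -p TCP "
            "-d {pub} --dport {port} -j DNAT "
            "--to-destination {priv}").format(
                pub=public_ip, priv=container_ip, port=port)

print(dnat_rule("203.0.113.10", "10.0.3.100"))
```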

Visit our Web server

Here is the URL, so let’s visit it.


That’s it! We created an LXD container with the nginx Web server, then exposed it to the Internet.


26 June, 2016 11:57PM

BunsenLabs Linux

BL added to Debian Derivatives

BunsenLabs has been invited to join the Debian Derivatives Census.




Please note that posts in News & Announcements now automatically get reposted on Planet Debian/deriv...


Please do not post Off Topic content in News & Announcements! Thanks!

26 June, 2016 11:56PM by hhh

June 25, 2016

hackergotchi for Ubuntu developers

Ubuntu developers

Dimitri John Ledkov: Post-Brexit - The What Now?

Out of an electorate of 46,500,001, 17,410,742 voted to leave, which is a mere 37.4%, or just over a third [source]. In my book this is not a clear expression of the UK's wishes.

The reactions that the results have caused are devastating. The Scottish First Minister has announced plans for a 2nd Scottish Independence referendum [source]. Londoners are filing petitions calling for an Independent London [source, source]. The Prime Minister announced his resignation [source]. Things are not stable.

I do not believe that a super-majority of the electorate is in favor of leaving the EU. I don't even believe that those who voted to leave have considered the break-up of the UK as the inevitable outcome of the leave vote. There are numerous videos on the internet about that, impossible to quantify or reliably cite, but see for example this [source].

So What Now?


I urge everyone to start protesting the outcome of the mistake that happened last Thursday. The 4th of July is a good symbolic date to show your discontent with the UK government and the tiny minority who are about to cause the country to fall apart with no other benefits. Please stand up and make yourself heard.
  • General Strikes 4th & 5th of July
There are 64,100,000 people living in the UK according to the World Bank; maybe the government should fear and listen to the unheard third. The current "majority" parliament was only elected by 24% of the electorate.

It is time for people to actually take control, we can fix our parliament, we can stop austerity, we can prevent the break up of the UK, and we can stay in the EU. Over to you.

ps. How to elect next PM?

Electing the next PM will be done within the Conservative Party, and that's kind of a bummer, given the desperate state the country is currently in. It is not that hard to predict that Boris Johnson is a front-runner. If you wish to elect a different PM, I urge you to splash out 25 quid and register as a member of the Conservative Party for just one year =) This way you will get a chance to directly elect the new Leader of the Conservative Party and thus the new Prime Minister. You can backdoor the Conservative election here.

25 June, 2016 07:24PM by Dimitri John Ledkov (noreply@blogger.com)

Simos Xenitellis: Trying out LXD containers on our Ubuntu

This post is about containers, a construct similar to virtual machines (VMs) but so lightweight that you can easily create a dozen on your desktop Ubuntu!

A VM virtualizes a whole computer, and you then install the guest operating system on it. In contrast, a container reuses the host Linux kernel and contains just the root filesystem (aka runtimes) of our choice. The Linux kernel has several features that rigidly separate a running Linux container from our host computer (i.e. our desktop Ubuntu).

By themselves, Linux containers would need some manual work to manage them directly. Fortunately, there is LXD (pronounced Lex-deeh), a service that manages Linux containers for us.

We will see how to

  1. setup our Ubuntu desktop for containers,
  2. create a container,
  3. install a Web server,
  4. test it a bit, and
  5. clear everything up.

Set up your Ubuntu for containers

If you have Ubuntu 16.04, then you are ready to go. Just install a couple of extra packages that we see below. If you have Ubuntu 14.04.x or Ubuntu 15.10, see LXD 2.0: Installing and configuring LXD [2/12] for some extra steps, then come back.

Make sure the package list is up-to-date:

sudo apt update
sudo apt upgrade

Install the lxd package:

sudo apt install lxd

If you have Ubuntu 16.04, then you can enable the feature to store your container files in a ZFS filesystem. The Linux kernel in Ubuntu 16.04 includes the necessary kernel modules for ZFS. For LXD to use ZFS for storage, we just need to install a package with ZFS utilities. Without ZFS, the containers would be stored as separate files on the host filesystem. With ZFS, we have features like copy-on-write, which makes many tasks much faster.

Install the zfsutils-linux package (if you have Ubuntu 16.04.x):

sudo apt install zfsutils-linux

Once you have installed the LXD package on the desktop Ubuntu, the package installation scripts should have added you to the lxd group. If your desktop account is a member of that group, then your account can manage containers with LXD without prefixing every command with sudo. The way Linux works, you need to log out of the desktop session and then log in again to activate the lxd group membership. (If you are an advanced user, you can avoid the re-login by running newgrp lxd in your current shell.)

Before use, LXD should be initialized with our storage choice and networking choice.

Initialize lxd for storage and networking by running the following command:

$ sudo lxd init
Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: lxd-pool
Would you like to use an existing block device (yes/no)? no
Size in GB of the new loop device (1GB minimum): 30
Would you like LXD to be available over the network (yes/no)? no
Do you want to configure the LXD bridge (yes/no)? yes 
> You will be asked about the network bridge configuration. Accept all defaults and continue.
Warning: Stopping lxd.service, but it can still be activated by:
 LXD has been successfully configured.
$ _

We created the ZFS pool as a filesystem inside a (single) file, not a block device (i.e. in a partition), thus no need for extra partitioning. In the example I specified 30GB, and this space will come from the root (/) filesystem. If you want to look at this file, it is at /var/lib/lxd/zfs.img.
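The image file is sparse, which is why a 30GB pool does not immediately cost 30GB of disk. The effect is easy to demonstrate with plain Python (the file below is a throwaway demo, not the real zfs.img):

```python
# Create a sparse file: the apparent size is 30 GiB, but almost no
# blocks are allocated until data is actually written. This is the
# same trick behind /var/lib/lxd/zfs.img.
import os

path = "demo-sparse.img"          # throwaway demo file
with open(path, "wb") as f:
    f.truncate(30 * 1024**3)      # set apparent size to 30 GiB

st = os.stat(path)
print(st.st_size)                 # apparent size: 32212254720 bytes
print(st.st_blocks * 512)         # bytes actually allocated: close to 0
os.remove(path)
```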


That’s it! The initial configuration has been completed. For troubleshooting or background information, see https://www.stgraber.org/2016/03/15/lxd-2-0-installing-and-configuring-lxd-212/

Create your first container

All management commands with LXD are available through the lxc command. We run lxc with some parameters and that’s how we manage containers.

lxc list

to get a list of installed containers. Obviously, the list will be empty, but it verifies that everything is fine.

lxc image list

shows the list of (cached) images that we can use to launch a container. Again, the list will be empty, but it verifies that everything is fine.

lxc image list ubuntu:

shows the list of available remote images that we can use to download and launch as containers. This specific list shows Ubuntu images.

lxc image list images:

shows the list of available remote images for various distributions that we can use to download and launch as containers. This specific list shows all sort of distributions like Alpine, Debian, Gentoo, Opensuse and Fedora.

Let’s launch a container with Ubuntu 16.04 and call it c1:

$ lxc launch ubuntu:x c1
Creating c1
Starting c1
$ _

We used the launch action, then selected the image ubuntu:x (x is an alias for the Xenial/16.04 image) and lastly we use the name c1 for our container.

Let’s view our first installed container,

$ lxc list

| NAME | STATE   | IPV4                 | IPV6 | TYPE       | SNAPSHOTS    |
| c1   | RUNNING | (eth0) |      | PERSISTENT | 0            |

Our first container c1 is running and it has an IP address (accessible locally). It is ready to be used!

Install a Web server

We can run commands in our container. The action for running commands, is exec.

$ lxc exec c1 -- uptime
 11:47:25 up 2 min, 0 users, load average: 0.07, 0.05, 0.04
$ _

After the action exec, we specify the container and finally we type command to run inside the container. The uptime is just 2 minutes, it’s a fresh container :-).

The -- thing on the command line has to do with parameter processing in our shell. If our command does not take any parameters, we can safely omit the --.

$ lxc exec c1 -- df -h

This is an example that requires the --, because our command uses the parameter -h. If you omit the --, you get an error.
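The convention is not specific to lxc: most command-line tools treat -- as "end of my own options; pass the rest through". A tiny Python sketch of the splitting logic:

```python
# Split an argument list at the "--" separator: everything before it
# is for the outer tool (lxc), everything after it is the command to
# run inside the container.
def split_at_separator(argv):
    if "--" in argv:
        i = argv.index("--")
        return argv[:i], argv[i + 1:]
    return argv, []

tool_args, inner_cmd = split_at_separator(["exec", "c1", "--", "df", "-h"])
print(tool_args)  # → ['exec', 'c1']
print(inner_cmd)  # → ['df', '-h']
```

Without the separator, -h would be consumed by the outer tool itself, which is exactly the error described above.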

Let’s get a shell in the container, and update the package list already.

$ lxc exec c1 bash
root@c1:~# apt update
Ign http://archive.ubuntu.com trusty InRelease
Get:1 http://archive.ubuntu.com trusty-updates InRelease [65.9 kB]
Get:2 http://security.ubuntu.com trusty-security InRelease [65.9 kB]
Hit http://archive.ubuntu.com trusty/universe Translation-en 
Fetched 11.2 MB in 9s (1228 kB/s) 
Reading package lists... Done
root@c1:~# apt upgrade
Reading package lists... Done
Building dependency tree 
Processing triggers for man-db ( ...
Setting up dpkg (1.17.5ubuntu5.7) ...
root@c1:~# _

We are going to install nginx as our Web server. nginx is somewhat cooler than Apache Web server.

root@c1:~# apt install nginx
Reading package lists... Done
Building dependency tree
Setting up nginx-core (1.4.6-1ubuntu3.5) ...
Setting up nginx (1.4.6-1ubuntu3.5) ...
Processing triggers for libc-bin (2.19-0ubuntu6.9) ...
root@c1:~# _

Let’s view our Web server with our browser. Remember the IP address you got earlier and enter it into your browser.


Let’s make a small change in the text of that page. Back inside our container, we enter the directory with the default HTML page.

root@c1:~# cd /var/www/html/
root@c1:/var/www/html# ls -l
total 2
-rw-r--r-- 1 root root 612 Jun 25 12:15 index.nginx-debian.html

We can edit the file with nano, then save it.


Finally, let’s check the page again,


Clearing up

Let’s clear up the container by deleting it. We can easily create new ones when we need them.

$ lxc list
| NAME | STATE   | IPV4                 | IPV6 | TYPE       | SNAPSHOTS    |
| c1   | RUNNING | (eth0) |      | PERSISTENT | 0            |
$ lxc stop c1
$ lxc delete c1
$ lxc list
| NAME | STATE   | IPV4                 | IPV6 | TYPE       | SNAPSHOTS    |

We stopped (shutdown) the container, then we deleted it.

That’s all. There are many more ideas on what to do with containers. Here are the first steps on setting up our Ubuntu desktop and trying out one such container.

25 June, 2016 12:26PM

BunsenLabs Linux

Month of June Only: BL T-Shirts

Not to worry, gents. Our users have come through with flying colors, the orders will be shipped!


Thanks everyone. You still have over a week to order if you're in The States and want one.

25 June, 2016 05:16AM by hhh

June 24, 2016

hackergotchi for SparkyLinux


Linux kernel 4.6.3


There is an update of the latest stable version of Linux kernel 4.6.3 available in Sparky “unstable” repository.

Make sure you have Sparky “unstable” repository http://sparkylinux.org/wiki/doku.php/repository active to upgrade or install the latest kernel.
amd64:
sudo apt-get install linux-image-sparky-amd64
686 non-pae:
sudo apt-get install linux-image-sparky-686
686 pae:
sudo apt-get install linux-image-sparky-686-pae

Then reboot your machine for the changes to take effect.

To quick remove older version of the Linux kernel, simply run APTus-> Remove-> Uninstall Old Kernel script.


24 June, 2016 11:15PM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

Valorie Zimmerman: Akademy! and fundraising


Akademy is approaching! And I can hardly wait. This spring has been personally difficult, and meeting with friends and colleagues is the perfect way to end the summer. This year will be special because it's in Berlin, and because it is part of QtCon, with a lot of our freedom-loving friends, such as Qt, VideoLAN, Free Software Foundation Europe and KDAB. As usual, Kubuntu will also be having our annual meetup there.

Events are expensive! KDE needs money to support Akademy the event, support for those who need travel and lodging subsidy, support for other events such as our Randa Meetings, which just successfully ended. We're still raising money to support the sprints:


Of course that money supports Akademy too, which is our largest annual meeting.

Ubuntu helps here too! The Ubuntu Community fund sends many of the Kubuntu team, and often funds a shared meal as well. Please support the Ubuntu Community Fund too if you can!

I'm going!

I can't seem to make the image a link, so go to https://qtcon.org/ for more information.

24 June, 2016 11:07PM by Valorie Zimmerman (noreply@blogger.com)

Ubuntu Insights: HOWTO: Host your own SNAP store!

SNAPs are the cross-distro, cross-cloud, cross-device Linux packaging format of the future.  And we’re already hosting a fantastic catalog of SNAPs in the SNAP store provided by Canonical.  Developers are welcome to publish their software for distribution across hundreds of millions of Ubuntu servers, desktops, and devices.

Several people have asked the inevitable open source software question, “SNAPs are awesome, but how can I stand up my own SNAP store?!?”

The answer is really quite simple…  SNAP stores are really just HTTP web servers!  Of course, you can get fancy with branding, and authentication, and certificates.  But if you just want to host SNAPs and enable downstream users to fetch and install software, well, it’s pretty trivial.

In fact, Bret Barker has published an open source (Apache License) SNAP store on GitHub.  We’re already looking at how to flesh out his proof-of-concept and bring it into snapcore itself.

Here’s a little HOWTO install and use it.

First, I launched an instance in AWS.  Of course I could have launched an Ubuntu 16.04 LTS instance, but actually, I launched a Fedora 24 instance!  In fact, you could run your SNAP store on any OS that currently supports SNAPs, or even just fork this GitHub repo and install it standalone.  See snapcraft.io.

Now, let’s find and install a snapstore SNAP.  (Note that in this AWS instance of Fedora 24, I also had to ‘sudo yum install squashfs-tools kernel-modules’.)

At this point, you’re running a SNAP store (webserver) on port 5000.

Now, let’s reconfigure snapd to talk to our own SNAP store, and search for a SNAP.
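The original post’s terminal captures are missing here, but as a sketch of the reconfiguration step, one way to point snapd at a local store is a systemd drop-in for the snapd service. This assumes snapd honours the SNAPPY_FORCE_API_URL environment variable and that the snapstore is listening on localhost:5000 as set up above:

```ini
# /etc/systemd/system/snapd.service.d/local-store.conf
# Assumption: snapd reads SNAPPY_FORCE_API_URL to override the store URL.
[Service]
Environment=SNAPPY_FORCE_API_URL=http://localhost:5000
```

After a sudo systemctl daemon-reload and sudo systemctl restart snapd, snap find and snap install should query the local store instead of the default one.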

Finally, let’s install and inspect that SNAP.

How about that?  Easy enough!

Original article

24 June, 2016 07:58PM

hackergotchi for Univention Corporate Server

Univention Corporate Server

Cool Solutions – Guacamole…Not Just a Dip!

What are “Cool Solutions”?

Cool Solutions is the name we use to describe Univention solutions which expand UCS with practical, advantageous functions and are also sometimes employed by our customers. These solutions are regularly presented in the Univention Wiki in the form of Cool Solutions articles.

In a new series of articles, we want to introduce you to the five most popular “Cool Solutions” over the next few weeks. Today we are starting with Guacamole – and no, we don’t mean the tasty Mexican dip this time.

What is Guacamole?

Guacamole is open source software (Apache license) which allows remote access to computers. Its advantage lies in the fact that it supports different protocols (VNC, RDP, SSH, etc.) and that only a browser is required for the access itself. The software was developed by the open source developers of the Guacamole project.

The software itself comprises two components: the frontend, Guacamole, and the backend, guacd. Guacamole is written in Java and is deployed in a so-called servlet container (e.g., Tomcat). The software provides an HTML5 frontend, which allows a range of different options for accessing an external system. Different access methods such as RDP, SSH, and VNC can be employed via corresponding dependency packages. Guacamole is an RDP, SSH, or VNC client which functions without additional software or add-ons in the user’s browser.

The backend guacd establishes the actual connection to the target system and forwards the output on to the frontend Guacamole.


How is Guacamole installed?

Guacamole can be installed and operated in the standard way by providing a servlet container (e.g., Tomcat). Attention also needs to be paid to the correct installation of the dependencies. However, Guacamole can now also be operated via Docker, in which case all the necessary dependencies are supplied.

In our wiki article on Guacamole we describe the installation and configuration via Docker on a UCS 4.1 server.

What advantage does Guacamole offer?

The idea behind Guacamole is to configure different remote accesses at user level via a single platform in order to prevent having to create avoidable port openings in the firewalls. Guacamole utilizes the availability of existing methods for access to RDP, SSH, and VNC, and does not include any functions of its own for operating these protocols.

Where do we use Guacamole?

When performing project management for our customers we mostly use Guacamole in Amazon CloudFormation (topic-based, preconfigured template environments) in order to allow access to Windows server systems via RDP. Before we used Guacamole, different port forwardings generally needed to be configured in the UCS system and ports needed to be opened in the firewalls, which resulted in potential security vulnerabilities. In addition, the UCS user needed to ensure that an RDP client was installed on the operating system. Thanks to the use of Guacamole, this step is no longer necessary and the RDP access can be conducted conveniently from the browser.

How can you install Guacamole?

As of UCS 4.0 it is possible to install and run Guacamole with Docker. The installation of the necessary containers is performed as usual via the command line with Docker. The current Docker implementation of Guacamole requires the installation of a database. Without this, the Guacamole container will not start. As soon as the three components

  • database (MySQL or PostgreSQL),
  • guacd,
  • and Guacamole

are installed, Guacamole can be accessed via the URL http://localhost:8080 and the configuration performed via the web interface.
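As a sketch of what those three components look like together, here is a minimal docker-compose file. The image names assume the official guacamole/guacamole and guacamole/guacd images on Docker Hub; the database credentials and the environment variable names are illustrative placeholders, not taken from the wiki article:

```yaml
# Minimal sketch: the three containers described above.
# Image names assume the official guacamole/* images; credentials are placeholders.
version: '2'
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: changeme
      MYSQL_DATABASE: guacamole_db
  guacd:
    image: guacamole/guacd
  guacamole:
    image: guacamole/guacamole
    ports:
      - "8080:8080"
    environment:
      GUACD_HOSTNAME: guacd
      MYSQL_HOSTNAME: db
      MYSQL_DATABASE: guacamole_db
      MYSQL_USER: root
      MYSQL_PASSWORD: changeme
```

With such a setup, docker-compose up would start all three containers and expose the web interface on port 8080, matching the URL mentioned above.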

How do I configure Univention Guacamole?

In Amazon CloudFormation we use the “NoAuth” plugin for RDP connections, with which all connections can be used without previous authentication. The configuration is not performed in this case via Guacamole’s web interface, but rather via a configuration file in the Guacamole Docker container. Following a subsequent restart of the Guacamole Docker container, the new configuration is available and the RDP connections can be used.

You can find more information on Guacamole here:

The post Cool Solutions – Guacamole…Not Just a Dip! first appeared on Univention.

24 June, 2016 02:25PM by Timo Denissen

hackergotchi for Ubuntu developers

Ubuntu developers

Dirk Deimeke: Linkdump 25/2016 ...

Nice finds from the past week.

One of the best summaries of why you should start your own blog: Starte (d)ein Blog – heute!

Dies ist keine Übung (this is not a drill): in the worst case, external hard drives get encrypted as well, by the way. In any case, you should notice the encryption in time, otherwise you can forget about a restore.

The effort of getting to know yourself pays off. You might even turn out to be quite a nice person. :-) Positives Selbstwertgefühl, on positive self-esteem and how best to apply your personal strengths in your job.

The Life-Changing Magic Of Shorter Emails comes from the "Captain Obvious Department" ... (sorry!).

Yes, we know, but training is still sorely needed: Der Fachkräftemangel ist ein Phantom (the skilled-worker shortage is a phantom).

The Epic Story of Dropbox’s Exodus From the Amazon Cloud Empire - nice one about a major infrastructure change without interruption of service.

Cheap is not always a good deal, and even if it looks very simple, expertise is still required: Auf diese Aspekte müssen Admins achten.

24 June, 2016 03:36AM by Dirk Deimeke (nospam@example.com)

June 23, 2016

Xubuntu: Looking for memorable and fun Xubuntu stories!

To celebrate Xubuntu’s tenth birthday*, the Xubuntu team is glad to announce a new campaign and competition!

We’re looking for your most memorable and fun Xubuntu story. To participate, submit your story to xubuntu-contacts@lists.ubuntu.com. Or you may send an image (photo, drawing, painting, etc.) to Elizabeth K. Joseph <lyz@ubuntu.com> and Pasi Lallinaho <pasi@shimmerproject.org>; please restrict your file size to a maximum of 5M.

For example, have you shared Xubuntu with a friend or family member, and had them react in a memorable way? Or have you created Xubuntu-themed cookies, cakes or artwork? No story or experience is too simple to share and don’t be restricted by these examples, surprise us!

Bonus: Share it on Twitter with the hashtag #LoveXubuntu, and during the competition the Xubuntu team will retweet posts on the Xubuntu Twitter account. Additionally, we encourage you to share your stories all over social media!

At the end of the competition, we will select 5 finalists. All finalists will receive a set of Xubuntu stickers from UnixStickers! We will pick 2 winners from the finalists who will also receive a Xubuntu t-shirt! We will be in touch with the finalists and winners after the contest has ended to check their address details and preferred t-shirt size and color (for winners).

Notes on licensing: Submissions to the #LoveXubuntu campaign will be accepted under the CC BY-SA 4.0 license and available for use for Xubuntu marketing in the future without further consent from the participants. That said, we’re friendly folks and will try to communicate with you before using your story or image!

* The first official Xubuntu release was 6.06, released on June 1, 2006.

23 June, 2016 09:12PM

Kubuntu: Kubuntu Dojo 2 – Kubuntu Ninjas

Want to get deeper under the hood with Kubuntu?

Interested in becoming a developer?

Then come and join us in the Kubuntu Dojo:

Thursday 30th June 2016 – 18:00 UTC

Packaging is one of the primary development tasks in a large Linux distribution project. Packaging is the essential way of getting the latest and best software to the user.

We continue our Kubuntu Dojo and Ninja developers training courses. These courses are free to attend, and take place on the last Thursday of the month at 18:00 UTC.

This is course number 2, where we will look at Debian and Ubuntu packaging. Candidates will create their first packages, including uploading them to their own PPA on Launchpad, all delivered inside our online video classroom.

Details for accessing the Kubuntu Big Blue Button Conference server will be announced in the G+ event stream, and on IRC: irc://irc.freenode.net


Why it rocks

All the cool kids are doing it.
Packagers know everyone.
Not only will you be part of an elite group, but you will also get to know Debian’s finest, as well as KDE developers and other application developers.

For more details about the Kubuntu Ninjas programme see our wiki:


23 June, 2016 06:22PM

The Fridge: Ubuntu Membership Board call for nominations

As you may know, Ubuntu Membership is a recognition of significant and sustained contribution to Ubuntu and the Ubuntu community. To this end, the Community Council recruits members of our current membership community for the valuable role of reviewing and evaluating the contributions of potential members to bring them on board or assist with having them achieve this goal.

We have seven members of our boards expiring from their 2 year terms within the next couple months, which means we need to do some restaffing of this Membership Board.

We’re looking for Ubuntu Members who can participate either in the 20:00 UTC meetings or 22:00 UTC (if you can make both, even better).

Both the 20:00 UTC and the 22:00 UTC meetings happen once a month, specific day may be discussed by the board upon addition of new members.

We have the following requirements for nominees:

  • be an Ubuntu member (preferably for some time)
  • be confident that you can evaluate contributions to various parts of our community
  • be committed to attending the membership meetings
  • broad insight into the Ubuntu community at large is a plus

Additionally, those sitting on membership boards are current Ubuntu Members with a proven track record of activity in the community. They have shown themselves over time to be able to work well with others and display the positive aspects of the Ubuntu Code of Conduct. They should be people who can discern character and evaluate contribution quality without emotion while engaging in an interview/discussion that communicates interest, a welcoming atmosphere, and which is marked by humanity, gentleness, and kindness. Even when they must deny applications, they should do so in such a way that applicants walk away with a sense of hopefulness and a desire to return with a more complete application rather than feeling discouraged or hurt.

To nominate yourself or somebody else (please confirm they wish to accept the nomination and state you have done so), please send a mail to the membership boards mailing list (ubuntu-membership-boards at lists.ubuntu.com). You will want to include some information about the nominee, a launchpad profile link and which time slot (20:00 or 22:00) the nominee will be able to participate in.

We will be accepting nominations through Friday July 1st at 12:00 UTC. At that time all nominations will be forwarded to the Community Council who will make the final decision and announcement.

Thanks in advance to you and to the dedication everybody has put into their roles as board members.

Originally posted to the ubuntu-news-team mailing list on Mon Jun 20 13:04:03 UTC 2016 by Svetlana Belkin, on the behalf of the Community Council

23 June, 2016 06:01PM

Ubuntu Podcast from the UK LoCo: S09E17 – Sherlock Holmes’ Smarter Brother - Ubuntu Podcast

It’s Episode Seventeen of Season Nine of the Ubuntu Podcast! Mark Johnson, Alan Pope, Laura Cowen, Martin Wimpress, and Mycroft are here and speaking to your brain.

We’re here – all of us!

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

23 June, 2016 02:00PM

Ubuntu App Developer Blog: Snapd 2.0.9: full snap confinement on Elementary 0.4

As of today and part of our weekly release cadence, a new snapd is making its way to your 16.04 systems. Here is what’s new!

Command line

  • snap interfaces can now give you a list of all snaps connected to a specific interface:
  • Introduction of snap run <app.command>, which provides a clean and simple way to run commands and hooks for any installed revision of a snap. As of this writing, to try it you need to wait for a newer core snap to be promoted to the stable channel, or alternatively switch to the beta channel with snap refresh --channel=beta ubuntu-core


  • Enable full confinement on Elementary 0.4 (Loki)
  • If a distribution doesn’t support full confinement through Apparmor and seccomp, snaps are installed in devmode by default.


  • Installing the core snap will now request a restart
  • Rest API: added support to send apps per snap, to allow finer-grained control of snaps from the Software center.

Have a look at the full changelog for more details.

What’s next?

Here are some of the fixes already lined up for the next snapd release:

  • New interfaces to enable more system access for confined snaps, such as “camera”, “optical-drive” and “mpris”. This will give a lot more latitude for media players (control through the mpris dbus interface, playing DVDs, etc.) and communication apps. You can try them now by building snapd from source.
  • Better handling of snaps on nvidia graphics
  • And much more to come, watch the new Snapcraft social channels ( twitter, google+, facebook) for updates!

23 June, 2016 01:37PM by David Callé (david.calle@canonical.com)

hackergotchi for Univention Corporate Server

Univention Corporate Server

New Release of UCS@school Simplifies the Central Management of User Accounts

With the release of UCS@school 4.1 R2, we have equipped our IT solution for schools with two additional functions for the central management of digital identities and authorizations. From now on, UCS@school allows teaching staff and pupils to use their accounts at several schools, and stored school administration data can now be automatically imported into the identity management system of UCS. These new functions significantly reduce the maintenance effort of managing the identities and authorizations of pupils and teaching staff, and strengthen UCS@school’s standing as a solution for centralized and cost-effective IT management of applications and digital identities in schools.

Cross-school user accounts for assigning students and teachers to various schools

With the introduction of cross-school user accounts we are meeting a wish of our school customers, as more than one third of pupils and teaching staff now attend more than one school. Until now, in order to access school computers, applications, shared data or email accounts, they needed a separate user name and password at each school. With this new option, school administrators can authorize users at any number of schools and permit access to the necessary IT at those locations. User authorization is automatically transferred to the particular online learning platform, file sharing or email software at the different schools.

Automated import of user data from school administration software

A second substantial improvement in the UCS@school 4.1 R2 release is the implementation of automated synchronization of user data between the administration and the school IT. Schools are very much like large companies in terms of the number of “employees” they have. But unlike companies, the entire “workforce” changes yearly: each year pupils change classes or even schools. Until now this meant a double expenditure of effort, as changes had to be made not only in the administration software but also in the schools’ IT.

And so our development department expanded the mechanism for the automation of data import from user accounts in school administration into individual schools. User names and email addresses are generated automatically out of the data and made available to the identity management. Upon request further variables can be defined.

For these imports UCS@school provides an interface which enables data to be directly converted. So different data formats like XML, JSON, CSV can simply be transferred and processed on location via UCS@school. This allows schools, for example, at the beginning of the school year, or in the case of a change of school, to automatically make user accounts available. A manual change isn’t necessary and the work is significantly reduced.

Further functions of the new UCS@school version:

  • Life-cycle management
  • Test runs for the importation and automated deactivation of data
  • Automated deletion of antiquated user accounts
  • Preview function for outcomes of automatically imported data

The post New Release of UCS@school Simplifies the Central Management of User Accounts first appeared on Univention.

23 June, 2016 01:18PM by Maren Abatielos

June 22, 2016

hackergotchi for Cumulus Linux

Cumulus Linux

Independence from L2 Data Centers

We’ve all been there. That “non-disruptive” maintenance window that should “only be a blip”. You sit down at the terminal at 10pm expecting that adding a new server to the MLAG domain or upgrading a single switch will be a simple process, only to lose the rack of dual-attached servers and spend the rest of your Thursday night frantically trying to bring the cluster back online.

If I never spend another evening troubleshooting an outage caused by MLAG, I’ll die happy!

While MLAG provides higher availability than single-attaching or creating a multi-port bond to a single switch, it comes at the cost of a delicate balancing act. What if there was a way to provide redundancy without MLAG’s fragility and its risk to maintenance windows?

We at Cumulus Networks have seen many of our customers solve these problems by leveraging Cumulus Quagga, our enhanced version of the Quagga routing suite, on their server hosts, so we’ve decided to call it Routing on the Host and make it broadly available for download.

By leveraging the routing protocols OSPF or BGP all the way to the server, we can resolve that MLAG problem once and for all.


Over the last five years, data centers have evolved to deploy more routing and less bridging. In the early days of data center architecture, we routed down to a pair of aggregation switches, with a large layer 2 domain below it. Then the network evolved to include routing to the top of rack switches, where a local MLAG pair acted as the means to connect to the rack of servers below. Because routing limits loops and broadcasts while allowing for equal cost multipath (ECMP) for all-active link utilization, Routing on the Host lets you finally realize these advantages all the way to the server and start looking at more resilient networks running at higher speeds.



With Routing on the Host, you can use ECMP for high I/O workloads, like Ceph storage or Apache Spark; you can connect servers to more than two top of rack switches and share them all equally.



For clustered solutions, like Ceph storage, the loss of a rack of servers due to an MLAG issue can have a major impact on the network as the cluster must replicate and rebalance all of the data that was just temporarily lost. The added network stability from Routing on the Host can prevent these small outages that have a massive impact.

When it’s 10pm on Thursday and it’s time to execute maintenance, Routing on the Host allows for zero packet loss due to network changes and upgrades. You can change the OSPF link metric or BGP AS Path on the top of rack switch that is about to undergo maintenance, gracefully removing it from the data path. This signals the server to send traffic to other attached top of rack switches. Once all traffic has routed around the switch undergoing maintenance, you can execute the software upgrades or configuration changes. You can even try this yourself with a demo I made in Cumulus VX.
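As an illustration of the draining step in the BGP case, here is a sketch in Quagga configuration syntax. The AS number 65001 and the interface name swp1 are placeholders chosen for this example, not values from the article, and the sketch assumes swp1 is already configured as a BGP peer toward the servers:

```
! Sketch: prepend our own AS so attached servers prefer paths through
! the other top-of-rack switch while this one undergoes maintenance.
route-map DRAIN permit 10
 set as-path prepend 65001 65001 65001
!
router bgp 65001
 neighbor swp1 route-map DRAIN out
```

Once traffic has shifted away, perform the upgrade, then remove the route-map from the neighbor to return the switch to the data path.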


Finally, Routing on the Host provides a superior network design for bare metal and containers. By relying on OSPF or BGP unnumbered, a server can easily connect anywhere in the data center and advertise IP addresses for itself or containers. With containers, you can use Routing on the Host to advertise the overlay endpoint and allow Docker networking or a networking plugin to dynamically build overlays.

Running Cumulus Quagga directly to a server provides real benefits in a simple manner. Never again do you have to worry about MLAG maintenance taking down your environment or a server administrator confusing server bridge and bond configurations, causing a catastrophic bridging loop. Service providers and web scale companies have relied on entirely routed environments to provide the highest scalability and reliability in their networks.

Routing on the Host is a way to get this same power and flexibility throughout the entire data center.

Download Cumulus Quagga and start your journey of independence from late night troubleshooting and L2 data centers!




The post Independence from L2 Data Centers appeared first on Cumulus Networks Blog.

22 June, 2016 09:27PM by Pete Lumbis

hackergotchi for Maemo developers

Maemo developers

2016-06-14 Meeting Minutes

Meeting held 2016-06-14 on FreeNode, channel #maemo-meeting (logs)

Attending: reinob, Win7Mac, M4rtinK, eekkelund, pichlo

Partial: chem|st

Absent: juiceme

Summary of topics (ordered by discussion):

  • Topic Coding Competition
  • Topic Bitcoins, HTTPS

(Topic Coding Competition):

  • Council is thinking of reducing the number of categories due to the low number of participants
  • Four categories: Something new, Fixing/updating, Beginner and Wishlist

(Topic Bitcoins, HTTPS):

  • chem|st will find legal disclaimer for donations made in BTC
  • A cert is needed for HTTPS; the Council discussed which cert to choose

Action Items:
  • old items:
    • The next GA meeting should be announced soon.
    • Tax-exempt status
    • Could we make the coding competition happen
  • new items:
    • Find legal disclaimer for Bitcoin donations

Solved Action Items:
  • Ask about tor and https from techstaff

22 June, 2016 07:27PM by Eetu Kahelin (eetu.kahelin@metropolia.fi)

2016-06-07 Meeting Minutes

Meeting held 2016-06-07 on FreeNode, channel #maemo-meeting (logs)

Attending: reinob, eekkelund, Win7Mac, juiceme, pichlo



Summary of topics (ordered by discussion):

  • Topic Tax-exempt
  • Topic More Coding Competition discussion

(Topic Tax-exempt):

  • juiceme has application form for Maemo community tax-exempt status
  • Financial reports for the past 2 years need to be completed for the application
  • "Being tax-exempt means we're a registered charity."
  • Is there any professional account managers among our community? Council could need little help.

(Topic More Coding Competition discussion):

  • Discussion about basic tasks regarding the Coding Competition
  • eekkelund promised to think about leading the Coding Competition

Action Items:
  • old items:
    • The next GA meeting should be announced soon.
    • Ask about tor and https from techstaff
    • Could we make the coding competition happen
  • new items:
    • Tax-exempt status

Solved Action Items:
  • Find out if https is doable (pichlo)
    • Check missing privileges to the Council blog (juiceme)

22 June, 2016 07:26PM by Eetu Kahelin (eetu.kahelin@metropolia.fi)

hackergotchi for Ubuntu developers

Ubuntu developers

Dirk Deimeke: Es wird so langsam ...

Another step forward.

Yesterday, on the longest and probably also the most humid day of the year, I took my examination for the 2nd Kyu, and actually passed.

The programmes are getting more and more complex, and now "posture marks" are awarded as well. ;-) That is nonsense, of course, but the further you get, the more the examiners want to see not only that the technique is understood and executed, but that it is executed with style and bearing.

In that respect I still have, to put it mildly, room for improvement.

I'm working on it.

22 June, 2016 07:16AM by Dirk Deimeke (nospam@example.com)

June 21, 2016

Jonathan Riddell: KDE neon Press Coverage and Comments

KDE neon User Edition 5.6 came out a couple of weeks ago, let’s have a look at the commentry.

Phoronix stuck to their reputation by announcing it a day early but redeemed themselves with a follow-up article, KDE neon: The Rock & Roll Distribution. “KDE neon feels amazing. There’s simply no other way to say it.”

CIO had an exclusive interview with moi, “It is a continuously updated installable image that can be used not just for exploration and testing but as the main operating system for people enthusiastic about the latest desktop software.”

For the Spanish speaker MuyLinux wrote KDE Neon lanza su primera versión para usuarios. “La primera impresión ha sido buena.” or “The first impression was good”.

On YouTube we got a review from Jeff Linux Turner. “This thing’s actually pretty good.  I like it.” While Wooden User gives an unvoiced tour with funky music.  Riba Linux has the same but with more of an indy soundtrack.

Reddit had several threads on it including a review by luxitanium which I’ll selectively quote with “Is it ready for consumers? It is definitely getting there, oh yes“.

The award winning Spanish language KDE Blog covered Probando KDE Neon User Edition 5.6. “Estamos ante un gran avance para la Comunidad KDE” or “We are facing a breakthrough for the KDE Community“.

Meanwhile on Twitter:

Want to meet the genius behind the neon light? Harald is giving a talk at the opensuse conference on Thursday. Do drop by in Nürnberg.


21 June, 2016 10:29PM

Forums Council: New Ubuntu Member via forums contributions

Please welcome our newest Member, vasa1.

Not only has vasa1 been a long-time contributor to the forums, he’s also a member of the Forums Staff.

vasa1’s application thread can be viewed here.

Congratulations from the Forums Council!

If you have been a contributor to the forums and wish to apply to Ubuntu Membership, please follow the process outlined here.

21 June, 2016 05:23PM

hackergotchi for Tails


Tails report for May, 2016


  • segfault sent his first and second report on his GSoC on Tails Server.

  • We've prepared patched versions of Icedove and Torbirdy for the next Tails release. Icedove users can now use the automatic configuration wizard, and by default requests will be made only over secure protocols. Other Icedove improvements include using an hkps keyserver in Enigmail and defaulting to POP if persistence is enabled, IMAP if not. We've disabled remote email account creation. All our patches are being upstreamed to Thunderbird and Torbirdy.

Documentation and website

User experience


  • Our channel #tails for user support moved from IRC to XMPP. For further details read our support page.

  • The new mirror pool is now used by Tails Upgrader, by users who download Tails without using our Download And Verification Extension for Firefox (a.k.a. DAVE), for any download that is not supported by DAVE (e.g. release candidates), and for downloads started from a web browser that has JavaScript disabled. In summary, two of the use cases of this work are covered already, and only the "downloading with DAVE" use case is left to complete. As of May 31st we have 36 active mirrors.


  • Mediapart, a French online investigative journal that uses Tails for its work, is the first media organization to respond positively to our call to support Tails financially. The terms of our partnership are still being discussed.

  • We submitted a proposal to fund reproducible builds to the Mozilla Open Source Support.


Upcoming events

  • jvoisin and fr33tux are going to give a talk on Tails at Nuit du Hack.

On-going discussions

Press and testimonials


  • The documentation is now also available in Italian. We welcome the Italian translation team; thanks for all of the work done!

Overall translation of the website

  • de: 50% (2648) strings translated, 4% strings fuzzy, 44% words translated
  • fa: 47% (2492) strings translated, 7% strings fuzzy, 54% words translated
  • fr: 64% (3376) strings translated, 4% strings fuzzy, 65% words translated
  • it: 17% (896) strings translated, 2% strings fuzzy, 17% words translated
  • pt: 31% (1660) strings translated, 6% strings fuzzy, 29% words translated

Total original words: 53532

Core pages of the website

  • de: 79% (1432) strings translated, 6% strings fuzzy, 79% words translated
  • fa: 40% (726) strings translated, 9% strings fuzzy, 42% words translated
  • fr: 74% (1341) strings translated, 5% strings fuzzy, 77% words translated
  • it: 49% (886) strings translated, 6% strings fuzzy, 56% words translated
  • pt: 55% (1001) strings translated, 9% strings fuzzy, 55% words translated

Total original words: 16492


  • Tails has been started more than 541,950 times this month. This makes 17,482 boots a day on average.
  • 6359 downloads of the OpenPGP signature of Tails ISO from our website.
  • 112 bug reports were received through WhisperBack.

21 June, 2016 05:17PM

hackergotchi for Tanglu developers

Tanglu developers

Cutelyst 0.12.0 is out!

Cutelyst, a web framework built with Qt, is now closer to its first stable release. With the project turning 3 years old at the end of the year, I'm doing my best to finally iron it out and reach an API/ABI commitment. This release is full of cool stuff and a bunch of breaks, most of which just require recompiling.

For the last 2-3 weeks I've been working hard to get most of it unit tested; the core behavior is now extensively covered by more than 200 tests. This has already proven its benefits, resulting in improved and fixed code.

Continuous integration now runs more broadly: with GCC and Clang on both OS X and Linux (Travis), and with MSVC 12 and 14 on Windows (AppVeyor). Luckily most of the features I wanted were implemented, but the compiler is pretty upsetting. Running Cutelyst on Windows would require uwsgi, which can be built with MinGW but is in an experimental state, and the developer HTTP engine is not production ready, so Windows usefulness is limited at the moment.

One of the 'hypes' of the moment is non-blocking web servers, and this release also fixes handling of uwsgi --async <number_of_requests>. Of course there is no magic: if you enable this on blocking code, requests will still have to wait for your blocking task to finish, but there are many benefits if your code is non-blocking. At the moment, once a slot is called to process the request and you want to do, say, a GET on some web service, you can use QNetworkAccessManager for the call and spin a local QEventLoop, so that once the QNetworkReply finished() signal is emitted you continue processing. Hopefully some day the QtSql module will have an async API, but you can of course create a thread queue.

A new plugin called StatusMessage was also introduced. It generates an ID that you use when redirecting to some other page; the message is displayed only once and doesn't suffer from the race conditions of "flash" messages.

The upload parser for Content-Type multipart/form-data got a huge performance boost, as it now uses QByteArrayMatcher to find boundaries; the bigger the upload, the more apparent the change.
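The win comes from precomputing search tables for the boundary once and reusing them across the whole body, which is what QByteArrayMatcher does. A rough standard-C++ analogue of the technique (this is an illustration, not Cutelyst's actual code) using std::boyer_moore_searcher:

```cpp
#include <algorithm>
#include <functional>
#include <string>
#include <vector>

// Find the byte offset of every occurrence of `boundary` in `body`.
// The searcher's skip tables are built once, so repeated searches over
// a large upload stay cheap -- the same idea behind QByteArrayMatcher.
std::vector<size_t> find_boundaries(const std::string& body,
                                    const std::string& boundary) {
    std::vector<size_t> offsets;
    std::boyer_moore_searcher searcher(boundary.begin(), boundary.end());
    auto it = body.begin();
    while (true) {
        it = std::search(it, body.end(), searcher);
        if (it == body.end()) break;
        offsets.push_back(static_cast<size_t>(it - body.begin()));
        ++it;  // continue scanning just past this match
    }
    return offsets;
}
```

A naive byte-by-byte scan restarts the comparison at every position; the precomputed tables let the search skip ahead, which matters more as uploads grow.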

Chunked responses also got several fixes and one great improvement that allows using them with classes like QXmlStreamWriter: just pass the Response class (which is now a QIODevice) to its constructor or setDevice(). On the first write the HTTP headers are sent and chunking starts. For some reason this doesn't work when using the uwsgi protocol behind Nginx; I still need to dig in, and maybe disable the chunk markup depending on the protocol uwsgi uses.

A Pagination class is also available to ease the work of writing pagination, with methods to compute the proper LIMIT and OFFSET for SQL queries.
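The arithmetic behind such a helper is small. A hypothetical sketch (function names are illustrative and not Cutelyst's actual API):

```cpp
#include <algorithm>
#include <string>

// OFFSET for a 1-based page number: page 1 starts at row 0,
// page 2 at row `per_page`, and so on. Out-of-range pages clamp to 1.
long page_offset(long page, long per_page) {
    return (std::max(page, 1L) - 1) * per_page;
}

// The LIMIT/OFFSET clause a pagination helper would append to a query.
std::string limit_clause(long page, long per_page) {
    return "LIMIT " + std::to_string(per_page) +
           " OFFSET " + std::to_string(page_offset(page, per_page));
}
```

So page 3 with 20 rows per page yields `LIMIT 20 OFFSET 40`, and the helper keeps that off-by-one logic out of every controller.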

Benchmarks for the TechEmpower framework were written and will be available on Round 14.

Last but not least, there is now QtCreator integration, which allows creating a new project and Controller classes; you need to manually copy (or link) the qtcreator directory to ~/.config/QtProject/qtcreator/templates/wizard.


As usual many bug fixes are in.

Help is welcome, you can mail me or hang on #cutelyst at freenode.

Download here.


21 June, 2016 02:04PM by dantti

hackergotchi for Ubuntu developers

Ubuntu developers

Dirk Deimeke: Timewarrior 0.9.5 Alpha Release ...

Here it finally is: the first alpha release of Timewarrior. Timewarrior is a command-line application for tracking time.

There is a hook from Taskwarrior to Timewarrior.

In the long term, Timewarrior will replace the timesheet command in Taskwarrior.

You can find the news entry, with a nice screenshot, here.

21 June, 2016 05:50AM by Dirk Deimeke (nospam@example.com)

hackergotchi for VyOS


Social network integration for VyOS 2.0

There are still many design decisions we have to make for the next generation VyOS.

No project can become successful these days if it lacks social network integration. For this reason we started working on it right away.

When you install VyOS, you will be prompted to link it to at least one supported social network, Facebook or Twitter. If you choose to use Facebook, you will be prompted to join the official VyOS group and join or create a group for your network.

One of the functions of social networks is to connect like-minded people and let them tell one another about their interests. For this reason we are adding "likes" to the command line interface. You will be able to like a particular feature and see who else likes it.

We will be using the number of likes to prioritize features. Features that get the most likes will get the most attention, so make sure to like the features you use.

Knowing who else likes features you use will also help you find people to share experience or ask for help.

vyos@vyos# like protocols ospf

vyos@vyos# run show likes protocols ospf
dmbaturin likes OSPF
syncer likes OSPF
jrandomhacker likes OSPF

The other important function is to let other people know what you are up to. VyOS 2.0 will automatically post information about new commits, image upgrades, and other events to Facebook and Twitter. You can also choose to send all log messages to your Twitter account. While there is no mechanism in place for this yet, in the future VyOS may be able to post screenshots of error messages to Instagram.

And, it's a while off, but maybe at some point you will be able to control your routers through posts in social network groups. We call this concept SDN (Socially Defined Networking). Stay tuned!

21 June, 2016 12:49AM by Yuriy Andamasov

hackergotchi for Ubuntu


Ubuntu Weekly Newsletter Issue 470

Welcome to the Ubuntu Weekly Newsletter. This is issue #470 for the week June 13 – 19, 2016, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Paul White
  • Simon Quigley
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution 3.0 License BY SA Creative Commons License

21 June, 2016 12:37AM by tsimonq2