February 18, 2019

hackergotchi for Kali Linux

Kali Linux

Kali Linux 2019.1 Release

Welcome to our first release of 2019, Kali Linux 2019.1, which is available for immediate download. This release brings our kernel up to version 4.19.13, fixes numerous bugs, and includes many updated packages.

Tool Upgrades

The big marquee update of this release is the update of Metasploit to version 5.0, which is their first major release since version 4.0 came out in 2011.

root@kali:~# msfconsole

     ,           ,
    /             \
   ((__---,,,---__))
      (_) O O (_)_________
         \ _ /            |\
          o_o \   M S F   | \
               \   _____  |  *
                |||   WW|||
                |||     |||


       =[ metasploit v5.0.2-dev                           ]
+ -- --=[ 1852 exploits - 1046 auxiliary - 325 post       ]
+ -- --=[ 541 payloads - 44 encoders - 10 nops            ]
+ -- --=[ 2 evasion                                       ]
+ -- --=[ ** This is Metasploit 5 development branch **   ]

msf5 >

Metasploit 5.0 is a massive update that includes database and automation APIs, new evasion capabilities, and usability improvements throughout. Check out their in-progress release notes to learn about all the new goodness.

Kali Linux 2019.1 also includes updated packages for theHarvester, DBeaver, and more. For the complete list of updates, fixes, and additions, please refer to the Kali Bug Tracker Changelog.

ARM Updates

The 2019.1 Kali release for ARM includes the return of Banana Pi and Banana Pro, both of which are on the 4.19 kernel. Veyron has been moved to a 4.19 kernel and the Raspberry Pi images have been simplified so it is easier to figure out which one to use. There are no longer separate Raspberry Pi images for users with TFT LCDs because we now include re4son’s kalipi-tft-config script on all of them, so if you want to set up a board with a TFT, run ‘kalipi-tft-config’ and follow the prompts.

Download Kali Linux 2019.1

If you would like to check out this latest and greatest Kali release, you can find download links for ISOs and Torrents on the Kali Downloads page along with links to the Offensive Security virtual machine and ARM images, which have also been updated to 2019.1. If you already have a Kali installation you’re happy with, you can easily upgrade in place as follows.

root@kali:~# apt update && apt -y full-upgrade

Ensuring your Installation is Updated

To double check your version, first make sure your Kali package repositories are correct.

root@kali:~# cat /etc/apt/sources.list
deb http://http.kali.org/kali kali-rolling main non-free contrib

Then, after running ‘apt -y full-upgrade’, you may need to reboot before checking:

root@kali:~# grep VERSION /etc/os-release
VERSION="2019.1"
VERSION_ID="2019.1"
root@kali:~#
root@kali:~# uname -a
Linux kali 4.19.0-kali1-amd64 #1 SMP Debian 4.19.13-1kali1 (2019-01-03) x86_64 GNU/Linux

If you come across any bugs in Kali, please open a report on our bug tracker. We’ll never be able to fix what we don’t know about.

18 February, 2019 06:42PM by dookie

hackergotchi for LiMux

LiMux

Do you know the SmartCity App? First-hand information

At the #mucgov18 barcamp in May 2018, one session aimed to gather feedback and ideas for the further development of Munich's SmartCity App directly from citizens, and thus to better understand their needs. … Read more

The post Do you know the SmartCity App? First-hand information first appeared on the Münchner IT-Blog.

18 February, 2019 03:12PM by Lisa Zech

hackergotchi for Ubuntu developers

Ubuntu developers

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, January 2019

A Debian LTS logoLike each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In January, about 204.5 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Abhijith PA did 12 hours (out of 12 hours allocated).
  • Antoine Beaupré did 9 hours (out of 20.5 hours allocated, thus keeping 11.5h extra hours for February).
  • Ben Hutchings did 24 hours (out of 20 hours allocated plus 5 extra hours from December, thus keeping one extra hour for February).
  • Brian May did 10 hours (out of 10 hours allocated).
  • Chris Lamb did 18 hours (out of 18 hours allocated).
  • Emilio Pozuelo Monfort did 42.5 hours (out of 20.5 hours allocated + 25.25 extra hours, thus keeping 3.25 extra hours for February).
  • Hugo Lefeuvre did 20 hours (out of 20 hours allocated).
  • Lucas Kanashiro did 5 hours (out of 4 hours allocated plus one extra hour from December).
  • Markus Koschany did 20.5 hours (out of 20.5 hours allocated).
  • Mike Gabriel did 10 hours (out of 10 hours allocated).
  • Ola Lundqvist did 4.5 hours (out of 8 hours allocated + 6.5 extra hours, thus keeping 8 extra hours for February, as he also gave 2h back to the pool).
  • Roberto C. Sanchez did 10.75 hours (out of 20.5 hours allocated, thus keeping 9.75 extra hours for February).
  • Thorsten Alteholz did 20.5 hours (out of 20.5 hours allocated).

Evolution of the situation

In January we again managed to dispatch all available hours (well, except one) to contributors. We also still had one new contributor in training, though starting in February Adrian Bunk has become a regular contributor. However, we will lose another contributor in March, so we are still very much looking for new contributors. Please contact Holger if you are interested in becoming a paid LTS contributor.

The security tracker currently lists 40 packages with a known CVE and the dla-needed.txt file has 42 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


18 February, 2019 02:05PM

Robert Ancell: GIFs in GNOME

    Here is the story of how I fell down a rabbit hole and ended up learning far more about the GIF image format than I ever expected...
    We had a problem with users viewing a promoted snap using GNOME Software. When they opened the details page they'd have huge CPU and memory usage. Watching the GIF in Firefox didn't show a problem - it showed a fairly simple screencast demoing the app without any issues.
    I had a look at the GIF file and determined:
    • It was quite large for a GIF (13 MB).
    • It had a lot of frames (625).
    • It was quite high resolution (1790×1060 pixels).
    • It appeared the GIF was generated from a compressed video stream, so most of the frame data was just compression artifacts. GIF is lossless so it was faithfully reproducing details you could barely notice.
    GNOME Software uses GTK+, which uses gdk-pixbuf to render images. So I had a look at the GIF loading code. It turns out that all the frames are loaded into memory. That comes to 625×1790×1060×4 bytes. OK, that's about 4.4 GiB... I think I see where the problem is. There's a nice comment in the gdk-pixbuf source that sums up the situation well:

     /* The below reflects the "use hell of a lot of RAM" philosophy of coding */

    They weren't kidding. 🙂
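    For anyone checking the arithmetic, a quick shell calculation (not from the original post) confirms the figure:

     $ echo $((625 * 1790 * 1060 * 4))
     4743500000

    That is roughly 4.4 GiB, so the numbers add up.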

    While this particular example is hopefully not the normal case, the GIF format has somewhat come back from the dead in recent years to become a popular format again. So it would be nice if gdk-pixbuf could handle these cases well. This was going to be a fairly major change to make.

    The first step in refactoring is making sure you aren't going to break any existing behaviour when you make changes. To do this the code being refactored should have comprehensive tests around it to detect any breakages. There are a good number of GIF tests currently in gdk-pixbuf, but they are mostly around ensuring particular bugs don't regress rather than checking all cases.

    I went looking for a GIF test suite that we could use, but what was out there was mostly just collections of GIFs people had made over the years. This would give some good real world examples but no certainty that all cases were covered or why your code was breaking if a test failed.

    If you can't find what you want, you have to build it. So I wrote PyGIF - a library to generate and decode GIF files and made sure it had a full test suite. I was pleasantly surprised that GIF actually has a very well written specification, and so implementation was not too hard. Diversion done, it was time to get back to gdk-pixbuf.

    Tests plugged in, and the existing code actually has a number of issues. I fixed them, though it took a fair amount of sanity to do so. It would have been easier to replace the code with new code that met the test suite, but I wanted the patches to be back-portable to stable releases (i.e. Ubuntu 16.04 and 18.04 LTS).

    And with a better foundation, I could now make GIF frames load on demand. May your GIF viewing in GNOME continue to be awesome.

    18 February, 2019 12:55PM by Robert Ancell (noreply@blogger.com)

    February 17, 2019

    hackergotchi for VyOS

    VyOS

    Where is my OVA dude?

     

    Greetings!

    As we mentioned in our emails, the OVA is a bit delayed; as always, there are reasons for that.

    The first one is VMware Ready validation, which we hope will finish soon.

    The second one is a significant rework of the OVA distribution, which includes configuration via the OVF environment (see T722).

    17 February, 2019 05:17PM by Yuriy Andamasov (yuriy@sentrium.io)

    hackergotchi for SparkyLinux

    SparkyLinux

    Sparky Online

    There is a new, small tool available for Sparkers: Sparky Online.

    What is Sparky Online?
    Sparky Online checks your web site to tell you whether it is online and displays a pop-up message on your desktop.

    It is not in the Sparky repos yet, but it can be installed manually from Sparky’s GitHub repo.

    1. Install dependencies:
    sudo apt update
    sudo apt install cron iputils-ping wget yad

    2. Download ‘sparky-online’ and ‘sparky-online-cron’ files from the GitHub repo:
    github.com/sparkylinux/sparky-online
    3. Modify ‘sparky-online’ – place your web site address in the WEB section, then:
    sudo cp sparky-online /usr/bin/
    sudo chmod +x /usr/bin/sparky-online

    4. Modify ‘sparky-online-cron’ – it is set to check the web site every hour, so change the value to your preferred interval (in hours) and ‘pavroo’ to your user name (an illustrative entry is sketched after these steps), then:
    sudo cp sparky-online-cron /etc/cron.d/
    sudo chown root:root /etc/cron.d/sparky-online-cron
    sudo systemctl restart cron
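    As a purely illustrative sketch (the real file in the GitHub repo may differ), a /etc/cron.d/sparky-online-cron entry that runs the check every hour for the user ‘pavroo’ would look something like this:

    # m h dom mon dow user   command
    0 * * * *   pavroo   /usr/bin/sparky-online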

    Sparky Online

    17 February, 2019 12:10PM by pavroo

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Lubuntu Blog: Lubuntu 18.04.2 has been released!

    Thanks to all the hard work from our contributors, we are pleased to announce that Lubuntu 18.04.2 LTS has been released! What is Lubuntu? Lubuntu is an official Ubuntu flavor which uses the Lightweight X11 Desktop Environment (LXDE). The project’s goal is to provide a lightweight yet functional Linux distribution based on a rock solid […]

    17 February, 2019 12:01AM

    February 16, 2019

    Sam Hewitt: Basic Linux Virtualization with KVM

    I am no expert on all the ins and outs of virtualization, hell, before I started looking into this stuff a “hypervisor” to me was just a really cool visor.

    Geordi La Forge

    But after reading a bunch of documentation, blog posts and StackExchange entries, I think I have enough of a basic understanding—or at least I have learnt enough to get it to work for my limited use case—to write some instructions.

    The virtualization method I went with is Kernel-based Virtual Machine (KVM) which, to paraphrase Wikipedia, is a virtualization module in the Linux kernel that allows it to function as a hypervisor, i.e. it is able to create, run and manage virtual machines (emulated computer systems). 🤓

    Creating a Virtualization Server with KVM

    My home server runs Ubuntu and (among other things) I have set it up to use KVM and QEMU for virtualization, plus I have the libvirt toolset installed for managing virtual machines from the command line and to help with accessing virtual machines over my local network on my other Linux devices.

    Notes  
    My Server OS Ubuntu 18.04.2
    My Client OS Fedora 29
    VM OS whatever you prefer, for the example I’m using Fedora 29

    For all of the following instructions, I am going to assume you are logged into your server (or whatever is going to be your virtualization hardware) and are in a terminal prompt (either directly or over ssh).

    Part 0: Prerequisites

    First, you have to see if your server’s processor supports hardware virtualization. You can do so by running the following command.

    egrep -c '^flags.*(vmx|svm)' /proc/cpuinfo
    

    This will check information on your CPU for the relevant extension support and return a number (based on the number of cores in your CPU). If it is greater than 0, your machine supports virtualization! 🎉 But if there is no result or it is 0, then it does not and there's no point in continuing.
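    On Ubuntu you can optionally double-check with kvm-ok from the cpu-checker package (not part of the original steps, just a handy cross-check):

    sudo apt install cpu-checker
    kvm-ok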

    Part 1: Server Setup

    Next, we have to install KVM, and the other required software for a virtualization environment. For an Ubuntu-based server do the following.

    sudo apt install qemu-kvm libvirt-bin virtinst bridge-utils
    

    Next, start and enable the virtualization service:

    sudo systemctl enable libvirtd.service
    sudo systemctl start libvirtd.service
    

    It’s as simple as that. Now you can also use virsh from the libvirt toolset to see the status of your virtual machines:

    virsh list --all
    

    You likely won't see any listed yet, just something like:

    Id    Name                           State
    ----------------------------------------------------
    

    On to installation!

    Part 2: Installing a Virtual Machine

    I’m going to assume for this part that you have already downloaded a disk image of your desired operating system, that will be used for the virtual machine, and you know where it is on the server.

    Deploying a virtual machine only requires one command: virt-install but it has several option flags that you’ll need to go through and adjust to your preference.

    The following is an example using Fedora 29.

    sudo virt-install \
    --name Fedora-29 \
    --ram=2048 \
    --vcpus=2 \
    --cpu host \
    --disk path=/var/lib/libvirt/images/Fedora-29.qcow2,size=32,device=disk,bus=virtio,format=qcow2 \
    --cdrom /var/lib/libvirt/boot/Fedora-Workstation-netinst-x86_64-29-1.2.iso \
    --connect qemu:///system \
    --os-type=linux \
    --graphics spice \
    --hvm --noautoconsole
    

    From the above, the following are the bits you’ll need to edit to your preferences

    name Name your virtual machine (VM)
    ram Assign an amount of memory (in MB) to be used by the VM
    vcpus Select a number of CPU cores to be assigned to the VM
    disk The disk image used by the virtual machine. You need only specify the name (i.e change Fedora-29 to something else) and update the size=32 to a desired capacity for the disk image (in GB).
    cdrom The path to the boot image that is to be used by the virtual machine. It need not be in /var/lib/libvirt/boot but the full path must be included here.

    The disk format (qcow2), I/O bus and such aren't things I'm gonna tinker with or know enough about; I'm just trusting other information I found.

    Once you have the config flags set and you have run virt-install, you will likely see output similar to the following.

    WARNING  No operating system detected, VM performance may suffer. Specify an OS with --os-variant for optimal results.
    
    Starting install...
    Allocating 'Fedora-29.qcow2'
    Domain installation still in progress. You can reconnect to the console to complete the installation process.
    

    The “WARNING” is just that and nothing to worry about.
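    If you would rather silence that warning, virt-install accepts an --os-variant flag; valid identifiers can be listed with osinfo-query from the libosinfo tools (assuming your osinfo database has an entry for Fedora 29):

    osinfo-query os | grep -i fedora
    # then add to the virt-install command above:
    #   --os-variant=fedora29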

    At this point your virtual machine should be up and running and ready for you to connect to it. You can check the status of your virtual machines by again running virsh list --all and you should see something like:

    Id    Name                           State
    ----------------------------------------------------
     3     Fedora-29                      running
     -     Debian-9.7.0                   shut off
    

    You can create as many virtual machines as your server can handle at this point, though I wouldn’t recommend running too many concurrently as there’s only so far you can stretch the sharing of hardware resources.

    Part 3: Connecting to your Virtual Machine(s)

    To connect to your virtual machine you're going to use a tool called Virtual Machine Manager; there are a few other applications out there, but this one worked the best/most consistently for me. You can likely install it on your system in the command line, using a package manager, as virt-manager.

    Virtual Machine Manager Logo

    Virtual Machine Manager can create and manage virtual machines just as we did in the command line on the server, but we’re going to use it on your computer as a client to connect to virtual machine(s) running remotely on your server.

    To add a connection, from the main window menubar, you’re going to go File > Add Connection..., which brings up the following dialog.

    Virtual Machine Manager Add Connection

    The hypervisor we are using is QEMU/KVM so that is fine as is, but in this dialog you will need to check Connect to remote host over SSH and enter your username and the hostname (or IP address) for your server, so it resembles the above, then hit “Connect”.
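    The same remote connection can also be tested from a terminal on your client machine using a libvirt URI (adjust the username and hostname to match your server):

    virsh -c qemu+ssh://username@server-hostname/system list --all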

    If all goes well, your server should appear in the main window with a list of VMs (see below for an example) and you can double-click on any machine in the list to connect.

    Virtual Machine Manager Main Window

    Doing so will launch a new window and from there you can carry on as if it were a regular computer and go through the operating system install process.

    Virtual Machine Manager Connected

    Closing this window or quitting the Virtual Machine Manager app will not stop the virtual machine, as it keeps running on your server.

    You can start and stop and even delete machines on your server using virt-manager on your computer, but it can also be done from the command line on your server with virsh, using some fairly straightforward commands:

    # to suspend a machine
    sudo virsh suspend Fedora-29
    # to shutdown a machine
    sudo virsh shutdown Fedora-29
    # to resume a machine
    sudo virsh resume Fedora-29
    # to remove a machine (stop it first, then delete its definition)
    sudo virsh destroy Fedora-29
    sudo virsh undefine Fedora-29
    

    A Few Notes

    Now unless you have astoundingly good Wi-Fi your best bet is to connect to your server over a wired connection—personally I have a direct connection via an ethernet cable between my server and another machine—otherwise (I found) there will be quite a bit of latency.

    16 February, 2019 09:00PM

    Ubuntu Studio: Updates for February 2019

    With Ubuntu 19.04’s feature freeze quickly approaching, we would like to announce the new updates coming to Ubuntu Studio 19.04. Updated Ubuntu Studio Controls This is really a bit of a bugfix for the version of Ubuntu Studio Controls that landed in 18.10. Ubuntu Studio Controls dramatically simplifies audio setup for the JACK Audio Connection […]

    16 February, 2019 08:31PM

    February 15, 2019

    hackergotchi for Purism PureOS

    Purism PureOS

    How Purism avoids the FaceTime remote camera viewing

    With the major iPhone FaceTime bug that lets you hear the audio of the person you are calling… before they pick up, it’s probably a good time to remind everyone how Purism gives you peace of mind – because with Purism, your device protects you by default.

    Hardware Kill Switches.

    Librem Hardware Kill Switches

    What this means is that there’s a physical switch that severs the circuit to your webcam and microphone.

    Because you cannot really trust software you cannot verify. And Apple™’s FaceTime™ is not Free Software – unlike the software we use at Purism, its source code is not released so that anyone can verify its security claims – so how can you trust what cannot be verified?

    At Purism, both our Librem laptops and the upcoming Librem 5 phone include this rather simple switch, which makes it remarkably easy to guarantee that the camera and microphone have no electrical circuit enabled.

    See? Powerful simple privacy protection built into all Purism products by default.

    The post How Purism avoids the FaceTime remote camera viewing appeared first on Purism.

    15 February, 2019 07:59PM by Todd Weaver

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Daniel Pocock: SFK, OSCAL and Toastmasters expanding into Kosovo

    Back in August 2017, I had the privilege of being invited to support the hackathon for women in Prizren, Kosovo. One of the things that caught my attention at this event was the enthusiasm with which people from each team demonstrated their projects in five minute presentations at the end of the event.

    This encouraged me to think about further steps to support them. One idea that came to mind was introducing them to the Toastmasters organization. Toastmasters is not simply about speaking, it is about developing leadership skills that can be useful for anything from promoting free software to building successful organizations.

    I had a look at the Toastmasters club search to see if I could find an existing club for them to visit but there doesn't appear to be any in Kosovo or neighbouring Albania.

    Starting a Toastmasters club at the Innovation Centre Kosovo

    In January, I had a conference call with some of the girls and explained the idea. They secured a venue, Innovation Centre Kosovo, for the evening of 11 February 2019.

    Albiona and I met on Saturday, 9 February and called a few people we knew who would be good candidates to give prepared speeches at the first meeting. They had 48 hours to prepare their Ice Breaker talks. The Ice Breaker is a 4-6 minute talk that people give at the beginning of their Toastmasters journey.

    Promoting the meeting

    At our club in EPFL Lausanne, meetings are promoted on a mailing list. We didn't have that in Kosovo but we were very lucky to be visited by Sara Koci from the morning TV show. Albiona and I were interviewed on the rooftop of the ICK on the day of the meeting.

    The first meeting

    That night, we had approximately 60 people attend the meeting.

    Albiona acted as the meeting presider and trophy master and I was the Toastmaster. At the last minute we found volunteers for all the other roles and I gave them each an information sheet and a quick briefing before opening the meeting.

    One of the speakers, Dion Deva, has agreed to share the video of his talk publicly:

    The winners were Dhurata for best prepared speech, Arti for best impromptu speech and Ardora for best evaluation:

    After party

    Afterwards, some of us continued around the corner for pizzas and drinks and discussion about the next meeting.

    Future events in Kosovo and Albania

    Software Freedom Kosovo will be back from 4-7 April 2019 and I would encourage people to visit.

    OSCAL in Tirana, Albania is back on 18-19 May 2019 and they are still looking for extra speakers and workshops.

    Many budget airlines now service Prishtina from all around Europe - Prishtina airport connections, Tirana airport connections.

    15 February, 2019 11:08AM

    hackergotchi for Univention Corporate Server

    Univention Corporate Server

    How-To: Single Sign-On for Nextcloud

    Log in once and automatically gain access to all programs and services – Single Sign-On (SSO) is a proven tool against the ever-increasing password fatigue among users. This is why many companies and educational institutions make it possible for users to log on centrally and only once.
    It is also easy to set up Single Sign-On with UCS (see links at the end of this article). In this article I would like to show you how to link Nextcloud to UCS’s SSO mechanism.

    However, before you start configuring Nextcloud and UCS, you should double-check that Single Sign-On with SAML works in your UCS environment. Just open the following URL in your browser:

    https://<Hostname of Domaincontroller Master>/univention/saml

    Nextcloud-App: SSO & SAML authentication

    As with UCS, Nextcloud has a wealth of apps that provide additional services and features. The app SSO & SAML authentication integrates Nextcloud into an existing SSO solution. It comes preinstalled with the Nextcloud version from the Univention App Center and all you have to do is activate it.
    Simply log into your Nextcloud installation as an administrator and go to the Apps section in the menu in the upper right corner. On the left, go to Integration and enable the app by clicking on the button of the same name.

    Then switch to Nextcloud settings via the menu in the upper right corner. Scroll to the new item SSO & SAML authentication on the left. Left click on the Use integrated SAML authentication button. A configuration screen pops up in which you make the following settings:

    • Select the „Allow login only if there is an account on another Backend“ checkbox.
    • Next, decide whether you want to allow Nextcloud login only via SSO or also via LDAP. We recommend activating the option „Allow the use of multiple user-backends“, as you will not lose administrative access to your Nextcloud installation in case of problems with SSO.
    • Enter uid in the „Attribute to map the UID to“ field.
    • You can leave the field „Optional display name of the identity provider“ empty to accept the default setting „SSO & SAML Login“. You can also enter your own identifier, such as Single Sign-On.
    • In the box „Identifier of the IdP entity“, enter the address https://master.ucs.demo/simplesamlphp/saml2/idp/metadata.php and replace master.ucs.demo with the hostname under which your IdP can be reached. Tip: You can find out the name of the host by using the command ucr, which you enter in a terminal window on the Domaincontroller Master:
      $ ucr get saml/idp/entityID
      https://master.ucs.demo/simplesamlphp/saml2/idp/metadata.php
    • Below, in the field „URL target of the IdP where the SP will send the Authentication Request Message“, enter the address https://master.ucs.demo/simplesamlphp/saml2/idp/SSOService.php; replace master.ucs.demo with the correct hostname.

    Click Show optional Identity Provider settings to expand two more fields.

    • The first box (URL Location of the IdP where the SP will send the SLO Request) should contain https://master.ucs.demo/simplesamlphp/saml2/idp/initSLO.php?RelayState=/simplesamlphp/logout.php (with the appropriate hostname instead of master.ucs.demo).
    • In the box „Public X.509 Certificate of the IdP“, enter the certificate of the UCS IdP.

    You can find the certificate if you look at the metadata of the IdP in your browser. The URL can be found with the ucr get saml/idp/entityID command, which you enter into a terminal window. If you open the address in your browser, you will see an XML file. The „ds:X509 Certificate“ entry contains the certificate that you copy & paste into the Nextcloud configuration.
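    If you prefer the command line, a rough sketch for pulling the certificate out of the metadata on the Domaincontroller Master is something like the following (assuming curl is installed, the certificate element sits on one line, and adding -k if the UCS CA is not trusted locally):

    curl -s "$(ucr get saml/idp/entityID)" | grep -o '<ds:X509Certificate>[^<]*' | sed 's/<ds:X509Certificate>//'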

    This is what the settings of the Nextcloud app look like on our test computer:

    Configuring Service Providers in UCS

    On the UCS side, you now need to create a service provider (SP) for Nextcloud. Log in as Administrator and open the Univention Management Console. Switch to the „Domain“ category (highlighted in blue) and open the „SAML identity provider“ module there. Click on Add to create a new service provider entry containing the following settings:
    • Enter https://master.ucs.demo/nextcloud/apps/user_saml/saml/metadata in the box „Service provider identifier“ and replace master.ucs.demo with the correct hostname of your Nextcloud server.
    • Below, enter the following for the box “Respond to this service provider URL after you have logged in“: https://master.ucs.demo/nextcloud/apps/user_saml/saml/acs
    • The box „Format of NameID attribute“ must contain the following entry: urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified
    • Finally, type uid in the box „Name of the attribute that is used as NameID“. This is what the settings look like on our test computer:

    Click on Extended Settings on the left and select the „Allow transmission of ldap attributes to the service provider“ option on the right. Finally, click Create to save the settings.

    Activating Single Sign-On for Users

    For UCS users to be able to use the SSO for Nextcloud, you must activate the service provider at their user objects. To do this, switch to the module „Users“ (yellow) in the Univention Management Console. Now select one or more accounts you wish to edit and click Edit. On the left switch to the tab called „Account“ and scroll down to „SAML Settings“. Click Add to unlock service providers for the users. Select the Nextcloud Service Provider you just created, then click Add, select the box Overwrite and Save your changes at the top of the page.


    1 Password for All Services and Networks with Single Sign-on
    Learn how Single Sign-On (SSO) helps to work more efficiently and safely. For users, SSO means a one-time login and the subsequent use of various programs without having to log in individually each time.


     

    The post How-To: Single Sign-On for Nextcloud first appeared on Univention.

    15 February, 2019 10:44AM by Valentin Heidelberger

    hackergotchi for Ubuntu developers

    Ubuntu developers

    The Fridge: Ubuntu 18.04.2 LTS released

    The Ubuntu team is pleased to announce the release of Ubuntu 18.04.2 LTS (Long-Term Support) for its Desktop, Server, and Cloud products, as well as other flavours of Ubuntu with long-term support.

    Like previous LTS series, 18.04.2 includes hardware enablement stacks for use on newer hardware. This support is offered on all architectures and is installed by default when using one of the desktop images.

    Ubuntu Server defaults to installing the GA kernel; however you may select the HWE kernel from the installer bootloader.
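    On an already installed server, the HWE stack can also be added afterwards; on 18.04 the metapackage is typically installed with:

    sudo apt install --install-recommends linux-generic-hwe-18.04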

    This update also adds Raspberry Pi 3 as a supported image target for Ubuntu Server, alongside the existing Raspberry Pi 2 image.

    As usual, this point release includes many updates, and updated installation media has been provided so that fewer updates will need to be downloaded after installation. These include security updates and corrections for other high-impact bugs, with a focus on maintaining stability and compatibility with Ubuntu 18.04 LTS.

    Kubuntu 18.04.2 LTS, Ubuntu Budgie 18.04.2 LTS, Ubuntu MATE 18.04.2 LTS, Lubuntu 18.04.2 LTS, Ubuntu Kylin 18.04.2 LTS, and Xubuntu 18.04.2 LTS are also now available. More details can be found in their individual release notes:

    https://wiki.ubuntu.com/BionicBeaver/ReleaseNotes#Official_flavours

    Maintenance updates will be provided for 5 years for Ubuntu Desktop, Ubuntu Server, Ubuntu Cloud, and Ubuntu Base. All the remaining flavours will be supported for 3 years.

    To get Ubuntu 18.04.2

    In order to download Ubuntu 18.04.2, visit:

    http://www.ubuntu.com/download

    Users of Ubuntu 16.04 will be offered an automatic upgrade to 18.04.2 via Update Manager. For further information about upgrading, see:

    https://help.ubuntu.com/community/BionicUpgrades
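    On servers or over SSH, the same upgrade is normally driven from a terminal instead (see the page above for details and caveats):

    sudo do-release-upgrade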

    As always, upgrades to the latest version of Ubuntu are entirely free of charge.

    We recommend that all users read the 18.04.2 release notes, which document caveats and workarounds for known issues, as well as more in-depth notes on the release itself. They are available at:

    https://wiki.ubuntu.com/BionicBeaver/ReleaseNotes

    If you have a question, or if you think you may have found a bug but aren’t sure, you can try asking in any of the following places:

    #ubuntu on irc.freenode.net
    http://lists.ubuntu.com/mailman/listinfo/ubuntu-users
    http://www.ubuntuforums.org
    http://askubuntu.com

    Help Shape Ubuntu

    If you would like to help shape Ubuntu, take a look at the list of ways you can participate at:

    http://www.ubuntu.com/community/get-involved

    About Ubuntu

    Ubuntu is a full-featured Linux distribution for desktops, laptops, clouds and servers, with a fast and easy installation and regular releases. A tightly-integrated selection of excellent applications is included, and an incredible variety of add-on software is just a few clicks away.

    Professional services including support are available from Canonical and hundreds of other companies around the world. For more information about support, visit:

    http://www.ubuntu.com/support

    More Information

    You can learn more about Ubuntu and about this release on our website listed below:

    http://www.ubuntu.com/

    To sign up for future Ubuntu announcements, please subscribe to Ubuntu’s very low volume announcement list at:

    http://lists.ubuntu.com/mailman/listinfo/ubuntu-announce

    Originally posted to the ubuntu-announce mailing list on Fri Feb 15 02:52:36 UTC 2019 by Adam Conrad, on behalf of the Ubuntu Release Team

    15 February, 2019 07:25AM

    February 14, 2019

    Podcast Ubuntu Portugal: S01E23 – 2/3 do cluster de tiagos

    In this episode we invited Tiago Carreira – giving us 2/3 of the cluster of Tiagos present at FOSDEM 2019 – to talk about his experience at FOSDEM, but above all to tell us how Config Management Camp in Ghent went. You know the drill: listen, subscribe and share!

    • https://seclists.org/oss-sec/2019/q1/119
    • https://brauner.github.io/2019/02/12/privileged-containers.html
    • https://bugs.launchpad.net/ubuntu/xenial/+source/pciutils/+bug/1815212
    • https://cfgmgmtcamp.eu/
    • https://sintra2019.ubucon.org/
    • https://www.openstack.org/coa

    Sponsorship

    This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios – sound capture, production, editing, mixing and mastering); contact: thunderclawstudiosPT–arroba–gmail.com.

    Attribution and licences

    The cover image is by prilfish and is licensed under CC BY 2.0.

    The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License, whose full text can be read here.

    This episode is licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing to allow other kinds of use; contact us for validation and authorisation.

    14 February, 2019 11:32PM

    Cumulus Linux

    Kernel of Truth season 2 episode 1: EVPN on the host

    Subscribe to Kernel of Truth on iTunes, Google Play, Spotify, Cast Box and Stitcher!

    Click here for our previous episode.

    Guess who’s back? Back again? The real Kernel of Truth podcast is back with season 2 and we’re starting off this season with all things EVPN! This topic is near and dear to Attilla de Groot’s heart, having talked about it in his recent blog here. He now joins Atul Patel and our host Brian O’Sullivan to talk more about EVPN on the host for multi-tenancy.

    Join us as we discuss the problem that we’re solving for, how to deploy EVPN on the host, what the caveats are when deploying, and more.

    Guest Bios

    Brian O’Sullivan: Brian currently heads Product Management for Cumulus Linux. For 15 or so years he’s held software Product Management positions at Juniper Networks as well as other smaller companies. Once he saw the change that was happening in the networking space, he decided to join Cumulus Networks to be a part of the open networking innovation. When not working, Brian is a voracious reader and has held a variety of jobs, including bartending in three countries and working as an extra in a German soap opera. You can find him on Twitter at @bosullivan00.

    Attilla de Groot: Attilla has spent the last 15 years at the cutting edge of networking, having spent time with KPN, Amsterdam Internet Exchange, and HP, with exposure to technology from Cisco, HP, Juniper, and Huawei. He now works for Cumulus Networks, the creators of open networking, where he is able to continue his interest in open architecture design and automation. You can find him on Twitter at @packet_ninja.

    Atul Patel: Atul has vast experience in various networking technologies working at networking vendors including Cisco, Procket, Motorola and 3Com. He now works at Cumulus Networks where he enjoys working with Linux open networking, frr, virtualization and orchestration technologies.

    The post Kernel of Truth season 2 episode 1: EVPN on the host appeared first on Cumulus Networks engineering blog.

    14 February, 2019 09:06PM by Katie Weaver

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Jonathan Riddell: G+ Takeout

    Google+ does rather kill off the notion I had of Google as a highly efficient company that always produces top quality work.  Even using the takeout website to download the content from Google+ I found a number of obvious bugs and poor features.  But I did get my photos in the end, so for old times’ sake here’s a random selection.

    A marketing campaign that failed to take off

    Sprints in Munich thanks to the city council’s KDE deployment were always fun.

    Launching KDE neon with some pics of my office and local castle.

    One day I took a trip with Nim to Wales and woke up somewhere suspiciously like the Shire from Lord of the Rings.

    KDE neon means business

    Time to go surfing. This ended up as a music video.

    That’s about it.  Cheereo Google+, I’ve removed you from www.kde.org, one social media platform too many for this small world.


    14 February, 2019 04:50PM

    hackergotchi for Tails

    Tails

    Tails report for January, 2019

    Releases

    The following changes were introduced in Tails 3.12:

    • New installation methods
      • For macOS, the new method is much simpler as it uses a graphical tool (Etcher) instead of the command line.
      • For Windows, the new method is much faster as it doesn't require 2 USB sticks and an intermediary Tails anymore. The resulting USB stick also works better on newer computers with UEFI.
      • For Debian and Ubuntu, the new method uses a native application (GNOME Disks) and you don't have to install Tails Installer anymore.
      • For other Linux distributions, the new method is faster as it doesn't require 2 USB sticks and an intermediary Tails anymore.
    • Starting Tails should be a bit faster on most machines. (#15915)
    • Tell users to use sudo when they try to use su on the command line.
    • Fix the black screen when starting Tails with some Intel graphics cards. (#16224)

    Code

    • A bunch of Foundations Team members had a sprint focused on porting Tails to Debian 10 (Buster). For details, see the full report.

    Documentation and website

    User experience

    Hot topics on our help desk

    The month started with these questions:

    1. Black screen after the boot menu with Intel GPU (i915)
    2. Partially applied incremental upgrades cause all kinds of trouble

    But after the release of Tails 3.12, the hottest topics were:

    1. Regression on some Intel GPU (Braswell, Kaby Lake)
    2. Electrum Phishing Attack - Upstream Fix Committed

    Infrastructure

    • Our infrastructure was targeted by a distributed denial-of-service (DDoS) attack that caused a couple of temporary outages. We're discussing ways to protect ourselves in the future.

    • We kept polishing the automated test suite for Additional Software and hope it will be merged in time for the next Tails release.

    • We kept investigating options to make our CI faster, shorten the development feedback loop, and thus make our developers' work more efficient and pleasurable. We will soon be able to benchmark our currently preferred option.

    • We dealt with the fallout of the big infrastructure changes done in December. A few issues remain but things are starting to run more smoothly again :)

    Funding

    • We closed our end-of-year donation campaign. We don't have the final numbers yet.

    • We submitted 2 applications to the NLnet NGI Zero PET project.

    Outreach

    Press and testimonials

    Translations

    All the website

    • de: 46% (2832) strings translated, 7% strings fuzzy, 41% words translated
    • es: 51% (3137) strings translated, 4% strings fuzzy, 42% words translated
    • fa: 32% (1982) strings translated, 10% strings fuzzy, 33% words translated
    • fr: 88% (5418) strings translated, 1% strings fuzzy, 86% words translated
    • it: 32% (1969) strings translated, 5% strings fuzzy, 28% words translated
    • pt: 25% (1553) strings translated, 7% strings fuzzy, 21% words translated

    Total original words: 65286

    Core pages of the website

    • de: 71% (1252) strings translated, 12% strings fuzzy, 74% words translated
    • es: 80% (1417) strings translated, 9% strings fuzzy, 82% words translated
    • fa: 34% (615) strings translated, 13% strings fuzzy, 33% words translated
    • fr: 96% (1705) strings translated, 1% strings fuzzy, 96% words translated
    • it: 63% (1112) strings translated, 17% strings fuzzy, 65% words translated
    • pt: 45% (798) strings translated, 13% strings fuzzy, 48% words translated

    Metrics

    • Tails has been started more than 749 304 times this month. This makes 24 171 boots a day on average.
    • 7 403 downloads of the OpenPGP signature of Tails ISO from our website.
    • 88 bug reports were received through WhisperBack.

    How do we know this?

    14 February, 2019 08:34AM

    hackergotchi for LiMux

    LiMux

    Last free places for the event „Für München!“

    „Digitalisation and the further development of our city – those are exactly my topics. I'd like to have a say in that!“ Anyone who thinks this way should save the date for the event „Für München!“ on 26 February from 7 p.m. Because there … Read more

    The post Last free places for the event „Für München!“ first appeared on the Münchner IT-Blog.

    14 February, 2019 07:52AM by Stefan Döring

    February 13, 2019

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Jono Bacon: Forbes Piece: Six Hallmarks of Successful Crowdfunding Campaigns

    I wanted to drop a quick note to you all that I have written a new Forbes article called Six Hallmarks of Successful Crowdfunding Campaigns.

    From the piece:

    While the newness of crowdfunding may have worn off, this popular way to raise funds has continued to spark interest, especially to entrepreneurs and startups.  For some, it is a panacea; a way to raise funds quickly and easily, with an extra dose of marketing and awareness thrown in. Sadly, the reality of what is needed to make a crowdfunding campaign a success is often missing in all the excitement.

    I have some experience with crowdfunding in a few different campaigns. Back in 2013 I helped shape one of the largest crowdfunding campaigns at the time, the Ubuntu Edge, which had a $32 million goal and ended up raising $12,814,216. While it didn’t hit the mark, the campaign set records for the funds raised. My second campaign was for the Global Learning XPRIZE, which had a $500,000 goal and we raised $942,223. Finally, I helped advise ZBiotics with their $25,000 campaign, and they raised $52,732.

    Today I want to share some lessons learned along the way with each of these campaigns. Here are six considerations you should weave into your crowdfunding strategy…

    In it I cover these six key principles:

    1. Your campaign is a cycle: plan it out
    2. Your pitch needs to be short, sharp, and clear on the value
    3. Focus on perks people want (and try to limit shipping)
    4. Testimonials and validation build confidence
    5. Content is king (and marketing is queen)
    6. Incentivize your audience to help

    You can read the piece by clicking here.

    You may also want to see some of my other articles that relate to the different elements of doing crowdfunding well:

    Good luck with your crowdfunding initiatives and let me know how you get on!

    The post Forbes Piece: Six Hallmarks of Successful Crowdfunding Campaigns appeared first on Jono Bacon.

    13 February, 2019 11:17PM

    Dimitri John Ledkov: Encrypt all the things

    xkcd #538: Security
    Went into blogger settings and enabled TLS on my custom domain blogger blog. So it is now finally at https://blog.surgut.co.uk. However, I do use feedburner and syndicate that to the planet. I am not sure if that is end-to-end TLS connections, thus I will look into removing feedburner between my blog and the ubuntu/debian planets. My experience with changing feeds in the planets is that I end up spamming everyone. I wonder if I should make a new tag and add that one, and add both feeds to the planet config to avoid spamming old posts.

    Next up went into gandi LiveDNS platform and enabled DNSSEC on my domain. It propagated quite quickly, but I believe my domain is now correctly signed with DNSSEC stuff. Next up I guess, is to fix DNSSEC with captive portals. I guess what we really want to have on "wifi" like devices, is to first connect to wifi and not set it as default route. Perform captive portal check, potentially with a reduced DNS server capabilities (ie. no EDNS, no DNSSEC, etc) and only route traffic to the captive portal to authenticate. Once past the captive portal, test and upgrade connectivity to have DNSSEC on. In the cloud, and on the wired connections, I'd expect that DNSSEC should just work, and if it does we should be enforcing DNSSEC validation by default.

    So I'll start enforcing DNSSEC on my laptop I think, and will start reporting issues to all of the UK banks if they dare not to have DNSSEC. If I managed to do it, on my own domain, so should they!
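    For reference, with systemd-resolved that enforcement boils down to a one-line config change (a sketch; the default mode is allow-downgrade):

    # /etc/systemd/resolved.conf
    [Resolve]
    DNSSEC=yes

    sudo systemctl restart systemd-resolved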

    Now I need to publish CAA Records to indicate that my sites are supposed to be protected by Let's Encrypt certificates only, to prevent anybody else issuing certificates for my sites and clients trusting them.
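    A CAA record set restricting issuance to Let's Encrypt looks roughly like this (shown for a placeholder domain, not an actual zone file):

    example.org.  IN  CAA  0 issue "letsencrypt.org"
    example.org.  IN  CAA  0 issuewild "letsencrypt.org"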

    I think I want to publish SSHFP records for the servers I care about, such that I could potentially use those to trust the fingerprints. Also at the FOSDEM getdns talk it was mentioned that openssh might not be verifying these by default and/or need additional settings pointing at the anchor. Will need to dig into that, to see if I need to modify something about this. It did sound odd.
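    The records themselves are easy to generate from the host keys, and the client side then needs VerifyHostKeyDNS turned on (a sketch; the hostname is a placeholder):

    # on the server: print SSHFP resource records ready for the zone file
    ssh-keygen -r myserver.example.org
    # on the client, in ~/.ssh/config:
    #     VerifyHostKeyDNS yes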

    Generated 4k RSA subkeys for my main key. Previously I was using 2k RSA keys, but since I got a new yubikey that supports 4k keys I am upgrading to that. I use yubikey's OpenGPG for my signing, encryption, and authentication subkeys - meaning for ssh too. For that I had to remember how to use `gpg --with-keygrip -k` to add the right "keygrip" to the `~/.gnupg/sshcontrol` file to get the new subkey available in the ssh agent. Also it seems like the order of keygrips in the sshcontrol file matters. Updating the new ssh key in all the places is not fun; I think I did github, salsa and launchpad so far, but still need to push the keys onto many of the installed systems.
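    For the record, the keygrip dance looks roughly like this (the keygrip value shown is a placeholder):

    gpg --with-keygrip -k
    # copy the Keygrip of the authentication subkey, then:
    echo "0123456789ABCDEF0123456789ABCDEF01234567" >> ~/.gnupg/sshcontrol
    gpg-connect-agent updatestartuptty /bye
    ssh-add -L    # the new subkey should now show up in the agent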

    Tried to use FIDO2 passwordless login for Windows 10, only to find out that my Dell XPS appears to be incompatible with it as it seems that my laptop does not have TPM. Oh well, I guess I need to upgrade my laptop to have a TPM2 chip such that I can have self-unlocking encrypted drives, and like OTP token displayed on boot and the like as was presented at this FOSDEM talk.

    Now that cryptsetup 2.1.0 is out and is in Debian and Ubuntu, I guess it's time to reinstall and re-encrypt my laptop, to migrate from LUKS1 to LUKS2. It has a bigger header, so obviously so much better!

    Changing phone soon, so will need to regenerate all of the OTP tokens. *sigh* Does anyone backup all the QR codes for them, to quickly re-enroll all the things?

    BTW I gave a talk about systemd-resolved at FOSDEM. People didn't like that we do not enable/enforce DNS over TLS, or DNS over HTTPS, or DNSSEC by default. At least, people seemed happy about not leaking queries. But not happy again about caching.

    I feel safe.

    ps. funny how xkcd uses 2k RSA, not 4k.

    13 February, 2019 11:09PM by Dimitri John Ledkov (noreply@blogger.com)

    hackergotchi for Tails

    Tails

    Tails 3.12.1 is out

    This release is an emergency release to fix a critical security vulnerability in Firefox.

    It also fixes other security vulnerabilities. You should upgrade as soon as possible.

    Changes

    Known issues

    See the list of long-standing issues.

    Tails fails to start a second time on some computers (#16389)

    On some computers, after installing Tails to a USB stick, Tails starts a first time but fails to start a second time. In some cases, only BIOS (Legacy) was affected and the USB stick was not listed in the Boot Menu.

    We are still investigating the issue, so if it happens to you, please report your findings by email to tails-testers@boum.org. Mention the model of the computer and the USB stick. This mailing list is archived publicly.

    To fix this issue:

    1. Reinstall your USB stick using the same installation method.

    2. Start Tails for the first time and set up an administration password.

    3. Choose Applications ▸ System Tools ▸ Root Terminal to open a Root Terminal.

    4. Execute the following command:

      sgdisk --recompute-chs /dev/bilibop

    You can also test an experimental image:

    1. Download the .img file from our development server.

    2. Install it using the same installation methods.

      We don't provide any OpenPGP signature or other verification technique for this test image. Please only use it for testing.

    Get Tails 3.12.1

    What's coming up?

    Tails 3.13 is scheduled for March 19.

    Have a look at our roadmap to see where we are heading to.

    We need your help and there are many ways to contribute to Tails (donating is only one of them). Come talk to us!

    13 February, 2019 12:34PM

    February 12, 2019

    Cumulus Linux

    Cumulus content roundup: January

    The new year is now in full swing and we’re excited about all the great content we’ve shared with you so far! In case you missed some of it, here’s our Cumulus content roundup – January edition. As always, we kept busy last month with lots of great resources and news for you to read. One of the biggest things we announced was our new partnership with Nutanix but wait, there’s so much more! We’ve rounded up the rest right here, so settle in and stay a while!

    From Cumulus Networks:

    Cumulus + Nutanix = Building and Simplifying Open, Modern Data Centers at Scale: We are excited to announce that Cumulus and Nutanix are partnering to build and operate modern data centers with open networking software.

    Moving a Prototype Network to Production: By prototyping production networks, the network is elevated to a standard far superior to traditional approaches.

    Operations guide: We thought it would be great to document our process for creating a web scale networking operations guide so that you could then write your own.

    Containers are here to stay, who has the right skill set?: Who controls containers: developers, or operations teams? We discuss the answer to this very important question that IT in any organization needs to address.

    Kernel of Truth episode 10: 2019 predictions: Join us for an episode dedicated to trends and predictions for 2019 straight from the brains of some of Cumulus’ brightest, including JR Rivers, co-founder and CTO of Cumulus Networks.

    News from the web:

    Cumulus and Nutanix Team Up to Simplify Hyperconverged Infrastructure Networking: Data Center Knowledge talks about the partnership as the latest in a series of moves by HCI vendors to integrate networking into their solutions.

    Cumulus and Nutanix Integrate HCI, Open Networking: Via SDxCentral, “We live to compete with Arista and Cisco. That’s what we do everyday, and having a better partnership and a better story for Nutanix customers is just one more way to do that” – Josh Leslie, CEO of Cumulus Networks.

    The post Cumulus content roundup: January appeared first on Cumulus Networks engineering blog.

    12 February, 2019 07:35PM by Katie Weaver

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Jono Bacon: The Gin Map

    OK, folks, little personal post here. If you are not interested in gin, or are not old enough to drink it, you can safely ignore this.

    I am an enormous gin fan. I love it. Yes, I know it divides people and many of you can’t stand the stuff, but hey, our differences make the world go around, right?

    Well, I have set myself a fun little challenge. I want to try a gin from EVERY country in the world.

    Now, I know what you are thinking. Some countries probably don’t produce gin. Well, I am not sure how much I believe you: if there is water, juniper, and a bowl, someone somewhere is producing gin. I am going to find it.

    I will be tracking this on the Bacon Gin Map. I won’t add anything to the map unless I have a picture of the bottle, so many gins I have tried are not on there yet due to this picture requirement. I will also add short reviews or comments (again, some of the gins I have already tried that are there will get their reviews updated when I can try them again.)

    I have my bar at home, and if you want to contribute a new bottle to the collection, I will snap a photo of us with it and add it to the map with a credit to you. Yes, this is a shameful attempt to solicit bottles of gin from you all.

    Know of a gin from a country I have not covered yet? Great! Let me know in the comments below, or use the hashtag #baconginmap on Twitter.

    I will be updating the map regularly. So, for you gin aficionados, feel free to check out the map if you want to try a gin from somewhere a little different.

    See the Bacon Gin Map

    The post The Gin Map appeared first on Jono Bacon.

    12 February, 2019 04:43PM

    hackergotchi for LiMux

    LiMux

    Einblicke in den Stand der Digitalisierung und Smart City für Kommunen

    Wie sieht eine gelungenen Digitalisierung in Kommunen aus? Nicht nur in München diskutieren dazu Verwaltung, Politik, Startups und engagierte Bürgerinnen und Bürger. Wolfgang Glock, Leiter für die Themen E- und Open Government und Smart City … Weiterlesen

    Der Beitrag Einblicke in den Stand der Digitalisierung und Smart City für Kommunen erschien zuerst auf Münchner IT-Blog.

    12 February, 2019 12:11PM by Stefan Döring

    hackergotchi for Ubuntu developers

    Ubuntu developers

    The Fridge: Ubuntu Weekly Newsletter Issue 565

    Welcome to the Ubuntu Weekly Newsletter, Issue 565 for the week of February 3 – 9, 2019. The full version of this issue is available here.

    In this issue we cover:

    The Ubuntu Weekly Newsletter is brought to you by:

    • Krytarik Raido
    • Bashing-om
    • Chris Guiver
    • Wild Man
    • TheNerdyAnarchist
    • mIk3_08
    • And many others

    If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

    Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

    12 February, 2019 01:55AM

    February 11, 2019

    Robert Ancell: linux.conf.au 2019

    Along with a number of other Canonical staff I recently attended linux.conf.au 2019 in Christchurch, New Zealand. I consider this the major Australia/New Zealand yearly conference that covers general open source development. This year the theme of the conference was "Linux of Things" and many of the talks had an IoT connection.

    One of the premium swag items was a Raspberry Pi Zero. It is unfortunate that this is not a supported Ubuntu Core device (CPU a generation too old) as this would have been a great opportunity to show an Ubuntu Core device in action. I did prepare a lightning talk showing some Ubuntu Core development on a Raspberry Pi 3, but this sadly didn't make the cut. You can see it in blog form.

LCA consistently has high quality talks, so choosing what to attend is hard. Almost everything was recorded and is viewable on their YouTube channel. Here are some highlights of what I saw:

STM32 Development Boards (literally) Falling From The Sky (video) - This talk was about tracking and re-purposing hardware from weather balloons. I found it interesting as it made me think about the amount of e-waste that is likely to be generated as IoT grows, and ways in which it can be recycled, particularly with open source software.

Plastic is Forever: Designing Tomu’s Injection-Molded Case (video) and SymbiFlow - The next generation FOSS FPGA toolchain (video) - FPGA development is something that has really struggled to break into the mainstream. I think this is mostly down to two things - the lack of a quality open source toolchain and of cheap hardware. These talks make it seem like we’re getting really close with the SymbiFlow toolchain and hardware like the Fomu. I think we’ll get some really interesting new developments when we get something close to the Raspberry Pi/Arduino experience, and I’m looking forward to writing some code in the FPGA and IoT space, hopefully soon!

The Tragedy of systemd (video) - It’s the conflict that just keeps giving 😭 Benno talked about how, regardless of how systemd came to exist, modern middleware is valuable. I had thought the majority had come to this conclusion, but it seems this is still an idea that needs selling. I think the talk was effective in doing that.

Sequencing DNA with Linux Cores and Nanopores (video) - This was a live (!) demonstration of doing DNA sequencing on the speaker’s lunch. This was done using the MinION - a USB DNA sequencer. As well as completing the task, what impressed me was that this was done on a laptop and no special software was required. Given this device costs something around $1000 and is easy to use, this opens up DNA analysis to the open source world.

Around the world in 80 Microamps - ESP32 and LoRa for low-power IoT (video) - This discussed real world cases of building IoT / automation solutions using battery power (e.g. where solar is not suitable). It covered how it’s very hard to run a Linux based solution for a long time on a battery, but technology is slowly improving. Turns out the popularity of e-scooters is making bigger and cheaper batteries available.

Christchurch has recently started trialing Lime scooters. These were super popular with the hacker crowd and quickly accumulated around the venue. I planned to scooter from the airport to the venue but sadly that day there weren’t any nearby, so I walked halfway and scootered the rest. They’re super fun and useful, so I recommend you try them if you are visiting a city that has them. 🙂




    11 February, 2019 10:47PM by Robert Ancell (noreply@blogger.com)

    hackergotchi for Whonix

    Whonix

    Qubes-Whonix 14 (4.0.1-201901231238) TemplateVMs Point Release for Qubes R4

    @Patrick wrote:

The special instructions otherwise required to update securely because of the APT security issue [DSA 4371-1] are not needed, since this point release already contains the fixed APT version.

    (Same version as recommended in Qubes Security Bulletin #46.)


    This is a point release.

A point release is not a separate, new version of Whonix. Instead, it is a re-release of Whonix that includes all updates up to a certain point.

    Installing any version of Whonix 14 and fully updating it leads to a system which is identical to installing a Whonix point release.

    If the Whonix installation is updated, no further action is required.

Regardless of the currently installed version of Whonix, if users wish to install (or reinstall) Whonix for any reason, then the point release is a convenient and more secure method, since it bundles all Whonix updates that are available at that specific time.


    Installation Guide:

    Re-Installation Guide:

    Posts: 1

    Participants: 1

    Read full topic

    11 February, 2019 11:28AM by @Patrick

    Whonix VirtualBox 14.0.1.3.8 - Point Release

    @Patrick wrote:

    This is a point release.

    Debian APT remote code execution vulnerability DSA 4371-1 is fixed in this version. Therefore, special instructions for upgrading are not required. The usual standard (“everyday”) upgrading instructions should be applied.


    Download Whonix for VirtualBox:

    Posts: 3

    Participants: 3

    Read full topic

    11 February, 2019 10:28AM by @Patrick

    February 10, 2019

    hackergotchi for Pardus

    Pardus

Sistem Patent Anonim Şirketi has switched to the Pardus operating system across the company

Sistem Patent Anonim Şirketi has switched to the Pardus operating system across the company.

The company, which began operating in 1999 in the field of industrial property rights (trademark, patent, and design attorney services) and copyright registration, made the following statement:

“As Sistem Patent A.Ş., we operate a total of 14 offices: two abroad and 12 within Turkey. We have switched to Pardus 17.4 on a total of 50 computers. There were no problems during our migration, and Pardus meets all of our functional needs. In fact, thanks to the LibreOffice applications that ship with Pardus, some of our processes have become faster, and our need for second, third, and even fourth separate software packages has disappeared.

In financial terms, based on 2019 figures, we saved 95,000₺ in operating system license fees thanks to Pardus alone, and a further 207,000₺ in office software license fees thanks to LibreOffice. Free and open source software such as Inkscape, Scribus, and GIMP brought us savings of around 45,000₺ more. Our total savings for 2019 amount to 347,000₺, and we owe the Pardus team our thanks for lifting this burden from us. In addition, since we can confidently run Pardus on older, lower-end computers, we have also avoided the cost of buying new hardware.

Our first impression of Pardus is that it is stable, fast, easy to use, and user-friendly. Because it brings many open source alternative programs together under one roof, it solves many of our needs quickly and easily. One of the risks we feared most was not being able to install the software we had been using, or not finding alternatives to it. However, thanks to the improved infrastructure and the Pardus Store, we had no difficulty at all finding or installing programs.”

    10 February, 2019 09:05PM by Gökhan Gurbetoğlu

    hackergotchi for SparkyLinux

    SparkyLinux

    Sparky Play & Sparky Player

    There are two new, small tools available for Sparkers: Sparky Play MP3 and Sparky Player.

    What is Sparky Play MP3?
It is a simple, small, Yad-based tool which lets you search directories for mp3 audio files and play them.

The tool lets you:
– choose a directory to search using a keyword (singer, song name, etc. – whatever appears in the name of an mp3 file), or
– use * to find all mp3 files in the chosen directory
– display all the mp3 files that were found in a drop-down menu
– play your mp3 files
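
The search step is essentially a filename match. As a rough illustration of the idea (my own sketch, not the actual Sparky Play script), the same lookup could be done from a terminal with:

find "$HOME/Music" -type f -iname "*keyword*.mp3"

where keyword stands for whatever appears in the names of the files you are after.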

    Installation:
    sudo apt update
    sudo apt install sparky-play

    Search window
    Player window

    The Sparky Play MP3 project page: github.com/sparkylinux/sparky-play
The tool was created by Elton, partly based on work by Raimundo Portela, with my small improvements.

     

The second tool is Sparky Player.
What is Sparky Player?
It is a simple and very small tool which lets you play any audio or video file:
– it is available from a context menu only
– it doesn’t provide a standard desktop menu entry
– it doesn’t provide any buttons or other graphical features
– it calls ‘ffplay’ to play all audio and video files
– it supports every audio and video format that ‘ffplay’ supports via ‘libavcodec’

    Installation:
    sudo apt update
    sudo apt install sparky-player

    Context menu
    Player window

Just select your audio or video file and choose ‘Sparky Player’ to play it, or set ‘Sparky Player’ as the default player for a chosen type of multimedia file.

It also lets you control a playing audio or video file in the player window via your keyboard, for example:
    – pause = space bar
    – quit = q or Esc
    – full screen = f
    – right arrow = forward
    – left arrow = back

    The Sparky Player project page: github.com/sparkylinux/sparky-player

    Please test both tools and report whatever you find.

    10 February, 2019 09:03PM by pavroo

    hackergotchi for Ubuntu developers

    Ubuntu developers

Riccardo Padovani: Glasnost: yet another Gitlab client.

I love Gitlab. I have written about it, I contribute (sporadically) some code, and I am a big fan of their CI/CD system (ask my colleagues!). Still, they need to improve on the mobile side.

I travel often, and being able to work on issues and pipelines on the go is essential for me. Unfortunately, Gitlab’s UX on small screens is far from ideal (though it has improved over the years).

    Enter Glasnost

    My good friend Giovanni has developed a new opensource mobile client for Gitlab, with a lot of cool features: Glasnost!

    glasnost logo

    In his words:

    Glasnost is a free, currently maintained, platform independent and opensource mobile application that is aiming to visualize and edit the most important entities handled by Gitlab.

Among the other features, I’d like to highlight support for multiple Gitlab hosts (so you can work both on your company’s Gitlab and on Gitlab.com at the same time), two different themes (a light one and a dark one), a lite version for when your data connection is stuck on EDGE, and support for fingerprint authentication.

The application is still in an early phase of development, but it already has enough features to be used daily. I am sure Giovanni would love some feedback and suggestions, so please go to the Glasnost issue tracker or leave feedback on the Play Store.

    If you feel a bit more adventurous, you can contribute to the application itself: it is written in React+Redux with Expo: the code is hosted on Gitlab (of course).

Enjoy yet another client for Gitlab, and let Giovanni know what you think!

    playstore logo

    For any comment, feedback, critic, write me on Twitter (@rpadovani93) or drop an email at riccardo@rpadovani.com.

    Regards,
    R.

    10 February, 2019 06:45PM

    February 09, 2019

    Sebastian Dröge: MPSC Channel API for painless usage of threads with GTK in Rust

    A very common question that comes up on IRC or elsewhere by people trying to use the gtk-rs GTK bindings in Rust is how to modify UI state, or more specifically GTK widgets, from another thread.

    Due to GTK only allowing access to its UI state from the main thread and Rust actually enforcing this, unlike other languages, this is less trivial than one might expect. To make this as painless as possible, while also encouraging a more robust threading architecture based on message-passing instead of shared state, I’ve added some new API to the glib-rs bindings: An MPSC (multi-producer/single-consumer) channel very similar to (and based on) the one in the standard library but integrated with the GLib/GTK main loop.

    While I’ll mostly write about this in the context of GTK here, this can also be useful in other cases when working with a GLib main loop/context from Rust to have a more structured means of communication between different threads than shared mutable state.

This will be part of the next release and you can find some example code making use of this at the very end. But first I’ll take this opportunity to explain why this is not so trivial in Rust, and also to walk through another solution.

    Table of Contents

    1. The Problem
    2. One Solution: Safely working around the type system
    3. A better solution: Message passing via channels

    The Problem

    Let’s consider the example of an application that has to perform a complicated operation and would like to do this from another thread (as it should to not block the UI!) and in the end report back the result to the user. For demonstration purposes let’s take a thread that simply sleeps for a while and then wants to update a label in the UI with a new value.

    Naively we might start with code like the following

    let label = gtk::Label::new("not finished");
    [...]
    // Clone the label so we can also have it available in our thread.
    // Note that this behaves like an Rc and only increases the
    // reference count.
    let label_clone = label.clone();
    thread::spawn(move || {
        // Let's sleep for 10s
        thread::sleep(time::Duration::from_secs(10));
    
        label_clone.set_text("finished");
    });

    This does not compile and the compiler tells us (between a wall of text containing all the details) that the label simply can’t be sent safely between threads. Which is absolutely correct.

    error[E0277]: `std::ptr::NonNull<gobject_sys::GObject>` cannot be sent between threads safely
      --> src/main.rs:28:5
       |
    28 |     thread::spawn(move || {
       |     ^^^^^^^^^^^^^ `std::ptr::NonNull<gobject_sys::GObject>` cannot be sent between threads safely
       |
       = help: within `[closure@src/bin/basic.rs:28:19: 31:6 label_clone:gtk::Label]`, the trait `std::marker::Send` is not implemented for `std::ptr::NonNull<gobject_sys::GObject>`
       = note: required because it appears within the type `glib::shared::Shared<gobject_sys::GObject, glib::object::MemoryManager>`
       = note: required because it appears within the type `glib::object::ObjectRef`
       = note: required because it appears within the type `gtk::Label`
       = note: required because it appears within the type `[closure@src/bin/basic.rs:28:19: 31:6 label_clone:gtk::Label]`
       = note: required by `std::thread::spawn`

In, e.g., C this would not be a problem at all: the compiler does not know about GTK widgets, nor that generally all GTK API is only safely usable from the main thread, and would happily compile the above. It would then be our (the programmer’s) job to ensure that nothing is ever done with the widget from the other thread, other than passing it around. Among other things, the widget must also not be destroyed from that other thread (i.e. that thread must never hold the last reference to it and then drop it).

    One Solution: Safely working around the type system

    So why don’t we do the same as we would do in C and simply pass around raw pointers to the label and do all the memory management ourselves? Well, that would defeat one of the purposes of using Rust and would require quite some unsafe code.

We can do better than that and work around Rust’s type system with regards to thread-safety, and instead let the relevant checks (are we only ever using the label from the main thread?) be done at runtime. This allows for completely safe code; it might just panic at any time if we accidentally try to do something from the wrong thread (like calling a function on it, or dropping it) rather than just passing the label around.

    The fragile crate provides a type called Fragile for exactly this purpose. It’s a wrapper type like Box, RefCell, Rc, etc. but it allows for any contained type to be safely sent between threads and on access does runtime checks if this is done correctly. In our example this would look like this

    let label = gtk::Label::new("not finished");
    [...]
    // We wrap the label clone in the Fragile type here
    // and move that into the new thread instead.
    let label_clone = fragile::Fragile::new(label.clone());
    thread::spawn(move || {
        // Let's sleep for 10s
        thread::sleep(time::Duration::from_secs(10));
    
        // To access the contained value, get() has
        // to be called and this is where the runtime
        // checks are happening
        label_clone.get().set_text("finished");
    });

    Not many changes to the code and it compiles… but at runtime we of course get a panic because we’re accessing the label from the wrong thread

    thread '<unnamed>' panicked at 'trying to access wrapped value in fragile container from incorrect thread.', ~/.cargo/registry/src/github.com-1ecc6299db9ec823/fragile-0.3.0/src/fragile.rs:57:13

What we instead need to do here is to somehow defer the change of the label to the main thread, and GLib provides various API for doing exactly that. We’ll make use of glib::MainContext::invoke() here, but which of the available functions you use is mostly a matter of taste (and trait bounds: invoke() takes a FnOnce closure, while some of the alternatives can be called multiple times and therefore only take a FnMut closure).

    let label = gtk::Label::new("not finished");
    [...]
    // We wrap the label clone in the Fragile type here
    // and move that into the new thread instead.
    let label_clone = fragile::Fragile::new(label.clone());
    thread::spawn(move || {
        // Let's sleep for 10s
        thread::sleep(time::Duration::from_secs(10));
    
        // Defer the label update to the main thread.
        // For this we get the default main context,
        // the one used by GTK on the main thread,
        // and use invoke() on it. The closure also
        // takes ownership of the label_clone and drops
        // it at the end. From the correct thread!
        glib::MainContext::default().invoke(move || {
            label_clone.get().set_text("finished");
        });
    });

    So far so good, this compiles and actually works too. But it feels kind of fragile, and that’s not only because of the name of the crate we use here. The label passed around in different threads is like a landmine only waiting to explode when we use it in the wrong way.

It’s also not very nice because now we conceptually share mutable state between different threads, which is the underlying cause of many thread-safety issues and generally increases the complexity of the software considerably.

    Let’s try to do better, Rust is all about fearless concurrency after all.

    A better solution: Message passing via channels

As the title of this post probably made clear, the better solution is to use channels to do message passing. That’s also a pattern that is generally preferred in many other languages that focus a lot on concurrency, ranging from Erlang to Go, and is also the recommended way of doing this according to the Rust Book.

    So how would this look like? We first of all would have to create a Channel for communicating with our main thread.

As the main thread is running a GLib main loop with its corresponding main context (the loop is the thing that actually is… a loop, and the context is what keeps track of all potential event sources the loop has to handle), we can’t make use of the standard library’s MPSC channel. The Receiver blocks, or we would have to poll at intervals, which is rather inefficient.

    The futures MPSC channel doesn’t have this problem but requires a futures executor to run on the thread where we want to handle the messages. While the GLib main context also implements a futures executor and we could actually use it, this would pull in the futures crate and all its dependencies and might seem like too much if we only ever use it for message passing anyway. Otherwise, if you use futures also for other parts of your code, go ahead and use the futures MPSC channel instead. It basically works the same as what follows.

    For creating a GLib main context channel, there are two functions available: glib::MainContext::channel() and glib::MainContext::sync_channel(). The latter takes a bound for the channel, after which sending to the Sender part will block until there is space in the channel again. Both are returning a tuple containing the Sender and Receiver for this channel, and especially the Sender is working exactly like the one from the standard library. It can be cloned, sent to different threads (as long as the message type of the channel can be) and provides basically the same API.

The Receiver works a bit differently, and closer to the for_each() combinator on the futures Receiver. It provides an attach() function that attaches it to a specific main context, and takes a closure that is called from that main context whenever an item is available.

The other part that we need to define on our side is what the messages we send through the channel should look like. Usually some kind of enum with all the different kinds of messages you want to handle is a good choice; in our case it could also simply be () as we only have a single kind of message with no payload. But to make it more interesting, let’s add the new string of the label as payload to our messages.

This is how it could look, for example:

    enum Message {
        UpdateLabel(String),
    }
    [...]
    let label = gtk::Label::new("not finished");
    [...]
    // Create a new sender/receiver pair with default priority
    let (sender, receiver) = glib::MainContext::channel(glib::PRIORITY_DEFAULT);
    
    // Spawn the thread and move the sender in there
    thread::spawn(move || {
        thread::sleep(time::Duration::from_secs(10));
    
        // Sending fails if the receiver is closed
        let _ = sender.send(Message::UpdateLabel(String::from("finished")));
    });
    
    // Attach the receiver to the default main context (None)
    // and on every message update the label accordingly.
    let label_clone = label.clone();
    receiver.attach(None, move |msg| {
        match msg {
            Message::UpdateLabel(text) => label_clone.set_text(text.as_str()),
        }
    
        // Returning false here would close the receiver
        // and have senders fail
        glib::Continue(true)
    });

While this is a bit more code than the previous solution, it will also be easier to maintain and generally allows for clearer code.

    We keep all our GTK widgets inside the main thread now, threads only get access to a sender over which they can send messages to the main thread and the main thread handles these messages in whatever way it wants. There is no shared mutable state between the different threads here anymore, apart from the channel itself.

    09 February, 2019 01:25PM

    hackergotchi for Septor

    Septor

Update: Septor 2019.1

In accordance with the planned activities, the ISO image has been updated to version 2019.1, and the changes in the new release are as follows:

   - The Tor Browser is now installed in full (8.0.5)
   - The system has been updated against the Debian testing repositories as of February 8
   - Thunderbird, LibreOffice, Tor, Privoxy, VLC and others received new versions
   - The Gufw firewall is enabled by default
   - The default Gtk theme is now Waugtk
   - The weather forecast widget has been replaced






                                              Septor-2019.1-amd64.iso   (signature)  (md5sum)



    09 February, 2019 12:52PM by DebianSrbija (noreply@blogger.com)

Septor 2019 is available

A new release of the Septor Linux 2019 distribution, the second so far, is available. As is well known, the main characteristic of this operating system is that it routes traffic over the Tor network, which is currently regarded as one of the most secure ways of communicating on the internet. For those less familiar with it, this is a system for routing traffic through numerous intermediaries that are chosen and rotated randomly within a designed time interval. When you connect to the Tor network, numerous network nodes from all parts of the planet await you, serving to receive, relay, and deliver user data.

                   
Debian (Buster), still in its development version, was used as the base of the operating system, with the KDE graphical environment. The distribution can be used in live mode from a USB medium or installed in the classic way onto a hard disk. In line with the project goals, the preinstalled software consists of applications we considered appropriate for a distribution of this type. A couple of applications have been replaced, so the complete list of programs now looks like this:

• Dolphin and KFind for file management and search
• Synaptic and Gdebi for software management and installation
• Graphics / Multimedia: GIMP, Gwenview, VLC, K3b, Guvcview
• Office: LibreOffice, Kontact, KOrganizer, Okular, KWrite, Kate
• Internet: Tor Browser, Thunderbird, Ricochet IM, HexChat, QuiteRSS, OnionShare
• Tools: Gufw, Konsole, Ark, Sweeper, KGpg, Kleopatra, MAT, KWallet, VeraCrypt, Rosa Image Writer, Bootiso





For root access to files, an icon is available in the menu. On first launch, the Ricochet IM client will ask whether the user wants to connect via the Tor network. HexChat and QuiteRSS are configured to use the Tor network by default, and users can change this in the settings of those applications, in the Network section.

If you are new to Linux, the installation process is simple and takes about 10 minutes, and here you can read how to prepare the installation media. The graphical interface of the installation procedure is set to English by default, which can be changed right at the first step.



                                            Septor-2019.1-amd64.iso   (signature)  (md5sum)

                                             

    09 February, 2019 09:25AM by DebianSrbija (noreply@blogger.com)

    hackergotchi for Serbian GNU/Linux

    Serbian GNU/Linux

Serbian 2019 Openbox is available



A new release of the Serbian distribution is available for download, intended for computers with weaker specifications, as well as for all fans of operating systems with low resource consumption. It uses the Openbox window manager, and you can learn more about its configuration here. Debian, still in its development version and codenamed Buster, was used as the base of this release. The ISO image download is about 1.4 GB in size, while the installed system will take up around 5 GB of hard disk space. Additional screenshots can be viewed here.


The applications that ship with Serbian 2019 Openbox are on the lightweight side, and the selection looks like this:

There are also tools for archiving, clipboard management, system cleaning, partition and ISO image management, appearance settings, and so on. This page lists some tips you may need regarding the Openbox window manager and the accompanying programs. Among the more important points, note that the keyboard layout is switched with the Ctrl+Shift shortcut, and the configured layouts are: la (Latin), ћи (Cyrillic), en (English). If you have installed the system on a laptop, enable the visibility of the icon in the system tray via the Power Management item in order to monitor the battery status.

                   


If you are new to Linux, the installation process is simple and takes about 10 minutes, and here you can read how to prepare the installation media. The graphical interface of the installation procedure is set to the Cyrillic option by default, while keyboard input will be in Latin script. If you have not yet seen what the installation looks like, you can view it in pictures, and video material is also available. If anyone from another language area needs it, the file eng-menu.xml can be found at the path .config/openbox/ in order to use the menu in English.


    09 February, 2019 08:56AM by DebianSrbija (noreply@blogger.com)

Serbian 2019 KDE is available



A new version of the Serbian GNU/Linux 2019 operating system, with the KDE graphical environment, is available for download. The visual design of the new release is dedicated to the artwork of Paja Jovanović. Serbian comes configured for the Cyrillic option, and Latin script can also be selected through the system settings, as well as the Ijekavian variant for both. Debian (Buster), still in its development version, was used as the base of the distribution, with the KDE graphical environment.

Serbian 2019, like the previous five releases, is intended for all users who want an operating system in the Serbian language. It is also intended as a possible choice for current users of proprietary operating systems, as well as for users who cannot configure everything themselves and who have so far used Linux distributions regarded as more user-friendly. Additional screenshots can be viewed here.


In addition to the usual programs that come with the KDE graphical environment, the new release includes a collection of programs that will let users carry out their tasks well. All preinstalled programs are translated into Serbian. Kernel 4.19 is used, and compared to the previous version, support for external devices has been improved and a couple of applications have been replaced, so the current selection looks like this:

If you are new to Linux, the installation process is simple and takes less than 10 minutes, and here you can read how to prepare the installation media. The graphical interface of the installation procedure is set to the Cyrillic option by default, while keyboard input will be in Latin script. If you have not yet seen what the installation looks like, you can view it in pictures, and video material is also available. After installation, Serbian will take up slightly more than 5 GB, so when partitioning it is advisable to allocate 12 to 15 GB for comfortable operation.

                     


When the freshly installed system boots, read the included text document, which contains a few tips. Most importantly, note that the keyboard layout is switched with the Ctrl+Shift shortcut, and the configured layouts are: la (Latin), ћи (Cyrillic), en (English). Desktop effects are enabled by default and are triggered by pressing the F10, F11, and F12 keys. The following packages can be found and installed from our software repository: teamviewer, viber, veracrypt, deadbeef, dropbox, yandex-disk, master-pdf, etc.

Finally, thanks to all readers of these lines, to the users who have or will have Serbian on their computer, and to all the media and individuals who have contributed to popularizing the operating system in the Serbian language. If anyone is interested in helping with promotion, banners are also available for that purpose.




    09 February, 2019 08:54AM by DebianSrbija (noreply@blogger.com)

    February 08, 2019

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Full Circle Magazine: Full Circle Weekly News #120


    System76 unveils ‘Darter Pro’ Linux laptop with choice of Ubuntu or Pop!_OS
    Source: https://betanews.com/2019/01/29/system76-darter-pro-linux-laptop/

    Ubuntu 18.04.2 LTS to Arrive on February 7 with New Components from Ubuntu 18.10
    Source: https://news.softpedia.com/news/ubuntu-18-04-2-lts-to-arrive-on-february-7-with-updated-components-524785.shtml

    Canonical Releases Snapcraft 3.1 Snap Creator Tool with Various Improvements
    Source: https://news.softpedia.com/news/canonical-releases-snapcraft-3-1-snap-creator-tool-with-various-improvements-524761.shtml

    LXQt 0.14 Desktop Adds Split View in File Manager, LXQt 1.0 Still in Development
    Source: https://news.softpedia.com/news/lxqt-0-14-desktop-adds-split-view-in-file-manager-lxqt-1-0-still-in-development-524700.shtml

    Japan Will Hack Its Citizens’ IoT Devices To ‘Make Them Secure’
    Source: https://fossbytes.com/japanese-will-hack-its-citizens-iot-devices-secure/

    Canonical Outs Major Linux Kernel Update for Ubuntu 18.04 LTS to Patch 11 Flaws
    Source: https://news.softpedia.com/news/canonical-outs-major-linux-kernel-update-for-ubuntu-18-04-lts-to-patch-11-flaws-524740.shtml

    Tails 3.12 with new installation method
    Source: https://www.pro-linux.de/news/1/26725/tails-312-mit-neuer-installationsmethode.html

    08 February, 2019 07:02PM

    February 07, 2019

    Cumulus Linux

    How to make CI/CD with containers viable in production

Continuous Integration and Continuous Delivery (CI/CD) and containers are both at the heart of modern software development. CI/CD developers regularly break applications up into microservices, each running in its own container. Individual microservices can be updated independently of one another, and CI/CD developers aim to make those updates frequently.

    This approach to application development has serious implications for networking.

    There are a lot of things to consider when talking about the networking implications of CI/CD, containers, microservices and other modern approaches to application development. For starters, containers offer more density than virtual machines (VMs); you can stuff more containers into a given server than is possible with VMs.

    Meanwhile, containers have networking requirements just like VMs do, meaning more workloads per server. This means more networking resources are required per server. More MAC addresses, IPs, DNS entries, load balancers, monitoring, intrusion detection, and so forth. Network plumbing hasn’t changed, so more workloads means more plumbing to instantiate and keep track of.

Containers can live inside a VM or on a physical server. This means that they may have different types of networking requirements than traditional VMs and other workloads (only talking to other containers within the same VM, for example), while at the same time still having the same basic networking requirements that VMs do.

    Containers themselves don’t live migrate between servers; but the VMs they live on might, and that can present problems, such as tracking MAC addresses and IPs for multiple containers inside a single VM as that VM moves between physical hosts. Containers can be also destroyed and recreated by the thousands, posing new challenges.

    In some cases, it’s important to associate a given IP address or MAC address with a specific data set, even though the container (and thus the applications operating on that data) are destroyed and recreated elsewhere. Containers are also far more likely to be built from configuration files using an Infrastructure as Code (IaC) approach than are their VM predecessors.

    Infrastructure as Code

    Not getting things wrong means automation and orchestration. Humans are prone to error, and that’s before factoring in the dramatic increase in both the number of workloads and the frequency of change once an organization’s adopted modern development approaches. A CI/CD developer regularly updating their application, which involves destroying and recreating multiple containers, can dramatically increase the frequency of change for network administrators.

    Today, automation and orchestration of IT infrastructure increasingly falls into the category of IaC. Kubernetes, Terraform and many other IaC applications read a configuration file, typically a YAML file. This YAML file can contain all the details about a workload, and elements of the underlying infrastructure, from the configuration of the individual application, all the way down to the physical network.

    This assumes, of course, that all the various infrastructure elements support automation. You can’t, for example, register a workload’s new address with the firewall if the IaC application in use can’t talk to the firewall.

Dynamic behavior like this, though, inevitably leads to complexity. As soon as it’s possible to automate and orchestrate the entire lifecycle of workloads, we stop caring about things like “where are those workloads being placed?” Instead of placing all workloads that need to share secure backend communications on the same host, we might allow those workloads to be spread across multiple hosts or even multiple clusters.

    Increasingly, organizations rely on workload schedulers to determine workload placement, perhaps restraining that placement in some way – such as grouping workloads that form a single service – and perhaps not. It’s increasingly common, for example, to run some of a service’s workloads on multiple public clouds, as well as some of those workloads in on-premises data centers.

    Ensuring secure communication between these workloads requires complex networking. These workloads may be united through VPNs, layer 2 tunnels, gateways, proxies, and more. There are a seemingly limitless number of options today. No organization can afford to pay network administrators to set up and tear down these connections every time a microservice is updated and reinstantiated, or when a workload is added or moved.

    Software-defined infrastructure, which by definition includes networking, is no longer a nice to have. It’s an absolute must for those organizations that wish to be able to effectively provide infrastructure for applications using modern development approaches, such as CI/CD, containers, and microservices. As the bit that connects all the other bits, the place to start on this journey is the physical network.

    The post How to make CI/CD with containers viable in production appeared first on Cumulus Networks engineering blog.

    07 February, 2019 07:40PM by Trevor Pott

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Jonathan Riddell: KDE at FOSDEM 2019

    February means FOSDEM, the largest gathering of free software developers in the continent. I drove for two days down the winding roads and even onto a train and out again to take the bits needed to run the stall there. Fortunately my canoeing friend Poppy was there for car karaoke and top Plasma dev David got picked up along the way to give us emotional support watching Black Mirror Bandersnatch with its multiple endings.

The beer flowed freely at Delerium but, disaster(!), the venue for Saturday did not exist! So I did some hasty scouting to find a new one before returning for more beer.

Rather than place us next to Gnome, the organisers put us next to our bestie friends Nextcloud, which was nice, and after some setup the people came and kept on coming. Saturday was non-stop on the stall, but fortunately we had a good number of volunteers to talk to our fans and future fans.

Come Home to KDE in 2019 was the theme. You’ve been distro hopping. Maybe you bought a MacBook because you got bored of the faff with Linux. But now it’s time to re-evaluate. KDE Plasma is lightweight, full-featured, simple and beautiful. Our applications are world class. Our integration with mobile via KDE Connect is unique and life changing.

I didn’t go to many talks because I was mostly stuck on the stall, but an interesting new spelling library, nuspell, looks like something we should add to our frameworks, and Tor is helping people evade governments and aiding the selling of the odd recreational drug too.

    20190203_090217

    At 08:30 not many helpers or punters about but the canoeists got the show going.

    20190202_102814
    In full flow on the Saturday Wolthera does a live drawing show of Krita while Boud is on hand for queries and selfies.

    20190202_212641
    The Saturday meal after a quick change of venue was a success where we were joined by our friends Nextcloud and the Lawyers of Freedom.

    20190203_214942
    Staying until the following day turns out to allow a good Sunday evening to actually chat and discuss the merits of KDE, the universe and everything.  With waffles.


    07 February, 2019 03:37PM

    Podcast Ubuntu Portugal: S01E22 – Geeks aos molhos

In this episode we talk about FOSDEM and our experience there this year. If you went, share your experience; if you didn’t go, careful: you’ll end up wanting to go next year. You know the drill: listen, subscribe, and share!

    • https://fosdem.org/2019/
    • https://fosdem.org/2019/schedule/event/keynotes_welcome/
    • https://fosdem.org/2019/schedule/event/keynote_fifty_years_unix/
    • https://fosdem.org/2019/schedule/event/matrix_french_state/
    • https://fosdem.org/2019/schedule/event/dns_over_http/
    • https://fosdem.org/2019/schedule/event/dns_privacy_panel/
    • https://fosdem.org/2019/schedule/event/containers_lxd_update/
    • https://fosdem.org/2019/schedule/event/crostini/
    • https://fosdem.org/2019/schedule/event/full_software_freedom/
    • https://fosdem.org/2019/schedule/event/behind_snapcraft/
    • https://fosdem.org/2019/schedule/event/nextcloud/
    • https://volunteers.fosdem.org/

Sponsorships

This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios – sound recording, production, editing, mixing, and mastering); contact: thunderclawstudiosPT–at–gmail.com.

Attribution and licenses

The cover image is: Georgia Aquarium, licensed under CC BY 2.5.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License, whose full text can be read here.

This episode is licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorization.

    07 February, 2019 02:41PM

    hackergotchi for Univention Corporate Server

    Univention Corporate Server

    Linux and Windows Backups: Bareos in the Univention App Center

    In the App Center, our partner Univention provides a growing number of applications from different manufacturers. All programs can be installed and set up with just a few clicks. They’ve also integrated our Open Source backup solution: Bareos is licensed under AGPLv3 and specializes in heterogeneous IT landscapes. So, if you’re running UCS, the App Center provides you with a professional backup solution for your Windows and Linux machines in your UCS domain.

    Bareos: Client & Server Backup Solution

    Bareos (Backup Archiving Recovery Open Sourced) is a cross-network Open Source backup solution that preserves, archives and recovers data from all major operating systems (Linux, Windows, macOS, FreeBSD, AIX, HP-UX, and Solaris). Bareos works with different storage media, including hard disks and tape drives. It also connects to various cloud services. It’s possible to define schedules for full, incremental, and differential backups. A multilingual web interface (the Bareos Web UI) allows you to access and monitor all Bareos components, as well as restore your data.

    Bareos consists of several different parts:

    • Bareos Director
    • Bareos File Daemon(s)
    • Bareos Storage Daemon(s)

    The Bareos Director (the server) is the control unit. It manages the database (Bareos catalog), the connected clients, the file sets (description of the data that Bareos should back up), the backup schedule, and the backup jobs. The Bareos File Daemon is installed on every client computer. It executes the director’s instructions and sends the data to a Bareos Storage Daemon that saves the data together with its attributes on the defined backup medium. In the simplest case, all Bareos components run on a single computer. (For more information on Bareos, please have a look at the manual.)


Don’t let malware take you hostage!

    In this article, Maik Außendorf explains how to back up data from Linux and Windows machines centrally with Bareos and restore them in case of disaster. 


    Installing Bareos via the Univention App Center

    Select Bareos in the App Center and click on Install. After a short time UCS will notify you that the Bareos director is ready for backup jobs. Before you activate the backup application on the clients, you should adjust a few UCR variables. The Univention Configuration Registry (UCR) is the central configuration tool that saves you the hassle of having to use a text editor for modifying the configuration files.

    Click on the link Univention Configuration Registry Module and enter bareos in the search field to show only the UCR variables of the backup program. You should modify the following variables:

    • bareos/filestorage: Location for the backup(s) (the default is /var/lib/bareos/storage on the server); please specify your backup media and make sure that there is enough space.
    • bareos/backup_myself: This variable determines whether Bareos should back up the server itself. The default value is no; change it to yes to include the UCS server in your backup concept.
    • bareos/webui/console/user1/username: This is the user name for the Bareos Web UI (default: Administrator)
    • bareos/webui/console/user1/password: The password for the Administrator account of the Bareos Web UI (please note that this is not the UCS password!)

    There is no need to change the default values of the variables bareos/max_*. They define the maximum size (in GByte) and the number of full, incremental, and differential backups. Every time you select the button Save in the Edit UCR Variable dialog box, the changes are written to the corresponding configuration file (in the /etc/bareos directory).
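
If you prefer a terminal over the UMC module, the same UCR variables can also be set with the ucr command-line tool on the UCS server; a small sketch (the values are examples only, adjust them to your environment):

ucr set bareos/backup_myself=yes bareos/filestorage=/var/lib/bareos/storage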

    Configuring the Clients

    Setting up the Bareos clients on the computers in the UCS domain is also easy. The current version (Bareos 17.2.6, UCS 4.2 and 4.3) only supports Windows and Linux. To add a new client to your backup plan, go to the Univention Management Console (UMC), select Devices and then Computers. Click on a machine’s name in the list to adjust its settings. The menu on the left side should show the entry Bareos Backup in the General section. Next, select the checkbox enable backup job on the right.

    After you’ve clicked Save, new configuration files for the Bareos Director and for the Bareos File Daemon are automatically created. All that’s left to do: Install the Bareos File Daemon on the Windows or Linux computer and transfer the configuration file from the UCS computer to the client. The configuration file is stored in the /etc/bareos/autogenerated/clients directory on the UCS server. The file name contains the computer name used in the UCS Management Console.
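
As a rough sketch of that transfer for a Linux client (the host name and the destination directory below are placeholders; the how-to linked below documents the authoritative paths for your Bareos version):

scp /etc/bareos/autogenerated/clients/myclient.conf root@myclient:/etc/bareos/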

    If you need more information on the Bareos File Daemon and how to set up client packages on Windows and Linux machines, please have a look at the how-to on our community website.

    Bareos Web UI

    The Bareos installation via the Univention App Center automatically sets up the web interface that lets you monitor the backup software as well as restore your data. (Please note, that the current version of Bareos Web UI does not allow to define new backup jobs or schedules.) You can get to the web interface via the UCS portal. To log in, enter the user name and password that you stored in the UCR variables.

    Subscription and Support

    The Bareos company also offers subscription and support, and to purchase one of our backup plans, simply click on Buy in the Univention App Center. You can find more information about the Bareos services on our website.

    Der Beitrag Linux and Windows Backups: Bareos in the Univention App Center erschien zuerst auf Univention.

    07 February, 2019 09:03AM by Klara Pechtel

    BunsenLabs Linux

    Moonlight artwork seems to be the theme for Buster

    The latest upgrade included one for desktop-base, so some new artwork. I'm very 'meh' about it...

    https://wiki.debian.org/DebianArt/Themes/MoonLight

    07 February, 2019 12:00AM

    February 06, 2019

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Brian Murray: What’s the OOPS ID?

The other day gnumeric crashed on me and, like a good Ubuntu user, I submitted the crash report to the Ubuntu Error Tracker. Naturally, I also wanted to see the crash report in the Error Tracker and find out if other people had experienced the crash. It used to be an ordeal to find the OOPS ID associated with a specific crash: you’d have to read multiple lines of the systemd journal using ‘journalctl -u whoopsie.service’ and find the right OOPS ID for the crash you are interested in.

    $ journalctl -u whoopsie.service
    -- Logs begin at Fri 2019-02-01 09:36:47 PST, end at Wed 2019-02-06 08:41:02 PST. --
    Feb 02 07:08:46 impulse whoopsie[4358]: [07:08:46] Parsing /var/crash/_usr_bin_gnumeric.1000.crash.
    Feb 02 07:08:46 impulse whoopsie[4358]: [07:08:46] Uploading /var/crash/_usr_bin_gnumeric.1000.crash.
    Feb 02 07:08:48 impulse whoopsie[4358]: [07:08:48] Sent; server replied with: No error
    Feb 02 07:08:48 impulse whoopsie[4358]: [07:08:48] Response code: 200
    Feb 02 07:08:48 impulse whoopsie[4358]: [07:08:48] Reported OOPS ID 7120987c-26fc-11e9-9efd-fa163ee63de6
    Feb 02 07:11:11 impulse whoopsie[4358]: [07:11:11] Sent; server replied with: No error
    Feb 02 07:11:11 impulse whoopsie[4358]: [07:11:11] Response code: 200

    However, I recently made a change to whoopsie to write the OOPS ID to the corresponding .uploaded file in /var/crash. So now I can just read the .uploaded file to find the OOPS ID.


    $ sudo cat /var/crash/_usr_bin_gnumeric.1000.uploaded
    7120987c-26fc-11e9-9efd-fa163ee63de6
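
With the OOPS ID at hand, jumping to the report in the Error Tracker is a one-liner; assuming the usual errors.ubuntu.com URL scheme, something like this does the trick:

$ xdg-open "https://errors.ubuntu.com/oops/$(sudo cat /var/crash/_usr_bin_gnumeric.1000.uploaded)"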

    This is currently only available in Disco Dingo, which will become the 19.04 release of Ubuntu, but if you are interested in having it in another release let me know or update the bug.

    06 February, 2019 05:11PM

    hackergotchi for LiMux

    LiMux

munich4Europe: As Munich’s Young Ambassador for Europe

Franz-Josef Möller, a dual-study student in “BWL – Public Management” with the City of Munich, has been working in the IT department since September 2018 on a project to reorganize the IT. As a “Munich Young Ambassador”, he traveled with a delegation in November … Read more

The post munich4Europe: As Munich’s Young Ambassador for Europe appeared first on the Münchner IT-Blog.

    06 February, 2019 10:00AM by Stefan Döring

    BunsenLabs Linux

    Please update the bunsen-keyring package before June 2019

    This is old news since the package update has been out for a few weeks and most BL users should have installed the update by now, so this announcement just serves as a heads-up for people keeping their computer offline either permanently or most of the time:

    Please make sure to execute

    apt-get update && apt-get install bunsen-keyring

on your Helium and Hydrogen/Deuterium BL systems before June 2019, as our repository signing key was set to expire in June 2019. We extended the expiration period of our key for another couple of years, but all BL systems need the updated key in order to be able to verify packages from our repositories beyond June. Should you miss the deadline and your local key expires, you can just download the current bunsen-keyring package off our repository index page and install it manually.
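
If you want to double-check on such a machine whether the updated key is already in place, the expiry dates of the installed APT keys give it away; a quick sketch (the exact output format depends on your gnupg version):

apt-key list 2>/dev/null | grep -i expires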

    If you continuously update your system, you don't need to do anything.

    Thanks!

    06 February, 2019 12:00AM

    February 05, 2019

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Sergio Schvezov: Snapcraft 3.1

snapcraft 3.1 is now available on the stable channel of the Snap Store. This is a new minor release building on top of the foundations laid out by the snapcraft 3.0 release. If you are already on the stable channel for snapcraft, then all you need to do is wait for the snap to be refreshed. The full release notes are replicated below. Build Environments: it is now possible, when using the base keyword, to once again clean parts.
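
As a quick example of the part cleaning mentioned above (assuming a part named my-part in your snapcraft.yaml):

snapcraft clean my-part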

    05 February, 2019 05:48PM

    hackergotchi for Purism PureOS

    Purism PureOS

    How to Avoid the Frightful 5 Big Tech Corporations

    So you’re wondering…

    You’re starting to question the moral values of Big Tech. You and your friends probably have a growing feeling of creepiness about the tech giants who have — like a poorly-acted villain — told you one thing, and given you another.

Society – all of us – was told by these rising tech giants that “Everybody’s doing it, it’s easy: just do it,” and even though the masses – again, all of us – were skeptical, they also generally thought, “Okay, I may be the product… but I am in control.” Until, of course, you weren’t in control.

    Big Tech have two business models: one is to exploit your private life for profit, the other to lock you into their products and services. Some even have both. Consequently, nearly everyone wants to leave Facebook – it’s just that nobody wants to leave it for Facebook 2.0. And that highlights the larger, deeper, and more menacing issue in digital society: that your digital civil rights are under constant, relentless attacks from Big-Tech.

    Why is that?

That’s because Big Tech have a legal obligation to exploit you to maximize profit. They have been structured specifically to lock people up, to make it impossible to leave. Society’s technology genius is not lacking; its moral genius is.

    Big Tech is rotten to the core, they’re maximizing shareholder value without values. And that has to change.

    Is it changing?

    Counter to every attempt by Big Tech, you still do retain the control to switch away from platforms under their control. Leaving the harmful tech companies is a process of recognizing what is abusive in your relationship with them – and theirs with you; switching to something that aligns with your core values will bring you joy. If everyone used, bought, and shared ethical products, we would all be living in digital utopia.

    Avoiding Big Tech that harms you is much easier when you know what you actually want.

    What do you want?

    Like most people, you probably want convenient products that allow you to participate in digital society, products where you are also respected and protected by default. And you probably recognize that the current tech giants will not — and cannot — provide that.

    I created Purism to solve this giant problem. We are solving it. And with your support, we will advance the digital civil rights movement. One that changes technology for the better.

    What model can compete against Big Tech?

The steps it takes are pretty clear, and even though they are difficult, Purism is farther along than you may currently think. We have the momentum to realize our dreams; here is our model:

    Step #1 is simple: avoid Big Tech in products and services.

    Step #2 is to manufacture hardware without purposeful backdoors that exploit users, and therefore society.

Step #3 is to release all software source code under Free Software licenses, designed to protect and respect society.

    Step #4: Focus all development on values that better society, ensuring individual digital civil rights are fully respected.

    And finally step #5, to release services that do not exploit, do not lock-in, and do not control people.

    Doesn’t that sound like a really good business model in, and for, technology?
    “Maximize Society’s Values.” — Todd Weaver, Purism SPC
    sounds an awful lot better than the current Big Tech model,
    “Harm People for Profit.” — Big-Tech.

    This rotten core of locking people into products and booking them into an exploitative platform needs to become a thing of past regret.

    Social purpose companies mean advancing social good first, returning the power to the people.

    Decentralized protocols mean decentralized power, returning control to the people.

    Free Software means freedoms that benefit society, returning control to the people.

    Secure hardware means private data is kept private, returning control to the people.

    And all this means any future product or service that you use or join must be from an organization that does social good, uses decentralized protocols, that advances freedom in software and security in hardware. Can you imagine how awesome society will be if technology does all that?

    It’s within your power to help society – by avoiding Big Tech, by using products and services that respect your digital civil rights.

    Thank you.

    Learn more about Purism products that do all this.

    The post How to Avoid the Frightful 5 Big Tech Corporations appeared first on Purism.

    05 February, 2019 05:12PM by Todd Weaver

    hackergotchi for Whonix

    Whonix

    Whonix KVM 14.0.1.3.8 - Point Release - Tester Version

    @HulaHoop wrote:

    Tester version 14.0.1.3.8 is now available for download.

In reality, it is pretty much stable at this point, and the problems with 14.0.1.1.7 have been ironed out.

    This is a release candidate for a point release.

    Debian APT remote code execution vulnerability DSA 4371-1 is fixed in this version.

    An Apparmor bug that prevented Whonixcheck from starting was fixed.

    For a detailed list of changes since 14.0.1.1.7 see the commit log


    Download the Testers-Only version of Whonix for KVM:

    Posts: 1

    Participants: 1

    Read full topic

    05 February, 2019 02:22AM by @HulaHoop

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Robert Ancell: Easy IoT with Ubuntu Core and Raspberry Pi

    My current job involves me mostly working in the upper layers of the desktop software stack; however, I started out working in what was then called embedded engineering but now would probably be known as the Internet of Things (IoT). I worked on a number of projects which normally involved taking some industrial equipment (radio infrastructure, camera control systems) and adding a stripped-down Linux kernel and an application.

    While this was cutting edge at the time, there were a number of issues with this approach:
    • You essentially had to make your own mini-distribution to match the hardware you were using. There were some distributions available at the time, but they were often not lightweight enough or had a financial cost.
    • You had to build your own update system. That comes with a lot of security risks.
    • The hardware was often custom.
    The above issues meant a large overhead in building and maintaining the platform instead of spending that time and money on your application. If you wanted to make a hobby project, it was going to be expensive.

    But we live in exciting times! It's now possible to use cheap hardware and easily accessible software to make a robust IoT device. For around US$60 you can make a highly capable device using Ubuntu Core and a Raspberry Pi. I decided to make a device that showed a scrolling LED display, but there are many other sensors and output devices you could attach.

    The Raspberry Pi 3 A+ is a good choice to build with. It was just recently released and is the same as the B+ variant but on a smaller board. This means you save some money and space but only lose some connectors that you can probably live without in an IoT device.



    I added an SD card and, for protection, put it in a case. I chose a nice Ubuntu orange colour.


    Next step was to connect up a display (also in Ubuntu orange). Note this shouldn't have needed the wires - it should fit flat onto the case - but I spent so much time photographing the process that I accidentally soldered the connector on backwards. So don't make that mistake... 😕


    Final step was to connect a USB power supply (e.g. a phone charger). The hardware is complete, now for the software...

    Using Ubuntu Core 18 is as simple as downloading a file and copying it onto the SD card. Then I put the SD card into the Raspberry Pi, powered it on and all I had to do was:
    1. Select my home WiFi network.
    2. Enter my email address for my Ubuntu SSO account.
    3. Secure shell into the Raspberry Pi from my Ubuntu laptop.
    The last step is magically easy. If you connect a screen to the Pi it shows you the exact ssh command to type to log into it (i.e. you don't have to work out the IP address) and it uses the SSH key you have attached to your Ubuntu SSO account - no password necessary!

    $ ssh robert-ancell@192.168.1.210

    Now to write my application. I decided to write it in C so it would be fast and have very few dependencies. The easiest way to quickly develop was to cross-compile it on my Ubuntu laptop, then ssh the binary over to the Pi. This just required installing the appropriate compiler:

    $ sudo apt install gcc-arm-linux-gnueabihf
    $ arm-linux-gnueabihf-gcc test.c -o test
    $ scp test robert-ancell@192.168.1.210:
    $ ssh robert-ancell@192.168.1.210 ./test
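    The display-driving code itself isn't included in this post, but for a rough idea of what it involves, here is a minimal sketch of talking to a Scroll pHAT HD style display through the standard Linux /dev/i2c character device. The 0x74 address and the register values are assumptions based on the IS31FL3731 driver chip commonly used by that display, not the actual project source:

    /* Minimal sketch, not the actual project code: wake an IS31FL3731-based
     * display (e.g. Scroll pHAT HD) on /dev/i2c-1 using the plain Linux
     * I2C character-device API. Address and register values are assumptions. */
    #include <fcntl.h>
    #include <linux/i2c-dev.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/i2c-1", O_RDWR);
        if (fd < 0) {
            perror("open /dev/i2c-1");
            return 1;
        }
        if (ioctl(fd, I2C_SLAVE, 0x74) < 0) {   /* assumed display address */
            perror("ioctl I2C_SLAVE");
            return 1;
        }
        unsigned char select_function_bank[] = { 0xfd, 0x0b }; /* command register */
        unsigned char leave_shutdown[]       = { 0x0a, 0x01 }; /* shutdown register */
        if (write(fd, select_function_bank, 2) != 2 ||
            write(fd, leave_shutdown, 2) != 2) {
            perror("write");
            close(fd);
            return 1;
        }
        close(fd);
        return 0;
    }

    Cross-compiling this and copying it over with scp works exactly the same as for the test program above.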

    Once I was happy my application worked, the next step was to package it to run on Ubuntu Core. Core doesn't use .deb packages; instead, the whole system is built using Snaps.

    All that is required to generate a snap is to fill out the following metadata (running snapcraft init creates the template for you):

    name: little-orange-display
    base: core18
    version: git
    summary: Demonstration app using Ubuntu Core and a Raspberry Pi
    description: |
      This is a small app used to demonstrate using Ubuntu Core with a Raspberry Pi.
      It uses a Scroll pHAT HD display to show a message.

    architectures:
      - build-on: all
        run-on: armhf

    grade: stable
    confinement: strict

    apps:
      little-orange-display:
        daemon: simple
        command: display-daemon
        plugs:
          - i2c

    parts:
      little-orange-display:
        plugin: make
        source: .


    This describes the following:
    • Information for users to understand the app.
    • It is an armhf package that is stable and confined.
    • It should run as a daemon.
    • It needs special access to I2C devices (the display).
    • How to build it (use the Makefile I wrote; a minimal sketch of such a Makefile is shown below).
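    The Makefile itself isn't reproduced in this post, but as a sketch of what the snapcraft make plugin can drive - it typically runs make followed by make install with DESTDIR pointing at the part's install directory - something like the following would do. The file and target names here are assumptions, not the original project's:

    # Minimal sketch of a Makefile for the snapcraft 'make' plugin (assumed names).
    # Recipe lines must be indented with a real tab character.
    CC ?= gcc
    CFLAGS ?= -O2 -Wall

    display-daemon: main.c
    	$(CC) $(CFLAGS) -o $@ $<

    install: display-daemon
    	# installed at the snap root so "command: display-daemon" can find it
    	install -D -m 0755 display-daemon $(DESTDIR)/display-daemon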
    To test the package I built it on my laptop and installed the .snap file on the Raspberry Pi:

    $ snapcraft
    $ scp little-orange-display_0+git.aaa6688_armhf.snap robert-ancell@192.168.1.210:
    $ ssh robert-ancell@192.168.1.210
    $ snap install --dangerous little-orange-display_0+git.aaa6688_armhf.snap
    $ snap connect little-orange-display:i2c pi:i2c-1
    $ snap start little-orange-display

    And it ran!
     

    The last stage was to upload it to the Snap store. This required me to register the name (little-orange-display) and upload it:

    $ snapcraft register little-orange-display
    $ snapcraft push little-orange-display_0+git.aaa6688_armhf.snap

    And with that, little-orange-display is in the store. If I wanted to make more devices, I could do so by installing Ubuntu Core and entering the following on each device:

    $ snap install little-orange-display
    $ snap connect little-orange-display:i2c pi:i2c-1
    $ snap start little-orange-display


    And that's the end of my little project. I spent very little time installing Ubuntu Core and doing the packaging, and the majority of the time writing the app, so it solved the issues I would have traditionally encountered building a project like this.

    Using Ubuntu Core and Snaps, this project now has the following functionality available:
    • It automatically updates.
    • The application I wrote is confined, so any bugs I introduce are unlikely to break the OS or any other app that might be installed.
    • I can use Snap channels to test software easily. In their simplest usage I can have a device choose to be on the edge channel, which contains a snap built directly from the git repository. When I'm happy that's working I can move it to the beta channel for wider testing and finally to the stable channel for all devices (see the example commands after this list).
    • I get metrics on where my app is being used. Apparently it has one user in New Zealand currently (i.e. me). 🙂
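    For example, promoting a build through the channels and then pulling it onto a device can look like this (the revision number here is hypothetical):

    $ snapcraft release little-orange-display 1 beta
    $ snapcraft release little-orange-display 1 stable
    $ ssh robert-ancell@192.168.1.210 snap refresh little-orange-display --channel=stable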

    05 February, 2019 01:26AM by Robert Ancell (noreply@blogger.com)

    February 04, 2019

    hackergotchi for Ubuntu

    Ubuntu

    Ubuntu Weekly Newsletter Issue 564


    Welcome to the Ubuntu Weekly Newsletter, Issue 564 for the week of January 27 – February 2, 2019.
    The full version of this issue is available here.

    In this issue we cover:

    The Ubuntu Weekly Newsletter is brought to you by:

    • Krytarik Raido
    • Bashing-om
    • Chris Guiver
    • Wild Man
    • TheNerdyAnarchist
    • mIk3_08
    • And many others
    If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!


    Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

    04 February, 2019 11:01PM by guiverc

    hackergotchi for Purism PureOS

    Purism PureOS

    Purism announces a partnership with GDquest to develop adaptive game tutorials

    We are happy to announce our forthcoming partnership with GDquest – one that we hope will make the world a happier, more fun place.

    Libre/indie game designers might like to know that Nathan Lovato – game design expert, founder of and instructor at GDquest – will be making a series of tutorials explaining how to make adaptive games with the high-quality libre game engine Godot; tutorials showing how games can be created for and released on the Librem 5 smartphone, and later submitted to the PureOS store.

    The first of the three video tutorials will focus on how to create a mobile game for GNU/Linux. It will also help you conceive and design a 2D mobile game and tackle design issues that are unique to mobile games – such as the small screen, touch controls, and performance and usability constraints. By loading Flossy Gnu in Godot, the tutorial demonstrates how these performance and usability issues can be addressed. Specific tips for GNU/Linux in general, and for the Librem 5 in particular, will of course also be noted and discussed.

     

     

    The second tutorial will deal with sideloading your newly created game onto your Librem 5, starting by demonstrating how to build “Flossy Gnu” on your Librem laptop – or on any other GNU/Linux laptop; how to copy and install it onto your Librem 5 smartphone, play it – and hopefully have plenty of fun with it. It'll also show how to install a new build when you update your game.

    The third and last (but not least) video tutorial will be all about publishing to the PureOS store. It’ll demonstrate how to publish source code and assets for a reproducible build, and how to submit the game for inclusion in the PureOS store after that.

    GDQuest is producing more tutorial videos as part of their ongoing crowdfunding campaign. There are only a few days left to back the project. Join us in supporting them!

    Get in touch with Nathan Lovato at GitHub, at GDquest, or at his pro SNS account

    Image credit: MooGNU Copyright 2012 /g/ CC-BY-3.0

    The post Purism announces a partnership with GDquest to develop adaptive game tutorials appeared first on Purism.

    04 February, 2019 07:14PM by Ines Mendes

    hackergotchi for Whonix

    Whonix

    Whonix VirtualBox 14.0.1.3.8 - Point Release - Testers Wanted!

    @Patrick wrote:

    Testers Wanted!

    This is a release candidate for a point release.

    Debian APT remote code execution vulnerability DSA 4371-1 is fixed in this version. Therefore, special instructions for upgrading are not required. The usual standard (“everyday”) upgrading instructions should be applied.


    Download the Testers-Only version of Whonix for VirtualBox:

    Posts: 4

    Participants: 2

    Read full topic

    04 February, 2019 09:02AM by @Patrick

    February 03, 2019

    hackergotchi for SparkyLinux

    SparkyLinux

    Sparky 5.7~dev20190203

    There are development live/install media of Sparky 5.7 20190203 of the rolling line available for testing.

    The new iso images feature an improved Advanced Installer, which fixes a bug around wrongly detected partitions.

    If you have more than 9 partitions and you pick, for example, sda1 as the first choice (the swap partition on BIOS machines; the EFI partition on machines with a UEFI motherboard), the installer removes from the next window every existing partition whose number starts with the one you already chose (sda1, but also sda10, sda11, sda12, etc.).

    So those partitions are not visible in the Installer window and cannot be used for installing Sparky.

    ‘sparky-backup-core’ 20190203 features a fix for that issue, so your existing iso image can still be used if you update the package in the live system.

    The second change is about the Linux kernel – the present iso images ship with the Debian Linux kernel as the default again.

    Please test the new images and report whatever you find.

    The issue was found by a user going by the nickname ‘The Operating System World’ – thanks a lot!

    New development iso images can be found at download/development page.

    03 February, 2019 11:00PM by pavroo

    hackergotchi for VyOS

    VyOS

    Contributors, contributions and badges

     

    Looks like there’s quite a bit of misunderstanding regarding free subscriptions for contributors, who is eligible, and how people can contribute to the VyOS Project.

    03 February, 2019 04:00PM by Daniil Baturin (daniil@sentrium.io)

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Costales: Podcast Ubuntu y otras hierbas S03E02: Software libre en los ciclos formativos... desde dentro

    In this new programme, the second of the third season of Ubuntu y Otras Hierbas, Francisco Molinero and Javier Teruelo, with the invaluable collaboration of Lidia Montero and José Manuel Blanco, dig into the complex relationship between vocational training programmes and free software, its potential career paths and futures (and crack open a surprise topic or two).
    Will this turn out to be the New Generation?

    The podcast is available to listen to at:

    03 February, 2019 01:15PM by Marcos Costales (noreply@blogger.com)

    February 02, 2019

    Ted Gould: Knot Boards

    Each year Cub Scouts has a birthday party for Scouting in February, which is called the Blue & Gold banquet. We have a tradition that at the banquet we thank all of our volunteers who help make the Cub Scout Pack run. For the Den Leaders, who are so critical to the program, I like to do something special that helps them to run a better program for the scouts. For 2018 (notice I'm a little behind) I decided to make all of the Den Leaders for our Pack knot boards.

    SVG file for the knotboard design

    When I was a Scout I remember my mom making knot boards. Back then we had a piece of paper with the various knots that was varnished onto a piece of plywood, which had a rope attached to it. High technology for the time, but today I'm a member of TheLab.ms makerspace and have access to a laser cutter. While these knot boards are the same in spirit, we can do some very cool things with big toys.

    Laser actively cutting boards with a cool sparkle

    First step is to pull out Inkscape and design the graphics. I grabbed a rope border from Open Clipart and some knot graphics from a Scouting PDF (which I can't find a link to). I put those together to create the basic design along with labels for the knots. I also added a place for each Scout to sign their name as a thank you to the Den Leader. I then made some small circles for the laser cutter to cut out holes for the ropes. I made a long oblong region on the right so the board would have a handle and a post to tie the hitches around. Then lastly I added the outline to cut out the board.

    To get the design into the laser cutter I exported it from Inkscape as two graphics. I exported the cut lines as a DXF and the etching as a 300 DPI PNG. The cut lines were simpler, and the laser cutter software was able to handle those and create simple controls for the cutter. The knots, on the other hand, were more complex vector objects and the laser cutter software couldn't handle them. Inkscape could, so I had it do the rendering to a bitmap. The laser cutter can then set up scans that use the bitmap data, which worked very well.

    For the boards I used ¼" lauan plywood, which I was able to get in 2'x2' sheets at Lowe's. Those sheets have a nice grain on both sides. I also liked being able to get sheets that were exactly the size I needed to fit into the laser cutter. Saved me a step. I'm certain the knot boards would be great in many other woods and other materials.


    Cut knot boards sitting the bed of the laser cutter

    After cutting out the knot boards I needed short lengths of rope to be able to insert into the holes. I couldn't find anywhere that would sell me short pieces of rope. I felt like I needed a Monty Python sketch. To make short lengths of the paracord I looped it in a circle with the circumference as the length I needed. Then I took a blowtorch and cut the circle. This also sealed the ends of the paracord.

    Final knot boards with ropes in the holes

    02 February, 2019 12:00AM

    February 01, 2019

    Jamie Strandboge: Monitoring your snaps for security updates

    Some time ago we started alerting publishers when their stage-packages received a security update since the last time they built a snap. We wanted to create the right balance for the alerts and so the service currently will only alert you when there are new security updates against your stage-packages. In this manner, you can choose not to rebuild your snap (eg, since it doesn’t use the affected functionality of the vulnerable package) and not be nagged every day that you are out of date.

    As nice as that is, sometimes you want to check these things yourself or perhaps hook the alerts into some form of automation or tool. While the review-tools had all of the pieces so you could do this, it wasn’t as straightforward as it could be. Now with the latest stable revision of the review-tools, this is easy:

    $ sudo snap install review-tools
    $ review-tools.check-notices \
      ~/snap/review-tools/common/review-tools_656.snap
    {'review-tools': {'656': {'libapt-inst2.0': ['3863-1'],
                              'libapt-pkg5.0': ['3863-1'],
                              'libssl1.0.0': ['3840-1'],
                              'openssl': ['3840-1'],
                              'python3-lxml': ['3841-1']}}}

    The review-tools is a strict-mode snap, and while it plugs the home interface, that is only for convenience; I typically disconnect the interface and put things in its SNAP_USER_COMMON directory, as I did above.

    Since it is now super easy to check a snap on disk, with a little scripting and a cron job you can generate a machine-readable report whenever you want. E.g., you can do something like the following:

    $ cat ~/bin/check-snaps
    #!/bin/sh
    set -e
    
    snaps="review-tools/stable rsync-jdstrand/edge"
    
    tmpdir=$(mktemp -d -p "$HOME/snap/review-tools/common")
    cleanup() {
        rm -fr "$tmpdir"
    }
    trap cleanup EXIT HUP INT QUIT TERM
    
    cd "$tmpdir" || exit 1
    for i in $snaps ; do
        snap=$(echo "$i" | cut -d '/' -f 1)
        channel=$(echo "$i" | cut -d '/' -f 2)
        snap download "$snap" "--$channel" >/dev/null
    done
    cd - >/dev/null || exit 1
    
    /snap/bin/review-tools.check-notices "$tmpdir"/*.snap

    or if you already have the snaps on disk somewhere, just do:

    $ /snap/bin/review-tools.check-notices /path/to/snaps/*.snap

    Now you can add the above to cron or some automation tool as a reminder of what needs updates. Enjoy!
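    For instance, a weekly crontab entry along these lines would mail the report every Monday morning; the schedule, script path, and address here are only placeholders, and it assumes a mail command (e.g. from bsd-mailx or mailutils) is installed:

    # m h dom mon dow  command (placeholder schedule and address)
    0 6 * * 1  $HOME/bin/check-snaps 2>&1 | mail -s "snap security notices" you@example.com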

    01 February, 2019 08:53PM

    Podcast Ubuntu Portugal: S01E21 – O famoso eixo Sintra Bruxelas

    In this episode, besides hardware news, the Ubuntu Core release, and changes in Lubuntu's governance, we talked about our plans for FOSDEM 2019. You know the drill: listen, subscribe and share!

    • https://lubuntu.me/introducing-the-lubuntu-council/
    • https://www.omgubuntu.co.uk/2019/01/entroware-hades
    • https://www.entroware.com/store/ares
    • https://twitter.com/barton808/status/1088132516098265088
    • https://puri.sm/posts/librem-laptops-now-at-version-4/
    • https://twitter.com/m_wimpress/status/1086901588411719680
    • https://getsol.us/2019/01/14/2019-to-venture-ahead/
    • https://www.phoronix.com/scan.php?page=news_item&px=Raspberry-Pi-New-CM-3
    • https://pixels.camp/
    • https://fosdem.org/2019/

    Sponsors

    This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios – sound capture, production, editing, mixing and mastering). Contact: thunderclawstudiosPT–arroba–gmail.com.

    Attribution and licences

    The cover image is by Tim Sneddon and is licensed as CC BY-SA 2.0.

    The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae and is licensed under the CC0 1.0 Universal License, the full text of which can be read here.

    This episode is licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

    01 February, 2019 07:25PM

    hackergotchi for SparkyLinux

    SparkyLinux

    January 2019 donation report

    Many thanks to all of you for supporting Sparky!
    Your donations help keeping Sparky alive.

    Don’t forget to send a small tip in February too 🙂

     

    Country   Supporter         Amount
    Poland    Krzysztof M.      PLN 50
    Germany   Alexander F.      € 10
    World     Reudi L.          € 10
    World     Adrian B.         $ 1
    World     Gernot P.         $ 10
    World     Merlyn M.         $ 5
    World     Damion H.         € 15
    World     Karl A.           € 3.33
    Poland    Witold M.         PLN 100
    Poland    Jacek G.          PLN 40
    Poland    Stanisław G.      PLN 20
    Poland    Władysław K.      PLN 20
    Poland    Andrzej P.        PLN 5
    World     Jorg S.           € 2,5
    World     Peter E.          € 10
    Poland    Paweł S.          PLN 100
    Germany   Petr U.           € 20
    Poland    Przemysław P.     PLN 70
    Total:                      PLN 405, € 65,83, $ 16

    01 February, 2019 06:46PM by pavroo