December 14, 2018


Ubuntu developers

Robert Ancell: Interesting things about the GIF image format

I recently took a deep dive into the GIF format. In the process I learnt a few things by reading the specification.

A GIF is made up of multiple images


I thought the GIF format would just contain a set of pixels. In fact, a GIF is made up of multiple images. A simple scene, say a sun above a house on a plain background, could actually be composited from several smaller images.


GIF has transparency, but that doesn't mean you have transparent GIFs


In the above example the sun and house images have the background in them. If the background was very detailed then this would be inefficient. So instead you can set a transparent colour index for each image. Pixels with this index don't replace the background pixels when the images are composited together.


That's the only transparency in the specification. The background colour is actually encoded in the file, so technically a GIF picture has every pixel set to a colour. However, at some point renderers decided they wanted transparency, so they ignored the background colour and treated it as transparent instead. It's not in the spec, but it's what everyone does. This is the reason GIF transparency looks bad - there's no alpha channel, just a hack abusing another feature.

You can have more than 256 colours


GIFs are well known for having a palette of only up to 256 colours. However, you can have a different palette for each image in the GIF. That means in the above example you could use a palette with lots of greens and blues for the background, lots of reds for the house and lots of yellows for the sun. The combined image could have up to 768 colours! With some clever encoding you can have a GIF file that uses up to 24 million colours.

Animation is just delaying the rendering 


GIFs are most commonly used for small animations. This wasn't in the original specification, but at some point someone realised that if you inserted a delay between each image you could make an animation! In the above example we could animate by adding more images of the sun, each rotated from the previous frame, with a delay before them.
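For the curious, both the transparent colour index and this delay live in the same place in the file: a Graphic Control Extension block that precedes each image. Here is a minimal sketch of one such block, with made-up example values, piped through xxd for inspection:

    # A GIF89a Graphic Control Extension, byte by byte (example values)
    printf '\x21\xf9\x04\x05\x0a\x00\x02\x00' | xxd
    # 21    extension introducer
    # f9    Graphic Control Extension label
    # 04    block size (four data bytes follow)
    # 05    packed fields: disposal method 1, transparent colour flag set
    # 0a 00 delay time, little-endian: 10 hundredths of a second
    # 02    transparent colour index
    # 00    block terminator

A delay of zero is perfectly legal in the file format itself, which is exactly what the next section is about.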


Why we can't have nice things


With all of the above, GIF is a simple but powerful format. You can make an animation that is made up of small updates, efficiently encoded.

Sadly, however, someone decided that all images inside a GIF file should be treated as animation frames, and that they should have a minimum delay time (with zero delays being rounded up to 20ms or so). So if you want your GIF to look as you intended, you're stuck with one image per frame and only 256 colours per frame unless the common decoders are fixed. It seems the main reason they continue to behave like this is that there are badly encoded GIF files online and they don't want them to stop working.

GIF, you are a surprisingly beautiful format and it's a shame we don't see your full potential!

14 December, 2018 04:07AM by Robert Ancell (noreply@blogger.com)

Robert Ancell: GIFs in GNOME

    Here is the story of how I fell down a rabbit hole and ended up learning far more about the GIF image format than I ever expected...
    We had a problem with users viewing a promoted snap using GNOME Software. When they opened the details page they'd have huge CPU and memory usage. Watching the GIF in Firefox didn't show a problem - it showed a fairly simple screencast demoing the app without any issues.
    I had a look at the GIF file and determined:
    • It was quite large for a GIF (13 MB).
    • It had a lot of frames (625).
    • It was quite high resolution (1790×1060 pixels).
    • It appeared the GIF was generated from a compressed video stream, so most of the frame data was just compression artifacts. GIF is lossless so it was faithfully reproducing details you could barely notice.
    GNOME Software uses GTK+, which uses gdk-pixbuf to render images. So I had a look at the GIF loading code. It turns out that all the frames are loaded into memory. That comes to 625×1790×1060×4 bytes. OK, that's about 4.4 GB... I think I see where the problem is. There's a nice comment in the gdk-pixbuf source that sums up the situation well:

     /* The below reflects the "use hell of a lot of RAM" philosophy of coding */

    They weren't kidding. 🙂
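    For the curious, that arithmetic is easy to check in a shell, using the frame count and dimensions listed above and 4 bytes per RGBA pixel:

    # frames × width × height × bytes per pixel
    echo $((625 * 1790 * 1060 * 4))
    # prints 4743500000, i.e. about 4.4 GB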

    While this particular example is hopefully not the normal case, the GIF format has somewhat come back from the dead in recent years to become a popular format again. So it would be nice if gdk-pixbuf could handle these cases well. This was going to be a fairly major change to make.

    The first step in refactoring is making sure you aren't going to break any existing behaviour when you make changes. To do this the code being refactored should have comprehensive tests around it to detect any breakages. There are a good number of GIF tests currently in gdk-pixbuf, but they are mostly around ensuring particular bugs don't regress rather than checking all cases.

    I went looking for a GIF test suite that we could use, but what was out there was mostly just collections of GIFs people had made over the years. These would give some good real-world examples, but no certainty that all cases were covered, nor any hint as to why your code was breaking if a test failed.

    If you can't find what you want, you have to build it. So I wrote PyGIF - a library to generate and decode GIF files and made sure it had a full test suite. I was pleasantly surprised that GIF actually has a very well written specification, and so implementation was not too hard. Diversion done, it was time to get back to gdk-pixbuf.

    Tests plugged in, and it turned out the existing code actually had a number of issues. I fixed them, though this took a lot of sanity to do. It would have been easier to replace the code with new code that met the test suite, but I wanted the patches to be back-portable to stable releases (i.e. Ubuntu 16.04 and 18.04 LTS).

    And with a better foundation, I could now make GIF frames load on demand. May your GIF viewing in GNOME continue to be awesome.

    14 December, 2018 02:16AM by Robert Ancell (noreply@blogger.com)

    December 13, 2018

    Podcast Ubuntu Portugal: S01E15 – Open Source Garden

    In this episode, Diogo Constantino was at the CMS Garden Unconference at the Unperfekthaus in Essen, together with the speakers and organisers of the Secure Open Source Day, and took the opportunity to record a pleasant conversation with the conference participants, introducing the event and the community.

    13 December, 2018 09:51PM

    Jonathan Riddell: Achievement of the Week

    This week I gave KDE Frameworks a web page after only 4 years of us trying to promote it as the best thing ever since tobogganing without one.  I also updated the theme on the KDE Applications 18.12 announcement to this millennium and even made the images in it have a fancy popup effect using the latest in jQuery Bootstrap CSS.  But my proudest contribution is making the screenshot for the new release of Konsole showing how it can now display all the cat emojis plus one for a poodle.

    So far no comments asking why I named my computer thus.


    13 December, 2018 06:41PM

    Ubuntu Podcast from the UK LoCo: S11E40 – North Dallas Forty

    This week we’ve been playing on the Nintendo Switch. We review our tech highlights from 2018 and go over our 2018 predictions, just to see how wrong we really were. We also have some Webby love and go over your feedback.

    It’s Season 11 Episode 40 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

    In this week’s show:

    That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

    13 December, 2018 03:00PM

    Alan Pope: Fixing Broken Dropbox Sync Support

    Like many people, I've been using Dropbox to share files with friends and family for years. It's a super convenient and easy way to get files synchronised between machines you own, and to work with others. This morning I was greeted with a lovely message on my Ubuntu desktop.

    Dropbox says 'no'

    It says "Can't sync Dropbox until you sign in and move it to a supported file system" with options to "See requirements", "Quit Dropbox" and "Sign in".

    Dropbox have reduced the number of file systems they support. We knew this was coming for a while, but it's a pain if you don't use one of the supported filesystems.

    Recently I re-installed my Ubuntu 18.04 laptop and chose XFS rather than the default ext4 partition type when installing. That's the reason the error is appearing for me.

    I do also use NextCloud and Syncthing for syncing files, but some of the people I work with only use Dropbox, and forcing them to change is tricky.

    So I wanted a solution where I could continue to use Dropbox but not have to re-format the home partition on my laptop. The 'fix' is to create a file, format it ext4 and mount it where Dropbox expects your files to be. That's essentially it. Yay Linux. This may be useful to others, so I've detailed the steps below.

    Note: I strongly recommend backing up your dropbox folder first, but I'm sure you already did that so let's assume you're good.

    This is just a bunch of commands, which you could blindly paste en masse or, preferably, run one by one, checking each did what it should before moving on. It worked for me, but may not work for you. I am not to blame if this deletes your cat pictures. Before you begin, stop Dropbox completely. Close the client.

    I've also put these in a github gist.

    # Location of the image which will contain the new ext4 partition
    DROPBOXFILE="$HOME"/.dropbox.img
    
    # Current location of my Dropbox folder
    DROPBOXHOME="$HOME"/Dropbox
    
    # Where we will copy the folder to. If you have little space, you could make this
    # a folder on a USB drive
    DROPBOXBACKUP="$HOME"/old_Dropbox
    
    # What size is the Dropbox image file going to be. It makes sense to set this
    # to whatever the capacity of your Dropbox account is, or a little more.
    DROPBOXSIZE="20G"
    
    # Create a 'sparse' file which will start out small and grow to the maximum
    # size defined above. So we don't eat all that space immediately.
    dd if=/dev/zero of="$DROPBOXFILE" bs=1 count=0 seek="$DROPBOXSIZE"
    
    # Format it ext4, because Dropbox Inc. says so
    sudo mkfs.ext4 "$DROPBOXFILE"
    
    # Move the current Dropbox folder to the backup location
    mv "$DROPBOXHOME" "$DROPBOXBACKUP"
    
    # Make a new Dropbox folder to replace the old one. This will be the mount point
    # under which the sparse file will be mounted
    mkdir "$DROPBOXHOME"
    
    # Make sure the mount point can't be written to if for some reason the partition 
    # doesn't get mounted. We don't want dropbox to see an empty folder and think 'yay, let's delete
    # all his files because this folder is empty, that must be what they want'
    sudo chattr +i "$DROPBOXHOME"
    
    # Mount the sparse file at the dropbox mount point
    sudo mount -o loop "$DROPBOXFILE" "$DROPBOXHOME"
    
    # Copy the files from the existing dropbox folder to the new one, which will put them
    # inside the sparse file. You should see the file grow as this runs.
    sudo rsync -a "$DROPBOXBACKUP"/ "$DROPBOXHOME"/
    
    # Create a line in our /etc/fstab so this gets mounted on every boot up
    echo "$DROPBOXFILE" "$DROPBOXHOME" ext4 loop,defaults,rw,relatime,exec,user_xattr 0 0 | sudo tee -a /etc/fstab
    
    # Let's unmount it so we can make sure the above line worked
    sudo umount "$DROPBOXHOME"
    
    # This will mount as per the fstab 
    sudo mount -a
    
    # Set ownership and permissions on the new folder so Dropbox has access
    sudo chown $(id -un) "$DROPBOXHOME"
    sudo chgrp $(id -gn) "$DROPBOXHOME"
    

    Now start Dropbox. All things being equal, the error message will go away, and you can carry on with your life, syncing files happily.

    Hope that helps. Leave a comment here or over on the github gist.

    13 December, 2018 11:15AM

    Cumulus Linux

    Lessons learned from Black Friday and Cyber Monday

     

    If you’re a consumer-facing business, Black Friday and Cyber Monday are the D-Day for IT operations. Low-end estimates indicate that upwards of 20% of a company's annual revenue can occur within these two days. The stakes are even higher if you’re a payment processor, as you aggregate the purchases across all consumer businesses. This means that the need to remain available during these crucial 96 hours is paramount.

    My colleague David and I have spent the past 10 months preparing for this day. In January 2018 we started a new deployment with a large payment processor to help them build out capacity for their projected 2018 holiday payment growth. Our goal was to build a brand-new, 11-rack data center as a third region to supplement the existing two regions used for payment processing. In addition, we helped deploy additional Cumulus racks and capacity at the existing two regions, which were historically built with traditional vendors.

    Now that both days have come and gone, read on to find out what we learned from this experience.

    Server Interop Testing

    Payment processing puts most of its weight on the payment applications running in the data center. As with most networking, the network is just a medium to access the applications that drive the business. The most overlooked part of a greenfield deployment is validating all of the server interop connectivity.

    This problem presents an interesting chicken-and-egg scenario. A network can be fully deployed, provisioned and control-plane validated without any applications active. Then the servers can be initially deployed, and their server-to-ToR (top-of-rack) connectivity can be established relatively simply. The challenge then is making sure actual applications work successfully in the environment.

    Having a reliable burn-in period that follows the initial deployment is critical to instil confidence and iron out any wrinkles in the deployment.

    Unfortunately, for this environment, we ran tight on time and didn’t have that dedicated, reliable burn-in period. As a result, we were fighting fires right up until go-live. Despite this being suboptimal, I get the feeling that every enterprise organization (or at least every enterprise organization I’ve worked with) ends up falling into this trap.

    Architecting Redundancy

    Redundancy can come in many forms, and we had to be careful with the seductive allure of application redundancy and dynamic migration of applications. When we’ve chased this functionality before, we’ve ended up in a place where that functionality was not trusted for production. To be clearer: a lot of the time, IT organizations will skimp on networking gear with the expectation that the application has built-in redundancy, either through a distributed solution or dynamic migration of applications. As a result, they think that single top-of-rack or single edge devices are satisfactory for high availability for their applications.

    As we get closer and closer to the go-live date, we’ve historically found that the robust redundancy promised by these applications doesn’t meet production level expectations. I’m not pointing fingers, but I think the problem is more complex than initially assessed.

    Luckily, we were in a place where we were able to build the network from the ground up using an architecture that accommodates the lowest common denominator. We built two top-of-rack switches everywhere, dual exits, and multiple ECMP paths in the Layer 3 infrastructure, amongst other redundancy measures classic to data center networking.

    The challenge we faced here was whether we needed redundancy at Layer 2 versus Layer 3. Layer 2 redundancy primarily meant using LAG/MLAG or an active/passive setup, which made L3 redundancy preferred because it’s simpler and more reliable to set up and troubleshoot. L3 protocols are open, and a redundant link is generally easy to isolate for troubleshooting. L2 redundancy, on the other hand, can be open when using LACP, but when running distributed LAG across two switches most vendors implement a proprietary solution to make this happen. Cumulus Linux is no different, as our MLAG solution only works with another Cumulus Linux peer.

    When we can, we try to prioritize two forms of redundancy:
    1. Layer 1 cabling redundancy
    2. Layer 3 routing redundancy

    Layer 2 redundancy can’t be avoided entirely, but we tried to reduce the amount of L2 redundancy where possible.

    Capacity Addition

    As we approached the go-live date of Black Friday, we found that additional capacity was needed for more reliable functionality of the application software. This meant that we had to build additional racks with minimal notice or runway. Luckily, because we leveraged automation from the start, we were never hindered by the lag of applying a configuration to a newly cabled rack.

    We only needed a couple hours of lead time to fully configure a new rack, and the majority of the capacity addition was around racking, stacking, and cabling up the hardware itself.

    Our design had identical cabling and configuration for every rack, with the only changes being loopback IP addresses. We also used BGP unnumbered, which kept us from having to manually define IP addresses per uplink from our top-of-racks. Updating the variables in our automation code was as simple as adding a new loopback variable for each new switch being introduced.
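    To illustrate, here is a rough sketch of what that per-rack configuration can boil down to with NCLU on Cumulus Linux; the ASN, loopback address and port names are made up for the example. Peering happens by interface name, so no per-uplink IP addressing is needed:

    # Per-switch identity: only the loopback changes from rack to rack
    net add loopback lo ip address 10.0.0.11/32
    net add bgp autonomous-system 65011
    # BGP unnumbered: peer over the uplinks by interface name
    net add bgp neighbor swp51 interface remote-as external
    net add bgp neighbor swp52 interface remote-as external
    net pending
    net commit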

    This experience taught us a lot and we hope that you can now benefit from our experience and learnings too. If you’re interested in reading more from me, check out my other recent blog “EVPN behind the curtains” here.

    The post Lessons learned from Black Friday and Cyber Monday appeared first on Cumulus Networks engineering blog.

    13 December, 2018 01:52AM by Rama Darbha

    December 12, 2018


    Univention Corporate Server

    Release of UCS Dashboard: Now Published in Version 1.1

    With version 1.1 we have released the final version of the UCS Dashboard Apps.

    The UCS Dashboard allows administrators to quickly and easily read the state of the domain and individual servers on different dashboards.

    After the beta release in the summer and lots of feedback, we have now released the final version of the UCS Dashboard apps (Dashboard, Database and Client). The changes primarily concern more robust installation and uninstallation and better configuration of the application. In addition, emphasis has been placed on extensibility, for example to be able to integrate non-UCS systems.

    Feedback, especially on other metrics about the server or domain status, is, of course, always welcome.

    The changes at a glance:

    UCS Dashboard

    • The UCS Dashboard Database (Prometheus) server is now configurable (Dashboard and Dashboard Database can thus run on different systems).
    • You can now integrate your own Grafana configuration.
    • Grafana has been updated to version 5.3.4.

    UCS Dashboard Database

    • An error during the uninstallation has been fixed (cron-job will be removed now).
    • You can now integrate your own configuration for Prometheus targets (clients).
    • Errors in the data collection script have been fixed.
    • Prometheus has been updated to version 2.4.3.

    UCS Dashboard Client

    • Errors in the data collection script have been fixed.

    UCS Dashboard Apps in App Center


    The post Release of UCS Dashboard: Now Published in Version 1.1 appeared first on Univention.

    12 December, 2018 12:07PM by Maren Abatielos


    Ubuntu developers

    Colin King: Linux I/O Schedulers

    The Linux kernel I/O schedulers attempt to balance the need to get the best possible I/O performance with ensuring that I/O requests are "fairly" shared among the I/O consumers.  There are several I/O schedulers in Linux, each trying to solve the I/O scheduling issues using different mechanisms/heuristics, and each has its own set of strengths and weaknesses.

    For traditional spinning media it makes sense to try and order I/O operations so that they are close together to reduce read/write head movement and hence decrease latency.  However, this reordering means that some I/O requests may get delayed, and the usual solution is to schedule these delayed requests after a specific time.   Faster non-volatile memory devices can generally handle random I/O requests very easily and hence do not require reordering.

    Balancing the fairness is also an interesting issue.  A greedy I/O consumer should not block other I/O consumers, and there are various heuristics used to determine the fair sharing of I/O.  Generally, the more complex and "fairer" the solution, the more compute is required, so a very fair I/O scheduler paired with a fast I/O device and a slow CPU may not necessarily perform as well as a simpler I/O scheduler.

    Finally, the types of I/O patterns on the I/O devices influence the I/O scheduler choice, for example, mixed random read/writes vs mainly sequential reads and occasional random writes.

    Because of the mix of requirements, there is no such thing as a perfect all-round I/O scheduler.  The defaults are chosen to be a good choice for the general user, however, this may not match everyone's needs.   To clarify the choices, the Ubuntu Kernel Team has provided a Wiki page describing the choices and how to select and tune the various I/O schedulers.  Caveat emptor applies, these are just guidelines and should be used as a starting point to finding the best I/O scheduler for your particular need.
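    As a quick starting point, the active scheduler for a disk can be inspected and changed at runtime through sysfs. A small sketch follows; the device name and the set of available schedulers will vary with your system and kernel:

    # Show the schedulers available for sda; the active one is in brackets
    cat /sys/block/sda/queue/scheduler
    # e.g. "noop deadline [cfq]", or "[mq-deadline] kyber bfq none" on blk-mq kernels

    # Switch the scheduler until the next reboot
    echo deadline | sudo tee /sys/block/sda/queue/scheduler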

    12 December, 2018 09:55AM by Colin Ian King (noreply@blogger.com)

    December 11, 2018


    Univention Corporate Server

    Third Point Release of Univention Corporate Server 4.3-3

    With UCS 4.3-3 the third point release for Univention Corporate Server (UCS) 4.3 is now available, which includes a number of important updates and various new features.

    Improved configurability of the portal

    The portal is the starting point for many UCS users and administrators. As described in the blog article Design the UCS Portal with Drag & Drop, you can adapt it very easily to your needs. The categories “Applications” and “Administration” were static until now. We have extended the portal so that you can now define your own categories. In addition, you can add static links to the portal, e.g. a link to an imprint.

    Screenshot UCS portal

    In many environments different users should be shown different tiles. To do this, the group members for whom a particular tile is displayed are stored in the tiles. Previously, you could only assign each tile to one group. With UCS 4.3-3 you can now assign several groups to each tile. As soon as a user is a member of one of these groups, the corresponding tile will be displayed.

    UCS Dashboard for infrastructure overview

    After a three-month beta phase, in which we have collected user feedback, the UCS Dashboard is now available in version 1.1 in the App Center. It can be used as an extension of UCS. In the past weeks, we have mainly improved the beta version towards more robust installation and uninstallation as well as better configuration and extensibility. Find more details about the UCS Dashboard in our article Release of UCS Dashboard: Now Published in Version 1.1.

    UCS_Dashboard_Server

    The dashboard allows administrators to see the state of the domain or individual servers quickly and easily on different dashboards. The UCS Dashboard provides the ability to identify server utilization trends and take action before problems occur. Especially in large environments with many servers, it is a very useful feature for administrators. During installation, two dashboards are automatically created and you can easily create additional custom dashboards yourself. The open source solutions Grafana and Prometheus are used as basic technologies.

    Usability improvements in the Univention Management Console

    The web-based Univention Management Console (UMC) allows you to manage the entire domain. UMC offers various modules for administration, e.g. for creating users, joining the domain or diagnosing the system. We have fixed minor bugs in different modules and improved their usability. Furthermore, we have improved the scrolling in the UMC module of the App Center and the LDAP Directory. Individual areas can now be scrolled independently and edited separately more easily.

    Multi container support for the App Center

    So far, the App Center has only offered Docker-based apps that consist of a single Docker container. From now on, so-called multi-container apps are also supported: apps that consist of more than one container. A detailed description can be found in the blog post Multi Container Support for Docker Apps for Univention App Center.

    In addition, various improvements have been integrated into the App Center, which above all improve the stability of updates. For example, the password you enter during app updates is now validated before any action is performed.

    New simplified Python API for using IDM

    The Identity Management System is accessed in UCS with the help of the Univention Directory Manager (UDM). In addition to the command line interface, UDM also offers a Python API, which is designed for internal use.

    With UCS 4.3-3 there is now a new, improved Python API, which makes it possible to work very easily with the objects from the Identity Management System. We will publish corresponding API documentation shortly. First examples can be found on GitHub.

    Univention Virtual Machine Manager

    UVMM is primarily used to centrally manage KVM-based virtual machines. We have adapted UVMM so that you can configure whether to allow live migration when the host systems have different CPU types. You can and should use this especially if you use incompatible CPU types in your environment.

    In addition, we have integrated several other improvements in UVMM. If, for example, the virtual machine pauses automatically, because there is not enough hard disk space available, a corresponding warning is now shown in UVMM.

    LDAP replication

    In our LDAP replication it could happen that the Univention Directory Listener reestablished the LDAP connection to another server while, under certain circumstances, the connection to the Univention Directory Notifier was kept. As a result, the notification could be sent by one system but the changes read from another, and incorrect replication could be the result. This problem has been fixed, and it is now ensured that the information always comes from the same system.

    Samba

    In Samba there is a command dbcheck to detect and fix inconsistencies in the Samba databases. This tool is often the first approach to solve problems in Samba environments. Several improvements have been integrated to find and fix errors in a more targeted way.

    For the synchronization of the directory service objects between OpenLDAP and Samba 4 the S4 Connector is used. For the synchronization between OpenLDAP and a Microsoft Active Directory the AD Connector is used. In both connectors various improvements in synchronization have been implemented, e.g. the synchronization of DNS objects and password expiration data in the S4 Connector and the synchronization of mail attributes in the AD Connector.

    Stability and Security Updates

    UCS 4.3-3 is now based on Debian 9.6, which was released in November and contains a lot of stability and security updates.

    In addition, various other security updates have been integrated into UCS 4.3-3, such as the Linux kernel, OpenSSL or MariaDB.

    A complete list of changes to UCS 4.3-3 including all CVE numbers can be found in our Release Notes.

    The post Third Point Release of Univention Corporate Server 4.3-3 appeared first on Univention.

    11 December, 2018 05:18PM by Stefan Gohmann



    Ubuntu developers

    Jono Bacon: 10 Ways To Up Your Public Speaking Game

    Public speaking is an art form. There are some amazing speakers, such as Lawrence Lessig, Dawn Wacek, Rory Sutherland, and many more. There are also some boring, rambling disasters that clog up meetups, conferences, and company events.

    I don’t claim to be an expert in public speaking, but I have had the opportunity to do a lot of it, including keynotes, presentation sessions, workshops, tutorials, and more. Over the years I have picked up some best practices and I thought I would share some of them here. I would love to hear your recommendations too, so pop them in the comments.

    1. Produce Clean Slides

    Great talks are a mixture of simple, effective slides and a dynamic, engaging speaker. If one part of this combination is overloading you with information, the other part gets ignored.

    The primary focus should be you and your words. Your #1 goal is to weave together an interesting story that captivates your audience. 

    Your slides should simply provide a visual tool to help get your words over more effectively. Your slides are not the lead actress; they are the supporting actor.

    Avoid extensive amounts of text and paragraphs. Focus on diagrams, pictures, and simple lists.

    Good:

    Bad:

    Notice how I took my company logo off, just in case someone swipes it and thinks that I actually like to make slides like this. 🙂

    Look at the slides of great speakers to get your creativity flowing.

    2. Deliver Pragmatic Information

    Keynotes are designed for the big ideas that set the stage for a conference. Regular talks are designed to get across key concepts that can help the audience expand their capabilities.

    With both, give your audience information they can pragmatically use. How many times have you left a talk and thought, “Well, that was neat, but, er…how the hell do I start putting those concepts into action?”

    You don’t have to have all the answers, but you need to package up your ideas in a way that is easy to consume in the real world, not just on a stage.

    Diagrams, lists, and step-by-step instructions work well. Make these higher level for the keynotes and more in-depth for the regular talks. Avoid abstract, generic ideas: they are unsatisfying and boring.

    3. Build and Relieve Tension

    Great movies and TV shows build a sense of tension (e.g. a character in a hostage situation) and the payoff is when that tension is relieved (e.g. the character gets rescued.)

    Take a similar approach in your talks. Become vulnerable. Share times when you struggled, got things wrong, or made mistakes. Paint a picture of the low point and what was running through your mind.

    Then, relieve the tension by sharing how you overcame it, bringing your audience along for the ride. This makes your presentation dynamic and interesting, and makes it clear that you are not perfect either, which helps build a closer connection with the audience. Speaking of which…

    4. Loosen Up and Be Yourself

    Far too many speakers deliver their presentations like they have a rod up their backside.

    Formal presentations are boring. Presentations where the speaker feels comfortable in their own skin and is able to identify with the audience are much more interesting.

    For example, I was delivering a presentation to a financial services firm a few months ago. I wove into it stories about my family, my love of music, travel experiences, and other elements that made it more personal. After the session a number of audience members came over and shared how refreshing it was to see a more approachable presentation in a world that is typically so formal.

    Your goal is to build a connection with your audience. To do this well they need to feel you are on the same level. Speak like them, share stories that relate to them, and they will give you their attention, which is all you can ask for.

    5. Involve Your Audience (but not too much)

    There is a natural barrier between you and your audience. We are wired up to know that the social context of a presentation means the speaker does the talking and the audience does the listening. If you violate this norm (such as by heckling), you will be perceived as an asshole.

    You need to break this barrier, but never cede control to your audience. If you lose control and make it the social norm for them to interrupt, your presentation will be riddled with audience noise.

    Give them very specific ways to participate, such as:

    • Ask how they are doing at the beginning of a talk.
    • Throw out questions and invite them to put their hands up (or clap loudly.)
    • Invite someone to volunteer for something (such as a role play scenario.)
    • Take and answer questions.

    6. Keep Your Ego in Check

    We have all seen it. A speaker is welcomed to the stage and they constantly remind you about how great they are, the awards they have won, and how (allegedly) inspirational they are. In some cases this is blunt-force ego, in some cases it is a humblebrag. In both cases it sucks.

    Be proud of your work and be great at it, but let the audience sing your praises, not you. Ego can have a damaging impact on your presentation and how you are perceived. This can drive a wedge between you and your audience.

    7. Don’t Rush, but Stay on Time

    We live in a multi-cultural world in which we travel a lot. You are likely to have an audience from all over the world, speaking many different languages, and from a variety of backgrounds. Speaking at a million words a minute will make understanding you very difficult for some people.

    Speak at a comfortable pace, and don’t rush it. Now, some of you will be natural fast-talkers, and will need to practice this. Remember these?:

    Well, we now all have them on our phones. Switch it on, practice, and ensure you always finish at least a few minutes before your allocated time. This will give you a buffer.

    Running over your allocated time is a sure-fire way to annoy (a) the other speakers who may have to cut their time short, and (b) the event organizer who has to deal with overruns in the schedule. “But it only went over by a few minutes!” Sure, but when everyone does this, entire events get way behind schedule. Don’t be that person.

    8. Practice and get Honest Feedback

    We only get better when we practice and can see our blind spots. Both are essential for getting good at public speaking.

    Start simple. Speak at your local meetups, community events, and other gatherings. Practice, get comfortable, and then submit talk proposals to conferences and other events. Keep practicing, and keep refining.

    Critique is essential here. Ask close friends to sit in on your talks and ask them for blunt feedback afterwards. What went well? What didn’t go well? Be explicit in inviting criticism and don’t overreact when you get it. You want critical feedback…about your slides, your content, your pacing, your hand gestures…the lot. I have had some very blunt feedback over the years and it has always improved my presentations.

    9. Never Depend on Conference WiFi

    It rarely works well, simple as that.

    Oh, and your mobile hotspot may not work either, as many conference centers often seem to be built as borderline Faraday cages. Next…

    10. Remember, it is just a Presentation

    Some people get a little wacky when it comes to perfecting presentations and public speaking. I know some people who have spent weeks preparing and refining their talks, often getting into a tailspin about imperfections that need to be improved.

    The most important thing to worry about is the content. Is it interesting? Is it valuable? Does it enrich your audience? People are not going to remember the minute details of how you said something, what your slides looked like, or whether you blinked too much. They will remember the content and ideas: focus on that.

    Oh, and a bonus 11th: turn off animations. They are great in the hands of an artisan, but for most of us they look tacky and awful.

    I am purely scratching the surface here and I would love to hear your suggestions of public speaking tips and recommendations. Share them in the comments! Oh and be sure to join as a member, which is entirely free.

    The post 10 Ways To Up Your Public Speaking Game appeared first on Jono Bacon.

    11 December, 2018 04:00PM


    Maemo developers

    A Pathetic Human Being

    A Venetian gondoliere thought it a good idea to decorate his gondola with fascist symbols, yet he can't handle that others don't find the "joke" funny.

    The post A Pathetic Human Being appeared first on René Seindal.


    11 December, 2018 03:40PM by René Seindal (rene@seindal.dk)


    Tails

    Tails 3.11 is out

    This release fixes many security vulnerabilities.

    You should upgrade as soon as possible.

    Changes

    New features

    Upgrades and changes

    • Add a confirmation dialog between downloading and applying an automatic upgrade, to give better control over when the network is disabled and to prevent partially applied upgrades. (#14754 and #15282)

    • When running from a virtual machine, warn about the trustworthiness of the operating system even when running from free virtualization software. (#16195)

    • Disable Autocrypt in Thunderbird to prevent sending unencrypted emails by mistake. (#15923)

    • Update Linux to 4.18.20.

    • Update Tor Browser to 8.0.4.

    • Update Thunderbird to 60.3.0.

    Fixed problems

    • Fix the opening of Thunderbird in non-English languages. (#16113)

    • Reduce the logging level of Tor when using bridges. (#15743)

    For more details, read our changelog.

    Known issues

    None specific to this release.

    See the list of long-standing issues.

    Get Tails 3.11

    What's coming up?

    Tails 3.12 is scheduled for January 29.

    Have a look at our roadmap to see where we are heading to.

    We need your help and there are many ways to contribute to Tails (donating is only one of them). Come talk to us!

    11 December, 2018 12:34PM


    Ubuntu developers

    The Fridge: Ubuntu Weekly Newsletter Issue 556

    Welcome to the Ubuntu Weekly Newsletter, Issue 556 for the weeks of November 25 – December 8, 2018. The full version of this issue is available here.

    In this issue we cover:

    The Ubuntu Weekly Newsletter is brought to you by:

    • Krytarik Raido
    • Bashing-om
    • Chris Guiver
    • Wild Man
    • And many others

    If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

    Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

    11 December, 2018 12:45AM

    December 10, 2018


    Purism PureOS

    Break Free from Privacy Prison

    As 2018 comes to a close, people around the world have to face the stark truth of surveillance capitalism. Nearly all consumer products — speakers, phones, cars, and perhaps even mattresses — are recording devices, storing metrics on our movements and behavior. The New York Times just published a detailed report on location tracking in leaky Android and iOS apps. That’s just a fact of life when people use smartphones, right? Wrong. In 2019, Purism’s Librem 5 smartphone will be proof that no one has to live with spies in their pockets.

    If anything has changed since Facebook’s Cambridge Analytica scandal, it’s that more and more people are jumping ship from the Frightful Five: Google, Amazon, Facebook, Apple, and Microsoft. At Purism, we offer an alternative to the polluted software ecosystems of these tech giants.

    Our code is Free and Open-Source Software (FOSS), the industry standard in security because it can be verified by experts and amateurs alike. The software on our Librem laptops and our upcoming phone stands on a strong, foundational chain of trust that is matched by hardware features such as kill switches. These switches give people the added assurance that their devices won’t record or “phone home” to advertisers, spies, and cyber criminals. Turn off WiFi, microphone, and webcam on the Librem 5 and they’re off, no question about it.

    Purism’s combination of trustworthy hardware and software is a win for privacy advocates, enterprise, and the so-called average user. We believe that everyone deserves privacy and that security, freedom, and autonomy are closely linked. To build a libre and privacy-respecting world, however, we need to fathom the scope of the problem and meet it head-on.

    The Times exposé follows the movements of Lisa Magrin, a school teacher. On Ms. Magrin’s trips to school, location markers were recorded “more than 800 times”, often in her classroom. But it doesn’t stop there: “An app on the device gathered her location information, which was then sold without her knowledge. It recorded her whereabouts as often as every two seconds… While Ms. Magrin’s identity was not disclosed in [phone] records, The Times was able to easily connect her to [a spot on the map].”

    Think that’s invasive? Apps continued to track Ms. Magrin while she traveled to a Weight Watchers meeting, to a dermatologist, hiking with her dog, and to her ex-boyfriend’s home. Many people know that tracking is ubiquitous, but facing the stark results is a harrowing experience.

    Privacy researchers have known the pitfalls of ever-listening sensors for a long time, so I wish I could say this is news to me. My research at Yale Privacy Lab has explored smartphone spying in detail, and I’ve personally dug into the privacy pitfalls of everything from Google’s bogus location settings to snooping billboards. This time last year, Yale Privacy Lab collaborated with the researchers at Exodus Privacy and utilized the excellent Exodus scanner to reveal just how polluted the mobile app ecosystem really is.

    I work at Purism because I know we can solve this problem. Big changes are on the horizon in society at large, and people are fed up with surveillance capitalism. Purism offers replacements for the privacy prisons of the Frightful Five. You don’t have to become a Luddite to enjoy them, either — our products are as beautiful as they are secure.

    Take a stand, #DemandFreedom, and join the world that we’re making.

    10 December, 2018 07:50PM by Sean O'Brien


    Ubuntu developers

    Jono Bacon: Speaking Engagements in Tel Aviv in December

    I am excited to share that I will be heading to Tel Aviv later this month to speak at a few events. I wanted to share a few details here, and I hope to see you there!

    DevOps Days Tel Aviv

    DevOps Days Tel Aviv takes place Tuesday 18 December 2018 and Wednesday 19 December 2018 at the Tel Aviv Convention Center, 101 Rokach Blvd, Tel Aviv, Israel.

    I am delivering the opening keynote on Tuesday 18th December 2018 at 9am.

    Get Tickets

    Meetup: Building Technical Communities That Scale

    Thu 20th Dec 2018 at 9am at RISE, 54 Ahad Ha’Am Street, Tel Aviv-Yafo, Tel Aviv District, Israel.

    I will be delivering a talk and participating in a panel (which includes Fred Simon, Chief Architect of JFrog, Shimon Tolts, CTO of Datree, and Demi Ben Ari, VP R&D of Panorays.)

    Get Tickets (Space is limited, so grab tickets ASAP)

    I popped a video about this online earlier this week. Check it out:

    I hope to see many of you there!

    The post Speaking Engagements in Tel Aviv in December appeared first on Jono Bacon.

    10 December, 2018 07:00AM


    VyOS

    DNS forwarding in VyOS

    A lot of small networks do not have their own DNS server, but it's not always desirable to just leave hosts to use an external third-party server either. That's why we've had DNS forwarding in VyOS for a long time and are going to keep it there for the foreseeable future.

    Experienced VyOS users already know all about it, but we should post something for newcomers too, shouldn't we?

    Configuring DNS forwarding is very simple. Assuming you have "system name-server" set, all you need to do to simply forward requests from hosts behind eth0 to it is "set service dns forwarding listen-on eth0". Repeat for every interface where you have clients, and you are done.

    There are some knobs for telling the service to use or not use specific DNS servers though:

    set service dns forwarding listen-on eth0
    
    # Use name servers from "system name-server"
    set service dns forwarding system
    
    # Use servers received from DHCP on eth1 (typically an ISP interface)
    set service dns forwarding dhcp eth1
    
    # Use a hardcoded name server
    set service dns forwarding name-server 192.0.2.10
    

    You can also specify cache size:

    set service dns forwarding cache-size 1000
    

    One of the less known features is the option to use different name servers for different domains. It can be used for a quick and dirty split-horizon DNS, or simply for using an internal server just for internal domains rather than recursive queries:

    set service dns forwarding domain mycompany.local server 192.168.52.100
    set service dns forwarding domain mycompany.example.com server 192.168.52.100
    

    And that's all there is to it. DNS forwarding is not a big feature — useful doesn't always equal complex.

    10 December, 2018 01:45AM by Daniil Baturin


    Grml developers

    Frank Terbeck: E-Series Preferred Values

    In my day job, I'm an electrical engineer, though I'm mostly involved in writing firmware that runs on bare-metal systems. If you design electrical circuits for a living, or have just tinkered with them, you will have heard of E-series preferred values. In particular, resistors, capacitors and inductors are generally available in ratings derived from these numbers. The basic idea is to fit a number of values into one order of magnitude. For example, in E3 there are three values in the same decimal power: 100, 220 and 470. The next value would be 1000, in the next decimal power. Likewise, in E12 there are twelve such values.

    Roughly, E-series follow an exponential definition, although the standardised series don't stick to the exact mathematical expression everywhere. For an actual implementation you either adjust to the specified values or you simply use the actual tables from the spec.
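    In formula form, the ideal value of the k-th entry within one decade of series E_N, before rounding to the tabulated figures, is:

    v_k = 10^{k/N}, \qquad k = 0, 1, \ldots, N - 1

    For instance, the third E12 entry is 10^{2/12} ≈ 1.468, which the spec rounds to 1.5.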

    Now, looking up values in tables is a boring task, especially if the tables are relatively small. Finding combinations, though, is more interesting. For example, if you'd like a resistor rated at 50Ω (which is a rather important value in radio frequency design), you'll notice that the exact value is not available in any of the E-series. But it's easy to combine two 100Ω resistors in parallel to produce a value of 50Ω. …sure, in an actual design you might use 49.9Ω from E96 or E192, or use a specially made resistor of the desired rating. But you get the idea. Combining components in parallel and series circuits allows the parts from E-series to cover lots of ground.
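    The parallel combination follows the usual identity:

    \frac{1}{R} = \frac{1}{R_1} + \frac{1}{R_2} \quad\Rightarrow\quad R = \frac{100 \times 100}{100 + 100}\,\Omega = 50\,\Omega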

    I've written a library that implements E-Series in Scheme. Its main modules are (e-series adjacency), which allows looking up values from an E-Series that are in the vicinity of a given value. Then there's (e-series combine) which produces combinations of values from a certain E-Series to approximate a given value. And finally there's the top-level (e-series) module, that implements frontends to the other mentioned modules, to make it possible to easily use the library at a Scheme REPL.

    To see if the library finds a combination from E12 that matches 50Ω:

    scheme@(guile-user)> (resistor 12 50)
     ------------+-------------------------+-------------+-------------+--------------
        Desired  |         Actual (Error)  |     Part A  |     Part B  |  Circuit
     ------------+-------------------------+-------------+-------------+--------------
       50.0000Ω  |   50.0000Ω (  exact  )  |   100.000Ω  |   100.000Ω  |  parallel
       50.0000Ω  |   50.0380Ω (+7.605E-4)  |   56.0000Ω  |   470.000Ω  |  parallel
       50.0000Ω  |   49.7000Ω (-6.000E-3)  |   47.0000Ω  |   2.70000Ω  |  series
       50.0000Ω  |   50.3000Ω (+6.000E-3)  |   47.0000Ω  |   3.30000Ω  |  series
     ------------+-------------------------+-------------+-------------+--------------
    

    Those aren't all the combinations that are possible. By default the module produces tables that contain combinations matching the desired value to within 1%. Now, to see values in the vicinity of 50Ω in all E-series, you can do this:

    scheme@(guile-user)> (resistor 50)
     ---------+--------------------------+-------------+--------------------------
      Series  |           Below (Error)  |      Exact  |           Above (Error)
     ---------+--------------------------+-------------+--------------------------
        E3    |   47.0000Ω  (-6.000E-2)  |             |   100.000Ω  (+1.000E+0)
        E6    |   47.0000Ω  (-6.000E-2)  |             |   68.0000Ω  (+3.600E-1)
        E12   |   47.0000Ω  (-6.000E-2)  |             |   56.0000Ω  (+1.200E-1)
        E24   |   47.0000Ω  (-6.000E-2)  |             |   51.0000Ω  (+2.000E-2)
        E48   |   48.7000Ω  (-2.600E-2)  |             |   51.1000Ω  (+2.200E-2)
        E96   |   49.9000Ω  (-2.000E-3)  |             |   51.1000Ω  (+2.200E-2)
        E192  |   49.9000Ω  (-2.000E-3)  |             |   50.5000Ω  (+1.000E-2)
     ---------+--------------------------+-------------+--------------------------
    

    And here you see that E96 and E192 have pretty close matches as single components.

    With combinations, the library allows the user to specify arbitrarily complex predicates to choose from the generated combinations. For example, to only return parallel circuits that approximate 50Ω from E12:

    scheme@(guile-user)> (resistor 12 50 #:predicate (circuit 'parallel))
    ...
    

    And to limit those results to those that have an error of 0.001 or better, here's a way:

    scheme@(guile-user)> (resistor 12 50 #:predicate (all-of (max-error 1e-3)
                                                             (circuit 'parallel)))
     ------------+-------------------------+-------------+-------------+--------------
        Desired  |         Actual (Error)  |     Part A  |     Part B  |  Circuit
     ------------+-------------------------+-------------+-------------+--------------
       50.0000Ω  |   50.0000Ω (  exact  )  |   100.000Ω  |   100.000Ω  |  parallel
       50.0000Ω  |   50.0380Ω (+7.605E-4)  |   56.0000Ω  |   470.000Ω  |  parallel
     ------------+-------------------------+-------------+-------------+--------------
    

    There are frontends for inductors and capacitors as well, so you don't have to mentally strain yourself too much about which combination corresponds to which circuit. To find a 9.54μF capacitor approximation from E24 with an error better than 0.002:

    scheme@(guile-user)> (capacitor 24 #e9.54e-6 #:predicate (max-error #e2e-3))
     ------------+-------------------------+-------------+-------------+--------------
        Desired  |         Actual (Error)  |     Part A  |     Part B  |  Circuit
     ------------+-------------------------+-------------+-------------+--------------
      9.54000µF  |  9.53000µF (-1.048E-3)  |  9.10000µF  |  430.000nF  |  parallel
      9.54000µF  |  9.55102µF (+1.155E-3)  |  13.0000µF  |  36.0000µF  |  series
      9.54000µF  |  9.52381µF (-1.697E-3)  |  10.0000µF  |  200.000µF  |  series
     ------------+-------------------------+-------------+-------------+--------------
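
    Incidentally, this is where the frontends spare you the mental strain: capacitances add in parallel and combine reciprocally in series, the opposite of resistors. A quick Python sketch (my own illustration) that checks the first two table rows above:

    def cap_parallel(c1, c2):
        # parallel capacitances add
        return c1 + c2

    def cap_series(c1, c2):
        # series capacitances combine reciprocally, like parallel resistors
        return c1 * c2 / (c1 + c2)

    print(cap_parallel(9.1e-6, 430e-9))  # ~9.53e-06 F, the first row
    print(cap_series(13e-6, 36e-6))      # ~9.55102e-06 F, the second row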
    

    The test-suite and documentation coverage could be better, but the front-end module is easy enough to use, I think.

    10 December, 2018 12:38AM

    December 09, 2018

    hackergotchi for VyOS

    VyOS

    VyOS 1.2.0-rc10 is available for download

    VyOS 1.2.0-rc10 is available for download from https://downloads.vyos.io/?dir=testing/1.2.0-rc10 

    Resolved issues

    The following issues have been fixed:

    • If you save your configuration on a system booted from a live CD, you will now be offered the option to copy it to the installed image (T1047).
    • EFI GRUB can now be installed in a removable location (T1023).
    • The "run show vpn ipsec sa" command now works correctly for SAs with non-zero traffic counters (T956).
    • IPsec SA in/out traffic counters are now displayed in human-readable units (T956).

    Issues that need testing

    We have a bunch of issues that need testing. Please tell us if the following features work for you, or help us figure out a reproduction procedure! We need to make sure they are resolved before we make a stable 1.2.0 release, but we are either unable to reproduce them because they are hardware-specific and we don't have the required hardware anywhere, or we cannot reproduce them using the provided procedure, which may mean either that the procedure is incomplete or that the bug is already fixed.

    • Kernel crashed under (network) load with gen5 Mellanox cards (T1014).
    • Broken 6rd tunnel implementation (T1000).
    • Problem with Intel XL710 NICs (T961).
    • L2TPv3 interfaces sometimes not loaded on boot (T942).
    • OSPF process crashing on peer reboot (T922).
    • BGP process doesn't start on boot (T904).

    Additionally, we would like to know if DMVPN and SNMP integration with routing protocols are working well for you. If you've seen any of those issues, or, to the contrary, you can confirm that you've never seen them, please let us know.

    09 December, 2018 09:17PM by Daniil Baturin

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Benjamin Mako Hill: Awards and citations at computing conferences

    I’ve heard a surprising “fact” repeated in the CHI and CSCW communities that receiving a best paper award at a conference is uncorrelated with future citations. Although it’s surprising and counterintuitive, it’s a nice thing to think about when you don’t get an award and a nice thing to say to others when you do. I’ve thought it and said it myself.

    It also seems to be untrue. When I tried to check the “fact” recently, I found a body of evidence that suggests that computing papers that receive best paper awards are, in fact, cited more often than papers that do not.

    The source of the original “fact” seems to be a CHI 2009 study by Christoph Bartneck and Jun Hu titled “Scientometric Analysis of the CHI Proceedings.” Among many other things, the paper presents a null result for a test of a difference in the distribution of citations across best papers awardees, nominees, and a random sample of non-nominees.

    Although the award analysis is only a small part of Bartneck and Hu’s paper, at least two papers have subsequently brought more attention, more data, and more sophisticated analyses to the question. In 2015, the question was asked by Jaques Wainer, Michael Eckmann, and Anderson Rocha in their paper “Peer-Selected ‘Best Papers’—Are They Really That ‘Good’?”

    Wainer et al. build two datasets: one of papers from 12 computer science conferences with citation data from Scopus, and another of papers from 17 different conferences with citation data from Google Scholar. Because of parametric concerns, Wainer et al. used a non-parametric rank-based technique to compare awardees to non-awardees. They summarize their results as follows:

    The probability that a best paper will receive more citations than a non best paper is 0.72 (95% CI = 0.66, 0.77) for the Scopus data, and 0.78 (95% CI = 0.74, 0.81) for the Scholar data. There are no significant changes in the probabilities for different years. Also, 51% of the best papers are among the top 10% most cited papers in each conference/year, and 64% of them are among the top 20% most cited.
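
    That headline number is a “probability of superiority”, the effect size that drops out of a rank-based comparison. For readers unfamiliar with it, here is a minimal Python sketch of how such an estimate is computed; the citation counts are invented, and this is not Wainer et al.’s data or exact procedure:

    # Estimate P(awarded paper cited more than a non-awarded paper) as
    # U / (n1 * n2), where U is the Mann-Whitney statistic. The citation
    # counts below are made up purely for illustration.
    from scipy.stats import mannwhitneyu

    awarded = [12, 30, 45, 7, 60, 25]
    others = [3, 10, 5, 22, 8, 15, 2, 18]

    u, p = mannwhitneyu(awarded, others, alternative="greater")
    print(f"probability of superiority: {u / (len(awarded) * len(others)):.2f}")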

    The question was also recently explored in a different way by Danielle H. Lee in her paper on “Predictive power of conference‐related factors on citation rates of conference papers” published in June 2018.

    Lee looked at 43,000 papers from 81 conferences and built a regression model to predict citations. Taking into account a number of controls not considered in previous analyses, Lee finds that the marginal effect of receiving a best paper award on citations is positive, well-estimated, and large.

    Why did Bartneck and Hu come to such a different conclusion than later work?

    Distribution of citations (received by 2009) of CHI papers published between 2004-2007 that were nominated for a best paper award (n=64), received one (n=12), or were part of a random sample of papers that did not (n=76).

    My first thought was that perhaps CHI is different from the rest of computing. However, when I looked at the data from Bartneck and Hu’s 2009 study—conveniently included as a figure in their original study—I could see that they did find a higher mean among the award recipients compared to both nominees and non-nominees. The entire distribution of citations among award winners appears to be pushed upwards. Although Bartneck and Hu found an effect, they did not find a statistically significant effect.

    Given the more recent work by Wainer et al. and Lee, I’d be willing to venture that the original null finding was a function of the fact that citations are a very noisy measure—especially over a 2–5 year post-publication period—and that the Bartneck and Hu dataset was small, with only 12 awardees out of 152 papers total. This might have caused problems because the statistical test the authors used was an omnibus test for differences in a three-group sample that was imbalanced heavily toward the two groups (nominees and non-nominees) in which there appears to be little difference. My bet is that the paper’s conclusion on awards is simply an example of how a null effect is not evidence of a non-effect—especially in an underpowered dataset.
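
    To illustrate that last point, here is a hedged simulation sketch in Python. The distributions are invented and Kruskal-Wallis merely stands in for whatever omnibus test the authors actually used, but the group sizes match the figure above; with samples this imbalanced, a genuine difference in the small awardee group goes undetected in most runs:

    # Simulate noisy citation counts where awardees (n=12) genuinely get
    # more citations than nominees (n=64) and non-nominees (n=76), and
    # count how often an omnibus test reaches p < 0.05.
    import numpy as np
    from scipy.stats import kruskal

    rng = np.random.default_rng(42)
    runs, significant = 1000, 0
    for _ in range(runs):
        awardees = rng.lognormal(mean=2.0, sigma=1.0, size=12)  # true effect
        nominees = rng.lognormal(mean=1.6, sigma=1.0, size=64)
        others = rng.lognormal(mean=1.6, sigma=1.0, size=76)
        _, p = kruskal(awardees, nominees, others)
        significant += p < 0.05
    print(f"omnibus test significant in {significant / runs:.0%} of runs")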

    Of course, none of this means that award winning papers are better. Despite Wainer et al.’s claim that they are showing that award winning papers are “good,” none of the analyses presented can disentangle the signalling value of an award from differences in underlying paper quality. The packed rooms one routinely finds at best paper sessions at conferences suggest that at least some of the additional citations received by award winners might come from the extra exposure the awards themselves create. In the future, perhaps people can say something along these lines instead of repeating the “fact” of the non-relationship.


    09 December, 2018 08:20PM

    December 07, 2018

    Cumulus Linux

    Cumulus Linux in the enterprise campus.

    As most know, Cumulus Linux was originally intended for data center switching and routing, but over the years our customer base has requested that we expand into the enterprise campus feature set too. Slowly, we’ve done just that.

    With this expansion, though, there are a few items that IT managers tend to take for granted in an all-Cisco environment that may need some extra attention when using Cumulus Linux as a campus switch. This is especially the case when it comes to IEEE 802.1x, desk phones, etc.

    Most of the phones we interoperate with have been of the Cisco variety, and quite often those phones are connected to Cisco switches. There are a few tweaks from the default Cumulus settings that need to be called out in this environment, and we’ll now go over what those are and how you can make them.

    Cisco IP Phones TLV change

    Cisco IP phones may revert to a different VLAN after initial negotiation. One of our enterprise customers found that according to a Cisco tech note on LLDP-MED and CDP, CDP should be disabled on non-Cisco switches connecting to Cisco phones.

    To eliminate this behavior, make the following adjustment to the lldp daemon:

    `sudo vi /etc/default/lldpd`

    Change this default:

    # Enable CDP by default
    DAEMON_ARGS="-c -M 4"

    To this setting:

    # Disable CDP (LLDP only)
    DAEMON_ARGS="-M 4"

    then `systemctl restart lldpd.service`

    IP Phones random disconnects / Voice quality issues

    The problem is straightforward: IP phones will disconnect and re-authenticate randomly. Another symptom is that voice quality may suffer. The problem doesn’t seem to be phone-model specific; it’s more a function of having several phones connected to a switch. Most implementations won’t see this problem, as it is related specifically to using IP phones and the Cumulus Linux Redistribute Neighbor function together.

    Redistribute Neighbor is a feature that enables devices to span subnets by taking an ARP entry and advertising its /32 IPv4 address upstream. More information about this functionality is available in the Cumulus Linux documentation and this fine blog post written by Doug Youd a couple of years ago.

    To eliminate this problem, take the following action:

    `vi /etc/rdnbrd.conf`

    Change these 2 values:

    # TX an ARP request to known hosts every keepalive seconds
    keepalive = 1

    # If a host does not send an ARP reply for holdtime consider the host down
    holdtime = 3

    To something like this:

    # TX an ARP request to known hosts every keepalive seconds
    keepalive = 60

    # If a host does not send an ARP reply for holdtime consider the host down
    holdtime = 240

    Then issue `systemctl restart rdnbrd.service`

    The theory behind the keepalive and hold time change is that the phone doesn’t have the processing capability to respond to the amount of control traffic that Redistribute Neighbor is sending its way. Redistribute Neighbor sends ARP messages to the device to ensure that it’s still “there”; you don’t want to run into stale entries while you’re advertising that device into the network for reachability.

    There is a downside to this timer change, which is that you won’t detect devices that “go away” in a timely manner. For instance, if you move an IP phone from one switch to another, your /32 route entry won’t flush until the hold time has expired, even if the connecting port goes down.

    Configuring Dynamic VLAN with Voice VLAN

    By default, the dynamic VLAN feature for dot1x configures ports as access ports when the dynamic VLAN is received in the authorization. In an environment configured with a voice VLAN, we need to assign the PVID dynamically instead of configuring the port as access. Currently, only configuring the host VLAN dynamically is supported, so the voice VLAN on the port must still be configured manually.

    To change the dynamic VLAN behavior to configure the PVID dynamically, instead of the access VLAN, add the following line to /etc/hostapd.conf:
    bridge_pvid_enable=1

    After adding the line, the hostapd service must be restarted:
    sudo systemctl restart hostapd.service

    Configuring EAP Requests from the switch

    In some cases, an attached device may initiate the EAP process prior to the link or switch being ready. To handle this, it’s possible to configure the switch to send an EAP request to re-initiate the EAPOL process from the attached client.

    To configure this option, edit /etc/hostapd.conf and add the following:
    eap_send_identity=1

    After adding the line, the hostapd service must be restarted:
    sudo systemctl restart hostapd.service

    In other news

    At this time, the solutions outlined above are only tested and supported with VLAN-aware bridge implementations. The “voice-VLAN” command documented in the Cumulus Linux documentation isn’t needed for the functionality specified above.

    I’m sure that as we continue to deploy in various campus environments we’ll run across other tidbits to share. Until then, hopefully this post helps save someone some troubleshooting time.

    The post Cumulus Linux in the enterprise campus. appeared first on Cumulus Networks engineering blog.

    07 December, 2018 06:16PM by Kevin Witherstine

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Omer Akram: Introducing PySide2 (Qt for Python) Snap Runtime

    Lately at Crossbar.io, we have been using PySide2 for an internal project. Last week it reached a milestone, and I am now in the process of code cleanup and refactoring, as we had to rush quite a few things for that deadline. We also created a snap package for the project. Our previous approach was to ship the whole PySide2 runtime (170 MB+) with the snap; it worked, but builds were slow, because each new snap build involved downloading PySide2 from PyPI and installing some deb dependencies.

    So I decided to play with the content interface and cooked up a new snap that is now published to the Snap Store. This definitely resulted in an overall size reduction of the snap, but at the same time it opens a lot of different opportunities for app development on the Linux desktop.

    I created a 'Hello World' snap that is just 8 KB in size, since it doesn't include any dependencies with it; they are provided by the pyside2 snap. I am currently working on a very simple "sound recorder" app using PySide2 and will publish it to the Snap Store.

    With the pyside2 snap installed, we can probably export a few environment variables to make the runtime available outside of the snap environment, for someone who is developing an app on their computer.

    07 December, 2018 05:11PM by Omer Akram (noreply@blogger.com)

    Jono Bacon: Forbes Piece: The Five Ways Peloton Weave Community and Content Beautifully

    This is an article I wrote for Forbes, which I wanted to share here too.

    Peloton are a fascinating company. Founded in 2012, they started selling their spinning bikes in 2014. These bikes are not your usual spinning affair, though: they have a large touchscreen wedged on the front:

    The Peloton Bike

    The screen is used to stream live classes to the bike. A rider can see a calendar of different classes, opt in, and join an instructor who guides them through the different aspects of the class, such as increasing speed and resistance, and even arm exercises with the companion weights. Can’t make a class? There is a huge library of classes on-demand, including floor exercises and even yoga.

    There is little doubt that Peloton are killing it. They are valued at $4 billion, with an IPO likely happening early next year. At $2000+ for a bike, they don’t come cheap, and you also need to throw in the $39/month subscription to access the classes and content. The price hasn’t put people off, though, and they have sold over 300,000 bikes.

    Interestingly, when they raised the price of the bike, they sold more. Why? I believe it is because they have married convenience with carefully crafted community and gamification.

    Unlocking Competitive Spirit

    Psychologically, competition is an important component of how we behave. It is often extrinsically motivated: we become competitive because we want the reward…the prize, the new job, the recognition, or something else. There have been a number of examples where this competitive spirit has been used to harness positive outcomes, such as the Orteig Prize, the Ansari X Prize, and many others.

    Fitness is tough because it typically rests on intrinsic rewards: you want to feel better, lose weight, or get faster. There is no gift basket for feeling better, losing weight, or getting faster unless you are an athlete. The layperson needs to be willing to invest significant effort in exercising to generate these intrinsic rewards. For many, this is a bridge too far.

    The Peloton Approach

    To think of Peloton as an exercise equipment manufacturer is a mistake. They are a content company, and it is their library of content that is the driver of their success.

    Exercise content and bikes have been around forever, though. It is the integration between the bike and the content, and importantly, the unlocking of personal and social competition, that is where their secret sauce is brewed. There are five key areas in which they are doing this.

    Click here to read the rest of this piece on Forbes.

    The post Forbes Piece: The Five Ways Peloton Weave Community and Content Beautifully appeared first on Jono Bacon.

    07 December, 2018 04:30PM

    hackergotchi for Univention Corporate Server

    Univention Corporate Server

    Multi Container Support for Docker Apps for Univention App Center

    Since the release of UCS 4.1 in November 2015, the App Center has supported Docker apps. These are applications in the form of Docker images that are deployed by the App Center in a Docker container. To do this, the App Center downloads the Docker image of an app and starts the Docker container. We call these apps “Single Container Apps” because the App Center only supports one container per app. This functionality is sufficient for many apps.

    Until now! The app landscape is constantly changing, and new app candidates like Rocket.Chat, Metasfresh and Zammad no longer consist of just one container. These apps are called “Multi Container Applications”. In order to fulfill the wish of many of our users to find such apps in the Univention App Center and run them on UCS as well, the App Center now supports Multi Container Apps as of UCS 4.3 errata update 345, thus creating the necessary prerequisites.

    A common approach to container virtualization, as implemented by Docker, is to provide individual services in individual containers. Probably the most common use case is the separation of application and database, as with Rocket.Chat. The number of containers is not limited to two, as Metasfresh shows.


    Containers on a crane

    If you want to learn more about the Docker technology itself, we recommend this short introduction:

    Brief introduction: Docker


    Multi Container with Docker Compose

    Docker Compose, which is also used by the App Center, handles multi-container applications in the Docker environment. The central element is the docker-compose.yml file. For Rocket.Chat it looks like this:

    version: '2'
    services:
      rocketchat:
        image: rocketchat/rocket.chat:latest
        restart: unless-stopped
        volumes:
          - ./uploads:/app/uploads
        environment:
          - PORT=3000
          - ROOT_URL=http://localhost:3000
          - MONGO_URL=mongodb://mongo:27017/rocketchat
          - MONGO_OPLOG_URL=mongodb://mongo:27017/local
          - MAIL_URL=smtp://smtp.email
        depends_on:
          - mongo
        ports:
          - 3000:3000

      mongo:
        image: mongo:3.2
        restart: unless-stopped
        volumes:
          - ./data/db:/data/db
        command: mongod --smallfiles --oplogSize 128 --replSet rs0 --storageEngine=mmapv1

      # this container's job is just to run the command that initializes the
      # replica set; it will run the command and remove itself afterwards
      # (it will not stay running)
      mongo-init-replica:
        image: mongo:3.2
        command: 'mongo mongo/rocketchat --eval "rs.initiate({ _id: ''rs0'', members: [ { _id: 0, host: ''localhost:27017'' } ]})"'
        depends_on:
          - mongo

      hubot:
        image: rocketchat/hubot-rocketchat:latest
        restart: unless-stopped
        environment:
          - ROCKETCHAT_URL=rocketchat:3000
          - ROCKETCHAT_ROOM=GENERAL
          - ROCKETCHAT_USER=bot
          - ROCKETCHAT_PASSWORD=botpassword
          - BOT_NAME=bot
          - EXTERNAL_SCRIPTS=hubot-help,hubot-seen,hubot-links,hubot-diagnostics
        depends_on:
          - rocketchat
        volumes:
          - ./scripts:/home/hubot/scripts
        ports:
          - 3001:8080
    This file, in YAML format, defines various services. Each service specifies which Docker image to use and which environment variables, volumes, ports, and dependencies on other containers are required. In its entirety, the file describes an application with all its parts, which are started using Docker Compose.

    Docker Compose configuration using the Zammad application as an example

    App providers can store their Docker Compose configuration in the App Provider Portal, as the screenshot of Zammad shows.

    App providers can store a whole range of configuration settings for the app in the provider portal. This also includes scripts that are executed at different stages of the app lifecycle, in the container and on the host.

    For Multi Container Apps, it must be specified which of the defined services or containers is the main service, so that the App Center knows which one these scripts and some configurations have to be applied to. As a rule, the main service is the container containing the application.

    Possible changes of the Docker Compose file by the App Center

    Before a Multi Container App can be put into operation on the user’s target system via Docker Compose, the docker-compose.yml is processed directly by the App Center on the UCS system, and some points are changed or added.

    1. The App Center adds two standard volumes for the main service, as they are also included in Single Container Apps. These are the /var/lib/univention-appcenter/apps/<appid>/data and /var/lib/univention-appcenter/apps/<appid>/conf directories on the UCS host. If volumes are also defined in the app configuration in the App Provider Portal, these are likewise added to the Docker Compose file by the App Center for the main service.
    2. If ports are defined in the App Provider Portal, they will also be added to the Docker Compose file. Ports already defined remain valid. If the same port is defined both in the portal and in the Docker Compose file, the configuration in the App Provider Portal takes precedence. For example, if the Docker Compose file states that container port 4500 is provided externally as port 4500, but the portal defines that this port is to be exposed as 6500, the Docker Compose file will be modified to map container port 4500 to port 6500 on the host.
    3. Another special feature is the specification of the web interface. If the Docker Compose file specifies that port 80 or 443 should be opened to the outside, and the app configuration specifies that these ports should be used by the App Center for the web interface, the App Center will pick a port on the fly on the target system and write it into the Docker Compose file. This is because UCS hosts usually occupy ports 80 and 443 with a web server. The App Center creates an Apache reverse proxy configuration so that the app can be reached via a URL of the form https://hostname.domain/appid/.
    4. Docker containers like to use environment variables. Docker apps on UCS also make use of this and UCS provides a number of environment variables via the App Center, such as parameters for an LDAP connection. The necessary variables are also written to the Docker Compose file.
    5. Furthermore, in the main service, as in Single Container Apps, all UCR variables defined on UCS are available under /etc/univention/base.conf, as well as the password for the so-called machine account under /etc/machine.secret, via which the main service has authenticated read access to the LDAP directory.
    6. When a multi-container app is released, the Docker Compose file is adapted on the server side and the Docker image information is modified to point to the Docker images in the Univention Docker registry. All Docker images from published apps are copied to the Univention Docker registry in order to be independent of the Docker infrastructure. This is the only server-side change to the Docker Compose file.

    As a result, Docker Compose starts a configuration on the target system that no longer matches the app vendor’s input 100%. This is necessary because a lot of information must be available at the start of an app for optimal integration. Thus, apps can adjust to the local environment and preconfigure themselves accordingly, so that users can start immediately. The modified Docker Compose file can be found on the UCS target system under /var/lib/univention-appcenter/apps/$apps/compose/docker-compose.yml.


    App providers can now start creating their Docker apps directly in the App Provider Portal and store the Docker Compose configuration there. The documentation will be published within the next few days. And the first Multi Container Apps are already on their way and should be published in the App Center within the next few weeks.

     

    The post Multi Container Support for Docker Apps for Univention App Center appeared first on Univention.

    07 December, 2018 11:41AM by Nico Gulden

    LiMux

    Hackathon meets Smart Country Convention

    Twenty-five hours, thirteen teams, one goal: developing smart solutions for the citizens of cities and regions. On 20 and 21 November, Hackerstolz e.V. held the “Smart Country{Hacks}” hackathon as part of the first Smart Country … Read more

    The post Hackathon meets Smart Country Convention appeared first on Münchner IT-Blog.

    07 December, 2018 08:36AM by Lisa Zech

    December 06, 2018

    hackergotchi for Tanglu developers

    Tanglu developers

    Cutelyst 2.6.0 released! Now on VCPKG and buildroot

    Cutelyst, a Qt web framework, has been bumped to 2.6.0. This release is full of important bug fixes and is the best version so far when targeting Windows. The project has reached five years of age and 440 stars on GitHub, and since the last release has had many users asking questions, reporting issues and making pull requests.

    Until now, Windows support was something I mostly trusted AppVeyor to verify by compiling and running the tests, but this changed a bit in this release. I got a freelance job where some terminals would be used to edit images to be printed on T-shirts, then send their art to a central server which receives and prints it. So, after I finished the QtQuick application and managed to convince them to run the terminals on KDE/Plasma (it was basically a kiosk full-screen application), I went on to write the server part.

    Using Cutelyst on the server was a perfect match. The process was a Qt Widgets application that, when linked to Cutelyst::WSGI, could start listening all in the same process without issues. Every terminal was connected via the websockets protocol, which was just awesome: whenever I changed a terminal config I could see it changing instantly on the terminal. The QWebSocketServer class could indeed do the same, but the T-shirt art fonts and pictures needed to be “installed” on the terminal. Now, with HTTP capabilities, I simply exported all those folders, and whenever I sent a new JSON config to the terminals, it contained the URLs of all these files, which were updated in a blink.

    At deploy time it was clear that using Windows on the server was a better option: first, I’d need to support them in configuring printers and using the system; also, printer drivers could cause me trouble. So whatever, let’s just compile it and get the money.

    In order to make things easier, I managed to get VCPKG to build Qt5 for me in a command-line fashion. After that, I saw how easy it was to create a package for Cutelyst; it’s upstream now, so you just need to type:

    vcpkg install cutelyst2

    This will pull the qt5-base package and get you a working Cutelyst, that easy. Sadly, the Qt5 packages didn’t work on Linux or on macOS (issues filed for both).

    Thanks to this project, several Windows-related issues have been fixed. There’s still work to do, but I have an app in production on Windows now 🙂

    I’m still no Windows fan, so I ended up configuring MXE and cross-compiling Cutelyst and my application for Windows on Linux with it.

    If you are doing embedded stuff, Cutelyst is also available on buildroot.

    Besides that, Cutelyst 2.6.0 has some other very important bug fixes.

    Get it here!

    06 December, 2018 06:26PM by dantti

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Jono Bacon: Interview with Matt Keller about the Global Learning XPRIZE Progress, Finalists, and Field Trials

    One of my proudest achievements when I worked at XPRIZE was playing a role in the Global Learning XPRIZE. This is a $15 million competition to produce an Android app that teaches children to read, write, and perform arithmetic using a tablet, without the aid of a teacher, within 18 months.

    The prize is currently in field trials, and I recently caught up with Matt Keller, who leads the prize, talking about the progress of the teams, the finalists, the field trials and more. Check it out:

    I think this prize shows enormous potential in producing autonomous learning in the remotest of regions in the world.

    The post Interview with Matt Keller about the Global Learning XPRIZE Progress, Finalists, and Field Trials appeared first on Jono Bacon.

    06 December, 2018 05:05PM

    Jonathan Riddell: www.kde.org

    It’s not uncommon to come across some dusty corner of KDE which hasn’t been touched in ages and has only half-implemented features. One of the joys of KDE is being able to plunge in and fix any such problem areas. But it’s quite a surprise when a high-profile area of KDE ends up unmaintained. www.kde.org is one such area, and it was getting embarrassing. In February 2016 we had a sprint where a new theme was rolled out on the main pages, making the website look fresh and act responsively on mobiles, but since then, for various failures of management, nothing has happened. So while the neon build servers were down for shuffling to a new machine, I looked into why Plasma release announcements were updated but not Frameworks or Applications announcements. I’d automated Plasma announcements a while ago, but it turns out the other announcements are still done manually, so I updated those and poked the people involved. Then of course I got stuck looking at all the other pages which hadn’t been ported to the new theme. On review there were not actually too many of them; if you ignore the announcements, the website is not very large.

    Many of the pages could just be forwarded to more recent equivalents, such as pointing the history page (last updated in 2003) to timeline.kde.org or the presentation slides page (last updated for the KDE 4 release) to a more up-to-date wiki page.

    Others are worth reviving, such as the KDE screenshots page, press contacts, and support page. The contents could still do with some pondering on what is useful, but while they exist we shouldn’t pretend they don’t, so I updated those and added back links to them.

    While many of these pages are hard to find or not linked at all from www.kde.org, they are still the top hits in Google when you search for “KDE presentation”, “KDE history” or “KDE support”, so it is worth not looking like we are a dead project.

    There were also obvious bugs that needed fixing; for example, the cookie-opt-out banner didn’t let you opt out, the font didn’t get loaded, and the favicon was inconsistent.

    All of these are easy enough fixes, but the technical barrier is too high to get it done easily (you need special permission to have access to www.kde.org, reasonably enough) and the social barrier is far too high (you will get complaints when changing something high-profile like this; it’s far easier to just let it rot). I’m not sure how to solve this, but KDE should work out a way to allow project maintenance tasks like this to be more open.

    Anyway, yay: www.kde.org now has the new theme everywhere (except old announcements) and the pages have up-to-date content.

    There is a TODO item to track website improvements if you’re interested in helping, although it misses the main one, which is the stalled port to WordPress: again, a place where someone just needs to plunge in and do the work. It’s satisfying because it’s a high-profile improvement, but alas it highlights some failings in a mature community project like ours.


    06 December, 2018 04:44PM

    hackergotchi for Maemo developers

    Maemo developers

    Venice Kayak

    Kayaking in Venice is a unique experience. Venice Kayak offers guided kayak tours in the city of Venice and in the lagoon.

    The post Venice Kayak appeared first on René Seindal.


    06 December, 2018 04:34PM by René Seindal (rene@seindal.dk)

    Venice Street Photography

    I have put up a separate site with my street photography from Venice

    The post Venice Street Photography appeared first on René Seindal.


    06 December, 2018 04:29PM by René Seindal (rene@seindal.dk)

    Photo walks in Venice

    The locals know Venice

    The post Photo walks in Venice appeared first on René Seindal.


    06 December, 2018 04:18PM by René Seindal (rene@seindal.dk)

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Ubuntu Podcast from the UK LoCo: S11E39 – The Thirty-Nine Steps

    This week we’ve been flashing devices and getting a new display. We discuss Huawei developing its own mobile OS, Steam Link coming to the Raspberry Pi, Epic Games launching their own digital store, and we round up the community news.

    It’s Season 11 Episode 39 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

    In this week’s show:

    That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

    06 December, 2018 03:00PM

    hackergotchi for Univention Corporate Server

    Univention Corporate Server

    In the Univention App Center: OpenID Connect Provider

    With the development of the OpenID Connect Provider App, which we announced at the Univention Summit 2018, we have taken another important step towards making UCS a secure and open platform for managing a wide range of services.

    The goal we are pursuing: all UCS users should retain full control over their data and digital identities at all times. They should also have the greatest possible free choice between different software applications.

    With this app, we are now offering a second single sign-on technology alongside the SAML identity provider, which has long been integrated into UCS. Both allow administrators to connect third-party applications to UCS via single sign-on. The user authentication required for this runs against UCS’ identity management. The user password is not passed on to the connected service but remains in your system and thus under your control. The services to be connected must have a suitable interface in order to work as an OpenID Relying Party. The app is based on Konnect, an OpenID Connect Provider developed by Kopano in the Go programming language.

    Single Sign-on with OpenID Connect and SAML

    The use case for the OpenID Connect Provider is similar to that of a SAML identity provider. The connected service does not receive the user password, but only the information that the user has successfully logged on to the identity management. For this purpose, whether using OpenID Connect or SAML, it is necessary to establish a trust relationship between the identity provider – in this case UCS – and the connected service before users can log on.

    With SAML, a certificate pair is created and the public key is stored on the connected service. When using the OpenID Connect Provider, establishing trust works in a similar way; here, however, a shared secret is stored in UCS and the external service for communication. Another important difference between the two login procedures is that with SAML, all communication runs via the browser of the user who logs on to UCS. With OpenID Connect, the initial login normally also runs via a browser; afterwards, the external service establishes a direct HTTPS connection to the UCS identity provider in order to query the required user attributes, such as the name and e-mail address.

    Connecting external services – Functionality of the App

    After installing the app from the Univention App Center, you can connect external services to UCS via OpenID Connect by adding the service in the Univention Management Console, in the LDAP browser, in the container cn=oidc, which is located below the container cn=univention. Here you can register the new service by clicking on the Add button and selecting OpenID Connect Relying Party Service. The same is possible from the command line:

    udm oidc/rpservice create --set name=<UCS_internal identifier> \
     --position=cn=oidc,cn=univention,$(ucr get ldap/base) \
     --set clientid=<ID> \
     --set clientsecret=<A_long_Password> \
     --set trusted=yes \
     --set applicationtype=web \
     --set redirectURI=<URL_from_Documentation>
    • name is the service name displayed in the web interface during login.
    • clientid and secret must be identical here and at the connected service (shared secret).
    • trusted is set to yes if you do not want the user to be shown a separate request to confirm the transfer of user attributes. This should be set by default.
    • applicationtype is set to web for Internet services
    • redirectURI is the URL of the login endpoint found in the documentation of the connected service. If a service can be accessed via several URLs or should also be accessible via IP address, all possible addresses must be added to the attribute redirectURI.

    The connected service needs information about the OpenID Connect endpoints for its configuration. These are available at the URL https://<FQDN of the server>/.well-known/openid-configuration.
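
    As a quick illustration, the discovery document is plain JSON and can be inspected with a few lines of Python (the host name below is a placeholder; the field names come from the OpenID Connect Discovery specification):

    # Fetch the provider's discovery document and print the endpoints a
    # relying party needs. Replace the host with your UCS system's FQDN.
    import json
    from urllib.request import urlopen

    url = "https://ucs.example.com/.well-known/openid-configuration"
    config = json.load(urlopen(url))
    for key in ("authorization_endpoint", "token_endpoint", "userinfo_endpoint"):
        print(key, "->", config.get(key))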

    Sample connection of WordPress to UCS OpenID Connect Provider

    As an example, I will show you how to connect WordPress to the IDM of UCS as an OpenID Relying Party. The WordPress app from the Univention App Center has to be installed in addition to the OpenID Connect Provider app. In this example, both apps should be installed on the UCS DC Master to keep the process simple.

    Now WordPress has to be prepared with a plugin for the login via OpenID Connect. The username of the WordPress administrator is wp-admin; the password can be found in the file /etc/wordpress-admin.secret. As administrator, you can access the plugins menu in the WordPress administration interface via Settings. The plugin OpenID Connect Generic Client can be found, installed and activated by performing a search under Plugins->Install. In the settings of the plugin, the connection to the UCS provider must now be established: OpenID Connect Client appears as a new category under Settings, and the UCS provider is added there. The settings are to be made as follows:

    Login Type: OpenID Connect on login form

    Client ID and Secret Key are freely selectable, but must be specified identically in the UCS configuration, see below. Scope defines the attributes the plugin needs from the UCS user.

    Client id: wordpress-ucs
    secret: averysecretpassword
    Scope: openid profile email offline_access

    The following fields should contain the corresponding values from the Well Known Configuration of the OpenID Connect provider, which can be viewed at https://<FQDN of the server>/.well-known/openid-configuration. A corresponding link can also be found in the UMC App description after installation.

    Login Endpoint URL: https://<FQDN of the Server>/signin/v1/identifier/_/authorize
    Userinfo Endpoint URL: https://<FQDN of the Server>/konnect/v1/userinfo
    Token Validation Endpoint URL: https://<FQDN of the Server>/konnect/v1/token
    End Session Endpoint URL: https://<FQDN of the Server>/signin/v1/identifier/_/endsession
    Identity Key: name
    Nickname Key: name
    Email Formatting: {name}@mail.domain
    Display Name Formatting: {family_name}


    The Email Formatting item uses the username and a generic mail domain. The value {email} can also be set in the same field; in this case, however, it must be ensured that an e-mail address has been configured for the UCS users, otherwise the user’s login will fail.

    With a click on Save Changes, UCS is registered as OpenID Connect Provider in WordPress.

    Now you have to make the WordPress configuration known to UCS. To do this, the options must be transferred via a terminal session. The values clientid and clientsecret must be identical to the values entered above in WordPress.

    udm oidc/rpservice create --set name=Wordpress \
     --position=cn=oidc,cn=univention,$(ucr get ldap/base) \
     --set clientid=wordpress-ucs \
     --set clientsecret=averysecretpassword \
     --set trusted=yes \
     --set applicationtype=web \
     --set redirectURI="https://$(ucr get hostname).$(ucr get domainname)/wordpress/wp-admin/admin-ajax.php"


    This completes the setup, and the access can be tested. An additional button ‘Login with OpenID Connect’ is now visible on the WordPress login page. On the OpenID Connect Provider login page, the usual UCS user credentials can now be used. In order for the login to work smoothly, make sure that the WordPress login page is accessed via the FQDN of the UCS system; otherwise the redirection fails for users after authentication.

    We would be pleased if the OpenID Connect App could offer you another good opportunity to expand your IT landscape with new applications that you can access easily and securely. Share your experiences and questions with us and other UCS users through the comment box below.

    We are looking forward to your feedback. Further questions about the use of the app can also be asked in our forum.

    The post In the Univention App Center: OpenID Connect Provider appeared first on Univention.

    06 December, 2018 02:49PM by Erik Damrose

    hackergotchi for VyOS

    VyOS

    VyOS release model change

    Now that we are approaching the 1.2.0 LTS release, it's time to make a big announcement. Perhaps we should have made it earlier, but we've been too busy coding.

    There are two distinct categories of VyOS users. The first category is people who want the latest features, even at the cost of stability: these are mostly networking geeks who run it in their home networks and network labs, and open source developers, though some businesses are also happy with this approach. The second category is people who need or want stability. There are of course people who want both, but we have to accept that these goals are contradictory at least some of the time.

    This is common to all software projects, but with VyOS, more people seem to belong to the second category. Every once in a while someone asks on the channels and the forum whether they can update an extremely outdated VyOS (or even Vyatta Core) version to the latest release. We also hear frequently from people that they are not going to even try 1.2.0 until it reaches stable status.

    It's quite obvious by now that the single release model does not fit this situation. With 1.2.0, people who contributed new features had to wait a long time for their code to appear in any non-nightly build image, because other people, mainly the maintainers, have been working on tearing the codebase off of the outdated Debian release, reworking the foundations, and hunting bugs. This is frustrating for contributors, and at some point it even created the appearance of a dead project. It also means that new code doesn't get to people enthusiastic to test it nearly as fast as it should.

    On the other hand, while we have an active community of people who send us patches and report bugs, the community is very small compared to the entire user base. The number of people who contributed patches is easy to measure, so I did it out of curiosity: the largest submodule, vyatta-cfg-system, has fewer than 50 unique names in its commit log starting from the project start date, which means fewer than 50 people contributed to it in five years. The number of active testers isn't much higher. For comparison, the 1.1.8 images get thousands of downloads every month, and the wiki documentation gets thousands of views too. To make it easier for people to contribute, there's a lot of work to be done: reworking the foundational libraries, getting rid of poorly written legacy code, and so on, and focused effort (which means human-hours) is required to break the cycle.

    Maintaining stable releases is one of the hardest parts, but that burden falls on a disproportionately small number of people, while most of the business users who are the main consumers of stable releases do nothing to advance the project. This is not a healthy or sustainable situation. Someone needs to pay the bills.

    The new release model

    First, there will now be two release lines: a rolling release and long-term support releases.

    The rolling release images will be (roughly) monthly snapshots of the "current" branch, with all the latest pull requests merged in. They will be tested to boot successfully and load a sample config. The target audience is people who want the latest features (even if they are not working perfectly yet). People who send us pull requests can be sure their contribution will be available to themselves and to willing testers in a reasonable time. Since in VyOS it's easy to revert to the previous version if something goes wrong, the rolling release should be good enough for non-critical production use: you can always go back to a working version at the end of the maintenance window and report your findings.

    The long-term support versions will be maintained for at least two years from the release date. They will undergo extensive testing by the maintainers, and will receive backported bugfixes and security updates until they reach EOL, with a possibility of extended support by special agreement. They are meant for enterprise and service provider users.

    Unlike the rolling release images, binary LTS images will only be available to people who help the project move forward, either by contributing their time or their money. We are not going to compromise the free software ideals: you will always be able to build LTS releases yourself.

    The LTS images will be available at no cost to all people who contribute to VyOS. Every kind of contribution counts:

    Writing code
    Testing release candidates and rolling/nightly build and reporting bugs
    Writing documentation
    Promoting VyOS in blogs, social networks, and at conferences
    Everyone who contributed before the release of the 1.2.0 LTS version will get a perpetual subscription. People who join later will need to have been active within the last year to maintain their subscription (the required activity level is yet to be determined, but it will require substantial and non-trivial changes, i.e. not just typo fixes). Companies that allow their employees to work on VyOS during working hours, or specifically pay their employees or contractors to work on code that is meant for mainline VyOS or produces open source integration tools, will be able to get a corporate subscription if those employees/contractors confirm it.

    We are also happy to provide subscriptions to contributors of all projects that VyOS uses, such as FRR, netfilter, OpenVPN, StrongSWAN, and many others.

    Companies who simply want to use stable, long-term support releases without making technical or social contributions to the project will have to purchase a binary image subscription (you pay for access to ready-made images, not software licensing as such).

    All money received from the paid subscriptions will be used to fund VyOS development, including paying the salaries of the VyOS maintainers who work at Sentrium, hiring/contracting developers from the community, expanding the project infrastructure, and supporting our upstream projects.

    If you have contributed to VyOS, you can register right now using this form. We will post the details of the commercial offer and pricing later, stay tuned.

    P.S.

    Back in 2013, I said that there would never be a "VyOS Subscription Edition". Technically I lied, but it was said from a very different perspective. At the time we hoped that, now that VyOS was open for everyone's contribution, a large contributor community would form, and many of the corporate users who used to use the old project would contribute to it willingly, sponsor its development, or purchase support subscriptions; in practice very few of them did. That approach didn't work, but switching our AWS Marketplace offer from free of charge to paid has become the first reliable funding source for the project and has already allowed us to add support for more cloud platforms, as well as rework some of the fundamentals of VyOS to make it easier to contribute to.
    I hope continuing this line and introducing the rolling release will create a model that is more beneficial for the project and more fair towards its contributors.
    Just to clarify, VyOS is neither going to hide the toolchain required to build LTS releases yourself nor go open core. Those parts of the original plan do and will stay the same.


    06 December, 2018 01:32PM by Daniil Baturin

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Podcast Ubuntu Portugal: S01E14 – Dos oito, aos oitenta

    With our thoughts already on 2019, without forgetting the Christmas season, in this episode – which is back to coming out on Thursdays!!! – we talk about gifts, home automation and revivalism. You know the drill: listen, subscribe and share!

    Sponsors

    This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios – sound recording, production, editing, mixing and mastering). Contact: thunderclawstudiosPT–at–gmail.com.

    Attribution and licences

    Cover image: Nick Hobgood, licensed under CC BY-SA.

    The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License.

    This episode is licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

    06 December, 2018 01:13PM

    LMDE

    Linux Mint 19.1 “Tessa” Xfce – BETA Release

    This is the BETA release for Linux Mint 19.1 “Tessa” Xfce Edition.

    Linux Mint 19.1 Tessa Xfce Edition

    Linux Mint 19.1 is a long term support release which will be supported until 2023. It comes with updated software and brings refinements and many new features to make your desktop even more comfortable to use.

    New features:

    This new version of Linux Mint contains many improvements.

    For an overview of the new features please visit:

    “What’s new in Linux Mint 19.1 Xfce”.

    Important info:

    The release notes provide important information about known issues, as well as explanations, workarounds and solutions.

    To read the release notes, please visit:

    Release Notes for Linux Mint 19.1 Xfce

    System requirements:

    • 1GB RAM (2GB recommended for comfortable usage).
    • 15GB of disk space (20GB recommended).
    • 1024×768 resolution (on lower resolutions, press ALT to drag windows with the mouse if they don’t fit in the screen).

    Notes:

    • The 64-bit ISO can boot with BIOS or UEFI.
    • The 32-bit ISO can only boot with BIOS.
    • The 64-bit ISO is recommended for all modern computers (Almost all computers sold since 2007 are equipped with 64-bit processors).

    Upgrade instructions:

    • This BETA release might contain critical bugs, please only use it for testing purposes and to help the Linux Mint team fix issues prior to the stable release.
    • It will be possible to upgrade from this BETA to the stable release.
    • It will also be possible to upgrade from Linux Mint 19. Upgrade instructions will be published after the stable release of Linux Mint 19.1.

    Bug reports:

    • Bugs in this release should be reported on Github at https://github.com/linuxmint/mint-19.1-beta.
    • Create one issue per bug.
    • As described in the Linux Mint Troubleshooting Guide, do not report or create issues for observations.
    • Be as accurate as possible and include any information that might help developers reproduce the issue or understand the cause of the issue:
      • Bugs we can reproduce, or which cause we understand are usually fixed very easily.
      • It is important to mention whether a bug happens “always”, or “sometimes”, and what triggers it.
      • If a bug happens but didn’t happen before, or doesn’t happen in another distribution, or doesn’t happen in a different environment, please mention it and try to pinpoint the differences at play.
      • If we can’t reproduce a particular bug and we don’t understand its cause, it’s unlikely we’ll be able to fix it.
    • The BETA phase is literally a bug squashing rush, where the team is extremely busy and developers try to fix as many bugs as fast as possible.
    • There usually is a huge number of reports and very little time to answer everyone or explain why a particular report is not considered a bug or won’t get fixed. Don’t let this frustrate you; whether it’s acknowledged or not, we appreciate everyone’s help.
    • Please visit https://github.com/linuxmint/Roadmap to follow the progress of the development team between the BETA and the stable release.

    Download links:

    Here are the download links for the 64-bit ISO:

    A 32-bit ISO image is also available at https://www.linuxmint.com/download_all.php.

    Integrity and authenticity checks:

    Once you have downloaded an image, please verify its integrity and authenticity.

    Anyone can produce fake ISO images, it is your responsibility to check you are downloading the official ones.

    Enjoy!

    We look forward to receiving your feedback. Many thanks in advance for testing the BETA!

    06 December, 2018 12:47PM by Clem

    Linux Mint 19.1 “Tessa” MATE – BETA Release

    This is the BETA release for Linux Mint 19.1 “Tessa” MATE Edition.

    Linux Mint 19.1 Tessa MATE Edition

    Linux Mint 19.1 is a long term support release which will be supported until 2023. It comes with updated software and brings refinements and many new features to make your desktop even more comfortable to use.

    New features:

    This new version of Linux Mint contains many improvements.

    For an overview of the new features please visit:

    “What’s new in Linux Mint 19.1 MATE”.

    Important info:

    The release notes provide important information about known issues, as well as explanations, workarounds and solutions.

    To read the release notes, please visit:

    Release Notes for Linux Mint 19.1 MATE

    System requirements:

    • 1GB RAM (2GB recommended for comfortable usage).
    • 15GB of disk space (20GB recommended).
    • 1024×768 resolution (on lower resolutions, press ALT to drag windows with the mouse if they don’t fit on the screen).

    Notes:

    • The 64-bit ISO can boot with BIOS or UEFI.
    • The 32-bit ISO can only boot with BIOS.
    • The 64-bit ISO is recommended for all modern computers (Almost all computers sold since 2007 are equipped with 64-bit processors).

    Upgrade instructions:

    • This BETA release might contain critical bugs, please only use it for testing purposes and to help the Linux Mint team fix issues prior to the stable release.
    • It will be possible to upgrade from this BETA to the stable release.
    • It will also be possible to upgrade from Linux Mint 19. Upgrade instructions will be published after the stable release of Linux Mint 19.1.

    Bug reports:

    • Bugs in this release should be reported on GitHub at https://github.com/linuxmint/mint-19.1-beta.
    • Create one issue per bug.
    • As described in the Linux Mint Troubleshooting Guide, do not report or create issues for observations.
    • Be as accurate as possible and include any information that might help developers reproduce the issue or understand the cause of the issue:
      • Bugs we can reproduce, or whose cause we understand, are usually fixed very easily.
      • It is important to mention whether a bug happens “always”, or “sometimes”, and what triggers it.
      • If a bug happens but didn’t happen before, or doesn’t happen in another distribution, or doesn’t happen in a different environment, please mention it and try to pinpoint the differences at play.
      • If we can’t reproduce a particular bug and we don’t understand its cause, it’s unlikely we’ll be able to fix it.
    • The BETA phase is a bug-squashing rush: the team is extremely busy, and developers try to fix as many bugs as possible, as fast as possible.
    • There are usually a huge number of reports and very little time to answer everyone or to explain why a particular report is not considered a bug or won’t get fixed. Don’t let this frustrate you: whether it’s acknowledged or not, we appreciate everyone’s help.
    • Please visit https://github.com/linuxmint/Roadmap to follow the progress of the development team between the BETA and the stable release.

    Download links:

    Here are the download links for the 64-bit ISO:

    A 32-bit ISO image is also available at https://www.linuxmint.com/download_all.php.

    Integrity and authenticity checks:

    Once you have downloaded an image, please verify its integrity and authenticity.

    Anyone can produce fake ISO images; it is your responsibility to check that you are downloading the official ones.

    Enjoy!

    We look forward to receiving your feedback. Many thanks in advance for testing the BETA!

    06 December, 2018 12:43PM by Clem

    Linux Mint 19.1 “Tessa” Cinnamon – BETA Release

    This is the BETA release for Linux Mint 19.1 “Tessa” Cinnamon Edition.

    Linux Mint 19.1 Tessa Cinnamon Edition

    Linux Mint 19.1 is a long term support release which will be supported until 2023. It comes with updated software and brings refinements and many new features to make your desktop even more comfortable to use.

    New features:

    This new version of Linux Mint contains many improvements.

    For an overview of the new features please visit:

    “What’s new in Linux Mint 19.1 Cinnamon”.

    Important info:

    The release notes provide important information about known issues, as well as explanations, workarounds and solutions.

    To read the release notes, please visit:

    Release Notes for Linux Mint 19.1 Cinnamon

    System requirements:

    • 1GB RAM (2GB recommended for comfortable usage).
    • 15GB of disk space (20GB recommended).
    • 1024×768 resolution (on lower resolutions, press ALT to drag windows with the mouse if they don’t fit on the screen).

    Notes:

    • The 64-bit ISO can boot with BIOS or UEFI.
    • The 32-bit ISO can only boot with BIOS.
    • The 64-bit ISO is recommended for all modern computers (Almost all computers sold since 2007 are equipped with 64-bit processors).

    Upgrade instructions:

    • This BETA release might contain critical bugs, please only use it for testing purposes and to help the Linux Mint team fix issues prior to the stable release.
    • It will be possible to upgrade from this BETA to the stable release.
    • It will also be possible to upgrade from Linux Mint 19. Upgrade instructions will be published after the stable release of Linux Mint 19.1.

    Bug reports:

    • Bugs in this release should be reported on GitHub at https://github.com/linuxmint/mint-19.1-beta.
    • Create one issue per bug.
    • As described in the Linux Mint Troubleshooting Guide, do not report or create issues for observations.
    • Be as accurate as possible and include any information that might help developers reproduce the issue or understand the cause of the issue:
      • Bugs we can reproduce, or whose cause we understand, are usually fixed very easily.
      • It is important to mention whether a bug happens “always”, or “sometimes”, and what triggers it.
      • If a bug happens but didn’t happen before, or doesn’t happen in another distribution, or doesn’t happen in a different environment, please mention it and try to pinpoint the differences at play.
      • If we can’t reproduce a particular bug and we don’t understand its cause, it’s unlikely we’ll be able to fix it.
    • The BETA phase is a bug-squashing rush: the team is extremely busy, and developers try to fix as many bugs as possible, as fast as possible.
    • There are usually a huge number of reports and very little time to answer everyone or to explain why a particular report is not considered a bug or won’t get fixed. Don’t let this frustrate you: whether it’s acknowledged or not, we appreciate everyone’s help.
    • Please visit https://github.com/linuxmint/Roadmap to follow the progress of the development team between the BETA and the stable release.

    Download links:

    Here are the download links for the 64-bit ISO:

    A 32-bit ISO image is also available at https://www.linuxmint.com/download_all.php.

    Integrity and authenticity checks:

    Once you have downloaded an image, please verify its integrity and authenticity.

    Anyone can produce fake ISO images; it is your responsibility to check that you are downloading the official ones.

    Enjoy!

    We look forward to receiving your feedback. Many thanks in advance for testing the BETA!

    06 December, 2018 12:12PM by Clem

    December 05, 2018


    Tails

    We are accepting donations in 4 new cryptocurrencies

    Users of cryptocurrencies have been huge supporters of Tails in all our previous donation campaigns and we are extremely thankful to them!

    As part of our on-going donation campaign, we are now accepting donations in 4 new cryptocurrencies.

    We chose these 4 because they are the most popular and easiest for us to receive. We might add more cryptocurrencies in the future as they gain momentum.

    But as of now, we hope the 5 crypto and 4 legacy currencies you'll find on the donation page will provide you with the necessary means to support us!

    Bitcoin Cash

    qrzav77wkhd942nyqvya34mya3fqxzx90ypjge0njh

    Ethereum

    0xD6A73051933ab97C38cEFf2abB2f9E06F3a3ed78

    Ethereum Classic

    0x86359F8b44188c105E198DdA3c0421AC60729195

    Litecoin

    MJ1fqVucBt8YpfPiQTuwBtmLJtzhizx6pz

    05 December, 2018 06:00PM


    Ubuntu developers

    Benjamin Mako Hill: Banana Peels

    Photo comic of seeing a banana peel in the road while on a bike.

    Although it’s been decades since I last played, I still get flashbacks to Super Mario Kart, and pangs of irrational fear, every time I see a banana peel in the road.

    05 December, 2018 04:25AM

    December 04, 2018


    Kali Linux

    Kali Linux for the Gemini PDA

    Running Kali on a Gem

    The Gemini PDA from Planet Computers is an ultra-thin, clamshell mobile device with a tactile keyboard. Sporting a 5.99″ screen, QWERTY keyboard, 4G & Wi-Fi, a deca-core CPU, and an open source bootloader that supports multi-boot, it caught our attention straight away when it popped up on Indiegogo. It is a great little pocket rocket and, with its landscape orientation and hardware keyboard, is well suited for a native Kali installation with a full LXQT desktop environment.

    Hardware Specs

    • MediaTek Deca Core Helio, with either X25 or X27 chipset
    • CPU: 2x Cortex A72 @2.6GHz, 4x Cortex A53 @2.0GHz, 4x Cortex A53 @1.6GHz
    • GPU: ARM Mali T880 MP4 @875MHz
    • RAM: 4GB
    • Flash: 64GB plus micro SD card support

    More: https://en.wikipedia.org/wiki/Gemini_(PDA)

    Operating Systems

    Multiboot any one, two, or three of the following five operating systems: Android, rooted Android, Sailfish, Debian, Kali Linux. The image we provide on our download page includes the following two partitions:

    1. Android (rooted), 16 GB. To boot Android, just press and hold the “On” (Esc) key until it vibrates
    2. Kali Linux, 40 GB. To boot Kali, press and hold the “On” (Esc) key until it vibrates and then quickly press the silver “Voice Assist” button on the right hand side of the device

    Kernel

    Our Gemini image contains a Kali Linux fork of the Gemini-Android kernel 3.18 with injection support for all your favourite Wi-Fi chips.

    Desktop Environment

    LXQT with SDDM is lightweight, provides great scaling for tiny screens, has good touch support, and offers a slick, modern layout. Whilst the tiny touchscreen looks a bit intimidating at first, it is surprisingly finger-friendly. We don’t bother using a mouse with this device anymore.

    Linux / Android integration

    Being basically a pimped-up cell phone, the Gemini requires a convergence of Linux (glibc) and Android (bionic) to drive the hardware not yet natively supported by GNU/Linux. We are using components from the Halium project to achieve that.

    Bringing GNU/Linux to the Gemini PDA, or any other mobile platform, is in the very early stages, and some of it still needs a bit of work, such as data and voice support, GPS, and power management. There is currently one known issue: the Gemini occasionally has trouble shutting down. The community is working on it.

    Overall, it’s a very stable experience thanks to the hard work of the Sailfish and Gemian communities, in particular TheKit and adam_b, who brought Gemian to the Gemini PDA and helped a lot with this project.

    Installation

    We have published a Gemini installation guide on our documentation site to get you up and running quickly.

    Support

    Linux on the Gemini PDA is very experimental with limited manufacturer support and some hardware is not natively supported by Linux, requiring some community hacks. Offensive Security does not provide technical support for the Gemini. Support for Kali on the Gemini can be obtained via various methods listed on the Kali Linux Community page.

    Wrapping Up

    The Gemini PDA is a nifty little powerhouse that combines the charm and handling of the good old Psion series with the power of a modern ARM64, making it the ideal mobile platform for a desktop version of Kali Linux with touch support.

    With community demand for Kali Linux on the Gemini, and with the manufacturer having just launched a new crowdfunding campaign for another device, having a Kali platform for this particular hardware segment sets us up for exciting times ahead.

    04 December, 2018 04:22PM by elwood


    Ubuntu developers

    Colin Watson: Deploying Swift

    Sometimes I want to deploy Swift, the OpenStack object storage system.

    Well, no, that’s not true. I basically never actually want to deploy Swift as such. What I generally want to do is to debug some bit of production service deployment machinery that relies on Swift for getting build artifacts into the right place, or maybe the parts of the Launchpad librarian (our blob storage service) that use Swift. I could find an existing private or public cloud that offers the right API and test with that, but sometimes I need to test with particular versions, and in any case I have a terribly slow internet connection and shuffling large build artifacts back and forward over the relevant bit of wet string makes it painfully slow to test things.

    For a while I’ve had an Ubuntu 12.04 VM lying around with an Icehouse-based Swift deployment that I put together by hand. It works, but I didn’t keep good notes and have no real idea how to reproduce it, not that I really want to keep limping along with manually-constructed VMs for this kind of thing anyway; and I don’t want to be dependent on obsolete releases forever. For the sorts of things I’m doing I need to make sure that authentication works broadly the same way as it does in a real production deployment, so I want to have Keystone too. At the same time, I definitely don’t want to do anything close to a full OpenStack deployment of my own: it’s much too big a sledgehammer for this particular nut, and I don’t really have the hardware for it.

    Here’s my solution to this, which is compact enough that I can run it on my laptop, and while it isn’t completely automatic it’s close enough that I can spin it up for a test and discard it when I’m finished (so I haven’t worried very much about producing something that runs efficiently). It relies on Juju and LXD. I’ve only tested it on Ubuntu 18.04, using Queens; for anything else you’re on your own. In general, I probably can’t help you if you run into trouble with the directions here: this is provided “as is”, without warranty of any kind, and all that kind of thing.

    First, install Juju and LXD if necessary, following the instructions provided by those projects, and also install the python-openstackclient package as you’ll need it later. You’ll want to set Juju up to use LXD, and you should probably make sure that the shells you’re working in don’t have http_proxy set as it’s quite likely to confuse things unless you’ve arranged for your proxy to be able to cope with your local LXD containers. Then add a model:

    juju add-model swift
    

    At this point there’s a bit of complexity that you normally don’t have to worry about with Juju. The swift-storage charm wants to mount something to use for storage, which with the LXD provider in practice ends up being some kind of loopback mount. Unfortunately, being able to perform loopback mounts exposes too much kernel attack surface, so LXD doesn’t allow unprivileged containers to do it. (Ideally the swift-storage charm would just let you use directory storage instead.) To make the containers we’re about to create privileged enough for this to work, run:

    lxc profile set juju-swift security.privileged true
    lxc profile device add juju-swift loop-control unix-char \
        major=10 minor=237 path=/dev/loop-control
    for i in $(seq 0 255); do
        lxc profile device add juju-swift loop$i unix-block \
            major=7 minor=$i path=/dev/loop$i
    done
    

    Now we can start deploying things! Save this to a file, e.g. swift.bundle:

    series: bionic
    description: "Swift in a box"
    applications:
      mysql:
        charm: "cs:mysql-62"
        channel: candidate
        num_units: 1
        options:
          dataset-size: 512M
      keystone:
        charm: "cs:keystone"
        num_units: 1
      swift-storage:
        charm: "cs:swift-storage"
        num_units: 1
        options:
          block-device: "/etc/swift/storage.img|5G"
      swift-proxy:
        charm: "cs:swift-proxy"
        num_units: 1
        options:
          zone-assignment: auto
          replicas: 1
    relations:
      - ["keystone:shared-db", "mysql:shared-db"]
      - ["swift-proxy:swift-storage", "swift-storage:swift-storage"]
      - ["swift-proxy:identity-service", "keystone:identity-service"]
    

    And run:

    juju deploy swift.bundle
    

    This will take a while. You can run juju status to see how it’s going in general terms, or juju debug-log for detailed logs from the individual containers as they’re putting themselves together. When it’s all done, it should look something like this:

    Model  Controller  Cloud/Region     Version  SLA
    swift  lxd         localhost        2.3.1    unsupported
    
    App            Version  Status  Scale  Charm          Store       Rev  OS      Notes
    keystone       13.0.1   active      1  keystone       jujucharms  290  ubuntu
    mysql          5.7.24   active      1  mysql          jujucharms   62  ubuntu
    swift-proxy    2.17.0   active      1  swift-proxy    jujucharms   75  ubuntu
    swift-storage  2.17.0   active      1  swift-storage  jujucharms  250  ubuntu
    
    Unit              Workload  Agent  Machine  Public address  Ports     Message
    keystone/0*       active    idle   0        10.36.63.133    5000/tcp  Unit is ready
    mysql/0*          active    idle   1        10.36.63.44     3306/tcp  Ready
    swift-proxy/0*    active    idle   2        10.36.63.75     8080/tcp  Unit is ready
    swift-storage/0*  active    idle   3        10.36.63.115              Unit is ready
    
    Machine  State    DNS           Inst id        Series  AZ  Message
    0        started  10.36.63.133  juju-d3e703-0  bionic      Running
    1        started  10.36.63.44   juju-d3e703-1  bionic      Running
    2        started  10.36.63.75   juju-d3e703-2  bionic      Running
    3        started  10.36.63.115  juju-d3e703-3  bionic      Running
    

    At this point you have what should be a working installation, but with only administrative privileges set up. Normally you want to create at least one normal user. To do this, start by creating a configuration file granting administrator privileges (this one comes verbatim from the openstack-base bundle):

    _OS_PARAMS=$(env | awk 'BEGIN {FS="="} /^OS_/ {print $1;}' | paste -sd ' ')
    for param in $_OS_PARAMS; do
        if [ "$param" = "OS_AUTH_PROTOCOL" ]; then continue; fi
        if [ "$param" = "OS_CACERT" ]; then continue; fi
        unset $param
    done
    unset _OS_PARAMS
    
    _keystone_unit=$(juju status keystone --format yaml | \
        awk '/units:$/ {getline; gsub(/:$/, ""); print $1}')
    _keystone_ip=$(juju run --unit ${_keystone_unit} 'unit-get private-address')
    _password=$(juju run --unit ${_keystone_unit} 'leader-get admin_passwd')
    
    export OS_AUTH_URL=${OS_AUTH_PROTOCOL:-http}://${_keystone_ip}:5000/v3
    export OS_USERNAME=admin
    export OS_PASSWORD=${_password}
    export OS_USER_DOMAIN_NAME=admin_domain
    export OS_PROJECT_DOMAIN_NAME=admin_domain
    export OS_PROJECT_NAME=admin
    export OS_REGION_NAME=RegionOne
    export OS_IDENTITY_API_VERSION=3
    # Swift needs this:
    export OS_AUTH_VERSION=3
    # Gnocchi needs this
    export OS_AUTH_TYPE=password
    

    Source this into a shell: for instance, if you saved this to ~/.swiftrc.juju-admin, then run:

    . ~/.swiftrc.juju-admin
    

    You should now be able to run openstack endpoint list and see a table for the various services exposed by your deployment. Then you can create a dummy project and a user with enough privileges to use Swift:

    USERNAME=your-username
    PASSWORD=your-password
    openstack domain create SwiftDomain
    openstack project create --domain SwiftDomain --description Swift \
        SwiftProject
    openstack user create --domain SwiftDomain --project-domain SwiftDomain \
        --project SwiftProject --password "$PASSWORD" "$USERNAME"
    openstack role add --project SwiftProject --user-domain SwiftDomain \
        --user "$USERNAME" Member
    

    (This is intended for testing rather than for doing anything particularly sensitive. If you cared about keeping the password secret then you’d use the --password-prompt option to openstack user create instead of supplying the password on the command line.)

    Now create a configuration file granting privileges for the user you just created. I felt like automating this to at least some degree:

    touch ~/.swiftrc.juju
    chmod 600 ~/.swiftrc.juju
    sed '/^_password=/d;
         s/\( OS_PROJECT_DOMAIN_NAME=\).*/\1SwiftDomain/;
         s/\( OS_PROJECT_NAME=\).*/\1SwiftProject/;
         s/\( OS_USER_DOMAIN_NAME=\).*/\1SwiftDomain/;
         s/\( OS_USERNAME=\).*/\1'"$USERNAME"'/;
         s/\( OS_PASSWORD=\).*/\1'"$PASSWORD"'/' \
         <~/.swiftrc.juju-admin >~/.swiftrc.juju
    

    Source this into a shell. For example:

    . ~/.swiftrc.juju
    

    You should now find that swift list works. Success! Now you can swift upload files, or just start testing whatever it was that you were actually trying to test in the first place.
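
    For instance, a quick smoke test might look like this (the container and file names are just placeholders):

    swift list                                    # should return without error (and print nothing yet)
    swift upload test-container artifact.tar.gz   # creates the container and uploads the file
    swift list test-container                     # the uploaded object should now appear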

    This is not a setup I expect to leave running for a long time, so to tear it down again:

    juju destroy-model swift
    

    This will probably get stuck trying to remove the swift-storage unit, since nothing deals with detaching the loop device. If that happens, find the relevant device in losetup -a from another window and use losetup -d to detach it; juju destroy-model should then be able to proceed.
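
    A sketch of that teardown dance, assuming the loop device turns out to be /dev/loop3 (take the real name from the losetup -a output):

    losetup -a | grep storage.img   # find the loop device backing the swift-storage block device
    sudo losetup -d /dev/loop3      # detach it, using the device found above
    juju destroy-model swift        # should now be able to finish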

    Credit to the Juju and LXD teams and to the maintainers of the various charms used here, as well as of course to the OpenStack folks: their work made it very much easier to put this together.

    04 December, 2018 01:37AM

    December 03, 2018


    ARMBIAN

    RockPro64

    UART is accessible on pins 6 (GND), 8 (TX) and 10 (RX), and runs at an unusual speed: 1500000 baud.
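
    For example, with a common USB serial adapter (the device path is illustrative; yours may differ):

    picocom -b 1500000 /dev/ttyUSB0   # or: screen /dev/ttyUSB0 1500000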

    03 December, 2018 06:37PM by Igor Pečovnik


    Ubuntu developers

    Daniel Pocock: Smart home: where to start?

    My home automation plans have been progressing and I'd like to share some observations I've made about planning a project like this, especially for those with larger houses.

    With so many products and technologies, it can be hard to know where to start. Some things have become straightforward: for example, Domoticz can soon be installed from a package on some distributions. Yet this simply leaves people contemplating what to do next.

    The quickstart

    For a small home, like an apartment, you can simply buy something like the Zigate, a single motion and temperature sensor, a couple of smart bulbs and expand from there.

    For a large home, you can also get your feet wet with exactly the same approach in a single room. Once you are familiar with the products, use a more structured approach to plan a complete solution for every other space.

    The Debian wiki has started gathering some notes on things that work easily on GNU/Linux systems like Debian as well as Fedora and others.

    Prioritize

    What is your first goal? For example, are you excited about having smart lights or are you more concerned with improving your heating system efficiency with zoned logic?

    Trying to do everything at once may be overwhelming. Make each of these things into a separate sub-project or milestone.

    Technology choices

    There are many technology choices:

    • Zigbee, Z-Wave or another protocol? I'm starting out with a preference for Zigbee but may try some Z-Wave devices along the way.
    • E27 or B22 (Bayonet) light bulbs? People in the UK and former colonies may have B22 light sockets and lamps. For new deployments, you may want to standardize on E27. Amongst other things, E27 is used by all the Ikea lamp stands. If you want to be able to move your expensive new smart bulbs between different holders in your house at will, standardize on E27 for all of them and avoid buying any Bayonet / B22 products in future.
    • Wired or wireless? Whenever you take up floorboards, it is a good idea to add some new wiring. For example, CAT6 can carry both power and data for a diverse range of devices.
    • Battery or mains power? In an apartment with two rooms and fewer than five devices, batteries may be fine, but in a house you may end up with more than a hundred sensors, radiator valves, buttons, and switches, and you may find yourself changing a battery in one of them every week. If you have lodgers or tenants and you are not there to change the batteries, this may cause further complications. Some of the sensors have a socket for an optional power supply; battery eliminators may also be an option.

    Making an inventory

    Creating a spreadsheet table is extremely useful.

    This helps estimate the correct quantity of sensors, bulbs, radiator valves and switches and it also helps to budget. Simply print it out, leave it under the Christmas tree and hope Santa will do the rest for you.

    Looking at my own house, these are the things I counted in a first pass:

    Don't forget to include all those unusual spaces like walk-in pantries, a large cupboard under the stairs, cellar, en-suite or enclosed porch. Each deserves a row in the table.

    Sensors help make good decisions

    Whatever the aim of the project, sensors are likely to help obtain useful data about the space and this can help to choose and use other products more effectively.

    Therefore, it is often a good idea to choose and deploy sensors through the home before choosing other products like radiator valves and smart bulbs.

    The smartest place to put those smart sensors

    When placing motion sensors, it is important to avoid putting them too close to doorways where they might detect motion in adjacent rooms or hallways. It is also a good idea to avoid putting the sensor too close to any light bulb: if the bulb attracts an insect, it will trigger the motion sensor repeatedly. Temperature sensors shouldn't be too close to heaters or potential draughts around doorways and windows.

    There are a range of all-in-one sensors available, some have up to six features in one device smaller than an apple. In some rooms this is a convenient solution but in other rooms, it may be desirable to have separate motion and temperature sensors in different locations.

    Consider the dining and sitting rooms in my own house, illustrated in the floorplan below. The sitting room is also a potential 6th bedroom or guest room with sofa bed, the downstairs shower room conveniently located across the hall. The dining room is joined to the sitting room by a sliding double door. When the sliding door is open, a 360 degree motion sensor in the ceiling of the sitting room may detect motion in the dining room and vice-versa. It appears that 180 degree motion sensors located at the points "1" and "2" in the floorplan may be a better solution.

    These rooms have wall mounted radiators and fireplaces. To avoid any of these potential heat sources the temperature sensors should probably be in the middle of the room.

    This photo shows the proposed location for the 180 degree motion sensor "2" on the wall above the double door:

    Summary

    To summarize, buy a Zigate and a small number of products to start experimenting with. Make an inventory of all the products potentially needed for your home. Try to mark sensor locations on a floorplan, thinking about the type of sensor (or multiple sensors) you need for each space.

    03 December, 2018 08:44AM


    Qubes

    QSB #45: Insecure default Salt configuration

    We have just published Qubes Security Bulletin (QSB) #45: Insecure default Salt configuration. The text of this QSB is reproduced below. This QSB and its accompanying signatures will always be available in the Qubes Security Pack (qubes-secpack).

    View QSB #45 in the qubes-secpack:

    https://github.com/QubesOS/qubes-secpack/blob/master/QSBs/qsb-045-2018.txt

    Learn about the qubes-secpack, including how to obtain, verify, and read it:

    https://www.qubes-os.org/security/pack/

    View all past QSBs:

    https://www.qubes-os.org/security/bulletins/

    
    
                 ---===[ Qubes Security Bulletin #45 ]===---
    
                                 2018-12-03
    
                     Insecure default Salt configuration
    
    Summary
    ========
    
    In Qubes OS, one use of Salt (aka SaltStack) is to configure software
    installed in domUs (including TemplateVMs and AppVMs). [1] To protect
    dom0 from potentially compromised domUs, all complex processing is done
    in a DisposableVM. [2] Each target domU being configured gets a separate
    DisposableVM, which is given power to execute arbitrary commands
    (through the qubes.VMShell qrexec service) in that target domU.
    
    In the default configuration, each DisposableVM generated for this
    purpose is based on the same default DVM Template that is used for all
    other default DisposableVM actions (including the default "Disposable:
    Firefox" menu entry). This DVM Template has a red label and has
    networking enabled, which might suggest that it is not
    security-critical.  However, if this default DVM Template were
    compromised (for example, by a web browser plugin the user had installed
    there [3]), then the next time Salt were used, it could also compromise
    all target domUs it were configuring.
    
    Although it is possible to use an alternative DVM Template for Salt, the
    option to do so has not been exposed through any command-line or
    graphical user interface.
    
    Vulnerable systems
    ==================
    
    To exploit this vulnerability, two conditions must be met:
    
    1. The user must actively use Salt to configure software inside a domU.
       This does not happen by default; user intervention is required. Only
       domUs configured by Salt are affected.
    
    2. The user must compromise the default DVM Template. (For example, the
       user might customize the DVM Template by installing an untrusted
       program in it, not realizing the security implications of doing so.)
    
    The issue affects only Qubes OS 4.0. In Qubes 3.2, Salt processing
    occurs in a temporary AppVM based on the default TemplateVM.
    
    Resolution
    ==========
    
    To fix this problem, we are implementing two changes:
    
    1. Adding the "management_dispvm" VM property, which specifies the DVM
       Template that should be used for management, such as Salt
       configuration.  TemplateBasedVMs inherit this property from their
       parent TemplateVMs.  If the value is not set explicitly, the default
       is taken from the global "management_dispvm" property. The
       VM-specific property is set with the qvm-prefs command, while the
       global property is set with the qubes-prefs command.
    
    2. Creating the "default-mgmt-dvm" DVM Template, which is hidden from
       the menu (to avoid accidental use), has networking disabled, and has
       a black label (the same as TemplateVMs). This VM is set as the global
       "management_dispvm".
    
    Patching
    =========
    
    The specific packages that resolve the problems discussed in this
    bulletin are as follows:
    
      For Qubes OS 4.0:
      - qubes-core-dom0 version 4.0.36
      - qubes-mgmt-salt-dom0-virtual-machines version 4.0.15
      - qubes-mgmt-salt-admin-tools version 4.0.12
    
      For Qubes OS 3.2:
      - No packages necessary, since 3.2 is not affected.
        (See above for details.)
    
    The packages are to be installed in dom0 via the Qubes VM Manager or via
    the qubes-dom0-update command as follows:
    
      For updates from the stable repository (not immediately available):
      $ sudo qubes-dom0-update
    
      For updates from the security-testing repository:
      $ sudo qubes-dom0-update --enablerepo=qubes-dom0-security-testing
    
    These packages will migrate from the security-testing repository to the
    current (stable) repository over the next two weeks after being tested
    by the community.
    
    
    Credits
    ========
    
    The issue was reported by Demi M. Obenour <demiobenour@gmail.com>
    
    References
    ===========
    
    [1] https://www.qubes-os.org/doc/salt/#configuring-a-vms-system-from-dom0
    [2] https://github.com/QubesOS/qubes-issues/issues/1541#issuecomment-187482786
    [3] https://www.qubes-os.org/doc/dispvm-customization/
    
    --
    The Qubes Security Team
    https://www.qubes-os.org/security/
    
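    For reference, once the patched packages are installed, the new property can be inspected and set from dom0 as follows (the VM name “work” is only an example):

    qubes-prefs management_dispvm                       # show the global default
    qvm-prefs work management_dispvm default-mgmt-dvm   # override it for a single VM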

    03 December, 2018 12:00AM


    Ubuntu developers

    Eric Hammond: Guest Post: Notable AWS re:Invent Sessions, by Jennine Townsend

    A guest post authored by Jennine Townsend, expert sysadmin and AWS aficionado

    There were so many sessions at re:Invent! Now that it’s over, I want to watch some sessions on video, but which ones?

    Of course I’ll pick out those that are specific to my interests, but I also want to know which sessions had good buzz, so I made a list mashed together from sessions I heard good things about on Twitter and sessions that had lots of repeats and overflow slots, figuring those must have been popular.

    But I confess I left out some whole categories! There are no Alexa or DeepRacer sessions in my list (not that I’m not interested, they’re just not part of my re:Invent followup), and I don’t administer any Windows systems, so I left out most of those sessions.

    Some sessions have YouTube links; some don’t (yet), and may never have videos, since lots of session types aren’t recorded. (But even then, if I search the topic and speakers, I bet I can often find an earlier talk.)

    There’s not much of a ranking: keynotes at the top, sessions I heard good things about in the middle, then sessions that had lots of repeats. It’s only mildly specific to my interests, so I thought other people might find it helpful. It’s also not really finished, but I wanted to get started watching sessions this weekend!

    Keynotes

    Peter DeSantis Monday Night Live

    Terry Wise Global Partner Keynote

    Andy Jassy keynote

    Werner Vogels keynote

    DEV322 What’s New with the AWS CLI (Kyle Knapp, James Saryerwinnie)

    SRV409 A Serverless Journey: AWS Lambda Under the Hood

    CON362 Container Power Hour with Jess, Clare, and Abby

    SRV325 Using DevOps, Microservices, and Serverless to Accelerate Innovation (David Richardson, Ken Exner, Deepak Singh)

    SRV375 Lambda Layers and Runtime API (Danilo Poccia) - Chalk Talk

    SRV338 Configuration Management and Service Discovery (mentions CloudMap) (Alex Casalboni, Ben Kehoe) - Chalk Talk

    CON367 Introducing App Mesh (Kiran Meduri, Shubha Rao, James Straub)

    SRV355 Best Practices for CI/CD with AWS Lambda and Amazon API Gateway (Chris Munns) (focuses on SAM, CodeStar, I believe) - Chalk Talk

    DEV327 Advanced Infrastructure as Code Programming on AWS

    SRV322 From Monolith to Modern Apps: Best Practices

    CON301 Mastering Kubernetes on AWS

    ARC202 Running Lean Architectures: How to Optimize for Cost Efficiency

    DEV319 Continuous Integration Best Practices

    AIM404 Build, Train, and Deploy ML Models Quickly and Easily with Amazon SageMaker

    STG209 Amazon S3 Storage Management (Scott Hewitt) - Chalk Talk

    ENT205 Executing a Large-Scale Migration to AWS (Joe Chung, Jonathan Allen, Mike Wittig)

    DEV317 Advanced Continuous Delivery Best Practices

    CON308 Building Microservices with Containers

    ANT323 Build Your Own Log Analytics Solutions on AWS

    ANT201 Big Data Analytics Architectural Patterns and Best Practices

    DEV403 Automate Common Maintenance & Deployment Tasks Using AWS Systems Manager - Builders Session

    DAT356 Which Database Should I Use? - Builders Session

    DEV309 CI/CD for Serverless and Containerized Applications

    ARC209 Architecture Patterns for Multi-Region Active-Active Applications

    AIM401 Deep Learning Applications Using TensorFlow

    SRV305 Inside AWS: Technology Choices for Modern Applications

    SEC401 Mastering Identity at Every Layer of the Cake

    SEC371 Incident Response in AWS - Builders Session

    SEC322 Using AWS Lambda as a Security Team

    NET404 Elastic Load Balancing: Deep Dive and Best Practices

    DEV321 What’s New with AWS CloudFormation

    DAT205 Databases on AWS: The Right Tool for the Right Job

    Original article and comments: https://alestic.com/2018/12/aws-reinvent-jennine/

    03 December, 2018 12:00AM

    December 02, 2018


    Xanadu developers

    The 9 most common mistakes when using CSS Grid

    This is a translation of the original article published on the Mozilla Hacks blog. Translation by Uriel Jurado.

    It’s easy to make a lot of mistakes with a new technology, especially one that changed significantly from the previous version, as CSS Grid did. In this video (in English) I explain the 9 most common mistakes people make when using this technology, with advice and tips for avoiding these traps and breaking old habits.

    Link to the video on YouTube

    Mistake 1: Thinking CSS Grid is everything

    Flexbox vs CSS Grid – Which is better?

    Using Flexbox and Grid together

    Getting rid of boxes with CSS Shapes

    Mistake 2: Using only percentages for sizing

    Min and Max, sizing content in CSS Grid

    FR units in CSS Grid

    MinMax in CSS Grid

    Mistake 3: Assuming you need breakpoints

    Amazingly simple layout with CSS Grid

    Mistake 4: Getting confused by the numbering

    Resourceful and practical graphic design with CSS Grid

    CSS Grid basics: The big picture

    Mistake 5: Always using 12 columns

    I explain this at the end of “FR units in CSS Grid”

    Mistake 6: Ignoring the power of rows

    Flexibility and the fold

    Whitespace on the web

    Mistake 7: Looking for a framework

    Mistake 8: Waiting for IE11 to die

    Internet Explorer + CSS Grid?

    A 7-part series on writing resilient CSS that works in all browsers

    Mistake 9: Hesitating instead of playing

    Responsive Mondrian

    CSS Grid as if you were Jan Tschichold

    02 December, 2018 09:11PM by sinfallas

    LMDE

    Monthly News – November 2018

    Many thanks to all the people who help our project financially. Donations are up again: more than 500 of you sent us funds in October, and we now have 129 patrons on Patreon.

    The BETA release for Linux Mint 19.1 will be out this week. We’re counting on you to help us find bugs and to help us fix them, so that we can raise the quality of the release and get to stable before Christmas.

    Some of the new features were described here on this blog, others will be unveiled tomorrow. This is an exciting time for all of us and we hope you enjoy it and have fun with the new release.

    Sponsorships:

    Linux Mint is proudly sponsored by:

    Platinum Sponsors:
    Private Internet Access
    Gold Sponsors:
    Linux VPS Hosting
    Silver Sponsors:

    Sucuri
    VPN Free
    Bronze Sponsors:
    Vault Networks *
    AYKsolutions Server & Cloud Hosting
    7L Networks Toronto Colocation *
    Goscomb
    BGASoft Inc
    David Salvo
    OpusVL
    Community Sponsors:

     

    Donations in October:

    A total of $11,535 were raised thanks to the generous contributions of 538 donors:

    $300, Tamas H.
    $200, Harland F.
    $200, B
    $163 (3rd donation), Jack B.
    $150 (3rd donation), Don P.
    $141, Frederic B.
    $128, Hector G.
    $109 (3rd donation), Uwe P.
    $109 (3rd donation), Naoise G.
    $109 (2nd donation), Peter A.
    $109, Torsten P.
    $100 (3rd donation), Markus S.
    $100 (3rd donation), The Incredibly Useful Company Limited
    $100 (2nd donation), Mountain Computers, Inc
    $100, Christopher H. J.
    $100, Symeon V.
    $100, Dave M.
    $80, Gops S.
    $65, Gladesoft, Inc.
    $60, Ray M.
    $54 (4th donation), Jan-Albert V.
    $54 (3rd donation), Daniel S.
    $54 (3rd donation), J. F. .
    $54 (3rd donation), Juergen S.
    $54 (3rd donation), Michael S.
    $54 (2nd donation), B. S. aka “disfit”
    $54 (2nd donation), Schultz M. L.
    $54 (2nd donation), Jeroen V. B.
    $54 (2nd donation), Karl H.
    $54, Daina E.
    $54, Ansgar M. aka “Nicky”
    $54, Gildas M.
    $54, Alexander V.
    $54, Barbara E.
    $54, Bernd B.
    $54, Cédric D.
    $54, Nilo V.
    $54, Gebhard M.
    $54, Nicolas H.
    $50 (28th donation), Anthony C. aka “ciak”
    $50 (11th donation), Thomas T. aka “FullTimer1489”
    $50 (9th donation), George H.
    $50 (7th donation), Warren A.
    $50 (7th donation), Carl G.
    $50 (7th donation), Anonymous User
    $50 (6th donation), Douglas J.
    $50 (5th donation), William W.
    $50 (5th donation), JimM
    $50 (3rd donation), Fred W.
    $50 (3rd donation), Dave K.
    $50 (2nd donation), Robert K.
    $50 (2nd donation), Charles W.
    $50 (2nd donation), Don P.
    $50 (2nd donation), Victor I.
    $50, Dragonaur
    $50, Brett L.
    $50, Samuel L.
    $50, Illusion Labs AB
    $50, Field Services USA aka “Jim”
    $50, Mark B.
    $50, William R.
    $50, Donor
    $49, Martin S.
    $44 (2nd donation), Ugo J.
    $44 (2nd donation), Thomas B.
    $42 (4th donation), Martin K.
    $40 (4th donation), Darin W.
    $40 (2nd donation), John B.
    $40, Mark O.
    $40, Lee S.
    $39.69, John P.
    $38 (2nd donation), Nico R.
    $35 (8th donation), Real F.
    $35 (6th donation), Jeff S.
    $35 (2nd donation), Andrew C.
    $33 (104th donation), Olli K.
    $33 (5th donation), Lars-gunnar S.
    $33 (5th donation), Bruno N.
    $33 (2nd donation), Rafael S. A.
    $33 (2nd donation), Erkki J.
    $33, Peter K.
    $33, Barbara B. aka “Camilla”
    $33, Alexander L. aka “Lexolas”
    $33, Thorsten K.
    $33, Bert-henry S.
    $33, Sieghart F.
    $33, Fabio V.
    $33, Philippe F.
    $33, Hugues C.
    $31, Daniel C.
    $30 (25th donation), Kouji Sugibayashi
    $30 (7th donation), V. Mark Lehky aka “SiKing”
    $30 (4th donation), Bruno Weber
    $30 (2nd donation), Michael M.
    $30 (2nd donation), Kamil R.
    $30 (2nd donation), Oscar R.
    $30 (2nd donation), Doug K.
    $30, 裴 丰硕
    $30, Roger H.
    $30, Lala V. D.
    $30, Drew G.
    $27 (7th donation), Ralf D.
    $27 (2nd donation), Patrick C.
    $27, Henk B.
    $27, Ivica P.
    $27, Stefan K.
    $27, Manuel G. F.
    $27, aka “rfspd”
    $27, akaIDIOT
    $26 (3rd donation), Fred D.
    $25 (87th donation), Ronald W.
    $25 (86th donation), Ronald W.
    $25 (9th donation), Douglas T.
    $25 (8th donation), Michael Welch aka “Dr. Mike
    $25 (6th donation), Myron J.
    $25 (4th donation), Jonathan L.
    $25 (4th donation), Karen J.
    $25 (4th donation), Michael S.
    $25 (3rd donation), Robert A.
    $25 (3rd donation), Chuck E.
    $25 (2nd donation), Roderick N.
    $25 (2nd donation), Mark F.
    $25 (2nd donation), CA aka “Clauclau”
    $25 (2nd donation), Daniel J.
    $25 (2nd donation), Scott O.
    $25 (2nd donation), Joe K.
    $25 (2nd donation), Steven L.
    $25 (2nd donation), George C.
    $25 (2nd donation), Frederick S.
    $25, Sheree P.
    $25, Steve E.
    $25, Roger J.
    $25, Frank C.
    $25, Randall D.
    $25, Ben J. aka “webwrx”
    $25, Jerry C.
    $25, Marek M.
    $25, Richard S.
    $25, Noah K.
    $25, Karl B.
    $25, Christopher H.
    $25, Cameron M.
    $25, Andrew R.
    $25, Larry W.
    $25, Heath H.
    $25, Daryl B.
    $25, Eric M.
    $25, Alen K.
    $25, Frank N.
    $25, Fred A. Jr
    $24, Barbara M.
    $23 (2nd donation), James H.
    $22 (20th donation), Derek R.
    $22 (6th donation), David M.
    $22 (5th donation), Jonathan K.
    $22 (5th donation), Florent G.
    $22 (4th donation), Bernhard J.
    $22 (4th donation), Manfred W.
    $22 (3rd donation), Aurelie L. B.
    $22 (3rd donation), Ørnulv A.
    $22 (3rd donation), Alexandru C.
    $22 (2nd donation), Lois S.C.
    $22 (2nd donation), Ralf Klawitter
    $22 (2nd donation), Christian K.
    $22 (2nd donation), Roman S.
    $22 (2nd donation), R. I. . aka “Birman”
    $22, Francesca S. S.
    $22, Octavian I.
    $22, Paul aka “Morts”
    $22, Carlo V.
    $22, Ondrej M.
    $22, Dieter K.
    $22, Remi C.
    $22, Onno G.
    $22, Ben A.
    $22, Klaus B.
    $22, Bert D. B.
    $22, Christine N.
    $22, Kurt-rainer D.
    $22, Pascal R.
    $22, Mack
    $22, Jörg B.
    $22, Thomas K.
    $22, Finn B. J.
    $22, Hana G.
    $21.73, Adam Champken
    $20 (26th donation), Kouji Sugibayashi
    $20 (15th donation), Ray
    $20 (11th donation), Lance M.
    $20 (11th donation), Michel S.
    $20 (8th donation), Justin Oros
    $20 (7th donation), Nicklas L.
    $20 (6th donation), John D.
    $20 (4th donation), Peter L.
    $20 (4th donation), Anthony S.
    $20 (4th donation), Srikanth B.
    $20 (4th donation), Bryan F.
    $20 (4th donation), Keith N.
    $20 (3rd donation), Ken W. aka “Tracknut”
    $20 (3rd donation), Allan M. aka “trini64”
    $20 (3rd donation), Che H.
    $20 (2nd donation), 陳 俊.
    $20 (2nd donation), Roy G.
    $20 (2nd donation), Bajan52
    $20 (2nd donation), Steven T. aka “oakhilltop”
    $20 (2nd donation), Michael F.
    $20, Hirantha Ketipearachchi
    $20, Aloke B.
    $20, webjobs.biz
    $20, Matthew W.
    $20, Diane K.
    $20, Michael B.
    $20, Brianlkeeton.com
    $20, Robert W.
    $20,
    $20, Graeme M. J.
    $20, Antonio M.
    $20, Adminout PTY LTD
    $20, Raghav K.
    $20, Paul B.
    $20, Stephen B.
    $20, Davd A.
    $20, CF Style Inc.
    $20, Richard M.
    $20, Rex T.
    $20, Samuli H.
    $20, UFO
    $20, John L.
    $20, Marcus W.
    $20, 唐 伟杰
    $20, Alan B.
    $20, Dennis H.
    $20, David R.
    $20, David S.
    $20, Eldrid K.
    $20, Wendy C.
    $20, Paul S.
    $20, Luca D.
    $20, Dean S.
    $20, NomP
    $20, Roy K.
    $17 (4th donation), Daniel G.
    $16 (21st donation), Andreas S.
    $16 (9th donation), Robert K.
    $16 (4th donation), Harm R.
    $16 (3rd donation), Isidoro L.
    $16 (3rd donation), Zahari D. K.
    $16 (3rd donation), Gerhard A.
    $16 (3rd donation), Gilles PdT
    $16 (2nd donation), Joachim B.
    $16 (2nd donation), Joachim A.
    $16 (2nd donation), Ion M. aka “nelu.ipx”
    $16 (2nd donation), Björn S.
    $16 (2nd donation), Jean-marie L.
    $16, Filippo F.
    $16, Christian F.
    $16, Valentin R.
    $16, Dmitry S.
    $16, Francois C.
    $16, Primoz M.
    $16, Valentin R.
    $16, Niko M.
    $15 (10th donation), Hemant Patel
    $15 (4th donation), Abigail M.
    $15 (4th donation), Constantin M.
    $15 (2nd donation), David G.
    $15 (2nd donation), Robert H.
    $15 (2nd donation), DBG
    $15, Dominik W.
    $15, Lois F.
    $15, Tr S.
    $15, Mark D.
    $15, Alan Y.
    $15, Andrei C. I.
    $15, Leon S.
    $14 (3rd donation), Martin F.
    $13, Bruno Y.
    $12 (92nd donation), Tony C. aka “S. LaRocca”
    $12 (91st donation), Tony C. aka “S. LaRocca”
    $11 (17th donation), Alessandro S.
    $11 (10th donation), Queenvictoria
    $11 (7th donation), Marc V. K.
    $11 (7th donation), Florian U.
    $11 (5th donation), Kari H.
    $11 (5th donation), Alexander Lang
    $11 (4th donation), Slobodan Vrkacevic
    $11 (3rd donation), Remus F. B.
    $11 (3rd donation), Dieter R.
    $11 (3rd donation), Mirko A.
    $11 (2nd donation), Claus Moller
    $11 (2nd donation), Pjerinjo
    $11 (2nd donation), Derek B.
    $11 (2nd donation), James P.
    $11 (2nd donation), Rafal K.
    $11 (2nd donation), Iris W.
    $11 (2nd donation), Kenichi M.
    $11 (2nd donation), Radoslav J.
    $11 (2nd donation), Luis M. M. I.
    $11 (2nd donation), Giovambattista A.
    $11 (2nd donation), Fabio F.
    $11, Martin W.
    $11, Christian W.
    $11, Ümit A.
    $11, Arkadiusz K.
    $11, Roland S.
    $11, Bernard L.
    $11, Andreas K.
    $11, Christopher I.
    $11, Arndt K.
    $11, OportoFado.com
    $11, Walter B.
    $11, Mathieu T.
    $11, Christian T.
    $11, Andre P.
    $11, Pier G. R.
    $11, Gunther U. aka “wEbAddEr”
    $11, Roel V. S.
    $11, Wojciech K.
    $11, hadisch aka “hadisch”
    $11, Matevž N.
    $11, Mathieu Q.
    $11, Pierre B.
    $11, Gabriele D.
    $11, Waldemar P. aka “valldek”
    $11, Dennis J.
    $11, Holger D.
    $11, Bob D.
    $11, Ernesto C.
    $10 (35th donation), Thomas C.
    $10 (26th donation), Frank K.
    $10 (24th donation), Kouji Sugibayashi
    $10 (23rd donation), Paul O.
    $10 (23rd donation), Kouji Sugibayashi
    $10 (14th donation), Terrance G.
    $10 (9th donation), Roger B.
    $10 (7th donation), Dohaeng L.
    $10 (6th donation), Tony H. aka “TonyH1212”
    $10 (5th donation), Arkadiusz T.
    $10 (5th donation), John T.
    $10 (4th donation), Ian M.
    $10 (4th donation), Gerard M. Cormick aka “gmacor2”
    $10 (4th donation), David T.
    $10 (4th donation), Raymond O.
    $10 (4th donation), อนล ธรรมตระการ aka “ฮอง”
    $10 (4th donation), Ishiyama T.
    $10 (3rd donation), Andrew C.
    $10 (3rd donation), Ric D.
    $10 (3rd donation), George M.
    $10 (3rd donation), Doug S.
    $10 (3rd donation), Kamil G.
    $10 (3rd donation), John K.
    $10 (2nd donation), Douglas S. aka “AJ Gringo”
    $10 (2nd donation), Erik W.
    $10 (2nd donation), Tabo K.
    $10 (2nd donation), blueredgreen
    $10 (2nd donation), C T Johnson, Inc
    $10 (2nd donation), Mike B.
    $10 (2nd donation), Laurent D.
    $10 (2nd donation), Philip S. aka “Smithereens”
    $10 (2nd donation), Josef H. R. H.
    $10 (2nd donation), Krzysztof S.
    $10 (2nd donation), Alfred C.
    $10, Matt K.
    $10, Damian
    $10, David D. M.
    $10, Jordan B.
    $10, Cuauhtemoc M.
    $10, Julien R.
    $10, Lennart S.
    $10, Kenneth W.
    $10, Joarez W.
    $10, David B.
    $10, IxL
    $10, Edward U.
    $10, Yokota Y.
    $10, Robert S.
    $10, Daniel R. N.
    $10, Jose C.
    $10, Norman I.
    $10, Danuta O.
    $10, Rithwik J.
    $10, John L.
    $10, Новиков И.
    $10, Morgan S.
    $10, Jiateng W.
    $10, Håkan F.
    $10, Mukesh R.
    $10, Guilherme Aires
    $10, Matthew F.
    $10, Peter S.
    $10, Rodolfo Zappa aka “RodZappa”
    $10, Kurt W.
    $10, Alok A.
    $10, François B.
    $10, André T. P.
    $10, Gladys B.
    $10, Shahov I.
    $9 (2nd donation), Philip B.
    $9, Roy Y.
    $8 (14th donation), Kevin O. aka “Kev”
    $8 (5th donation), Toni K.
    $8 (2nd donation), Paul B.
    $8 (2nd donation), Iker P. M.
    $8, Sébastien B. aka “SebastJava”
    $8, Mike A.
    $8, Mani D.
    $8, Tomeu P. S.
    $8, Roswitha O.
    $7 (4th donation), Daniel J G II
    $7 (3rd donation), Roy G.
    $6 (8th donation), gmq
    $6 (7th donation), gmq
    $6 (6th donation), Jan Miszura
    $6, Amy K.
    $5.95, Bounpone J. S.
    $5 (30th donation), Eugene T.
    $5 (29th donation), Eugene T.
    $5 (22nd donation), Kouji Sugibayashi
    $5 (21st donation), Todd A aka “thobin”
    $5 (20th donation), Bhavinder Jassar
    $5 (18th donation), Jens-uwe R.
    $5 (14th donation), Kjell O. B. aka “kob”
    $5 (14th donation), Hans P.
    $5 (11th donation), Lumacad Coupon Advertising
    $5 (8th donation), Халилова А.
    $5 (6th donation), Pokies Portal
    $5 (6th donation), Jimmy R. W.
    $5 (6th donation), Leszek Bober aka “L__B”
    $5 (5th donation), JvdB
    $5 (4th donation), Luiz H. R. C.
    $5 (3rd donation), broyeur vegetaux
    $5 (3rd donation), Jeroen St
    $5 (3rd donation), Josh J.
    $5 (3rd donation), Willem V. U.
    $5 (3rd donation), Nenad G.
    $5 (3rd donation), Kim T.
    $5 (3rd donation), Stefan N.
    $5 (3rd donation), Igor Simić
    $5 (2nd donation), Marcel M.
    $5 (2nd donation), Le M.
    $5 (2nd donation), Dawid M.
    $5 (2nd donation), Nemer A.
    $5 (2nd donation), Raymond D.
    $5 (2nd donation), Carlos G. L. G.
    $5 (2nd donation), Yonglan Z.
    $5 (2nd donation), Philipp B.
    $5 (2nd donation), Wayne A.
    $5 (2nd donation), Pavel M.
    $5, Greg M.
    $5, Tony N.
    $5, Nalin Ratnakar
    $5, Michel S.
    $5, Evan C.
    $5, Yang X.
    $5, Drcz
    $5, Loi Pinel
    $5, Gary M.
    $5, aka “Saesch78”
    $5, Maxmilian B.
    $5, Norton L. C.
    $5, Darius O.
    $5, Tabo K.
    $5, John W.
    $5, T M.
    $5, Linus J.
    $5, Tom N.
    $5, Allan Dacasin
    $5, Gabriele G.
    $5, Joseph G.
    $5, Ricardo E.
    $5, Jose M. G. C.
    $5, Daniel C
    $5, Michael L.
    $5, Yoyo F.
    $5, Alejandro N.
    $5, Miha M.
    $5, David J.
    $4, Viktor H.
    $3 (3rd donation), Artur F.
    $3, Chameka L.
    $3, Dalibor B.
    $3, Bogdan O.
    $3, asa
    $3, Chasity A.
    $2.7 (2nd donation), Elizabeth M.
    $2.5, Casey L.
    $2.3 (2nd donation), Sarie B.
    $69.77 from 51 smaller donations

    If you want to help Linux Mint with a donation, please visit https://www.linuxmint.com/donors.php

    Patrons:

    Linux Mint is proudly supported by 129 patrons, for a sum of $687 per month.

    To become a Linux Mint patron, please visit https://www.patreon.com/linux_mint

    Rankings:

    • Distrowatch (popularity ranking): 2227 (2nd)
    • Alexa (website ranking): 4036

    02 December, 2018 01:09PM by Clem


    VyOS

    On security of GRE/IPsec scenarios

    As we've already discussed, there are many ways to set up GRE (or something else) over IPsec, and they all have their advantages and disadvantages. Recently an issue was brought to my attention: which of these setups are safe against unencrypted GRE traffic being sent?

    The reason this issue can appear at all is that GRE and IPsec are related to each other more like routing and NAT: in some setups their configuration has to be carefully coordinated, but in general they can easily be used without each other. Lack of tight coupling between features allows greater flexibility, but it may also create situations where the setup stops working as intended without a clear indication as to why.

    Let's review the known safe scenarios:

    VTI

    This one is the least flexible, but also foolproof by design: the VTI interface (which is secretly simply IPIP) is brought up only when the IPsec tunnel associated with it is up, and goes down when the tunnel goes down. No traffic will ever be sent over a VTI interface until IKE succeeds.

    Tunnel sourced from a loopback address

    If you have missed it, the basic idea of this setup is the following:

    set interfaces dummy dum0 address 192.168.1.100/32
    
    set interfaces tunnel tun0 local-ip 192.168.1.100
    set interfaces tunnel tun0 remote-ip 192.168.1.101 # assigned to dum0 on the remote side
    
    set vpn ipsec site-to-site peer 203.0.113.50 tunnel 1 local prefix 192.168.1.100/32
    set vpn ipsec site-to-site peer 203.0.113.50 tunnel 1 remote prefix 192.168.1.101/32
    

    Most often it's used when the routers are behind NAT, or one side lacks a static address, which makes selecting traffic for encryption by protocol alone impossible. However, it also introduces tight coupling between IPsec and GRE: since the remote end of the GRE tunnel can only be reached via an IPsec tunnel, no communication between the routers over GRE is possible unless the IPsec tunnel is up. If you fear that any packets may be sent via the default route, you can nullroute the IPsec tunnel network to be sure.
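
    One way to express that nullroute in VyOS configuration, using the addressing from the example above (a sketch; verify it against your own setup before relying on it):

    set protocols static route 192.168.1.101/32 blackhole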

    The complicated case

    Now let's examine the simplest kind of setup:

    set interfaces tunnel tun0 local-ip 192.0.2.100 # WAN address
    set interfaces tunnel tun0 remote-ip 203.0.113.200
    
    set vpn ipsec site-to-site peer 203.0.113.200 tunnel 1 protocol gre
    

    In this case IPsec is set up to encrypt the GRE traffic to 203.0.113.200, but the GRE tunnel itself can work without IPsec. In fact, it will work without IPsec, just without encryption, and that is the concern for some people. If the IPsec tunnel goes down due to misconfiguration, the setup will fall back to plain, unencrypted GRE.
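
    A quick way to check which of the two is actually happening is to watch the WAN interface with tcpdump (the interface name here is illustrative): IP protocol 47 is GRE and protocol 50 is ESP, so seeing cleartext GRE packets means the fallback is in effect.

    tcpdump -ni eth0 'host 203.0.113.200 and (ip proto 47 or ip proto 50)'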

    What can you do about it?

    As a user, if your requirement is to prevent unencrypted traffic from ever being sent, you should use VTI or use loopback addresses for tunnel endpoints.

    For developers this question is more complicated.

    What should be done about it?

    The opinions are divided. I'll summarize the arguments here.

    Arguments for fixing it:

    • Cisco does it that way (it attempts to detect that GRE and IPsec are related, at least in some implementations, and at least when the IPsec profile is referenced in the GRE tunnel)
    • The current behaviour goes against the user's intentions

    Arguments against fixing it:

    • Attempts to guess the user's intentions are doomed to fail at least some of the time (for example, what if a user intentionally brings an IPsec tunnel down to isolate a GRE setup issue?)
    • The only way to guarantee that unencrypted traffic is never sent is to check for a live SA matching the protocol and source before forwarding every packet, which is not good for performance

    Practical considerations:

    • Since IKE is in the userspace, the kernel can't even know that an SA is supposed to exist until IKE succeeds: automatic detection would be a big change that is unlikely to be accepted in the mainline kernel.
    • Configuration changes required to avoid the issue are simple

    If you have any thoughts on the issue, please share them with us!

    02 December, 2018 02:04AM by Daniil Baturin

    December 01, 2018

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Julian Andres Klode: Migrating web servers

    As of today, I migrated various services from shared hosting on uberspace.de to a VPS hosted by hetzner. This includes my weechat client, this blog, and the following other websites:

    • jak-linux.org
    • dep.debian.net redirector
    • mirror.fail

    Rationale

    Uberspace runs CentOS 6. This was causing more and more issues for me, as I was trying to run up-to-date weechat binaries. In the final stages, I ran weechat and tmux inside a debian proot. It certainly beat compiling half a system with linuxbrew.

    The web performance was suboptimal, too. Webpages were served with Pound and Apache, the TLS connection overhead was huge, there was only HTTP/1.1, and no keep-alive.

    Security-wise, things were interesting: everything ran as my user, obviously, whether that's scripts, weechat, or mail delivery helpers. Ugh. There was also only a single certificate, meaning that all domains shared it, even if they were completely distinct like jak-linux.org and dep.debian.net.

    Enter Hetzner VPS

    I launched a VPS at hetzner and configured it with Ubuntu 18.04, the latest Ubuntu LTS. It is a CX21, so it has 2 vcores, 4 GB RAM, 40 GB SSD storage, and 20 TB of traffic. For 5.83€/mo, you can’t complain.

    I went on to build a repository of ansible roles (see repo on github.com) that configure the system with a few key characteristics:

    • http is served by nginx
    • certificates are per logical domain - each domain has a canonical name and a set of aliases; and the certificate is generated for them all (see the sketch after this list)
    • HTTPS is configured according to Mozilla’s modern profile, meaning TLSv1.2-only, and a very restricted list of ciphers. I can revisit that if it’s causing problems, but I’ve not seen huge issues.
    • Log files are anonymized to 24 bits for IPv4 addresses, and 32 bits for IPv6 addresses, which should allow me to identify an ISP, but not an individual user.
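
    For the certificate point, a single certificate covering a canonical name plus its aliases can be requested in one go. The post does not name the ACME client; with certbot (shown here purely as an example, with illustrative domains) it would look like:

    certbot certonly --nginx -d example.org -d www.example.org -d blog.example.org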

    I don’t think the roles are particularly reusable for others, but it’s nice to have a central repository containing all the configuration for the server.

    Go server to serve comments

    When I started self-hosting the blog and added commenting via mastodon, it was via a third-party PHP script. This has been replaced by a Go program (GitHub repo). The new Go program scales a lot better than a PHP script, and provides better security properties due to AppArmor and systemd-based sandboxing; it even uses systemd’s DynamicUser.

    Special care has been taken to set timeouts for talking to upstream servers, so the program cannot hang on open connections and will always respond eventually.

    The Go binary is connected to nginx via a UNIX domain socket that serves FastCGI. The service is activated via systemd socket activation, allowing it to be owned by www-data, while the binary runs as a dynamic user. Nginx’s native fastcgi caching mechanism is enabled so the Go process is only contacted every 10 minutes at the most (for a given post). Nice!
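
    A sketch of how such a socket-activated FastCGI service can be wired up; the unit names, socket path, and binary path are hypothetical, only the directives themselves are standard systemd:

    # comments.socket (hypothetical unit): the UNIX socket, owned by www-data
    [Socket]
    ListenStream=/run/comments.sock
    SocketUser=www-data
    SocketMode=0660

    [Install]
    WantedBy=sockets.target

    # comments.service (hypothetical unit): the Go binary runs as a dynamic user
    [Service]
    DynamicUser=yes
    ExecStart=/usr/local/bin/comments-fcgi

    On the nginx side, a fastcgi_pass unix:/run/comments.sock; directive points a location at that socket.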

    Performance

    Performance is a lot better than the old shared server. Pages load in up to half the time of the old one. Scalability also seems better: I tried various benchmarks, and achieved consistently higher concurrency ratings. A simple curl via https now takes 100ms instead of 200ms.

    Performance is still suboptimal from the west coast of the US or other places far away from Germany, but got a lot better than before: Measuring from Oregon using webpagetest, it took 1.5s for a page to fully render vs ~3.4s before. A CDN would surely be faster, but would lose the end-to-end encryption.

    Upcoming mail server

    The next step is to enable email. Setting up postfix with dovecot is quite easy, it turns out. Install them, tweak a few settings, set up SPF, DKIM, DMARC, and a PTR record, and off you go.
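
    On Debian/Ubuntu that starts with roughly the following; the SPF record is a generic example, not the actual policy of this site:

    sudo apt install postfix dovecot-imapd

    # SPF is published as a DNS TXT record, e.g.:
    # example.org. IN TXT "v=spf1 mx -all"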

    I mostly expect to read my email by tagging it on the server using notmuch somehow, and then syncing it to my laptop using muchsync. The IMAP access should allow some notifications or reading on the phone.

    Spam filtering will be handled with rspamd. It seems to be the hot new thing on the market, is integrated with postfix as a milter, and handles a lot of stuff, such as:

    • greylisting
    • IP scoring
    • DKIM verification and signing
    • ARC verification
    • SPF verification
    • DNS lists
    • Rate limiting

    It also has fancy stuff like neural networks. Woohoo!
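
    Hooking rspamd into postfix as a milter takes only a couple of lines in main.cf; 11332 is the default port of rspamd's proxy worker in milter mode, but verify it against your local configuration:

    # /etc/postfix/main.cf
    smtpd_milters = inet:localhost:11332
    milter_default_action = accept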

    As another bonus point: It’s trivial to confine with AppArmor, which I really love. Postfix and Dovecot are a mess to confine with their hundreds of different binaries.

    I found it via uberspace, who plan on using it for their next uberspace7 generation. It is also used by some large installations like rambler.ru and locaweb.com.br.

    I plan to migrate mail from uberspace in the upcoming weeks, and will post more details about it.

    01 December, 2018 10:40PM

    hackergotchi for SparkyLinux

    SparkyLinux

    Midori 7.0

    There is a new application available in our repos: Midori 7.0.

    Midori is a lightweight yet powerful web browser which runs just as well on little embedded computers named for delicious pastries as it does on beefy machines with a core temperature exceeding that of planet earth. And it looks good doing that, too. Oh, and of course it’s free software.

    This release has been published after a development break of over 3 years.
    The browser has been completely rewritten in Vala & GTK3.

    Installation/Upgrade:
    sudo apt update
    sudo apt install midori

    Midori 7.0 is available for Sparky 4/Debian Stretch and Sparky 5/Debian Buster users as well.

    Midori

    Midori Git repos: github.com/midori-browser/core

     

    01 December, 2018 10:17PM by pavroo

    November 2018 donation report

    Many thanks to all of you for supporting Sparky!
    Your donations help keeping Sparky alive.

    Don’t forget to send a small tip in December too 🙂

     

    Country        Supporter        Amount
    Germany        Eric H.          € 25
    Poland         Krzysztof M.     PLN 50
    Italy          Roberto T.       € 10
    Poland         Emil N.          PLN 30
    USA            John S.          € 10
    World          Ruedi L.         € 10
    Germany        Alexander F.     € 10
    Poland         Jacek G.         PLN 40
    Poland         Stanisław G.     PLN 20
    Poland         Paweł S.         PLN 30
    Switzerland    Johnny A.        € 50
    World          Jorg S.          € 2.5
    World          Denis P.         € 50
    World          Adrian B.        $ 1
    World          Gernot P.        $ 10
    World          Merlyn M.        $ 5
    Germany        Dirk O.          € 15
    Poland         Wojciech H.      PLN 1

    Total:         PLN 171, € 182.5, $ 16

    01 December, 2018 01:15PM by pavroo

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Colin King: New features in Forkstat

    Forkstat is a simple utility I wrote a while ago that can trace process activity using the rather useful Linux NETLINK_CONNECTOR API.   Recently I have added two extra features that may be of interest:

    1.  Improved output using some UTF-8 glyphs.  These are used to show process parent/child relationships and various process events, such as termination, core dumping and renaming.   Use the new -g (glyph) option to enable this mode. For example:


    In the above example, the program "wobble" was started and forks off a child process.  The parent then renames itself to wibble (indicated by a turning arrow). The child then segfaults and generates a core dump (indicated by a skull and crossbones), triggering apport to investigate the crash.  After this, we observe NetworkManager creating a thread that runs for a very short period of time.   This kind of activity is normally impossible to spot while running conventional tools such as ps or top.

    2. By default, forkstat will show the process name using the contents of /proc/$PID/cmdline.  The new -c option allows one to instead use the 16 character task "comm" field, and this can be helpful for spotting process name changes on PROC_EVENT_COMM events.
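
    For example (forkstat needs privileges to use the netlink connector, hence the sudo):

    sudo forkstat -g    # process events annotated with UTF-8 glyphs
    sudo forkstat -c    # show the 16 character comm field instead of cmdline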

    These are small changes, but I think they make forkstat more useful.  The updated forkstat will be available in Ubuntu 19.04 "Disco Dingo".

    01 December, 2018 12:47PM by Colin Ian King (noreply@blogger.com)

    hackergotchi for VyOS

    VyOS

    VyOS 1.2.0-rc9 is available for download

    We are getting closer and closer to the stable release. There are still enough bugs to fix, and some features we need to get in, to warrant a couple more release candidates, but generally the time of spectacular updates in the 1.2.0/crux branch is almost over—all big things will be going on in the new 1.3.0/equuleus branch soon. The new release candidate is available for download from https://downloads.vyos.io/?dir=testing/1.2.0-rc9

    Software updates

    The kernel has been updated to the most recent 4.19.4 release. It includes multiple fixes in drivers, so driver-related tasks, like the one about a Mellanox card causing a crash under load (T1014), or those about Intel cards (T986, T961), should be re-tested.

    This kernel also removes the original SPECTRE vulnerability mitigation code that had a big performance impact.  We'd like to hear about your experience in this regard.

    We have also included the Hyper-V daemons package. If you are running VyOS on a Hyper-V host, please let us know if it works well for you.

    As usual, FRR has been updated to the latest master as well.

    Bugfixes

    • The "protocols bgp ... address-family ipv4-unicast redistribute ospf" works again (T1034).
    • Validation rules applied when setting OSPFv3 areas no longer allow decimal notation, since it was never allowed by Quagga or FRR (T981).
    • PPPoE server help strings and autocompletion have been improved.
    • The ofed-scripts package (a remnant of the official Mellanox drivers) is removed from the image since we are using kernel built-in drivers now.
    • Package lists for the image build now correctly include aptitude which is needed for the grub-efi package fetching, so you should be able to build images yourself without problems again. Sorry it's been broken for a while!

    Contributor subscriptions

    Since we published the pre-registration form, we have received a number of LTS subscription requests from our veteran contributors and from people who have joined VyOS recently. We are still working on the subscriber portal, but we'll make sure to send everyone a notification when the 1.2.0 LTS release and the portal are ready. Once the portal is ready, contributors will be able to register directly.

    If you have been or are contributing to VyOS, you can find the form here.

    Update

    Originally the image was accidentally uploaded with a broken wireguard package, but the issue is resolved now, thanks to Kroy, who identified the issue and notified us quickly.

    01 December, 2018 10:03AM by Daniil Baturin

    VyOS 1.2.0-rc8 is available for download

    A new release candidate, 1.2.0-rc8 is available for download from https://downloads.vyos.io/?dir=testing/1.2.0-rc8

    As usual, it offers a few bugfixes, but also some last moment additions we wanted to make before the code freeze.

    New features

    PPPoE based on accel-ppp

    https://accel-ppp.org/ is a high performance implementation of the PPP protocol itself and of multiple protocols based on it, including PPPoE, PPTP, and L2TP, that has become very popular with service providers.

    To make VyOS a better option for access concentrators, we (and by "we" I mostly mean our contributor hagbard!) rewrote the PPPoE scripts to use accel-ppp instead of rp-pppoe.

    No other protocols are reimplemented yet, but we are considering that option. Reimplementing PPTP can be challenging because the kernel module that accel-ppp uses for it conflicts with ip_gre (which is used for normal GRE tunnels as well as PPTP in the current implementation), but L2TPv2 should be doable.

    The configuration syntax of the new PPPoE implementation is fully compatible with the old one, save for the RADIUS key option that is handled by a migration script, so your old configuration should work as expected if you used PPPoE server in older releases.

    Saltstack integration

    This project has existed for quite a while, and even accidentally made it to one of the earlier release candidates, but we never made it official. Now it should be stable enough to be included in an image.

    Salt is a popular configuration management project. There is already VyOS support in Ansible, so why not expand our support for automation platforms?

    Unlike Ansible, Salt needs an agent package on the target system. The minimal configuration required to make it work is "set service salt-minion master 192.0.2.1" (where 192.0.2.1 is the Salt master server address).

    Hop limit matching in IPv6 firewalls

    Thanks to a patch by Ray Patrick Soucy, it's now possible to match on hop limit in IPv6 firewall rules (T573).

    The command is "set firewall ipv6-name Foo rule 10 hop-limit" and can have one of the following options: "eq $num" (equals exactly), "gt $num" (greater than) and "lt $num" (less than).

    BGP interface option for link-local peer sessions

    It is now possible to specify the interface to be used for a session in BGP (T941) with a command "set protocols bgp 64512 neighbor 192.0.2.10 interface eth0". This should allow using IPv6 link-local addresses for peer sessions.

    Multipath routing options

    New kernel versions include a few improvements in multipath routing. By default, the kernel only uses network layer information to bind connections to next hops, but now it's possible to make it also use transport layer information (e.g. TCP or UDP ports) for that decision with these commands: "set system ip multipath layer4-hashing", "set system ipv6 multipath layer4-hashing" (T992).

    There's another command that is (so far at least) IPv4-specific: "set system ip multipath ignore-unreachable-nexthops". It makes the kernel exclude next hops with unreachable ARP from routing decisions.

    Bug fixes

    1. Obsolete "dynamic" option was removed from NTP (T1018).
    2. It is now possible to restart the DHCP relay agent (T1016). 
    3. Conntrack helper is now enabled by default (T1011).
    4. Validation rules for 6rd tunnels have been corrected, now it should be borderline usable (T1000).
    5. Fixed dynamic DNS requests over HTTP (T983).
    6. Fixed DNS forwarding service not listening on IPv6 address (T974).
    7. immark module is now enabled in syslog (T940).
    8. Console device speed option should now modify the GRUB config correctly (T969). 
    9. It is now possible to disable the in-memory table netflow plugin (T458). 
    10. OSPF LS update sending on a flapped interface seems to have been automatically solved by migration to FRR (T409).

    Bugs that need verification

    Some people reported that Intel XL710 network cards do not work, but that was before we updated the kernel to 4.19 (T961). This needs re-testing on rc7 or rc8.

    This kernel also needs more testing with fifth generation Mellanox cards.

    01 December, 2018 10:03AM by Daniil Baturin

    BunsenLabs Linux

    BunsenLabs Helium midterm iso release

    To match the Debian 9.6 point release we have rebuilt our iso images (version helium-4) to incorporate the latest Debian and BunsenLabs upgrades. This means that new installs will not have to do a post-install upgrade in order to get the latest improvements and bugfixes.

    For more info about BunsenLabs Helium, see the existing Helium Release Notes.

    NOTE: Current users do not need to reinstall their systems. It is enough to keep packages up to date with regular apt upgrades.
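
    In other words, the usual routine is all that is needed (full-upgrade shown for completeness; a plain upgrade also works on a system tracking stable):

    sudo apt update
    sudo apt full-upgrade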

    01 December, 2018 12:00AM

    November 30, 2018

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Sergio Schvezov: Snapcraft 3.0

    The release notes for snapcraft 3.0 have been long overdue. For convenience I will reproduce them here too.

    Presenting snapcraft 3.0

    The arrival of snapcraft 3.0 brings fresh air into how snap development takes place! We took the learnings from the main pain points you had when creating snaps in the past several years, and we introduced those lessons into a brand new release - snapcraft 3.0!

    Build Environments

    As the cornerstone for behavioral change, we are introducing the concept of build environments.

    30 November, 2018 07:47PM

    hackergotchi for SparkyLinux

    SparkyLinux

    Sparky news 2018/11

    The 11th monthly report of 2018 of the Sparky project:
    – Sparky’s Linux kernel updated up to version 4.19.5 & 4.20-rc4
    – the ‘etcher-electron’ package changed its name to ‘balena-etcher-electron’; uninstall the old one and install the new one if you use the tool
    – the Advanced Installer’s fstab configuration has been improved: the old-fashioned /dev/sdX device names are replaced by UUIDs now
    – Sparky Backup Core got configuration for all supported desktops and window managers, so a new iso image displays your desktop name at the boot screen now
    – APTus got a new small tool called quick-list which searches for packages in the repository and displays info about them; thanks to Elton
    – the first Sparky Small Business Server development iso is out, but it is not usable yet; it is really a development image
    – I also started working on a new application called APTus AppCenter, which had been lying in a dark corner for a very long time. It should replace sparky-aptus, sparky-aptus-extra, sparky-aptus-gamer, sparky-office, sparky-codecs, and a few other packages, integrating all the tools into one and providing an easier way to install many popular applications.

    The new tool is very simple and lightweight, like all Sparky tools, and uses Yad and HTML technology.

    Any help developing the new app is welcome; small tips are warmly welcome too 🙂

    APTus AppCenter

     

    30 November, 2018 04:39PM by pavroo

    hackergotchi for Univention Corporate Server

    Univention Corporate Server

    Linux Programs in Windows: Just Integrate UCS in Active Directory

    Our Univention App Center offers many open source applications from all areas, which you can add to your UCS environment in just a few clicks. Whether groupware, CRM or backup solution – the list of apps is growing continuously. If you want to use these applications in a Windows environment, UCS offers a particularly convenient way of doing so: UCS can be integrated into an existing Windows environment, in particular into an existing Active Directory domain.

    After such an integration, for which you use our app ‘Active Directory Connection’, the Active Directory (AD) continues to work as the primary directory service, while UCS extends the AD domain by exactly those open source software solutions that are available in the App Center.

    Programs such as ownCloud, Nextcloud, Kopano or the Open-Xchange App Suite are thus also easily accessible to users of an AD domain. Nor does the administrative burden increase, as UCS, being part of an AD domain, uses the AD‘s authentication services. You can thus eliminate managing your users twice.

    In the following, I will explain to you briefly how to integrate UCS into an existing AD domain.

    Two options for the UCS and AD domain: integration or synchronization

    You‘ve got two options to integrate Univention Corporate Server into an existing AD domain. Our app Active Directory Connection helps in both cases:

    • In the first case, UCS becomes a member of an AD domain (AD remains the primary directory service)
    • In the second case, the AD and UCS domains operate in parallel. This setup involves the synchronization of all account data, but independent authentication mechanisms.

    Screenshot UCS AD Takeover and AD Connection

    If the UCS server becomes a full member of an AD domain, UCS uses the AD‘s authentication services and makes them available to the open source programs from our App Center. The AD remains the primary directory service. The UCS server continues to provide the OpenLDAP-based directory services that are needed by the apps hosted in the App Center.

    Alternatively, the app Active Directory Connection can synchronize users, groups, and passwords between an AD and a parallel UCS domain. This synchronization can be uni- or bidirectional. Here, the authentication mechanisms work independently from each other.

    Below, I introduce you to the first scenario and explain how to configure UCS as a member of an AD domain. First of all, however, I would like to point out a few peculiarities.

    Technical backgrounds – The role of OpenLDAP and Kerberos

    The “member mode”, as we call this variant internally at Univention, comes with some technical limitations. As UCS is not the leading identity management system in this operating mode, the Univention Management Console (UMC) protects user and group objects that are managed by AD tools against changes. In this case it is, for example, no longer possible to reset user passwords via the UMC, because the password data does not exist in OpenLDAP.

    Please also note that the UCS server is not a classic domain controller in this mode. It is part of the AD domain and has thus joined the Kerberos infrastructure deployed in the AD.

    Set up UCS as an Active Directory member

    Screenshot AD Connection

    After the installation of the Active Directory Connection app via the Univention App Center, click on ‘Open module’ and turn to ‘Configuration’. Select ‘Configure UCS as part of an Active Directory domain (recommended)’ if Active Directory is the main domain. Via ‘Next’ you are guided to the input mask for the domain access data. In the top field you enter the IP address of the Windows server or the name of the AD domain. Below this is space for the username and password of an AD administrator account. Please note that this account must be authorized to create new servers and new users – a local administrator account for managing workstation PCs or end users is not enough. Afterwards click on ‘Join AD domain’.

    Screenshot UCS AD domain access data

    After a short moment you should see the message that the connection has been established successfully. The app’s configuration wizard also indicates that already connected UCS systems need to re-join the domain. Please do this for each UCS system by using the ‘domain join’ module in the UMC of the respective server. Now click on ‘Finish’ and confirm the restart of the UMC server components and the web server.

    The next time you log in, do so as administrator, this time using the password from the AD domain. Once you have successfully authenticated, you will see the Active Directory Connection module. It shows the status of the connection service and allows setting up SSL encryption and a password service.


    Screenshot of a video about the UCS AD Connection

    Step-by-step instruction: Operate UCS as a Member of a Windows Active Directory Domain

    Learn in this video how you can operate a UCS system as a member of an existing Windows Active Directory domain via the app Active Directory Connection.


    Way clear for a good cooperation across platforms!

    Users, groups, and computers continue to be managed by the AD domain controller. Exceptions are attributes that only exist in UCS, for example, the activation or configuration of third-party apps like ownCloud, Nextcloud, Kopano, or Open-Xchange.

    Thanks to the Active Directory Connection, you can extend your existing Windows domain by all functions of UCS. As a platform, Univention Corporate Server provides open-source applications for everyone – with the usual, convenient installation and configuration routines. If you have questions or suggestions, please visit us in our forum.

    Further articles we can recommend

    Brief Introduction: Samba / Microsoft Active Directory
    Bye Bye Active Directory Password Service

    The post Linux Programs in Windows: Just Integrate UCS in Active Directory appeared first on Univention.

    30 November, 2018 12:55PM by Kevin Dominik Korte