August 28, 2015


Tails

Tails 1.5.1 is out

Tails, The Amnesic Incognito Live System, version 1.5.1, is out.

This is an emergency release, triggered by an unscheduled Firefox release meant to fix critical security issues.

It fixes numerous security issues and all users must upgrade as soon as possible.

Changes

Upgrades and changes

  • Install Tor Browser 5.0.2 (based on Firefox ESR 38.2.1).

Known issues

  • Tails Greeter is not translated for Vietnamese and Russian (ticket #9992).

See the current list of known issues.

Download or upgrade

Go to the download page.

What's coming up?

The next Tails release is scheduled for September 22.

Have a look at our roadmap to see where we are heading.

Do you want to help? There are many ways you can contribute to Tails. If you want to help, come talk to us!

28 August, 2015 10:34AM


Ubuntu developers

Dimitri John Ledkov: Go enjoy Python3

Given a string, get a truncated string of length up to 12.

The task is ambiguous, as it doesn't say whether or not the 12 should include the terminating null character. Nonetheless, let's see how one would achieve this in various languages.
Let's start with python3

import sys
print(sys.argv[1][:12])

Simple enough: in essence, given the first argument, print it up to length 12. As an added bonus, this also deals with Unicode correctly; that is, if the passed arg is 車賈滑豈更串句龜龜契金喇車賈滑豈更串句龜龜契金喇, it will correctly print 車賈滑豈更串句龜龜契金喇. (Note these are just random Unicode strings to me, no idea what they stand for.)

In C things are slightly more verbose, but in essence, I am going to use strncpy function:

#include <stdio.h>
#include <string.h>

int main(int argc, char *argv[]) {
    char res[13] = ""; /* room for 12 chars plus the terminating null */
    if (argc > 1)
        strncpy(res, argv[1], 12); /* strncpy does not null-terminate on truncation */
    printf("%s\n", res);
    return 0;
}
This treats things as a byte array instead of Unicode, thus for the Unicode test it will end up printing just 車賈滑豈. But it is still simple enough.
Finally, we have Go:
package main

import (
    "fmt"
    "math"
    "os"
)

func main() {
    fmt.Printf("%s\n", os.Args[1][:int(math.Min(12, float64(len(os.Args[1]))))])
}
This similarly treats the argument as a byte array, and one needs to convert the argument to a rune slice to get Unicode-aware handling. But there are quite a few caveats. One cannot take out-of-bounds slices: a naïve os.Args[1][:12] can result in a runtime panic that slice bounds are out of range, or, if the string is known at compile time, a compile-time error. Hence one needs to calculate the length and do a min comparison. And there lies the next caveat: math.Min() is only defined for the float64 type, and slice indexes can only be integers, and thus we end up writing ]))))])...
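For completeness, here is what a Unicode-aware Go variant looks like (my own sketch, not from the original post): convert the argument to a rune slice, clamp the length, and convert back.

package main

import (
    "fmt"
    "os"
)

func main() {
    // Converting to []rune makes the truncation operate on code
    // points rather than bytes, so multi-byte characters survive.
    runes := []rune(os.Args[1])
    n := len(runes)
    if n > 12 {
        n = 12
    }
    fmt.Printf("%s\n", string(runes[:n]))
}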

12 points for python3, 8 points for C, and Go receives nul points Eurovision style.

EDIT: Andreas Røssland and James Hunt are full of win, both suggesting fmt.Printf("%.12s\n", os.Args[1]) for Go. I like that a lot, as it gives simplicity & readability without compromising the default safety against out-of-bounds access. Hence the scores are now: 14 points for Go, 12 points for python3 and 8 points for C.

EDIT2: I was pointed to a much better C implementation by Keith Thompson - http://pastebin.com/5i7rFmMQ - in essence it uses strncat(), which has much better null-termination semantics. And Ben posted a C implementation which handles wide characters: http://www.decadent.org.uk/ben/blog/truncating-a-string-in-c.html. I regret to inform you that this blog post got syndicated onto Hacker News and has now become the top viewed post on my blog of all time, overnight. In retrospect, I regret awarding points at the end of the blog post, as that was merely an expression of opinion and a highly subjective measure. But this problem statement did originate from me reviewing Go code that did an "if/then/else" comparison and got it wrong while truncating a string, and I thought surely one can just do [:12], which led me down the rabbit hole of discovering a lot about Go: its compile-time and runtime out-of-bounds access safeguards, the lack of a universal Min() function, runes vs. strings handling and so on. I'm only a beginner Go programmer and I am very sorry for wasting everyone's time on this. I guess people didn't have much to do on a Throwback Thursday.

The postings on this site are my own and don't necessarily represent Intel’s positions, strategies, or opinions.

28 August, 2015 09:48AM by Dimitri John Ledkov (noreply@blogger.com)

Joel Leclerc: Follow up on the non-windowing display server idea

Note: I’m sorry, this post is a bit of a mess.

I wrote a post 2 days ago, outlining an idea for a non-windowing display server — a layer that wayland compositors (or other programs) could be built upon. It got quite a bit more attention than I expected, and there were many responses to the idea.

Before I go on, I wish to address a few things that weren’t clear in the original post:

The first is that I am not an Ubuntu developer, and am in no way associated with Canonical. I am only an Ubuntu member :) Even though I don't use Ubuntu personally, I wish to improve the user experience of those who do.

Second is a point that I did not address clearly in the original post: One of the main reasons for this idea is to enable users to modify the video resolution, gamma ramp, orientation, brightness, etc. DRM provides an API for doing these operations, however, AFAIK, you cannot run modesetting operations on a virtual terminal that is already running an application that has called video modesetting operations. In other words, you cannot run a DRM-based application on an already-running wayland server in order to run a modesetting operation. So, AFAIK, the only way to enable an application to do this is to write a sort of “proxy” server that handles requests, and then runs the video modesetting operations.

Since I am currently confusing myself re-reading this, I’ll try to provide a diagram in order to explain what I mean.

If you want to change the gamma ramp, for example, this is impossible:

[Diagram: a DRM client trying to talk directly to the DRM device while a wayland compositor is running — blocked, since the compositor already owns the device]

So with the display server acting as a proxy of sorts, it becomes possible:

[Diagram: the DRM client sends its request to the display server, which owns the DRM device and performs the modesetting on the client's behalf]

This is also why I believe that having a server over a shared library is crucial. A shared library would allow for abstraction over multiple backends, however, it doesn’t allow communication with more than one application. A wayland compositor can access all of the functions, yes, but wayland clients cannot.
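To make the "proxy" idea concrete, here is a minimal sketch of the request flow (entirely my own illustration, not from any existing project: the socket path and the one-line text protocol are invented, and the actual DRM call is stubbed out):

package main

import (
    "bufio"
    "fmt"
    "net"
    "os"
)

// applyModeset stands in for the real DRM operation the display server
// would perform; only this server process holds the DRM device, so
// clients have to send it their modesetting requests.
func applyModeset(req string) error {
    fmt.Println("applying modesetting request:", req)
    return nil
}

func main() {
    const sock = "/tmp/display-server.sock"
    os.Remove(sock)
    ln, err := net.Listen("unix", sock)
    if err != nil {
        panic(err)
    }
    defer ln.Close()
    for {
        conn, err := ln.Accept()
        if err != nil {
            continue
        }
        go func(c net.Conn) {
            defer c.Close()
            // one request per line, e.g. "gamma 1.2" or "mode 1920x1080"
            scanner := bufio.NewScanner(c)
            for scanner.Scan() {
                if err := applyModeset(scanner.Text()); err != nil {
                    fmt.Fprintln(c, "error:", err)
                } else {
                    fmt.Fprintln(c, "ok")
                }
            }
        }(conn)
    }
}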

The third clarification is that this is not only meant for wayland. Though this is the main “client” I have in mind for this server, it isn’t restricted to only wayland. The idea is that it could be used by anything, for example, as one response pointed out, xen virtualization. Or, in my case, I actually want to write clients that use this server directly, without even using a windowing server like wayland (yes, I actually have a good reason for wanting this XD ). In other words, though I believe that the group that would use this the most would be wayland users (hence why I wrote the original post tailored towards this), it isn’t only meant for wayland.

There were a few responses saying that wayland intentionally doesn't support this, not because of the reason I originally suspected (it being "only" a windowing protocol), but because one of wayland's main goals is to let the compositor have full control over the display and make sure that there are no flickers or tearing, which changing the video resolution (or some other modesetting operations) would undoubtedly cause. I understand and respect this; however, I still want to be able to change the resolution or gamma ramp (etc.) myself, and suffer the consequences of the momentary flickering or whatever else. Since I respect wayland's decision in this aspect, my proposal is now this: make this an optional backend for wayland compositors. Instead of my original proposal, which was to build wayland compositors on top of this (in order to help simplify the stack), have this as an option, so that if users wish to have the video modesetting (etc.) capabilities, they can use this backend instead.

A pretty large concern that many people (including myself) have is performance. Having an extra server on the stack would definitely have an impact on performance, but the question is how much.

So with this being said, going forwards, I am currently working on implementing a proof-of-concept prototype in order to have a better sense of what it entails, especially in regards to performance. The prototype will be anything but production-ready, but hopefully will at least work … maybe XD .


28 August, 2015 01:22AM

August 27, 2015

Jono Bacon: Ubuntu, Canonical, and IP

Recently there has been a flurry of concerns relating to the IP policy at Canonical. I have not wanted to throw my hat into the ring, but I figured I would share a few simple thoughts.

Firstly, the caveat. I am not a lawyer. Far from it. So, take all of this with a pinch of salt.

The core issue here seems to be whether the act of compiling binaries provides copyright over those binaries. Some believe it does, some believe it doesn’t. My opinion: I just don’t know.

The issue here though is with intent.

In Canonical’s defense, and specifically Mark Shuttleworth’s defense, they set out with a promise at the inception of the Ubuntu project that Ubuntu will always be free. The promise was that there would not be a hampered community edition and full-flavor enterprise edition. There will be one Ubuntu, available freely to all.

Canonical, and Mark Shuttleworth as a primary investor, have stuck to their word. They have not gone down the road of the community and enterprise editions, of per-seat licensing, or some other compromise in software freedom. Canonical has entered multiple markets where having separate enterprise and community editions could have made life easier from a business perspective, but they haven’t. I think we sometimes forget this.

Now, from a revenue side, this has caused challenges. Canonical has invested a lot of money in engineering/design/marketing, and some companies have used Ubuntu without contributing even nominally to its development. Thus, Canonical has at times struggled to find the right balance between a free product for the Open Source community and revenue. We have seen efforts such as training services, Ubuntu One, etc., some of which have failed, some of which have succeeded.

Again though, Canonical has made their own life more complex with this commitment to freedom. When I was at Canonical I saw Mark very specifically reject notions of compromising on these ethics.

Now, I get the notional concept of this IP issue from Canonical’s perspective. Canonical invests in staff and infrastructure to build binaries that are part of a free platform and that other free platforms can use. If someone else takes those binaries and builds a commercial product from them, I can understand Canonical being a bit miffed about that and asking the company to pay it forward and cover some of the costs.

But here is the rub. While I understand this, it goes against the grain of the Free Software movement and the culture of Open Source collaboration.

Putting the legal question of copyrightable binaries aside for one second, the current Canonical IP policy is just culturally awkward. I think most of us expect that Free Software code will result in Free Software binaries and to make claim that those binaries are limited or restricted in some way seems unusual and the antithesis of the wider movement. It feels frankly like an attempt to find a loophole in a collaborative culture where the connective tissue is freedom.

Thus, I see this whole thing from both angles: Canonical is trying to find the right balance of revenue and software freedom, but I also sympathize with the critics that this IP approach feels like a pretty weak way to accomplish that balance.

So, I ask my humble readers this question: if Canonical reverts this IP policy and binaries are free to all, what do you feel is the best way for Canonical to derive revenue from their products and services while also committing to software freedom? Thoughts and ideas welcome!

27 August, 2015 11:59PM

The Fridge: Wily Werewolf Beta 1 Released

"I am Groot."
– Groot

The first beta of the Wily Werewolf (to become 15.10) has now been released!

This beta features images for Kubuntu, Lubuntu, Ubuntu GNOME, Ubuntu Kylin, Ubuntu MATE, Xubuntu and the Ubuntu Cloud images.

Pre-releases of the Wily Werewolf are *not* encouraged for anyone needing a stable system or anyone who is not comfortable running into occasional, even frequent breakage. They are, however, recommended for Ubuntu flavor developers and those who want to help in testing, reporting and fixing bugs as we work towards getting this release ready.

Beta 1 includes a number of software updates that are ready for wider testing. This is quite an early set of images, so you should expect some bugs.

While these Beta 1 images have been tested and work, except as noted in the release notes, Ubuntu developers are continuing to improve the Wily Werewolf. In particular, once newer daily images are available, system installation bugs identified in the Beta 1 installer should be verified against the current daily image before being reported in Launchpad. Using an obsolete image to re-report bugs that have already been fixed wastes your time and the time of developers who are busy trying to make 15.10 the best Ubuntu release yet. Always ensure your system is up to date before reporting bugs.

Kubuntu

Kubuntu uses KDE software and now features the new Plasma 5 desktop.

The Kubuntu 15.10 Beta 1 images can be downloaded from:

http://cdimage.ubuntu.com/kubuntu/releases/wily/beta-1/

More information about Kubuntu 15.10 Beta 1 can be found here:

https://wiki.ubuntu.com/WilyWerewolf/Beta1/Kubuntu

Lubuntu

Lubuntu is a flavour of Ubuntu based on LXDE and focused on providing a very lightweight distribution.

The Lubuntu 15.10 Beta 1 images can be downloaded from:

http://cdimage.ubuntu.com/lubuntu/releases/wily/beta-1/

More information about Lubuntu 15.10 Beta 1 can be found here:

https://wiki.ubuntu.com/WilyWerewolf/Beta1/Lubuntu

Ubuntu GNOME

Ubuntu GNOME is a flavour of Ubuntu featuring the GNOME3 desktop environment.

The Ubuntu GNOME 15.10 Beta 1 images can be downloaded from:

http://cdimage.ubuntu.com/ubuntu-gnome/releases/wily/beta-1/

More information about Ubuntu GNOME 15.10 Beta 1 can be found here:

https://wiki.ubuntu.com/WilyWerewolf/Beta1/UbuntuGNOME

Ubuntu Kylin

Ubuntu Kylin is a flavour of Ubuntu that is more suitable for Chinese users.

The Ubuntu Kylin 15.10 Beta 1 images can be downloaded from:

http://cdimage.ubuntu.com/ubuntukylin/releases/wily/beta-1/

More information about Ubuntu Kylin 15.10 Beta 1 can be found here:

https://wiki.ubuntu.com/WilyWerewolf/Beta1/UbuntuKylin

Ubuntu MATE

Ubuntu MATE is a flavour of Ubuntu featuring the MATE desktop environment for people who just want to get stuff done.

The Ubuntu MATE 15.10 Beta 1 images can be downloaded from:

http://cdimage.ubuntu.com/ubuntu-mate/releases/wily/beta-1/

More information about Ubuntu MATE 15.10 Beta 1 can be found here:

https://wiki.ubuntu.com/WilyWerewolf/Beta1/UbuntuMATE

Xubuntu

Xubuntu is a flavour of Ubuntu shipping with the XFCE desktop environment.

The Xubuntu 15.10 Beta 1 images can be downloaded from:

http://cdimage.ubuntu.com/xubuntu/releases/wily/beta-1/

More information about Xubuntu 15.10 Beta 1 can be found here:

https://wiki.ubuntu.com/WilyWerewolf/Beta1/Xubuntu

Ubuntu Cloud

Ubuntu Cloud images can be run on Amazon EC2, OpenStack, SmartOS and many other clouds.

The Ubuntu Cloud 15.10 Beta 1 images can be downloaded from:

http://cloud-images.ubuntu.com/releases/wily/beta-1/

Regular daily images for Ubuntu can be found at:

http://cdimage.ubuntu.com

If you’re interested in following the changes as we further develop Wily, we suggest that you subscribe to the ubuntu-devel-announce list. This is a low-traffic list (a few posts a week) carrying announcements of approved specifications, policy changes, beta releases and other interesting events.

http://lists.ubuntu.com/mailman/listinfo/ubuntu-devel-announce

A big thank you to the developers and testers for their efforts to pull together this Beta release!

Originally posted to the ubuntu-devel-announce mailing list on Thu Aug 27 14:27:17 UTC 2015 by Martin Wimpress on behalf of Ubuntu Release Team

27 August, 2015 10:02PM

Kubuntu: Kubuntu Wily Beta 1

The first Beta of Wily (to become 15.10) has now been released!

The Beta-1 images can be downloaded from: http://cdimage.ubuntu.com/kubuntu/releases/wily/beta-1/

More information on Kubuntu Beta-1 can be found here: https://wiki.kubuntu.org/WilyWerewolf/Beta1/Kubuntu

27 August, 2015 09:21PM

Rohan Garg: Legalese is vague: Always consult a lawyer

Jon recently published a blog post stating that you're free to create Ubuntu derivatives as long as you remove trademarks. I do not necessarily agree with this statement, primarily because of this clause in the IP rights policy:

Copyright

The disk, CD, installer and system images, together with Ubuntu packages and binary files, are in many cases copyright of Canonical (which copyright may be distinct from the copyright in the individual components therein) and can only be used in accordance with the copyright licences therein and this IPRights Policy.

From what I understand, Canonical is asserting copyright over various binaries that are shipped on the ISO, and they're totally in the clear to do so for any packages that end up on the ISO that are permissively licensed (X11, for example), because permissive licenses, unlike copyleft licenses, do not prohibit additional restrictions on top of the software. The GPL contains this explicit statement:

4. You may not copy, modify, sublicense, or distribute the Program
except as expressly provided under this License. Any attempt
otherwise to copy, modify, sublicense or distribute the Program is
void, and will automatically terminate your rights under this License.
However, parties who have received copies, or rights, from you under
this License will not have their licenses terminated so long as such
parties remain in full compliance.

Whereas licenses such as the X11 license explicitly allow sublicensing:

… including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software …

Depending on the jurisdiction you live in, Canonical *can* claim copyright over the binaries that are produced in the Ubuntu archive. This is something that multiple other parties, such as the SF Conservancy and the FSF, as well as Bradley Kuhn, have agreed on.

So once again, all of this is very much dependent on where you live and where your ISOs are hosted. If you're distributing an Ubuntu derivative, I'd very much recommend talking to a professional lawyer, who'd best be able to advise you on how the policy affects you in your jurisdiction. It may very well be that you require a license, or it may be that you don't. I'm not a lawyer and, AFAIK, neither is Jon.

Addendum/Afterthought:

Taken a bit more extreme, one could even argue that in order to be GPL compliant, derivatives should provide sources to all the packages that land on the ISO, and just passing off this responsibility to Canonical is a potential GPL violation.


27 August, 2015 06:47PM


Cumulus Linux

Monitoring Our Network Infrastructure With Sensu

Cumulus Networks provides a service known as the Cumulus Workbench. This service is an infrastructure made of physical switches, virtual machines running in Google Compute Engine (GCE), virtual machines running on our own hardware and bare metal servers. It allows prospective customers and partners to prototype network topologies, test out different configuration management tools, and get a general feeling for open networking. The workbench is also utilized for our boot camp classes.

Right now, we are completely rewriting the workbench backend! Many of the changes that we’re making are to the technical plumbing, so they’re behind the scenes. Monitoring the various workbench components is critical, as any downtime can easily affect a prospective sale or even an in-progress training session. Since our infrastructure is a mix of virtual machines, physical servers and switches, I needed one place to help me monitor the health of the entire system.

We use Puppet for automating our internal infrastructure. I chose Puppet since it's where most of my operational experience lies, but I firmly believe that the best automation tool is the one that you choose to use! If you want more details on how we use Puppet for automation, I will be speaking in depth about it in this webinar. I paired Puppet with Hiera, a key/value pair data store, which enables us to store much of our site-specific data, such as passwords, in a contained location. Watch this space for an announcement soon releasing our new workbench code to the public!

I decided against using Icinga or Nagios. In my experience, Nagios’s performance seems to suffer greatly after a few hundred nodes, which makes it difficult for me to use the smallest instance possible. Icinga is a popular fork from the original Nagios codebase, and much of the development effort is focused on performance or speed enhancements. Icinga has an excellent community and great performance, but I wanted to go for a more radical change. So I chose Sensu, which is normally a server monitoring tool, and am using it for monitoring Cumulus Linux.

Sensu uses a very distributed model, which scales well over a large number of nodes. It also has a great amount of flexibility in its check system since they introduced handlers. A handler is an action that is performed when a problem is noted. For example, if the web server is down, a handler could perform a service restart. It doesn’t solve the root problem, but when it’s late at night and I want the problem solved, the less work I have to do, the better!
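To make that concrete, here is what a handler definition looks like in classic Sensu: a small JSON file dropped into the server's config directory. (This is my own illustrative sketch, not one of our actual checks; the handler name and restart command are hypothetical.)

{
  "handlers": {
    "restart_nginx": {
      "type": "pipe",
      "command": "sudo service nginx restart"
    }
  }
}

A "pipe" handler simply executes the given command and passes the event data to it on standard input, so anything scriptable can react to an alert.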


Sensu diagram courtesy of Sensu documentation

One of the best parts of open source software is the community. With Sensu, it’s a necessity!  Sensu is moving from a monolithic community check repository to a directory of multiple repositories.

Sensu was pretty easy to set up with Puppet, thanks to a lot of hard work by the community.  Much of the shared community work made this a snap, such as Puppet’s RabbitMQ module, Tom de Vylder’s redis module, Sensu’s Sensu module and Yelp’s Uchiwa module.

Using these modules means less time configuring the monitoring server itself and more time for configuring checks.

We want to hear about your monitoring stories too!  What are you using and why? What have you monitored? Do you have any custom checks that would help others? Join or start a conversation in the Cumulus Networks Community!

The post Monitoring Our Network Infrastructure With Sensu appeared first on Cumulus Networks Blog.

27 August, 2015 05:46PM by Leslie Carr


Ubuntu developers

Jonathan Riddell: Ubuntu Archive Still Free Software

“Ubuntu is entirely committed to the principles of free software development; we encourage people to use free and open source software, improve it and pass it on.” is what used to be printed on the front page of ubuntu.com. This is still true, but recently it has come under attack when the project’s main sponsor, Canonical, put up an IP policy which broke the GPL and free software licences generally by claiming packages need to be recompiled. Rather than apologising for this in the modern sense of the word, by saying sorry, various staff members have apologised in an older sense of the word, meaning to excuse. But everything in Ubuntu is free to share, copy and modify (or just free to share and copy in the case of restricted/multiverse). The archive admins will only let in packages which comply with this, and anyone saying otherwise is incorrect.

In this Twitter post Michael Hall says “If a derivative distro uses PPAs it needs an additional license.” But he doesn’t say what there is that needs an additional licence; the packages already have copyright licences, all of them free software.

It should be very obvious that Canonical doesn’t control the world, and a licence is only needed if there is some law that allows them to restrict what others want to do. There have been a few claims about what that law might be, but nothing that makes sense when you look at it. It’s worth examining their claims, because people will fall for them and that will destroy Ubuntu as a community project. Community projects depend on everyone having the freedom to do whatever they want with the code, else nobody will give their time to a project that someone else will then control.

In this blog post Dustin Kirkland again doesn’t say what needs a licence, but says one is needed based on Geographical Indication. It’s hard to say if he’s being serious. A geographical indication (GI) is a sign used on products that have a specific geographical origin and possess qualities or a reputation that are due to that origin, and which are assessed before being registered. There is no Geographical Indication registration in Ubuntu and it’s completely irrelevant to everything. So let’s move on.

A more dangerous claim can be seen in this reddit post, where Michael Hall claims “for permissively licensed code where you did not build the binary, there is no pre-existing right to redistribution of that binary”. This is incorrect: everything in Ubuntu has a free software licence with an explicit right to redistribution. (Or a few bits are public domain, where no licence is needed at all.) Let’s take libX11 as a random example; it gets shipped with a copyright file containing this licence:

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”),  to deal in the Software without restriction

so we do have permission. Shame on those who say otherwise. This applies to the source of course, and so it applies to any derived work such as the binaries, which is why it’s shipped with the binaries. It even says you can’t remove the licence:
“The above copyright notice and this permission notice (including the next paragraph) shall be included in all copies or substantial portions of the Software.”
So it’s free software and the licence requires it to remain free software. It’s not copyleft, so if you combine it with another work which is not free software then the result is proprietary, but we don’t do that in Ubuntu. The copyright owner could put extra restrictions on, but nobody else can, because it’s a free world and you can’t make me do stuff just because you say so; you have to have some legal way to restrict me first.
One of the items allowed by this X11 licence is the ability to “sublicense”, which is just putting another licence on it, but you can’t remove the original licence, as it says in the part of the licence I quoted above. Once I have a copy of the work I can copy it all I want under the X11 licence and ignore your sublicence.
This is even true of works under the public domain or a WTFPL-style licence: once I’ve got a copy of the work it’s still public domain, so I can still copy, share and modify it freely. You can’t claim it’s your copyright because, well, it’s not.

In Matthew Garrett’s recent blog post he reports that “Canonical assert that the act of compilation creates copyright over the binaries”. Fortunately this is untrue and can be ignored. Copyright requires some creative input; it’s not enough to run a work through a computer program. In the very unlikely case a court did decide that compiling a programme added some copyright, it would not decide that the copyright was owned by the owners of the computer it ran on, but by the copyright owners of the compiler, which is the Free Software Foundation, and the copyright would be GPL.

In conclusion, there is nothing which restricts people making derivatives of Ubuntu except the trademark, and removing branding is easy. (Even that is unnecessary unless you’re trading, which most derivatives aren’t, but it’s a sign of good faith to remove it anyway.)

Which is why Mark Shuttleworth says “you are fully entitled and encouraged to redistribute .debs and .iso’s”. Lovely.

 


27 August, 2015 02:50PM

Xubuntu: Xubuntu 15.10 Beta 1

The Xubuntu team is pleased to announce the immediate release of Xubuntu 15.10 Beta 1. This is the first beta towards the final release in October.

The first beta release also marks Ubuntu Feature Freeze, the end of the period for landing new features. This means any new updates to packages should be bug fixes only; the Xubuntu team is committed to fixing as many of the bugs as possible before the final release.

The Beta 1 release is available for download by torrents and direct downloads from
http://cdimages.ubuntu.com/xubuntu/releases/wily/beta-1/

Highlights and known issues

New features and enhancements

  • LibreOffice Calc and Writer are now included. These applications replace Gnumeric and Abiword respectively.
  • A new theme for LibreOffice, libreoffice-style-elementary, is also included and is the default for Wily Werewolf.

Known Issues

Some issues were found during testing of the image; in addition, some bugs related to Xubuntu have been noted during the development cycle. Full details of all of these can be found in the release notes at https://wiki.ubuntu.com/WilyWerewolf/Beta1/Xubuntu

27 August, 2015 02:39PM

Ubuntu GNOME: Wily Werewolf (15.10) Beta 1 Released

Hello,

The Ubuntu GNOME team is glad to announce the release of Beta 1 of Ubuntu GNOME Wily Werewolf (15.10).

What’s new and how to get it?

Please do read the release notes:
https://wiki.ubuntu.com/WilyWerewolf/Beta1/UbuntuGNOME

As always, thanks a million to each and everyone who has helped, supported and contributed to make this yet another successful milestone!

We have great testers, and without their endless support we don’t think we could ever make it. Please keep up the great work!

Thank you!

27 August, 2015 02:31PM

August 26, 2015

Rohan Garg: An alternative to Linaro’s HWPacks

For the past couple of weeks I’ve been playing with a variety of boards, and a single problem kept rearing its head over and over again: I needed to build test images quickly in order to check whether or not these boards had the features that I wanted.

This led me to investigate tools for building images for these boards, and the tools I came across were abysmal, to say the least. All of them were either very board-specific or not versatile enough for my needs. Linaro’s HWPacks came very, very close to what I needed, but still had the following limitations:

  • HWPacks are inflexible in terms of partitioning layout; the entire layout is internal to the tool, and you can only specify one of three variations of it, with no control over anything else, such as the start sectors of the partitions.
  • HWPacks are inflexible in terms of bootloader flashing; as far as I can tell, there is no way to specify the start sector, the byte size and other options that some of these boards pass to dd to flash the bootloader to the image.
  • HWPacks, as far as I could tell, cannot generate config files to be used by u-boot at boot.
  • HWPacks only support Apt.

So with those 4 problems to solve, I set out writing my own replacement for Linaro’s HWPacks, and lo and behold, you can find it here. (I’m quite terrible at coming up with awesome names for my projects, so I chose the most simple and descriptive name I could think of ;)

Here’s a sample config for the ODROID C1, a neat little board from HardKernel.

The rootfs section

You can specify a rootfs for your board in this section; it takes a URL to the rootfs tar and optionally an md5sum for the tar.

The firmware section

We currently have 2 firmware backends for installing the firmware (things like the kernel and other board-specific packages). One is the tar backend, which, like the rootfs section, takes a URL to the firmware tar and optionally an md5sum; the other is the Apt backend. I only have time to maintain these 2 backends, so I’d absolutely love it if someone could write more backends, such as yum or pacman, and send me a pull request.

The tar backend will copy everything from the boot/* folder inside the tar onto the first partition, and anything inside the firmware/* and modules/* folders into the rootfs’s /lib folder. This is a bit implicit, and I’m trying to figure out a way to make it better.

The apt backend can take multiple apt repos to be added to the rootfs and a list of packages to install afterwards.

The bootloader section

The bootloader has a :config section, which takes an ERB file to be rendered and installed into both the rootfs and the bootfs (if you have one).

The sample ERB file for the ODROID C1 allows me to dynamically render boot files depending on what kernel was installed on the image and what the UUID of the rootfs is. You can in fact access more variables, as described here.

Moving on to the :uboot section of the bootloader, you can specify as many stages as you want to flash onto the image. Each stage takes a :file to flash and optionally :dd_opts, options that you might want to pass to dd when writing the bootloader. The stages are flashed in the sequence they are declared in config.yml, and the files are searched for in the rootfs first, failing which they’re searched for in the bootfs partition, if you have one.

The login section

The login section is quite self-explanatory and takes a user, a password for the user and a list of groups the user should be added to on the target image.

The login section is optional and can be skipped if your rootfs already has a pre-configured user.
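Pieced together from the descriptions above, a complete config.yml would look roughly like the sketch below. Note that the exact key names and values here are my own guesses for illustration (the post doesn’t reproduce the full file), so consult the repository’s sample configs for the authoritative layout:

rootfs:
  url: http://example.com/images/rootfs.tar.gz
  md5sum: 1bc29b36f623ba82aaf6724fd3b16718

firmware:
  backend: apt
  repos:
    - deb http://example.com/repo stable main
  packages:
    - linux-image-odroidc1

bootloader:
  config: boot.ini.erb
  uboot:
    stages:
      - file: bl1.bin.hardkernel
        dd_opts: "bs=1 count=442"
      - file: u-boot.bin
        dd_opts: "bs=512 seek=1"

login:
  user: odroid
  password: odroid
  groups:
    - sudo
    - video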

At the moment I have configs for the ODROID C1, the Cubox-i (thanks to SolidRun for sending me an extra board for free! :) and the Raspberry Pi 2.

If you have questions send me an email or leave them in the comments below, and I’ll try to answer them ASAP :).

If you end up writing a config for your board, please send me a PR with the config, that’d be most awesome.

PS: Some of the most awesome people I know are meeting up at Randa next month to work on bringing Touch to KDE. It’d be supremely generous of you if you could donate towards the effort.


26 August, 2015 03:34PM

Dustin Kirkland: An Open Letter to Google Nest (was: OMG, F*CKING NEST PROTECT AGAIN!)

[Updates (1) and (2) at the bottom of the post]

It's 01:24am on Tuesday, August 25, 2015.  I am, again, awake in the middle of the night, due to another false alarm from Google's spitefully sentient, irascibly ignorant Nest Protect "smart" smoke alarm system.

Exactly how I feel right now.  Except I'm in my pajamas.
Warning: You'll find very little profanity on this blog.  However, the filter is off for this post.  Apologies in advance.

ARRRRRRRRRRRRRRRRRGGGGGGGGGHHHHHHHHHHH!!!!!!!!!!!
Oh.
My.
God.
FOR FUCK'S SAKE.

"Heads up, there's smoke in the kids room," she says.  Not once, but 3 times in a 30 minute period, between 12am and 1am, last night.


That's my alarm clock.  Right now.  I'm serious.
"Heads up, there's smoke in the guest bedroom," she says again tonight a few minutes ago, at 12:59am.

There was in fact never any smoke to clear.
Is it possible for anything wake you up more seriously and violently in a cold panic than a smoke alarm detecting something amiss in your 2 year old's bedroom?

Here's what happens (each time)...

Every Nest Protect unit in the house announces, in unison, "Heads up, there's smoke in the kids' room."  Then both my phone and my wife's phone buzz on our night stands, with the urgent incoming message from the Nest app.  Another few seconds pass, and another set of alarms arrives, this time delivered by email, in case you missed the first two.

The first and second time it happens, you jump up immediately.  You run into their room and make sure everyone is okay -- both the infant in the crib and the toddler who's into everything.  You walk the whole house, checking the oven, the stove, the toaster, the computer equipment.  You open the door and check around outside.  When everything is okay, you're left with a tingling in the back of your mind, wondering what went wrong.  When you're a computer engineer by trade, you're trying to debug the hardware and/or software bug causing the false positive.  Then you set about trying to calm your family and get them back into bed.  And at some point later, you calm your own nerves and try to get some sleep.  It's a work night after all.

But the third, fourth, and fifth time it happens?  From 3 different units?

Well, it never ceases to scare the ever living shit out of you, waking up out of deep sleep, your mind racing, assessing the threat.

But then, reality kind of sets in.  It's just the stupid Nest Protect fucking it all up again.

Roll over, go back to bed, and pray that the full alarm doesn't sound this time, waking up both kids and setting us up for a really bad night and next few days at school.

It's not over yet, though.  You then wait for the same series of messages announcing the all clear -- first the bitch over the loudspeaker, followed by the Android app notification, then the email -- each with the same message:  "Caution, the smoke is clearing..."

THERE WAS NEVER ANY FUCKING SMOKE, YOU STUPID CYBORG. 

20 years later, and the smartest company in the world
creates a smoke detector that broadcasts the IoT equivalent
of PC LOAD LETTER to your smart home, mobile app, and email.
But not this time.  I'm not rolling over.  I'm here, typing with every ounce of anger this Thinkpad can muster. I'm mashing these keys in the guest bedroom that's supposedly on fire.  I can most assuredly tell you that it's a comfy 72 F, that the air is as clean as a summer breeze.

I'm writing this, hoping that someone, somewhere hears how disturbingly defective, and dangerously disingenuous this product actually is.

It has one job to do.  Detect and report smoke.  And it's unable to do that effectively.  If it can't reliably detect normality, what confidence should I have that it'll actually detect an emergency if that dreaded day ever comes?

The sad, sobering reality is: zero.  I have zero confidence whatsoever in the Nest Protect.

What's worse is that I'm embarrassed to say I've been duped into buying 7 (yes, seven) of these broken pieces of shit, at $99 apiece.  I'm a pretty savvy technical buyer, and admittedly a pretty magnanimous early adopter.  But while I'm accepting of beta versions of gadgets and gizmos, I am entirely unforgiving on the safety and livelihood of my family and guests.

Michael Larabel of Phoronix recounts his similar experience here.  He destroyed one with a sledgehammer, which might provide me with some catharsis when (not if, but when) this happens again.

Michael Larabel of Phoronix destroyed his malfunctioning Nest Protect
with a 20 lb sledgehammer, to silence the false alarm in the middle of the night
There's a sad, long thread on Nest's customer support forum, calling for a better "silence" feature.  I'm sorry, that's just wrong.  The solution is not a better way to "silence" false positives.  Root out the false positives to begin with.  Or recall the hardware.  Tut, tut, tut.

You can't be serious...
This is from me to Google and Nest on behalf of thousands of trusting families out there:  You have the opportunity, and ultimately the obligation.  Please make this right.  Whatever that means, you owe the world that.
  • Ship working firmware.
  • Recall faulty hardware.
  • Refund the product.
Okay, the impassioned rant is over.  Time for data.  Here is the detailed, distressing timeline.
  • January 2015: I installed 5 Nest Protects: one in each of two kids' rooms, the master bedroom, the hallway, and the kitchen/living room
  • February 2015: While on a business trip to South Africa, I received notification via email and the Nest App that there was a smoke emergency at my home, half a world away, with my family in bed for the night.  My wife called me immediately -- in the middle of the night in Texas.  My heart raced.  She assured me it was a false alarm, and that she had two screaming kids awake from the noise.  I filed a support ticket with Nest (ref:_00D40Mlt9._50040jgU8y:ref) and tried to assure my wife that it was just a glitch and that I'd fix it when I got home.

  • May 23, 2015: We thought it was funny enough to post to Facebook, "When Nest mistakes a diaper change for a fire, that's one impressive poop, kiddo!"  Not so funny now.


  • August 9, 2015: I installed 2 more Nest Protects, in the guest bedroom and my office
  • August 21, 2015, 11:26am: While on a flight home from another business trip, I receive another set of daytime warnings about smoke in the house.  Another false alarm.
  • August 24, 2015, 12am: While asleep, I receive another 3 false alarms.
  • August 25, 2015, 1am: Again, asleep, another false alarm.  Different room, different unit.  I'm fucking done with these.
I'm counting on you Google/Nest.  Please make it right.

Burning up but not on fire,
Dustin

Update #1: I was contacted directly by email and over Twitter by Nest's "Executive Relations", who offered to replace all 7 of my "v1" Nest Protects with 7 new "v2" Nest Protects, at no charge.  The new "v2" Protect reportedly has an improved design with a better photoelectric detector that reduces false positives.  I was initially inclined to try the new "v2" Protects; however, neither the mounting bracket nor the wiring harness is compatible from v1 to v2, so I would have to replace all of the brackets and redo all of the wiring myself.  I asked, but Nest would not cover the cost of a professional (re-)installation.  At this point, I expressed my disappointment in this alternative, and I was offered a full refund, in 4-6 weeks time, after I return the 7 units.  I've accepted this solution and will replace the Nest Protects with simpler, more reliable traditional smoke detectors.
Update #2: I suppose I should mention that I generally like my Nest Thermostat and (3) Dropcams.  This blog post is really only complaining about the Titanic disaster that is the Nest Protect.

26 August, 2015 02:06PM by Dustin Kirkland (noreply@blogger.com)

Pasi Lallinaho: A series of minor improvements for Ubuntu websites

In addition to using developer documentation (see A compact style for jQuery API documentation), people who work with communities need to use community and communication related websites. The bigger the community, the more tools it needs.

In a large community like Ubuntu, the amount of maintenance is big and the variety of platforms is huge. On top of that, many of the websites aren’t directly maintained by the community (which has both good and bad sides). For these reasons, it’s sometimes hard and/or slow to get updates landed in the CSS files for the websites.

While workarounds aren’t ideal, at least we can fight the problematic styles with modern technology. That said, I’ve created a gist for a Stylish style that provides some minor improvements for some ubuntu.com websites.

Currently, the style brings the following improvements:

  • The last line of the chat is completely shown in Ubuntu Etherpad pads
  • Images and code blocks aren’t overlapping the content section in Planet Ubuntu, avoiding horizontal scrollbars
  • In the Ubuntu wiki, list items do not have a large bottom padding, making the lists more readable
  • Also in the wiki, tables are always full width but not too wide, keeping them aligned nicely
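For instance, the wiki list fix above could be expressed as a Stylish rule like the following (a sketch of mine, not the actual gist; the selector and value are illustrative):

@-moz-document domain("wiki.ubuntu.com") {
    /* illustrative only: trim the oversized bottom padding on list items */
    li {
        padding-bottom: 0.1em !important;
    }
}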

If you are constantly hitting other annoying styling issues on the Ubuntu websites, leave me a comment and I’ll see whether I can update the gist with a workaround. However, please report the bugs and issues to the concerned maintaining parties as well, so we can stop using these workarounds as soon as possible. Thank you!

26 August, 2015 01:41PM

Lubuntu Blog: Happy 24th birthday, Linux!

Can you believe Linux is celebrating 24 years already? It was on this day, August 25, back in 1991 when a young Linus Torvalds made his now-legendary announcement on the comp.os.minix newsgroup:

Hello everybody out there using minix -

I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones. This has been brewing since april, and is starting to get ready. I'd like any feedback on things people like/dislike in minix, as my OS resembles it somewhat (same physical layout of the file-system (due to practical reasons) among other things).

I've currently ported bash(1.08) and gcc(1.40), and things seem to work. This implies that I'll get something practical within a few months, and I'd like to know what features most people would want. Any suggestions are welcome, but I won't promise I'll implement them :-)

Linus

PS. Yes – it's free of any minix code, and it has a multi-threaded fs. It is NOT portable (uses 386 task switching etc), and it probably never will support anything other than AT-harddisks, as that's all I have :-(.

Quite an understated beginning if I ever heard one!

There's some debate in the Linux community as to whether we should be celebrating Linux's birthday today or on October 5 when the first public release was made, but Linus says he is O.K. with you celebrating either one, or both! So as we say happy birthday, let's take a quick look back at the years that have passed and how far we have come.

Via OpenSource.

26 August, 2015 12:48PM by Rafael Laguna (noreply@blogger.com)

Thomas Ward: Landscape and Gitlab on the same server: Headaches, and thoughts.

This is a mini case study, or rather a report from me, on how difficult it can be to run multiple services on the same server, especially when they listen on similar ports for different purposes. In this post, I examine the headaches of making two things work on the same server: GitLab (via their Omnibus .deb packages) and Landscape (Canonical’s systems management tool).

I am not an expert on either piece of software listed, but what I do know I will state here.

The Software

Landscape

Many of you have probably heard of Landscape, Canonical’s systems management tool for the Ubuntu operating system. Some of you probably know that we can deploy Landscape standalone for our own personal use, with 10 virtual and 10 physical machines managed by Landscape, via Juju or manually.

Most of my systems/servers are Ubuntu, and I have enough that management by one individual is a headache. In the workplace, we have an entire infrastructure set up for a specific set of applications, all on an Ubuntu base, and a similar headache in managing them all one at a time. For me, discovering Landscape Dedicated Server, the set-it-up-yourself variant, makes management FAR easier. Landscape has a dependency on Apache.

GitLab

GitLab is almost like GitHub, in a sense. It provides a web interface for working with code via the Git version control system. GitHub and GitLab are both very useful, but for those of us wanting the same interface within a single organization, or for personal use, and not trusting cloud hosts like GitHub or GitLab’s cloud, we can run it via their Omnibus package, which is GitLab pre-packaged for different distributions (Ubuntu included!).

It includes its own copy of nginx for serving content, and uses Unicorn for the Ruby components. It initially listens on both port 80 and port 8080, per the GitLab configuration file, which rewrites and modifies all the other configurations for GitLab, including those for both of these servers.

The tricky parts

But then, I ran into a dilemma on my own personal setup of it: What happens if you need Landscape and multiple other sites run from the same server, some parts with SSL, some without? Throw into the mix that I am not an Apache person, and part of the dilemma appears.

1: Port 8080.

There’s a conflict between these two softwares. Part of Landscape (I believe the appserver part) and part of GitLab (it’s Unicorn server, which handles the Ruby-to-nginx interface both try and bind to port 8080.

2: Conflicting Web Servers on Same Web Ports

Landscape relies on Apache. GitLab relies on its own-shipped nginx. Both are set by default to listen on port 80. Landscape’s Apache config also listens on HTTPS.

These configurations, out of the box by default, have a very evil problem: both try to bind to port 80, so they don’t work together on the same server.

My solution

Firstly, some information. The nginx bundled as part of GitLab is not easily configured for additional sites; it's not very friendly as a 'reverse proxy' handler. Secondly, I am not an Apache person. Sure, you may be able to get Apache to work as the 'reverse proxy', but it is unwieldy for me, as I'm an nginx guy.

These steps also needed to be done with Landscape turned off. (That’s as easy as running sudo lsctl stop)

1: Solve the Port 8080 conflict

Given that Landscape is something by Canonical, I chose to not mess with it. Instead, we can mess with GitLab to make it bind Unicorn to a different port.

What we have to do with GitLab is tell its Unicorn to listen on a different IP/Port combination. These two lines in the default configuration file control it (the file is located at /etc/gitlab/gitlab.rb in the Omnibus packages):

# unicorn['listen'] = '127.0.0.1'
# unicorn['port'] = 8080

These are commented out by default; the default binding is 127.0.0.1:8080. We can easily change GitLab's configuration by editing the file and uncommenting both lines. We have to uncomment both, because otherwise it tries to bind to the specified port, but also to *:8080 (which breaks Landscape's services). After making those changes, we run sudo gitlab-ctl reconfigure, and it redoes its configurations and adapts everything to the changes we just made.
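For instance, the edited lines might end up looking like this (the post doesn't say which port was actually chosen, so the 8081 below is just an arbitrary example of a free port):

unicorn['listen'] = '127.0.0.1'
unicorn['port'] = 8081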

2: Solve the web server problem

As I said above, I'm an nginx guy. I also discovered that revising GitLab's nginx server to do this is painful, so I did something ingenious instead.

First up: Apache.

I set the Apache bindports to be something else. In this case, I revised /etc/apache2/ports.conf to be the following:

# If you just change the port or add more ports here, you will likely also
# have to change the VirtualHost statement in
# /etc/apache2/sites-enabled/000-default.conf

Listen 10080

<IfModule ssl_module>
    Listen 10443
</IfModule>

<IfModule mod_gnutls.c>
    Listen 10443
</IfModule>

# vim: syntax=apache ts=4 sw=4 sts=4 sr noet

Now, I went into the sites-enabled configuration for Landscape, and also changed the bindports accordingly – the HTTP listener on Port 80 now listens on 10080, and the SSL listener on Port 443 now listens on 10443 instead.

Second: GitLab.

This one’s easier, since we simply edit /etc/gitlab/gitlab.rb, and modify the following lines:

#nginx['listen_addresses'] = ['127.0.0.1']
#nginx['listen_port'] = 80

First, we uncomment the lines. Then we change the 'listen_port' item to whatever we want; I chose 20080. Then sudo gitlab-ctl reconfigure will apply those changes.
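Uncommented and edited, the lines then read:

nginx['listen_addresses'] = ['127.0.0.1']
nginx['listen_port'] = 20080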

Finally, a reverse proxy server to handle everything.

Behold, we introduce a third web server: nginx, 1.8.0, from the NGINX Stable PPA.

This works by default because we already changed all the important bindhosts for services. Now the headache: we have to configure this nginx to do what we want.

Here’s a caveat: I prefer to run things behind HTTPS, with SSL. To do this, and to achieve it with multiple domains, I have a few wildcard certs. You’ll have to modify the configurations that I specify to set them up to use YOUR SSL certs. Otherwise, though, the configurations will be identical.

I prefer to use different site configuration files for each site, so we’ll do that. Also note that you will need to put in real values where I say DOMAIN.TLD and such, same for SSL certs and keys.

First, the catch-all for catching other domains NOT hosted on the server, placed in /etc/nginx/sites-available/catchall:

server {
    listen 80 default_server;

    server_name _;

    return 406; # HTTP 406 is "Not Acceptable". 404 is "Not Found", 410 is "Gone"; I chose 406.
}

Second, a snippet file with the configuration to be imported in all the later configs, with reverse proxy configurations and proxy-related settings and headers, put into /etc/nginx/snippets/proxy.settings.snippet:


proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_max_temp_file_size 0;

proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;

proxy_buffer_size 4k;
proxy_buffers 4 32k;
proxy_busy_buffers_size 64k;
proxy_temp_file_write_size 64k;

Third, the reverse-proxy configuration for Landscape, which is fairly annoying and took me multiple tries to get working right, placed in /etc/nginx/sites-available/landscape_reverseproxy. Don’t forget that Landscape needs SSL for parts of it, so you can’t skip SSL here:


server {
    listen 443 ssl;

    server_name landscape.DOMAIN.TLD;

    ssl_certificate PATH_TO_SSL_CERTIFICATE; ##### PUT REAL VALUES HERE!
    ssl_certificate_key PATH_TO_SSL_CERTIFICATE_KEY; ##### PUT REAL VALUES HERE

    # These are courtesy of https://cipherli.st, minus a few things.
    ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off;

    include /etc/nginx/snippets/proxy.settings.snippet;

    location / {
        proxy_pass https://127.0.0.1:10443/;
    }

    location /message-system {
        proxy_pass https://127.0.0.1:10443/;
    }
}

server {
    listen 80;
    server_name landscape.DOMAIN.TLD;

    include /etc/nginx/snippets/proxy.settings.snippet;

    location / {
        return 301 https://landscape.DOMAIN.TLD$request_uri;
    }

    location /ping {
        proxy_pass http://127.0.0.1:10080/;
    }
}

Fourth, the reverse-proxy configuration for GitLab, which was not as hard to get working. Remember, I put this behind SSL, so I have SSL configurations here. I’m including comments on what to change if you do NOT want SSL:

# If you don't want to have the SSL listener, you don't need this first server block
server {
    listen 80;
    server_name gitlab.DOMAIN.TLD;

    # We just send all HTTP traffic over to HTTPS here.
    return 302 https://gitlab.DOMAIN.TLD$request_uri;
}

server {
    listen 443 ssl;
    # If you want to have this listen on HTTP instead of HTTPS,
    # uncomment the below line, and comment out the other listen line.
    #listen 80;
    server_name gitlab.DOMAIN.TLD;

    # If you're not using HTTPS, remove from here to the line saying
    # "Stop SSL Remove" below
    ssl_certificate /etc/ssl/hellnet.io/hellnet.io.chained.pem;
    ssl_certificate_key /etc/ssl/hellnet.io/hellnet.io.key;

    ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_tickets off; # Requires nginx >= 1.5.9
    # Stop SSL Remove

    include /etc/nginx/snippets/proxy.settings.snippet;

    location / {
        proxy_pass http://127.0.0.1:20080/;
    }
}

System specifications considerations

Landscape is not light on resources. From what I’ve observed, it takes about a gig of RAM to run safely, but 2GB is recommended.

GitLab recommends AT LEAST 2GB of RAM. It uses at least that, so you should have 3GB for this at the minimum.

Running both demands just over 3GB of RAM. You can run them on a 4GB box, but it’s better to have double that, just in case, especially if Landscape and GitLab both get heavy use. I run them on an 8GB machine, a former desktop converted into a Linux server.

26 August, 2015 12:12PM

Raphaël Hertzog: Freexian’s report about Debian Long Term Support, July 2015

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In July, 79.50 work hours have been dispatched among 7 paid contributors. Their reports are available:

Evolution of the situation

August has seen a small decrease in sponsored hours (71.50 hours per month) because two sponsors did not pay their renewal invoices on time. That said, they reconfirmed their willingness to support us, and things should be fixed after the summer. We should then be able to reach our first milestone of funding the equivalent of a half-time position, in particular since a new platinum sponsor might join the project.

DebConf 15 happened this month and Debian LTS was featured in a talk and in a work session. Have a look at the video recordings:

In terms of security updates waiting to be handled, the situation is better than last month: the dla-needed.txt file lists 20 packages awaiting an update (4 fewer than last month), and the list of open vulnerabilities in Squeeze shows about 22 affected packages in total (11 fewer than last month). The new LTS frontdesk ensures regular triage of CVE reports, and the difference between both counts dropped significantly. That's good!

Thanks to our sponsors

Thanks to Sig-I/O, a new bronze sponsor, which joins our 35 other sponsors.


26 August, 2015 09:14AM

Mattia Migliorini: You Might Need a Pro for These Tech Upgrades

By now, you are probably more than a little tired of hearing people tell you how easy it is to do things like build a website or add ecommerce to an existing site. But when do you need a professional?

Does It Affect the Customer Experience?

If the thing you want to do could adversely affect the client experience should it go horribly wrong, then you will want to bring in a licensed professional. The last thing you want is to inadvertently do something that increases customer confusion.

Avoid changing major design elements of your site just because you are bored. If you are not a designer, you may be changing something that is crucial to navigation or discoverability. It is like knocking out a wall without determining whether it is load-bearing. If your site enjoys a high level of customer satisfaction, leave changes to a pro.

Does It Affect Security?

The only thing more sacrosanct than customer experience is customer security. At this point in time, it is safe to say that no company ought to be left as the sole guardian of consumer security. At the very least, there needs to be third-party security auditing to be sure things are as secure as you think they are.

That is the type of thing that is outsourced to IT services from Firewall Technical, and other such companies. Not every company is big enough to justify having its own IT department. But if you handle customer data, you are required to perform due diligence. In some instances, that means outsourcing security matters to a professional.

Is It Going to Void Your Warranty?

There are plenty of changes you can make to your tech and web presence that are inward facing. If you have the time and skills to take on those projects, knock yourself out. But even those projects should be shifted to a professional if there is a danger of voiding your warranty should something go awry. Even if nothing goes wrong, some upgrades will void your warranty just because you did them.

You don’t know how, watch a couple of YouTube videos, and have at it. But when it is time to upgrade those slow, unreliable, spinning hard drives to SSDs, check your nerve, and your warranty. While one may be sufficient, the other may not be.

Some people feel ashamed to call for help when it is something they should be able to do themselves. But the real shame is letting pride be the cause of your downfall when help was only a phone call away.

The post You Might Need a Pro for These Tech Upgrades appeared first on deshack.

26 August, 2015 06:27AM

Joel Leclerc: Idea: Non-windowing display server

For the TL;DR folk who are concerned with the title: it's not an alternative to Wayland or X11. It's a layer that Wayland compositors (or other programs) can use.

As a quick foreword: I'm still a newbie in this field. While I try my best to avoid inaccuracies, there might be a few things I state here that are wrong; feel free to correct me!

Wayland is mainly a windowing protocol. It allows clients to draw windows (or, as the wayland documentation puts it, “surfaces”), and receive input from those surfaces. A wayland server (or “compositor”) has the task of drawing these surfaces, and providing the input to the clients. That is the specification.

However, where does a compositor draw these surfaces to? How does the compositor receive input? It has to provide many backends for the various methods of drawing the composited surface. For example, the weston compositor has support for drawing the composited surface using 7 different backends (DRM, Linux framebuffer, headless [a fake rendering device], RDP, Raspberry Pi, Wayland, and X11). The amount of work put into making these backends work must be incredible, which is exactly where the problem lies: it's arguably too much work for a developer to put in if they want to make a new compositor.

That’s not the only issue though. Another big problem is that there is then no standard way to configure the display. Say you wanted a wayland compositor to change the video resolution to 800×600. The only way to do that is to use a compositor-specific extension to the protocol, since the protocol, AFAIK, has no method for changing the video resolution — and rightfully so. Wayland is a windowing protocol, not a display protocol.

My idea is to create a display server that doesn’t handle windowing. It handles display-related things, such as drawing pixels on the screen, changing video mode, etc… Wayland compositors and other programs that require direct access to the screen could then use this server and trust that the server will take care of everything display-related for them.

I believe that this would enable much simpler code, and add a good deal more power and flexibility.

To give a more graphic description (forgive my horrible diagramming skills):

Current Stack:

[Diagram: wayland_current]

Proposed Stack:

 

[Diagram: wayland_new]

I didn’t talk about the input server, but it’s the same idea as the display server: Have a server dedicated to providing input. Of course, if the display server uses something like SDL as the backend, it may have to also provide the input server, due to the SDL library, AFAIK, doesn’t allow a program to access the input of another program.

This is an idea I have toyed around with for some time now (ever since I tried writing my own wayland compositor, in fact! XD), so I’m curious as to what people think of it. I would be more than happy to work with others to implement this.


26 August, 2015 05:42AM

hackergotchi for Xanadu developers

Xanadu developers

Happy birthday, Linux!!!

On this day in 1991, the birth of the Linux kernel was announced to the world.

More information:


Filed under: General Tagged: aniversario, festividades, linux

26 August, 2015 01:27AM by sinfallas

August 25, 2015

hackergotchi for Ubuntu developers

Ubuntu developers

Aaron Honeycutt: My contributions to KDE and Kubuntu since Akademy

Packaging

During Akademy I had the great advantage of being in the same room as our (Kubuntu) top packagers (Riddell and Scarlett), who helped me learn to package and to make patches for errors flagged by the CI/QA machine. Since then I've also had the help of Philip (yofel) and Clive (clivejo) in the #kubuntu-devel IRC room. I've packaged digikam and recently kdenlive (both need testing in my ppa :) ), and also got a new Kubuntu Setting package out there (ppa), which overlays the slideshow in Muon Discover to highlight some top KDE applications.

Artwork

I also worked with Andrew from the VDG on a Breeze High Contrast color scheme which made it in for Plasma 5.4 before the freeze!

  • commit: https://quickgit.kde.org/?p=breeze.git&a=commit&h=3ebb6ed33fb6522b0f5ca855a9fbd2b79c165e65

 

I can’t thank the Ubuntu Community enough for funding my trip to Akademy this year! THANK YOU!

25 August, 2015 11:39PM

Ubuntu Kernel Team: Kernel Team Meeting Minutes – August 25, 2015

Meeting Minutes

IRC Log of the meeting.

Meeting minutes.

Agenda

20150825 Meeting Agenda


Release Metrics and Incoming Bugs

Release metrics and incoming bug data can be reviewed at the following link:

  • http://kernel.ubuntu.com/reports/kt-meeting.txt


Status: CVE’s

The current CVE status can be reviewed at the following link:

  • http://kernel.ubuntu.com/reports/kernel-cves.html


Status: Wily Development Kernel

We have rebased our Wily master-next branch to the latest upstream
v4.2-rc8 and uploaded to our ~canonical-kernel-team PPA. The fglrx DKMS
package is still failing to build with this latest kernel. We are
actively investigating to get this resolved.
-----
Important upcoming dates:

  • https://wiki.ubuntu.com/WilyWerewolf/ReleaseSchedule
    Thurs Aug 27 – Beta 1 (~2 days away)
    Thurs Sep 24 – Final Beta (~4 weeks away)
    Thurs Oct 8 – Kernel Freeze (~6 weeks away)
    Thurs Oct 15 – Final Freeze (~7 weeks away)
    Thurs Oct 22 – 15.10 Release (~8 weeks away)


Status: Stable, Security, and Bugfix Kernel Updates – Precise/Trusty/Utopic/Vivid

Status for the main kernels, until today:

  • Precise – Verification & Testing
  • Trusty – Verification & Testing
  • lts-Utopic – Verification & Testing
  • Vivid – Verification & Testing

    Current opened tracking bugs details:

  • http://kernel.ubuntu.com/sru/kernel-sru-workflow.html
    For SRUs, SRU report is a good source of information:
  • http://kernel.ubuntu.com/sru/sru-report.html

    Schedule:

    cycle: 16-Aug through 05-Sep
    ====================================================================
    14-Aug Last day for kernel commits for this cycle
    15-Aug – 22-Aug Kernel prep week.
    23-Aug – 29-Aug Bug verification & Regression testing.
    30-Aug – 05-Sep Regression testing & Release to -updates.


Open Discussion or Questions? Raise your hand to be recognized

No open discussion.

25 August, 2015 07:53PM

Sujeevan Vijayakumaran: Visiting FrOSCon …

Last weekend, on the 22nd and 23rd of August, FrOSCon took place in St. Augustin (near Bonn) in Germany. It is one of the bigger open source conferences, and this was my first visit. Short summary: it was great! There were many talks; too bad many of them ran at the same time, but luckily all talks were recorded.

I gave two talks myself: one about Snappy Ubuntu Core and one about Ubuntu Phone. You can watch both talks here and here; both are in German, and both were well attended. (Here is a small photo!)

On Saturday I didn't visit any more talks. In the evening, after the talks, there was a free barbecue for everybody! Also, entrance to the conference was completely free this year, which I strongly support.

On Sunday I went to the talk by Benjamin Mako Hill about „Access Without Empowerment“, which was the only English talk I attended. I also visited a few more talks; if you are interested in watching the others, you can have a look here.

I spent the rest of the time mostly talking to people at the Ubuntu booth and showing off my Ubuntu Phones. Besides that, we had a small Taskwarrior meetup with Dirk Deimeke, Wim Schürmann and Lynoure Braakman, which was quite fun and interesting ;).

I really like visiting different open source conferences, mostly to learn new things and to talk to old and new friends. This time I met many „old“ friends and also some new people. Surprisingly, I had the chance to meet and talk to Niklas Wenzel from the Ubuntu community, who is involved in the development of various apps and features of Ubuntu Phone (and he's way younger than I would have expected), and also Christian Dywan from Canonical.

I'm really looking forward to the next conferences, which will be Ubucon in Berlin and OpenRheinRuhr in Oberhausen later this year!

25 August, 2015 06:50PM

Lubuntu Blog: Lubuntu 15.10 beta 1

Beta 1 is now available for testing; please help test it. New to testing? Head over to the wiki for all the information and background you need, along with contact points.


Also, there's a new Facebook group named LubuntuQA for testing new Lubuntu ISOs, as well as bug triage. You can find it here.

And last, but not least, a new ISO made by Julien Lavergne with the LXQt desktop integrated, for testing the evolution of Lubuntu Next, is available here.

25 August, 2015 05:07PM by Rafael Laguna (noreply@blogger.com)

hackergotchi for Blankon developers

Blankon developers

Sokhibi: Graphic Design Training with Inkscape at BLC Pekalongan

On 21 August 2015 I ran a graphic design training session using an open source application (Inkscape). The event took place at BLC Pekalongan, Jl. Majapahit No. 5, Pekalongan.

I traveled to Pekalongan from Semarang by public transport, taking the train from Poncol station; the train I rode was the Kamanda Purwokerto - Tegal.
The train left Poncol station (Semarang) at 05:15 WIB and arrived in Pekalongan at 06:39 WIB.

Long story short, the Inkscape graphic design training started at 09:05 WIB. The first material I presented was an introduction to several commonly used graphics applications, both proprietary and open source.

In this session I also talked about the price comparison between proprietary and open source graphics applications.


The next material was hands-on practice with the Inkscape graphics application, namely drawing, or designing, a business card.

In my opinion, the Inkscape training at BLC Pekalongan went fairly smoothly, apart from a few obstacles:
  • Some of the mice in the training room were faulty, making them uncomfortable to use (a single click registered as a double click).
  • The OS on the computers was unfamiliar to novice users, because an application's menu bar hides automatically when the application is not in use (auto-hide); this really hampered the learning process and wasted a lot of time.
The Inkscape training ended at 11:10 WIB. Before we went our separate ways, as at most other training events, we took a group photo together.



That is my short report; I hope training like this can be held in other cities as well.

Oh, I almost forgot: if you are interested in the practice material from the event, you can watch the video on YouTube.




25 August, 2015 01:02AM by Istana Media (noreply@blogger.com)

hackergotchi for Ubuntu developers

Ubuntu developers

The Fridge: Ubuntu Weekly Newsletter Issue 431

Welcome to the Ubuntu Weekly Newsletter. This is issue #431 for the week August 17 – 23, 2015, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Paul White
  • Elizabeth K. Joseph
  • Chris Guiver
  • Jim Connett
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

25 August, 2015 12:53AM

August 24, 2015

Luis de Bethencourt: Notes from the GStreamer Summer Hackfest 2015



A week ago a dozen cool guys, who happen to be GStreamer developers, met in Montpellier for the GStreamer Summer Hackfest 2015. We got together to work for 3 days, over the weekend, without a fixed agenda. The hacking venue, Coworkin' Montpellier, was provided by Edward Hervey (bilboed), and most meals were provided by GStreamer.

With the opportunity to work in the same room and enjoy the lovely city of Montpellier, developers sped up patch reviews and fixes by being able to easily discuss them with colleagues through the high-bandwidth, low-latency face-to-face protocol. They also took the chance to discuss major features and to try to settle problems that have been waiting for design decisions for a long time in the community. This is a non-exhaustive list of work done at the event:

  • Performance improvement for caps negotiation: Caps negotiation is part of GStreamer application startup and was vastly optimized; initial tests show it taking 49.6% less time.


  • nvenc element: A new nvenc element for recent NVIDIA GPUs was released. It currently implements H.264 encoding.


  • 1.6 release: At the hackfest a few blocker issues were revisited to get the project ready for releasing version 1.6, which will be a stable release. The release candidate, version 1.5.90, was published right after the hackfest.


  • decodebin3 design proposal/discussion: A new version of the playback stack was proposed and discussed. It should keep the features of the current version, but also cover use cases needed by targets with restricted resources, such as embedded devices (TVs and mobile, for example), and provide a stream selection API for applications to use. This is a very important feature for Tizen, to support more hardware-enabled scenarios on its devices.


  • Moving to Phabricator: The community started experimenting with the recently created Phabricator instance for GStreamer's bug and code tracking, tweaking settings and scripts before a full transition from Bugzilla can be made.


  • Improvements to GtkGLSink: The sink had flickering and scaling noise, among some other problems. Most are now fixed.


  • libav decoders direct rendering: Direct rendering allows decoders to write their output directly to the screen, increasing performance by reducing the number of memory copies. The libav video decoders had their direct rendering redone for the new libav API, and it is now enabled again.


  • Others: improvements to RTP payloaders and depayloaders for different formats, discussions about how to provide more documentation, bug fixes and more.

Without any major core design decision pending, this hackfest allowed the attendees to work on the different areas they wanted to focus on, and it was very productive on many fronts. With the GStreamer Conference around the corner, people on the organizing committee also discussed which talks should be accepted and other organizational details.


A huge note of gratitude to our host, Edward Hervey (shown below). The venue was very comfortable, the fridge always stocked, and the city a lot of fun!


Lively discussion about GST Streams and decodebin3


If you missed the notes from the previous hackfest, read them here.

24 August, 2015 04:03PM

Canonical Design Team: Publishing Vanilla

We’ve got a new CSS framework at Canonical, named Vanilla. My colleague Ant has a great write-up introducing Vanilla. Essentially it’s a CSS microframework powered by Sass. The build process consists of two steps, an open source build, and a private build.

Open Source Build

While there are inevitably components that need to be kept private (keys, tokens, etc.), being Canonical we want to keep much of the build in the open, in addition to the code. We wanted the build to be as automated and as close to CI/CD principles as possible. Here's what happens:

Committing to our github repository kicks off a travis build that runs gulp tests, which include sass-lint. We also use david-dm.org to make sure our npm dependencies are up to date. All of these have nice badges we can link to right from our github page, so the first thing people see is the health of our project. I really like this: it keeps us honest, and informs the community.

Not everything can be done with travis, however, as publishing Vanilla to npm and updating our project page and demo site require some private credentials. For the confidential build, we use Jenkins (formerly Hudson), a Java-based build management system.

Private Build with Jenkins

Our Jenkins build does a few things:

  1. Increment the package.json version number
  2. npm publish (package)
  3. Build Sass with npm install
  4. Upload css to our assets server
  5. Update Sassdoc
  6. Update demo site with new CSS

Robin put this functionality together in a neat bash script: publish.sh.
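
If you're curious what that looks like, here is a rough sketch of the shape of such a script (illustrative only: the task names, assets host and demo helper below are assumptions, not taken from the real publish.sh):

#!/bin/bash
# Illustrative sketch of the publish flow described above, not the real publish.sh.
set -e

BUMP=${1:-point}                        # Jenkins passes point, minor or major
if [ "$BUMP" = "point" ]; then
    BUMP=patch                          # npm's own name for a point release
fi

npm version "$BUMP"     # 1. increment the package.json version number
npm publish             # 2. publish the package to npm
npm install             # 3. install dependencies and build the Sass
scp build/*.css assets@assets.example.com:/srv/assets/vanilla/   # 4. upload CSS (host illustrative)
gulp sassdoc            # 5. regenerate the Sassdoc (task name assumed)
./update-demo-site.sh   # 6. update the demo site with the new CSS (hypothetical helper)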

We use this script in a Jenkins build that we kick off with one of a few parameters (point, minor or major) to indicate which part of the version to update in package.json. This gives our devs push-button releases on the fly, with the same build, from bugfixes all the way up to stable releases (1.0.0).

After less than 30 seconds, our demo site, which showcases framework elements and their usage, is updated. This demo is styled with the latest version of Vanilla, and also serves as documentation and a test of the CSS. We take advantage of GitHub's HTML publishing feature, GitHub Pages. Anyone can grab – or even hotlink – the files on our release page.
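
As a usage note, pulling the framework into your own project is then a one-liner (assuming the package is published on npm under the name vanilla-framework):

npm install vanilla-framework   # then @import the Sass from your own stylesheets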

The Future

It’d be nice for the regression test (which we currently just eyeball) to be automated, perhaps with a visual diff tool such as PhantomCSS or a bespoke solution with Selenium.

Wrap-up

Vanilla is ready to hack on, go get it here and tell us what you think! (And yes, you can get it in colours other than Ubuntu Orange)

24 August, 2015 01:35PM

hackergotchi for HandyLinux

HandyLinux

Forum and wiki move to HTTPS

Hello

A quick note to let you know that we are currently moving the forum and the wiki to HTTPS.
So during the transition, if you follow an old-style 'forum.handylinux.org' link after we have moved to 'https://handylinux.org/forum', you will see this message:

Bad HTTP_REFERER. You were redirected to this page by an unknown or forbidden source. If the problem persists, please make sure that the "Base URL" field on the Administration » Options page is set correctly and that you are visiting the forums via that URL. You can find more information in the FluxBB documentation.



Simply update your links or bookmarks accordingly (replace "http://forum.handylinux.org" with "https://handylinux.org/forum").

The forum moved to HTTPS on Monday, 24 August at 14:45.

What is HTTPS? The answer is here!

Cheers


HandyLinux - the Debian distribution without the headache...

24 August, 2015 01:30PM by fibi

hackergotchi for SparkyLinux

SparkyLinux

Updates 2015/08/24

 

There are a few updates of third-party applications available in our repository:
– TOR Browser 5.0.1
– Enlightenment 0.19.9
I removed two modules:
* systray – it hasn't worked for a long time
* econnman – not in use at all
and added a post-install script to fix the freqset module's problem with a wrong chmod
– Master PDF Editor 3.3.10
– SMTube 15.8.0-1
– SpiderOakOne 6.0.1
– TeamViewer 10.0.46203
– WPS Office 9.1.0.4975~a19p1
– Tint2 0.12.2

 

24 August, 2015 12:23PM by pavroo

hackergotchi for Grml developers

Grml developers

Michael Prokop: DebConf15: “Continuous Delivery of Debian packages” talk

At the Debian Conference 2015 I gave a talk about Continuous Delivery of Debian packages. My slides are available online (PDF, 753KB). Thanks to the fantastic video team there’s also a recording of the talk available: WebM (471MB) and on YouTube.

24 August, 2015 12:15PM

hackergotchi for Ubuntu developers

Ubuntu developers

Nathan Haines: Ubuntu Free Culture Showcase submissions are now open!

Ubuntu 15.10 is coming up soon, and what better way to celebrate a new release than with beautiful new content to go with it? The Ubuntu Free Culture Showcase is a way to celebrate the Free Culture movement, where talented artists across the globe create media and release it under licenses that encourage sharing and adaptation. We're looking for content which shows off the skill and talent of these amazing artists and will greet Ubuntu 15.10 users. We announced the showcase last week, and now we are accepting submissions at the following groups. For more information, please visit the Ubuntu Free Culture Showcase page on the Ubuntu wiki.

24 August, 2015 03:40AM

Nathan Haines: Making Hulu videos play in Ubuntu

A couple of weeks ago, Hulu made some changes to their video playback system to incorporate Adobe Flash DRM technology. Unfortunately, this means Hulu no longer functions on Ubuntu: Adobe stopped supporting Flash on Linux several years ago, and their DRM requires HAL, which was likewise obsoleted about 4 years ago and dropped from Ubuntu in 13.10.

While Hulu began detecting Linux systems and displaying a link to Adobe’s support page when playback failed, and the Adobe site correctly identifies the lack of HAL support as the problem, the instructions given no longer function because HAL is no longer provided by Ubuntu.

Fortunately, Michael Blennerhassett has maintained a Personal Package Archive which rebuilds HAL so that it can be installed on Ubuntu. Adding this PPA and then installing the “hal” package will allow you to play Hulu content once again.

To do this, first open a Terminal window by searching for it in the Dash or by pressing Ctrl+Alt+T.

Next, type the following command at the command line and press Enter:

sudo add-apt-repository ppa:mjblenner/ppa-hal

You will be prompted for your password and then you will see a message from the PPA maintainer. Press Enter, and the PPA will be added to Ubuntu’s list of software sources. Next, have Ubuntu refresh its list of available software, which will now include this PPA, by typing the following and pressing Enter:

sudo apt update

Once this update finishes, you can then install HAL support on your computer by searching for “hal” in the Ubuntu Software Center and installing the “Hardware Abstraction Layer” software, or by typing the following command and pressing Enter:

sudo apt install hal

and confirming the installation when prompted by pressing Enter.

[Image: book cover]

I explain more about how to install software on the command line in Chapter 5 and how to use PPAs in Chapter 6 of my upcoming book, Beginning Ubuntu for Windows and Mac Users, coming this October from Apress. This book was designed to help Windows and Mac users quickly and easily become productive on Ubuntu so they can get work done immediately, while providing a foundation for further learning and exploring once they are comfortable.

24 August, 2015 02:13AM

August 23, 2015

John Baer: Never buy internet viagra

Viagra is a very specialized drug, and its use should not be taken lightly. Not taking Viagra in a responsible way can seriously damage your health, especially if you have problems such as a heart condition, or suffer from high blood pressure. The problem with Viagra is that it is now being sold under many different names on the Internet. This is called generic medication and is often produced in places like India or China. Is it safe? No, it isn't always safe. Maria, who works for London escorts, says that her father bought some. He was embarrassed about his medical condition and did not want to speak to his doctor. But as so many London escorts know, this is not a drug to be played around with at all.

Maria has worked for charlotte action escorts services for about two years. During that time she has always known that her father has suffered from a heart condition. The condition reduces his circulation quite severely, and makes it difficult for him to maintain a good erection. Most London escorts know that this can happen to men who have circulatory problems, and Viagra is one of the many solutions available. But, you should never take any drug without having spoken to your doctor first.

Maria's dad took Viagra which he had bought off the Internet and ended up having a heart attack. She says it is complicated, but the Viagra contraindicated with the medication that he was already on. That means it caused a problem: the two drugs mixed together caused her father to have a heart attack. Maria had to take two weeks off from London escorts services and go to look after her mom whilst her dad was in hospital. It was a worrying time for my mom, she says, so I simply had to take time off from London escorts. It was the only way to cope.

In the end, Maria's dad recovered and Maria was able to return to her job at London escorts services. It was scary, she says, and taught me a valuable lesson. You should never take drugs without knowing what they can do, and I am sure that many London escorts appreciate that Viagra should not be played around with, just like other medications. The fact is, says Maria, my father could have died. Of course, my mom and I would both have been devastated.

The Internet is full of sexual performance enhancing drugs and you should at all times be careful.

There are some safe alternatives out there, such as amino acids and herbal alternatives. However, London escorts would like you all to know that the herb ginseng can be dangerous as well. It can "knock out" some heart medications and raise your blood pressure. This is another online sexual enhancement drug which London escorts warn you to stay away from at all times. If you do have a concern about your performance, it is always best to see your local GP.

23 August, 2015 01:00PM

Joel Leclerc: Using Openlux to help your sleep and/or relax your eyes

If you are familiar with the research suggesting that blue light affects your sleep, you might also be familiar with a (free!) piece of software named f.lux. I use it on my iDevices (I used to use it on my computers too), and it works great… except for a few issues.

The first is CPU consumption. Seriously, this software takes up a lot of CPU; that was the main reason behind ditching xflux (the X11 edition of the software). It also doesn't entirely block out blue light, even at the lowest color temperature it allows (this is true for the iOS version too). There were a number of other issues that became annoying over time (forced very long animations, a daemon that rarely works as intended, the software sometimes not working at all, the mouse cursor being left entirely out of the picture, etc.). These would (probably) all be simple to fix… however, it's free as in price, not as in freedom. The software is closed-source.

Openlux is a very simple open-source MIT-licensed clone I wrote that tries to address these issues (minus the mouse cursor issue, that one is a bit more complex). For now, it doesn’t contain as many features as xflux does, but it is only a first release. Animations and the lot will come later :)

I haven’t worked on packaging yet (if anyone wishes to spend some time doing this, that would be greatly appreciated!!), but for now, visit https://github.com/MiJyn/openlux for download and compilation information (sorry for the mess in main.c, I will get to that later!).

Here are a few usage examples:

openlux                      # Sets the screen color temperature to 3400K (the default)
openlux -k 1000              # Sets the color temperature to 1000K
openlux -k 2000 -b 0         # Sets color temperature to 2000K, but removes all blue light
openlux -k 2000 -b 255       # Ditto, but blue is set to 255 (maximum value, gives the screen a magenta-ish tone)
openlux -r 130 -g 150 -b 100 # Gives the screen a dark swamp green tint (Kelvin value is ignored)
openlux -k 40000             # Sets the screen color temperature to 40000K
openlux -i                   # Resets the screen color temperature

I personally like using openlux -k 10000 during the day (very relaxing for the eyes!), and openlux -k 2300 -b 40 during the night.
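
If you'd rather not run those by hand, cron can do the switching for you. A minimal sketch, assuming the binary is installed at /usr/local/bin/openlux and your X display is :0 (both assumptions; adjust to taste):

# crontab -e
DISPLAY=:0
0 8  * * * /usr/local/bin/openlux -k 10000        # day: relaxing 10000K
0 21 * * * /usr/local/bin/openlux -k 2300 -b 40   # night: warm, with very little blue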

I hope this can be useful for you!! If you have any issues, suggestions, feedback, etc. (even if you just want to say thank-you — those are always appreciated ^^), feel free to write a comment or send me an email!


23 August, 2015 12:20AM

August 22, 2015

hackergotchi for Xanadu developers

Xanadu developers

Some GNU/Linux distributions you should try (part one)

If you are a distro-hopping fan you will like this post, as it is the first in a small series dedicated to showcasing little-known but interesting distributions. I hope you like it…

  • Semplice Linux: A light, simple and fast distribution based on Debian's unstable branch. It includes a small collection of up-to-date applications running on top of the Openbox window manager.

  • ExTiX: A desktop Linux distribution and live DVD based on Ubuntu, featuring a customized GNOME 3 desktop.

  • SliTaz GNU/Linux: A mini distribution and live CD designed to run quickly on machines with 256 MB of RAM. SliTaz uses BusyBox and a recent Linux kernel. It boots with Syslinux and provides more than 200 Linux commands, the lighttpd web server, the SQLite database, rescue tools, an IRC client, an SSH client and server powered by Dropbear, the X window system, JWM, gFTP, the Geany IDE, Mozilla Firefox, AlsaPlayer, GParted, a sound file editor and more.

  • PapyrOS: A distribution that has not had much luck since its earliest days. It started out as Quartz OS but had to change its name, since Apple held the rights to its graphics library. It then chose Quantum OS, which again had to be changed over rights issues. Under its apparently final name, PapyrOS aims to build a GNU/Linux distribution whose interface is based on Google's Material Design.

  • Devil-Linux: A Linux distribution created to be used as a router/firewall that boots entirely from CD-ROM. Devil-Linux can boot on an old PC, and it does not provide a graphical interface, making it a very light distribution. Nevertheless, it includes a wide range of services (DNS, web, FTP, SMTP, etc.), tools (MySQL, Lynx, Wget, etc.) and security utilities (OpenVPN, Shorewall, etc.), ensuring a high level of flexibility. By saving the configuration to a floppy disk, changes can be restored at boot time without using a writable device.

  • Alpine Linux: A Linux distribution based on uClibc and BusyBox that aims to be light and secure by default while remaining useful for general-purpose tasks. Alpine Linux uses the PaX and grsecurity patches in its default kernel and compiles all packages with stack-smashing protection. It is designed mainly for x86 routers, firewalls, VPNs, VoIP and servers.

  • fli4l (flexible internet router for linux, formerly floppy isdn for linux): A Linux distribution whose main task is to provide a small Linux system that turns almost any machine into a router. The distribution can run from a floppy disk and was created with the goals of having a simple configuration and running on old hardware.

  • LEAF: An embedded Linux for network devices for use in a variety of topologies. Although it can be used in other ways, it is mainly used as an Internet gateway, router, firewall and wireless access point.

Screenshot

  • Ultimate Edition: A derivative of Ubuntu and Linux Mint. The project's goal is to create a complete, seamlessly integrated, visually stimulating and easy-to-install operating system, with one-button upgrades. Other key features include a customized desktop and themes with 3D effects, support for a wide spectrum of networking options including Wi-Fi and Bluetooth, and the integration of many extra applications and package repositories.

  • Simplicity Linux: A derivative of Puppy Linux with LXDE as the default desktop environment. It comes in four editions: Obsidian, Netbook, Desktop and Media. The Netbook edition features cloud-based software, the Desktop flavor offers a collection of general-purpose software, and the Media variant is designed to give "lounge PC" users easy access to their media.

  • KaOS: A desktop Linux distribution featuring the latest version of the KDE desktop environment, the Calligra office suite, and other popular applications that use the Qt toolkit. It was inspired by Arch Linux, but the developers build their own packages, which are available from their repositories. KaOS follows a rolling-release development model and is built exclusively for 64-bit systems.

  • Linux Lite: A beginner-friendly Linux distribution based on Ubuntu LTS and featuring the Xfce desktop.

  • Knoppix: A live CD with a collection of GNU/Linux software, automatic hardware detection and support for many graphics cards, sound cards, SCSI and USB devices and other peripherals. Knoppix can be used as a Linux demo, an educational CD, a rescue system, or adapted and used as a platform for demonstrations of commercial software products. It is not necessary to install anything on the hard disk; thanks to on-the-fly decompression, the CD can hold up to 2 GB of executable software.

  • 4MLinux: A miniature Linux distribution focusing on four capabilities: maintenance (as a live rescue disk), multimedia (playing video DVDs and other multimedia files), miniserver (using the inetd daemon) and entertainment (providing several small Linux games).

  • Wifislax: A Slackware-based live CD containing a variety of security and forensics tools. The distribution's main claim to fame is the integration of various unofficial network drivers into the Linux kernel, thus providing out-of-the-box support for a large number of wired and wireless network cards.

  • Chakra: A powerful and user-friendly distribution and live CD originally forked from Arch Linux. It features a graphical installer, automatic hardware detection and configuration, the latest KDE desktop and a variety of tools and extras.

  • Salix: A Linux distribution based on Slackware that is simple, fast, easy to use and compatible with Slackware Linux. Optimized for desktop use, Salix features one application per task, custom package repositories, advanced package management with dependency support, ad hoc system administration tools, and creative, innovative artwork.

  • Scientific: A recompiled Red Hat Enterprise Linux, co-developed by Fermi National Accelerator Laboratory and the European Organization for Nuclear Research (CERN). Although it aims to be fully compatible with Red Hat Enterprise Linux, it also provides additional packages not found in its parent distribution. The most notable among these are various file systems, including Cluster Suite and Global File System (GFS), FUSE, OpenAFS, Squashfs and Unionfs; wireless networking support with firmware for Intel wireless, MadWiFi and NDISwrapper; Sun Java and the Java Development Kit; the lightweight IceWM window manager; R, a language and environment for statistical computing; and the Alpine mail client.

  • Netrunner: A Kubuntu-based distribution featuring a highly customized KDE desktop with extra applications, multimedia codecs, Flash and Java plugins, and a unique look and feel. The modifications are designed to enhance the user-friendliness of the desktop while preserving the freedom to tweak.

  • SparkyLinux: A distribution based on Debian. It includes desktop environments such as Razor-QT, LXDE, OpenBox/JWM, e17 and MATE, a wide range from which to pick your preferred environment. It is specifically built to run on old computers with few resources, which does not stop it from being a good, complete operating system.

  • MOFO Linux: Breaks censorship. Many governments are blocking certain websites for political reasons or simply to fight piracy. With this Linux distribution you can bypass those restrictions and access any kind of website, without censorship or blocking, browsing cyberspace in total freedom. It is a derivative of the Porteus GNU/Linux distribution, from the Slackware family, one of the oldest Linux distributions. MOFO can also be used as a live CD.

  • Q4OS GNU/Linux: Debian has thousands of children. Many people make their own adaptations to suit specific needs: aesthetics, performance, updates, simplicity… And inevitably, there had to be a child that tries to behave like Windows XP to attract users of that system.

  • Gobang: A GNU/Linux distribution of Polish origin whose look and name are strongly reminiscent of CrunchBang. Like CrunchBang it uses the Openbox window manager, the tint2 panel, Conky and similar aesthetics to offer a lightweight desktop, but with the difference that it is not based on Debian; instead it uses the repositories of Ubuntu 14.04.

  • Xiaopan OS: A distribution based on Tiny Core Linux for analyzing the strength of your wireless connections, thanks to the set of advanced tools it includes, most of them designed to break the WPA/WPA2/WPS/WEP security of Wi-Fi networks and accompanied by a wide set of drivers for the most common network cards.

  • IPCop: This distribution, which started as a fork of SmoothWall, in addition to all its integrated firewall and UTM (Unified Threat Management) features and its add-ons, also lets you create demilitarized zones (DMZ). It is free and can be a great security tool for business or home use; with it you can turn an old machine into a firewall/UTM that protects your entire home or company network.

  • AV Linux: A versatile Debian-based distribution featuring a large collection of audio and video production software. Additionally, it includes a custom kernel with IRQ threading enabled for low-latency audio performance. AV Linux can run directly from a live DVD or live USB storage device, though it can also be installed on a hard disk and used as a general-purpose operating system for daily tasks.

  • Emmabuntüs: A desktop Linux distribution based on Xubuntu. It strives to be beginner-friendly and reasonably light on resources so that it can be used on older computers. It also includes many modern features, such as a large number of pre-configured programs for everyday use, an application launcher bar, easy installation of non-free software and multimedia codecs, and quick setup through automated scripts. The distribution supports English, French and Spanish.

  • Calculate Linux: A Gentoo-based family of three distinct distributions: Calculate Directory Server (CDS), a solution that supports Windows and Linux clients via LDAP + SAMBA, providing proxy, mail and Jabber servers with streamlined user management; Calculate Linux Desktop (CLD), a workstation and client distribution with the KDE, GNOME or Xfce desktop that includes a wizard to configure a connection to CDS; and finally Calculate Linux Scratch (CLS), a live CD with a build framework for creating a custom distribution.

  • Ubuntu Kylin: An official Ubuntu subproject whose goal is to create a variant of Ubuntu that is better suited to Chinese users, using the simplified Chinese writing system. The project provides a polished, complete and fully customized experience for Chinese users out of the box, with a desktop user interface localized into simplified Chinese and software generally preferred by many Chinese users.

  • Pinguy: An Ubuntu-based distribution aimed at beginning Linux users. It features numerous user-friendly enhancements, out-of-the-box support for multimedia codecs and browser plugins, a heavily tweaked GNOME user interface with enhanced menus, panels and docks, and a careful selection of popular desktop applications for many common computing tasks.

  • Qubes OS: A security-oriented desktop Linux distribution based on Fedora whose main concept is "security by isolation", using domains implemented as lightweight Xen virtual machines. It attempts to combine two contradictory goals: making the separation between domains as strong as possible, mainly thanks to a clever architecture that minimizes the amount of trusted code, and making this isolation as transparent and seamless as possible.

  • Lunar Linux: A source-based Linux distribution with a unique package management system that builds each package, or module, for the machine it is being installed on. Although a complete Lunar installation can take a while, it is worth it, because the system tends to be quite fast once installed. Lunar began as a fork of Sorcerer GNU/Linux (SGL). The fork happened between late January and early February 2002 and was originally made by a small group of people who wanted to collaboratively develop and extend the Sorcerer technology. The project's original name was Lunar-Penguin, but the group decided to rename it Lunar Linux, while the Lunar-Penguin name has become a sort of umbrella which the team could use if they ever decide to collaboratively develop something besides Lunar Linux.

  • Parsix: A live and installation DVD based on Debian. The project's goal is to provide a ready-to-use, easy-to-install operating system based on Debian's testing branch and the latest stable release of the GNOME desktop. Extra software is available for installation from the distribution's own repositories.

  • Xanadu GNU/Linux: A rolling-release distribution based on Debian Sid, developed to be fast, light and secure, designed around the needs of the average user while offering tools for the advanced user. It uses Openbox as its window manager and LXDE as its desktop environment.


Filed under: Distribuciones Tagged: arch-derivado, debian-derivado, fedora-derivado, gentoo-derivado, linux, puppy-derivado, redhat-derivado, slackware-derivado, ubuntu-derivado

22 August, 2015 06:40PM by sinfallas

August 21, 2015

hackergotchi for Ubuntu developers

Ubuntu developers

Nekhelesh Ramananthan: Clock App Update: August 2015

We have been working on a new clock app update with lots of goodies :-) I thought I would summarize the release briefly. Huge props to Bartosz Kosiorek for helping out with this release and for coordinating with the Canonical designers on the stopwatch & timer designs.

General Improvements

We focused on many parts of the clock app for this release, ranging from the world clock feature to the alarms and the stopwatch.

  • Transitioned to the new 15.04 SDK ListItems which effectively results in a lot of custom code being removed and maintaining consistency with other apps. LP: #1426550
  • User added world cities previously were not translated if the user changed the phone language. This has been fixed. LP: #1477492
  • New navigation structure due to the introduction of Stopwatch
  • Replaced a few hard coded icons with system icons. LP: #1476580
  • Fixed not being able to add world cities with apostrophe in their names (database limitation). LP: #1473074

Stopwatch

This, along with Timer, is the single most requested feature since the clock app reboot, and I am thrilled to see it finally land in this update. It sports a couple of usability tweaks, like preventing the screen from dimming while the stopwatch is running and keeping the stopwatch running in the background regardless of whether the clock app is closed or the phone is switched off. The UI is clean and simple. Expect some more changes to this in the future. We reused a lot of code from Michael Zanetti's Stopwatch App.

[Screenshot: stopwatch]

Alarms

In this area we have fixed a good number of small bugs that overall improve the alarms experience. The highlight, of course, is the support for custom alarm sounds. Yes! You can now import music using the content hub and set it as your alarm sound to wake you in the morning.

[Screenshot: custom alarm sound]

Other bugs fixed include,

  • Changed the default alarm sound to something a bit stronger. LP: #1354370
  • Fixed confirmation behaviour being confusing in the alarm page header. LP: #1408015
  • Made the alarm times shown on the alarm page bigger and bolder. LP: #1365428
  • Adding/Deleting alarms will move the alarm list items up/down using a nice smooth animation
  • Alternate between alarm frequency and alarm ETA every 5 seconds using a fade animation. LP: #1466000
  • Fixed the alarm time being incorrectly set if the current time is a multiple of 5. LP: #1484926

This pretty much sums up the upcoming release. We will wait a few days to ensure it is fully translated and tested by QA before releasing the update next week.

21 August, 2015 10:39PM

Rhonda D'Vine: DebConf15

I tried to start to write this blog entry like I usually do: Type along what goes through my mind and see where I'm heading. This won't work out right now for various reasons, mostly because there is so much going on that I don't have the time to finish that in a reasonable time and I want to publish this today still. So please excuse me for being way more brief than I usually am, and hopefully I'll find the time to expand some things when asked or come back to that later.

Part of the reason for me being short on time is different stuff going on in my private life which requires additional attention. A small part of this is also something that I hinted at in a former blog entry: I switched my job in June. I really was looking forward to this. I made them aware of what the name Rhonda means to me, and it's definitely extremely nice to be addressed with female pronouns at work. Also, I'm back in a system administration job, which means there is an interest overlap with my work on Debian, so a win-win situation on sooo many levels!

I've been at DebConf15 for almost two weeks now. On my way here I was complimented on my outfit by a security guard at the Vienna airport, which surprised me but definitely made my day. I was wearing one of those baggy hippie pants (which was sent to me by a fine lady I met at MiniDebConf Bucharest) but pulled the leg parts up to the knees so it could be perceived as a skirt instead. Since I came here I have been pretty busy taking care of DCschedule bot adjustments (like changing the topic and twittering from @DebConf at the start of the talks) and helping out with the video team when I noticed there was a lack of people (which is a hint that you might want to help with the video team in the future too; it's important for remote people, but also for yourself, because you can't attend multiple sessions at the same time).

And I have to repeat myself: this is the place I feel at home amongst my extended family, even though it still is sometimes hard for me to speak up in certain groups. I believe, though, that it's more an issue of certain individuals taking up a lot of space in discussions without giving (more shy) people in the circle the space to join in. I guess it might be time for a session on dominant talking patterns next year and how to work against them. I absolutely enjoyed such a session during last year's FemCamp in Vienna, which set the tone for the rest of the conference, and it was simply great.

And then there was the DebConf Poetry Night. I'm kinda disappointed with the outcome this year. It wasn't able to attract as many people as anticipated, which I account to some degree to me not making people aware of it well enough, and to it overlapping with a really great band playing at the same time. And even though the place where we did it sounded like a good idea at first, it didn't have enough light for someone to read from a book (but that was solved through smartphone lights). I know that most people did enjoy it, so it was good to do it, but I'm still a fair bit disappointed with the outcome and will try to improve on that front for next year. :)

With all this going on, there unfortunately wasn't as much time as I would have liked to spend with people I haven't seen for a long time, or new people I hadn't met yet. Given that this year's DebConf hit a new high in attendee numbers (526 people were here at certain times during the two weeks, and just today someone new arrived too, so that doesn't even have to be the final number), it makes it a bit painful to have picked up so many tasks and thus lost some chances to socialize as much as I would have liked.

So, if you are still here and have the feeling we should have talked more, please look for me. As Bdale correctly pointed out in the New to DebConf BoF (paraphrased): when you see us DebConf old-timers speaking to someone else and you feel like you don't want to disturb, please do disturb and speak to us. I always enjoy getting to know new people. This, for me, is always one of the important aspects of DebConf.

Also, I am very, very happy to have received feedback from different people about both my tweets and my blog; thank you a lot for that. It is really motivating to keep going.

So, lets enjoy the last few hours of DebConf!

One last side note: while my old name in the Debian LDAP did surface in some wrongly displayed names on the DebConf website, like for speakers or volunteers, it was clear to me that having it exposed through SSO.debian.org isn't really something I appreciate. So I took the chance and spoke to Luca from the DSA team right here today, and ... got it fixed. I love it! The next step is getting my GPG key exchanged; an RT ticket is coming up. :)


21 August, 2015 09:00PM

Ubuntu Podcast from the UK LoCo: S08E24 – Epic Movie - Ubuntu Podcast

It’s Episode Twenty-four of Season Eight of the Ubuntu Podcast! Laura Cowen, Martin Wimpress, and Alan Pope are back with Stuart Langridge!

In this week’s show:

  • We chat about why Laura doesn’t like webapps on the Ubuntu Phone and we get a more qualified view from app developer Stuart.
  • We go over your feedback, including Ubuntu Phone notes from Pete Cliff.
  • We share some command line love: Comcast, from Jorge Castro.
  • We chat about getting a Picade, playing with Jasper, and controlling a Nexus 6 with Pebble Time whilst listening to podcasts on the move.

PiCade

That’s all for this week, please send your comments and suggestions to: show@ubuntupodcast.org
Join us on IRC in #ubuntu-podcast on Freenode
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

21 August, 2015 05:35PM

Valorie Zimmerman: Upon returning home from Akademy: thoughts

Akademy is long over, you say? Yes, but I've been traveling almost constantly since flying home, as my husband is on the home leg of his long hike of the Pacific Crest Trail, which he is chronicling at http://bobofwashington.blogspot.com. While driving about the state to meet him, I've not been online much, and have therefore been unable to create blog posts from my thoughts and impressions written during and right after Akademy. Fortunately, I did scrawl some thoughts, which I'll post over the next week or so.

Please visit https://akademy.kde.org/2015 for more information about Akademy. Click the photo for a larger version and add names if someone is left unlabeled.

First: A Coruña, where Akademy 2015 met, is beautiful! Galicia, its region of Spain, is not only beautiful but also serves delicious food, especially if you love fresh seafood.

The local team, working in conjunction with the e.V. board and the amazing Kenny Duffus and Kenny Coyle, created a wonderful atmosphere in which to absorb, think, and work. One of the best bits this year was the Rialta, where most of us lived during Akademy. Scarlett and I flew in early, to get over our jetlag and have a day to see the city.


The journey from Seattle began very early Monday morning, and Scarlett set out even earlier the previous day via Amtrak train to Seattle. Our connections and flights were very long, but uneventful. We caught the airport bus and then the city bus 24 and walked to the Rialta, arriving about dinner-time Tuesday. Although we tried to avoid sleeping early, it was impossible.

Waking the next morning at 4am, with no-one about and no coffee available, was a bit painful! Breakfast was not served until 8am, and we were *not* late! Rialta breakfasts are adequate; the coffee less so. I found that adding a bit of cocoa made it more drinkable, but some days I bought cafe con leche from the bar instead. That small bar was also the source of cerveza (beer) and a few whiskies as well.

One of the beautiful things about the Rialta was their free buses for residents. Some were called Touristic, and followed a long loop throughout the city. You could get off at any of the stops and get back on later after sight-seeing, eating or shopping. So we rode a full loop to figure out what we wanted to see, which was part of the sea-side and the old town. Scarlett and I both took lots of photos of the beautiful bay and some of the port. After visiting Picasso's art college, we headed into the old city. On the way in, we saw an archaeological dig of a Roman site, I guess one of many; this one was behind the Military Museum. As we walked further into the city, we heard music from Game of Thrones and saw a giant round tent covered in medieval scenes. As we walked around the square trying to figure out what was happening, we saw lancers on large horses, dancing about waiting to enter the ring!


Some of the Akademy attendees were inside the tent watching the jousts, we later found out. I stopped in at the tourist info office to find out why the tent was there, and learned there was a week-long celebration all through the old city. It was delightful to turn the corner and see a herd of geese, or medieval handicrafts, or.... beer! A small cold beer from a beer barrel, with a medieval monk serving us, was most welcome as we wandered close to Domus. The Rialta bus was a great way "home."

A day of play left us ready to work as the rest of the attendees began to arrive.
Oh by the way: give big! Randa Meetings will soon be happening, and we need your help!


21 August, 2015 05:00AM by Valorie Zimmerman (noreply@blogger.com)

Hollman Enciso: Instalando wallabag en Ubuntu con Docker

A few weeks ago I discovered wallabag, a free and open source alternative to "read later" products such as Instapaper, Pocket and others... I don't know about you, but it happens to me all the time: I find interesting URLs on the world wide web but don't always have time to read the full article right away. I usually leave it open in a browser tab, but I often lose those pending reads when I "save" them that way.

So I wanted to give wallabag a try. It lets me run my own server of news and articles from the www to read later, and it also has add-ons for Firefox and Chrome, plus apps for Android, iOS, Windows Phone and FirefoxOS (downloads)...

...and since I am currently learning a bit about containers, and Docker specifically, I decided to create my own container to practise and learn a little more. So in this post I will also talk a bit about Docker 😀

The first thing I did was read the installation manual for Ubuntu and turn those steps into a Dockerfile, so I could build my own image/container.

FROM ubuntu:latest
MAINTAINER Hollman Enciso <hollman.enciso en gmail>
RUN apt-get update && apt-get -y dist-upgrade

#Install the necessary packages
RUN apt-get -y install apache2 php5 php5-gd php5-imap php5-mcrypt php5-memcached php5-mysql mysql-client php5-curl php5-tidy php5-sqlite curl git sqlite3

#This will install the required dependency Twig via Composer
RUN curl -sS https://getcomposer.org/installer | php
RUN  mv composer.phar /usr/local/bin/composer
#RUN cd /var/www/html/ && /usr/local/bin/composer install
RUN rm -rf /var/www/html/*

#cloning the project
RUN git clone https://github.com/wallabag/wallabag.git /var/www/html/
RUN chown -R www-data: /var/www/html/

#setting the document root volume
VOLUME ["/var/www/html/"]

#Set some apache variables
ENV APACHE_RUN_USER www-data
ENV APACHE_RUN_GROUP www-data
ENV APACHE_LOG_DIR /var/log/apache2
ENV APACHE_PID_FILE /var/run/apache2.pid
ENV APACHE_RUN_DIR /var/run/apache2
ENV APACHE_LOCK_DIR /var/lock/apache2

RUN echo "ServerName localhost" >> /etc/apache2/apache2.conf

#Expose default apache port
EXPOSE 80

#run apache in the foreground
CMD ["/usr/sbin/apache2", "-D", "FOREGROUND"]

Then we build the image:

hollman@prime ~/Docker/wallabag $ docker build -t wallabag/v1 .

and when it finishes, we can see it ready to go 😀

hollman@prime ~ $ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
wallabag/v1 latest 35316456d694 6 minutes ago 461.5 MB

Then we launch it (here using the hollman/wallabag:latest tag that I later pushed to Docker Hub) and check that it is running:

hollman@prime ~/Docker $ docker run -d -p 80:80 hollman/wallabag:latest
16b44a184bd58ae181a36d38c50ccff6d408b54a74b543bef81d2231bb0175ca
hollman@prime ~/Docker $ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
16b44a184bd5 hollman/wallabag:latest “/usr/sbin/apache2 – 6 seconds ago Up 5 seconds 0.0.0.0:80->80/tcp serene_poitras

And we open our browser to complete the installation:

At this point we just need to enter the installation data, such as the database engine to use (sqlite or mysql) and, finally, the admin account details.

At this point you can use the same container with sqlite3 or, if you prefer, a separate container such as mysql for the database.
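
As a rough sketch of the MySQL variant (the container name, password and image tag here are placeholders, not part of my setup), you could link an official mysql container and then point the installer at the host "db":

# database container; credentials are placeholders
docker run -d --name wallabag-db -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_DATABASE=wallabag mysql:5.6

# wallabag container linked to it; inside, the DB is reachable under the alias "db"
docker run -d -p 80:80 --link wallabag-db:db hollman/wallabag:latest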

Enter the data and finish the installation.

In my case I just installed and configured the Firefox add-on, which saves my pending reads to my wallabag with a single click on its icon.

Afterwards, from home or anywhere else (wherever I have put the server, public or private), I can continue reading, tag, share, rate, delete, or keep things as a store of content and material to use later.

Finally, if anyone wants to try it, I have pushed it to my Docker Hub. It is not finished yet; the idea is that on launch the container creates the database and, if desired, deploys a mysql container so that it is completely "out of the box". For those who already have Docker on their machines, a docker pull hollman/wallabag will be enough to fetch the image.

21 August, 2015 01:06AM

August 20, 2015

hackergotchi for Cumulus Linux

Cumulus Linux

Building an OpenStack Practice

In Q4 2013 at Dasher, we began our journey to create an OpenStack ecosystem that helps our clients as they transform their business and IT infrastructure. For years, Dasher has been helping clients move from physical to virtual environments. As business and IT needs evolved, more customers started evaluating moving from virtual to cloud environments and building their own private cloud. Dasher saw OpenStack becoming the de facto standard for private cloud, but proprietary black box network switches remained a misfit, giving rise to open networking — the disaggregation of network hardware from software.

A couple of our clients along with one of our senior solution architects, Ryan Day, suggested we explore Cumulus Networks® and learn about their Cumulus® Linux® offering. The results are highlighted below and we will attempt to answer: Why do we think the Cumulus Linux OS is a logical step in the evolution of network operating systems?

Cumulus Linux enables software-defined everything (SDE). SDE may be the cool new fad of 2015, but adopting SDE because it is what all the cool kids are doing is certainly not a reason to move to a new technology. Let’s explore Dasher’s reasons for recommending Cumulus Linux to our clients. Like all of the solutions we offer, it has to be the right technology and financial solution that meets our client’s environment, workloads and budgets.

Here are our top seven reasons to deploy Cumulus Linux:

  1. Cost: The practice of running a Linux operating system on industry-standard hardware brings some well-known cost benefits over traditional or proprietary systems. In this case, we’re talking about 1G, 10G, 40G and soon 100G switches with the same silicon ASICs the traditional network vendors use but running Cumulus Linux instead of a proprietary OS. For example, one analysis we performed showed that you could invest in Cumulus Linux plus hardware from a well-known manufacturer and get 10G equipment at the price of other folks’ 1G switches. The cost savings can get even more dramatic when SFP+ and QSFP optics are involved. Also, Cumulus Linux is subscription based, like other Linux distributions, which means even more cost savings.
  2. Choice: Adopting Cumulus Linux gives you the choice of which hardware vendor you invest in, and while the options include bare metal switch vendors like Accton/Edge-Core, Penguin Computing, Quanta and others, there are also options from “tier 1” hardware suppliers like Dell and HP. The tier 1 hardware vendors see a growing market for the SDN approach of decoupling software from hardware and now offer their own switches for use with Cumulus Linux. This freedom of choice is enabled by Cumulus Networks supporting the most commonly used networking chipsets on the planet: Broadcom’s Trident family of chips.
  3. Automation: Cumulus Linux is… just… Linux. It is not an application based on Linux, it is literally a Linux operating system, and Bash is its default shell. Therefore, it lends itself to the same configuration management toolsets and, by extension, the same continuous integration and continuous deployment tools that DevOps or automation-focused IT teams already use today. Tools like Puppet, Chef, Ansible and Salt work with Cumulus Linux in the exact same way they would with any other Debian-based Linux distribution (see the sketch just after this list).
  4. Scale: While you could certainly introduce Cumulus Linux in your network at the edge for top of rack (ToR) L2/L3 switching, there is a strong focus on supporting the industry trend of moving from a 3-tier networking architecture (core-aggregation-edge) to a 2-tier networking architecture (spine-leaf). This enables massively scalable and performant networks and much easier optimization for east-west traffic. Web scale environments are being copied by enterprises, and guess what, Web scale is based on ease of management, low cost, programmability and software-defined everything, which is what Cumulus Networks brings to the table.
  5. Standards-based Technology: In addition to being a Linux operating system, Cumulus Networks follows open standards-based feature implementations and fosters a strong community with collaboration and contribution from their install base. Part of the secret sauce for Cumulus Linux is the ability to write code that interfaces with the ASICs in the network switch. This is a licensed technology, and therefore is the only part of Cumulus Linux that is not open source.
  6. Support: One thing we found pleasantly surprising is that Cumulus Networks provides a support model intended to reduce or eliminate the finger pointing that can occur between hardware and software vendors. They do this by providing diagnostic support for the platforms on their hardware compatibility list. They maintain intimate knowledge of the hardware they support, and they continue to test new software releases against previously certified hardware. When needed, our experiences with the Cumulus Networks support organization have been industry leading. And the Cumulus Networks subscription-based license includes the license cost and maintenance, providing you with the support you need to feel confident in your use of Cumulus Linux.
  7. Knowledge: Many DevOps, systems or network IT groups already have Linux expertise in house, and those skill sets are perfectly applicable to Cumulus Linux, which can drastically reduce or eliminate the learning curve associated with implementing technology from a new vendor. Still, there’s always something to learn, and Cumulus Networks offers training courses and a lab environment for folks to learn or to test changes.
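
To illustrate point 3 with a sketch (the inventory host name and package are placeholders, not a tested Cumulus recipe), a plain Ansible play addresses a switch exactly like any other Debian machine:

# playbook.yml: manage a Cumulus Linux switch with the stock apt module
- hosts: leaf-switch01
  become: yes
  tasks:
    - name: install a diagnostic tool, as on any Debian host
      apt:
        name: tcpdump
        state: present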

Some example use cases for Cumulus Linux include:

  • Using it in an underlay network
  • Building an OpenStack cloud with overlay networks, Hadoop clusters or other distributed systems where throughput is paramount and affordability is critical
  • Almost any other environment where network automation is needed and Linux skills are available

If you’re currently evaluating network technologies, Cumulus Linux is definitely worth exploring, and we are happy to help educate you about their solutions. They also have Cumulus® VX™ and Cumulus Workbench, which make it really easy to get to know Cumulus Linux solutions for yourself.

Ryan Day is a Senior Solution Architect at Dasher and Chris Saso is Executive Vice President, Technology at Dasher.

The post Building an OpenStack Practice appeared first on Cumulus Networks Blog.

20 August, 2015 05:28PM by Chris Saso

hackergotchi for Ubuntu developers

Ubuntu developers

Jono Bacon: Talking with a Mythbuster and a Maker

Recently I started writing a column for Forbes.

My latest column covers the rise of the maker movement and in it I interviewed Jamie Hyneman from Mythbusters and Dale Dougherty from Make Magazine.

Go and give it a read right here.

20 August, 2015 04:48PM

Harald Sitter: A Touch of Plasma in the Mountains

It is this time of the year again. In but a few weeks some 50 KDE contributors are going to take over the village of Randa in the Swiss Alps to work on making KDE software yet more awesome.

So if you would kindly click on this fancy image here to donate a penny or two, I think you will make people around the world eternally grateful:

[Fundraiser banner: Randa Meetings 2015]

Not convinced yet? Oh my.

The KDE Sprints in Randa are an annual event where different parts of the KDE community meet in the village of Randa in Switzerland to focus their minds and spirit on making KDE software better, faster, more robust, more secure, and of course better looking as well.

Sprints are a big part of KDE development: they enable contributors to meet in person and focus the entire thrust of their team on pushing their project and the KDE community as a whole forward. KDE software is primarily built by a community of volunteers, who need support to finance these sprints in order to leap forward in development and bring innovation to the software.

If you have not yet perused the Randa 2015 page, you definitely should. You will probably find that the list of main projects for this year not only sounds very interesting, but will in all likelihood be relevant to you. If you own a smartphone or tablet you can benefit from KDEConnect, which makes your mobile device talk to your computer (by means of magic, no less). Or perhaps you’d rather have the opportunity to run Plasma on your mobile device? General investments in touch support and enablement are going to go a long way towards achieving that. Do you like taking beautiful photographs? Improvements to digiKam will make it even easier to manage and organize your exploits.
These are but a few things the KDE contributors are going to focus on in Randa. All in all there should be something for everyone to get behind and support.

KDE is a diverse community with activities in many different areas in and around software development. It stands as a beacon of light in a world where everyone tries to gobble up as much information about their users as possible, lock users’ data away in proprietary formats from which it can never be retrieved again, or quite simply spy on people.

Be a benefactor of freedom. Support Randa 2015.

20 August, 2015 11:42AM

Sean Davis: MenuLibre 2.0.7 and 2.1.0 Released

Ubuntu Feature Freeze is always such an exciting time.  New stable (2.0.7) and development (2.1.0) versions of MenuLibre are now available.  Several bugs have been fixed and the new development release begins to show a modern spin on the UI. What’s New? MenuLibre 2.0.7 is a bugfix release and 2.1.0 builds on top of that … Continue reading MenuLibre 2.0.7 and 2.1.0 Released

20 August, 2015 10:51AM

Sean Davis: Catfish 1.3.0 Released

The first release in the Catfish 1.3 development cycle is now out!  This development cycle is intended to further refine the user interface and improve search speed and responsiveness.  1.3.0 is all about style. What’s New? The toolbar has been replaced with a Gtk HeaderBar.  This better utilizes screen space and makes Catfish more consistent … Continue reading Catfish 1.3.0 Released

20 August, 2015 04:12AM

Stuart Langridge: Using the content hub on Ubuntu

On an Ubuntu phone, apps are[1] isolated from one another; each app has its own little folder where its files go, and no other app can intrude. This, obviously, requires some way to exchange files between apps, because frankly there are times when my epub ebook is in my file downloader app and I need it in my ebook reader app. And so on.

To deal with this, Ubuntu provides the Content Hub: a way for an app to say “I need a photo” and all the other apps on your phone which have photos to say “I have photos! Ask me! Me!”.

This is, at a high level, the right thing to do. If my app wants to use a picture of you as an avatar, it should not be able to snarf your whole photo gallery and do what it wants with it. More troubling yet, adding some new social network app should not give it access to your whole address book so that it can hassle your friends to join, or worse just snaffle that information and store it away on its own server for future reference. So when some new app needs a photo of you to be an avatar, it asks the content hub; you, the punter, choose an app to provide that photo, and then a photo from within that app, and our avatar demander gets that photo, and none of the pictures of your kids or your holiday or whatever you take photos of. This is, big picture[2], a good idea.

Sadly, the content hub is spectacularly under-documented, so actually using it in your Ubuntu apps is jolly hard work. However, with an assist[3] from Michael Zanetti, I now understand how to offer files you have to others via the content hub. So I come to explain this to you.

First, you need permission to access the content hub at all. So, in your appname.apparmor file[4], add content_exchange_source.[5] This tells Ubuntu that you’re prepared to provide files for others (you are a “source” of data). You then need to, also in manifest.json, configure what you’re allowed to do with the content hub; add a hooks.content-hub key which names a file (myappname.content-hub or whatever you prefer). That file that you just named needs to also be json, and looks something like {"source": ["all"]}, which dictates which sorts of files you want to be a source for.[6] Once you’ve done all this, you’re allowed to use the content hub. So now we explore how.
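
To make that concrete, here is a sketch of the two pieces (the app name myapp is a placeholder, not something the SDK requires):

In manifest.json:

"hooks": {
    "myapp": {
        "apparmor": "myapp.apparmor",
        "content-hub": "myapp.content-hub"
    }
}

And myapp.content-hub, declaring yourself a source for all content types:

{
    "source": ["all"]
}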

In your QML app, you need to add a ContentPeerPicker. This is a normal QML Item; specifically, showing it to the user is your responsibility. So you might want to drop it in a Dialog, or a Page, or you might just put it at top level with visible: false and then show it when appropriate (such as when your user taps a file or image or whatever that they want to open in another app).

Your ContentPeerPicker should look, at minimum, like this:

ContentPeerPicker {
    id: cpp  // referenced below so we can hide the picker again
    handler: ContentHandler.Destination
    contentType: ContentType.All
    onPeerSelected: {
        // ask the app the user picked for a transfer object
        var transfer = peer.request();
        var items = new Array();
        // exportItem is a ContentItem { id: exportItem } defined elsewhere in your app
        exportItem.url = /* whatever the URL of the file you want to share is */;
        items.push(exportItem);
        transfer.items = items;
        // "charging" the transfer is what actually starts it
        transfer.state = ContentTransfer.Charged;
        cpp.visible = false;
    }
    onCancelPressed: cpp.visible = false;
}

The important parts here are handler: ContentHandler.Destination (which means “I am a source for files which need to be opened in some other app”), and contentType: ContentType.All (which means “I am a source for all types of file”).[7] After that[8], show it to the user somehow and connect to its onPeerSelected method. When the user picks some other app to export to from this new Item, onPeerSelected will be called; when the callback onPeerSelected is called, the peer property is valid. Get a transfer object to this peer: var transfer = peer.request();, and then you need to fill in transfer.items. This is a JavaScript list of ContentItems; specifically, define ContentItem { id: exportItem } in your app, and then make a “list” of one item with var items = new Array(); exportItem.url = PATH_TO_FILE_YOU_ARE_EXPORTING; items.push(exportItem); transfer.items = items;[9] After that, set transfer.state = ContentTransfer.Charged and your transfer begins; you can hide the ContentPeerPicker by setting cpp.visible = false at this point.
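
For completeness, the ContentItem referenced in the snippet above is just declared somewhere in your app, and the picker code fills in its url before handing it to the transfer:

ContentItem {
    id: exportItem
}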

And that’s how to export files over the Content Hub so that your app can make files available to others. There’s a second half of this (other apps export the files; your app wants to retrieve them, so let’s say they’re an app which needs a photo, and you’re an app with photos), which I’ll come to in a future blog post.

As you can see from the large number of footnotes[10], there are a number of caveats with this whole process, in particular that a bunch of it isn’t documented. It will, I’m sure, over time, get better. Meanwhile, the above gives you the basics. Have fun.

  1. correctly
  2. ha!
  3. a bit more than that, if I’m honest
  4. or whatever you called it; hooks.$APPNAME.apparmor in manifest.json
  5. This is more confusing than it should be. If you’re using Ubuntu SDK as your editor, then clicking the big “+” button will load a list of possible apparmor permissions. Don’t double-click a permission; this will just show you what it means in code terms, rather irrelevantly. Instead, choose your permission (content_exchange_source in this case) and then say Add
  6. you can also do {"source":["pictures"]}. There may be other things you can write in there instead of "all" or "pictures", but the documentation is surlily silent on such things.
  7. You can see all the possible content types in the Ubuntu SDK ContentType documentation (https://developer.ubuntu.com/api/apps/qml/sdk-15.04/Ubuntu.Content.ContentType/), with misleading typos and all
  8. as mzanetti excellently described it
  9. You can transfer more than one item, here.
  10. not this one, though

20 August, 2015 01:19AM

Valorie Zimmerman: Support Randa 2015



Weeeee! KDE is sponsoring the Randa Meetings again, this time with touch. And you can help make KDE technologies even better! This exciting story in the Dot this week, https://dot.kde.org/2015/08/16/you-can-help-making-kde-technologies-even-better, caught not only my attention, but my pocketbook as well.

Yes, I donated, although I'm not going this time. Why? Because it is important, because I want Plasma Mobile to succeed, because I want my friend Scarlett* to have a great time, and because I want ALL the devels attending to have enough to eat! Just kidding, they can live on Swiss chocolate and cheese. No, really: the funds are needed for KDE software development.

So dig deep, my friends, and help out. https://www.kde.org/fundraisers/kdesprints2015/

*(And somebody hire Scarlett to make KDE software!)

20 August, 2015 12:36AM by Valorie Zimmerman (noreply@blogger.com)

August 19, 2015

Svetlana Belkin: Ubuntu Membership Pages Update

Back in May 2015, the Membership Board had a UOS session about how we as a board, and others who help us, can help more people get their Membership.  Two of the resulting action items dealt with updating some of the Membership pages.

The first update was to the FAQ page.  We added a question that deals with the non-English speaking community that we have.  The question is:

I do not speak English or my English is weak, can I still apply?

and  the answer is:

Of course you can! Simply ask someone you know who can translate for you to come to the meeting.

This should clear up the confusion for these people.

The second update is to the main page.  At our session, we realized that the page was missing a description of the benefits of Ubuntu Membership.  We wrote this section to remind our Community that there are benefits not only for individuals (such as recognition of significant, sustained, and continued contributions), but also for the Ubuntu and Open Source communities as a whole.

To close, the Membership Board asks: if you want to become a Member, please apply!

19 August, 2015 04:37PM

hackergotchi for ArcheOS

ArcheOS

ArcheoFOSS I, proceedings of the workshop now available as Open Access

Hi all,
this quick post is to announce that the proceedings of the first workshop "Open Source, Free Software e Open Format nei processi di ricerca archeologici" (en: "Open Source, Free Software and Open Formats in archaeological research processes"), which in later editions became known as ArcheoFOSS, are finally available as Open Access. The event took place in Grosseto in May 2006.
Since Open Access in archaeology has always been one of the main topics of this workshop, some days ago we started a discussion on the official mailing list to try to free those proceedings which are currently available only as printed publications. The first result has been the release of the articles collected in the first edition, thanks to the kindness of Giancarlo Macchi Janica. Currently we are working on the two other workshops whose proceedings are not yet available: ArcheoFOSS V (held in Foggia in 2010) and ArcheoFOSS VI (held in Naples in 2011).
The image below shows the front cover of the digital publication of the proceedings of the first edition, while here you can read the official announcement about the Open Access publication (pdf here).

Front cover of proceedings of the first workshop "Open Source, Free Software e Open Format nei processi di ricerca archeologici"
A special thanks also to +Stefano Costa for uploading everything to the ArcheoFOSS website.

19 August, 2015 02:17PM by Luca Bezzi (noreply@blogger.com)

hackergotchi for HandyLinux

HandyLinux

The blog, a blog, abracadabra and #...

Hello

A little August hello to point out a few changes and new things:

  • The HandyLinux blog has been updated by arpinux. The worker in his cave is decidedly tireless... Thanks again to him!
  • Since we are on the subject of blogs and arpinux, a little reminder for those who missed the news: he has relaunched a personal blog, and I invite you to take a look!
  • Mélanie, a (very) active member of the community, has opened a discussion on the forum for people who worry about the sometimes obscure vocabulary we use (especially around the console). It is simple, effective and educational. Thanks to her!
  • The new season is close and brings the arrival of HandyLinux-2.2 with several changes; let me remind you that your opinions are welcome in this discussion, whether you are a beginner or a geek!
  • Still on the subject of the new season: we welcome the many new forum members of the last few days. I would particularly like to greet some members of the (former) crunchbang (#) community, to which I belonged in the past before following, like many others, the idea of guantas (the great initiator) and finding myself on the green and flowery shores of HandyLinux. (no, no, I'm not overdoing it). In any case, welcome on board, make yourselves at home, get out the deckchairs and loungers, and don't forget your favourite apéritif...

To finish, I will lay it on "a little thick" once more concerning Windows 10: if you do not want to be spied on with Windows 10, switch to Linux! See this article.


See you soon




HandyLinux - the Debian distribution without the headache...

19 August, 2015 11:10AM by fibi

August 18, 2015

hackergotchi for Ubuntu developers

Ubuntu developers

Svetlana Belkin: Membership Board Member Interviews: Alan Pope

The Ubuntu Membership Board is responsible for approving new Ubuntu members. I interviewed our board members in order for the Community to get to know them better and get over the fear of applying to Membership.

The sixth interviewee is Alan Pope:

What do you do for a career?

For the last 3.5 years I’ve worked at Canonical on Ubuntu. Previously I was an SAP Consultant for ~10 years and before that I’ve been a server and desktop admin.

What was your first computing experience?

I got a Sinclair ZX81 for Christmas 1981, 34 years ago. Computers were pretty dumb back then, booting directly to BASIC. My Dad had typed in a program listing from a magazine so I’d have something to play with on Christmas morning. I was hooked from then on.

How long have you been involved with Ubuntu?

According to Launchpad my account was created in March 2006, so I guess around then. I started by answering support questions on Launchpad Answers, and moved on to other community-related activities later.

Since you are all fairly new to the Board, why did you join?

I’ve actually been on the board before, but stepped down in late 2011. I re-joined because I knew it was only a small amount of time/effort on my part and is beneficial to the project. I want to see more contributors apply for Ubuntu Membership, and I hope we can help foster that over the coming months.

What are some of the projects you’ve worked on in Ubuntu over the years?

I’ve worked on supporting new users on Launchpad, IRC and AskUbuntu. I have moderated mailing lists, been a LoCo Team lead, and served on the LoCo Council and Community Council. I also started an Ubuntu Podcast with some friends which is still going after 7 years with thousands of listeners.

What is your focus in Ubuntu today?

Currently I spend most of my time working on the Ubuntu Core Apps project which makes many of the Free Software apps we ship on the phone.

Do you contribute to other free/open source projects? Which ones?

Not really other than filing bugs in upstream bug trackers when I need to.

If you were to give a newcomer some advice about getting involved with Ubuntu, what would it be?

Look for something you find fun and interesting. Treat it like a hobby rather than a job. If you have fun contributing you’re more likely to stick at it. Don’t be afraid of asking for help as there’s a lot of people who have contributed for a long time and will happily answer your questions.

Do you have any other comments you wish to share with the community?

I’ve met many friends through Ubuntu and have spent nearly 10 years contributing to the project both in my own time and now as a day job. I’ve really enjoyed being part of the community. It’s a great feeling contributing to a free software project, and I hope to continue for as long as I can.

18 August, 2015 04:14PM

Daniel Holbach: Snapcraft has landed in the archive

In the flurry of uploads for the C++ ABI transition and other frantic work (Thursday is Feature Freeze day), this gem may have gone unnoticed:

snapcraft (0.1) wily; urgency=low

  * Initial release

What does this mean? If you’re on wily, you can easily try out snapcraft and get started turning software into snaps. We have some initial docs available on the developer site which should help you find your way around.

This is a 0.1 release, so there are bugs and there might be bigger changes coming your way, but there will also be more docs, more plugins and more good stuff in general. If you’re curious, you might want to sign up for the daily build (just add the ppa:snappy-dev/snapcraft-daily PPA).
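
In case it is useful, enabling that PPA is the usual routine (assuming the package is simply called snapcraft, as the changelog above suggests):

sudo add-apt-repository ppa:snappy-dev/snapcraft-daily
sudo apt-get update
sudo apt-get install snapcraft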

Here’s a brilliant example of what snapcraft can do for you: packaging a Java app was never this easy.

If you’re more into client apps, check out Ted’s article on how to create a QML snap.

As you can easily see: the future is on its way, and upstreams and app developers will have a much easier time sharing their software.

As I said above: snapcraft is still a 0.1 release. If you want to let us know your feedback and find bugs or propose merges, you can find snapcraft in Launchpad.

18 August, 2015 01:41PM

Canonical Design Team: Django behind a proxy: Fixing absolute URLs

I recently tried to set up OpenID for one of our sites to support authentication with login.ubuntu.com, and it took me much longer than I’d anticipated because our site is behind a reverse proxy.

My problem

I was trying to set up OpenID with the django-openid-auth plugin. Normally our sites don’t include absolute links (https://example.com/hello-world) back to themselves, because relative URLs (/hello-world) work perfectly well, so normally Django doesn’t need to know the domain name that it’s hosted at.

However, when authenticating with OpenID, our website needs to send the user off to login.ubuntu.com with a callback URL so that once they’re successfully authenticated they can be directed back to our site. This means that django-openid-auth needs to ask Django for an absolute URL to send off to the authenticator (e.g. https://example.com/openid/complete).

The problem with proxies

In our setup, the Django app is served with a light Gunicorn server behind an Apache front-end which handles HTTPS negotiation:

User <-> Apache <-> Gunicorn (Django)

(There’s actually an additional HAProxy load-balancer in between, which I thought was complicating matters, but it turns out HAProxy was just passing through requests absolutely untouched and so was irrelevant to the problem.)

Apache was setup as a reverse-proxy to Django, meaning that the user only ever talks to Apache, and Apache goes off to get the response from Django itself, with Django’s local network IP address – e.g. 10.0.0.3.

It turns out this is the problem. Because Apache, and not the user directly, is making the request to Django, Django sees the request come in at http://10.0.0.3/openid/login rather than https://example.com/openid/login. This meant that django-openid-auth was generating and sending the wrong callback URL of http://10.0.0.3/openid/complete to login.ubuntu.com.

How Django generates absolute URLs

django-openid-auth uses HttpRequest.build_absolute_uri which in turn uses HttpRequest.get_host to retrieve the domain. get_host then normally uses the HTTP_HOST header to generate the URL, or if it doesn’t exist, it uses the request URL (e.g.: http://10.0.0.3/openid/login).

However, after inspecting the code for get_host I discovered that if and only if settings.USE_X_FORWARDED_HOST is True then Django will look for the X-Forwarded-Host header first to generate this URL. This is the key to the solution.
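
As a minimal sketch of the effect (the view and URL here are hypothetical, not part of our site), the same request produces different absolute URLs depending on that setting:

# views.py: echo back what Django thinks the absolute URL is
from django.http import HttpResponse

def whoami(request):
    # Behind the proxy, with USE_X_FORWARDED_HOST left at False this
    # returns http://10.0.0.3/whoami; set it to True and get_host()
    # reads X-Forwarded-Host instead, giving http(s)://example.com/whoami
    # (the scheme additionally depends on the HTTPS settings covered below).
    return HttpResponse(request.build_absolute_uri())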

Solving the problem – Apache

In our Apache config, we were initially using mod_rewrite to forward requests to Django.

RewriteEngine On
RewriteRule ^/?(.*)$ http://10.0.0.3/$1 [P,L]

However, when proxying with this method Apache2 doesn’t send the X_Forwarded_Host header that we need. So we changed it to use mod_proxy:

ProxyPass / http://10.0.0.3/
ProxyPassReverse / http://10.0.0.3/

This then means that Apache will send three headers to Django: X-Forwarded-For, X-Forwarded-Host and X-Forwarded-Server, which will contain the information for the original request.

In our case the Apache frontend used the HTTPS protocol, whereas Django was serving plain HTTP, so we had to pass that information through as well by manually setting Apache to send an X-Forwarded-Proto header to Django. Our eventual config changes looked like this:

<VirtualHost *:443>
    ...
    RequestHeader set X-Forwarded-Proto 'https' env=HTTPS

    ProxyPass / http://10.0.0.3/
    ProxyPassReverse / http://10.0.0.3/
    ...
</VirtualHost>

This meant that Apache now passes through all the information Django needs to properly build absolute URLs; we just need to make Django parse it properly.

Solving the problem – Django

By default, Django ignores all X-Forwarded headers. As mentioned earlier, you can set get_host to read the X-Forwarded-Host header by setting USE_X_FORWARDED_HOST = True, but we also needed one more setting to get HTTPS to work. These are the settings we added to our Django settings.py:

# Setup support for proxy headers
USE_X_FORWARDED_HOST = True
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')

After changing all these settings, we now have Apache passing all the relevant information (X-Forwarded-Host, X-Forwarded-Proto) so that Django is now able to successfully generate absolute URLs, and django-openid-auth now works a charm.

18 August, 2015 09:47AM

Arthur Schiwon: This August: ownCloud Contributor Conference 2015 in Berlin

The ownCloud Contributor Conference comes closer and closer! Roughly two weeks are left until it kicks off on August 28th here in Berlin. Haven't you registered yet? Please do so at the conference site.

With time passing rapidly, our dear Jos is like a sedulous ant, taking care of all the smaller and bigger things, hopping to and fro. Helping out a bit myself, I can feel the air heating up and the tension rising. Yeah!

Audience

Our target audience is ownCloud contributors, also in the wider community sense. It is all about pushing ownCloud forward and making it continuously better. The conference therefore consists of two days of keynotes, inspiring lightning talks and hands-on workshops. The other five days concentrate solely on hacking, with half a dozen rooms for coding and discussing. There is even a dedicated "meeting room", which you can reserve if you invite people to discuss a matter in a calmer atmosphere. Meanwhile, diligent Jos makes sure there is enough caffeine for you to turn into code (and other supplies, of course).

If, however, you consider yourself more of an interested onlooker who wants to see what is going on in ownCloud County and feel the pulse, do not hesitate to visit the conference days on the weekend. There is no entrance fee, but please register to make our planning easier.

Keynotes, Workshops and Talks

Our guest keynote this year will be held by Angela Richter, a director at Schauspiel Köln, the municipal theatre in Cologne, Germany. One of her recent works that gained popularity was "Supernerds", which was based on interviews with Julian Assange, Thomas Drake and others. Read also a more detailed introduction of Angela Richter.

Another keynote will, as per tradition, be held by ownCloud founder and CTO Frank Karlitschek. There will also be a few longer talks (compared to the standard lightning talks), for instance show-casing complex ownCloud setups at CERN and Sciebo (DE). Several workshops let you get into development of literally everything: ownCloud core, apps, desktop and mobile clients, but also underlying libraries such as SabreDAV.

We put the focus on lightning talks: when they are on, no other talk or workshop will be running. They are short, so they will not be boring. You will definitely see inspiring topics, so you can get your hands dirty in the hacking sessions afterwards. Or, at least, catch up on what else is happening in our little universe.

You can browse the whole conference schedule. Take a look at the lightning talks in particular; the contents of each are shown on its detail page.

My contributions

I will also give some lightning talks and a … well, let's see what will come out of it. A short overview:

  1. What happened in LDAP County, Lightning Talks Part 1, Sat 12pm
    I will review the changes introduced to the LDAP backend since the last conference.
  2. Experimenting with LDAP, Workshop, Sat 3.30pm
    Here I want to offer a platform to learn about and play with the LDAP Backend. I plan to drive this solely by questions and ideas from the attendees. We shall learn, hack and see. Bring your questions and thoughts. Also join if you plan to contribute to the LDAP Backend.
  3. ownCloud Music Streaming on SailfishOS, Lightning Talks Part 2, Sun 10am
    A tech lightning talk on SailfishOS app writing, for beginners and by a beginner. I will show how to make your shanties from ownCloud Music play in a very simple player on your SailfishOS device. Very much recommended if you're tired of all the web programming all the time :)
  4. ownCloud Bookmarks, Lightning Talks Part 2, Sun 10am
    An introduction of one of the oldest apps in ownCloud and a call for contributors. It was stagnating recently, and I want to revive it.
Do you have any questions or suggestions on those? Shoot them via the comment form below!

Registration

Still haven't registered? Do it! As said, entrance is free, but we would like to plan for the number of attendees. If you are already a contributor with some pull requests merged into the code base (or other significant contributions) but struggle to pay the travel costs, you can ask for assistance. If you really cannot make it, we will stream the main conference room (keynotes and lightning talks), but you will miss all the hacking fun :'-(.

18 August, 2015 05:03AM

hackergotchi for Ubuntu

Ubuntu

Ubuntu Weekly Newsletter Issue 430

18 August, 2015 04:41AM by lyz

hackergotchi for Ubuntu developers

Ubuntu developers

Aaron Honeycutt: My forecast for the next 4 months

With the energy from Akademy still running through my veins, though slowly ebbing, I'm looking at the next FLOSS (Free Libre Open Source Software) events. I've asked for community funds to go to OpenHelp (no reply yet) and FOSSETCON (which was approved!). Then there is a general IT event: ITPalooza, which will be my first time there. I'm super excited for all these events! I've learned a lot from every event I have gone to, and found more gaps in OSS/FLOSS projects that I can fill, e.g. Kubuntu packages, KDE SVG work, Plasma Mobile, and hopefully more to come soon!

This past Saturday we (the Ubuntu FL LoCo) had our 2nd Ubuntu Hour at Mojo Donut, which was a great time just like the first. I also got a new person to come along, whom I had met at a vBeer the Wednesday before.

I would like to thank the awesome Ubuntu Community for helping me reach more projects and meet awesome people and tech!

18 August, 2015 04:10AM

hackergotchi for Blankon developers

Blankon developers

Sokhibi: Creating a Drinks-Menu MMT Banner with Inkscape


Earlier today I was idly going through old files stored on my computer, in order to clean out some data that is no longer used, given that the hard disk on that computer is already fairly full.

While picking out data to delete, I accidentally came across some design projects from three years ago, one of which was an MMT (large-format printed banner) design with a fruit soup (Sop Buah) menu, ordered by the owner of a food stall at Simpang Lima. So, spontaneously, I decided to write up a tutorial on how to create that MMT, so it can be useful to more people than just the person who ordered it.


Let's get straight to practice

To design (draw) an MMT like the one in the example, the first thing to prepare is a set of images (from photos or from the internet); the more images or photos available, the better.



The next step is to set the page to the size of the project, and don't forget to set the units used, e.g. cm, px, m, etc.; this makes working on the project more comfortable.

Before moving on to the design process, save the project-to-be in a particular directory on your computer. Do this so that, if the application ever runs into trouble, the part of the project already done will have been saved automatically by the application's Autosave feature.

Next we start drawing. The first thing I usually do is create several layers, one for each group of objects; this makes it easier to select objects when editing. In the example I create 3 (three) layers, as in the image below.

Lock the two layers we are not using yet, so only one active layer remains (the bottom layer), and draw a rectangle on that layer as the background.
Set the rectangle to the width and height of the page and place it in the middle of the page (use the Align and Distribute feature to make this easier).


Modify the rectangle to make it more attractive; in the example I modify it with a Radial Gradient, as in the image below:


Once the background is finished, lock that layer, open the second layer from the bottom, and create the text for the desired menu. Don't forget to add other text as decoration, for instance the prices. In the example I also make a cloud-like shape using several Circle objects stacked in a circular row, then apply the Union operation.

Lock this layer when the work on it feels finished, activate the third layer, and import the fruit images prepared earlier; edit each fruit image by removing the unneeded parts.
To remove certain parts you can use Clip => Set with a closed path drawn using the Bezier Tool.



Import the other images one by one, edit them the same way as before, then arrange everything as attractively as possible.

When the whole design is finished, save the project, then export it to *.png format to take to the MMT print shop.
You can also save this design via Save As in other formats, for instance *.PDF or *.EPS.
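
If you prefer the command line, the PNG export can also be scripted; a small sketch (file names and resolution here are just placeholders):

inkscape menu-mmt.svg --export-png=menu-mmt.png --export-dpi=300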

Below is a short video tutorial of a redesign of the MMT that I made



That is a short tutorial on creating a drinks-menu MMT with Inkscape; I hope it is useful for all of us.

18 August, 2015 03:30AM by Istana Media (noreply@blogger.com)