December 08, 2016

Ubuntu developers

Ubuntu Podcast from the UK LoCo: S09E41 – Pine In The Neck - Ubuntu Podcast

It’s Season Nine Episode Forty-One of the Ubuntu Podcast! Alan Pope, Mark Johnson, Martin Wimpress and Joe Ressington are connected and speaking to your brain.

We are four once more, thanks to some help from our mate Joe!

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

08 December, 2016 03:00PM

Dustin Kirkland: A Touch of Class at Sir Ludovic, Bucharest, Romania

A few weeks ago, I traveled to Bucharest, Romania for a busy week of work, planning the Ubuntu 17.04 (Zesty) cycle.

I did have a Saturday and Sunday to myself, which I spent mostly walking around the beautiful, old city. After visiting the Romanian Athenaeum, I quite randomly stumbled into one truly unique experience. I passed the shop window of "Sir Ludovic Master Suit Maker", which somehow caught my eye.



I travel quite a bit on business, and you'll typically find me wearing a casual sports coat, a button-up shirt, nice jeans, cowboy boots, and sometimes cuff links. But occasionally, I feel a little under-dressed, especially in New York City, where a dashing suit still rules the room.

Frankly, everything I know about style and fashion I learned from Derek Zoolander. Just kidding. Mostly.

Anyway, I owned two suits. One that I bought in 2004, for that post-college streak of weddings, and a seersucker suit (which is dandy in New Orleans and Austin, but a bit irreverent for serious client meetings on Wall Street).

So I stepped into Sir Ludovic, merely as a curiosity, and walked out with the most rewarding experience of my week in Romania. Augustin Ladar, the master tailor and proprietor of the shop, greeted me at the door. We then spent the better part of 3 hours, selecting every detail, from the fabrics, to the buttons, to the stylistic differences in the cut and the fit.




Better yet, I absorbed a wealth of knowledge on style and fashion: when to wear blue and when to wear grey, why some people wear pin stripes and others wear checks, authoritative versus friendly style, European versus American versus Asian cuts, what the heck herringbone is, how to tell if the other guy is also wearing hand tailored attire, and so on...

Augustin measured me for two custom tailored suits and two bespoke shirts, on a Saturday. I picked them up 6 days later on a Friday afternoon (paying a rush service fee).

Wow. Simply, wow. Splendid Italian wool fabric, superb buttons, eye-catching color shifting inner linings, and an impeccably precise fit.









I'm headed to New York for my third trip since then, and I've never felt more comfortable and confident than in these graceful, classy suits. A belated thanks to Augustin. Fabulous work!



Cheers,
Dustin

08 December, 2016 02:33PM by Dustin Kirkland (noreply@blogger.com)

Ubuntu Insights: System76 working with Canonical on improving HiDPI support in Ubuntu

This is a guest post by Ryan Sipes, Community Manager at System76. If you would like to contribute a guest post, please contact ubuntu-devices@canonical.com

Last week System76 engineers participated in a call with Martin Wimpress of the Ubuntu Desktop team to discuss HiDPI support in Ubuntu, specifically in Unity 7. HiDPI support exists in Unity 7, but there are areas that could use improvement, and the call focused on those: primarily the bugs that remain in the out-of-the-box HiDPI experience, specifically around enabling automatic scaling and having Ubuntu recognize when a HiDPI display is present so that it can adjust accordingly.

This has become a focus of System76 as it has worked to provide a good experience for users purchasing their new 4K HiDPI displays now available on the Oryx Pro and BonoboWS laptops.

“With our HiDPI laptops, everything is twice as crisp; it’s like a high-quality printed magazine instead of a traditional computer display. The user interface is clearer, text is sharper, photos are more detailed, games are higher res, and videos can be viewed in full lifelike 4K. This is great whether you’re anyone from a casual computer user to a video editor producing high end content or a professional developer who wants a better display for your code editor.”, says Cassidy James Blaede, a developer at System76 and a co-founder of elementary OS, an Ubuntu-based distribution that has put a lot of work into HiDPI support. Cassidy recently wrote a blog post explaining HiDPI, diving into the specifics of how it works.
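Until that automatic detection is in place, a common manual workaround on Unity 7 is to set the scale factor by hand. The following is only a hedged sketch: the two gsettings keys below do exist (the Unity one takes a per-monitor dictionary expressed in eighths, so 16 means 2x scaling), but the monitor name 'eDP-1' and the values are illustrative and depend on your hardware:

$ gsettings get com.ubuntu.user-interface scale-factor
$ gsettings set com.ubuntu.user-interface scale-factor "{'eDP-1': 16}"
$ gsettings set org.gnome.desktop.interface scaling-factor 2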

Some patches that improve HiDPI support are in review and are expected to land in Ubuntu soon. To accelerate this process, HiDPI bugs in Launchpad are being tagged accordingly, which will make it easier for contributors to focus their efforts. System76 will be contributing heavily to this process, but many other Ubuntu community members have expressed interest in contributing as well, so this will likely be a hot spot in the near future.

08 December, 2016 02:08PM

Ubuntu Insights: Using the ubuntu-app-platform content interface in app snaps

This is a guest post by Olivier Tilloy, Software Engineer at Canonical. If you would like to contribute a guest post, please contact ubuntu-devices@canonical.com

Recently the ubuntu-app-platform snap has been made available in the store for application developers to build their snaps without bundling all their dependencies. The ubuntu-app-platform snap includes standard Qt libraries (version 5.6.1 as of this writing) and QML runtime, the ubuntu UI toolkit and related dependencies, and oxide (a web engine based on the chromium content API and its QML bindings).

This allows app developers to declare a dependency on this snap through the content sharing mechanism, thus dramatically reducing the size of the resulting app snaps.

I went through the exercise with the webbrowser-app snap. This proved surprisingly easy and the size of the snap (amd64 architecture) went down from 136MB to 22MB, a sizeable saving!

For those interested in the details, here are the actual changes in the snapcraft.yaml file: see here.

Essentially, they consist of:

  • Using the ‘platform’ plug (content interface) and specifying its default provider (‘ubuntu-app-platform’)
  • Removing pretty much all stage packages
  • Adding an implicit dependency on the ’desktop-ubuntu-app-platform’ wiki part
  • Adding an empty ‘ubuntu-app-platform’ directory in the snap where snapd will bind-mount the content shared by the ubuntu-app-platform snap

Note that the resulting snap could be made even smaller. There is a known bug in snapcraft where it uses ldd to crawl the dependencies, ignoring the fact that those dependencies are already present in the ubuntu-app-platform snap.

Also note that if your app depends on any Qt module that isn’t bundled with ubuntu-app-platform, you will need to add it to the stage packages of your snap, and this is likely to bring in all the Qt dependencies, thus duplicating them. The easy fix for this situation is to override snapcraft’s default behaviour by specifying which files the part should install, using the “snap” section (see what was done for e.g. the address-book-app here).
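For a more concrete picture, here is a rough, hedged sketch of what the relevant snapcraft.yaml pieces can look like. The plug name, part name and file paths are illustrative rather than copied from the actual webbrowser-app or address-book-app changes, so treat it as a template to adapt:

plugs:
  platform:
    interface: content
    content: ubuntu-app-platform1
    target: ubuntu-app-platform
    default-provider: ubuntu-app-platform

parts:
  my-app:
    after: [desktop-ubuntu-app-platform]
    # ship only the app's own files; Qt, the UI toolkit and Oxide come from the platform snap
    snap:
      - usr/bin/my-app
      - usr/share/my-app

The empty ubuntu-app-platform directory named in “target” is where snapd bind-mounts the content shared by the ubuntu-app-platform snap at run time.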

08 December, 2016 02:03PM

Alessio Treglia: The new professionals of the interconnected world

There is an empty chair at the conference table of business professionals, an unassigned place that increasingly demands the presence of a new type of integration manager. The demands for ever-increasing specialization, imposed by the modern world, are bringing out with great emphasis the need for an interdisciplinary professional who understands the demands of specialists and who is able to coordinate and link actions and decisions. This need, often still ignored, is a direct result of the growing complexity of the modern world and of fast communications inside the network.

“Complexity” is undoubtedly the most suitable paradigm to characterize the historical and social model of today’s world, in which the interactions and connections between the various areas now form an inextricable network of relations. Since the ’60s and ’70s a large group of scholars – including the chemist Ilya Prigogine and the physicist Murray Gell-Mann – began to study what would become a true Science of Complexity.

Yet this is not an entirely new concept: the term means “composed of several parts connected to each other and dependent on each other”, exactly like reality, nature, society, and the environment around us. A “complex” mode of thought integrates and considers all contexts, interconnections, and interrelationships between the different realities as part of its vision.

What is professionalism? And who are professionals? What can define a professional? <…>

<Read More…[by Fabio Marzocca]>

08 December, 2016 02:02PM

Dustin Kirkland: Ubuntu 16.04 LTS Security: A Comprehensive Overview


From Linux kernel livepatches to encryption to ASLR to compiler optimizations and configuration hardening, we strive to ensure that Ubuntu 16.04 LTS is the most secure Linux distribution out of the box.

These slides try to briefly explain:

  • what we do to secure Ubuntu
  • how the underlying technology works
  • when the features took effect in Ubuntu

I hope you find this slide deck informative and useful!  The information herein is largely collected from the Ubuntu Security Features wiki page, where you can always find up to date information.
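As a quick, hedged illustration, here is how a couple of these features can be checked or enabled on a 16.04 machine (the livepatch token is a placeholder you obtain from the Canonical Livepatch page; the commands are real, output omitted):

$ sudo snap install canonical-livepatch
$ sudo canonical-livepatch enable YOUR-TOKEN
$ canonical-livepatch status
$ sysctl kernel.randomize_va_space

A value of 2 for kernel.randomize_va_space means full address space layout randomization is in effect.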



Cheers,
Dustin

08 December, 2016 01:28PM by Dustin Kirkland (noreply@blogger.com)

Ubuntu Insights: Mounting your home directory in LXD

As of LXD stable 2.0.8 and feature release 2.6, LXD has support for various UID and GID map related manipulations. A common question is: “How do I bind-mount my home directory into a container?” Previously the answer was “well, it’s complicated, but you can do it; it’s slightly less complicated if you do it in privileged containers”. However, with this feature, you can now do it very easily in unprivileged containers.

First, find out your uid on the host:

$ id
uid=1000(tycho) gid=1000(tycho) groups=1000(tycho),4(adm),24(cdrom),27(sudo),30(dip),46(plugdev),112(lpadmin),124(sambashare),129(libvirtd),149(lxd),150(sbuild)

On standard Ubuntu hosts, the uid of the first user is 1000. Now, we need to allow LXD to remap this id; you’ll need an additional entry for root to do this:

$ echo 'root:1000:1' | sudo tee -a /etc/subuid /etc/subgid

Now, create a container, and set the idmap up to map both uid and gid 1000 to uid and gid 1000 inside the container.

$ lxc init ubuntu-daily:z zesty

Creating zesty

$ lxc config set zesty raw.idmap 'both 1000 1000'

Finally, set up your home directory to be mounted in the container:

$ lxc config device add zesty homedir disk source=/home/tycho path=/home/ubuntu

And leave an insightful message for users of the container:

$ echo 'meshuggah rocks' >> message

Finally, start your container and read the message:

$ lxc start zesty
$ lxc exec zesty cat /home/ubuntu/message
meshuggah rocks
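
To double-check that the id mapping is in effect, you can look at the file’s ownership from inside the container and append to it from the host; a small sketch using the names from this example:

$ lxc exec zesty -- ls -ln /home/ubuntu/message
$ echo 'still rocks' >> ~/message
$ lxc exec zesty cat /home/ubuntu/message

The first command should report uid and gid 1000 rather than 65534 (nobody), and the appended line shows up inside the container immediately.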

And enjoy the insight offered to you by your home directory 🙂

08 December, 2016 09:00AM

Nathan Haines: UbuCon Europe 2016

UbuCon Europe 2016

Nathan Haines enjoying UbuCon Europe

If there is one defining aspect of Ubuntu, it's community. All around the world, community members and LoCo teams get together not just to work on Ubuntu, but also to teach, learn, and celebrate it. UbuCon Summit at SCALE was a great example of an event that was supported by the California LoCo Team, Canonical, and community members worldwide coming together to make an event that could host presentations on the newest developer technologies in Ubuntu, community discussion roundtables, and a keynote by Mark Shuttleworth, who answered audience questions thoughtfully, but also hung around in the hallway and made himself accessible to chat with UbuCon attendees.

Thanks to the Ubuntu Community Reimbursement Fund, the UbuCon Germany and UbuCon Paris coordinators were able to attend UbuCon Summit at SCALE, and we were able to compare notes, so to speak, as they prepared to expand by hosting the first UbuCon Europe in Germany this year. Thanks to the community fund, I also had the immense pleasure of attending UbuCon Europe. After I arrived, Sujeevan Vijayakumaran picked me up from the airport and we took the train to Essen, where we walked around the newly-opened Weihnachtsmarkt along with Philip Ballew and Elizabeth Joseph from Ubuntu California. I acted as official menu translator, so there were no missed opportunities for bratwurst, currywurst, glühwein, or beer. Happily fed, we called it a night and got plenty of sleep so that we would last the entire weekend long.

Zeche Zollverein, a UNESCO World Heritage site

UbuCon Europe was a marvelous experience. Friday started things off with social events so everyone could mingle and find shared interests. About 25 people attended the Zeche Zollverein Coal Mine Industrial Complex for a guided tour of the last operating coal extraction and processing site in the Ruhr region and was a fascinating look at the defining industry of the Ruhr region for a century. After that, about 60 people joined in a special dinner at Unperfekthaus, a unique location that is part creative studio, part art gallery, part restaurant, and all experience. With a buffet and large soda fountains and hot coffee/chocolate machine, dinner was another chance to mingle as we took over a dining room and pushed all the tables together in a snaking chain. It was there that some Portuguese attendees first recognized me as the default voice for uNav, which was something I had to get used to over the weekend. There's nothing like a good dinner to get people comfortable together, and the Telegram channel that was established for UbuCon Europe attendees was spread around.

Sujeevan Vijayakumaran addressing the UbuCon Europe attendees

The main event began bright and early on Saturday. Attendees were registered on the fifth floor of Unperfekthaus and received their swag bags full of cool stuff from the event sponsors. After some brief opening statements from Sujeevan, Marcus Gripsgård announced an exciting new Kickstarter campaign that will bring an easier convergence experience to not just most Ubuntu phones, but many Android phones as well. Then, Jane Silber, the CEO of Canonical, gave a keynote that went into detail about where Canonical sees Ubuntu in the future, how convergence and snaps will factor into future plans, and why Canonical wants to see one single Ubuntu on the cloud, server, desktop, laptop, tablet, phone, and Internet of Things. Afterward, she spent some time answering questions from the community, and she impressed me with her willingness to answer questions directly. Later on, she was chatting with a handful of people and it was great to see the consideration and thought she gave to those answers as well. Luckily, she also had a little time to just relax and enjoy herself without the third degree before she had to leave later that day. I was happy to have a couple minutes to chat with her.

Nathan Haines and Jane Silber

Microsoft Deutschland GmbH sent Malte Lantin to talk about Bash on Ubuntu on Windows and how the Windows Subsystem for Linux works, and while jokes about Microsoft and Windows were common all weekend, everyone kept their sense of humor and the community showed the usual respect that’s made Ubuntu so wonderful. While being able to run Ubuntu software natively on Windows makes many nervous, it also excites others. One thing is for sure: it’s convenient, and the prospect of having a robust terminal emulator built right in to Windows seemed to be something everyone could appreciate.

After that, I ate lunch and gave my talk, Advocacy for Advocates, where I gave advice on how to effectively recommend Ubuntu and other Free Software to people who aren’t currently using it or aren’t familiar with the concept. It was well-attended and I got good feedback. I also had a chance to speak in German for a minute, as the ambiguity of the term Free Software in English disappears in German, where freie Software is clear and not confused with kostenlose Software. It’s a talk I’ve given before and will definitely give again in the future.

After the talks were over, there was a raffle and then a UbuCon quiz show where the audience could win prizes. I gave away signed copies of my book, Beginning Ubuntu for Windows and Mac Users, in the raffle, and in fact I won a “xenial xerus” USB drive that looks like an origami squirrel as well as a Microsoft t-shirt. Afterwards was a dinner that was not only delicious, with apple crumble for dessert, but also came with free beer and wine, which rarely detracts from any meal.

Marcos Costales and Nathan Haines before the uNav presentation

Sunday was also full of great talks. I loved Marcos Costales’s talk on uNav, and as the video shows, I was inspired to jump up as the talk was about to begin and improvise the uNav-like announcement “You have arrived at the presentation.” With the crowd warmed up from the joke, Marcos took us on a fascinating journey of the evolution of uNav and finished with tips and tricks for using it effectively. I also appreciated Olivier Paroz’s talk about Nextcloud and its goals, as I run my own Nextcloud server. I was sure to be at the UbuCon Europe feedback and planning roundtable and was happy to hear that next year UbuCon Europe will be held in Paris. I’ll have to brush up on my restaurant French before then!

Nathan Haines contemplating tools with a Neanderthal

That was the end of UbuCon, but I hadn’t been to Germany in over 13 years so it wasn’t the end of my trip! Sujeevan was kind enough to put up with me for another four days, and he accompanied me on a couple shopping trips as well as some more sightseeing. The highlight was a trip to the Neanderthal Museum in the aptly-named Neandertal, Germany, and then afterward we met his friend (and UbuCon registration desk volunteer!) Philipp Schmidt in Düsseldorf at their Weihnachtsmarkt, where we tried the Feuerzangenbowle, where mulled wine is improved by soaking a block of sugar in rum, then putting it over the wine and lighting the sugarloaf on fire to drip into the wine. After that, we went to the Brauerei Schumacher where I enjoyed not only Schumacher Alt beer, but also the Rhein-style sauerbraten that has been on my to-do list for a decade and a half. (Other variations of sauerbraten—not to mention beer—remain on the list!)

I’d like to thank Sujeevan for his hospitality on top of the tremendous job that he, the German LoCo, and the French LoCo exerted to make the first UbuCon Europe a stunning success. I’d also like to thank everyone who contributed to the Ubuntu Community Reimbursement Fund for helping out with my travel expenses, and everyone who attended, because of course we put everything together for you to enjoy.

08 December, 2016 05:04AM

VyOS

VyOS 2.0 development digest #1

I keep talking about the future VyOS 2.0 and how we all should be doing it, but I guess my biggest mistake is not being public enough, and not being structured enough.

In the early days of VyOS, I used to post development updates, which no one would read or comment upon, so I gave up on it. Now that I think of it, I shouldn't have expected much: the community was very small at the time, and there was hardly anyone to read them in the first place, even though it was a critical time for the project and input from readers would have been very valuable.

Well, this is a critical time for the project too, and we need your input and your contributions more than ever, so I need to get to fixing my mistakes and try to make it easy for everyone to see what's going on and what we need help with.

Getting a steady stream of contributions is a very important goal. While the commercial support thing we are doing may let the maintainers focus on VyOS and ensure that things like security fixes and release builds get guaranteed attention in time, without occasional contributors who add things they personally need (while maintainers may not; I think I myself use maybe 30% of all VyOS features with any regularity) the project will never realize its full potential, and may go stale.

But to make the project easy to manage and easy to contribute to, we need to solve multiple hard problems. It can be hard to get oneself to do things that promise no immediate returns, but if you look at it the other way, we have a chance to build a system of our dreams together. As for 1.1.x and 1.2.x (the jessie branch), we'll figure out how to maintain them until we solve those problems, but that's for another post. Right now we are talking about VyOS 2.0, which gets to be a cleanroom rewrite.

Why VyOS isn't as good as it could be, and can't be improved

I considered using "Why VyOS sucks" to catch the reader's attention. It's a harsh phrase, and it may not be all that true, given that VyOS in its current state is way ahead of many other systems that don't even have system-wide config consistency checks, or revisions, or safe upgrades, but there are multiple problems that are so fundamental that they are impossible to fix without rewriting at least a very large part of the code.

I'll state the design problems that cannot be fixed in the current system. They affect both end users and contributors, sometimes indirectly, but very seriously.

Design problem #1: partial commits

You've seen it. You commit, there's an error somewhere, and one part of the config is applied, while the other isn't. Most of the time it's just a nuisance, you fix the issue and commit again, but if you, say, change interface address and firewall rule that is supposed to allow SSH to it, you can get locked out of your system.

The feature that can't be implemented due to it is what goes by "commit check" in JunOS: you can't test whether your configuration will apply cleanly without actually committing it.

It's because in the scripts, the logic for consistency checking and generating real configs (and sometimes applying them too) is mixed together. Regardless of the backend issues, every script needs to be taken apart and rewritten to separate that logic. We'll talk more about it later.

Design problem #2: read and write operations disparity

Config reads and writes are implemented in completely different ways. There is no easy programmatic API for modifying the config, and it's very hard to implement because binaries that do it rely on specific environment setup. Not impossible, but very hard to do right, and to maintain afterwards.

This blocks many things: a network API and thus an easy-to-implement GUI, and modifying the config from scripts in sane ways (we do have the script-template which does the trick, kinda, but it could be a lot better).

Design problem #3: internal representation

Now we are getting to really bad stuff. The running config is represented as a directory tree in tmpfs. If you find it hard to believe, browse /opt/vyatta/config/active, e.g. /opt/vyatta/config/active/system/time-zone/node.val

Config levels are directories, and node values are in node.val files. For every config session, a copy of the active directory is made and mounted together with the original directory in a union mount through UnionFS.
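
To see what that means in practice, here is a small sketch of browsing the active config on a running system (the time-zone value shown is just an example):

$ find /opt/vyatta/config/active/system/time-zone
/opt/vyatta/config/active/system/time-zone
/opt/vyatta/config/active/system/time-zone/node.val
$ cat /opt/vyatta/config/active/system/time-zone/node.val
UTC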

There are lots of reasons why it's bad:

  • It relies on the behaviour of UnionFS; OverlayFS or another filesystem won't do. We are at the mercy of the unionfs-fuse developers now, and if they stop maintaining it (and I can see why they might, OverlayFS has many advantages over it), things will get interesting for us
  • It requires watching file ownership and permissions. Scripts that modify the config need to run as vyattacfg group, and if you forget to sg, you end up with a system where no one but you (or root) can make any new commits, until you fix it by hand or reboot
  • It keeps us from implementing role-based access control, since config permissions are tied to UNIX permissions, and we'd have to map it to POSIX ACLs or SELinux and re-create those access rules at boot since the running config dir is populated by loading the config
  • For large configs, it creates a fair number of system calls and context switches, which may make the system run slower than it could

Design problem #4: rollback mechanism

Due to certain details (mostly the handling of default values), and the way config scripts work, rollback cannot be done without a reboot. The same issue once made Vyatta developers revert the activate/deactivate feature.

It makes confirmed commit a lot less useful than it should be, especially in telecom where routers cannot be rebooted at random even in maintenance windows.

Implementation problem #1: untestable logic

We already discussed it a bit. The logic for reading the config, validating it, and generating application configs is mixed together in most of the scripts. It may not look like a big deal, but for maintainers and contributors it is. It's also amplified by the fact that there is no way to create and manipulate configs separately; the only way you can test anything is to build a complete image, boot it, and painstakingly test everything by hand, or have an expect-like tool emulate testing it by hand.

You never know whether your changes will work until you get them onto a live system. This allows syntax errors in command definitions and compilation errors in scripts to make it into builds, and such errors have made it into a release more than once when the problem wasn't immediately apparent and only appeared with a certain combination of options.

This can be improved a lot by testing components in isolation, but this requires that the code is written in an appropriate way. If you write a calculator and start with add(), sub(), mul() etc. functions, and use them in a GUI form, you can test the logic on its own automatically, e.g. does add(2,3) equal 5, does mul(9, 0) equal 0, does sqrt(-3) raise an exception, and so on. But if you embed that logic in button event handlers, you are out of luck. That's how VyOS is for the most part: even if you mock the config subsystem so that config read functions return test data, you still need to redo the scripts so that every function does exactly one thing, testable in isolation.

This is one of the reasons 1.2.0 is taking so long: without tests, or even the ability to add them, we don't even know what's not working until we stumble upon it in manual testing.

Implementation problem #2: command definitions

This is a design problem too, but it's not so fundamental. Right now we use a custom syntax for command definitions (aka "templates"), which have tags such as help: or type: and embedded shell scripts. There are multiple problems with it. For example, it's not easy to automatically generate even a command reference from them, and you need a complete live system for that, since part of the templates is autogenerated. The other issue is that right now some components make very extensive use of embedded shell, and some things are implemented entirely in embedded shell scripts inside templates, which makes testing even harder than it already is.
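
For readers who have never opened one, a node.def template looks roughly like the sketch below. This is an illustrative reconstruction rather than a copy of a real VyOS template; the tag names (help:, type:, syntax:expression:, allowed:) are the real ones, but the pattern and the embedded shell snippet are made up for the example:

help: System host name
type: txt
syntax:expression: pattern $VAR(@) "^[A-Za-z0-9][-.A-Za-z0-9]*$" ; "invalid host name"
allowed: echo $(hostname)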

We could talk about upgrade mechanism too, but I guess I'll leave it for another post. Right now I'd like to talk about proposed solutions, and what's being done already, and what kind of work you can join.

08 December, 2016 03:27AM

December 07, 2016

Ubuntu developers

Ubuntu Insights: Webinar: Running an enterprise-grade OpenStack without the headaches

Watch Webinar On-demand

In Canonical’s latest on-demand webinar, we explore how to build and manage highly scalable OpenStack clouds with BootStack on a Supermicro reference architecture.

Supermicro offers end-to-end green computing solutions for the data center and enterprise IT. Its solutions range from servers, storage, blades, workstations and networking devices to server management software and technology support and services, making it an ideal choice for BootStack deployments.

With BootStack, Canonical experts build, operate and optionally transfer a customer’s OpenStack cloud either on-premises or in a hosted environment. Using a hyper-converged solution stack that has been tested & validated in Supermicro labs, customers can choose a best-in-class hardware platform with OIL validation, the leading production OS for OpenStack deployments and networking overlay delivered as a fully managed service. Using Juju, the application and service modelling tool, BootStack customers can easily integrate the infrastructure and operations they need.

Join Arturo Suarez and Akash Chandrashekar from Canonical, and Srini Bala from Supermicro, as they explore a rich landscape of opportunities that combines Juju with Supermicro’s certified platforms to help you tackle the challenge of building and maintaining complex microservices-based solutions like OpenStack and Kubernetes.

Watch On-Demand

07 December, 2016 05:25PM

Cumulus Linux

How much does web-scale networking cost?

Here at Cumulus Networks, we cannot help but tout our innovative technology, user-friendly features and seamless integrations. Can you blame us for being excited about what we do? But at the end of the day, we know that your business decisions come down to the bottom line. How much will web-scale networking cost my business? What’s the total cost of ownership?

Going web-scale with Cumulus Networks can save your business money on both Capex and Opex. We created this easy-to-use TCO calculator so you can see your potential cost based on your data center requirements.

 

Web-scale networking cost calculator


 

Our total-cost-of-ownership (TCO) calculator was designed using actual customer data. We found that our customers were saving substantially on things like:

  • The cost of acquiring open switches and optics
  • The ability to leverage existing talent and tools
  • OS required for Layer 3 networking
  • OS support and integration
  • The cost of scalability

One of the biggest contributors to lowering costs with Cumulus Networks is that web-scale architecture with open networking offers choice and flexibility. You can use any brand of ONIE-compatible switches and optics. In fact, Cumulus Linux is the only native Linux Network Operating System (NOS) that supports over 50 switch hardware platforms based on eight chipsets from two silicon vendors.

We created the TCO calculator to help make your decision to trust Cumulus Linux easier. If you would like to learn how we put together the calculator as well as see some of the data behind it, you can also check out the TCO Report to get a detailed breakdown.

Based on the data we gathered, we learned that within three years most customers reduced TCO by 48%-55% on their network infrastructure with web-scale networking.

So how much could you save by lowering your data center costs? Use the calculator to find out and then view the TCO report on that page to learn more.

The post How much does web-scale networking cost? appeared first on Cumulus Networks Blog.

07 December, 2016 04:59PM by Kelsey Havens

ArcheOS

Comparing 7 photogrammetry systems. Which is the best one?


by Cicero Moraes
3D Designer of Arc-Team.

When I explain to people that photogrammetry is a 3D scanning process from photographs, I always get a look of mistrust, as it seems too fantastic to be true. Just imagine, take several pictures of an object, send them to an algorithm and it returns a textured 3D model. Wow!

After presenting the model, the second question from interested parties always revolves around precision. What is the accuracy of a 3D scan from photos? The answer is: submillimetric. And again I am met with a look of mistrust. Fortunately, our team wrote a scientific paper about an experiment that showed an average deviation of 0.78 mm, that is, less than one millimeter compared to scans done with a laser scanner.

Just as with the market for laser scanners, in photogrammetry we have numerous software options for carrying out the scans. They range from proprietary and closed solutions to open and free ones. And it is precisely in the face of this variety of programs and solutions that the third question arises, hitherto unanswered, at least officially:

Which photogrammetry software is the best?

This is more difficult to answer, because it depends a lot on the situation. But thinking about it, and in light of the many approaches I have tried over time, I decided to respond in the way I thought was broadest and fairest.


The skull of the Lord of Sipan


In July of 2016 I traveled to Lambayeque, Peru, where I stood face to face with the skull of the Lord of Sipan. In analyzing it I realized that it would be possible to reconstruct his face using forensic facial reconstruction techniques. The skull, however, was broken and deformed by the years of pressure it had suffered in its tomb, which was found intact in 1987 in one of the greatest feats of archaeology, led by Dr. Walter Alva.


To reconstruct the skull I took 120 photos with an Asus ZenFone 2 smartphone, and with these photos I proceeded with the reconstruction work. In parallel, professional photographer Raúl Martin, from the Marketing Department of Inca University Garcilaso de la Vega (sponsor of my trip), took 96 photos with a Canon EOS 60D camera. Of these, I selected 46 images for the experiment.

A specialist from the Ministry of Culture of Peru beginning the digitization of the skull (center)


A day after the photographic survey, the Peruvian Ministry of Culture sent specialists in laser scanning to scan the skull of the Lord of Sipan, bringing a Leica ScanStation C10. The final point cloud was sent 15 days later; in other words, by the time I received the data from the laser scanner, all the models surveyed by photogrammetry were ready.

We had to wait this long because the model produced by that equipment is the gold standard, that is, all the meshes produced by photogrammetry would be compared with it, one by one.

Full point cloud imported into MeshLab after conversion done in CloudCompare
The point clouds resulting from the scan were .LAS and .E57 files ... and I had never heard of them. I had to do a lot of research to find out how to open them on Linux using free software. The solution was to do it in CloudCompare, which can import .E57 files. I then exported the model as .PLY so I could open it in MeshLab and reconstruct the 3D mesh with the Poisson algorithm.
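
For anyone who runs into the same formats, a hedged sketch of the conversion from the command line might look like this; the CloudCompare flags below exist in its headless mode, but check the documentation for your version, and the file name is illustrative:

$ CloudCompare -SILENT -O scan.e57 -C_EXPORT_FMT PLY -SAVE_CLOUDS

The resulting .PLY can then be opened in MeshLab and the mesh rebuilt with the Surface Reconstruction: Poisson filter.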

3D mesh reconstructed from a points cloud. Vertex Color (above) and surface with only one color (below).

As you can see above, the jaw and the surface of the table where the pieces were placed were also scanned. The part corresponding to the skull was isolated and cleaned so that the experiment could be performed. I will not go into those details here, since the scope is different; I have already written other material explaining how to delete unimportant parts of a point cloud or mesh.

For the scanning via photogrammetry, the chosen systems were:

1) OpenMVG (Open Multiple View Geometry library) + OpenMVS (Open Multi-View Stereo reconstruction library): The sparse cloud of points is calculated in OpenMVG and the dense cloud of points in OpenMVS.

2) OpenMVG + PMVS (Patch-based Multi-view Stereo Software): The sparse cloud of points is calculated in the OpenMVG and later the PMVS calculates the dense cloud of points.

3) MVE (Multi-View Environment): A complete photogrammetry system.

4) Agisoft® Photoscan: A complete and closed photogrammetry system.

5) Autodesk® Recap 360: A complete online photogrammetry system.

6) Autodesk ® 123D Catch: A complete online photogrammetry system.

7) PPT-GUI (Python Photogrammetry Toolbox with graphical user interface): The sparse cloud of points is generated by the Bundler and later the PMVS generates the dense cloud.

* Run on Linux under Wine (PlayOnLinux).

Above we have a table summarizing important aspects of each of the systems. In general, at least on the surface, no single system stands out much more than the others.


Sparse cloud generation + dense cloud generation + 3D mesh + texture, not counting the time to upload photos and download the 3D mesh (in the cases of Recap 360 and 123D Catch).
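
Before moving on to the alignment, it may help to show what running one of the open pipelines actually involves. Below is a hedged sketch of an OpenMVG + OpenMVS session (option 1 above); the binary names are the ones shipped by those projects, but the flags and intermediate file names are from memory and vary between versions:

$ openMVG_main_SfMInit_ImageListing -i photos/ -o matches/ -d sensor_width_camera_database.txt
$ openMVG_main_ComputeFeatures -i matches/sfm_data.json -o matches/
$ openMVG_main_ComputeMatches -i matches/sfm_data.json -o matches/
$ openMVG_main_IncrementalSfM -i matches/sfm_data.json -m matches/ -o reconstruction/
$ openMVG_main_openMVG2openMVS -i reconstruction/sfm_data.bin -o scene.mvs
$ DensifyPointCloud scene.mvs
$ ReconstructMesh scene_dense.mvs
$ TextureMesh scene_dense_mesh.mvs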

Alignment based on compatible points

Aligned skulls
All meshes were imported into Blender and aligned with the laser scan.


Above we see all the meshes side by side. We can see that some surfaces are so dense that we notice only the edges, as in the case of the laser scan and OpenMVG + PMVS. First, a very important piece of information: the texture on scanned meshes tends to deceive us about the quality of the scan, so in this experiment I decided to ignore the texture and focus on the 3D surface. Therefore, I exported all the original models in .STL format, which is known to carry no texture information.


Looking closely, we will see that the results are consistent even for meshes with fewer subdivisions. The ultimate goal of the scan, at least in my work, is to get a mesh that is consistent with the original object. If that mesh is simplified, as long as it remains faithful to the real volumetric shape, so much the better: the fewer faces a 3D mesh has, the faster it is to process during editing.


If we look at the file sizes (.STL exported without texture), which are a good comparison parameter, we will see that the mesh created by OpenMVG + OpenMVS, already cleaned, is 38.4 MB, while the Recap 360 one is only 5.1 MB!

After years of working with photogrammetry, I have realized that the best thing to do when we come across a very dense mesh is to simplify it, so we can handle it smoothly in real time. It is difficult to know whether this is indeed what happens, as they are proprietary and closed solutions, but I suppose both Recap 360 and 123D Catch generate complex meshes and then, at the end of the process, simplify them considerably so they run on any hardware (PCs and smartphones), preferably with WebGL support (interactive 3D in the internet browser).

We will return to this question of mesh simplification shortly; for now, let's compare the meshes.

How 3D Mesh Comparison Works


Once all the skulls have been cleaned and aligned to the gold standard (the laser scan), it is time to compare the meshes in CloudCompare. But how does this 3D mesh comparison technology work?

To illustrate this, I created some didactic elements. Let's go to them.


This didactic element deals with two planes with surfaces of thickness 0 (this is possible in 3D digital modeling) forming an X.


Then we have object A and object B. At the far ends of both sides, the planes are a few millimeters apart. Where there is an intersection the distance is, of course, zero mm.


When we compare the two meshes in CloudCompare, they are colored with a spectrum that goes from blue to red. The image above shows the two planes already colored, but we must remember that they are two distinct elements and the comparison is made in two passes, one with respect to the other.

Now we have a clearer idea of how it works. Basically, what happens is the following: we set a distance limit, in this case 5 mm. What is "outside" tends to be colored red, what is "inside" tends to be colored blue, and what is at the intersection, i.e. on the same surface, tends to be colored green.


Now I will explain the approach taken in this experiment. Above we have an element whose central region tends to zero and whose ends are set at +1 and -1 mm. It does not appear in the image, but the element we compare against is a simple plane positioned at the center of the scene, right at the base of the 3D bells, both those "facing up" and those "facing down".


As I mentioned earlier, we set the comparison limit. Initially it was set at +2 and -2 mm. What if we change this limit to +1 and -1 mm? This was done in the image above; note the part that now falls out of bounds.


In order for these off-limits parts not to interfere with visualization, we can erase them.


This results in a mesh comprising only the part of the structure we are interested in.

For those who know a little more about 3D digital modeling, it is clear that the comparison is made at the vertices rather than the faces. Because of this, we get a serrated edge.

Comparing Skulls


The comparison was made as PHOTOGRAMMETRY vs. LASER SCANNING with limits of +1 and -1 mm. Everything outside that range was erased.


OpenMVG+OpenMVS


OpenMVG+PMVS


Photoscan


MVE


Recap 360


123D Catch


PPT-GUI


Putting all the comparisons side by side, we see that there is a strong tendency towards zero: the seven photogrammetry systems are effectively compatible with laser scanning!


Let's now turn to the issue of file sizes. One thing that has always bothered me in comparisons of photogrammetry results is the weight given to the number of subdivisions generated by the algorithms that reconstruct the meshes. As I mentioned above, this does not make much sense, since in the case of the skull we can simplify the surface and it still keeps the information necessary for anthropological survey work and forensic facial reconstruction.

Given this, I decided to level all the files, making them comparable in size and subdivision. To do this, I took as a baseline the smallest file, the one generated by 123D Catch, and used MeshLab's Quadric Edge Collapse Decimation filter set to 25,000. This resulted in 7 STLs of 1.3 MB each.

With this leveling we now have a fair comparison between photogrammetry systems.


Above we can see the work steps. In the Original field are the skulls as initially aligned. In Compared we see the skulls with only the areas of interest kept, and finally, in Decimated we have the skulls leveled in size. To an unsuspecting reader it looks like a single image placed side by side.


When we view the comparisons in "solid" mode, we can see more clearly how compatible they all are. Now, let's move on to the conclusions.


Conclusion


The most obvious conclusion is that, overall, with the exception of MVE, which showed less definition in the mesh, all the photogrammetry systems produced very similar visual results.

Does this mean that the MVE is inferior to the others?

No, quite the opposite. MVE is a very robust and practical system. On another occasion I will present its use in a case of prosthesis making with millimetric quality. Beyond that case, it has also been used in other prosthetics projects, a field that demands a lot of precision, and it was successful. One such case was even published on the official website of Darmstadt University, the institution that develops MVE.

What is the best system at all?

It is very difficult to answer this question, because it depends a lot on the user's style.

What is the best system for beginners?

Undoubtedly, it's Autodesk® Recap 360. This is an online platform that can be accessed from any operating system with an Internet browser that supports WebGL. I have even tested it directly on my smartphone and it worked. In the courses I teach on photogrammetry, I have used this solution more and more, because students tend to understand the process much faster than with the other options.

What is the best system for modeling and animation professionals?

I would recommend Agisoft® Photoscan. It has a graphical interface that makes it possible, among other things, to create a mask over the region of interest, and it also allows you to limit the calculation area, drastically reducing processing time. In addition, it exports to a wide variety of formats and can show where the cameras were at the moment they photographed the scene.

Which system do you like the most?

Well, personally I appreciate each of them in certain situations. My favorite today is the mixed OpenMVG + OpenMVS solution. Both are open source and can be driven from the command line, allowing me to control a series of properties and adjust the scanning to the need at hand, be it reconstructing a face, a skull or any other piece. Although I really like this solution, it has some problems, such as the misalignment of the cameras in relation to the models when the sparse cloud scene is imported into Blender. To work around this I use PPT-GUI, which generates the sparse cloud with Bundler, and the match, that is, the alignment of the cameras in relation to the cloud, is perfect. Another problem with OpenMVG + OpenMVS is that it occasionally does not generate a full dense cloud, even if the sparse cloud shows all the cameras aligned. To work around that I use PMVS which, although it generates a less dense mesh than OpenMVS, ends up being very robust and works in almost all cases.

Another problem with the open source options is the need to compile the programs. Everything works very well on my computers, but when I have to pass the solutions on to students or other interested people it becomes a big headache. For the end user what matters is having software where images go in on one side and a 3D model comes out on the other, and that is what the proprietary solutions offer in a straightforward way. In addition, the licenses of the resulting models are clearer in these applications; I feel safer in the professional modeling field using models generated in Photoscan, for example. Technically, you pay for the license and can generate models at will, using them in your work. Which appears to be more or less the same with the Autodesk® solutions.

Acknowledgements


To the Inca University Garcilaso de la Vega for coordinating and sponsoring the facial reconstruction project of the Lord of Sipán, responsible for taking me to Lima and Lambayeque in Peru. Many thanks to Dr. Eduardo Ugaz Burga and to MSc. Santiago Gonzáles for all their strength and support. I thank Dr. Walter Alva for his confidence in opening the doors of the Tumbas Reales de Sipán museum so that we could photograph the skull of the historical figure that bears his name. This thanks extends to the technical staff of the museum: Edgar Bracamonte Levano, Cesar Carrasco Benites, Rosendo Dominguez Ruíz, Julio Gutierrez Chapoñan, Jhonny Aldana Gonzáles, Armando Gil Castillo. I thank Dr. Everton da Rosa for supporting the research, not only by acquiring a Photoscan license for it, but also by using photogrammetry in his orthognathic surgery planning. Dr. Paulo Miamoto for brilliantly presenting the results of this research during the XIII Brazilian Congress of Legal Dentistry and the II National Congress of Forensic Anthropology in Bahia. Dr. Rodrigo Salazar for accepting me into his research group on facial reconstruction of cancer victims, which opened my eyes to many possibilities related to photogrammetry in the treatment of humans. The members of the Animal Avengers group, Roberto Fecchio, Rodrigo Rabello, Sergio Camargo and Matheus Rabello, for welcoming photogrammetry-based solutions in their research. Dr. Marcos Paulo Salles Machado (IML RJ) and the members of IGP-RS (SEPAI) Rosane Baldasso, Maiquel Santos and coordinator Cleber Müller, for adopting the use of photogrammetry in official forensic work. To you all, thank you!

07 December, 2016 04:57PM by cogitas3d (noreply@blogger.com)

Ubuntu developers

Ubuntu Insights: Jamming with Ubuntu Core

We celebrated the launch of Ubuntu Core 16 with a hackathon – or shall we say Snapathon – that took place in Shenzhen on 26-27th November. The 30+ hour gathering and coding session was attended by developers, makers and anyone interested in the Internet of Things and the technology that powers it. Attendees came from different backgrounds, industries and places, but demonstrated the same passion and interest to learn about Ubuntu Core and be part of the evolution of the Internet of Things.

From smart homes to drones, robots and industrial systems, Ubuntu Core provides robust security, app stores and reliable updates. Ubuntu makes development easy…and snap packages make Ubuntu Core secure and reliable for widely distributed devices. This hackathon demonstrated how easy it is to package a snap and work with Ubuntu Core.
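
For readers who have not packaged a snap before, the basic workflow is short. This is a hedged sketch: the commands are real on 16.04/16.10-era systems, but the snap name and file name are placeholders:

$ sudo apt install snapcraft
$ snapcraft init
$ snapcraft
$ sudo snap install my-snap_0.1_amd64.snap --dangerous

snapcraft init creates a template snapcraft.yaml to fill in, snapcraft builds the .snap, and --dangerous lets you install the locally built, unsigned package for testing.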

We kicked off with a tech talk on Ubuntu Core and then the hacking session took place…

  • 10 teams on site
  • 29 hours of non-stop coding
  • 6 different types of hardware / dev boards and sensors (Raspberry Pi 3, Qualcomm DragonBoards, LeMaker HiKey 96Boards, Intel NUCs, Dell gateways, and Pine A64 boards)
  • And 7 snaps were born!

1. snap: water-iot-service, by Jarvis Chung and Lucas Lu

This project is an application that helps monitor and test water quality and status in different environments, especially where direct human access is difficult or dangerous. It uses a Raspberry Pi 3 and a few sensors to gather data, which is sent remotely to QNAP NAS systems for analysis. Results can be accessed through a web interface.

The team who worked on this project were from QNAP System. More information on QNAP and their solutions can be found here.

2. Project Cooltools, snap: sensor-gw, by Hao Jianlin

This project uses a TI SensorTag to measure the local light conditions, and the snap application then auto-adjusts a smart bulb's lighting to achieve an optimized ambience. The project is powered by Ubuntu Core and runs on a Qualcomm DragonBoard. A useful addition to your smart home solution!

The team behind this project is from Shenzhen CoolTools, a startup company focusing on developing smart IoT solutions and applications.

3. snap: crazy-app, by Crazyou

Crazy-app was developed by Crazyou, a startup robotics company based in Shenzhen. Their crazy-app snap provides remote monitoring, remote control and administration for their robots… as well as remote access to a robot's webcam to capture images of its surroundings! More information about Crazyou and their robots can be found here.

4. snap: Simcaffe, by Lao Liang

Running on a Qualcomm DragonBoard 410c, the project is powered by Ubuntu Core and comes with an AI built with the Caffe deep learning framework, which can be trained to recognize different images. The project was designed for use in smart surveillance systems. The project code is available on GitHub.

5. snap: Sutop, by team PCBA

Sutop is a simple yet handy system admin tool that can monitor and manage your device's system remotely.

6. My wardrobe, by Li Jiancheng

Powered by Ubuntu Core and running on a Raspberry Pi, it's a simple snap that stores images of all your clothes and helps organize them, offering matching suggestions when you need some help getting stylish.

7. Project Cellboot, by Shen Jianfeng

It is a cluster snap that can utilize all connected Ubuntu Core devices to perform clustered data computing and analysis tasks.

Besides the above projects, a couple more were developed during the hackathon; however, given the time limit, they didn't make it to the demo stage – though we're looking forward to seeing them in the store soon!

It was indeed a long night in Shenzhen, but the amount of ideas and innovation that came out of it was amazing. Until next time…!

07 December, 2016 04:42PM

Stéphane Graber: Running snaps in LXD containers

LXD logo

Introduction

The LXD and AppArmor teams have been working to support loading AppArmor policies inside LXD containers for a while. This support, which finally landed in the latest Ubuntu kernels, now makes it possible to install snap packages.

Snap packages are a new way of distributing software, directly from the upstream and with a number of security features wrapped around them so that these packages can’t interfere with each other or cause harm to your system.

Requirements

There are a lot of moving pieces to get all of this working. The initial enablement was done on Ubuntu 16.10 with Ubuntu 16.10 containers, but all the needed bits are now progressively being pushed as updates to Ubuntu 16.04 LTS.

The easiest way to get this to work is with:

  • Ubuntu 16.10 host
  • Stock Ubuntu kernel (4.8.0)
  • Stock LXD (2.4.1 or higher)
  • Ubuntu 16.10 container with “squashfuse” manually installed in it

Installing the nextcloud snap

First, let’s get ourselves an Ubuntu 16.10 container with “squashfuse” installed inside it.

lxc launch ubuntu:16.10 nextcloud
lxc exec nextcloud -- apt update
lxc exec nextcloud -- apt dist-upgrade -y
lxc exec nextcloud -- apt install squashfuse -y

And then, let’s install that “nextcloud” snap with:

lxc exec nextcloud -- snap install nextcloud

Finally, grab the container’s IP and access “http://<IP>” with your web browser:

stgraber@castiana:~$ lxc list nextcloud
+-----------+---------+----------------------+----------------------------------------------+
|    NAME   |  STATE  |         IPV4         |                     IPV6                     |
+-----------+---------+----------------------+----------------------------------------------+
| nextcloud | RUNNING | 10.148.195.47 (eth0) | fd42:ee2:5d34:25c6:216:3eff:fe86:4a49 (eth0) |
+-----------+---------+----------------------+----------------------------------------------+

Nextcloud Login screen

Installing the LXD snap in a LXD container

First, let’s get ourselves an Ubuntu 16.10 container with “squashfuse” installed inside it.
This time with support for nested containers.

lxc launch ubuntu:16.10 lxd -c security.nesting=true
lxc exec lxd -- apt update
lxc exec lxd -- apt dist-upgrade -y
lxc exec lxd -- apt install squashfuse -y

Now let’s remove the LXD that came pre-installed in the container so we can replace it with the snap.

lxc exec lxd -- apt remove --purge lxd lxd-client -y

Because we already have a stable LXD on the host, we’ll make things a bit more interesting by installing the latest build from git master rather than the latest stable release:

lxc exec lxd -- snap install lxd --edge

The rest is business as usual for a LXD user:

stgraber@castiana:~$ lxc exec lxd bash
root@lxd:~# lxd init
Name of the storage backend to use (dir or zfs) [default=dir]:

We detected that you are running inside an unprivileged container.
This means that unless you manually configured your host otherwise,
you will not have enough uid and gid to allocate to your containers.

LXD can re-use your container's own allocation to avoid the problem.
Doing so makes your nested containers slightly less safe as they could
in theory attack their parent container and gain more privileges than
they otherwise would.

Would you like to have your containers share their parent's allocation (yes/no) [default=yes]?
Would you like LXD to be available over the network (yes/no) [default=no]?
Would you like stale cached images to be updated automatically (yes/no) [default=yes]?
Would you like to create a new network bridge (yes/no) [default=yes]?
What should the new bridge be called [default=lxdbr0]?
What IPv4 subnet should be used (CIDR notation, “auto” or “none”) [default=auto]?
What IPv6 subnet should be used (CIDR notation, “auto” or “none”) [default=auto]?
LXD has been successfully configured.

root@lxd:~# lxd.lxc launch images:archlinux arch
If this is your first time using LXD, you should also run: sudo lxd init
To start your first container, try: lxc launch ubuntu:16.04

Creating arch
Starting arch

root@lxd:~# lxd.lxc list
+------+---------+----------------------+-----------------------------------------------+------------+-----------+
| NAME |  STATE  |         IPV4         |                      IPV6                     |    TYPE    | SNAPSHOTS |
+------+---------+----------------------+-----------------------------------------------+------------+-----------+
| arch | RUNNING | 10.106.137.64 (eth0) | fd42:2fcd:964b:eba8:216:3eff:fe8f:49ab (eth0) | PERSISTENT | 0         |
+------+---------+----------------------+-----------------------------------------------+------------+-----------+

And that’s it, you now have the latest LXD build installed inside a LXD container and running an archlinux container for you. That LXD build will update very frequently as we publish new builds to the edge channel several times a day.

Conclusion

It’s great to have snaps now install properly inside LXD containers. Production users can now set up hundreds of different containers, network them the way they want, set up their storage and resource limits through LXD and then install snap packages inside them to get the latest upstream releases of the software they want to run.

That’s not to say that everything is perfect yet. This is all built on some really recent kernel work, using unprivileged FUSE filesystem mounts and unprivileged AppArmor profile stacking and namespacing. There very likely still are some issues that need to get resolved in order to get most snaps to work identically to when they’re installed directly on the host.

If you notice discrepancies between a snap running directly on the host and a snap running inside a LXD container, you’ll want to look at the “dmesg” output, looking for any DENIED entry in there which would indicate AppArmor rejecting some request from the snap.
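
For example, a quick way to filter for those entries is:

dmesg | grep DENIED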

This typically indicates either a bug in AppArmor itself or in the way the AppArmor profiles are generated by snapd. If you find one of those issues, you can report it in #snappy on irc.freenode.net or file a bug at https://launchpad.net/snappy/+filebug so it can be investigated.

Extra information

More information on snap packages can be found at: http://snapcraft.io

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

07 December, 2016 02:37PM

hackergotchi for SparkyLinux

SparkyLinux

SparkyLinux 4.5.1 MinimalGUI

There is an update of Sparky 4.5.1 MinimalGUI available to download.

The Sparky Advanced Installer doesn't work as it should in the MinimalGUI edition if you are trying to install an additional desktop. The installer calls 'desktop-installer', but it does not come back to the main installer with the right privileges afterwards. It used to before, but not any more.

You do not need to re-install an already installed Sparky.
You can also use existing Sparky 4.5 MinimalGUI iso images, but if you’d like to install a different desktop, run the Advanced Installer from the command line:
sudo sparkylinux-installer gui
or
sudo sparkylinux-installer

The updated MinimalGUI 4.5.1 iso images provide a fix for this small issue.

Life is crazy, anyway…

07 December, 2016 02:34PM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: LXD 2.0: LXD and OpenStack [11/12]

This is the eleventh blog post in this series about LXD 2.0.

LXD logo

Introduction

First of all, sorry for the delay. It took quite a long time before I finally managed to get all of this going. My first attempts were using devstack, which ran into a number of issues that had to be resolved. Yet even after all that, I still wasn't able to get networking going properly.

I finally gave up on devstack and tried “conjure-up” to deploy a full Ubuntu OpenStack using Juju in a pretty user friendly way. And it finally worked!

So below is how to run a full OpenStack, using LXD containers instead of VMs and running all of this inside a LXD container (nesting!).

Requirements

This post assumes you’ve got a working LXD setup, providing containers with network access and that you have a pretty beefy CPU, around 50GB of space for the container to use and at least 16GB of RAM.

Remember, we’re running a full OpenStack here, this thing isn’t exactly light!

Setting up the container

OpenStack is made of a lot of different components, doing a lot of different things. Some require additional privileges, so to make our lives easier, we'll use a privileged container.

We’ll configure that container to support nesting, pre-load all the required kernel modules and allow it access to /dev/mem (as is apparently needed).

Please note that this means that most of the security benefits of LXD containers are effectively disabled for that container. However, the containers that will be spawned by OpenStack itself will be unprivileged and use all the normal LXD security features.

lxc launch ubuntu:16.04 openstack -c security.privileged=true -c security.nesting=true -c "linux.kernel_modules=iptable_nat, ip6table_nat, ebtables, openvswitch"
lxc config device add openstack mem unix-char path=/dev/mem

There is a small bug in LXD where it would attempt to load kernel modules that have already been loaded on the host. This has been fixed in LXD 2.5 and will be fixed in LXD 2.0.6 but until then, this can be worked around with:

lxc exec openstack -- ln -s /bin/true /usr/local/bin/modprobe
Then we need to add a couple of PPAs and install conjure-up, the deployment tool we’ll use to get OpenStack going.

lxc exec openstack -- apt-add-repository ppa:conjure-up/next -y
lxc exec openstack -- apt-add-repository ppa:juju/stable -y
lxc exec openstack -- apt update
lxc exec openstack -- apt dist-upgrade -y
lxc exec openstack -- apt install conjure-up -y

And the last setup step is to configure LXD networking inside the container.
Answer with the default for all questions, except for:

  • Use the “dir” storage backend (“zfs” doesn’t work in a nested container)
  • Do NOT configure IPv6 networking (conjure-up/juju don’t play well with it)
lxc exec openstack -- lxd init

And that’s it for the container configuration itself, now we can deploy OpenStack!

Deploying OpenStack with conjure-up

As mentioned earlier, we’ll be using conjure-up to deploy OpenStack.
This is a nice, user friendly, tool that interfaces with Juju to deploy complex services.

Start it with:

lxc exec openstack -- sudo -u ubuntu -i conjure-up
  • Select “OpenStack with NovaLXD”
  • Then select “localhost” as the deployment target (uses LXD)
  • And hit “Deploy all remaining applications”

This will now deploy OpenStack. The whole process can take well over an hour depending on what kind of machine you’re running this on. You’ll see all services getting a container allocated, then getting deployed and finally interconnected.

conjure-up

Conjure-Up deploying OpenStack

Once the deployment is done, a few post-install steps will appear. These will import some initial images, set up SSH authentication, configure networking and finally give you the IP address of the dashboard.

Access the dashboard and spawn a container

The dashboard runs inside a container, so you can’t just hit it from your web browser.

The easiest way around this is to setup a NAT rule with:

lxc exec openstack -- iptables -t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to <IP>

Where "<IP>" is the dashboard IP address conjure-up gave you at the end of the installation.

You can now grab the IP address of the “openstack” container (from “lxc info openstack”) and point your web browser to: http://<container ip>/horizon

This can take a few minutes to load the first time around. Once the login screen is loaded, enter the default login and password (admin/openstack) and you’ll be greeted by the OpenStack dashboard!

oslxd-dashboard

You can now head to the “Project” tab on the left and the “Instances” page. To start a new instance using nova-lxd, click on “Launch instance”, select what image you want, network, … and your instance will get spawned.

Once it’s running, you can assign it a floating IP which will let you reach your instance from within your “openstack” container.

Conclusion

OpenStack is a pretty complex piece of software, it’s also not something you really want to run at home or on a single server. But it’s certainly interesting to be able to do it anyway, keeping everything contained to a single container on your machine.

Conjure-Up is a great tool to deploy such complex software, using Juju behind the scenes to drive the deployment, using LXD containers for every individual service and finally for the instances themselves.

It’s also one of the very few cases where multiple level of container nesting actually makes sense!

Extra information

The conjure-up website can be found at: http://conjure-up.io
The Juju website can be found at: http://www.ubuntu.com/cloud/juju

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

Original article


07 December, 2016 10:00AM

December 06, 2016

hackergotchi for Tails

Tails

Call for testing: redesigned Tails Greeter

You can help Tails! The first alpha for the redesigned Tails Greeter is out. We are very excited and cannot wait to hear what you think about it :)

What is Tails Greeter?

Tails Greeter is the set of dialogs that appear after the boot menu, but before the GNOME Desktop appears.

It lets you choose your language, enable your persistent volume, and set a number of other options.

Why a new Tails Greeter?

We had two main reasons to redesign Tails Greeter:

  • Usability testing has demonstrated that it is not as easy to use as we would like, especially for people trying Tails for the first time.
  • We have pushed the old interface to its limits; it cannot accommodate the options we would like to add to it.

What is new in the redesigned Tails Greeter?

Nearly everything you can see has changed! We have been working for more than two years with designers to make Tails Greeter easier to use:

Redesigned Tails Greeter alpha screenshot

How to test the redesigned Tails Greeter?

Keep in mind that this is a test image. We did not carefully test it so it is not guaranteed to provide any security or anonymity.

But test wildly!

Download and install

experimental Tails ISO image including the redesigned Tails Greeter

The line corresponding to the ISO image is the one whose size is 1G.

You cannot install this ISO image from Tails 2.x, and it is also impossible to upgrade to this ISO image from Tails 2.x. So, either install or upgrade from a non-Tails system, or start this ISO image from DVD and then clone it to a USB stick.

To install this ISO image, follow our usual installation instructions, skipping the Download and verify step.

What to test

Don't hesitate to test all kinds of options, and ensure they are taken into account in the Tails session.

If you find anything that is not working as it should, please report to us on tails-testers@boum.org, including the exact filename of the ISO image you have tested.

Known issues in the redesigned Tails Greeter

Like it?

We have a donation campaign going on: we explained why we need donations, how we use these donations, and we shared with you our plans for the coming years.

So if you want Tails to remain independent, if you want to enable the Tails team to work on projects we think are important, such as redesigning Tails Greeter, please take one minute to make a donation.

06 December, 2016 06:00PM

hackergotchi for Ubuntu developers

Ubuntu developers

The Fridge: Ubuntu Weekly Newsletter Issue 490

Welcome to the Ubuntu Weekly Newsletter. This is issue #490 for the week November 28 – December 4, 2016, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Paul White
  • Chris Guiver
  • Elizabeth K. Joseph
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

06 December, 2016 04:17AM

December 05, 2016

Seif Lotfy: Playing with .NET (dotnet) and IronFunctions

Again, if you missed it, IronFunctions is an open-source, lambda-compatible, on-premise, language-agnostic, server-less compute service.

While AWS Lambda only supports Java, Python and Node.js, Iron Functions allows you to use any language you desire by running your code in containers.

With Microsoft being one of the biggest players in open source and .NET going cross-platform, it was only right to add support for it in IronFunctions' fn tool.

TL;DR:

The following demos a .NET function that takes in a URL for an image and generates an MD5 checksum hash for it:

Using dotnet with functions

Make sure you downloaded and installed dotnet. Now create an empty dotnet project in the directory of your function:

dotnet new  

By default dotnet creates a Program.cs file with a main method. To make it work with IronFunctions' fn tool, please rename it to func.cs.

mv Program.cs func.cs  

Now change the code as you desire to do whatever magic you need it to do. In our case the code takes in a URL for an image and generates an MD5 checksum hash for it. The code is the following:

using System;  
using System.Text;  
using System.Security.Cryptography;  
using System.IO;

namespace ConsoleApplication  
{
    public class Program
    {
        public static void Main(string[] args)
        {
            // if nothing is being piped in, then exit
            if (!IsPipedInput())
                return;

            var input = Console.In.ReadToEnd();
            var stream = DownloadRemoteImageFile(input);
            var hash = CreateChecksum(stream);
            Console.WriteLine(hash);
        }

        private static bool IsPipedInput()
        {
            try
            {
                bool isKey = Console.KeyAvailable;
                return false;
            }
            catch
            {
                return true;
            }
        }
        private static byte[] DownloadRemoteImageFile(string uri)
        {

            var request = System.Net.WebRequest.CreateHttp(uri);
            var response = request.GetResponseAsync().Result;
            var stream = response.GetResponseStream();
            using (MemoryStream ms = new MemoryStream())
            {
                stream.CopyTo(ms);
                return ms.ToArray();
            }
        }
        private static string CreateChecksum(byte[] stream)
        {
            using (var md5 = MD5.Create())
            {
                var hash = md5.ComputeHash(stream);
                var sBuilder = new StringBuilder();

                // Loop through each byte of the hashed data
                // and format each one as a hexadecimal string.
                for (int i = 0; i < hash.Length; i++)
                {
                    sBuilder.Append(hash[i].ToString("x2"));
                }

                // Return the hexadecimal string.
                return sBuilder.ToString();
            }
        }
    }
}

Note: IO with an IronFunctions function is done via stdin and stdout. This code therefore reads the image URL from stdin and writes the resulting hash to stdout.
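
If you want to sanity-check the function locally before packaging it, you can pipe a URL straight into it (assuming you have already run "dotnet restore" in the project directory; the URL is just an example image):

echo "http://lorempixel.com/1920/1920/" | dotnet run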

Using with IronFunctions

Let's first init our code to become IronFunctions deployable:

fn init <username>/<funcname>  

Since IronFunctions relies on Docker to work (we will add rkt support soon) the <username> is required to publish to docker hub. The <funcname> is the identifier of the function.

In our case we will use dotnethash as the <funcname>, so the command will look like:

fn init seiflotfy/dotnethash  

Running the command will create the func.yaml file required by IronFunctions; the function can then be built and pushed by running:

Push to docker

fn push  

This will create a docker image and push the image to docker.

Publishing to IronFunctions

To publish to IronFunctions run ...

fn routes create <app_name>  

where <app_name> is (no surprise here) the name of the app, which can encompass many functions.

This creates a full path in the form of http://<host>:<port>/r/<app_name>/<function>

In my case, I will call the app myapp:

fn routes create myapp  

Calling

Now you can

fn call <app_name> <funcname>  

or

curl http://<host>:<port>/r/<app_name>/<function>  

So in my case

echo http://lorempixel.com/1920/1920/ | fn call myapp /dotnethash  

or

curl -X POST -d 'http://lorempixel.com/1920/1920/'  http://localhost:8080/r/myapp/dotnethash  

What now?

You can find the whole code in the examples on GitHub. Feel free to join the Iron.io Team on Slack.
Feel free to write your own examples in any of your favourite programming languages such as Lua or Elixir and create a PR :)

05 December, 2016 10:15PM

Harald Sitter: KDE Frameworks 5 Content Snap Techno

In the previous post on Snapping KDE Applications we looked at the high-level implication and use of the KDE Frameworks 5 content snap to snapcraft snap bundles for binary distribution. Today I want to get a bit more technical and look at the actual building and inner workings of the content snap itself.

The KDE Frameworks 5 snap is a content snap. Content snaps are really just ordinary snaps that define a content interface. Namely, they expose part or all of their file tree for use by another snap but otherwise can be regular snaps and have their own applications etc.

KDE Frameworks 5's snap is special in terms of size and scope. It bundles the whole set of KDE Frameworks 5, combined with Qt 5 and a large chunk of the graphics stack that is not part of the ubuntu-core snap. All in all, just for the Qt 5 and KF5 parts we are talking about close to 100 distinct source tarballs that need building to compose the full frameworks stack. KDE is in the fortunate position of already having builds of all these available through KDE neon. This allows us to simply repack existing work into the content snap. This is for the most part just as good as doing everything from scratch, but has the advantage of saving both maintenance effort and build resources.

I do love automation, so the content snap is built by some rather stringy proof of concept code that automatically translates the needed sources into a working snapcraft.yaml that repacks the relevant KDE neon debs into the content snap.

Looking at this snapcraft.yaml we’ll find some fancy stuff.

After the regular snap attributes, the actual content interface is defined. It's fairly straightforward and simply exposes the entire snap tree as kde-frameworks-5-all content. This is then used on the application snap side to find a suitable content snap so it can access the exposed content (i.e. in our case the entire file tree).

slots:
    kde-frameworks-5-slot:
        content: kde-frameworks-5-all
        interface: content
        read:
        - "."

The parts of the snap itself are where the most interesting things happen. To make things easier to read and follow I’ll only show the relevant excerpts.

The content snap consists of the following parts: kf5, kf5-dev, breeze, plasma-integration.

The kf5 part is the meat of the snap. It tells snapcraft to stage the binary runtime packages of KDE Frameworks 5 and Qt 5. This effectively makes snapcraft pack the named debs along with necessary dependencies into our snap.

    kf5:
        plugin: nil
        stage-packages:
          - libkf5coreaddons5
        ...

The kf5-dev part looks almost like the kf5 part but has entirely different functionality. Instead of staging the runtime packages it stages the buildtime packages (i.e. the -dev packages). It additionally has a tricky snap rule which excludes everything from actually ending up in the snap. This is a very cool trick: it effectively means that the buildtime packages will be in the stage and we can build other parts against them, but we won't have any of them end up in the final snap. After all, they would be entirely useless there.

    kf5-dev:
        after:
          - kf5
        plugin: nil
        stage-packages:
          - libkf5coreaddons-dev
        ....
        snap:
          - "-*"

Besides those two we also build two runtime integration parts entirely from scratch: breeze and plasma-integration. They aren't actually needed, but ensure sane functionality in terms of icon theme selection etc. These are ordinary build parts that simply rely on the kf5 and kf5-dev parts to provide the necessary dependencies.

An important question to ask here is how one is meant to build against this now. There is this kf5-dev part, but it does not end up in the final snap, where it would be entirely useless anyway as snaps are not used at buildtime. The answer lies in one of the rigging scripts around this. In the snapcraft.yaml we configured the kf5-dev part to stage packages but then excluded everything from being snapped. However, knowing how snapcraft actually goes about its business, we can "abuse" its inner workings to make use of the part after all. Before the actual snap is created, snapcraft "primes" the snap; this effectively means that all installed trees (i.e. the stages) are combined into one tree (i.e. the primed tree), and the exclusion rule of the kf5-dev part is then applied on this tree. Or in other words: the primed tree is the snap before exclusion was applied, meaning the primed tree is everything from all parts, including the development headers and CMake configs. We pack this tree in a development tarball which we then use on the application side to stage a development environment for the KDE Frameworks 5 snap.

Specifically on the application-side we use a boilerplate part that employs the same trick of stage-everything but snap-nothing to provide the build dependencies while not having anything end up in the final snap.

  kde-frameworks-5-dev:
    plugin: dump
    snap: [-*]
    source: http://build.neon.kde.org/job/kde-frameworks-5-release_amd64.snap/lastSuccessfulBuild/artifact/kde-frameworks-5-dev_amd64.tar.xz

Using the KDE Frameworks 5 content snap, KDE can create application snaps that are a fraction of the size they would be if they contained all dependencies themselves. While this does give up some optimization potential by aggregating requirements in a more central fashion, it quickly starts paying off given we are saving upwards of 70 MiB per snap.

Application snaps can of course still add more stuff on top or even override things if needed.

Finally, as we approach the end of the year, we begin the season of giving. What would suit the holidays better than giving to the entire world by supporting KDE with a small donation?
postcard02

05 December, 2016 04:10PM

hackergotchi for Univention Corporate Server

Univention Corporate Server

New in the App Center: UCS print server Quota to control printer cost

Almost every organization, school, and university uses printers. The question that is most often asked is how to keep printing costs under control without simultaneously increasing administrative effort.

An efficient and easy-to-use solution for this issue is provided by our new app “UCS print server Quota”. Via a graphical interface IT administrators can determine and set so-called quotas, i.e. the number of pages each user or group is allowed to use in the organization for a certain time period. This function primarily serves to limit the consumption of paper in larger organizations such as schools and universities. Costs are thus kept as low as possible.

pykota for print server Quota

The new app will install the software pykota on the server, which, in turn, will take care of the print server quota. After the installation the domain administrator can centrally set appropriate quotas for single users or groups. Before the execution of each print job, the system will check the current quota status of that person or group and will either allow the print job or block it.

Further information

Details on the functioning of this app can be found in our Extended Print services documentation.

Find and test the app via the Univention App Center.

The post New in the App Center: UCS print server Quota to control printer cost appeared first on Univention.

05 December, 2016 02:20PM by Maren Abatielos

hackergotchi for Ubuntu developers

Ubuntu developers

Rafael Carreras: Yakkety Yak release parties

On November 5th, the Catalan LoCo Team celebrated a release party for the latest Ubuntu version, in this case 16.10 Yakkety Yak, in Ripoll, quite a historical place. As always, we started by explaining what Ubuntu is and how it adapts to new times and devices.

FreeCad 3D design and Games were both present at the party.

A few weeks later, on December 3rd, we held another release party, this time in Barcelona.

We went to Soko, previously a chocolate factory, which nowadays is a kind of Makers Lab, very excited about free software. First, Josep explained the current developments in Ubuntu and we carried out some installations on laptops.

We ate some pizza and had discussions about free software in public administrations. Apart from the usual users who came to install Ubuntu on their computers, the person in charge of Soko gave us 10 laptops for Ubuntu installation too. We ended the tasks by installing Wine so that some Lego software could run.

That’s some art that is being made at Soko.

I'm publishing this post because we need some documentation on release parties. If you need some advice on how to manage a release party, you can contact me or anyone in the Ubuntu community.

 

05 December, 2016 01:44PM

December 04, 2016

hackergotchi for OSMC

OSMC

OSMC PiDrive: we have a winner

Last month, we announced a competition to give you a chance to win your own OSMC PiDrive.

A PiDrive kit contains:

  • WD PiDrive 314GB
  • SanDisk Class 8GB SD card
  • OSMC case
  • 3A Power supply and cables
  • Screwdriver and screws

We're pleased to announce that we have a winner. Congratulations to Ian F from London. We'll be holding more competitions and giving away more prizes in the near future, so keep your eyes peeled.

If you still want to get your hands on a drive, you can grab one from our Store.

04 December, 2016 11:31PM by Sam Nazarko

hackergotchi for Ubuntu developers

Ubuntu developers

Svetlana Belkin: Community Service Learning Within Open * Communities

As the name implies, “service-learning is an educational approach that combines learning objectives with community service in order to provide a pragmatic, progressive learning experience while meeting societal needs” (Wikipedia).  When you add the “community” part to that definition it changes to, “about leadership development as well as traditional information and skill acquisition” (Janet 1999).

How does this apply to Open * communities?

Simple! Community service learning is an ideal way to get middle school, high school, and college students involved within the various communities, to understand the power of Open *, and to stay active after their term of community service learning ends.

This idea came to me just today (as of writing, Nov. 30th) as a thought on what Open * really is. Not the straightforward definition of it, but the effect Open * creates. As I stated on the home page of my site, Open * creates a sense of empowerment. One way is through the actions that create skills and improvements to those skills. Which skills are those? Mozilla Learning made a map and description of these skills on their Web Literacy pages. They are shown below also:

screenshot-from-2016-11-30-19-07-22

Most of these skills, along with the ways to gain them (read, write, participate), can be used as skills to work on for community service learning.

As stated above, community service learning really focuses on gaining skills, including leadership skills, while (in the Open * sense) contributing to projects that impact the society of the world. This is really needed now, as there are many local and world issues that Open * can provide solutions to.

I see this as an outreach program for schools and the various organizations/groups such as Ubuntu, System76, Mozilla, and even Linux Padawan. Unlike Google Summer of Code (GSoC), no one receives a stipend, but the idea of having a mentor could be taken from GSoC. No, not could but should, because the student needs someone to guide them; hence Linux Padawan could benefit from this idea.

That said, I will try to work out a sample program that could be used and maybe test it with Linux Padawan. Maybe I could have it ready by the spring semester.

Random Fact #1: Simon Quigley, through his middle school, is in a way already doing this type of learning.

Random Fact #2: At one point of time, I wanted to translate that Web Literacy map into one that can be applied to Open *, not just one topic.

04 December, 2016 09:35PM

hackergotchi for SparkyLinux

SparkyLinux

Skype Installer 0.1.10

There is an update of Skype Installer 0.1.10 available in our repos.

Due to problems with installing Skype stable 4.x on Sparky 64 bit, I made some changes inside the installer:
– downloads and installs Skype 4.x on 32 bit systems
– downloads and installs Skype 1.x Alpha on 64 bit systems

There are some dependency problems when trying to install Skype i386 on x86_64 systems now.
This solution should resolve the problem, so upgrade the 'skype-installer' as follows:
sudo apt-get update
sudo apt-get install skype-installer

Then run the Skype Installer from Menu-> Internet.

 

04 December, 2016 02:00PM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

Jo Shields: A quick introduction to Flatpak

Releasing ISV applications on Linux is often hard. The ABI of all the libraries you need changes seemingly weekly. Hence you have the option of bundling the world, or building a thousand releases to cover a thousand distribution versions. As a case in point, when MonoDevelop started bundling a C Git library instead of using a C# git implementation, it gained dependencies on all sorts of fairly weak ABI libraries whose exact ABI mix was not consistent across any given pair of distro releases. This broke our policy of releasing “works on anything” .deb and .rpm packages. As a result, I pretty much gave up on packaging MonoDevelop upstream with version 5.10.

Around the 6.1 release window, I decided to re-evaluate the question. I took a closer look at some of the fancy-pants new distribution methods that get a lot of coverage in the Linux press: Snap, AppImage, and Flatpak.

I started with AppImage. It's very good and appealing for its specialist areas (no external requirements for end users), but it's kinda useless at solving some of our big problem areas (the ABI-vs-bundling problem, updating in general).

Next, I looked at Flatpak (once xdg-app). I liked the concept a whole lot. There’s a simple 3-tier dependency hierarchy: Applications, Runtimes, and Extensions. An application depends on exactly one runtime.  Runtimes are root-level images with no dependencies of their own. Extensions are optional add-ons for applications. Anything not provided in your target runtime, you bundle. And an integrated updates mechanism allows for multiple branches and multiple releases parallel-installed (e.g. alpha & stable, easily switched).

There’s also security-related sandboxing features, but my main concerns on a first examination were with the dependency and distribution questions. That said, some users might be happier running Microsoft software on their Linux desktop if that software is locked up inside a sandbox, so I’ve decided to embrace that functionality rather than seek to avoid it.

I basically stopped looking at this point (sorry Snap!). Flatpak provided me with all the functionality I wanted, with an extremely helpful and responsive upstream. I got to work on trying to package up MonoDevelop.

Flatpak (optionally!) uses a JSON manifest for building stuff. Because Mono is still largely stuck in a Gtk+2 world, I opted for the simplest runtime, org.freedesktop.Runtime, and bundled stuff like Gtk+ into the application itself.
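
To give an idea of the format (the actual MonoDevelop manifest lives in the repository linked in the next paragraph, and every id, version and URL below is purely illustrative), a minimal flatpak-builder manifest looks roughly like this:

{
    "app-id": "com.example.MyApp",
    "runtime": "org.freedesktop.Platform",
    "runtime-version": "1.4",
    "sdk": "org.freedesktop.Sdk",
    "command": "myapp",
    "finish-args": ["--socket=x11", "--share=network"],
    "modules": [
        {
            "name": "myapp",
            "sources": [
                { "type": "git", "url": "https://example.com/myapp.git" }
            ]
        }
    ]
}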

Some gentle patching here & there resulted in this repository. Every time I came up with an exciting new edge case, upstream would suggest a workaround within hours – or failing that, added new features to Flatpak just to support my needs (e.g. allowing /dev/kvm to optionally pass through the sandbox).

The end result is, as of the upcoming 0.8.0 release of Flatpak, from a clean install of the flatpak package to having a working MonoDevelop is a single command: flatpak install --user --from https://download.mono-project.com/repo/monodevelop.flatpakref 

For the current 0.6.x versions of Flatpak, the user also needs to flatpak remote-add --user --from gnome https://sdk.gnome.org/gnome.flatpakrepo first – this step will be automated in 0.8.0. This will download org.freedesktop.Runtime, then com.xamarin.MonoDevelop; export icons ‘n’ stuff into your user environment so you can just click to start.

There are some lingering experience issues due to the sandbox which are on my radar. "Run on external console" doesn't work, for example, or "open containing folder". There are people working on that (a missing DBus# feature to allow breaking out of the sandbox). But overall, I'm pretty happy. I won't be entirely satisfied until I have something approximating feature equivalence to the old .debs. I don't think that will ever quite be there, since there's just no rational way to allow arbitrary /usr stuff into the sandbox, but it should provide a decent basis for a QA-able, supportable Linux MonoDevelop. And we can use this work as a starting point for any further fancy features on Linux.

Gtk# app development in Flatpak MonoDevelop

Editing MonoDevelop in MonoDevelop. *Inception noise*

04 December, 2016 10:44AM

December 03, 2016

hackergotchi for Whonix

Whonix

Testers Wanted! Tor – Stable Upgrades

Tor was updated to 0.2.8.10 in Whonix stable-proposed-updates as well as in testers repository.

Instructions for changing Whonix repository:
https://www.whonix.org/wiki/Whonix-APT-Repository

Then just do an update:
https://www.whonix.org/wiki/Update
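
In most cases that boils down to the standard Debian procedure (the wiki page above is the authoritative reference if your setup differs):

sudo apt-get update
sudo apt-get dist-upgrade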

The post Testers Wanted! Tor – Stable Upgrades appeared first on Whonix.

03 December, 2016 07:27PM by Patrick Schleizer

hackergotchi for Ubuntu developers

Ubuntu developers

Ross Gammon: My Open Source Contributions June – November 2016

So much for my monthly blogging! Here’s what I have been up to in the Open Source world over the last 6 months.

Debian

  • Uploaded a new version of the debian-multimedia blends metapackages
  • Uploaded the latest abcmidi
  • Uploaded the latest node-process-nextick-args
  • Prepared version 1.0.2 of libdrumstick for experimental, as a first step for the transition. It was sponsored by James Cowgill.
  • Prepared a new node-inline-source-map package, which was sponsored by Gianfranco Costamagna.
  • Uploaded kmetronome to experimental as part of the libdrumstick transition.
  • Prepared a new node-js-yaml package, which was sponsored by Gianfranco Costamagna.
  • Uploaded version 4.2.4 of Gramps.
  • Prepared a new version of vmpk which I am going to adopt, as part of the libdrumstick transition. I tried splitting the documentation into a separate package, but this proved difficult, and in the end I missed the transition freeze deadline for Debian Stretch.
  • Prepared a backport of Gramps 4.2.4, which was sponsored by IOhannes m zmölnig as Gramps is new for jessie-backports.
  • Began a final push to get kosmtik packaged and into the NEW queue before the impending Debian freeze for Stretch. Unfortunately, many dependencies need updating, which also depend on packages not yet in Debian. Also pushed to finish all the new packages for node-tape, which someone else has decided to take responsibility for.
  • Uploaded node-cross-spawn-async to fix a Release Critical bug.
  • Prepared  a new node-chroma-js package,  but this is unfortunately blocked by several out of date & missing dependencies.
  • Prepared a new node-husl package, which was sponsored by Gianfranco Costamagna.
  • Prepared a new node-resumer package, which was sponsored by Gianfranco Costamagna.
  • Prepared a new node-object-inspect package, which was sponsored by Gianfranco Costamagna.
  • Removed node-string-decoder from the archive, as it was broken and turned out not to be needed anymore.
  • Uploaded a fix for node-inline-source-map which was failing tests. This turned out to be due to node-tap being upgraded to version 8.0.0. Jérémy Lal very quickly provided a fix in the form of a Pull Request upstream, so I was able to apply the same patch in Debian.

Ubuntu

  • Prepared a merge of the latest blends package from Debian in order to be able to merge the multimedia-blends package later. This was sponsored by Daniel Holbach.
  • Prepared an application to become an Ubuntu Contributing Developer. Unfortunately, this was later declined. I was completely unprepared for the Developer Membership Board meeting on IRC after my holiday. I had had no time to chase for endorsements from previous sponsors, and the application was not really clear about the fact that I was not actually applying for upload permission yet. No matter, I intend to apply again later once I have more evidence & support on my application page.
  • Added my blog to Planet Ubuntu, and this will hopefully be the first post that appears there.
  • Prepared a merge of the latest debian-multimedia blends meta-package package from Debian. In Ubuntu Studio, we have the multimedia-puredata package seeded so that we get all the latest Puredata packages in one go. This was sponsored by Michael Terry.
  • Prepared a backport of Ardour as part of the Ubuntu Studio plan to do regular backports. This is still waiting for sponsorship if there is anyone reading this that can help with that.
  • Did a tweak to the Ubuntu Studio seeds and prepared an update of the Ubuntu Studio meta-packages. However, Adam Conrad did the work anyway as part of his cross-flavour release work without noticing my bug & request for sponsorship. So I closed the bug.
  • Updated the Ubuntu Studio wiki to expand on the process for updating our seeds and meta-packages. Hopefully, this will help new contributors to get involved in this area in the future.
  • Took part in the testing and release of the Ubuntu Studio Trusty 14.04.5 point release.
  • Took part in the testing and release of the Ubuntu Studio Yakkety Beta 1 release.
  • Prepared a backport of Ansible but before I could chase up what to do about the fact that ansible-fireball was no longer part of the Ansible package, some one else did the backport without noticing my bug. So I closed the bug.
  • Prepared an update of the Ubuntu Studio meta-packages. This was sponsored by Jeremy Bicha.
  • Prepared an update to the ubuntustudio-default-settings package. This switched the Ubuntu Studio desktop theme to Numix-Blue, and reverted some commits to drop the ubuntustudio-lightdm-theme package from the archive. This had caused quite a bit of controversy and discussion on IRC due to the transition being a little too close to the release date for Yakkety. This was sponsored by Iain Lane (Laney).
  • Prepared the Numix Blue update for the ubuntustudio-lightdm-theme package. This was also sponsored by Iain Lane (Laney). I should thank Krytarik here for the initial Numix Blue theme work here (on the lightdm theme & default settings packages).
  • Provided a patch for gfxboot-theme-ubuntu which has a bug which is regularly reported during ISO testing, because the “Try Ubuntu Studio without installing” option was not a translatable string and always appeared in English. Colin Watson merged this, so hopefully it will be translated by the time of the next release.
  • Took part in the testing and release of the Ubuntu Studio Yakkety 16.10 release.
  • After a hint from Jeremy Bicha, I prepared a patch that adds a desktop file for Imagemagick to the ubuntustudio-default-settings package. This will give us a working menu item in Ubuntu Studio whilst we wait for the bug to be fixed upstream in Debian. Next month I plan to finish the ubuntustudio-lightdm-theme, ubuntustudio-default-settings transition, including dropping ubuntustudio-lightdm-theme from the Ubuntu Studio seeds. I will include this fix at the same time.

Other

  • At other times when I have had a spare moment, I have been working on resurrecting my old Family History website. It was originally produced in my Windows XP days, and I was no longer able to edit it in Linux. I decided to convert it to Jekyll. First I had to extract the old HTML from where the website is hosted using the HTTrack Website Copier. Now, I am in the process of switching the structure to the standard Jekyll template approach. I will need to switch to a nice Jekyll based theme, as as the old theming was pretty complex. I pushed the code to my Github repository for safe keeping.

Plan for December

Debian

Before the 5th January 2017 Debian Stretch soft freeze I hope to:

Ubuntu

  • Add the Ubuntu Studio Manual Testsuite to the package tracker, and try to encourage some testing of the newest versions of our priority packages.
  • Finish the ubuntustudio-lightdm-theme, ubuntustudio-default-settings transition including an update to the ubuntustudio-meta packages.
  • Reapply to become a Contributing Developer.
  • Start working on an Ubuntu Studio package tracker website so that we can keep an eye on the status of the packages we are interested in.

Other

  • Continue working to convert my Family History website to Jekyll.
  • Try and resurrect my old Gammon one-name study Drupal website from a backup and push it to the new GoONS Website project.

03 December, 2016 11:52AM

hackergotchi for SparkyLinux

SparkyLinux

SparkyLinux 4.5 is out

There is an update of SparkyLinux 4.5 “Tyche” available now.

As before, Sparky "Home" editions provide a fully featured operating system based on Debian 'testing' with a desktop of your choice: LXDE, LXQt, KDE, MATE and Xfce.

Sparky MinimalGUI and MinimalCLI let you install the base system with a minimal set of applications and your favorite desktop, via the Sparky Advanced Installer.

Changes between 4.4 and 4.5:
– full system upgrade as of November 29, 2016
– Linux kernel 4.8.7 as default (4.8.12-sparky available in Sparky ‘unstable’ repo)
– Firefox 45.5.0 ESR (Firefox 50.0.2 available in our repos)
– Icedove 45.4 (Thunderbird 45.5.1 available in our repos)
– LibreOffice 5.2.3-rc1
– libc6 2.24, systemd 232-6, python 2.7.12 + 3.5.2, gcc 5.4.1 + 6.2.0
– added 2 new desktops to be installed via the MinimalGUI/CLI and APTus: CDE (Common Desktop Environment), DDE (Deepin Desktop Environment); Pantheon desktop has been removed from the configuration
– all Sparky pages (blog, forums, wiki) are now encrypted and available via the https protocol; this does not change anything in the Sparky repos
– Netsurf web browser replaced by Midori in MinimalGUI Edition – it does not work well with the https protocol
– added support for ‘exfat’ file system

Big thanks go to our community members, for their intensive help on Sparky forums and community portals.

Known issues:
– the live system does not work well inside VirtualBox (but works fine inside VMware Workstation, in BIOS and EFI mode)

ISO images of SparkyLinux can be downloaded from the download page.

 

03 December, 2016 11:40AM by pavroo

December 02, 2016

hackergotchi for Ubuntu developers

Ubuntu developers

Timo Aaltonen: Mesa 12.0.4 backport available for testing

Hi!

I've uploaded Mesa 12.0.4 for xenial and yakkety to my testing PPA for you to try out. 16.04 shipped with 11.2.0, so it's a slightly bigger update there, while yakkety is already on 12.0.3, but the new version should give radeon users a 15% performance boost in certain games with complex shaders.
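
If you want to give it a go, the usual PPA routine applies (the PPA name below is only a placeholder, use the one linked above); glxinfo from the mesa-utils package is a quick way to confirm which Mesa version ends up in use:

sudo add-apt-repository ppa:<the-testing-ppa>
sudo apt update
sudo apt dist-upgrade
glxinfo | grep "OpenGL version"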

Please give it a spin and report to the (yakkety) SRU bug if it works or not, and mention the GPU you tested with. At least Intel Skylake seems to still work fine here.

 


02 December, 2016 10:28PM

hackergotchi for Tails

Tails

Call for testing: first alpha of the revamped greeter

You can help Tails! The first alpha for the revamped greeter is out. We are very excited and cannot wait to hear what you think about it :)

What is the greeter?

The greeter is the application that lets you choose your language, unlock your persistence or add an additional setting at Tails startup.

What is new in the revamped greeter?

Nearly everything you can see! We worked for more than two years with designers to redesign the greeter interface.

How to test the revamped greeter?

Keep in mind that this is a test image. It didn't go through our QA process and is not guaranteed to provide any security or anonymity.

If you find anything that is not working as it should, please report to us on tails-testers@boum.org.

Download and install

Tails 3.0~alpha1 ISO image including the revamped greeter

You cannot install Tails 3.0~alpha1 from Tails 2.x, and it is also impossible to upgrade to Tails 3.0~alpha1 from Tails 2.x. So, either install or upgrade from a non-Tails system, or start Tails 3.0~alpha1 from DVD and then clone it to a USB stick.

To install the test ISO, follow our usual installation instructions, skipping the Download and verify step.

What to test

The greeter should show up after Tails boot:

Revamped greeter alpha screenshot

Don't hesitate to test weird options, and verify they are taken into account in the Tails session.

If you find anything that is not working as it should, please report to us on tails-testers@boum.org.

Known issues in the revamped greeter

  • The greeter is only available in English or French

02 December, 2016 10:19PM

hackergotchi for Ubuntu developers

Ubuntu developers

Costales: Fairphone 2 & Ubuntu

At Ubucon Europe I was able to learn first-hand about the progress of Ubuntu Touch on the Fairphone 2.

Ubuntu Touch & Fairphone 2
The Fairphone 2 is a unique phone. As its name suggests, it is a phone that is ethical towards the world. It is not made with child labour, it is built with conflict-free minerals, and its makers even care about the waste it generates.

Front and back
On the software side it runs several operating systems, and at last, Ubuntu is one of them.

Your choice
The Ubuntu port is implemented by the UBPorts project, which is advancing by leaps and bounds every week.

When I tried the phone, I was surprised by the speed of Unity, similar to that of my BQ E4.5.
The camera is good enough. And the battery life is acceptable.
I especially loved the quality of the screen; you can tell its sharpness just by looking at it.
As for applications, I tried several from the Store without any problem.

Case
In summary, a great operating system for a great phone :) A win:win

If you are interested in collaborating as a developer on this port, I recommend this Telegram group: https://telegram.me/joinchat/AI_ukwlaB6KCsteHcXD0jw

All images are CC BY-SA 2.0.

02 December, 2016 05:54PM by Marcos Costales (noreply@blogger.com)

Harald Sitter: Snapping KDE Applications

This is largely based on a presentation I gave a couple of weeks ago. If you are too lazy to read, go watch it instead😉

For 20 years KDE has been building free software for the world. As part of this endeavor, we created a collection of libraries to assist in high-quality C++ software development as well as building highly integrated graphic applications on any operating system. We call them the KDE Frameworks.

With the recent advance of software bundling systems such as Snapcraft and Flatpak, KDE software maintainers are however a bit on the spot. As our software builds on such a vast collection of frameworks and supporting technology, the individual size of a distributable application can be quite abysmal.

When we tried to package our calculator KCalc as a snap bundle, we found that even a relatively simple application like this makes for a good 70 MiB snap to be in a working state (most of this is the graphical stack required by our underlying C++ framework, Qt).
Since then a lot of effort was put into devising a system that would allow us to more efficiently deal with this. We now have a reasonably suitable solution on the table.

The KDE Frameworks 5 content snap.

A content snap is a special bundle meant to be mounted into other bundles for the purpose of sharing its content. This allows us to share a common core of libraries and other content across all applications, making the individual applications just as big as they need to be. KCalc is only 312 KiB without translations.

The best thing is that besides some boilerplate definitions, the snapcraft.yaml file defining how to snap the application is like a regular snapcraft file.

Let’s look at how this works by example of KAlgebra, a calculator and mathematical function plotter:

Any snapcraft.yaml has some global attributes we’ll want to set for the snap

name: kalgebra
version: 16.08.2
summary: ((TBD))
description: ((TBD))
confinement: strict
grade: devel

We’ll want to define an application as well. This essentially allows snapd to expose and invoke our application properly. For the purpose of content sharing we will use a special start wrapper called kf5-launch that allows us to use the content shared Qt and KDE Frameworks. Except for the actual application/binary name this is fairly boilerplate stuff you can use for pretty much all KDE applications.

apps:
  kalgebra:
    command: kf5-launch kalgebra
    plugs:
      - kde-frameworks-5-plug # content share itself
      - home # give us a dir in the user home
      - x11 # we run with xcb Qt platform for now
      - opengl # Qt/QML uses opengl
      - network # gethotnewstuff needs network IO
      - network-bind # gethotnewstuff needs network IO
      - unity7 # notifications
      - pulseaudio # sound notifications

To access the KDE Frameworks 5 content share we’ll then want to define a plug our application can use to access the content. This is always the same for all applications.

plugs:
  kde-frameworks-5-plug:
    interface: content
    content: kde-frameworks-5-all
    default-provider: kde-frameworks-5
    target: kf5

Once we got all that out of the way, we can move on to actually defining the parts that make up our snap. For the most part, parts are build instructions for the application and its dependencies. With content shares there are two boilerplate parts you want to define.

The development tarball is essentially a fully built kde frameworks tree including development headers and cmake configs. The tarball is packed by the same tech that builds the actual content share, so this allows you to build against the correct versions of the latest share.

  kde-frameworks-5-dev:
    plugin: dump
    snap: [-*]
    source: http://build.neon.kde.org/job/kde-frameworks-5-release_amd64.snap/lastSuccessfulBuild/artifact/kde-frameworks-5-dev_amd64.tar.xz

The environment rigging provides the kf5-launch script we previously saw in the application's definition; we'll use it to execute the application within a suitable environment. It also gives us the directory for the content share mount point.

  kde-frameworks-5-env:
    plugin: dump
    snap: [kf5-launch, kf5]
    source: http://github.com/apachelogger/kf5-snap-env.git

Lastly, we’ll need the actual application part, which simply instructs that it will need the dev part to be staged first and then builds the tarball with boilerplate cmake config flags.

  kalgebra:
    after: [kde-frameworks-5-dev]
    plugin: cmake
    source: http://download.kde.org/stable/applications/16.08.2/src/kalgebra-16.08.2.tar.xz
    configflags:
      - "-DKDE_INSTALL_USE_QT_SYS_PATHS=ON"
      - "-DCMAKE_INSTALL_PREFIX=/usr"
      - "-DCMAKE_BUILD_TYPE=Release"
      - "-DENABLE_TESTING=OFF"
      - "-DBUILD_TESTING=OFF"
      - "-DKDE_SKIP_TEST_SETTINGS=ON"

Putting it all together we get a fairly standard snapcraft.yaml with some additional boilerplate definitions to wire it up with the content share. Please note that the content share is using KDE neon's Qt and KDE Frameworks builds, so, if you want to try this and need additional build-packages or stage-packages to build a part, you'll want to make sure that KDE neon's User Edition archive is present in the build environment's sources.list: deb http://archive.neon.kde.org/user xenial main. This is going to get a more accessible centralized solution for all of KDE soon™.

name: kalgebra
version: 16.08.2
summary: ((TBD))
description: ((TBD))
confinement: strict
grade: devel

apps:
  kalgebra:
    command: kf5-launch kalgebra
    plugs:
      - kde-frameworks-5-plug # content share itself
      - home # give us a dir in the user home
      - x11 # we run with xcb Qt platform for now
      - opengl # Qt/QML uses opengl
      - network # gethotnewstuff needs network IO
      - network-bind # gethotnewstuff needs network IO
      - unity7 # notifications
      - pulseaudio # sound notifications

plugs:
  kde-frameworks-5-plug:
    interface: content
    content: kde-frameworks-5-all
    default-provider: kde-frameworks-5
    target: kf5

parts:
  kde-frameworks-5-dev:
    plugin: dump
    snap: [-*]
    source: http://build.neon.kde.org/job/kde-frameworks-5-release_amd64.snap/lastSuccessfulBuild/artifact/kde-frameworks-5-dev_amd64.tar.xz
  kde-frameworks-5-env:
    plugin: dump
    snap: [kf5-launch, kf5]
    source: http://github.com/apachelogger/kf5-snap-env.git
  kalgebra:
    after: [kde-frameworks-5-dev]
    plugin: cmake
    source: http://download.kde.org/stable/applications/16.08.2/src/kalgebra-16.08.2.tar.xz
    configflags:
      - "-DKDE_INSTALL_USE_QT_SYS_PATHS=ON"
      - "-DCMAKE_INSTALL_PREFIX=/usr"
      - "-DCMAKE_BUILD_TYPE=Release"
      - "-DENABLE_TESTING=OFF"
      - "-DBUILD_TESTING=OFF"
      - "-DKDE_SKIP_TEST_SETTINGS=ON"

Now to install this we’ll need the content snap itself. Here is the content snap. To install it a command like sudo snap install --force-dangerous kde-frameworks-5_*_amd64.snap should get you going. Once that is done one can install the kalgebra snap. If you are a KDE developer and want to publish your snap on the store get in touch with me so we can get you set up.

The kde-frameworks-5 content snap is also available in the edge channel of the Ubuntu store. You can try the games kblocks and ktuberling like so:

sudo snap install --edge kde-frameworks-5
sudo snap install --edge --devmode kblocks
sudo snap install --edge --devmode ktuberling

If you want to be part of making the world a better place, or would like a KDE-themed postcard, please consider donating a penny or two to KDE

postcard04

02 December, 2016 02:44PM

hackergotchi for Kali Linux

Kali Linux

Kali Linux in the AWS cloud, again

We're happy to announce that we've once again listed our Kali Linux images on the Amazon AWS marketplace. You can now spin up an updated Kali machine easily through your EC2 panel. Our current image is a "full" image, which contains all the standard tools available in a full Kali release.

Once your instance is running, connect to it with your SSH private key using the "ec2-user" account. Don't forget to update your Kali instance to get the latest packages and bug fixes. Type as root (or sudo): apt update && apt dist-upgrade. We are "selling" these images on the marketplace for free, so other than the regular Amazon charges, there are no extras to pay.

The Kali team would like to take this opportunity to thank r0kh for his efforts in getting Kali back on track (no pun intended) and working flawlessly in AWS. If you plan to use these Kali images for penetration testing in an AWS environment, make sure you check out the Amazon penetration testing request form.
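
For illustration, connecting to a freshly launched instance and updating it typically looks something like this (the key file and hostname are placeholders for your own values):

ssh -i ~/.ssh/your-key.pem ec2-user@<instance-public-dns>
sudo apt update && sudo apt dist-upgrade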

02 December, 2016 01:45PM by muts

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: Cloud Chatter: November 2016

Welcome to our November edition. We begin with details on our latest partnership with Docker. Next up, we bring you a co-hosted webinar with PLUMgrid exploring how enterprises can build and manage highly scalable OpenStack clouds. We also have a number of exciting announcements with partners including Microsoft, Cloud Native Computing Forum and Open Telekom Cloud. Take a look at our top blog posts for interesting tutorials and videos. And finally, don’t miss out on our round up of industry news.

docker

A Commercial Partnership With Docker

Docker and Canonical have announced an integrated Commercially Supported (CS) Docker Engine offering on Ubuntu providing Canonical customers with a single path for support of the Ubuntu operating system and CS Docker Engine in enterprise Docker operations.

As part of this agreement, Canonical will provide Level 1 and Level 2 technical support for CS Docker Engine, with Docker, Inc. providing Level 3 support.
Learn more

Webinar: Secure, scale and simplify your OpenStack deployments

In our latest on-demand webinar, we explore how enterprises and telcos can build and manage highly scalable OpenStack clouds with BootStack, Juju and PLUMgrid. Arturo Suarez, Product Manager for BootStack at Canonical, and Justin Moore, Principal Solutions Architect at PLUMgrid, discuss common issues users run into when running OpenStack at scale, and how to circumvent them using solutions such as BootStack, Juju and PLUMgrid ONS.

Watch on-demand

In Other News

Microsoft loves Linux. SQL Server Public Preview available on Ubuntu

Canonical announced that the next public release of Microsoft’s SQL Server is now available for Ubuntu. SQL Server on Ubuntu now provides freedom of choice for developers and organisations alike, whether you deploy on premises or in the cloud. With SQL Server on Ubuntu, there are significant cost savings, performance improvements, and the ability to scale & deploy additional storage and compute resources more easily without adding more hardware. Learn more

Canonical launches fully managed Kubernetes and joins the CNCF

Canonical recently joined The Cloud Native Computing Foundation (CNCF), expanding the Canonical Distribution of Kubernetes to include consulting, integration and fully-managed on-prem and on-cloud Kubernetes services. Ubuntu’s leadership in the adoption of Linux containers, and Canonical’s definition of a new class of application and a new approach to operations, are just some of the key contributions being made. Learn more

Open Telekom Cloud joins Certified Public Cloud

T-Systems, a subsidiary of Deutsche Telekom, recently launched its own Open Telekom Cloud platform, based on Huawei’s OpenStack and hardware platforms. Canonical and T-Systems have announced their partnership to provide certified Ubuntu images on all LTS versions of Ubuntu to users of their cloud services. Learn more

Top blog posts from Insights

Industry News

Ubuntu Cloud in the news

OpenStack & NFV

Containers & Storage

Big data / Machine Learning / Deep Learning

02 December, 2016 12:01PM

Raphaël Hertzog: My Free Software Activities in November 2016

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

In the 11 hours of (paid) work I had to do, I managed to release DLA-716-1 aka tiff 4.0.2-6+deb7u8 fixing CVE-2016-9273, CVE-2016-9297 and CVE-2016-9532. It looks like this package is currently getting new CVEs every month.

Then I spent quite some time to review all the entries in dla-needed.txt. I wanted to get rid of some misleading/no longer applicable comments and at the same time help Olaf who was doing LTS frontdesk work for the first time. I ended up tagging quite a few issues as no-dsa (meaning that we will do nothing for them as they are not serious enough) such as those affecting dwarfutils, dokuwiki, irssi. I dropped libass since the open CVE is disputed and was triaged as unimportant. While doing this, I fixed a bug in the bin/review-update-needed script that we use to identify entries that have not made any progress lately.

Then I claimed libgc and released DLA-721-1 aka libgc 1:7.1-9.1+deb7u1 fixing CVE-2016-9427. The patch was large and had to be manually backported as it was not applying cleanly.

The last thing I did was to test a new imagemagick and review the update prepared by Roberto.

pkg-security work

The pkg-security team is continuing its good work: I sponsored patator to get rid of a useless dependency on pycryptopp which was going to be removed from testing due to #841581. After looking at that bug, it turns out the bug was fixed in libcrypto++ 5.6.4-3 and I thus closed it.

I sponsored many uploads: polenum, acccheck, sucrack (minor updates), bbqsql (new package imported from Kali). A bit later I fixed some issues in the bbqsql package that had been rejected from NEW.

I managed a few RC bugs related to the openssl 1.1 transition: I adopted sslsniff in the team and fixed #828557 by build-depending on libssl1.0-dev after having opened the proper upstream ticket. I did the same for ncrack and #844303 (upstream ticket here). Someone else took care of samdump2 but I still adopted the package in the pkg-security team as it is a security relevant package. I also made an NMU for axel and #829452 (it’s not pkg-security related but we still use it in Kali).

Misc Debian work

Django. I participated in the discussion about a change letting Django count the number of developers that use it. Such a change has privacy implications and the discussion sparked quite some interest both in Debian mailing lists and up to LWN.

On a more technical level, I uploaded version 1.8.16-1~bpo8+1 to jessie-backports (security release) and I fixed RC bug #844139 by backporting two upstream commits. This led to the 1.10.3-2 upload. I ensured that this was fixed in the 1.10.x upstream branch too.

dpkg and merged /usr. While reading debian-devel, I discovered dpkg bug #843073 that was threatening the merged-/usr feature. Since the bug was in code that I wrote a few years ago, and since Guillem was not interested in fixing it, I spent an hour to craft a relatively clean patch that Guillem could apply. Unfortunately, Guillem did not yet manage to pull out a new dpkg release with the patches applied. Hopefully it won’t be too long until this happens.

Debian Live. I closed #844332 which was a request to remove live-build from Debian. While it was marked as orphaned, I was always keeping an eye on it and have been pushing small fixes to git. This time I decided to officially adopt the package within the debian-live team and work a bit more on it. I reviewed all pending patches in the BTS and pushed many changes to git. I still have some pending changes to finish to prettify the Grub menu but I plan to upload a new version really soon now.

Misc bugs filed. I filed two upstream tickets on uwsgi to help fix currently open RC bugs on the package. I filed #844583 on sbuild to support arbitrary version suffix for binary rebuild (binNMU). And I filed #845741 on xserver-xorg-video-qxl to get it fixed for the xorg 1.19 transition.

Zim. While trying to fix #834405 and update the required dependencies, I discovered that I had to update pygtkspellcheck first. Unfortunately, its package maintainer was MIA (missing in action) so I adopted it first as part of the python-modules team.

Distro Tracker. I fixed a small bug that resulted in an ugly traceback when we got queries with a non-ASCII HTTP_REFERER.

Thanks

See you next month for a new summary of my activities.

02 December, 2016 11:45AM

hackergotchi for VyOS

VyOS

VyOS remote management library for Python

Someone on Facebook rightfully noted that lately there's been more work on the infrastructure than development. This is true, but that work on infrastructure was long overdue and we just had to do it some time. There is even more work on the infrastructure waiting to be done, though it's more directly related to development, like restructuring the package repos.

Anyway, it doesn't mean all development has stopped while we've been working on infrastructure. Today we released a Python library for managing VyOS routers remotely.

Before I get to the details, have a quick example of what using it is like:

import vymgmt

vyos = vymgmt.Router('192.0.2.1', 'vyos', password='vyos', port=22)

vyos.login()
vyos.configure()

vyos.set("protocols static route 203.0.113.0/25 next-hop 192.0.2.20")
vyos.delete("system options reboot-on-panic")
vyos.commit()

vyos.save()
vyos.exit()
vyos.logout()

If you want to give it a try, you can install it from PyPI ("pip install vymgmt"), it's compatible with both Python 2.7 and Python 3. You can read the API reference at http://vymgmt.readthedocs.io/en/latest/ or get the source code at https://github.com/vyos/python-vyos-mgmt .

Now to the details. This is not a true remote API: the library connects to VyOS over SSH and sends commands as if it were a user session. Surprisingly, one of the tricky parts was finding an SSH/expect library that copes well with the VyOS shell environment and is compatible with both 2.7 and 3. All credit for this goes to our contributor who goes by Hochikong, who tried a whole bunch of them, settled with pexpect and wrote a prototype.

How is the library better than using pexpect directly, if it’s a rather thin wrapper around it? First, it’s definitely more convenient to just call set() or delete() or commit() than to format command strings yourself and take care of sending and receiving lines.

Second, common error conditions are detected (through simple regex matching) and raise appropriate exceptions such as ConfigError (for set/delete failures) or CommitError for commit errors. There's also a special ConfigLocked exception (a subclass of CommitError) that is raised when commit fails due to another commit in progress, so you can recover from it by sleep() and retry. This may seem uncommon, but people who use VRRP transition scripts and the like on VyOS already reported that they ran into it.

Third, the library is aware of the state machine of VyOS sessions, and will not let you accidentally do wrong things such as trying to enter set/delete commands before entering the conf mode. By default it also doesn’t let you exit configure sessions if there are uncommitted or unsaved changes, though you can override it. If a timeout occurs, an exception will be raised too (while pexpect returns False in this case).

Right now, set, delete, and commit are the only high-level methods it supports. This should be enough for the start, but if you want something else, there are generic methods for running op and conf mode commands (run_op_mode_command() and run_conf_mode_command() respectively). We are not sure what people want most, so what we implement depends on your requests and suggestions (and pull requests of course!). Other things that are planned but aren’t there yet are SSH public key auth and top level words other than set and delete (rename, copy etc.). We are not sure if commit-confirm is really friendly to programmatic access, but if you have any ideas how to handle it, share with us.

On an unrelated note, syncer and his graphics designer friend made a design for VyOS t-shirts. If anyone buys that stuff, the funds will be used for the project’s needs. The base cost is around 20 eur, but you can get them with a 15% discount by using the VYOSMGTLIB promo code: https://teespring.com/stores/vyos?source=blog&pr=VYOSMGTLIB

02 December, 2016 03:36AM

December 01, 2016

hackergotchi for ArcheOS

ArcheOS

QGIS Time Manager, for archaeological drawing on RTI-like raster series

Hi all,
Today I go on writing about the Time Manager plugin of QGIS that we saw in our last post.
This time I will focus on one of the alternative (and unconventional) uses we can make of this tool for archaeological aims: archaeological vector drawing based on RTI-like raster series.
Of course, when I speak about archaeological vector drawing, I mean a GIS based technique (like the one described in this old post). We already developed this methodology a little bit further in order to use it in a semi-automatic way for archaeological finds (related post 1 and 2; bibliography here), so this post can be seen as an integration of that work-flow. For the concept of RTI, I suggest you read +Rupert Gietl's post about a large scale case study for such an application and my post about the open source tool developed by Giampaolo Palma (Visual Computing Lab of the CNR-ISTI).
The concept of RTI-like raster series is pretty simple: if RTI documentation is planned during an ordinary archaeological excavation (e.g. to further analyse particular artefacts such as small pottery fragments, coins, inscriptions in stone, etc.), then it is also possible to use some of the original pictures (taken under different light conditions) to simulate an RTI viewer within any GIS software. Once one of these pictures has been rectified (and georeferenced, when needed), the related worldfile can also be used for all the other images (considering that they all have the same size), so in QGIS it is pretty simple to create a raster series through the Time Manager plugin.
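
As a rough sketch of the worldfile trick described above (file names and the .jpg/.jgw extensions are assumptions for illustration), the worldfile of the rectified reference image can be reused for the whole series with a small shell loop:

# copy the worldfile of the rectified reference picture next to every image of the series
for img in light_position_*.jpg; do
    cp rectified_reference.jgw "${img%.jpg}.jgw"
done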
The video below shows the result of this operation on a pottery fragment from the excavation of Khovle Gora, an archaeological mission in Georgia which we supported for the University of Innsbruck (Institut für Alte Geschichte und Altorientalistik).

I hope this post will be useful, have a nice evening!

01 December, 2016 08:56PM by Luca Bezzi (noreply@blogger.com)

hackergotchi for SparkyLinux

SparkyLinux

Enlightenment 0.21.4

There is an update of Enlightenment 0.21.4 ready in the Sparky repository now.

Upgrade as usual:
sudo apt-get update
sudo apt-get dist-upgrade

If you would like to make a fresh installation, run:
sudo apt-get update
sudo apt-get install enlightenment

A few important 3rd party updates have also landed in the repos:
– EFL 1.18.3
– Firefox 50.0.2
– Palemoon 27.0.1
– Thunderbird 45.5.1
– TOR Browser 6.0.7

01 December, 2016 07:29PM by pavroo

LMDE

Linux Mint 18.1 “Serena” Cinnamon – BETA Release

This is the BETA release for Linux Mint 18.1 “Serena” Cinnamon Edition.

Linux Mint 18.1 Serena Cinnamon Edition

Linux Mint 18.1 is a long term support release which will be supported until 2021. It comes with updated software and brings refinements and many new features to make your desktop even more comfortable to use.

New features:

This new version of Linux Mint contains many improvements.

For an overview of the new features please visit:

“What’s new in Linux Mint 18.1 Cinnamon”.

Important info:

The release notes provide important information about known issues, as well as explanations, workarounds and solutions.

To read the release notes, please visit:

Release Notes for Linux Mint 18.1 Cinnamon

System requirements:

  • 512MB RAM (1GB recommended for a comfortable usage).
  • 9GB of disk space (20GB recommended).
  • Graphics card capable of 800×600 resolution (1024×768 recommended).
  • DVD drive or USB port.

Notes:

  • The 64-bit ISO can boot with BIOS or UEFI.
  • The 32-bit ISO can only boot with BIOS.
  • The 64-bit ISO is recommended for all modern computers (Almost all computers sold in the last 10 years are equipped with 64-bit processors).

Upgrade instructions:

  • This BETA release might contain critical bugs, please only use it for testing purposes and to help the Linux Mint team fix issues prior to the stable release.
  • It will be possible to upgrade from this BETA to the stable release.
  • It will also be possible to upgrade from Linux Mint 18. Upgrade instructions will be published next month after the stable release of Linux Mint 18.1.

Bug reports:

  • Please report bugs below in the comment section of this blog.
  • When reporting bugs, please be as accurate as possible and include any information that might help developers reproduce the issue or understand the cause of the issue:
    • Bugs we can reproduce, or whose cause we understand, are usually fixed very easily.
    • It is important to mention whether a bug happens “always”, or “sometimes”, and what triggers it.
    • If a bug happens but didn’t happen before, or doesn’t happen in another distribution, or doesn’t happen in a different environment, please mention it and try to pinpoint the differences at play.
    • If we can’t reproduce a particular bug and we don’t understand its cause, it’s unlikely we’ll be able to fix it.
  • Please visit https://github.com/linuxmint/Roadmap to follow the progress of the development team between the BETA and the stable release.

Download links:

Here are the download links for the 64-bit ISO:

A 32-bit ISO image is also available at https://www.linuxmint.com/download_all.php.

Integrity and authenticity checks:

Once you have downloaded an image, please verify its integrity and authenticity.

Anyone can produce fake ISO images, it is your responsibility to check you are downloading the official ones.

Enjoy!

We look forward to receiving your feedback. Many thanks in advance for testing the BETA!

01 December, 2016 04:09PM by Clem

Linux Mint 18.1 “Serena” MATE – BETA Release

This is the BETA release for Linux Mint 18.1 “Serena” MATE Edition.

Linux Mint 18.1 Serena MATE Edition

Linux Mint 18.1 is a long term support release which will be supported until 2021. It comes with updated software and brings refinements and many new features to make your desktop even more comfortable to use.

New features:

This new version of Linux Mint contains many improvements.

For an overview of the new features please visit:

“What’s new in Linux Mint 18.1 MATE”.

Important info:

The release notes provide important information about known issues, as well as explanations, workarounds and solutions.

To read the release notes, please visit:

Release Notes for Linux Mint 18.1 MATE

System requirements:

  • 512MB RAM (1GB recommended for a comfortable usage).
  • 9GB of disk space (20GB recommended).
  • Graphics card capable of 800×600 resolution (1024×768 recommended).
  • DVD drive or USB port.

Notes:

  • The 64-bit ISO can boot with BIOS or UEFI.
  • The 32-bit ISO can only boot with BIOS.
  • The 64-bit ISO is recommended for all modern computers (Almost all computers sold in the last 10 years are equipped with 64-bit processors).

Upgrade instructions:

  • This BETA release might contain critical bugs, please only use it for testing purposes and to help the Linux Mint team fix issues prior to the stable release.
  • It will be possible to upgrade from this BETA to the stable release.
  • It will also be possible to upgrade from Linux Mint 18. Upgrade instructions will be published next month after the stable release of Linux Mint 18.1.

Bug reports:

  • Please report bugs below in the comment section of this blog.
  • When reporting bugs, please be as accurate as possible and include any information that might help developers reproduce the issue or understand the cause of the issue:
    • Bugs we can reproduce, or whose cause we understand, are usually fixed very easily.
    • It is important to mention whether a bug happens “always”, or “sometimes”, and what triggers it.
    • If a bug happens but didn’t happen before, or doesn’t happen in another distribution, or doesn’t happen in a different environment, please mention it and try to pinpoint the differences at play.
    • If we can’t reproduce a particular bug and we don’t understand its cause, it’s unlikely we’ll be able to fix it.
  • Please visit https://github.com/linuxmint/Roadmap to follow the progress of the development team between the BETA and the stable release.

Download links:

Here are the download links for the 64-bit ISO:

A 32-bit ISO image is also available at https://www.linuxmint.com/download_all.php.

Integrity and authenticity checks:

Once you have downloaded an image, please verify its integrity and authenticity.

Anyone can produce fake ISO images, it is your responsibility to check you are downloading the official ones.

Enjoy!

We look forward to receiving your feedback. Many thanks in advance for testing the BETA!

01 December, 2016 04:09PM by Clem

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: Canonical’s Distribution of Kubernetes reduces operational friction

Linux containers (LXC) are one of the hottest technologies in the market today. Developers are adopting containers, especially Docker, as a way to speed-up development cycles and deliver code into testing or production environments much faster than traditional methods. With the largest base of LXC, LXD, and Docker users, Ubuntu has long been the platform of choice for developers driving innovation with containers and is widely used to run infrastructure like Kubernetes as a result. Due to customer demand, Canonical recently announced a partnership with Google to deliver the Canonical Distribution of Kubernetes.


Marco Ceppi, Engineering Manager at Canonical, tells our container story at KubeCon 2016

Explaining Containers and Canonical’s Distribution of Kubernetes

First, a bit of background: containers offer an alternative to traditional virtualization. Containers allow organizations to virtually run multiple Linux systems on a single kernel without the need for a hypervisor. One of the most promising features of containers is the ability to put more applications onto a physical server than you could with a virtual machine. There are two types of containers – machine and process.

Machine containers (sometimes called OS containers) allow developers/organizations to configure, install, and run applications, multiple processes, or libraries within a single container. They create an environment where companies can manage distributed software solutions across various environments, operating systems, and configurations. Machine containers are largely used by organizations to “lift-and-shift” legacy applications from on-premise to the cloud. Process containers (sometimes called application containers), by contrast, share the host kernel and run only a single process or command. Process containers are especially valuable for creating microservices or API function calls that are fast, efficient, and optimized. Process containers also allow developers to deploy services and solutions more efficiently and on time without having to deploy virtual machines.

Ubuntu is the container OS (operating system) used by a majority of Docker developers and deployments worldwide, and Kubernetes is the leader in coordinating process containers across a cluster, enabling high-efficiency DevOps, horizontal scaling, and support for 12-factor apps. Our Distribution of Kubernetes allows organizations to manage and monitor their containers across all major public clouds, and within private infrastructures. Kubernetes is effectively the air traffic controller for managing how containers are deployed.

Even as the cost of software has declined, the cost of operating today’s complex and distributed solutions has increased, as many companies have found themselves managing these systems in a vacuum. Even for experts, deploying and operating containers and Kubernetes at scale can be a daunting task. However, deploying Ubuntu, Juju for software modeling, and Canonical’s Kubernetes distribution helps organizations make deployment simple. Further, we have certified our distribution of Kubernetes to work with most major public clouds as well as on-premise infrastructure like VMware or Metal as a Service (MaaS) solutions, thereby eliminating many of the integration and deployment headaches.
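
As a minimal sketch of what that looks like in practice (assuming a Juju 2.x controller is already bootstrapped against your cloud of choice), the whole cluster can be modelled with a couple of commands:

juju deploy canonical-kubernetes   # deploy the Canonical Distribution of Kubernetes bundle
juju status                        # watch the units come up and settle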

A new approach to IT operations

Containers are only part of the major change in the way we think about software. Organisations are facing fundamental limits in their ability to manage escalating complexity, and Canonical’s focus on operations has proven successful in enabling cost-effective scale-out infrastructure. Canonical’s approach dramatically increases the ability of IT operations teams to run ever more complex and large scale systems.

Leading open source projects like MAAS, LXD, and Juju help enterprises to operate in a hybrid cloud world. Kubernetes extends the diversity of applications which can now be operated efficiently on any infrastructure.

Moving to Ubuntu and to containers enables an organization to reduce overhead and improve operational efficiency. Canonical’s mission is to help companies to operate software on their public and private infrastructure, bringing Netflix-style efficiency to the enterprise market.

Canonical views containers as one of the key technologies IT and DevOps organizations will use as they work to become more cost effective and based in the cloud. Forward-looking enterprises are moving from proof of concepts (POCs) to actual production deployments, and the window for competitive advantage is closing.

For more information on how we can help with education, consulting, and our fully-managed or on cloud Kubernetes services, check out the Canonical Distribution of Kubernetes.

01 December, 2016 03:00PM

Ubuntu Podcast from the UK LoCo: S09E40 – Dirty Dan’s Dark Delight - Ubuntu Podcast

It’s Season Nine Episode Forty of the Ubuntu Podcast! Alan Pope, Mark Johnson, Martin Wimpress and Dan Kermac are connected and speaking to your brain.

The same line up as last week are here again for another episode.

In this week’s show:

  • We discuss what we’ve been upto recently:
  • We review the nexdock and how it works with the Raspberry Pi 3, Meizu Pro 5 Ubuntu Phone, bq M10 FHD Ubuntu Tablet, Android, Dragonboard 410c, Roku, Chromecast, Amazon FireTV and laptops from Dell and Entroware.

  • We share a Command Line Lurve:

sudo apt install netdiscover
sudo netdiscover

The output looks something like this:

_____________________________________________________________________________
  IP            At MAC Address     Count     Len  MAC Vendor / Hostname
-----------------------------------------------------------------------------
192.168.2.2     fe:ed:de:ad:be:ef      1      42  Unknown vendor
192.168.2.1     da:d5:ba:be:fe:ed      1      60  TP-LINK TECHNOLOGIES CO.,LTD.
192.168.2.11    ba:da:55:c0:ff:ee      1      60  BROTHER INDUSTRIES, LTD.
192.168.2.30    02:02:de:ad:be:ef      1      60  Elitegroup Computer Systems Co., Ltd.
192.168.2.31    de:fa:ce:dc:af:e5      1      60  GIGA-BYTE TECHNOLOGY CO.,LTD.
192.168.2.107   da:be:ef:15:de:af      1      42  16)
192.168.2.109   b1:gb:00:bd:ba:be      1      60  Denon, Ltd.
192.168.2.127   da:be:ef:15:de:ad      1      60  ASUSTek COMPUTER INC.
192.168.2.128   ba:df:ee:d5:4f:cc      1      60  ASUSTek COMPUTER INC.
192.168.2.101   ba:be:4d:ec:ad:e5      1      42  Roku, Inc
192.168.2.106   ba:da:55:0f:f1:ce      1      42  LG Electronics
192.168.2.247   f3:3d:de:ad:be:ef      1      60  Roku, Inc
192.168.3.2     ba:da:55:c0:ff:33      1      60  Raspberry Pi Foundation
192.168.3.1     da:d5:ba:be:f3:3d      1      60  TP-LINK TECHNOLOGIES CO.,LTD.
192.168.2.103   da:be:ef:15:d3:ad      1      60  Unknown vendor
192.168.2.104   b1:gb:00:bd:ba:b3      1      42  Unknown vendor
  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!

  • This week’s cover image is taken from Flickr.

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

01 December, 2016 03:00PM

Daniel Pocock: Using a fully free OS for devices in the home

There are more and more devices around the home (and in many small offices) running a GNU/Linux-based firmware. Consider routers, entry-level NAS appliances, smart phones and home entertainment boxes.

More and more people are coming to realize that there is a lack of security updates for these devices and a big risk that the proprietary parts of the code are either very badly engineered (if you don't plan to release your code, why code it properly?) or deliberately includes spyware that calls home to the vendor, ISP or other third parties. IoT botnet incidents, which are becoming more widely publicized, emphasize some of these risks.

On top of this is the frustration of trying to become familiar with numerous different web interfaces (for your own devices and those of any friends and family members you give assistance to) and the fact that many of these devices have very limited feature sets.

Many people hail OpenWRT as an example of a free alternative (for routers), but I recently discovered that OpenWRT's web interface won't let me enable both DHCP and DHCPv6 concurrently. The underlying OS and utilities fully support dual stack, but the UI designers haven't encountered that configuration before. Conclusion: move to a device running a full OS, probably Debian-based, but I would consider BSD-based solutions too.

For many people, the benefit of this strategy is simple: use the same skills across all the different devices, at home and in a professional capacity. Get rapid access to security updates. Install extra packages or enable extra features if really necessary. For example, I already use Shorewall and strongSwan on various Debian boxes and I find it more convenient to configure firewall zones using Shorewall syntax rather than OpenWRT's UI.

Which boxes to start with?

There are various considerations when going down this path:

  • Start with existing hardware, or buy new devices that are easier to re-flash? Sometimes there are other reasons to buy new hardware, for example, when upgrading a broadband connection to Gigabit or when an older NAS gets a noisy fan or struggles with SSD performance and in these cases, the decision about what to buy can be limited to those devices that are optimal for replacing the OS.
  • How will the device be supported? Can other non-technical users do troubleshooting? If mixing and matching components, how will faults be identified? If buying a purpose-built NAS box and the CPU board fails, will the vendor provide next day replacement, or could it be gone for a month? Is it better to use generic components that you can replace yourself?
  • Is a completely silent/fanless solution necessary?
  • Is it possible to completely avoid embedded microcode and firmware?
  • How many other free software developers are using the same box, or will you be first?

Discussing these options

I recently started threads on the debian-user mailing list discussing options for routers and home NAS boxes. A range of interesting suggestions have already appeared, it would be great to see any other ideas that people have about these choices.

01 December, 2016 01:11PM

Ubuntu Insights: UbuCon Europe – a sure sign of community strength

UbuCon Europe, 18-20 Nov 2016, Essen, Germany

Recovering from a great UbuCon Europe that took place just a couple of weeks ago! This year’s event was attended by 170 people from 20 countries, about half coming from Germany. Almost everyone was (obviously) running Ubuntu and had been part of the community for years.

The event was organised by the Ubuntu community, led by Sujeevan Vijayakumaran – congrats to him! The venue, the schedule, the social events and sponsoring were all flawlessly executed and it showed the level of commitment the European Ubuntu community has. So much so that the community are already looking forward to the next big UbuCon in Paris!

The venue was the Unperfekthaus, central to Essen. A beautifully weird and inclusive 4-storey coworking / artist’s / maker-space / café / event location.

Selection of talks
We had a good Canonical attendance at the event that included Jane, Thibaut, Thomas, Alan, Christian, Daniel, Martin and Michael! Many long-standing community members such as Laura Fautley (czajkowski), Elizabeth Krumbach Joseph (pleia2), cm-t arudy, José Antonio Rey, Marcos Costales, Marius Gripsgård (mariogrip), Philip Ballew and Nathan Haines also attended and presented too – all worthy of a mention.

Jane gave a keynote at the event to a packed room. The talks and workshop from Alan, Daniel, Michael and Thibaut were focused on snaps and IoT and were well-received. There seemed to be a lot of interest in learning more about the new way of doing things and getting started immediately – what we like to hear.

Malte Lantin from Microsoft gave an overview of Bash on Ubuntu on Windows. The talk started by covering why Microsoft worked on the project and some history of previous attempts at *nix compatibility and POSIX compliance, along with some technical infrastructure details. A few demos were given and technical questions from the audience answered.

Elizabeth K Joseph gave a talk reminding the audience that while the Ubuntu project has been branching out to new technology, the traditional desktop is still there, and still needs volunteers to help. She outlined a great selection of ways in which new contributors can get involved with practical advice and links to resources all the way through.

Alan gave a talk on “zero to store” about snaps and the store. The examples were well picked and lots of fun – the feedback after the session was mostly amazement at how easy it has become to build and publish software. Michael’s session “digging deep into snapcraft”, which was in the following time-slot, was very well-attended. Probably as a result.

At the snap workshop, everyone worked through the codelabs examples at their own pace and we had some upstream participation (Limesurvey) as well.

During the final session, UbuCon Feedback and 2017 planning, some attendees new to the Ubuntu community commented on how they didn’t know anyone coming into the event. They felt welcome and made many new friends. So much so that there is now serious interest in creating a UbuCon in Romania… let’s do it!

More info here with the event page and the schedule.

01 December, 2016 11:42AM

Ubuntu Insights: Competition: Build a seasonal snap on your Raspberry Pi!

And our present for you this Christmas is… a competition to create a seasonal snap on your Raspberry Pi 2 or 3! We’d love to see the entries that are the most festive, original and work across devices – it could be snow falling, Christmas carols, Santa Claus snaps – anything that is festive! The best three entries will receive prizes that include Raspberry Pi accessories, backpacks and videos of your creations across our channels! See below on how to enter.

To enter: Tweet your Seasonal Snap installation line to @Ubuntu using hashtag #SeasonalSnaps – here’s an example: ‘sudo snap install snow-on-me #SeasonalSnaps’

If this is your first snap… follow the instructions below, and we’ve created a couple of inspiration snaps for you to get started with:

  • Download: Download Ubuntu Core onto your Raspberry Pi 2 or 3 device here
  • Create: To learn about snapcraft and create your first snap, head to Codelabs – a step-by-step guide on installing and creating snaps (you will need Ubuntu 16.04.) To find Codelabs, install the snap: snap install snap-codelabs and then head to http://localhost:8123
  • Publish: Once you’ve created your Seasonal snap, publish it to the store by following instructions here and don’t forget to tweet in your installation line with #SeasonalSnaps

For inspiration…

Here are a couple examples of festive snaps we created which will give you a sense of what we’re after!

1. snow-on-me

  • Falling snow that demonstrates the use of the oxide full-screen webview on a Raspberry Pi 2. For this you will need to plug your Raspberry Pi into a screen.
  • Install it with: sudo snap install snow-on-me
  • Install the oxide fullscreen webview: sudo snap install oxide-digitalsignage --devmode --channel=beta
  • Change boot configuration file and give it enough GPU RAM for displaying web pages: sudo mount -o remount,rw /boot/uboot . Edit /boot/uboot/config.txt and add one line: gpu_mem=448
  • Reboot
  • After reboot execute: /snap/bin/oxide-digitalsignage.start-oxide --url="http://localhost"
  • Source code, complete instructions and extra tricks: https://github.com/ubuntu/snow-on-me-snap

2. xmas-hat

  • See yourself with a fancy Christmas hat! Face-detection with OpenCV displaying results through a web server on port 8080
  • Hook your Raspberry Pi 2 to the local network and plug a USB camera in one of the Pi USB ports
  • Install it with: sudo snap install xmas-hat --edge --devmode
  • Get the IP address of your Pi (ifconfig) and point a browser on your local network to IPaddress:8080
  • Source code: https://github.com/caldav/xmas-hat

Entry and Judging

Entry dates: 1st Dec ’16 – 3rd Jan ’17

Three winners will be selected! Winning snaps will be those that are the most creative, bring the widest array of technology from backend to ‘Internet of Things’ and, importantly, bring Christmas joy!

Winners will be announced on 5th January 2017 in the Ubuntu Devices newsletter and 6th January across the Ubuntu Facebook and Twitter channel, detailing how prizes can be claimed.

Prizes
All three winning snaps will be featured in a video on our CelebrateUbuntu YouTube channel!

1st place winner

  • Raspberry Pi 7-Inch touch screen display
  • Ubuntu backpack
  • Raspberry Pi 3 box

2nd place winner

  • Raspberry Pi Sense HAT
  • Ubuntu backpack
  • Raspberry Pi 3 box

3rd place winner

  • Ubuntu swag (backpack)
  • Raspberry Pi 3 box

Happy snapping folks!

Terms & Conditions: Festive Raspberry Pi competition

01 December, 2016 11:42AM

LiMux

Which services citizens want – a survey on e-government

In spring 2016, the City of Munich (Landeshauptstadt München) asked its citizens which administrative processes they would like to handle as electronic services – that is, largely without a trip to the authorities, from their home computer, possibly using the … Read more

The post “Welche Services Bürger wünschen – Umfrage zum E-Government” first appeared on the Münchner IT-Blog.

01 December, 2016 09:07AM by Stefan Döring

hackergotchi for Ubuntu developers

Ubuntu developers

Zygmunt Krynicki: Ubuntu Core Gadget Snaps

Gadget snaps, the somewhat mysterious part of snappy that few people grok. Being a distinct snap type, next to kernel, os and the most common app types, it gets some special roles. If you are on a classic system like Ubuntu, Debian or Fedora you don't really need or have one yet. Looking at all-snap core devices you will always see one. In fact, each snappy reference platform has one. But where are they?

Up until now the gadget snaps were a bit hard to find. They were out there but you had to have a good amount of luck and twist your tongue at the right angle to find them. That's all changed now. If you look at https://github.com/snapcore you will see a nice, familiar pattern of devicename-gadget. Each repository is dedicated to one device so you will see a gadget snap for Raspberry Pi 2 or Pi 3, for example.

But there's more! Each of those github repositories is linked to a launchpad project that automatically mirrors the git repository, builds the snap and uploads it to the store and publishes the snap to the edge channel!
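
If you want to poke around in one of them yourself, a quick sketch along these lines should do (the repository name is only an assumed example following the devicename-gadget pattern):

# clone a reference gadget snap repository to see how it is assembled
git clone https://github.com/snapcore/pi3-gadget.git
cd pi3-gadget
ls    # look for the gadget metadata and boot assets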

The work isn't over: as you will see, the gadget snaps are mostly in binary form, hand-made to work but still a bit too mysterious. The Canonical Foundations team is working on building them in a way that is friendlier to the community and easier to trace back to their source code origins.

If you'd like to learn more about this topic then have a look at the snapd wiki page for gadget snaps.

01 December, 2016 08:29AM by Zygmunt Krynicki (noreply@blogger.com)

November 30, 2016

hackergotchi for Cumulus Linux

Cumulus Linux

Is web-scale networking secure? This infographic breaks it down.

At Cumulus Networks, we take a lot of pride in the fact that web-scale networking using Cumulus Linux can have an immense impact on an organization’s ability to scale, automate and even reduce costs. However, we know that efficiency and growth are not the only things our customers care about.

In fact, many of our customers are interested first and foremost in the security of web-scale networking with Cumulus Linux. Many conclude that a web-scale, open environment can be even more secure than a closed proprietary one. Keep reading to learn more or scroll to the bottom to check out our infographic “The network security debate: Web-scale vs. traditional networking”

Here are some of the ways web-scale networking with Cumulus Linux keeps your data center switches secure:

  • Cumulus Linux uses the same standard secure protocols and procedures as a proprietary vendor: for example, OpenSSH is used by both traditional closed vendors and Cumulus Linux. The standardized MD5 is used for router authentication, and Cumulus supports management VRF.
  • Web-scale networking has more “eyes” on the code with community support: Linux has a large community of developers from different backgrounds and interests supporting the integrity of the code. Since an entire community of developers checks the code, dependency on specific vendors, employees or specific interests is eliminated.
  • Customers are not reliant on a sole vendor to fix a vulnerability: When a vulnerability is found, it is shared with the community and an update with only that fix is provided as quickly as possible, sometimes within hours. Proprietary stacks, which often leverage modified versions of the same software, need to analyze these vulnerabilities and do their own patching and testing. No one is reliant on one sole vendor to fix and supply the update.
  • Cumulus Linux hardens the switch by default: For example, the root account is disabled, insecure protocols—like telnet and ftp—are disabled, and control plane policing is enabled.

In short, we believe web-scale networking with Cumulus Linux is just as secure as traditional methods, if not more so. Many more security features than the ones mentioned here are supported to protect the switch against vulnerabilities.

If you would like to learn more about the technical security features offered with Cumulus Linux, we recommend you check out our security whitepaper, “Securing Cumulus Linux: Security Recommendations and Best Practices”. The paper covers the security aspects of Cumulus Linux, along with industry best practices and recommendations.

For a quick visual guide on how web-scale networking security stacks up against traditional methods, check out this infographic: 

The networking security debate: Web-scale vs Traditional Networking

 

 

If you think this infographic is helpful, please use the embed code below to share it on your site!

The post Is web-scale networking secure? This infographic breaks it down. appeared first on Cumulus Networks Blog.

30 November, 2016 07:04PM by Diane Patton

hackergotchi for Ubuntu developers

Ubuntu developers

Eric Hammond: Amazon Polly Text To Speech With aws-cli and Twilio

Today, Amazon announced a new web service named Amazon Polly, which converts text to speech in a number of languages and voices.

Polly is trivial to use for basic text to speech, even from the command line. Polly also has features that allow for more advanced control of the resulting speech including the use of SSML (Speech Synthesis Markup Language). SSML is familiar to folks already developing Alexa Skills for the Amazon Echo family.
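
As a hedged sketch of the SSML route (the voice and phrasing here are arbitrary choices), you pass --text-type ssml and wrap the input in <speak> tags:

aws polly synthesize-speech \
  --output-format "mp3" \
  --voice-id "Joanna" \
  --text-type ssml \
  --text '<speak>Hello.<break time="500ms"/>This is <emphasis>Amazon Polly</emphasis> speaking.</speak>' \
  ssml-speech.mp3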

This article describes some simple fooling around I did with this new service.

Deliver Amazon Polly Speech By Phone Call With Twilio

I’ve been meaning to develop some voice applications with Twilio, so I took this opportunity to test Twilio phone calls with speech generated by Amazon Polly. The result sounds a lot better than the default Twilio-generated speech.

The basic approach is:

  1. Generate the speech audio using Amazon Polly.

  2. Upload the resulting audio file to S3.

  3. Trigger a phone call with Twilio, pointing it at the audio file to play once the call is connected.

Here are some sample commands to accomplish this:

1- Generate Speech Audio With Amazon Polly

Here’s a simple example of how to turn text into speech, using the latest aws-cli:

text="Hello. This speech is generated using Amazon Polly. Enjoy!"
audio_file=speech.mp3

aws polly synthesize-speech \
  --output-format "mp3" \
  --voice-id "Salli" \
  --text "$text" \
  $audio_file

You can listen to the resulting output file using your favorite audio player:

mpg123 -q $audio_file

2- Upload Audio to S3

Create or re-use an S3 bucket to store the audio files temporarily.

s3bucket=YOURBUCKETNAME
aws s3 mb s3://$s3bucket

Upload the generated speech audio file to the S3 bucket. I use a long, random key for a touch of security:

s3key=audio-for-twilio/$(uuid -v4 -FSIV).mp3
aws s3 cp --acl public-read $audio_file s3://$s3bucket/$s3key

For easy cleanup, you can use a bucket with a lifecycle that automatically deletes objects after a day or thirty. See instructions below for how to set this up.

3- Initiate Call With Twilio

Once you have set up an account with Twilio (see pointers below if you don’t have one yet), here are sample commands to initiate a phone call and play the Amazon Polly speech audio:

from_phone="+1..." # Your Twilio allocated phone number
to_phone="+1..."   # Your phone number to call

TWILIO_ACCOUNT_SID="..." # Your Twilio account SID
TWILIO_AUTH_TOKEN="..."  # Your Twilio auth token

speech_url="http://s3.amazonaws.com/$s3bucket/$s3key"
twimlet_url="http://twimlets.com/message?Message%5B0%5D=$speech_url"

curl -XPOST https://api.twilio.com/2010-04-01/Accounts/$TWILIO_ACCOUNT_SID/Calls.json \
  -u "$TWILIO_ACCOUNT_SID:$TWILIO_AUTH_TOKEN" \
  --data-urlencode "From=$from_phone" \
  --data-urlencode "To=to_phone" \
  --data-urlencode "Url=$twimlet_url"

The Twilio web service will return immediately after queuing the phone call. It could take a few seconds for the call to be initiated.

Make sure you listen to the phone call as soon as you answer, as Twilio starts playing the audio immediately.

The ringspeak Command

For your convenience (actually for mine), I’ve put together a command line program that turns all the above into a single command. For example, I can now type things like:

... || ringspeak --to +1NUMBER "Please review the cron job failure messages"

or:

ringspeak --at 6:30am \
  "Good morning!" \
  "Breakfast is being served now in Venetian Hall G.." \
  "Werners keynote is at 8:30."

Twilio credentials, default phone numbers, S3 bucket configuration, and Amazon Polly voice defaults can be stored in a $HOME/.ringspeak file.

Here is the source for the ringspeak command:

https://github.com/alestic/ringspeak

Tip: S3 Setup

Here is a sample commands to configure an S3 bucket with automatic deletion of all keys after 1 day:

aws s3api put-bucket-lifecycle \
  --bucket "$s3bucket" \
  --lifecycle-configuration '{
    "Rules": [{
        "Status": "Enabled",
        "ID": "Delete all objects after 1 day",
        "Prefix": "",
        "Expiration": {
          "Days": 1
        }
  }]}'

This is convenient because you don’t have to worry about knowing when Twilio completes the phone call to clean up the temporary speech audio files.

Tip: Twilio Setup

This isn’t the place for an entire Twilio howto, but I will say that it is about this simple to set up:

  1. Create a Twilio account

  2. Reserve a phone number through Twilio.

  3. Find the ACCOUNT SID and AUTH TOKEN for use in Twilio API calls.

When you are using the Twilio free trial, it requires you to verify phone numbers before calling them. To call arbitrary numbers, enter your credit card and fund the minimum of $20.

Twilio will only charge you for what you use (about a dollar a month per phone number, about a penny per minute for phone calls, etc.).

Closing

A lot is possible when you start integrating Twilio with AWS. For example, my daughter developed an Alexa skill that lets her speak a message for a family member and have it delivered by phone. Alexa triggers her AWS Lambda function, which invokes the Twilio API to deliver the message by voice call.

With Amazon Polly, these types of voice applications can sound better than ever.

Original article and comments: https://alestic.com/2016/11/amazon-polly-text-to-speech/

30 November, 2016 06:30PM

Elizabeth K. Joseph: Ohio LinuxFest 2016

Last month I had the pleasure of finally attending an Ohio LinuxFest. The conference has been on my radar for years, but every year I seemed to have some kind of conflict. When my Tour of OpenStack Deployment Scenarios was accepted I was thrilled to finally be able to attend. My employer at the time also pitched in, both by sponsoring the conference at the Bronze level and by sending along a banner that showcased my talk, and my OpenStack book!

The event kicked off on Friday and the first talk I attended was by Jeff Gehlbach on What’s Happening with OpenNMS. I’ve been to several OpenNMS talks over the years and played with it some, so I knew the background of the project. This talk covered several of the latest improvements. Of particular note were some of their UI improvements, including both a website refresh and some stunning improvements to the WebUI. It was also interesting to learn about Newts, the time-series data store they’ve been developing to replace RRDtool, which they struggled to scale with their tooling. Newts is decoupled from the visualization tooling so you can hook in your own, like if you wanted to use Grafana instead.

I then went to Rob Kinyon’s Devs are from Mars, Ops are from Venus. He had some great points about communication between ops, dev and QA, starting with being aware and understanding of the fact that you all have different goals, which sometimes conflict. Pausing to make sure you know why different teams behave the way they do and knowing that they aren’t just doing it to make your life difficult, or because they’re incompetent, makes all the difference. He implored the audience to assume that we’re all smart, hard-working people trying to get our jobs done. He also touched upon improvements to communication, making sure you repeat requests in your own words so misunderstandings don’t occur due to differing vocabularies. Finally, he suggested that some cross-training happen between roles. A developer may never be able to take over full time for an operator, or vice versa, but walking a mile in someone else’s shoes helps build the awareness and understanding that he stresses is important.

The afternoon keynote was given by Catherine Devlin on Hacking Bureaucracy with 18F. She works for the government in the 18F digital services agency. Their mandate is to work with other federal agencies to improve their digital content, from websites to data delivery. Modeled after a startup, she explained, they try not to over-plan the way many government organizations do (which can lead to failure); instead they want to fail fast and keep iterating. She also said their team has a focus on hiring good people and understanding the needs of the people they serve, rather than focusing on raw technical talent and the tools. Their practices center around an open by default philosophy (see: 18F: Open source policy), so much of their work is open source and can be adopted by other agencies. They also make sure they understand the culture of organizations they work with so that the tools they develop together will actually be used, as well as respecting the domain knowledge of teams they’re working with. Slides from her talk are here, and include lots of great links to agency tooling they’ve worked on: https://github.com/catherinedevlin/olf-2016-keynote


Catherine Devlin on 18F

That evening folks gathered in the expo hall to meet and eat! That’s where I caught up with my friends from Computer Reach. This is the non-profit I went to Ghana with back in 2012 to deploy Ubuntu-based desktops. I spent a couple weeks there with Dave, Beth Lynn and Nancy (alas, unable to come to OLF) so it was great to see them again. I learned more about the work they’re continuing to do, having switched to using mostly Xubuntu on new installs which was written about here. On a personal level it was a lot of fun connecting with them too, we really bonded during our adventures over there.


Tyler Lamb, Dave Sevick, Elizabeth K. Joseph, Beth Lynn Eicher

Saturday morning began with a keynote from Ethan Galstad on Becoming the Next Tech Entrepreneur. Ethan is the founder of Nagios, and in his talk he traced some of the history of his work on getting Nagios off the ground as a proper project and company and his belief in why technologists make good founders. In his work he drew from his industry and market expertise from being a technologist and was able to play to the niche he was focused on. He also suggested that folks look to what other founders have done that has been successful, and recommended some books (notably Founders at Work and Work the System). Finally, he walked through some of what can be done to get started, including the stages of idea development, a basic business plan (don’t go crazy), a rough 1.0 release that you can have some early customers test and get feedback from, and then into marketing, documenting and focused product development. He concluded by stressing that open source project leaders are already entrepreneurs and the free users of your software are your initial market.

Next up was Robert Foreman’s Mixed Metaphors: Using Hiera with Foreman where he sketched out the work they’ve done that preserves usage of Hiera’s key-value store system but leverages Foreman for the actual orchestration. The mixing of provisioning and orchestration technologies is becoming more common, but I hadn’t seen this particular mashup.

My talk was A Tour of OpenStack Deployment Scenarios. This is the same talk I gave at FOSSCON back in August, walking the audience through a series of ways that OpenStack could be configured to provide compute instances, metering and two types of storage. For each I gave a live demo using DevStack. I also talked about several other popular components that could be added to a deployment. Slides from my talk are here (PDF), which also link to a text document with instructions for how to run the DevStack demos yourself.


Thanks to Vitaliy Matiyash for taking a picture during my talk! (source)

At lunch I met up with my Ubuntu friends to catch up. We later met at the booth where they had a few Ubuntu phones and tablets that gained a bunch of attention throughout the event. This event was also my first opportunity to meet Unit193 and Svetlana Belkin in person, both of whom I’ve worked with on Ubuntu for years.


Unit193, Svetlana Belkin, José Antonio Rey, Elizabeth K. Joseph and Nathan Handler

After lunch I went over to see David Griggs of Dell give us “A Look Under the Hood of Ohio Supercomputer Center’s Newest Linux Cluster.” Supercomputers are cool and it was interesting to learn about the system it was replacing, the planning that went into the replacement and workload cut-over and see in-progress photos of the installation. From there I saw Ryan Saunders speak on Automating Monitoring with Puppet and Shinken. I wasn’t super familiar with the Shinken monitoring framework, so this talk was an interesting and very applicable demonstration of the benefits.

The last talk I went to before the closing keynotes was from my Computer Reach friends Dave Sevick and Tyler Lamb. They presented their “Island Server” imaging server that’s now being used to image all of the machines that they re-purpose and deploy around the world. With this new imaging server they’re able to image both Mac and Linux PCs from one Macbook Pro rather than having a different imaging server for each. They were also able to do a live demo of a Mac and Linux PC being imaged from the same Island Server at once.


Tyler and Dave with the Island Server in action

The event concluded with a closing keynote by a father and daughter duo, Joe and Lily Born, on The Democratization of Invention. Joe Born first found fame in the 90s when he invented the SkipDoctor CD repair device, and is now the CEO of Aiwa which produces highly rated Bluetooth speakers. Lily Born invented the tip-proof Kangaroo Cup. The pair reflected on their work and how the idea to product in the hands of customers has changed in the past twenty years. While the path to selling SkipDoctor had a very high barrier to entry, globalization, crowd-funding, 3D printers and internet-driven word of mouth and greater access to the press all played a part in the success of Lily’s Kangaroo cup and the new Aiwa Bluetooth speakers. While I have no plans to invent anything any time soon (so much else to do!) it was inspiring to hear how the barriers have been lowered and inventors today have a lot more options. Also, I just bought an Aiwa Exos-9 Bluetooth Speaker, it’s pretty sweet.

My conference adventures concluded with a dinner with my friends José, Nathan and David, all three of whom I also spent time with at FOSSCON in Philadelphia the month before. It was fun getting together again, and we wandered around downtown Columbus until we found a nice little pizzeria. Good times.

More photos from the Ohio LinuxFest here: https://www.flickr.com/photos/pleia2/albums/72157674988712556

30 November, 2016 06:29PM

hackergotchi for Whonix

Whonix

accessibility tools could be automatically removed / you probably should remove them

If you do not use any accessibility tools (gnome-orca, espeakup, console-braille, florence, dasher, kdeaccessibility, kvkbd, kmousetool, kmag, kmouth, jovie, xbrlapi, festival, qt-at-spi), you will not miss anything. (You would probably know if you are using them.)

Soon, there will be a Whonix stable upgrade. The package whonix-gateway-shared-packages-shared-meta will no longer depend on anon-shared-kde-accessibility. This means, when you run `sudo apt-get purge kdeaccessibility && sudo apt-get autoremove` after the upgrade, these accessibility packages will be automatically removed.

Non-Qubes-Whonix only: brltty should be removed, since it currently is causing a performance issue.

Otherwise, if you just want to remove brltty, use `sudo apt-get purge brltty`. If you want to keep everything else (apart from packages you uninstalled manually), you can use `sudo aptitude keep-all`.

If you want to keep these tools, you are of course still free to do so; just install them the usual way.

This change is being made because those packages have some issues.

Can these packages also be uninstalled before the Whonix stable upgrade? – Due to technical limitations, this is not that easy. However, it is documented here:
https://www.whonix.org/wiki/Whonix_Debian_Packages

Non-Qubes-Whonix only: If you just want to stop the brltty syslog spam, you could use the following workaround to reliably stop it.

sudo systemctl stop brltty
sudo systemctl mask brltty

The post accessibility tools could be automatically removed / you probably should remove them appeared first on Whonix.

30 November, 2016 04:13PM by Patrick Schleizer

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: Docker and Canonical Partner on CS Docker Engine for Ubuntu users


  • New commercial agreement provides integrated enterprise support and SLAs for CS Docker Engine

SAN FRANCISCO and LONDON, 30th November 2016 – Docker and Canonical today announced an integrated Commercially Supported (CS) Docker Engine offering on Ubuntu, providing Canonical customers with a single path for support of the Ubuntu operating system and CS Docker Engine in enterprise Docker operations.

This commercial agreement provides a streamlined operations and support experience for joint customers. Stable, maintained releases of Docker will be published and updated by Docker, Inc. as snap packages on Ubuntu, enabling direct access to the Docker, Inc. build of Docker for all Ubuntu users. Canonical will provide Level 1 and Level 2 technical support for CS Docker Engine, backed by Docker, Inc. providing Level 3 support. Canonical will ensure global availability of secure Ubuntu images on Docker Hub.
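For illustration only, once the snap packages mentioned above are published, installing Docker this way might look like the following sketch (the snap name is an assumption, since the announcement does not spell it out):

sudo snap install docker    # install the snap-packaged Docker Engine
docker --version            # confirm the Engine is available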

Ubuntu is widely used as a devops platform in container-centric environments. “The combination of Ubuntu and Docker is popular for scale-out container operations, and this agreement ensures that our joint user base has the fastest and easiest path to production for CS Docker Engine devops,” said John Zannos, Vice President of Cloud Alliances and Business Development, Canonical.

CS Docker Engine is a software subscription to Docker’s flagship product backed by business day and business critical support. CS Docker Engine includes orchestration capabilities that enable an operator to define a declarative state for the distributed applications running across a cluster of nodes, based on a decentralized model that allows each Engine to be a uniform building block in a self-organizing, self-healing distributed system.
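As a rough sketch of that declarative, self-organizing model, using the swarm mode built into recent Docker Engine releases (the service name and image below are arbitrary examples, not part of the announcement):

docker swarm init                                       # turn this Engine into a manager node
docker service create --replicas 3 --name web nginx    # declare the desired state: three replicas of nginx
docker service ls                                       # the cluster converges on, and reports, that state

Other nodes running the Engine can then join the swarm and act as the uniform building blocks described above.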

“Through our partnership, we provide users with more choice by bringing the agility, portability, and security benefits of the Docker CS engine to the larger Ubuntu community,” said Nick Stinemates, Vice President, Business Development and Technical Alliances at Docker. “Additionally, Ubuntu customers will be able to take advantage of official Docker support – a service that is not available from most Linux distributions. Together, we have aligned to make it even easier for organizations to create new efficiencies across the entire software supply chain.”

For more information please visit www.docker.com/products/docker-engine and www.ubuntu.com/cloud.

30 November, 2016 02:01PM

hackergotchi for Tails

Tails

Tails 2.7.1 is out

This release fixes many security issues and users should upgrade as soon as possible.

Changes

Upgrades and changes

  • Upgrade Tor Browser to 6.0.7.

For more details, read our changelog.

Known issues

See the list of long-standing issues.

Get Tails 2.7.1

What's coming up?

Tails 2.9 is scheduled for December 13.

Have a look at our roadmap to see where we are heading to.

We need your help and there are many ways to contribute to Tails (donating is only one of them). Come talk to us!

30 November, 2016 12:34PM

November 29, 2016

hackergotchi for HandyLinux

HandyLinux

How to migrate from HandyLinux to DFLinux

Hello everyone,

Let's say it clearly: no g33kery or tinkering is on the agenda. Migrating to DFLinux is elementary, since it really only amounts to adding (or changing) package repositories.

Before migrating... understand the differences

  • The HandyLinux distribution is a Debian derivative, which means it moves away from its parent distribution by adding dedicated tools and managing software differently. It uses a specific repository containing those additions and backported versions of part of the Xfce desktop. It also uses the 'no-recommends' option of the APT package manager, which avoids downloading the full set of recommended dependencies when a package is installed, in order to keep the system lean: this is not the method Debian uses (see the sketch after this list).
  • The DFLinux project, on the other hand, stays close to Debian and is not strictly speaking a "derivative", even though it uses a separate repository (much smaller than HandyLinux's). The DFLinux ISOs follow the Debian package-management model to the letter and install all recommended packages as dependencies.
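For illustration, the 'no-recommends' behaviour mentioned above boils down to a single APT option; a minimal sketch (the file name is only an example, not necessarily the one HandyLinux actually ships):

echo 'APT::Install-Recommends "false";' | sudo tee /etc/apt/apt.conf.d/99no-recommends

With that option set, apt-get only pulls in hard dependencies; DFLinux, following Debian, leaves it unset so recommended packages are installed as well.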

Er... okay... so what?

In practice, this means you will not see much difference, apart from the disappearance of a few themes that are no longer present in the new 'handylinuxlook' and a change to the default configuration of the handymenu...
However, be aware that the DFLinux repositories take priority in package management, so the new handymenu configuration will be the one intended for DFLinux, not HandyLinux...
If you choose to install the update-notification manager from 'dflinux', the dedicated 'handylinux' tool will be removed... and so on.
So, is it worth adding the DFLinux repositories? That's for you to decide.

How to migrate?

Migrating from one project to the other simply means adding or changing the dedicated repositories in use.
The two repositories are compatible, so you can use the method described in the documentation to add the DFLinux repositories, or simply do it graphically, with the mouse, as follows (a command-line sketch is also given after the list)...
  • click here to download the migration tool archive
  • extract the archive, which reveals a "Migration handy2df" folder

  • open the folder, which contains a "migration.sh" script

  • click the script and run it

  • you are asked for the administrator password (that is, the main user's password on HandyLinux)

  • the introduction window is displayed

  • if you confirm, the migration starts: the dflinux repositories are added

  • the repositories are updated

  • the system is updated using the tool built into HandyLinux

  • clean up

And that's it.
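For readers who prefer the terminal, the graphical steps above correspond roughly to the following sketch; the repository line is a placeholder, so copy the exact entry from the DFLinux documentation linked above:

echo "deb http://dflinux.example.org/ jessie main" | sudo tee /etc/apt/sources.list.d/dflinux.list   # placeholder URL
sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get autoremove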

Want to see the whole thing in a video for a preview before migrating?
Music: Sunrise Piano by DDmyzik



And why not a pop-up window warning every HandyLinux user about this migration??

  • HandyLinux, its applications and its repositories are maintained until the end of default support for "Jessie", i.e. in 2018, one year after the release of "Stretch".
  • Keeping HandyLinux does not create any security hole: the specific repositories are still online and maintained.

Migration is therefore not mandatory at all; the only tool missing from the handylinux repositories is "dfl-doc", which contains the beginner's handbooks ("cahiers du débutant") and the dedicated 'dflinux' documentation. These documents are freely accessible to all users: there is no need to install 'dfl-doc' to enjoy the handbooks.
So there is no good reason to clutter the screens of ordinary users who couldn't care less about the little internal adventures of GNU/Linux distributions.

The question that sums it all up...
- So now, when I want to install 'mydflinux', the system tells me to remove 'handylinux-desktop'...?
- Well, yes... the 'dflinux' packages take priority and will therefore supersede the 'handylinux*' packages when necessary.



Have a good week; see you again for the update when Debian 8.7 comes out.
++
arp
HandyLinux - Debian without the headaches...

29 November, 2016 07:35PM by arpinux

hackergotchi for SparkyLinux

SparkyLinux

Sparky news 2016/11/29

 

Some changes have already come to Sparky, so it's time to let you know about them.

1. All Sparky pages (blog, forums, wiki) are now encrypted and available via the HTTPS protocol. This does not change anything in the Sparky repositories.

2. New, updated Sparky 4.5 ISO images should be ready by the end of this week.

3. The latest build of Calamares, which can be tested via the Sparky 4.5-dev ISO, will not be used in the upcoming 4.5 media; they will still use the 'live-installer'.

4. The Pantheon desktop repository is no longer available (for Debian testing), so it will be removed from APTus and the MinimalISO images. An unofficial repository for Debian stable is available somewhere, so if you need it, it is not hard to find.

5. The SlimJet web browser has landed in our repositories, and I have debianized the Palemoon web browser, which is available in our repositories too.

6. The live system of the new ISO images still does not work well inside VirtualBox, but it works fine inside VMware Workstation, in both BIOS and EFI modes.

7. There is a new service, Sparky WebDir, available to all of you if you would like to promote your business or personal web page, blog, etc. and support the Sparky project at the same time. Free entries in the WebDir are welcome, but paid ones can help us keep Sparky alive.

 

29 November, 2016 12:14PM by pavroo

hackergotchi for Ubuntu developers

Ubuntu developers

Valorie Zimmerman: KDE Developer Guide needs a new home and some fresh content

As I just posted in the Mission Forum, our KDE Developer Guide needs a new home. Currently it is "not found" where it is supposed to be.

UPDATE: Nicolas found the PDF on archive.org, which does have the photos too. Not as good as the xml, but better than nothing.

We had great luck using markdown files in git for the chapters of the Frameworks Cookbook, so the Devel Guide should be stored and developed in a like manner. I've been reading about Sphinx lately as a way to write documentation, which is another possibility. Kubuntu uses Sphinx for docs.
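As a minimal sketch of the Sphinx route (assuming a Python environment is available; the directory name is arbitrary):

pip install sphinx
sphinx-quickstart devguide     # answer the prompts; generates conf.py, index.rst and a Makefile
make -C devguide html          # build the HTML version of the guide

Markdown chapters could still be used alongside this, though that would need a Markdown extension for Sphinx rather than plain reStructuredText.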

In any case, I do not have the time or skills to get, restructure and re-place this handy guide for our GSoC students and other new KDE contributors.

This is perhaps suitable for a Google Code-in task, but I would need a mentor who knows markdown or Sphinx to oversee. Contact me if interested! #kde-books or #kde-soc

29 November, 2016 06:31AM by Valorie Zimmerman (noreply@blogger.com)

Jono Bacon: Luma Giveaway Winner – Garrett Nay

A little while back I kicked off a competition to give away a Luma Wifi Set.

The challenge? Share a great community that you feel does wonderful work. The most interesting one, according to yours truly, gets the prize.

Well, I am delighted to share that Garrett Nay bags the prize for sharing the following in his comment:

I don’t know if this counts, since I don’t live in Seattle and can’t be a part of this community, but I’m in a new group in Salt Lake City that’s modeled after it. The group is Story Games Seattle: http://www.meetup.com/Story-Games-Seattle/. They get together on a weekly+ basis to play story games, which are like role-playing games but have a stronger emphasis on giving everyone at the table the power to shape the story (this short video gives a good introduction to what story games are all about, featuring members of the group:

Story Games from Candace Fields on Vimeo.

Story games seem to scratch a creative itch that I have, but it’s usually tough to find friends who are willing to play them, so a group dedicated to them is amazing to me.

Getting started in RPGs and story games is intimidating, but this group is very welcoming to newcomers. The front page says that no experience with role-playing is required, and they insist in their FAQ that you’ll be surprised at what you’ll be able to create with these games even if you’ve never done it before. We’ve tried to take this same approach with our local group.

In addition to playing published games, they also regularly playtest games being developed by members of the group or others. As far as productivity goes, some of the best known story games have come from members of this and sister groups. A few examples I’m aware of are Microscope, Kingdom, Follow, Downfall, and Eden. I’ve personally played Microscope and can say that it is well designed and very polished, definitely a product of years of playtesting.

They’re also productive and engaging in that they keep a record on the forums of all the games they play each week, sometimes including descriptions of the stories they created and how the games went. I find this very useful because I’m always on the lookout for new story games to try out. I kind of wish I lived in Seattle and could join the story games community, but hopefully we can get our fledgling group in Salt Lake up to the standard they have set.

What struck me about this example was that it gets to the heart of what community should be and often is – providing a welcoming, supportive environment for people with like-minded ideas and interests.

While much of my work focuses on the complexities of building collaborative communities with the intricacies of how people work together, we should always remember the huge value of what I refer to as read communities where people simply get together to have fun with each other. Garrett’s example was a perfect summary of a group doing great work here.

Thanks everyone for your suggestions, congratulations to Garrett for winning the prize, and thanks to Luma for providing the prize. Garrett, your Luma will be in the mail soon!

The post Luma Giveaway Winner – Garrett Nay appeared first on Jono Bacon.

29 November, 2016 12:08AM