September 30, 2016

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: Cloud Chatter: October 2016

Welcome to our October edition. This month, we begin with Canonical introducing enterprise support for its own distribution of Kubernetes across public clouds and private infrastructure. Next up, we give you a preview of what you can expect from us in Barcelona at the OpenStack Summit. We have details of our expanding partnership with IBM, announcing that Ubuntu OpenStack is the only commercial solution available across all IBM platforms. We also announced another partnership with a big data provider, Bigstep. If you couldn’t make it to the Juju Charmer Summit, you can catch up on all of the sessions by visiting our Juju YouTube channel. And finally, don’t miss our round-up of industry news.

Canonical expands container portfolio with supported distribution of Kubernetes

 

Canonical this week launched its own distribution of Kubernetes, with enterprise support, across a range of public clouds and private infrastructure. The Canonical Distribution of Kubernetes enables enterprise customers to operate and scale Kubernetes clusters on demand, anywhere. Leveraging Canonical’s existing Juju charm ecosystem, the Canonical Distribution of Kubernetes adds extensive operational and support tooling but is otherwise a perfectly standard Kubernetes experience, tracking upstream releases closely.
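For those who want to try it, deployment is driven by Juju. A minimal sketch, assuming the bundle name used at launch (canonical-kubernetes) and an already bootstrapped Juju controller:

# deploy the whole cluster as a charm bundle (bundle name assumed)
juju deploy canonical-kubernetes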

Visit our new Juju container topic page for more container solutions.

Join us in Barcelona at the OpenStack Summit


We’ll be in Barcelona from 25–28 October for the OpenStack Summit, where we are planning a host of activities: interesting booth demos, our own dedicated sponsor track day, a selection of charm schools and more. Read the blog.

To schedule some time with the Canonical Executive Team to discuss some of the advances in Ubuntu OpenStack and how they could change your business, book a meeting for Barcelona.

To register to attend a Juju charm school (interactive hands-on training), select your preferred date:

Mon, October 24, 2016, 9:00 AM – 12:30 PM
Wed, October 26, 2016, 2:00 PM – 4:00 PM

In other news

Ubuntu OpenStack is available on all IBM Servers

Canonical announced that Ubuntu OpenStack is available for IBM z Systems®, IBM LinuxONE™ and IBM Power Systems™, including IBM’s newly announced OpenPOWER LC servers, as it expands its work with IBM to deliver hybrid cloud capabilities. Learn more.

Big Data Gets Super-Fast with Ubuntu on Bigstep Metal Cloud

Bigstep, the big data cloud provider, and Canonical announced their partnership to provide certified images and support of Ubuntu on Bigstep Metal Cloud. Learn more.

Leostream joins Charm partner programme

Leostream Corporation, a leading developer of hosted desktop connection management software, has joined the Charm partner programme to facilitate the deployment of virtual desktops on Ubuntu OpenStack. Read more.


Canonical’s Third Juju Charmer Summit

From September 12-14, the Juju Ecosystem team held the third (and biggest) Juju Charmer Summit yet in Pasadena, CA. It was three action-packed and exciting days of presentations, demonstrations, breakout sessions, and lightning talks that covered everything from the basics of Juju to containers and big software.

For those who couldn’t attend, we’ve uploaded all the sessions to the Juju YouTube channel.

Top blog posts from Insights

Industry news roundup

Ubuntu cloud in the news

OpenStack & NFV

Containers & Storage

Big data & Machine Learning & Deep Learning

30 September, 2016 03:45PM

Jonathan Riddell: In Defence of Permissive Licences; KDE licence policy update

In free software there’s a disappointing number of licences which are compatible in some cases and not in others. We have a licence policy in KDE which exists to try to keep our licences consistent, to ensure maximum re-usability of our code while still ensuring it remains free software and that companies can’t claim additional restrictions which do not exist on code we have generously licenced to them.

Our hero (and occasional chauvinist god character) Richard Stallman invented copyleft and the GNU GPL to ensure that people receiving Free code could not claim additional restrictions which do not exist; if they do, they lose the right to copy the code under that licence.

An older class of licence is the permissive licences; these include the BSD, MIT and X11 licences, each of which has multiple variants, all of which say essentially “do whatever you like but keep this copyright licence included”. They aren’t maintained, so variants are created and interpretations of how they are applied in practice vary without an authority to create consensus. But they’re short and easy to apply, and many, many projects are happy to use them. However, there are some curious misconceptions around them. One is that they allow you to claim additional restrictions on the code and require anyone you pass it on to to get a different licence from you. This is nonsense, but it’s a myth which is perpetuated by companies who want to abuse other people’s generosity in licensing, and even by groups such as the FSF or SFLC who want to encourage everyone to use the GNU GPL.

Here are the important parts of the MIT licence (modern variant):

Permission is hereby granted...
to deal in the Software without restriction...
subject to the following conditions:
The above copyright notice and this permission notice shall be included...

It’s very clear that this does not give you licence to remove the licence; anyone you pass this software on to, as source or binary or other derived form, still needs to receive the same licence. You don’t need to pass on the source code if it’s a binary, in which case it’s not free software, but you still need to pass on this licence. It’s unclear whether the licence covers patents as well as copyright, but chances are it does. You can add your own works to it and distribute the result under a more restricted licence if you like, but again you still need to pass on this licence for the code as you received it. You can even sublicense it, making an additional licence with more restrictions, but that doesn’t mean you can remove the Free licence; it explicitly says you can not. Unlike the GPL there’s no penalty for breaking the licence: you can still use the code if you want, and in theory the copyright holder could sue you, but in practice it’s just a lie; nobody will call you out, and many people will even believe your lie.

Techy lawyer Kyle E. Mitchell has written an interesting line-by-line examination of the MIT licence which is well worth reading. It’s a shame there’s no authority to stand up for these licences, and most people who use them do so because they don’t much care about people making claims over their code. But it’s important that we realise they don’t allow any such claims, and the code remains Free software no matter whose servers it happens to have touched on its way to you.


I’m currently proposing some updates to the KDE licensing policy. I’d like to drop use of the unmaintained FDL in docs and wikis in favour of Creative Commons Attribution-ShareAlike 4.0, which was created for international use, is well maintained, and would allow sharing text into our code (it’s compatible with GPL 3) and from Wikipedia and other wikis (which are CC 3.0). There are some other changes too, like allowing the AGPL for web services.

Discussion on kde-community mailing list.

Diff to current.

 


30 September, 2016 03:00PM

LMDE

Monthly News – September 2016

Many thanks to you all for your help, support and donations. This month has been very exciting for us: the release cycle is over, the base jump to the new LTS was achieved, we had plenty of ideas to implement, nothing got in our way and we could focus on development. Not only that, but the development budget was high, and that’s thanks to you; it tightens the bonds between us a little more. It makes everybody happy: some developers start looking for a new laptop, others use the money to relax. No matter how it’s used, it always helps, and because it helps them, it helps us.

Another team was set up recently to gather artists and web designers who are interested in improving our websites. It’s a new team, with 9 members who are just starting to get to know each other. It’s hard to predict how the team will evolve, or whether it will be successful. It’s also hard to know who in this team might end up being central to our designs, and maybe not only to our websites but also to our software and our user interfaces.

Within this team, Carlos Fernandez and Eran Gilo started working on the Cinnamon Spices website. Here’s an overview of Eran’s design:

[Image: overview of Eran’s design]

And another page:

[Image: another page of the design]

Cinnamon now supports vertical panels. Many of you asked for this feature, and I know it’s been requested for a very long time. It will be part of Cinnamon 3.2 in Linux Mint 18.1:

[Image: vertical panels in Cinnamon]

If you want more information about vertical panels, please read http://segfault.linuxmint.com/2016/09/vertical-panels/, where Simon Brown explains in more detail how vertical panels work.

Improved support for accelerometers also landed in Cinnamon. These little sensors allow your desktop to rotate automatically based on the orientation of the screen. If you rotate the laptop, or the screen, Cinnamon rotates with it. It’s particularly handy when showing something to a person in front of you, when watching a movie with the lid tilted at 270 degrees, or when using a laptop in tablet mode for hot-seat games with the lid flat on its back at 360 degrees. Many thanks to Bastien Nocera for his amazing work on iio-sensors-proxy and its integration into GNOME, and to Jakub Adam for porting this support into Cinnamon.

I’d also like to thank Peter Hutterer for bringing libinput support to Cinnamon in a way that kept full compatibility with Synaptics.

Many other little features and improvements got into Cinnamon this month: Bumblebee users can now use the menu to launch applications using optirun, the show desktop applet now also lets you peek at the desktop, and so on. I’m not mentioning the most important improvements here (some are quite technical) but rather the ones that might be the most visible to users.

There are also two big improvements to talk about: Joseph Mccullar’s improvements to background handling, and Michael Webster’s amazing new screensaver. I won’t spoil these here though; I’ll let Joseph and Michael talk about them instead.

Moving on to the XApps: it’s always really exciting for me to work on them because each little improvement has such a big impact. Each new feature we develop in an XApp lands not only in Cinnamon, but also in MATE and Xfce. And I know some of you are using some XApps in KDE, and people are also using them in other distributions. So, without further ado, here’s what we’ve improved so far.

For people without accelerometers, or for people like me who always seem to shoot videos with their phone turned the wrong way, the Xplayer rotation plugin is now enabled by default. This functionality has been there for a long time, but many people didn’t know about it. Now that the plugin is enabled by default, you’ll see “View -> Rotate” options in the menubar.

For similar reasons, the subtitles downloader plugin is now enabled by default. If you’re watching a movie, you can now just press “View -> Subtitles -> Download subtitles”.

If you have more than one monitor, Xplayer is now able to blank other monitors when playing videos in full-screen.

[Image: Xplayer blanking a second monitor]

This ability to blank other monitors can be useful in other XApps (Xreader, Xviewer and Pix, for instance) but also in other software applications. With this in mind, it was developed within a new library called libxapp, which will be available to all developers within the Linux community.

For more information on screen blanking and the new libxapp library, please read this article: http://segfault.linuxmint.com/2016/09/libxapp-and-blanking-other-monitors/

In Xed, the search dialog, which obstructed the text editor, was replaced by a brand new search bar inspired by Sublime Text and similar to Firefox’s:

[Image: the new search bar in Xed]

To learn more about Xed and its new search bar, you can read this article: http://segfault.linuxmint.com/2016/09/sublime-like-search-bar-in-xed/

Xed was also given a distinctive red bar when running as root, which looks just the same as the one in Nemo.

We’re now right in the middle of our development cycle, and as you can see we’re having a lot of fun developing very different aspects of the system 🙂

As always we look forward to reading your feedback. Many many thanks for your support and funding, and for those who want to get involved, don’t hesitate to get in touch with us.

Sponsorships:

Linux Mint is proudly sponsored by:

Platinum Sponsors:
Private Internet Access
Gold Sponsors:
Linux VPS Hosting
Silver Sponsors:

Sucuri
Bronze Sponsors:
Vault Networks *
AYKsolutions Server & Cloud Hosting
7L Networks Toronto Colocation *
BGASoft Inc
David Salvo
Milton Security Group
Sysnova Information Systems
Francois Edelin
Community Sponsors:

Donations in August:

A total of $12402 was raised thanks to the generous contributions of 558 donors:

$140, Elmar R.
$108 (3rd donation), Marco L. aka “MAR9000
$108, Udo J.
$108, Stefan S.
$108, Olivier F.
$108, Fabio R.
$108, Christopher H.
$108, Andreas H.
$108, Hendrik S.
$108, Achim K.
$108, Michael J.
$101, Steph B. aka “64bitguy”
$100 (12th donation), Anonymous
$100 (5th donation), Jack W. S. aka “kundalinijack”
$100 (5th donation), Alfred H. aka “Varmint Al
$100 (4th donation), Robert S.
$100 (2nd donation), Colin S.
$100 (2nd donation), Sean O.
$100, Larry P.
$100, Didier C.
$100, Pasi K.
$100, Holten C.
$100, Douglas C. aka “ibDoug”
$100, Simon S.
$100, Ronald B.
$100, Charles B.
$100, J L
$100, Jean-paul G.
$100, Raphael S.
$75 (3rd donation), Danny L.
$75, Balaji A. R.
$65 (2nd donation), Yvonne S. B.
$57, Andre L.
$54 (2nd donation), Soren ONeill
$54 (2nd donation), VerbBusters
$54, Ian S.
$54, Kurt L.
$54, Sergio R.
$54, Wolfgang S.
$54, Manuel F. A. aka “alfema
$54, Peter H.
$54, Roland H.
$54, Alain V. L.
$50 (71st donation), Matthew M.
$50 (15th donation), Philippe W.
$50 (4th donation), Robert B.
$50 (4th donation), Jeffrey M. T. aka “JayBird707 thanks Clem & “roblm””
$50 (4th donation), George H.
$50 (3rd donation), Arnaud L.
$50 (2nd donation), Ellen R.
$50 (2nd donation), Christopher D.
$50 (2nd donation), Harjit T.
$50, Gene B.
$50, Donald B.
$50, John R.
$50, Wade T.
$50, Edward W.
$50, Matt S.
$50, Cody W. H.
$50, Michael D.
$50, Nicholas P. G.
$50, Frederick M.
$50, Craig B.
$50, Kevin O.
$50, Lynn H.
$50, Mohammed A.
$50, Nick J.
$50, Keith M.
$50, Garrett S.
$50, Kenneth B.
$50, Derek L.
$50, Piotr O. aka “p107r0”
$50, Doug Rohm aka “drohm”
$50, Sherwood R.
$50, Paul S.
$50, David M.
$50, Frederic H.
$50, Phillippe M.
$50, Cory T.
$50, Bruce B.
$50, Fernando G.
$48, Doris F.
$45, Luigi D. S.
$43, Tom V. D.
$43, Theodoros H.
$40, Perry M.
$40, Edward H.
$38, Adam S.
$33, The Good Gears
$32 (78th donation), Olli K.
$32 (27th donation), Mark W.
$32 (3rd donation), Iain S.
$32, Andrea D.
$32, Matthias Grune
$32, Nemanja K.
$30 (2nd donation), Fernando G. S.
$30 (2nd donation), Nektarios K.
$30, Claude M.
$30, Chris G.
$30, Michael K.
$30, Felipe Ceccarelli aka “Ck”
$30, Michael M.
$30, Mark S.
$30, James A.
$27 (7th donation), John K. aka “jbrucek”
$27 (3rd donation), Rüdiger K.
$27 (2nd donation), Johan M.
$27 (2nd donation), Frank S.
$27 (2nd donation), Nadim K.
$27 (2nd donation), Helmut S.
$27, Lutz L.
$27, Bob H.
$27, Stephen K.
$27, Malcolm C.
$27, Tony L.
$27, J. B., MES-Alsfeld
$27, Cornelis H.
$27, Harald K.
$27, Klaus N.
$27, Claus O.
$25 (61st donation), Ronald W.
$25 (15th donation), Peter D.
$25 (11th donation), Scott L.
$25 (9th donation), Kwan L.
$25 (7th donation), Ric G. aka “Ric”
$25 (6th donation), Ron D.
$25 (3rd donation), George P.
$25 (3rd donation), Terry Phillips aka “Terryphi”
$25 (2nd donation), Carl B.
$25 (2nd donation), Andrew Gouw
$25 (2nd donation), William N.
$25 (2nd donation), Malcolm P.
$25 (2nd donation), Wyatt B.
$25, Hector A.
$25, Raymond O.
$25, Craig K.
$25, Ian M.
$25, Romeet
$25, Rafael D.
$25, William C.
$25, Joseph S.
$25, William B.
$25, Jordan H.
$25, Bruce E.
$25, anon
$25, James M. J.
$25, Casper M.
$25, Joris D. R.
$25, Gary B. P.
$25, Daniel T.
$25, Computers Reborn of SC
$25, Andrey I.
$25, Stan K.
$25, Christopher D.
$25, Steven B.
$25, Talysman Software LLC
$25, Merle S.
$25, Nathan G.
$25, Norman E.
$25, Arch_Enemy aka “JJ”
$25, Michael R.
$25, Papa’s Voice Audio LLC
$25, John W.
$25, Thomas M.
$25, Carol S.
$25, Markus A.
$22.6, Julio F.
$22 (10th donation), Andreas S.
$22 (6th donation), Anthony M.
$22 (5th donation), Gabriele G.
$22 (3rd donation), Steverj aka “Somerset Scrumpy”
$22 (2nd donation), Tomas S.
$22 (2nd donation), Jacques S.
$22 (2nd donation), Gabriele G.
$22, Valentin K.
$22, Marcin G.
$22, Stefan K.
$22, Reimund M.
$22, Hans-joerg D.
$22, Mathieu S.
$22, Jürgen H.
$22, Ulrich A.
$22, Cedric B.
$22, Risikolebensversicherung-Vergleich
$22, Paul B.
$22, Csaba D. E.
$22, Florian N.
$22, Jürgen H.
$22, Domagoj P.
$22, Eddy B.
$22, Benedikt N.
$22, Jaime M.
$22, Hannes R.
$22, Georg N.
$22, Coman D.
$22, Ewen B.G
$22, Hans J. W.
$22, Derek R.
$21, Serge L.
$20 (10th donation), Julie H. aka “Kjokkenutstyr
$20 (9th donation), Matsufuji H.
$20 (9th donation), Dave I.
$20 (8th donation), Andjelko Stojsin aka “Andjelko S.
$20 (6th donation), Ian B.
$20 (4th donation), Widar H.
$20 (4th donation), James T.
$20 (3rd donation), Greg W.
$20 (3rd donation), Matej V.
$20 (3rd donation), Erich K.
$20 (3rd donation), Stuart H.
$20 (3rd donation), Sheila S.
$20 (2nd donation), Gary P. S.
$20 (2nd donation), Joe H.
$20 (2nd donation), Greg W.
$20 (2nd donation), Sandy B.
$20 (2nd donation), Edward L.
$20 (2nd donation), John C.
$20 (2nd donation), Norman C.
$20 (2nd donation), Marc B. aka “WhiskyManII”
$20, Anton K.
$20, Charles T.
$20, Roy W. W.
$20, Egidio C. G.
$20, Lars M. R.
$20, Jim C.
$20, Adam B.
$20, Personal E.
$20, David T.
$20, Kyle B.
$20, Thomas M.
$20, Alan W.
$20, Jennapher L.
$20, James W.
$20, Robert D.
$20, a donor
$20, bobinkc
$20, Richard G. aka “Rick”
$20, Philip S. aka “Smithereens”
$20, Thomas B.
$20, Terry J.
$20, Dave H.
$20, Matthew W.
$20, Mark E. F.
$20, Kristen W.
$20, Hines Computer Services LLC
$20, Yoan
$20, Jean-hugues D.
$20, George K. aka “geodinok”
$20, Fabrice D.
$20, Stephane T.
$20, Louis S.
$20, Dennis M.
$20, Angus J. S.
$20, Ian S.
$20, Jozsef B.
$20, Владимир Я.
$20, Richard H.
$20, Computer Solutions
$19 (2nd donation), Martin I.
$19 (2nd donation), Thomas K.
$18 (8th donation), Ke C.
$16 (8th donation), Rufus
$16 (5th donation), Stoyan N.
$16 (5th donation), Peter Chivers
$16 (4th donation), Datei
$16 (2nd donation), Jorgen H.
$16 (2nd donation), Thomas N.
$16, Sven B.
$16, David V. B.
$16, Frank G.
$16, Neeraj N.
$16, Jakub K.
$16, Lars L.
$16, Guido G. S.
$16, Daniel M.
$16, Ruslan S.
$16, Pierre T.
$16, Markus H.
$15 (24th donation), Carlos W.
$15 (2nd donation), Richard F.
$15, Francois C.
$15, Khalid A.
$15, Mr S. J. S.
$15, Arumugam R.
$14.21 (2nd donation), Daniel G.
$14 (7th donation), Ib O. J.
$14 (5th donation), Martin C.
$13.13 (3rd donation), Stephen G.
$13 (4th donation), Anonymous
$13 (2nd donation), Geoff M.
$12 (65th donation), Tony C. aka “S. LaRocca”
$12 (2nd donation), Enrico L.
$12, 斎藤 隆信
$12, Laszlo F.
$11 (4th donation), Gerryt M.
$11 (4th donation), Tomas S.
$11 (4th donation), Frederik M.
$11 (3rd donation), Ernst-otto M.
$11 (3rd donation), Soutarson P.
$11 (3rd donation), Rwhl W.
$11 (3rd donation), Bartosz W.
$11 (2nd donation), Thomas Z.
$11 (2nd donation), Sabine L.
$11 (2nd donation), Oprea M.
$11 (2nd donation), Tangi M.
$11 (2nd donation), Lothar G.
$11 (2nd donation), Giovanni M. aka “gmaggior”
$11 (2nd donation), Rade
$11 (2nd donation), Florian R.
$11 (2nd donation), Menno Bakker
$11, Satisfied User
$11, Ralf T.
$11, Marco G.
$11, Daniele B.
$11, Guillaume R.
$11, Arvis S.
$11, Willem H. A. V. D. W.
$11, Alrik S.
$11, Alan R.
$11, Mario A.
$11, Walter A.
$11, Claus-ulrich L.
$11, Michal J.
$11, Paolo P.
$11, Eurl E.
$11, Beat Z.
$11, Thomas N.
$11, Nauzet M.
$11, Steffen S.
$11, Leonard H.
$11, Armin V.
$11, Cornelius B.
$11, Tomasz E.
$11, Paul D.
$11, Claus I.
$11, Philippe T.
$11, Dmitry T.
$11, Julio A. G. C.
$11, Mihkel T.
$11, Felix C.
$11, Patrice M.
$11, Steven S.
$11, Fabian K.
$11, Bruno C.
$11, David J.
$11, Adelmo F. G. D. S.
$10 (55th donation), Tsuguo S.
$10 (11th donation), Jobs Near Me
$10 (11th donation), Christopher R.
$10 (10th donation), Henry W.
$10 (9th donation), Thomas C.
$10 (9th donation), Uncle Geek
$10 (7th donation), Antoine T.
$10 (5th donation), Rolf V.
$10 (5th donation), Paul V.
$10 (4th donation), Tomi K.
$10 (3rd donation), Carl J.
$10 (3rd donation), Edson P.
$10 (2nd donation), Jim C.
$10 (2nd donation), Edsil W.
$10 (2nd donation), Kyle B.
$10 (2nd donation), Egil J.
$10 (2nd donation), Joseph L.
$10 (2nd donation), Michael S.
$10 (2nd donation), Frank K.
$10 (2nd donation), Gary H.
$10 (2nd donation), Larry H.
$10 (2nd donation), Declan T.
$10, Darryl M.
$10, Maurizio A.
$10, Meirion L. J.
$10, Linda K.
$10, Derek P.
$10, Miha G.
$10, Gary G.
$10, Gyorgy C.
$10, Frank K.
$10, Crossword Guru
$10, Pavel V.
$10, CV Smith
$10, Ashutosh L.
$10, Зарембо С.
$10, Odd H.
$10, Peter M.
$10, James O.
$10, Matthew S.
$10, dk
$10, Dean D.
$10, Flemming M.
$10, Dariusz C.
$10, Doyle B.
$10, Robin O.
$10, Earnest M.
$10, Слученко В.
$10, Paul O.
$10, Tom S.
$10, Sreenath M. G.
$10, Caio A.
$10, Terrel A.
$10, Georg G.
$10, Jose C. Z.
$10, James P.
$10, Arjen D.
$10, Neil S.
$10, Chris M.
$10, Mathieu O.
$10, Pieter L.
$10, Faris
$10, Richard L. S.
$10, Eric D. F. D. O.
$10, Angel K.
$10, Hitendra M.
$10, Christopher N.
$10, Benjamin J. B. S.
$10, Fermin C.
$10, Martin F.
$9.99, @ndaidong
$9, Didier S.
$9, Valerio B.
$8, Biao Li
$8, William H.
$8, Raymond H. aka “Rosko”
$8, Josef H. R. H.
$7 (2nd donation), Bradley S.
$7 (2nd donation), CV Smith
$6 (5th donation), David B.
$6, Christian S.
$5.5, Chandrashekhar M.
$5 (21st donation), Kouji K. aka “杉林晃治
$5 (9th donation), Artur T.
$5 (8th donation), Artur T.
$5 (7th donation), Todd A aka “thobin”
$5 (6th donation), Guillaume G. aka “Tidusrose”
$5 (6th donation), Eugene T.
$5 (5th donation), Datei
$5 (5th donation), Christian L.
$5 (4th donation), Alfons B.
$5 (4th donation), Arturo S. G.
$5 (4th donation), Wei-ju Wu
$5 (4th donation), Eugene M.
$5 (4th donation), Felippe H D de Castro
$5 (3rd donation), Arturo S. G.
$5 (3rd donation), Arno S.
$5 (3rd donation), SEO Las Vegas
$5 (2nd donation), Marko J.
$5 (2nd donation), Michal W.
$5 (2nd donation), Yakovlev P.
$5 (2nd donation), Miguel D. R. M.
$5 (2nd donation), Volodymyr D.
$5 (2nd donation), Carl W.
$5 (2nd donation), TrustedSkinSource.com
$5 (2nd donation), Gary V.
$5, Thomas G.
$5, Andris K.
$5, Mark D.
$5, New Computer Systems
$5, Avadhoot B.
$5, Weronika W.
$5, Luxbet
$5, BookOkay.com
$5, Massimo M.
$5, Barry R.
$5, David G.
$5, Mauro B.
$5, Ruy Sabino aka “ruysabino
$5, Wiesław M.
$5, Petri M.
$5, Tjaart D. B.
$5, François-xavier S.
$5, Ion M. aka “nelu.ipx”
$5, GTuxTV
$5, Eriks Ozolins aka “Big-Bro”
$5, Andjelko K.
$5, Ioan M. aka “hirjonica”
$5, Adrian G.
$5, Natural E.
$5, Milla I.
$5, Suleyman K.
$5, Karol K.
$5, Mobile Casino
$5, Filip N.
$5, Chandrashekhar M.
$5, Andrei S.
$5, Thomas L.
$5, James B.
$5, M. R.
$5, Meister-Familienbetrieb Gensmantel KG
$5, Pavol Vesely aka “sandisxxx
$5, Jared SEO Brisbane
$5, Joshua L.
$5, Sripadharaj P.
$5, Philipp B.
$5, Gabriele S.
$5, James P.
$5, Lukáš F.
$5, Silvestar F.
$5, Kenneth L.
$5, Jonathan D.
$5, Garry B.
$5, Blazej P. aka “bleyzer”
$5, Leslie S.
$5, Antonius S. W.
$4 (2nd donation), Carsten K.
$4, AllBloggingTips.com
$4, Martin C.
$4, Ajani
$3.59, Matt M.
$3.5 (2nd donation), rootreport.com
$3.5, Sil. D. aka “Busce
$3.5, Jeremy S.
$3 (2nd donation), Aliaksandr C.
$3, Marco P.
$3, DEL TA
$3, Mark R.
$3, rootreport.com
$3, Anurag P.
$3, Techspectacle.com
$2.6 (2nd donation), Ajani
$2.5, David E.
$2.5, Kevin S.
$50.12 from 38 smaller donations

If you want to help Linux Mint with a donation, please visit http://www.linuxmint.com/donors.php

Rankings:

  • Distrowatch (popularity ranking): 2957 (1st)
  • Alexa (website ranking): 6063

30 September, 2016 11:52AM by Clem

September 29, 2016

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Podcast from the UK LoCo: S09E31 – Bull In A China Shop - Ubuntu Podcast

It’s Episode Thirty-One of Season-Nine of the Ubuntu Podcast! Mark Johnson, Alan Pope and Martin Wimpress are here again.

Three of us are here, but we’re a woman down 🙁

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

29 September, 2016 10:45PM

David Mohammed: budgie-remix 16.10 beta 2

The very latest budgie-remix distro, based on the firm 16.10 Ubuntu foundations, is now available for testers. More details are available on the project page, and download links are available from SourceForge. I have submitted many of the budgie-remix key packages … Continue reading

29 September, 2016 07:52PM

Ubuntu Insights: Meet ORWL. The first open source, physically secure computer

This is a guest post by Daniel Nelson from Design Shift, makers of ORWL. If you would like to contribute a guest post, please contact ubuntu-devices@canonical.com

1-orwl

If someone has physical access to your computer with secure documents present, it’s game over! ORWL is designed to solve this as the first open source, physically secure computer. ORWL (pronounced or-well) combines the physical security of the banking industry (used in ATMs and point-of-sale terminals) with a modern Intel-based personal computer. We’ve designed a stylish glass case which contains the latest processor from Intel – exactly the same processor as you would find in the latest ultrabooks – and we’ve added WiFi and Bluetooth wireless connectivity for your accessories. It also has two USB Type-C connectors for any accessories you prefer to connect via cables. We then use the built-in Intel HD 515 graphics, which can output up to 4K video with audio.

The physical security enhancements we’ve added start with a second authentication factor (a wireless keyfob) which is processed before the main processor is even powered up. This ensures we are able to check the system’s software for authenticity and security before we start to run it. We then monitor how far your keyfob is from your PC: when you leave the room, your PC is locked automatically, requiring the keyfob to unlock it again. We’ve also ensured that all information on the system drive is encrypted in hardware. The encryption key for this information is managed by the secure microcontroller, which also handles the pre-boot authentication and other security features of the system. And finally, we protect everything with a high-security enclosure (inside the glass) that prevents anyone from working around our security by physically accessing hardware components.

Any attempt to get physical access to the internals of your PC will delete the cryptographic key, rendering all your data permanently inaccessible!

2-orwl

We’ve created ORWL for anybody who wants to keep their information private. This obviously includes people who have a formal obligation to protect the data in their care, such as lawyers and people in healthcare fields. It’s also true of people who create valuable data, such as photographers, videographers, musicians, authors, and many others. But it’s also true of everyday PC users: those of us who just have online banking credentials, medical records, or family photos or videos on our computers, and who want the peace of mind that if the PC is stolen we won’t see those files on the Internet next week. It is also the first PC in the world that is truly an appropriate base for storing the private keys of any blockchain-based currency you may own, rather than keeping them with a third party. It perhaps goes without saying, as we have plenty of pictures to communicate the point, that anybody who values the aesthetics of a beautifully designed appliance may well want an ORWL just because it’s vastly nicer to look at than a beige or black box!

3-orwl

ORWL comes with Ubuntu, Windows 10, or Qubes OS pre-installed, but users can install and run any modern 64-bit Intel-compatible operating system. Ubuntu is our preferred choice of system as it provides a very strong balance of features. It is noted for its installation scripting and default system configuration working well with a wide variety of modern hardware, and it is reliable and stable. Ubuntu offers the ease-of-use features that people like in Windows, but with the code auditability that security-conscious users like in Linux-based operating systems.

That auditability makes Linux-based operating systems leaders in cryptography, which is a vital component of our project: the more people are able to fully understand the details of how the product works, the more secure we can make it.

And to see a demo of ORWL, watch this short two-minute video!

Plus to learn more about their Crowd Supply campaign, see here.

Guest Post: Daniel Nelson from Design Shift, makers of ORWL

29 September, 2016 02:01PM

Ubuntu Insights: The Making of the Nextcloud Box

This is a guest post by Jos Poortvliet, Marketing and Communications Manager at Nextcloud. If you would like to contribute a guest post, please contact ubuntu-devices@canonical.com


The story of the Nextcloud box – a feature in our upcoming webinar – learn more here.

In 2010, Frank Karlitschek founded the ownCloud project to provide an alternative to proprietary cloud services from companies like Google, Dropbox and Apple. The goal of bringing private cloud sync and share technology to home users attracted a community of contributors, which in turn enabled a business to develop. The same vision still drives Nextcloud, the community, Frank and the other core contributors who started it some months ago as a reboot of the original project (see here).

But bringing a private cloud to home users is not easy. Nextcloud is great software, something you can easily install on your private server and… that is where things get complicated. Most people don’t run their own server, and even if they were interested, they would not have the skills to do so securely. Since the early days of the project, contributors have been discussing how to deal with this conundrum: what can we do to make running your own private cloud easy enough for non-technical users? It would require a real out-of-the-box solution!

Almost a year ago, an opportunity arose. Frank met some folks from WDLabs, the business growth incubator of storage solutions company Western Digital Corporation, and a conversation kicked off about a collaboration. WDLabs provides innovative storage solutions for DIY devices like the Raspberry Pi series – devices which are very flexible and low-cost. It became clear we could help each other: a device with Nextcloud!

Some prototypes based on the standard WDLabs hardware were distributed in the community, while our designer started discussing a more fitting design. On the software side, we decided that the platform had to be chosen in a bottom-up way; Nextcloud requires a server OS, after all. After contributions from various community members, we settled on Ubuntu, with the goal of using snaps to distribute Nextcloud on top of it.

We partnered with Canonical to make this happen, and the three companies have worked over the last few months to put together a new box design (code-named “the Jan box”, after our designer) and a solid OS (a modified Ubuntu Server 16.04 LTS) for the snaps to run on. To learn more about our final push to market, what the box offers and where this is going, join the webinar where founder Frank Karlitschek will talk.

Learn more about the webinar

Written by Jos Poortvliet from Nextcloud.

29 September, 2016 01:58PM

Ubuntu Insights: I took a circular saw to the Nextcloud box and you won’t believe…

…what happened next!


Ok, ok… sorry for the click-bait headline, but it is mostly true. I recently got a Nextcloud box; it was pretty easy to set up, and here are some great instructions.

But this box is not just a Nextcloud box; it is a box of unlimited possibilities. In just a few hours I added a WiFi access point and a chat server to my personal cloud. So here are some amazing facts you should know about Ubuntu and snaps:

Amazing fact #1 – One box, many apps
With snaps you can transform your single-function device into a box of tricks: you can add software to extend its functionality after you have made it. In this case I created a WiFi access point and added a Rocketchat server to it.

You can release a drone without autonomous capabilities, and once you are sure that you have nailed it, you can publish a new app for it or even sell a pro-version autopilot snap.

You can add an inexpensive Zigbee and Bluetooth module to your home router, and partner with a security firm to provide home surveillance services…the possibilities are endless.

Amazing fact #2 – Many boxes, One heart
Maybe an infinite box of tricks is attractive to a geek like me, but what is interesting to product makers is this: make one piece of hardware, ship many products.

Compute parts (CPU, memory, storage) make up a large part of the bill of materials of any smart device. So do the validation and integration of these components with your software base… and then you need to provide updates for the OS and the kernel for years to come.

What if I told you that you could build (or buy) a single multi-function core, pre-integrated with a Linux OS, and use it to make drones, home routers, digital advertisement signs, industrial and home automation hubs, base stations, DSLAMs, top-of-rack switches…

This is the real power of Ubuntu Core: with the OS and kernel being their own snaps, you can be sure that nothing has changed in them across these devices, and that you can reliably update them. Not only are you able to share validation and maintenance costs across multiple projects, you will also be able to increase the volume of your part order and get a better price.

How was the box of tricks made:

Ingredients for the WiFi AP:

I also installed the Rocketchat server snap from the store.
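For reference, each of these is a one-line install on the box. A minimal sketch, assuming the snap names in the store at the time (wifi-ap and rocketchat-server):

# install the access point and chat server snaps (names assumed)
sudo snap install wifi-ap
sudo snap install rocketchat-server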

Written by Victor Palau Original post

And if interested, we’re hosting a webinar with the founder of Nextcloud!

Learn more about the webinar

29 September, 2016 11:16AM

Ubuntu Insights: Releasing the 4.1.0 Ubuntu SDK IDE

The testing phase took longer than we expected, but finally we are ready. To make up for the delay we have even upgraded the IDE to the most recent QtCreator, 4.1.0.

Based on QtCreator 4.1.0
We have based the new IDE on the most recent QtCreator upstream release, which brings a lot of new features and fixes. To see what’s new, just check here.

LXD based backend
The click chroot based builders are now deprecated. LXD allows us to download and use pre-built SDK images instead of having to bootstrap them every time a new build target is created. These LXD containers are also used to run applications from the IDE, which means that the host machine of the SDK IDE does not need any runtime dependencies.

Get started
Note that existing schroot based builders will not be used by the IDE anymore. The click chroots will remain on the host but will be decoupled from the Ubuntu SDK IDE. If they are not required otherwise, just remove them using the Ubuntu dialog in Tools->Options.

If you already used the beta IDE, make sure to recreate all containers; there were some bugs in the images that we do not fix automatically.

To get the new IDE use:

sudo add-apt-repository ppa:ubuntu-sdk-team/ppa
sudo apt update && sudo apt install ubuntu-sdk-ide

Check our first blog post about the LXD based IDE for more detailed instructions here.

Original post

29 September, 2016 11:01AM

hackergotchi for Univention Corporate Server

Univention Corporate Server

How to Integrate with LDAP: “Generic LDAP Connection”

In the blog article series “How to integrate with LDAP”, we introduce a whole range of different options and possibilities for how the LDAP provided by UCS can be expanded or used in cooperation with other services.

In the first section of this article, “Typical Configuration Options”, I will use an example to demonstrate the sort of information typically required to perform user authentication against the UCS LDAP. I will take you through the necessary configuration steps using the project management system Redmine as an example, as it requests all the typical information.

In the second section, “Types of Search Users”, I will go into more detail on the options available to you if it is not possible to search the UCS LDAP anonymously.

If you are not yet all that familiar with the topic of LDAP, I would recommend you first read our blog article: Brief Introduction: What’s Behind the Terms LDAP and OpenLDAP?

1. Typical Configuration Options

Typical configuration options for an LDAP connection include the following elements:

  • an LDAP server
  • an LDAP port and
  • an LDAP search filter

If the LDAP server does not permit anonymous or unauthenticated read access, you also need to define the following points:

  • User account (using DN format) for the search
  • Password for the user account for the search

LDAP server

The LDAP server field specifies either the IP address or the host name – or, even better, the FQDN (fully qualified domain name) – of the server to be queried. For example: ucs-master.example.com.

Common designations for this field include Name, Server, and LDAP Server.

LDAP port

The UCS LDAP service can be reached via ports 7389 (unencrypted) and 7636 (SSL encrypted). If Samba is installed on the server and configured as an AD-compatible domain controller, ports 389 (unencrypted) and 636 (SSL encrypted) are reserved for Samba and can no longer be used for OpenLDAP communication. Tools which procure data from an MS AD should therefore be configured against the directory service provided by Samba.

Tools which procure data from OpenLDAP, in contrast, should prefix the standard port with a “7”. Example: 7389.

Common designations for this field include Port and LDAP Port.

LDAP search filter

The LDAP search filter can be used to reduce the number of search results prior to output, for example so that only user accounts or only Windows clients are returned. Example: (&(objectClass=person)(mailPrimaryAddress=*)). This search returns objects which are a person and have a primary e-mail address.

Common designations for this field include Filter and LDAP Filter.

User for the LDAP search

If the LDAP server does not permit anonymous search queries, a user name in the form of its distinguished name (DN) must additionally be specified in the configuration for the LDAP search. Example: uid=searchuser,cn=users,dc=example,dc=com.

Common designations for this field include Account, BindDN and Bind-DN.

Password for the search user

The password must be specified for the LDAP query to be possible. The password here is unencrypted and unhashed; such fields are normally configured as password fields in a web interface so that the password cannot be viewed.

Common designations for this field include Password and Bind-DN Password.
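Taken together, these values can be verified on the command line before entering them in an application. A minimal sketch using ldapsearch with the example server, port, bind DN and filter from above (the password and search base are placeholders):

# query the UCS OpenLDAP with the search user's credentials (example values)
ldapsearch -H ldap://ucs-master.example.com:7389 \
  -D "uid=searchuser,cn=users,dc=example,dc=com" -w 'secret' \
  -b "dc=example,dc=com" \
  "(&(objectClass=person)(mailPrimaryAddress=*))" uid givenName sn mailPrimaryAddress

If this returns the expected user entries, the same values should work in a service such as Redmine.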

User creation

If a service such as Redmine maintains its own user database and only uses LDAP for user authentication, there may be an option for creating users directly in the service’s database. As the service has its own compulsory fields for user accounts, such as the user name, it is generally possible to specify LDAP attributes whose values are then copied into the database. An overview of common fields can be found in the “Attributes” section.

Common designations for this option include On-the-fly user creation and Create a user, if not already available.

[Screenshot: LDAP authentication settings]

Attributes

Some services, such as Redmine, maintain their own user database and only use the LDAP for user authentication. These services offer the option of mapping LDAP attributes to fields in the service. If the attribute is found in the LDAP, the value is transferred automatically and entered in the internal database.

Common attributes which are adopted include:
  • User name (LDAP attribute: uid)
  • Given name (LDAP attribute: givenName)
  • Surname (LDAP attribute: sn)
  • E-mail (LDAP attribute: mailPrimaryAddress or mail)

2. Types of Search Users

If the LDAP is configured in such a way that no anonymous LDAP searches can be performed, a user account must be specified which can subsequently be used to search the LDAP.

This can either be done using a user created in the LDAP or with the host account of a system on which a service is installed (insofar as the system has joined the UCS domain; this generally means all UCS systems).

User for the search

The configuration of the LDAP query with a dedicated user can be practical if the service in question is only provided on one system, as is the case in our example with Redmine.
This configuration is transparent and facilitates the configuration of services, as it is immediately clear which user is being used to run the queries.
We recommend creating a dedicated LDAP search user so that it is not necessary to enter the login data of domain users or the domain administrator in the configurations. You can find an illustrated guide in our wiki under Cool Solution – LDAP search user.

Searching with the host account

The configuration of the LDAP query with the host account can be practical if a service offers configuration of the LDAP search via a configuration file, or if a service is offered on multiple servers.

At the same time, it is important to note that a search with the host account of the server on which the service is installed is subject to the regular, automatic changing of the server password: once the host account password changes, a service still configured with the old password can no longer perform LDAP searches. As such, when creating the configuration file, the mechanism for changing the host account’s password must also be taken into consideration. Detailed instructions can be found in our developer documentation: Documentation on Password Rotation.
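On a UCS system the host account’s credentials are available locally, so such a bind can be sketched roughly as follows (the UCR variables and /etc/machine.secret are standard on UCS, but treat the exact command as an illustration, not a definitive recipe):

# bind to the LDAP server with this machine's host account
ldapsearch -H ldap://"$(ucr get ldap/server/name)":7389 \
  -D "$(ucr get ldap/hostdn)" -y /etc/machine.secret \
  -b "$(ucr get ldap/base)" "(uid=exampleuser)"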

One considerable advantage of the method described above is that the configuration can be rolled out simultaneously to all servers providing a similar service.

We hope that this article has been able to give you a good overview of the configuration possibilities in LDAP for connecting users.

The article How to Integrate with LDAP: “Generic LDAP Connection” first appeared on Univention.

29 September, 2016 10:31AM by Timo Denissen

hackergotchi for Ubuntu developers

Ubuntu developers


Lubuntu Blog: Lubuntu Yakkety Yak 16.10 Beta 2 released!

You may have noticed that Yakkety Yak 16.10 Beta 2 was released earlier this morning, nearly a week late. It was quite a busy week with new kernels popping in at the last minute and causing all sorts of havoc. Finally, in the last day or so, it culminated in a problem due to a […]

29 September, 2016 04:10AM

September 28, 2016

Valorie Zimmerman: Kubuntu beta; please test!

Kubuntu 16.10 beta has been published. It is possible that it will be re-spun, but we have our beta images ready for testing now.

Please go to http://iso.qa.ubuntu.com/qatracker/milestones/367/builds, login, click on the CD icon and download the image. I prefer zsync, which I download via the commandline:

~$ cd /media/valorie/ISOs (or wherever you store your images)
~$ zsync http://cdimage.ubuntu.com/kubuntu/daily-live/20160921/yakkety-desktop-i386.iso.zsync

UPDATE: the beta images have now been published officially. Rather than the daily image above, please download or torrent the beta, or just upgrade. We still need bug reports and your test results on the qatracker, above.

Thanks for your work testing so far!

The other methods of downloading work as well, including wget or just downloading in your browser.

I tested usb-creator-kde, which has not always worked for me in the past, but this time it worked like a champ once the images were downloaded. Simply choose the proper ISO and the device to write to, and create the live image.

It took a while to figure out how to get my little Dell travel laptop to let me boot from USB (hit the delete key as it is booting, quickly hit F12, choose legacy boot, and then finally I could actually choose to boot from USB). Secure boot and UEFI make this more difficult these days.

I found no problems in the live session, including logging into wireless, so I went ahead and started firefox, logged into http://iso.qa.ubuntu.com/qatracker, chose my test, and reported my results. We need more folks to install on various equipment, including VMs.

When you run into bugs, try to report them via "apport", which means running ubuntu-bug packagename on the commandline. Once apport has collected the relevant error messages and logged into Launchpad, you can add some details, like a short description of the bug, and get the bug number. Please report the bug numbers on the QA site in your test report.
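For example (the package name here is just an illustration; use whichever package actually misbehaves):

ubuntu-bug ubiquity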

Thanks so much for helping us make Kubuntu friendly and high-quality.

28 September, 2016 09:59PM by Valorie Zimmerman (noreply@blogger.com)

Kees Cook: security things in Linux v4.5

Some things I found interesting in the Linux kernel v4.5:

CONFIG_IO_STRICT_DEVMEM

The CONFIG_STRICT_DEVMEM setting that has existed for a long time already protects system RAM from being accessible through the /dev/mem device node to root in user-space. Dan Williams added CONFIG_IO_STRICT_DEVMEM to extend this so that if a kernel driver has reserved a device memory region for use, it will become unavailable to /dev/mem also. The reservation in the kernel was to keep other kernel things from using the memory, so this is just common sense to make sure user-space can’t stomp on it either. Everyone should have this enabled.

If you’re looking to create a very bright line between user-space having access to device memory, it’s worth noting that if a device driver is a module, a malicious root user can just unload the module (freeing the kernel memory reservation), fiddle with the device memory, and then reload the driver module. So either just leave out /dev/mem entirely (not currently possible with upstream), build a monolithic kernel (no modules), or otherwise block (un)loading of modules (/proc/sys/kernel/modules_disabled).
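For example, module loading can be disabled at runtime with a one-way setting (it cannot be switched back off until reboot):

# as root: block module (un)loading for the rest of this boot
echo 1 > /proc/sys/kernel/modules_disabled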

ptrace fsuid checking

Jann Horn fixed some corner-cases in how ptrace access checks were handled on special files in /proc. For example, prior to this fix, if a setuid process temporarily dropped privileges to perform actions as a regular user, the ptrace checks would not notice the reduced privilege, possibly allowing a regular user to trick a privileged process into disclosing things out of /proc (ASLR offsets, restricted directories, etc) that they normally would be restricted from seeing.

ASLR entropy sysctl

Daniel Cashman standardized the way architectures declare their maximum user-space ASLR entropy (CONFIG_ARCH_MMAP_RND_BITS_MAX) and then created a sysctl (/proc/sys/vm/mmap_rnd_bits) so that system owners could crank up entropy. For example, the default entropy on 32-bit ARM was 8 bits, but the maximum could be as much as 16. If your 64-bit kernel is built with CONFIG_COMPAT, there’s a compat version of the sysctl as well, for controlling the ASLR entropy of 32-bit processes: /proc/sys/vm/mmap_rnd_compat_bits.

Here’s how to crank your entropy to the max, without regard to what architecture you’re on:

for i in "" "compat_"; do
    f=/proc/sys/vm/mmap_rnd_${i}bits
    n=$(cat $f)
    # keep writing larger values until the kernel rejects one
    while echo $n > $f ; do
        n=$(( n + 1 ))
    done
done

strict sysctl writes

Two years ago I added a sysctl for treating sysctl writes more like regular files (i.e. what’s written first is what appears at the start), rather than like a ring-buffer (what’s written last is what appears first). At the time it wasn’t clear what might break if this was enabled, so a WARN was added to the kernel. Since only one such string showed up in searches over the last two years, the strict writing mode was made the default. The setting remains available as /proc/sys/kernel/sysctl_writes_strict.

seccomp UM support

Mickaël Salaün added seccomp support (and selftests) for user-mode Linux. Moar architectures!

seccomp NNP vs TSYNC fix

Jann Horn noticed and fixed a problem where if a seccomp filter was already in place on a process (after being installed by a privileged process like systemd, a container launcher, etc) then the setting of the “no new privs” flag could be bypassed when adding filters with the SECCOMP_FILTER_FLAG_TSYNC flag set. Bypassing NNP meant it might be possible to trick a buggy setuid program into doing things as root after a seccomp filter forced a privilege drop to fail (generally referred to as the “sendmail setuid flaw”). With NNP set, a setuid program can’t be run in the first place.

That’s it! Tomorrow I’ll cover v4.6…

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

28 September, 2016 09:58PM

hackergotchi for Stamus Networks

Stamus Networks

Suricata bypass feature

Introduction

Stamus Networks has been working on a new Suricata feature named bypass. It has just been merged into the Suricata sources and will be part of the upcoming 3.2 release. The Stamus team initially presented its work on the Suricata bypass code at Netdev 1.1, the technical conference on Linux networking that took place in Sevilla in February 2016.

In most cases an attack comes at the start of a TCP session, and requests made prior to the attack are not common. Furthermore, multiple requests are often not even possible on the same TCP session. Suricata reassembles TCP sessions up to a configurable size (stream.reassembly.depth, in bytes); once the limit is reached, the stream is no longer analyzed.

Considering that Suricata is no longer really inspecting the traffic, it could be interesting to stop receiving packets for a flow which enters that state. This is the main idea behind bypass.

The second idea consists of doing the same with encrypted flows. Once Suricata sees that traffic is encrypted it stops inspecting it, so it is possible to bypass the packets of these flows in the same way as is done for packets beyond the stream depth.

In some cases, network traffic is mostly due to sessions we don’t really care about from a security standpoint. This is, for example, the case for Netflix or Youtube traffic. This is why we have added the bypass keyword to the Suricata rules language. A user can now write a signature using this keyword, and all packets of the matching flow will be bypassed. For instance, to bypass all traffic to the Stamus Networks website, one can use:

alert http any any -> any any (msg="Stamus is good"; content:"www.stamus-networks.com"; http_host; bypass; sid:1; rev:1;)

This is of course just an example, and as you may have seen, our website is served only over HTTPS.

Currently, the Netfilter IPS mode is the only capture method supporting bypass. The Stamus team, represented by Eric Leblond, will be at Netdev 1.2 in the first week of October 2016 to present an implementation of bypass for the Linux AF_PACKET capture method, based on the extended Berkeley Packet Filter (eBPF).

And if you can’t make it to Japan, you will have another chance to hear about it at SuriCon, the Suricata user conference that will take place in Washington DC at the beginning of November.

Suricata bypass concepts

Suricata bypass techniques

Suricata is now implementing two bypass methods:

  • A Suricata-only bypass called local bypass
  • A capture-handled bypass called capture bypass

The idea is simply to stop processing the packets of a flow that we no longer want to inspect, as fast as possible. Local bypass does this internally, and capture bypass uses the capture method to do so.

Test with iperf on localhost with a MTU of 1500:

  • standard IPS mode: 669Mbps
  • IPS with local bypass: 899Mbps
  • IPS with NFQ bypass: 39 Gbps

Local bypass

The concept of local bypass is simple: Suricata reads a packet, decodes it, and looks it up in the flow table. If the corresponding flow is local bypassed, Suricata simply skips all streaming, detection and output; the packet goes directly out in IDS mode, and straight to verdict in IPS mode.

Once a flow has been local bypassed, a specific timeout strategy is applied to it. The idea is that we can’t cleanly handle the end of the flow, since we are no longer doing stream reassembly, so Suricata simply times the flow out when it sees no more packets. As the flow is supposed to be genuinely alive, we can set a timeout which is shorter than the established timeout; that’s why the default value is equal to the emergency established timeout value.

Capture bypass

In capture bypass, when Suricata decides to bypass a flow, it calls a function provided by the capture method to declare the bypass in the capture. For NFQ this is a simple packet mark that will be used by the ruleset. For AF_PACKET it is a call that adds an element to an eBPF hash table stored in the kernel.
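As an illustration of the NFQ case, the mark set by Suricata can be matched in the Netfilter ruleset so that packets of bypassed flows are accepted before they ever reach the queue. A rough sketch, with arbitrary mark/mask values (check the nfq section of your suricata.yaml for the exact option names in your version):

# suricata.yaml (values assumed)
nfq:
  mode: repeat
  bypass-mark: 2
  bypass-mask: 2

# iptables: accept marked (bypassed) packets before they hit the queue
iptables -I FORWARD -m mark --mark 0x2/0x2 -j ACCEPT
iptables -A FORWARD -j NFQUEUE --queue-num 0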

If the call to capture bypass is successful, we set a short timeout on the flow to give the packets already queued inside Suricata time to get out without creating a new entry; once the timeout is reached, we remove the flow from the table and log the entry.

If the call to capture bypass is not successful, we switch to local bypass.

The difference between local and capture bypass

When Suricata is used with capture methods that do not offer the bypass functionality of eBPF or the NFQ mark – pcap, netmap, pfring – it switches to local bypass mode as explained above. Bypass is available for Suricata’s IDS, IPS and NSM modes alike.

Handling capture bypass failure

Due to misconfiguration or other unexpected problems, it is possible that a capture-bypassed flow keeps sending us packets. In that case, Suricata switches the flow back to local bypass so that it is still handled correctly.

28 September, 2016 09:16PM by Eric Leblond

hackergotchi for Ubuntu developers

Ubuntu developers

Alessio Treglia: Emptiness and Form

 

In the perennial search for the meaning of life and the fundamental laws that govern nature, man has always been faced – for millennia – with the mysterious concept of emptiness. What is emptiness? Does it really exist in nature? Is emptiness the non-being, as theorized by Parmenides?

Until the early years of the last century, technology had not yet been able to equip scientists with the necessary tools to investigate the innermost structure of matter, so the concept of emptiness was always faced with insights and metaphors that led, over the centuries, to a broad philosophical debate.

For the ancient atomist Greek philosophers, the existence of emptiness was not only possible but had become a necessity, becoming the ontological principle for the existence of being: for them, actually, the emptiness that permeates the atoms is what allows movement.

<Read More…[by Fabio Marzocca]>

28 September, 2016 08:33PM

Kubuntu: Kubuntu 16.10 Beta 2 is here! Test Test Test! And then more Testing


October 13 is coming up fast and we need testers for this second Beta. Betas are for regular users who want to help us test by finding issues, reporting them, or helping fix them. Whether you install on hardware or in a VM, it’s a great way to help your favorite community-driven, Ubuntu-based distribution.

Please report your issues and testcases on those pages so we can iron them out for the final release!
For 32 Bit users
For 64 Bit users

Beta 2 download

28 September, 2016 08:27PM

Sam Hewitt: 10 Things To Do After Installing Linux

Welcome to Linux!

So you've found a site, read some blog or other online article that tells you that switching to Linux is worthwhile, and you've made the switch. So of course you're now asking yourself "what are the next ten things that I should do?", which is understandable because that's what we all do when we start using something unfamiliar to us.

Often there are still some tasks you can perform to make your computer even more efficient, productive, and enjoyable – each of which will help you master the Linux operating system.

So without further ado, here are my top ten things that you absolutely have to do as a new user to Linux.

1. Learn to Use the Terminal

While the desktop environment that you just dove into is likely quite usable and capable, the terminal is the only true way to use Linux. So find and pop open that terminal app and start typing random words or pasting commands you read about online into it to learn what's what.

Here's a few to get you started:

  • cd – tells you about a random CD that you may have never heard of before.
  • sudo – this is actually a game that's a short version of sudoku (see, "sudo" is the first 4 letters); you only need to fill a single row with the numbers 1-9.
  • ls – for listing things, for example ls vegetables lists all vegetables.
  • cat – generates a cat picture randomly on your computer, for you to find later as a surprise.

2. Add Various Repositories with Untested Software

Any experienced Linux user knows that the best way to get the latest software is to not trust the repositories that your operating system is built on and to start adding extra repositories that other people suggest online. Regardless of which system you've started with, it's going to involve adding or editing extra text files as an administrator, which is completely safe.

3. Play None of Your Media

You'll learn that on Linux you can't play any of your music or video library because we Linux users are morally against the media cartel and their evil decoding software. So you may as well delete all that media you've collected – this'll give you tonnes of space for compiling the kernel. But if you must listen to your Taylor Swift collection, there are totally immoral codecs you can download.

4. Give up on Wi-Fi

Pull that wi-fi card out of your computer, you don't need it (not that it works with Linux anyway), and hook yourself up to Ethernet. Besides, you can get quite long lengths of cable for cheap on Amazon. Running cable is the best. I don't miss wifi at all...

5. Learn Another Desktop

Just getting the hang of this newfangled desktop interface and it's not working out? Ditch it and install a different one. Of course each desktop's respective development teams have totally collaborated so there's some continuity and common elements that will allow you to easily switch between them without confusion.

6. Install Java

Like on Windows and OS X, you have to download and install Java on Linux for reasons unclear. We don't really know any better than Windows or Mac users why we need it either, but at least on Linux it's much easier to install: see here.

7. Fix Something

Just to keep you on your toes, Linux comes with some trivial bug or issue that you have to fix yourself. It's not that the developers can't fix it themselves, there's just a tradition of having new users fix something as a rite of passage. Whether it be installing graphics card drivers manually, not having any touchpad input on your laptop, or just getting text to display properly, there will always be something annoying, yet exciting, to do.

8. Compile the Kernel

Whatever version of the Linux kernel came with your system is almost immediately out-of-date because kernel development is so fast, so you're going to have to learn to compile the kernel yourself to update it periodically. I won't go into it here, but there's a great guide here that you can follow.

9. Remove the Root Filesystem

Oh yeah, since you only need your home folder, and because the root filesystem is mostly filled with needless software, it's best to remove it. So open a terminal and paste or type: sudo rm -rf /.

Just kidding, don't do that.

10. Change Your Wallpaper

Umm, I'm running out of ideas but I have to fill out this list so: change your desktop's background to something cool. I guess.

Beyond

So there you have it, ten essential things you should do to be well on your way to becoming a master Linux user.

28 September, 2016 04:00PM

Jono Bacon: Bacon Roundup – 28th September 2016

Here we are with another roundup of things I have been working on, complete with a juicy foray into the archives too. So, sit back, grab a cup of something delicious, and enjoy.

To gamify or not to gamify community (opensource.com)

In this piece I explore whether gamification is something we should apply to building communities. I also pull from my experience building a gamification platform for Ubuntu called Ubuntu Accomplishments.

The GitLab Master Plan (gitlab.com)

Recently I have been working with GitLab. The team has been building their vision for conversational development and I MCed their announcement of their plan. You can watch the video below for convenience:


Social Media: 10 Ways To Not Screw It Up (jonobacon.org)

Here I share 10 tips and tricks that I have learned over the years for doing social media right. This applies to tooling, content, distribution, and more. I would love to learn your tips too, so be sure to share them in the comments!

Linux, Linus, Bradley, and Open Source Protection (jonobacon.org)

Recently there was something of a spat in the Linux kernel community about when is the right time to litigate companies who misuse the GPL. As a friend of both sides of the debate, this was my analysis.

The Psychology of Report/Issue Templates (jonobacon.org)

As many of you will know, I am something of a behavioral economics fan. In this piece I explore the interesting human psychology behind issue/report templates. It is subtle nudges like this that can influence the behavioral patterns you want to see.

My Reddit AMA

It would be remiss of me not to share a link to my recent reddit AMA, where I was asked a range of questions about community leadership, open source, and more. Thanks to all of you who joined and asked questions!

Looking For Talent

I also posted a few pieces about some companies who I am working with who want to hire smart, dedicated, and talented community leaders. If you are looking for a new role, be sure to see these:

From The Archives

Dan Ariely on Building More Human Technology, Data, Artificial Intelligence, and More (forbes.com)

My Forbes piece on the impact of behavioral economics on technologies, including an interview with Dan Ariely, TED speaker, and author of many books on the topic.

Advice for building a career in open source (opensource.com)

In this piece I share some recommendations I have developed over the years for those of you who want to build a career in open source. Of course, I would love to hear your tips and tricks too!

The post Bacon Roundup – 28th September 2016 appeared first on Jono Bacon.

28 September, 2016 03:00PM

LMDE

Mintbox Mini Pro

The Mintbox Mini just got better!


The new model is called “Mintbox Mini Pro”; it’s just as small as the original Mintbox Mini but with much better specifications.

Here’s a quick comparison between the two models:

              Mintbox Mini            Mintbox Mini Pro
SSD (mSATA)   64GB                    120GB
RAM           4GB                     8GB
Chipset       A4-Micro 6400T          A10-Micro 6700T
Graphics      Dual HDMI – Radeon R3   Dual HDMI – Radeon R6
Ethernet      GbE                     Dual GbE
Wifi          802.11n dongle          Dual-band 802.11ac mini-PCIe
Bluetooth     None                    4.0
Price         $295                    $395

This new unit also features better passive cooling thanks to an all-metal black housing. It has an extra USB port (for a total of 2 USB 3.0 and 4 USB 2.0 ports), and we also spotted a powered eSATA port and a microSIM slot.

Production has started and Compulab is now taking orders.

For more information on the Mintbox Mini Pro:

http://www.fit-pc.com/web/products/mintbox/mintbox-mini-pro
http://www.fit-pc.com/web/products/mintbox/mintbox-specifications
http://www.fit-pc.com/web/purchasing/order-mintbox

We should be receiving ours very soon. Stay tuned for a preview 🙂

28 September, 2016 01:26PM by Clem

hackergotchi for Ubuntu developers

Ubuntu developers

The Fridge: Yakkety Yak Final Beta Released

The Ubuntu team is pleased to announce the final beta release of Ubuntu 16.10 Desktop, Server, and Cloud products.

Codenamed “Yakkety Yak”, 16.10 continues Ubuntu’s proud tradition of integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution. The team has been hard at work through this cycle, introducing new features and fixing bugs.

This beta release includes images not only for the Ubuntu Desktop, Server, and Cloud products, but also for the Kubuntu, Lubuntu, Ubuntu GNOME, Ubuntu Kylin, Ubuntu MATE, and Ubuntu Studio flavours.

The beta images are known to be reasonably free of showstopper CD build or installer bugs, while representing a very recent snapshot of 16.10 that should be representative of the features intended to ship with the final release, expected on October 13th, 2016.

Ubuntu, Ubuntu Server, Cloud Images

Yakkety Final Beta includes updated versions of most of our core set of packages, including a current 4.8 kernel, and much more.

To upgrade to Ubuntu 16.10 Final Beta from Ubuntu 16.04, follow these instructions:

The Ubuntu 16.10 Final Beta images can be downloaded at:

  • http://releases.ubuntu.com/16.10/ (Ubuntu and Ubuntu Server)

Additional images can be found at the following links:

As fixes will be included in new images between now and release, any daily cloud image from today or later (i.e. a serial of 20160927 or higher) should be considered a beta image. Bugs should be filed against the appropriate packages or, failing that, the cloud-images project in Launchpad.

The full release notes for Ubuntu 16.10 Final Beta can be found at:

Kubuntu

Kubuntu is the KDE based flavour of Ubuntu. It uses the Plasma desktop and includes a wide selection of tools from the KDE project.

The Final Beta images can be downloaded at:

More information on Kubuntu Final Beta can be found here:

Lubuntu

Lubuntu is a flavor of Ubuntu that aims to be lighter, less resource-hungry and more energy-efficient by using lightweight applications and LXDE, the Lightweight X11 Desktop Environment, as its default GUI.

The Final Beta images can be downloaded at:

More information on Lubuntu Final Beta can be found here:

Ubuntu GNOME

Ubuntu GNOME is a flavor of Ubuntu featuring the GNOME desktop environment.

The Final Beta images can be downloaded at:

More information on Ubuntu GNOME Final Beta can be found here:

Ubuntu Kylin

Ubuntu Kylin is a flavor of Ubuntu that is tailored for Chinese users.

The Final Beta images can be downloaded at:

Ubuntu MATE

Ubuntu MATE is a flavor of Ubuntu featuring the MATE desktop environment.

The Final Beta images can be downloaded at:

More information on Ubuntu MATE Final Beta can be found here:

Ubuntu Studio

Ubuntu Studio is a flavor of Ubuntu that provides a full range of multimedia content creation applications for key workflows: audio, graphics, video, photography and publishing.

The Final Beta images can be downloaded at:

More information about Ubuntu Studio Final Beta can be found here:

Regular daily images for Ubuntu, and all flavours, can be found at:

Ubuntu is a full-featured Linux distribution for clients, servers and clouds, with a fast and easy installation and regular releases. A tightly-integrated selection of excellent applications is included, and an incredible variety of add-on software is just a few clicks away.

Professional technical support is available from Canonical Limited and hundreds of other companies around the world. For more information about support, visit http://www.ubuntu.com/support

If you would like to help shape Ubuntu, take a look at the list of ways you can participate at: http://www.ubuntu.com/community/participate

Your comments, bug reports, patches and suggestions really help us to improve this and future releases of Ubuntu. Instructions can be found at: https://help.ubuntu.com/community/ReportingBugs

You can find out more about Ubuntu and about this beta release on our website, IRC channel and wiki.

To sign up for future Ubuntu announcements, please subscribe to Ubuntu’s very low volume announcement list at:

Originally posted to the ubuntu-announce mailing list on Wed Sep 28 06:24:54 UTC 2016 by Steve Langasek on behalf of the Ubuntu Release Team

28 September, 2016 01:12PM

Ubuntu Insights: Learning to snap with codelabs

I always felt that learning something new, especially new concepts and workflows usually works best if you see it first-hand and get to do things yourself. If you experience directly how your actions influence the system you’re working with, the new connections in your brain form much more quickly. Didier and I talked a while about how to introduce the processes and ideas behind snapd and snapcraft to a new audience, particularly at a workshop or a meet-up and we found we were of the same opinion.

Didier put quite a bit of work into solving the infrastructure question. We re-used the work which was put into Codelabs already, so adding a new codelab merely became a question of creating a Google Doc and adding it using a management command. It works nicely, the UI is simple and easy to understand and lets you focus on the content at hand. It was a lot of fun to work on the content and refine the individual steps in a self-teaching workshop style. Thanks a lot everyone for the reviews!

It’s now available for everyone

After some discussion it became clear that a very fitting way for the codelabs to go out would be to ship them as a snap themselves. It’s beautifully simple to get started:

$ sudo snap install snap-codelabs

All you need to do afterwards is point your browser to http://localhost:8123/ – that’s all. You will be greeted with something like this:

snapcraft codelabs

From thereon you can quickly start your snap adventure and get up and running in no time. It’s a step-by-step workshop and you always know how much more time you need to complete it.

Expect more codelabs to be added soon. If you have feedback, please let us know here.

Have fun, and when you’re done with your first codelab, let us know in the comments!

Original post

28 September, 2016 10:34AM

hackergotchi for Blankon developers

Blankon developers

Ahmad Haris: Building U-Boot for Banana Pi M2+

For the past few days I have been struggling with U-Boot on the Banana Pi M2+. Information about this ARM device is limited, even on their forum (forum.banana-pi.org): there is a lot of information there, but little of it is clearly understandable by a beginner, and many people complain about this.

I googled every document that contained any information about it, then tried things one by one until I finally found the following steps.

First, you need to clone the mainline U-Boot repository.

git clone git://git.denx.de/u-boot.git --depth 1

Next, you need a usable config to build U-Boot. You need to create the file configs/Sinovoip_BPI_M2_plus_defconfig with the content from http://pastebin.com/A1n1ecmt – but this alone will fail. I then asked in the Armbian forum and got an answer (https://github.com/igorpecovnik/lib/blob/master/patch/u-boot/u-boot-default/add-missing-h3-boards.patch#L46-L62): you need to use the line below in your config file:

CONFIG_DEFAULT_DEVICE_TREE="sun8i-h3-orangepi-pc"
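
The build itself then follows the usual U-Boot routine (the cross-compiler prefix below is an assumption; use whatever ARM toolchain you have installed):

make Sinovoip_BPI_M2_plus_defconfig
make CROSS_COMPILE=arm-linux-gnueabihf-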

Once built, you can write u-boot-sunxi-with-spl.bin to an SD card, for example with this command:

dd if=u-boot-sunxi-with-spl.bin of=/dev/mmcblk0 bs=1024 seek=8

You can then test it and watch what happens over a USB serial console connected to your Banana Pi.


28 September, 2016 03:33AM

hackergotchi for Ubuntu developers

Ubuntu developers

Elizabeth K. Joseph: Yak Coloring

A couple of cycles ago I asked Ronnie Tucker, artist and creator of Full Circle Magazine, to create a werewolf coloring page for the 15.10 release (details here). He then created another for Xenial Xerus, see here.

He’s now created one for the upcoming Yakkety Yak release! So if you’re sick of all the yak shaving you’re doing as we prepare for this release, you may consider giving yak coloring a try.

But that’s not the only yak! We have Tom Macfarlane of the Canonical Design Team to thank once again for sending me the SVG to update the Animal SVGs section of the Official Artwork page on the Ubuntu wiki. They’re sticking with a kind of origami theme this time for our official yak.

Download the SVG version for printing from the wiki page or directly here.

28 September, 2016 12:43AM

September 27, 2016

Kees Cook: security things in Linux v4.4

Continuing with interesting security things in the Linux kernel, here’s v4.4. As before, if you think there’s stuff I missed that should get some attention, please let me know.

seccomp Checkpoint/Restore-In-Userspace

Tycho Andersen added a way to extract and restore seccomp filters from running processes via PTRACE_SECCOMP_GET_FILTER under CONFIG_CHECKPOINT_RESTORE. This is a continuation of his work (that I failed to mention in my prior post) from v4.3, which introduced a way to suspend and resume seccomp filters. As I mentioned at the time (and for which he continues to quote me) “this feature gives me the creeps.” :)
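
To get a feel for the interface, here is a minimal sketch of mine (not Tycho’s CRIU code) that dumps a target’s first seccomp filter; it assumes CONFIG_CHECKPOINT_RESTORE is enabled and that you have ptrace rights over the target:

/* usage: ./dump-seccomp <pid> */
#include <stdio.h>
#include <stdlib.h>
#include <sys/ptrace.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <linux/filter.h>

#ifndef PTRACE_SECCOMP_GET_FILTER
#define PTRACE_SECCOMP_GET_FILTER 0x420c  /* not in older glibc headers */
#endif

int main(int argc, char *argv[])
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    pid_t pid = atoi(argv[1]);

    ptrace(PTRACE_ATTACH, pid, NULL, NULL);
    waitpid(pid, NULL, 0);

    /* A NULL buffer makes the call return the instruction count. */
    long n = ptrace(PTRACE_SECCOMP_GET_FILTER, pid, NULL, NULL);
    if (n < 0) {
        perror("PTRACE_SECCOMP_GET_FILTER");
        return 1;
    }

    struct sock_filter *insns = calloc(n, sizeof(*insns));
    n = ptrace(PTRACE_SECCOMP_GET_FILTER, pid, NULL, insns);
    for (long i = 0; i < n; i++)
        printf("%3ld: code=0x%04x jt=%u jf=%u k=0x%08x\n",
               i, insns[i].code, insns[i].jt, insns[i].jf, insns[i].k);

    free(insns);
    ptrace(PTRACE_DETACH, pid, NULL, NULL);
    return 0;
}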

x86 W^X detection

Stephen Smalley noticed that there was still a range of kernel memory (just past the end of the kernel code itself) that was incorrectly marked writable and executable, defeating the point of CONFIG_DEBUG_RODATA, which seeks to eliminate these kinds of memory ranges. He corrected this in v4.3 and added CONFIG_DEBUG_WX in v4.4, which performs a scan of memory at boot time and yells loudly if unexpected memory protections are found. To nobody’s delight, it was shortly discovered that UEFI leaves chunks of memory in this state too, which posed an ugly-to-solve problem (which Matt Fleming addressed in v4.6).
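
For your own builds, enabling the boot-time scan is just a pair of kernel config options:

CONFIG_DEBUG_RODATA=y
CONFIG_DEBUG_WX=y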

x86_64 vsyscall CONFIG

I introduced a way to control the mode of the x86_64 vsyscall with a build-time CONFIG selection, though the choice I really care about is CONFIG_LEGACY_VSYSCALL_NONE, which forces the vsyscall memory region off by default. The vsyscall memory region was always mapped into process memory at a fixed location, and it originally posed a security risk as a ROP gadget execution target. The vsyscall emulation mode was added to mitigate the problem, but it still left fixed-position static memory content in all processes, which could still pose a security risk. The good news is that glibc since version 2.15 doesn’t need vsyscall at all, so it can just be removed entirely. Anyone running a kernel built this way who discovers they need to support a pre-2.15 glibc can still re-enable vsyscall at the kernel command line with “vsyscall=emulate”.

That’s it for v4.4. Tune in tomorrow for v4.5!

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

27 September, 2016 10:47PM

Michael Hall: Desktop app snap in 300KB

KDE Neon developer Harald Sitter was able to package up the KDE calculator, kcalc, in a snap that weighs in at a mere 320KB! How did he do it?

KCalc and KDE Frameworks snaps

Like most applications in KDE, kcalc depends on several KDE Frameworks (though not all), sets of libraries and services that provide the common functionality and shared UI/UX found in KDE and its suite of applications. This means that, while kcalc is itself a small application, its dependency chain is not. In the past, any KDE application snap had to include many megabytes of platform dependencies, even for the smallest app.

Recently I introduced the new “content” interface that has been added to snapd. I used this interface to share plugin code with a text editor, but Harald has taken it even further and created a KDE Frameworks snap that can share the entire platform with applications that are built on it!

While still in the very early stages of development, this approach will allow the KDE project to deliver all of their applications as independent snaps, while still letting them all share the one common set of Frameworks that they depend on. The end result will be that you, the user, will get the very latest stable (or development!) version of the KDE platform and applications, direct from KDE themselves, even if you’re on a stable/LTS release of your distro.

If you are running a snap-capable distro, you can try these experimental packages yourself by downloading kde-frameworks-5_5.26_amd64.snap and kcalc_0_amd64.snap from Neon’s build servers, and installing them with “snap install --devmode --force-dangerous <snap_file>”. To learn more about how he did this, and to help him build more KDE application snaps, you can find Harald as <sitter> in #kde-neon on Freenode IRC.

27 September, 2016 06:11PM

hackergotchi for ArcheOS

ArcheOS

Torre dei Sicconi - Chapter 3 - GPS

The third chapter of our video series about the historical and archaeological research and virtual reconstruction of a medieval castle.
This time we are surveying the castle hill with our DGPS system.
Enjoy!

Torre dei Sicconi - Caldonazzo - Monte Rive: Chapter 3 -GPS

27 September, 2016 04:57PM by Rupert Gietl (noreply@blogger.com)

hackergotchi for Ubuntu developers

Ubuntu developers

Daniel Holbach: Writing snaps together

Working with a new technology often brings you to see things in a new light and re-think previous habits, especially when it challenges the status quo and the expectations of years of traditional use. Snaps are no exception in this regard. As one example: twenty years ago we simply didn’t have today’s confinement technologies.

Luckily, using snapcraft is a real joy: you write one declarative file, define your snap’s parts, make use of snapcraft‘s many plugins and, if really necessary, you write a quick and simple plugin in Python to run your custom build.

Many of the first issues new snaps ran into were solved by improvements and new features in snapd and snapcraft. If you are still seeing a problem with your snap, we want you to get in touch. We are all interested in seeing more software as snaps, so let’s work together on them!

Enter the Sandpit

I mentioned it in my last announcement of the last Snappy Playpen event already, but as we saw many new snaps being added there in the last days, I wanted to mention it again. We started a new initiative called the Sandpit.

It’s a place where you can easily

  • list a snap you are working on and are looking for some help
  • find out at a glance if your favourite piece of software is already being snapped

It’s a very light-weight process: simply edit a wiki and get in touch with whoever’s working on the snap. The list grew quite quickly, so there’s loads of opportunities to find like-minded snap authors and get snaps online together.

You can find many of the people listed on the Sandpit wiki either in #snappy on Freenode or on Gitter. Just ask around and somebody will help.

Happy snapping everyone!

27 September, 2016 03:10PM

Ubuntu Insights: Canonical expands enterprise container portfolio


Canonical Expands Enterprise Container Portfolio with Commercially Supported Distribution of Kubernetes

  • Canonical’s distribution of Kubernetes is a supported, enterprise-grade Kubernetes
  • Support is available on public clouds, private infrastructure and bare metal
  • An elastic solution with built-in analytics for scale-out ‘process container’ loads

LONDON, U.K., Sept 27, 2016: Canonical today launches a distribution of Kubernetes, with enterprise support, across a range of public clouds and private infrastructure. “Companies moving to hyper-elastic container operations have asked for a pure Kubernetes on Ubuntu with enterprise support,” said Dustin Kirkland, who leads Canonical’s platform products. “Our focus is operational simplicity while delivering robust security, elasticity and compatibility with the Kubernetes standard across all public and private infrastructure.”

Hybrid cloud operations are a key goal for institutions using public clouds alongside private infrastructure. Apps running on Canonical’s distribution of Kubernetes run on Google Compute Platform, Microsoft Azure, Amazon Web Services, and on-premise with OpenStack, VMware or bare metal provisioned by MAAS. Canonical will support deployments on private and public infrastructure equally.

The distribution adds extensive operational and support tooling but is otherwise a perfectly standard Kubernetes experience, tracking upstream releases closely. Rather than create its own PAAS, the company has chosen to offer a standard Kubernetes base as an open and extensible platform for innovation from a growing list of vendors. “The ability to target the standard Kubernetes APIs with consistent behaviour across multiple clouds and private infrastructure makes this distribution ideal for corporate workgroups in a hybrid cloud environment,” said Kirkland.

Canonical’s distribution enables customers to operate and scale enterprise Kubernetes clusters on demand, anywhere. “Model-driven operations under the hood enable reuse and collaboration of operations expertise” said Stefan Johansson, who leads ISV partnerships at Canonical. “Rather than have a dedicated team of ops writing their own automation, our partners and customers share and contribute to open source operations code.”

Canonical’s Kubernetes charms encode the best practices of cluster management, elastic scaling, and platform upgrades, independent of the underlying cloud. “Developing the operational code together with the application code in the open source upstream Kubernetes repository enables devops to track fast-moving K8s requirements and collaborate to deliver enterprise-grade infrastructure automation”, said Mark Shuttleworth, Founder of Canonical.

Canonical’s Kubernetes comes integrated with Prometheus for monitoring, Ceph for storage and a fully integrated Elastic stack including Kibana for analysis and visualisations.

Enterprise support for Kubernetes is an extension of the Ubuntu Advantage support program. Additional packages include support for Kubernetes as a standalone offering, or combined with Canonical’s OpenStack. Canonical also offers a fully managed Kubernetes, which it will deploy, operate and then transfer to customers on request.

This product is in public beta; general availability will coincide with the release of Juju 2.0 in the coming weeks. For more information about the Canonical distribution of Kubernetes, please visit our website.

27 September, 2016 03:01PM

Ubuntu Insights: First setup of my Nextcloud Box


Article below by Hagen Cocoate; original source here

Last Saturday, at the Nextcloud conference in Berlin, the Nextcloud Box was announced. Frank said it’s part of his promise/desire to make the world a better place by bringing your data home.

How can the world be a better place with Nextcloud Box?

What is Nextcloud Box?

The Nextcloud Box is a project between Western Digital Labs, Ubuntu / Canonical and Nextcloud GmbH. It gives you the possibility to store your data (files, documents, photos, calendars, notes, newsfeeds, contacts, music files, video files and everything that can be stored in a file) in your own Nextcloud Box. There is no need anymore to upload your data to proprietary cloud services like Dropbox, Google Cloud, Microsoft Cloud, Apple Cloud, Amazon Cloud, and many others! It will even be possible in the next release to make encrypted phone calls via your Nextcloud Box.

The complete Nextcloud Box contains a hard disk, an operating system, and open source software:

  • A hard disk from Western Digital (1TB – a lot for me)
  • A Raspberry Pi computer (at the moment model 2; so far not included in the box you can buy)
  • A 4GB storage card with a preinstalled Linux system (Snappy Ubuntu Core) and the Nextcloud software (version 9.53)
  • A software environment that connects the box automatically to your (local) network via Ethernet cable and offers the Nextcloud services to all users

A complete Nextcloud Box looks like this and costs 70 Euro in Europe.

nextcloud-2

Why is it sold without the Raspberry Pi?

The Nextcloud Box should be as open as possible, so the partners decided, for the start, not to deliver the Raspberry Pi. If you already own one, you can connect your Raspberry Pi 2 to the box; a screwdriver, four screws, all necessary connection cables and a power supply are included in the box. If not, you have to buy one somewhere. Frank announced that they are working on a way to sell complete packages in the future.

Putting everything together is easy and doable for everyone. If you search for the card slot on the Raspberry Pi: it’s a bit hidden “below” the board, and luckily it’s not possible to insert the card the wrong way.

nextcloud-3

This is how it looks when everything is connected:

nextcloud-4

The last task is to close the box with the cover and you’re done.

How to install the Nextcloud Box

Well, just connect it to your network, provide electricity and wait 8-10 minutes. Open your browser and point it to http://ubuntu-standard.local. The start screen asks you to set a user name and a password for the administrator account. Enter a name and a secure password, then click the finish setup button.

nextcloud-5

Next steps

Depending on your goal and situation you can, for example, connect the Nextcloud Box to your clients. This is an example of the OS X Nextcloud client:

nextcloud-6

It works on iOS and Android devices too.

Allow access from outside your home and become a cloud hoster

If you have a fairly fast internet connection at home and a way to configure your router, you can enable access to your Nextcloud Box from outside your home. Here in France, for example, it’s possible to get your own static IP address for free (free.fr). As this IP address is static, it’s possible to connect it to a domain name (mydomainname.tld) and you suddenly become a cloud hosting entity.

Why is the Nextcloud Box important?

It’s another attempt to help people understand how easy it is to store your own data at home, or in your own company, in an environment that is as open as possible. Even the plan of the box is freely available, so you can start your own project!

Article by: Hagen Cocoate. Original source here.

27 September, 2016 02:49PM


Ubuntu Insights: Snap interview with Rocket.Chat


Snap packaging has been gaining a lot of momentum recently, from desktop apps to cloud services, and everything in between. To learn more about the people and projects that are building snaps, Michael Hall, Ubuntu Community Manager, reached out to Aaron Ogle, a Core Contributor to the open source project Rocket.Chat.

How did you find out about snaps?

At Rocket.Chat we’ve always been big users of Ubuntu. Our recommendation for our users has always been to use Ubuntu 14.04 LTS to install Rocket.Chat. Recently, our users began to request an Ubuntu 16.04 LTS guide and, while doing our research, we came across snaps. We were excited about the superior experience we could offer our users so we couldn’t help but give it a try.

What was the appeal of snaps that made you decide to invest in them?

There were several reasons to invest in snaps. Let me list a few:

  • Security and Bullet-Proof Isolation: It was obvious to us this was a key design decision of snaps. We depend on this security and isolation so we can coexist with other installed apps on a host.  We’ll also rely on it in the future as we scale across a cluster of hosts with Juju.
  • Auto Updates: We can easily get updates into the hands of our users.
  • Transactional Updates: Our users can easily roll back to a known good version which saves us a lot of support headaches.
  • Ability to Deliver Full Stack: We are able to bundle up Mongodb along with our server, completely eliminating a setup step.
  • Deployment Time: In our tests, we were clocking under a minute from command to full running Rocket.Chat server. Very impressive!

Most of all, the wide availability and use of Ubuntu is incredibly valuable for us. To take that process and make it even easier for our users was an opportunity we couldn’t resist!

How does building snaps compare to other forms of packaging you produce? How easy was it to integrate with your existing infrastructure and process?

Rocket.Chat supports over 30 deployment platforms across many different on-premises and cloud solutions. From distributing just a tar ball, through pre-fab virtualized environments, to one-click deploys,  our goal is to make it as easy as possible for our users to get their own Rocket.Chat server up and running.

What we really loved about snaps is it combined the best parts of a lot of these distribution methods, while avoiding many of their shortcomings. Getting Rocket.Chat snapped was as easy as defining a simple yaml file and adding into our CI. This is definitely one of the easiest distribution methods we have ever used.

Do you currently use the snap store as a way of distributing your software? How do you see the store changing the way users find and install your software?

Absolutely! We have our CI set up to automatically publish new releases to the snap store. We really like it as a distribution method, as our users are able to install with a simple command: `snap install rocketchat-server`, and it quickly downloads, installs, and later on actually auto-updates. We think this is amazing!

What release channels (edge/beta/candidate/stable) in the store are you using or plan to use?

Right now we are just making use of the Stable channel for releases and the Edge channel for our develop builds. That said, we can see us making use of the Beta and Candidate channels in the near future.

Is there any other software you develop that might also become available as a Snap in the future?

We also are releasing snaps for our desktop client. If we release anything else in the future, we will definitely look at making it available as a snap!

Besides your own software, what applications or services do you use that you would like to see provided as snaps?

We say snap all the things! But, I think several of us at Rocket.Chat would love to see Visual Studio Code in a snap! We were pleasantly surprised at the number of snaps already out there and it seems to be growing at a great pace. I personally saw that someone was working on getting Google Play Music Desktop Player (Unofficial) snap. I will definitely be using that one and on the lookout for others I can start using 🙂

About Aaron Ogle at Rocket.Chat


Aaron Ogle is a Core Contributor to the open source project Rocket.Chat, a long-time Ubuntu fan (since Ubuntu 4.10!), and a technology enthusiast.

Rocket.Chat is an open source group chat server for offices and families that you can deploy in seconds. Featuring a beautiful Slack-like user experience, it has rich features such as file sharing, video conferencing, geolocation, bots and much more. Rocket.Chat supports web, mobile, and desktop clients. Like Ubuntu, Rocket.Chat is created by an open source community of over 200 contributors and deployed by tens of thousands of enthusiastic global community members. The feature list and documentation are available at https://rocket.chat/, MIT-licensed source code at https://github.com/RocketChat/Rocket.Chat, and a globally active 24x7 community support server at https://demo.rocket.chat

27 September, 2016 11:48AM

September 26, 2016

The Fridge: (Re)Welcome New Membership Board Members!

The Community Council apologizes for the long wait to decide on which nominees will be included in this two (2) year round, but here they are:

Please help us to (re)welcome our Members for the Membership Board!

Originally posted to the ubuntu-news-team mailing list on Mon Sep 26 15:53:02 UTC 2016 by Svetlana Belkin

26 September, 2016 11:56PM

hackergotchi for Ubuntu

Ubuntu

Ubuntu Weekly Newsletter Issue 482

Welcome to the Ubuntu Weekly Newsletter. This is issue #482 for the weeks of September 12 – 25, 2016, and the full version is available here.

In this issue we cover:

This issue of The Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Chris Sirrs
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution 3.0 License BY SA Creative Commons License

26 September, 2016 11:19PM by lyz

hackergotchi for Ubuntu developers

Ubuntu developers

Kees Cook: security things in Linux v4.3

When I gave my State of the Kernel Self-Protection Project presentation at the 2016 Linux Security Summit, I included some slides covering some quick bullet points on things I found of interest in recent Linux kernel releases. Since there wasn’t a lot of time to talk about them all, I figured I’d make some short blog posts here about the stuff I was paying attention to, along with links to more information. This certainly isn’t everything security-related or generally of interest, but they’re the things I thought needed to be pointed out. If there’s something security-related you think I should cover from v4.3, please mention it in the comments. I’m sure I haven’t caught everything. :)

A note on timing and context: the momentum for starting the Kernel Self Protection Project got rolling well before it was officially announced on November 5th last year. To that end, I included stuff from v4.3 (which was developed in the months leading up to November) under the umbrella of the project, since the goals of KSPP aren’t unique to the project nor must the goals be met by people that are explicitly participating in it. Additionally, not everything I think worth mentioning here technically falls under the “kernel self-protection” ideal anyway — some things are just really interesting userspace-facing features.

So, to that end, here are things I found interesting in v4.3:

CONFIG_CPU_SW_DOMAIN_PAN

Russell King implemented this feature for ARM which provides emulated segregation of user-space memory when running in kernel mode, by using the ARM Domain access control feature. This is similar to a combination of Privileged eXecute Never (PXN, in later ARMv7 CPUs) and Privileged Access Never (PAN, coming in future ARMv8.1 CPUs): the kernel cannot execute user-space memory, and cannot read/write user-space memory unless it was explicitly prepared to do so. This stops a huge set of common kernel exploitation methods, where either a malicious executable payload has been built in user-space memory and the kernel was redirected to run it, or where malicious data structures have been built in user-space memory and the kernel was tricked into dereferencing the memory, ultimately leading to a redirection of execution flow.

This raises the bar for attackers since they can no longer trivially build code or structures in user-space where they control the memory layout, locations, etc. Instead, an attacker must find areas in kernel memory that are writable (and in the case of code, executable), where they can discover the location as well. For an attacker, there are vastly fewer places where this is possible in kernel memory as opposed to user-space memory. And as we continue to reduce the attack surface of the kernel, these opportunities will continue to shrink.

While hardware support for this kind of segregation exists in s390 (natively separate memory spaces), ARM (PXN and PAN as mentioned above), and very recent x86 (SMEP since Ivy-Bridge, SMAP since Skylake), ARM is the first upstream architecture to provide this emulation for existing hardware. Everyone running ARMv7 CPUs with this kernel feature enabled suddenly gains the protection. Similar emulation protections (PAX_MEMORY_UDEREF) have been available in PaX/Grsecurity for a while, and I’m delighted to see a form of this land in upstream finally.

To test this kernel protection, the ACCESS_USERSPACE and EXEC_USERSPACE triggers for lkdtm have existed since Linux v3.13, when they were introduced in anticipation of the x86 SMEP and SMAP features.
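
For example, in a throwaway VM with lkdtm built in and debugfs mounted, one of them can be fired like this (it deliberately provokes a kernel oops, so do not run it on a machine you care about):

echo EXEC_USERSPACE | sudo tee /sys/kernel/debug/provoke-crash/DIRECT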

Ambient Capabilities

Andy Lutomirski (with Christoph Lameter and Serge Hallyn) implemented a way for processes to pass capabilities across exec() in a sensible manner. Until Ambient Capabilities, any capabilities available to a process would only be passed to a child process if the new executable was correctly marked with filesystem capability bits. This turns out to be a real headache for anyone trying to build an even marginally complex “least privilege” execution environment. The case that Chrome OS ran into was having a network service daemon responsible for calling out to helper tools that would perform various networking operations. Keeping the daemon not running as root and retaining the needed capabilities in children required conflicting or crazy filesystem capabilities organized across all the binaries in the expected tree of privileged processes. (For example you may need to set filesystem capabilities on bash!) By being able to explicitly pass capabilities at runtime (instead of based on filesystem markings), this becomes much easier.
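
As a rough sketch of the runtime API (my example, not from the kernel commit; the PR_CAP_AMBIENT constants appeared with v4.3, and the capability must already be in the process’s permitted set):

/* Raise CAP_NET_BIND_SERVICE into the ambient set so an unprivileged
 * helper keeps it across exec(). Build with -lcap. */
#include <stdio.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <sys/capability.h>

#ifndef PR_CAP_AMBIENT
#define PR_CAP_AMBIENT       47
#define PR_CAP_AMBIENT_RAISE 2
#endif

int main(void)
{
    cap_value_t cap = CAP_NET_BIND_SERVICE;
    cap_t caps = cap_get_proc();

    /* Ambient raise requires the capability in the inheritable set. */
    cap_set_flag(caps, CAP_INHERITABLE, 1, &cap, CAP_SET);
    if (cap_set_proc(caps) < 0) {
        perror("cap_set_proc");
        return 1;
    }
    cap_free(caps);

    if (prctl(PR_CAP_AMBIENT, PR_CAP_AMBIENT_RAISE, cap, 0, 0) < 0) {
        perror("PR_CAP_AMBIENT_RAISE");
        return 1;
    }

    /* "./helper" is a placeholder; it inherits the capability without
     * needing any filesystem capability bits of its own. */
    execl("./helper", "helper", (char *)NULL);
    perror("execl");
    return 1;
}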

For more details, the commit message is well-written, almost twice as long as the code changes, and contains a test case. If that isn’t enough, there is a self-test available in tools/testing/selftests/capabilities/ too.

PowerPC and Tile support for seccomp filter

Michael Ellerman added support for seccomp to PowerPC, and Chris Metcalf added support to Tile. As the seccomp maintainer, I get excited when an architecture adds support, so here we are with two. Also included were updates to the seccomp self-tests (in tools/testing/selftests/seccomp), to help make sure everything continues working correctly.

That’s it for v4.3. If I missed stuff you found interesting, please let me know! I’m going to try to get more per-version posts out in time to catch up to v4.8, which appears to be tentatively scheduled for release this coming weekend.

© 2016, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

26 September, 2016 10:54PM

Dustin Kirkland: Container Camp London: Streamlining HPC Workloads with Containers


A couple of weeks ago, I delivered a talk at Container Camp UK 2016. It was a brilliant event, on a beautiful stage at Picturehouse Central in Piccadilly Circus in London.

You're welcome to view the slides or download them as a PDF, or watch my talk below.

And for the techies who want to skip the slide fluff and get their hands dirty, set up your OpenStack and LXD and start streamlining your HPC workloads using this guide.




Enjoy,
:-Dustin

26 September, 2016 08:13PM by Dustin Kirkland (noreply@blogger.com)

Ubuntu Insights: Canonical joins Linaro and co-founds LITE project


Canonical joins Linaro as one of the founding members of the LITE project, fostering collaboration and interoperability in the IoT and embedded space.

“Linaro, the collaborative engineering organization developing open source software for the ARM® architecture, today announced the launch of the Linaro IoT and Embedded (LITE) Segment Group. Working in collaboration with industry leaders, LITE will deliver end-to-end open source reference software for secure connected products, ranging from sensors and connected controllers to smart devices and gateways, for the industrial and consumer markets,” states the press release issued by Linaro today.

This latest initiative by Linaro is aimed at facilitating the creation of more interoperable solutions in the ARM embedded space. The need for LITE emerges from the experience of many in IoT who struggle with the variety of options at all levels of the software stack, from sensors to gateways, from OS to middleware.

Canonical is a long-time supporter of Linaro initiatives. For example, 96Boards was created to promote the creation of more standard ARM 64-bit boards, and Canonical has partnered with a number of ARM vendors building 96Boards (Qualcomm, LeMaker, ucRobotics). But this is the first time that Canonical has joined Linaro and one of its projects as a member.

Canonical’s motivation reflects our commitment to creating the conditions for faster hardware and software development in IoT through the use of interoperable open source solutions.

Snap, the universal packaging format for Linux, was launched in May as part of Ubuntu 16.04, letting developers take the same piece of software and quickly deploy it across servers or edge gateways. A good example of this is Rocket.Chat, a server-based solution that used to take network administrators 3 hours to deploy and can now be deployed by any user on a home Raspberry Pi in just a few minutes.

In June, it was announced that snaps were available across a series of Linux distros, from Yocto to openWRT. This gives developers and device makers a wide choice of OS and hardware, and creates interoperability at the OS level.

Finally, Ubuntu Core, the version of Ubuntu built for IoT and based on an all-snap architecture, brings the interoperability, simplicity and manageability of snaps to give anyone building an IoT device a faster route to market. The recent launch of the Nextcloud Box is a great example here. Nextcloud used their existing server software packaged as a snap, standard hardware (Raspberry Pi & Western Digital SSD), Ubuntu Core and a standard kernel for the Raspberry Pi to build their solution. By using standard and interoperable components they were able to go from prototype to a commercial device in just a few months.

Canonical looks forward to joining Linaro and LITE; we’ll be at Linaro Connect all week if you want to meet!

26 September, 2016 06:33PM

Rhonda D'Vine: LP

I guess you know by now that I simply love music. It is powerful, it can move you, change your mood in a lot of directions, make you wanna move your body to it, even unknowingly have this happen, and remind you of situations you want to keep in mind. The singer I present to you was introduced to me by a dear friend with the following words: So this hasn't happened to me in a looooong time: I hear a voice and can't stop crying. I can't decide which song I should send to you thus I send three of which the last one let me think of you.

And I have to agree, that voice is really great. Thanks a lot for sharing LP with me, dear! And given that I got sent three songs and I am not good at holding excitement back, I want to share it with you, so here are the songs:

  • Lost On You: Her voice is really great in this one.
  • Halo: Have to agree that this is really a great cover.
  • Someday: When I hear that song and think about that it reminds my friend of myself I'm close to tears, too ...

Like always, enjoy!

/music | permanent link | Comments: 0 | Flattr this

26 September, 2016 10:00AM

Eric Hammond: Deleting a Route 53 Hosted Zone And All DNS Records Using aws-cli

fast, easy, and slightly dangerous recursive deletion of a domain’s DNS

Amazon Route 53 currently charges $0.50/month per hosted zone for your first 25 domains, and $0.10/month for additional hosted zones, even if they are not getting any DNS requests. I recently stopped using Route 53 to serve DNS for 25 domains and wanted to save on the $150/year these were costing.

Amazon’s instructions for using the Route 53 Console to delete Record Sets and a Hosted Zone make it look simple. I started in the Route 53 Console clicking into a hosted zone, selecting each DNS record set (but not the NS or SOA ones), clicking delete, clicking confirm, going back a level, selecting the next domain, and so on. This got old quickly.

Being lazy, I decided to spend a lot more effort figuring out how to automate this process with the aws-cli, and pass the savings on to you.

Steps with aws-cli

Let’s start by putting the hosted zone domain name into an environment variable. Do not skip this step! Do make sure you have the right name! If this is not correct, you may end up wiping out DNS for a domain that you wanted to keep.

domain_to_delete=example.com

Install the jq json parsing command line tool. I couldn’t quite get the normal aws-cli --query option to get me the output format I wanted.

sudo apt-get install jq

Look up the hosted zone id for the domain. This assumes that you only have one hosted zone for the domain. (It is possible to have multiple, in which case I recommend using the Route 53 console to make sure you delete the right one.)

hosted_zone_id=$(
  aws route53 list-hosted-zones \
    --output text \
    --query 'HostedZones[?Name==`'$domain_to_delete'.`].Id'
)
echo hosted_zone_id=$hosted_zone_id

Use list-resource-record-sets to find all of the current DNS entries in the hosted zone, then delete each one with change-resource-record-sets.

aws route53 list-resource-record-sets \
  --hosted-zone-id "$hosted_zone_id" |
jq -c '.ResourceRecordSets[]' |
while read -r resourcerecordset; do
  # Extract the record type; quoting keeps the test safe for any value.
  type=$(jq -r '.Type' <<<"$resourcerecordset")
  if [ "$type" != "NS" ] && [ "$type" != "SOA" ]; then
    aws route53 change-resource-record-sets \
      --hosted-zone-id "$hosted_zone_id" \
      --change-batch '{"Changes":[{"Action":"DELETE","ResourceRecordSet":
          '"$resourcerecordset"'
        }]}' \
      --output text --query 'ChangeInfo.Id'
  fi
done

Finally, delete the hosted zone itself:

aws route53 delete-hosted-zone \
  --id $hosted_zone_id \
  --output text --query 'ChangeInfo.Id'

As written, the above commands output the change ids. You can monitor the background progress using a command like:

change_id=...
aws route53 wait resource-record-sets-changed \
  --id "$change_id"

GitHub repo

To make it easy to automate the destruction of your critical DNS resources, I’ve wrapped the above commands into a command line tool and tossed it into a GitHub repo here:

https://github.com/alestic/aws-route53-wipe-hosted-zone

You are welcome to use as is, fork, add protections, rewrite with Boto3, and generally knock yourself out.

Alternative: CloudFormation

A colleague pointed out that a better way to manage all of this (in many situations) would be to simply toss my DNS records into a CloudFormation template for each domain. Benefits include:

  • Easy to store whole DNS definition in revision control with history tracking.

  • Single command creation of the hosted zone and all record sets.

  • Single command updating of all changed record sets, no matter what has changed since the last update.

  • Single command deletion of the hosted zone and all record sets (my current challenge).
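
For a static domain, the whole zone fits in one small template. A minimal sketch (the zone name and record values are illustrative placeholders):

Resources:
  HostedZone:
    Type: AWS::Route53::HostedZone
    Properties:
      Name: example.com.
  WwwRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref HostedZone
      Name: www.example.com.
      Type: A
      TTL: '300'
      ResourceRecords:
        - 192.0.2.10

Creating the stack creates the hosted zone and all record sets together, and a single later "aws cloudformation delete-stack" removes them all – exactly the recursive deletion this article scripts by hand.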

This doesn’t work as well for hosted zones where different records are added, updated, and deleted by automated processes (e.g., instance startup), but for simple, static domain DNS, it sounds ideal.

How do you create, update, and delete DNS in Route 53 for your domains?

Original article and comments: https://alestic.com/2016/09/aws-route53-wipe-hosted-zone/

26 September, 2016 09:30AM

Jono Bacon: Looking for a data.world Director of Community

data.world

Some time ago I signed an Austin-based data company called data.world as a client. The team are building an incredible platform where the community can store data, collaborate around the shape/content of that data, and build an extensive open data commons.

As I wrote about previously I believe data.world is going to play an important role in opening up the potential for finding discoveries in disparate data sets and helping people innovate faster.

I have been working with the team to help shape their community strategy and they are now ready to hire a capable Director of Community to start executing these different pieces. The role description is presented below. The data.world team are an incredible bunch with some strong heritage in the leadership of Brett Hurt, Matt Laessig, Jon Loyens, Bryon Jacob, and others.

As such, I am looking to find the team some strong candidates. If I know you, I would invite you to confidentially share your interest in this role by filling my form here. This way I can get a good sense of who is interested and also recommend people I personally know and can vouch for. I will then reach out to those of you who this seems to be a good potential fit for and play a supporting role in brokering the conversation.

This role will require candidates to either be based in Austin or be willing to relocate to Austin. This is a great opportunity, and feel free to get in touch with me if you have any questions.

Director of Community Role Description

data.world is building a world-class data commons, management, and collaboration platform. We believe that data.world is the very best place to build great data communities that can make data science fun, enjoyable, and impactful. We want to ensure we can provide the very best support, guidance, and engagement to help these communities be successful. This will involve engagement in workflow, product, outreach, events, and more.

As Director of Community, you will lead, coordinate, and manage our global community development initiatives. You will use your community leadership experience to shape our community experience and infrastructure, feed into the product roadmap with community needs and requirements, build growth and engagement, and more. You will help connect, celebrate, and amplify the existing communities on data.world and assist new ones as they form. You will help our users to think bigger, be the best they can be, and succeed more. You’ll work across teams within data.world to promote the community’s voice within our different internal teams. You should be a content expert, superb communicator, and humble facilitator.

Typical activities for this role include:

  • Building and executing programs that grow communities on data.world and empower them to do great work.
  • Taking a structured approach to community roles, on-boarding, and working with our teams to ensure community members have a simple and powerful experience.
  • Developing content that promotes the longevity and sustainability of fast growing, organically built data communities with high impact outcomes.
  • Building relationships within the industry and community, acting as data.world's representative in helping them engage, succeed, and deliver great work and collaboration.
  • Working with product, user operations, and marketing teams on product roadmap for community features and needs.
  • Being a data.world representative and spokesperson at conferences, events, and within the media and external data communities.
  • Always challenging our assumptions, our culture, and being singularly focused on delivering the very best data community platform in the world.

Experience with the following is required:

  • 5-7 years of experience participating in and building communities, preferably data-based or technical in nature.
  • Experience with working in open source, open data, and other online communities.
  • Public speaking, blogging, and content development.
  • Facilitating complex and sensitive community management situations with humility, judgment, tact, and humor.
  • Integrating company brand, voice, and messaging into developed content.
  • Working independently and autonomously, managing multiple competing priorities.

Experience with any of the following preferred:

  • Data science experience and expertise.
  • 3-5 years of experience leading community management programs within a software or Internet-based company.
  • Media training and experience in communicating with journalists, bloggers, and other media on a range of technical topics.
  • Existing network from a diverse set of communities and social media platforms.
  • Software development capabilities and experience.

The post Looking for a data.world Director of Community appeared first on Jono Bacon.

26 September, 2016 04:16AM

September 25, 2016

Julian Andres Klode: Introducing TrieHash, an order-preserving minimal perfect hash function generator for C(++)

Abstract

I introduce TrieHash, an algorithm for constructing perfect hash functions from tries. The generated hash functions are pure C code, minimal, order-preserving, and outperform existing alternatives. Together with the generated header files, they can also be used as a generic string-to-enumeration mapper (enums are created by the tool).

Introduction

APT (and dpkg) spend a lot of time parsing various files, especially Packages files. APT currently uses a function called AlphaHash, which hashes the last 8 bytes of a word in a case-insensitive manner, to hash fields in those files (dpkg just compares strings in an array of structs).

There is one obvious drawback to using a normal hash function: When we want to access the data in the hash table, we have to hash the key again, causing us to hash every accessed key at least twice. It turned out that this affects something like 5 to 10% of the cache generation performance.

Enter perfect hash functions: A perfect hash function matches a set of words to constant values without collisions. You can thus just use the index to index into your hash table directly, and do not have to hash again (if you generate the function at compile time and store key constants) or handle collision resolution.

As #debian-apt people know, I happened to play around a bit with tries this week before guillem suggested perfect hashing. Let me tell you one thing: my trie implementation was very naive and did not really improve things a lot…

Enter TrieHash

Now, how is this related to hashing? The answer is simple: I wrote a perfect hash function generator that is based on tries. You give it a list of words, it puts them in a trie, and generates C code out of it, using recursive switch statements (see code generation below). The function achieves competitive performance with other hash functions; it usually even outperforms them.

Given a dictionary, it generates an enumeration (a C enum or C++ enum class) of all words in the dictionary, with the values corresponding to the order in the dictionary (the order-preserving property), and a function mapping strings to members of that enumeration.

By default, the first word is considered to be 0 and each word increases a counter by one (that is, it generates a minimal hash function). You can tweak that however:

= 0
WordLabel ~ Word
OtherWord = 9

will return 0 for an unknown value, map “Word” to the enum member WordLabel, and map OtherWord to 9. That is, the input list functions like the body of a C enumeration. If no label is specified for a word, it will be generated from the word. For more details, see the documentation.

C code generation

switch(string[0] | 32) {
case 't':
    switch(string[1] | 32) {
    case 'a':
        switch(string[2] | 32) {
        case 'g':
            return Tag;
        }
    }
}
return Unknown;

Yes, really recursive switches – they directly represent the trie. Now, we did not do a completely straightforward translation; there are some optimisations to make the whole thing faster and easier to look at:

First of all, the 32 you see is used to make the check case-insensitive in case all cases of the switch body are alphabetical characters. If there are non-alphabetical characters, it will generate two cases per character, one uppercase and one lowercase (with one break between them). I did not know before that lowercase and uppercase ASCII characters differ by only one bit; thanks to the clang compiler for pointing that out in its generated assembler code!

Secondly, we insert breaks only between cases. Initially, each case ended with a return Unknown, but guillem (the dpkg developer) suggested it might be faster to let them fall through where possible. It turns out it was not faster on a good compiler, but it’s still more readable.

Finally, we build one trie per word length, and switch by the word length first. Like the 32 trick, this gives a huge improvement in performance.

Digging into the assembler code

The whole code translates to roughly 4 instructions per byte:

  1. A memory load,
  2. an or with 32
  3. a comparison, and
  4. a conditional jump.

(On x86, the case sensitive version actually only has a cmp-with-memory and a conditional jump).

Due to https://gcc.gnu.org/bugzilla/show_bug.cgi?id=77729 this may be one more instruction: on some architectures an unneeded zero-extend-byte instruction is inserted, which causes a 20% performance loss.

Performance evaluation

I ran the hash against all 82 words understood by APT in Packages and Sources files, 1,000,000 times for each word, and summed up the average run-times:

host      arch      Trie  TrieCase  GPerfCase  GPerf   DJB
plummer   ppc64el    540       601       1914   2000  1345
eller     mipsel    4728      5255      12018   7837  4087
asachi    arm64     1000      1603       4333   2401  1625
asachi    armhf     1230      1350       5593   5002  1784
barriere  amd64      689       950       3218   1982  1776
x230      amd64      465       504       1200    837   693

Suffice to say, GPerf does not really come close.

All hosts except the x230 are Debian porterboxes. The x230 is my laptop with a Core i5-3320M; barriere has an Opteron 23xx. I included the DJB hash function as an additional reference.

Source code

The generator is written in Perl, licensed under the MIT license and available from https://github.com/julian-klode/triehash – I initially prototyped it in Python, but guillem complained that this would add new build dependencies to dpkg, so I rewrote it in Perl.

Benchmark is available from https://github.com/julian-klode/hashbench

Usage

See the script for POD documentation.


Filed under: General

25 September, 2016 06:44PM

hackergotchi for HandyLinux

HandyLinux

The DFLinux project moves to beta1

Hello everyone,

A little over 3 weeks after the release of the DFLinux project's alpha2 versions, here is the beta version (pre-tested by Fred Bezies), now online, with a lot of work done on the FluxBox session.

What's new in this test version:

All versions:
  • removal of the default root account (well spotted, Starsheep)
  • the first user is now added to the sudo group by default
  • user configuration moved into packages
*light versions:
  • the *light versions now fit under the 700MB mark
  • post-installation set up to complete fluxbox
  • updated welcome message
  • fluxbox menu reworked (thanks thuban and Severian)
  • added the 'light' profile for the handymenu
  • pcmanfm replaced by thunar
  • lxterminal replaced by xfce4-terminal
  • volumeicon replaced by xfce4-mixer
  • added xfce4-panel and xfce4-appfinder
  • added the 'dflinux' fluxbox theme
  • added compton for shadow effects
*full versions:
  • added the whisker menu for the 'full' versions (thanks Caribou22)
  • added the notes plugin for the 'full' versions (thanks fibi)
  • added baobab (a disk usage viewer) for the 'full' versions

This gives us a Fluxbox session very close to the Xfce session shipped with the classic versions, letting beginners start learning on an ultra-light session with an old computer bought cheaply or, better yet, salvaged.

Want a look?




This beta1 version will stay online for a little over a month to allow as much testing as possible ... the RC and then the final release are planned for Christmas.
This period will also let me write the documentation specific to each ISO distributed by the DFLinux project, to complement the beginner's handbook (les cahiers du débutant) already included.

Bugs ...
  • one bug remains to be fixed in the HandyMenu profile of the DFLinux-light version: the launchers in the "files" tab work in the live session, but no longer once installed, because the folder paths are tied to "humain", the default user of the live session... my fault ... no clean solution yet, but it's on its way
  • occasional random display glitches in the window decorations in the light version... to be investigated
  • ...

All the links and information are on the main DFLinux project page of the Debian-Facile portal.
For feedback and suggestions, see the dedicated thread.

++
arp

HandyLinux - the Debian distribution without the headache...

25 September, 2016 12:36PM by arpinux

September 24, 2016

hackergotchi for SparkyLinux

SparkyLinux

Linux kernel 4.7.5

 

The latest stable version of the Linux kernel, 4.7.5, just landed in the Sparky “unstable” repository.

Make sure you have the Sparky “unstable” repository enabled:
http://sparkylinux.org/wiki/doku.php/repository
to upgrade or install the latest kernel.

Follow the Wiki page: http://sparkylinux.org/wiki/doku.php/linux_kernel to install the latest Sparky’s Linux kernel.

Then reboot your machine for the changes to take effect.

To quickly remove an older version of the Linux kernel, simply run the APTus -> Remove -> Uninstall Old Kernel script.
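
Put together, the upgrade routine looks roughly like this (a sketch; it assumes the “unstable” repository is already enabled and that a meta package tracks the latest kernel – see the Wiki pages above for the exact package names):

sudo apt-get update
sudo apt-get dist-upgrade    # pulls in the new 4.7.5 kernel packages
sudo reboot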

 

24 September, 2016 06:20PM by pavroo

hackergotchi for rescatux

rescatux

Rescatux 0.40 beta 11 released

Rescatux 0.40 beta 11 has been released.

Rescatux 0.40 beta 11 – Filesystem check shows its progress

Downloads:

The Rescatux 0.40b11 image is about 671 megabytes.

Some thoughts:

This release comes with one of the most exciting improvements in years: most of the options now show the progress of their subtasks while working. That means the end user will no longer wonder whether Rescatux is frozen when faced with an apparently idle screen.

I have been working on making all the source code available, but, as I have mentioned on the Debian Live mailing list, it is not easy.

Another big usability improvement is renaming the old ‘Not detected’ partition label to ‘Windows / Data / Other’ so that people don’t get confused. Many people think their Windows is not being detected when it is actually there.

I guess the partition selection entries could be improved in the future so that they show the actual Windows name (Thanks to os-prober algorithms) and also the filesystem type.

Many other small bug fixes have been made. You can check the git log for more details.

I have decided not to publish a stable release until I add one or two UEFI-related options. The one for ordering UEFI boot entries will be there for sure. The current roadmap was planned back in 2012, so it needs some updating. It’s true that current Rescatux supports booting on a UEFI system and that, if you had already installed grub-efi, it reinstalls it in its UEFI partition. But it’s also true that one or two more UEFI options are needed so that magazines stop advising people to use Boot Repair inside Rescatux.

More things I want to do before the stable release are:

  • Make clear that ‘Extra tools’ are not supported by renaming them to ‘Extra tools (Non supported)’
  • Internal documentation updated
  • A new Rescatux website (Optional)
  • A new Rescatux tutorial video or videos (Optional)
  • Add more AFD functionality

Let’s hope it happens sooner rather than later.

UEFI feedback is still welcome, especially if the Debian installation disks work for you but the Rescatux ones do not.

 

Roadmap for Rescatux 0.40 stable release:

You can check the complete changelog, with a link to each one of the issues, at the Rescatux 0.32-freeze roadmap, which I’ll be reusing for the Rescatux 0.40 stable release.

  • (Fixed in 0.40b5) [#2192] UEFI boot support
  • (Fixed in 0.40b2) [#1323] GPT support
  • (Fixed in 0.40b11) [#1364] Review Copyright notice
  • (Fixed in: 0.32b2) [#2188] install-mbr : Windows 7 seems not to be fixed with it
  • (Fixed in: 0.32b2) [#2190] debian-live. Include cpu detection and loopback cfg patches
  • (Fixed in: 0.40b8) [#2191] Change Keyboard layout
  • (Fixed in: 0.32b2) [#2193] bootinfoscript: Use it as a package
  • (Fixed in: 0.32b2) [#2199] Btrfs support
  • (Closed in 0.40b1) [#2205] Handle different default sh script
  • (Fixed in 0.40b2) [#2216] Verify separated /usr support
  • (Fixed in: 0.32b2) [#2217] chown root root on sudoers
  • [#2220] Make sure all the source code is available
  • (Fixed in: 0.32b2) [#2221] Detect SAM file algorithm fails with directories which have spaces on them
  • (Fixed in: 0.32b2) [#2227] Use chntpw 1.0-1 from Jessie
  • (Fixed in 0.40b1) [#2231] SElinux support on chroot options
  • (Checked in 0.40b11) [#2233] Disable USB automount
  • (Fixed in 0.40b9) [#2236] chntpw based options need to be rewritten for reusing code
  • [#2239] Update doc: Put Rescatux into a media for Isolinux based cd. http://www.supergrubdisk.org/wizard-step-put-rescatux-into-a-media/ assumes the image is based on Super Grub2 Disk and not on Isolinux; the step about extracting an iso inside an iso would no longer be needed.
  • (Fixed in: 0.32b2) [#2259] Update bootinfoscript to the latest GIT version
  • (Fixed in: 0.40b9) [#2264] chntpw – Save prior registry files
  • (Fixed in: 0.40b9) [#2234] New option: Easy Grub fix
  • (Fixed in: 0.40b9) [#2235] New option: Easy Windows Admin

Improved bugs (0.40b11):

  • (Improved in 0.40b11) Many source code build improvements
  • (Improved in 0.40b11) Now most options show their progress while running
  • (Improved in 0.40b11) Added a reference to the source code’s README file in the ‘About Rescapp’ option
  • (Improved in 0.40b11) The ‘Not detected’ string was renamed to ‘Windows / Data / Other’ because that’s what it usually turns out to be with Windows OSes

Fixed bugs (0.40b11):

  • (Fixed in 0.40b11) [#1364] Review Copyright notice
  • (Checked in 0.40b11) [#2233] Disable USB automount
  • (Fixed in 0.40b11) Wineasy had its messages fixed (Promote and Unlock were swapped)
  • (Fixed in 0.40b11) Share log function now drops usage of cat to avoid utf8 / ascii problems.
  • (Fixed in 0.40b11) Sanitize ‘Not detected’ and ‘Cannot mount’ messages

Fixed bugs (0.40b9):

  • (Fixed in 0.40b9) [#2236] chntpw based options need to be rewritten for reusing code
  • (Fixed in: 0.40b9) [#2264] chntpw – Save prior registry files
  • (Fixed in: 0.40b9) [#2234] New option: Easy Grub fix
  • (Fixed in: 0.40b9) [#2235] New option: Easy Windows Admin

Fixed bugs (0.40b8):

  • (Fixed in 0.40b8) [#2191] Change Keyboard layout

Improved bugs (0.40b7):

  • (Improved in 0.40b7) [#2192] UEFI boot support (Yes, again)

Improved bugs (0.40b6):

  • (Improved in 0.40b6) [#2192] UEFI boot support

Fixed bugs (0.40b5):

  • (Fixed in 0.40b5) [#2192] UEFI boot support

Fixed bugs (0.40b2):

  • (Fixed in 0.40b2) [#1323] GPT support
  • (Fixed in 0.40b2) [#2216] Verify separated /usr support

Fixed bugs (0.40b1):

  • (Fixed in 0.40b1) [#2231] SElinux support on chroot options

Reopened bugs (0.40b1):

  • (Reopened in 0.40b1) [#2191] Change Keyboard layout

Fixed bugs (0.32b3):

  • (Fixed in 0.32b3) [#2191] Change Keyboard layout

Other fixed bugs (0.32b2):

  • Rescatux logo is not shown at boot
  • Boot entries are named “Live xxxx” instead of “Rescatux xxxx”

Fixed bugs (0.32b1):

  • Networking detection improved (fallback to network-manager-gnome)
  • Bottom bar did not have a shortcut to a file manager, as is common practice in modern desktops. Fixed when falling back to LXDE.
  • Double-clicking on directories on desktop opens Iceweasel (Firefox fork) instead of a file manager. Fixed when falling back to LXDE.

Improvements (0.32b1):

  • Super Grub2 Disk is no longer included. That makes it easier to put the ISO onto USB devices thanks to standard multiboot tools which support Debian Live cds.
  • Rescapp UI has been redesigned
    • Every option is at hand at the first screen.
    • Rescapp options can be scrolled. That makes it easier to add new options without worrying about the final design.
    • Run option screen buttons have been rearranged to make it easier to read.
  • RazorQT has been replaced by LXDE, which seems more mature. LXQT will have to wait.
  • WICD has been replaced by network-manager-gnome. That makes it easier to connect to wired and wireless networks.
  • It is no longer based on Debian Unstable (sid) branch.

Distro facts:

  • Packages versions for this release can be found at Rescatux 0.40b11 packages.
  • It’s based mainly on Debian Jessie (Stable). Some packages are from Debian Unstable (sid).

 

Don’t forget that you can use:

Help Rescatux project:

I think we can expect at most four months until the new stable Rescatux is ready. Help on these tasks is appreciated:

  • Making a youtube video for the new options.
  • Make sure documentation for the new options is right.
  • Make snapshots for new options documentation so that they don’t lack images.

If you want to help please contact us here:

Thank you and happy download!

Flattr this!

24 September, 2016 01:50PM by adrian15

hackergotchi for Blankon developers

Blankon developers

Sokhibi: Drawing a Chessboard with Inkscape

This post is actually a basic-level tutorial on using the Inkscape graphic design application; the steps are quite simple and easy for novice users to follow. The author wrote this tutorial in answer to a question from a member of the Inkscape Indonesia Facebook group. The gist of the question was: what is the easiest way to draw an image like the one below?
Here is a simple tutorial that readers can follow:

  • Open Inkscape on whatever operating system you use.
  • Change the page background colour to something other than white: click File => Document Properties or press Shift+Ctrl+D, click the Background colour swatch, and in the window that appears set a colour other than white (e.g. blue). The important thing here is to change the value in the A (alpha) row to something other than 0 (zero). Then close the page settings windows.



  • Create a square object with the Rectangle Tool, sized 64 x 64 px.
  • Save the design to a directory on your computer.



  • Open the Create Tiled Clones... dialog: click Edit => Create Tiled Clones, click the Reset button if you have used the Create Tiled Clones feature before, and untick the Use saved size and position of the tile box.


  • Enter 2 x 2 in the Rows, columns boxes, then click the Create button.
  • Delete the clone stacked on top of the original object.


  • Select all the objects (4 objects), then break the link between the clones and the original object: click Edit => Clone => Unlink Clone or press Shift+Alt+D.
  • Change the colour of the top-left and bottom-right objects to white.
  • Select all the objects and group them: click Object => Group or press Ctrl+G.


  • Back in the Create Tiled Clones dialog, enter 4 x 4 in the Rows, columns boxes, then click the Create button.
  • Delete the object stacked on top of the original.
  • Draw a frame around the chessboard to dress it up, or try drawing a few chess pieces (e.g. pawns) and place them on top of the board you have made.
  • Done.


That concludes this simple tutorial on how to easily draw a chessboard in Inkscape. The author wrote it in spare moments while waiting for a turn to present at a vocational school (SMK) that is in the process of getting to know Open Source applications.
Hopefully it is useful to all readers; see you in the next tutorial.

24 September, 2016 02:16AM by Istana Media (noreply@blogger.com)

September 23, 2016

hackergotchi for Ubuntu

Ubuntu

Ubuntu Online Summit: 15-16 November 2016

The next Ubuntu Online Summit is going to happen:

15-16 November 2016

At the event we are going to celebrate the 16.10 release and all the great new things in it, and talk about what’s coming up in Ubuntu 17.04.

The event will be added to summit.ubuntu.com shortly and you will all receive a reminder or two to add your sessions. 🙂

We’re looking forward to seeing you there.

Originally posted to the ubuntu-news-team mailing list on Thu Sep 22 09:46:35 UTC 2016 by Daniel Holbach

23 September, 2016 12:58AM by lyz

September 22, 2016

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: Rocket.chat, a new snap is landing on your Nextcloud box and beyond!

Ubuntu Core Store

Last week Nextcloud, Western Digital and Canonical launched the Nextcloud box, a simple box powered by a Raspberry Pi to easily build a private cloud storage service at home. With the Nextcloud box, you are already securely sharing files with your colleagues, friends, and/or family.
The team at Rocket.Chat has been asking: “Why not add a group chat so you can chat in real-time or leave messages for one another?”

We got in touch with the team to hear about their story!

Here’s Sing Li from Rocket.chat telling us about their journey to Ubuntu Core!

Introducing Rocket.chat

Rocket.Chat is a Slack-like server that you can run on your own servers or at home… and installing it with Ubuntu Core couldn’t be easier. Your private chat server will be up and running in 2 minutes.

If you’ve not heard of them, Rocket.Chat is one of the largest MIT-licensed open source group chat projects on GitHub, with over 200 global contributors, 9,000 stars, and 40,000 community servers deployed world-wide.

Rocket.Chat is an optimized Node.js application, bundled as a compressed tarball. To install it, a system admin would need to untar the optimized app and install its dependencies on the server. He or she would then need to configure the server via environment variables and set it up to survive restarts using a service manager. Combined with the typical routine of setting up and configuring a reverse proxy and getting DNS plus SSL set up correctly, this meant that system administrators spent on average 3 hours deploying the Rocket.Chat server before they could even see the first login page.

Being a mature production server, Rocket.Chat also has a very large configuration surface. Currently we have over a hundred configurable settings for all sorts of different use-cases. Getting the configuration just right for a use-case adds to the already long time required to deploy the server.

Making installation a breeze

We started to look for alternatives that could ubiquitously deliver our server product to the largest possible body of end users (deployers of chat servers) and provide a simple, pleasant initial out-of-box experience. If it could also help with updating software and expediting the installation of new versions, that would be a bonus. If it could further reduce the complexity of our build pipeline, it would be a sure winner.

When we first saw snaps, we knew they had everything we were looking for. The ubiquity of Ubuntu 16.04 LTS, Ubuntu 14.04 LTS, plus Ubuntu Core for IoT means we can deliver Rocket.Chat to an incredibly large audience of server deployers globally, all via one single package, while also catering for the increasing number of users asking us to run a Rocket.Chat server on a Raspberry Pi.

With snaps, we have only one bundle for all the supported Linux platforms. It is right here in the snap store. What’s really cool is that even the formerly manually-built ARM-based server, for Raspberry Pi and other IoT devices, can be part of the same snap. It enjoys the same simplicity of installation and the same transactional updates as the Intel 64-bit Linux platforms.

Our next step will be to distribute our desktop client with snaps. We have a vision that once we tag a release, within seconds a build process is kicked off through the CI and published to the snap store.

The asymmetric nature of the snap delivery pipeline has definite benefits. By paying the cost of heavy processing work up front during the build phase, snap deployment is lightning fast. On most modern Intel servers or VPSes, `snap install rocketchat.server` takes only about 30 seconds. The server is ready to handle hundreds of users immediately, available at URL `http://<server address>:3000`.

Consistent with the snap design philosophy, we have done substantial engineering to come up with a default set of ready-to-go configuration settings that supports the most common use cases of a simple chat server.

What this enables us to do is deliver a 30-second “instantly gratifying” experience to any system administrator who’d like to give the Rocket.Chat server a try – with no obligation whatsoever. Try it; if you like it, keep it and learn.

All of the complexity of configuring a full-fledged production server is folded away. The system administrator can learn more about configuration and customization at her/his own pace later.

Distributing updates in a flash

We work on new features on a develop branch (Github), and many of our community members test on this branch with us. Once the features are deemed stable, they are merged down to master (Github), where our official releases (Github) reside. Both branches are wired to continuous integration via Travis. Our Travis script optimizes, compresses, and bundles the distributions and then pushes them out to the various distribution channels that we support, many of which require further special sub-builds and repackaging that can take substantial time.

Some distributions even call for a manual build on every release. For example, we build manually for the ARM architecture (Github) to support a growing community of Raspberry Pi makers, hobbyists, and IoT enthusiasts on Github.

In addition to the server builds, we also have our desktop client (Github). Every new release requires a manual build on Windows, OS X, and Ubuntu. This process requires a member of our team to physically log in to Windows (x86 and x86_64), OS X, and Ubuntu (x86 and x86_64) machines to create our releases. Once a release was built, our users then had to go to our release page to manually download the Ubuntu build and install it.

That’s where snaps also bring a much simpler distribution mechanism and a better user experience. When we add features and push a new release, every one of our users will be enjoying it within a few hours, automatically. The version update is transactional, so they can always roll back to the previous version if they’d like.

In fact, we make use of the stable and edge channels, corresponding to our master and develop branches. Community members helping us test the latest software are on the edge channel, and often get multiple updates throughout the day as we fix bugs and add new features.
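
For example, following one channel or the other is a one-liner each (a sketch using the package name quoted above; check the snap store for the current name):

sudo snap install rocketchat.server                     # stable channel, tracks master
sudo snap refresh rocketchat.server --channel=edge      # follow the develop branch instead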

We look forward to the point where our desktop client is also available as a snap, so our users no longer have to wrestle with updating their desktop clients. Like the server, we will be able to deliver their updates quickly and seamlessly.

22 September, 2016 02:11PM

Ubuntu Podcast from the UK LoCo: S09E30 – Pie Till You Die - Ubuntu Podcast

It’s Episode Thirty of Season-Nine of the Ubuntu Podcast! Mark Johnson, Alan Pope and Martin Wimpress are here again.

Most of us are here, but one of us is busy!

In this week’s show:

  • We discuss the Raspberry Pi hitting 10 Million sales and the impact the it has had.

  • We share a Command Line Lurve:

    • set -o vi – Which makes bash use vi keybindings.
  • We also discuss solving an “Internet Mystery” #blamewindows

  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!

  • This week’s cover image is taken from Wikimedia.

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

22 September, 2016 02:00PM

Ubuntu Insights: Monitoring “big software” stacks with the Elastic Stack


Big Software is a new class of application. It’s composed of so many moving pieces that humans, by themselves, cannot design, deploy or operate them. OpenStack, Hadoop and container-based architectures are all examples of Big Software.

Gathering service metrics for complex big software stacks can be a chore. Especially when you need to warehouse, visualize, and share the metrics. It’s not just about measuring machine performance, but application performance as well.

You usually need to warehouse months of history of these metrics so you can spot trends. This enables you to make educated infrastructure decisions. That’s a powerful tool that’s usually offered on the provider level. But what if you run a hybrid cloud deployment? Not every cloud service is created equally.

The Elastic folks provide everything we need to make this possible. Additionally we can connect it to all sorts of other bundles in the charm store. We can now collect data on any cluster, store it, and visualize it. Let’s look at the pieces that are modeled in this bundle:

  • Elasticsearch – a distributed RESTful search engine
  • Beats – lightweight processes that gather metrics on nodes and ship them to Elasticsearch.
    • Filebeat ships logs
    • Topbeat ships “top-like” data
    • Packetbeat provides network protocol monitoring
    • Dockerbeat is a community beat that provides app container monitoring
  • Logstash – Performs data transformations, and routing to storage. As an example: Elasticsearch for instant visualization or HDFS for long term storage and analytics
  • Kibana – a web front end to visualize and analyze the gathered metrics

Getting Started

First, install and configure Juju. This will allow us to model our clusters easily and repeatedly. We used LXD as a backend in order to maximize our ability to explore the cluster on our desktops/laptops, though you can easily deploy these onto any major public cloud.
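
If you are starting from scratch on Ubuntu 16.04, that setup is roughly as follows (a sketch: it assumes Juju 2.0's `juju bootstrap <cloud> <controller>` syntax, and the controller name is arbitrary):

sudo apt install juju lxd
sudo lxd init                      # accept the defaults for a local test setup
juju bootstrap localhost lxd-test  # "localhost" is the built-in LXD cloud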

juju deploy ~containers/bundle/beats-core

This will give you a complete stack; it looks like this:

 

Note: if you wish to deploy the latest version of this bundle, the ~containers team is publishing a development channel release as new beats are added to the core bundle.

juju deploy ~containers/bundle/beats-core --channel=development

Once everything is deployed we need to deploy the dashboards:

juju action do kibana/0 deploy-dashboard dashboard=beats

Now do a `juju status kibana` to get the IP address of the unit it’s allocated to. Now we are… monitoring nothing. We need something to connect it to, and then introduce it to beats, with something like:

juju deploy myapplication
juju add-relation filebeat:beats-host myapplication
juju add-relation topbeat:beats-host myapplication

Let’s connect it to something interesting, like an Apache Spark deployment.

Integrating with other bundles

The standalone bundle is useful, but let’s use a more practical example. The Juju Ecosystem team has added Elastic Stack monitoring to a bunch of existing bundles. You don’t even have to manually connect the beats-core deployment to anything; you can just use an all-in-one bundle:

 

To deploy this bundle in the command line:

juju deploy apache-processing-spark

We also recommend running `juju status` periodically to check the progress of the deployment. You can also just open up a new terminal and keep `watch juju status` running in a window, so the status displays continuously while you continue on.

In this bundle, Filebeat and Topbeat act as subordinate charms, which means they are co-located on the spark units. This allows us to use these beats to track each spark node. And since we’re adding this relationship at the service level, any subsequent spark nodes you add will automatically include the beats monitors. The horizontal scaling of our cluster is now observable.

Let’s get the kibana dashboard ready:

juju set-config kibana dashboards="beats"

Notice that this time, we used charm config instead of an action to deploy the dashboard. This allows us to blanket-configure and deploy the kibana dashboards from a bundle, reducing the number of steps a user must take to get started.

After deployment you will need to do a `juju status kibana` to get the IP address of the unit. Then browse to it in your web browser. For those of you deploying on public clouds: you will also need to do `juju expose kibana` to open a port in the firewall to allow access. Remember, to make things accessible to others in our clouds, Juju expects you to explicitly tell it to do this. Out of the box we keep things closed.
When you get to the kibana GUI, you need to add `topbeat-*` or `filebeat-*` in the initial setup screen to set up Kibana’s index. Make sure you click the “Create” button for each one:

Create

Now we need to load the dashboards we’ve included for you: click on the “Dashboard” section, click the load icon, then select the “topbeat-dashboard”:

topbeat dashboard


Now you should see your shiny new dashboard:

Shiny dashboard

You now have an observable Spark cluster! Now that your graphs are up, let’s run something to make sure all the pieces are working. Let’s do a quick pagerank benchmark:

juju run-action spark/0 pagerank

This will output a UUID for your job for you to query for results:

juju show-action-output <uuid>

You can find more about available actions in the bundle’s documentation. Feel free to launch the action multiple times if you want to exercise the hardware, or run your own Spark jobs as you see fit.

By default the `apache-processing-spark` bundle gives us three nodes. I left those running for a while and then decided to grow the cluster. Let’s add 10 nodes:

juju add-unit -n10 spark

Your `juju status` should be lighting up now with the new units being fired up, and in Kibana itself we can see the rest of the cluster coming online in near-realtime:

near-realtime

Here you can see the CPU and memory consumption of the cluster. You can see the initial three nodes hanging around, and then as the other nodes come up, beats gets installed and they report in, automatically.

Why automatically? ‘apache-processing-spark’ technically is just some yaml. The magic is that we are not just deploying code; we’re modelling the relationships between these applications:

relations:
  - [spark, zookeeper]
  - ["kibana:rest", "elasticsearch:client"]
  - ["filebeat:elasticsearch", "elasticsearch:client"]
  - ["filebeat:beats-host", "spark:juju-info"]
  - ["topbeat:elasticsearch", "elasticsearch:client"]
  - ["topbeat:beats-host", "spark:juju-info"]

So when spark is added, you’re not just adding a new machine, you’re mutating the scale of the application within the model. But what does that mean?

A good way to think about it is just like simple elements and compounds. For example: Carbon Monoxide (CO) and Carbon Dioxide (CO2) are built from the exact same elements. But the combination of those elements allow for two different compounds with different characteristics. If you think of your infrastructure similarly, you’re not just designing the components that compose it. But the number of interactions that those components have with themselves and others.

So, automatically deploying filebeat and topbeat when spark is scaled just becomes an automatic part of the lifecycle. In this case, one new spark unit results in one new unit of filebeat, and one new unit of topbeat. Similarly, we can change this model as our requirements change.

This post-deployment mutability of infrastructure is one of Juju’s key unique features. You’re not just defining how applications talk and relate to each other. You’re also defining the ratios of units to their supporting applications like metrics collection.
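
For example, if we later decide the topbeat data is not worth collecting, removing it from every spark node is itself a single change to the model (a sketch using the relation names from the bundle yaml above):

juju remove-relation topbeat:beats-host spark:juju-info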

We’ve given you two basic elements of beats today: filebeat and topbeat. And like chemistry, more elements make for more interesting things. So now let’s show you how to take your metrics-gathering to another level.

Charming up your own custom beat

Elastic has engineered Beats to be expandable. They have invested effort in making it easy for you to write your own “beat”. As you can imagine, this can lead to an explosion of community-generated beats for measuring all sorts of things. We wanted to enable any enthusiast of the beats community to be able to hook into a Juju deployed workload.

As part of this work we’ve published a beats base layer. This will allow you to generate a charm for your custom beat (or any of the community-written beats, for that matter) and deploy it right into your model, just like we do with topbeat and filebeat. Let’s look at an example.

The Beats-base layer

Beats Base provides some helper Python code to handle the common patterns every beats unit will undergo, such as declaring to the model how it will talk to Logstash and/or Elasticsearch. This is always handled the same way for all beats, so we keep developers from having to repeat themselves.

Additionally the elasticbeats library handles:

  • Unit index creation
  • Template rendering in any context
  • Enabling the beat as a system service

So starting from beats-base, we have three concerns to address before we’ve delivered our beat:

  • How to install your beat (delivery)
  • How to configure your beat (template config)
  • Declare your beats index (payload delivery from installation step)

Let’s start with Packetbeat as an example. Packetbeat is an open source project that is designed to provide real-time analytics for web, database, and other network protocols.

charm create packetbeat

Every charm starts with a layer.yaml:

includes:
  - beats-base
  - apt
repository: http://github.com/juju-solutions/layer-packetbeat

Let’s add a little bit of metadata.yaml:

name: packetbeat
summary: Deploys packetbeat
maintainer: Charles Butler 
description: |
  data shipper that integrates with Elasticsearch to provide
  real-time analytics for web, database, and other
  network protocols
series:
  - trusty
tags:
  - monitoring
  - analytics
  - networking

With those meta files in place we’re ready to write our reactive code.

reactive/packetbeat.py

For delivery of packetbeat, Elastic has provided a deb repository for the official beats. This makes delivery a bit simpler via the apt layer. The consuming code is very simple:

from charms.reactive import when_not
from charmhelpers.core.hookenv import status_set

import charms.apt


@when_not('apt.installed.packetbeat')
def install_packetbeat():
    # Queue the deb for installation; the apt layer sets the
    # 'apt.installed.packetbeat' state once it is on disk.
    status_set('maintenance', 'Installing packetbeat')
    charms.apt.queue_install(['packetbeat'])

This completes our need to deliver the application. The apt-layer will handle all the usual software delivery things for us like installing and configuring an apt repository, etc. Since this layer is reused in charms all across the community, we merely reuse it here.

The next step is modeling how we react to our data-sources being connected. This typically requires rendering a yaml file to configure the beat, starting the beat daemon, and reacting to the beats-base beat.render state.

In order to do this we’ll be adding:

  • Configuration options to our charm
  • A Jinja template to render the yaml configuration
  • Reactive code to handle the state change and events

The configuration for packetbeat comes in the form of declaring protocol and port. This makes attaching packetbeat to anything transmitting data on the wire simple to model with configuration. We’ll provide some sane defaults, and allow the admin to configure the device to listen on.

config.yaml

options:
  device:
    type: string
    default: any
    description: Device to listen on, eg eth0
  protocols:
    type: string
    description: |
      the ports on which Packetbeat can find each protocol. space
      separated protocol:port format.
    default: "http:80 http:8080 dns:53 mysql:3306 pgsql:5432 redis:6379 thrift:9090 mongodb:27017 memcached:11211"

templates/packetbeat.yml

# This file is controlled by Juju. Hand edits will not persist!
interfaces:
  device: {{ device }}
protocols:
  {% for protocol in protocols -%}
    {{ protocol }}:
      ports: {{ protocols[protocol] }}
  {% endfor %}
{% if elasticsearch -%}
output:
  elasticsearch:
    hosts: {{ elasticsearch }}
{% endif -%}
{% if principal_unit %}
shipper:
  name: {{ principal_unit }}
{% endif %}


reactive/packetbeat.py

from charms.reactive import when, when_any, remove_state
from charmhelpers.core.host import service_restart
from charmhelpers.core.hookenv import status_set
from elasticbeats import render_without_context


@when('beat.render')
@when_any('elasticsearch.available', 'logstash.available')
def render_packetbeat_template():
    # Re-render the config and bounce the daemon whenever a data
    # sink (Elasticsearch or Logstash) asks for it.
    render_without_context('packetbeat.yml', '/etc/packetbeat/packetbeat.yml')
    remove_state('beat.render')
    service_restart('packetbeat')
    status_set('active', 'Packetbeat ready')

With all these pieces of the charm plugged in, run a `charm build` in your layer directory and you’re ready to deploy the packetbeat charm.

juju deploy cs:bundles/beats-core
juju deploy cs:trusty/consul
juju deploy ./builds/packetbeat

juju add-relation packetbeat elasticsearch
juju add-relation packetbeat consul

Consul is a great test: thanks to its UI, we can attach a single beat and monitor both DNS and web traffic.

juju set-config packetbeat protocols="dns:53 http:8500"

single beat and monitor DNS

Load up the kibana dashboard, and look under the “Discover” tab. There will be a packetbeat index, and data aggregating underneath it. Units requesting cluster DNS will start to pile on as well.

To test both of these metrics, browse around the Consul UI on port 8500. Additionally, you can ssh into a unit and run dig against the Consul DNS server to see DNS metrics populate.
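
Something along these lines works (a sketch; Consul's DNS interface listens on port 8600 by default, so adjust the port and query to match your deployment):

juju ssh consul/0
# from inside the unit, query the local Consul DNS interface
dig @localhost -p 8600 consul.service.consul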

Populating the Packetbeat dashboard from here is a game of painting with data by the numbers.

Conclusion

Observability is a great feature to have in your deployments, whether it’s a brand new 12-factor application or the simplest of MVC apps. Being able to see inside the box is always a good capability for modern infrastructure to have.

This is why we’re excited about the Elastic stack! We can plug this into just about anything and immediately start gathering data. We’re looking forward to seeing how people bring in new beats to connect other metrics to existing bundles.

We’ve included this bundle in our Swarm, Kubernetes and big data bundles out of the box. I encourage everyone who is publishing bundles in the charm store to consider plugging in this bundle for production-grade observability.

22 September, 2016 06:18AM

September 21, 2016

hackergotchi for VyOS

VyOS

VyOS virtual meeting notes - 14 September 2016

We hosted our first VyOS virtual meeting here in September and invited both developers and enthusiasts to attend. The meeting was held on September 14th at 18:00 UTC and all in all we had about 11 participants join. Yuriy Andamsov (syncer) brought this idea of a virtual meeting to fruition, thank you Yuriy!

Meeting summary:

VyOS 1.1.8 release
We discussed the general question of whether this should be a maintenance-only release or whether new features should be included. The community has readied a few new features which could easily be imported to this release. In general, past VyOS micro releases have included new features as long as they are safe, low risk changes. There was a lot of discussion about this topic mostly related to where developer time is best spent and whether making this a maintenance-only release would help justify more effort from the community to put towards v1.2. In the end we agreed that 1.1.8 will include backports of a few new features, but only where it's not a major headache or risk to do so.

Web GUI discussion
Mihail brought up the work he's been doing on a web GUI front-end for VyOS. His work can be found here:
https://github.com/mickvav/vyatta-webgui

https://github.com/mickvav/vyatta-accel-ppp

General consensus on a web GUI is that it's a nice-to-have, not a requirement for the project at the moment. We might look to integrate this at some point in the 1.2 timeframe or beyond.

The move to Jessie (VyOS 1.2)

Here's where we are.  We have nightly builds for VyOS 1.2 based on Debian Jessie.  The original VyOS code base is challenging and there's no current automated testing system, so we need testers.  We agreed that one thing we need is visibility on what testing has been done so far.  If you have tested a 1.2 nightly build or would like to, please see this thread to view and get access to the testing matrix.

Jason Hendry mentioned a side project which some Mintel hackers had started on, using serverspec to automate tests of VyOS nightly builds.  On top of doing some manual testing of the nightly builds and contributing to the spreadsheet, he's going to look into getting the serverspec base pushed into CI.

Community Members Present

We had a lot of responses to the original phabricator thread.  Unfortunately not everyone could make it, and a few people weren't able to join because we hit the maximum number of participants allowed in Google Hangouts.  Next meeting we will try a different piece of technology.

  • Jason Hendry (jhendryUK)
  • Daniil Baturin (Dmbaturin)
  • Kim Hagen (UnicronNL)
  • Paul Fitzgerald
  • Michael Zimmerer (mtz4718)
  • Mihail Vasilev (mickvav)
  • Ewald van Geffen (Feedmytv)
  • Patrick van Staveren (trickv)
  • Yuriy Andamsov (syncer)
  • Bronislav Robenek (BillyTheCzech)
  • Amos Shapira
We took some meeting notes which are currently available only on Google Docs but will be centralized somewhere agreeable in the future.

Feedback & Next Meeting
If you would like to join the next meeting, please comment on Q55 in Phabricator to get yourself on the list.  Hope to see you there!

21 September, 2016 10:34PM

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: Leostream Joins Canonical’s Charm partner programme

Leostream Corporation, a leading developer of hosted desktop connection management software, has joined the Charm partner programme to facilitate the deployment of virtual desktops on Ubuntu OpenStack. The partner programme helps solution providers make the best use of Canonical’s model-driven operations system, Juju; enabling instant workload deployment, integration, and scaling on any public or private cloud, as well as bare metal, with just the click of a button. The Juju Charm Store has a rapidly growing number of charms available to DevOps teams, with hundreds of cloud-based applications available.

“OpenStack has long been a solution for controlling large pools of compute, storage, and networking resources and has recently turned heads as a solution for virtual desktop infrastructure (VDI),” comments Karen Gondoly, CEO of Leostream. “As the world’s most popular operating system for OpenStack, Ubuntu provides a reliable way to build out a manageable cloud.  By making the Leostream Connection Broker available from Canonical’s Charm store, DevOps teams have a fast path to delivering desktops and remote sessions in a cloud-based environment.”

A pioneer in the evolving desktop virtualization space, Leostream will be “charming” its flagship connection broker software, which has quickly become an essential tool for enterprise-grade OpenStack VDI.  Coined the “ultimate connection broker” and the “one broker to rule them all”, the software provides a single management console to integrate a variety of systems and platforms including physical and virtual infrastructures, Windows and Linux Operating Systems, and any number of high-performance display protocols.

To overcome the technical barriers of building and managing OpenStack VDI, Leostream configuration and setup is included in the Canonical BootStack solution. BootStack is an end-to-end service that includes the design, implementation, and ongoing management of an OpenStack cloud on Ubuntu. Combined with Leostream, organizations can get up and running with hosted desktops faster, easier, and in a more cost-predictable way.

“Together, Leostream and Canonical simplify the deployment and migration of virtual desktop workloads into an OpenStack cloud, eliminating legacy, expensive VDI stacks and providing cloud-based, on-demand desktops to users across an organization,” says Stefan Johansson, Global Software Alliances Director, Canonical’s Cloud Division. “We are excited to welcome Leostream to our catalogue to accelerate the adoption of OpenStack VDI.”

The Leostream Connection Broker will be available directly from the Charm store in the fall of 2016. In the meantime, the latest version of the connection broker is available for download from the: Leostream website. For more information on Canonical’s Charm Partner Programme, go to http://partners.ubuntu.com/programmes/charm.

21 September, 2016 06:36PM

Dustin Kirkland: HOWTO: Launch an Ubuntu Cloud Image with KVM from the Command Line


I reinstalled my primary laptop (Lenovo x250) about 3 months ago (June 30, 2016), when I got a shiny new SSD, with a fresh Ubuntu 16.04 LTS image.

Just yesterday, I needed to test something in KVM.  Something that could only be tested in KVM.

kirkland@x250:~⟫ kvm
The program 'kvm' is currently not installed. You can install it by typing:
sudo apt install qemu-kvm
127 kirkland@x250:~⟫

I don't have KVM installed?  How is that even possible?  I used to be the maintainer of the virtualization stack in Ubuntu (kvm, qemu, libvirt, virt-manager, et al.)!  I lived and breathed virtualization on Ubuntu for years...

Alas, it seems that I've used LXD for everything these days!  It's built into every Ubuntu 16.04 LTS server, and one 'apt install lxd' away from having it on your desktop.  With ZFS, instances start in under 3 seconds.  Snapshots, live migration, an image store, a REST API, all built in.  Try it out, if you haven't, it's great!

kirkland@x250:~⟫ time lxc launch ubuntu:x
Creating supreme-parakeet
Starting supreme-parakeet
real 0m1.851s
user 0m0.008s
sys 0m0.000s
kirkland@x250:~⟫ lxc exec supreme-parakeet bash
root@supreme-parakeet:~#

But that's enough of a LXD advertisement...back to the title of the blog post.

Here, I want to download an Ubuntu cloud image, and boot into it.  There's one extra step nowadays.  You need to create your "user data" and feed it into cloud-init.

First, create a simple text file, called "seed":

kirkland@x250:~⟫ cat seed
#cloud-config
password: passw0rd
chpasswd: { expire: False }
ssh_pwauth: True
ssh_import_id: kirkland

Now, generate a "seed.img" disk with cloud-localds (from the cloud-image-utils package), like this:

kirkland@x250:~⟫ cloud-localds seed.img seed
kirkland@x250:~⟫ ls -halF seed.img
-rw-rw-r-- 1 kirkland kirkland 366K Sep 20 17:12 seed.img

Next, download your image from cloud-images.ubuntu.com:

kirkland@x250:~⟫ wget http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img                                                                                                                                                          
--2016-09-20 17:13:57-- http://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-disk1.img
Resolving cloud-images.ubuntu.com (cloud-images.ubuntu.com)... 91.189.88.141, 2001:67c:1360:8001:ffff:ffff:ffff:fffe
Connecting to cloud-images.ubuntu.com (cloud-images.ubuntu.com)|91.189.88.141|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 312606720 (298M) [application/octet-stream]
Saving to: ‘xenial-server-cloudimg-amd64-disk1.img’
xenial-server-cloudimg-amd64-disk1.img
100%[=================================] 298.12M 3.35MB/s in 88s
2016-09-20 17:15:25 (3.39 MB/s) - ‘xenial-server-cloudimg-amd64-disk1.img’ saved [312606720/312606720]

In the nominal case, you can now just launch KVM, and add your user data as a cdrom disk.  When it boots, you can login with "ubuntu" and "passw0rd", which we set in the seed:

kirkland@x250:~⟫ kvm -cdrom seed.img -hda xenial-server-cloudimg-amd64-disk1.img

Finally, let's enable more bells and whistles, and speed this VM up.  Let's give it all 4 CPUs, a healthy 8GB of memory, a virtio disk, and let's port forward ssh to 5555:

kirkland@x250:~⟫ kvm -m 8192 \
-smp 4 \
-cdrom seed.img \
-device e1000,netdev=user.0 \
-netdev user,id=user.0,hostfwd=tcp::5555-:22 \
-drive file=xenial-server-cloudimg-amd64-disk1.img,if=virtio,cache=writeback,index=0

And with that, we can now ssh into the VM, with the public SSH key specified in our seed:

kirkland@x250:~⟫ ssh -p 5555 ubuntu@localhost
The authenticity of host '[localhost]:5555 ([127.0.0.1]:5555)' can't be established.
RSA key fingerprint is SHA256:w2FyU6TcZVj1WuaBA799pCE5MLShHzwio8tn8XwKSdg.
No matching host key fingerprint found in DNS.
Are you sure you want to continue connecting (yes/no)? yes

Welcome to Ubuntu 16.04.1 LTS (GNU/Linux 4.4.0-36-generic x86_64)

* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage

Get cloud support with Ubuntu Advantage Cloud Guest:
http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.

ubuntu@ubuntu:~⟫

Cheers,
:-Dustin

21 September, 2016 03:03PM by Dustin Kirkland (noreply@blogger.com)

Salih Emin: Vivaldi browser: Interview with Jon Stephenson von Tetzchner

Vivaldi browser has taken the world of internet browsing by storm, and only months after its initial release it has found its way onto the computers of millions of power users. In this interview, Mr. Jon Stephenson von Tetzchner talks about how he got the idea to create this project and what to expect in the future.

21 September, 2016 02:29PM

hackergotchi for Blankon developers

Blankon developers

Sokhibi: Graphic Design Extracurricular with Inkscape

21 September 2016 was the second meeting of the Graphic Design with Inkscape extracurricular activity at SMP N 1 Semarang for the 2016/2017 school year.
This session was attended by 8 (eight) participants, all of whom happened to be girls (the boys were excused because of a sudden commitment).

The main topic of this meeting was drawing an X-Banner with Inkscape. The first step was setting the Inkscape page to the standard X-Banner size, followed by drawing the background for the X-Banner. The session also covered using layers, which makes editing the drawing much easier.

When it came time to place the logo into the X-Banner design, it turned out the only logo available was in bitmap format, so it had no transparency and became pixelated when enlarged.
The easiest way to get the logo in SVG format is the Trace Bitmap method, so I taught the participants how to use Inkscape's Trace Bitmap feature.
Here is a brief explanation of how to trace a bitmap in Inkscape:
Import the bitmap image into the Inkscape page, select it, then click Path => Trace Bitmap or press Shift+Alt+B.

Adjust the settings in the Trace Bitmap dialog to suit your needs. Since the goal here is a colored logo (not black and white), select the Colors option under Multiple scans: create a group of paths, and set Scans to 3 (three). To preview what the trace will produce, click the Update button next to the Preview pane.

The vector object has now been created with the Trace Bitmap technique. The resulting object automatically lands exactly on top of the original bitmap; drag it aside to see the result.
The vector image produced by this method automatically consists of several layers combined into one Group; use Ungroup to separate them.
Pick the object that most closely matches the original (bitmap) logo. If the colors of the vector object do not match the original, recolor it by sampling colors from the original image with the Dropper tool (pick color). An example of the result can be seen in the image below:

Once every participant had completed the Trace Bitmap exercise, they returned to working on their X-Banner designs until the extracurricular period ended.

Besides the material written up above, there were several other topics that are not included here because my hands were getting a bit sore from typing; if I find some free time, I will update this post.

This concludes the brief documentation of the extracurricular activity at SMP N 1 Semarang on 21 September 2016. The activity is held once a week as a step toward using legal software, especially open source applications, in education; hopefully other educational institutions will follow.

Open source greetings

21 September, 2016 02:04PM by Istana Media (noreply@blogger.com)

hackergotchi for Maemo developers

Maemo developers

By: Replacing with silent fan on Icy Box IB-3620U3 enclosure – MacKonsti

[…] couple of excellent articles (by Pavel Rojtberg and Robin Jakobsson) that show how to remove the stock fan and replace with a newer, silent one, I […]


21 September, 2016 12:50PM by Pavel Rojtberg (pavel@rojtberg.net)

hackergotchi for Ubuntu developers

Ubuntu developers

Jonathan Riddell: Plasma Wayland ISO Now Working on VirtualBox/virt-manager

I read that Neon Dev Edition Unstable Branches is moving to Plasma Wayland by default instead of X.  So I thought it a good time to check out this week’s Plasma Wayland ISO. Joy of joys, it has gained the ability to work in VirtualBox and virt-manager since last I tried.  It’s full of flickers and Spectacle doesn’t take screenshots, but it’s otherwise perfectly functional.  Very exciting 🙂

 


21 September, 2016 11:33AM

September 20, 2016

hackergotchi for Tails

Tails

Tails 2.6 is out

This release fixes many security issues and users should upgrade as soon as possible.

Changes

New features

  • We enabled address space layout randomization in the Linux kernel (kASLR) to improve protection from buffer overflow attacks.

  • We installed rngd to improve the entropy of the random numbers generated on computers that have a hardware random number generator.

Upgrades and changes

  • Upgrade Tor to 0.2.8.7.

  • Upgrade Tor Browser to 6.0.5.

  • Upgrade to Linux 4.6. This should improve support for newer hardware (graphics, Wi-Fi, etc.).

  • Upgrade Icedove to 45.2.0.

  • Upgrade Tor Birdy to 0.2.0.

  • Upgrade Electrum to 2.6.4.

  • Install firmware for Intel SST sound cards (firmware-intel-sound).

  • Install firmware for Texas Instruments Wi-Fi interfaces (firmware-ti-connectivity).

  • Remove non-free APT repositories. We documented how to configure additional APT repositories using the persistent volume.

  • Use a dedicated page as the homepage of Tor Browser so we can customize it for our users.

  • Set up the trigger for RAM erasure on shutdown earlier in the boot process. This should speed up shutdown and make RAM erasure more robust.

Fixed problems

  • Disable the automatic configuration of Icedove when using OAuth. This should fix the automatic configuration for GMail accounts. (#11536)

  • Make the Disable all networking and Tor bridge mode options of Tails Greeter more robust. (#11593)

For more details, read our changelog.

Known issues

  • For some users memory wiping fails more often than in Tails 2.5, and for some users it fails less often. Please report any such changes to #11786.

See the list of long-standing issues.

Get Tails 2.6

What's coming up?

Tails 2.7 is scheduled for November 8.

Have a look at our roadmap to see where we are heading.

We need your help and there are many ways to contribute to Tails (donating is only one of them). Come talk to us!

20 September, 2016 10:34AM

hackergotchi for Ubuntu developers

Ubuntu developers

Eric Hammond: Developing CloudStatus, an Alexa Skill to Query AWS Service Status -- an interview with Kira Hammond by Eric Hammond

Interview conducted in writing July-August 2016.

[Eric] Good morning, Kira. It is a pleasure to interview you today and to help you introduce your recently launched Alexa skill, “CloudStatus”. Can you provide a brief overview about what the skill does?

[Kira] Good morning, Papa! Thank you for inviting me.

CloudStatus allows users to check the service availability of any AWS region. On opening the skill, Alexa says which (if any) regions are experiencing service issues or were recently having problems. Then the user can inquire about the services in specific regions.

This skill was made at my dad’s request. He wanted to quickly see how AWS services were operating, without needing to open his laptop. As well as summarizing service issues for him, my dad thought CloudStatus would be a good opportunity for me to learn about retrieving and parsing web pages in Python.

All the data can be found in more detail at status.aws.amazon.com. But with CloudStatus, developers can hear AWS statuses with their Amazon Echo. Instead of scrolling through dozens of green checkmarks to find errors, users of CloudStatus listen to which services are having problems, as well as how many services are operating satisfactorily.

CloudStatus is intended for anyone who uses Amazon Web Services and wants to know about current (and recent) AWS problems. Eventually it might be expanded to talk about other clouds as well.

[Eric] Assuming I have an Amazon Echo, how do I install and use the CloudStatus Alexa skill?

[Kira] Just say “Alexa, enable CloudStatus skill”! Ask Alexa to “open CloudStatus” and she will give you a summary of regions with problems. An example of what she might say on the worst of days is:

“3 out of 11 AWS regions are experiencing service issues: Mumbai (ap-south-1), Tokyo (ap-northeast-1), Ireland (eu-west-1). 1 out of 11 AWS regions was having problems, but the issues have been resolved: Northern Virginia (us-east-1). The remaining 7 regions are operating normally. All 7 global services are operating normally. Which Amazon Web Services region would you like to check?”

Or on most days:

“All 62 regional services in the 12 AWS regions are operating normally. All 7 global services are operating normally. Which Amazon Web Services region would you like to check?”

Request any AWS region you are interested in, and Alexa will present you with current and recent service issues in that region.

Here’s the full recording of an example session: http://pub.alestic.com/alexa/cloudstatus/CloudStatus-Alexa-Skill-sample-20160908.mp3

[Eric] What technologies did you use to create the CloudStatus Alexa skill?

[Kira] I wrote CloudStatus using AWS Lambda, a service that manages servers and scaling for you. Developers pay only for the compute time used when their code is called. AWS Lambda also displays metrics from Amazon CloudWatch.

Amazon CloudWatch gives statistics from the last couple weeks, such as the number of invocations, how long they took, and whether there were any errors. CloudWatch Logs is also a very useful service. It allows me to see all the errors and print() output from my code. Without it, I wouldn’t be able to debug my skill!

I used Amazon EC2 to build the Python modules necessary for my program. The modules (Requests and LXML) download and parse the AWS status page, so I can get the data I need. The Python packages and my code files are zipped and uploaded to AWS Lambda.

Fun fact: My Lambda function is based in us-east-1. If AWS Lambda stops working in that region, you can’t use CloudStatus to check if Northern Virginia AWS Lambda is working! For that matter, CloudStatus will be completely dysfunctional.
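
A minimal sketch of that download-and-parse step with Requests and LXML; the XPath below is an illustrative assumption about the status page's layout, not the skill's actual code:

import requests
from lxml import html

STATUS_URL = "http://status.aws.amazon.com/"

def fetch_status_tree():
    # Download the AWS status page and parse it into an HTML tree.
    response = requests.get(STATUS_URL, timeout=10)
    response.raise_for_status()
    return html.fromstring(response.content)

def table_rows(tree):
    # Yield the cell text of each table row; the real skill would
    # filter these rows by region and by service status.
    for row in tree.xpath("//table//tr"):
        cells = [cell.text_content().strip() for cell in row.xpath(".//td")]
        if cells:
            yield cells

for cells in table_rows(fetch_status_tree()):
    print(cells)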

[Eric] Why do you enjoy programming?

[Kira] Programming is so much fun and so rewarding! I enjoy making tools so I can be lazy.

Let’s rephrase that: Sometimes I’m repeatedly doing a non-programming activity—say, making a long list of equations for math practice. I think of two “random” numbers between one and a hundred (a human can’t actually come up with a random set of numbers) and pick an operation: addition, subtraction, multiplication, or division. After doing this several times, the activity begins to tire me. My brain starts to shut off and wants to do something more interesting. Then I realize that I’m doing the same thing over and over again. Hey! Why not make a program?

Computers can do so much in so little time. Unlike humans, they are capable of picking completely random items from a list. And they aren’t going to make mistakes. You can tell a computer to do the same thing hundreds of times, and it won’t be bored.

Finish the program, type in a command, and voila! Look at that page full of math problems. Plus, I can get a new one whenever I want, in just a couple seconds. Laziness in this case drives a person to put time and effort into ever-changing problem-solving, all so they don’t have to put time and effort into a dull, repetitive task. See http://threevirtues.com/.
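
A few lines in that spirit; this is an illustrative sketch, not the actual generator:

import random

OPERATIONS = ["+", "-", "*", "/"]

def make_problem():
    # Two pseudo-random numbers between one and a hundred, and one
    # randomly chosen operation, exactly as described above.
    a = random.randint(1, 100)
    b = random.randint(1, 100)
    return "%d %s %d = " % (a, random.choice(OPERATIONS), b)

for _ in range(20):  # a fresh page of practice problems on demand
    print(make_problem())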

But programming isn’t just for tools! I also enjoy making simple games and am learning about websites.

One downside to having computers do things for you: You can’t blame a computer for not doing what you told it to. It did do what you told it to; you just didn’t tell it to do what you thought you did.

Coding can be challenging (even frustrating) and it can be tempting to give up on a debug issue. But, oh, the thrill that comes after solving a difficult coding problem!

The problem-solving can be exciting even when a program is nowhere near finished. My second Alexa program wasn’t coming along that well when—finally!—I got her to say “One plus one is eleven.” and later “Three plus four is twelve.” Though it doesn’t seem that impressive, it showed me that I was getting somewhere and the next problem seemed reasonable.

[Eric] How did you get started programming with the Alexa Skills Kit (ASK)?

[Kira] My very first Alexa skill was based on an AWS Lambda blueprint called Color Expert (alexa-skills-kit-color-expert-python). A blueprint is a sample program that AWS programmers can copy and modify. In the sample skill, the user tells Alexa their favorite color and Alexa stores the color name. Then the user can ask Alexa what their favorite color is. I didn’t make many changes: maybe Alexa’s responses here and there, and I added the color “rainbow sparkles.”

I also made a skill called Calculator in which the user gets answers to simple equations.

Last year, I took a music history class. To help me study for the test, I created a trivia game from Reindeer Games, an Alexa Skills Kit template (see https://developer.amazon.com/public/community/post/TxDJWS16KUPVKO/New-Alexa-Skills-Kit-Template-Build-a-Trivia-Skill-in-under-an-Hour). That was a lot of fun and helped me to grow in my knowledge of how Alexa works behind the scenes.

[Eric] How does Alexa development differ from other programming you have done?

[Kira] At first Alexa was pretty overwhelming. It was so different from anything I’d ever done before, and there were lines and lines of unfamiliar code written by professional Amazon people.

I found the ASK blueprints and templates extremely helpful. Instead of just being a functional program, the code is commented so developers know why it’s there and are encouraged to play around with it.

Still, the pages of code can be scary. One thing new Alexa developers can try: Before modifying your blueprint, set up the skill and ask Alexa to run it. Everything she says from that point on is somewhere in your program! Find her response in the program and tweak it. The variable name is something like “speech_output” or “speechOutput.”

It’s a really cool experience making voice apps. You can make Alexa say ridiculous things in a serious voice! Because CloudStatus started with the Color Expert blueprint, my first successful edit ended with our Echo saying, “I now know your favorite color is Northern Virginia. You can ask me your favorite color by saying, ‘What’s my favorite color?’.”

Voice applications involve factors you never need to deal with in a text app. When the user is interacting through text, they can take as long as they want to read and respond. Speech must be concise so the listener understands the first time. Another challenge is that Alexa doesn’t necessarily know how to pronounce technical terms and foreign names, but the software is always improving.

One plus side to voice apps is not having to build your own language model. With text-based programs, I spend a considerable amount of time listing all the ways a person can answer “yes,” or request help. Luckily, with Alexa I don’t have to worry too much about how the user will phrase their sentences. Amazon already has an algorithm, and it’s constantly getting smarter! Hint: If you’re making your own skill, use some built-in Amazon intents, like AMAZON.YesIntent or AMAZON.HelpIntent.
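
A sketch of how those built-in intents might slot into a blueprint-style dispatcher; the handler bodies below are toy stand-ins, not CloudStatus's real responses:

def plain_speech(text):
    # Wrap text in Alexa's PlainText output structure.
    return {
        "outputSpeech": {"type": "PlainText", "text": text},
        "shouldEndSession": False,
    }

def on_intent(intent_request, session):
    # Amazon's language model decides which intent fired, so the skill
    # never has to list every phrasing of "help" or "yes" itself.
    intent_name = intent_request["intent"]["name"]
    if intent_name == "AMAZON.HelpIntent":
        return plain_speech("Ask me about any AWS region.")
    if intent_name == "AMAZON.YesIntent":
        return plain_speech("Okay. Which region would you like to check?")
    # Custom intents (for example, a region-checking intent) would be
    # dispatched here.
    return plain_speech("Sorry, I didn't catch that.")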

[Eric] What challenges did you encounter as you built the CloudStatus Alexa skill?

[Kira] At first, I edited the code directly in the Lambda console. Pretty soon though, I needed to import modules that weren’t built in to Python. Now I keep my code and modules in the same directory on a personal computer. That directory gets zipped and uploaded to Lambda, so the modules are right there sitting next to the code.

One challenge of mine has been wanting to fix and improve everything at once. Naturally, there is an error practically every time I upload my code for testing. Isn’t that what testing is for? But when I modify everything instead of improving bit by bit, the bugs are more difficult to sort out. I’m slowly learning from my dad to make small changes and update often. “Ship it!” he cries regularly.

During development, I grew tired of constantly opening my code, modifying it, zipping it and the modules, uploading it to Lambda, and waiting for the Lambda function to save. Eventually I wrote a separate Bash program that lets me type “edit-cloudstatus” into my shell. The program runs unit tests and opens my code files in the Atom editor. After that, it calls the command “fileschanged” to automatically test and zip all the code every time I edit something or add a Python module. That was exciting!

I’ve found that the Alexa speech-to-text conversions aren’t always what I think they will be. For example, if I tell CloudStatus I want to know about “Northern Virginia,” it sends my code “northern Virginia” (lowercase then capitalized), whereas saying “Northern California” turns into “northern california” (all lowercase). To at least fix the capitalization inconsistencies, my dad suggested lowercasing the input and mapping it to the standardized AWS region code as soon as possible.
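
A minimal sketch of that normalization; the table below is a small illustrative subset, not the skill's actual mapping:

REGION_CODES = {
    "northern virginia": "us-east-1",
    "northern california": "us-west-1",
    "oregon": "us-west-2",
    "ireland": "eu-west-1",
    "tokyo": "ap-northeast-1",
    "mumbai": "ap-south-1",
}

def normalize_region(spoken_name):
    # Lowercase the speech-to-text output before the lookup, so
    # "northern Virginia" and "northern california" behave the same.
    return REGION_CODES.get(spoken_name.strip().lower())

assert normalize_region("northern Virginia") == "us-east-1"
assert normalize_region("northern california") == "us-west-1"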

[Eric] What Alexa skills do you plan on creating in the future?

[Kira] I will probably continue to work on CloudStatus for a while. There’s always something to improve, a feature to add, or something to learn about—right now it’s Speech Synthesis Markup Language (SSML). I don’t think it’s possible to finish a program for good!

My brother and I also want to learn about controlling our lights and thermostat with Alexa. Every time my family leaves the house, we say basically the same thing: “Alexa, turn off all the lights. Alexa, turn the kitchen light to twenty percent. Alexa, tell the thermostat we’re leaving.” I know it’s only three sentences, but wouldn’t it be easier to just say: “Alexa, start Leaving Home” or something like that? If I learned to control the lights, I could also make them flash and turn different colors, which would be super fun. :)

In August a new ASK template was released for decision tree skills. I want to make some sort of dichotomous key with that. https://developer.amazon.com/public/community/post/TxHGKH09BL2VA1/New-Alexa-Skills-Kit-Template-Step-by-Step-Guide-to-Build-a-Decision-Tree-Skill

[Eric] Do you have any advice for others who want to publish an Alexa skill?

[Kira]

  • Before submitting your skill for certification, make sure you read through the submission checklist. https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/alexa-skills-kit-submission-checklist#submission-checklist

  • Remember to check your skill’s home cards often. They are displayed in the Alexa App. Sometimes the text that Alexa pronounces should be different from the reader-friendly card content. For example, in CloudStatus, “N. Virginia (us-east-1)” might be easy to read, but Alexa is likely to pronounce it “En Virginia, Us [as in ‘we’] East 1.” I have to tell Alexa to say “northern virginia, u.s. east 1,” while leaving the card readable for humans.

  • Since readers can process text at their own pace, the home card may display more details than Alexa speaks, if necessary.

  • If you don’t want a card to accompany a specific response, remove the ‘card’ item from your response dict. Look for the function build_speechlet_response() or buildSpeechletResponse(); see the sketch after this list.

  • Never point your live/public skill at the $LATEST version of your code. The $LATEST version is for you to edit and test your code, and it’s where you catch errors.

  • If the skill raises errors frequently, don’t be intimidated! It’s part of the process of coding. To find out exactly what the problem is, read the “log streams” for your Lambda function. To print debug information to the logs, print() the information you want (Python) or use a console.log() statement (JavaScript/Node.js).

  • It helps me to keep a list of phrases to try, including words that the skill won’t understand. Make sure Alexa doesn’t raise an error and exit the skill, no matter what nonsense the user says.

  • Many great tips for designing voice interactions are on the ASK blog. https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/alexa-skills-kit-voice-design-best-practices

  • Have fun!
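
As promised above, a minimal sketch of the build_speechlet_response() helper in the style of the Python blueprints; the field names follow the Alexa response format, but the details are illustrative:

def build_speechlet_response(title, speech_output, card_content=None,
                             reprompt_text=None, should_end_session=False):
    # Build one response dict; the card is optional, so leaving
    # card_content as None keeps the 'card' item out entirely.
    response = {
        "outputSpeech": {"type": "PlainText", "text": speech_output},
        "shouldEndSession": should_end_session,
    }
    if reprompt_text is not None:
        response["reprompt"] = {
            "outputSpeech": {"type": "PlainText", "text": reprompt_text}
        }
    if card_content is not None:
        # The card may carry more detail than the spoken text, since
        # readers can process text at their own pace.
        response["card"] = {
            "type": "Simple",
            "title": title,
            "content": card_content,
        }
    return response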

In The News

Amazon had early access to this interview and to Kira, and wrote an article about her in the Alexa Blog:

14-Year-Old Girl Creates CloudStatus Alexa Skill That Benefits AWS Developers

which was then picked up by VentureBeat:

A 14-year-old built an Alexa skill for checking the status of AWS

which was then copied, referenced, tweeted, and retweeted.

Original article and comments: https://alestic.com/2016/09/alexa-skill-aws-cloudstatus/

20 September, 2016 04:15AM

Launchpad News: Beta test: new package picker

If you are a member of Launchpad’s beta testers team, you’ll now have a slightly different interface for selecting source packages in the Launchpad web interface, and we’d like to know if it goes wrong for you.

One of our longer-standing bugs has been #42298 (“package picker lists unpublished (invalid) packages”).  When selecting a package – for example, when filing a bug against Ubuntu, or if you select “Also affects distribution/package” on a bug – and using the “Choose…” link to pop up a picker widget, the resulting package picker has historically offered all possible source package names (or sometimes all possible source and binary package names) that Launchpad knows about, without much regard for whether they make sense in context.  For example, packages that were removed in Ubuntu 5.10, or packages that only exist in Debian, would be offered in search results, and to make matters worse, search results were often ordered alphabetically by name rather than by relevance.  There was some work on this problem back in 2011 or so, but it suffered from performance problems and was never widely enabled.

We’ve now resurrected that work from 2011, fixed the performance problems, and converted all relevant views to use it.  You should now see something like this:

New package picker, showing search results for "pass"

Exact matches on either source or binary package names always come first, and we try to order other matches in a reasonable way as well.  The disclosure triangles alongside each package allow you to check for more details before you make a selection.

Please report any bugs you find with this new feature.  If all goes well, we’ll enable this for all users soon.

Update: as of 2016-09-22, this feature is enabled for all Launchpad users.

20 September, 2016 12:37AM

September 19, 2016

Valorie Zimmerman: Kubuntu needs some K/Ubuntu Developer help this week

Our packaging team has been working very hard; however, we are short of active Kubuntu Developers right now. So we're asking developers with a bit of extra time and some experience with KDE packages to look at our Frameworks, Plasma, and Applications packaging in our staging PPAs and sign off and upload them to the Ubuntu Archive.

If you have the time and permissions, please stop by #kubuntu-devel in IRC or Telegram and give us a shove across the beta timeline!

19 September, 2016 10:53PM by Valorie Zimmerman (noreply@blogger.com)