January 23, 2017


Change is coming to VyOS project

People often ask us the same questions, such as whether we know about the Debian 6 EOL, when 1.2.0 will be released, or when this or that feature will be implemented. The short answer to all of those: it depends on you. Yes, you, the VyOS users.

Here’s what it takes to run an open source project of this type. There are multiple tasks, and they all have to be done:

  • Emergency fixes and security patches

  • Routine bug fixes, cleanups, and refactoring

  • Development of new features

  • Documentation writing

  • Testing (including writing automated tests)

All those tasks need hands (ideally, connected to a brain). Emergency bug fixes and security patches need a team of committed people who can do the job on short notice. That is attainable in two ways: either there are people for whom it's their primary job, or the team of committed people is large enough that someone has spare time at any given moment.

Cleanups and refactoring also need a team of committed people, because no one benefits from them in the short run: they are about making life easier for contributors and improving the sustainability of the project, keeping it from becoming an unmanageable mess. Development of new features needs people who are personally interested in those features and have the expertise to integrate them in the right way. It's perfect if they also maintain their code, but if they simply hand documented and maintainable code to the maintainer team, that's good enough.

Now, the sad truth is that the VyOS project has none of those. The commitment to using it among its users greatly exceeds the commitment to contributing to it. While we don't know for certain how many people are using VyOS, we have at least some data. At the moment, there are 600 users of the official AMI on AWS. The user guide page had 11k+ visitors last month, and that number has been growing constantly since I took up the role of community manager of the VyOS project. We are also aware of companies that run around 1k VyOS instances, and of companies that rely on VyOS in their business operations in one way or another. But still, if we talk about consumers vs. contributors, the ratio is 99% consumers to 1% contributors.

My original idea was to raise awareness of the VyOS project by introducing a new website, refreshing the forum look, activating social media channels, and introducing modern collaboration tools to make participation in the project easier and to open new ways for users and companies to participate and contribute. After all, a bigger user base means a larger pool of people and companies who can contribute to the project. We also launched commercial support with the idea that companies that use VyOS for their business but can't, or just don't want to, participate in the project directly may be willing to support it by purchasing support subscriptions.

Ten months later, I have to admit that I was partially wrong. While the consumer user base is growing rapidly, I can't say the same about contributors, and that is a pity. Sure, we got a few new contributors: some contribute occasionally, others are more active, and some old contributors are back (thank you for joining and re-joining VyOS!). We are also working with several companies that show interest in VyOS as a platform and contribute to the project both commercially and with human resources, which is great. However, at this scale it's not enough.

At this point, I started thinking that the current situation is neither fair nor sustainable.

These are just some of the questions that come to my mind frequently:

  • Why do those who contribute nothing to the project get the same as those who spend their time and resources on it?

  • Why do companies like ALP Group use VyOS in their business and publicly claim that they will return improvements to upstream, when they are not actually returning anything? Why do some people think that they can come to IRC/chat and demand something without contributing anything?

  • Why do the cloud providers that use VyOS for their businesses not bother to support the project in any way?

I would like to remind you of the basic principles of the VyOS philosophy established from its start:

VyOS is a community driven project!

VyOS always will be an open source project!

VyOS will not have any commercial or any other special versions!

However, if we all want VyOS to be a great project, we all need to adhere to those principles, otherwise, nothing will happen. Community driven means that the driving force behind improvements should be those interested in them. Open source means we can’t license a proprietary component from a third party if existing open source software does not provide the feature you need. Finally, free for everyone means we all share responsibility for the success or failure of the project.

I’m happy and proud to be part of the VyOS community, and I consider it my duty to help the project and the community grow. I’m doing what I can, and I expect that if you also care about the project, you will participate too.

We can all contribute to the project, whether you are a developer, a network engineer, or neither.

There are many tasks that can be done by individuals with zero programming involved:

  • Documentation (documenting new features, improving existing wiki pages, or rewriting old documentation for Vyatta Core)

  • Community support in forums/IRC/chat (we have English and localized forums, and you can request a channel in your native language, like our Japanese community did)

  • Feature requests (well-described use cases from which the whole community can benefit; note that a good feature request should make it easier for developers to implement it: just saying you want MPLS is not quite the same as researching existing open source implementations, trying them out, making sample configs that contributors with coding skills can use for reference, drafting a CLI design, and so on!)

  • Testing & Bug reports

  • Design discussions, such as those in the VyOS 2.0 development digest

If you work at a company that uses VyOS for business needs, please consider talking with your CEO/CTO about:

  • Providing full- or part-time workers to accomplish the tasks listed above

  • Providing paid accounts in common clouds for development needs

  • Providing hardware and licenses for the laboratory (we need quite a lot of hardware to support all of the hypervisors, and the same is true of licenses for interoperability testing)

  • Buying commercial support and services

In January, we’d like to have a meeting with all current contributors to discuss what we can do to increase participation in the project.

Meanwhile, I would like to ask you to share this blog post to all whom it may concern.

All VyOS users (especially companies that use VyOS in their business) should be aware that it is time to start participating in the project if you want to keep using VyOS and rely on it in the future.

Brace yourself.

Change is coming!

23 January, 2017 03:35PM by Yuriy Andamasov

VyOS 2.0 development digest #2

In the previous post we talked about the reasons for the rewrite, design and implementation issues, and basic ideas. Now it's time to get to the details. Today we'll mostly talk about command definitions, or rather interface definitions, since set commands are just one way to access the configuration interface.

Let's review the VyConf architecture (I included a few things in the diagram that we haven't discussed yet, ignore them for now):

At startup, VyConf will load the main config (or the fallback config, if that fails). But to know whether the config is valid, and to know what programs to call to actually configure the target applications, it needs additional data. We'll call that data "interface definitions" since it defines the configuration interface. Specifically, it defines:

  1. What config nodes (paths) are allowed (e.g. "interfaces ethernet", or "protocols ospf")
  2. What values are valid for those nodes (e.g. any IPv4 or IPv6 address for "system name-server")
  3. What script/program should be called when this or that part of the config is changed
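As a rough sketch of the kind of information one such definition carries, here is an illustrative model in plain Python. These field names are hypothetical, not VyConf's actual schema; the point is only that each entry pairs a path with its value constraints and a responsible component:

```python
# Illustrative model of a single interface definition entry.
# All field names here are hypothetical, not VyConf's actual types.
name_server = {
    "path": ["system", "name-server"],                # 1. the allowed config node
    "multi": True,                                    # node may hold several values
    "constraints": ["ipv4-address", "ipv6-address"],  # 2. validators to try on values
    "component": "dns-forwarding",                    # 3. program to run on change
}

def path_allowed(definition, path):
    """A config path is accepted only if it starts with a defined node path."""
    return path[:len(definition["path"])] == definition["path"]

print(path_allowed(name_server, ["system", "name-server", "192.0.2.1"]))  # → True
```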

The old way

Before we get to the new way, let's review the old way, the way it's done in the current VyOS implementation. In the current VyOS, those definitions are called "templates"; no one remembers why.

This is a typical template file:
vyos@vyos# cat /opt/vyatta/share/vyatta-cfg/templates/interfaces/ethernet/node.tag/speed/node.def 
type: txt
help: Link speed
default: "auto"
syntax:expression: $VAR(@) in "auto", "10", "100", "1000", "2500", "10000"; "Speed must be auto, 10, 100, 1000, 2500, or 10000"
allowed: echo auto 10 100 1000 2500 10000

commit:expression: exec "\
	/opt/vyatta/sbin/vyatta-interfaces.pl --dev=$VAR(../@) \
	--check-speed $VAR(@) $VAR(../duplex/@)"

update: if [ ! -f /tmp/speed-duplex.$VAR(../@) ]; then
	   /opt/vyatta/sbin/vyatta-interfaces.pl --dev=$VAR(../@) \
	   	--speed-duplex $VAR(@) $VAR(../duplex/@)
	   touch /tmp/speed-duplex.$VAR(../@)
	fi

val_help: auto; Auto negotiation (default)
val_help: 10; 10 Mbit/sec
val_help: 100; 100 Mbit/sec
val_help: 1000; 1 Gbit/sec
val_help: 2500; 2.5 Gbit/sec
val_help: 10000; 10 Gbit/sec

We can spot a few issues with it already. First, the set of definitions is a huge directory tree where each directory represents a config node, e.g. "interfaces ethernet address" becomes "interfaces/ethernet/node.tag/address/node.def". This makes them hard to navigate: you need to read a lot of small files to get the whole picture, and you need to do a lot of file/directory hopping to edit them. Now, try mass editing, or checking for mistakes before a release...

Next, they use a custom syntax, which needs a custom lexer and parser, and custom documentation (in practice, the source is your best bet, though I did write something of a syntax reference). No effortless parsing, and no effortless analysis or transformation either.

Next, the value checking has its peculiarities. There is a concept of type (it can be txt, u32, ipv4, ipv4net, ipv6, ipv6net, or macaddr), and there is the "syntax:expression:" thing, and the two partially duplicate each other. The types are hardcoded in the config backend and cannot be added without modifying it, even though they only define a validation procedure and are not used in any other way. The "syntax:expression:" can be either "foo in bar baz quux", or "pattern $regex", or "exec $externalScript".

But, the "original sin" of those files is that they allow embedded shell scripts, as you can see. Mixing data with logic is rarely a good idea, and in this case it's especially annoying because a) you cannot test such code other than on a live system b) you have to read every single file in a package to get the complete picture, since any of them may have embedded shell.

Now to the new way.

23 January, 2017 01:33PM by Daniil Baturin

VyOS 2.0 development digest #3: questions for you, vyconf daemon config format, appliance directory structure, and external validators lookup

Ok, I changed my mind: before we jump into the abyss of data structures and look at how the config and reference trees work, I'll describe the changes I made in the last few days.

Also, I'd like to make it clear that if you don't respond to design questions I state in these posts, I, or whoever takes up the task, will just do what they think is right. ;)

I guess I'll start with the questions this time. First, in the comments to the first post, the person who goes by kglkgvkvsd544 suggested two features: commit-confirm by default, and an alternative solution to the partial commit where instead of loading the failsafe config, the system loads a previous revision instead, in the same way as commit-confirm does. I don't think any of those should be the only available option, but having them as configurable options may be a good idea. Let me know what you think about it.

Another very important question: we need to decide on the wire protocol that the vyconf daemon will use for communication with its clients (the CLI tool, the interactive shell, and the HTTP bridge). I created a task for it (https://phabricator.vyos.net/T216); let me know what you think there. I'm inclined towards protobuf myself.

Now to the recent changes.

vyconfd config

As I already said, VyConf is supposed to run as a daemon and keep the running config tree, the reference tree, and the user sessions in memory. Obviously, it needs to know where to get that data. It also needs to know where to write logs, where the PID file and socket file should be, and other things daemons are supposed to know. Here's what the vyconf config for VyOS may look like:

name = "VyOS"

data_dir = "/usr/share/vyos/"
program_dir = "/usr/libexec/vyos"
config_dir = "/etc/vyos"

# paths relative to config_dir
primary_config = "config.boot"
fallback_config = "config.failsafe"

socket = "/var/run/vyconfd.sock"
pid_file = "/var/run/vyconfd.pid"
log_file = "/var/log/vyconfd.log"
That INI-like language is called TOML. It's pretty well specified and capable of some fancy stuff like arrays and hashes, apart from simple key/value pairs. The best part is that there are libraries for parsing it in many languages, and the one for OCaml is particularly nice and idiomatic (it uses lenses and option types to access a nested and possibly underdefined data structure in a handy and typesafe way), like:

let pid_file = TomlLenses.(get conf (key "vyconf" |-- table |-- key "pid_file" |-- string)) in
match pid_file with
| Some f -> f
| None -> "/var/run/vyconfd.pid"
The config format and this example reflect some decisions. First, the directory structure is more FHS-friendly. What do we need /opt/vyos for, if the FHS already has directories meant for exactly what we need: architecture-independent data (/usr/share), programs called by other programs and not directly by users (/usr/libexec), and config files (/etc)?

Second, all important parameters are configurable. The primary motivation for this is making VyConf usable for every appliance developer (most appliances will certainly not be called VyOS), but for ourselves and for every contributor to VyOS it's a reminder to avoid hardcoded paths anywhere: if changing a path is just a one-line edit away, hardcoding it is an especially bad idea.

Here's the complete directory structure I envision:
  interfaces/      # Interface definitions
  components/      # Component definitions: scripts/programs that verify the appliance config, generate actual configs, and apply them
  migration/       # Migration scripts (convert appliance config if syntax changes)
  validators/      # Value validators
  scripts/         # User scripts
  post-config.d/   # Post-config hooks, like vyatta-postconfig-bootup.script
  pre-commit.d/    # Pre-commit hooks
  post-commit.d/   # Post-commit hooks
  archive/         # Commit archive
This is not an exhaustive list of the directories an appliance can have, of course; it's just the directories that have any significance for VyConf. I'm also wondering if we should introduce post-save hooks for those who want to do something beyond the built-in commit archive functionality.

External validators

As you remember, my idea is to get rid of the inflexible system of built-in "types" and make regex the only built-in constraint type, and use external validators for everything else.

External validators will be stored in $program_dir/validators. Since many packages use the same types of values (IP addresses are a common example), and in VyOS 1.x we already have quite a lot of templates that reference the same validation scripts, making them a separate entity will simplify reuse: it's easy to see what validators exist, and you can be sure they behave as you expect (and if they don't, it's a bug).
A validator executable takes two arguments: the first is the constraint string, and the second is the value to be validated (e.g. range "1-99" "42"). The expected behaviour is to return 0 if the value is valid and a non-zero exit code otherwise. Validators should not produce any output; instead, the user will get the message defined in the constraintError tag of the interface definition (this approach is more flexible, since different things may want to use different messages, e.g. to clarify why exactly the value is not valid).
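A hypothetical range validator following this protocol might look like the sketch below. It is illustrative only, not an actual VyOS validator; it implements exactly the contract described above: two arguments in, exit code out, no output.

```python
#!/usr/bin/env python3
# Hypothetical external validator, invoked as: range "1-99" "42"
# Exits 0 if the value lies within the range, non-zero otherwise.
import sys

def validate(constraint, value):
    try:
        low, high = map(int, constraint.split("-"))
        return 0 if low <= int(value) <= high else 1
    except ValueError:
        # Malformed constraint or non-numeric value: also a failure,
        # and still no output (the error message comes from constraintError).
        return 1

if __name__ == "__main__" and len(sys.argv) == 3:
    sys.exit(validate(sys.argv[1], sys.argv[2]))
```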

That's all for today. Feel free to comment and ask questions. The next post really will be about the config tree and the way set commands work, stay tuned.

23 January, 2017 01:32PM by Daniil Baturin

VyOS 2.0 development digest #4: simple tasks for beginners, and the reasons to learn (and use) OCaml

Look, I lied again. This post is still not about the config and reference tree internals. People in the chat and elsewhere started showing some interest in learning OCaml and contributing to VyConf, and Hiroyuki Satou has already made a small pull request (it's about the build docs rather than the code itself, but that's a good start and will help people set up the environment), so I decided to make a post explaining some details and addressing common concerns.

The easy tasks

There are a couple of tasks that can be done completely by analogy, so they are good for getting familiar with the code and making sure your build environment actually works.

The first one is about new properties of config tree nodes, "inactive" and "ephemeral", that will be used for JunOS-like activate/deactivate functionality, and for nodes that are meant to be temporary and won't make it into the saved config, respectively.

The other one is about adding "hidden" and "secret" properties to the command definition schema and the reference tree. "hidden" is meant for hiding commands from completion (for feature toggles, or easter eggs ;), and "secret" is meant to hide sensitive data from unprivileged users or when making public pastes.
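In data terms, both tasks amount to adding boolean flags to the node type and honoring them in the right places. An illustrative sketch in Python (VyConf itself is OCaml, and these names are mine, not the actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class ConfigNode:
    name: str
    inactive: bool = False    # JunOS-like deactivate: kept in the tree, not applied
    ephemeral: bool = False   # temporary: never written to the saved config
    children: list = field(default_factory=list)

def nodes_to_save(node):
    """Filter out ephemeral children when rendering the saved config."""
    return [c for c in node.children if not c.ephemeral]
```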

Make sure you reference the task number in your commit description, as in "T225: add this and that", so that Phabricator can automatically connect the commits with the task.

If you want to take them up and need any assistance, feel free to ask me in phabricator or elsewhere.

23 January, 2017 01:32PM by Daniil Baturin

VyOS 2.0 development digest #5: doing 1.2.x and 2.0 development in parallel

There was a rather heated discussion about the 1.2.0 situation on the channel, and valid points were definitely raised: while 2.0 is being written, 1.2.0 can't benefit from any of that work, and that's sad. We do need a working VyOS in any case, and we can't just stop doing anything about it and work only on 2.0. My original plan was to put 1.2.0 in maintenance mode once it stabilized, but that would mean no updates at all for anyone, other than bug fixes. To make things worse, some things need, if not a rewrite, then at least very deep refactoring bordering on a rewrite, just to make them work again, due to deep changes in the configs of e.g. strongSwan.

There are three main issues with reusing the old code, as I already said: it's written in Perl, it mixes config reading and checking with logic, and it can't be tested outside of VyOS. The fourth issue is that the logic for generating, writing, and applying configs is not separated in most scripts either, so they don't fit the 2.0 model of more transactional commits. The question is whether we can do anything about those issues to enable rewriting bits of 1.2.0 in a way that allows reusing that code in 2.0 when the config backend and base system are ready, and what exactly we should do.

My conclusion so far is that we probably can, with some dirty hacks and extra care. Read on.

The language

I guess by now everyone agrees that Perl is a bad idea. Few people know it these days, and there is little reason left to learn it. The language is a minefield that lacks a proper error reporting mechanism or means to convey the semantics.

If you are new to it, look at these examples:

All "error reporting" enabled, let's try to divide a string by an integer.

$ perl -e 'use strict; use warnings; print "foobar" / 42'
Argument "foobar" isn't numeric in division (/) at -e line 1.

A warning indeed... It didn't prevent the program from producing a value though: garbage in, garbage out. And here's my long-time favorite (analogous issues have bitten me in real code a number of times):

$ perl -e 'print reverse "dog"'

This prints "dog", not "god": print supplies list context, so reverse reverses a one-element list instead of the string. Even if you know that it has to do with "list context", good luck finding information about the default context of this or that function in the docs. In short, if the language of VyOS 1.x weren't Perl, a lot of bugs would be outright impossible.

Python looks like a good candidate for config scripts: it's strongly typed, the type and object system is fairly expressive, there are nice unit test libraries, template processors, and other things, and it's reasonably fast. What I don't like about it, and dynamically typed languages in general, is that it needs damn good test coverage, because the set of errors it can detect at compile time is limited and a lot of errors make it to runtime. But there are always compromises.

But we need bindings. VyConf will use sockets and protobuf messages for its API, which makes writing bindings for pretty much any language trivial, but in 1.x.x it's more complicated. The C++/Perl library from the VyOS backend is not really easy to follow, and not trivial to produce bindings for. However, we have cli-shell-api, which is already used in config scripts written in shell, and it behaves as it should. It also produces fairly machine-friendly output, even though its error reporting is rudimentary (then again, the error reporting of the C++ and Perl library isn't all that nice either). So, for a proof of concept, I decided to make a thin wrapper around cli-shell-api; later it can be rewritten as a real C++ binding if this approach shows its limitations. That will need some C++ library logic extraction and cleanup to replicate the behaviour. (Why does the C++ library itself link against the Perl interpreter library? Did you know it also links against a specific version of the apt-pkg library, which was never meant for end users and made no promise of API stability, just for the version comparison function it uses to sort names of nodes like eth0? That's another story though.)

Anyway, I need to add the Python library to the vyatta-cfg package, which I'll do soon; for the time being, you can put the file on your VyOS (it works in 1.1.7 with python2.6) and play with it:

Right now it exposes just a handful of functions: exists(), return_value(), return_values(), and list_nodes(). It also has is_leaf/is_tag/is_multi functions that it uses internally to produce somewhat better error reporting, though they are unnecessary in config scripts, since you already know that about nodes from the templates. Those four functions are enough to write a config script for something like squid, dnsmasq, openvpn, or anything else that can reload its config on its own. It's programs that need fancy update logic that really need exists_orig or return_effective_value. Incidentally, a lot of the components that need a rewrite to repair, or that could seriously benefit from an overhaul, are like that. For example, iptables is currently handled by manipulating individual rules even though iptables-restore is atomic; likewise, openvpn is managed by passing it the config in command line options while it's perfectly capable of reloading its config, which would make tunnel restarts a lot less disruptive; and strongswan, the holder of the least maintainable config script, is indeed capable of live reload too.
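To make this concrete, here is a rough sketch of what such a thin wrapper could look like. It is illustrative only: the function names mirror the ones listed above, and the quoting format of the returnValues output is an assumption in this sketch, not a documented guarantee.

```python
import shlex
import subprocess

def _cli_shell_api(args):
    """Run cli-shell-api with the given arguments; return (exit code, stdout)."""
    p = subprocess.run(["cli-shell-api"] + args,
                       capture_output=True, text=True)
    return p.returncode, p.stdout.strip()

def exists(path):
    # cli-shell-api signals existence through its exit code, not output
    code, _ = _cli_shell_api(["exists"] + path.split())
    return code == 0

def return_value(path):
    code, out = _cli_shell_api(["returnValue"] + path.split())
    return out if code == 0 else None

def parse_values(out):
    # Assumption: returnValues prints values quoted and space-separated,
    # e.g.: '192.0.2.1' '192.0.2.2'
    return shlex.split(out)

def return_values(path):
    code, out = _cli_shell_api(["returnValues"] + path.split())
    return parse_values(out) if code == 0 else []
```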

Which brings us to the next part...

The conventions

To avoid having to do two rewrites of the same code instead of just one, we need to make sure that at least substantial part of the code from VyOS 1.2.x can be reused in 2.0. For this we need to setup a set of conventions. I suggest the following, and let's discuss it.

Language version

Python 3 SHALL be used.

Rationale: well, how much longer can we all keep 2.x alive if 3.0 is just a cleaner and nicer implementation?

Coding standard

No single function SHOULD be longer than 100 lines.

Rationale: https://github.com/vyos/vyatta-cfg-vpn/blob/current/scripts/vpn-config.pl#L449-L1134 ;)

Logic separation and testability

This is the most important part. To be able to reuse anything, we need to separate assumptions about the environment from the core logic. To be able to test it in isolation and make sure most bugs are caught on developer workstations rather than test routers, we need to avoid dependencies on global state whenever possible. Also, to fit the transactional commit model of VyOS 2.0 later, we need to keep consistency checking, generating configs, and restarting services separate.

For this, I suggest that config scripts follow this blueprint:

import sys

import vyos.config

def get_config():
    foo = vyos.config.return_value("foo bar")
    bar = vyos.config.return_value("baz quux")
    return {"foo": foo, "bar": bar} # Could be an object depending on size and complexity...

def verify(config):
    # do_some_checks, checks_succeed, and ScaryException are placeholders
    result = do_some_checks(config)
    if checks_succeed(result):
        return None
    else:
        raise ScaryException("Some error")

def generate(config):
    pass # Generate config files for the target application here

def apply(config):
    pass # Apply the new settings (reload/restart services) here

if __name__ == '__main__':
    try:
        c = get_config()
        verify(c)
        generate(c)
        apply(c)
    except ScaryException as e:
        print(e, file=sys.stderr)
        sys.exit(1)
This way, the functions that process the config can be tested outside of VyOS by building the same structure as get_config() would create, by hand (or from a file), and passing it as an argument. Likewise, in 2.0 we can call the verify(), generate(), and apply() functions separately.
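For instance, a verify() function following the blueprint could be exercised on a developer workstation like this. The checks themselves are made up for illustration; the point is that no VyOS environment is needed:

```python
import unittest

def verify(config):
    # Hypothetical check: this imaginary script requires "foo" to be set
    if not config.get("foo"):
        raise ValueError("foo must be set")

class TestVerify(unittest.TestCase):
    def test_valid_config(self):
        # Hand-built dict mimicking what get_config() would return
        self.assertIsNone(verify({"foo": "1500", "bar": "auto"}))

    def test_missing_foo(self):
        with self.assertRaises(ValueError):
            verify({"bar": "auto"})

# Run with: python3 -m unittest <module>
```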

Let me know what you think.

23 January, 2017 01:32PM by Daniil Baturin

Phabricator maintenance

We are working on the servers now, moving some things around again, and Phabricator is temporarily inaccessible. We'll let you know when it's resolved.

23 January, 2017 01:27PM by Daniil Baturin

VyOS 2.0 development digest #6: new beginner-friendly tasks, design questions, and the details of the config tree

The tasks

Both tasks from the previous post have been taken up and implemented by Phil Summers (thanks, Phil!). New tasks await.

The first task was very simple: the Reference_tree module needs functions for checking facts about nodes, analogous to is_multi. For config output, and for high-level set/delete/commit operations, we need easy ways to know whether a node is a tag node, a leaf node, or valueless, what component is responsible for it, etc. It can be done mostly by analogy with the is_multi function and its relatives, so it's friendly to complete beginners. But Phil Summers implemented it before I could make the post (thanks again, Phil!).

The second task is a little more involved, but still simple enough for anyone who started learning ML not long ago. It's about loading interface definitions from a directory. In VyOS, we may have a bunch of files in /usr/share/vyos/interfaces, such as firewall.xml, system.xml, ospf.xml, and so on, and we need to load them into the reference tree that is used for path validation, completion, etc.

Design questions

To give you some context, I'll remind you that the vyconf shell will not be bash-based, due to having to fork and modify bash (or any other UNIX shell) to get completion from the first word, and for a variety of other reasons. So, the first question: do you think we should use the vyconf shell, where you can enter VyOS configuration commands, as the login shell, or should we go for the JunOS-like approach where you log into a UNIX shell and then issue a command to enter the configuration shell? You can cast your vote here: https://phabricator.vyos.net/V2

The second question is more open-ended: we are going to print the config in the normal VyOS config syntax and as set commands, but what else should we support? Some considerations: since "show" will be part of the config API, it can be used by e.g. a web GUI to display the config. This means config output in XML or JSON could be useful. But which one, or perhaps both? We also need to decide what the XML and/or JSON should look like, since we can go for a generic schema that keeps node names in attributes, or we can use custom tags such as <interfaces> (but then every component would have to provide a schema).
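To illustrate the two options, here is roughly what the same fragment could look like in each shape, shown as Python dicts for brevity. Both layouts are hypothetical, not a committed schema:

```python
import json

# Generic schema: node names live in the data, so one fixed schema fits
# every component (the JSON analogue of keeping names in XML attributes).
generic = {
    "name": "interfaces",
    "children": [
        {"name": "ethernet",
         "children": [{"name": "eth0", "children": []}]}
    ],
}

# Custom tags: node names become keys (or XML elements), which reads
# nicely but means every component would have to provide its own schema.
custom = {"interfaces": {"ethernet": {"eth0": {}}}}

print(json.dumps(custom))  # → {"interfaces": {"ethernet": {"eth0": {}}}}
```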

Now, to the "long-awaited" details of the config tree...

23 January, 2017 01:27PM by Daniil Baturin

VyOS 2.0 development digest #7: Python coding guidelines for config scripts in 1.2.0, and config parser for VyConf

Python coding guidelines for 1.2.0

In a previous post, I talked about the Python wrapper for the config reading library. However, simply switching to a language that is not Perl will not automatically make that code easy to move to 2.0 when the backend is ready, nor will it automatically improve the design and architecture. That's why we have written down a policy for Python config scripts: following it will improve the code in general and help keep it readable and maintainable.

You can find the document here: http://wiki.vyos.net/wiki/Python_config_script_policy 

In short:

  • Logic for config validation, generating configs, and changing system settings/restarting services must be completely separated
  • For any configs that allow nesting (dhcpd.conf, ipsec.conf etc.), a template processor must be used (as opposed to string concatenation)
  • Functions should not randomly output anything to stdout/stderr
  • Code must be unit-testable

Config parser for VyConf/VyOS 2.0

Today I pushed an initial implementation of the new config lexer and parser. It already supports nodes and node comments, but doesn't yet support node metadata (which will be used to mark inactive and ephemeral nodes).

You can read the code (https://github.com/vyos/vyconf/blob/master/src/curly_lexer.mll and https://github.com/vyos/vyconf/blob/master/src/curly_parser.mly) and play with it by loading the .cma files into the REPL. The next step is to add a config renderer. Once the protobuf schema is ready, we can wrap it all into a daemon and finally have something to really play with, rather than just running the unit tests.

Informally, here's what I changed in the config syntax.

Old config

interfaces {
  /* WAN interface */
  ethernet eth0 {
    duplex auto
  }
}

New config

interfaces {
  ethernet {
    /* WAN interface */
    eth0 {
      address [192.0.2.1/24; 192.0.2.2/24];
      duplex auto;
      // This kind of comment is ignored by the parser
    }
  }
}

As you can see, the changes are:

  • Leaf nodes are now terminated by semicolons rather than newlines.
  • There is syntax for comments that are ignored by the parser
  • Multi nodes have the array of values in square brackets.
  • Tag nodes do not receive any special formatting.

I suppose the last change may be controversial, because it can lead to somewhat odd-looking constructs like:

interfaces {
  ethernet {
    eth0 {
      vif {
        21 {
          ...
        }
      }
    }
  }
}

If you are really going to miss the old approach to tag nodes (that is, "ethernet eth0 {" as opposed to "ethernet { eth0 { ..."), let me know, and I guess I can come up with something. The main difficulty is that, while this never occurs in configs that VyOS config save produces, different tag nodes, e.g. "interfaces ethernet" and "interfaces tunnel", can be intermingled, so for parsing we have to track which ones were already created, and this would make the parser code a lot longer.

I'm pretty convinced that "address; address" is simply visual clutter and JunOS-like square bracket syntax will make it cleaner. It also solves the aforementioned problem with interleaved nodes tracking for leaf nodes.

Let me know what you think about the syntax.

23 January, 2017 01:27PM by Daniil Baturin

VyOS 2.0 development digest #8: vote for or against the new tag node syntax, and the protobuf schema

Tag node syntax

The change in tag node format I introduced in the previous post turned out quite polarizing and started quite some discussion in the comments. I created a poll in phabricator for it: https://phabricator.vyos.net/V3 , please cast your vote there.

If you missed the post, or found the explanation confusing, here's what it's all about. Right now in config files we format tag nodes (i.e. nodes that can have children without predefined names, such as interfaces and firewall rules) differently from other nodes:

/* normal node */
interfaces {
  /* tag node */
  ethernet eth0 {
  /* tag node */
  ethernet eth1 {

It looks nice, but complicates the parser. What I proposed and implemented in the initial parser draft is to not use any custom formatting for tag nodes:

/* normal node */
interfaces {
  /* actually a tag node, but rendering is now the same as for normal */
  ethernet {
    eth0 {
    eth1 {

This makes the parser noticeably simpler, but makes the syntax more verbose and adds more newlines.

If more people vote against this change than for it, I'll take the time to implement the old tag node syntax in the parser.

Note: This change only affects the config syntax, and has no effect on the command syntax. The command for the example above would still be "set interfaces ethernet eth0 address", in user input and in the output of "show configuration commands". Tag nodes will also be usable as edit levels regardless of the config file syntax, as in "edit interfaces tunnel; copy tun0 to tun1".

Protobuf schema

Today I wrote an initial draft of the protobuf schema that VyConf daemon will use for communication with clients (shell, CLI tool, and HTTP bridge). You can find it here: https://github.com/vyos/vyconf/blob/master/data/vyconf.proto

Right now it defines the following operations:

23 January, 2017 01:27PM by Daniil Baturin

VyOS 2.0 development digest #9: socket communication functionality, complete parser, and open tasks

Socket communication

A long-awaited (by me, anyway ;) milestone: VyConf is now capable of communicating with clients. This allows us to write a simple non-interactive client. Right now the only supported operation is "status" (a keepalive of sorts), but the list will be growing.

I guess I should talk about the client before going into technical details of the protocol. The client will be way easier to use than what we have now. The two main problems with the CLI tools from VyOS 1.x are that my_cli_bin (the command used by set/delete operations) requires a lot of environment setup, and that cli-shell-api is limited in scope. Part of the reason for this is that my_cli_bin is used in the interactive shell. Since the interactive shell of VyConf will be a standalone program rather than a bash completion hack, we are free to make the non-interactive client more idiomatic as a shell command, closer in user experience to git or s3cmd.

This is what it will look like:

SESSION=$(vycli setupSession)
vycli --session=$SESSION configure
vycli --session=$SESSION set "system host-name vyos"
vycli --session=$SESSION delete "system name-server"
vycli --session=$SESSION commit
vycli --session=$SESSION exists "service dhcp-server"
vycli --session=$SESSION returnValue "system host-name"
vycli --session=$SESSION --format=json show "interfaces ethernet"

As you can see, first, the top level words are subcommands, much like "git branch". Since the set of top level words is fixed anyway, this doesn't create new limitations. Second, the same client can execute both high level set/delete/commit operations and low level exists/returnValue/etc. methods. Third, the only thing it needs to operate is a session token (I'm thinking that unless it's passed in the --session option, vycli should try to get it from an environment variable, but we'll see; let me know what you think about this issue). This way contributors will get an easy way to test the code even before the interactive shell is complete; and when VyOS 2.0 is usable, shell scripts and people fond of working from bash rather than the domain-specific shell will have access to all system functions, without worrying about intricate environment variable setup.
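The session token lookup being discussed (use --session if given, otherwise fall back to an environment variable) could be sketched like this in Python; the variable name VYCLI_SESSION is purely hypothetical, this is not actual vycli code:

```python
import argparse
import os

def resolve_session(argv=None):
    # --session wins; otherwise fall back to a (hypothetical) VYCLI_SESSION variable.
    parser = argparse.ArgumentParser(prog="vycli")
    parser.add_argument("--session")
    args, _rest = parser.parse_known_args(argv)
    session = args.session or os.environ.get("VYCLI_SESSION")
    if session is None:
        raise SystemExit("no session token: pass --session or set VYCLI_SESSION")
    return session

print(resolve_session(["--session", "abc123"]))
```

This mirrors how tools like git resolve settings: explicit flags first, then the environment.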

The protocol

As I already said in the previous post, VyConf uses Protobuf for serialized messages. Protobuf doesn't define any framing, however, so we have to come up with something. The most popular options are delimiters and length headers. The issue with delimiters is that you have to make sure they do not appear in user input, or you risk losing part of the message. Some programs choose to escape delimiters, others rely on unusual sequences; e.g. the backend of OPNsense uses three null bytes for it. Since Protobuf is a binary protocol, no sequence is unusual enough, so length headers look like the best option. VyConf uses 4-byte headers in network order, followed by a Protobuf message. This is easy enough to implement in any language, so it shouldn't be a problem when writing bindings for other languages.
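As an illustration of the framing described above (not VyConf's actual implementation), a length-prefixed exchange looks like this in Python; the 4-byte header is big-endian, i.e. network order:

```python
import io
import struct

def write_framed(stream, payload: bytes) -> None:
    # Prefix the serialized message with a 4-byte big-endian length header.
    stream.write(struct.pack(">I", len(payload)))
    stream.write(payload)

def read_framed(stream) -> bytes:
    # Read the 4-byte header first, then exactly that many payload bytes.
    header = stream.read(4)
    if len(header) < 4:
        raise EOFError("incomplete length header")
    (length,) = struct.unpack(">I", header)
    payload = stream.read(length)
    if len(payload) < length:
        raise EOFError("incomplete message body")
    return payload

# Round-trip through an in-memory buffer standing in for a socket.
buf = io.BytesIO()
write_framed(buf, b"\x08\x01")  # some serialized protobuf bytes
buf.seek(0)
print(read_framed(buf))
```

Because the header carries the exact length, no byte sequence inside the message needs escaping, which is exactly why this beats delimiters for a binary protocol.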

The code

There is a single client library that will be used by both the non-interactive client and the interactive shell. It will also serve as the OCaml bindings package for VyConf (Python and other languages will need their own bindings, but with Protobuf, most of it can be autogenerated).

Parser improvements

Inactive and ephemeral nodes

The curly config parser is now complete. It supports the inactive and ephemeral properties. This is what a config with those will look like:

protocols {
  static {
    /* Inserted by a fail2ban-like script */
    #EPHEMERAL route {
    /* Disabled by admin */
    #INACTIVE route {

While I'm not sure if there are valid use cases for it, nodes can be inactive and ephemeral at the same time. Deactivating an ephemeral node that was created by a script, perhaps? Anyway, since both are part of the config format that the "show" command will produce, we get to support both in the parser too.
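The #INACTIVE and #EPHEMERAL markers can be recognised with a check like this; a toy illustration of the format in Python, not the OCaml lexer:

```python
def node_properties(line: str):
    # Strip leading #INACTIVE / #EPHEMERAL markers and report which were present.
    inactive = ephemeral = False
    rest = line.strip()
    while True:
        if rest.startswith("#INACTIVE"):
            inactive, rest = True, rest[len("#INACTIVE"):].lstrip()
        elif rest.startswith("#EPHEMERAL"):
            ephemeral, rest = True, rest[len("#EPHEMERAL"):].lstrip()
        else:
            return inactive, ephemeral, rest

print(node_properties("#EPHEMERAL route {"))  # (False, True, 'route {')
```

The loop means the two markers can appear together in either order, matching the "inactive and ephemeral at the same time" case above.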

Multi nodes

By multi nodes I mean nodes that may have more than one value, such as "address" in interfaces. As you remember, I suggested and implemented a new syntax for such nodes:

interfaces {
  ethernet eth0 {
    address [192.0.2.1/24; 192.0.2.2/24];

However, the parser now supports the original syntax too, that is:

interfaces {
  ethernet eth0 {
    address 192.0.2.1/24;
    address 192.0.2.2/24;

I didn't intend to support it originally, but another edge case prompted me to add it. For config read operations to work correctly, every path in the tree must be unique. The high level Config_tree.set function maintains this invariant, but the parser gets to use lower level primitives that do not, so if a user creates a config with duplicate nodes, e.g. by careless pasting, the config tree that the parser returns will have them too, and we get to detect such situations and do something about it. Configs with duplicate tag nodes (e.g. "ethernet eth0 { ... } ethernet eth0 { ... }") are rejected as incorrect, since there is no way to recover from this. Multiple non-leaf nodes with distinct children (e.g. "system { host-name vyos; } system { name-server; }") can be merged cleanly, so I've added some code that merges them by moving children of subsequent nodes under the first one and removing the extra nodes afterwards. However, since in the raw config there is no real distinction between leaf and non-leaf nodes, in the case of leaf nodes that code would simply remove all but the first. I've extended it to also move values into the first node, which effectively adds support for the old syntax, except that node comments and inactive/ephemeral properties will be inherited from the first node. Then again, this is how the parser in VyOS 1.x behaves, so nothing is lost.
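The merge behaviour described above can be sketched on a toy tree (nested dicts for non-leaf nodes, lists of values for leaf nodes); this illustrates the rules, it is not VyConf's Config_tree code:

```python
def merge_duplicates(nodes):
    """Merge a list of (name, child) pairs so each node name is unique.

    Non-leaf duplicates (dict children) get their children merged under the
    first occurrence (one level here; a full implementation would recurse);
    leaf duplicates (lists of values) get their values moved into the first.
    """
    merged = {}
    for name, child in nodes:
        if name not in merged:
            merged[name] = child
        elif isinstance(child, dict):
            # Duplicate non-leaf node: merge its children under the first one.
            merged[name].update(child)
        else:
            # Duplicate leaf node: move its values into the first one.
            merged[name].extend(child)
    return merged

# "system { host-name vyos; } system { name-server ...; }" merges cleanly:
tree = merge_duplicates([
    ("system", {"host-name": ["vyos"]}),
    ("system", {"name-server": ["192.0.2.53"]}),
])
print(tree)
```

The leaf case is also why the old "address x; address y;" syntax parses into the same tree as the new "address [x; y];" form.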

While the show command in VyOS 2.0 will always use the new syntax with curly brackets, the parser will not break the principle of least astonishment for people used to the old one. Also, if we decide to write a migration utility for converting 1.x configs to 2.0, we'll be able to reuse the parser, perhaps after adding semicolons to the old config with a simple regular expression.


Node names and unquoted values can now contain any characters that are not reserved, that is, anything but whitespace, curly braces, square brackets, and semicolons.

What's next?

Next I'm going to work on adding low level config operations (exists/returnValue/...) and set commands so that we can do some real life tests.

There's a bunch of open tasks if you want to join the development:

T254 is about preventing nodes with reserved characters in their names early in the process, at "set" time. There's a rather nasty bug in VyOS 1.1.7 related to this: you can pass a quoted node name with spaces to set, and if there is no validation rule attached to the node, as is the case with "vpn l2tp remote-access authentication local-users", the node will be created. It will fail to parse correctly after you save and reload the config. We'll fix it in 1.2.0 of course, but we also need to prevent it from ever appearing in 2.0.
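The early validation T254 calls for amounts to a simple check against the reserved set named earlier (whitespace, curly braces, square brackets, semicolons); a sketch of the idea, not the actual VyOS code:

```python
import re

# Reserved in the curly config syntax: whitespace, { } [ ] and ;
RESERVED = re.compile(r"[\s{}\[\];]")

def validate_node_name(name: str) -> None:
    # Reject empty names and names containing reserved characters at set time,
    # before they can ever reach the config tree.
    if not name or RESERVED.search(name):
        raise ValueError(f"invalid node name: {name!r}")

validate_node_name("eth0")          # fine
try:
    validate_node_name("bad name")  # space is reserved: rejected early
except ValueError as e:
    print(e)
```

Rejecting such names at set time is what keeps the saved config parseable on reload.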

T255 is about adding the curly config renderer. While we can use the JSON serializer for testing right now, the usual format is also just easier on the eyes, and it's a relatively simple task too.

23 January, 2017 01:27PM by Daniil Baturin

hackergotchi for Ubuntu developers

Ubuntu developers

Ubuntu Insights: What IT Pros Need to Know about Server Provisioning

Big Software, IoT and Big Data are changing how organisations are architecting, deploying, and managing their infrastructure. Traditional models are being challenged and replaced by software solutions that are deployed across many environments and many servers. However, no matter what infrastructure you have, there are bare metal servers under it, somewhere.

Organisations are looking for more efficient ways to balance their hardware and infrastructure investments with the efficiencies of the cloud. Canonical’s MAAS (Metal As A Service) is such a technology. MAAS is designed for devops at scale, in places where bare metal is the best way to run your applications. Big data, private cloud, PAAS and HPC all thrive on MAAS. Hardware has always been an expensive and difficult resource to deploy within a data centre, but is unfortunately still a major consideration for any organisation moving all or part of their infrastructure to the cloud. To become more cost-effective, many organisations hire teams of developers to cobble together software solutions that solve functional business challenges while leveraging existing legacy hardware in the hopes of offsetting the need to buy and deploy more hardware-based solutions.

MAAS isn’t a new concept, but demand and adoption rates are growing because many enterprises want to combine the flexibility of cloud services with the raw power of bare metal servers to run high-power, scalable workloads. For example, when a new server needs to be deployed, MAAS automates most, if not all, of the provisioning process. Automation makes deploying solutions much quicker and more efficient because it allows tedious tasks to be performed faster and more accurately without human intervention. Even with proper and thorough documentation, manually deploying a server to run web services or Hadoop, for example, could take hours, compared to a few minutes with MAAS.

Forward thinking companies are leveraging server provisioning to combine the flexibility of the cloud with the power and security of hardware. For example:

  • High Performance Computing organisations are using MAAS to modernise how they deploy and allocate servers quickly and efficiently.
  • Smart data centres are using MAAS to put their servers to multiple uses, improving efficiency and ensuring servers do not go underutilised.
  • Hybrid cloud providers leverage MAAS to provide extra server support during peak demand times and between various public cloud providers.

This ebook: Server Provisioning: What Network Admins & IT Pros Need to Know outlines how innovative companies are leveraging MAAS to get more out of their hardware investment while making their cloud environments more efficient and reliable. Smart IT pros know that going to the cloud does not mean having to rip and replace their entire infrastructure to take advantage of the opportunities the cloud offers. Canonical’s MAAS is a mature solution to help organisations to take full advantage of their cloud and legacy hardware investments.

Get started with MAAS

To download and install MAAS for free please visit ubuntu.com/download/server-provisioning or to talk to one of our scale-out experts about deploying MAAS in your datacenter contact us. For more information please download our free eBook on MAAS.

Download eBook

23 January, 2017 11:00AM

hackergotchi for Serbian GNU/Linux

Serbian GNU/Linux

Serbian 2017 KDE is available

A new version of the Serbian GNU/Linux 2017 operating system, with the KDE graphical environment, is available for download. The visual design of the new release is dedicated to the rich literary work of Vasko Popa. Serbian comes configured for Cyrillic, and Latin script can be selected through the system settings, as can the Ijekavian variant for both scripts. The distribution is based on Debian (Stretch), still in its testing phase, with the KDE graphical environment.

Serbian 2017, like the previous three releases, is intended for all users who want an operating system in the Serbian language. It is also meant as a possible choice for current users of proprietary operating systems, as well as for users who cannot configure everything themselves and have so far used Linux distributions regarded as more user-friendly. Additional screenshots can be viewed here.

In addition to the usual programs that come with the KDE graphical environment, the new release includes a collection of programs that will let users carry out their tasks well. All preinstalled programs are translated into Serbian. Kernel 4.9.1 is used; compared to the previous version, support for external devices has been improved and a few applications have been replaced, so the current selection looks like this:

If you are new to Linux, the installation process is simple and takes less than 10 minutes, and here you can read how to prepare the installation media. The graphical installer defaults to Cyrillic, while keyboard handling will be in Latin script. If you have not yet seen what the installation looks like, you can view it in pictures, and video material is also available. After installation, Serbian takes up slightly less than 5 GB, so when partitioning it is advisable to allocate 10 to 15 GB for comfortable use.


Once the freshly installed system boots, take a look in the Documents folder, where a few tips are written down. Most importantly, the keyboard layout is switched with the Ctrl+Shift shortcut, with the options la, ћи, and en configured. Also, the system is set up so that admin access is obtained with a command such as kdesu dolphin or kdesu systemsettings5, and for GTK applications gksu synaptic. The desktop effects configured by default are triggered with the F10, F11 and F12 keys.

Finally, thanks to all readers of these lines, to users who have or will have Serbian on their computer, and to all media outlets and individuals who have contributed to popularising an operating system in the Serbian language. If anyone is interested in helping with promotion, banners are available for that purpose.

23 January, 2017 09:50AM by Debian Srbija (noreply@blogger.com)

hackergotchi for Ubuntu developers

Ubuntu developers

Jorge Castro: Canonical Distribution of Kubernetes - Release 1.5.2

We’re proud to announce support for Kubernetes 1.5.2 in the Canonical Distribution of Kubernetes. This is a pure upstream distribution of Kubernetes, designed to be easily deployable to public clouds, on-premise (e.g. vSphere, OpenStack), bare metal, and developer laptops. Kubernetes 1.5.2 is a patch release comprised mostly of bugfixes, and we encourage you to check out the release notes.

Getting Started:

Here’s the simplest way to get a Kubernetes 1.5.2 cluster up and running on an Ubuntu 16.04 system:

sudo apt-add-repository ppa:juju/stable
sudo apt-add-repository ppa:conjure-up/next
sudo apt update
sudo apt install conjure-up
conjure-up kubernetes

During the installation conjure-up will ask you what cloud you want to deploy on and prompt you for the proper credentials. If you’re deploying to local containers (LXD) see these instructions for localhost-specific considerations.

For production grade deployments and cluster lifecycle management it is recommended to read the full Canonical Distribution of Kubernetes documentation.

Home page: https://jujucharms.com/canonical-kubernetes/

Source code: https://github.com/juju-solutions/bundle-canonical-kubernetes

How to upgrade

With your kubernetes model selected, you can deploy the bundle to upgrade your cluster if you are on the 1.5.x series of Kubernetes. Releases before 1.5.x have not been tested at this time. Depending on which bundle you have previously deployed, run:

    juju deploy canonical-kubernetes

or:

    juju deploy kubernetes-core

If you have made tweaks to your deployment bundle, such as deploying additional worker nodes as a different label, you will need to manually upgrade the components. The following command list assumes you have made no tweaks, but can be modified to work for your deployment.

juju upgrade-charm kubernetes-master
juju upgrade-charm kubernetes-worker
juju upgrade-charm etcd
juju upgrade-charm flannel
juju upgrade-charm easyrsa
juju upgrade-charm kubeapi-load-balancer

This will upgrade the charm code and the resources to the Kubernetes 1.5.2 release of the Canonical Distribution of Kubernetes.

New features:

  • Full support for Kubernetes v1.5.2.

General Fixes

  • #151 #187 It wasn’t very transparent to users that they should be using conjure-up when developing locally; conjure-up is now the de facto default mechanism for deploying the CDK.

  • #173 Resolved permissions on ~/.kube on kubernetes-worker units

  • #169 Tuned the verbosity of the AddonTacticManager class during charm layer build process

  • #162 Added NO_PROXY configuration to prevent routing all requests through configured proxy [by @axinojolais]

  • #160 Resolved an error by flannel sometimes encountered during cni-relation-changed [by @spikebike]

  • #172 Resolved sporadic timeout issues between worker and apiserver due to nginx connection buffering [by @axinojolais]

  • #101 Work-around for offline installs attempting to contact pypi to install docker-compose

  • #95 Tuned verbosity of copy operations in the debug script for debugging the debug script.

Etcd layer-specific changes

  • #72 #70 Resolved a certificate-relation error where etcdctl would attempt to contact the cluster master before services were ready [by @javacruft]

Unfiled/un-scheduled fixes:

  • #190 Removal of assembled bundles from the repository. See bundle author/contributors notice below

Additional Feature(s):

  • We’ve open sourced our release management process scripts we’re using in a juju deployed jenkins model. These scripts contain the logic we’ve been running by hand, and give users a clear view into how we build, package, test, and release the CDK. You can see these scripts in the juju-solutions/kubernetes-jenkins repository. This is early work, and will continue to be iterated on / documented as we push towards the Kubernetes 1.6 release.

Notice to bundle authors and contributors:

The fix for #190 is a larger change that has landed in the bundle-canonical-kubernetes repository. Instead of maintaining several copies of a single use-case bundle across several repositories, we are now assembling the CDK based bundles as fragments (unofficial nomenclature).

This affords us the freedom to rapidly iterate on a CDK based bundle and include partner technologies, such as different SDN vendors, storage backend components, and other integration points. It keeps our CDK bundle succinct while allowing more complex solutions to be assembled easily, reliably, and repeatedly. This does change the contribution guidelines for end users.

Any changes to the core bundle should be placed in its respective fragment under the fragments directory. Once this has been placed/merged, the primary published bundles can be assembled by running ./bundle in the root of the repository. This process has been outlined in the repository README.md

We look forward to any feedback on how opaque/transparent this process is, and if it has any useful applications outside of our own release management process. The ./bundle python script is still very much geared towards our own release process, and how to assemble bundles targeted for the CDK. However we’re open to generalizing them and encourage feedback/contributions to make this more useful to more people.

How to contact us:

We’re normally found in these Slack channels and attend these sig meetings regularly:

Operators are an important part of Kubernetes, we encourage you to participate with other members of the Kubernetes community!

We also monitor the Kubernetes mailing lists and other community channels, feel free to reach out to us. As always, PRs, recommendations, and bug reports are welcome: https://github.com/juju-solutions/bundle-canonical-kubernetes

23 January, 2017 08:30AM

January 22, 2017

hackergotchi for siduction


New fast siduction mirror in the US

As of today we are happy to share with you a new mirror in the United States. It is located at Princeton University and should be our fastest mirror in the US.
The URLs are:

  • Princeton University

    deb https://mirror.math.princeton.edu/pub/siduction/extra unstable main
    deb-src https://mirror.math.princeton.edu/pub/siduction/extra unstable main
    deb https://mirror.math.princeton.edu/pub/siduction/fixes unstable main contrib non-free
    deb-src https://mirror.math.princeton.edu/pub/siduction/fixes unstable main contrib non-free

  • Direct links are to be found on our website.
    Please let us know if any problems arise.

    22 January, 2017 09:09AM by Ferdinand Thommes

    hackergotchi for SparkyLinux



    There is a new app available in Sparky repos: QMPlay2.

    From Softpedia:

    If you are looking for a lightweight media player that can also help you listen to online radio stations and download videos from the Internet, you should try QMPlay2.
    It is a simple application, designed with a fully customizable interface, supports numerous video and audio formats, can play online audio streams and enables you to save clips from YouTube.

    It can play all formats supported by FFmpeg and libmodplug (including J2B and SFX). It also supports Audio CD, raw files, Rayman 2 music and chiptunes. It includes a YouTube and Prostopleer browser. The application is created by Błażej Szczygieł.

    sudo apt-get update
    sudo apt-get install qmplay2



    22 January, 2017 12:02AM by pavroo

    January 21, 2017

    hackergotchi for rescatux


    Super Grub2 Disk 2.02s6 downloads

    Recommended download (Floppy, CD & USB in one) (Valid for i386, x86_64, i386-efi and x86_64-efi):

    Super Grub2 Disk 2.01 rc2 Main Menu






    EFI x86_64 standalone version:

    EFI i386 standalone version:

    CD & USB in one downloads:

    About other downloads. As this is the first time I build Super Grub2 Disk out of source code (well, probably not the first time, but the first time in ages) I have not been able to build these other downloads: coreboot, i386-efi, i386-pc, ieee1275, x86_64-efi, standalone coreboot, standalone i386-efi, standalone ieee1275. bfree has helped on this matter and with his help we might have those builds in the next releases. If you want such builds, drop a mail on the mailing list so that we are aware of that need.


    Source code:

    Everything (All binary releases and source code):


    In order to check the former downloads you can either check the download directory page for this release

    or you can check checksums right here:


    md5sum:

    94f600567ccf71d5984db94cd7755578  super_grub2_disk_2.02s6_source_code.tar.gz
    de3edf82f9dc69f5a7edd105841be040  super_grub2_disk_hybrid_2.02s6.iso
    bae56cb9c8dc741c198e6850df2c3f64  super_grub2_disk_i386_efi_2.02s6.iso
    367250d7dbd99bc28de18a747e371533  super_grub2_disk_i386_pc_2.02s6.iso
    210669a6675aa64f7a12c0f270ee1526  super_grub2_disk_standalone_i386_efi_2.02s6.EFI
    1f07c7333f16a390acd6c2024502ec06  super_grub2_disk_standalone_x86_64_efi_2.02s6.EFI
    ad370d006a6dbfe06ae05492c5157424  super_grub2_disk_x86_64_efi_2.02s6.iso


    sha1sum:

    515448774c79864c5bd8423985cca495ed45f6b5  super_grub2_disk_2.02s6_source_code.tar.gz
    30f21c486b0874dd708dd28d03651074ac776867  super_grub2_disk_hybrid_2.02s6.iso
    ac7c3e0627119a6c0383c7f191d68b2408ec213a  super_grub2_disk_i386_efi_2.02s6.iso
    32f9aebfb61c1e1c5939ecb91d562a212ba408dd  super_grub2_disk_i386_pc_2.02s6.iso
    8ba74bbb1b6c262b4a0e9d6cef334227f68362f6  super_grub2_disk_standalone_i386_efi_2.02s6.EFI
    aca02a3d0186649d69505688f1d6613cd12fbcfa  super_grub2_disk_standalone_x86_64_efi_2.02s6.EFI
    a118644da399852e9e95257c669963dc35ba2170  super_grub2_disk_x86_64_efi_2.02s6.iso


    sha256sum:

    a387b4f0dbde79779ab258079ca3b7ec9b929aa218ee42a9e31f33dfd1c1df2c  super_grub2_disk_2.02s6_source_code.tar.gz
    06d8ad3c3faaeb3ad24f28c42774a6bbb4aa80cd1bf64f7f2a0cfe3b161d5b44  super_grub2_disk_hybrid_2.02s6.iso
    9ac3e38279871d922f3d23bf2a749dd1b331256c70093a9ab9cd8a9088a9c012  super_grub2_disk_i386_efi_2.02s6.iso
    e08e228bd5f27ef8252759eb400ba3b4e77e3b2da843a1a03e45c380242275a6  super_grub2_disk_2.02s6/super_grub2_disk_i386_pc_2.02s6.iso
    a445505c66b18d582311a1a4b2c4f15d4fd1140273a6f25966539ca5f9f92dc5  super_grub2_disk_2.02s6/super_grub2_disk_standalone_i386_efi_2.02s6.EFI
    c7cf206231fe3c07771a2891eb2f24bba02495937eaac84dae26262f648a2161  super_grub2_disk_2.02s6/super_grub2_disk_standalone_x86_64_efi_2.02s6.EFI
    e3cbf3e63f3cc5a102affb54a567816c2874aaf03e356485cd42a333b2a9460c  super_grub2_disk_2.02s6/super_grub2_disk_x86_64_efi_2.02s6.iso
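A quick way to check one of the downloads against the MD5 list above, using hashlib from the Python standard library (the filename is just one of the images listed; md5sum or sha256sum from coreutils works equally well):

```python
import hashlib
import os

def md5_of(path: str) -> str:
    h = hashlib.md5()
    with open(path, "rb") as f:
        # Hash in chunks so large ISO images don't need to fit in memory.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

iso = "super_grub2_disk_hybrid_2.02s6.iso"
expected = "de3edf82f9dc69f5a7edd105841be040"  # from the md5sum list above
if os.path.exists(iso):
    print("OK" if md5_of(iso) == expected else "MISMATCH")
```

The same function works for the SHA-1 and SHA-256 lists by swapping hashlib.md5 for hashlib.sha1 or hashlib.sha256.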


    21 January, 2017 09:49PM by adrian15

    Super Grub2 Disk 2.02s6 released

    Super Grub2 Disk 2.02s6 stable is here.

    Super GRUB2 Disk is a live cd that helps you to boot into most any Operating System (OS) even if you cannot boot into it by normal means.

    A new stable release

    The former Super Grub2 Disk stable release was the 2.02s5 version, released in October 2016 (3 months ago). New features and changes since the previous stable version are:

    • Added Russian language
    • Improved Arch Linux initramfs detection
    • Added i386-efi build support
      Most of you won’t need this image. There are very few machines which specifically need i386-efi boot.
    • Added i386-efi to the hybrid iso
      Now the hybrid iso is even more powerful by being able to autodetect i386-efi and load its modules.
    • Grub itself is translated when a language is selected.
      That means that strings such as “Use the up and down keys to select …” from grub itself will also be translated into your own language (if upstream Grub2 supports it)
    • Added loopback.cfg file (non officially supported)
      Some people would like to be able to chainload into Super Grub2 Disk scripts from another Super Grub2 Disk or a custom Grub2 Disk. Now you can do it by running configfile on our loopback.cfg. However, this way of working is not officially supported, so do not ask for support if it does not work as you expect.
    Super Grub2 Disk 2.02s5 – Detect and show boot methods in action

    Below we cover the complete Super Grub2 Disk feature set with a demo video, where you can download it, the thank-you hall of fame, and some thoughts about Super Grub2 Disk development.

    Please do not forget to read our howtos so that you can have step-by-step guides (how to make a CD-ROM or a USB, how to boot from it, etc.) on how to use Super Grub2 Disk and, if needed, Rescatux.

    Super Grub2 Disk 2.02s4 main menu


    Here is a little video tour covering most of the Super Grub2 Disk options. The rest of the options you will have to discover by yourself.


    Most of the features here will let you boot into your Operating Systems. The rest of the options will improve Super Grub2 Disk's operating system autodetection (enabling RAID, LVM, etc.) or will deal with minor aspects of the user interface (colours, language, etc.).

    • Change the language UI
    • Translated into several languages
      • Spanish / Español
      • German / Deutsch
      • French / Français
      • Italian / Italiano
      • Malay / Bahasa Melayu
      • Russian
    Super Grub2 Disk 2.01 rc2 Spanish Main Menu
    • Detect and show boot methods option to detect most Operating Systems
    Super Grub2 Disk 2.01 beta 3 – Everything menu making use of the grub.cfg extract entries option functionality
    • Enable all native disk drivers *experimental* to detect most Operating Systems also in special devices or filesystems
    • Boot manually
      • Operating Systems
      • grub.cfg – Extract entries

        Super Grub2 Disk 2.01 beta 3 grub.cfg Extract entries option
      • grub.cfg – (GRUB2 configuration files)
      • menu.lst – (GRUB legacy configuration files)
      • core.img – (GRUB2 installation (even if mbr is overwritten))
      • Disks and Partitions (Chainload)
      • Bootable ISOs (in /boot-isos or /boot/boot-isos)
    • Extra GRUB2 functionality
      • Enable GRUB2’s LVM support
      • Enable GRUB2’s RAID support
      • Enable GRUB2’s PATA support (to work around BIOS bugs/limitation)
      • Mount encrypted volumes (LUKS and geli)
      • Enable serial terminal
    • Extra Search functionality
      • Search in floppy ON/OFF
      • Search in CDROM ON/OFF
    • List Devices / Partitions
    • Color ON /OFF
    • Exit
      • Halt the computer
      • Reboot the computer

    Supported Operating Systems

    Excluding overly customised kernels from university students, Super Grub2 Disk can autodetect and boot almost every Operating System. Some examples are listed here so that Google bots can see them, and also to reassure the end user searching for his own special (according to him) Operating System.

    • Windows
      • Windows 10
      • Windows Vista/7/8/8.1
      • Windows NT/2000/XP
      • Windows 98/ME
      • MS-DOS
      • FreeDOS
    • GNU/Linux
      • Direct Kernel with autodetected initrd
        Super Grub2 Disk – Detect any Operating System – Linux kernels detected
        • vmlinuz-*
        • linux-*
        • kernel-genkernel-*
      • Debian / Ubuntu / Mint
      • Mageia
      • Fedora / CentOS / Red Hat Enterprise Linux (RHEL)
      • openSUSE / SUSE Linux Enterprise Server (SLES)
      • Arch
      • And many, many more.
    • FreeBSD
      • FreeBSD (single)
      • FreeBSD (verbose)
      • FreeBSD (no ACPI)
      • FreeBSD (safe mode)
      • FreeBSD (Default boot loader)
    • EFI files
    • Mac OS X/Darwin 32bit or 64bit
    Super Grub2 Disk 2.00s2 rc4 Mac OS X entries (Image credit: Smx)

    Support for different hardware platforms

    Before this release we only had the hybrid version, aimed at regular PCs. Now, with the upcoming EFI based machines, you have the EFI standalone versions among others. What we don’t support is booting when secure boot is enabled.

    • Most any PC thanks to hybrid version (i386, x86_64, i386-efi, x86_64-efi) (ISO)
    • EFI x86_64 standalone version (EFI)
    • EFI i386 standalone version (EFI)
    • Additional Floppy, CD and USB in one download (ISO)
      • i386-pc
      • i386-efi
      • x86_64-efi

    Known bugs

    • Non English translations are not completed
    • Enable all native disk drivers *experimental* crashes Virtualbox randomly

    Supported Media

    • Compact Disk – Read Only Memory (CD-ROM) / DVD
    • Universal Serial Bus (USB) devices
    • Floppy (1.98s1 version only)


    Recommended download (Floppy, CD & USB in one) (Valid for i386, x86_64, i386-efi and x86_64-efi):

    Super Grub2 Disk 2.01 rc2 Main Menu






    EFI x86_64 standalone version:

    EFI i386 standalone version:

    CD & USB in one downloads:

    About other downloads: as this is the first time in ages that I have built Super Grub2 Disk from source code, I have not been able to build these other downloads: coreboot, i386-efi, i386-pc, ieee1275, x86_64-efi, standalone coreboot, standalone i386-efi, standalone ieee1275. bfree has helped on this matter, and with his help we might have those builds in the next releases. If you want such builds, drop a mail on the mailing list so that we are aware of that need.


    Source code:

    Everything (All binary releases and source code):


    To verify the downloads above, you can either check the download directory page for this release

    or you can check the checksums right here (MD5, SHA1 and SHA256, respectively):


    94f600567ccf71d5984db94cd7755578  super_grub2_disk_2.02s6_source_code.tar.gz
    de3edf82f9dc69f5a7edd105841be040  super_grub2_disk_hybrid_2.02s6.iso
    bae56cb9c8dc741c198e6850df2c3f64  super_grub2_disk_i386_efi_2.02s6.iso
    367250d7dbd99bc28de18a747e371533  super_grub2_disk_i386_pc_2.02s6.iso
    210669a6675aa64f7a12c0f270ee1526  super_grub2_disk_standalone_i386_efi_2.02s6.EFI
    1f07c7333f16a390acd6c2024502ec06  super_grub2_disk_standalone_x86_64_efi_2.02s6.EFI
    ad370d006a6dbfe06ae05492c5157424  super_grub2_disk_x86_64_efi_2.02s6.iso


    515448774c79864c5bd8423985cca495ed45f6b5  super_grub2_disk_2.02s6_source_code.tar.gz
    30f21c486b0874dd708dd28d03651074ac776867  super_grub2_disk_hybrid_2.02s6.iso
    ac7c3e0627119a6c0383c7f191d68b2408ec213a  super_grub2_disk_i386_efi_2.02s6.iso
    32f9aebfb61c1e1c5939ecb91d562a212ba408dd  super_grub2_disk_i386_pc_2.02s6.iso
    8ba74bbb1b6c262b4a0e9d6cef334227f68362f6  super_grub2_disk_standalone_i386_efi_2.02s6.EFI
    aca02a3d0186649d69505688f1d6613cd12fbcfa  super_grub2_disk_standalone_x86_64_efi_2.02s6.EFI
    a118644da399852e9e95257c669963dc35ba2170  super_grub2_disk_x86_64_efi_2.02s6.iso


    a387b4f0dbde79779ab258079ca3b7ec9b929aa218ee42a9e31f33dfd1c1df2c  super_grub2_disk_2.02s6_source_code.tar.gz
    06d8ad3c3faaeb3ad24f28c42774a6bbb4aa80cd1bf64f7f2a0cfe3b161d5b44  super_grub2_disk_hybrid_2.02s6.iso
    9ac3e38279871d922f3d23bf2a749dd1b331256c70093a9ab9cd8a9088a9c012  super_grub2_disk_i386_efi_2.02s6.iso
    e08e228bd5f27ef8252759eb400ba3b4e77e3b2da843a1a03e45c380242275a6  super_grub2_disk_2.02s6/super_grub2_disk_i386_pc_2.02s6.iso
    a445505c66b18d582311a1a4b2c4f15d4fd1140273a6f25966539ca5f9f92dc5  super_grub2_disk_2.02s6/super_grub2_disk_standalone_i386_efi_2.02s6.EFI
    c7cf206231fe3c07771a2891eb2f24bba02495937eaac84dae26262f648a2161  super_grub2_disk_2.02s6/super_grub2_disk_standalone_x86_64_efi_2.02s6.EFI
    e3cbf3e63f3cc5a102affb54a567816c2874aaf03e356485cd42a333b2a9460c  super_grub2_disk_2.02s6/super_grub2_disk_x86_64_efi_2.02s6.iso
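    As a minimal sketch of how these checksum lists are used (shown with a stand-in file instead of the real ISO, so it runs anywhere):

    ```shell
    # Stand-in demonstration of the verification workflow: save the published
    # sums to a file, then run the matching checksum tool with -c. With a real
    # download you would put the SHA256 lines above into sha256sums.txt next
    # to the downloaded .iso instead of generating the file locally.
    printf 'demo contents' > image.iso       # stand-in for the real ISO
    sha256sum image.iso > sha256sums.txt     # stand-in for the published list
    sha256sum -c sha256sums.txt              # prints "image.iso: OK" on success
    ```

    md5sum -c and sha1sum -c work the same way with the MD5 and SHA1 lists.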

    Changelog (since former 2.00s2 stable release)

    Changes since 2.02s5 version:

    • Added Russian language
    • Improved Arch Linux initramfs detection
    • Added i386-efi build support
    • Added i386-efi to the hybrid iso
    • Grub itself is translated when a language is selected
    • Added loopback.cfg file (non officially supported)
    • (Devel) sgrub.pot updated to latest strings
    • (Devel) Added grub-build-004-make-check so that we ensure the build works
    • (Devel) Make sure linguas.sh is built when running ‘grub-build-002-clean-and-update’
    • (Devel) Updated upstream Super Grub2 Disk repo on documentation
    • (Devel) Move core supergrub menu under menus/sgd
    • (Devel) Use sg2d_directory as the base super grub2 disk directory variable
    • (Devel) New supergrub-sourcecode script that creates current git branch source code tar.gz
    • (Devel) New supergrub-all-zip-file script: Makes sure a zip file of everything is built.
    • (Devel) supergrub-meta-mkrescue: Build everything into releases directory in order to make source code more clean.
    • (Devel) New supergrub-official-release script: Build main files, source code and everything zip file from a single script in order to ease official Super Grub
      2 Disk releases.

    Changes since 2.02s4 version:

    • Stop trying to chainload devices under UEFI and improve the help people get in the case of a platform mismatch
    • (Devel) Properly support source based built grub-mkfont binary.
    • New options were added to chainload directly either /ntldr or /bootmgr thanks to ntldr command. They only work in BIOS mode.

    Changes since 2.02s3 version:

    • Using upstream grub-2.02-beta3 tag as the new base for Super Grub2 Disk’s grub.
    • Major improvement in Windows OS detection (based on BCD) Windows Vista, 7, …
    • Major improvement in Windows OS detection (based on ntldr) Windows XP, 2000, …

    Changes since 2.02s2 beta 1 version:

    • (Devel) grub-mkstandalone was deleted because we no longer use it
    • Updated (and added) Copyright notices for 2015
    • New option: ‘Disks and Partitions (Chainload)’ adapted from Smx work
    • Many files were rewritten so that they only loop between devices that actually need to be searched into.
      This enhancement will make Super Grub2 Disk faster.
    • Remove Super Grub2 Disk own devices from search by default. Added an option to be able to enable/disable the Super Grub2 Disk own devices search.

    2.02s2 beta 1 changelog:

    • Updated grub 2.02 build to commit: 8e5bc2f4d3767485e729ed96ea943570d1cb1e45
    • Updated documentation for building Super Grub2 Disk
    • Improvement on upstream grub (d29259b134257458a98c1ddc05d2a36c677ded37 – test: do not stop after first file test or closing bracket) will probably make Super Grub2 Disk run faster.
    • Added new grub build scripts so that Super Grub2 Disk uses its own built versions of grub and not the default system / distro / chroot one.
    • Ensure that Mac OS X entries are detected ok thanks to Users dir. This is because Grub2 needs to emulate Mac OS X kernel so that it’s detected as a proper boot device on Apple computers.
    • Thanks to an upstream grub improvement, Super Grub2 Disk now supports booting in EFI mode when booted from a USB device / hard disk. SG2D had previously been announced as booting in EFI mode from a USB device, while it actually only booted that way from a cdrom.

    2.02s1 beta 1 changelog:

    • Added new option: “Enable all native disk drivers” so that you can try to load: SATA, PATA and USB hard disks (and their partitions) as native disk drives. This is experimental.
    • Removed no longer needed options: “Enable USB” and “Enable PATA”.
    • “Search floppy” and “Search cdrom” options were moved into “Extra GRUB2 functionality menu”. At the same time “Extra Search functionality” menu was removed.
    • Added new straight-forward option: “Enable GRUB2’s RAID and LVM support”.
    • “List devices/partitions” was renamed to “Print devices/partitions”.
    • “Everything” option was renamed to “Detect and show boot methods”.
    • “Everything +” option was removed to avoid confusions.
    • Other minor improvements in the source code.
    • Updated translation files. Most translations are still pending completion.
    • Updated INSTALL instructions.

    Finally, you can check all the detailed changes in our Git commits.

    If you want to translate into your language, please check the TRANSLATION file in the source code to learn how.

    Thank you – Hall of fame

    I want to thank in alphabetical order:

    • bfree (Niall Wash): For his work on re-adding the i386-efi support and for being there when I did live development streams.
    • jim945: For his work on Russian language and improving Arch Linux initramfs detection.
    • schierlm: For his pull request for adding loopback.cfg support.

    The person who writes this article is adrian15 and he added some scripts for easing official releases.

    And I cannot forget about thanking bTactic, the enterprise where I work at and that hosts our site.

    Some thoughts about Super Grub2 Disk development

    Super Grub2 Disk development ideas

    There are two main improvements that can be made to Super Grub2 Disk in the next releases:

    • Fix the md5sum files associated with the iso files so that they don’t include the full path.
    • Use the new upstream Grub release. We still use the upstream 2.02 beta 3 version. According to a recent announcement, a new Grub will be released next February.

    Old idea: I don’t know when, but I plan to readapt some scripts from os-prober. That will let us detect more operating systems. It’s not something that worries me, because it does not affect too many final users, but it’s something new that I hadn’t thought about.

    Again, please send us feedback on what you think is missing in Super Grub2 Disk.

    Rescatux development

    I want to focus on Rescatux development in the next months so that we have a stable release before the end of 2017. I need to finish adding UEFI features, fix the scripts that generate the Rescatux source code (difficult) and write a lot of documentation.

    (adrian15 speaking)

    Getting help on using Super Grub2 Disk

    More information about Super Grub2 Disk


    21 January, 2017 09:44PM by adrian15

    hackergotchi for Blankon developers

    Blankon developers

    i15n: Aku

    Translation of BlankOn Authentication

    Resources: django.po

    21 January, 2017 01:01PM

    BlankOn Malang: Open Source Migration Training at the Lamongan Health Office

    On 28 and 29 October 2011, the Lamongan Open Source and Linux Community (KOSLA) and the Lamongan District Health Office (Dinkes Lamongan) held a training on migrating to open source. The operating system chosen by mutual agreement was BlankOn Linux. There are several reasons why Dinkes Lamongan and KOSLA chose BlankOn Linux. Ubuntu is certainly good, but because it puts out new releases so quickly (every 6 months), an installed system soon feels outdated. ...

    21 January, 2017 01:01PM

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Jonathan Riddell: Reports of KDE neon Downloads Being Dangerous Entirely Exaggerated

    When you download a KDE neon ISO you get transparently redirected to one of the mirrors that KDE uses. Recently the Polish mirror was marked as unsafe in Google Safebrowsing which is an extremely popular service used by most web browsers and anti-virus software to check if a site is problematic. I expect there was a problem elsewhere on this mirror but it certainly wasn’t KDE neon. KDE sysadmins have tried to contact the mirror and Google.

    You can verify any KDE neon installable image by checking the gpg signature against the KDE neon ISO Signing Key.  This is the .sig file which is alongside all the .iso files.

    gpg2 --recv-key '348C 8651 2066 33FD 983A 8FC4 DEAC EA00 075E 1D76'

    wget http://files.kde.org/neon/images/neon-useredition/current/neon-useredition-current.iso.sig

    gpg2 --verify neon-useredition-current.iso.sig
    gpg: Signature made Thu 19 Jan 2017 11:18:13 GMT using RSA key ID 075E1D76
    gpg: Good signature from "KDE neon ISO Signing Key <neon@kde.org>" [full]

    Adding a sensible GUI to do this is future work and fairly tricky to do in a secure way but hopefully soon.




    21 January, 2017 12:18AM

    January 20, 2017

    Jonathan Riddell: KDE neon Inaugurated with Calamares Installer

    You voted for change and today we’re bringing change. Today we give back the installer to the people. Today Calamares 3 was released.

    It’s been a long-standing wish of KDE neon to switch to the Calamares installer.  Calamares is a distro-independent installer used by various projects such as Netrunner and Tanglu.  It’s written in Qt and KDE Frameworks and has modules in C++ or Python.

    Today I’ve switched the Developer Unstable edition to Calamares and it looks to work pretty nicely.

    However, there are a few features missing compared to the previous Ubiquity installer.  OEM mode might be in there but needs me to add some integration for it.  Restricted codecs install should be easy to add.  LUKS encrypted hard disks are there but also need some integration from me.  Encrypted home folders aren’t there and should be added.  Updating to the latest packages on install should also be added.  It does seem to work with UEFI computers, but not with Secure Boot yet. Let me know if you spot any others.

    I’ve only tested this on a simple virtual machine, so give it a try and see what breaks. Or if you want to switch back, run ‘apt install ubiquity-frontend-kde ubiquity-slideshow-neon’.



    20 January, 2017 06:23PM

    Ubuntu Insights: tutorials.ubuntu.com goes live!

    We are really proud to announce that tutorials.ubuntu.com went live this week!

    What are ubuntu tutorials?
    Ubuntu tutorials are topic-specific walkthroughs that give you very practical experience in a particular domain. They are just like learning from pair programming, except you can do it on your own! They provide a step-by-step process for doing development and devops activities on Ubuntu machines, servers or devices.

    Each tutorial has:

    • A clear and detailed summary of what you will learn in this tutorial
    • The content difficulty level: you will know where to start from!
    • An estimated completion time for each step and for the whole tutorial, so that you can plan precisely depending on your availability.
    • A “where to go from there” final step, guiding you to the next logical places to get more information about that particular subject, or the next tutorial you can follow now that you have learned those notions.

    For now, the tutorials focus mainly on building and using snaps and Ubuntu Core. If you’d like to see tutorials cover more topics, or if you’re interested in contributing tutorials, let us know.

    A snap for all tutorials!
    And that’s not all! You can also work offline if you wish and always take your tutorials with you! Using snap technology, we built a tutorial snap including the same content and the same technology as the website (that’s the beauty of snaps!).

    To get access to it, on any snap system like Ubuntu desktop 16.04 LTS, just type:

    $ snap install snap-codelabs

    Open your browser at http://localhost:8123/ and enjoy!

    Note that its name and design will soon change to align more with tutorials.ubuntu.com.

    You can contribute too!

    If you plan to help us by contributing a new Ubuntu tutorial, it’s pretty simple! The backend is based on a simple Google doc with a straightforward syntax. If you’d like to write your own tutorial, here are some guidelines you can follow that will help you with the tone of voice, content and much more. Let us know what you’ve done!

    You will note that we based our content on the Google Codelabs framework, which they have open sourced. A big up to them!

    We hope you’ll like playing and learning those new concepts in a fun and interactive way! See you soon during your next tutorial.

    20 January, 2017 01:49PM

    Harald Sitter: Snapping DBus

    For the past couple of months I’ve been working on getting KDE applications into the binary bundle format snap.

    With the release of snapd 2.20 last month it gained a much-needed feature to enable easy bundling of applications that register a DBus service name. The all new dbus interface makes this super easy.

    Being able to easily register a DBus service matters a great deal because an extraordinary amount of KDE’s applications are doing just that. The use cases range from actual inter-process communication to spin-offs from this functionality, such as single-instance behavior and clean application termination via the kquitapp command-line utility.

    There’s barely any application that gets by without also claiming its own space on the session bus, so it is a good thing that enabling this is now super easy when building snap bundles.

    One simply adds a suitable slot to the snapcraft.yaml and that’s it (the slot key, session-dbus-name here, can be any name you like):

        slots:
          session-dbus-name:
            interface: dbus
            name: org.kde.kmplot
            bus: session

    An obvious caveat is that the application needs to claim a well-known name on the bus. For most of KDE’s applications this will happen automatically as the KDBusAddons framework will claim the correct name assuming the QCoreApplication properties were set with the relevant data to deduce the organization+app reverse-domain-name.

    As an additional bonus, in KDE we tend to codify the used service name in the desktop files via the X-DBUS-ServiceName entry already. When writing a snapcraft.yaml it is easy to figure out if DBus should be used and what the service name is by simply checking the desktop file.
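    As a sketch of that lookup (the desktop file here is written out locally as a stand-in; on a real system you would grep the installed file under /usr/share/applications):

    ```shell
    # Write a stand-in desktop file carrying the X-DBUS-ServiceName entry that
    # KDE applications ship, then pull the service name out with grep; this is
    # the same check you would run against the real installed desktop file.
    printf '[Desktop Entry]\nName=KmPlot\nX-DBUS-ServiceName=org.kde.kmplot\n' \
        > org.kde.kmplot.desktop
    grep '^X-DBUS-ServiceName=' org.kde.kmplot.desktop
    ```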

    The introduction of this feature moves a really big roadblock out of the way for enabling KDE’s applications to be easily snapped and published.

    20 January, 2017 01:47PM


    Changes to Cinnamon Spices


    The Cinnamon desktop can be themed and its features can be extended by adding applets, desklets and extensions. In Cinnamon lingo, all these addons are referred to as “spices”. The goal is to let you spice up your Cinnamon experience so you can enjoy your desktop environment, feel even more at home with it and benefit from niche features and look and feel which go beyond what is developed by the Linux Mint and Cinnamon teams.

    Spices can be installed from within Cinnamon -> System Settings -> Applets/Desklets/Extensions/Themes, or by visiting https://cinnamon-spices.linuxmint.com.


    Throughout the years we’ve seen great spices being developed by 3rd party artists and developers. The “Weather” applet developed by mockturtl is an example of a very popular spice. Just like a killer app, it’s something many people add to their environment right after a fresh install. Spices like that greatly enhance the user experience in the Cinnamon desktop.

    Unfortunately, throughout the years we’ve also seen spices deteriorate that experience by bringing down the overall quality of the Cinnamon desktop. Some spices aren’t packaged properly and thus cannot be installed successfully from System Settings… some spices don’t work at all, and in some extreme cases we’ve even seen spices make the Cinnamon desktop crash or fail to load.

    Another issue is security. In an ideal world we provide the platform for users to enjoy 3rd party development and for developers to reach their audience in the simplest possible manner. In practice, and we’ve seen this last year when our project and users were the targets of specific attacks, we can no longer blindly trust 3rd parties and expose users to them directly. All changes coming from sources we don’t trust must be reviewed to guarantee the absence of malicious code.


    It’s hard to reduce empowerment and raise expectations while explaining a trust issue to 3rd party developers and artists (although the changes we’re implementing also bring a lot of benefits to them which I think some of them will really enjoy) and we’ll do this on Segfault with a separate blog post.

    In this blog post, I’ll address the Cinnamon users and explain the changes we implemented.


    Before I start I’d like to thank the Design Team, and in particular Eran Gilo for his beautiful design, and Carlos Fernandez, Darryn Bourgeois and Nick Karnaukhov for their amazing work on the implementation.

    I’d like to thank the Development Team also for helping me define all these changes and discussing them with me.

    Revamped Spices website

    The first thing you’ll notice when visiting https://cinnamon-spices.linuxmint.com is the new look and feel.

    This is the result of the work produced by the Design Team. As you can see, things look much better when designed by artists 🙂

    You’ll also notice that you can no longer login, logout or register on the website. The reason for this is that a second update is coming. The Design Team is currently revamping authentication, comments and ratings.

    In the past you needed to sign up on the website, wait for an email, log in and then you could comment and rate spices on a scale of 1 to 5.

    This is all getting changed.

    We don’t need to know your email address or to store your password and we no longer want to. You’ll no longer sign up or login with us. You’ll have the ability to use your existing online identity so you’ll review and rate as yourself directly, without needing a dedicated account. We’ll very likely support Facebook, Github and Google, and possibly other social networks.

    Ratings will be simplified. In the past you could downvote spices and your rating carried the same weight on the score whether it was done last week or 4 years ago. This made it easy for people to trick the system unfairly and it made it hard for new spices to compete with established ones. Going forward you’ll only be able to “like” things. Spices will be considered “popular” not only because they received many “likes”, but also because they did so “recently”.

    Improved security

    All code changes affecting spices are now version controlled.

    All code changes made by untrusted parties are now reviewed.

    Improved maintenance

    Spices used to be developed solely by their author.

    They’re now maintained jointly by their author, the Development Team and a new team made up of trusted Spices developers.

    This brings a lot of benefits…

    In the past, if the author of a spice didn’t fix something, it never got fixed. In contrast, anyone can now fix bugs in spices. The Development Team can commit a fix directly, and anyone else can provide a fix via a pull request on GitHub.

    It doesn’t mean all bugs will get fixed, but it’s certainly going to boost bug fixing in Cinnamon spices.

    Maintenance is centralized and bug fixes can be done across the board and affect multiple spices. If somebody fixes something in one spice and the same fix could be applied to other spices, that person has the ability to provide a single fix affecting multiple spices. In the past, this rarely happened, because it would have required an author to communicate with other authors and for them to copy over their fix.

    Spices can gain support for future versions of Cinnamon “before” they come out. Cinnamon 3.2 required changes in themes and many of them don’t look good as a result. The development team is now able to modify spices as changes are being implemented in Cinnamon itself. It’s not just a matter of telling artists and developers about changes which affect their spice, we’re now able to actually fix things for them directly before ever breaking them 🙂

    Streamlined development delivery

    The development of spices now happens on GitHub. It’s all version controlled, using state-of-the-art technology and the same tools we use to develop Cinnamon itself.

    Changes made on Github are synced automatically towards the https://cinnamon-spices.linuxmint.com website and towards your desktop environment. If we fix a bug on Github, a few minutes later it’s there on the website and your Cinnamon desktop sees it as an available spice update.

    If you browse the website and look at a particular spice, you’ll see when it was last edited and what its last commit is. If you click on this commit hash, you’ll see its Git history, i.e. all the changes that occurred to that spice.


    Be aware that when the authentication/comments/rating revamp arrives we’ll reset all the scores to zero and we might also start from scratch on the comments.

    All spices kept their existing “uuid” (which represents their unique identity) except for themes (which didn’t have any in the past). In layman’s terms this means that changes made to applets, desklets and extensions will be detected by your Cinnamon desktop as new versions of these spices… but this won’t happen for themes that you already installed, since their uuid changed.

    If you installed themes, please remove them and reinstall them. Once this is done you’ll be able to update them again when changes are pushed.

    Information for developers and artists

    If you are developing spices, please also read http://segfault.linuxmint.com/2017/01/changes-to-cinnamon-spices-for-developer-and-artists/

    20 January, 2017 01:18PM by Clem

    January 19, 2017

    hackergotchi for SparkyLinux


    TeamSpeak Installer

    There is a new tool for gamers, available in Sparky repos: TeamSpeak Installer.

    From Wikipedia:

    TeamSpeak is proprietary voice-over-Internet Protocol (VoIP) software for audio communication between users on a chat channel, much like a telephone conference call. Users typically use headphones with a microphone. The client software connects to a TeamSpeak server of the user’s choice, from which the user may join chat channels.

    The TeamSpeak installer can download, install, upgrade/reinstall and remove the TeamSpeak client for Linux.

    sudo apt-get update
    sudo apt-get install teamspeak-installer

    or via the just-upgraded Sparky APTus Gamer 0.1.15.

    The installer is placed in the Game submenu, so run it from there after installing it.
    If the client installation succeeds, the installer creates a menu entry to make launching the client easy.

    TeamSpeak Client

    Let me know if you find any problems installing the client.


    19 January, 2017 09:08PM by pavroo

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Daniel Pocock: Which movie most accurately forecasts the Trump presidency?

    Many people have been scratching their heads wondering what the new US president will really do and what he really stands for. His alternating positions on abortion, for example, suggest he may simply be telling people what he thinks is most likely to win public support from one day to the next. Will he really waste billions of dollars building a wall? Will Muslims really be banned from the US?

    As it turns out, several movies provide a thought-provoking insight into what could eventuate. What’s more, two of them bear a creepy resemblance to the Trump phenomenon and many of the problems in the world today.

    Countdown to Looking Glass

    On the classic cold war theme of nuclear annihilation, Countdown to Looking Glass is probably far more scary to watch on Trump eve than in the era when it was made. Released in 1984, the movie follows a series of international crises that have all come to pass: the assassination of a US ambassador in the middle east, a banking crisis and two superpowers in an escalating conflict over territory. The movie even picked a young Republican congressman for a cameo role: he subsequently went on to become speaker of the house. To relate it to modern times, you may need to imagine it is China, not Russia, who is the adversary but then you probably won't be able to sleep after watching it.

    cleaning out the swamp?

    The Omen

    Another classic is The Omen. The star of this series of four horror movies, Damien Thorn, appears to have a history that is eerily reminiscent of Trump: born into a wealthy family, a series of disasters befall every honest person he comes into contact with, he comes to control a vast business empire acquired by inheritance and as he enters the world of politics in the third movie of the series, there is a scene in the Oval Office where he is flippantly advised that he shouldn't lose any sleep over any conflict of interest arising from his business holdings. Did you notice Damien Thorn and Donald Trump even share the same initials, DT?

    19 January, 2017 07:31PM

    Ubuntu Insights: Winners of #UbuntuAtMWC

    A couple weeks ago we held a competition to invite you to join us at MWC by telling us what you wanted to see from #UbuntuAtMWC across Cloud, Devices or IoT!

    We had some awesome entries, which made it very hard to narrow them down to 10!

    A big thank you to all who entered including our winners selected below and we can’t wait to see you at MWC in Hall P3 – 3K31!











    More info on Ubuntu at MWC here!

    19 January, 2017 04:19PM

    hackergotchi for Cumulus Linux

    Cumulus Linux

    Introducing Cumulus Express — your turnkey solution with open networking hardware & software all in one

    In the last few months at Cumulus Networks, we’ve put a lot of focus on finding innovative ways to make web-scale networking accessible to data centers of all sizes and engineers of all backgrounds. We released features like NCLU, EVPN and PIM to make that happen.

    In our minds, web-scale networking principles make data centers more powerful and make engineers’ lives easier. We take great pride in helping organizations accelerate their journey to web-scale in the fastest, simplest way possible. That’s why we are super excited to announce that web-scale networking with Cumulus Networks just got EVEN BETTER. We know, you didn’t think it was possible.

    Allow us to formally introduce Cumulus Express — your turnkey solution featuring an open networking switch preloaded and licensed with Cumulus Linux. Each Cumulus Express switch is ready to go as is, improving your time to market by eliminating installation steps and optics research. That’s right, you can now deploy switches running Cumulus Linux in one easy step.

    With Cumulus Express you get:

    • 1G to 100G platforms: Available in 1G-T, 10G/10G-T, 25G (coming soon), 40G & 100G speeds
    • NOS & license preloaded: Each switch comes preloaded with Cumulus Linux with an active license key installed
    • Peace of mind: Eliminate compatibility confusion by accessing our list of certified cables/optics that work seamlessly with Cumulus Express
    • 3-year customer support: Each switch comes with 3 years of support, including a hardware warranty
    • A breath of fresh air: Cumulus Express makes web-scale networking faster, easier and more efficient than ever before

    Cumulus Express offers a complete portfolio of networking switches (1G to 100G platforms) that customers across the globe can source from Cumulus Networks’ authorized channel partners. And they’re ready to go now. Simply head to our HCL and you’ll find several Edge-Core boxes ready to purchase — and that’s just the beginning. We’ll be adding more switches in the near future.

    Over 40% of global enterprises will have adopted web-scale networking principles by 2020. With Cumulus Express, we’re helping organizations who want to accelerate their adoption of web-scale principles.

    Cumulus Express makes purchasing and deploying Cumulus Linux easy for those looking for a one-stop-shop approach. You’ll still get all the web-scale technology and advantages available with a completely disaggregated approach to Cumulus Linux, but with Cumulus Express, you save yourself a few steps by getting the hardware and software all in one.

    Plus, all of our customers still have the freedom to choose hardware and software that fits their budget and needs. You can choose from our long list of compatible hardware, deploy with Cumulus Express or even go with a combination of the two. As always, the choice is up to you — that’s the beauty of open, web-scale networking.

    We’re thrilled to now offer even more choice and flexibility to our web-scale networking offering. We hope this new one-stop shop model will be ideal for those customers looking for a turnkey solution that improves time to market.

    You can learn more about Cumulus Express and how to purchase by visiting our overview page.


    The post Introducing Cumulus Express — your turnkey solution with open networking hardware & software all in one appeared first on Cumulus Networks Blog.

    19 January, 2017 01:58PM by Kelsey Havens

    hackergotchi for Univention Corporate Server

    Univention Corporate Server

    Brief Introduction: Differences of the Cloud Services IaaS, PaaS and SaaS

    Summer time is conference time in the IT world and anyone going to one or more of these events hears about the latest developments in cloud computing, wondering sometimes how to keep up with the sheer number of cloud services acronyms used in this industry.

    So let us disentangle the secret code of cloud computing by having a look at what the meaning of the different services IaaS, PaaS, and SaaS actually is.

    ‘As a Service’

    It is best to start explaining from the back of the acronyms – ‘aaS’ or also ‘as a service’ – as all of these refer to a cloud services offering from a cloud service provider. They, in general, represent the leasing of a certain resource for a determined or undetermined amount of time.

    More important to understanding what you are really getting is the first letter.

    The first letter represents the highest level the cloud service provider, in short CSP, is maintaining. Anything below that level is the responsibility of the supplier to maintain, update, and secure, while anything above this level is in your area of responsibility. Likewise, any configuration, software, and data below the level belongs to the CSP while, depending on your contract, you might retain ownership of anything on a higher level.

    The difference between the cloud services IaaS, PaaS, and SaaS, then, is where in this application layer model the split lies. So let us have a look at the different layers and at the same time decode the front part of these acronyms.
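    As a rough sketch of that split (simplified for illustration; real contracts vary, and the layer names are my own shorthand), the customer's remaining responsibilities under each model look like this:

```shell
#!/bin/sh
# Simplified illustration: which layers the CUSTOMER still manages
# under each service model (layer names are shorthand, not a standard).
layers_managed_by_customer() {
  case "$1" in
    IaaS) echo "OS, middleware, runtime, application, data" ;;
    PaaS) echo "application, data" ;;
    SaaS) echo "user and compliance management" ;;
    *)    echo "unknown model" ;;
  esac
}

layers_managed_by_customer PaaS   # -> application, data
```

    Everything below the layers printed for a given model is the CSP's responsibility to maintain, update, and secure.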

    IaaS – Infrastructure as a Service

    The lowest level of cloud services you commonly find at a cloud service provider, such as Amazon AWS or Microsoft Azure, is ‘Infrastructure as a Service’. IaaS refers to the supplier offering nothing more than a virtual machine, storage, and network connectivity in their data center, which is commonly referred to as server infrastructure. That’s why it is called IaaS.

    Anything that is running on this infrastructure is your responsibility. This includes the operating system and management systems, as well as any middleware, applications, and data.

    Advantages of IaaS
    The significant advantage of IaaS is that you can build a tightly integrated stack optimized for your requirements, similar to your data center. And the price for the leased resources is relatively low and easy to calculate.

    Disadvantages of IaaS
    The downside is that you are responsible for most of the updates and for the configuration and management of the complete system. Costs are similar to the situation where your server is on-premise: you will pay a comparably small price for the IaaS offerings, but you will not experience a significant reduction in staffing needs, as the management of the operating system and the applications remains your responsibility.

    IaaS and the Univention App Center
    While porting your system over from one CSP to another is possible, it might require a certain amount of know-how given different backends at various service providers. Using UCS’s domain concept and automatic replication of data and configurations, administrators can simply join a new UCS system and automatically transfer most directory and user data.

    Furthermore, these features also make UCS easy to use on most IaaS platforms. Not only can you operate applications from the Univention App Center but, with the right planning, you can easily set up a central management system that can include various UCS instances spread over multiple IaaS environments and/or your own data centers.

    PaaS – Platform as a Service

    A level above the bare infrastructure sits the operating system, OS for short, such as UCS. PaaS offerings thus include not only the server infrastructure but also the OS, which is maintained by the cloud service provider.

    Advantages of PaaS
    By transferring the responsibility for the OS to the cloud service provider, a lot of the underlying repetitive administrative work can be offloaded. Particularly users requiring a very specialized application on a general-purpose operating system gain the advantage of focusing solely on creating and managing their core product without having to worry about any of the underlying systems.

    Cloud service providers such as Apprenda have ample experience updating and managing operating systems without lasting interruptions. They know how to thoroughly optimize the virtual infrastructure beneath the OS, thus minimizing shortfalls and often delivering higher performance than on-premise systems.

    Disadvantages of PaaS
    The downsides of PaaS are the higher fixed costs and, perhaps even more important to consider, the loss of control over parts of your IT system. You cannot, for example, optimize the OS for the specific needs of your application. While some service providers will do this work for you, whether they do often depends on the size of the client.

    UCS as a PaaS
    Many cloud service providers are already offering UCS in their platform as a service cloud. Using UCS’ built-in automated rollout processes, customers can easily start and maintain their applications via the wizard in UCS’ web interface.

    SaaS – Software as a Service

    One level above the operating system sits the application (the software) itself, the highest level of provided cloud services: most often a shiny, user-friendly web interface where you can enter and retrieve data.
    Compared to IaaS and PaaS, the buyer of a Software as a Service product typically isn’t looking for an IT resource such as a server but for a solution to a problem, such as the need for groupware or an accounting system.

    Advantages of SaaS
    As SaaS is a holistic solution, your management responsibilities are minimal and extend only to user and compliance management. Anything behind the application that needs management, such as regular updates, is hidden from you, allowing you to focus on the actual task without long hours of work on the underlying system.

    Disadvantages of SaaS
    While SaaS products, such as Office 365 or Salesforce, greatly take the burden off the IT staff, they come with a heavy price tag.

    One further big disadvantage of SaaS offerings is that usually all customers share one large IT environment that is often spread over several data centers. The storage of customer data is thus not transparent. This, in turn, often makes legal counsel necessary when evaluating whether or not a particular offer is compliant with the regulatory requirements.

    UCS’ role in SaaS
    In conjunction with SaaS products, UCS provides many advantages, both for CSPs and customers.

    Its App Center provides an excellent platform for cloud service providers to roll out numerous SaaS applications to their customers. For you as a customer, UCS includes connectors to integrate and manage your cloud applications combining your on-premises user management with your cloud applications.


    While confusing at first, choosing the right cloud service does not have to be complex. It generally starts with asking yourself how much ongoing work you want to put into the system and whether it would make a difference if you did not have full control over the whole system. With these questions answered, you can quickly make the choice between the three “as a Service” classes.

    I hope I was able to give you a brief overview of the meanings behind IaaS, PaaS, and SaaS and, of course, would like to invite you to post any questions or comments you might have.

    The post Brief Introduction: Differences of the Cloud Services IaaS, PaaS and SaaS appeared first on Univention.

    19 January, 2017 01:14PM by Kevin Dominik Korte

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Nathan Haines: UbuCon Summit at SCALE 15x Call for Papers

    UbuCons are a remarkable achievement from the Ubuntu community: a network of conferences across the globe, organized by volunteers passionate about Open Source and about collaborating, contributing, and socializing around Ubuntu. UbuCon Summit at SCALE 15x is the next in the impressive series of conferences.

    UbuCon Summit at SCALE 15x takes place in Pasadena, California on March 2nd and 3rd during the first two days of SCALE 15x. Ubuntu will also have a booth at SCALE's expo floor from March 3rd through 5th.

    We are putting together the conference schedule and are announcing a call for papers. While we have some amazing speakers and an always-vibrant unconference schedule planned, it is the community, as always, who make UbuCon what it is—just as the community sets Ubuntu apart.

    Interested speakers who have Ubuntu-related topics can submit their talk to the SCALE call for papers site. UbuCon Summit has a wide range of both developers and enthusiasts, so any interesting topic is welcome, no matter how casual or technical. The SCALE CFP form is available here:


    Over the next few weeks we’ll be sharing more details about the Summit, revamping the global UbuCon site and updating the SCALE schedule with all relevant information.


    About SCaLE:

    SCALE 15x, the 15th Annual Southern California Linux Expo, is the largest community-run Linux/FOSS showcase event in North America. It will be held from March 2-5 at the Pasadena Convention Center in Pasadena, California. For more information on the expo, visit https://www.socallinuxexpo.org

    19 January, 2017 10:12AM

    January 18, 2017

    Stéphane Graber: LXD on Debian (using snapd)

    LXD logo


    So far all my blog posts about LXD have been assuming an Ubuntu host with LXD installed from packages, as a snap or from source.

    But LXD is perfectly happy to run on any Linux distribution which has the LXC library available (version 2.0.0 or higher), a recent kernel (3.13 or higher) and some standard system utilities available (rsync, dnsmasq, netcat, various filesystem tools, …).

    In fact, you can find packages in the following Linux distributions (let me know if I missed one):

    We have also had several reports of LXD being used on CentOS and Fedora, where users built it from source using the distribution’s liblxc (or, in the case of CentOS, from an external repository).

    One distribution we’ve seen a lot of requests for is Debian. A native Debian package has been in the works for a while now and the list of missing dependencies has been shrinking quite a lot lately.

    But there is an easy alternative that will get you a working LXD on Debian today!
    Use the same LXD snap package as I mentioned in a previous post, but on Debian!


    • A Debian “testing” (stretch) system
    • The stock Debian kernel without apparmor support
    • If you want to use ZFS with LXD, then the “contrib” repository must be enabled and the “zfsutils-linux” package installed on the system

    Installing snapd and LXD

    Getting the latest stable LXD onto an up to date Debian testing system is just a matter of running:

    apt install snapd
    snap install lxd

    If you have never used snapd before, you’ll have to either log out and log back in to update your PATH, or just update your existing one with:

    . /etc/profile.d/apps-bin-path.sh

    And now it’s time to configure LXD with:

    root@debian:~# lxd init
    Name of the storage backend to use (dir or zfs) [default=dir]:
    Create a new ZFS pool (yes/no) [default=yes]?
    Name of the new ZFS pool [default=lxd]:
    Would you like to use an existing block device (yes/no) [default=no]?
    Size in GB of the new loop device (1GB minimum) [default=15]:
    Would you like LXD to be available over the network (yes/no) [default=no]?
    Would you like stale cached images to be updated automatically (yes/no) [default=yes]?
    Would you like to create a new network bridge (yes/no) [default=yes]?
    What should the new bridge be called [default=lxdbr0]?
    What IPv4 subnet should be used (CIDR notation, “auto” or “none”) [default=auto]?
    What IPv6 subnet should be used (CIDR notation, “auto” or “none”) [default=auto]?
    LXD has been successfully configured.

    And finally, you can start using LXD:

    root@debian:~# lxc launch images:debian/stretch debian
    Creating debian
    Starting debian
    root@debian:~# lxc launch ubuntu:16.04 ubuntu
    Creating ubuntu
    Starting ubuntu
    root@debian:~# lxc launch images:centos/7 centos
    Creating centos
    Starting centos
    root@debian:~# lxc launch images:archlinux archlinux
    Creating archlinux
    Starting archlinux
    root@debian:~# lxc launch images:gentoo gentoo
    Creating gentoo
    Starting gentoo

    And enjoy your fresh collection of Linux distributions:

    root@debian:~# lxc list
    |   NAME    |  STATE  |         IPV4          |                     IPV6                      |    TYPE    | SNAPSHOTS |
    | archlinux | RUNNING | (eth0) | fd42:46d0:3c40:cca7:216:3eff:fe40:7b1b (eth0) | PERSISTENT | 0         |
    | centos    | RUNNING | (eth0) | fd42:46d0:3c40:cca7:216:3eff:fe87:64ff (eth0) | PERSISTENT | 0         |
    | debian    | RUNNING | (eth0) | fd42:46d0:3c40:cca7:216:3eff:feb4:e984 (eth0) | PERSISTENT | 0         |
    | gentoo    | RUNNING | (eth0) | fd42:46d0:3c40:cca7:216:3eff:fe27:10ca (eth0) | PERSISTENT | 0         |
    | ubuntu    | RUNNING | (eth0)  | fd42:46d0:3c40:cca7:216:3eff:fedc:f0a6 (eth0) | PERSISTENT | 0         |


    The availability of snapd on other Linux distributions makes it a great way to get the latest LXD running on your distribution of choice.

    There are still a number of problems with the LXD snap which may or may not be a blocker for your own use. The main ones at this point are:

    • All containers are shutdown and restarted on upgrades
    • No support for bash completion

    If you want non-root users to have access to the LXD daemon, simply make sure that an “lxd” group exists on your system, add whoever you want to manage LXD into that group, then restart the LXD daemon.
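    A minimal sketch of that sequence, assuming the snap install from this post (the service name snap.lxd.daemon and the group-check logic are my assumptions; run the printed commands as root):

```shell
#!/bin/sh
# Sketch: check whether a user already has LXD access via the "lxd" group,
# and print the commands to grant it otherwise. The snap service name is
# an assumption; adjust it for your installation method.
target="${1:-$(id -un)}"
if id -nG "$target" | tr ' ' '\n' | grep -qx lxd; then
  echo "$target is already in the lxd group"
else
  echo "To grant $target access, run as root:"
  echo "  groupadd --system lxd"
  echo "  usermod -aG lxd $target"
  echo "  systemctl restart snap.lxd.daemon"
fi
```

    After being added to the group, the user has to log in again for the new group membership to take effect.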

    Extra information

    The snapd website can be found at: http://snapcraft.io

    The main LXD website is at: https://linuxcontainers.org/lxd
    Development happens on Github at: https://github.com/lxc/lxd
    Mailing-list support happens on: https://lists.linuxcontainers.org
    IRC support happens in: #lxcontainers on irc.freenode.net
    Try LXD online: https://linuxcontainers.org/lxd/try-it

    18 January, 2017 10:19PM

    Ubuntu Insights: 5 Cool things Canonical does with Go

    We had the recent news that Google’s Go was awarded programming language of 2016 by TIOBE! One of the main reasons for winning is its ease of learning and pragmatic nature: it is less about theory and more about hands-on experience, which is why more and more customers are adopting Go in industrial settings. At Canonical we’re doing the same! As supporters of Go, here are 5 cool things we’ve done with it:

    1. Juju. Juju is devops distilled. Juju enables you to use Charms to deploy your application architectures to EC2, OpenStack, Azure, HP, your data center, and even your own Ubuntu-based laptop. Moving between models is simple, giving you the flexibility to switch hosts whenever you want — for free. Code is at https://github.com/juju/juju.

    2. The snapd and snap tools enable systems to work with .snap files. Package any app for every Linux desktop, server, cloud or device, and deliver updates directly. See snapcraft.io for a high level overview about snap files and the snapd application. Some great go code is at https://github.com/snapcore/snapd.

    3. The LXD container hypervisor enables you to move your Linux VMs straight to containers, easily and without modifying the apps or your operations. Canonical’s LXD is a pure-container hypervisor that runs unmodified Linux operating systems and applications with VM-style operations at incredible speed and density. It’s open source, you can see how it’s done at https://github.com/lxc/lxd.

    4. snapweb is a beautiful and functional interface for snap management. It is an excellent web application combining HTML/CSS/JavaScript with Go; the code can be found at https://github.com/snapcore/snapweb.

    5. We also write some advanced demo code to demonstrate our technology. We love Go so much that we wrote face-detection-demo, which detects and counts faces over time. Using the go-opencv binding, we even contributed some fixes so it compiles on the ARM architecture! Have a look at https://github.com/ubuntu/face-detection-demo.

    Learn more here at the TIOBE index.

    18 January, 2017 11:42AM

    Jonathan Riddell: Get Yourself on www.kde.org

    Google Code-in, where school pupils do tasks to introduce themselves to open development, has just finished.  I had a task to update the screenshots on www.kde.org.  The KDE website is out of date in many ways, but here’s a wee way to fix one part of it.  Despite about half a dozen students having worked on it, there are still some old screenshots there, so if anyone wants the satisfaction of contributing to www.kde.org’s front page, here’s an easy way.

    www.kde.org has screenshots of all our apps but many still use the old KDE 4 Oxygen widget theme and icons.

    For 10 screenshots that still use the old theme, take new screenshots using the new theme.

    The screenshots can be checked out from Subversion at https://websvn.kde.org/trunk/www/sites/www/images/screenshots/. Also provide a resized version of each screenshot, exactly 400 pixels wide.

    Keep the filenames the same and in lower case.

    Upload a single .zip or .tar.gz containing the screenshots with the right file names and a resized/ folder with the 400px screenshots.

    For bonus points you could go through the index file to make sure it’s current with KDE applications https://www.kde.org/applications/index.json
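    The resizing step above can be scripted; here is a sketch using ImageMagick's convert (my tool choice, not a project requirement; any tool that produces an exactly 400-pixel-wide image will do):

```shell
#!/bin/sh
# Sketch: batch-resize screenshots to exactly 400 pixels wide, keeping
# the file names identical, with the output under resized/.
# Assumes the screenshots are PNGs and ImageMagick's "convert" is installed.
mkdir -p resized
for f in *.png; do
  convert "$f" -resize 400x "resized/$f"
done
```

    The geometry "400x" fixes the width at 400 pixels and scales the height proportionally.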

    18 January, 2017 09:50AM

    January 17, 2017

    Simos Xenitellis: How to completely remove a third-party repository from Ubuntu

    Suppose you added a third-party repository of DEB packages in your Ubuntu and you now want to completely remove it, by either downgrading the packages to the official version in Ubuntu or removing them altogether. How do you do that?

    Well, if it was a Personal Package Archive (PPA), you would simply use ppa-purge. ppa-purge is not pre-installed in Ubuntu, so we install it with

    sudo apt update
    sudo apt install ppa-purge

    Here is the help for ppa-purge:

    $ ppa-purge
    Warning:  Required ppa-name argument was not specified
    Usage: sudo ppa-purge [options] <ppa:ppaowner>[/ppaname]
    ppa-purge will reset all packages from a PPA to the standard
    versions released for your distribution.
        -p [ppaname]        PPA name to be disabled (default: ppa)
        -o [ppaowner]        PPA owner
        -s [host]        Repository server (default: ppa.launchpad.net)
        -d [distribution]    Override the default distribution choice.
        -y             Pass -y --force-yes to apt-get or -y to aptitude
        -i            Reverse preference of apt-get upon aptitude.
        -h            Display this help text
    Example usage commands:
        sudo ppa-purge -o xorg-edgers
        will remove https://launchpad.net/~xorg-edgers/+archive/ppa
        sudo ppa-purge -o sarvatt -p xorg-testing
        will remove https://launchpad.net/~sarvatt/+archive/xorg-testing
        sudo ppa-purge [ppa:]ubuntu-x-swat/x-updates
        will remove https://launchpad.net/~ubuntu-x-swat/+archive/x-updates
    Notice: If ppa-purge fails for some reason and you wish to try again,
    (For example: you left synaptic open while attempting to run it) simply
    uncomment the PPA from your sources, run apt-get update and try again.

    Here is an example of using ppa-purge to remove a PPA.
    Suppose we want to completely uninstall the Official Wine Builds PPA. The URI of the PPA is shown on that page in bold, and it is ppa:wine/wine-builds.

    To uninstall this PPA, we run

    $ sudo ppa-purge ppa:wine/wine-builds
    Updating packages lists
    PPA to be removed: wine wine-builds
    Package revert list generated:
    wine-devel- wine-devel-amd64- wine-devel-i386:i386- winehq-devel-
    Disabling wine PPA from
    Updating packages lists
    PPA purged successfully
    $ _

    But how do we completely uninstall the packages of a third-party repository? Those do not have a URI in the format that ppa-purge expects!

    Let’s see an example. If you have an Intel graphics card, you may choose to install their packaged drivers from 01.org. For Ubuntu 16.04, the download page is https://01.org/linuxgraphics/downloads/intel-graphics-update-tool-linux-os-v2.0.2  Yes, they provide a tool that you run on your system and that performs a set of checks. Once those checks pass, it adds the Intel repository for the Intel graphics card drivers. You do not see a repository URI on this page; you need to dig deeper after the installation to find it.

    The details of the repository are in /etc/apt/sources.list.d/intellinuxgraphics.list and consist of this single line:

    deb https://download.01.org/gfx/ubuntu/16.04/main xenial main #Intel Graphics drivers

    How did we figure out the parameters for ppa-purge? These parameters are just used to identify the correct file in /var/lib/apt/lists/. For the case of the Intel drivers, the relevant files in /var/lib/apt/lists are:


    The important ones are the *_Packages files. The source code line in ppa-purge that will help us is:


    therefore, we select the parameters for ppa-purge accordingly:

    -s download.01.org   for   ${PPAHOST}
    -o gfx               for   ${PPAOWNER}
    -p ubuntu            for   ${PPANAME}
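    To see why these values work, note how apt flattens a repository URL into a list file name by replacing each “/” with “_”. A small sketch (the amd64 suffix is an assumption for illustration):

```shell
#!/bin/sh
# Sketch: how the deb line's URL maps to a file under /var/lib/apt/lists/.
# apt replaces each "/" in the URL with "_"; the suffix shown is for an
# amd64 system and is illustrative.
url="download.01.org/gfx/ubuntu/16.04/main"
echo "$(echo "$url" | tr '/' '_')_dists_xenial_main_binary-amd64_Packages"
```

    The -s, -o, and -p values are simply the first three “_”-separated components of that file name, which is how ppa-purge matches the repository.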

    Now ppa-purge can remove the packages from such a third-party repository as well, by using these parameters:

    sudo ppa-purge -s download.01.org -o gfx -p ubuntu

    That’s it!

    17 January, 2017 10:20PM

    hackergotchi for Kali Linux

    Kali Linux

    The Kali Linux Certified Professional

    Introducing the KLCP Certification

    After almost two years in the making, it is with great pride that we announce today our new Kali Linux Professional certification – the first and only official certification program that validates one’s proficiency with the Kali Linux distribution.

    If you’re new to the information security field, or are looking to take your first steps towards a new career in InfoSec, the KLCP is a “must have” foundational certification. Built on the philosophy that “you’ve got to walk before you can run,” the KLCP will give you direct experience with your working environment and a solid foundation toward a future with any professional InfoSec work. As we continually see, those entering the Offensive Security PWK program with previous working experience with Kali, and a general familiarity with Linux, tend to do better in the real world OSCP exam.

    For those of you who already have some experience in the field, the KLCP provides a solid and thorough study of the Kali Linux Distribution – learning how to build custom packages, host repositories, manage and orchestrate multiple instances, build custom ISOs, and much, much, more. The KLCP will allow you to take that ambiguous bullet point at the end of your resume – the one that reads “Additional Skills – familiarity with Kali Linux”, and properly quantify it. Possession of the KLCP certification means that you have truly mastered the Kali penetration testing distribution and are ready to take your information security skills to the next level.

    The KLCP exam will be available via Pearson VUE exam centres worldwide after the Black Hat USA 2017 event in Las Vegas.

    New Book – Kali Linux Revealed

    Mastering the Penetration Testing Distribution

    More exciting news! In the past year, we’ve been working internally on an Official Kali Linux book – Kali Linux Revealed: Mastering the Penetration Testing Distribution. This is the first official Kali book from Offsec Press, and is scheduled for release on June 5th, 2017. Kali Linux Revealed will be available in both hard copy and online formats. Keeping the Kali Linux spirit, the online version of the book will be free of charge, allowing anyone who wishes to hone their skills and improve their knowledge of Kali to do so at no cost. This book, together with our official Kali documentation site will encompass the body of knowledge for the KLCP.

    “Kali Linux Revealed” Class at Black Hat USA, 2017

    This year, we are fortunate enough to debut our first official Kali Linux training at the Black Hat conference in Las Vegas, 2017. This in-depth, four-day course will focus on the Kali Linux platform itself (as opposed to the tools or penetration testing techniques) and help you understand and maximize the usage of Kali from the ground up. In this four-day class, delivered by Mati Aharoni and Johnny Long, you will learn how to:

    • Gain confidence in basic Linux proficiency, fundamentals, and the command line.
    • Install and verify Kali Linux as a primary OS or virtual machine, including full disk encryption and preseeding.
    • Use Kali as a portable USB distribution including options for encryption, persistence, and “self-destruction”.
    • Install, remove, customize, and troubleshoot software via the Debian package manager.
    • Thoroughly administer, customize, and configure Kali Linux for a streamlined experience.
    • Troubleshoot Kali and diagnose common problems in an optimal way.
    • Secure and monitor Kali at the network and filesystem levels.
    • Create your own packages and host your own custom package repositories.
    • Roll your own completely customized Kali implementation and preseed your installations.
    • Customize, optimize, and build your own kernel.
    • Scale and deploy Kali Linux in the enterprise.
    • Manage and orchestrate multiple installations of Kali Linux.

    Please Note: This is not a penetration testing course. This course is focused on teaching the student how to get the most out of the Kali Linux Penetration Testing Platform, not how to use the packaged tools in an offensive manner. Attending students will receive a signed copy of the “Kali Linux Revealed” book as well as a free voucher to sit the KLCP exam in a nearby Pearson VUE certification centre.

    A lot has been going on behind the scenes in the Kali Linux arena, and we’re excited to see our distribution get a free and formal education path. We believe this will improve the skills of those using Kali Linux and better the community and information security industry as a whole. We are putting all our efforts into finishing up the Kali Revealed book, and will keep y’all updated as the release date nears. In the meantime, follow us on twitter to get realtime updates as they come out.

    17 January, 2017 04:24PM by muts

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Ubuntu Insights: Mir: 2016 end of year review

    This is a guest post by Alan Griffiths, Software engineer at Canonical. If you would like to contribute a guest post, please contact ubuntu-devices@canonical.com

    2016 was a good year for Mir – it is being used in more places, it has more and better upstream support and it is easier to use by downstream projects. 2017 will be even better and will see version 1.0 released.

    Uses of Mir
    Mir support and development have continued on the two graphics stacks used by Ubuntu phone and desktop (i.e. “android” drivers and “freedesktop”). In the course of 2016 the Mir based Unity8 shell has been released as a “preview”, and the Mir based miral-kiosk has also been enabled on the snap-based Ubuntu Core.

    For the “preview” of the Unity8 Mir server on desktop – see here.

    For Mir on Ubuntu Core – see here.

    Developing with Mir
    There are three ways in which developers might wish to work with Mir:
    1. Enabling a client-side toolkit, library or application to work on Mir
    2. Creating a Mir based system compositor or shell
    3. Enabling Mir on a new platform

    Initially, each of these ways of using Mir has been the province of Canonical developers: partly because Mir is developed by Canonical, but mostly because it has been a moving target.

    But that isn’t the long term goal, we want all of these uses to be possible for downstream projects. And we have been making progress: most of what we want to deliver to support the first two categories is ready and will ship in Ubuntu 17.04.

    Enabling a client-side toolkit, library or application to work on Mir
    In July 2015 we released Mir 0.14 which was the point at which we stabilized the “toolkit” ABI needed to work on client-side toolkits etc. Since then we’ve extended the API, but have maintained ABI compatibility.

    In the course of 2016 we’ve built on this and upstreamed Mir support into GTK3, Qt, SDL2 and kodi. (Ubuntu also carries a patch for SDL1.2.)

    The testing of toolkit work has been facilitated in 2016 by the development of miral-shell example server. This supports the testing and debugging of basic window management facilities. There’s a brief introduction to testing with miral-shell here and debugging here.

    In the process of this work we have identified some potential improvements to the API. We are in the process of landing the improved APIs and deprecating the old ones. At some point this year we will be removing the deprecated APIs (and consequently breaking ABI for hopefully the final time).

    Creating a Mir based system compositor or shell
    The mirserver ABI causes problems for downstreams because ABI compatibility has been broken routinely. At a minimum, downstream projects have needed to rebuild for each Mir release and often have also needed code changes to adapt to API changes.

    In October 2016 we released MirAL to provide a stable ABI for developing Mir servers. Because MirAL depended upon some fundamental types created in Mir there was initially some ABI instability (in libmircommon).

    That smaller ABI instability has now (December 2016) been addressed with the Mir 0.25 release. Mir 0.25 moved these “fundamental” types to a new library (libmircore) for which we can and will maintain ABI compatibility. At the same time we also released MirAL 1.0 (also breaking ABI) to clean up a few small issues. We intend the libmircore and libmiral server ABIs to retain ABI compatibility going forwards.

    The QtMir project that Unity8 uses as an abstraction layer over Mir has also been migrated to libmiral to benefit both from the ABI stability and the basic window management defaults provided by libmiral.

    There’s a starter guide to writing a Mir-based “Desktop Environment” here.

    Enabling Mir on a new platform
    There are at least three categories of “new platform” where developers might try to enable Mir.
    1. New phone hardware/android drivers
    2. A non-Ubuntu mesa distribution
    3. A new graphics API

    For all three categories the support is a “work-in-progress” and not yet ready for use downstream. That should change this year.

    Enabling Mir on new phone hardware/android drivers
    Diagnosing and fixing problems found when running Mir on android based hardware typically requires debugging the driver interaction and updating the Mir “graphics-android” plugin module to accommodate variations in the way the driver spec has been interpreted by the vendor. Patches are welcome.

    Enabling Mir on a non-Ubuntu mesa distribution
    Ubuntu currently carries a “distro patch” for mesa to support Mir. Work is planned for this year to update and then upstream this patch. We’ve not done so yet as we know there are changes needed to support current developments (such as Vulkan).

    There are instructions available for getting the current version of the Mesa “distro patch” here.

    Enabling Mir on a new graphics API
    To enable Mir on a new graphics API (such as Vulkan) requires the development of a Mir plugin module. We are working on Vulkan support and that work has led to a better understanding of how these plugins should integrate into Mir.

    The APIs needed to develop platform plugin modules are currently neither stable enough for downstream developers, nor available as published headers in a Mir package that would support out-of-tree development. That should change this year.

    Looking forward to 2017
    As mentioned above, 2017 will see a cleanup of our “toolkit” API and better support for “platform” plugin modules. We will then be working on upstreaming our mesa patch. That will allow us to release our (currently experimental) Vulkan support.

    We’ve also been working on reducing latency but the big wins didn’t quite make the end of 2016. There’s a snapshot of progress here.

    As we complete these changes, 2017 will see a Mir 1.0 release.

    17 January, 2017 11:26AM


    Online recherchieren im Stadtarchiv München

    Until now, users had to visit the reading room of the Stadtarchiv München (Munich City Archive) to search various databases and analogue finding aids for archival material. Since the beginning of 2017, the documents and publications can now also be searched at … Read more

    The post Online recherchieren im Stadtarchiv München first appeared on the Münchner IT-Blog.

    17 January, 2017 09:07AM by Stefan Döring

    BunsenLabs Linux

    Outage on 2017-01-16 from 05:29-12:50 CET

    Thanks anyway for keeping on top of it!
    Good to be back. smile

    17 January, 2017 01:22AM by johnraff

    January 16, 2017

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Raphaël Hertzog: Freexian’s report about Debian Long Term Support, December 2016

    Like each month, here comes a report about the work of paid contributors to Debian LTS.

    Individual reports

    In December, about 175 work hours have been dispatched among 14 paid contributors. Their reports are available:

    Evolution of the situation

    The number of sponsored hours did not increase but a new silver sponsor is in the process of joining. We are only missing another silver sponsor (or two to four bronze sponsors) to reach our objective of funding the equivalent of a full time position.

    The security tracker currently lists 31 packages with a known CVE, and the dla-needed.txt file lists 27. The situation improved a little compared to last month.

    Thanks to our sponsors

    New sponsors are in bold.


    16 January, 2017 02:39PM


    Linux Mint 18.1 “Serena” KDE – BETA Release

    This is the BETA release for Linux Mint 18.1 “Serena” KDE Edition.

    Linux Mint 18.1 Serena KDE Edition

    Linux Mint 18.1 is a long term support release which will be supported until 2021. It comes with updated software and brings refinements and many new features to make your desktop even more comfortable to use.

    New features:

    This new version of Linux Mint contains many improvements.

    For an overview of the new features please visit:

    “What’s new in Linux Mint 18.1 KDE”.

    Important info:

    The release notes provide important information about known issues, as well as explanations, workarounds and solutions.

    To read the release notes, please visit:

    Release Notes for Linux Mint 18.1 KDE

    System requirements:

    • 2GB RAM.
    • 10GB of disk space (20GB recommended).
    • 1024×768 resolution (on lower resolutions, press ALT to drag windows with the mouse if they don’t fit in the screen).


    • The 64-bit ISO can boot with BIOS or UEFI.
    • The 32-bit ISO can only boot with BIOS.
    • The 64-bit ISO is recommended for all modern computers (almost all computers sold in the last 10 years are equipped with 64-bit processors).

    Upgrade instructions:

    • This BETA release might contain critical bugs, please only use it for testing purposes and to help the Linux Mint team fix issues prior to the stable release.
    • It will be possible to upgrade from this BETA to the stable release.
    • It will also be possible to upgrade from Linux Mint 18. Upgrade instructions will be published next month after the stable release of Linux Mint 18.1.

    Bug reports:

    • Please report bugs below in the comment section of this blog.
    • When reporting bugs, please be as accurate as possible and include any information that might help developers reproduce the issue or understand the cause of the issue:
      • Bugs we can reproduce, or whose cause we understand, are usually fixed very easily.
      • It is important to mention whether a bug happens “always”, or “sometimes”, and what triggers it.
      • If a bug happens but didn’t happen before, or doesn’t happen in another distribution, or doesn’t happen in a different environment, please mention it and try to pinpoint the differences at play.
      • If we can’t reproduce a particular bug and we don’t understand its cause, it’s unlikely we’ll be able to fix it.
    • Please visit https://github.com/linuxmint/Roadmap to follow the progress of the development team between the BETA and the stable release.

    Download links:

    Here are the download links for the 64-bit ISO:

    A 32-bit ISO image is also available at https://www.linuxmint.com/download_all.php.

    Integrity and authenticity checks:

    Once you have downloaded an image, please verify its integrity and authenticity.

    Anyone can produce fake ISO images; it is your responsibility to check that you are downloading the official ones.
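    To illustrate the mechanics of an integrity check, here is a stand-in demo (the file name `linuxmint-demo.iso` is a placeholder, not an actual release artifact; real Mint releases publish a sha256sum.txt, and a GPG signature for it, alongside the ISO downloads):

    ```shell
    # Stand-in demo of an integrity check using a placeholder file.
    echo "pretend ISO contents" > linuxmint-demo.iso   # stand-in for the real ISO
    sha256sum linuxmint-demo.iso > sha256sum.txt       # what the project publishes
    sha256sum -c sha256sum.txt                         # prints "linuxmint-demo.iso: OK"
    # For authenticity (not just integrity), additionally verify the GPG
    # signature on the checksum file, e.g.:
    #   gpg --verify sha256sum.txt.gpg sha256sum.txt
    ```

    The checksum proves the download was not corrupted; only the signature check proves the checksum file itself came from the project.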


    We look forward to receiving your feedback. Many thanks in advance for testing the BETA!

    16 January, 2017 02:14PM by Clem

    hackergotchi for Blankon developers

    Blankon developers

    Ahmad Haris: Becoming a BlankOn Developer

    What is a BlankOn Developer?

    BlankOn developers are people who actively contribute and want to build the nation in a different way, each with different skills. They are gathered in a group whose goal is to educate the nation and help bring order to the world. This group produces people who are smart, skilled, independent, and of noble character. As a side effect, it also produces a Linux-based operating system distribution with an Indonesian flavour. That activity, or project, is called BlankOn.

    BlankOn itself can be read as the traditional blangkon hat, but the intended reading is “Blank” and “On”: going from “off” to “on”. In universal terms: “from zero to hero”.

    The BlankOn project itself started in February 2005.

    Why Did I Join BlankOn?

    Back then, my life was not like it is now. I was still very green and unsettled. All I could think of was that I had to learn things my friends could not do yet. Over time, I gained a lot of knowledge from more senior friends.

    I looked up to several people who were in the BlankOn project before me. They were great, both in everyday life and in technical matters, and in general I wanted to be like them. A simple example: I wanted to be like pak Mdamt or pak Fajran, who at the time had lots of “toys”. Both of them were also never stingy about sharing knowledge or toys.

    I also dreamed of bringing the BlankOn developers to a higher level, so that people who are not smart, like me, could advance too and become useful to those around them.

    That obsession took many roads. A former boss of mine once said: “Linux users should eventually be cool: the laptop is a Mac, no worn-out flip-flops, and, if possible, they own a Ducati.” That remark really stuck with me, and I have tried to live by it. All of it has been fulfilled except the last one (though I do own a lower-end bike). Hahaha.


    As for me, I already consider BlankOn family. You could even call it its own sect. I do not make friends easily, but through BlankOn I have gained many good friends. Many friends, many blessings.

    Besides that, I became known to many people for being active as a BlankOn developer; as a side effect, it became easy to find work (at times I was even flooded with it).

    In the community world, more experienced colleagues pushed me and others to be active in upstream projects. That allowed us to contribute at a global level, meet developers from various countries, and visit various countries as well.

    I would not be who I am today without my fellow BlankOn developers.

    Well, that is what came to mind after announcing my retirement as a BlankOn developer last night. If you have questions, leave a comment below.


    16 January, 2017 09:52AM

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Ted Gould: Presentations Updated

    This post is mostly a mea culpa to all the folks that asked me after a presentation: “And those slides will be online?” The answer is generally “yes”, but they ended up in a tweet or something equally hard to find. But now I finally got around to making an updated presentations page that is actually useful. Hopefully you can find the slides you are looking for there. And, more importantly, you can use them as a basis for your talk to a local group in your town.

    As I was redoing this, I thought it was interesting how my title pages seem to alternate every couple of years between complex and simple. And I think I have a candidate for worst theme (though there was a close second), as well as a favorite theme, along with a reminder of all the fun it is to make a presentation with JessyInk.

    I think that there are a couple missing that I can’t find, and also video links out on the Internet somewhere. Please drop me a line if you have any ideas, suggestions or I sent you files that I’ve now lost. Hopefully this is easier to maintain now so there won’t be the same delay.

    16 January, 2017 06:00AM

    hackergotchi for HandyLinux


    The final post

    It's all in the title … get your tissues ready.

    HandyLinux has been a beautiful adventure, started in 2013, which showed that the need for a tool that makes things easy for beginners was not entirely met by distributions like Ubuntu or Linux Mint.
    Of course, HandyLinux never rose to the level of those big distributions, but it was still cool to produce something different, to listen to users, to try out new approaches.

    Many contributors came to support us, whether in development or in writing the documentation, not to mention direct support on our forum.
    So no, I will not name anyone, because HandyLinux's collegiality is complete: each and every one took part in this adventure in their own way, bringing HandyLinux to where it is now: end of life.

    Let us even welcome the disappearance of this distribution, which showed that yet another distro is not the best idea: beyond scattering development resources, it also (and above all) scatters support resources. That is what HandyLinux lacked: not enough people to form a solid community able to face the attacks that inevitably come as soon as you have a bit of success.

    Out of this disappearance the DFLinux project was born, which will from now on take care of beginners on Debian, relying on Debian-Facile, a larger community that has already proven itself and which, since this summer and the start of our merger, has been providing effective support for handy'ers.

    But let us take stock of the situation for HandyLinux users:

    • the server, the main site, the documentation, the forum, and the blog will remain accessible until 6 April 2018
    • support is mainly provided on the Debian-Facile forum, but you can also ask for help on the other French-speaking Debian forums; HandyLinux remains a Debian, and the commands or tips found elsewhere will work for you

    What to do with your HandyLinux in 2018?

    Well, it will then really be end of life, even for a Debian. So, as with any Debian, you move up a release: either install a Debian, or an ISO from the DFLinux project, or try other GNU/Linux distributions, because yes, it will be time to fly on your own wings … but relax … everything is fine, there are plenty of people around to help you.

    News about the distribution and the other branches of the DFLinux project is now delivered as RSS feeds.

    You can subscribe to the DFLinux project feed, which will bring you the news directly.

    … no more of arpi's looooong walls of text on the handy blog …

    And yes, this adventure ends on a bittersweet note, but there were so many more benefits than troubles!!
    So for this final post, I want to thank each and every one of you who contributed to the creation and evolution of HandyLinux: you who dared to ask for an option, who gave us the idea to do things differently, you who installed handy at grandma and grandpa's, at your neighbours' or your mother-in-law's, you who put up with my git push by night and other time-consuming geekery … and finally you who left HandyLinux for a slightly more ethical distribution, with less non-free inside.

    This blog is now closed; it will disappear on 6 April 2018 along with the handylinux.org hosting.

    The “handylinux.org” domain name will remain active as long as necessary to prevent any commercial takeover.

    To conclude, a little house tradition: the closing image …

    the first test of the future handymenu … 01/08/2013


    Epilogue … comments are closed on the whole blog: to contact us, you can use the contact form.
    HandyLinux - the Debian distribution without the headache…

    16 January, 2017 03:38AM by arpinux

    January 15, 2017

    hackergotchi for Grml developers

    Grml developers

    Frank Terbeck: A client library for XMMS2

    “…done in Scheme.” was an idea I had when I started liking Scheme more and more. XMMS2 has been my preferred audio player for a long time now, and I always wanted to write a client for it that fits my frame of mind better than the available ones.

    I started writing one in C and that was okay. But when you're used to using properly extensible applications on a regular basis, you kind of want your audio player to have at least some of those abilities as well. I started adding a Lua interpreter (which I didn't like), then a Perl interpreter (which I had done before in another application, but which is also not a lot of fun). So I threw it all away, setting out to write one in Scheme from scratch.

    To interface with the XMMS2 server, I first tried to write a library that wrapped the C library the XMMS2 project ships. But then I was back to writing C, which I didn't want to do. Someone in #xmms2 on freenode suggested just implementing the whole protocol natively in Scheme. I was a little intimidated by that, because the XMMS2 server supports a ton of IPC calls. But XMMS2 also ships with machine-readable definitions of all of those, which means you can generate most of the code; once you've implemented the networking part and the serialization and de-serialization for the protocol's data types, you're pretty much set. …well, after you've implemented the code that generates your IPC code from XMMS2's definition file.

    Most of XMMS2's protocol data types map very well to Scheme. There are strings, integers, floating point numbers, lists, dictionaries. All very natural in Scheme.

    And then there are Collections. Collections are a way to interact with the XMMS2 server's media library. You can query the media library using collections. You can generate play lists using collections and perform a lot of operations on them like sorting them in a way you can specify yourself. For more information about collections see the Collections Concept page in the XMMS2 wiki.

    Internally, Collections are basically a tree structure that may be nested arbitrarily. Each node carries a couple of payload data sets, but they are trees, and implementing a tree in Scheme is not all that hard either. The serialization and de-serialization are also pretty straightforward, since the protocol reuses its own data types to represent the collection data.

    What is not quite so cool is the language you end up with to express these collections. Say you want to create a collection that matches four Thrash Metal groups; you can do that with XMMS2's command line client like so:

    xmms2 coll create big-four artist:Slayer \
                            OR artist:Metallica \
                            OR artist:Anthrax \
                            OR artist:Megadeth

    To create the same collection with my Scheme library, that would look like this:

    (make-collection COLLECTION-TYPE-UNION
        '() '()
        (list (make-collection COLLECTION-TYPE-EQUALS
                  '((field . "artist")
                    (value . "Slayer"))
                  (list (make-collection COLLECTION-TYPE-UNIVERSE '() '() '())))
              (make-collection COLLECTION-TYPE-EQUALS
                  '((field . "artist")
                    (value . "Metallica"))
                  (list (make-collection COLLECTION-TYPE-UNIVERSE '() '() '())))
              (make-collection COLLECTION-TYPE-EQUALS
                  '((field . "artist")
                    (value . "Anthrax"))
                  (list (make-collection COLLECTION-TYPE-UNIVERSE '() '() '())))
              (make-collection COLLECTION-TYPE-EQUALS
                  '((field . "artist")
                    (value . "Megadeth"))
                  (list (make-collection COLLECTION-TYPE-UNIVERSE '() '() '())))))

    …and isn't that just awful? Yes, yes it is. It so is.

    In order to rein in this craziness, the library ships a macro that implements a little domain-specific language for expressing collections. Using that, the above boils down to this:

    (collection (∩ (artist = Slayer)
                   (artist = Metallica)
                   (artist = Anthrax)
                   (artist = Megadeth)))

    So much better, right? …well, unless you really don't like Unicode characters and the ‘∩’ in there gives you a constant headache… But worry not, you can also use ‘or’ in place of the intersection symbol if you like. Or ‘INTERSECTION’ if you really want to be clear about things.

    If you know a bit of Scheme, you may wonder how to use arguments that get evaluated at run time, since evidently the Slayer in there is turned into a "Slayer" string at compile time; the same goes for the artist symbol. These transformations are done to make the language very compact, so users can just type expressions. If you want an argument to be evaluated at run time, you have to wrap it in a (| ...) expression:

    (let ((key "artist") (value "Slayer"))
      (collection ((| key) = (| value))))

    These expressions may be arbitrarily complex.

    Finally, to traverse Collection tree data structures, the library ships a function called ‘collection-fold’. It implements pre, post and level tree traversal with both left-to-right and right-to-left directions. So, if you'd like to count the number of nodes in a collection tree, you can do it like this:

    (define *big-four* (collection (∩ (artist = Slayer)
                                      (artist = Metallica)
                                      (artist = Anthrax)
                                      (artist = Megadeth))))
    (collection-fold (lambda (x acc) (+ acc 1))
                     0 *big-four*)

    This would evaluate to 9: four ‘equals’ nodes, each with a ‘universe’ child, plus the root operator node.

    The library is still at an early stage, but it can control an XMMS2 server, as the ‘cli’ example shipped with the library will show you. There are no high-level primitives to support synchronous and asynchronous server interactions yet, and there is not a whole lot of documentation either. But the library implements all data types the protocol supports, as well as all methods, signals, and broadcasts the IPC document defines. The collection DSL supports all collection operators and all attributes one might want to supply.

    Feel free to take a look, play around, report bugs. The library's home is with my account on github.

    15 January, 2017 09:54PM

    BunsenLabs Linux

    Debian 8.7 released

    OH YEA!!!!!

    81 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.

    Thank you for the heads-up!

    15 January, 2017 02:14PM by Sector11

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Valorie Zimmerman: Google Code-in draws to a close -- students finish your final task by January 16, 2017 at 09:00 (PST)

    KDE's Google Code-in party is ending once again. The deadline for submitting student work is January 16, 2017 at 09:00 (PST).

    Mentors, you have until January 18, 2017 at 09:00 (PST) to evaluate your student's work. Please get that done before the deadline, so that admins don't have to judge the student work.

    Then it will be time to choose winners. We need to have our choices in by January 23, 2017 at 09:00 (PST). Winners and Finalists will be announced January 30, 2017 at 09:00 (PST).

    To me, this contest has been lovely. Because there are more organizations participating now, there are more tasks for students, and less pressure on each org. It seems that the students have enjoyed themselves as well.

    Spencerb said in #kde-soc: “This was my first (and final) GCi, so I don't have much of a point of comparison, but it's been awesome. It's been an opportunity to meet new people and just get involved with KDE, which I've wanted to do for a long time. I've also learned a lot about serious software development that I wouldn't have otherwise.

    “I'll turn 18 this Monday, which is why this is my last year :(  I'm so glad to have had the chance to participate at least once.”

    As a task, Harpreet wrote a GCi review: http://aboutgci2016.blogspot.in/

    So far, we've had 121 students and 160 completed tasks; the top ten students alone account for 103 of those. Most exciting for me is that 45 beginner tasks were completed. Getting kids acquainted with Free and Open Source Software communities is why every organization must have beginner tasks, and I'm glad 45 kids got to know KDE a bit.

    15 January, 2017 05:04AM by Valorie Zimmerman (noreply@blogger.com)

    January 14, 2017

    Mattia Migliorini: Install Balsamiq Mockups in Debian/Ubuntu

    Balsamiq is one of the best tools for quick wireframes creation. It allows you to efficiently and quickly create mockups that give you an idea of how design elements fit in the page.

    Some years ago there was a package available for the most popular Linux distributions, but since Adobe dropped support for Linux and Balsamiq is built on top of Adobe Air, nowadays they don't support Linux either.

    As you can see from the downloads page of Balsamiq, though, it luckily works well with wine.

    Install Balsamiq with WINE

    First things first: install wine.

    sudo apt-get install wine

    Now, let’s proceed with an easy step-by-step guide.

    1. Download the Balsamiq Bundle that includes Adobe Air (if the link does not work, head on to Balsamiq Downloads and download the version with Adobe Air bundled)
    2. Open a terminal, unzip the bundle and move it to /opt (change the Downloads directory name according to your setup)
      cd Downloads
      unzip Balsamiq*
      sudo mv Balsamiq* /opt
    3. To make life easier, rename the .exe to simply balsamiq.exe
      cd /opt/Balsamiq_Mockups_3/
      mv Balsamiq\ Mockups\ 3.exe balsamiq.exe
    4. Now you can run Balsamiq Mockups by running it with wine
      wine /opt/Balsamiq_Mockups_3/balsamiq.exe

    Add Balsamiq as an application

    The last, optional step can save you a lot of time when launching Balsamiq, because it saves you the hassle of typing the command from step 4 above every time (and remembering the Balsamiq executable location). It simply consists of creating a new desktop entry for Balsamiq, which adds it to your operating system's application list.

    Create the file ~/.local/share/applications/Balsamiq.desktop with the following content:

    [Desktop Entry]
    Type=Application
    Name=Balsamiq Mockups
    Exec=wine /opt/Balsamiq_Mockups_3/balsamiq.exe

    If you are on Ubuntu with Unity, you can add the following lines too:


    Now, just save and have a look at your Dash or Activity Panel to see if it works.
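    For reference, the launcher can also be created from a terminal in one go; a minimal sketch, assuming the /opt path used above (the Type= line is required by the freedesktop.org Desktop Entry specification):

    ```shell
    # Create the launcher file; assumes the /opt install path from the steps above.
    mkdir -p ~/.local/share/applications
    cat > ~/.local/share/applications/Balsamiq.desktop <<'EOF'
    [Desktop Entry]
    Type=Application
    Name=Balsamiq Mockups
    Exec=wine /opt/Balsamiq_Mockups_3/balsamiq.exe
    EOF
    ```

    If you have desktop-file-utils installed, running desktop-file-validate ~/.local/share/applications/Balsamiq.desktop will catch syntax mistakes in the entry.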

    Install Balsamiq Mockups with Play on Linux

    Eric suggests using Play on Linux for an easier installation process and reports that Balsamiq Mockups 3 works like a charm for him in that environment. Worth a try!

    The post Install Balsamiq Mockups in Debian/Ubuntu appeared first on deshack.

    14 January, 2017 09:16AM

    Ted Gould: The Case for Ubuntu Phone

    There are times in standard social interactions where people ask what you do professionally, which means I end up talking about Ubuntu and specifically Ubuntu Phone. Many times that comes down to the seemingly simple question: “Why would I want an Ubuntu phone?” I’ve tried the answer “because I’m a thought leader and you should want to be like me,” but sadly that gets little traction outside of Silicon Valley. Another good answer is all the benefits of Free Software, but many of those are benefits the general public doesn’t yet realize they need.

    Ubuntu Phone

    The biggest strength and weakness of Ubuntu Phone is that it’s a device without an intrinsic set of services. If you buy an Android device you get Google Services. If you buy an iPhone you get Apple services. While these can be strengths (at least in Google’s case) they are effectively a lock in to services that may or may not meet your requirements. You certainly can get Telegram or Signal for either of those, but they’re never going to be as integrated as Hangouts or iMessage. This goes throughout the device including things like music and storage as well. Ubuntu and Canonical don’t provide those services, but instead provide integration points for any of them (including Apple and Google if they wanted) to work inside an Ubuntu Phone. This means as a user you can use the services you want on your device, if you love Hangouts and Apple Maps, Ubuntu Phone is happy to be a freak with you.

    Carriers are also interested in this flexibility. They’re trying to put together packages of data and services that will sell and fetch a premium price (effectively bundling). Some services they may provide themselves and some come from well-known providers; but by not being able to select options for those base services, they have less flexibility in what they can do. Sure, Google and Apple could give them a great price or bundle, but they both realize that they don’t have to. So that effectively makes it difficult for the carriers, as well as for alternate service providers (e.g. Dropbox, Spotify, etc.), to compete.

    What I find most interesting about this discussion is that it is the original reason Google bought Android. They were concerned that with Apple controlling the smartphone market they’d be in a position to damage Google’s ability to compete in services. They were right. But instead of opening it up to competition (a competition that, certainly at the time and even today, they’re likely to win) they decided to lock down Android with their own services. So now we see in places like China, where Google services are limited, there is no way for Android to win, only forks that use a different set of integrations. One has to wonder whether Google would have bought Android if Ubuntu Phone had existed earlier; while Ubuntu Phone competes with Android, it doesn’t pose any threat to Google’s core businesses.

    It is always a failure to try and convince people to change their patterns and devices just for the sake of change. Early adopters are people who enjoy that, but not the majority of people. This means that we need to be an order of magnitude better, which is a pretty high bar to set, but one I enjoy working towards. I think that Ubuntu Phone has the fundamental DNA to win in this race.

    14 January, 2017 06:00AM

    hackergotchi for Blankon developers

    Blankon developers

    Sokhibi: How to Add Debian Repositories to BlankOn X Tambora

    Not long ago the BlankOn project released its newest distro, BlankOn X Tambora. A few new users then asked: can you add a PPA in BlankOn like in Ubuntu?
    This question usually comes up because the user wants to install a recent application that is not yet available in BlankOn's official package repository, for example Inkscape 0.92.

    The answer is: no, you cannot add PPAs in BlankOn like in Ubuntu, because BlankOn is not an Ubuntu derivative; as the release manager likes to say, it is a different school of thought.

    So how can users still get the latest applications?
    The way I do it is by adding the Debian testing and unstable repositories.
    Besides letting you install newer applications, this also lets you install certain applications that are not available in BlankOn's official repository, or whose package in the BlankOn repository is broken and not yet fixed, as happened to me this morning when I wanted to install Krita.

    Below is a short explanation of how to add Debian repositories to BlankOn Tambora. In this example I add them from the terminal using gedit, because that is the easiest way for beginners to follow; advanced users can use vim or nano instead, the result is the same.
    Open a terminal by clicking BlankOn Main Menu => System Tools => Terminal. Once the terminal is open, type sudo gedit /etc/apt/sources.list to open the repository list in gedit.
    Type the administrator password you created when installing BlankOn (nothing is shown while you type, just keep going), then press Enter.
    After you press Enter, the repository list opens in gedit, as shown below.
    The next step is to add the Debian unstable and testing repositories to sources.list, as follows:
    # Testing repository - main, contrib and non-free branches
    deb http://http.us.debian.org/debian testing main non-free contrib
    deb-src http://http.us.debian.org/debian testing main non-free contrib

    # Testing security updates repository
    deb http://security.debian.org/ testing/updates main contrib non-free
    deb-src http://security.debian.org/ testing/updates main contrib non-free

    # Unstable repo main, contrib and non-free branches, no security updates here
    deb http://http.us.debian.org/debian unstable main non-free contrib
    deb-src http://http.us.debian.org/debian unstable main non-free contrib
    The sources.list then looks like the image below.

    Once the Debian entries have been added, save the changes by clicking Save, then close gedit.

    The next step is to update the package index; one way is to run sudo apt-get update in the terminal.

    Wait for the repository update to finish.

    Once the repository update has finished, close the terminal. At this point adding the Debian repositories is complete, and you can install a given application from the terminal with apt-get install application_name.
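    A note on safety: with testing and unstable enabled next to BlankOn's own repositories, a plain apt-get upgrade can start replacing packages wholesale from Debian. A minimal sketch of a common safeguard (not part of the original tutorial; the file name and priority values are my choice) is to pin the Debian suites low and install individual packages from them explicitly:

```shell
# Write an APT preferences fragment that keeps testing/unstable at low
# priority, so their packages are only used when requested explicitly.
# For illustration this writes to a local file; the real file would be
# /etc/apt/preferences.d/debian-pin and require sudo.
cat > debian-pin <<'EOF'
Package: *
Pin: release a=testing
Pin-Priority: 200

Package: *
Pin: release a=unstable
Pin-Priority: 100
EOF

# A single package (Krita, from the example in the post) would then be
# pulled from testing with:
#   sudo apt-get update
#   sudo apt-get -t testing install krita
grep -c 'Pin-Priority' debian-pin    # → 2
```

    With priorities below 500, APT keeps preferring the already-configured suites unless -t (or an explicit version) says otherwise.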

    For experienced users, installing applications from the terminal is not hard, but for beginners it is certainly not easy.
    To make installing an application easier (especially for beginners), you can use Synaptic Package Manager: click BlankOn Main Menu => Administration => Synaptic.
    Type the application name in the quick search box. Synaptic shows a detailed list of software packages. Besides adding and removing applications, you can do the same for the available system libraries.

    To mark a package for installation, right-click the application you want to install and choose Mark for Installation.

    Wait for the installation to finish; how long it takes depends on the size of the application and the speed of your internet connection.
    Once the installation is complete, close Synaptic Package Manager and any running applications, then log out of the desktop and log back in to start using the newly installed applications.

    That concludes this short tutorial on adding the Debian repositories to BlankOn X Tambora; see you in another post.

    14 January, 2017 12:11AM by Istana Media (noreply@blogger.com)

    January 13, 2017

    hackergotchi for HandyLinux


    HandyLinux and the hosting change

    Hello everyone,

    I hope the end-of-year holidays went well for you and your loved ones.

    Best wishes again for 2017!

    In the last article I talked about the domain name and the hosting: here is a short explanation.

    The starting situation:
    • HandyLinux-1.9 is reaching end of life: documentation and repositories are frozen for the "handylinux" section. You will still receive Debian security updates.
    • HandyLinux-2.5 becomes the "oldstable" version and is maintained until May 2018.

    What happens over the coming months:

    • the "handylinux.org" domain name stays active and will be maintained as long as possible beyond 2018,
    • the associated hosting at OVH will no longer serve any purpose once Debian Stretch goes live in spring 2017.

    The hosting change:

    The relevant content of the current handylinux server will gradually be moved to the server of the cahiers du débutant, so that I avoid paying for a pointless second hosting plan.
    • The forum is already closed and will disappear at the same time as the handylinux hosting,
    • the v1 and v2 documentation will be fully available on the new server,
    • the repositories are already mirrored on the cahiers du débutant server, and an update of the handylinux-desktop.deb package will switch the repository addresses for you,
    • the goodies (wallpaper gallery, image service, etc.) will be exported one by one, each with a redirect and an accompanying article.

    What is already in place:

    What remains to be done:
    • transferring the blog... I'm considering opening a blog for the DFLinux project, with news about handylinux, dflinux and the cahiers du débutant... we'll see
    • transferring the ISOs
    • transferring the archives
    • hunting down dead links and fixing internal links (mostly in the v1 docs, so no big deal)
    • fixing the repository addresses via an update
    • ... small unforeseen details

    I'll keep you posted regularly.

    HandyLinux - the Debian distribution without the headache...

    13 January, 2017 08:42PM by arpinux

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Ubuntu Insights: Welcome the new Ubuntu-based Precision line-up

    This is a guest post by Barton George from Dell. If you would like to contribute a guest post, please contact ubuntu-devices@canonical.com

    Today I am excited to announce the next generation of our Ubuntu-based Precision mobile workstation line. Not only have we revamped the current line-up but we have also added the Precision 5720 All-in-One. This follows the introduction back in October of the 6th generation XPS 13 developer edition.

    How did we get here

    Four and a half years ago a scrappy skunkworks project by the name of “Project Sputnik” was kicked off at Dell to gauge interest in a developer-focused laptop. The project received an amazing amount of interest and support from the community and as a result, nine months later this project became an official product — the ultra-mobile XPS 13 developer edition.

    While the XPS 13 was a big hit, the team soon started getting a steady stream of requests to add a bigger, beefier system. This caught the attention of team member Jared Dominguez (on twitter) who decided to work on his own time to get Ubuntu running on the Dell Precision M3800 mobile workstation. Jared documented his work and then posted the instructions publicly.

    Jared’s efforts got so much interest from the community that a little over a year later it debuted as an official product. A little over a year after that, one Ubuntu-based Precision workstation became four and today we are announcing the next generation of this line-up along with the new Precision 5720 All-in-One.

    Key Features for Dell™ Precision 3520

    • Affordable, fully customizable 15” mobile workstation
    • Preloaded with Ubuntu 16.04 LTS
    • 7th generation Intel® Core™ and Intel® Xeon™ processors
    • 15.6” HD (1366×768), FHD (1920×1080) and FHD touch
    • Up to 32GB of memory and 2TB of storage
    • ECC memory, Thunderbolt 3 and NVIDIA graphics
    • Availability: worldwide

    How do I order a 3520 today?

    In the case of the US, you can get to the Ubuntu-based version of the Dell™ Precision 3520 mobile workstation by going to the landing page. Once there, click on the green “Customize & Buy” button on the right. This will take you to the “Select Components” page where, under “Operating System”, you choose Ubuntu 16.04 and away you go!

    With regards to availability for the rest of the line-up, watch this space!

    Original post can be found here.

    13 January, 2017 12:48PM

    Stéphane Graber: Running Kubernetes inside LXD

    LXD logo


    For those who haven’t heard of Kubernetes before, it’s defined by the upstream project as:

    Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications.

    It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon 15 years of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community.

    It is important to note the “applications” part in there. Kubernetes deploys a set of single application containers and connects them together. Those containers will typically run a single process and so are very different from the full system containers that LXD itself provides.

    This blog post will be very similar to one I published last year on running OpenStack inside a LXD container. Similarly to the OpenStack deployment, we’ll be using conjure-up to setup a number of LXD containers and eventually run the Docker containers that are used by Kubernetes.


    This post assumes you’ve got a working LXD setup, providing containers with network access and that you have at least 10GB of space for the containers to use and at least 4GB of RAM.

    Outside of configuring LXD itself, you will also need to bump some kernel limits with the following commands:

    sudo sysctl fs.inotify.max_user_instances=1048576  
    sudo sysctl fs.inotify.max_queued_events=1048576  
    sudo sysctl fs.inotify.max_user_watches=1048576  
    sudo sysctl vm.max_map_count=262144
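    Those sysctl settings only last until reboot. A small sketch for persisting them (the file name is my choice; on a standard Ubuntu host the fragment would live under /etc/sysctl.d/ and be applied with sudo sysctl --system):

```shell
# Collect the four limits in a sysctl.d-style fragment. Written to a
# local file for illustration; the real path would be
# /etc/sysctl.d/99-lxd-kubernetes.conf (root required).
cat > 99-lxd-kubernetes.conf <<'EOF'
fs.inotify.max_user_instances = 1048576
fs.inotify.max_queued_events = 1048576
fs.inotify.max_user_watches = 1048576
vm.max_map_count = 262144
EOF
wc -l < 99-lxd-kubernetes.conf    # → 4
```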

    Setting up the container

    Similarly to OpenStack, the conjure-up deployed version of Kubernetes expects a lot more privileges and resource access than LXD would typically provide. As a result, we have to create a privileged container, with nesting enabled and with AppArmor disabled.

    This means that few of LXD’s security features will still be in effect for this container. Depending on how you feel about this, you may choose to run it on a different machine.

    Note, however, that all of this still remains better than instructions that would have you install everything directly on your host machine, if only because it is very easy to remove it all in the end.

    lxc launch ubuntu:16.04 kubernetes -c security.privileged=true -c security.nesting=true -c linux.kernel_modules=ip_tables,ip6_tables,netlink_diag,nf_nat,overlay -c raw.lxc=lxc.aa_profile=unconfined
    lxc config device add kubernetes mem unix-char path=/dev/mem

    Then we need to add a couple of PPAs and install conjure-up, the deployment tool we’ll use to get Kubernetes going.

    lxc exec kubernetes -- apt-add-repository ppa:conjure-up/next -y
    lxc exec kubernetes -- apt-add-repository ppa:juju/stable -y
    lxc exec kubernetes -- apt update
    lxc exec kubernetes -- apt dist-upgrade -y
    lxc exec kubernetes -- apt install conjure-up -y

    And the last setup step is to configure LXD networking inside the container.
    Answer with the default for all questions, except for:

    • Use the “dir” storage backend (“zfs” doesn’t work in a nested container)
    • Do NOT configure IPv6 networking (conjure-up/juju don’t play well with it)
    lxc exec kubernetes -- lxd init

    And that’s it for the container configuration itself, now we can deploy Kubernetes!

    Deploying Kubernetes with conjure-up

    As mentioned earlier, we’ll be using conjure-up to deploy Kubernetes.
    This is a nice, user friendly, tool that interfaces with Juju to deploy complex services.

    Start it with:

    lxc exec kubernetes -- sudo -u ubuntu -i conjure-up
    • Select “Kubernetes Core”
    • Then select “localhost” as the deployment target (uses LXD)
    • And hit “Deploy all remaining applications”

    This will now deploy Kubernetes. The whole process can take well over an hour depending on what kind of machine you’re running this on. You’ll see all services getting a container allocated, then getting deployed and finally interconnected.

    Once the deployment is done, a few post-install steps will appear. They will import some initial images, set up SSH authentication, configure networking and finally give you the IP address of the dashboard.

    Interact with your new Kubernetes

    We can ask juju to deploy a new kubernetes workload, in this case 5 instances of “microbot”:

    ubuntu@kubernetes:~$ juju run-action kubernetes-worker/0 microbot replicas=5
    Action queued with id: 1d1e2997-5238-4b86-873c-ad79660db43f

    You can then grab the service address from the Juju action output:

    ubuntu@kubernetes:~$ juju show-action-output 1d1e2997-5238-4b86-873c-ad79660db43f
    results:
      address: microbot.
    status: completed
    timing:
      completed: 2017-01-13 10:26:14 +0000 UTC
      enqueued: 2017-01-13 10:26:11 +0000 UTC
      started: 2017-01-13 10:26:12 +0000 UTC

    Now actually using the Kubernetes tools, we can check the state of our new pods:

    ubuntu@kubernetes:~$ ./kubectl get pods
    default-http-backend-w9nr3 1/1 Running 0 21m
    microbot-1855935831-cn4bs 0/1 ContainerCreating 0 18s
    microbot-1855935831-dh70k 0/1 ContainerCreating 0 18s
    microbot-1855935831-fqwjp 0/1 ContainerCreating 0 18s
    microbot-1855935831-ksmmp 0/1 ContainerCreating 0 18s
    microbot-1855935831-mfvst 1/1 Running 0 18s
    nginx-ingress-controller-bj5gh 1/1 Running 0 21m

    After a little while, you’ll see everything’s running:

    ubuntu@kubernetes:~$ ./kubectl get pods
    default-http-backend-w9nr3 1/1 Running 0 23m
    microbot-1855935831-cn4bs 1/1 Running 0 2m
    microbot-1855935831-dh70k 1/1 Running 0 2m
    microbot-1855935831-fqwjp 1/1 Running 0 2m
    microbot-1855935831-ksmmp 1/1 Running 0 2m
    microbot-1855935831-mfvst 1/1 Running 0 2m
    nginx-ingress-controller-bj5gh 1/1 Running 0 23m

    At which point, you can hit the service URL with:

    ubuntu@kubernetes:~$ curl -s http://microbot. | grep hostname
     <p class="centered">Container hostname: microbot-1855935831-fqwjp</p>

    Running this multiple times will show you different container hostnames as you get load balanced between one of those 5 new instances.
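    One way to make that visible is to extract the pod name from a number of responses and count the distinct backends. A sketch: the printf below stands in for repeated curl -s calls against the service (pod names taken from the kubectl output above), so it's the pipeline that matters.

```shell
# Count how many distinct microbot pods answered. In a real run, replace
# the printf with a loop of curl calls against the service address.
printf '%s\n' \
  '<p class="centered">Container hostname: microbot-1855935831-fqwjp</p>' \
  '<p class="centered">Container hostname: microbot-1855935831-cn4bs</p>' \
  '<p class="centered">Container hostname: microbot-1855935831-fqwjp</p>' |
  grep -o 'microbot-[0-9]*-[0-9a-z]*' | sort -u | wc -l    # → 2
```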


    Similar to OpenStack, conjure-up combined with LXD makes it possible to deploy rather complex, big software very easily and in a very self-contained way.

    This isn’t the kind of setup you’d want to run in a production environment, but it’s great for developers, demos and whoever wants to try those technologies without investing into hardware.

    Extra information

    The conjure-up website can be found at: http://conjure-up.io
    The Juju website can be found at: http://www.ubuntu.com/cloud/juju

    The main LXD website is at: https://linuxcontainers.org/lxd
    Development happens on Github at: https://github.com/lxc/lxd
    Mailing-list support happens on: https://lists.linuxcontainers.org
    IRC support happens in: #lxcontainers on irc.freenode.net
    Try LXD online: https://linuxcontainers.org/lxd/try-it

    13 January, 2017 10:35AM

    BunsenLabs Linux

    HHH missing in action

    Great to see you back hhh!!

    13 January, 2017 04:10AM by johnraff

    hackergotchi for Tails


    Call for testing: 2.10~rc1

    You can help Tails! The first release candidate for the upcoming version 2.10 is out. Please test it and report any issue. We are particularly interested in feedback and problems relating to:

    • OnionShare
    • Tor Browser's per-tab circuit view
    • Problems with OnionCircuits
    • Problems with Tor Launcher (when configuring Tor bridges, proxy etc.)

    How to test Tails 2.10~rc1?

    Keep in mind that this is a test image. We tested that it is not broken in obvious ways, but it might still contain undiscovered issues.

    But test wildly!

    If you find anything that is not working as it should, please report to us! Bonus points if you first check if it is a known issue of this release or a longstanding known issue.

    Download and install

    Tails 2.10~rc1 torrent

    Tails 2.10~rc1 ISO image OpenPGP signature

    To install 2.10~rc1, follow our usual installation instructions, skipping the Download and verify step.

    Upgrade from 2.9.1

    1. Start Tails 2.9.1 on a USB stick installed using Tails Installer and set an administration password.

    2. Run this command in a Root Terminal to select the "alpha" upgrade channel and start the upgrade:

      echo TAILS_CHANNEL=\"alpha\" >> /etc/os-release && \
    3. After the upgrade is installed, restart Tails and choose Applications ▸ Tails ▸ About Tails to verify that you are running Tails 2.10~rc1.

    What's new since 2.9.1?

    Changes since Tails 2.9.1 are:

    • Major new features and changes

      • Upgrade the Linux kernel to 4.8.0-0.bpo.2 (Closes: #11886).
      • Install OnionShare from jessie-backports. Also install python3-stem from jessie-backports to allow the use of ephemeral onion services (Closes: #7870).
      • Completely rewrite tor-controlport-filter. Now we can safely support OnionShare, Tor Browser's per-tab circuit view and similar.
        • Port to python3.
        • Handle multiple sessions simultaneously.
        • Separate data (filters) from code.
        • Use python3-stem to allow our filter to be a lot more oblivious of the control language (Closes: #6788).
        • Allow restricting STREAM events to only those generated by the subscribed client application.
        • Allow rewriting commands and responses arbitrarily.
        • Make tor-controlport-filter reusable for others by e.g. making it possible to pass the listen port, and Tor control cookie/socket paths as arguments (Closes: #6742). We hear Whonix plan to use it! :)
      • Upgrade Tor to, the new stable series (Closes: #12012).
    • Security fixes

  • Upgrade Icedove to 1:45.6.0-1~deb8u1+tails1.
    • Minor improvements

      • Enable and use the Debian Jessie proposed-updates APT repository, anticipating on the Jessie 8.7 point-release (Closes: #12124).
      • Enable the per-tab circuit view in Tor Browser (Closes: #9365).
      • Change syslinux menu entries from "Live" to "Tails" (Closes: #11975). Also replace the confusing "failsafe" wording with "Troubleshooting Mode" (Closes: #11365).
      • Make OnionCircuits use the filtered control port (Closes: #9001).
      • Make tor-launcher use the filtered control port.
      • Run OnionCircuits directly as the Live user, instead of a separate user. This will make it compatible with the Orca screen reader (Closes: #11197).
      • Run tor-controlport-filter on port 9051, and the unfiltered one on 9052. This simplifies client configurations and assumptions made in many applications that use Tor's ControlPort. It's the exception that we connect to the unfiltered version, so this seems like the more sane approach.
      • Remove tor-arm (Nyx) (Closes: #9811).
      • Remove AddTrust_External_Root.pem from our website CA bundle. We now only use Let's Encrypt (Closes: #11811).
      • Configure APT to use Debian's Onion services instead of the clearnet ones (Closes: #11556).
      • Replaced AdBlock Plus with uBlock Origin (Closes: #9833). This incidentally also makes our filter lists lighter by de-duplicating common patterns among the EasyList filters (Closes: #6908). Thanks to spriver for this first major code contribution!
      • Install OpenPGP Applet 1.0 (and libgtk3-simplelist-perl) from Jessie backports (Closes: #11899).
      • Add support for exFAT (Closes: #9659).
      • Disable unprivileged BPF. Since upgrading to kernel 4.6, unprivileged users can use the bpf() syscall, which is a security concern, even with JIT disabled. So we disable that. This feature wasn't available before Linux 4.6, so disabling it should not cause any regressions (Closes: #11827).
      • Add and enable AppArmor profiles for OnionCircuits and OnionShare.
      • Raise the maximum number of loop devices to 32 (Closes: #12065).
      • Drop kernel.dmesg_restrict customization: it's enabled by default since 4.8.4-1~exp1 (Closes: #11886).
      • Upgrade Electrum to 2.7.9-1.
    • Bugfixes

      • Tails Greeter:
        • use gdm-password instead of gdm-autologin, to fix switching to the VT where the desktop session lives on Stretch (Closes: #11694)
        • Fix more options scrolledwindow size in Stretch (Closes: #11919)
      • Tails Installer: remove unused code warning about missing extlinux in Tails Installer (Closes: #11196).
      • Update APT pinning to cover all binary packages built from src:mesa so we ensure installing mesa from jessie-backports (Closes: #11853).
      • Install xserver-xorg-video-amdgpu. This should help supporting newer AMD graphics adapters. (Closes #11850)
      • Fix firewall startup during early boot, by referring to the "amnesia" user via its UID (Closes: #7018).
      • Include all amd64-microcodes.

    For more details, see also our changelog.

    Known issues in 2.10~rc1

    • There are no VirtualBox guest modules (#12139).

    • Electrum won't automatically connect since it lacks proxy configuration (#12140). Simply selecting the SOCKS5 proxy in the Network options is enough to get it working again.

    • Longstanding known issues

    13 January, 2017 12:02AM

    January 11, 2017

    hackergotchi for Blankon developers

    Blankon developers

    Rahman Yusri Aftian: Manokwari Shell (Unofficial)

    Manokwari is BlankOn's default desktop; below is the standard Manokwari look.


    The standard Manokwari version shipped with BlankOn

    Since there is a version whose release was cancelled, there is no harm in my using this unofficial Manokwari build, and here is what it looks like.


    Unofficial Manokwari with the calendar view and brightness settings


    Right panel on top of applications

    Oh, and if you want to try it you can download it here; sorry, it is only available for the amd64 architecture.

    How to install:

    0. Download

    1. sudo dpkg -i manokwari_1.0.13-0blankon1%2bamira1_amd64.deb

    2. pkill manokwari, and it changes.

    11 January, 2017 03:00PM

    hackergotchi for VyOS


    VyOS remote management library for Python

    Someone on Facebook rightfully noted that lately there's been more work on the infrastructure than development. This is true, but that work on infrastructure was long overdue and we just had to do it some time. There is even more work on the infrastructure waiting to be done, though it's more directly related to development, like restructuring the package repos.

    Anyway, it doesn't mean all development has stopped while we've been working on infrastructure. Today we released a Python library for managing VyOS routers remotely.

    Before I get to the details, have a quick example of what using it is like:

    import vymgmt
    vyos = vymgmt.Router('', 'vyos', password='vyos', port=22)
    vyos.set("protocols static route next-hop")
    vyos.delete("system options reboot-on-panic")

    If you want to give it a try, you can install it from PyPI ("pip install vymgmt"), it's compatible with both Python 2.7 and Python 3. You can read the API reference at http://vymgmt.readthedocs.io/en/latest/ or get the source code at https://github.com/vyos/python-vyos-mgmt .

    Now to the details. This is not a true remote API, the library connects to VyOS over SSH and sends commands as if it was a user session. Surprisingly, one of the tricky parts was to find an SSH/expect library that can cope with VyOS shell environment well, and is compatible with both 2.7 and 3. All credit for this goes to our contributor who goes by Hochikong, who tried a whole bunch of them, settled with pexpect and wrote a prototype.

    How is the library better than using pexpect directly, if it's a rather thin wrapper around it? First, it's definitely more convenient to just call set(), delete(), or commit() than to format command strings yourself and take care of sending and receiving lines.

    Second, common error conditions are detected (through simple regex matching) and raise appropriate exceptions such as ConfigError (for set/delete failures) or CommitError for commit errors. There's also a special ConfigLocked exception (a subclass of CommitError) that is raised when commit fails due to another commit in progress, so you can recover from it by sleep() and retry. This may seem uncommon, but people who use VRRP transition scripts and the like on VyOS already reported that they ran into it.

    Third, the library is aware of the state machine of VyOS sessions, and will not let you accidentally do wrong things such as trying to enter set/delete commands before entering the conf mode. By default it also doesn't let you exit configure sessions if there are uncommitted or unsaved changes, though you can override that. If a timeout occurs, an exception will be raised too (while pexpect returns False in this case).

    Right now, of the high-level methods, it only supports set, delete, and commit. This should be enough for a start, but if you want something else, there are generic methods for running op and conf mode commands (run_op_mode_command() and run_conf_mode_command() respectively). We are not sure what people want most, so what we implement next depends on your requests and suggestions (and pull requests, of course!). Other things that are planned but not there yet are SSH public key auth and top-level words other than set and delete (rename, copy etc.). We are not sure if commit-confirm is really friendly to programmatic access, but if you have any ideas how to handle it, share them with us.

    On an unrelated note, syncer and his graphics designer friend made a design for VyOS t-shirts. If anyone buys that stuff, the funds will be used for the project needs. The base cost is around 20 eur, but you can get them with 15% discount by using VYOSMGTLIB promo code: https://teespring.com/stores/vyos?source=blog&pr=VYOSMGTLIB

    11 January, 2017 11:15AM by Daniil Baturin

    January 10, 2017

    hackergotchi for Ubuntu developers

    Ubuntu developers

    Jorge Castro: Fresh Kubernetes documentation available now

    Over the past few months our team has been working real hard on the Canonical Distribution of Kubernetes. This is a pure-upstream distribution of k8s with our community’s operational expertise bundled in.

    It means that we can use one set of operational code to get the same deployment on GCE, AWS, Azure, Joyent, OpenStack, and Bare Metal.

    Like most young distributed systems, Kubernetes isn’t exactly famous for its ease of use, though there has been tremendous progress over the past 12 months. Our documentation on Kubernetes was nearly non-existent and it became obvious that we had to dive in there and bust it out. I’ve spent some time fixing it up and it’s been recently merged.

    You can find the Official Ubuntu Guides in the “Create a cluster” section. We’re taking what I call a “sig-cluster-lifecycle” approach to this documentation – the pages are organized into lifecycle topics based on what an operator would do, so “Backups” or “Upgrades” instead of one big page with sections. This will allow us to grow each section based on the expertise we learn on k8s for that given task.

    Over the next few months (and hopefully for Kubernetes 1.6) we will slowly be phasing out the documentation on our individual charm and layer pages to reduce duplication and move to a pure upstream workflow.

    On behalf of our team we hope you enjoy Kubernetes, and if you’re running into issues please let us know or you can find us in the Kubernetes slack channels.

    10 January, 2017 07:34PM

    hackergotchi for Blankon developers

    Blankon developers

    Yudha HT: Chip Pico8 Review

    At the end of last month, the items I ordered four months ago finally arrived, after a technical mishap along the way. The order consisted of two machines at 9 dollars each, the cheapest machines I have ever bought. Here is what they look like.

    The 9-dollar Chip Pico8

    Below is a screenshot from one of the machines up and running.

    Chip Pico8 with MariaDB

    Design and Form Factor

    This device is the second smallest I own, after the Arduino Nano. Its compact, tiny form appealed to me from the start, quite apart from the price.

    Besides that, this machine is designed as a ready-to-use computer. No extra hardware is needed to start computing other than a power supply.


    Configuring this machine is easier than the machines I owned before, the RaspberryPi and BananaPi, because no extra hardware is required. And with headless-mode support I don't need any odd peripherals to get it working: minicom /dev/ttyACM? is enough, where ? is the number that appears when the device is detected by the host machine.

    Windows users can use the method from Dexter Industries.

    Pros and Cons

    I have already mentioned this device's strengths above: a compact form factor, a low price, ready to use, and easy setup. Its weaknesses are the single processor and the minimal storage.

    But what can you demand from a 9-dollar computer?

    Let's tinker. Put the computer to work.

    10 January, 2017 06:58PM

    hackergotchi for Ubuntu developers

    Ubuntu developers

    The Fridge: Ubuntu Weekly Newsletter Issue 494

    Welcome to the Ubuntu Weekly Newsletter. This is issue #494 for the week January 2 – 8, 2017, and the full version is available here.

    In this issue we cover:

    The issue of The Ubuntu Weekly Newsletter is brought to you by:

    • Elizabeth K. Joseph
    • Chris Guiver
    • Paul White
    • And many others

    If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

    Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution 3.0 License BY SA Creative Commons License

    10 January, 2017 04:06PM

    January 09, 2017

    Kubuntu General News: Plasma 5.8.4 and KDE Frameworks 5.8.0 now available in Backports for Kubuntu 16.04 and 16.10

    The Kubuntu Team announces the availability of Plasma 5.8.4 and KDE Frameworks 5.8.0 on Kubuntu 16.04 (Xenial) and 16.10 (Yakkety) through our Backports PPA.

    Plasma 5.8.4 Announcement:
    How to get the update (in the commandline):

    1. sudo apt-add-repository ppa:kubuntu-ppa/backports
    2. sudo apt update
    3. sudo apt full-upgrade -y

    If you have been testing this upgrade by using the backports-landing PPA, please remove it first before doing the upgrade to backports. Do this in the commandline:

    sudo apt-add-repository --remove ppa:kubuntu-ppa/backports-landing

    Please report any bugs you find on Launchpad (for packaging problems) and http://bugs.kde.org for bugs in KDE software.

    09 January, 2017 08:01PM