November 28, 2014

Ubuntu developers

Daniel Pocock: XCP / XenServer and Debian Jessie

In 2013, Debian wheezy was released with a number of great virtualization options, including the Xen Cloud Platform (XCP / Xen-API) toolstack packaged by Thomas Goirand to run in a native Debian host environment.

Unfortunately, XCP is not available as a host (dom0) solution for the upcoming Debian 8 (jessie) release. However, it is possible to continue running a Debian wheezy system as the dom0 host and run virtualized (domU) jessie systems inside it. It may also be possible to use the packages from wheezy on a jessie system, but I haven't looked into that myself so far.

Newer kernel boot failures in Xen

After successfully upgrading a VM (domU in Xen terminology) from wheezy to jessie, I tried to reboot the VM and found that it wouldn't start. People have reported similar problems booting newer versions of Ubuntu and Fedora in XCP and XenServer environments. PyGrub displayed an error on the dom0 console:

# xe vm-start name-label=server05
Error code: Using  to parse /grub/grub.cfg
Error parameters: Traceback (most recent call last):,
   File "/usr/lib/xcp/lib/pygrub.xcp", line 853, in ,
     raise RuntimeError, "Unable to find partition containing kernel"

There is a quick and easy workaround. Hard-code the kernel and initrd filenames into config values that will be used to boot. A more thorough solution will probably involve using a newer version of PyGrub in wheezy.

If the /boot tree is a separate filesystem inside the VM, use commands like the following (substitute the correct UUID for the VM and the exact names/versions of the vmlinuz and initrd.img files):

xe vm-param-set uuid=da654fd0-74db-11e4-82f8-0800200c9a66 \
   PV-bootloader-args="--kernel=/vmlinuz-3.16-3-amd64
   --ramdisk=/initrd.img-3.16-3-amd64"

xe vm-param-set uuid=da654fd0-74db-11e4-82f8-0800200c9a66 \
   PV-args="root=/dev/mapper/vg00-root ro quiet"

and if /boot is on the root filesystem of the VM, this will do the trick:

xe vm-param-set uuid=da654fd0-74db-11e4-82f8-0800200c9a66 \
   PV-bootloader-args="--kernel=/boot/vmlinuz-3.16-3-amd64
   --ramdisk=/boot/initrd.img-3.16-3-amd64"

xe vm-param-set uuid=da654fd0-74db-11e4-82f8-0800200c9a66 \
   PV-args="root=/dev/mapper/vg00-root ro quiet"

Future strategy

Once a comprehensive XCP solution appears in Debian again, hopefully it will be possible to migrate running VMs into the new platform without any downtime and retire the wheezy dom0.

Other upgrade/migration options exist and the choice will depend on various factors, such as whether or not you have built your own tools around the XCP API and whether you use a solution like OpenStack that depends on it. Debian's pkg-xen-devel mailing list may be a good place to discuss these options further.

28 November, 2014 02:18PM

Stephen Michael Kellat: Ruminating on Black Friday

Today is a special day for retailers in the United States of America. Black Friday is the day when retailers traditionally got their sales into the positive for the year (otherwise known as "the black" in accounting lingo). Many shoppers will be seeking great buys. Having worked as a salesman in consumer electronics retail, I will not be venturing out into the madness unless some unforeseen emergency arises.

Something important will be missing as shoppers swarm stores. There won't be installation media for any flavor of Ubuntu. There won't be any Ubuntu Phone or Ubuntu Touch devices on the shelves for purchase. There won't be anything audacious like a scaled down version of the Orange Box demonstrator of Metal As A Service that people could buy to build their own in-home "dark" infrastructure.

This isn't the year when we have something customer-facing in mass market retail. We need to get something out there soon. Our story is one to be shared with the average consumer.

Entering the Linux realm should not be a treasure hunt. We've moved a long way from the 20+ floppy disks for Slackware. We're not totally there yet for the consumer except in two very, very limited cases.

28 November, 2014 12:00AM

November 27, 2014

Whonix

Download Page Redesign for KVM, Qubes and More Needed

Whonix's current Download page only mentions the downloadable stable VirtualBox images. But Whonix can do far more. There is also physical isolation, and there is support for other virtualizers: testers-only support for KVM and QEMU, and experimental support for Qubes.

Due to Whonix's diverse user base, presenting all that information to (first time) visitors is a huge challenge. A dedicated wiki page about this topic has been created. I will turn that raw information into a more elaborate explanation in this post.

The post Download Page Redesign for KVM, Qubes and More Needed appeared first on Whonix.

27 November, 2014 06:07PM by Patrick Schleizer

Ubuntu developers

Randall Ross: Share the Story of Ubuntu in Your City

I am gathering stories about groups of Ubuntu enthusiasts, advocates, and contributors for an upcoming project.

If you are part of an active Ubuntu group that has formed in your city (or town) and that has regular face-to-face gatherings with the central theme of Ubuntu, I'd love to hear from you.

Examples of the types of things I'm interested in:

  • When did you first start meeting?
  • How often do you gather?
  • What do you typically do when together?
  • How many people are in your group?

... and anything else you'd like to share.

Either post to the comments or email me at randall at ubuntu dot com.

Thanks!

--
image by Tony Carr
https://www.flickr.com/photos/tonycarr/

27 November, 2014 05:58PM

LMDE

Just a few more days before 17.1

The ISO images for the Cinnamon and MATE editions of Linux Mint 17.1 “Rebecca” just passed QA testing and were approved for a stable release. This release should go public in the coming days.

If you are running Linux Mint 17.1 RC, you do not need to wait for the stable release, and you do not need to reinstall. You can simply use the Update Manager to install any level 1 update you haven’t installed already.

If you are running Linux Mint 17, you do not need to reinstall. Please wait a little while. We’ll provide updates to Linux Mint 17 and information in an upcoming announcement. Upgrading will be easy, fully supported and it will be an opt-in (i.e. you will have the choice to upgrade to 17.1 but also to keep 17 as it is).

Many thanks to all the artists and developers who participated in this release.

Many thanks also to all the people who participated in testing the RC. Your feedback helped us identify many bugs and fix the ones below:

  • All editions
    • When resizing the Software Sources window, it scales up and never scales down!
    • LibreOffice theme is missing some sidebar icons
    • Please bring back the “Mint-X-Dark” icon set; it's important for dark themes.
    • Help menu item launches linuxmint.com/documentation.php instead of mintdoc
    • Artwork: tomboy systray icon is black
    • mdmsetup Under the Welcome Message the text input area for Custom should align on the left hand side with the ‘Welcome’ text above it. Either ‘Welcome’ should be moved to the right or the text input area increased to the left to get the correct alignment.
    • mdm https://github.com/linuxmint/mdm/pull/127
    • mdm greeter (in preview mode): no icon on window, no easy way to exit, title of the window is very techy…
    • search engines – extra n in dictionnary.com?
    • update mint-mirrors
  • Cinnamon Edition
    • Expo trash icon isn’t sized properly
    • cinnamon-themes to use noto fonts
    • nemo-emblems: hide ubuntuone, dropbox, rabbitvcs icons
    • settings/backgrounds Gradient and Picture Aspect text not aligned on the left side.
    • settings/preferred apps Consider little bit more spacing under the Terminal dropdown to balance the window elements a bit.
    • Account details: Cinnamon 17, by default link: http://s26.postimg.org/armd84n21/Screenshot_from_2014_11_17_10_04_05.png, 17.1 by default: http://s26.postimg.org/gyxr8ait5/Screenshot_from_2014_11_17_10_35_12.png
    • Accessibility settings, typing, turning on-screen keyboard on or off does not appear to do anything.
    • Using the Mint-X Aqua theme, when you maximize windows they go behind the panel. The default theme doesn’t do this. I haven’t tried others.
    • I disabled the recently used files. After that the menu gets way too wide (it reaches nearly to the “17.1” on the wallpaper). There is a text in the menu that says “recently used files are disabled…” (I don’t know the correct words, I use the German language).
    • systray icons (reproducible with mintupdate) in the bottom panel do not scale with the size of the panel. Increasing the size of the panel does not increase the Update Manager icon (even when all the other icons increase in size… yes, I ticked that box :) ).
    • cinnamon-settings-users should not let root modify user’s passwords when their home is crypted
    • Regression in Nemo: Misplaced rename text entry https://github.com/linuxmint/nemo/issues/757
    • Regression in Nemo: When switching the sidebar view to tree view and back, some entries in the “Devices” category are displaced/displayed incorrectly. On mouse-over they display correctly again.
    • In Nemo, when I use the option “Open as Root”, ROOT Computer and ROOT Home icons appear on the desktop, and you can browse the whole computer as root from there without any password, even after closing the original Nemo.
    • Nemo: Zoom level changes over time on its own
    • Regression: DND minifreezes..
    • rel-notes: add keybinding migration script
    • Not possible to setup mobile broadband? https://github.com/linuxmint/Cinnamon/issues/3640
    • startup animation
    • session properties changes not always being applied
  • MATE Edition
    • Ctrl+Alt+Backspace doesn’t do anything.
    • Caja still uses a 3 sec delay at launch. With partial fixes in systemd and caja on runtime dir issues we could probably remove this or reduce it to a single sec.
    • Can’t make CCSM changes stick
    • mintdesktop: mate-wm-recovery doesn’t always work….
    • apturl-gtk apt://pkname doesn’t show in the window list
    • There are less than half the previously available keyboard shortcuts
    • Ctrl-Alt-t shortcut by default for the terminal
    • Workspace Switcher preferences do not include the ability to change the number of workspaces or change the names of workspaces.

27 November, 2014 04:09PM by Clem

Ubuntu developers

Ubuntu GNOME: HOWTO Run Ubuntu GNOME as a Rolling Release

Hi,

On the mailing list, I have sent this email.

Who is interested in using/running Ubuntu GNOME as a Rolling Release?!

And apparently, there are many interested in this idea.

To keep it simple and short, here is how to achieve that.

First of all, those who are testing Ubuntu GNOME Development Releases (Vivid Vervet for this cycle) are actually using Ubuntu GNOME as a Rolling Release, and perhaps they don’t even know it :)

  1. Go to Ubuntu GNOME Testing Wiki Page.
  2. Download the latest daily build.
  3. Install it on your machine/virtual machine.
  4. Done – you are now using a Rolling Release.

Once Vivid Vervet (15.04) is released, your system will become the stable release as soon as you update it on release day.

Then, how can you keep using Ubuntu GNOME as a Rolling Release?

That is very easy:

  1. There is a release called “devel“.
  2. If you put that in /etc/apt/sources.list instead of utopic/vivid etc., you will be kicked over to the new devel version a few days after it opens.
  3. No need to ever upgrade to the next version/release; just a dist-upgrade is enough.

These days, the devel series is reasonably stable: there is a lot of automated testing that makes sure everything is installable and passes tests before it propagates to the main archive. The odd issue still slips through, but it tends to get fixed quickly, and while we wouldn’t recommend it for a full production system, people who want the latest and greatest GNOME and don’t mind tinkering a bit would be fine on it.

So how do you edit /etc/apt/sources.list?

For this cycle, wherever you see “vivid”, replace it with “devel”. Once the next (W) cycle starts, you don’t need to do anything except wait until the new development cycle officially starts.
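
As a rough sketch, assuming a standard single /etc/apt/sources.list with no extra files in sources.list.d, something like this would do the switch and pull in the rolling updates:

# keep a backup of the original, just in case
sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
# point every "vivid" entry at "devel" instead
sudo sed -i 's/vivid/devel/g' /etc/apt/sources.list
# then, as noted above, a dist-upgrade is enough
sudo apt-get update && sudo apt-get dist-upgrade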

I hope that was fun, helpful and useful. If you have any Question/Feedback, please contact us.

Thank you!

Ali/amjjawad
Ubuntu GNOME Community Manager

27 November, 2014 03:04PM

Daniel Holbach: Long mailing list discussions

I’m very happy that the ubuntu-community-team mailing list is seeing lots of discussion right now. It shows how many people deeply care about the direction of Ubuntu’s community and have ideas for how to improve things.

Looking back through the discussion of the last weeks, I can’t help but notice a few issues we are running into – issues all too common on open source project mailing lists. Maybe you all have some ideas on how we could improve the discussion?

  • Bikeshedding
    The term bikeshedding has a negative connotation, but it’s a very natural phenomenon. Rouven, a good friend of mine, recently pointed out that the recent proposal to change the statutes of the association behind our coworking space (which took a long time to put together) received no comments on the internal mailing list, whereas a change of the coffee brand seemed to invite comments from everyone.
    It is quite natural for this to happen. In a bigger proposal it’s natural for us to comment on whatever is tangible. In a community of more technical people you will often see discussions about which technology to use, rather than answers which try to address all aspects of a proposal.
  • Idea overload
    Being a creative community can sometimes be a bit of a curse. You end up with different proposals plus additional ideas and nobody or few to actually implement them.
  • Huge proposals
    Sometimes you see a mail on a list which lists a huge load of different things. Without somebody who tracks where the discussion is going, summing things up, making lists of work items, etc. it will be very hard to convert a discussion into an actual project.
  • Derailing the conversation
    You’ve all seen this happen: you start the conversation with a specific problem or proposal and end up discussing something entirely different.

All of the above are nothing new, but in a part of our project where discussions tend to be quite general and where we have contributors from many different parts of the community some of the above are even more true.

Personally I feel that all of the above are fine problems to have. We are creative and we have ideas on how to improve things – that’s great. In my mind I always treated the ubuntu-community-team mailing list as a place to kick around ideas, to chat and to hang out and see what others are doing.

I care a lot about our community and I’d still like to figure out how we can avoid the risk of some of the better ideas falling through the cracks. What do you think would help?

Maybe a meeting, maybe every two weeks to pick up some of the recent discussion and see together as a group if we can convert some of the discussion into something which actually flies?

27 November, 2014 12:27PM

Tails

Who are you helping when donating to Tails?

Tails is being distributed free of charge because we strongly believe that free software is more secure by design. But also because we think that nobody should have to pay to be safe while using computers. Unfortunately, Tails cannot stay alive without money as developing Tails and maintaining our infrastructure has a cost.

We rely solely on donations from individuals and supporting organizations to keep Tails updated and always getting better. That's why we need your help!

If you find Tails useful, please consider donating money or contributing some of your time and skills to the project. Donations to Tails are tax-deductible both in the US and in Europe.

In October 2014, Tails was being used by more than 11 500 people daily. The profile of Tor and Tails users is very diverse. This diversity increases the anonymity provided by those tools for everyone by making it harder to target and to identify a specific type of user. From the various contacts that we have with organizations working on the ground, we know that Tails has been used by:

  • Journalists wanting to protect themselves or their sources.

    • Reporters Without Borders is an organization that promotes and defends freedom of information, freedom of the press, and has consultant status at the United Nations. RWB advertises the use of Tails for journalists to fight censorship and protect their sources. RWB uses Tails in their training sessions world-wide.

    • According to Laura Poitras, Glenn Greenwald, and Barton Gellman, Tails has been an essential tool for working on the Snowden documents and reporting on NSA spying. In a recent article for The Intercept, Micah Lee gives many details on how Tails helped them start working together.

    • Fahad Desmukh, a freelance journalist based in Pakistan who is also working for Bytes for All, always has a Tails USB handy: "I can use it whenever I may need to and I especially make sure to keep it with me when travelling. Pakistan really isn't the safest place for journalists so thanks to the Tails team for an amazing tool."

    • Jean-Marc Manach, a journalist based in France and specialized in online privacy said that "war reporters have to buy helmets, bullet-proof vests and rent armored cars; journalists using the Internet for their investigations are much luckier: to be as secured as war reporters, they only have to download Tails, burn it on a CD, install it on a SD card, and learn the basics of information and communication security, and it's free!"

  • Human-right defenders organizing in repressive contexts.

    • Tails has been used in combination with Martus, an information system used to report on human rights abuses, to allow Tibetan communities in exile to protect themselves from targeted malware attacks.
  • Democracy defenders facing dictatorships.

  • Citizens facing national emergencies.

    Over the last few years, we have noticed that the use of Tor and Tails systematically peaks when countries face national emergencies. Even if Tails represents a small share of global Tor usage, it is advertised by the Tor Project as the safest platform for protection from strong adversaries.

    • In Starting a revolution with technology, Slim Amamou, Tunisian blogger and former Secretary of State for Sport and Youth, explains that Tor "was vital to get information and share it" during the Tunisian revolution of 2011, because social media pages sharing information about the protests were "systematically censored so you could not access them without censorship circumvention tools".

    • Between January 25, the day the Egyptian Revolution of 2011 began, and January 27, 2011, the number of Tor users in Egypt was multiplied by at least 4. On January 27, the Egyptian government decided to halt Internet access across the country.

    • Between March 19 and March 31, 2014, the number of Tor users in Turkey was multiplied by 3 as a direct response to the growing Internet censorship in the country: on 20 March 2014, access to Twitter was blocked in Turkey, and on 27 March 2014, access to YouTube was blocked.

  • Domestic violence survivors escaping from their abusers.

    • The Tor Project has been working with organizations fighting against domestic violence such as NNEDV, Transition House, and Emerge to help survivors escape digital surveillance from their abuser and report on their situation. As domestic abuse goes digital, circumvention tools like Tor and Tails end up as one of the only options.

If you know of other great stories of Tails users, please share them with us!

27 November, 2014 11:34AM

Ubuntu developers

Eric Hammond: lambdash: AWS Lambda Shell Hack

I spent the weekend learning just enough JavaScript and nodejs to hack together a Lambda function that runs arbitrary shell commands in the AWS Lambda environment.

This hack allows you to explore the current file system, learn what versions of Perl and Python are available, and discover what packages might be installed.

If you’re interested in seeing the results, then read the following article, which uses this AWS Lambda shell hack to examine the inside of the AWS Lambda runtime environment.

Exploring The AWS Lambda Runtime Environment

Now on to the hack…

Setup

Define the basic parameters.

# Replace with your bucket name
bucket_name=lambdash.alestic.com

function=lambdash
lambda_execution_role_name=lambda-$function-execution
lambda_execution_access_policy_name=lambda-$function-execution-access
log_group_name=/aws/lambda/$function

IAM role that will be used by the Lambda function when it runs.

lambda_execution_role_arn=$(aws iam create-role \
  --role-name "$lambda_execution_role_name" \
  --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
          "Sid": "",
          "Effect": "Allow",
          "Principal": {
            "Service": "lambda.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
      }]
    }' \
  --output text \
  --query 'Role.Arn'
)
echo lambda_execution_role_arn=$lambda_execution_role_arn

What the Lambda function is allowed to do/access: log to CloudWatch and upload files to a specific S3 bucket/location.

aws iam put-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name" \
  --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
          "Effect": "Allow",
          "Action": [ "logs:*" ],
          "Resource": "arn:aws:logs:*:*:*"
      }, {
          "Effect": "Allow",
          "Action": [ "s3:PutObject" ],
          "Resource": "arn:aws:s3:::'$bucket_name'/'$function'/*"
      }]
  }'

Grab the current Lambda function JavaScript from the Alestic lambdash GitHub repository, create the ZIP file, and upload the new Lambda function.

wget -q -O$function.js \
  https://raw.githubusercontent.com/alestic/lambdash/master/lambdash.js
npm install async fs tmp
zip -r $function.zip $function.js node_modules
aws lambda upload-function \
  --function-name "$function" \
  --function-zip "$function.zip" \
  --runtime nodejs \
  --mode event \
  --handler "$function.handler" \
  --role "$lambda_execution_role_arn" \
  --timeout 60 \
  --memory-size 256

Invoke the Lambda function with the desired command and S3 output locations. Adjust the command and repeat as desired.

cat > $function-args.json <<EOM
{
    "command": "ls -laiR /",
    "bucket":  "$bucket_name",
    "stdout":  "$function/stdout.txt",
    "stderr":  "$function/stderr.txt"
}
EOM

aws lambda invoke-async \
  --function-name "$function" \
  --invoke-args "$function-args.json"

Look at the Lambda function log output in CloudWatch.

log_stream_names=$(aws logs describe-log-streams \
  --log-group-name "$log_group_name" \
  --output text \
  --query 'logStreams[*].logStreamName') &&
for log_stream_name in $log_stream_names; do
  aws logs get-log-events \
    --log-group-name "$log_group_name" \
    --log-stream-name "$log_stream_name" \
    --output text \
    --query 'events[*].message'
done | less

Get the command output.

aws s3 cp s3://$bucket_name/$function/stdout.txt .
aws s3 cp s3://$bucket_name/$function/stderr.txt .
less stdout.txt stderr.txt

Clean up

If you are done with this example, you can delete the created resources. Or, you can leave the Lambda function in place ready for future use. After all, you aren’t charged unless you use it.

aws s3 rm s3://$bucket_name/$function/stdout.txt
aws s3 rm s3://$bucket_name/$function/stderr.txt
aws lambda delete-function \
  --function-name "$function"
aws iam delete-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name"
aws iam delete-role \
  --role-name "$lambda_execution_role_name"
aws logs delete-log-group \
  --log-group-name "$log_group_name"

Requests

What command output would you like to see in the Lambda environment?

Original article: http://alestic.com/2014/11/aws-lambda-shell

27 November, 2014 02:33AM

Stephen Michael Kellat: Thanksgiving Note

Today is Thanksgiving in the United States of America. My Canadian relations may quibble over the month Thanksgiving actually takes place in but it is in fact happening today in the United States of America. I'm off enjoying the holiday but this blog post like most this week should have been automatically posted. If you need to reach me about community/governance matters, today is probably not a good day to try.

27 November, 2014 12:00AM

November 26, 2014

Randall Ross: Make Charm Debugging Easier, with DHX

Juju makes things really simple.

But, like you, I'm not content to stop at simple. I'm always looking for ways to make things even simpler so that I have more time to work on tough problems (e.g. spreading Ubuntu in my city.)

Today my colleague Corey Johns pointed me to DHX, a cool plugin for Juju that he developed. Even simpler!

I hope you find this useful.

Click to learn more!

And please remember to thank Corey for his excellent work.

26 November, 2014 09:09PM

Robert Ancell: Writing applications for Ubuntu Phone

I've just released my fifth application for the Ubuntu phone and I thought I'd write up a summary of my experiences developing for Ubuntu Phone. In summary, it's been pretty positive!

The good:
  • Installing the SDK is as easy as installing any application in Ubuntu.
  • Writing applications is fast. You can throw together something fairly nice in a few hours.
  • Click packages are so easy to build! It makes .deb packages feel like something from the 1990s. Which is appropriate, because they are from the 1990s.
  • The deployment process is incredibly fast. You create a click package from the SDK, upload it to the store in a web form and it lands on my (or anyone else's) phone in under a minute normally. A freaking minute! That's amazing!
The bad / ugly:
  • The Ubuntu SDK (aka Qt Creator) still reinforces why I don't like IDEs. While it's better than older IDEs, it's still overly complicated and cluttered with buttons. I only use it to dogfood the process, as the command line tools aren't great for building and deploying applications (yet).
  • QML is... OK. It has all the technology of a modern toolkit (e.g. transitions, it's declarative, you can develop using a dynamic language) which is good. But it feels like it was put together in a rush. It's often not clear what the best way is to solve a problem and some components seem to be missing useful functionality (e.g. containers).
  • Javascript is great for small applications but quickly becomes unwieldy for large ones. The default other option is to use C++ which is just an enormous step backwards into complexity. I haven't yet tried Go QML but hopefully that will be a better combination.
  • The Ubuntu store interface is very basic. There's no way to list apps by ranking, you can't see new applications, there's no web interface. I'm sure it will get better soon but it's currently hard to find what's available (which is a big part of why I'm writing this blog post).
Here's what I've made; all these applications are released under the GPL 3 license and available on Launchpad. You can get the source for any of them by typing "bzr branch lp:euchre" from an Ubuntu machine.

Euchre


My first Ubuntu phone application. It's a classic four-player trick-taking card game with a basic AI. I learnt a lot about animation in QML developing this. It's all written in Javascript, which is really pushing the limits of maintainability for an application like this (1833 lines of QML). While it is the oldest, it is also the least downloaded of my applications, I think because Euchre is a bit of a niche game and I don't have any in-game help.

Animal Farm


The inspiration for this was my daughter enjoying applications like this on Android. You touch the animals and they shake and meow / baa etc. It's trivially small (157 lines of QML).

Dotty


Dotty is a clone of the very successful iOS / Android game Dots. I thought I'd see if copying a popular game would transfer into success in Ubuntu and it has. This is my most popular game with 362 users currently compared to 160 for Animal Farm which is the next most popular. I learnt how to do dynamic components (i.e. the lines and the dots falling down) with this. A good size at 605 lines of QML.

Five Letters


Like Dotty I was looking for the type of games that are already popular on existing platforms. Word games are quite successful and I was thinking of games like 7 little words when designing this. The "making words from five letters" is a common newspaper game. I spent a lot of time trimming the dictionary of possible games to remove anything offensive or obscure so it should be reasonably possible to solve all the puzzles (there's about 1300 of them). 406 lines of QML.

Pairs


My newest game! Released last night. Like Animal Farm I was thinking of something my children might like to play. You turn over the cards two at a time and try and find the matching colours. The colours I've used actually make it quite difficult and fun to play as an adult. 409 lines of QML.

26 November, 2014 09:09PM by Robert Ancell (noreply@blogger.com)

Aurélien Gâteau: Colorpick

Recently I wrote about my so-called "lightweight project management policy". I am going to start slowly and present a small side-project: Colorpick.

Colorpick is a color picker and contrast checker. I originally wrote it to help me check and fix the background and foreground colors of the Oxygen palette to ensure text was readable. Since then I have been using it to steal colors from various places and as a magnifier to inspect tiny details.

The main window looks like this:

Main Window

Admittedly, it's a bit ugly, especially the RGB gradients (KGradientSelector and the Oxygen style do not play well together). Nevertheless, it does the job, which is what side-projects are all about.

Here is an annotated image of the window:

Annotated Window

  1. The current color: clicking it brings the standard KDE color dialog. The main reason it's here is because it can be dragged: drag the color and drop on any application which supports color.

  2. The color in hexadecimal.

  3. Luminance buttons: click them to adjust the luminance of the color.

  4. Color picker: brings the magnifier to pick a color from the screen. One nice thing about this magnifier is that it can be controlled from the keyboard: roughly move the mouse to the area where you want to pick a color then position the picker precisely using the arrow keys. When the position is OK: press Enter to pick the color. Pressing Escape or right-clicking closes the magnifier.

    Magnifier

    Picking the color of the 1-pixel door knob from the home icon. The little inverted-color square in the center shows which pixel is being picked.

  5. Copy button: clicking this button brings a menu with the color expressed in different formats. Selecting one entry copies the color to the clipboard, ready to be pasted.

    Copy menu

  6. RGB sliders: not much to say here. Drag the cursors or enter values, your choice.

  7. Contrast test text: shows some demo text using the selected background and foreground colors, together with the current contrast value. It lets you know if your contrast is good enough according to http://www.w3.org/TR/WCAG20/#visual-audio-contrast.

Interested? The project is on GitHub at https://github.com/agateau/colorpick. Get it with git clone https://github.com/agateau/colorpick then follow the instructions from the INSTALL.md file.
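
If it follows the usual CMake workflow for small KDE projects (an assumption on my part, so treat INSTALL.md as the authoritative source), building would look roughly like this:

git clone https://github.com/agateau/colorpick
cd colorpick
mkdir build && cd build
cmake ..
make
sudo make install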

26 November, 2014 05:36PM

Ubuntu App Developer Blog: You have a working scope? Here is what to do before pushing it to the store…

Now that your scope is in a working state, it’s time to get it ready for publication. In this tutorial you will learn how to make your scope look good when the user is browsing the store or the list of scopes installed on the phone.

In the next steps, we are going to prepare a few graphics, edit the <scope>.ini file located in the data directory of your project and package the scope for the store.
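
As a rough idea of the packaging step: once the .ini file and graphics are in place, the click command-line tool can build the package you upload to the store. The directory name below is hypothetical, and the tutorial linked next describes the SDK-based flow in full:

# build a .click package from the directory containing the scope and its manifest
click build path/to/your-scope-install-dir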

Read…

26 November, 2014 04:30PM

Randall Ross: POWER Up!

A while back, as part of my new role, I began looking for opportunities to:

  1. Challenge the status quo, and,
  2. Connect people together that want to solve big problems.

(Luckily, the two are closely related.)

Recently, I was introduced to some fine folks at SiteOx in Franklin, TN (that's just outside of Nashville) who happen to have some really fast POWER8 systems that provide infrastructure-as-a-service (IaaS).

I mentioned that previously unknown tidbit to some of my colleagues (who are awesome Juju Charmers) to see if/how the service could be used to speed Juju Charm development.

As it turns out, it can! In case you missed it, Matt Bruzek, of Juju Charmer fame, figured it all out and then wrote a concise guide to do just that. Check it out here, and then...

Click the button to feel the POWER!

Thanks Matt, and thanks SiteOx.

26 November, 2014 12:03AM

Stephen Michael Kellat: Pondering Contingencies

Preparedness is an odd topic. As people in the United States might have recalled from last week, snow abounded in certain parts of the country. Although not located in the New York State community of Buffalo, I am located down the Lake Erie shoreline in Ashtabula. I too am seasonally afflicted with Lake Effect Snow Storms.

Heck, I have even seen Thunder Snow!

Following the major snow, I got to see "High Wind Warning". That was not fun as it did lead to a blackout. The various UPS units around the house started screaming. Once that happened I had multiple systems to shut down. The Xubuntu meeting log this week even shows me shutting down things while departing mid-way. As you might imagine, overhead electrical lines do not play nicely with 50 mile per hour wind gusts.

When using a computer, you never truly have an ideal environment for the bare metal to operate in. Although contemporary life leaves the impression that electricity and broadband service should be constant let alone stable, bad things do happen. I already have multiple UPS units scattered around as it is.

Donald Rumsfeld, the former US Secretary of Defense, had a saying that fits:

As you know, you go to war with the army you have, not the army you might want or wish to have at a later time.

I live in what is termed by our census officials a "Micropolitan Statistical Area" compared to a "Metropolitan Statistical Area" so I know it is small. I know our infrastructure is not the greatest. Planning ahead means being ready to be without electricity for an extended period of time here.

While the Buffalo Bills football team had to move their home game to Detroit due to their stadium filling with snow, imagine the flooding aftermath that may happen when that snow melts. Extreme cases like that are hard to plan for but at least the game is going to happen somewhere. What contingencies have you at least thought about working around?

26 November, 2014 12:00AM

November 25, 2014

Whonix

System Requirements

I updated the system requirements page in the Whonix Wiki. I also opened a discussion on the developer board about what the default RAM setting should be for the Workstation VirtualBox image.

The post System Requirements appeared first on Whonix.

25 November, 2014 10:56PM by Jason J. Ayala P.

Ubuntu developers

Eric Hammond: AWS Lambda Walkthrough Command Line Companion

The AWS Lambda Walkthrough 2 uses AWS Lambda to automatically resize images added to one bucket, placing the resulting thumbnails in another bucket. The walkthrough documentation has a mix of aws-cli commands, instructions for hand editing files, and steps requiring the AWS console.

For my personal testing, I converted all of these to command line instructions that can simply be copied and pasted, making them more suitable for adapting into scripts and for eventual automation. I share the results here in case others might find this a faster way to get started with Lambda.

These instructions assume that you have already set up and are using an IAM user / aws-cli profile with admin credentials.
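
If you have not set one up yet, something like the following creates a named profile and makes it the default for the rest of the commands in this shell (the profile name is arbitrary):

# enter the admin user's access key, secret key and default region when prompted
aws configure --profile lambda-walkthrough
# use this profile for all subsequent aws-cli commands in this shell
export AWS_DEFAULT_PROFILE=lambda-walkthrough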

The following is intended as a companion to the Amazon walkthrough documentation, simplifying the execution steps for command line lovers. Read the AWS documentation itself for more details explaining the walkthrough.

Set up

Set up environment variables describing the associated resources:

# Change to your own unique S3 bucket name:
source_bucket=alestic-lambda-example

# Do not change this. Walkthrough code assumes this name
target_bucket=${source_bucket}resized

function=CreateThumbnail
lambda_execution_role_name=lambda-$function-execution
lambda_execution_access_policy_name=lambda-$function-execution-access
lambda_invocation_role_name=lambda-$function-invocation
lambda_invocation_access_policy_name=lambda-$function-invocation-access
log_group_name=/aws/lambda/$function

Install some required software:

sudo apt-get install nodejs nodejs-legacy npm

Step 1.1: Create Buckets and Upload a Sample Object (walkthrough)

Create the buckets:

aws s3 mb s3://$source_bucket
aws s3 mb s3://$target_bucket

Upload a sample photo:

# by Hatalmas: https://www.flickr.com/photos/hatalmas/6094281702
wget -q -OHappyFace.jpg \
  https://c3.staticflickr.com/7/6209/6094281702_d4ac7290d3_b.jpg

aws s3 cp HappyFace.jpg s3://$source_bucket/

Step 2.1: Create a Lambda Function Deployment Package (walkthrough)

Create the Lambda function nodejs code:

# JavaScript code as listed in walkthrough
wget -q -O $function.js \
  http://run.alestic.com/lambda/aws-examples/CreateThumbnail.js

Install packages needed by the Lambda function code. Note that this is done under the local directory:

npm install async gm # aws-sdk is not needed

Put all of the required code into a ZIP file, ready for uploading:

zip -r $function.zip $function.js node_modules

Step 2.2: Create an IAM Role for AWS Lambda (walkthrough)

IAM role that will be used by the Lambda function when it runs.

lambda_execution_role_arn=$(aws iam create-role \
  --role-name "$lambda_execution_role_name" \
  --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "",
          "Effect": "Allow",
          "Principal": {
            "Service": "lambda.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    }' \
  --output text \
  --query 'Role.Arn'
)
echo lambda_execution_role_arn=$lambda_execution_role_arn

What the Lambda function is allowed to do/access. This is slightly tighter than the generic role policy created with the IAM console:

aws iam put-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name" \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": [
          "logs:*"
        ],
        "Resource": "arn:aws:logs:*:*:*"
      },
      {
        "Effect": "Allow",
        "Action": [
          "s3:GetObject"
        ],
        "Resource": "arn:aws:s3:::'$source_bucket'/*"
      },
      {
        "Effect": "Allow",
        "Action": [
          "s3:PutObject"
        ],
        "Resource": "arn:aws:s3:::'$target_bucket'/*"
      }
    ]
  }'

Step 2.3: Upload the Deployment Package and Invoke it Manually (walkthrough)

Upload the Lambda function, specifying the IAM role it should use and other attributes:

# Timeout increased from walkthrough based on experience
aws lambda upload-function \
  --function-name "$function" \
  --function-zip "$function.zip" \
  --role "$lambda_execution_role_arn" \
  --mode event \
  --handler "$function.handler" \
  --timeout 30 \
  --runtime nodejs

Create fake S3 event data to pass to the Lambda function. The key here is the source S3 bucket and key:

cat > $function-data.json <<EOM
{  
   "Records":[  
      {  
         "eventVersion":"2.0",
         "eventSource":"aws:s3",
         "awsRegion":"us-east-1",
         "eventTime":"1970-01-01T00:00:00.000Z",
         "eventName":"ObjectCreated:Put",
         "userIdentity":{  
            "principalId":"AIDAJDPLRKLG7UEXAMPLE"
         },
         "requestParameters":{  
            "sourceIPAddress":"127.0.0.1"
         },
         "responseElements":{  
            "x-amz-request-id":"C3D13FE58DE4C810",
            "x-amz-id-2":"FMyUVURIY8/IgAtTv8xRjskZQpcIZ9KG4V5Wp6S7S/JRWeUWerMUE5JgHvANOjpD"
         },
         "s3":{  
            "s3SchemaVersion":"1.0",
            "configurationId":"testConfigRule",
            "bucket":{  
               "name":"$source_bucket",
               "ownerIdentity":{  
                  "principalId":"A3NL1KOZZKExample"
               },
               "arn":"arn:aws:s3:::$source_bucket"
            },
            "object":{  
               "key":"HappyFace.jpg",
               "size":1024,
               "eTag":"d41d8cd98f00b204e9800998ecf8427e",
               "versionId":"096fKKXTRTtl3on89fVO.nfljtsv6qko"
            }
         }
      }
   ]
}
EOM

Invoke the Lambda function, passing in the fake S3 event data:

aws lambda invoke-async \
  --function-name "$function" \
  --invoke-args "$function-data.json"

Look in the target bucket for the converted image. It could take a while to show up since the Lambda function is running asynchronously:

aws s3 ls s3://$target_bucket

Look at the Lambda function log output in CloudWatch:

aws logs describe-log-groups \
  --output text \
  --query 'logGroups[*].[logGroupName]'

log_stream_names=$(aws logs describe-log-streams \
  --log-group-name "$log_group_name" \
  --output text \
  --query 'logStreams[*].logStreamName')
echo log_stream_names="'$log_stream_names'"
for log_stream_name in $log_stream_names; do
  aws logs get-log-events \
    --log-group-name "$log_group_name" \
    --log-stream-name "$log_stream_name" \
    --output text \
    --query 'events[*].message'
done | less

Step 3.1: Create an IAM Role for Amazon S3 (walkthrough)

This role may be assumed by S3.

lambda_invocation_role_arn=$(aws iam create-role \
  --role-name "$lambda_invocation_role_name" \
  --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "",
          "Effect": "Allow",
          "Principal": {
            "Service": "s3.amazonaws.com"
          },
          "Action": "sts:AssumeRole",
          "Condition": {
            "StringLike": {
              "sts:ExternalId": "arn:aws:s3:::*"
            }
          }
        }
      ]
    }' \
  --output text \
  --query 'Role.Arn'
)
echo lambda_invocation_role_arn=$lambda_invocation_role_arn

S3 may invoke the Lambda function.

aws iam put-role-policy \
  --role-name "$lambda_invocation_role_name" \
  --policy-name "$lambda_invocation_access_policy_name" \
  --policy-document '{
     "Version": "2012-10-17",
     "Statement": [
       {
         "Effect": "Allow",
         "Action": [
           "lambda:InvokeFunction"
         ],
         "Resource": [
           "*"
         ]
       }
     ]
   }'

Step 3.2: Configure a Notification on the Bucket (walkthrough)

Get the Lambda function ARN:

lambda_function_arn=$(aws lambda get-function-configuration \
  --function-name "$function" \
  --output text \
  --query 'FunctionARN'
)
echo lambda_function_arn=$lambda_function_arn

Tell the S3 bucket to invoke the Lambda function when new objects are created (or overwritten):

aws s3api put-bucket-notification \
  --bucket "$source_bucket" \
  --notification-configuration '{
    "CloudFunctionConfiguration": {
      "CloudFunction": "'$lambda_function_arn'",
      "InvocationRole": "'$lambda_invocation_role_arn'",
      "Event": "s3:ObjectCreated:*"
    }
  }'

Step 3.3: Test the Setup (walkthrough)

Copy your own jpg and png files into the source bucket:

myimages=...
aws s3 cp $myimages s3://$source_bucket/

Look for the resized images in the target bucket:

aws s3 ls s3://$target_bucket

Check out the environment

These handy commands let you review the related resources in your account:

aws lambda list-functions \
  --output text \
  --query 'Functions[*].[FunctionName]'

aws lambda get-function \
  --function-name "$function"

aws iam list-roles \
  --output text \
  --query 'Roles[*].[RoleName]'

aws iam get-role \
  --role-name "$lambda_execution_role_name" \
  --output json \
  --query 'Role.AssumeRolePolicyDocument.Statement'

aws iam list-role-policies  \
  --role-name "$lambda_execution_role_name" \
  --output text \
  --query 'PolicyNames[*]'

aws iam get-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name" \
  --output json \
  --query 'PolicyDocument'

aws iam get-role \
  --role-name "$lambda_invocation_role_name" \
  --output json \
  --query 'Role.AssumeRolePolicyDocument.Statement'

aws iam list-role-policies  \
  --role-name "$lambda_invocation_role_name" \
  --output text \
  --query 'PolicyNames[*]'

aws iam get-role-policy \
  --role-name "$lambda_invocation_role_name" \
  --policy-name "$lambda_invocation_access_policy_name" \
  --output json \
  --query 'PolicyDocument'

aws s3api get-bucket-notification \
  --bucket "$source_bucket"

Clean up

If you are done with the walkthrough, you can delete the created resources:

aws s3 rm s3://$target_bucket/resized-HappyFace.jpg
aws s3 rm s3://$source_bucket/HappyFace.jpg
aws s3 rb s3://$target_bucket/
aws s3 rb s3://$source_bucket/

aws lambda delete-function \
  --function-name "$function"

aws iam delete-role-policy \
  --role-name "$lambda_execution_role_name" \
  --policy-name "$lambda_execution_access_policy_name"

aws iam delete-role \
  --role-name "$lambda_execution_role_name"

aws iam delete-role-policy \
  --role-name "$lambda_invocation_role_name" \
  --policy-name "$lambda_invocation_access_policy_name"

aws iam delete-role \
  --role-name "$lambda_invocation_role_name"

log_stream_names=$(aws logs describe-log-streams \
  --log-group-name "$log_group_name" \
  --output text \
  --query 'logStreams[*].logStreamName') &&
for log_stream_name in $log_stream_names; do
  echo "deleting log-stream $log_stream_name"
  aws logs delete-log-stream \
    --log-group-name "$log_group_name" \
    --log-stream-name "$log_stream_name"
done

aws logs delete-log-group \
  --log-group-name "$log_group_name"

If you try these instructions, please let me know in the comments where you had trouble or experienced errors.

Original article: http://alestic.com/2014/11/aws-lambda-cli

25 November, 2014 09:36PM

Xanadu developers

Games on GNU/Linux

A little while ago a coworker told me that he didn't use Linux because it had no good games, so to show him he was wrong I set out to put together a list (split in two) of games that can be installed on Linux.

I have included a link to each game's website so you can check the hardware and software requirements needed to run it.

Installable via the apt package manager

Installable via download

If you know of another game that isn't on this list, leave it in the comments and I'll add it as soon as I can. I hope you like them. Cheers…


Tagged: juegos, linux

25 November, 2014 07:27PM by sinfallas

Ubuntu developers

Chris Wayne: Galileo updated in PPA

Sorry I had neglected this for a bit, but the latest version of Galileo is now available in my PPA. It has also been uploaded for 12.04, 14.04, 14.10, and vivid (15.04). Please test, and if you find any issues, shoot me an email at cwayne@ubuntu.com.

25 November, 2014 07:01PM

Michael Hall: Ubuntu Incubator

The Ubuntu Core Apps project has proven that the Ubuntu community is not only capable of building fantastic software, but also capable of meeting the same standards, deadlines and requirements that are expected from projects developed by employees. One of the things that I think made Core Apps so successful was the project management support that they all received from Alan Pope.

Project management is common, even expected, for software developed commercially, but it’s just as often missing from community projects. It’s time to change that. I’m kicking off a new personal[1] project, I’m calling it the Ubuntu Incubator.

The purpose of the Incubator is to help community projects bootstrap themselves, obtain the resources they need to run their project, and put together a solid plan that will set them on a successful, sustainable path.

To that end I’m going to devote one month to a single project at a time. I will meet with the project members regularly (weekly or every-other week), help define a scope for their project, create a spec, define work items and assign them to milestones. I will help them get resources from other parts of the community and Canonical when they need them, promote their work and assist in recruiting contributors. All of the important things that a project needs, other than direct contributions to the final product.

I’m intentionally keeping the scope of my involvement very focused and brief. I don’t want to take over anybody’s project or be a co-founder. I will take on only one project at a time, so that project gets all of my attention during their incubation period. The incubation period itself is very short, just one month, so that I will focus on getting them setup, not on running them.  Once I finish with one project, I will move on to the next[2].

How will I choose which project to incubate? Since it’s my time, it’ll be my choice, but the most important factor will be whether or not a project is ready to be incubated. “Ready” means they are more than just an idea: they are both possible to accomplish and feasible to accomplish with the person or people already involved, the implementation details have been mostly figured out, and they just need help getting the ball rolling. “Ready” also means it’s not an existing project looking for a boost; while we need to support those projects too, that’s not what the Incubator is for.

So, if you have a project that’s ready to go, but you need a little help taking that first step, you can let me know by adding your project’s information to this etherpad doc[3]. I’ll review each one and let you know if I think it’s ready, needs to be defined a little bit more, or not a good candidate. Then each month I’ll pick one and reach out to them to get started.

Now, this part is important: don’t wait for me! I want to speed up community innovation, not slow it down, so even if I add your project to the “Ready” queue, keep on doing what you would do otherwise, because I have no idea when (or if) I will be able to get to yours. Also, if there are any other community leaders with project management experience who have the time and desire to help incubate one of these projects, go ahead and claim it and reach out to that team.

[1] While this complements my regular job, it’s not something I’ve been asked to do by Canonical, and to be honest I have enough Canonical-defined tasks to consume my working hours. This is me with just my community hat on, and I’m inclined to keep it that way.

[2] I’m not going to forget about projects after their month is up, but you get 100% of the time I spend on incubation during your month, after that my time will be devoted to somebody else.

[3] I’m using Etherpad to keep the process as lightweight as possible, if we need something better in the future we’ll adopt it then.

25 November, 2014 06:47PM

Ubuntu Server blog: Server team meeting minutes: 2014-11-25

Agenda

  • Review ACTION points from previous meeting
    • None
  • V Development
  • Server & Cloud Bugs (caribou)
  • Weekly Updates & Questions for the QA Team (psivaa)
  • Weekly Updates & Questions for the Kernel Team (smb, sforshee, arges)
  • Ubuntu Server Team Events
  • Open Discussion
  • Announce next meeting date, time and chair

Minutes

Meeting Actions
  • matsubara to chase someone that can update release bugs report: http://reqorts.qa.ubuntu.com/reports/rls-mgr/rls-v-tracking-bug-tasks.html#ubuntu-server
Weekly Updates & Questions for the Kernel Team (smb, sforshee, arges)
  • smb reports: “I did a few stable uploads for Xen in Utopic and Trusty. Though zul, you may want to hold back doing cloud-archive versions. There is more to come. ;) Also from some email report on xen-devel there are a few things missing to make openstack and xen a better experience (bug #1396068 and bug #1394327 at least). I am working on getting things applied and SRUed.”
Agree on next meeting date and time

Next meeting will be on Tuesday, Dec 2nd at 16:00 UTC in #ubuntu-meeting. kickinz1 will chair.

IRC Log
http://ubottu.com/meetingology/logs/ubuntu-meeting/2014/ubuntu-meeting.2014-11-25-16.01.html

25 November, 2014 05:55PM

Scott Kitterman: On being excellent to each other

There has been a lot of discussion recently where there is strong disagreement, even about how to discuss the disagreement. Here’s a few thoughts on the matter.

The thing I personally find the most annoying: when someone thinks what someone else says is inappropriate and says so, it seems like the inevitable response is to scream censorship. When people do that, I’m pretty sure they don’t know what the word censorship actually means. Debian/Ubuntu/Insert Project Name Here resources are not public spaces, and no government is telling people what they can and can’t say.

When you engage in speech and people respond to that speech, even if you don’t feel all warm and fuzzy after reading the response, it’s not censorship. It’s called discussion.

When someone calls out speech that they think is inappropriate, the proper response is not to blame a Code of Conduct or some other set of rules. Projects that have a code also have a process for dealing with claims that the code has been violated. Unless someone invokes that process (which almost never happens), the code is irrelevant. What’s relevant is that someone is having a problem with what or how you are saying something and is in some way hurt by it.

Let’s focus on that. The rules are irrelevant; what matters is working together in a collegial way. I really don’t think project members actively want other project members to feel bad/unsafe, but it’s hard to get outside one’s own defensive reaction to being called out. So please pay less attention to how you’re feeling about things and try to see things from the other side. If we can all do a bit more of that, then things can be better for all of us.

Final note: If you’ve gotten this far and thought “Oh, that other person is doing this to me”, I have news for you – it’s not just them.

25 November, 2014 04:47PM

Matthew Helmke: Ubuntu Books I Wrote in 2014

Just in time for the end of the year holidays…

I have a new edition of Ubuntu Unleashed 2015 Edition (affiliate link), now available for preorder. This book is intended for intermediate to advanced users.

I also failed to mention on this blog the newest edition of The Official Ubuntu Book (another affiliate link), now in its eighth edition. The book continues to serve as a quality introduction for newcomers to Ubuntu, both the software and the community that surrounds it.

25 November, 2014 04:27PM

SolydXK

Important SolydXK news

As of today I’m taking a step back in the production of SolydXK. My family and I have come to the conclusion that some things are going to have to change. This means I cannot continue with SolydXK in the current form.

To answer some of the questions I received, I created a small FAQ:

  • Is SolydXK going to disappear?
    – No, I will continue maintaining SolydK64 and SolydX64, based on the upcoming Debian stable, as well as the SolydXK repository.
  • Is there going to be a 32-bit version?
    – I am not going to continue to maintain the 32-bit versions. However, I’d love to see them live on as pure community editions. Frank (grizzler) plans to give maintaining them a try, but can’t guarantee this will work out, so any assistance from the community would be highly appreciated.
  • Are there going to be “Enthusiast Editions”, based on Debian testing?
    – I won’t be able to maintain those either, but again, Frank will see what he can do about it. Any help from the community would be very welcome here as well.
  • Is the Back Office going to be maintained?
    – Unfortunately not. Although I love this edition, it takes a lot of time to create the packages and test them.
  • What about the repositories?
    – I will continue maintaining the SolydXK repository. The Debian and Security repositories will point directly to Debian. The packages of the Community repository will be transferred to the SolydXK repository.

    For the Home Editions the sources.list would look like this:

      deb http://repository.solydxk.com/ solydxk main upstream import
      deb http://ftp.debian.org/debian jessie main contrib non-free
      deb http://security.debian.org/ jessie/updates main contrib non-free
      deb http://ftp.debian.org/debian/ jessie-backports main contrib non-free

    The sources.list for the Business Editions would look like this:

      deb http://repository.solydxk.com/ solydxk main upstream import
      deb http://ftp.debian.org/debian wheezy main contrib non-free
      deb http://security.debian.org/ wheezy/updates main contrib non-free
      deb http://ftp.debian.org/debian/ wheezy-backports main contrib non-free

    The repository.solydxk.com repository is still in development. So, please don’t use it on your working machine just yet.

  • Will my current install of SolydXK (and SoldyXK BE) roll into the new SolydXK when the new SolydXK based on Debian Jessie is released?
    – I have prepared a new version of the solydxk-system package that should rewrite the sources.list automatically. If you’re using the Home Edition, your sources.list will point to Jessie, and if you’re using the Business Editions, it will point to Wheezy.
  • Will there be any more Update Packs?
    – As Jessie is now preparing for stable there will be no need for any more Update Packs. Updates will come as regular updates or security updates.
  • What will be the release cycle of the ISOs?
    – Unfortunately, I cannot make any promises on frequency of releases (software or ISOs).
  • Will the forum still exist?
    – Yes, both the site and the forum will stay. Perhaps some minor changes in the structure, but nothing more.
  • When will this all happen?
    – I’ll start the transition on 31 January 2015.

Would you like to help?

  • Maintain one or more ISOs.
    – I’ve developed solydxk-constructor for that purpose. It comes with a help function to get you started and you can post your questions here: http://forums.solydxk.com/viewtopic.php?f=9&t=774
  • If you find some software needs TLC.
    – If you know Python you can help. All code is available at Github: https://github.com/SolydXK
    – If you find that one of those applications needs to be translated into your language, you will find a “po” directory in each Github project. Choose the right .po file and start translating with poedit. You can either create a pull request or send me the translated .po file directly.
    Jocelyn (Ane champenois) will try to simplify the translators’ work by using an online translation platform. More information about that will follow if we reach that point.

I also would like to thank all the people who have helped me during the past two years. Without you I wouldn’t have come this far. I would especially like to thank zerozero who has been with this project since the early start. Thank you for your advice and keeping my feet on the ground when it was most needed.

Kind regards,
Arjen Balfoort

25 November, 2014 03:18PM by Schoelje

hackergotchi for Ubuntu developers

Ubuntu developers

Pasi Lallinaho: Preparing responsive design for Xubuntu

As some of you might know, I was appointed as the Xubuntu website lead after taking a 6-month break from leadership in Xubuntu.

Since this position was passed on from Lyz (who is, by the way, doing a fantastic job as our marketing lead!), I wouldn’t have wanted to be nominated unless I could actually bring something to the table. Thus, the xubuntu-v-website blueprint lists all the new (and old) projects that I am driving to finish during the Vivid cycle.

Now, please let me briefly introduce you to the field which I’m currently improving…

Responsive design!

In the past few days, I have been preparing responsive stylesheets for the Xubuntu website. While Xubuntu isn’t exactly targeted at devices that would themselves have a great need for a fully responsive design, we do think it is important to be available for users browsing with those devices as well.

Currently, we have four stylesheets in addition to the regular ones. Two of these are actually useful even for people without small-resolution screens; they improve the user experience for situations when the browser viewport is simply limited.

In the first phase of building the responsive design, I have had three main goals. Maybe the most important aspect is to avoid horizontal scrolling. Accomplishing this already improves the browsing experience a lot especially on small screens. The two other goals are to make some of the typography adjust better to small resolutions while keeping it readable and keeping links, especially internal navigation, easily accessible by expanding their clickable area.

At this point, I’ve pretty much accomplished the first goal, but still have work to do with the other two. There are also some other visual aspects that I would like to improve before going public, but ultimately, they aren’t release-critical changes and can wait for later.

For now, the new stylesheets are only used on the staging site. Once we release them to the wider public, or if we feel we need some broader beta testing, we will reach out to people with mobile (and other small-resolution) devices on the Xubuntu development mailing list for testing.

If you can’t wait to have a preview and are willing to help testing, show up on our development IRC channel #xubuntu-devel on Freenode and introduce yourself. I’ll make sure to get a hold of you sooner than later.

What about Xubuntu documentation?

The Xubuntu documentation main branch already has responsive design stylesheets applied. This change has yet to make it into any release (including the development version), but it will land at least in Vivid soon enough.

Once I have prepared the responsive stylesheets for the Xubuntu online documentation frontpage, I will coordinate an effort to get the online documentation to use the responsive design as soon as possible. Expect some email about this on the development mailing list as well.

While we are at it… Paperspace

On a similar note… Last night I released the responsive design that I had been preparing for quite some time for Paperspace, or in other words, the WordPress theme for this blog (and the other blogs in this domain). That said, if you see anything that looks off in any browser resolution below 1200 pixels wide, be in touch. Thank you!

25 November, 2014 02:49PM

Dustin Kirkland: Try These 7 Tips in Your Next Blog Post


In a presentation to my colleagues last week, I shared a few tips I've learned over the past 8 years of maintaining a reasonably active and well-read blog.  I'm delighted to share these with you now!

1. Keep it short and sweet


Too often, we spend hours or days working on a blog post, trying to create an epic tome.  I have dozens of draft posts I'll never finish, as they're just too ambitious, and I should really break them down into shorter, more manageable articles.

Above, you can see Abraham Lincoln's Gettysburg Address, from November 19, 1863.  It's merely 3 paragraphs, 10 sentences, and less than 300 words.  And yet it's one of the most powerful messages ever delivered in American history.  Lincoln wrote it himself on the train to Gettysburg, and delivered it as a speech in less than 2 minutes.

2. Use memorable imagery


Particularly, you need one striking image at the top of your post.  This is what most automatic syndicates or social media platforms will pick up and share, and will make the first impression on phones and tablets.

3. Pen a catchy, pithy title


More people will see or read your title than the post itself.  It's sort of like the chorus to that song you know, but you don't know the rest of the lyrics.  A good title attracts readers and invites re-shares.

4. Publish midweek


This is probably more applicable for professional, rather than hobbyist, topics, but the data I have on my blog (1.7 million unique page views over 8 years), is that the majority of traffic lands on Tuesday, Wednesday, and Thursday.  While I'm writing this very post on a rainy Saturday morning over a cup of coffee, I've scheduled it to publish at 8:17am (US Central time) on the following Tuesday morning.

5. Share to your social media circles


My posts are generally professional in nature, so I tend to share them on G+, Twitter, and LinkedIn.  Facebook is really more of a family-only thing for me, but you might choose to share your posts there too.  With the lamentable death of the Google Reader a few years ago, it's more important than ever to share links to posts on your social media platforms.

6. Hope for syndication, but never expect it

So this is the one "tip" that's really out of your control.  If you ever wake up one morning to an overflowing inbox, congratulations -- your post just went "viral".  Unfortunately, this either "happens", or it "doesn't".  In fact, it almost always "doesn't" for most of us.

7. Engage with comments only when it makes sense


If you choose to use a blog platform that allows comments (and I do recommend you do), then be a little careful about when and how to engage in the comments.  You can easily find yourself overwhelmed with vitriol and controversy.  You might get a pat on the back or two.  More likely, though, you'll end up under a bridge getting pounded by a troll.  Rather than waste your time fighting a silly battle with someone who'll never admit defeat, start writing your next post.  I ignore trolls entirely.

A Case Study

As a case study, I'll take as an example the most successful post I've written: Fingerprints are Usernames, Not Passwords, with nearly a million unique page views.

  1. The entire post is short and sweet, weighing in at under 500 words and about 20 sentences
  2. One iconic, remarkable image at the top
  3. A succinct, expressive title
  4. Published on Tuesday, October 1, 2013
  5. 1561 +1's on G+, 168 retweets on Twitter
  6. Shared on Reddit and HackerNews (twice)
  7. 434 comments, some not so nice
Cheers!
Dustin


25 November, 2014 02:17PM by Dustin Kirkland (noreply@blogger.com)

The Fridge: Ubuntu Weekly Newsletter Issue 392

Welcome to the Ubuntu Weekly Newsletter. This is issue #392 for the week November 10 – 16, 2014, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Paul White
  • Elizabeth K. Joseph
  • Jose Antonio Rey
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution 3.0 License BY SA Creative Commons License

25 November, 2014 05:33AM

The Fridge: Ubuntu Weekly Newsletter Issue 393

25 November, 2014 05:33AM

Stephen Michael Kellat: Verifying Verification

Please remember that this is written by myself alone. Any reference to "we" below either refers to the five human beings that currently comprise the LoCo Council that I am part of or to the Ubuntu Realm in general. I apologize for any difficulties or consternation caused.


From the perspective of a community team, it can seem daunting when a "case management" bug is opened for the Verification or Re-Verification of a team. Many people wonder what that may mean. It might seem like a lot of work. It truly isn't.

In the Verification process, LoCo Council is checking to see if a community team has taken care of setting up some bare minimums. There is a basic expectation of some baseline things that all community teams should possess. Those items include:

  • A "Point of Contact" is set as the team's owner on Launchpad and is reachable
  • Online resources including IRC channel, wiki page, website, e-mail list, Forum/Discourse section, and LoCo Team Portal entry are set up
  • Your team conforms to naming standards

Some things that are useful to mention in a write-up to the LoCo Council include but are not limited to:

  • Links to your social media presences
  • Do you have members of your community who are part of the Ubuntu Members set?
  • What is your roadmap for the future?
  • What brought you to this point?

This doesn't have to be a magnum opus of literary work. An application for this does not even need copious pictures. What the Council needs are just the facts, so that members of the Council can see at a glance where your community stands. From there we end up asking what your community's needs are and how the Council might assist you. If you've taken over three hours to put together the application, you have probably put too much effort into it. It is meant to be a quick process instead of a major high-stakes presentation.

We have only a fraction of community teams checked out to show that they in fact have the baseline items set up. We could improve on that considerably this cycle. There is a page on the wiki with links to a template for building your team's own application. If your team isn't currently verified, you can write to the Council at loco-council@lists.ubuntu.com to set up a time and date when the Council can consider it.

25 November, 2014 12:00AM

November 24, 2014

hackergotchi for rescatux

rescatux

Rescatux 0.32 beta 3 released

Rescatux 0.32 beta 3 has been released.

New language selection at the Rescatux 0.32 beta 3 greeter

Downloads:

Rescatux 0.32b3 size is about 469 Megabytes.

LXDE start menu in Spanish thanks to the Rescatux 0.32 beta 3 greeter

Some thoughts:

This new beta release has only one new feature, but it’s a long-awaited one. Now you will be able to select your own language, country and keyboard at boot so that, among other things, you can ask for help in the integrated chat with your beloved keyboard layout.

This would not have been possible without the work from the Tails project on tails-greeter. My tails-greeter fork is a quick and dirty modification so that it fits into Rescatux. However, what I want to achieve is a default greeter in Debian Live so that one can choose one's keyboard layout (and other settings) from Xorg.

I’m subscribed to the tails-dev mailing list and I will soon ask them how to modify tails-greeter so that we can share a common codebase between tails-greeter and a live-greeter.

I’m not convinced about it being based on GTK and, even worse, on gdm3, which pulls in quite a few dependencies I never thought were needed for a display manager. Yes, I’m tempted to rewrite it in Qt, but maybe it’s not worth the effort.

I will probably try to modify tails-greeter so that it’s based on lightdm instead of gdm3. This task seems more feasible for me.

Finally, on the translation side, don’t expect Rescapp to be translated, even into Spanish, because the translation subsystem is not implemented and nobody has offered to do so. I think I’m also dropping the Spanish documentation (no one has offered to translate it) because this way I won’t have to update it. One less task to do for the release.

What I mean is that while choosing your language, country and keyboard will affect the rest of the distribution, do not expect Rescapp to be affected by the language and country settings. It will only be affected by the keyboard setting.

Roadmap for Rescatux 0.32 stable release:

You can check the complete changelog with link to each one of the issues at: Rescatux 0.32-freeze roadmap.

  • [#1323] GPT support
  • [#1364] Review Copyright notice
  • (Fixed in: 0.32b2) [#2188] install-mbr : Windows 7 seems not to be fixed with it
  • (Fixed in: 0.32b2) [#2190] debian-live. Include cpu detection and loopback cfg patches
  • (Fixed in 0.32b3) [#2191] Change Keyboard layout
  • [#2192] UEFI boot support
  • (Fixed in: 0.32b2) [#2193] bootinfoscript: Use it as a package
  • (Fixed in: 0.32b2) [#2199] Btrfs support
  • [#2205] Handle different default sh script
  • [#2216] Verify separated /usr support
  • (Fixed in: 0.32b2) [#2217] chown root root on sudoers
  • [#2220] Make sure all the source code is available
  • (Fixed in: 0.32b2) [#2221] Detect SAM file algorithm fails with directories which have spaces on them
  • (Fixed in: 0.32b2) [#2227] Use chntpw 1.0-1 from Jessie
  • [#2231] SElinux support on chroot options
  • [#2233] Disable USB automount
  • [#2236] chntpw based options need to be rewritten for reusing code
  • [#2239] Update doc: Put Rescatux into a media for Isolinux based cd (http://www.supergrubdisk.org/wizard-step-put-rescatux-into-a-media/ assumes the image is based on the Super Grub2 Disk version and not Isolinux; the step about extracting an iso inside an iso would no longer be needed)
  • (Fixed in: 0.32b2) [#2259] Update bootinfoscript to the latest GIT version
  • [#2264] chntpw – Save prior registry files
  • [#2234] New option: Easy Grub fix
  • [#2235] New option: Easy Windows Admin

Other fixed bugs (0.32b2):

  • Rescatux logo is not shown at boot
  • Boot entries are named “Live xxxx” instead of “Rescatux xxxx”

Fixed bugs (0.32b1):

  • Networking detection improved (fallback to network-manager-gnome)
  • The bottom bar did not have a shortcut to a file manager, as is common practice in modern desktops. Fixed when falling back to LXDE.
  • Double-clicking on directories on desktop opens Iceweasel (Firefox fork) instead of a file manager. Fixed when falling back to LXDE.

Improvements (0.32b1):

  • Super Grub2 Disk is no longer included. That makes it easier to put the ISO onto USB devices thanks to standard multiboot tools which support Debian Live cds.
  • Rescapp UI has been redesigned
    • Every option is at hand at the first screen.
    • Rescapp options can be scrolled. That makes it easier to add new options without worrying about the final design.
    • Run option screen buttons have been rearranged to make it easier to read.
  • RazorQT has been replaced by LXDE which seems more mature. LXQT will have to wait.
  • WICD has been replaced by network-manager-gnome. That makes it easier to connect to wired and wireless networks.
  • It is no longer based on Debian Unstable (sid) branch.

Distro facts:

Feedback welcome:

Did you ever complain about not being able to write in your own keyboard layout when asking for help in the integrated chat? Don’t miss your chance to test whether it works OK for your language, and report your feedback to us!

Don’t forget that you can use:

Help Rescatux project:

I think we can expect two months at most until the new stable Rescatux is ready, probably half that because I have managed to fix bugs very quickly lately. Helping on these tasks is appreciated:

  • Making a youtube video for the new options.
  • Make sure documentation for the new options is right.
  • Make snapshots for new options documentation so that they don’t lack images.

If you want to help please contact us here:

Thank you and happy download!


24 November, 2014 11:27PM by adrian15

hackergotchi for Ubuntu developers

Ubuntu developers

Jono Bacon: Ubuntu Governance Reboot: Five Proposals

Sorry, this is long, but hang in there.

A little while back I wrote a blog post that seemed to inspire some people and ruffle the feathers of some others. It was designed as a conversation-starter for how we can re-energize leadership in Ubuntu.

When I kicked off the blog post, Elizabeth quite rightly gave me a bit of a kick in the spuds about not providing a place to have a discussion, so I amended the blog post to link to this thread, where I encourage your feedback and participation.

Rather unsurprisingly, there was some good feedback, before much of it started wandering off the point a little bit.

I was delighted to see that Laura posted that a Community Council meeting on the 4th Dec at 5pm UTC has been set up to further discuss the topic. Thanks, CC, for taking the time to evaluate and discuss the topic in-hand.

I plan on joining the meeting, but I wanted to post five proposed recommendations that we can think about. Again, please feel free to share feedback about these ideas on the mailing list.

1. Create our Governance Mission/Charter

I spent a bit of time trying to find the charter or mission statements for the Community Council and Technical Board and I couldn’t find anything. I suspect they are not formally documented as they were put together back in the early days, but other sub-councils have crisp charters (mostly based off the first sub-council, the Forum Council).

I think it could be interesting to define a crisp mission statement for Ubuntu governance. What is our governance here to do? What are the primary areas of opportunity? What are the priorities? What are the risks we want to avoid? Do we need both a CC and TB?

We already have the answers to some of these questions, but are the answers we have the right ones? Is there an opportunity to adjust our goals with our leadership and governance in the project?

Like many of the best mission statements, this should be a collaborative process. Not a mission defined by a single person or group, but an opportunity for multiple people to feed into so it feels like a shared mission. I would recommend that this be a process that all Ubuntu members can play a role in. Ubuntu members have earned their seat at the table via their contributions, and would be a wonderfully diverse group to pull ideas from.

This would give us a mission that feels shared, and feels representative of our community and culture. It would feel current and relevant, and help guide our governance and wider project forward.

2. Create an ‘Impact Constitution’

OK, I just made that term up, and yes, it sounds a bit buzzwordy, but let me explain.

The guiding principles in Ubuntu are the Ubuntu Promise. It puts in place a set of commitments that ensure Ubuntu always remains a collaborative Open Source project.

What we are missing though is a document that outlines the impact that Ubuntu gives you, others, and the wider world…the ways in which Ubuntu empowers us all to succeed, to create opportunity in our own lives and the life of others.

As an example:

Ubuntu is a Free Software platform and community. Our project is designed to create open technology that empowers individuals, groups, businesses, charities, and others. Ubuntu breaks down the digital divide, and brings together our collective energy into a system that is useful, practical, simple, and accessible.

Ubuntu empowers you to:

  1. Deploy an entirely free Operating System and archive of software to one or multiple computers in homes, offices, classrooms, government institutions, charities, and elsewhere.
  2. Learn a variety of programming and development languages and have the tools to design, create, test, and deploy software across desktops, phones, tablets, the cloud, the web, embedded devices and more.
  3. Have the tools for artistic creativity and expression in music, video, graphics, writing, and more.
  4. . . .

Imagine if we had a document with 20 or so of these impact statements that crisply show the power of our collective work. I think this will regularly remind us of the value of Ubuntu and provide a set of benefits that we as a wider community will seek to protect and improve.

I would then suggest that part of the governance charter of Ubuntu is that our leadership are there to inspire, empower, and protect the ‘impact constitution'; this then directly connects our governance and leadership to what we consider to be the primary practical impact of Ubuntu in making the world a better place.

3. Cross-Governance Strategic Meetings

Today we have CC meetings, TB meetings, FC meetings etc. I think it would be useful to have a monthly, or even quarterly meeting that brings together key representatives from each of the governance boards with a single specific goal – how do the different boards help further each other’s mission. As an example, how does the CC empower the TB for success? How does the TB empower the FC for success?

We don’t want governance that is either independent or dependent at the individual board level. We want governance that is inter-dependent with each other. This then creates a more connected network of leadership.

4. Annual In-Person Governance Summit

We have a community donations fund. I believe we should utilize it to bring key representatives from across Ubuntu governance together in the same room for two or three days to discuss (a) how to refine and optimize process, but also (b) how to further the impact of our ‘impact constitution’ and inspire wider opportunity in Ubuntu.

If Canonical could chip in and there were a few sponsors, we could potentially bring all governance representatives together.

Now, it could be tempting to suggest we do this online. I think this would be a mistake. We want to get our leaders together to work together, socialize together, and bond together. The benefits of doing this in person significantly outweigh doing it online.

5. Optimize our community brand around “innovation”

Ubuntu has a good reputation for innovation. Desktop, Mobile, Tablet, Cloud…it is all systems go. Much of this innovation though is seen in the community as something that Canonical fosters and drives. There was a sentiment in the discussion after my last blog post that some folks feel that Canonical is in the driving seat of Ubuntu these days and there isn’t much the community can do to inspire and innovate. There was at times a jaded feeling that Canonical is standing in the way of our community doing great things.

I think this is a bit of an excuse. Yes, Canonical are primarily driving some key pieces…Unity, Mir, Juju for example…but there is nothing stopping anyone innovating in Ubuntu. Our archives are open, we have a multitude of toolsets people can use, we have extensive collaborative infrastructure, and an awesome community. Our flavors are a wonderful example of much of this innovation that is going on. There is significantly more in Ubuntu that is open than restricted.

As such, I think it could be useful to focus on this in our outgoing Ubuntu messaging and advocacy. As our ‘impact constitution’ could show, Ubuntu is a hotbed of innovation, and we could create some materials, messaging, taglines, imagery, videos, and more that inspires people to join a community that is doing cool new stuff.

This could be a great opportunity for designers and artists to participate, and I am sure the Canonical design team would be happy to provide some input too.

Imagine a world in which we see a constant stream of social media, blog posts, videos and more all thematically orientated around how Ubuntu is where the innovators innovate.

Bonus: Network of Ubucons

OK, this is a small extra one I would like to throw in for good measure. :-)

The in-person Ubuntu Developer Summits were a phenomenal experience for so many people, myself included. While the Ubuntu Online Summit is an excellent, well-organized online event, there is something to be said for in-person events.

I think there is a great opportunity for us to define two UbuCons that become the primary in-person events where people meet other Ubuntu folks. One would be focused on the US and one on Europe, and if we could get more (such as an Asian event), that would be awesome.

These would be driven by the community for the community. Again, I am sure the donations fund could help with the running costs.

In fact, before I left Canonical, this is something I started working on with the always-excellent Richard Gaskin who puts on the UbuCon before SCALE in LA each year.

This would be more than a LoCo Team meeting. It would be a formal Ubuntu event before another conference that brings together speakers, panel sessions, and more. It would be where Ubuntu people come to meet, share, learn, and socialize.

I think these events could be a tremendous boon for the community.


Well, that’s it. I hope this provided some food for thought for further discussion. I am keen to hear your thoughts on the mailing list!

24 November, 2014 10:35PM

Colin King: Measuring stalled instructions with perf stat

Recently I was playing around with CPU loading and was trying to estimate the number of compute operations being executed on my machine.  In particular, I was interested to see how many instructions per cycle and stall cycles I was hitting on the more demanding instructions.   Fortunately, perf stat allows one to get detailed processor statistics to measure this.

In my first test, I wanted to see how the Intel rdrand instruction performed with 2 CPUs loaded (each with a hyper-thread):

$ perf stat stress-ng --rdrand 4 -t 60 --times
stress-ng: info: [7762] dispatching hogs: 4 rdrand
stress-ng: info: [7762] successful run completed in 60.00s
stress-ng: info: [7762] for a 60.00s run time:
stress-ng: info: [7762] 240.01s available CPU time
stress-ng: info: [7762] 231.05s user time ( 96.27%)
stress-ng: info: [7762] 0.11s system time ( 0.05%)
stress-ng: info: [7762] 231.16s total time ( 96.31%)

Performance counter stats for 'stress-ng --rdrand 4 -t 60 --times':

231161.945062 task-clock (msec) # 3.852 CPUs utilized
18,450 context-switches # 0.080 K/sec
92 cpu-migrations # 0.000 K/sec
821 page-faults # 0.004 K/sec
667,745,260,420 cycles # 2.889 GHz
646,960,295,083 stalled-cycles-frontend # 96.89% frontend cycles idle
stalled-cycles-backend
13,702,533,103 instructions # 0.02 insns per cycle
# 47.21 stalled cycles per insn
6,549,840,185 branches # 28.334 M/sec
2,352,175 branch-misses # 0.04% of all branches

60.006455711 seconds time elapsed

stress-ng's rdrand test just performs a 64 bit rdrand read and loops on this until the data is ready, and performs this 32 times in an unrolled loop.  Perf stat shows that each rdrand + loop sequence on average consumes about 47 stall cycles, showing that rdrand is probably just waiting for the PRNG block to produce random data.
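
If you only care about a handful of counters, perf stat can also be restricted to specific events with -e. A rough sketch (the event list here is just an example; run perf list to see what your kernel actually exposes):

$ perf stat -e cycles,instructions,stalled-cycles-frontend,branches,branch-misses \
    stress-ng --rdrand 4 -t 60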

My next experiment was to run the stress-ng ackermann stressor; this performs a lot of recursion, hence one should see a predominantly large amount of branching.

$ perf stat stress-ng --cpu 4 --cpu-method ackermann -t 60 --times
stress-ng: info: [7796] dispatching hogs: 4 cpu
stress-ng: info: [7796] successful run completed in 60.03s
stress-ng: info: [7796] for a 60.03s run time:
stress-ng: info: [7796] 240.12s available CPU time
stress-ng: info: [7796] 226.69s user time ( 94.41%)
stress-ng: info: [7796] 0.26s system time ( 0.11%)
stress-ng: info: [7796] 226.95s total time ( 94.52%)

Performance counter stats for 'stress-ng --cpu 4 --cpu-method ackermann -t 60 --times':

226928.278602 task-clock (msec) # 3.780 CPUs utilized
21,752 context-switches # 0.096 K/sec
127 cpu-migrations # 0.001 K/sec
927 page-faults # 0.004 K/sec
594,117,596,619 cycles # 2.618 GHz
298,809,437,018 stalled-cycles-frontend # 50.29% frontend cycles idle
stalled-cycles-backend
845,746,011,976 instructions # 1.42 insns per cycle
# 0.35 stalled cycles per insn
298,414,546,095 branches # 1315.017 M/sec
95,739,331 branch-misses # 0.03% of all branches

60.032115099 seconds time elapsed

...so about 35% of the time is used in branching and we're getting about 1.42 instructions per cycle with not many stall cycles, so the code is most probably executing inside the instruction cache, which isn't surprising because the test is rather small.

My final experiment was to measure the stall cycles when performing complex long double floating point math operations, again with stress-ng.

$ perf stat stress-ng --cpu 4 --cpu-method clongdouble -t 60 --times
stress-ng: info: [7854] dispatching hogs: 4 cpu
stress-ng: info: [7854] successful run completed in 60.00s
stress-ng: info: [7854] for a 60.00s run time:
stress-ng: info: [7854] 240.00s available CPU time
stress-ng: info: [7854] 225.15s user time ( 93.81%)
stress-ng: info: [7854] 0.44s system time ( 0.18%)
stress-ng: info: [7854] 225.59s total time ( 93.99%)

Performance counter stats for 'stress-ng --cpu 4 --cpu-method clongdouble -t 60 --times':

225578.329426 task-clock (msec) # 3.757 CPUs utilized
38,443 context-switches # 0.170 K/sec
96 cpu-migrations # 0.000 K/sec
845 page-faults # 0.004 K/sec
651,620,307,394 cycles # 2.889 GHz
521,346,311,902 stalled-cycles-frontend # 80.01% frontend cycles idle
stalled-cycles-backend
17,079,721,567 instructions # 0.03 insns per cycle
# 30.52 stalled cycles per insn
2,903,757,437 branches # 12.873 M/sec
52,844,177 branch-misses # 1.82% of all branches

60.048819970 seconds time elapsed

The complex math operations take some time to complete, stalling on average over 30 cycles per op.  Instead of using 4 concurrent processes, I re-ran this using just the two CPUs, eliminating 2 of the hyperthreads.  This resulted in 25.4 stall cycles per instruction, showing that hyperthreaded processes are stalling because of contention on the floating point units.
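
For reference, that kind of run can be approximated by pinning the stressors to one hyper-thread per core. A sketch, assuming CPUs 0 and 2 sit on different physical cores (check /proc/cpuinfo or lstopo for your actual topology before copying the numbers):

$ perf stat taskset -c 0,2 stress-ng --cpu 2 --cpu-method clongdouble -t 60 --times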

Perf stat is an incredibly useful tool for examining performance issues at a very low level.   It is simple to use and yet provides excellent stats to allow one to identify issues and fine-tune performance-critical code.  Well worth using.

24 November, 2014 07:43PM by Colin Ian King (noreply@blogger.com)

hackergotchi for Tanglu developers

Tanglu developers

Cutelyst 0.5.0

A bit more than one year after the initial commit, Cutelyst makes its 5th release.

It’s now powering 3 commercial applications, the last one recently got into production and is the most complex of them, making heavy use of Grantlee and Cutelyst capabilities.

Speaking of Grantlee, if you use it on Qt 5 you will get hit by QTBUG-41469, which sadly doesn’t seem to be getting fixed in time for 5.4, but uWSGI can constrain your application resources so your server doesn’t run out of memory (worth the leak due to its usefulness).

Here is an overview since 0.4.0 release:

  • Removed the hardcoded “Debug” build setting, so that one can build with “Release”, increasing performance by up to 20% – https://gitorious.org/cutelyst/pages/CutelystPerformance
  • Request::uploads() API was changed to be useful in real world, filling a QMap with the form field name as a key and in the proper order sent by the client
  • Introduced a new C_ATTR macro which allows the same Perl-style attribute syntax, like C_ATTR(method_name, :Path(/foo/bar) :Args)
  • Added an Action class, RoleACL, which allows authorization based on access control lists, making it easy to deny access to some resources if a user doesn’t match the needed role
  • Added a RenderView class to make it easier to delegate the rendering to a view such as Grantlee
  • Request class is now QObject class so that we can use it on Grantlee as ctx.request.something
  • Make use of the uWSGI ini (--ini) configuration file to also configure the Cutelyst application
  • Better docs
  • As always some bugs were fixed

I’m very happy with the results: site performance tools like webpagetest give great scores for the apps, and I have started to work on translating the Catalyst tutorial to Cutelyst, but I realize that I need the Chained dispatcher working before that…

If you want to try it, I’ve made a hello-world app available today at https://gitorious.org/cutelyst/hello-world

Download here!


24 November, 2014 06:51PM by dantti

hackergotchi for Ubuntu developers

Ubuntu developers

Dustin Kirkland: USENIX LISA14 Talk: Deploy and Scale OpenStack


I had the great pleasure to deliver a 90 minute talk at the USENIX LISA14 conference, in Seattle, Washington.

During the course of the talk, we managed to:

  • Deployed OpenStack Juno across 6 physical nodes, on an Orange Box on stage
  • Explained all of the major components of OpenStack (Nova, Neutron, Swift, Cinder, Horizon, Keystone, Glance, Ceilometer, Heat, Trove, Sahara)
  • Explored the deployed OpenStack cloud's Horizon interface in depth
  • Configured Neutron networking with internal and external networks, as well as a gateway and a router
  • Set up our security groups to open ICMP and SSH ports
  • Uploaded an SSH keypair
  • Modified the flavor parameters
  • Updated a bunch of quotas
  • Added multiple images to Glance
  • Launched some instances until we maxed out our hypervisor limits
  • Scaled up the Nova Compute nodes from 3 units to 6 units
  • Deployed a real workload (Hadoop + Hive + Kibana + Elastic Search)
  • Then, we deleted the entire environment, and ran it all over again from scratch, non-stop
Slides and a full video are below.  Enjoy!




Cheers,
Dustin

24 November, 2014 05:01PM by Dustin Kirkland (noreply@blogger.com)

hackergotchi for Xanadu developers

Xanadu developers

Monitor what happens on your GNU/Linux with Iftop, Iotop and Htop

On GNU/Linux there is a multitude of tools dedicated to monitoring what happens on our system, and sometimes choosing the right tool can be difficult. That is why today I will talk about iftop, iotop and htop, three tools that, despite being used from the terminal, are super easy to handle and let us observe in detail what is happening on our system.

The first of the three is called iftop, a tool that produces a frequently updated list of network connections sorted by bandwidth usage.

Iftop

The next one on the list is called iotop, whose purpose is to show us a list of the processes reading from and writing to the disks on our system and the speed of that access, sorted from highest to lowest.

iotop

Finally I will talk about htop; those who know the top command will notice some similarities and some differences. Its interface focuses on improving the experience of top users with colors and bars that make the information shown easier to understand.

Htop

Installing any of these tools is quite simple; just run the command corresponding to your distribution.

# apt install iftop iotop htop

# yum install iftop iotop htop

# pacman -S iftop iotop htop

To run them, just open a terminal as root or use sudo.

# iftop
# iotop
# htop
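
Each of the three also accepts a few options worth knowing about; for example (the interface and user names below are just placeholders):

# iftop -n -i eth0    (watch a specific interface without DNS resolution)
# iotop -o            (show only processes that are actually doing I/O)
# htop -u someuser    (show only the processes of a given user)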

References:


Tagged: htop, iftop, iotop, monitoring

24 November, 2014 03:33PM by sinfallas

hackergotchi for Ubuntu developers

Ubuntu developers

Didier Roche: Ubuntu Developer Tools needs you for its new name!

We’ve been talking about the Ubuntu Developer Tools Center for a few months now. We’ve seen a lot of people testing it out & contributing and we had a good session at the Ubuntu Online Summit about what the near future holds for UDTC.

Also during that session, based on feedback we had received, we talked about how “UDTC” and “Ubuntu Developer Tools Centre” is a bit of a mouthful, and the acronym is quite easy to muddle. We agreed that we needed a new name, and that’s where we need your help.

We’re looking for a name which succinctly describes what the Developer Tools Center is all about, its values and philosophy. Specifically, that we are about developing ON Ubuntu, not just FOR Ubuntu. That we strive to ensure that the tools made available via the tools center are always in line with the latest versions delivered by the upstream developers. That we automate the testing and validation of this, so developers can rely on us. And that we use LTS releases as our environment of choice so developers have a solid foundation on which to build. In a nutshell, a name that conveys that we love developers!

If you have a great idea for a new name please let us know by commenting on the Google+ post or by commenting on this blog post.

The final winner will be chosen by a group of Ubuntu contributors, but please +1 your favorite to help us come up with a shortlist. The winner will receive the great honor of an Ubuntu T-shirt and the knowledge that they have changed history! We’ll close this contest by Monday the 8th of December.

Now, it’s all up to you! If you want to also contribute to other parts of this ubuntu loves developers effort, you’re more than welcome!

24 November, 2014 03:31PM

Daniel Holbach: I Am Who I Am Because Of Who We All Are

I read the “We Are Not Loco” post a few days ago. I could understand that Randall wanted to further liberate his team in terms of creativity and everything else, but to me it feels like the wrong approach.

The post makes a simple promise: do away with bureaucracy, rename the team to use a less ambiguous name, JFDI! and things are going to be a lot better. This sounds compelling. We all like simplicity; in a faster and more complicated world we all would like things to be simpler again.

What I can also agree with is the general sense of empowerment. If you’re a member of a team somewhere or want to become part of one: go ahead and do awesome things – your team will appreciate your hard work and your ideas.

So what was it in the post that made me sad? It took me a while to figure out what specifically it was. The feeling set in when I realised somebody had turned their back on a world-wide community and said “all right, we’re doing our own thing – what we used to do together is just old baggage to us”.

Sure, it’s always easier not having to discuss things in a big team. Especially if you want to agree on something like a name or any other small detail this might take ages. On the other hand: the world-wide LoCo community has achieved a lot of fantastic things together: there are lots of coordinated events around the world, there’s the LoCo team portal, and most importantly, there’s a common understanding of what teams can do and we all draw inspiration from each other’s teams. By making this a global initiative we created numerous avenues where new contributors find like-minded individuals (who all live in different places on the globe, but share the same love for Ubuntu and organising local events and activities). Here we can learn from each other, experiment and find out together what the best practices for local community awesomeness are.

Going away and equating the global LoCo community with bureaucracy to me is desolidarisation – it’s quite the opposite of “I Am Who I Am Because Of Who We All Are”.

Personally I would have preferred a set of targeted discussions which try to fix processes, improve communication channels and inspire a new round of leaders of Ubuntu LoCo teams. Not everything you do in a LoCo team has to be approved by the entire set of other teams; actual reality in the LoCo world is quite different from that.

If you have ideas to discuss or suggestions, feel free to join our loco-contacts mailing list and bring it up there! It’s your chance to hang out with a lot of fun people from around the globe. :-)

24 November, 2014 03:07PM

hackergotchi for siduction

siduction

Release notes for siduction 2014.1

We are very happy to present to you the final release of siduction 2014.1 – Indian Summer. siduction is a distribution based on Debian’s unstable branch and we try to release a few new snapshots over the course of each year. For 2014 it will be just this final release. We did a lot of stabilizing work in the past year, besides working on further integrating systemd and working on dev releases. We know it is not ideal to have an install medium that is older than six months, so please accept our apologies for that; we will try to release more often.

All our flavours are in pretty good shape, so we will not waste time with an RC and do the real release right away.

siduction 2014.1 – Indian Summer is shipped with six desktop environments: KDE SC, XFCE, LXDE, LXQt, GNOME and Cinnamon, all in 32- and 64-bit variants. Of the included DEs, this time around only LXDE fits on a 700 MB CD. But as CDs become more irrelevant every day, we are not too worried about this and recommend using USB sticks for installation.

The released images are a snapshot of Debian unstable, which also goes by the name of Sid, from 2014-11-22. They are enhanced with some useful packages and scripts, our own installer and a custom patched version of Linux kernel 3.17, accompanied by X server 1.16.1.

Besides those desktop environments we also include noX, which is an environment without X. Last but not least, there is an image that goes by the name of Xorg and features the minimal window manager Fluxbox on top of X.

A year ago we decided to release with systemd, while Debian was still discussing what init system to use in the future. Meanwhile Debian and Ubuntu have decided to go with systemd as well. It is the most technically advanced of the init systems at hand. We have a preliminary section on systemd in our sidu-manual, which will be expanded and translated into other languages.

What is new?
Cinnamon
After our dev release of Cinnamon in October was well received, we are shipping Cinnamon as a full member of the siduction flavour family. For further information on the innards of this GTK+ 3 driven desktop environment, please refer to the release notes of that release.

We gain two, we lose one. Razor-Qt is no longer released by us as it has reached its end of life and has been merged into LXQt, which we have the pleasure to talk about now, because that is the second addition to our family. With our developer agaida being upstream of LXQt, we will always have the very latest functional packages in our repositories. LXQt has matured a lot and deserves to be part of the family. Even though it has not yet reached the polished swiftness of LXDE, it is well on its way. It has been completely built on Qt 5 and is in large parts prepared for Wayland.

And besides that?
KDE SC
KDE SC has matured to version 4.14.2, which is one of the last iterations of the KDE 4 chapter. We have taken the Kickoff menu out and implemented Homerun instead. Systemsettings has two new modules that Debian does not (yet) have. One of them is called ‘Desktop Search Advanced’ and is a more detailed configuration module for Baloo, the successor of Nepomuk. Also, as a second search agent for Baloo besides Dolphin, we have integrated Milou into the panel. The other new module in systemsettings is labeled Systemd and ships a plethora of options that can be of tremendous help with configuring the systemd daemon.

You can safely assume this is the last siduction release shipping software from KDE’s fourth cycle. Our next release will ship Frameworks 5 and Plasma 5.

GNOME
The shipped GNOME version is 3.14.1 and it brings new things. As GNOME is still pretty new in our release cycle, here are a few hints on how to run it:

There are two ways to start your gnome-session:
* GNOME-Classic, which implements the GNOME2 look
* GNOME, which implements the GNOME 3 look and desktop-effects
To choose GNOME or GNOME-Classic, users should choose the default session from the display manager menu. By default in live mode GNOME 3 is started, but it will use software rendering. To use GNOME 3 with hardware rendering, users of ATI cards must install firmware-linux-nonfree before starting the installer. The boot cheatcode “gnome” was removed because it is now deprecated. The window look in GNOME has changed because the GNOME developers dropped the minimize and maximize buttons. To minimize or maximize a window, you must right-click on the window title bar and choose minimize or maximize from the menu. Also, to maximize a window you can double-click on the window title bar. We have added some of the most used applications to the Favourites (aka the Dash). You will discover hexchat, transmission, libreoffice, the siduction bug report tool, gnome-terminal and many more there.

We ship noX for the second time as an official release; it was first introduced in October 2012 as a development release. As there is no graphical environment, you need to run the cli-installer as root to perform the installation.

XFCE is still being shipped in version 4.10.1 and is as reliable as ever. It is a desktop environment that just gets out of your way when work needs to be done.

Next to LXQt we also ship the latest version of LXDE, which is also lightweight, but relies on GTK+ 2 instead of Qt. LXDE will be developed as long as GTK+ 2 stays usable.

A lot of time-consuming changes again went into adapting the codebase we forked to our needs and into integrating systemd. Work on the sidu-manual, as it is called now, is ongoing, to make it a lot easier to add new content than before.

All in all we closed around 230 bugs since the last final release.

The installer still offers btrfs as an experimental filesystem. Please be careful if you use it and always back up your data. Also, the installer has for now been reduced to its basic features until the more sophisticated stuff works more reliably. Due to some internal changes in fdisk, some parts of the automatic partitioning have to be rewritten. As there was not enough time, we took that feature out for now.

We had to make some changes to the concept of our artwork. We used to devote each release to a rock song and try to have matching artwork. For two reasons we gave up on that idea. For one, for a while this year we had no art team at all. On the other hand, it takes quite some time to integrate artwork into the infrastructure in its respective places and make it all work. With the new concept things became a bit easier: all we basically need to do is alter the colours or patterns of the given artwork. The distro art we are using for this and the following releases for the foreseeable future was created by Bob, a professional artist. Thanks a lot for your contribution!

Our Resources

siduction Forum
siduction Blog
Git Archive
Distro News
Bug-Tracker
siduction-Map

Support can be obtained on our forum as well as on IRC. The relevant channels on the OFTC network are #siduction for English support and #siduction-core if you would like to join in and participate. On your desktop you will also find an icon that takes you to the right channel for support, depending on the chosen language.

To be able to act as a testbed for Debian, we are introducing our own bug tracker. Let me explain how you can help us and Debian by submitting bug reports for broken packages. Seasoned users will know how to file bugs directly with the Debian BTS (Bug Tracking System). For users not so comfortable with that system we have reportbug-ng preinstalled.

If you think you have found a bug in a Debian package, please start reportbug-ng and put the name of the package in the address line on top. The app will then search through the already filed bugs for that package and show them. Now it's up to you to determine whether "your" bug has already been reported. If it has, ask yourself if you have anything relevant to add to the report, or maybe even a patch. If not, you are done for this time. If the bug has not been reported yet and you are not yet familiar with the BTS, you may report the bug in our bug tracker.

That obviously goes for siduction packages as well. We will sort the bugs for you and file them in the appropriate place, if they are reproducible. Please look out for a forum post with more detailed info on the bug tracker soon. If all this seems too complicated for now, feel free to use the bugs thread on the forum; it will keep working until the final release.

There is nothing we can tell you about our release cycle other than that we strive for up to four releases per year, but that may vary greatly, depending on the development of siduction and Debian Unstable.

As we are always looking for contributors, here is what to do: come to the IRC channel #siduction-core and talk to us about what you would like to do within the project, or where you think you could help. As you will notice if you scroll down, we have no art team at the moment. If you are willing and capable, talk to us.
Hardware Tips

If you own an ATI Radeon graphics accelerator, please use the failsafe option when booting the Live ISO. This option will add the cheatcodes radeon.modeset=0 xmodule=vesa to the kernel boot line, so that you can boot to X. Before installing, on the Live ISO, please install firmware-linux-nonfree. To do so, please open /etc/apt/sources.list.d/debian.list with your favourite editor as root and append contrib non-free to the end of the first line. Save the edit and do:

apt-get update && apt-get install firmware-linux-nonfree

If you install the operating system now, the package will be installed as well, saving you from a garbled screen when first rebooting. Mind that if you reboot before installing the system, the changes you made will be lost.

If your system has a wireless network, it will probably not work out of the box with free drivers, so you had better start with a wired network connection. You might want to use the script fw-detect to get information on wireless drivers. The installer will prompt you for any missing firmware and guide you through the process of installing it.

Last but not least, a hint for users of the kernel-based virtual machine KVM. The development of a frontend for the kernel-based virtual machine (kvm) began as a fork of qemu with the name qemu-kvm, or "kvm" for short. Since qemu version 1.4 all patches of the kvm fork have been integrated back into the qemu source. Also, there has been much progress in the field of virtualization, so there is a lot of outdated documentation around. We have a current worksheet for Qemu in our wiki.

Credits for siduction 2014.1:

Core Team:
Alf Gaida (agaida)
Angelescu Ovidiu (convbsd)
Axel Beu (ab)
Ferdinand Thommes (devil)
J. Theede (musca)
Tom Wroblewski (GoingEasy9)
Torsten Wohlfarth (towo)

Maintainers of the siduction Desktop Environments:
Cinnamon: J. Theede (musca)
GNOME: Angelescu Ovidiu (convbsd)
KDE: Ferdinand Thommes (devil), José Manuel Santamaría Lema (santa)
LXDE: Alf Gaida (agaida)
LXQt: Alf Gaida (agaida)
noX: Alf Gaida (agaida)
XFCE: Torsten Wohlfarth (towo)
Xorg: agaida/convbsd

Art Team:
Bob
We need more contributors for siduction release art!

Code, ideas and support:
ayla
bluelupo
der_bud
J. Hamatoma (hama)
Markus Schimpf (arno911)
bodhi

Thank you!

Also thank you very much to all testers and all the people giving us support in any possible way. This is also your achievement.

We also want to thank Debian, as we are using their base.
And now enjoy!

On behalf of the siduction team:
Ferdinand Thommes

24 November, 2014 07:06AM by Ferdinand Thommes

hackergotchi for TurnKey Linux

TurnKey Linux

How to setup an email to SMS forwarding gateway address with Postfix

SMS is an important tool in the arsenal I use to fight my never-ending war with productivity-destroying distractions.

Colleagues, friends, mailing lists, foes, viagra peddlers and nigerian princes send me in aggregate about 300 pieces of e-mail every day, most of it not really that important. All of this noise can eat up an enormous amount of my attention, so when I really need to concentrate I'm pretty much forced to ignore e-mail altogether, sometimes for weeks at a time.

By comparison I only receive less than 10 SMS messages a day on my phone. They're also much shorter so I can go over them much more quickly. Consequently, it's usually much easier to get my attention with an SMS than an e-mail so people who know me have learned to use SMS as an out-of-band high-priority communication channel.

OTOH, I realize sending an SMS when you're busy hacking away at your computer can be a bit of a bother. You have to get out your phone, and fiddle around with a crappy virtual keyboard, etc.

So to get the best of both worlds I set up a secret email address on my mail server that sends e-mails directly to my phone as SMS. This address I give out to anyone who really needs to get in touch with me even when I'm offline.

This is implemented as a configuration on my postfix mail server, which pipes any email sent to a particular secret alias through to the extract-body.py script, which extracts the body of the message, and then to a little script which sends the body of the message to clickatell, a company that provides a simple SMS sending API.

Let's take a look behind the scenes:

# grep sms /etc/postfix/aliases
sms.c2e1: "|/usr/local/bin/extract-body.py|/usr/local/bin/clickatell-sendmsg --stdin to=0541232123 --maxlen=140"

# cat /usr/local/bin/clickatell-sendmsg
#!/bin/sh

export CLICKATELL_USER=liraz
export CLICKATELL_PASSWORD=mypassword
export CLICKATELL_API_ID=1232123

exec $(dirname $0)/clickatell-sendmsg.py "$@"

I'm attaching the required scripts to the end of this post.
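
To give a rough idea of what the wrapper does, a minimal sketch of an equivalent sender might look like this. It is not the attached clickatell-sendmsg.py; the endpoint and parameter names follow Clickatell's legacy HTTP API as commonly documented, so treat them as assumptions and verify them against the current API docs:

#!/usr/bin/env python
# Rough sketch of an SMS sender in the same spirit as clickatell-sendmsg.py;
# the real 133-line wrapper is attached at the end of the post. The endpoint
# and parameter names below are assumptions based on Clickatell's legacy
# HTTP API, not taken from the original script.
import os
import sys
import urllib.parse
import urllib.request

def send_sms(to, text, maxlen=140):
    params = urllib.parse.urlencode({
        "user": os.environ["CLICKATELL_USER"],
        "password": os.environ["CLICKATELL_PASSWORD"],
        "api_id": os.environ["CLICKATELL_API_ID"],
        "to": to,
        "text": text[:maxlen],
    })
    url = "https://api.clickatell.com/http/sendmsg?" + params
    with urllib.request.urlopen(url) as response:
        return response.read().decode()

if __name__ == "__main__":
    # Like --stdin in the alias above: the message body arrives on stdin.
    print(send_sms(to="0541232123", text=sys.stdin.read().strip()))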

Notes:

  • Clickatell's HTTP/HTTPS API can be easily invoked via curl

  • clickatell-sendmsg.py is a 133-line Python wrapper around curl

  • extract-body.py is an 11-line wrapper that extracts the message body from an email that Postfix pipes through to these aliases (see the sketch below).

    Also, if a PGP signature exists it will delete it.
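
As a rough illustration of that step, here is a minimal sketch of such a filter. It is not the original 11-line script (which is attached to the post); it simply reads a message on stdin, keeps the first text/plain part, and strips a clearsigned PGP signature block if present:

#!/usr/bin/env python
# Rough sketch only, not the original extract-body.py from this post:
# read a complete email message on stdin, print just the plain-text body,
# and drop a clearsigned PGP signature block if one is present.
import re
import sys
from email import message_from_string

msg = message_from_string(sys.stdin.read())

if msg.is_multipart():
    # Take the first text/plain part; ignore attachments and HTML parts.
    parts = [p for p in msg.walk() if p.get_content_type() == "text/plain"]
    payload = parts[0].get_payload(decode=True) if parts else b""
else:
    payload = msg.get_payload(decode=True) or b""

body = payload.decode("utf-8", "replace")
body = re.sub(r"-----BEGIN PGP SIGNATURE-----.*?-----END PGP SIGNATURE-----",
              "", body, flags=re.DOTALL)
sys.stdout.write(body.strip() + "\n")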

24 November, 2014 05:15AM by Liraz Siri

hackergotchi for Parsix developers

Parsix developers

An updated kernel based on Linux 3.14.25 is now available for Parsix GNU/Linux 7...

An updated kernel based on Linux 3.14.25 is now available for Parsix GNU/Linux 7.0 (Nestor). Update your systems to install it.

24 November, 2014 03:57AM by Parsix GNU/Linux

hackergotchi for Xanadu developers

Xanadu developers

I2P on Debian, Ubuntu, Mint and Trisquel

I2P is an anonymous network that exposes a simple layer which applications can use to send messages to each other anonymously and securely. The network itself is strictly message-based, but a library is available that allows reliable streaming communication on top of it. All communication is encrypted end to end (in total, four layers of encryption are used when sending a message), and even the endpoints are cryptographic identifiers (essentially a pair of public keys).

No network can be "perfectly anonymous". I2P's ongoing goal is to make attacks harder and harder to mount. Its anonymity will grow stronger as the network grows in size and with the academic review that is underway.

What can you do with it?

Within the I2P network, applications have no restrictions on how they can communicate: those that normally use UDP can use I2P's basic functionality, and applications that normally use TCP can use the TCP-like streaming library. I2P includes a generic TCP/I2P bridge application (I2PTunnel) that makes it possible to send TCP streams into the I2P network, as well as to receive TCP streams from outside the network and forward them to a specific IP address.

To install I2P on Debian, add the repository matching your release to the sources.list file.

  • Stable
deb http://deb.i2p2.no/ stable main
deb-src http://deb.i2p2.no/ stable main
  • Testing or Sid
deb http://deb.i2p2.no/ unstable main
deb-src http://deb.i2p2.no/ unstable main

Then download the key used to sign the repository from here and add it using the following command.

apt-key add debian-repo.pub

Now update the list of repositories and install:

# apt update
# apt install i2p i2p-keyring

For Ubuntu, Mint and Trisquel the procedure is slightly different, since we will use a PPA for the installation.

# apt-add-repository ppa:i2p-maintainers/i2p
# apt update
# apt install i2p

To start the program after installation (regardless of the method used), we can use one of the following commands.

  • On demand, as a regular user
$ i2prouter start
  • As a service
# dpkg-reconfigure i2p

After installing, remember to adjust your NAT/firewall. The ports to open can be seen on the network configuration page in the router console.

If you want to access eepsites (websites on the I2P network) through your browser, take a look at the browser proxy configuration page for simple instructions.

Tagged: anonimo, i2p, red

24 November, 2014 01:52AM by sinfallas

hackergotchi for Ubuntu developers

Ubuntu developers

Stephen Michael Kellat: Our Tools: MORE POWER

Once Jono Bacon's blog post about seeking a reboot of community governance hit, multiple threads bloomed in several directions. Things have wandered away from the original topic of governance structures toward vaguer, more general issues. To an extent, I metaphorically keep biting my tongue about saying much more in the thread.

I do know that I have put forward the notion that we attempt an export of the xubuntu-docs package documentation to EPUB format. This is partly intended to lower the threshold for access: with an e-reader device you could read the documentation on a separate screen while you sit at the computer. This is only meant as an exploratory experiment rather than a commitment to ship.

In light of feedback complaining that DocBook can be difficult to work with, sometimes it is appropriate to test some of its power and show it off. DocBook is quite powerful if you can leverage it. With the variety of formats it can be exported to beyond the HTML files already shipped in Xubuntu, we can test new ways of shipping documentation.
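
As an illustration of the sort of experiment I mean, a DocBook source could be exported to EPUB with something as small as the following sketch. It assumes the dbtoepub tool from the DocBook XSL stylesheets is installed, the input filename is only illustrative, and this is not an agreed xubuntu-docs workflow:

# A quick experiment, not an agreed xubuntu-docs workflow: export a DocBook
# source file to EPUB with the dbtoepub tool shipped with the DocBook XSL
# stylesheets. The input filename below is only illustrative.
import subprocess

# dbtoepub writes an .epub named after the input into the current directory;
# "xmlto epub desktop-guide.xml" would be another option to experiment with.
subprocess.run(["dbtoepub", "desktop-guide.xml"], check=True)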

To an outsider, many of the processes used to create the various flavors of Ubuntu may look unnecessarily complicated and ripe for simplification. In some cases, we have extra power and flexibility built in for future expansion. In the time between Long Term Support releases we may need to show those who wish to join the community the power of our toolsets and what we can do with them.

24 November, 2014 12:00AM

November 23, 2014

Dimitri John Ledkov: Analyzing public OpenPGP keys

The OpenPGP Message Format (RFC 4880) clearly defines the key structure and wire formats (OpenPGP packets). So when I looked into setting up a public key network (SKS) server, I quickly found pointers to dump files in that format for bootstrapping a key server.

I did not feel like experimenting with Python and instead opted for Go, where I found the http://code.google.com/p/go.crypto/openpgp/packet library, which has comprehensive support for parsing low-level OpenPGP structures. I downloaded the SKS dump, verified its MD5SUM hashes (lolz), and went ahead to process the files in Go.

With help from http://github.com/lib/pq and database/sql, I've written a small program to churn through all the dump files, filter for primary RSA keys (not subkeys) and inject them into a database table. The things I have chosen to inject are the fingerprint, N and E. N & E are the modulus and the public exponent of the RSA key pair; together they form the public part of an RSA keypair. So far, nothing fancy.

Next I ran an SQL query to see how unique things are... and found 92 unique N & E pairs that have from two up to fifteen duplicates. In total there are 231 unique fingerprints which use key material with a known duplicate in the public key network. That didn't sound good. It is also odd, given that over 940 000 other RSA keys managed to get enough entropy to pull a unique key out of the keyspace haystack (which is humongously huge, by the way).
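
The duplicate check might look something like the following sketch, written in Python with psycopg2 rather than as part of the Go program described above. The "pubkeys" table and its fingerprint, n and e text columns are an illustrative schema, not the one actually used:

# Sketch of the duplicate-modulus query; schema and column names are assumed.
import psycopg2

conn = psycopg2.connect("dbname=sks_dump")
cur = conn.cursor()
cur.execute("""
    SELECT n, e, count(*) AS dups
      FROM pubkeys
  GROUP BY n, e
    HAVING count(*) > 1
  ORDER BY dups DESC
""")
for n, e, dups in cur:
    # n is assumed to be stored as a hex string, so trim it for display.
    print("%2d keys share e=%s n=%s..." % (dups, e, n[:32]))
conn.close()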

With the list of keys in hand, I fetched them, and they do not look like regular keys - their UIDs do not have names & emails; instead they look like something from the monkeysphere. The keys look like they were originally used for TLS and/or SSH authentication, but were converted into OpenPGP format and uploaded to the public key server. This reminded me of Debian's SSL key generation vulnerability, CVE-2008-0166. So these keys might have been generated with bad entropy by tools affected by that CVE and later converted to OpenPGP.

Looking at the openssl-blacklist package, it should be relatively easy to regenerate all possible RSA key pairs, and I believe all other material that is hashed to generate the fingerprint is also available (RFC 4880#12.2). Thus it should be reasonably possible to generate the matching private keys, create revocation certificates and publish them with pointers to CVE-2008-0166 (or email them to the people who have signed the given monkeysphered keys). When I have a minute I will work on openpgp-blacklist-type scripts to address this.

If anyone is interested in the Go source code I've written to process openpgp packets, please drop me a line and I'll publish it on github or something.

23 November, 2014 09:15PM by Dimitri John Ledkov (noreply@blogger.com)

hackergotchi for Parsix developers

Parsix developers

New security updates are available for Parsix GNU/Linux 7.0 (Nestor) and 6.0 (Tr...

New security updates are available for Parsix GNU/Linux 7.0 (Nestor) and 6.0 (Trev). Please see http://www.parsix.org/wiki/Security for details.

23 November, 2014 09:03PM by Parsix GNU/Linux

hackergotchi for Ubuntu developers

Ubuntu developers

Ovidiu-Florin Bogdan: Awesome BSP in München

An awesome BSP just took place in München where teams from Kubuntu, Kolab, KDE PIM, Debian and LibreOffice came and planned the future and fixed bugs. This is my second year participating at this BSP and I must say it was an awesome experience. I got to see again my colleagues from Kubuntu and got to […]

23 November, 2014 06:10PM

Sam Hewitt: Totally Not Weird Cheese Gel (Made with Science!)

A significant part of cooking is chemical science, though few people think of it this way. When you combine cooking with what people consider stereotypical chemistry –using & mixing things with long technical names– you can have even more fun.

Cheese as a Condiment

A typical method of adding cheese to things is simply to place grated cheese over a pile of food and melt it (usually in an oven). One of the problems with this (as I see it) is that when you heat cheese it tends to split into milk solids and liquid milk fat, so you end up with unnecessary grease.

In my mind, cheese-as-a-condiment should be smooth & creamy, like a fondue, but your average "out-of-the-package" cheese does not melt this way. You can purchase one of several (disgusting) cheese products that give you this effect, but it's more fun to make one yourself, and you have the added benefit of knowing what goes into it.

It can then be used on nachos, for example.

Emulsification

One way to do this is to use a chemical emulsifier to make the liquid fats –cheese– soluble in something they are normally not soluble in –such as water. Essentially, this is something that's done frequently in factory settings to make many processed cheese products, spreads, dips, etc.

Now there are a tonne of food-safe chemical emulsifiers you could use, each with slightly different properties, but the one I have a stock of, and that works particularly well with milk fats like those in cheese, is sodium citrate –the salt of citric acid– which you can get from your friendly online distributor of science-y cooking products.

Many of these are also flavourless, or, given the relatively small amounts used in food, any flavour they impart is insignificant. They're essentially used for textural changes.

    Ingredients

  • 250 mL water*
  • 10 grams sodium citrate
  • 3-4 cups grated cheese –such as, cheddar**

*if you're feeling experimental, you can use a different (water-based) liquid for additional flavour, such as wine or an infusion

**you can use whichever cheeses you fancy, but I'd avoid processed cheeses as they may have additives that could mess up the chemistry

    Directions

  1. In a pot, boil the water and dissolve the sodium citrate in it (10 grams in 250 mL, roughly a 4% solution).
  2. Reduce the heat and begin to melt the cheese into the water a handful at a time, whisking constantly.
  3. When all the cheese has melted, keep stirring while the mixture thickens.
  4. Serve or use hot –keep it warm.

At the end of this what you'll essentially have is a "cheese gel" which will stiffen as it cools, but it can easily be reheated to regain its smooth consistency.

When you've completed the emulsion, you can add other ingredients to jazz it up a bit –some dried spices or chopped jalapeños, for example– before pouring it over things or using it as a dip. Do note that if you're pouring it over nachos, it's best to have heated the chips first.

Another great use for your cheese gel is to pour it out, while hot, onto a baking sheet and let cool. Then you can cut it into squares for that perfect melt needed for the perfect cheeseburger.

23 November, 2014 06:00PM

Ubuntu Podcast from the UK LoCo: S07E34 – The One with Unagi

We’re back with Season Seven, Episode Thirty-four of the Ubuntu Podcast! Just Laura Cowen and Mark Johnson here again.

In this week’s show:

  • We discuss the Ind.ie crowdsourcing campaign.

  • We also discuss:

  • We share some Command Line Lurve (from ionagogo) which finds live streams on a page. It’s great for watching online feeds without Flash. Just point it at a web page and it finds all the streams. Run with “best” (or a specific stream type) and it launches your video player such as VLC:
    livestreamer
    
  • And we read your feedback. Thanks for sending it in!

We’ll be back next week, so please send your comments and suggestions to: podcast@ubuntu-uk.org
Join us on IRC in #uupc on Freenode
Leave a voicemail via phone: +44 (0) 203 298 1600, sip: podcast@sip.ubuntu-uk.org and skype: ubuntuukpodcast
Follow us on Twitter
Find our Facebook Fan Page
Follow us on Google+

23 November, 2014 02:30PM

Jonathan Riddell: Junior Job: Breeze Icon theme for LibreOffice

Here’s a nice project if you’re bored and wanting to help make a very visual difference to KDE, port the Breeze icon theme to LibreOffice.

Wiki page up at https://community.kde.org/KDE_Visual_Design_Group/LibreOffice_Breeze

All help welcome

Open, Save and PDF icons are Breeze; all the rest are still to go.

 


23 November, 2014 02:23PM

Jonathan Riddell: KDE Promo Idea

Image: "we strongly suggest using KDE this Christmas"

New seasonal KDE marketing campaign.  Use Kubuntu to get off the naughty list.

 


23 November, 2014 12:11PM

Charles Profitt: Custom Wallpaper

I recently upgraded to Ubuntu 14.10 and wanted to adorn my desktop with some new wallpapers. Usually, I find several suitable wallpapers on the web, but this time I did not. I then decided to make my own and wanted to share the results. All the following wallpapers were put together using GIMP.

Plain Hex Template

Hex Template Two

Hex With Dwarf

Hex Dragon


23 November, 2014 04:34AM

Stephen Michael Kellat: Checking Links Post-Snow

There may have been a ton of snowfall in the Lake Erie shore region susceptible to "Lake Effect" over the past week. We have had some warming up.

Thankfully we haven't had infrastructure failures. There had been some fears of that. A week has come to a close and a new one is to begin.

23 November, 2014 12:00AM

November 22, 2014

hackergotchi for Blankon developers

Blankon developers

Sokhibi: Review of the DELL Inspiron N4030 Laptop

Istana Media has reviewed Linux-supporting hardware several times now. So far most of the hardware we have reviewed has been motherboards, since those are the items we own the most of. Several comments on social networks have asked for laptop and netbook reviews, so we are trying to fulfil that request despite the limited hardware we have.

22 November, 2014 11:12PM by Istana Media (noreply@blogger.com)

hackergotchi for Ubuntu developers

Ubuntu developers

Bryan Quigley: Would you crowdfund a $500 Ubuntu “open to the core” laptop?

UPDATE - I’ve removed the silly US restriction.  I know there are more options in Europe, China, India, etc, but why shouldn’t you get access to the “open to the core” laptop!
This would definitely come with at least 3 USB ports (and at least one USB 3.0 port).

Since Jolla had success with crowdfunding a tablet, it’s a good time to see if we can get some mid-range Ubuntu laptops for sale to consumers in as many places as possible.  I’d like to get some ideas about whether there is enough demand for a very open $500 Ubuntu laptop.

Would you crowdfund this? (Core Goals)

  • 15″ 1080p Matte Screen
  • 720p Webcam with microphone
  • Spill-resistant and nice to type on keyboard
  • Intel i3+ or AMD A6+
  • Built-in Intel or AMD graphics with no proprietary firmware
  • 4 GB Ram
  • 128 GB SSD (this would be the one component that might have to be proprietary as I’m not aware of another option)
  • Ethernet 10/100/1000
  • Wireless up to N
  • HDMI
  • SD card reader
  • CoreBoot (No proprietary BIOS)
  • Ubuntu 14.04 preloaded of course
  • Agreement with manufacturer to continue selling this laptop (or similar one) with Ubuntu preloaded to consumers for at least 3 years.

Stretch Goals? Or should they be core goals?

Will only be added if they don’t push the cost up significantly (or if everyone really wants them) and can be done with 100% open source software/firmware.

  • Touchscreen
  • Convertible to Tablet
  • GPS
  • FM Tuner (and built-in antenna)
  • Digital TV Tuner (and built-in antenna)
  • Ruggedized
  • Direct sunlight readable screen
  • “Frontlight” tech.  (think Amazon PaperWhite)
  • Bluetooth
  • Backlit keyboard
  • USB Power Adapter

Take my quick survey if you want to see this happen.  If at least 1000 people say “Yes,” I’ll approach manufacturers.   The first version might just end up being a Chromebook modified with better specs, but I think that would be fine.

Link to survey – http://goo.gl/forms/bwmBf92O1d

22 November, 2014 09:37PM

Jonathan Riddell: Blog Moved

KDE Project:

I've moved my developer blog to my vanity domain jriddell.org, which has hosted my personal blog since 1999 (before the word "blog" existed). The tags used for the developer feeds are Planet KDE and Planet Ubuntu.

Sorry no DCOP news on jriddell.org.

22 November, 2014 03:21PM

Rafael Carreras: Release party in Barcelona


Once again (and there have been 16 of them), ubuntaires celebrated the release party of the latest Ubuntu version, in this case 14.10 Utopic Unicorn.

This time we went to Barcelona, to the Raval neighbourhood right in the city centre, thanks to our friends at the TEB.

As always, we started by explaining what Ubuntu is and how our Catalan LoCo Team works, and later Núria Alonso from the TEB explained the Ubuntu migration carried out at the Xarxa Òmnia.


The installations room was full from the very first moment.


There was also a very productive self-learning workshop on how to build an Ubuntu metadistribution.


 

And in another room, there were two Arduino workshops.


 

And, of course, ubuntaires love to eat well.

 


 

Pictures by Martina Mayrhofer and Walter García, all rights reserved.

 
 

22 November, 2014 02:32PM

Jonathan Riddell: Blog Move, Bug Squashing Party in Munich

Welcome to my blog on the updated jriddell.org, now featuring my personal blog (which has existed for about 15 years or at least before the word blog existed) together with my developer blog previously on blogs.kde.org.

I’m at the Bug Squashing Party in Munich, the home of KDE and Plasma and Kubuntu rollouts in the public sector. There’s a bunch of Kubuntu people here too as well as folks from Debian, KDE PIM and LibreOffice.

So far Christian and Aaron (yes that Aaron) have presented their idea for re-writing Akonadi.

And I’ve sat down with the guys from LibreOffice and worked out why Qt4 themeing isn’t working under Plasma 5, I’m about to submit my first bugfix to Libreoffice! Next step Breeze icon theme then Qt 5 support, scary.

Kubuntu People
It can only be Harald
Akonadi: Lots of Bad
Let's re-write Akonadi!


22 November, 2014 12:26PM

Valorie Zimmerman: The Community Working Group needs you?

Hi folks,

Our Community Working Group has dwindled a bit, and some of our members have work that keeps them away from doing CWG work. So it is time to put out another call for volunteers.

The KDE community is growing, which is wonderful. In spite of that growth, we have less "police" type work to do these days. This leaves us more time to make positive efforts to keep the community healthy, and foster dialog and creativity within our teams.

One thing I've noticed is that listowners, IRC channel operators and forum moderators are doing an excellent job of keeping our communication channels friendly, welcoming and all-around helpful. Each of these leadership roles is crucial to keeping the community healthy.

Also, the effort to create the KDE Manifesto has aligned KDE infrastructure directly and consciously with community values. The commitments section is particularly helpful.

Please write us at Community-wg@kde.org if you would like to become a part of our community gardening work.




22 November, 2014 05:35AM by Valorie Zimmerman (noreply@blogger.com)

Joe Liau: Documenting the Death of the Dumb Telephone – Part 5: Touch-heavy

 

"U can't touch this" Source

“U can’t touch this”[4] Source

“Touch-a touch-a touch-a touch me. I wanna be dirty.”[1] — Love, Your Dumb Phone

It’s not a problem with a dirty touch screen; that would be a stretch for an entire post. It’s a problem with the dirty power[2]: perhaps an even farther stretch. But, “I’m cold on a mission, so pull on back,”[4] and stretch yourself for a moment because your phone won’t stretch for you.

We’re constantly trying to stretch the battery life of our phones, but the phones keep demanding to be touched, which drains the battery. Phones have this “dirty power” over us, but maybe there are also some “spikes” in the power management of these dumb devices. The greatest feature is also the greatest flaw in the device. It is the fact that it has to be touched in order to react. Does it even react in the most effective way? What indication is there to let you know how the phone has been touched? Do the phone reduce the amount of touches in order so save battery power? If it is not smart enough to do so, then maybe it shouldn’t have a touch screen at all!

Auto-brightness. “Can’t touch this.”[4]
Lock screen. “Can’t touch this.”[4]
Phone clock. “Can’t touch this.”[4]

Yes, your phone has these things, but they never seem to work at the right time. Never mind that I have to turn on the screen to check the time. These things currently seem to follow one set of rules instead of knowing when to activate. So when you “move slide your rump,”[4] you still end up with the infamous butt dial, and the “Dammit, Janet![1] My battery is about to die” situation.

There are already developments in these areas, which indicate that the dumb phone is truly on its last legs. “So wave your hands in the air.”[4] But, seriously, let’s reduce the number of touches, “get your face off the screen”[3] and live your life.

“Stop. Hammer time!”[4]


[1] Song by Richard O’Brien
[2] Fartbarf is fun.
[3] Randall Ross, Community Leadership Summit 2014
[4] Excessively touched on “U Can’t Touch This” by MC Hammer

22 November, 2014 03:36AM

Elizabeth K. Joseph: My Vivid Vervet has crazy hair

Keeping with my Ubuntu toy tradition, I placed an order for a vervet stuffed toy, available in the US via: Miguel the Vervet Monkey.

He arrived today!

He’ll be coming along to his first Ubuntu event on December 10th, a San Francisco Ubuntu Hour.

22 November, 2014 02:57AM