July 07, 2015

Lunar

Reproducible builds: week 10 in Stretch cycle

What happened in the reproducible builds effort this week:

Media coverage

Daniel Stender published in Admin Magazine an English translation of the article that originally appeared in Linux Magazin.

Toolchain fixes

Fixes landed in the Debian archive:

  • Lunar uploaded docbook-to-man/1:2.0.0-34 which removes a timestamp in generated manpages. Original patch by Chris Lamb.
  • Stefano Rivera uploaded dh-python/1.20150628-1 which now sorts namespace files. Original patch by Chris Lamb.
  • Christian Hofstaedtler uploaded ruby2.2/2.2.2-2 which now uses UTC for the dates in gemspec files. Original patch by Chris Lamb.

Lunar submitted to Debian the patch adding a --clamp-mtime option to tar that had already been sent upstream.

Patches have been submitted to add support for SOURCE_DATE_EPOCH to txt2man (Reiner Herrmann), epydoc (Reiner Herrmann), GCC (Dhole), and Doxygen (akira).

Dhole uploaded a new experimental debhelper to the reproducible repository which exports SOURCE_DATE_EPOCH. As part of the experiment, the patch also sets TZ to UTC, which should help with most timezone issues. It might still be problematic for packages that change their behaviour based on the timezone.
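
Honouring this convention in a tool is straightforward. Here is a minimal sketch in Python of the pattern such patches follow (the function name and output format are illustrative, not taken from any of the patches above): read SOURCE_DATE_EPOCH if it is set, fall back to the current time otherwise, and derive every embedded timestamp from the result.

import os
import time

def build_timestamp():
    # SOURCE_DATE_EPOCH holds seconds since the Unix epoch as a decimal
    # string, exported by the build environment (for example by the
    # experimental debhelper mentioned above).
    epoch = os.environ.get('SOURCE_DATE_EPOCH')
    if epoch is not None:
        return int(epoch)
    # Normal, non-reproducible builds keep using the current time.
    return int(time.time())

# Emit a deterministic date string into generated output, always in UTC.
stamp = time.strftime('%Y-%m-%d %H:%M:%S', time.gmtime(build_timestamp()))
print('Generated on %s UTC' % stamp)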

Mattia Rizzolo sent upstream a patch originally written by Lunar to make the generate-id() function deterministic in libxslt. While that patch was quickly rejected by upstream, Andrew Ayer came up with a much better one… which sadly could have some performance impact. Daniel Veillard replied with another patch that should be deterministic in most cases without needing extra data structures. Its impact is currently being investigated by retesting packages on reproducible.debian.net.

akira added a new option to sbuild for configuring the path in which packages are built. This will be needed for the srebuild script.

Niko Tyni asked Perl upstream about its use of the __DATE__ and __TIME__ C preprocessor macros.

Packages fixed

The following 143 packages became reproducible due to changes in their build dependencies: alot, argvalidate, astroquery, blender, bpython, brian, calibre, cfourcc, chaussette, checkbox-ng, cloc, configshell, daisy-player, dipy, dnsruby, dput-ng, dsc-statistics, eliom, emacspeak, freeipmi, geant321, gpick, grapefruit, heat-cfntools, imagetooth, jansson, jmapviewer, lava-tool, libhtml-lint-perl, libtime-y2038-perl, lift, lua-ldoc, luarocks, mailman-api, matroxset, maven-hpi-plugin, mknbi, mpi4py, mpmath, msnlib, munkres, musicbrainzngs, nova, pecomato, pgrouting, pngcheck, powerline, profitbricks-client, pyepr, pylibssh2, pylogsparser, pystemmer, pytest, python-amqp, python-apt, python-carrot, python-crypto, python-darts.lib.utils.lru, python-demgengeo, python-graph, python-mock, python-musicbrainz2, python-pathtools, python-pskc, python-psutil, python-pypump, python-repoze.sphinx.autointerface, python-repoze.tm2, python-repoze.what-plugins, python-repoze.what, python-repoze.who-plugins, python-xstatic-term.js, reclass, resource-agents, rgain, rttool, ruby-aggregate, ruby-archive-tar-minitar, ruby-bcat, ruby-blankslate, ruby-coffee-script, ruby-colored, ruby-dbd-mysql, ruby-dbd-odbc, ruby-dbd-pg, ruby-dbd-sqlite3, ruby-dbi, ruby-dirty-memoize, ruby-encryptor, ruby-erubis, ruby-fast-xs, ruby-fusefs, ruby-gd, ruby-git, ruby-globalhotkeys, ruby-god, ruby-hike, ruby-hmac, ruby-integration, ruby-ipaddress, ruby-jnunemaker-matchy, ruby-memoize, ruby-merb-core, ruby-merb-haml, ruby-merb-helpers, ruby-metaid, ruby-mina, ruby-net-irc, ruby-net-netrc, ruby-odbc, ruby-packet, ruby-parseconfig, ruby-platform, ruby-plist, ruby-popen4, ruby-rchardet, ruby-romkan, ruby-rubyforge, ruby-rubytorrent, ruby-samuel, ruby-shoulda-matchers, ruby-sourcify, ruby-test-spec, ruby-validatable, ruby-wirble, ruby-xml-simple, ruby-zoom, ryu, simplejson, spamassassin-heatu, speaklater, stompserver, syncevolution, syncmaildir, thin, ticgit, tox, transmissionrpc, vdr-plugin-xine, waitress, whereami, xlsx2csv, zathura.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues but not all of them:

  • cdo/1.6.6+dfsg.1-1 uploaded by Alastair McKinstry.
  • dsdo/1.6.36-6 by Agustin Martin Domingo.
  • liboro-java/2.0.8a-11 by Emmanuel Bourg.
  • simgrid/3.11.1-10 uploaded by Martin Quinson, original patch by akira.

Patches submitted which have not made their way to the archive yet:

reproducible.debian.net

A new package set for the X Strike Force has been added. (h01ger)

Bugs tagged with “locale” are now visible in the statistics. (h01ger)

Some work has been done to add tests for NetBSD. (h01ger)

Many changes by Mattia Rizzolo have been merged across the whole infrastructure:

  • IRC notifications when a known reproducible package stops building successfully.
  • Packages marked ftbfs_due_to_obsolete_dependencies now appear as FTBFS on the package tracker.
  • When listing packages affected by an issue, packages without bugs are grouped together, making it easier to spot the ones that require work.
  • Both build logs are now saved separately. A diff between the two files is available.
  • The text output of debbindiff is now available as well for easier search and reports.
  • Build logs and debbindiff output are now stored compressed with gzip.
  • The builder used for a given test is now recorded in the database.
  • The manual scheduler available from Alioth gained new options:
    • -i/--issues: schedule all packages affected by the given issue.
    • -r/--status: schedule all packages with the given status.
    • -b/--before: schedule all packages built before the given date.
    • -t/--after: schedule all packages built after the given date.
    • --noisy: notify the IRC channel also when the build starts, with a URL to watch it in real time.

debbindiff development

Version 26 was released on June 28th, fixing the comparison of files of unknown format. (Lunar)

A missing dependency in python-rpm, which affected debbindiff installations without recommended packages, was promptly fixed by Michal Čihař.

Lunar also started a massive code rearchitecture to enhance code reuse and enable new features. Nothing visible yet, though.

Documentation update

josch and Mattia Rizzolo documented how to reschedule packages from Alioth.

Package reviews

142 obsolete reviews have been removed, 344 added and 107 updated this week.

Chris West (Faux) filed 13 new bugs for packages failing to build from source.

The following new issues have been added: snapshot_placeholder_replaced_with_timestamp_in_pom_properties, different_encoding, timestamps_in_documentation_generated_by_org_mode and timestamps_in_pdf_generated_by_matplotlib.

07 July, 2015 01:46PM

Petter Reinholdtsen

MPEG LA on "Internet Broadcast AVC Video" licensing and non-private use

After asking the Norwegian Broadcasting Company (NRK) why they can broadcast and stream H.264 video without an agreement with the MPEG LA, I was wiser, but still confused. So I asked MPEG LA if their understanding matched that of NRK. As far as I can tell, it does not.

I started by asking for more information about the various licensing classes and what exactly is covered by the "Internet Broadcast AVC Video" class that NRK pointed me at to explain why NRK did not need a license for streaming H.264 video:

According to an MPEG LA press release dated 2010-02-02, there is no charge when using MPEG AVC/H.264 according to the terms of "Internet Broadcast AVC Video". I am trying to understand exactly what the terms of "Internet Broadcast AVC Video" are, and wondered if you could help me. What exactly is covered by these terms, and what is not?

The only source of more information I have been able to find is a PDF named AVC Patent Portfolio License Briefing, which states this about the fees:

  • Where End User pays for AVC Video
    • Subscription (not limited by title) – 100,000 or fewer subscribers/yr = no royalty; >100,000 to 250,000 subscribers/yr = $25,000; >250,000 to 500,000 subscribers/yr = $50,000; >500,000 to 1M subscribers/yr = $75,000; >1M subscribers/yr = $100,000
    • Title-by-Title - 12 minutes or less = no royalty; >12 minutes in length = lower of (a) 2% or (b) $0.02 per title
  • Where remuneration is from other sources
    • Free Television - (a) one-time $2,500 per transmission encoder or (b) annual fee starting at $2,500 for > 100,000 HH rising to maximum $10,000 for >1,000,000 HH
    • Internet Broadcast AVC Video (not title-by-title, not subscription) – no royalty for life of the AVC Patent Portfolio License

Am I correct in assuming that the four categories listed are the categories used when selecting licensing terms, and that "Internet Broadcast AVC Video" is the category for things that do not fall into one of the other three categories? Can you point me to a good source explaining what is meant by "title-by-title" and "Free Television" in the license terms for AVC/H.264?

Will a web service providing H.264 encoded video content in a "video on demand" fashion similar to Youtube and Vimeo, where no subscription is required and no payment is required from end users to get access to the videos, fall under the terms of the "Internet Broadcast AVC Video", i.e. no royalty for life of the AVC Patent Portfolio license? Does it matter if some users are subscribed to get access to personalized services?

Note, this request and all answers will be published on the Internet.

The answer came quickly from Benjamin J. Myers, Licensing Associate with the MPEG LA:

Thank you for your message and for your interest in MPEG LA. We appreciate hearing from you and I will be happy to assist you.

As you are aware, MPEG LA offers our AVC Patent Portfolio License which provides coverage under patents that are essential for use of the AVC/H.264 Standard (MPEG-4 Part 10). Specifically, coverage is provided for end products and video content that make use of AVC/H.264 technology. Accordingly, the party offering such end products and video to End Users concludes the AVC License and is responsible for paying the applicable royalties.

Regarding Internet Broadcast AVC Video, the AVC License generally defines such content to be video that is distributed to End Users over the Internet free-of-charge. Therefore, if a party offers a service which allows users to upload AVC/H.264 video to its website, and such AVC Video is delivered to End Users for free, then such video would receive coverage under the sublicense for Internet Broadcast AVC Video, which is not subject to any royalties for the life of the AVC License. This would also apply in the scenario where a user creates a free online account in order to receive a customized offering of free AVC Video content. In other words, as long as the End User is given access to or views AVC Video content at no cost to the End User, then no royalties would be payable under our AVC License.

On the other hand, if End Users pay for access to AVC Video for a specific period of time (e.g., one month, one year, etc.), then such video would constitute Subscription AVC Video. In cases where AVC Video is delivered to End Users on a pay-per-view basis, then such content would constitute Title-by-Title AVC Video. If a party offers Subscription or Title-by-Title AVC Video to End Users, then they would be responsible for paying the applicable royalties you noted below.

Finally, in the case where AVC Video is distributed for free through an "over-the-air, satellite and/or cable transmission", then such content would constitute Free Television AVC Video and would be subject to the applicable royalties.

For your reference, I have attached a .pdf copy of the AVC License. You will find the relevant sublicense information regarding AVC Video in Sections 2.2 through 2.5, and the corresponding royalties in Section 3.1.2 through 3.1.4. You will also find the definitions of Title-by-Title AVC Video, Subscription AVC Video, Free Television AVC Video, and Internet Broadcast AVC Video in Section 1 of the License. Please note that the electronic copy is provided for informational purposes only and cannot be used for execution.

I hope the above information is helpful. If you have additional questions or need further assistance with the AVC License, please feel free to contact me directly.

Having a fresh copy of the license text was useful, and knowing that the definition of Title-by-Title required payment per title made me aware that my earlier understanding of that phrase had been wrong. But I still had a few questions:

I have a small followup question. Would it be possible for me to get a license with MPEG LA even if there are no royalties to be paid? The reason I ask, is that some video related products have a copyright clause limiting their use without a license with MPEG LA. The clauses typically look similar to this:

This product is licensed under the AVC patent portfolio license for the personal and non-commercial use of a consumer to (a) encode video in compliance with the AVC standard ("AVC video") and/or (b) decode AVC video that was encoded by a consumer engaged in a personal and non-commercial activity and/or AVC video that was obtained from a video provider licensed to provide AVC video. No license is granted or shall be implied for any other use. Additional information may be obtained from MPEG LA L.L.C.

It is unclear to me if this clause means that I need to enter into an agreement with MPEG LA to use the product in question, even if there are no royalties to be paid to MPEG LA. I suspect it will differ depending on the jurisdiction, and mine is Norway. What is MPEG LA's view on this?

According to the answer, MPEG LA believes that those using such tools for non-personal or commercial use need a license with them:

With regard to the Notice to Customers, I would like to begin by clarifying that the Notice from Section 7.1 of the AVC License reads:

THIS PRODUCT IS LICENSED UNDER THE AVC PATENT PORTFOLIO LICENSE FOR THE PERSONAL USE OF A CONSUMER OR OTHER USES IN WHICH IT DOES NOT RECEIVE REMUNERATION TO (i) ENCODE VIDEO IN COMPLIANCE WITH THE AVC STANDARD ("AVC VIDEO") AND/OR (ii) DECODE AVC VIDEO THAT WAS ENCODED BY A CONSUMER ENGAGED IN A PERSONAL ACTIVITY AND/OR WAS OBTAINED FROM A VIDEO PROVIDER LICENSED TO PROVIDE AVC VIDEO. NO LICENSE IS GRANTED OR SHALL BE IMPLIED FOR ANY OTHER USE. ADDITIONAL INFORMATION MAY BE OBTAINED FROM MPEG LA, L.L.C. SEE HTTP://WWW.MPEGLA.COM

The Notice to Customers is intended to inform End Users of the personal usage rights (for example, to watch video content) included with the product they purchased, and to encourage any party using the product for commercial purposes to contact MPEG LA in order to become licensed for such use (for example, when they use an AVC Product to deliver Title-by-Title, Subscription, Free Television or Internet Broadcast AVC Video to End Users, or to re-Sell a third party's AVC Product as their own branded AVC Product).

Therefore, if a party is to be licensed for its use of an AVC Product to Sell AVC Video on a Title-by-Title, Subscription, Free Television or Internet Broadcast basis, that party would need to conclude the AVC License, even in the case where no royalties were payable under the License. On the other hand, if that party (either a Consumer or business customer) simply uses an AVC Product for their own internal purposes and not for the commercial purposes referenced above, then such use would be included in the royalty paid for the AVC Products by the licensed supplier.

Finally, I note that our AVC License provides worldwide coverage in countries that have AVC Patent Portfolio Patents, including Norway.

I hope this clarification is helpful. If I may be of any further assistance, just let me know.

The mention of Norwegian patents left me a bit confused, so I asked for more information:

But one minor question at the end. If I understand you correctly, you state in the quote above that there are patents in the AVC Patent Portfolio that are valid in Norway. This made me believe I had read the list available from <URL: http://www.mpegla.com/main/programs/AVC/Pages/PatentList.aspx > incorrectly, as I believed the "NO" prefix in front of patents marked Norwegian patents, and the only one I could find under Mitsubishi Electric Corporation expired in 2012. Which patents are you referring to that are relevant for Norway?

Again, the quick answer explained how to read the list of patents in that list:

Your understanding is correct that the last AVC Patent Portfolio Patent in Norway expired on 21 October 2012. Therefore, where AVC Video is both made and Sold in Norway after that date, then no royalties would be payable for such AVC Video under the AVC License. With that said, our AVC License provides historic coverage for AVC Products and AVC Video that may have been manufactured or Sold before the last Norwegian AVC patent expired. I would also like to clarify that coverage is provided for the country of manufacture and the country of Sale that has active AVC Patent Portfolio Patents.

Therefore, if a party offers AVC Products or AVC Video for Sale in a country with active AVC Patent Portfolio Patents (for example, Sweden, Denmark, Finland, etc.), then that party would still need coverage under the AVC License even if such products or video are initially made in a country without active AVC Patent Portfolio Patents (for example, Norway). Similarly, a party would need to conclude the AVC License if they make AVC Products or AVC Video in a country with active AVC Patent Portfolio Patents, but eventually Sell such AVC Products or AVC Video in a country without active AVC Patent Portfolio Patents.

As far as I understand it, MPEG LA believes anyone using Adobe Premiere and other video related software with a H.264 distribution license needs a license agreement with MPEG LA to use such tools for anything non-private or commercial, while it is OK to set up a Youtube-like service as long as no-one pays to get access to the content. I still have no clear idea how this applies to Norway, where none of the patents MPEG LA is licensing are valid. Will the copyright terms take precedence, or can those terms be ignored because the patents are not valid in Norway?

07 July, 2015 07:50AM

Russ Allbery

INN 2.6.0 release candidate

In more INN-related news (and catching up on my substantial backlog), a second release candidate for the INN 2.6.0 release is now available. (The first one was only circulated on the inn-workers mailing list.)

INN 2.6.0 is the next major release of INN, with lots of improvements to the build system, protocol support, and the test suite, among many other things. Changes have been accumulating slowly for quite some time.

There are a lot of changes, so I won't go into all the details here. If you're curious, take a look at the NEWS file. You can get the release candidate from ftp.isc.org. (Note that this link will be removed once INN 2.6.0 is released.)

As always, thanks to Julien ÉLIE for preparing this release and doing most of the maintenance work on INN!

For more information about INN, see the official ISC download page or my personal INN pages. The latter also has links to the full changelog and the other INN documentation.

07 July, 2015 03:23AM

Matthew Palmer

It's 10pm, do you know where your SSL certificates are?

The Internet is going encrypted. Revelations of mass surveillance of Internet traffic have given the Internet community the motivation to roll out encrypted services – the biggest of which is undoubtedly HTTP.

The weak point, though, is SSL Certification Authorities. These are “trusted third parties” who are supposed to validate that a person requesting a certificate for a domain is authorised to have a certificate for that domain. It is no secret that these companies have failed to do the job entrusted to them, again, and again, and again. Oh, and another one.

However, at this point, doing away with CAs and finding some other mechanism isn’t feasible. There is no clear alternative, and the inertia in the current system is overwhelming, to the point where it would take a decade or more to migrate away from the CA-backed SSL certificate ecosystem, even if there was something that was widely acknowledged to be superior in every possible way.

This is where Certificate Transparency comes in. This protocol, which works as part of the existing CA ecosystem, requires CAs to publish every certificate they issue, in order for the certificate to be considered “valid” by browsers and other user agents. While it doesn’t guarantee to prevent misissuance, it does mean that a CA can’t cover up or try to minimise the impact of a breach or other screwup – their actions are fully public, for everyone to see.

Much of Certificate Transparency's power, however, is diminished if nobody is looking at the certificates which are being published. That is why I have launched sslaware.com, a site for searching the database of logged certificates. At present it is rather minimalist; however, I intend to add more features, such as real-time notifications (if a new cert for your domain or organisation is logged, you'll get an e-mail about it) and more advanced searching capabilities.
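
For a flavour of what such a monitor does behind the scenes, here is a minimal sketch querying a public log over the HTTP API defined by RFC 6962 (the log URL is just a well-known example; this is not sslaware.com's actual code):

import json
from urllib.request import urlopen

LOG = 'https://ct.googleapis.com/pilot'  # example log; any RFC 6962 log works

def get_sth(log):
    # get-sth returns the log's latest signed tree head: how many
    # certificates it contains, a timestamp, the Merkle tree root hash
    # and a signature binding them all together.
    with urlopen(log + '/ct/v1/get-sth') as resp:
        return json.loads(resp.read().decode())

sth = get_sth(LOG)
print('%d certificates logged, root hash %s'
      % (sth['tree_size'], sth['sha256_root_hash']))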

If you care about the security of your website, you should check out SSL Aware and see what certificates have been issued for your site. You may be unpleasantly surprised.

07 July, 2015 12:00AM by Matt Palmer (mpalmer@hezmatt.org)

July 06, 2015

Matthew Garrett

Anti Evil Maid 2 Turbo Edition

The Evil Maid attack has been discussed for some time - in short, it's the idea that most security mechanisms on your laptop can be subverted if an attacker is able to gain physical access to your system (for instance, by pretending to be the maid in a hotel). Most disk encryption systems will fall prey to the attacker replacing the initial boot code of your system with something that records and then exfiltrates your decryption passphrase the next time you type it, at which point the attacker can simply steal your laptop the next day and get hold of all your data.

There are a couple of ways to protect against this, and they both involve the TPM. Trusted Platform Modules are small cryptographic devices on the system motherboard[1]. They have a bunch of Platform Configuration Registers (PCRs) that are cleared on power cycle but otherwise have slightly strange write semantics - attempting to write a new value to a PCR will append the new value to the existing value, take the SHA-1 of that and then store this SHA-1 in the register. During a normal boot, each stage of the boot process will take a SHA-1 of the next stage of the boot process and push that into the TPM, a process called "measurement". Each component is measured into a separate PCR - PCR0 contains the SHA-1 of the firmware itself, PCR1 contains the SHA-1 of the firmware configuration, PCR2 contains the SHA-1 of any option ROMs, PCR5 contains the SHA-1 of the bootloader and so on.

If any component is modified, the previous component will come up with a different measurement and the PCR value will be different. Because you can't directly modify PCR values[2], this modified code will only be able to set the PCR back to the "correct" value if it's able to generate a sequence of writes that will hash back to that value. SHA-1 isn't yet sufficiently broken for that to be practical, so we can probably ignore that. The neat bit here is that you can then use the TPM to encrypt small quantities of data[3] and ask it to only decrypt that data if the PCR values match. If you change the PCR values (by modifying the firmware, bootloader, kernel and so on), the TPM will refuse to decrypt the material.
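
The extend semantics are simple enough to model in a few lines. A sketch of the behaviour just described, purely as an illustration (real measurements are performed by the TPM itself, not by userspace code):

import hashlib

def extend(pcr, measurement):
    # A PCR "write" appends the new value to the current one and stores
    # the SHA-1 of the concatenation, so the register accumulates an
    # order-dependent digest of everything measured into it.
    return hashlib.sha1(pcr + measurement).digest()

pcr = b'\x00' * 20  # PCRs are cleared on power cycle
for stage in (b'firmware', b'bootloader', b'kernel'):
    pcr = extend(pcr, hashlib.sha1(stage).digest())
print(pcr.hex())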

BitLocker uses this to encrypt the disk encryption key with the TPM. If the boot process has been tampered with, the TPM will refuse to hand over the key and your disk remains encrypted. This is an effective technical mechanism for protecting against people taking images of your hard drive, but it does have one fairly significant issue - in the default mode, your disk is decrypted automatically. You can add a password, but the obvious attack is then to modify the boot process such that a fake password prompt is presented and the malware exfiltrates the data. The TPM won't hand over the secret, so the malware flashes up a message saying that the system must be rebooted in order to finish installing updates, removes itself and leaves anyone except the most paranoid of users with the impression that nothing bad just happened. It's an improvement over the state of the art, but it's not a perfect one.

Joanna Rutkowska came up with the idea of Anti Evil Maid. This can take two slightly different forms. In both, a secret phrase is generated and encrypted with the TPM. In the first form, this is then stored on a USB stick. If the user suspects that their system has been tampered with, they boot from the USB stick. If the PCR values are good, the secret will be successfully decrypted and printed on the screen. The user verifies that the secret phrase is correct and reboots, satisfied that their system hasn't been tampered with. The downside to this approach is that most boots will not perform this verification, and so you rely on the user being able to make a reasonable judgement about whether it's necessary on a specific boot.

The second approach is to do this on every boot. The obvious problem here is that an attacker can simply boot your system, copy down the secret, modify your system and then print the correct secret. To avoid this, the TPM can have a password set. If the user fails to enter the correct password, the TPM will refuse to decrypt the data. This can be attacked in a similar way to BitLocker, but can be avoided with sufficient training: if the system reboots without the user seeing the secret, the user must assume that their system has been compromised and that an attacker now has a copy of their TPM password.

This isn't entirely great from a usability perspective. I think I've come up with something slightly nicer, and certainly more Web 2.0[4]. Anti Evil Maid relies on having a static secret because expecting a user to remember a dynamic one is pretty unreasonable. But most security conscious people rely on dynamic secret generation daily - it's the basis of most two factor authentication systems. TOTP is an algorithm that takes a seed, the time of day and some reasonably clever calculations and comes up with (usually) a six digit number. The secret is known by the device that you're authenticating against, and also by some other device that you possess (typically a phone). You type in the value that your phone gives you, the remote site confirms that it's the value it expected and you've just proven that you possess the secret. Because the secret depends on the time of day, someone copying that value won't be able to use it later.
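
For reference, the TOTP computation itself (RFC 6238, built on RFC 4226's HOTP) fits in a dozen lines. This sketch uses a made-up seed and is not taken from the prototype linked below:

import hashlib
import hmac
import struct
import time

def totp(secret, digits=6, step=30):
    # The moving factor is the number of 30-second intervals since the
    # Unix epoch, packed as a big-endian 64-bit counter (RFC 6238).
    counter = struct.pack('>Q', int(time.time()) // step)
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): the low nibble of the final byte
    # selects a 4-byte window, which is masked to 31 bits.
    offset = mac[-1] & 0x0f
    code = struct.unpack('>I', mac[offset:offset + 4])[0] & 0x7fffffff
    return '%0*d' % (digits, code % 10 ** digits)

print(totp(b'seed-shared-with-your-phone'))  # made-up example seed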

But instead of using your phone to identify yourself to a remote computer, we can use the same technique to ensure that your computer possesses the same secret as your phone. If the PCR states are valid, the computer will be able to decrypt the TOTP secret and calculate the current value. This can then be printed on the screen and the user can compare it against their phone. If the values match, the PCR values are valid. If not, the system has been compromised. Because the value changes over time, merely booting your computer gives your attacker nothing - printing an old value won't fool the user[5]. This allows verification to be a normal part of every boot, without forcing the user to type in an additional password.

I've written a prototype implementation of this and uploaded it here. Do pay attention to the list of limitations - without a bootloader that measures your kernel and initrd, you're still open to compromise. Adding TPM support to grub is on my list of things to do. There are also various potential issues like an attacker being able to use external DMA-capable devices to obtain the secret, especially since most Linux distributions still ship kernels that don't enable the IOMMU by default. And, of course, if your firmware is inherently untrustworthy there are multiple ways it can subvert all of this. So treat this very much like a research project rather than something you can depend on right now. There's a fair amount of work to do to turn this into a meaningful improvement in security.

[1] I wrote about them in more detail here, including a discussion of whether they can be used for general purpose DRM (answer: not really)

[2] In theory, anyway. In practice, TPMs are embedded devices running their own firmware, so who knows what bugs they're hiding.

[3] On the order of 128 bytes or so. If you want to encrypt larger things with a TPM, the usual way to do it is to generate an AES key, encrypt your material with that and then encrypt the AES key with the TPM.

[4] Is that even a thing these days? What do we say instead?

[5] Assuming that the user is sufficiently diligent in checking the value, anyway

06 July, 2015 05:39PM

Internet abuse culture is a tech industry problem

After Jesse Frazelle blogged about the online abuse she receives, a common reaction in various forums[1] was "This isn't a tech industry problem - this is what being on the internet is like"[2]. And yes, they're right. Abuse of women on the internet isn't limited to people in the tech industry. But the severity of a problem is a product of two separate factors: its prevalence and what impact it has on people.

Much of the modern tech industry relies on our ability to work with people outside our company. It relies on us interacting with a broader community of contributors, people from a range of backgrounds, people who may be upstream on a project we use, people who may be employed by competitors, people who may be spending their spare time on this. It means listening to your users, hearing their concerns, responding to their feedback. And, distressingly, there's significant overlap between that wider community and the people engaging in the abuse. This abuse is often partly technical in nature. It demonstrates understanding of the subject matter. Sometimes it can be directly tied back to people actively involved in related fields. It's from people who might be at conferences you attend. It's from people who are participating in your mailing lists. It's from people who are reading your blog and using the advice you give in their daily jobs. The abuse is coming from inside the industry.

Cutting yourself off from that community impairs your ability to do work. It restricts meeting people who can help you fix problems that you might not be able to fix yourself. It results in you missing career opportunities. Much of the work being done to combat online abuse relies on protecting the victim, giving them the tools to cut themselves off from the flow of abuse. But that risks restricting their ability to engage in the way they need to in order to do their job. It means missing meaningful feedback. It means passing up speaking opportunities. It means losing out on the community building that goes on at in-person events, the career progression that arises as a result. People are forced to choose between putting up with abuse or compromising their career.

The abuse that women receive on the internet is unacceptable in every case, but we can't ignore the effects of it on our industry simply because it happens elsewhere. The development model we've created over the past couple of decades is just too vulnerable to this kind of disruption, and if we do nothing about it we'll allow a large number of valuable members to be driven away. We owe it to them to make things better.

[1] Including Hacker News, which then decided to flag the story off the front page because masculinity is fragile

[2] Another common reaction was "But men get abused as well", which I'm not even going to dignify with a response

06 July, 2015 05:37PM

Ben Hutchings

Debian LTS work, June 2015

This was my seventh month working on Debian LTS. I was assigned 14.75 hours of work by Freexian's Debian LTS initiative.

p7zip

I did not receive any feedback from upstream for my proposed fix for CVE-2015-1038 mentioned last month, so I went ahead and uploaded it based on my own testing. (I also uploaded the fix to wheezy-security, jessie-security and sid.)

Afterwards, I received a request from upstream for a patch against their latest release (even the version in sid is quite a long way behind that), so I ported the fix forward to that.

linux-2.6

I backported further security fixes, but had to give up on one (CVE-2014-8172, AIO soft lockup) as the fix depends on wide-ranging changes. For CVE-2015-1805 (pipe iovec overrun leading to memory corruption), the upstream fix was also not applicable, but this looked so serious that we needed to fix it anyway. Red Hat had already fixed this in their 2.6.32-based kernel and they didn't have overlapping changes to the pipe implementation, so I was able to extract this fix from their source tarball. I uploaded and issued DLA-246-1.

Unfortunately, I failed to notice that Linux 2.6.32.66 had introduced two regressions that were fixed in 2.6.32.67. While these didn't appear in my testing, one of them did affect several users that were quick to upgrade. I applied the upstream fixes, made a second upload and issued DLA-246-2.

I also triaged the issues that are still unfixed, and I spent some time working on a fix for CVE-2015-1350 (unprivileged chown removes setcap attribute), but I haven't yet completed the backport to 2.6.32 or tested it.

openssl

I looked at OpenSSL, which is still marked as affected by CVE-2015-4000 (encryption downgrade aka Logjam). After discussion with the LTS team I made a note of the current situation, which is that a full fix (rejecting Diffie-Hellman keys shorter than 1024 bits) must wait until more servers have been upgraded.

06 July, 2015 01:03PM

Mike Gabriel

My FLOSS activities in June 2015

June 2015 has been mainly dedicated to these five fields of endeavour:

  • first uploads of MATE 1.10 to Debian experimental (still work in progress)
  • development of nx-libs (3.6.x branch)
  • meeting other nx-libs developers at X2Go: The Gathering 2015 at Linuxhotel in Essen, Germany
  • contributions to Debian and Debian LTS
  • production deployment of Ganeti and Ganeti Manager (a web frontend for Ganeti)

Received Sponsorship

My contributions to the Debian LTS project last month (8h) were again contracted by Freexian [1]. Thanks to Raphael Hertzog for having me on the team. Thanks to all the people and companies sponsoring the Debian LTS Team's work.

Also a big thanks to people from Hetzner GmbH for sponsoring my stay at X2Go: The Gathering 2015 @ Linuxhotel (in Essen, Germany).

MATE 1.10 entering Debian experimental

Together with Martin Wimpress from Ubuntu MATE and other people in the Debian MATE Packaging Team I managed to upload a great portion of the MATE 1.10 packages to Debian experimental.

Please note that this is still work in progress. Not all MATE 1.10 packages have been uploaded yet, and several packages from the MATE 1.10 series in Debian still have grave bugs (mostly packaging and installation issues).

The plan is to make the complete MATE 1.10 stack available in Debian experimental by the end of July and also get all the open kinks fixed by then.

Development nx-libs 3.6.x

In June 2015, I looked at various aspects of nx-libs development.

06 July, 2015 12:54PM by sunweaver

Ritesh Raj Sarraf

Apport Integration with Debian - GSoC Update

For this year's Google Summer of Code, I have been mentoring Yuru Roy Shao on integrating Apport with Debian. Yuru is a CS PhD student at the University of Michigan, USA.

Apport has been packaged for Debian for over two years, but has remained in experimental. While we did have a separate (Debian BTS aware) crashdb, general concerns (bug spam, too many duplicates, etc.) were the reason we held back its inclusion.

With this GSoC, Yuru has been adding some of the missing Debian integration pieces. For example, we are now using debianbts to talk to the Debian BTS server and fetch bug reports for the user.

While Apport's own bug report data collection is very comprehensive, for Debian it will also have the option of using reportbug in addition to the native collector. This will allow us to reuse the many hooks people have integrated with reportbug so far. Both bug report data collectors will be available.

Yuru has blogged about his GSoC progress so far here. Please do have a read and let us know your views. If the travel formalities work out well, I intend to attend DebConf this year and can talk about the work in more detail.

06 July, 2015 07:49AM by Ritesh Raj Sarraf

Russ Allbery

INN 2.5.5

(This release has actually been ready for a while, but there were a few technical difficulties with getting it copied up to the right places, and then I got very distracted by various life stuff.)

This is the first new release of INN in about a year, and hopefully the last in the 2.5.x series. A beta release of INN 2.6.0 will be announced shortly (probably tomorrow).

As is typical for bug-fix releases, this release rolls up a bunch of small bug fixes that have been made over the past year. The most notable changes include new inn.conf parameters to fine-tune the SSL/TLS configuration for nnrpd (generally to tighten it over the OpenSSL defaults), a few new flags to various utilities, multiple improvements to pullnews, and support for properly stopping cnfsstat and innwatch if INN is started and then quickly stopped.

As always, thanks to Julien ÉLIE for preparing this release and doing most of the maintenance work on INN!

You can get the latest version from the official ISC download page or from my personal INN pages. The latter also has links to the full changelog and the other INN documentation.

06 July, 2015 05:13AM

July 05, 2015

Ben Armstrong

BLT Bike Trail – Early Summer 2015

This is one of my regular walking routes, from home to Five Island Lake and back. It’s about 15 km. I usually walk too briskly to capture the many visual delights of this route. Today on the trip out, I stopped and took several photos to share with you.

An early morning walk up the BLT bike trail to Five Island Lake (pictured here) and back.

The walk starts from our subdivision. It’s cool and clear when I leave.

Photos along the way: Saskatoon berries, dew on leaves, pitcher plants, an alder under attack (woolly aphids, maybe?), wild strawberries, daisies, vetch, water lilies, sensitive fern, a squirrel, and Cranberry Lake.

05 July, 2015 08:26PM by Ben Armstrong

Thorsten Alteholz

My Debian Activities in June 2015

FTP assistant

This month I marked 539 packages for accept, rejected 61 of them and had to send 24 emails to maintainers. This is a new personal record. Even in the month before the Jessie freeze I accepted only 407 packages. So, very well done (self-laudation has to happen from time to time :-) ).

Another record was broken as well. After 19 months of doing this kind of work, I got my first insulting email. I would prefer to wait another 19 months before I get the next one …

Squeeze LTS

This was my twelfth month of doing some work for the Squeeze LTS initiative, started by Raphael Hertzog at Freexian.

This month I got assigned a workload of only 14.5h and spent most of it working on a new upload of php5. Unfortunately, so many CVEs have been coming in that I haven't done an upload yet.

Other stuff I uploaded was

  • [DLA 258-1] jqueryui security update
  • [DLA 262-1] libcrypto++ security update

This month I also had my first one and a half weeks of frontdesk work. As introduced in this email, every member of the LTS team should do some LTS CVE triage. Up to now it was mainly done by Raphael, and he wants to share this task with everybody else. So I answered questions on the IRC channel and on the LTS list, and looked for CVEs that are important for Squeeze LTS or could be ignored.

Other stuff

This month I also uploaded a new version of harminv and wondered why the package didn't move to testing. Of course there is a document describing how to do a library transition properly. But hey, it is me, I know everything better, and of course I can use a shortcut. Oh boy, I was wrong. So I also uploaded new versions of meep, meep-lam4, meep-openmpi, meep-mpi-default and meep-mpich2.

And the moral of the story: if you don't understand why something should be done in a specific way, you shouldn't try to do it differently.

Donations

Again, thanks a lot to all donors. I really appreciate it and hope that everybody is pleased with my commitment. Don't hesitate to make suggestions for improvements.

05 July, 2015 07:51PM by alteholz

Petter Reinholdtsen

New laptop - some more clues and ideas based on feedback

Several people contacted me after my previous blog post about my need for a new laptop, and provided very useful feedback. I wish to thank every one of them. Several pointed me to the possibility of fixing my X230, and I am already in the process of getting Lenovo to do so thanks to the on-site, next-day support contract covering the machine. But the battery is almost useless (I expect to replace it with a non-official battery) and I do not expect the machine to live for many more years, so it is time to plan its replacement. Had I not had a support contract, the suggestion was to find replacement parts using FrancEcrans, but that might present a language barrier as I do not understand French.

One tip I got was to use the Skinflint web service to compare laptop models. It seems to have more models available than prisjakt.no. Another tip came from someone who I know has similar keyboard preferences: the HP EliteBook 840 keyboard is not very good, and this matches my experience with earlier EliteBook keyboards I have tested. Because of this, I will not consider it any further.

When I wrote my blog post, I was not aware of the Thinkpad X250, the newest Thinkpad X model. The keyboard reintroduces mouse buttons (which are missing from the X240), and it works fairly well with Debian Sid/Unstable according to Corsac.net. The reports I got on the keyboard quality are not consistent. Some say the keyboard is good, others say it is OK, while others say it is not very good. Those with experience from the X41 and X60 agree that the X250 keyboard is not as good as those trusty old laptops, and suggest I keep and fix my X230 instead of upgrading, or get a used X230 to replace it. I'm also told that the X250 lacks LEDs for caps lock, disk activity and battery status, which are very convenient on my X230. I'm also told that the CPU fan runs very often, making it a bit noisy. In any case, the X250 does not work out of the box with Debian Stable/Jessie, one of my requirements.

I have also gotten a few vendor proposals: one was Pro-Star, another was Libreboot. The latter looks very attractive to me.

Again, thank you all for the very useful feedback. It helps a lot as I keep looking for a replacement.

Update 2015-07-06: I was recommended to check out the lapstore.de web shop for used laptops. They have several different old Thinkpad X models, and provide a one-year warranty.

05 July, 2015 07:40PM

Sjoerd Simons

Debian Jessie on Raspberry Pi 2

Apart from being somewhat slow, one of the downsides of the original Raspberry Pi SoC was that it had an old ARM11 core which implements the ARMv6 architecture. This was particularly unfortunate, as most common distributions (Debian, Ubuntu, Fedora, etc.) standardized on the ARMv7-A architecture as a minimum for their ARM hardfloat ports, which is one of the reasons for Raspbian and the various other RPI-specific distributions.

Happily, with the new Raspberry Pi 2 using Cortex-A7 cores (which implement the ARMv7-A architecture) this issue is out of the way, which means that a standard Debian hardfloat userland will run just fine. So the obvious first thing to do when an RPI 2 appeared on my desk was to put together a quick Debian Jessie image for it.

The result of which can be found at: https://images.collabora.co.uk/rpi2/

Log in as root with password debian (obviously do change the password and create a normal user after booting). The image is 3G, so it should fit on any SD card marketed as 4G or bigger. Using bmap-tools for flashing is recommended; otherwise you'll be waiting for 2.5G of zeros to be written to the card, which tends to be rather boring. Note that the image is really basic and will just get you to a login prompt on either serial or hdmi; batteries are very much not included, but can be apt-getted :).

Technically, this image is simply a Debian Jessie debootstrap with a few extra packages for hardware support. Unlike Raspbian, the first partition (which contains the firmware & kernel files to boot the system) is mounted on /boot/firmware rather than on /boot. This is because the VideoCore expects the first partition to be a FAT filesystem, but mounting FAT on /boot really doesn't work right on Debian systems, as it contains files managed by dpkg (e.g. the kernel package) which require a POSIX compatible filesystem. This is essentially the same reason why Debian uses /boot/efi for the ESP partition on Intel systems rather than mounting it on /boot directly.

For reference, the RPI2 specific packages in this image are from https://repositories.collabora.co.uk/debian/ in the jessie distribution and rpi2 component (this repository is enabled by default on the image). The relevant packages there are:

  • linux: Current 3.18 based package from Debian experimental (3.18.5-1~exp1 at the time of this writing) with a stack of patches on top from the raspberrypi github repository and tweaked to build an rpi2 flavour as the patchset isn't multiplatform capable :(
  • raspberrypi-firmware-nokernel: Firmware package and misc libraries packages taken from Raspbian, with a slight tweak to install in /boot/firmware rather than /boot.
  • flash-kernel: Current flash-kernel package from debian experimental, with a small addition to detect the RPI 2 and "flash" the kernel to /boot/firmware/kernel7.img (which is what the GPU will try to boot on this board).

For the future, it would be nice to see Raspberry Pi 2 supported out of the box by Debian. For that to happen, the most important thing would be to have some mainline kernel support for this board (supporting multiplatform!) so it can be built as part of Debian's armmp kernel flavour. And ideally, the firmware would load a bootloader (such as u-boot) rather than a kernel directly, to allow for a much more flexible boot sequence and support for using an initramfs (u-boot has some support for the original Raspberry Pi, so adding Raspberry Pi 2 support should hopefully not be too tricky).

Update: An updated image (20150705) is available with the latest packages from Jessie and a GPG key that's not expired :).

05 July, 2015 06:06PM by Sjoerd Simons

Robert Edmonds

Git packaging workflow for py-lmdb

Recently, I packaged the py-lmdb Python binding for the LMDB database library. This package is going to be team maintained by the pkg-db group, which is responsible for maintaining BerkeleyDB and LMDB packages. Below are my notes on (re-)Debianizing this package and how the Git repository for the source package is laid out.

The upstream py-lmdb developer has a Git-centric workflow. Development is done on the master branch, with regular releases done as fast-forward merges to the release branch. Release tags of the form py-lmdb_X.YZ are provided. The only tarballs provided are the ones that GitHub automatically generates from tags. Since these tarballs are synthetic and their content matches the content on the corresponding tag, we will ignore them in favor of using the release tags directly. (The --git-pristine-tar-commit option to gbp-buildpackage will be used so that .orig.tar.gz files can be replicated, allowing the Debian archive to accept subsequent uploads, but tarballs are otherwise irrelevant to our workflow.)

To make it clear that the release tags come from upstream's repository, they should be prefixed with upstream/, which would preferably result in a DEP-14 compliant scheme. (Unfortunately, since upstream's release tags begin with py-lmdb_, this doesn't quite match the pattern that DEP-14 recommends.)

Here is how the local packaging repository is initialized. Note that git clone isn't used, so that we can customize how the tags are fetched. Instead, we create an empty Git repository and add the upstream repository as the upstream remote. The --no-tags option is used, so that git fetch does not import the remote's tags. However, we also add a custom fetch refspec refs/tags/*:refs/tags/upstream/* so that the remote's tags are explicitly fetched, but with the upstream/ prefix.

$ mkdir py-lmdb
$ cd py-lmdb
$ git init
Initialized empty Git repository in /home/edmonds/debian/py-lmdb/.git/
$ git remote add --no-tags upstream https://github.com/dw/py-lmdb
$ git config --add remote.upstream.fetch 'refs/tags/*:refs/tags/upstream/*'
$ git fetch upstream
remote: Counting objects: 3336, done.
remote: Total 3336 (delta 0), reused 0 (delta 0), pack-reused 3336
Receiving objects: 100% (3336/3336), 2.15 MiB | 0 bytes/s, done.
Resolving deltas: 100% (1958/1958), done.
From https://github.com/dw/py-lmdb
 * [new branch]      master     -> upstream/master
 * [new branch]      release    -> upstream/release
 * [new branch]      win32-sparse-patch -> upstream/win32-sparse-patch
 * [new tag]         last-cython-version -> upstream/last-cython-version
 * [new tag]         py-lmdb_0.1 -> upstream/py-lmdb_0.1
 * [new tag]         py-lmdb_0.2 -> upstream/py-lmdb_0.2
 * [new tag]         py-lmdb_0.3 -> upstream/py-lmdb_0.3
 * [new tag]         py-lmdb_0.4 -> upstream/py-lmdb_0.4
 * [new tag]         py-lmdb_0.5 -> upstream/py-lmdb_0.5
 * [new tag]         py-lmdb_0.51 -> upstream/py-lmdb_0.51
 * [new tag]         py-lmdb_0.52 -> upstream/py-lmdb_0.52
 * [new tag]         py-lmdb_0.53 -> upstream/py-lmdb_0.53
 * [new tag]         py-lmdb_0.54 -> upstream/py-lmdb_0.54
 * [new tag]         py-lmdb_0.56 -> upstream/py-lmdb_0.56
 * [new tag]         py-lmdb_0.57 -> upstream/py-lmdb_0.57
 * [new tag]         py-lmdb_0.58 -> upstream/py-lmdb_0.58
 * [new tag]         py-lmdb_0.59 -> upstream/py-lmdb_0.59
 * [new tag]         py-lmdb_0.60 -> upstream/py-lmdb_0.60
 * [new tag]         py-lmdb_0.61 -> upstream/py-lmdb_0.61
 * [new tag]         py-lmdb_0.62 -> upstream/py-lmdb_0.62
 * [new tag]         py-lmdb_0.63 -> upstream/py-lmdb_0.63
 * [new tag]         py-lmdb_0.64 -> upstream/py-lmdb_0.64
 * [new tag]         py-lmdb_0.65 -> upstream/py-lmdb_0.65
 * [new tag]         py-lmdb_0.66 -> upstream/py-lmdb_0.66
 * [new tag]         py-lmdb_0.67 -> upstream/py-lmdb_0.67
 * [new tag]         py-lmdb_0.68 -> upstream/py-lmdb_0.68
 * [new tag]         py-lmdb_0.69 -> upstream/py-lmdb_0.69
 * [new tag]         py-lmdb_0.70 -> upstream/py-lmdb_0.70
 * [new tag]         py-lmdb_0.71 -> upstream/py-lmdb_0.71
 * [new tag]         py-lmdb_0.72 -> upstream/py-lmdb_0.72
 * [new tag]         py-lmdb_0.73 -> upstream/py-lmdb_0.73
 * [new tag]         py-lmdb_0.74 -> upstream/py-lmdb_0.74
 * [new tag]         py-lmdb_0.75 -> upstream/py-lmdb_0.75
 * [new tag]         py-lmdb_0.76 -> upstream/py-lmdb_0.76
 * [new tag]         py-lmdb_0.77 -> upstream/py-lmdb_0.77
 * [new tag]         py-lmdb_0.78 -> upstream/py-lmdb_0.78
 * [new tag]         py-lmdb_0.79 -> upstream/py-lmdb_0.79
 * [new tag]         py-lmdb_0.80 -> upstream/py-lmdb_0.80
 * [new tag]         py-lmdb_0.81 -> upstream/py-lmdb_0.81
 * [new tag]         py-lmdb_0.82 -> upstream/py-lmdb_0.82
 * [new tag]         py-lmdb_0.83 -> upstream/py-lmdb_0.83
 * [new tag]         py-lmdb_0.84 -> upstream/py-lmdb_0.84
 * [new tag]         py-lmdb_0.85 -> upstream/py-lmdb_0.85
 * [new tag]         py-lmdb_0.86 -> upstream/py-lmdb_0.86
$

Note that at this point we have content from the upstream remote in our local repository, but we don't have any local branches:

$ git status
On branch master

Initial commit

nothing to commit (create/copy files and use "git add" to track)
$ git branch -a
  remotes/upstream/master
  remotes/upstream/release
  remotes/upstream/win32-sparse-patch
$

We will use the DEP-14 naming scheme for the packaging branches, so the branch for packages targeted at unstable will be called debian/sid. Since I already made an initial 0.84-1 upload, we need to start the debian/sid branch from the upstream 0.84 tag and import the original packaging content from that upload. The --no-track flag is passed to git checkout initially so that Git doesn't consider the upstream release tag upstream/py-lmdb_0.84 to be the upstream branch for our packaging branch.

$ git checkout --no-track -b debian/sid upstream/py-lmdb_0.84
Switched to a new branch 'debian/sid'
$

At this point I imported the original packaging content for 0.84-1 with git am. Then, I signed the debian/0.84-1 tag:

$ git tag -s -m 'Debian release 0.84-1' debian/0.84-1
$ git verify-tag debian/0.84-1
gpg: Signature made Sat 04 Jul 2015 02:49:42 PM EDT using RSA key ID AAF6CDAE
gpg: Good signature from "Robert Edmonds <edmonds@mycre.ws>" [ultimate]
gpg:                 aka "Robert Edmonds <edmonds@fsi.io>" [ultimate]
gpg:                 aka "Robert Edmonds <edmonds@debian.org>" [ultimate]
$

New upstream releases are integrated by fetching new upstream tags and non-fast-forward merging into the packaging branch. The latest release is 0.86, so we merge from the upstream/py-lmdb_0.86 tag.

$ git fetch upstream --dry-run
[...]
$ git fetch upstream
[...]
$ git checkout debian/sid
Already on 'debian/sid'
$ git merge --no-ff --no-edit upstream/py-lmdb_0.86
Merge made by the 'recursive' strategy.
 ChangeLog                     |  46 ++++++++++++++
 docs/index.rst                |  46 +++++++++++++-
 docs/themes/acid/layout.html  |   4 +-
 examples/dirtybench-gdbm.py   |   6 ++
 examples/dirtybench.py        |  19 ++++++
 examples/nastybench.py        |  18 ++++--
 examples/parabench.py         |   6 ++
 lib/lmdb.h                    |  37 ++++++-----
 lib/mdb.c                     | 281 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++---------------------
 lib/midl.c                    |   2 +-
 lib/midl.h                    |   2 +-
 lib/py-lmdb/preload.h         |  48 ++++++++++++++
 lmdb/__init__.py              |   2 +-
 lmdb/cffi.py                  | 120 ++++++++++++++++++++++++-----------
 lmdb/cpython.c                |  86 +++++++++++++++++++------
 lmdb/tool.py                  |   5 +-
 misc/gdb.commands             |  21 ++++++
 misc/runtests-travisci.sh     |   3 +-
 misc/runtests-ubuntu-12-04.sh |  28 ++++----
 setup.py                      |   2 +
 tests/crash_test.py           |  22 +++++++
 tests/cursor_test.py          |  37 +++++++++++
 tests/env_test.py             |  73 +++++++++++++++++++++
 tests/testlib.py              |  14 +++-
 tests/txn_test.py             |  20 ++++++
 25 files changed, 773 insertions(+), 175 deletions(-)
 create mode 100644 lib/py-lmdb/preload.h
 create mode 100644 misc/gdb.commands
$

Here I did some additional development work like editing the debian/gbp.conf file and applying a fix for #790738 to make the package build reproducibly. The package is now ready for an 0.86-1 upload, so I ran the following gbp dch command:

$ gbp dch --release --auto --new-version=0.86-1 --commit
gbp:info: Found tag for topmost changelog version '6bdbb56c04571fe2d5d22aa0287ab0dc83959de5'
gbp:info: Continuing from commit '6bdbb56c04571fe2d5d22aa0287ab0dc83959de5'
gbp:info: Changelog has been committed for version 0.86-1
$

This automatically generates a changelog entry for 0.86-1, but it includes commit summaries for all of the upstream commits since the last release, which I had to edit out.

Then, I used gbp buildpackage with BUILDER=pbuilder to build the package in a clean, up-to-date sid chroot. After checking the result, I signed the debian/0.86-1 tag:

$ git tag -s -m 'Debian release 0.86-1' debian/0.86-1
$

The package is now ready to be pushed to git.debian.org. First, a bare repository is initialized:

$ ssh git.debian.org
edmonds@moszumanska:~$ cd /srv/git.debian.org/git/pkg-db/
edmonds@moszumanska:/srv/git.debian.org/git/pkg-db$ umask 002
edmonds@moszumanska:/srv/git.debian.org/git/pkg-db$ mkdir py-lmdb.git
edmonds@moszumanska:/srv/git.debian.org/git/pkg-db$ cd py-lmdb.git/
edmonds@moszumanska:/srv/git.debian.org/git/pkg-db/py-lmdb.git$ git --bare init --shared
Initialized empty shared Git repository in /srv/git.debian.org/git/pkg-db/py-lmdb.git/
edmonds@moszumanska:/srv/git.debian.org/git/pkg-db/py-lmdb.git$ echo 'py-lmdb Debian packaging' > description
edmonds@moszumanska:/srv/git.debian.org/git/pkg-db/py-lmdb.git$ mv hooks/post-update.sample hooks/post-update
edmonds@moszumanska:/srv/git.debian.org/git/pkg-db/py-lmdb.git$ chmod a+x hooks/post-update
edmonds@moszumanska:/srv/git.debian.org/git/pkg-db/py-lmdb.git$ logout
Shared connection to git.debian.org closed.

Then, we add a new debian remote to our local packaging repository. Per our repository conventions, we need to ensure that only branch names matching debian/* and pristine-tar and tag names matching debian/* and upstream/* are pushed to the debian remote when we run git push debian, so we add a set of remote.debian.push refspecs that correspond to these conventions. We also add an explicit remote.debian.fetch refspec to fetch tags.

$ git remote add debian ssh://git.debian.org/git/pkg-db/py-lmdb.git
$ git config --add remote.debian.push 'refs/tags/debian/*'
$ git config --add remote.debian.push 'refs/tags/upstream/*'
$ git config --add remote.debian.push 'refs/heads/debian/*'
$ git config --add remote.debian.push 'refs/heads/pristine-tar'
$ git config --add remote.debian.fetch 'refs/tags/*:refs/tags/*'

We now run the initial push to the remote Git repository. The --set-upstream option is used so that our local branches will be configured to track the corresponding remote branches. Also note that the debian/* and upstream/* tags are pushed as well.

$ git push debian --set-upstream
Counting objects: 3333, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (1083/1083), done.
Writing objects: 100% (3333/3333), 1.37 MiB | 0 bytes/s, done.
Total 3333 (delta 2231), reused 3314 (delta 2218)
To ssh://git.debian.org/git/pkg-db/py-lmdb.git
 * [new branch]      pristine-tar -> pristine-tar
 * [new branch]      debian/sid -> debian/sid
 * [new tag]         debian/0.84-1 -> debian/0.84-1
 * [new tag]         debian/0.86-1 -> debian/0.86-1
 * [new tag]         upstream/last-cython-version -> upstream/last-cython-version
 * [new tag]         upstream/py-lmdb_0.1 -> upstream/py-lmdb_0.1
 * [new tag]         upstream/py-lmdb_0.2 -> upstream/py-lmdb_0.2
 * [new tag]         upstream/py-lmdb_0.3 -> upstream/py-lmdb_0.3
 * [new tag]         upstream/py-lmdb_0.4 -> upstream/py-lmdb_0.4
 * [new tag]         upstream/py-lmdb_0.5 -> upstream/py-lmdb_0.5
 * [new tag]         upstream/py-lmdb_0.51 -> upstream/py-lmdb_0.51
 * [new tag]         upstream/py-lmdb_0.52 -> upstream/py-lmdb_0.52
 * [new tag]         upstream/py-lmdb_0.53 -> upstream/py-lmdb_0.53
 * [new tag]         upstream/py-lmdb_0.54 -> upstream/py-lmdb_0.54
 * [new tag]         upstream/py-lmdb_0.56 -> upstream/py-lmdb_0.56
 * [new tag]         upstream/py-lmdb_0.57 -> upstream/py-lmdb_0.57
 * [new tag]         upstream/py-lmdb_0.58 -> upstream/py-lmdb_0.58
 * [new tag]         upstream/py-lmdb_0.59 -> upstream/py-lmdb_0.59
 * [new tag]         upstream/py-lmdb_0.60 -> upstream/py-lmdb_0.60
 * [new tag]         upstream/py-lmdb_0.61 -> upstream/py-lmdb_0.61
 * [new tag]         upstream/py-lmdb_0.62 -> upstream/py-lmdb_0.62
 * [new tag]         upstream/py-lmdb_0.63 -> upstream/py-lmdb_0.63
 * [new tag]         upstream/py-lmdb_0.64 -> upstream/py-lmdb_0.64
 * [new tag]         upstream/py-lmdb_0.65 -> upstream/py-lmdb_0.65
 * [new tag]         upstream/py-lmdb_0.66 -> upstream/py-lmdb_0.66
 * [new tag]         upstream/py-lmdb_0.67 -> upstream/py-lmdb_0.67
 * [new tag]         upstream/py-lmdb_0.68 -> upstream/py-lmdb_0.68
 * [new tag]         upstream/py-lmdb_0.69 -> upstream/py-lmdb_0.69
 * [new tag]         upstream/py-lmdb_0.70 -> upstream/py-lmdb_0.70
 * [new tag]         upstream/py-lmdb_0.71 -> upstream/py-lmdb_0.71
 * [new tag]         upstream/py-lmdb_0.72 -> upstream/py-lmdb_0.72
 * [new tag]         upstream/py-lmdb_0.73 -> upstream/py-lmdb_0.73
 * [new tag]         upstream/py-lmdb_0.74 -> upstream/py-lmdb_0.74
 * [new tag]         upstream/py-lmdb_0.75 -> upstream/py-lmdb_0.75
 * [new tag]         upstream/py-lmdb_0.76 -> upstream/py-lmdb_0.76
 * [new tag]         upstream/py-lmdb_0.77 -> upstream/py-lmdb_0.77
 * [new tag]         upstream/py-lmdb_0.78 -> upstream/py-lmdb_0.78
 * [new tag]         upstream/py-lmdb_0.79 -> upstream/py-lmdb_0.79
 * [new tag]         upstream/py-lmdb_0.80 -> upstream/py-lmdb_0.80
 * [new tag]         upstream/py-lmdb_0.81 -> upstream/py-lmdb_0.81
 * [new tag]         upstream/py-lmdb_0.82 -> upstream/py-lmdb_0.82
 * [new tag]         upstream/py-lmdb_0.83 -> upstream/py-lmdb_0.83
 * [new tag]         upstream/py-lmdb_0.84 -> upstream/py-lmdb_0.84
 * [new tag]         upstream/py-lmdb_0.85 -> upstream/py-lmdb_0.85
 * [new tag]         upstream/py-lmdb_0.86 -> upstream/py-lmdb_0.86
Branch pristine-tar set up to track remote branch pristine-tar from debian.
Branch debian/sid set up to track remote branch debian/sid from debian.
$

After the initial push, we need to configure the remote repository so that clones will check out the debian/sid branch by default:

$ ssh git.debian.org
edmonds@moszumanska:~$ cd /srv/git.debian.org/git/pkg-db/py-lmdb.git/
edmonds@moszumanska:/srv/git.debian.org/git/pkg-db/py-lmdb.git$ git symbolic-ref HEAD refs/heads/debian/sid
edmonds@moszumanska:/srv/git.debian.org/git/pkg-db/py-lmdb.git$ logout
Shared connection to git.debian.org closed.

We can check if there are any updates in upstream's Git repository with the following command:

$ git fetch upstream --dry-run -v
From https://github.com/dw/py-lmdb
 = [up to date]      master     -> upstream/master
 = [up to date]      release    -> upstream/release
 = [up to date]      win32-sparse-patch -> upstream/win32-sparse-patch
 = [up to date]      last-cython-version -> upstream/last-cython-version
 = [up to date]      py-lmdb_0.1 -> upstream/py-lmdb_0.1
 = [up to date]      py-lmdb_0.2 -> upstream/py-lmdb_0.2
 = [up to date]      py-lmdb_0.3 -> upstream/py-lmdb_0.3
 = [up to date]      py-lmdb_0.4 -> upstream/py-lmdb_0.4
 = [up to date]      py-lmdb_0.5 -> upstream/py-lmdb_0.5
 = [up to date]      py-lmdb_0.51 -> upstream/py-lmdb_0.51
 = [up to date]      py-lmdb_0.52 -> upstream/py-lmdb_0.52
 = [up to date]      py-lmdb_0.53 -> upstream/py-lmdb_0.53
 = [up to date]      py-lmdb_0.54 -> upstream/py-lmdb_0.54
 = [up to date]      py-lmdb_0.56 -> upstream/py-lmdb_0.56
 = [up to date]      py-lmdb_0.57 -> upstream/py-lmdb_0.57
 = [up to date]      py-lmdb_0.58 -> upstream/py-lmdb_0.58
 = [up to date]      py-lmdb_0.59 -> upstream/py-lmdb_0.59
 = [up to date]      py-lmdb_0.60 -> upstream/py-lmdb_0.60
 = [up to date]      py-lmdb_0.61 -> upstream/py-lmdb_0.61
 = [up to date]      py-lmdb_0.62 -> upstream/py-lmdb_0.62
 = [up to date]      py-lmdb_0.63 -> upstream/py-lmdb_0.63
 = [up to date]      py-lmdb_0.64 -> upstream/py-lmdb_0.64
 = [up to date]      py-lmdb_0.65 -> upstream/py-lmdb_0.65
 = [up to date]      py-lmdb_0.66 -> upstream/py-lmdb_0.66
 = [up to date]      py-lmdb_0.67 -> upstream/py-lmdb_0.67
 = [up to date]      py-lmdb_0.68 -> upstream/py-lmdb_0.68
 = [up to date]      py-lmdb_0.69 -> upstream/py-lmdb_0.69
 = [up to date]      py-lmdb_0.70 -> upstream/py-lmdb_0.70
 = [up to date]      py-lmdb_0.71 -> upstream/py-lmdb_0.71
 = [up to date]      py-lmdb_0.72 -> upstream/py-lmdb_0.72
 = [up to date]      py-lmdb_0.73 -> upstream/py-lmdb_0.73
 = [up to date]      py-lmdb_0.74 -> upstream/py-lmdb_0.74
 = [up to date]      py-lmdb_0.75 -> upstream/py-lmdb_0.75
 = [up to date]      py-lmdb_0.76 -> upstream/py-lmdb_0.76
 = [up to date]      py-lmdb_0.77 -> upstream/py-lmdb_0.77
 = [up to date]      py-lmdb_0.78 -> upstream/py-lmdb_0.78
 = [up to date]      py-lmdb_0.79 -> upstream/py-lmdb_0.79
 = [up to date]      py-lmdb_0.80 -> upstream/py-lmdb_0.80
 = [up to date]      py-lmdb_0.81 -> upstream/py-lmdb_0.81
 = [up to date]      py-lmdb_0.82 -> upstream/py-lmdb_0.82
 = [up to date]      py-lmdb_0.83 -> upstream/py-lmdb_0.83
 = [up to date]      py-lmdb_0.84 -> upstream/py-lmdb_0.84
 = [up to date]      py-lmdb_0.85 -> upstream/py-lmdb_0.85
 = [up to date]      py-lmdb_0.86 -> upstream/py-lmdb_0.86

We can check if any co-maintainers have pushed updates to the git.debian.org repository with the following command:

$ git fetch debian --dry-run -v
From ssh://git.debian.org/git/pkg-db/py-lmdb
 = [up to date]      debian/sid -> debian/debian/sid
 = [up to date]      pristine-tar -> debian/pristine-tar
 = [up to date]      debian/0.84-1 -> debian/0.84-1
 = [up to date]      debian/0.86-1 -> debian/0.86-1
 = [up to date]      upstream/last-cython-version -> upstream/last-cython-version
 = [up to date]      upstream/py-lmdb_0.1 -> upstream/py-lmdb_0.1
 = [up to date]      upstream/py-lmdb_0.2 -> upstream/py-lmdb_0.2
 = [up to date]      upstream/py-lmdb_0.3 -> upstream/py-lmdb_0.3
 = [up to date]      upstream/py-lmdb_0.4 -> upstream/py-lmdb_0.4
 = [up to date]      upstream/py-lmdb_0.5 -> upstream/py-lmdb_0.5
 = [up to date]      upstream/py-lmdb_0.51 -> upstream/py-lmdb_0.51
 = [up to date]      upstream/py-lmdb_0.52 -> upstream/py-lmdb_0.52
 = [up to date]      upstream/py-lmdb_0.53 -> upstream/py-lmdb_0.53
 = [up to date]      upstream/py-lmdb_0.54 -> upstream/py-lmdb_0.54
 = [up to date]      upstream/py-lmdb_0.56 -> upstream/py-lmdb_0.56
 = [up to date]      upstream/py-lmdb_0.57 -> upstream/py-lmdb_0.57
 = [up to date]      upstream/py-lmdb_0.58 -> upstream/py-lmdb_0.58
 = [up to date]      upstream/py-lmdb_0.59 -> upstream/py-lmdb_0.59
 = [up to date]      upstream/py-lmdb_0.60 -> upstream/py-lmdb_0.60
 = [up to date]      upstream/py-lmdb_0.61 -> upstream/py-lmdb_0.61
 = [up to date]      upstream/py-lmdb_0.62 -> upstream/py-lmdb_0.62
 = [up to date]      upstream/py-lmdb_0.63 -> upstream/py-lmdb_0.63
 = [up to date]      upstream/py-lmdb_0.64 -> upstream/py-lmdb_0.64
 = [up to date]      upstream/py-lmdb_0.65 -> upstream/py-lmdb_0.65
 = [up to date]      upstream/py-lmdb_0.66 -> upstream/py-lmdb_0.66
 = [up to date]      upstream/py-lmdb_0.67 -> upstream/py-lmdb_0.67
 = [up to date]      upstream/py-lmdb_0.68 -> upstream/py-lmdb_0.68
 = [up to date]      upstream/py-lmdb_0.69 -> upstream/py-lmdb_0.69
 = [up to date]      upstream/py-lmdb_0.70 -> upstream/py-lmdb_0.70
 = [up to date]      upstream/py-lmdb_0.71 -> upstream/py-lmdb_0.71
 = [up to date]      upstream/py-lmdb_0.72 -> upstream/py-lmdb_0.72
 = [up to date]      upstream/py-lmdb_0.73 -> upstream/py-lmdb_0.73
 = [up to date]      upstream/py-lmdb_0.74 -> upstream/py-lmdb_0.74
 = [up to date]      upstream/py-lmdb_0.75 -> upstream/py-lmdb_0.75
 = [up to date]      upstream/py-lmdb_0.76 -> upstream/py-lmdb_0.76
 = [up to date]      upstream/py-lmdb_0.77 -> upstream/py-lmdb_0.77
 = [up to date]      upstream/py-lmdb_0.78 -> upstream/py-lmdb_0.78
 = [up to date]      upstream/py-lmdb_0.79 -> upstream/py-lmdb_0.79
 = [up to date]      upstream/py-lmdb_0.80 -> upstream/py-lmdb_0.80
 = [up to date]      upstream/py-lmdb_0.81 -> upstream/py-lmdb_0.81
 = [up to date]      upstream/py-lmdb_0.82 -> upstream/py-lmdb_0.82
 = [up to date]      upstream/py-lmdb_0.83 -> upstream/py-lmdb_0.83
 = [up to date]      upstream/py-lmdb_0.84 -> upstream/py-lmdb_0.84
 = [up to date]      upstream/py-lmdb_0.85 -> upstream/py-lmdb_0.85
 = [up to date]      upstream/py-lmdb_0.86 -> upstream/py-lmdb_0.86
$

We can check if anything needs to be pushed from our local repository to the git.debian.org repository with the following command:

$ git push debian --dry-run -v
Pushing to ssh://git.debian.org/git/pkg-db/py-lmdb.git
To ssh://git.debian.org/git/pkg-db/py-lmdb.git
 = [up to date]      debian/sid -> debian/sid
 = [up to date]      pristine-tar -> pristine-tar
 = [up to date]      debian/0.84-1 -> debian/0.84-1
 = [up to date]      debian/0.86-1 -> debian/0.86-1
 = [up to date]      upstream/last-cython-version -> upstream/last-cython-version
 = [up to date]      upstream/py-lmdb_0.1 -> upstream/py-lmdb_0.1
 = [up to date]      upstream/py-lmdb_0.2 -> upstream/py-lmdb_0.2
 = [up to date]      upstream/py-lmdb_0.3 -> upstream/py-lmdb_0.3
 = [up to date]      upstream/py-lmdb_0.4 -> upstream/py-lmdb_0.4
 = [up to date]      upstream/py-lmdb_0.5 -> upstream/py-lmdb_0.5
 = [up to date]      upstream/py-lmdb_0.51 -> upstream/py-lmdb_0.51
 = [up to date]      upstream/py-lmdb_0.52 -> upstream/py-lmdb_0.52
 = [up to date]      upstream/py-lmdb_0.53 -> upstream/py-lmdb_0.53
 = [up to date]      upstream/py-lmdb_0.54 -> upstream/py-lmdb_0.54
 = [up to date]      upstream/py-lmdb_0.56 -> upstream/py-lmdb_0.56
 = [up to date]      upstream/py-lmdb_0.57 -> upstream/py-lmdb_0.57
 = [up to date]      upstream/py-lmdb_0.58 -> upstream/py-lmdb_0.58
 = [up to date]      upstream/py-lmdb_0.59 -> upstream/py-lmdb_0.59
 = [up to date]      upstream/py-lmdb_0.60 -> upstream/py-lmdb_0.60
 = [up to date]      upstream/py-lmdb_0.61 -> upstream/py-lmdb_0.61
 = [up to date]      upstream/py-lmdb_0.62 -> upstream/py-lmdb_0.62
 = [up to date]      upstream/py-lmdb_0.63 -> upstream/py-lmdb_0.63
 = [up to date]      upstream/py-lmdb_0.64 -> upstream/py-lmdb_0.64
 = [up to date]      upstream/py-lmdb_0.65 -> upstream/py-lmdb_0.65
 = [up to date]      upstream/py-lmdb_0.66 -> upstream/py-lmdb_0.66
 = [up to date]      upstream/py-lmdb_0.67 -> upstream/py-lmdb_0.67
 = [up to date]      upstream/py-lmdb_0.68 -> upstream/py-lmdb_0.68
 = [up to date]      upstream/py-lmdb_0.69 -> upstream/py-lmdb_0.69
 = [up to date]      upstream/py-lmdb_0.70 -> upstream/py-lmdb_0.70
 = [up to date]      upstream/py-lmdb_0.71 -> upstream/py-lmdb_0.71
 = [up to date]      upstream/py-lmdb_0.72 -> upstream/py-lmdb_0.72
 = [up to date]      upstream/py-lmdb_0.73 -> upstream/py-lmdb_0.73
 = [up to date]      upstream/py-lmdb_0.74 -> upstream/py-lmdb_0.74
 = [up to date]      upstream/py-lmdb_0.75 -> upstream/py-lmdb_0.75
 = [up to date]      upstream/py-lmdb_0.76 -> upstream/py-lmdb_0.76
 = [up to date]      upstream/py-lmdb_0.77 -> upstream/py-lmdb_0.77
 = [up to date]      upstream/py-lmdb_0.78 -> upstream/py-lmdb_0.78
 = [up to date]      upstream/py-lmdb_0.79 -> upstream/py-lmdb_0.79
 = [up to date]      upstream/py-lmdb_0.80 -> upstream/py-lmdb_0.80
 = [up to date]      upstream/py-lmdb_0.81 -> upstream/py-lmdb_0.81
 = [up to date]      upstream/py-lmdb_0.82 -> upstream/py-lmdb_0.82
 = [up to date]      upstream/py-lmdb_0.83 -> upstream/py-lmdb_0.83
 = [up to date]      upstream/py-lmdb_0.84 -> upstream/py-lmdb_0.84
 = [up to date]      upstream/py-lmdb_0.85 -> upstream/py-lmdb_0.85
 = [up to date]      upstream/py-lmdb_0.86 -> upstream/py-lmdb_0.86
Everything up-to-date

Finally, in order to set up a fresh local clone of the git.debian.org repository that's configured like the local repository created above, we have to do the following:

$ git clone --origin debian ssh://git.debian.org/git/pkg-db/py-lmdb.git
Cloning into 'py-lmdb'...
remote: Counting objects: 3333, done.
remote: Compressing objects: 100% (1070/1070), done.
remote: Total 3333 (delta 2231), reused 3333 (delta 2231)
Receiving objects: 100% (3333/3333), 1.37 MiB | 1.11 MiB/s, done.
Resolving deltas: 100% (2231/2231), done.
Checking connectivity... done.
$ cd py-lmdb
$ git remote add --no-tags upstream https://github.com/dw/py-lmdb
$ git config --add remote.upstream.fetch 'refs/tags/*:refs/tags/upstream/*'
$ git fetch upstream
remote: Counting objects: 56, done.
remote: Total 56 (delta 25), reused 25 (delta 25), pack-reused 31
Unpacking objects: 100% (56/56), done.
From https://github.com/dw/py-lmdb
 * [new branch]      master     -> upstream/master
 * [new branch]      release    -> upstream/release
 * [new branch]      win32-sparse-patch -> upstream/win32-sparse-patch
$ git branch --track pristine-tar debian/pristine-tar 
Branch pristine-tar set up to track remote branch pristine-tar from debian.
$ git config --add remote.debian.push 'refs/tags/debian/*'
$ git config --add remote.debian.push 'refs/tags/upstream/*'
$ git config --add remote.debian.push 'refs/heads/debian/*'
$ git config --add remote.debian.push 'refs/heads/pristine-tar'
$ git config --add remote.debian.fetch 'refs/tags/*:refs/tags/*'
$

This is a fair amount of effort beyond a simple git clone, though, so I wonder if anything can be done to optimize this.

05 July, 2015 12:56AM by Robert Edmonds

July 04, 2015

hackergotchi for Guido Günther

Guido Günther

Debian work in June 2015

June was the second month I contributed to Debian LTS under the Freexian umbrella. In total I spent ten hours working on:

Besides that I did CVE triaging of 17 CVEs to check if and how they affect oldoldstable security. The information provided by the Security team on these issues in data/CVE/list is an awesome help here. So I tried to be as verbose as possible when triaging CVEs that hadn't been looked at for Wheezy or Jessie yet.

On non-LTS time I patched our lts-cve-triage tool to allow skipping packages that are already in dla-needed.txt. This avoids wasting time on CVEs that were already triaged.

04 July, 2015 10:46AM

July 03, 2015

hackergotchi for Norbert Preining

Norbert Preining

Debian/TeX Live 2015.20150703-1

The first upload of new packages after TeX Live 2015 hit unstable. Against my expectations, the bugs didn't come in by the thousands; more or less there were only some fixes necessary in the binary packages, which led to a few updates over the last week. This upload fixes an RC bug (missing Replaces), and also takes a step further in the Debianization of the packages: I finally removed the texconfig and texlinks programs, as they are not useful on Debian and should actually not be used.

Debian - TeX Live 2015

Besides a few other fixes, of course there was the usual chore of package updates.

Updated packages

babel-french, biblatex-fiwi, biblatex-opcit-booktitle, c90, chemformula, chemgreek, cjkutils, ctex, curve2e, dozenal, eledmac, elements, enotez, garuda-c90, koma-script, l3build, latex, leadsheets, norasi-c90, pkuthss, poemscol, pstricks, pst-solides3d, siunitx, termmenu, texlive-scripts, tudscr, upmethodology, xindy.

New packages

arabi-add, br-lex.

Enjoy.

03 July, 2015 03:08PM by Norbert Preining

Petter Reinholdtsen

Time to find a new laptop, as the old one is broken after only two years

My primary workhorse laptop is failing, and will need a replacement soon. The left 5 cm of the screen on my Thinkpad X230 started flickering yesterday, and I suspect the cause is a broken cable, as changing the angle of the screen sometimes gets rid of the flickering.

My requirements have not really changed since I bought it, and are still as I described them in 2013. The last time I bought a laptop, I had good help from prisjakt.no where I could select at least a few of the requirements (mouse pin, wifi, weight) and go through the rest manually. A three-button mouse and a good keyboard are not available as options, and all three laptop models proposed today (Thinkpad X240, HP EliteBook 820 G1 and G2) lack three mouse buttons. It is also unclear to me how good the keyboards on the HP EliteBooks are. I hope Lenovo have not messed up the keyboard, even if the quality and robustness in the X series have deteriorated since the X41.

I wonder how I can find a sensible laptop when none of the options seem sensible to me? Are there better services around to search the set of available laptops for features? Please send me an email if you have suggestions.

03 July, 2015 05:10AM

July 02, 2015

Enrico Zini

italian-fattura-elettronica

Billing an Italian public administration

Here's a simple guide for how I managed to bill one of my customers as is now mandated by law in Italy.

Create a new virtualbox machine

I would never do any of this to any system I would ever want to use for anything else, so it's virtual machine time.

  • I started VirtualBox, created a new machine for Ubuntu 32-bit with an 8 GB disk and 4 GB RAM, and placed the .vdi image in an encrypted partition. The web services of Infocert's fattura-pa require "Java (JRE) a 32bit di versione 1.6 o superiore" (32-bit Java JRE version 1.6 or higher).
  • I installed Ubuntu 12.04 on it: that is what dike declares to support.
  • I booted the VM, installed virtualbox-guest-utils, and made sure I also had virtualbox-guest-x11.
  • I restarted the VM so that I could resize the virtualbox window and have Ubuntu resize itself as well. Now I could actually read popup error messages in full.
  • I changed the desktop background to something that gave me the idea that this is an untrusted machine where I need to be very careful of what I type. I went for bright red.

Install smart card software into it

  • apt-get install pcscd pcsc-tools opensc
  • In virtualbox, I went to Devices/USB devices and enabled the smart card reader in the virtual machine.
  • I ran pcsc_scan to see if it could see my smart card.
  • I ran Firefox, went to preferences, advanced, security devices, load. Module name is "CRS PKCS#11", module path is /usr/lib/opensc-pkcs11.so
  • I went to https://fattura-pa.infocamere.it/fpmi/service and I was able to log in. To log in, I had to type the PIN 4 times into popups that offered little explanations about what was going on, enjoying cold shivers because the smart card would lock itself at the 3rd failed attempt.
  • Congratulations to myself! I thought that all was set, but unfortunately, at this stage, I was not able to do anything else except log into the website.

Descent into darkness

Set up things for fattura-pa

  • I got the PDF with the setup instructions from here. Get it too, for a reference, a laugh, and in case you do not believe the instructions below.
  • I went to https://www.firma.infocert.it/installazione/certificato.php, and saved the two certificates.
  • Firefox, preferences, advanced, show certificates, I imported both CA certificates, trusted for everything, all my base are belong to them.
  • apt-get install icedtea-plugin
  • I went to https://fattura-pa.infocamere.it/fpmi/service and tried to sign. I could not: I got an error about invalid UTF8 for something or other on Firefox's standard error. Firefox froze and had to be killed.

Set up things for signing locally with dike

  • I removed icedtea so that I could use the site without firefox crashing.
  • I installed DiKe For Ubuntu 12.04 32bit
  • I ran dikeutil to see if it could talk to my smart card
  • When signing with the website, I chose the manual signing options and downloaded the zip file with the xml to be signed.
  • I got a zip file, unzipped it.
  • I loaded the xml into dike.
  • I signed it with dike.
  • I got this error message: "nessun certificato di firma presente sul dispositivo di firma" ("no signing certificate present on the signing device") and then this error message: "Impossibile recuperare il certificato dal dispositivo di firma" ("unable to retrieve the certificate from the signing device"). No luck.

Set up things for signing locally with ArubaSign

  • I went to https://www.pec.it/Download.aspx
  • I downloaded ArubaSign for Linux 32 bit.
  • Oh! People say that it only works with Oracle's version of Java.
  • sudo add-apt-repository ppa:webupd8team/java
  • apt-get update
  • apt-get install oracle-java7-installer
  • During the installation process I had to agree to also sell my soul to Oracle.
  • tar axf ArubaSign*.tar*
  • cd ArubaSign-*/apps/dist
  • java -jar ArubaSign.jar
  • I let it download its own updates. Another time I did not. It does not seem to matter: I get asked that question every time I start it anyway.
  • I enjoyed the fancy brushed metal theme, and had an interesting time navigating an interface where every label on every icon or input field was truncated.
  • I downloaded https://www.pec.it/documenti/Manuale_ArubaSign2_firma%20Remota_V03_02_07_2012.pdf to get screenshots of that interface with all the labels intact
  • I signed the xml that I got from the website. I was told that I needed to view very carefully what I was signing, because the signature would be legally binding.
  • I enjoyed carefully reading a legally binding, raw XML file.
  • I told it to go ahead, and there was now a .p7m file ready for me. I rejoiced, as now I might, just might actually get paid for my work.

Try fattura-pa again

Maybe fattura-pa would work with Oracle's Java plugin?

  • I went to https://fattura-pa.infocamere.it/fpmi/service
  • I got asked to verify java at www.java.com. I did it.
  • I told Firefox to enable java.
  • Suddenly, and while I was still in java.com's tab, I got prompted about allowing Infocert's applet to run: I allowed it to run.
  • I also got prompted several times, still while the current tab was not even Infocert's tab, about running components that could compromise the security of my system. I allowed and unblocked all of them.
  • I entered my PIN.
  • Congratulations! Now I have two ways of generating legally binding signatures with government issued smart cards!

Aftermath

I shut down that virtual machine and I'm making sure I never run anything important on it. Except, of course, generating legally binding signatures as required by the Italian government.

What could possibly go wrong?

02 July, 2015 09:48PM

Antonio Terceiro

Upgrades to Jessie, Ruby 2.2 transition, and chef update

Last month I started to track all the small Debian-related things that I do. My initial motivation was to be conscious about how often I spend short periods of time working on Debian. Sometimes it's during lunch breaks, weekends, first thing in the morning before regular work, after I am done for the day with regular work, or even during regular work, since I do have the chance of doing Debian work as part of my regular work occasionally.

Now that I have this information, I need to do something with it. So this is probably the first of monthly updates I will post about my Debian work. Hopefully it won’t be the last.

Upgrades to Jessie

I (finally) upgraded my two servers to Jessie. The first one, my home server, is a Utilite, which is a quite nice ARM box. It is silent and consumes very little power. The only problem I had with it is that the vendor-provided kernel is too old, so I couldn't upgrade udev, and therefore couldn't switch to systemd. I had to force sysvinit for now, until I can manage to upgrade the kernel and configure uboot to properly boot the official Debian kernel.

On my VPS things are way better. I was able to upgrade nicely, and it is now running a stock Jessie system.

Fixed HTTPS on ci.debian.net

pabs had let me know on IRC of an issue with the TLS certificate for ci.debian.net, which took me a few iterations to get right. It was missing the intermediate certificates, and is now fixed. You can now enjoy Debian CI over HTTPS.
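
One way to check that a server now sends the complete certificate chain is something along these lines:

$ echo | openssl s_client -connect ci.debian.net:443 -servername ci.debian.net 2>/dev/null | grep 'Verify return code'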

Ruby 2.2 transition

I was able to start the Ruby 2.2 transition, which has the goal of switching to Ruby 2.2 on unstable. The first step was updating ruby-defaults to add support for building Ruby packages for both Ruby 2.1 and Ruby 2.2. This was followed by updates to gem2deb (0.18, 0.18.1, 0.18.2, and 0.18.3) and rubygems-integration. At this point, after a few rebuild requests, only 50 out of 137 packages need to be looked at; some of them just use the default Ruby, so a rebuild once we switch the default will be enough to make them use Ruby 2.2, while others, especially Ruby libraries, will still need porting work or other fixes.

Updated the Chef stack

Bringing the very latest upstream release of chef into unstable was quite some work.

I had to update:

  • ruby-columnize (0.9.0-1)
  • ruby-mime-types (2.6.1-1)
  • ruby-mixlib-log (1.6.0-1)
  • ruby-mixlib-shellout (2.1.0-1)
  • ruby-mixlib-cli (1.5.0-1)
  • ruby-mixlib-config (2.2.1-1)
  • ruby-mixlib-authentication (1.3.0-2)
  • ohai (8.4.0-1)
  • chef-zero (4.2.2-1)
  • ruby-specinfra (2.35.1-1)
  • ruby-serverspec (2.18.0-1)
  • chef (12.3.0-1)
  • ruby-highline (1.7.2-1)
  • ruby-safe-yaml (1.0.4-1)

In the middle I also had to package a new dependency, ruby-ffi-yajl, which was very quickly ACCEPTED thanks to the awesome work of the ftp-master team.

Random bits

  • Sponsored an upload of redir by Lucas Kanashiro
  • chake, a tool that I wrote for managing servers with chef but without a central chef server, got ACCEPTED into the official Debian archive.
  • vagrant-lxc, a vagrant plugin for using lxc as a backend and lxc containers as development environments, was also ACCEPTED into unstable.
  • I got the deprecated ruby-rack1.4 package removed from Debian

02 July, 2015 08:26PM

hackergotchi for Christoph Berg

Christoph Berg

PostgreSQL 9.5 in Debian

Today saw the release of PostgreSQL 9.5 Alpha 1. Packages for all supported Debian and Ubuntu releases are available on apt.postgresql.org:

deb http://apt.postgresql.org/pub/repos/apt/ YOUR_RELEASE_HERE-pgdg main 9.5
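
For instance, on a Debian jessie system the placeholder would be filled in like this:

deb http://apt.postgresql.org/pub/repos/apt/ jessie-pgdg main 9.5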

The package is also waiting in NEW to be accepted for Debian experimental.

Being curious which PostgreSQL releases have been in use over time, I pulled some graphics from Debian's popularity contest data (installation counts per major version over time).

Before we included the PostgreSQL major version in the package name, "postgresql" contained the server, so that line represents the installation count of the pre-7.4 releases at the left end of the graph.

Interestingly, 7.4 reached its installation peak well past 8.1's. Does anyone have an idea why that happened?

02 July, 2015 06:03PM

hackergotchi for Simon Kainz

Simon Kainz

DUCK challenge at DebConf15

New features in DUCK

Carnivore-* data

DUCK now uses carnivore-{names,email} tables from UDD, giving a nice list of packages grouped by Maintainer/Uploader names.

Domain grouping

A per-domain-listing is now also available here.

DUCK challenge at DebConf15

After announcing DUCK in mid-June 2012, the number of source packages with issues is still somewhat stable at around 1700. After a recent update of the curl libs, I also managed to get rid of 200 false positives caused by SSL-verification issues, as can be seen here.

To speed things up a bit and lower the number of broken links, I hereby propose the following challenge:

The first 99 persons who fix at least 1 broken URL and upload the fixed package before end of DebConf15 will get an awesome "200 OK" DUCK-branded lighter at DebConf15!

Army of Lighters

The challenge starts right now!

I will try hard to not forget anyone who fixes packages (note the s ;-), but if you feel missed out, please contact me at DC15.

Also, please remember that this is not a valid excuse to NMU packages ;-).

02 July, 2015 02:45PM by Simon Kainz

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

Laptop Mode Tools 1.67

I am pleased to announce the release of Laptop Mode Tools, version 1.67.

This release has many important bug fixes, and everyone is recommended to upgrade. Of the many, one important fix is to more reliably check for Device Mapper based devices, which are common these days with crypt and LVM.

For the summary of changes to quote from git log:

1.67 - Thu Jul  2 17:05:07 IST 2015
    * Relax minimum window size to accommodate low res screens
    * Fix variable name to comply with our "constants" assumptions
    * Get more aggressive in power saving for Intel HD Audio
    * Account Device Mapper devices
    * Add swsusp freeze support
    * Switch battery-level-polling default to True
    * Detect ethernet carrier, early and reliably
    * Changes the boolean setting *_ACTIVATE_SATA_POWER to a customizable
      *_SATA_POLICY, with backward-compatible defaults and documentation
      Thanks Yuri D'Elia

PS: On a side note, over the years, Linux's power savings functionality has improved a lot, all thanks to its use in the mobile world. At the same time, because more companies ship drivers that depend on external firmware, stability has become less reliable. And to add to that, bare functionality of devices typically asks for disabling, you know what, LPM.

So, at the end, the result is the same.

02 July, 2015 12:11PM by Ritesh Raj Sarraf

Petter Reinholdtsen

MakerCon Nordic videos now available on Frikanalen

Last October I was involved on behalf of NUUG with recording the talks at MakerCon Nordic, a conference for the Maker movement. Since then it has been the plan to publish the recordings on Frikanalen, which finally happened the last few days. A few talks are missing because the speakers asked the organizers not to publish them, but most of the talks are available. The talks are being broadcast on RiksTV channel 50 and using multicast on Uninett, as well as being available from the Frikanalen web site. The unedited recordings are available on Youtube too.

This is the list of talks available at the moment. Visit the Frikanalen video pages to view them.

  • Evolutionary algorithms as a design tool - from art to robotics (Kyrre Glette)
  • Make and break (Hans Gerhard Meier)
  • Making a one year school course for young makers (Olav Helland)
  • Innovation Inspiration - IPR Databases as a Source of Inspiration (Hege Langlo)
  • Making a toy for makers (Erik Torstensson)
  • How to make 3D printer electronics (Elias Bakken)
  • Hovering Clouds: Looking at online tool offerings for Product Design and 3D Printing (William Kempton)
  • Travelling maker stories (Øyvind Nydal Dahl)
  • Making the first Maker Faire in Sweden (Nils Olander)
  • Breaking the mold: Printing 1000’s of parts (Espen Sivertsen)
  • Ultimaker — and open source 3D printing (Erik de Bruijn)
  • Autodesk’s 3D Printing Platform: Sparking innovation (Hilde Sevens)
  • How Making is Changing the World – and How You Can Too! (Jennifer Turliuk)
  • Open-Source Adventuring: OpenROV, OpenExplorer and the Future of Connected Exploration (David Lang)
  • Making in Norway (Haakon Karlsen Jr., Graham Hayward and Jens Dyvik)
  • The Impact of the Maker Movement (Mike Senese)

Part of the reason this took so long was that the scripts NUUG had to prepare a recording for publication were five years old and no longer worked with the current video processing tools (command line argument changes). In addition, we needed better audio normalization, which sent me on a detour to package bs1770gain for Debian. Now this is in place and it became a lot easier to publish NUUG videos on Frikanalen.

02 July, 2015 12:10PM

hackergotchi for Michael Prokop

Michael Prokop

HAProxy with Debian/squeeze clients causing random “Hash Sum mismatch”

Update on 2015-07-02 22:15 UTC: as Petter Reinholdtsen noted in the comments:

“Try adding /etc/apt/apt.conf.d/90squid with content like this:

Acquire::http::Pipeline-Depth 0;

It turns off the feature in apt that confuses proxies.

” – this indeed avoids those “Hash Sum mismatch” failures with HAProxy as well. Thanks, Petter!

Many of you might know apt’s “Hash Sum mismatch” issue and there are plenty of bug reports about it (like #517874, #624122, #743298 + #762079).

Recently I saw the "Hash Sum mismatch" error usually only when using "random" mirrors with e.g. httpredir.debian.org in apt's sources.list; with a static mirror such issues usually don't exist anymore. A customer of mine has a Debian mirror and this issue wasn't a problem there either, until recently:

Since the mirror also includes packages provided to customers and needs to be available 24/7, we decided to provide another instance of the mirror and put those systems behind HAProxy (version 1.5.8-3 as present in Debian/jessie). The HAProxy setup worked fine and we didn't notice any issues in our tests, until the daily Q/A builds randomly started to report failures:

Failed to fetch http://example.org/foobar_amd64.deb Hash Sum mismatch

When repeating the download there was no problem though. This problem only appeared about once every 15-20 minutes with random package files, and it affected only Debian/squeeze clients (wheezy and jessie aren't affected at all). The problem also didn't appear when directly accessing the mirrors behind HAProxy. We tried plenty of different options for apt (Acquire::http::No-Cache=true, Acquire::http::No-Partial=true,…) and also played with some HAProxy configurations; nothing really helped. With apt's "Debug::Acquire::http=True" we saw that there really was a checksum failure and HTTP status code 102 (‘Processing’, or in terms of apt: ‘Waiting for headers’) seems to be involved. The actual problem between apt on Debian/squeeze and HAProxy is still unknown to us though.
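
For reference, that debug output can be obtained with a command along these lines:

$ apt-get -o Debug::Acquire::http=True update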

While digging deeper into this issue is still on my todo list, I found a way to avoid those "Hash Sum mismatch" failures: switch from http to https in sources.list. As soon as https is used the problem doesn't appear anymore. I'm documenting it here just in case anyone else should run into it.

02 July, 2015 10:17AM by mika

hackergotchi for Steve Kemp

Steve Kemp

My new fitness challenge

So recently I posted on twitter about a sudden gain in strength:

To put that more into context I should give a few more details. In the past I've been using an assisted pull-up machine, which offers a counterweight to make such things easier.

When I started the exercise I assumed I couldn't do it for real, so I used the machine and set it on 150lb. Over a few weeks I got as far as being able to use it with only 80lb. (Which means I was lifting my entire body-weight minus 80lb. With the assisted-pullup machine smaller numbers are best!)

One evening I was walking to the cinema with my wife and told her I thought I'd be getting close to doing one real pull-up soon, which sounds a little silly, but I guess is pretty common for random men who are 40, as I almost am. As it happens there was some climbing equipment nearby so I said "Here, see how close I am", and I proceeded to do 1.5 pull-ups. (The second one was bad, and didn't count, as I got 90% of the way "up".)

Having had that success I knew I could do "almost two", and I set a goal for the next gym visit: 3 x 3-pullups. I did that. Then I did two more for fun on the way out (couldn't quite manage a complete set.)

So that's the story of how I went from doing 1.5 pull-ups to doing 11 in less than a week. These days I can easily do 3x3, but struggle with more. It'll come, slowly.

So pull-up vs. chin-up? This just relates to which way you place your hands: palm facing you (chin-up) and palm away from you (pull-up).

There are some technical details here, but chin-ups are easier and more bicep-centric.

Anyway too much writing. My next challenge is the one-armed pushup. However long it takes, and I think it will take a while, that's what I'm working toward.

02 July, 2015 08:18AM

hackergotchi for Norbert Preining

Norbert Preining

An amusing lintian error — Lenna

Well, there we are, trying to build another round of TeX Live packages for Debian, just to realize that the lintian error that should have been downgraded to warning (or removed) is still around, due to doubts about the license. Ok. Well, anyway, but what I found is even more funny:

E: texlive-extra source: license-problem-non-free-img-lenna texmf-dist/doc/latex/reflectgraphics/lenna.jpg

which is about one of the most used images in images processing courses, Lenna:

who-is-lenna
Without comments, I just quote the lintian error … it is a whole lot of fun to read.

Ref: https://en.wikipedia.org/wiki/Lenna, https://www.debian.org/vote/2012/vote_002, #771191
Info: The given source file is cropped from playboy centerfold.

This image is a picture of Lena Söderberg, shot by photographer Dwight Hooker, cropped from the centerfold of the November 1972 issue of Playboy magazine.

According to Hutchison, Jamie (May–June 2001). “Culture, Communication, and an Information Age Madonna” (PDF). IEEE Professional Communication Society Newsletter 45 (3). (page 5 second column second paragraph), this image is distributable but not free.

Moreover, Lenna photo has been pointed to as an example of sexism in the sciences, reinforcing gender stereotypes.

Please use well known and free test image.

Please also submit md5sum, sha1sum, and sha256 of this file as a bug report for lintian.

How fortunate our generation is that we don’t have anything else to care about …

Anyway, back to rebuilding orig.tars, source packages, and binary packages!

02 July, 2015 07:28AM by Norbert Preining

hackergotchi for Lars Wirzenius

Lars Wirzenius

Obnam 1.10 released (backup software)

I have just released version 1.10 of Obnam, my backup program. See the website at http://obnam.org for details on what it does. The new version is available from git (see http://git.liw.fi) and as Debian packages from http://code.liw.fi/debian, and uploaded to Debian, hopefully soon in unstable.

The NEWS file extract below gives the highlights of what's new in this version.

Version 1.10, released 2015-07-01

Major bug fixes:

  • Lars Wirzenius fixed the obnam backup command to lock the whole repository, the same way as obnam forget does, when it removes checkpoint generations. This means that during checkpoint removal, no other client can make a backup, which is unfortunate. To avoid that, set leave-checkpoints = yes in the configuration. That will prevent obnam backup from removing checkpoints.
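
For illustration, the setting would go into an Obnam configuration file; a minimal sketch (the repository path here is hypothetical):

[config]
# repository location is hypothetical; leave-checkpoints keeps checkpoint generations
repository = /srv/backups
leave-checkpoints = yes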

Minor new features:

  • Lars Wirzenius added the obnam list-formats command to list all repository formats.

  • The default value for the upload-queue-size setting is now 1024, chosen based on some benchmarking made by Lars Wirzenius to balance speed and memory use.

  • An EXPERIMENTAL new repository format, green-albatross, has been introduced. It is not ready for actual use, and is only added so that its code doesn't diverge far from the main line of development.

  • Teemu Hukkanen reported that the Synology NAS device returns EACCES instead of ENOENT when a user tries to remove a non-existent file. Obnam now copes with either error code.

Minor fixes:

  • python setup.py build no longer formats the manual page into plain text. This is now done in python setup.py docs instead. The latter is an optional build step, and probably only works on Debian.

  • obnam restore --to=DIR now requires that the directory DIR either doesn't exist, or is empty when the restore starts. This is to prevent users from restoring on top of a running system.
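
A sketch of a restore run under the new behaviour (repository and target paths are hypothetical):

$ obnam restore --repository /srv/backups --to /srv/restore   # /srv/restore must be empty or absent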

02 July, 2015 05:10AM

July 01, 2015

hackergotchi for Daniel Silverstone

Daniel Silverstone

Be careful what you ask for

Date: Wed, 01 Jul 2015 06:13:16 -0000
From: 123-reg <noreply@123-reg.co.uk>
To: dsilvers@digital-scurf.org
Subject: Tell us what you think for your chance to win
X-Mailer: MIME::Lite 3.027 (F2.74; T1.28; A2.04; B3.13; Q3.13)

Tell us what you think of 123-reg!

<!--

.style1 {color: #1996d8}

-->

Well 123-reg mostly I think you don't know how to do email.

01 July, 2015 01:28PM by Daniel Silverstone

hackergotchi for Christian Perrier

Christian Perrier

[LIFE] Running activities

Hello dear readers,

It has been quite some time since I blogged on Planet Debian, so today I just want to give some news to fellow Debian pals.

My involvement in Debian is still there. I'm probably less visible nowadays, but I'm still actively working on some packages, monitoring some i18n activities and doing work on D-I.

But, as you know, running has taken precedence nowadays and is still becoming a growing part of my life (along with my family, of course).

This year, I had a first "summit" running the "Vulcain" trail race in the French "Massif Central" (mountains in Central France), which was an 80km race with 3000m of positive climb. It was run mostly in snow and with quite bad weather conditions, good training for more difficult races. I completed it in a bit more than 12 hours, in a race that finally had less than 60% finishers.

Later on, most races were preparation races for the summer mountain races: I mostly ran three 50km trail races in the Paris area and its neighbourhood. All of them went very well and felt good. Some were run along with friends from the Kikourou.net web community, where I am now very active.

My training was also strongly increased wrt former years (yes that *is* possible), peaking at more than 500km during May, where I was mostly on holidays all month long (lucky man).

And now, the first Great Great Thing of the year is coming: La Montagn'hard, 110 kilometers, about 9000 meters positive climb, around Les Contamines, close to Mont-Blanc in the French Alps.

That is a Big One, indeed. Technically more difficult than the TDS race I ran last August, during DebConf (120km, but "only" 7000 meters climb). Montagn'hard is indeed known as one of the most difficult mountain trail races in France.

I plan to complete it in about 29 hours... but that can indeed be 30, 32 or even 35, who knows what can happen? Given the very high temperatures over Europe this week (they'll peak at about 38°C on Saturday in the Alps), that will be an incredibly difficult challenge and we expect only about 40% finishers.

A live tracking will be available for those who care at http://chrono.geofp.com/montagnhard2015/v3/. Wish me luck!

The next challenge will be at the end of August, with the "Echappee Belle" race: 144km and 10,000 meters positive climb, still in the French Alps (Belledonne range, this time). About 48 hours, or even up to 55, two nights out... harder and hopefully better, faster, stronger... :-)

01 July, 2015 04:37AM

hackergotchi for Junichi Uekawa

Junichi Uekawa

My thermometer (Fplug) is no longer returning temperature.

My thermometer (Fplug) is no longer returning temperature. It does give me humidity. The values don't really look sane either; maybe it's not a great product.

01 July, 2015 12:44AM by Junichi Uekawa

hackergotchi for Steve McIntyre

Steve McIntyre

Quick trip to Sweden

Jo and I spent a few days in Sweden and had an awesome time! The main reason for being there was Leif and Maria's wedding way up north in Skellefteå. They cunningly organised their ceremony for the Midsummer weekend, which was an excellent plan - we had a full weekend of partying while we were there. :-)

the happy couple

We had some time to ourselves while we were there, so we wandered about a little and got to see some of the beautiful coastal countryside.

Trundön

Then on the way home we stopped off in Umeå to visit Mattias Wadenstein (maswan) and his wife Melanie, and he showed me around some of the machines that he's been admining on behalf of Debian. Maybe I'm a sad geek, but I feel quite a bond with one of the machines there, pettersson.debian.org. It's the official CD build machine for Debian, and I've been responsible for thrashing it really hard for the last 5 years or so... :-)

Pettersson and friends

Massive thanks to the University of Umeå and their Academic Computer Club for hosting Debian machines and serving all the CD images for us!

maswan and a lot of disks

The only downsides from the trip were the massive tiredness (midnight sun is pretty, but not conducive to sleep!), the mosquito bites and the nasty plague^Wcold that we picked up while we were there... Ah well. :-)

01 July, 2015 12:06AM

June 29, 2015

hackergotchi for Jonathan McDowell

Jonathan McDowell

What Jonathan Did Next

While I mentioned last September that I had failed to be selected for an H-1B and had been having discussions at DebConf about alternative employment, I never got around to elaborating on what I’d ended up doing.

Short answer: I ended up becoming a law student, studying for a Masters in Legal Science at Queen’s University Belfast. I’ve just completed my first year of the 2 year course and have managed to do well enough in the 6 modules so far to convince myself it wasn’t a crazy choice.

Longer answer: After Vello went under in June I decided to take a couple of months before fully investigating what to do next, largely because I figured I’d either find something that wanted me to start ASAP or fail to find anything and stress about it. During this period a friend happened to mention to me that the applications for the Queen’s law course were still open. He happened to know that it was something I’d considered before a few times. Various discussions (some of them over gin, I’ll admit) ensued and I eventually decided to submit an application. This was towards the end of August, and I figured I’d also talk to people at DebConf to see if there was anything out there tech-wise that I could get excited about.

It turned out that I was feeling a bit jaded about the whole tech scene. Another friend is of the strong opinion that you should take a break at least every 10 years. Heeding her advice I decided to go ahead with the law course. I haven’t regretted it at all. My initial interest was largely driven by a belief that there are too few people who understand both tech and law. I started with interests around intellectual property and contract law as well as issues that arise from trying to legislate for the global nature of most tech these days. However the course is a complete UK qualifying degree (I can go on to do the professional qualification in NI or England & Wales) and the first year has been about public law. Which has been much more interesting than I was expecting (even, would you believe it, EU law). Especially given the potential changing constitutional landscape of the UK after the recent general election, with regard to talk of repeal of the Human Rights Act and a referendum on exit from the EU.

Next year will concentrate more on private law, and I’m hoping to be able to tie that in better to what initially drove me to pursue this path. I’m still not exactly sure which direction I’ll go once I complete the course, but whatever happens I want to keep a linkage between my skill sets. That could be either leaning towards the legal side but with the appreciation of tech, returning to tech but with the appreciation of the legal side of things or perhaps specialising further down an academic path that links both. I guess I’ll see what the next year brings. :)

29 June, 2015 10:22PM

hackergotchi for Lunar

Lunar

Reproducible builds: week 9 in Stretch cycle

What happened about the reproducible builds effort this week:

Toolchain fixes

Norbert Preining uploaded texinfo/6.0.0.dfsg.1-2 which makes texinfo indices reproducible. Original patch by Chris Lamb.

Lunar submitted recently rebased patches to make the file order of files inside .deb stable.

akira filed #789843 to make tex4ht stop printing timestamps in its HTML output by default.

Dhole wrote a patch for xutils-dev to prevent timestamps when creating gzip compressed files.

Reiner Herrmann sent a follow-up patch for wheel to use UTC as the timezone when outputting timestamps.

Mattia Rizzolo started a discussion regarding the failure to build from source of subversion when -Wdate-time is added to CPPFLAGS—which happens when asking dpkg-buildflags to use the reproducible profile. SWIG errors out because it doesn't recognize the aforementioned flag.
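
For context, -Wdate-time comes from dpkg's reproducible build flags feature area; assuming a dpkg version with the timeless feature, the resulting flags can be inspected like this:

$ DEB_BUILD_MAINT_OPTIONS=reproducible=+timeless dpkg-buildflags --get CPPFLAGS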

Trying to get the .buildinfo specification to a more definitive state, Lunar started a discussion on storing the checksums of the binary packages used in the dpkg status database.

akira discovered—while proposing a fix for simgrid—that CMake's internal command to create tarballs records a timestamp in the gzip header. A way to prevent that is to use the GZIP environment variable to ask gzip not to store timestamps, but this will soon become unsupported. It's up for discussion whether the best place to fix the problem would be to fix it for all CMake users at once.
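
The workaround looks roughly like this when invoking tar directly (gzip reads default options from the GZIP environment variable, and -n tells it not to store the timestamp):

$ GZIP=-n tar czf example.tar.gz example/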

Infrastructure-related work

Andreas Henriksson did a delayed NMU upload of pbuilder which adds minimal support for build profiles and includes several fixes from Mattia Rizzolo affecting reproducibility tests.

Niels Thykier uploaded lintian which both raises the severity of package-contains-timestamped-gzip and avoids false positives for this tag (thanks to Tomasz Buchert).

Petter Reinholdtsen filed #789761 suggesting that how-can-i-help should prompt its users about fixing reproducibility issues.

Packages fixed

The following packages became reproducible due to changes in their build dependencies: autorun4linuxcd, libwildmagic, lifelines, plexus-i18n, texlive-base, texlive-extra, texlive-lang.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues but not all of them:

Untested uploads, as they are not in main:

Patches submitted which have not made their way to the archive yet:

  • #789648 on apt-dater by Dhole: allow the build date to be set externally and set it to the time of the latest debian/changelog entry.
  • #789715 on simgrid by akira: fix doxygen and patch CMakeLists.txt to give GZIP=-n for tar.
  • #789728 on aegisub by Juan Picca: get rid of __DATE__ and __TIME__ macros.
  • #789747 on dipy by Juan Picca: set documentation date for Sphinx.
  • #789748 on jansson by Juan Picca: set documentation date for Sphinx.
  • #789799 on tmexpand by Chris Lamb: remove timestamps, hostname and username from the build output.
  • #789804 on libevocosm by Chris Lamb: removes generated files which include extra information about the build environment.
  • #789963 on qrfcview by Dhole: removes the timestamps from the generated PNG icon.
  • #789965 on xtel by Dhole: removes extra timestamps from compressed files by gzip and from the PNG icon.
  • #790010 on simbody by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
  • #790023 on stx-btree by akira: pass HTML_TIMESTAMP=NO to Doxygen.
  • #790034 on siscone by akira: removes $datetime from footer.html used by Doxygen.
  • #790035 on thepeg by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
  • #790072 on libxray-spacegroup-perl by Chris Lamb: set $Storable::canonical = 1 to make space_groups.db.PL output deterministic.
  • #790074 on visp by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
  • #790081 on wfmath by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
  • #790082 on wreport by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
  • #790088 on yudit by Chris Lamb: removes timestamps from the build system by passing a static comment.
  • #790122 on clblas by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
  • #790133 on dcmtk by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
  • #790139 on glfw3 by akira: patch for Doxygen timestamps further improved by James Cowgill by removing $datetime from the footer.
  • #790228 on gtkspellmm by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
  • #790232 on ucblogo by Reiner Herrmann: set LC_ALL to C before sorting.
  • #790235 on basemap by Juan Picca: set documentation date for Sphinx.
  • #790258 on guymager by Reiner Herrmann: use the date from the latest debian/changelog as build date.
  • #790309 on pelican by Chris Lamb: removes useless (and unreproducible) tests.

debbindiff development

debbindiff/23 includes a few bugfixes by Helmut Grohne that result in a significant speedup (especially on larger files). It used to exhibit the quadratic time string concatenation antipattern.

Version 24 was released on June 23rd in a hurry to fix an undefined variable introduced in the previous version. (Reiner Herrmann)

debbindiff now has a test suite! It is written using the PyTest framework (thanks Isis Lovecruft for the suggestion). The current focus has been on the comparators, and we are now at 93% of code coverage for these modules.

Several problems were identified and fixed in the process: paths appearing in the output of javap, readelf, objdump, zipinfo, unsquashfs; useless MD5 checksum and last modified date in javap output; bad handling of charsets in PO files; the destination path for gzip compressed files not ending in .gz; and the fact that only the metadata of cpio archives was actually compared. stat output was further trimmed to make directory comparison more useful.

Having the test suite enabled a refactoring of how comparators were written, switching from a forest of differences to a single tree. This helped remove dust from the oldest parts of the code.

Together with some other small changes, version 25 was released on June 27th. A follow-up release was made the next day to fix a hole in the test suite and the resulting unidentified leftover from the comparator refactoring. (Lunar)

Documentation update

Ximin Luo improved code examples for some proposed environment variables for reference timestamps. Dhole added an example on how to fix timestamps from C pre-processor macros by adding a way to set the build date externally. akira documented her fix for tex4ht timestamps.

Package reviews

94 obsolete reviews have been removed, 330 added and 153 updated this week.

Hats off to Chris West (Faux) who investigated many fail-to-build-from-source issues and reported the relevant bugs.

Slight improvements were made to the scripts for editing the review database, edit-notes and clean-notes. (Mattia Rizzolo)

Meetings

A meeting was held on June 23rd. Minutes are available.

The next meeting will happen on Tuesday 2015-07-07 at 17:00 UTC.

Misc.

The Linux Foundation announced that it was funding the work of Lunar and h01ger on reproducible builds in Debian and other distributions. This was further relayed in a Bits from Debian blog post.

29 June, 2015 09:03PM

Paul Wise

The aliens are amongst us!

Don't worry, they can't cope with our atmosphere.

Alien on the ground

Perhaps they are just playing dead. Don't turn your back if you see one.

Folks may want to use this alien in free software. The original photo is available on request. To the extent possible under law, I have waived all copyright and related or neighboring rights to this work. The alien has signed a model release. An email or a link to this page would be appreciated though.

29 June, 2015 08:29AM

hackergotchi for Norbert Preining

Norbert Preining

The Talos Principle – Solving puzzles using SAT solvers

After my last post on Portal, there was a sale on The Talos Principle, so I got it and started playing. Soon I got stuck at the kind of puzzle where one has to fit pieces into a frame. As a logician I hate solving things by trial and error, so I decided to write a solver for this kind of puzzle, based on a propositional logic encoding and a satisfiability solver. Sometimes it is good to be a logician!

Talos-Puzzles

In the Talos Principle, access to new worlds and specific items is often blocked by gates that open only after fitting sigils into a frame. Of course, collecting the sigils is the most challenging part, but that is often solvable by logical thinking. On the other hand, solving these fitting puzzles drove me crazy, so let us solve them with a SAT solver.

Encoding

I used a propositional encoding that assigns, to each combination of a cell and a sigil, a propositional variable which is true if that sigil rests on that cell in the final solution. That is, we have variables x_i_j_n, where (i,j) runs over the cells of the frame and n over the (numbered) sigils.

Setup

I have written a Perl program that, for a given definition of a puzzle (see later), outputs SMT2 code, which is then checked for satisfiability, and a model generated, with the z3 solver (which is available in Debian).

Necessary assertions

We have to state relations between these propositional variables to obtain a proper solution; in particular, we added the following assertions:

  • every field has at least one sigil on it
  • every field has at most one sigil on it
  • every sigil is used at least once
  • defining equations for the sigil’s form

Let us go through them one by one:

Every field has at least one sigil on it

That is the easy part: for each tile (i,j) we simply assert the disjunction x_i_j_1 or x_i_j_2 or ... or x_i_j_n over all sigils.

In the SMT2 code it would look like

(assert (or x_i_j_1 x_i_j_2 ... x_i_j_n))

Every field has at most one sigil on it

This can be achieved by asserting, for each tile (i,j) and each pair of different sigil numbers m ≠ n, that not both x_i_j_m and x_i_j_n hold:

and in SMT2 code:

(assert (and
  (not (and x_1_1_1 x_1_1_2))
  (not (and x_1_1_1 x_1_1_3))
...
(assert (and
  (not (and x_1_2_1 x_1_2_2))
  (not (and x_1_2_1 x_1_2_3))
...
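
Generating these pairwise exclusions is purely mechanical; here is a quick sketch of the idea in Python (the actual generator in this post is the Perl script linked further down):

from itertools import combinations

def at_most_one_sigil(i, j, num_sigils):
    # one "(not (and ...))" clause per pair of distinct sigil numbers
    pairs = combinations(range(1, num_sigils + 1), 2)
    clauses = [f"(not (and x_{i}_{j}_{m} x_{i}_{j}_{n}))" for m, n in pairs]
    return "(assert (and\n  " + "\n  ".join(clauses) + "))"

print(at_most_one_sigil(1, 1, 3))   # reproduces the x_1_1_* block above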

Every sigil is used at least once

This was a bit of a tricky one. First I thought I would express that every sigil is used exactly once by excluding, for each sigil, assignments with more fields than the sigil has parts. So if a sigil occupies 4 tiles, then every combination of 5 tiles needs to evaluate to false. But with 8×8 or so frames, the number of combinations simply explodes to several million, which brought both my hard drive and z3 to their knees.
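
For scale, those counts are easy to check (the puzzle in this post uses an 8×6 frame; each combination of 5 tiles would need its own assertion for a single 4-tile sigil):

from math import comb

print(comb(36, 5))  # 6x6 frame:   376,992
print(comb(48, 5))  # 8x6 frame: 1,712,304
print(comb(64, 5))  # 8x8 frame: 7,624,512 -- several million indeed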

The better idea was to say that every sigil is used at least once. Since all sigils together exactly fill the frame, this is enough. It can be done easily by asserting, for each sigil n, the disjunction of x_i_j_n over all tiles (i,j):

or in SMT code for a 6×6 frame and sigil number n:

(assert (or x_1_1_n x_1_2_n ... x_6_6_n))

Defining equations for the sigil’s form

Of course the most important part is the defining equations for the various sigils. Here I chose the following path:

  • choose for each sigil form an anchor point
  • for each tile in the frame and each sigil, put the anchor of the sigil on the tile, and express the 4 directions of rotation

So for example for the top-most sigil in the above photo, I chose the anchor point to be the center. If the anchor is placed on tile (i,j), then for the upright position the conjunction

x_(i-1)_j_n and x_i_j_n and x_(i+1)_j_n and x_i_(j+1)_n

has to hold. In the same way, when rotated right, we need the conjunction over the correspondingly rotated tiles. All these options have to be disjunctively connected; in SMT code, for the case where the anchor lies at (4,2):

(assert (or
  ...
  (and x_3_2_n x_4_2_n x_5_2_n x_4_3_n)
  (and x_3_3_n x_3_2_n x_3_1_n x_4_2_n)
  (and x_3_2_n x_4_2_n x_5_2_n x_4_1_n)
...

When generating these equations one has to be careful not to include rotated sigils that stick out of the frame, though.

Although the above might not be the optimal encoding, the given assertions suffice to check for SAT and produce a model, which allows me to solve the riddles.

Implementation in Perl

To generate the SMT2 code, I used a Perl script, which was very quickly hacked together. The principal function is (already coded for the above riddle):

create_smt2_def(8,6,'a','a','b','cl','cl','cr','cr','cr','q','q','sl','sl');

where the first two arguments define the size of the frame, and the rest are codes for sigil types:

  • a podest, the first sigil in the above screen shot
  • b stick, the third sigil above, the long stick
  • cl club left, the fourth sigil above, a club facing left
  • cr club right, the sixth sigil above, a club facing right
  • q square, the ninth sigil above
  • sl step left, the last sigil in the above image
  • sr step right, mirror of step left (not used above)

This function first sets up the header of the smt2 file, followed by shipping out all the necessary variable definitions (in SMT these are defined as Boolean functions) and the other assertions (please see the Perl code linked below for details). The most interesting part is the definitions of the sigils:

  # for each piece, call the defining assertions
  for my $n (1..$nn) {
    my $p = $pieces[$n-1];
    print "(assert (or\n";
    for my $i (1..$xx) {
      for my $j (1..$yy) {
        if ($p eq 'q') { 
          type_square($xx,$yy,$i,$j,$n); 
        } elsif ($p eq 'a') {
          type_potest($xx,$yy,$i,$j,$n);
    ....

Every sigil type has its own definition; in the case of the a (podest) sigil, the type_potest function:

sub type_potest {
  my ($xx,$yy,$i,$j,$n) = @_;
  my ($il, $jl, $ir, $jr, $iu, $ju);
  $il = $i - 1; $ir = $i + 1; $iu = $i;
  $jl = $jr = $j; $ju = $j + 1;
  do_rotate_shipout($xx,$yy, $i, $j, $n, $il, $jl, $ir, $jr, $iu, $ju);
}

This function is prototypical: one defines the tiles a sigil occupies when the anchor is placed on (i,j) in an arbitrary fixed orientation, and then calls do_rotate_shipout on the list of occupied tiles. This function in turn is very simple:

sub do_rotate_shipout {
  my ($xx,$yy, $i, $j, $n, @pairs) = @_ ;
  for my $g (0..3) {
    @pairs = rotate90($i, $j, @pairs);
    check_and_shipout($xx,$yy, $n, $i, $j, @pairs);
  }
}

as it only rotates four times by 90 degrees, and then checks whether the rotated sigil lies completely within the frame, and if so ships out the assertion code. The rotation is done by multiplying the vector from (i,j) to the tile position with the rotation matrix ((0, -1), (1, 0)) and adding the result to (i,j):

sub rotate90 {
  my ($i, $j, @pairs) = @_ ;
  my @ret;
  while (@pairs) {
    my $ii = shift @pairs;
    my $jj = shift @pairs;
    my $ni = $i - ($jj - $j);
    my $nj = $j + ($ii - $i);
    push @ret, $ni, $nj;
  }
  return @ret;
}

There are a few more functions; for those interested, the full Perl code is here: tangram.pl. There is no user interface nor any config file reading, I just edit the source code whenever I need to solve a riddle.

Massaging the output

Last but not least, the output of the z3 solver is a bit noisy, so I run it through a few Unix commands to keep only the true assignments, which gives me the locations of the tiles. That is, I run the following pipeline:

perl tangram.pl | z3 -in | egrep 'define-fun|true|false'  | sed -e 'h;s/.*//;G;N;s/\n//g' | grep true | sort

which produces a list like the following as output:

  (define-fun x_1_1_10 () Bool    true)
  (define-fun x_1_2_10 () Bool    true)
  (define-fun x_1_3_5 () Bool    true)
  (define-fun x_1_4_6 () Bool    true)
  (define-fun x_1_5_6 () Bool    true)
  (define-fun x_1_6_6 () Bool    true)
  (define-fun x_2_1_10 () Bool    true)
  (define-fun x_2_2_10 () Bool    true)
  (define-fun x_2_3_5 () Bool    true)
  ...

from which I can read off the solution that puts the tenth sigil (a square) in the lower left corner:
Talos-Puzzle-solved
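
Reading positions off such a list by hand is still tedious, so a small throwaway script could render the model as a grid of sigil numbers instead. A hypothetical Python helper, assuming only the variable naming used above:

import re
import sys

# Read z3's model from stdin; \s* tolerates the line break that z3
# inserts before "true", so no sed massaging is needed.
text = sys.stdin.read()
grid = {}
for i, j, n in re.findall(r'x_(\d+)_(\d+)_(\d+)\s*\(\)\s*Bool\s*true', text):
    grid[(int(i), int(j))] = int(n)

width = max(i for i, _ in grid)
height = max(j for _, j in grid)
for j in range(height, 0, -1):  # print the top row first
    print(" ".join(f"{grid.get((i, j), 0):2d}" for i in range(1, width + 1)))

It could then be run as perl tangram.pl | z3 -in | python3 grid.py (the script name is of course made up).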

29 June, 2015 12:21AM by Norbert Preining

June 28, 2015

hackergotchi for Ben Armstrong

Ben Armstrong

Bluff Trail – Early Summer 2015

Here’s a photo journal of a walk I just completed around the Pot Lake loop of the Bluff Wilderness Hiking Trail. Hope you enjoy it!

I was dissatisfied with my initial post, so have reduced the size to improve load time, changed the gallery software and have rewritten many of the captions.


My favourite stunted tamarack, clinging to the rocks.

The Bluff Wilderness Hiking Trail is a series of four loops. Today I’ll tackle only the first in order to get as many pictures as possible, and because when hiking solo, I prefer to stay on the more heavily traveled part. In late summer, I’ll probably do all four loops with my friend Ryan again. Meanwhile, I’ll stay in shape by coming out here when I can for shorter walks.

The trail starts here. Right away, even before heading between the marker stones, there’s a pretty view of Cranberry Lake off to the right. Pink lady slippers have been plentiful this year! All along the trail, you can find hundreds, if not thousands of these. Boardwalks provide dry passage across the boggy bits. The pitcher plants are thriving, too. The vines with tiny, round leaves are wintergreen. Tasty! I’ve not seen anything larger than a deer out here.

Roots and rocks are a recurring theme. Many feet beating down on the first loop have packed the earth hard and have exposed more roots than you’ll see deeper into the trail system where fewer hikers travel. Step carefully. Sheep laurel is an eye-catcher. To me, they look like little, pink jewels. Yellow markers clearly mark the Pot Lake loop. At crucial junctures there are some signs to point you the right way. If you’re looking only at your feet, you may see black circles marked with arrows on some of the rocks, also pointing the way.

The vegetation varies from scraggly and clinging to the rocks, to lush and green. The path is wet in parts. Nothing impassable, though. On the first loop in particular, and even within only the first kilometre, there are some pretty stunning views of the lake. Very quickly, you’ll find yourself perched up top on the rocks. On this stretch, you need to hop a bit from one rock to the next. Just as quickly as you ascended, you descend back down to the lake again. A good place to stop and have a snack.

The start of the first loop, itself. I chide myself for not having discovered this trail until about 2007, even though we moved out here in 2005. It’s a treasure we’re indebted to the WRWEO for preserving. Leave some comments in the box, if you like. You’ll find these maps along the way to track your progress. Remember to follow proper trail etiquette.

A particularly steep climb! Despite the best trail maintenance efforts, water still goes where it wants! Good to have boots. Another steep climb. This tree is chattering at me. Raising quite a racket. What’s in the hole? I patiently wait with my camera to get a good shot of the tree’s occupants. Mother woodpecker, feeding her babies! As the feeding continues, the chirping intensifies!

Up on top of a big rock with a view over the lake is another favourite stopping place for a snack. The view at my snack break rock. I never get tired of it. There are some pretty big boulders. Next time I’ll bring a friend to stand under it for scale. Another terrific view. A stream just before the first portage. This portage connects Cranberry Lake with Pot Lake. The portage is clearly marked. The Pot Lake end of the portage.

One of several upturned trees, with the roots now forming a wall along one side of the trail. The spiral of the roots mimics the spiral split in this boulder which you can walk all of the way into until you reach the centre. Rocks and trees and trees and rocks … White pine catkins. Suddenly the close forest opens up into a wide view all around. My favourite stunted tamarack, clinging to the rocks. Last year’s cones. This year’s aren’t out yet on this tree. I love the tamarack’s delicate new needles. These blueberries have a head start. Can’t wait until it’s time to harvest them.

Nearly at the top of the Pot Lake loop. This is Pot Lake, itself. I was struck by this one little blighted berry, showing a blush of distress-induced colour amongst the green ones. Another view, nearly at the top. A sign beneath the map: Rare Plant Species: Mountain Sandwort – Arenaria groenlandica. Use caution and stay on designated trails. Avoid disturbing habitat. Finally, the top. A map to mark your progress. Apparently the Mountain Sandwort, right at the base of the sign. The mountain sandwort is a pretty, delicate little thing. After passing the sign, the view from the top over Cranberry Lake, indeed both lakes, is a reward worth climbing all of the way up to see. Some other hiker has left a cairn.

The descent is not too steep at first. The approach down to the back side of Pot Lake from the top is marked by exposed bedrock and scrubby plants. Finally back to some cover, which is welcome on hotter days. A ferny fairyland. It’s not all abrupt ups and downs. Here’s an easy, winding stretch of trail. Cinnamon ferns. Ambling along. Rounding the end of Pot Lake. Not very long after reaching the lake, the path climbs several metres up above it. There are some nice, rootless bits on this side of the loop, giving your ankles and knees a break. This guy seemed extremely agitated to see me. A new tamarack cone! Did I mention I like tamaracks? The steep descent back to Cranberry Lake again.

Finally back where the loop starts.


28 June, 2015 07:22PM by Ben Armstrong

Sven Hoexter

moto g GPS reset when it is not working with CM 12.1

There seems to be an issue with the moto g, CM 12.1 (nightlies) and GPS. My GPS receiver stopped working as well, and I could recover it with the following steps in fastboot mode, as described on xda-developers.

fastboot erase modemst1
fastboot erase modemst2
fastboot reboot

That even works with the 4.2.2 fastboot packaged in android-tools-fastboot.

28 June, 2015 07:06PM

Russell Coker

RAID Pain

One of my clients has a NAS device. Last week they tried to do what should have been a routine RAID operation, they added a new larger disk as a hot-spare and told the RAID array to replace one of the active disks with the hot-spare. The aim was to replace the disks one at a time to grow the array. But one of the other disks had an error during the rebuild and things fell apart.

I was called in after the NAS had been rebooted when it was refusing to recognise the RAID. The first thing that occurred to me is that maybe RAID-5 isn’t a good choice for the RAID. While it’s theoretically possible for a RAID rebuild to not fail in such a situation (the data that couldn’t be read from the disk with an error could have been regenerated from the disk that was being replaced) it seems that the RAID implementation in question couldn’t do it. As the NAS is running Linux I presume that at least older versions of Linux have the same problem. Of course if you have a RAID array that has 7 disks running RAID-6 with a hot-spare then you only get the capacity of 4 disks. But RAID-6 with no hot-spare should be at least as reliable as RAID-5 with a hot-spare.

Whenever you recover from disk problems the first thing you want to do is make a read-only copy of the data; then you can’t make things worse. This is a problem when you are dealing with 7 disks; fortunately they were only 3TB disks and each had only 2TB in use. So I found some space on a ZFS pool and bought a few 6TB disks which I formatted as BTRFS filesystems. For this task I only wanted filesystems that support snapshots, so I could work on snapshots rather than on the original copy.

I expect that at some future time I will be called in when an array of 6+ disks of the largest available size fails. This will be a more difficult problem to solve as I don’t own any system that can handle so many disks.

I copied a few of the disks to a ZFS filesystem on a Dell PowerEdge T110 running kernel 3.2.68. Unfortunately that system seems to have a problem with USB, when copying from 4 disks at once each disk was reading about 10MB/s and when copying from 3 disks each disk was reading about 13MB/s. It seems that the system has an aggregate USB bandwidth of 40MB/s – slightly greater than USB 2.0 speed. This made the process take longer than expected.

One of the disks had a read error, which was presumably the cause of the original RAID failure. dd has the option conv=noerror to make it continue after a read error. This initially seemed good, but the resulting file was smaller than the source partition. It turns out that conv=noerror doesn’t seek the output file to maintain input and output alignment. If I had a hard drive filled with plain ASCII that MIGHT even be useful, but for a filesystem image it’s worse than useless. The only option was to repeatedly run dd with matching skip and seek options, incrementing by 1K, until it had passed the section with errors.
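
As an illustration, here is a minimal Python sketch of an alignment-preserving copy, assuming the same 1K granularity: on a read error it writes a zero-filled block of the same size, which is essentially what dd conv=noerror,sync (or, better, GNU ddrescue) does for you:

BLOCK = 1024  # 1K granularity, matching the skip/seek increments above

def rescue_copy(src_path, dst_path):
    with open(src_path, 'rb') as src, open(dst_path, 'wb') as dst:
        offset = 0
        while True:
            src.seek(offset)
            try:
                block = src.read(BLOCK)
            except OSError:
                block = b'\x00' * BLOCK  # unreadable: pad to keep alignment
            if not block:                # clean EOF
                break
            dst.write(block)
            offset += BLOCK

rescue_copy('/dev/loop0', 'disk0.img')   # hypothetical device/image names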

for n in /dev/loop[0-6] ; do echo $n ; mdadm --examine -v -v --scan $n|grep Events ; done

Once I had all the images I had to assemble them. The Linux Software RAID didn’t like the array because not all the devices had the same event count. The way Linux Software RAID (and probably most RAID implementations) works is that each member of the array has an event counter that is incremented when disks are added, removed, and when data is written. If there is an error then after a reboot only disks with matching event counts will be used. The above command shows the Events count for all the disks.

Fortunately different event numbers aren’t going to stop us. After assembling the array (which failed to run) I ran “mdadm -R /dev/md1” which kicked some members out. I then added them back manually and forced the array to run. Unfortunately attempts to write to the array failed (presumably due to mismatched event counts).

Now my next problem is that I can make a 10TB degraded RAID-5 array which is read-only but I can’t mount the XFS filesystem because XFS wants to replay the journal. So my next step is to buy another 2*6TB disks to make a RAID-0 array to contain an image of that XFS filesystem.

Finally backups are a really good thing…

28 June, 2015 10:31AM by etbe

June 27, 2015

hackergotchi for Christian Perrier

Christian Perrier

Bugs #780000 - 790000

Thorsten Glaser reported Debian bug #780000 on Saturday March 7th 2015, against the gcc-4.9 package.

Bug #770000 was reported as of November 18th so there have been 10,000 bugs in about 3.5 months, which was significantly slower than earlier.

Matthew Vernon reported Debian bug #790000 on Friday June 26th 2015, against the pcre3 package.

Thus, there have been 10,000 bugs in 3.5 months again. It seems that the bug report rate stabilized again.

Sorry for missing the bug #780000 announcement. I've been doing this since... November 2007, for bug #450000, and it seems that this lack of attention is somehow significant with regard to my involvement in Debian. Still, that involvement is still here and I'll try to "survive" in the project until we reach bug #1000000... :-)

See you for the bug #800000 announcement and the result of the bets we placed on the date it would happen.

27 June, 2015 06:13AM

June 25, 2015

hackergotchi for Norbert Preining

Norbert Preining

TeX Live 2015 hits Debian/unstable

Here we go, I just uploaded 15 packages to the Debian archive that bring TeX Live in Debian up to the 2015 release (and a bit newer)!

Debian - TeX Live 2015

Uploaded packages are asymptote, biber, context, context-modules, jadetex, musixtex, pmx, tex-common, texinfo, texinfo-doc-nonfree, texlive-base, texlive-bin, texlive-extra, texlive-lang, xmltex.

The packages are basically what has been in experimental for quite some time, plus a checkout of tlnet from yesterday. For details on the changes and the new packaging, please consult this post.

So, now let the flood of bug reports begin, but in the mean time, enjoy!

25 June, 2015 11:03PM by Norbert Preining

June 24, 2015

TeX Live Manager News June 2015

TeX Live 2015 has been released, and normal operation with daily updates has started. During the freeze time and afterwards I have made a few changes to the TeX Live Manager (tlmgr) that I want to highlight here.

texlive-tlmgr

The main changes are better error and return code handling (which should be hardly visible to users), and more informative output from the tlmgr info action, incorporating more data from the TeX Catalogue.

Error handling

With a program that started as an experiment and has grown into the central configuration and management program, there are lots of old code pieces that did not do proper error signaling via return values. That meant that the return value of a tlmgr run didn’t have any meaning, mostly because it was 0 (success) most of the time.

I have now tried to do proper return code handling throughout the tlmgr code base, that is, tlmgr.pl and the necessary Perl modules.

While this should not be a user visible change, it turned out that the MacOS TeX Live Utility by Adam Maxwell (btw, a great program; it would be nice to have something similar written for Unix, replacing the somewhat clumsy tlmgr gui) got broken for paper configuration, due to forgotten return value fixes in the TLPaper.pm module. That is fixed now in our repository.

All in all we do hope that the return value of a tlmgr run now gives proper information about success or error. I might add a bit more semantics by returning bit-values in case of errors, but this is in the early stages of thinking.

TeX Catalogue data in tlmgr info

Since more or less the very beginning we have incorporated information from the TeX Catalogue into our database. In particular, we carried over the license information, version, CTAN directory, and date of last change from the Catalogue.

Recently (or not so recently, I actually don’t know), CTAN has enriched their package view with more information, in particular a list of topics and a list of related packages. Take for example the Asana-math package. Its CTAN page now displays, besides the previously available information, a list of topics and a list of related packages. The topic index can also be browsed directly when searching for a specific package.

I have now added functionality to the TeX Live Manager so that tlmgr info also prints out the topic names and related packages. In the case of the Asana Math fonts, that would look like:

$ tlmgr info Asana-Math
package:     Asana-Math
category:    Package
shortdesc:   A font to typeset maths in Xe(La)TeX and Lua(La)TeX.
longdesc:    The Asana-Math font is an OpenType font that includes almost all mathematical Unicode symbols and it can be used to typeset mathematical text with any software that can understand the MATH OpenType table (e.g., XeTeX 0.997 and Microsoft Word 2007). The font is beta software. Typesetting support for use with LaTeX is provided by the fontspec and unicode-math packages.
installed:   Yes
revision:    37556
sizes:       doc: 9k, run: 1177k
relocatable: No
cat-version: 000.955
cat-date:    2015-06-02 20:04:19 +0200
cat-license: ofl
cat-topics:  font font-maths font-otf font-ttf
cat-related: stix xits
collection:  collection-fontsextra

GUIs could use the topic names and related packages to link directly to the CTAN page.

At the moment the related packages are named according to CTAN standards, which are a bit different from what we use in TeX Live. I am not sure whether I will change that, or ship out both names. We will see.


The changes are currently in testing, see section about Test version here, and will be pushed out in due time, probably in the next week.

As usual, in case of any problems or bugs, please contact us at the TeX Live mailing list.

Enjoy.

24 June, 2015 11:55PM by Norbert Preining

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

JSONB in Postgres

PostgreSQL continues to amaze. Load 45 MB (47 581 294 bytes) of JSON into a single-column table with a generic index, and voila:

sesse=# \timing
Timing is on.
sesse=# select jsonb_extract_path(contents, 'short_score') from analysis where contents @> '{"position":{"fen":"rnbqkb1r/pp3ppp/2p1pn2/3p2B1/2PP4/2N2N2/PP2PPPP/R2QKB1R b KQkq - 1 5"}}';
 jsonb_extract_path
--------------------
 "+0.17"
(1 row)
  
Time: 2,286 ms

Millisecond-level arbitrary JSON queries.
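
The setup behind this is presumably nothing more than a single jsonb column with a generic GIN index; a hedged sketch (database, table, and column names taken from the query above):

import psycopg2

# Hedged sketch: one jsonb column plus a generic GIN index is all the
# query above needs; the jsonb_path_ops operator class would give a
# smaller index if only @> containment queries were used.
conn = psycopg2.connect(dbname='sesse')
cur = conn.cursor()
cur.execute('CREATE TABLE analysis (contents jsonb)')
cur.execute('CREATE INDEX ON analysis USING gin (contents)')
conn.commit()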

(In the end, I designed the database more traditionally SQL-like, but it was fun to see that this would actually work.)

Update to clarify: That's a little over 2 milliseconds, not 2286 milliseconds.

24 June, 2015 09:22PM

Russell Coker

Smart Phones Should Measure Charge Speed

My first mobile phone lasted for days between charges. I never really found out how long its battery would last because there was no way that I could use it to deplete the charge in any time that I could spend awake. Even if I had managed to run the battery out, the phone was designed to accept 4*AA batteries (its rechargeable battery pack was exactly that size) so I could buy spare batteries at any store.

Modern phones are quite different in physical phone design (phones that weigh less than 4*AA batteries aren’t uncommon), functionality (fast CPUs and big screens suck power), and use (games really drain your phone battery). This requires much more effective chargers, when some phones are intensively used (EG playing an action game with Wifi enabled) they can’t be charged as they use more power than the plug-pack supplies. I’ve previously blogged some calculations about resistance and thickness of wires for phone chargers [1], it’s obvious that there are some technical limitations to phone charging based on the decision to use a long cable at ~5V.

My calculations about phone charge rate were based on the theoretical resistance of wires based on their estimated cross-sectional area. One problem with such analysis is that it’s difficult to determine how thick the insulation is without destroying the wire. Another problem is that after repeated use of a charging cable some conductors break due to excessive bending. This can significantly increase the resistance and therefore increase the charging time. Recently a charging cable that used to be really good suddenly became almost useless. My Galaxy Note 2 would claim that it was being charged even though the reported level of charge in the battery was not increasing; it seems that the cable only supplied enough power to keep the phone running, not enough to actually charge the battery.

I recently bought a USB current measurement device which is really useful. I have used it to diagnose power supplies and USB cables that didn’t work correctly. But one significant way in which it fails is in the case of problems with the USB connector. Sometimes a cable performs differently when connected via the USB current measurement device.

The CurrentWidget program [2] on my Galaxy Note 2 told me that all of the dedicated USB chargers (the 12V one in my car and all the mains powered ones) supply 1698mA (including the ones rated at 1A) while a PC USB port supplies ~400mA. I don’t think that the Note 2 measurement is particularly reliable. On my Galaxy Note 3 it always says 0mA, I guess that feature isn’t implemented. An old Galaxy S3 reports 999mA of charging even when the USB current measurement device says ~500mA. It seems to me that the method CurrentWidget uses to get the current isn’t accurate, if it even works at all.

Android 5 on the Nexus 4/5 phones will tell the amount of time until the phone is charged in some situations (on the Nexus 4 and Nexus 5 that I used for testing it didn’t always display it and I don’t know why). This is useful, but it’s still not good enough.

I think that what we need is to have the phone measure the current that’s being supplied and report it to the user. Then, when a phone charges slowly because apps are using power, that won’t be mistaken for slow charging due to a defective cable or connector.

24 June, 2015 02:00AM by etbe

June 23, 2015

Sandro Tosi

CFEngine: upgrade Debian packages

say you use CFEngine to install Debian packages on your server, so it's likely you'll have a bundle looking like this:

bundle agent agentname
{
    vars:

        "packages" slist => {
                             "pkg1",
                             "pkg2",
                             "pkg3"
                            };

    packages:

        "$(packages)"
            package_policy => "addupdate",
            package_method => apt_get;

}

this works great to guarantee those packages are installed, but if a newer version is available in the repositories, that won't be installed. If you want CFEngine to do that too, then the web suggests this trick:

    packages:

        "$(packages)"
            package_policy => "addupdate",
            package_version => "999999999",
            package_method => apt_get;

which tweaks the install request, declaring that you want to install version 999999999 of each package; if a higher version than the one installed is available, CFEngine will happily upgrade it for you. It works great... but sometimes it doesn't. Why oh why?

That's because Debian versions can have an epoch: every plain version (like 1.0-1) has an implicit epoch of 0, and the same goes for the 999999999 above. That means if any of the installed packages has an epoch, that version will sort higher than 999999999 and the package won't be upgraded. If you want to be sure to upgrade every package, then the right solution is:

    packages:

        "$(packages)"
            package_policy => "addupdate",
            package_version => "9:999999999",
            package_method => apt_get;
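
The ordering is easy to verify with dpkg itself; a small Python check (the package versions here are just examples):

import subprocess

def newer(a, b):
    # dpkg --compare-versions exits 0 when the relation holds
    r = subprocess.run(["dpkg", "--compare-versions", a, "gt", b])
    return r.returncode == 0

print(newer("1:1.0-1", "999999999"))    # True:  epoch 1 beats implicit epoch 0
print(newer("1:1.0-1", "9:999999999"))  # False: epoch 9 wins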

23 June, 2015 07:37PM by Sandro Tosi (noreply@blogger.com)

Bits from Debian

Reproducible Builds get funded by the Core Infrastructure Initiative

The Core Infrastructure Initiative announced today that they will support two Debian Developers, Holger Levsen and Jérémy Bobbio, with $200,000 to advance their Debian work in reproducible builds and to collaborate more closely with other distributions such as Fedora, Ubuntu, and OpenWrt so that they too benefit from this effort.

The Core Infrastructure Initiative (CII) was established in 2014 to fortify the security of key open source projects. This initiative is funded by more than 20 companies and managed by The Linux Foundation.

The reproducible builds initiative aims to enable anyone to reproduce bit-by-bit identical binary packages from a given source, thus enabling anyone to independently verify that a binary matches the source code from which it was said to be derived. For example, this allows users of Debian to rebuild packages and obtain packages exactly identical to the ones provided by the Debian repositories.

23 June, 2015 12:00PM by Ana Guerrero Lopez

hackergotchi for Ben Armstrong

Ben Armstrong

Debian Live Rescue needs some love

You may have noticed that Jessie no longer includes the useful rescue flavour of live image, formerly included in Wheezy and earlier releases, and neither will Stretch unless you take action. This is my second public call for help this year to revive it. So if you care about rescue, here’s how you can help:

  • First, try a self-built image, based on the old live-image-rescue configuration. While Jessie still contains the live-image-rescue configuration for live-build as a starting point, to successfully build this image for yourself, you need to edit the package lists to drop or substitute any packages that aren’t in the archive. As of writing, this includes libphash0, mii-diag, denyhosts, hal and emacs23-nox. (Tip: for the latter, substitute emacs24-nox.)
  • Join or form a team to maintain the rescue metapackages in the long term. All of the official Debian Live images are based on metapackages that are looked after by various other teams (principally the desktop teams), with rescue being the sole exception. The old package lists include some forensics packages, so you may wish to contact Debian Forensics, but I don’t want to presume they’ll take it on.
  • Have your team decide on what a rescue system should include. You might start with the old lists, spruced up a bit just to make the image build, or you might take an entirely different tack. This is your project, so it’s up to you.
  • File a bug on tasksel, preferably with patch, to include a task-forensics and/or task-rescue task (or whatever you decide the task or tasks should be called).
  • File a bug on the live-images package to include your work.

If you have any questions not answered in this post, please feel free to leave a comment on this blog, talk to the Debian Live team on irc (I’m SynrG, and hang out with the team at #debian-live @ irc.oftc.net) or drop us an email at debian-live@lists.debian.org.

23 June, 2015 11:16AM by Ben Armstrong

Russell Coker

One Android Phone Per Child

I was asked for advice on whether children should have access to smart phones, it’s an issue that many people are discussing and seems worthy of a blog post.

Claimed Problems with Smart Phones

The first thing that I think people should read is this XKCD post with quotes about the demise of letter writing from 99+ years ago [1]. Given the lack of evidence cited by people who oppose phone use I think we should consider to what extent the current concerns about smart phone use are just reactions to changes in society. I’ve done some web searching for reasons that people give for opposing smart phone use by kids and addressed the issues below.

Some people claim that children shouldn’t get a phone when they are so young that it will just be a toy. That’s interesting given the dramatic increase in the amount of money spent on toys for children in recent times. It’s particularly interesting when parents buy game consoles for their children but refuse mobile phone “toys” (I know someone who did this). I think this is more of a social issue regarding what is a suitable toy than any real objection to phones used as toys. Obviously the educational potential of a mobile phone is much greater than that of a game console.

It’s often claimed that kids should spend their time reading books instead of using phones. When visiting libraries I’ve observed kids using phones to store lists of books that they want to read, this seems to discredit that theory. Also some libraries have Android and iOS apps for searching their catalogs. There are a variety of apps for reading eBooks, some of which have access to many free books but I don’t expect many people to read novels on a phone.

Cyber-bullying is the subject of a lot of anxiety in the media. At least with cyber-bullying there’s an electronic trail, anyone who suspects that their child is being cyber-bullied can check that while old-fashioned bullying is more difficult to track down. Also while cyber-bullying can happen faster on smart phones the victim can also be harassed on a PC. I don’t think that waiting to use a PC and learn what nasty thing people are saying about you is going to be much better than getting an instant notification on a smart phone. It seems to me that the main disadvantage of smart phones in regard to cyber-bullying is that it’s easier for a child to participate in bullying if they have such a device. As most parents don’t seem concerned that their child might be a bully (unfortunately many parents think it’s a good thing) this doesn’t seem like a logical objection.

Fear of missing out (FOMO) is claimed to be a problem, apparently if a child has a phone then they will want to take it to bed with them and that would be a bad thing. But parents could have a policy about when phones may be used and insist that a phone not be taken into the bedroom. If it’s impossible for a child to own a phone without taking it to bed then the parents are probably dealing with other problems. I’m not convinced that a phone in bed is necessarily a bad thing anyway, a phone can be used as an alarm clock and instant-message notifications can be turned off at night. When I was young I used to wait until my parents were asleep before getting out of bed to use my PC, so if smart-phones were available when I was young it wouldn’t have changed my night-time computer use.

Some people complain that kids might use phones to play games too much or talk to their friends too much. What do people expect kids to do? In recent times the fear of abduction has led to children playing outside a lot less, it used to be that 6yos would play with other kids in their street and 9yos would be allowed to walk to the local park. Now people aren’t allowing 14yo kids to walk to the nearest park alone. Playing games and socialising with other kids has to be done over the Internet because kids aren’t often allowed out of the house. Play and socialising are important learning experiences that have to happen online if they can’t happen offline.

Apps can be expensive. But it’s optional to sign up for a credit card with the Google Play store and the range of free apps is really good. Also the default configuration of the app store is to require a password entry before every purchase. Finally it is possible to give kids pre-paid credit cards and let them pay for their own stuff, such pre-paid cards are sold at Australian post offices and I’m sure that most first-world countries have similar facilities.

Electronic communication is claimed to be somehow different and lesser than old-fashioned communication. I presume that people made the same claims about the telephone when it first became popular. The only real difference between email and posted letters is that email tends to be shorter because the reply time is smaller, you can reply to any questions in the same day not wait a week for a response so it makes sense to expect questions rather than covering all possibilities in the first email. If it’s a good thing to have longer forms of communication then a smart phone with a big screen would be a better option than a “feature phone”, and if face to face communication is preferred then a smart phone with video-call access would be the way to go (better even than old fashioned telephony).

Real Problems with Smart Phones

The majority opinion among everyone who matters (parents, teachers, and police) seems to be that crime at school isn’t important. Many crimes that would result in jail sentences if committed by adults receive either no punishment or something trivial (such as lunchtime detention) if committed by school kids. Introducing items that are both intrinsically valuable and which have personal value due to the data storage into a typical school environment is probably going to increase the amount of crime. The best options to deal with this problem are to prevent kids from taking phones to school or to home-school kids. Fixing the crime problem at typical schools isn’t a viable option.

Bills can potentially be unexpectedly large due to kids’ inability to restrain their usage and telcos deliberately making their plans tricky to profit from excess usage fees. The solution is to only use pre-paid plans, fortunately many companies offer good deals for pre-paid use. In Australia Aldi sells pre-paid credit in $15 increments that lasts a year [2]. So it’s possible to pay $15 per year for a child’s phone use, have them use Wifi for data access and pay from their own money if they make excessive calls. For older kids who need data access when they aren’t at home or near their parents there are other pre-paid phone companies that offer good deals, I’ve previously compared prices of telcos in Australia, some of those telcos should do [3].

It’s expensive to buy phones. The solution to this is to not buy new phones for kids, give them an old phone that was used by an older relative or buy an old phone on ebay. Also let kids petition wealthy relatives for a phone as a birthday present. If grandparents want to buy the latest smart-phone for a 7yo then there’s no reason to stop them IMHO (this isn’t a hypothetical situation).

Kids can be irresponsible and lose or break their phone. But the way kids learn to act responsibly is by practice. If they break a good phone and get a lesser phone as a replacement or have to keep using a broken phone then it’s a learning experience. A friend’s son head-butted his phone and cracked the screen – he used it for 6 months after that, I think he learned from that experience. I think that kids should learn to be responsible with a phone several years before they are allowed to get a “learner’s permit” to drive a car on public roads, which means that they should have their own phone when they are 12.

I’ve seen an article about a school finding that tablets didn’t work as well as laptops which was touted as news. Laptops or desktop PCs obviously work best for typing. Tablets are for situations where a laptop isn’t convenient and when the usage involves mostly reading/watching, I’ve seen school kids using tablets on excursions which seems like a good use of them. Phones are even less suited to writing than tablets. This isn’t a problem for phone use, you just need to use the right device for each task.

Phones vs Tablets

Some people think that a tablet is somehow different from a phone. I’ve just read an article by a parent who proudly described their policy of buying “feature phones” for their children and tablets for them to do homework etc. Really a phone is just a smaller tablet, once you have decided to buy a tablet the choice to buy a smart phone is just about whether you want a smaller version of what you have already got.

The iPad doesn’t appear to be able to make phone calls (but it supports many different VOIP and video-conferencing apps) so that could technically be described as a difference. AFAIK all Android tablets that support 3G networking also support making and receiving phone calls if you have a SIM installed. It is awkward to use a tablet to make phone calls but most usage of a modern phone is as an ultra portable computer not as a telephone.

The phone vs tablet issue doesn’t seem to be about the capabilities of the device. It’s about how portable the device should be and the image of the device. I think that if a tablet is good then a more portable computing device can only be better (at least when you need greater portability).

Recently I’ve been carrying a 10″ tablet around a lot for work; sometimes a tablet will do for emergency work when a phone is too small and a laptop is too heavy. Even though tablets are thin and light they are still inconvenient to carry, and the issue of size and weight is a greater problem for kids. 7″ tablets are a lot smaller and lighter, but that’s getting close to a 5″ phone.

Benefits of Smart Phones

Using a smart phone is good for teaching children dexterity. It can also be used for teaching art in situations where more traditional art forms such as finger painting aren’t possible (I have met a professional artist who has used a Samsung Galaxy Note phone for creating art work).

There is a huge range of educational apps for smart phones.

The Wikireader (that I reviewed 4 years ago) [4] has obvious educational benefits. But a phone with Internet access (either 3G or Wifi) gives Wikipedia access including all pictures and is a better fit for most pockets.

There are lots of educational web sites and random web sites that can be used for education (Googling the answer to random questions).

When it comes to preparing kids for “the real world” or “the work environment” people often claim that kids need to use Microsoft software because most companies do (regardless of the fact that most companies will be using radically different versions of MS software by the time current school kids graduate from university). In my typical work environment I’m expected to be able to find the answer to all sorts of random work-related questions at any time and I think that many careers have similar expectations. Being able to quickly look things up on a phone is a real work skill, and a skill that’s going to last a lot longer than knowing today’s version of MS-Office.

There are a variety of apps for tracking phones. There are non-creepy ways of using such apps for monitoring kids. Also with two-way monitoring kids will know when their parents are about to collect them from an event and can stay inside until their parents are in the area. This combined with the phone/SMS functionality that is available on feature-phones provides some benefits for child safety.

iOS vs Android

Rumour has it that iOS is better than Android for kids diagnosed with Low Functioning Autism. There are apparently apps that help non-verbal kids communicate with icons and for arranging schedules for kids who have difficulty with changes to plans. I don’t know anyone who has a LFA child so I haven’t had any reason to investigate such things. Anyone can visit an Apple store and a Samsung Experience store as they have phones and tablets you can use to test out the apps (at least the ones with free versions). As an aside the money the Australian government provides to assist Autistic children can be used to purchase a phone or tablet if a registered therapist signs a document declaring that it has a therapeutic benefit.

I think that Android devices are generally better for educational purposes than iOS devices because Android is a less restrictive platform. On an Android device you can install apps downloaded from a web site or from a 3rd party app download service. Even if you stick to the Google Play store there’s a wider range of apps to choose from because Google is apparently less restrictive.

Android devices usually allow installation of a replacement OS. The Nexus devices are always unlocked and have a wide range of alternate OS images, and the other commonly used devices can usually have an alternate OS installed. This allows kids who have the interest and technical skill to extensively customise their device and learn all about its operation. iOS devices are designed to be sealed against the user. Admittedly there probably aren’t many kids with the skill and desire to replace the OS on their phone, but I think it’s good to have the option.

Android phones have a range of sizes and features while Apple only makes a few devices at any time and there’s usually only a couple of different phones on sale. iPhones are also a lot smaller than most Android phones, according to my previous estimates of hand size the iPhone 5 would be a good tablet for a 3yo or good for side-grasp phone use for a 10yo [5]. The main benefits of a phone are for things other than making phone calls so generally the biggest phone that will fit in a pocket is the best choice. The tiny iPhones don’t seem very suitable.

Also buying one of each is a viable option.

Conclusion

I think that mobile phone ownership is good for almost all kids even from a very young age (there are many reports of kids learning to use phones and tablets before they learn to read). There are no real down-sides that I can find.

I think that Android devices are generally a better option than iOS devices. But in the case of special needs kids there may be advantages to iOS.

23 June, 2015 02:26AM by etbe

June 22, 2015

Sven Hoexter

Free SSL/TLS snakeoil from wosign.com

I've been a proponent of CaCert.org for a long time and I'm still using those certificates in some places, but lately I gave in and searched for something that validates even on iOS. It's not that I strictly need it, it's more a favour to make life for friends and family easier.

I turned down startssl.com because I always manage to somehow lose the client certificate for the portal login. Plus I failed to generate several certificates for subdomains within the primary domain. I want to use different keys on purpose so SANs are not helpful, neither are wildcard certs for which you've to pay anyway. Another point against a wildcard cert from startssl is that I'd like to refrain from sending in my scanned papers for verification.

On a sidenote I'm also not a fan of random email address extraction from whois to send validation codes to. I just don't see why the abuse desk of a registrar should be able to authorize DV certificates for a domain under my control.

So I decided to pay the self-proclaimed leader of the snakeoil industry (Comodo) via cheapsslshop.com. That made 12USD for a 3-year Comodo DV certificate. Fair enough for the mail setup I share with a few friends, and the cheapest one I could find at that time. Actually no hassle with logins or verification. It looks a bit like a scam but the payment is done via 2checkout if I remember correctly and the certificate got issued via a voucher code by Comodo directly. Drawback: credit card payment.

Now while we're all waiting for letsencrypt.org I learned about the free offer of wosign.com. The CA is issued by the StartSSL Root CA, so technically we're very close to step one. Besides that, I only had to turn off uBlock Origin, and the rest of the JavaScript worked fine with Iceweasel once I clicked on the validity time selection checkbox. They offer the certificate for up to 3 years, you can paste your own CSR, and you can add up to 100 SANs. The only drawback is that it took them about 12 hours to issue the certificate, and the mails look a hell of a lot like spam if you send them through SpamAssassin.

That now provides a free and validating certificate for sven.stormbind.net, in case you'd like to check out the chain. The validation chain is even one certificate shorter than the chain for the certificate I bought from Comodo. So in case anyone else is waiting for letsencrypt to start, you might want to check wosign until Mozilla et al are ready.

From my point of view the only reason to pay one of the major CAs is for the service of running a reliable OCSP system. I also pointed that out here. It's more and more about the service you buy and no longer just money for a few ones and zeroes.

22 June, 2015 07:39PM

Niels Thykier

Introducing dak auto-decruft

Debian now has over 22 000 source packages and 45 500 binary packages. To counter that, the FTP masters and I have created a dak tool to automatically remove packages from unstable! This is also much more efficient than only removing them from testing! :)

 

The primary goal of the auto-decrufter is to remove a regular manual work flow from the FTP masters.  Namely, the removal of the common cases of cruft, such as “Not Built from Source” (NBS) and “Newer Version In Unstable” (NVIU).  With the auto-decrufter in place, such cruft will be automatically removed when there are no reverse dependencies left on any architecture and nothing Build-Depends on it any more.

Despite the implication in the “opening” of this post, this will in fact not substantially reduce the number of packages in unstable. :) Nevertheless, it is still very useful for the FTP masters, the release team and Debian packaging contributors.

The reason why the release team benefits greatly from this tool is that almost every transition generates one piece of “NBS” cruft. Said piece of cruft currently must be removed from unstable before the transition can progress into its final phase. Until recently that removal was 100% manual and done by the FTP masters.

The restrictions on auto-decrufter means that we will still need manual decrufts. Notably, the release team will often complete transitions even when some reverse dependencies remain on non-release architectures.  Nevertheless, it is definitely an improvement.

 

Omelettes and eggs: As an old saying goes “You cannot make an omelette without breaking eggs”.  Less so when the only “test suite” is production.  So here are some of the “broken eggs” caused by implementation of the auto-decrufter:

  • For about 30 minutes, “dak rm” (without --no-action) would unconditionally crash.
  • A broken dinstall when “dak auto-decruft” was run without “--dry-run” for the first time.
  • A boolean condition inversion causing removals to remove the “override” for partial removals (and retain it for “full” removals).
    • Side-effect, this broke Britney a couple of times because dak now produced some “unexpected” Packages files for unstable.
  • Not to mention the “single digit bug closure” bug.

Of the 3, the boolean inversion was no doubt the worst.  By the time we had it fixed, at least 50 (unique) binary packages had lost their “override”.  Fortunately, it was possible to locate these issues using a database query and they have now been fixed.

Before I write any more non-trivial patches for dak, I will probably invest some time in setting up a basic test framework for it first.

 


Filed under: Debian, Release-Team

22 June, 2015 01:11PM by Niels Thykier

hackergotchi for Lunar

Lunar

Reproducible builds: week 8 in Stretch cycle

What happened about the reproducible builds effort this week:

Toolchain fixes

Andreas Henriksson has improved Johannes Schauer's initial patch for pbuilder, adding support for build profiles.

Packages fixed

The following 12 packages became reproducible due to changes in their build dependencies: collabtive, eric, file-rc, form-history-control, freehep-chartableconverter-plugin, jenkins-winstone, junit, librelaxng-datatype-java, libwildmagic, lightbeam, puppet-lint, tabble.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues but not all of them:

Patches submitted which have not made their way to the archive yet:

  • #788747 on 0xffff by Dhole: allow embedded timestamp to be set externally and set it to the time of the debian/changelog.
  • #788752 on analog by Dhole: allow embedded timestamp to be set externally and set it to the time of the debian/changelog.
  • #788757 on jacktrip by akira: remove $datetime from the documentation footer.
  • #788868 on apophenia by akira: remove $date from the documentation footer.
  • #788920 on orthanc by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
  • #788955 on rivet by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
  • #789040 on liblo by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
  • #789049 on mpqc by akira: remove $datetime from the documentation footer.
  • #789071 on libxkbcommon by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
  • #789073 on libxr by akira: remove $datetime from the documentation footer.
  • #789076 on lvtk by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
  • #789087 on lmdb by akira: pass HTML_TIMESTAMP=NO to Doxygen.
  • #789184 on openigtlink by akira: remove $datetime from the documentation footer.
  • #789264 on openscenegraph by akira: pass HTML_TIMESTAMP=NO to Doxygen.
  • #789308 on trigger-rally-data by Mattia Rizzolo: call dh_fixperms even when overriding dh_fixperms.
  • #789396 on libsidplayfp by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
  • #789399 on psocksxx by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
  • #789405 on qdjango by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
  • #789406 on qof by akira: set HTML_TIMESTAMP=NO in Doxygen configuration.
  • #789428 on qsapecng by akira: pass HTML_TIMESTAMP=NO to Doxygen.

reproducible.debian.net

Bugs with the ftbfs usertag are now visible on the bug graphs. This explains the recent spike. (h01ger)

Andreas Beckmann suggested a way to test building packages using the “funny paths” that one can get when they contain the full Debian package version string.

debbindiff development

Lunar started an important refactoring introducing abstractions for containers and files in order to make file type identification more flexible, enabling fuzzy matching, and allowing parallel processing.
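
To illustrate the idea (with invented names, not debbindiff's actual classes): a container is something that can enumerate member files, so nested archives can be walked uniformly, and each member comparison is independent, which is what enables parallel processing. A minimal Python sketch:

import tarfile

class File:
    def __init__(self, name, data):
        self.name, self.data = name, data

    def compare(self, other):
        # Real comparators are type-specific; plain byte equality
        # stands in for them here.
        return [] if self.data == other.data else ["%s differs" % self.name]

class TarContainer:
    """A container knows how to enumerate its member files."""

    def __init__(self, path):
        self.path = path

    def files(self):
        with tarfile.open(self.path) as tar:
            for member in tar.getmembers():
                if member.isfile():
                    yield File(member.name, tar.extractfile(member).read())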

Documentation update

Ximin Luo detailed the proposal to standardize environment variables to pass a reference source date to tools that need one (e.g. documentation generators).
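
The gist of the proposal: instead of reading the current time, a tool first checks an environment variable carrying a Unix timestamp. A minimal sketch of how a documentation generator might honour it (assuming the SOURCE_DATE_EPOCH name from the proposal):

import os
import time

def build_date():
    # Prefer the externally supplied reference date; fall back to
    # the current time when the variable is unset.
    epoch = os.environ.get("SOURCE_DATE_EPOCH")
    stamp = int(epoch) if epoch is not None else time.time()
    # Format in UTC so the result does not depend on the build
    # machine's timezone.
    return time.strftime("%d %B %Y", time.gmtime(stamp))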

Package reviews

41 obsolete reviews have been removed, 168 added and 36 updated this week.

Some more issues affecting packages failing to build from source have been identified.

Meetings

Minutes have been posted for the Tuesday June 16th meeting.

The next meeting is scheduled for Tuesday June 23rd at 17:00 UTC.

Presentations

Lunar presented the project in French during Pas Sage en Seine in Paris. Video and slides are available.

22 June, 2015 12:48PM

June 21, 2015

hackergotchi for Steve Kemp

Steve Kemp

We're all about storing objects

Recently I've been experimenting with camlistore, which is yet another object storage system.

Camlistore gains immediate points because it is written in Go, and is a project initiated by Brad Fitzpatrick, the creator of Perlbal, memcached, and Livejournal of course.

Camlistore is designed exactly how I'd like to see an object storage-system - each server allows you to:

  • Upload a chunk of data, getting an ID in return.
  • Download a chunk of data, by ID.
  • Iterate over all available IDs.

It should be noted that more is possible (there's a pretty web UI, for example), but I'm simplifying. Do your own homework :)

With those primitives you can allow a client library to upload a file once, then in the background a bunch of dumb servers can decide amongst themselves "Hey I have data with ID:33333 - Do you?". If nobody else does, they can upload a second copy.
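
As a toy model of those three primitives plus that gossip-style replication (purely illustrative Python; Camlistore's real API and blobref scheme are richer):

import hashlib

class BlobServer:
    def __init__(self):
        self._blobs = {}  # blob ID -> bytes

    def upload(self, data):
        # IDs derive from content, so the same chunk always gets
        # the same ID on every server.
        blob_id = "sha1-" + hashlib.sha1(data).hexdigest()
        self._blobs[blob_id] = data
        return blob_id

    def download(self, blob_id):
        return self._blobs[blob_id]

    def ids(self):
        return iter(self._blobs)

def replicate(src, dst):
    # "Hey I have data with this ID - Do you?" If not, send a copy.
    have = set(dst.ids())
    for blob_id in src.ids():
        if blob_id not in have:
            dst.upload(src.download(blob_id))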

In short this kind of system allows the replication to be decoupled from the storage. The obvious risk is obvious though: if you upload a file the chunks might live on a host that dies 20 minutes later, just before the content was replicated. That risk is minimal, but valid.

There is also the risk that sudden rashes of uploads leave the system constantly consuming all the internal bandwidth comparing chunk-IDs, trying to work out whether data that has already been copied numerous times needs replicating again, or trying to play "catch-up" if the new content arrives faster than the replication bandwidth allows. I guess it should be possible to detect those conditions, but they're things to be concerned about.

Anyway the biggest downside with camlistore is the lack of documentation about rebalancing, replication, or anything other than simple single-server setups. Some people have blogged about it, and I got it working between two nodes, but I didn't feel confident it was as robust as I wanted it to be.

I have a strong belief that Camlistore will become a project of joy and wonder, but it isn't quite there yet. I certainly don't want to stop watching it :)

On to the more personal .. I'm all about the object storage these days. Right now most of my objects are packed in a collection of boxes. On the 6th of next month a shipping container will come pick them up and take them to Finland.

For pretty much 20 days in a row we've been taking things to the skip, or the local charity-shops. I expect that by the time we've relocated, the amount of possessions we'll maintain will be at most a fifth of our current levels.

We're working on the general rule of thumb: "If it is possible to replace an item we will not take it". That means chess-sets, mirrors, etc, will not be carried. DVDs, for example, have been slashed brutally such that we're only transferring 40 out of a starting collection of 500+.

Only personal, one-off, unique, or "significant" items will be transported. This includes things like personal photographs, family items, and similar. Clothes? Well I need to take one jacket, but more can be bought. The only place I put my foot down was books. Yes I'm a kindle-user these days, but I spent many years tracking down some rare volumes, and though it would be possible to repeat that effort I just don't want to.

I've also decided that I'm carrying my complete toolbox. Some of the tools I took with me when I left home at 18 have stayed with me for the past 20+ years. I don't need this specific crowbar, or axe, but I'm damned if I'm going to lose them now. So they stay. Object storage - some objects are more important than they should be!

21 June, 2015 04:10PM

Enrico Zini

debtags-rewrite-python3

debtags rewritten in python3

In my long quest towards closing #540218, I have uploaded a new libept to experimental. Then I tried to build debtags on a sid+experimental chroot and the result runs but has libc's free() print existential warnings about whatevers.

At a quick glance, there are now things around like a new libapt, gcc 5 with ABI changes, and who knows what else. I estimated how much time it'd take me to debug something like that, and used that time instead to rewrite debtags in python3. It took 8 hours: 5 of pleasant programming and the usual tax of another 3 of utter frustration packaging the results. I guess I still came out ahead of the risk of spending an unspecified number of hours of pure frustration.

So from now on debtags is going to be a pure python3 package, with dependencies on only python3-apt and python3-debian. 700 lines of python instead of several C++ files built on 4 layers of libraries. Hopefully, this is the last of the big headaches I get from hacking on this package. Also, one less package using libept.

21 June, 2015 04:04PM

June 20, 2015

hackergotchi for Joachim Breitner

Joachim Breitner

Running circle-packing in the Browser, now using GHCJS

Quite a while ago, I wrote a small Haskell library called circle-packing to pack circles in a tight arrangement. Back then, I used the Haskell to JavaScript compiler fay to create a pretty online demo of that library, and shortly after, I created the identical demo using haste (another Haskell to JavaScript compiler).

The main competitor of these two compilers, and the most promising one, is GHCJS. Back then, it was too annoying to install. But after two years, things have changed, and it only takes a few simple commands to get GHCJS running, so I finally created the circle packing demo in a GHCJS variant.

Quick summary: Cabal integration is very good (like haste, but unlike fay), interfacing JavaScript is nice and easy (like fay, but unlike haste), and a quick check seems to indicate that it is faster than either of these two. I should note that I did not update the other two demos, so they represent the state of fay and haste back then, respectively.

With GHCJS now available at my fingertips, maybe I will produce some more Haskell to be run in your browser. For example, I could port FrakView, a GUI program to render, explore and explain iterated function systems, from GTK to HTML.

20 June, 2015 08:50PM by Joachim Breitner (mail@joachim-breitner.de)

hackergotchi for Lunar

Lunar

Reproducible builds: week 4 in Stretch cycle

What happened about the reproducible builds effort for this week:

Toolchain fixes

Lunar rebased our custom dpkg on the new release, removing a now unneeded patch identified by Guillem Jover. A sorting issue in the buildinfo generator that prevented a stable order was quickly fixed once identified.

Mattia Rizzolo also rebased our custom debhelper on the latest release.

Packages fixed

The following 30 packages became reproducible due to changes in their build dependencies: animal-sniffer, asciidoctor, autodock-vina, camping, cookie-monster, downthemall, flashblock, gamera, httpcomponents-core, https-finder, icedove-l10n, istack-commons, jdeb, libmodule-build-perl, libur-perl, livehttpheaders, maven-dependency-plugin, maven-ejb-plugin, mozilla-noscript, nosquint, requestpolicy, ruby-benchmark-ips, ruby-benchmark-suite, ruby-expression-parser, ruby-github-markup, ruby-http-connection, ruby-settingslogic, ruby-uuidtools, webkit2gtk, wot.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues but not all of them:

Patches submitted which did not make their way to the archive yet:

  • #775531 on console-setup by Reiner Herrmann: update and split patch written in January.
  • #785535 on maradns by Reiner Herrmann: use latest entry in debian/changelog as build date (a recurring fix here; see the sketch following this list).
  • #785549 on dist by Reiner Herrmann: set hostname and domainname to predefined value.
  • #785583 on s5 by Juan Picca: set timezone to UTC when unzipping files.
  • #785617 on python-carrot by Juan Picca: use latest entry in debian/changelog as documentation build date.
  • #785774 on afterstep by Juan Picca: modify documentation generator to allow a build date to be set instead of the current time, then use latest entry in debian/changelog as reference.
  • #786508 on ttyload by Juan Picca: remove timestamp from documentation.
  • #786568 on linux-minidisc by Lunar: use latest entry in debian/changelog as build date.
  • #786615 on kfreebsd-10 by Steven Chamberlain: make order of file in source tarballs stable.
  • #786633 on webkit2pdf by Reiner Herrmann: use latest entry in debian/changelog as documentation build date.
  • #786634 on libxray-scattering-perl by Reiner Herrmann: tell Storable::nstore to produce sorted output.
  • #786637 on nvidia-settings by Lunar: define DATE, WHOAMI, and HOSTNAME_CMD to stable values.
  • #786710 on armada-backlight by Reiner Herrmann: use latest entry in debian/changelog as documentation build date.
  • #786711 on leafpad by Reiner Herrmann: use latest entry in debian/changelog as documentation build date.
  • #786714 on equivs by Reiner Herrmann: use latest entry in debian/changelog as documentation build date.
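
The recurring changelog-date fix asks dpkg for the topmost debian/changelog entry and uses its date instead of "now". A minimal sketch of the extraction step (not the exact code from any of these patches):

import email.utils
import subprocess

def changelog_timestamp():
    # dpkg-parsechangelog prints the topmost entry's RFC 2822 date,
    # e.g. "Mon, 22 Jun 2015 01:11:00 +0200"; convert it to Unix time.
    date = subprocess.check_output(
        ["dpkg-parsechangelog", "--show-field", "Date"],
        universal_newlines=True).strip()
    return email.utils.mktime_tz(email.utils.parsedate_tz(date))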

Also, the following bugs have been reported:

  • #785536 on maradns by Reiner Herrmann: unreproducible deadwood binary.
  • #785624 on doxygen by Christoph Berg: timestamps in generated manpages make builds non-reproducible.
  • #785736 on git-annex by Daniel Kahn Gillmor: documentation should be made reproducible.
  • #786593 on wordwarvi by Holger Levsen: please provide a --distrobuild build switch.
  • #786601 on sbcl by Holger Levsen: FTBFS when locales-all is installed instead of locales.
  • #786669 on ruby-celluloid by Holger Levsen: tests sometimes fail, sometimes causing FTBFS.
  • #786743 on obnam by Holger Levsen: FTBFS.

reproducible.debian.net

Holger Levsen made several small bug fixes and a few more visible changes:

  • For packages in testing, comparisons will be done using the sid version of debbindiff.
  • The scheduler will now schedule old packages from sid twice as often as the ones in testing, as we care more about the former at the moment.
  • More statistics are now visible and the layout has been improved.
  • Variations between the first and second build are now explained on the statistics page.

strip-nondeterminism

Version 0.007-1 of strip-nondeterminism, the tool to post-process various file formats to normalize them, has been uploaded by Holger Levsen. Version 0.006-1 was already in the reproducible repository; the new version mainly improves the detection of Maven's pom.properties files.

debbindiff development

At the request of Emmanuel Bourg, Reiner Herrmann added a comparator for Java .class files.

Documentation update

Christoph Berg created a new page for the timestamps in manpages created by Doxygen.

Package reviews

93 obsolete reviews have been removed, 76 added and 43 updated this week.

New identified issues: timestamps in manpages generated by Doxygen, modification time differences in files extracted by unzip, tstamp task used in Ant build.xml, timestamps in documentation generated by ASDocGen. The description for build id related issues has been clarified.

Meetings

Holger Levsen announced a first meeting on Wednesday, June 3rd, 2015, 19:00 UTC. The agenda is amendable on the wiki.

Misc.

Lunar worked on a proof-of-concept script to import the build environment found in .buildinfo files to UDD. Lucas Nussbaum has positively reviewed the proposed schema.

Holger Levsen cleaned up various experimental toolchain repositories, marking merged branches as such.

20 June, 2015 08:18AM

Reproducible builds: week 5 in Stretch cycle

What happened about the reproducible builds effort for this week:

Toolchain fixes

Uploads that should help other packages:

  • Stephen Kitt uploaded mingw-w64/4.0.2-2 which avoids inserting timestamps in PE binaries, and specifies dlltool's temp prefix so it generates reproducible files.
  • Stephen Kitt uploaded binutils-mingw-w64/6.1 which fixed dlltool to initialize its output's .idata$6 section, avoiding random data ending up there.

Patch submitted for toolchain issues:

  • #787159 on openjdk-7 by Emmanuel Bourg: sort the annotations and enums in package-tree.html produced by javadoc.
  • #787250 on python-qt4 by Reiner Herrmann: sort imported modules to get reproducible output.
  • #787251 on pyqt5 by Reiner Herrmann: sort imported modules to get reproducible output.

Some discussions have been started in Debian and with upstream:

Packages fixed

The following 8 packages became reproducible due to changes in their build dependencies: access-modifier-checker, apache-log4j2, jenkins-xstream, libsdl-perl, maven-shared-incremental, ruby-pygments.rb, ruby-wikicloth, uimaj.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues but not all of them:

Patches submitted which did not make their way to the archive yet:

  • #777308 on dhcp-helper by Dhole: fix mtimes of packaged files.
  • #786927 on flowscan by Dhole: remove timestamps from gzip files and fix mtimes of packaged files.
  • #786959 on python3.5 by Lunar: set build date of binary and documentation to the time of latest debian/changelog entry, prevent gzip from storing a timestamp (see the gzip sketch after this list).
  • #786965 on python3.4 by Lunar: same as python3.5.
  • #786978 on python2.7 by Lunar: same as python3.5.
  • #787122 on xtrlock by Dhole: fix mtimes of packaged files.
  • #787123 on rsync by Dhole: remove timestamps from gzip files and fix mtimes of packaged files.
  • #787125 on pachi by Dhole: fix mtimes of packaged files.
  • #787126 on nis by Dhole: remove timestamps from gzip files and fix mtimes of packaged files.
  • #787206 on librpc-xml-perl by Reiner Herrmann: remove timestamps from generated code.
  • #787265 on libwx-perl by Reiner Herrmann: produce sorted output.
  • #787303 on dos2unix by Juan Picca: set manpage date to the time of latest entry in debian/changelog.
  • #787327 on vim by Reiner Herrmann: remove usage of __DATE__ and __TIME__ macros.

Discussions that have been started:

reproducible.debian.net

Holger Levsen added two new package sets: pkg-javascript-devel and pkg-php-pear. The list of packages with and without notes is now sorted by age of the latest build.

Mattia Rizzolo added support for email notifications so that maintainers can be warned when a package becomes unreproducible. Please ask Mattia or Holger or in the #debian-reproducible IRC channel if you want to be notified for your packages!

strip-nondeterminism development

Andrew Ayer fixed the gzip handler so that it skips adding a predetermined timestamp when there was none.

Documentation update

Lunar added documentation about mtimes of files extracted using unzip being timezone-dependent. He also wrote a short example on how to test reproducibility.

Stephen Kitt updated the documentation about timestamps in PE binaries.

Documentation and scripts to perform weekly reports were published by Lunar.

Package reviews

50 obsolete reviews have been removed, 51 added and 29 updated this week. Thanks Chris West and Mathieu Bridon amongst others.

New identified issues:

Misc.

Lunar will be talking (in French) about reproducible builds at Pas Sage en Seine on June 19th, at 15:00 in Paris.

The meeting will happen this Wednesday at 19:00 UTC.

20 June, 2015 08:18AM

Russell Coker

BTRFS Status June 2015

The version of btrfs-tools in Debian/Jessie is incapable of creating a filesystem that can be mounted by the kernel in Debian/Wheezy. If you want to use a BTRFS filesystem on Jessie and Wheezy (which isn’t uncommon with removable devices) the only options are to use the Wheezy version of mkfs.btrfs or to use a Jessie kernel on Wheezy. I recently got bitten by this issue when I created a BTRFS filesystem on a removable device with a lot of important data (which is why I wanted metadata duplication and checksums) and had to read it on a server running Wheezy. Fortunately KVM in Wheezy works really well so I created a virtual machine to read the disk. Setting up a new KVM isn’t that difficult, but it’s not something I want to do while a client is anxiously waiting for their data.

BTRFS has been working well for me apart from the Jessie/Wheezy compatibility issue (which was an annoyance but didn’t stop me doing what I wanted). I haven’t written a BTRFS status report for a while because everything has been OK and there has been nothing exciting to report.

I regularly get errors from the cron jobs that run a balance, supposedly due to running out of free space. I have the cron jobs because of past problems with BTRFS running out of metadata space. In spite of the jobs often failing the systems keep working, so I’m not too worried at the moment. I think this is a bug, but there are many more important bugs.

Linux kernel version 3.19 was the first version to have working support for RAID-5 recovery. This means version 3.19 was the first version to have usable RAID-5 (I think there is no point even having RAID-5 without recovery). It wouldn’t be prudent to trust your important data to a new feature in a filesystem, so at this stage if I needed a very large scratch space then BTRFS RAID-5 might be a viable option, but for anything else I wouldn’t use it. BTRFS has still had little performance optimisation; while this doesn’t matter much for SSDs or single-disk filesystems, it would probably hurt a lot for a RAID-5 of hard drives. Maybe BTRFS RAID-5 would be good for a scratch array of SSDs. The reports of problems with RAID-5 don’t surprise me at all.

I have a BTRFS RAID-1 filesystem on 2*4TB disks which is giving poor performance on metadata; simple operations like “ls -l” on a directory with ~200 subdirectories take many seconds to run. I suspect that part of the problem is due to the filesystem being written by cron jobs with files accumulating over more than a year. The “btrfs filesystem” command (see btrfs-filesystem(8)) allows defragmenting files and directory trees, but unfortunately it has no option to recursively defragment only the directories while skipping the files. I really wish there was a way to get BTRFS to put all metadata on SSD and all data on hard drives. Sander suggested the following command on the BTRFS mailing list to defragment directories only:

find / -xdev -type d -execdir btrfs filesystem defrag -c {} +

Below is the output of “zfs list -t snapshot” on a server I run. It’s often handy to know how much space is used by snapshots, but unfortunately BTRFS has no support for this.

NAME                       USED  AVAIL  REFER  MOUNTPOINT
hetz0/be0-mail@2015-03-10  2.88G      -   387G  -
hetz0/be0-mail@2015-03-11  1.12G      -   388G  -
hetz0/be0-mail@2015-03-12  1.11G      -   388G  -
hetz0/be0-mail@2015-03-13  1.19G      -   388G  -

Hugo pointed out on the BTRFS mailing list that the following command will give the amount of space used for snapshots. $SNAPSHOT is the name of a snapshot and $LASTGEN is the generation number of the previous snapshot you want to compare with.

btrfs subvolume find-new $SNAPSHOT $LASTGEN | awk '{total = total + $7}END{print total}'

One upside of the BTRFS implementation in this regard is that the above btrfs command, when not piped through awk, shows you the names of the files that are being written and the amounts of data written to them. Through casually examining this output I discovered that the most-written files in my home directory were under the “.cache” directory (which wasn’t exactly a surprise).
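
A small script can aggregate that per-file output instead of printing a single total. A sketch assuming the default find-new output format, where the extent length is the 7th field and the path is the last (paths containing spaces would need sturdier parsing):

import subprocess
from collections import Counter

def bytes_written_per_file(snapshot, lastgen):
    out = subprocess.check_output(
        ["btrfs", "subvolume", "find-new", snapshot, lastgen],
        universal_newlines=True)
    totals = Counter()
    for line in out.splitlines():
        fields = line.split()
        # Data lines start with "inode"; the trailing "transid
        # marker" line does not.
        if fields and fields[0] == "inode":
            totals[fields[-1]] += int(fields[6])
    return totals

# e.g. bytes_written_per_file("/backup/snap", "12345").most_common(10)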

Now I am configuring workstations with a separate subvolume for ~/.cache for the main user. This means that ~/.cache changes don’t get stored in the hourly snapshots and less disk space is used for snapshots.

Conclusion

My observation is that things are going quite well with BTRFS. It’s more than 6 months since I had a noteworthy problem, which is pretty good for a filesystem that’s still under active development. But I still run many systems which could benefit from the data integrity features of ZFS and BTRFS, yet don’t have the resources to run ZFS and need more reliability than I can expect from an unattended BTRFS system.

At this time the only servers I run with BTRFS are located within a reasonable drive of my home (not the servers in Germany and the US) and are easily accessible (not the embedded systems). ZFS is working well for some of the servers in Germany. Eventually I’ll probably run ZFS on all the hosted servers in Germany and the US; I expect that will happen before I’m comfortable running BTRFS on such systems. For the embedded systems I will just take the risk of data loss/corruption for the next few years.

20 June, 2015 04:47AM by etbe

June 18, 2015

Bálint Réczey

Debian is preparing the transition to FFmpeg!

Ending an era of shipping Libav as the default, the Debian Multimedia Team is working out the last details of switching to FFmpeg. If you would like to read more about the reasons, please read the rationale behind the change on the dedicated wiki page. If you feel so inclined, be welcome to test the new ffmpeg packages or join the discussion starting here. (Warning: the thread is loooong!)


18 June, 2015 11:56AM by Réczey Bálint

June 17, 2015

hackergotchi for DebConf team

DebConf team

Striving for more diversity at DebConf15 (Posted by DebConf Team)

DebConf is not just for Debian Developers, we welcome all members of our community active in different areas, like translation, documentation, artwork, testing, specialized derivatives, and many other ways that help make Debian better.

In fact, we would like to open DebConf to an even broader audience, and we strongly believe that more diversity at DebConf and in the Debian community will significantly help us towards our goal of becoming the Universal Operating System.

The DebConf team is proud to announce that we have started designing a specific diversity sponsorship programme to attract people to DebConf that would otherwise not consider attending our conference or not be able to join us.

In order to apply for this special sponsorship, please write an email to outreach@debian.org, before July 6th, about your interest in Debian and your sponsorship needs (accommodation, travel). Please include a sentence or two about why you are applying for a Diversity Sponsorship. You can also nominate people you think should be considered for this sponsorship programme.

Please feel free to send this announcement on to groups or individuals that could be interested in this sponsorship programme.

And we’re also looking forward to your feedback. We’re just getting started and you can help shape these efforts.

17 June, 2015 10:09AM by DebConf Organizers