August 10, 2020

Russ Allbery

rra-c-util 8.3

In this release of my utility library for my other packages, I finally decided to drop support for platforms without a working snprintf.

This dates back to the early 2000s and a very early iteration of this package. At the time, there were still some older versions of UNIX without snprintf at all. More commonly, it was buggy. The most common problem was that it would return -1 if the buffer wasn't large enough rather than returning the necessary size of the buffer. Or, in some cases, it wouldn't support a buffer size of 0 and a NULL buffer to get the necessary size.
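For reference, here is a minimal sketch (mine, not from the original package) of the sizing idiom that C99 guarantees and that the buggy implementations broke: snprintf with a NULL buffer and size 0 must return the number of characters that would have been written.

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* C99: with a NULL buffer and size 0, snprintf returns the length
     * that would have been written, so it can be used to size a buffer. */
    int needed = snprintf(NULL, 0, "pi is roughly %.5f", 3.14159);
    if (needed < 0)
        return 1;

    char *buf = malloc((size_t)needed + 1);   /* +1 for the trailing NUL */
    if (buf == NULL)
        return 1;
    snprintf(buf, (size_t)needed + 1, "pi is roughly %.5f", 3.14159);
    printf("%s (%d bytes plus NUL)\n", buf, needed);
    free(buf);
    return 0;
}

The broken implementations instead returned -1 here, or rejected the NULL/0 call outright, which is exactly why portable code needed a replacement function.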

At the time I added this support for INN and some other packages, Solaris had several of these issues. But C99 standardized the correct snprintf behavior, and slowly every maintained operating system was fixed. (I forget whether it was fixed in Solaris 8 or Solaris 9, but regardless, Solaris has had a working snprintf for many years.) Meanwhile, the replacement function (Patrick Powell's version, also used by mutt and other packages) was a huge wad of code and a corresponding test suite. Over time, I've increased the aggressiveness of linters to try to catch more dangerous C pitfalls, and that's required carrying more and more small modifications plus a preamble to disable various warnings that I didn't want to try to fix.

The straw that broke the camel's back was Clang's new case fallthrough warning. Clang stopped supporting the traditional /* fallthrough */ comment. It now prefers [[clang::fallthrough]] syntax, but of course older compilers choke on that. It does support the GCC __attribute__((__fallthrough__)) syntax, but older compilers don't like that construction because they think it's an empty statement. It was a mess, and I decided the time had come to drop this support effort.
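As a hedged illustration (my own sketch, not code from rra-c-util), this is roughly what the annotation dance looks like in a switch statement; the comments note which compilers accept which spelling, per the description above.

#include <stdio.h>

static int classify(int c) {
    int score = 0;
    switch (c) {
    case 2:
        score += 10;
        /* fallthrough */                  /* traditional comment: GCC honours it,
                                              newer Clang no longer does */
        __attribute__((__fallthrough__));  /* GCC 7+ and newer Clang accept this;
                                              older compilers complain about the
                                              "empty" statement */
    case 1:
        score += 1;
        break;
    default:
        break;
    }
    return score;
}

int main(void) {
    /* classify(2) deliberately falls through: 10 + 1 = 11 */
    printf("%d %d\n", classify(2), classify(1));
    return 0;
}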

At this point, if you're still running an operating system without C99 snprintf, I think it's essentially a retrocomputing or at least extremely stable legacy production situation, and you're unlikely to want the latest and greatest releases of new software. Hopefully that assumption is correct, or at least correct enough.

(I realize the right solution to this problem is probably for me to use Gnulib for portability. But converting to it is a whole other project with a lot of other implications and machinery, and I'm not sure that's what I want to spend time on.)

Also in this release is a fix for network tests on hosts with no IPv4 addresses (more on this when I release the next version of remctl), fixes for style issues found by Perl::Critic::Freenode, and some other test suite improvements.

You can get the latest version from the rra-c-util distribution page.

10 August, 2020 02:15AM

August 09, 2020

DocKnot 3.05

I keep telling myself that the next release of DocKnot will be the one where I convert everything to YAML and then feel confident about uploading it to Debian, and then I keep finding one more thing to fix to release another package I'm working on.

Anyway, this is the package I use to generate software documentation and, in the long run, will subsume my static web site generator and software release workflow. This release tweaks a heuristic for wrapping paragraphs in text documents, fixes the status badge for software with Debian packages to do what I had intended, and updates dependencies based on the advice of Perl::Critic::Freenode.

You can get the latest version from CPAN or from the DocKnot distribution page.

09 August, 2020 11:30PM


Junichi Uekawa

Started writing some golang code.

Started writing some golang code. I'm trying to rewrite some of the tools I use as a daily driver for machine management. It's easier than Rust in that getting a good Rust compiler is a hassle, whereas the golang preinstalled on systems can build and run code. go run is simple enough to invoke on most Debian systems.

09 August, 2020 07:36AM by Junichi Uekawa


Charles Plessy

Thank you, VAIO

I use a VAIO Pro mk2 every day, which I bought 5 years ago with 3 years of warranty. For a few months I had been noticing that something was slowly inflating inside it. In July, things accelerated to the point that its thickness had doubled. After we called VAIO's customer service, somebody came to pick up the laptop in order to make a cost estimate. Then we learned on the phone that the repair would be free. It was back in my hands in less than two weeks. Bravo VAIO!

09 August, 2020 01:01AM

August 08, 2020


Holger Levsen

20200808-debconf8

DebConf8

This tshirt is 12 years old and from DebConf8.

DebConf8 was my 6th DebConf and took place in Mar del Plata, Argentina.

Also, this is my 6th post in this series of posts about DebConfs, and for the last two days, for the first time, I failed my plan to do one post per day. And while two days ago I still planned to catch up on this by doing more than one post in a day, I have now decided to give in to realities, which mostly translates to sudden fantastic weather in Hamburg and other summer-related changes in life. So yeah, I still plan to do short posts about all the DebConfs I was lucky to attend, but there might be days without a blog post. Anyhow, Mar del Plata.

When we held DebConf in Argentina it was winter there, meaning locals and other folks would wear jackets, scarves, probably gloves, while many Debian folks not so much. Andreas Tille freaked out and/or amazed local people by going swimming in the sea every morning. And when I told Stephen Gran that even I would find it a bit cold with just a tshirt, he replied "na, the weather is fine, just like British summer", while it was 14 degrees Celsius and mildly raining.

DebConf8 was the first time I met Valessio Brito, with whom I had worked since at least DebConf6. That meeting was really super nice, Valessio is such a lovely person. Back in 2008 however, there was just one problem: his spoken English was worse than his written one, and that was already hard to parse sometimes. Fast forward eleven years to Curitiba last year and boom, Valessio speaks really nice English now.

And you might wonder why I'm telling this, especially if you were exposed to my Spanish back then and also now. So my point in telling this story about Valessio is to illustrate two things: a.) one can contribute to Debian without speaking/writing much English, Valessio has done lots of great artwork since DebConf6, and b.) one can learn English by doing Debian stuff. It worked for me too!

During setup of the conference there was one very memorable moment, some time after the openssl maintainer, Kurt Roeckx, arrived at the venue: shortly before DebConf8, Luciano Bello, from Argentina no less, had found CVE-2008-0166, which basically compromised the security of sshd on all Debian and Ubuntu installations done in the last 4 years (IIRC two Debian releases were affected), and which was commented on heavily and noticed everywhere. So poor Kurt arrived and wondered whether we would all hate him, how many toilets he would have to clean and what not... And then someone rather quickly noticed this, approached some people, and suddenly a bunch of people at DebConf were group-hugging Kurt, and then we were all smiling and continuing to set up the conference.

That moment is one of my most joyful memories of all DebConfs and partly explains why I remember little about the conference itself; everything else pales in comparison, and most things pale over the years anyway. As I remember it, the conference ran very smoothly in the end, despite quite some organisational problems right before the start. But as usual, once the geeks arrive and are happily geeking, things start to run smoothly, also because Debian people are kind and smart and lend hands and brains where needed.

And like other DebConfs, Mar del Plata also had moments which I want to share but will only hint at, so it's up to you to imagine the special leaves which were brought to that cheese and wine party! ;-)

Update: added another xkcd link, spelled out Kurt's name after talking to him and added a link to a video of the group hug.

08 August, 2020 04:10PM

Reproducible Builds

Reproducible Builds in July 2020

Welcome to the July 2020 report from the Reproducible Builds project.

In these monthly reports, we round-up the things that we have been up to over the past month. As a brief refresher, the motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced from the original free software source code to the pre-compiled binaries we install on our systems. (If you’re interested in contributing to the project, please visit our main website.)

General news

At the upcoming DebConf20 conference (now being held online), Holger Levsen will present a talk on Thursday 27th August about “Reproducing Bullseye in practice”, focusing on independently verifying that the binaries distributed from ftp.debian.org were made from their claimed sources.

Tavis Ormandy published a blog post making the provocative claim that “You don’t need reproducible builds”, asserting elsewhere that the many attacks that have been extensively reported in our previous reports are “fantasy threat models”. A number of rebuttals have been made, including one from long-time Reproducible Builds contributor Bernhard Wiedemann.

On our mailing list this month, Debian Developer Graham Inggs posted asking for ideas as to why the openorienteering-mapper Debian package was failing to build on the Reproducible Builds testing framework. Chris Lamb remarked from the build logs that the package may be missing a build dependency, although Graham then used our own diffoscope tool to show that the resulting package remains unchanged with or without it. Later, Nico Tyni noticed that the build failure may be due to the relationship between the __FILE__ C preprocessor macro and the -ffile-prefix-map GCC flag.
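As a rough illustration (my own example, not code from the bug report), the interaction looks like this: __FILE__ bakes the compiler-supplied path into the binary, and -ffile-prefix-map rewrites that prefix so two builds done in different directories can embed identical strings.

#include <stdio.h>

int main(void) {
    /* The string literal produced by __FILE__ is whatever path the build
     * system passed to the compiler.  If that is an absolute path such as
     * /build/1st/src/demo.c, a rebuild in /build/2nd embeds a different
     * string and the two binaries differ.
     *
     * Compiling with something like
     *     gcc -ffile-prefix-map=/build/1st=. -o demo /build/1st/src/demo.c
     * remaps the prefix, so both builds embed ./src/demo.c instead. */
    printf("compiled from %s\n", __FILE__);
    return 0;
}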

An issue in Zephyr, a small-footprint kernel designed for use on resource-constrained systems, around .a library files not being reproducible was closed after it was noticed that a key part of their toolchain had been updated so that it now enables --enable-deterministic-archives by default.

Reproducible Builds developer kpcyrd commented on a pull request against the libsodium cryptographic library wrapper for Rust, arguing against the testing of CPU features at compile-time. He noted that:

I’ve accidentally shipped broken updates to users in the past because the build system was feature-tested and the final binary assumed the instructions would be present without further runtime checks
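To make the distinction concrete, here is a small hypothetical sketch (the function names are illustrative and not from the libsodium wrapper in question): runtime dispatch keeps the binary safe on CPUs older than the build machine, whereas a compile-time test alone does not.

#include <stdio.h>

static void transform_fast(void)     { puts("using the AVX2 path"); }
static void transform_portable(void) { puts("using the portable path"); }

int main(void) {
#if (defined(__x86_64__) || defined(__i386__)) && defined(__GNUC__)
    /* Runtime dispatch: checks the CPU actually running the binary. */
    if (__builtin_cpu_supports("avx2"))
        transform_fast();
    else
#endif
        transform_portable();

    /* The fragile pattern is deciding at build time only:
     *
     *     #ifdef __AVX2__        // set by the compiler flags on the builder,
     *         transform_fast();  // e.g. -march=native, so the shipped binary
     *     #endif                 // can SIGILL on an older CPU
     *
     * which is the "broken updates" failure mode described above. */
    return 0;
}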

David Kleuker also asked a question on our mailing list about using SOURCE_DATE_EPOCH with the install(1) tool from GNU coreutils. When comparing two installed packages he noticed that the filesystem ‘birth times’ differed between them. Chris Lamb replied, realising that this was actually a consequence of using an outdated version of diffoscope and that a fix was in diffoscope version 146 released in May 2020.

Later in July, John Scott posted asking for clarification regarding the Javascript files on our website, in order to add metadata for LibreJS, the browser extension that blocks non-free Javascript from executing. Chris Lamb investigated the issue and realised that we could drop a number of unused Javascript files [][][] and added unminified versions of Bootstrap and jQuery [].


Development work

Website

On our website this month, Chris Lamb updated the main Reproducible Builds website and documentation to drop a number of unused Javascript files [][][] and added unminified versions of Bootstrap and jQuery []. He also fixed a number of broken URLs [][].

Gonzalo Bulnes Guilpain made a large number of grammatical improvements [][][][][] as well as some misspellings, case and whitespace changes too [][][].

Lastly, Holger Levsen updated the README file [], marked the Alpine Linux continuous integration tests as currently disabled [] and linked the Arch Linux Reproducible Status page from our projects page [].

diffoscope

diffoscope is our in-depth and content-aware diff utility that can not only locate and diagnose reproducibility issues, but also provide human-readable diffs of all kinds of files. In July, Chris Lamb made the following changes to diffoscope, including releasing versions 150, 151, 152, 153 & 154:

  • New features:

    • Add support for flash-optimised F2FS filesystems. (#207)
    • Don’t require zipnote(1) to determine differences in a .zip file as we can use libarchive. []
    • Allow --profile as a synonym for --profile=-, ie. write profiling data to standard output. []
    • Increase the minimum length of the output of strings(1) to eight characters to avoid unnecessary diff noise. []
    • Drop some legacy argument styles: --exclude-directory-metadata and --no-exclude-directory-metadata have been replaced with --exclude-directory-metadata={yes,no}. []
  • Bug fixes:

    • Pass the absolute path when extracting members from SquashFS images as we run the command with working directory in a temporary directory. (#189)
    • Correct adding a comment when we cannot extract a filesystem due to missing libguestfs module. []
    • Don’t crash when listing entries in archives if they don’t have a listed size such as hardlinks in ISO images. (#188)
  • Output improvements:

    • Strip off the file offset prefix from xxd(1) and show bytes in groups of 4. []
    • Don’t emit javap not found in path if it is available in the path but it did not result in an actual difference. []
    • Fix ... not available in path messages when looking for Java decompilers that used the Python class name instead of the command. []
  • Logging improvements:

    • Add a bit more debugging info when launching libguestfs. []
    • Reduce the --debug log noise by truncating the has_some_content messages. []
    • Fix the compare_files log message when the file does not have a literal name. []
  • Codebase improvements:

    • Rewrite and rename exit_if_paths_do_not_exist to not check files multiple times. [][]
    • Add an add_comment helper method; don’t mess with our internal list directly. []
    • Replace some simple usages of str.format with Python ‘f-strings’ [] and make it easier to navigate to the main.py entry point [].
    • In the RData comparator, always explicitly return None in the failure case as we return a non-None value in the success one. []
    • Tidy some imports [][][] and don’t alias a variable when we do not use it. []
    • Clarify the use of a separate NullChanges quasi-file to represent missing data in the Debian package comparator [] and clarify use of a ‘null’ diff in order to remember an exit code. []
  • Other changes:

Jean-Romain Garnier also made the following changes:

  • Allow passing a file with a list of arguments via diffoscope @args.txt. (!62)
  • Improve the output of side-by-side diffs by detecting added lines better. (!64)
  • Remove offsets before instructions in objdump [][] and remove raw instructions from ELF tests [].

Other tools

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build. It is used automatically in most Debian package builds. In July, Chris Lamb ensured that we did not install the internal handler documentation generated from Perl POD documents [] and fixed a trivial typo []. Marc Herbert added a --verbose-level warning when the Archive::Cpio Perl module is missing. (!6)

reprotest is our end-user tool to build the same source code twice in widely differing environments and then check the binaries produced by each build for any differences. This month, Vagrant Cascadian made a number of changes to support diffoscope version 153, which had removed the (deprecated) --exclude-directory-metadata and --no-exclude-directory-metadata command-line arguments, and updated the testing configuration to also test under Python version 3.8 [].


Distributions

Debian

In June 2020, Timo Röhling filed a wishlist bug against the debhelper build tool impacting the reproducibility status of hundreds of packages that use the CMake build system. This month however, Niels Thykier uploaded debhelper version 13.2 that passes the -DCMAKE_SKIP_RPATH=ON and -DBUILD_RPATH_USE_ORIGIN=ON arguments to CMake when using the (currently-experimental) Debhelper compatibility level 14.

According to Niels, this change:

… should fix some reproducibility issues, but may cause breakage if packages run binaries directly from the build directory.

34 reviews of Debian packages were added, 14 were updated and 20 were removed this month, adding to our knowledge about identified issues. Chris Lamb added and categorised the nondeterministic_order_of_debhelper_snippets_added_by_dh_fortran_mod [] and gem2deb_install_mkmf_log [] toolchain issues.

Lastly, Holger Levsen filed two more wishlist bugs against the debrebuild Debian package rebuilder tool [][].

openSUSE

In openSUSE, Bernhard M. Wiedemann published his monthly Reproducible Builds status update.

Bernhard also published the results of performing 12,235 verification builds of packages from openSUSE Leap version 15.2 and, as a result, created three pull requests against the openSUSE Build Result Compare Script [][][].

Other distributions

In Arch Linux, there was a mass rebuild of old packages in an attempt to make them reproducible. This was performed because building with a previous release of the pacman package manager caused file ordering and size calculation issues when using the btrfs filesystem.

A system was also implemented for Arch Linux packagers to receive notifications if/when their package becomes unreproducible, and packagers now have access to a dashboard where they can see all of their unreproducible packages (more info).

Paul Spooren sent two versions of a patch for the OpenWrt embedded distribution for adding a ‘build system’ revision to the ‘packages’ manifest so that all external feeds can be rebuilt and verified. [][]

Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of these patches, including:

Vagrant Cascadian also reported two issues, the first regarding a regression in u-boot boot loader reproducibility for a particular target [] and a non-deterministic segmentation fault in the guile-ssh test suite []. Lastly, Jelle van der Waa filed a bug against the MeiliSearch search API to report that it embeds the current build date.

Testing framework

We operate a large and many-featured Jenkins-based testing framework that powers tests.reproducible-builds.org.

This month, Holger Levsen made the following changes:

  • Debian-related changes:

    • Tweak the rescheduling of various architecture and suite combinations. [][]
    • Fix links for ‘404’ and ‘not for us’ icons. (#959363)
    • Further work on a rebuilder prototype, for example correctly processing the sbuild exit code. [][]
    • Update the sudo configuration file to allow the node health job to work correctly. []
    • Add php-horde packages back to the pkg-php-pear package set for the bullseye distribution. []
    • Update the version of debrebuild. []
  • System health check development:

    • Add checks for broken SSH [], logrotate [], pbuilder [], NetBSD [], ‘unkillable’ processes [], unresponsive nodes [][][][], proxy connection failures [], too many installed kernels [], etc.
    • Automatically fix some failed systemd units. []
    • Add notes explaining all the issues that hosts are experiencing [] and handle zipped job log files correctly [].
    • Separate nodes which have been automatically marked as down [] and show status icons for jobs with issues [].
  • Misc:

In addition, Mattia Rizzolo updated the init_node script to suggest using sudo instead of explicit logout and logins [][] and the usual build node maintenance was performed by Holger Levsen [][][][][][], Mattia Rizzolo [][] and Vagrant Cascadian [][][][].



If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

08 August, 2020 03:52PM

Thorsten Alteholz

My Debian Activities in July 2020

FTP master

This month I accepted 434 packages and rejected 54. The overall number of packages that got accepted was 475.

Debian LTS

This was my seventy-third month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 25.25h. During that time I did LTS uploads of:

  • [DLA 2289-1] mupdf security update for five CVEs
  • [DLA 2290-1] e2fsprogs security update for one CVE
  • [DLA 2294-1] salt security update for two CVEs
  • [DLA 2295-1] curl security update for one CVE
  • [DLA 2296-1] luajit security update for one CVE
  • [DLA 2298-1] libapache2-mod-auth-openidc security update for three CVEs

I started to work on python2.7 as well but stumbled over some hurdles in the testsuite, so I did not upload a fixed version yet.

This month was much influenced by the transition from Jessie LTS to Stretch LTS, and one workflow or script or another needed some adjustments.

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the twenty-fifth ELTS month.

During my allocated time I uploaded:

  • ELA-230-1 for luajit
  • ELA-231-1 for curl

Like in LTS, I also started to work on python2.7 and encountered the same hurdles in the testsuite. So I did not upload a fixed version for ELTS either.

Last but not least I did some days of frontdesk duties.

Other stuff

In this section nothing much happened this month. Thanks a lot to everybody who NMUed a package to fix a bug.

08 August, 2020 09:54AM by alteholz


Dirk Eddelbuettel

RVowpalWabbit 0.0.15: Some More CRAN Build Issues

Another maintenance RVowpalWabbit package update brought us to version 0.0.15 earlier today. We attempted to fix one compilation error on Solaris, and addressed a few SAN/UBSAN issues with the gcc build.

As noted before, there is a newer package rvw based on the excellent GSoC 2018 and beyond work by Ivan Pavlov (mentored by James and myself) so if you are into Vowpal Wabbit from R go check it out.

CRANberries provides a summary of changes to the previous version. More information is on the RVowpalWabbit page. Issues and bugreports should go to the GitHub issue tracker.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

08 August, 2020 04:55AM

François Marier

Setting the default web browser on Debian and Ubuntu

If you are wondering what your default web browser is set to on a Debian-based system, there are several things to look at:

$ xdg-settings get default-web-browser
brave-browser.desktop

$ xdg-mime query default x-scheme-handler/http
brave-browser.desktop

$ xdg-mime query default x-scheme-handler/https
brave-browser.desktop

$ ls -l /etc/alternatives/x-www-browser
lrwxrwxrwx 1 root root 29 Jul  5  2019 /etc/alternatives/x-www-browser -> /usr/bin/brave-browser-stable*

$ ls -l /etc/alternatives/gnome-www-browser
lrwxrwxrwx 1 root root 29 Jul  5  2019 /etc/alternatives/gnome-www-browser -> /usr/bin/brave-browser-stable*

Debian-specific tools

The contents of /etc/alternatives/ are system-wide defaults and must therefore be set as root:

sudo update-alternatives --config x-www-browser
sudo update-alternatives --config gnome-www-browser

The sensible-browser tool (from the sensible-utils package) will use these to automatically launch the most appropriate web browser depending on the desktop environment.

Standard MIME tools

The others can be changed as a normal user. Using xdg-settings:

xdg-settings set default-web-browser brave-browser-beta.desktop

will also change what the two xdg-mime commands return:

$ xdg-mime query default x-scheme-handler/http
brave-browser-beta.desktop

$ xdg-mime query default x-scheme-handler/https
brave-browser-beta.desktop

since it puts the following in ~/.config/mimeapps.list:

[Default Applications]
text/html=brave-browser-beta.desktop
x-scheme-handler/http=brave-browser-beta.desktop
x-scheme-handler/https=brave-browser-beta.desktop
x-scheme-handler/about=brave-browser-beta.desktop
x-scheme-handler/unknown=brave-browser-beta.desktop

Note that if you delete these entries, then the system-wide defaults, defined in /etc/mailcap, will be used, as provided by the mime-support package.

Changing the x-scheme-handler/http (or x-scheme-handler/https) association directly using:

xdg-mime default brave-browser-nightly.desktop x-scheme-handler/http

will only change that particular one. I suppose this means you could have one browser for insecure HTTP sites (hopefully with HTTPS Everywhere installed) and one for HTTPS sites though I'm not sure why anybody would want that.

Summary

In short, if you want to set your default browser everywhere (using Brave in this example), do the following:

sudo update-alternatives --config x-www-browser
sudo update-alternatives --config gnome-www-browser
xdg-settings set default-web-browser brave-browser.desktop

08 August, 2020 04:10AM

Jelmer Vernooij

Improvements to Merge Proposals by the Janitor

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor.

Since the original post, merge proposals created by the janitor now include the debdiff between a build with and without the changes (showing the impact to the binary packages), in addition to the merge proposal diff (which shows the impact to the source package).

New merge proposals also include a link to the diffoscope diff between a vanilla build and the build with changes. Unfortunately these can be a bit noisy for packages that are not reproducible yet, due to the difference in build environment between the two builds.

This is part of the effort to keep the changes from the janitor high-quality.

The rollout surfaced some bugs in lintian-brush; these have been either fixed or mitigated (e.g. by disabling specified fixers).

For more information about the Janitor's lintian-fixes efforts, see the landing page.

08 August, 2020 12:30AM by Jelmer Vernooij, Perry Lorrier

August 07, 2020

Antonio Terceiro

When community service providers fail

I'm starting a new blog, and instead of going into the technical details on how it's made, or on how I migrated my content from the previous ones, I want to focus on why I did it.

It's been a while since I have written a blog post. I wanted to get back into it, but also wanted to finally self-host my blog/homepage because I have been let down before. And sadly, I was not let down by a for-profit, privacy invading corporation, but by a free software organization.

The sad story of wiki.softwarelivre.org

My first blog was hosted on a blog engine written by me, which ran on a TWiki, and later Foswiki, instance previously available at wiki.softwarelivre.org, hosted by ASL.org.

I was the one who introduced the tool to the organization in the first place. I had come from a previous, very fruitful experience on the use of wikis for creation of educational material while in university, which ultimately led me to become a core TWiki, and then Foswiki developer.

In 2004, I had just moved to Porto Alegre, got involved in ASL.org, and there was a demand for a tool like that. 2 years later, I left Porto Alegre, and some time after that I also left the daily operations of ASL.org, when it became clear that it was not really prepared for remote participation. I was still maintaining the wiki software and the OS for quite some years after, until I wasn't anymore.

In 2016, the server that hosted it went haywire, and there were no backups. A lot of people and free software groups lost their content forever. My blog was the least important content in there. To mention just a few examples, here are some groups that lost their content in there:

  • The Brazilian Foswiki user group hosted a bunch of getting started documentation in Portuguese, organized meetings, and coordinated through there.
  • GNOME Brazil hosted its homepage there until the moment the server broke.
  • The Inkscape Brazil user group had an amazing space there where they shared tutorials, a gallery of user-contributed drawings, and a lot more.

Some of this can still be reached via the Internet Archive Wayback Machine, but that is only useful for recovering content, not for it to be used by the public.

The announced tragedy of softwarelivre.org

My next blog after that was hosted at softwarelivre.org, a Noosfero instance also hosted by ASL.org. When it was introduced in 2010, this Noosfero instance became responsible for the main domain softwarelivre.org name. This was a bold move by ASL.org, and was a demonstration of trust in a local free software project, led by a local free software cooperative (Colivre).

I was a lead developer in the Noosfero project for a long time, and I was also involved in maintaining that server as well.

However, for several years there has been little to no investment in maintaining that service. I already expect that it will probably blow up at some point as the wiki did, or that it may just be shut down on purpose.

On the responsibility of organizations

Today, a large part of what most people consider "the internet" is controlled by a handful of corporations. Most popular services on the internet might look like they are gratis (free as in beer), but running those services is definitely not without costs. So when you use services provided by for-profit companies and are not paying for them with money, you are paying with your privacy and attention.

Society needs independent organizations to provide alternatives.

The market can solve a part of the problem by providing ethical services and charging for them. This is legitimate, and as long as there is transparency about how peoples' data and communications are handled, there is nothing wrong with it.

But that only solves part of the problem, as there will always be people who can't afford to pay, and people and communities who can afford to pay, but would rather rely on a nonprofit. That's where community-based services, provided by nonprofits, are also important. We should have more of them, not less.

So it makes me worry to realize ASL.org left the community in the dark. Losing the wiki wasn't even the first event of its kind, as the listas.softwarelivre.org mailing list server, with years and years of community communications archived in it, broke with no backups in 2012.

I do not intend to blame the ASL.org leadership personally, they are all well meaning and good people. But as an organization, it failed to recognize the importance of this role of service provider. I can even include myself in it: I was a member of the ASL.org board some 15 years ago; I was involved in the deployment of both the wiki and Noosfero, the former as a volunteer and the latter professionally. Yet, I did nothing to plan the maintenance of the infrastructure going forward.

When well meaning organizations fail, people who are not willing to have their data and communications be exploited for profit are left to their own devices. I can afford a virtual private server, and have the technical knowledge to import my old content into a static website generator, so I did it. But what about all the people who can't, or don't?

Of course, these organizations have to solve the challenge of being sustainable, and being able to pay professionals to maintain the services that the community relies on. We should be thankful to these organizations, and their leadership needs to recognize the importance of those services, and actively plan for them to be kept alive.

07 August, 2020 10:00PM


Jonathan Dowland

Vimwiki

At the start of the year I began keeping a daily diary for work as a simple text file. I've used various other approaches for this over the years, including many paper diaries and more complex digital systems. One great advantage of the one-page text file was it made assembling my weekly status report email very quick, nearly just a series of copies and pastes. But of course there are drawbacks and room for improvement.

vimwiki is a personal wiki plugin for the vim and neovim editors. I've tried to look at it before, years ago, but I found it too invasive, changing key bindings and display settings for any use of vim, and I use vim a lot.

I decided to give it another look. The trigger was actually something completely unrelated: Steve Losh's blog post "Coming Home to vim". I've been using vim for around 17 years but I still learned some new things from that blog post. In particular, I've never bothered to Use The Leader for user-specific shortcuts.

The Leader, to me, feels like a namespace that plugins should not touch: it's like the /usr/local of shortcut keys, a space for the local user only. Vimwiki's default bindings include several incorporating the Leader. Of course since I didn't use the leader, those weren't the ones that bothered me: It turns out I regularly use carriage return and backspace for moving the cursor around in normal mode, and Vimwiki steals both of those. It also truncates the display of (what it thinks are) URIs. It turns out I really prefer to see exactly what's in the file I'm editing. I haven't used vim folds since I first switched to it, despite them being why I switched.

After disabling all the default bindings and the URI-concealing stuff, Vimwiki is now much less invasive and I can explore its features at my own pace:

let g:vimwiki_key_mappings = { 'all_maps': 0, }
let g:vimwiki_conceallevel = 0
let g:vimwiki_url_maxsave = 0 

Followed by explicitly configuring the bindings I want. I'm letting it steal carriage return. And yes, I've used some Leader bindings after all.

nnoremap <leader>ww :VimwikiIndex<cr>
nnoremap <leader>wi :VimwikiDiaryIndex<cr>
nnoremap <leader>wd :VimwikiMakeDiaryNote<cr>

nnoremap <CR> :VimwikiFollowLink<cr>
nnoremap <Tab> :VimwikiNextLink<cr>
nnoremap <S-Tab> :VimwikiPrevLink<cr>
nnoremap <C-Down> :VimwikiDiaryNextDay<cr>
nnoremap <C-Up> :VimwikiDiaryPrevDay<cr>

,wd (my leader) now brings me straight to today's diary page, and I can create separate, non-diary pages for particular work items (e.g. a Ticket reference) that will span more than one day, and keep all the relevant stuff in one place.

07 August, 2020 10:55AM

Reproducible Builds (diffoscope)

diffoscope 155 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 155. This version includes the following changes:

[ Chris Lamb ]
* Bump Python requirement from 3.6 to 3.7 - most distributions are either
  shipping 3.5 or 3.7, so supporting 3.6 is not only somewhat unnecessary but
  also more difficult to test locally.
* Improvements to setup.py:
  - Apply the Black source code reformatter.
  - Add some URLs for the site of PyPI.org.
  - Update "author" and author email.
* Explicitly support Python 3.8.

[ Frazer Clews ]
* Move away from the deprecated logger.warn method to logger.warning.

[ Mattia Rizzolo ]
* Document ("classify") on PyPI that this project works with Python 3.8.

You can find out more by visiting the project homepage.

07 August, 2020 12:00AM

August 06, 2020


Dirk Eddelbuettel

nanotime 0.3.0: Yuge New Features!

A fresh major release of the nanotime package for working with nanosecond timestamps is hitting CRAN mirrors right now.

nanotime relies on the RcppCCTZ package for (efficient) high(er) resolution time parsing and formatting up to nanosecond resolution, and the bit64 package for the actual integer64 arithmetic. Initially implemented using the S3 system, it has benefitted greatly from work by Leonardo Silvestri who rejigged internals in S4—and now added new types for periods, intervals and durations. This is what is commonly called a big fucking deal!! So a really REALLY big thank you to my coauthor Leonardo for all these contributions.

With all these Yuge changes patiently chiselled in by Leonardo, it took some time since the last release and a few more things piled up. Matt Dowle corrected something we borked for integration with the lovely and irreplaceable data.table. We also switched to the awesome yet minimal tinytest package by Mark van der Loo, and last but not least we added the beginnings of a proper vignette—currently at nine pages but far from complete.

The NEWS snippet adds full details.

Changes in version 0.3.0 (2020-08-06)

  • Use tzstr= instead of tz= in call to RcppCCTZ::parseDouble() (Matt Dowle in #49).

  • Add new comparison operators for nanotime and characters (Dirk in #54 fixing #52).

  • Switch from RUnit to tinytest (Dirk in #55)

  • Substantial functionality extension in with new types nanoduration, nanoival and nanoperiod (Leonardo in #58, #60, #62, #63, #65, #67, #70 fixing #47, #51, #57, #61, #64 with assistance from Dirk).

  • A new (yet still draft-ish) vignette was added describing the four core types (Leonardo and Dirk in #71).

  • A required compilation flag for Windows was added (Leonardo in #72).

  • RcppCCTZ functions are called in new 'non-throwing' variants to not trigger exception errors (Leonardo in #73).

We also have a diff to the previous version thanks to CRANberries. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

06 August, 2020 11:53PM


Chris Lamb

The Bringers of Beethoven

This is a curiously poignant work to me that I doubt I would ever be able to communicate. I found it about fifteen years ago, along with a friend who I am quite regrettably no longer in regular contact with, so there was some complicated nostalgia entangled with rediscovering it today.

What might I say about it instead? One tell-tale sign of 'good' art is that you can find something new in it, or yourself, each time. In this sense, despite The Bringers of Beethoven being more than a little ridiculous, it is somehow 'good' music to me. For example, it only really dawned on me now that the whole poem is an allegory for a GDR-like totalitarianism.

But I also realised that it is not an accident that it is Beethoven himself (quite literally the soundtrack for Enlightenment humanism) that is being weaponised here, rather than some fourth-rate composer of military marches or one with a problematic past. That is to say, not only is the poem arguing that something universally recognised as an unalloyed good can be subverted for propagandistic ends, but that is precisely the point being made by the regime. An inverted Clockwork Orange, if you like.

Yet when I listen to it again I can't help but laugh. I think of the 18th-century poet Alexander Pope, who first used the word bathos to refer to those abrupt and often absurd transitions from the elevated to the ordinary, contrasting it with the concept of pathos, the sincere feeling of sadness and tragedy. I can't think of two better words.

06 August, 2020 09:48PM


Joey Hess

Mr Process's wild ride

When a unix process is running in a directory, and that directory gets renamed, the process is taken on a ride to a new location in the filesystem. Suddenly, any "../" paths it might be using point to new, and unexpected locations.
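A minimal sketch (mine, not Joey's) that demonstrates the ride from inside a single process; the paths are throwaway examples under /tmp.

#include <stdio.h>
#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    char buf[4096];

    mkdir("/tmp/ride", 0755);
    mkdir("/tmp/ride/a", 0755);
    mkdir("/tmp/ride/a/b", 0755);
    if (chdir("/tmp/ride/a/b") != 0)
        return 1;

    /* Someone (here: the process itself) renames the parent tree... */
    rename("/tmp/ride/a", "/tmp/elsewhere");

    /* ...and the working directory has been taken along for the ride. */
    printf("cwd is now  %s\n", getcwd(buf, sizeof buf));  /* /tmp/elsewhere/b */
    printf("'..' is now %s\n", realpath("..", buf) ? buf : "(error)");
    return 0;
}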

This can be a source of interesting behavior, and also of security holes.

Suppose root is poking around in ~user/foo/bar/ and decides to vim ../../etc/conffile

If the user notices this process is running, they can mv ~/foo/bar /tmp and when vim saves the file, it will write to /tmp/bar/../../etc/conffile AKA /etc/conffile.

(Vim does warn that the file has changed while it was being edited. Other editors may not. Or root may be feeling especially BoFH and decide to overwrite the user's changes to their file. Or the rename could perhaps be carefully timed to avoid vim's overwrite protection.)

Or, suppose root, in the same place, decides to archive ../../etc with tar, and then delete it:

tar cf etc.tar ../../etc; rm -rf ../../etc

Now the user has some time to take root's shell on a ride, before the rm starts ... and make it delete all of /etc!

Anyone know if this class of security hole has a name?

06 August, 2020 08:02PM

Christian Kastner

My new favorite utility: autojump

Like any developer, I have amassed an impressive collection of directory trees both broad and deep. Navigating these trees became increasingly cumbersome, and setting CDPATH, using auto-completion, and working with the readline history search alleviated this only somewhat.

Enter autojump, from the package of the same name.

Whatever magic it uses is unbelievably effective. I estimate that in at least 95% of my cases, typing j <name-fragment> changes to the directory I was actually thinking of.

Say I'm working on package scikit-learn. My clone of the Salsa repo is in ~/code/pkg-scikit-learn/scikit-learn. Changing to that directory is trivial, I only need to specify a name fragment:

$ j sci
/home/christian/code/pkg-scikit-learn/scikit-learn
christian@workstation:~/code/pkg-scikit-learn/scikit-learn

But what if I want to work on scikit-learn upstream, to prepare a patch, for example? That repo has been cloned to ~/code/github/scikit-learn. No problem at all, just add another name fragment:

$ j gi sci
/home/christian/code/github/scikit-learn
christian@workstation:~/code/github/scikit-learn

The magic, however, is most evident with directory trees I rarely enter. As in: I have a good idea of the directory name I wish to change to, but I don't really recall its exact name, nor where (in the tree) it is located. I used to rely on autocomplete to somehow get there, which can involve hitting the [TAB] key far too many times, and falling back to find in the worst case, but now autojump always seems to get me there on the first try.

I can't believe that this has been available in Debian for 10 years and I only discovered it now.

06 August, 2020 02:41PM by Christian Kastner

Sam Hartman

Good Job Debian: Compatibility back to 1999

So, I needed a container of Debian Slink (2.1), released back in 1999. I expected this was going to be a long and involved process. Things didn't look good from the start:
root@mount-peerless:/usr/lib/python3/dist-packages/sqlalchemy# debootstrap  slink /build/slink2 
http://archive.debian.org/debian                                                               
E: No such script: /usr/share/debootstrap/scripts/slink

Hmm, I thought I remembered slink support for debootstrap--not that slink used debootstrap by default--back when I was looking through the debootstrap sources years ago. Sure enough looking through the changelogs, slink support was dropped back in 2005.
Okay, well, this isn't going to work either, but I guess I could try debootstrapping sarge and from there go back to slink.
Except it worked fine.
Go us!

06 August, 2020 12:54PM

August 05, 2020


Holger Levsen

20200805-debconf7

DebConf7

This tshirt is 13 years old and from DebConf7.

DebConf7 was my 5th DebConf and took place in Edinburgh, Scotland.

And finally I could tell people I was a DD :-D Though as you can guess, that's yet another story to be told. So anyway, Edinburgh.

I don't recall exactly whether the video team had to record 6 or 7 talk rooms on 4 floors, but this was probably the most intense set up we ran. And we ran a lot, from floor to floor, and room to room.

DebConf7 was also special because it had a very special night venue, which was in an ex-church in a rather normal building, operated as a sort of community center or some such, while the old church interior was still very much visible, as in everything new was built around the old stuff.

And while the night venue was cool, it also meant we (video team) had no access to our machines overnight (or for much of the evening), because we had to leave the university overnight and the networking situation didn't allow remote access with the bandwidth needed to do anything video.

The night venue had some very simple house rules, like don't rearrange stuff, don't break stuff, don't fix stuff and just a few more, and of course we broke them in the best possible way: Toresbe, with the help of people I don't remember, fixed the organ, which had been broken for decades. And so the house sounded in some very nice new old tune and I think everybody was happy we broke that rule.

I believe the city is really nice from the little I've seen of it. A very nice old town, a big castle on the hill :) I'm not sure whether I missed the day trip to Glasgow to fix video things or to rest or both...

Another thing I missed was getting a kilt, for which Phil Hands made a terrific design (update: the design is called tartan and was made by Phil indeed!), which spelled Debian in Morse code. That was pretty cool and the kilts are really nice on DebConf group pictures since then. And if you've been wearing this kilt regularly for the last 13 years it was probably also a sensible investment. ;)

It seems I don't have that many more memories of this DebConf; British power plugs and how to hack them come to mind, and some other stuff here and there, but I remember less than previous years. I'm blaming this on the intense video setup and also on the sheer amount of people, which was the highest until then and for some years, I believe maybe even until Heidelberg 8 years later. IIRC there were around 470 people there, and over my first five years of DebConf I was incredibly lucky to make many friends in Debian, so I probably just hung out and had good times.

05 August, 2020 10:27PM


Dirk Eddelbuettel

RcppCCTZ 0.2.8: Minor API Extension

A new minor release 0.2.8 of RcppCCTZ is now on CRAN.

RcppCCTZ uses Rcpp to bring CCTZ to R. CCTZ is a C++ library for translating between absolute and civil times using the rules of a time zone. In fact, it is two libraries. One for dealing with civil time: human-readable dates and times, and one for converting between absolute and civil times via time zones. And while CCTZ is made by Google(rs), it is not an official Google product. The RcppCCTZ page has a few usage examples and details. This package was the first CRAN package to use CCTZ; by now at least three others do—using copies in their packages which remains less than ideal.

This version adds three no throw variants of three existing functions, contributed again by Leonardo. This will be used in an upcoming nanotime release which we are finalising now.

Changes in version 0.2.8 (2020-08-04)

  • Added three new nothrow variants (for win32) needed by the expanded nanotime package (Leonardo in #37)

We also have a diff to the previous version thanks to CRANberries. More details are at the RcppCCTZ page; code, issue tickets etc at the GitHub repository.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

05 August, 2020 01:25AM


Holger Levsen

20200804-debconf6

DebConf6

This tshirt is 14 years old and from DebConf6.

DebConf6 was my 4th DebConf and took place in Oaxtepec, Mexico.

I'm a bit exhausted right now which is probably quite fitting to write something about DebConf6... many things in life are a question of perception, so I will mention the waterfall and the big swirl and the band playing with the fireworks during the conference dinner, the joy that we finally could use the local fiber network (after asking for months) just after discovering that the 6h shopping tour forgot to bring the essential pig tail connectors to connect the wireless antennas to the cards, which we needed to provide network to the rooms where the talks would take place.

DebConf6 was the first DebConf with live streaming using dvswitch (written by Ben Hutchings and removed from unstable in 2015 as the world had moved to voctomix, which is yet another story to be told eventually). The first years (so DebConf6 and some) the videoteam focussed on getting the post processing done and the videos released, and streaming was optional, even though it was an exciting new feature and we still managed to stream mostly all we recorded and sometimes more... ;)

Setting up the network uplink also was very challenging and took, I don't remember exactly, until day 4 or 5 of DebCamp (which lasted 7 days), so there was a group of geeks in need of network, and mostly unable to fix it, because for fixing it we needed to communicate and IRC was down. (There was no mobile phone data at that time, the first iPhone wasn't sold yet, those were the dark ages.)

I remember literally standing on a roof to catch the wifi signal and excitingly shouting "I got one ping back! ... one ping back ...", less excitingly. I'll spare you the details now (and me writing them down) but I'll say that the solution involved Neil McGovern climbing an antenna and attaching a wifi antenna up high, probably 15m or 20m or some such. Finally we had uplink. I don't recall if that pig tail connector incident happened before or after, but in the end the network setup worked nicely on the wide area we occupied. Even though in some dorms the cleaning people daily removed one of our APs to be able to watch TV while cleaning ;) (Which kind of was ok, but still... they could have plugged it back in.)

I also joyfully remember a certain vegetarian table, a most memorable bus ride (I'll just say 5 or rather cinco, and, unrelated except on the same bus ride, "Jesus" (and "Maria" for sure..)!) and talking with Jim Gettys and thus learning about the One Laptop per Child (OLPC) project.

As for any DebConf, there's sooo much more to be told, but I'll end here and just thank Gunnar Wolf (as he masterminded much of this DebConf) and go to bed now :-)

05 August, 2020 12:24AM

August 04, 2020

Osamu Aoki

exim4 configuration for Desktop (better gmail support)

Since gmail rewrites the "From:" address now (2020) and keeps changing its access limitations, it is wise not to use it as a smarthost any more.  (If you need to access multiple gmail addresses from mutt etc., use esmtp etc.)

---
For most of our Desktop PCs running with stock exim4 and mutt, I think sending out mail is becoming a bit rough, since using a random smarthost causes lots of trouble due to the measures taken to prevent spam.

As mentioned in the Exim4 user FAQ, /etc/hosts should list the FQDN with an externally DNS-resolvable domain name instead of localdomain, to get the correct EHLO/HELO line.  That's the first step.

The stock configuration of exim4 only allows you to use a single smarthost for all your mail.  I use one address for my personal use, which is checked by my smartphone too.  The other account is for subscribing to mailing lists.  So I needed to tweak ...

Usually, mutt is smart enough to set the From address since my .muttrc has

# Set default for From: for replyes for alternates.
set reverse_name

So how can I teach exim4 to send mail depending on the mail account listed in the From header?

For my gmail accounts, each mail should be sent over the account-specific SMTP connection matching your From header, to get all the modern spam-protection data (DKIM, SPF, DMARC...) in the right state.  (Besides, they overwrite the From: header anyway if you use the wrong connection.)

For my debian.org mails, mails should be sent from my shell account on people.debian.org so they are very unlikely to be blocked.  Sometimes I wasn't sure whether some of these debian.org mails sent through my ISP's smarthost were really getting to the intended person.

To these ends, I have created small patches to the /etc/exim4/conf.d files and reported it to Debian BTS: #869480 Support multiple smarthosts (gmail support).  These patches are for the source package.

To use my configuration tweak idea, you have an easier route no matter which exim version you are using.  Please copy the pertinent edited files from my github site into your installed /etc/exim4/conf.d, read them, and get the benefits.
If you really wish to keep the envelope address etc. matching the From: header, please rewrite aggressively using the From: header with the edited rewrite/31_exim4-config_rewriting as follows:

.ifndef NO_EAA_REWRITE_REWRITE
*@+local_domains "${lookup{${local_part}}lsearch{/etc/email-addresses}\
                   {$value}fail}" f
# identical rewriting rule for /etc/mailname
*@ETC_MAILNAME "${lookup{${local_part}}lsearch{/etc/email-addresses}\
                   {$value}fail}" f
.endif
* "$h_from:" Frs

So far it's working fine for me, but if you find a bug, let me know.

Osamu

04 August, 2020 03:03PM by osamu.aoki@gmail.com (noreply@blogger.com)

August 03, 2020


Holger Levsen

20200803-debconf5

DebConf5

This tshirt is 15 years old and from DebConf5. It still looks quite nice! :)

DebConf5 was my 3rd DebConf and took place in Helsinki, or rather Espoo, in Finland.

This was one of my most favorite DebConfs (though I basically loved them all) and I'm not really sure why, I guess it's because of the kind of community at the event. We stayed in some future dorms of the university, which were to be first used by some European athletics championship and which we could use even before that, as guests zero. Being in Finland there were of course saunas in the dorms, which we frequently used and greatly enjoyed. Still, one day we had to go on a trip to another sauna in the forest, because of course you cannot visit Finland and only see one sauna. Or at least, you should not.

Another aspect which increased community bonding was that we had to authenticate using 802.1X (IIRC, please correct me), which was an authentication standard mostly used for wireless but which also works for wired ethernet, except that not many had used it on Linux before. Thus quite some related bugs were fixed in the first days of DebCamp...

Then my powerpc ibook also decided to go bad, so I had to remove 30 screws to get the harddrive out and 30 screws back in, to not have 30 screws laying around for a week. Then I put the harddrive into a spare (x86) laptop and only used my /home partition and was very happy this worked nicely. And then, for travelling back, I had to unscrew and screw 30 times again. (I think my first attempt took 1.5h and the fourth only 45min or so ;) Back home then I bought a laptop where one could remove the harddrive using one screw.

Oh, and then I was foolish during the DebConf5 preparations and said that I could imagine setting up a team and doing video recordings, as previous DebConfs mostly didn't have recordings and the one that had, didn't have releases of them...

And so we did videos. And as we were mostly inexperienced we did them the hard way: during the day we recorded on tape and then when the talks were done, we used a postprocessing tool called 'cinelerra' and edited them. And because Eric Evans was on the team and because Eric worked every night almost all night, all nights, we managed to actually release them all when DebConf5 was over. I very well remember many many (23 or 42) Debian people cleaning the dorms thoroughly (as they were brand new..) and Eric just sitting somewhere, exhausted and watching the cleaners. And everybody was happy Eric was idling there, cause we knew why. In the aftermath of DebConf5 Ben Hutchings then wrote videolink (removed from sid in 2013) which we used to create video DVDs of our recordings based on a simple html file with links to the actual videos.

There were many more memorable events. The boat ride was great. A pirate flag appeared. One night people played guitar until very late (or rather early) close to the dorms, so at about 3 AM someone complained about it, not in person, but on the debian-devel mailinglist. And those drunk people playing guitar replied immediately on the mailinglist. And then someone from the guitar group gave a talk, at 9 AM, and the video is online... ;) (It's a very slowwwwwww talk.)

If you haven't been to or close to the polar circles it's almost impossible to anticipate how life is in summer there. It gets a bit darker after midnight or rather after 1 AM and then at 3 AM it gets light again, so it's reaaaaaaally easy to miss the night once and it's absolutely not hard to miss the night for several nights in a row. And then I shared a room with 3 people who all snore quite loud...

There was more. I was lucky to witness the first (or second?) cheese and wine party which at that time took place in a dorm room with, dunno, 10 people and maybe 15 kinds of cheese. And, of course, I met many wonderful people there, to mention a few I'll say Jesus, I mean mooch or data, Amaya and p2. And thanks to some bad luck which turned out well, I also had my first ever sushi in Helsinki.

And and and. DebConfs are soooooooo good! :-) I'll stop here as I originally planned to only write a paragraph or two about each and there are quite some to be written!

Oh, and as we all learned, there are probably no mosquitos in Helsinki, just in Espoo. And you can swim naked through a lake and catch a taxi on the other side, with no clothes and no money, no big deal. (And you might not believe it, but that wasn't me. I cannot swim that well.)

03 August, 2020 10:13PM

Giovanni Mascellani

Bye bye Python 2!

And so, today, while I was browsing updates for my Debian unstable laptop, I noticed that aptitude wouldn't automatically upgrade python2 and related packages (I don't know why, and at this point don't care). So I decided to dare: I removed the python2 package to see what the dependency solver would have proposed me. It turned out that there was basically nothing I couldn't live without.

So, bye bye Python 2. It was a long ride and I loved programming with you. But now it's the turn of your younger brother.

$ python
bash: python: comando non trovato

(guess what "comando non trovato" means?)

And thanks to all those who made this possible!

03 August, 2020 07:00PM by Giovanni Mascellani

Sylvain Beucler

Debian LTS and ELTS - July 2020

Debian LTS Logo

Here is my transparent report for my work on the Debian Long Term Support (LTS) and Debian Extended Long Term Support (ELTS), which extend the security support for past Debian releases, as a paid contributor.

In July, the monthly sponsored hours were split evenly among contributors depending on their max availability - I was assigned 25.25h for LTS (out of 30 max; all done) and 13.25h for ELTS (out of 20 max; all done).

We shifted suites: welcome Stretch LTS and Jessie ELTS. The LTS->ELTS switch happened at the start of the month, but the oldstable->LTS switch happened later (after finalizing and flushing proposed-updates to a last point release), causing some confusion but nothing major.

ELTS - Jessie

  • New local build setup
  • ELTS buildds: request timezone harmonization
  • Reclassify in-progress updates from jessie-LTS to jessie-ELTS
  • python3.4: finish preparing update, security upload ELA 239-1
  • net-snmp: global triage: bisect CVE-2019-20892 to identify affected version, jessie/stretch not-affected
  • nginx: global triage: clarify CVE-2013-0337 status; locate CVE-2020-11724 original patch and regression tests, update MITRE
  • nginx: security upload ELA-247-1 with 2 CVEs

LTS - Stretch

  • Reclassify in-progress/needed updates from stretch/oldstable to stretch-LTS
  • rails: upstream security: follow-up on CVE-2020-8163 (RCE) on upstream bug tracker and create pull request for 4.x (merged), hence getting some upstream review
  • rails: global security: continue coordinating upload in multiple Debian versions, prepare fixes for common stretch/buster vulnerabilities in buster
  • rails: security upload DLA-2282 fixing 3 CVEs
  • python3.5: security upload DLA-2280-1 fixing 13 pending non-critical vulnerabilities, and its test suite
  • nginx: security upload DLA-2283 (cf. common ELTS work)
  • net-snmp: global triage (cf. common ELTS work)
  • public IRC monthly team meeting
  • reach out to clarify the intro from last month's report, following unsettled feedback during meeting

Documentation/Scripts

  • ELTS/README.how-to-release-an-update: fix typo
  • ELTS buildd: attempt to diagnose slow performance, provide comparison with Debian and local builds
  • LTS/Meetings: improve presentation
  • SourceOnlyUpload: clarify/de-dup pbuilder doc
  • LTS/Development: reference build logs URL, reference proposed-updates issue during dists switch, reference new-upstream-versioning discussion, multiple jessie->stretch fixes and clean-ups
  • LTS/Development/Asan: drop wheezy documentation
  • Warn about jruby mis-triage
  • Provide feedback for ksh/CVE-2019-14868
  • Provide feedback for condor update
  • LTS/TestsSuites/nginx: test with new request smuggling test cases

03 August, 2020 01:52PM

Arnaud Rebillout

GoAccess 1.4, a detailed tutorial

GoAccess v1.4 was just released a few weeks ago! Let's take this chance to write a loooong tutorial. We'll go over every step to install and operate GoAccess. This is a tutorial aimed at those who don't play sysadmin every day, and that's why it's so long: I did my best to provide thorough explanations all along, so that it's more than just a "copy-and-paste" kind of tutorial. And for those who do play sysadmin every day: please try not to fall asleep while reading, and don't hesitate to drop me an e-mail if you spot anything inaccurate in here. Thanks!

Introduction

So what's GoAccess already? GoAccess is a web log analyzer, and it allows you to visualize the traffic for your website, and get to know a bit more about your visitors: how many visitors and hits, for which pages, coming from where (geolocation, operating system, web browser...), etc... It does so by parsing the access logs from your web server, be it Apache, NGINX or whatever.

GoAccess gives you different options to display the statistics, and in this tutorial we'll focus on producing an HTML report. Meaning that you can see the statistics for your website straight in your web browser, in the form of a single HTML page.

For an example, you can have a look at the stats of my blog here: http://goaccess.arnaudr.io.

GoAccess is written in C, it has very few dependencies, it has been around for about 10 years, and it's distributed under the MIT license.

Assumptions

This tutorial is about installing and configuring, so I'll assume that all the commands are run as root. I won't prefix each of them with sudo.

I use the Apache web server, running on a Debian system. I don't think it matters so much for this tutorial though. If you're using NGINX it's fine, you can keep reading.

Also, I will just use the name SITE for the name of the website that we want to analyze with GoAccess. Just replace that with the real name of your site.

I also assume the following locations for your stuff:

  • the website is at /var/www/SITE
  • the logs are at /var/log/apache2/SITE (yes, there is a sub-directory)
  • we're going to save the GoAccess database in /var/lib/goaccess-db/SITE.

If you have your stuff in /srv/SITE/{log,www} instead, no worries, just adjust the paths accordingly, I bet you can do it.

Installation

The latest version of GoAccess is v1.4, and it's not yet available in the Debian repositories. So for this part, you can follow the instructions from the official GoAccess download page. Install steps are explained in detail, so there's nothing left for me to say :)

When this is done, let's get started with the basics.

We're talking about the latest version v1.4 here, let's make sure:

$ goaccess --version
GoAccess - 1.4.
...

Now let's try to create a HTML report. I assume that you already have a website up and running.

GoAccess needs to parse the access logs. These logs are optional, they might or might not be created by your web server, depending on how it's configured. Usually, these log files are named access.log, unsurprisingly.

You can check if those logs exist on your system by running this command:

find /var/log -name access.log

Another important thing to know is that these logs can be in different formats. In this tutorial we'll assume that we work with the combined log format, because it seems to be the most common default.
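For reference, a single request in the combined format looks roughly like this (a completely made-up example, using the documentation IP range and example.org as stand-ins):

203.0.113.42 - - [22/Jul/2020:06:25:13 +0000] "GET /index.html HTTP/1.1" 200 5123 "https://example.org/" "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0"

The last two quoted fields are the referrer and the user agent.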

To check what kind of access logs your web server produces, you must look at the configuration for your site.

For an Apache web server, you should have such a line in the file /etc/apache2/sites-enabled/SITE.conf:

CustomLog ${APACHE_LOG_DIR}/SITE/access.log combined

For NGINX, it's quite similar. The configuration file would be something like /etc/nginx/sites-enabled/SITE, and the line to enable access logs would be something like:

access_log /var/log/nginx/SITE/access.log;

Note that NGINX writes the access logs in the combined format by default, that's why you don't see the word combined anywhere in the line above: it's implicit.

Alright, so from now on we assume that yes, you have access log files available, and yes, they are in the combined log format. If that's the case, then you can already run GoAccess and generate a report, for example for the log file /var/log/apache2/access.log

goaccess \
    --log-format COMBINED \
    --output /tmp/report.html \
    /var/log/apache2/access.log

It's possible to give GoAccess more than one log file to process, so if you have for example the file access.log.1 around, you can use it as well:

goaccess \
    --log-format COMBINED \
    --output /tmp/report.html \
    /var/log/apache2/access.log \
    /var/log/apache2/access.log.1

If GoAccess succeeds (and it should), you're on the right track!

All that's left to do to complete this test is to have a look at the HTML report that was created. It's a single HTML page, so you can easily scp it to your machine, or just move it to the document root of your site, and then open it in your web browser.
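For example, from your own machine (the hostname and paths here are placeholders, adjust to your setup):

scp myserver:/tmp/report.html /tmp/
xdg-open /tmp/report.html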

Looks good? So let's move on to more interesting things.

Web server configuration

This part is very short, because in terms of configuration of the web server, there's very little to do. As I said above, the only thing you want from the web server is to create access log files. Then you want to be sure that GoAccess and your web server agree on the format for these files.

In the part above we used the combined log format, but GoAccess supports many other common log formats out of the box, and even allows you to parse custom log formats. For more details, refer to the option --log-format in the GoAccess manual page.

Another common log format is named, well, common. It even has its own Wikipedia page. But compared to combined, the common log format contains less information: it doesn't include the referrer and user-agent values, meaning that you won't have them in the GoAccess report.

So at this point you should understand that, unsurprisingly, GoAccess can only tell you about what's in the access logs, no more no less.

And that's all in terms of web server configuration.

Configuration to run GoAccess unprivileged

Now we're going to create a user and group for GoAccess, so that we don't have to run it as root. The reason is that, well, for everything running unattended on your server, the less code runs as root, the better. It's good practice and common sense.

In this case, GoAccess is simply a log analyzer. So it just needs to read the log files from your web server, and there is no need to be root for that: an unprivileged user can do the job just as well, assuming it has read permissions on /var/log/apache2 or /var/log/nginx.

The log files of the web server are usually group-owned by adm (though it might depend on your distro, I'm not sure). This is something you can check easily with the following command:

ls -l /var/log | grep -e apache2 -e nginx

As a result you should get something like that:

drwxr-x--- 2 root adm 20480 Jul 22 00:00 /var/log/apache2/

And as you can see, the directory apache2 belongs to the group adm. It means that you don't need to be root to read the logs, instead any unprivileged user that belongs to the group adm can do it.

So, let's create the goaccess user, and add it to the adm group:

adduser --system --group --no-create-home goaccess
addgroup goaccess adm

And now, let's run GoAccess unprivileged, and verify that it can still read the log files:

setpriv \
    --reuid=goaccess --regid=goaccess \
    --init-groups --inh-caps=-all \
    -- \
    goaccess \
    --log-format COMBINED \
    --output /tmp/report2.html \
    /var/log/apache2/access.log

setpriv is the command used to drop privileges. The syntax is quite verbose, it's not super friendly for tutorials, but don't be scared and read the manual page to learn what it does.

In any case, this command should work, and at this point, it means that you have a goaccess user ready, and we'll use it to run GoAccess unprivileged.

Integration, option A - Run GoAccess once a day, from a logrotate hook

In this part we wire things together, so that GoAccess processes the log files once a day, adds the new logs to its internal database, and generates a report from all that aggregated data. The result will be a single HTML page.

Introducing logrotate

In order to do that, we'll use a logrotate hook. logrotate is a little tool that should already be installed on your server, and that runs once a day, and that is in charge of rotating the log files. "Rotating the logs" means moving access.log to access.log.1 and so on. With logrotate, a new log file is created every day, and log files that are too old are deleted. That's what prevents your logs from filling up your disk basically :)

You can check that logrotate is indeed installed and enabled with this command (assuming that your init system is systemd):

systemctl status logrotate.timer

What's interesting for us is that logrotate allows you to run scripts before and after the rotation is performed, so it's an ideal place from where to run GoAccess. In short, we want to run GoAccess just before the logs are rotated away, in the prerotate hook.

But let's do things in order. At first, we need to write a little wrapper script that will be in charge of running GoAccess with the right arguments, and that will process all of your sites.

The wrapper script

This wrapper is made to process more than one site, but if you have only one site it works just as well, of course.

So let me just drop it on you like that, and I'll explain afterward. Here's my wrapper script:

#!/bin/bash

# Process log files /var/log/apache2/SITE/access.log,
# only if /var/lib/goaccess-db/SITE exists.
# Create HTML reports in $1, a directory that must exist.

set -eu

OUTDIR=
LOGDIR=/var/log/apache2
DBDIR=/var/lib/goaccess-db

fail() { echo >&2 "$@"; exit 1; }

[ $# -eq 1 ] || fail "Usage: $(basename $0) OUTPUT_DIRECTORY"

OUTDIR=$1

[ -d "$OUTDIR" ] || fail "'$OUTDIR' is not a directory"
[ -d "$LOGDIR" ] || fail "'$LOGDIR' is not a directory"
[ -d "$DBDIR"  ] || fail "'$DBDIR' is not a directory"

for d in $(find "$LOGDIR" -mindepth 1 -maxdepth 1 -type d); do
    site=$(basename "$d")
    dbdir=$DBDIR/$site
    logfile=$d/access.log
    outfile=$OUTDIR/$site.html

    if [ ! -d "$dbdir" ] || [ ! -e "$logfile" ]; then
        echo "‣ Skipping site '$site'"
        continue
    else
        echo "‣ Processing site '$site'"
    fi

    setpriv \
        --reuid=goaccess --regid=goaccess \
        --init-groups --inh-caps=-all \
        -- \
    goaccess \
        --agent-list \
        --anonymize-ip \
        --persist \
        --restore \
        --config-file /etc/goaccess/goaccess.conf \
        --db-path "$dbdir" \
        --log-format "COMBINED" \
        --output "$outfile" \
        "$logfile"
done

So you'd install this script at /usr/local/bin/goaccess-wrapper for example, and make it executable:

chmod +x /usr/local/bin/goaccess-wrapper

A few things to note:

  • We run GoAccess with --persist, meaning that we save the parsed logs in the internal database, and --restore, meaning that we include everything from the database in the report. In other words, we aggregate the data at every run, and the report grows bigger every time.
  • The parameter --config-file /etc/goaccess/goaccess.conf is a workaround for #1849. It should not be needed for future versions of GoAccess > 1.4.

As is, the script assumes that the logs for your site are stored in a sub-directory /var/log/apache2/SITE/. If that's not the case, adjust the wrapper accordingly.

The name of this sub-directory is then used to find the GoAccess database directory /var/lib/goaccess-db/SITE/. This directory is expected to exist, meaning that if you don't create it yourself, the wrapper won't process this particular site. It's a simple way to control which sites are processed by this GoAccess wrapper, and which sites are not.

So if you want goaccess-wrapper to process the site SITE, just create a directory with the name of this site under /var/lib/goaccess-db:

mkdir -p /var/lib/goaccess-db/SITE
chown goaccess:goaccess /var/lib/goaccess-db/SITE

Now let's create an output directory:

mkdir /tmp/goaccess-reports
chown goaccess:goaccess /tmp/goaccess-reports

And let's give a try to the wrapper script:

goaccess-wrapper /tmp/goaccess-reports
ls /tmp/goaccess-reports

Which should give you:

SITE.html

At the same time, you can check that GoAccess populated the database with a bunch of files:

ls /var/lib/goaccess-db/SITE

Setting up the logrotate prerotate hook

At this point, we have the wrapper in place. Let's now add a pre-rotate hook so that goaccess-wrapper runs once a day, just before the logs are rotated away.

The logrotate config file for Apache2 is located at /etc/logrotate.d/apache2, and for NGINX it's at /etc/logrotate.d/nginx. Among the many things you'll see in this file, here's what is of interest for us:

  • daily means that your logs are rotated every day
  • sharedscripts means that the pre-rotate and post-rotate scripts are executed once total per rotation, and not once per log file.

In the config file, there is also this snippet:

prerotate
    if [ -d /etc/logrotate.d/httpd-prerotate ]; then \
        run-parts /etc/logrotate.d/httpd-prerotate; \
    fi; \
endscript

It indicates that scripts in the directory /etc/logrotate.d/httpd-prerotate/ will be executed before the rotation takes place. Refer to the man page run-parts(8) for more details...

Putting all of that together, it means that logs from the web server are rotated once a day, and if we want to run scripts just before the rotation, we can just drop them in the httpd-prerotate directory. Simple, right?

Let's first create this directory if it doesn't exist:

mkdir -p /etc/logrotate.d/httpd-prerotate/

And let's create a tiny script at /etc/logrotate.d/httpd-prerotate/goaccess:

#!/bin/sh
exec goaccess-wrapper /tmp/goaccess-reports

Don't forget to make it executable:

chmod +x /etc/logrotate.d/httpd-prerotate/goaccess

As you can see, the only thing that this script does is to invoke the wrapper with the right argument, ie. the output directory for the HTML reports that are generated.
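If you want to double-check that run-parts will actually pick your script up (it silently skips file names containing dots or other unexpected characters), you can ask it to list what it would execute:

run-parts --test /etc/logrotate.d/httpd-prerotate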

And that's all. Now you can just come back tomorrow, check the logs, and make sure that the hook was executed and succeeded. For example, this kind of command will tell you quickly if it worked:

journalctl | grep logrotate

Integration, option B - Run GoAccess once a day, from a systemd service

OK so we've just seen how to use a logrotate hook. One downside with that is that we have to drop privileges in the wrapper script, because logrotate runs as root, and we don't want to run GoAccess as root. Hence the rather convoluted syntax with setpriv.

Rather than embedding this kind of thing in a wrapper script, we can instead run the wrapper script from a systemd service, and define which user runs the wrapper straight in the systemd service file.

Introducing systemd niceties

So we can create a systemd service, along with a systemd timer that fires daily. We can then set the user and group that execute the script straight in the systemd service, and there's no need for setpriv anymore. It's a bit more streamlined.

We can even go a bit further, and use systemd parameterized units (also called templates), so that we have one service per site (instead of one service that process all of our sites). That will simplify the wrapper script a lot, and it also looks nicer in the logs.

With this approach however, it seems that we can't really run exactly before the logs are rotated away, like we did in the section above. But that's OK. What we'll do is run once a day, no matter the time, and we'll just make sure to process both log files access.log and access.log.1 (ie. the current logs and the logs from yesterday). This way, we're sure not to miss any line from the logs.

Note that GoAccess is smart enough to only consider newer entries from the log files, and discard entries that are already in the database. In other words, it's safe to parse the same log file more than once, GoAccess will do the right thing. For more details see "INCREMENTAL LOG PROCESSING" from man goaccess.

Implementation

And here's what it all looks like.

First, a little wrapper script for GoAccess:

#!/bin/bash

# Usage: $0 SITE DBDIR LOGDIR OUTDIR

set -eu

SITE=$1
DBDIR=$2
LOGDIR=$3
OUTDIR=$4

LOGFILES=()
for ext in log log.1; do
    logfile="$LOGDIR/access.$ext"
    [ -e "$logfile" ] && LOGFILES+=("$logfile")
done

if [ ${#LOGFILES[@]} -eq 0 ]; then
    echo "No log files in '$LOGDIR'"
    exit 0
fi

goaccess \
    --agent-list \
    --anonymize-ip \
    --persist \
    --restore \
    --config-file /etc/goaccess/goaccess.conf \
    --db-path "$DBDIR" \
    --log-format "COMBINED" \
    --output "$OUTDIR/$SITE.html" \
    "${LOGFILES[@]}"

This wrapper does very little. Actually, the only thing it does is to check for the existence of the two log files access.log and access.log.1, to be sure that we don't ask GoAccess to process a file that does not exist (GoAccess would not be happy about that).

Save this file under /usr/local/bin/goaccess-wrapper, don't forget to make it executable:

chmod +x /usr/local/bin/goaccess-wrapper

Then, create a systemd parameterized unit file, so that we can run this wrapper as a systemd service. Save it under /etc/systemd/system/goaccess@.service:

[Unit]
Description=Update GoAccess report - %i
ConditionPathIsDirectory=/var/lib/goaccess-db/%i
ConditionPathIsDirectory=/var/log/apache2/%i
ConditionPathIsDirectory=/tmp/goaccess-reports
PartOf=goaccess.service

[Service]
Type=oneshot
User=goaccess
Group=goaccess
Nice=19
ExecStart=/usr/local/bin/goaccess-wrapper \
 %i \
 /var/lib/goaccess-db/%i \
 /var/log/apache2/%i \
 /tmp/goaccess-reports

So, what is a systemd parameterized unit? It's a service to which you can pass an argument when you enable it. The %i in the unit definition will be replaced by this argument. In our case, the argument will be the name of the site that we want to process.

As you can see, we use the directive ConditionPathIsDirectory= extensively, so that if ever one of the required directories does not exist, the unit will just be skipped (and marked as such in the logs). It's a graceful way to fail.

We run the wrapper as the user and group goaccess, thanks to User= and Group=. We also use Nice= to give a low priority to the process.

At this point, it's already possible to test. Just make sure that you created a directory for the GoAccess database:

mkdir -p /var/lib/goaccess-db/SITE
chown goaccess:goaccess /var/lib/goaccess-db/SITE

Also make sure that the output directory exists:

mkdir /tmp/goaccess-reports
chown goaccess:goaccess /tmp/goaccess-reports

Then reload systemd and fire the unit to see if it works:

systemctl daemon-reload
systemctl start goaccess@SITE.service
journalctl | tail

And that should work already.

As you can see, the argument, SITE, is passed in the systemctl start command. We just append it after the @, in the name of the unit.
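The instantiated unit can then be inspected like any other service; for example (SITE again being your actual site name):

systemctl status goaccess@SITE.service
systemctl list-units 'goaccess@*'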

Now, let's create another GoAccess service file, whose sole purpose is to group all the parameterized units together, so that we can start them all in one go. Note that we don't use a systemd target for that, because ultimately we want to run it once a day, and that would not be possible with a target. So instead we use a dummy oneshot service.

So here it is, saved under /etc/systemd/system/goaccess.service:

[Unit]
Description=Update GoAccess reports
Requires= \
 goaccess@SITE1.service \
 goaccess@SITE2.service

[Service]
Type=oneshot
ExecStart=true

As you can see, we simply list the sites that we want to process in the Requires= directive. In this example we have two sites named SITE1 and SITE2.

Let's ensure that everything is still good:

systemctl daemon-reload
systemctl start goaccess.service
journalctl | tail

Check the logs, both sites SITE1 and SITE2 should have been processed.

And finally, let's create a timer, so that systemd runs goaccess.service once a day. Save it under /etc/systemd/system/goaccess.timer.

[Unit]
Description=Daily update of GoAccess reports

[Timer]
OnCalendar=daily
RandomizedDelaySec=1h
Persistent=true

[Install]
WantedBy=timers.target

Finally, enable the timer:

systemctl daemon-reload
systemctl enable --now goaccess.timer

At this point, everything should be OK. Just come back tomorrow and check the logs with something like:

journalctl | grep goaccess

Last word: if you have only one site to process, of course you can simplify, for example you can hardcode all the paths in the file goaccess.service instead of using a parameterized unit. Up to you.

Daily operations

So in this part, we assume that you have GoAccess all setup and running, once a day or so. Let's just go over a few things worth noting.

Serve your report

Up to now in this tutorial, we created the reports in /tmp/goaccess-reports, but that was just for the sake of the example. You will probably want to save your reports in a directory that is served by your web server, so that, well, you can actually look at them in your web browser; that was the point, right?

So how to do that is a bit out of scope here, and I guess that if you want to monitor your website, you already have a website, so you will have no trouble serving the GoAccess HTML report.

However there's an important detail to be aware of: GoAccess shows all the IP addresses of your visitors in the report. As long as the report is private it's OK, but if ever you make your GoAccess report public, then you should definitely invoke GoAccess with the option --anonymize-ip.

Keep an eye on the logs

In this tutorial, the reports we create, along with the GoAccess databases, will grow bigger every day, forever. It also means that the GoAccess processing time will grow a bit each day.

So maybe the first thing to do is to keep an eye on the logs, to see how long it takes GoAccess to do its job every day. Also, maybe you'd like to keep an eye on the size of the GoAccess database with:

du -sh /var/lib/goaccess-db/SITE

If your site has few visitors, I suspect it won't be a problem though.

You could also be a bit pro-active in preventing this problem in the future, and for example you could break the reports into, say, monthly reports. Meaning that every month, you would create a new database in a new directory, and also start a new HTML report. This way you'd have monthly reports, and you make sure to limit the GoAccess processing time, by limiting the database size to a month.

This can be achieved very easily, by including something like YEAR-MONTH in the database directory, and in the HTML report. You can handle that automatically in the wrapper script, for example:

sfx=$(date +'%Y-%m')

mkdir -p $DBDIR/$sfx

goaccess \
    --db-path $DBDIR/$sfx \
    --output "$OUTDIR/$SITE-$sfx.html" \
    ...

You get the idea.

Further notes

Migration from older versions

With the --persist option, GoAccess keeps all the information from the logs in a database, so that it can re-use it later. In prior versions, GoAccess used the Tokyo Cabinet key-value store for that. However starting from v1.4, GoAccess dropped this dependency and now uses its own database format.

As a result, the previous database can't be used anymore; you will have to remove it and restart from zero. At the moment there is no way to convert the data from the old database to the new one. If you're interested, this is discussed upstream at #1783.

Another thing that changed with this new version is the name for some of the command-line options. For example, --load-from-disk was dropped in favor of --restore, and --keep-db-files became --persist. So you'll have to look at the documentation a bit, and update your script(s) accordingly.
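As a rough before/after sketch (only the renamed options matter here, everything else is illustrative):

# GoAccess 1.3 and earlier
goaccess --keep-db-files --load-from-disk --db-path /var/lib/goaccess-db/SITE ...
# GoAccess 1.4
goaccess --persist --restore --db-path /var/lib/goaccess-db/SITE ...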

Other ways to use GoAccess

It's also possible to do it completely differently. You could keep GoAccess running, pretty much like a daemon, with the --real-time-html option, and have it process the logs continuously, rather than calling it on a regular basis.

It's also possible to see the GoAccess report straight in the terminal, thanks to libncurses, rather than creating an HTML report.

And much more, GoAccess is packed with features.

Conclusion

I hope that this tutorial helped some of you folks. Feel free to drop an e-mail for comments.

03 August, 2020 12:00AM by Arnaud Rebillout

August 02, 2020

Enrico Zini

Libreoffice presentation tips

Snap guides

Dragging from the rulers does not always create snap guides. If it doesn't, click on the slide background, "Snap guides", "Insert snap guide". In my case, after the first snap guide was manually inserted, it was possible to drag new ones from the rulers.

Master slides

How to edit a master slide

  • Show master slides side pane
  • Right click on master slide
  • Edit Master...
  • An icon appears in the toolbar: "Close Master View"
  • Apply to all slides might not apply to the first slide created as the document was opened

Change styles in master slide

Do not change properties of text by selecting placeholder text in the Master View. Instead, open the Styles and formatting sidebar, and edit the styles in there.

This way, the style changes are applied to pages in all layouts, not just the "Title, Content" layout, which is the only one editable in the "Master View".

How to duplicate a master slide

There seems to be no feature implemented for this, but you can do it, if you insist:

  • Save a copy of the document
  • Rename the master slide
  • Drag a slide that uses the renamed master slide from the copy of the document to the original one

It's needed enough that someone made a wikihow: https://www.wikihow.com/Copy-a-LibreOffice-Impress-Master-Slide archive.org

How to change the master slide for a layout that is not "Title, Content"

I could not find a way to do it, but read on for a workaround.

I found an ask.libreoffice.org question that went unanswered.

I asked on #libreoffice on IRC and got no answer:

Hello. I'm doing the layout for a presentation in impress, and I can edit all sorts of aspects of the master slide. It seems that I can only edit the "Title, Content" layout of the master slide, though. I'd like to edit, for example, the "Title only" layout so that the title appears in a different place than the top of the page. Is it possible to edit specific layouts in a master page?

In the master slide editor it seems impossible to select a layout, for example.

Alternatively I tried creating multiple master slides, but then if I want to create a master slide for a title page, there's no way to remove the outline box, or the title box.

My workaround has been to create multiple master slides, one for each layout. For a title layout, I moved the outline box into a corner, and one has to remove it manually after creating a new slide.

There seems to be no way of changing the position of elements not found in the "Title, Content" layout, like "Subtitle". On the other hand, given that one's working with an entirely different master slide, one can abuse the outline box as a subtitle.

Note that if you later decide to change a style element for all the slides, you'll need to propagate the change in the "Styles and Formatting" menu of all master slides you're using.

02 August, 2020 01:00PM

Andrew Cater

Debian 10.5 media testing - 202008012250 - last few debian-live images being tested for amd64 - Calamares issue - Post 5 of several.

Last few debian-live images being tested for amd64. We have found a bug with the debian-live GNOME flavour. This specifically affects installs after booting from the live media and then installing to the machine using the Calamares installer found on the desktop. The bug was introduced by a fix for another issue, which has produced further buggy behaviour as a result.

Fixes are known - we've had highvoltage come and debug them with us - but they will not be put out with this release; they will wait for the 10.6 release, which will allow for a longer time for debugging overall.

You can still run from the live-media, you can still install with the standard Debian installers found in the menu of the live-media disk - this is _only_ a limited time issue with the Calamares installer. At this point in the release cycle, it's been judged better to release the images as they are - with known and documented issues - than to try and debug them in a hurry and risk damaging or delaying a stable point release.

02 August, 2020 12:59PM by Andrew Cater (noreply@blogger.com)

Enrico Zini

Gender, inclusive communities, and dragonflies

From https://en.wikipedia.org/wiki/Dragonfly#Sex_ratios:

Sex ratios

The sex ratio of male to female dragonflies varies both temporally and spatially. Adult dragonflies have a high male-biased ratio at breeding habitats. The male-bias ratio has contributed partially to the females using different habitats to avoid male harassment.

As seen in Hine's emerald dragonfly (Somatochlora hineana), male populations use wetland habitats, while females use dry meadows and marginal breeding habitats, only migrating to the wetlands to lay their eggs or to find mating partners.

Unwanted mating is energetically costly for females because it affects the amount of time that they are able to spend foraging.

02 August, 2020 09:32AM

August 01, 2020

Molly de Blanc

busy busy

I’ve been working with Karen Sandler over the past few months on the first draft of the Declaration of Digital Autonomy. Feedback welcome, please be constructive. It’s a pretty big deal for me, and feels like the culmination of a lifetime of experiences and the start of something new.

We talked about it at GUADEC and HOPE. We don’t have any other talks scheduled yet, but are available for events, meetups, dinner parties, and b’nai mitzvahs.

01 August, 2020 09:15PM by mollydb

Andrew Cater

Debian 10.5 media testing - 202008012055 - post 4 of several

We've more or less finished testing on the Debian install images. Now moving on to the debian-live images. Bugs found and being triaged live as I type. Lots of typing and noises in the background of the video conference. Now at about 12-14 hours in on this for some of the participants. Lots of good work still going on, as ever.

01 August, 2020 09:01PM by Andrew Cater (noreply@blogger.com)

Debian 10.5 media testing - pause for supper - 202008011715 - post 3 of several

Various of the folk doing this have taken a food break until 1900 local. A few glitches, a few that needed to be tried over again - but it's all going fairly well.

It is likely that at least one of the CD images will be dropped. The XFCE desktop install CD for i386 is now too large to fit on CD media. The netinst .iso files / the DVD 1 file / any of the larger files available via Jigdo will all help you achieve the same result.

There are relatively few machines that are i386 architecture only - it might be appropriate for people to use 64 bit amd64 from this point onwards as pure i386 machines are now approaching ten years old as a minimum. If you do need a graphical user environment for a pure i386 machine, it can be installed by using an expert install or using tasksel in the installation process.

01 August, 2020 05:37PM by Andrew Cater (noreply@blogger.com)

Debian 10.5 media testing - continuing quite happily - 202008011320 - post 2 of several

We've now settled into a reasonable rhythm: RattusRattus and Isy and Sledge all working away hard in Cambridge; Schweer in Germany and me here in Cheltenham.

Lots of chat backwards and forwards and a good deal of work being done, as ever.

It's really good to be back in the swing of it and we owe thanks to folk for setting up infrastructure for us to use for video chat, which makes a huge difference: even though I know what they're like, it's still good to see my colleagues.

01 August, 2020 05:24PM by Andrew Cater (noreply@blogger.com)

Debian 10.5 media testing process started 202008011145 - post 1 of several.

The media testing process has started slightly late. There will be a _long_ testing process over much of the day: the final media image releases are likely to be at about 0200-0300UTC tomorrow.

Just settling in for a long day of testing: as ever, it's good to be chatting with my Debian colleagues in Cambridge and with Schweer in Germany. It's going to be a hot one - 30 Celsius (at least) and high humidity for all of us.

EDIT: Corrected for UTC :)

01 August, 2020 01:01PM by Andrew Cater (noreply@blogger.com)

Utkarsh Gupta

FOSS Activities in July 2020

Here’s my (tenth) monthly update about the activities I’ve done in the F/L/OSS world.

Debian

This was my 17th month of contributing to Debian. I became a DM in late March last year and a DD last Christmas! \o/

Well, this month I didn’t do a lot of Debian stuff, like I usually do, however, I did a lot of things related to Debian (indirectly via GSoC)!

Anyway, here are the following things I did this month:

Uploads and bug fixes:

Other $things:

  • Mentoring for newcomers.
  • FTP Trainee reviewing.
  • Moderation of -project mailing list.
  • Sponsored php-twig for William, ruby-growl, ruby-xmpp4r, and ruby-uniform-notifier for Cocoa, sup-mail for Iain, and node-markdown-it for Sakshi.

GSoC Phase 2, Part 2!

In May, I got selected as a Google Summer of Code student for Debian again! \o/
I am working on the Upstream-Downstream Cooperation in Ruby project.

The first three blogs can be found here:

Also, I log daily updates at gsocwithutkarsh2102.tk.

Whilst the daily updates are available at the above site^, I'll break down the important parts of the latter half of the second month here:

  • Marc Andre, very kindly, helped in fixing the specs that were failing earlier this month. Well, the problem was with the specs, but I am still confused how so. Anyway..
  • Finished documentation of the second cop and marked the PR as ready to be reviewed.
  • David reviewed and suggested some really good changes and I fixed/tweaked that PR as per his suggestion to finally finish the last bits of the second cop, RelativeRequireToLib.
  • Merged the PR upon two approvals and released it as v0.2.0! 💖
  • We had our next weekly meeting where we discussed the next steps and the things that are supposed to be done for the next set of cops.
  • Introduced rubocop-packaging to the outer world and requested other upstream projects to use it! It is being used by 13 other projects already! 😭💖
  • Started to work on packaging-style-guide but I didn’t push anything to the public repository yet.
  • Worked on refactoring the cops_documentation Rake task which was broken by the new auto-corrector API. Opened PR #7 for it. It’ll be merged after the next RuboCop release as it uses CopsDocumentationGenerator class from the master branch.
  • Whilst working on autoprefixer-rails, I found something unusual. The second cop shouldn’t really report offenses if the require_relative calls are from lib to lib itself. This is a false-positive. Opened issue #8 for the same.

Whilst working on rubocop-packaging, I contributed to more Ruby projects, refactoring their library a little bit and mostly fixing RuboCop issues and fixing issues that the Packaging extension reports as “offensive”.
Following are the PRs that I raised:


Debian (E)LTS

Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.

And Debian Extended LTS (ELTS) is its sister project, extending support to the Jessie release (+2 years after LTS support).

This was my tenth month as a Debian LTS and my first as a Debian ELTS paid contributor.
I was assigned 25.25 hours for LTS and 13.25 hours for ELTS and worked on the following things:

LTS CVE Fixes and Announcements:

ELTS CVE Fixes and Announcements:

Other (E)LTS Work:

  • Did my LTS frontdesk duty from 29th June to 5th July.
  • Triaged qemu, firefox-esr, wordpress, libmediainfo, squirrelmail, xen, openjpeg2, samba, and ldb.
  • Mark CVE-2020-15395/libmediainfo as no-dsa for Jessie.
  • Mark CVE-2020-13754/qemu as no-dsa/intrusive for Stretch and Jessie.
  • Mark CVE-2020-12829/qemu as no-dsa for Jessie.
  • Mark CVE-2020-10756/qemu as not-affected for Jessie.
  • Mark CVE-2020-13253/qemu as postponed for Jessie.
  • Drop squirrelmail and xen for Stretch LTS.
  • Add notes for tomcat8, shiro, and cacti to take care of the Stretch issues.
  • Emailed team@security.d.o and debian-lts@l.d.o regarding possible clashes.
  • Maintenance of LTS Survey on the self-hosted LimeSurvey instance. Received 1765 (just wow!) responses.
  • Attended the fourth LTS meeting. MOM here.
  • General discussion on LTS private and public mailing list.

Other(s)

Sometimes it gets hard to categorize work/things into a particular category.
That’s why I am writing all of those things inside this category.
This includes two sub-categories and they are as follows.

Personal:

This month I did the following things:

  • Released v0.2.0 of rubocop-packaging on RubyGems! 💯
    It’s open-sourced and the repository is here.
    Bug reports and pull requests are welcomed! 😉
  • Released v0.1.0 of get_root on RubyGems! 💖
    It’s open-sourced and the repository is here.
  • Wrote max-word-frequency, my Rails C1M2 programming assignment.
    And made it pretty neater & cleaner!
  • Refactored my lts-dla and elts-ela scripts entirely and wrote them in Ruby so that there are no issues and no false-positives! 🚀
    Check lts-dla here and elts-ela here.
  • And finally, built my first Rails (mini) web-application! 🤗
    The repository is here. This was also a programming assignment (C1M3).
    And furthermore, hosted it at Heroku.

Open Source:

Again, this contains all the things that I couldn’t categorize earlier.
Opened several issues and PRs:

  • Issue #8273 against rubocop, reporting a false-positive auto-correct for Style/WhileUntilModifier.
  • Issue #615 against http reporting a weird behavior of a flaky test.
  • PR #3791 for rubygems/bundler to remove redundant bundler/setup require call from spec_helper generated by bundle gem.
  • Issue #3831 against rubygems, reporting a traceback of undefined method, rubyforge_project=.
  • Issue #238 against nheko asking for enhancement in showing the font name in the very font itself.
  • PR #2307 for puma to constrain rake-compiler to v0.9.4.
  • And finally, I joined the Cucumber organization! \o/

Thank you for sticking along for so long :)

Until next time.
:wq for today.

01 August, 2020 09:00AM

Paul Wise

FLOSS Activities July 2020

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration

  • Debian wiki: unblock IP addresses, approve accounts, reset email addresses

Communication

Sponsors

The purple-discord, ifenslave and psqlodbc work was sponsored by my employer. All other work was done on a volunteer basis.

01 August, 2020 12:58AM

Junichi Uekawa

August and feels like it finally.

August and feels like it finally. July didn't feel like July and felt like June because it rained so much. This is summer.

01 August, 2020 12:54AM by Junichi Uekawa

July 31, 2020

Ben Hutchings

Debian LTS work, July 2020

I was assigned 20 hours of work by Freexian's Debian LTS initiative, but only worked 5 hours this month and returned the remainder to the pool.

Now that Debian 9 'stretch' has entered LTS, the stretch-backports suite will be closed and no longer updated. However, some stretch users rely on the newer kernel version provided there. I prepared to add Linux 4.19 to the stretch-security suite, alongside the standard package of Linux 4.9. I also prepared to update the firmware-nonfree package so that firmware needed by drivers in Linux 4.19 will also be available in stretch's non-free section. Both these updates will be based on the packages in stretch-backports, but needed some changes to avoid conflicts or regressions for users that continue using Linux 4.9 or older non-Debian kernel versions. I will upload these after the Debian 10 'buster' point release.

31 July, 2020 10:40PM

Chris Lamb

Free software activities in July 2020

Here is my monthly update covering what I have been doing in the free and open source software world during July 2020 (previous month):

  • Opened a pull request to make the build reproducible in PyERFA, a set of Python bindings for various astronomy-related utilities (#45), as well as one for PeachPy assembler to make the output of codecode/x86_64.py reproducible (#108).
  • As part of being on the board of directors of the Open Source Initiative and Software in the Public Interest I attended their respective monthly meetings and participated in various licensing and other discussions occurring on the internet, as well as the usual internal discussions regarding logistics and policy etc. This month, it was SPI's Annual General Meeting and the OSI has been running a number of remote strategy sessions for the board.

  • Fixed an issue in my tickle-me-email library that implements Getting Things Done (GTD)-like behaviours in IMAP inboxes to ensure that all messages have a unique Message-Id header. [...]

  • Reviewed and merged even more changes by Pavel Dolecek into my Strava Enhancement Suite, a Chrome extension to improve the user experience on the Strava athletic tracker.

  • Updated travis.debian.net, my hosted service for projects that host their Debian packaging on GitHub to use the Travis CI continuous integration platform, to fix a compatibility issue with the latest version of mk-build-deps. [...][...]

For Lintian, the static analysis tool for Debian packages:

  • Update the regular expression to search for all the released versions in a .changes file. [...]

  • Avoid false-positives when matching sensible-utils utilities such as i3-sensible-pager. (#966022)

  • Rename the send-patch tag to patch-not-forwarded-upstream. [...]

  • Drop reminders from 26 tags that false-positives should be reported to Lintian as this is implicit in all our tags. [...]


§


Reproducible Builds

One of the original promises of open source software is that distributed peer review and transparency of process results in enhanced end-user security. However, whilst anyone may inspect the source code of free and open source software for malicious flaws, almost all software today is distributed as pre-compiled binaries. This allows nefarious third-parties to compromise systems by injecting malicious code into ostensibly secure software during the various compilation and distribution processes.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

The project is proud to be a member project of the Software Freedom Conservancy. Conservancy acts as a corporate umbrella allowing projects to operate as non-profit initiatives without managing their own corporate structure. If you like the work of the Conservancy or the Reproducible Builds project, please consider becoming an official supporter.

This month, I:


§


diffoscope

Elsewhere in our tooling, I made the following changes to diffoscope, including preparing and uploading versions 150, 151, 152, 153 & 154 to Debian:

  • New features:

    • Add support for flash-optimised F2FS filesystems. (#207)
    • Don't require zipnote(1) to determine differences in a .zip file as we can use libarchive. [...]
    • Allow --profile as a synonym for --profile=-, ie. write profiling data to standard output. [...]
    • Increase the minimum length of the output of strings(1) to eight characters to avoid unnecessary diff noise. [...]
    • Drop some legacy argument styles: --exclude-directory-metadata and --no-exclude-directory-metadata have been replaced with --exclude-directory-metadata={yes,no}. [...]
  • Bug fixes:

    • Pass the absolute path when extracting members from SquashFS images as we run the command with working directory in a temporary directory. (#189)
    • Correct adding a comment when we cannot extract a filesystem due to missing libguestfs module. [...]
    • Don't crash when listing entries in archives if they don't have a listed size such as hardlinks in ISO images. (#188)
  • Output improvements:

    • Strip off the file offset prefix from xxd(1) and show bytes in groups of 4. [...]
    • Don't emit javap not found in path if it is available in the path but it did not result in an actual difference. [...]
    • Fix ... not available in path messages when looking for Java decompilers that used the Python class name instead of the command. [...]
  • Logging improvements:

    • Add a bit more debugging info when launching libguestfs. [...]
    • Reduce the --debug log noise by truncating the has_some_content messages. [...]
    • Fix the compare_files log message when the file does not have a literal name. [...]
  • Codebase improvements:

    • Rewrite and rename exit_if_paths_do_not_exist to not check files multiple times. [...][...]
    • Add an add_comment helper method; don't mess with our internal list directly. [...]
    • Replace some simple usages of str.format with Python 'f-strings' [...] and make it easier to navigate to the main.py entry point [...].
    • In the RData comparator, always explicitly return None in the failure case as we return a non-None value in the success one. [...]
    • Tidy some imports [...][...][...] and don't alias a variable when we don't end up using. [...]
    • Clarify the use of a separate NullChanges quasi-file to represent missing data in the Debian package comparator [...] and clarify use of a 'null' diff in order to remember an exit code. [...]
  • Misc:


§


Debian

In Debian, I made the following uploads this month:


§


Debian LTS

This month I have worked 18 hours on Debian Long Term Support (LTS) and 12 for the Extended LTS project. This included:

You can find out more about the project via the following video:

31 July, 2020 09:55PM

Jonathan Carter

Free Software Activities for 2020-07

Here are my uploads for the month of July, which are just a part of my free software activities; I'll try to catch up on the rest in upcoming posts. I haven't indulged in online conferences much over the last few months, but this month I attended the virtual editions of GUADEC 2020 and HOPE 2020. HOPE isn't something I knew about before and I enjoyed it a lot; you can find their videos on archive.org.

Debian Uploads

2020-07-05: Sponsor backport gamemode-1.5.1-5 for Debian buster-backports.

2020-07-06: Sponsor package piper (0.5.1-1) for Debian unstable (mentors.debian.net request).

2020-07-14: Upload package speedtest-cli (2.0.2-1+deb10u1) to Debian buster (Closes: #940165, #965116).

2020-07-15: Upload package calamares (3.2.27-1) to Debian unstable.

2020-07-15: Merge MR#1 for gnome-shell-extension-dash-to-panel.

2020-07-15: Upload package gnome-shell-extension-dash-to-panel (38-1) to Debian unstable.

2020-07-15: Upload package gnome-shell-extension-disconnect-wifi (25-1) to Debian unstable.

2020-07-15: Upload package gnome-shell-extension-draw-on-your-screen (6.1-1) to Debian unstable.

2020-07-15: Upload package xabacus (8.2.8-1) to Debian unstable.

2020-07-15: Upload package s-tui (1.0.2-1) to Debian unstable.

2020-07-15: Upload package calamares-settings-debian (10.0.2-1+deb10u2) to Debian buster (Closes: #934503, #934504).

2020-07-15: Upload package calamares-settings-debian (10.0.2-1+deb10u3) to Debian buster (Closes: #959541, #965117).

2020-07-15: Upload package calamares-settings-debian (11.0.2-1) to Debian unstable.

2020-07-19: Upload package bluefish (2.2.11+svn-r8872-1) to Debian unstable (Closes: #593413, #593427, #692284, #730543, #857330, #892502, #951143).

2020-07-19: Upload package bundlewrap (4.0.0-1) to Debian unstable.

2020-07-20: Upload package bluefish (2.2.11+svn-r8872-1) to Debian unstable (Closes: #965332).

2020-07-22: Upload package calamares (3.2.27-1~bpo10+1) to Debian buster-backports.

2020-07-24: Upload package bluefish (2.2.11_svn-r8872-3) to Debian unstable (Closes: #965944).

31 July, 2020 05:01PM by jonathan

François Marier

Extending GPG key expiry

Extending the expiry on a GPG key is not very hard, but it's easy to forget a step. Here's how I did my last expiry bump.

Update the expiry on the main key and the subkey:

gpg --edit-key KEYID
> expire
> key 1
> expire
> save

Upload the updated key to the keyservers:

gpg --export KEYID | curl -T - https://keys.openpgp.org
gpg --keyserver keyring.debian.org --send-keys KEYID
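Optionally, check locally that the new expiry date is what you expect:

gpg --list-keys KEYID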

31 July, 2020 03:45AM

Reproducible Builds (diffoscope)

diffoscope 154 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 154. This version includes the following changes:

[ Chris Lamb ]

* Add support for F2FS filesystems.
  (Closes: reproducible-builds/diffoscope#207)
* Allow "--profile" as a synonym for "--profile=-".
* Add an add_comment helper method so don't mess with our _comments list
  directly.
* Add missing bullet point in a previous changelog entry.
* Use "human-readable" over unhyphenated version.
* Add a bit more debugging around launching guestfs.
* Profile the launch of guestfs filesystems.
* Correct adding a comment when we cannot extract a filesystem due to missing
  guestfs module.

You can find out more by visiting the project homepage.

31 July, 2020 12:00AM

July 30, 2020

Russell Coker

July 29, 2020

Norbert Preining

KDE/Plasma Status Update 2020-07-30

Only a short update on the current status of my KDE/Plasma packages for Debian sid and testing:

  • Frameworks 5.72
  • Plasma 5.19.4
  • Apps 20.04.3
  • Digikam 7.0.0
  • Ark CVE-2020-16116 fixed in version 20.04.3-1~np2

Hope that helps a few people. See this post for how to set up the archives.

Enjoy.

29 July, 2020 11:03PM by Norbert Preining

Dima Kogan

An awk corner case?

So even after years and years of experience, core tools still find ways to surprise me. Today I tried to do some timestamp comparisons with mawk (vnl-filter, to be more precise), and ran into a detail of the language that made it not work. Not a bug, I guess, since both mawk and gawk are affected. I'll claim "language design flaw", however.

Let's say I'm processing data with unix timestamps in it (seconds since the epoch). gawk and recent versions of mawk have strftime() for that:

$ date
Wed Jul 29 15:31:13 PDT 2020

$ date +"%s"
1596061880

$ date +"%s" | mawk '{print strftime("%H",$1)}'
15

And let's say I want to do something conditional on them. I want only data after 9:00 each day:

$ date +"%s" | mawk 'strftime("%H",$1) >= 9 {print "Yep. After 9:00"}'

That's right. No output. But it is 15:31 now, and I confirmed above that strftime() reports the right time, so it should know that it's after 9:00, but it doesn't. What gives?

As we know, awk (and perl after it) treat numbers and strings containing numbers similarly: 5+5 and "5"+5 both work the same, which is really convenient. This can only work if it can be inferred from context whether we want a number or a string; it knows that addition takes two numbers, so it knows to convert "5" into a number in the example above.

But what if an operator is ambiguous? Then it picks a meaning based on some internal logic that I don't want to be familiar with. And apparently awk implements string comparisons with the same < and > operators as numerical comparisons, creating the ambiguity I hit today. strftime() returns strings, so here the 9 gets converted to the string "9" and the comparison is done lexically: "15" sorts before "9", the condition is false, and you get silent, incorrect behavior that then demands debugging. How to fix? By telling awk to treat the output of strftime() as a number:

$ date +"%s" | mawk '0+strftime("%H",$1) >= 9 {print "Yep. After 9:00"}'

Yep. After 9:00
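
As an aside, you can reproduce the same trap without strftime() at all; a string "15" compares against "9" quite differently than the number 15 compares against 9 (a small illustration, not from the original data):

$ echo | mawk '{ s = ("15" >= "9"); n = (15 >= 9); print s, n }'
0 1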

With the benefit of hindsight, they really should not have reused any operators for both number and string operations. Then these ambiguities wouldn't occur, and people wouldn't be grumbling into their blogs decades after these decisions were made.

29 July, 2020 10:45PM by Dima Kogan

Enrico Zini

Building and packaging a sysroot

This is part of a series of posts on compiling a custom version of Qt5 in order to develop for both amd64 and a Raspberry Pi.

After having had some success with a sysroot in having a Qt5 cross-build environment that includes QtWebEngine, the next step is packaging the sysroot so it can be available both to build the cross-build environment, and to do cross-development with it.

The result is this Debian source package which takes a Raspberry Pi OS disk image, provisions it in-place, extracts its contents, and packages them.

Yes. You may want to reread the last paragraph.

It works directly in the disk image to avoid a nasty filesystem issue on emulated 32bit Linux over a 64bit mounted filesystem.

This feels like the most surreal Debian package I've ever created, and this saga looks like one of the hairiest yaks I've ever shaved.

Integrating this monster codebase, full of bundled code and hacks, into a streamlined production and deployment system has been for me a full stack nightmare, and I have a renewed and growing respect for the people in the Qt/KDE team in Debian, who manage to stay on top of this mess, so that it all just works when we need it.

29 July, 2020 08:15AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Installing and Running Ubuntu on a 2015-ish MacBook Air

So a few months ago kiddo one dropped an apparently fairly large cup of coffee onto her one and only trusted computer. With a few months (then) to graduation (which by now happened), and with the apparent “genius bar” verdict of “it’s a goner”, a new one was ordered. As it turns out this supposedly dead one coped well enough with the coffee so that after a few weeks of drying it booted again. But given the newer one, its apparent age and whatnot, it was deemed surplus. So I poked around a little on the interwebs and concluded that yes, this could work.

Fast forward a few months and I finally got hold of it, and had some time to play with it. First, a bootable USB stick was prepared, and since the machine’s content was really (really, and check again: really) no longer needed, I got hold of it for good.

tl;dr It works just fine. It is a little heavier than I thought (and isn’t “air” supposed to be weightless?). The ergonomics seem quite nice. The keyboard is decent. Screen resolution on this pre-retina simple Air is so-so at 1440 pixels. But battery life seems ok and e.g. the camera is way better than what I have in my trusted Lenovo X1 or at my desktop. So just as a zoom client it may make a lot of sense; otherwise just walking around with it as a quick portable machine seems perfect (especially as my Lenovo X1 still (ahem) suffers from one broken key I really need to fix…).

Below are some lightly edited notes from the installation. Initial steps were quick: maybe an hour or less? Customizing a machine takes longer than I remembered; this took a few minutes here and there quite a few times, but always incremental.

Initial Steps

  • Download of Ubuntu 20.04 LTS image: took a few moments, even on broadband, feels slower than normal (fast!) Ubuntu package updates, maybe lesser CDN or bad luck

  • Startup Disk Creator using a so-far unused 8gb usb drive

  • Plug into USB, recycle power, press “Option” on macOS keyboard: voila

  • After a quick hunch… no to ‘live/test only’ and yes to install, whole disk

  • install easy, very few questions, somehow skips wifi

  • so activate wifi manually — and everything pretty much works

Customization

  • First deal with the ‘fn’ and ‘ctrl’ key swap. Installed git and followed this github repo, which worked just fine. Yay. First (manual) Linux kernel module build needed in … half a decade? Longer?

  • Fire up firefox, go to ‘download chrome’, install chrome. Sign in. Turn on syncing. Sign into Pushbullet and Momentum.

  • syncthing which is excellent. Initially via apt, later from their PPA. Spend some time remembering how to set up the mutual handshakes between devices. Now syncing desktop/server, lenovo x1 laptop, android phone and this new laptop

  • keepassx via apt and set up using Sync/ folder. Now all (encrypted) passwords synced.

  • Discovered synergy is no longer really free, so after a quick search found and installed barrier (via apt) to have one keyboard/mouse from desktop reach laptop.

  • Added emacs via apt, so far ‘empty’, no config files yet

  • Added ssh via apt, need to propagate keys to github and gitlab

  • Added R via add-apt-repository --yes "ppa:marutter/rrutter4.0" and add-apt-repository --yes "ppa:c2d4u.team/c2d4u4.0+". Added littler and then RStudio

  • Added wajig (apt frontend) and byobu, both via apt

  • Created ssh key, shipped it to server and github + gitlab

  • Cloned (not-public) ‘dotfiles’ repo and linked some dotfiles in

  • Cloned git repo for nord-theme for gnome terminal and installed it; also added it to RStudio via this repo

  • Emacs installed, activated dotfiles, then incrementally installed a few elpa-* packages and a few via M-x package-install, including nord-theme, of course

  • Installed JetBrains Mono font from my own local package; activated for Gnome Terminal and Emacs

  • Install gnome-tweak-tool via apt, adjusted a few settings

  • Ran gsettings set org.gnome.desktop.wm.preferences focus-mode 'sloppy'

  • Set up camera following this useful GH repo

  • At some point also added slack and zoom, because, well, it is 2020

  • STILL TODO:

    • docker
    • bother with email setup?
    • maybe atom/code/…?

29 July, 2020 01:52AM

July 28, 2020

Chris Lamb

Pop culture matters

Many people labour under the assumption that pop culture is trivial and useless while only 'high' art can grant us genuine and eternal knowledge about the world. Given that we have a finite time on this planet, we are all permitted to enjoy pop culture up to a certain point, but we should always minimise our interaction with it, and consume more moral and intellectual instruction wherever possible.

Or so the theory goes. What these people do not realise is that pop and mass culture can often provide more information about the world, humanity in general and — what is even more important — ourselves.

This is not quite the debate around whether high art is artistically better, simply that pop culture can be equally informative. Jeremy Bentham argued in the 1820s that "prejudice apart, the game of push-pin is of equal value with the arts and sciences of music and poetry", that it didn't matter where our pleasures come from. (John Stuart Mill, Bentham's intellectual rival, disagreed.) This fundamental question of philosophical utilitarianism will not be resolved here.

However, what might begin to be resolved is our instinctive push-back against pop culture. We all share an automatic impulse to disregard things we do not like and to pretend they do not exist, but this wishful thinking does not mean that these cultural products do not continue to exist when we aren't thinking about them and, more to our point, continue to influence others and even ourselves.

Take, for example, the recent trend for 'millennial pink'. With its empty consumerism, faux nostalgia, reductive generational stereotyping, objectively ugly æsthetics and tedious misogyny (photographed with Rose Gold iPhones), the very combination appears to have been deliberately designed to annoy me, curiously providing circumstantial evidence in favour of intelligent design. But if I were to immediately dismiss millennial pink and any of the other countless cultural trends I dislike simply because I find them disagreeable, I would be willingly keeping myself blind to their underlying ideology, their significance and their effect on society at large. If I had any ethical or political reservations I might choose not to engage with them economically or to avoid advertising them to others, but that is a different question altogether.

Even if we can't notice this pattern within ourselves we can first observe it in others. We can all recall moments where someone has brushed off a casual reference to pop culture, be it Tiger King, TikTok, team sports or Taylor Swift; if you can't, simply look for the abrupt change of tone and the slightly-too-quick dismissal. I am not suggesting you attempt to dissuade others or even to point out this mental tic, but merely seeing it in action can be highly illustrative in its own way.

In summary, we can simultaneously say that pop culture is not worthy of our time relative to other pursuits while consuming however much of it we want, but deliberately dismissing pop culture doesn't mean that a lot of other people are not interacting with it and is therefore undeserving of any inquiry. And if that doesn't convince you, just like the once-unavoidable millennial pink, simply sticking our collective heads in the sand will not mean that wider societal-level ugliness is going to disappear anytime soon.

Anyway, that's a very long way of justifying why I plan to re-watch TNG.

28 July, 2020 11:02PM

Dirk Eddelbuettel

ttdo 0.0.6: Bugfix

A bugfix release of our (still small) ttdo package arrived on CRAN overnight. As introduced last fall, the ttdo package extends the most excellent (and very minimal / zero depends) unit testing package tinytest by Mark van der Loo with the very clever and well-done diffobj package by Brodie Gaslam to give us test results with visual diffs:

ttdo screenshot

This release corrects a minor editing error spotted by the ever-vigilant John Blischak.

The NEWS entry follows.

Changes in ttdo version 0.0.6 (2020-07-27)

  • Correct a minor editing mistake spotted by John Blischak.

CRANberries provides the usual summary of changes to the previous version. Please use the GitHub repo and its issues for any questions.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

28 July, 2020 10:36PM

Jonathan Carter

Free Software Activities for 2020-06

Hmm, this is the latest I’ve posted my monthly updates yet (by nearly a month!). June was both crazy on the incoming side, and at the same time I just wasn’t that productive (at least since then I caught up a lot). In theory, lockdown means that I spend less time in traffic, in shops or with friends and have more time to do stuff; in practice I go to bed later and later and waste more time watching tv shows and playing mobile games. A cycle that I have at least broken free from since June.

Debian Package Uploads

2020-06-04: Upload package btfs (2.21-1) to Debian unstable.

2020-06-04: Upload package gnome-shell-extension-disconnect-wifi (24-1) to Debian unstable.

2020-06-18: Sponsor package gamemode (1.5.1-5) for Debian unstable (Games team request).

2020-06-21: Upload package calamares (3.2.26-1) to Debian unstable.

2020-06-21: Upload package s-tui (1.0.1-1) to Debian unstable.

2020-06-29: Sponsor package libinih (48-1~bpo10+1) for Debian buster-backports.

2020-06-30: Upload package calamares (3.2.26-1~bpo10+1) to Debian buster-backports.

2020-06-30: Upload package toot (0.27.0-1) to Debian unstable.

2020-06-30: Upload package calamares (3.2.26.1-1) to Debian unstable.

28 July, 2020 06:15PM by jonathan

Steve Kemp

I'm a bit of a git (hacker?)

Sometimes I enjoy reading the source code to projects I like, use, or am about to install for the first time. This was something I used to do on a very regular basis, looking for security issues to report. Nowadays I don't have so much free time, but I still like to inspect the source code to new applications I install, and every now and again I'll find the time to look at the source to random projects.

Reading code is good. Reading code is educational.

One application I've looked at multiple times is redis, which is a great example of clean and well-written code. That said when reading the redis codebase I couldn't help noticing that there were a reasonably large number of typos/spelling mistakes in the comments, so I submitted a pull-request:

Sadly that particular pull-request didn't receive too much attention, although a previous one updating the configuration file was accepted. I was recently reminded of these pull-requests when I was doing some other work. So I figured I'd have a quick scan of a couple of other utilities.

In the past I'd just note spelling mistakes when I came across them; usually I'd be opening each file in a project one by one and reading them from top to bottom. (Sometimes I'd just open files in emacs and run "M-x ispell-comments-and-strings", but more often I'd just notice them with my eyes). It did strike me that if I were to do this in a more serious fashion it would be good to automate it.

So this time round I hacked up a simple "dump comments" utility, which would scan named files and output the contents of any comments (be they single-line, or multi-line). Once I'd done that I could spell-check easily:

 $ go run dump-comments.go *.c > comments
 $ aspell -c comments

Anyway the upshot of that was a pull-request against git:

We'll see if that makes its way live sometime. In case I get interested in doing this again I've updated my sysbox-utility collection to have a comments sub-command. That's a little more robust and reliable than my previous hack:

$ sysbox comments -pretty=true $(find . -name '*.c')
..
..

The comments sub-command has support for:

  • Single-line comments, for C, as prefixed with //.
  • Multi-line comments, for C++, as between /* and */.
  • Single-line comments, for shell, as prefixed with #.
  • Lua comments, both single-line (prefixed with --) and multiline between --[[ and --]].

Adding new support would be trivial; I just need a start and end pattern to search against. Pull-requests welcome:

28 July, 2020 05:45PM

Russ Allbery

Review: The City in the Middle of the Night

Review: The City in the Middle of the Night, by Charlie Jane Anders

Publisher: Tor
Copyright: February 2019
Printing: February 2020
ISBN: 1-4668-7113-X
Format: Kindle
Pages: 366

January is a tidally-locked planet divided between permanent night and permanent day, an unfortunate destination for a colony starship. Now, humans cling to a precarious existence along the terminator, huddling in two wildly different cities and a handful of smaller settlements, connected by a road through the treacherous cold.

The novel opens with Sophie, a shy university student from the dark side of the city of Xiosphant. She has an overwhelming crush on Bianca, her high-class, self-confident roommate and one of the few people in her life to have ever treated her with compassion and attention. That crush, and her almost non-existent self-esteem, lead her to take the blame for Bianca's petty theft, resulting in what should have been a death sentence. Sophie survives only because she makes first contact with a native intelligent species of January, one that the humans have been hunting for food and sport.

Sadly, I think this is enough Anders for me. I've now bounced off two of her novels, both for structural reasons that I think go deeper than execution and indicate a fundamental mismatch between what Anders wants to do as an author and what I'm looking for as a reader.

I'll talk more about what this book is doing in a moment, but I have to start with Bianca and Sophie. It's difficult for me to express how much I loathed this relationship and how little I wanted to read about it. It took me about five pages to peg Bianca as a malignant narcissist and Sophie's all-consuming crush as dangerous codependency. It took the entire book for Sophie to figure out how awful Bianca is to her, during which Bianca goes through the entire abusive partner playbook of gaslighting, trivializing, contingent affection, jealous rage, and controlling behavior. And meanwhile Sophie goes back to her again, and again, and again, and again. If I hadn't been reading this book on a Kindle, I think it would have physically hit a wall after their conversation in the junkyard.

This is truly a matter of personal taste and preference. This is not an unrealistic relationship; this dynamic happens in life all too often. I'm sure there is someone for whom reading about Sophie's spectacularly poor choices is affirming or cathartic. I've not personally experienced this sort of relationship, which doubtless matters.

But having empathy for someone who is making awful and self-destructive life decisions and trusting someone they should not be trusting and who is awful to them in every way is difficult work. Sophie is the victim of Bianca's abuse, but she does so many stupid and ill-conceived things in support of this twisted relationship that I found it very difficult to not get angry at her. Meanwhile, Anders writes Sophie as so clearly fragile and uncertain and devoid of a support network that getting angry at her is like kicking a puppy. The result for me was spending nearly an entire book in a deeply unpleasant state of emotional dissonance. I may be willing to go through that for a close friend, but in a work of fiction it's draining and awful and entirely not fun.

The other viewpoint character had the opposite problem for me. Mouth starts the book as a traveling smuggler, the sole survivor of a group of religious travelers called the Citizens. She's practical, tough, and guarded. Beneath that, I think the intent was to show her as struggling to come to terms with the loss of her family and faith community. Her first goal in the book is to recover a recording of Citizen sacred scripture to preserve it and to reconnect with her past.

This sounds interesting on the surface, but none of it gelled. Mouth never felt to me like someone from a faith community. She doesn't act on Citizen beliefs to any meaningful extent, she rarely talks about them, and when she does, her attitude is nostalgia without spirituality. When Mouth isn't pursuing goals that turn out to be meaningless, she aimlessly meanders through the story. Sophie at least has agency and makes some important and meaningful decisions. Mouth is just there, even when Anders does shattering things to her understanding of her past.

Between Sophie and Bianca putting my shoulders up around my ears within the first few pages of the first chapter and failing to muster any enthusiasm for Mouth, I said the eight deadly words ("I don't care what happens to these people") about a hundred pages in and the book never recovered.

There are parts of the world-building I did enjoy. The alien species that Sophie bonds with is not stunningly original, but it's a good (and detailed) take on one of the alternate cognitive and social models that science fiction has dreamed up. I was comparing the strangeness and dislocation unfavorably to China Miéville's Embassytown while I was reading it, but in retrospect Anders's treatment is more decolonialized. Xiosphant's turn to Circadianism as their manifestation of order is a nicely understated touch, a believable political overreaction to the lack of a day/night cycle. That touch is significantly enhanced by Sophie's time working in a salon whose business model is to help Xiosphant residents temporarily forget about time. And what glimmers we got of politics on the colony ship and their echoing influence on social and political structures were intriguing.

Even with the world-building, though, I want the author to be interested in and willing to expand the same bits of world-building that I'm engaged with. Anders didn't seem to be. The reader gets two contrasting cities along a road, one authoritarian and one libertine, which makes concrete a metaphor for single-axis political classification. But then Anders does almost nothing with that setup; it's just the backdrop of petty warlord politics, and none of the political activism of Bianca's student group seems to have relevance or theoretical depth. It's a similar shallowness as the religion of Mouth's Citizens: We get a few fragments of culture and religion, but without narrative exploration and without engagement from any of the characters. The way the crew of the Mothership was assembled seems to have led to a factional and racial caste system based on city of origin and technical expertise, but I couldn't tell you more than that because few of the characters seem to care. And so on.

In short, the world-building that I wanted to add up to a coherent universe that was meaningful to the characters and to the plot seemed to be little more than window-dressing. Anders tosses in neat ideas, but they don't add up to anything. They're just background scenery for Bianca and Sophie's drama.

The one thing that The City in the Middle of the Night does well is Sophie's nervous but excited embrace of the unknown. It was delightful to see the places where a typical protagonist would have to overcome a horror reaction or talk themselves through tradeoffs and where Sophie's reaction was instead "yes, of course, let's try." It provided an emotional strength to an extended first-contact exploration scene that made it liberating and heart-warming without losing the alienness. During that part of the book (in which, not coincidentally, Bianca does not appear), I was able to let my guard down and like Sophie for the first time, and I suspect that was intentional on Anders's part.

But, overall, I think the conflict between Anders's story-telling approach and my preferences as a reader are mostly irreconcilable. She likes to write about people who make bad decisions and compound their own problems. In one of the chapters of her non-fiction book about writing that's being serialized on Tor.com she says "when we watch someone do something unforgivable, we're primed to root for them as they search desperately for an impossible forgiveness." This is absolutely not true for me; when I watch a character do something unforgivable, I want to see repudiation from the protagonists and ideally some clear consequences. When that doesn't happen, I want to stop reading about them and find something more enjoyable to do with my time. I certainly don't want to watch a viewpoint character insist that the person who is doing unforgivable things is the center of her life.

If your preferences on character and story arc are closer to Anders's than mine, you may like this book. Certainly lots of people did; it was nominated for multiple awards and won the Locus Award for Best Science Fiction Novel. But despite the things it did well, I had a truly miserable time reading it and am not anxious to repeat the experience.

Rating: 4 out of 10

28 July, 2020 03:49AM

July 27, 2020

Matthew Garrett

Filesystem deduplication is a sidechannel

First off - nothing I'm going to talk about in this post is novel or overly surprising, I just haven't found a clear writeup of it before. I'm not criticising any design decisions or claiming this is an important issue, just raising something that people might otherwise be unaware of.

With that out of the way: Automatic deduplication of data is a feature of modern filesystems like zfs and btrfs. It takes two forms - inline, where the filesystem detects that data being written to disk is identical to data that already exists on disk and simply references the existing copy rather than writing a second copy, and offline, where tooling retroactively identifies duplicated data and removes the duplicate copies (zfs supports inline deduplication, btrfs only currently supports offline). In a world where disks end up with multiple copies of cloud or container images, deduplication can free up significant amounts of disk space.

What's the security implication? The problem is that deduplication doesn't recognise ownership - if two users have copies of the same file, only one copy of the file will be stored[1]. So, if user a stores a file, the amount of free space will decrease. If user b stores another copy of the same file, the amount of free space will remain the same. If user b is able to check how much free space is available, user b can determine whether the file already exists.
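
To make that concrete, here is a rough sketch of the probe from user b's point of view, assuming a deduplicating filesystem mounted at /pool and a candidate file the prober can reproduce byte-for-byte; the exact free-space reporting tool varies by filesystem (on zfs, zpool list is likely more accurate than df):

before=$(df --output=avail /pool | tail -n1)   # free space before the probe
cp candidate.img /pool/probe.img               # write data we suspect user a already stored
sync
after=$(df --output=avail /pool | tail -n1)    # free space after the probe
echo "space consumed: $((before - after)) KiB"
# If far less space was consumed than the size of candidate.img, an identical
# copy was probably already on disk and the write was deduplicated.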

This doesn't seem like a huge deal in most cases, but it is a violation of expected behaviour (if user b doesn't have permission to read user a's files, user b shouldn't be able to determine whether user a has a specific file). But we can come up with some convoluted cases where it becomes more relevant, such as law enforcement gaining unprivileged access to a system and then being able to demonstrate that a specific file already exists on that system. Perhaps more interestingly, it's been demonstrated that free space isn't the only sidechannel exposed by deduplication - deduplication has an impact on access timing, and can be used to infer the existence of data across virtual machine boundaries.

As I said, this is almost certainly not something that matters in most real world scenarios. But with so much discussion of CPU sidechannels over the past couple of years, it's interesting to think about what other features also end up leaking information in ways that may not be obvious.

(Edit to add: deduplication isn't enabled on zfs by default and is explicitly triggered on btrfs, so unless it's something you've enabled then this isn't something that affects you)

[1] Deduplication is usually done at the block level rather than the file level, but given zfs's support for variable sized blocks, identical files should be deduplicated even if they're smaller than the maximum record size

27 July, 2020 07:57PM

Wouter Verhelst

On Statements, Facts, Hypotheses, Science, Religion, and Opinions

The other day, we went to a designer's fashion shop whose owner was rather adamant that he was never ever going to wear a face mask, and that he didn't believe the COVID-19 thing was real. When I argued for the opposing position, he pretty much dismissed what I said out of hand, claiming that "the hospitals are empty dude" and "it's all a lie". When I told him that this really isn't true, he went like "well, that's just your opinion". Well, no -- certain things are facts, not opinions. Even if you don't believe that this disease kills people, the idea that this is a matter of opinion is missing the ball by so much that I was pretty much stunned by the level of ignorance.

His whole demeanor pissed me off rather quickly. While I disagree with the position that it should be your decision whether or not to wear a mask, it's certainly possible to have that opinion. However, whether or not people need to go to hospitals is not an opinion -- it's something else entirely.

After calming down, the encounter got me thinking, and made me focus on something I'd been thinking about before but hadn't fully formulated: the fact that some people in this world seem to misunderstand the nature of what it is to do science, and end up, under the claim of being "sceptical", with various nonsense things -- see scientology, flat earth societies, conspiracy theories, and whathaveyou.

So, here's something that might (but probably won't) help some people figuring out stuff. Even if it doesn't, it's been bothering me and I want to write it down so it won't bother me again. If you know all this stuff, it might be boring and you might want to skip this post. Otherwise, take a deep breath and read on...

Statements are things people say. They can be true or false; "the sun is blue" is an example of a statement that is trivially false. "The sun produces light" is another one that is trivially true. "The sun produces light through a process that includes hydrogen fusion" is another statement, one that is a bit more difficult to prove true or false. Another example is "Wouter Verhelst does not have a favourite color". That happens to be a true statement, but it's fairly difficult for anyone that isn't me (or any one of the other Wouters Verhelst out there) to validate as true.

While statements can be true or false, combining statements without more context is not always possible. As an example, the statement "Wouter Verhelst is a Debian Developer" is a true statement, as is the statement "Wouter Verhelst is a professional volleyball player"; but the statement "Wouter Verhelst is a professional volleyball player and a Debian Developer" is not, because while I am a Debian Developer, I am not a professional volleyball player -- I just happen to share a name with someone who is.

A statement is never a fact, but it can describe a fact. When a statement is a true statement, either because we trivially know what it states to be true or because we have performed an experiment that proved beyond any possible doubt that the statement is true, then what the statement describes is a fact. For example, "Red is a color" is a statement that describes a fact (because, yes, red is definitely a color, that is a fact). Such statements are called statements of fact. There are other possible statements. "Grass is purple" is a statement, but it is not a statement of fact; because as everyone knows, grass is (usually) green.

A statement can also describe an opinion. "The Porsche 911 is a nice car" is a statement of opinion. It is one I happen to agree with, but it is certainly valid for someone else to make a statement that conflicts with this position, and there is nothing wrong with that. As the saying goes, "opinions are like assholes: everyone has one". Statements describing opinions are known as statements of opinion.

The differentiating factor between facts and opinions is that facts are universally true, whereas opinions only hold for the people who state the opinion and anyone who agrees with them. Sometimes it's difficult or even impossible to determine whether a statement is true or not. The statement "The numbers that win the South African Powerball lottery on the 31st of July 2020 are 2, 3, 5, 19, 35, and powerball 14" is not a statement of fact, because at the time of writing, the 31st of July 2020 is in the future (which at this point gives it a 1 in 24,435,180 chance to be true). However, that does not make it a statement of opinion; it is not my opinion that the above numbers will win the South African powerball; instead, it is my guess that those numbers will be correct. Another word for "guess" is hypothesis: a hypothesis is a statement that may be universally true or universally false, but for which the truth -- or its lack thereof -- cannot currently be proven beyond doubt. On Saturday, August 1st, 2020 the above statement about the South African Powerball may become a statement of fact; most likely however, it will instead become a false statement.

An unproven hypothesis may be expressed as a matter of belief. The statement "There is a God who rules the heavens and the Earth" cannot currently (or ever) be proven beyond doubt to be either true or false, which by definition makes it a hypothesis; however, for matters of religion this is entirely unimportant, as for believers the belief that the statement is correct is all that matters, whereas for nonbelievers the truth of that statement is not at all relevant. A belief is not an opinion; an opinion is not a belief.

Scientists do not deal with unproven hypotheses, except insofar as they attempt to prove, through direct observation of nature (either out in the field or in a controlled laboratory setting), that the hypothesis is, in fact, a statement of fact. This makes unprovable hypotheses unscientific -- but that does not mean that they are false, or even that they are uninteresting statements. Unscientific statements are merely statements that science cannot either prove or disprove, and that therefore lie outside of the realm of what science deals with.

Given that background, I have always found the so-called "conflict" between science and religion to be a non-sequitur. Religion deals in one type of statements; science deals in another. They do not overlap, since a statement can either be proven or it cannot, and religious statements by their very nature focus on unprovable belief rather than universal truth. Sure, the range of things that science has figured out the facts about has grown over time, which implies that religious statements have sometimes been proven false; but is it heresy to say that "animals exist that can run 120 kph" if that is the truth, even if such animals don't exist in, say, Rome?

Something very similar can be said about conspiracy theories. Yes, it is possible to hypothesize that NASA did not send men to the moon, and that all the proof contrary to that statement was somehow fabricated. However, by its very nature such a hypothesis cannot be proven or disproven (because the statement states that all proof was fabricated), which therefore implies that it is an unscientific statement.

It is good to be sceptical about what is being said to you. People can have various ideas about how the world works, but only one of those ideas -- one of the possible hypotheses -- can be true. As long as a hypothesis remains unproven, scientists love to be sceptical themselves. In fact, if you can somehow prove beyond doubt that a scientific hypothesis is false, scientists will love you -- it means they now know something more about the world and that they'll have to come up with something else, which is a lot of fun.

When a scientific experiment or observation proves that a certain hypothesis is true, then this probably turns the hypothesis into a statement of fact. That is, it is of course possible that there's a flaw in the proof, or that the experiment failed (but that the failure was somehow missed), or that no observance of a particular event happened when a scientist tried to observe something, but that this was only because the scientist missed it. If you can show that any of those possibilities hold for a scientific proof, then you'll have turned a statement of fact back into a hypothesis, or even (depending on the exact nature of the flaw) into a false statement.

There's more. It's human nature to want to be rich and famous, sometimes no matter what the cost. As such, there have been scientists who have falsified experimental results, or who have claimed to have observed something when this was not the case. For that reason, a scientific paper that gets written after an experiment turned a hypothesis into fact describes not only the results of the experiment and the observed behavior, but also the methodology: the way in which the experiment was run, with enough details so that anyone can retry the experiment.

Sometimes that may mean spending a large amount of money just to be able to run the experiment (most people don't have an LHC in their backyard, say), and in some cases some of the required materials won't be available (the latter is especially true for, e.g., certain chemical experiments that involve highly explosive things); but the information is always there, and if you spend enough time and money reading through the available papers, you will be able to independently prove the hypothesis yourself. Scientists tend to do just that; when the results of a new experiment are published, they will try to rerun the experiment, partially because they want to see things with their own eyes; but partially also because if they can find fault in the experiment or the observed behavior, they'll have reason to write a paper of their own, which will make them a bit more rich and famous.

I guess you could say that there's three types of people who deal with statements: scientists, who deal with provable hypotheses and statements of fact (but who have no use for unprovable hypotheses and statements of opinion); religious people and conspiracy theorists, who deal with unprovable hypotheses (where the religious people deal with these to serve a large cause, while conspiracy theorists only care about the unprovable hypotheses); and politicians, who should care about proven statements of fact and produce statements of opinion, but who usually attempt the reverse of those two these days :-/

Anyway...

mic drop

27 July, 2020 03:52PM

Steve Kemp

Growing food is fun.

"I grew up on a farm" is something I sometimes what I tell people. It isn't true, but it is a useful shorthand. What is true is that my parents both come from a farming background, my father's family up in Scotland, my mother's down in Yorkshire.

Every summer my sisters and myself would have a traditional holiday at the seaside, which is what people do in the UK (Blackpool, Scarborough, Great Yarmouth, etc). Before, or after, that we'd spend the rest of the summer living on my grandmother's farm.

I loved spending time on the farm when I was a kid, and some of my earliest memories date from that time. For example I remember hand-feeding carrots to working dogs (alsatians) that were taller than I was. I remember trying to ride on the backs of those dogs, and how that didn't end well. In fact the one and only time I can recall my grandmother shouting at me, or raising her voice at all, was when my sisters and I spent an afternoon playing in the coal-shed. We were filthy and covered in coal-dust from head to toe. Awesome!

Anyway the only reason I bring this up is because I have a little bit of a farming background, largely irrelevant in my daily life, but also a source of pleasant memories. Despite it being an animal farm (pigs, sheep, cows) there was also a lot of home-grown food, which my uncle Albert would deliver/sell to people nearby out of the back of a van. That same van would be used to ferry us to see the fireworks every November. Those evenings were very memorable too - they would almost always involve flasks of home-made vegetable soup.

Nowadays I live in Finland, and earlier in the year we received access to an allotment - a small piece of land (10m x 10m) for €50/year - upon which we can grow our own plants, etc.

My wife decided to plant flowers and make it look pretty. She did good.

I decided to plant "food". I might not have done this stuff from scratch before, but I was pretty familiar with the process from my youth, and also having the internet to hand to make the obvious searches such as "How do you know when you can harvest your garlic?"

Before I started I figured it couldn't be too hard, after all if you leave onions/potatoes in the refrigerator for long enough they start to grow! It isn't like you have to do too much to help them. In short it has been pretty easy and I'm definitely going to be doing more of it next year.

I've surprised myself by enjoying the process as much as I have. Every few days I go and rip up the weeds, and water the things we've planted. So far I've planted, and harvested, Radish, Garlic, Onions, and in a few more weeks I'll be digging up potatoes.

I have no particular point to this post, except to say that if you have a few hours spare a week, and a slab of land to hand upon which you can dig and plant I'd recommend it. Sure there were annoyances, and not a single one of the carrot-seeds I planted showed any sign of life, but the other stuff? The stuff that grew? Very tasty, om nom nom ..

(It has to be said that when we received the plot there was a jungle growing upon it. Once we tidied it all up we found raspberries, roses, and other things. The garlic I reaped was already growing so I felt like a cheat to harvest it. That said I did plant a couple of bulbs on my balcony so I could say "I grew this from scratch". Took a while, but I did indeed harvest my own garlic.)

27 July, 2020 12:00PM

Martin Michlmayr

ledger2beancount 2.4 released

I released version 2.4 of ledger2beancount, a ledger to beancount converter.

There are two notable changes in this release:

  1. I fixed two regressions introduced in the last release. Sorry about the breakage!
  2. I improved support for hledger. I believe all syntax differences in hledger are supported now.

Here are the changes in 2.4:

  • Fix regressions introduced in version 2.3
    • Handle price directives with comments
    • Don't assume implicit conversion when price is on second posting
  • Improve support for hledger
    • Fix parsing of hledger tags
    • Support commas as decimal markers
    • Support digit group marks through commodity and D directives
    • Support end aliases directive
    • Support regex aliases
    • Recognise total balance assertions
    • Recognise sub-account balance assertions
  • Add support for define directive
  • Convert all uppercase metadata tags to all lowercase
  • Improve handling of ledger lots without cost
  • Allow transactions without postings
  • Fix parsing issue in commodity declarations
  • Support commodities that contain quotation marks
  • Add --version option to show version
  • Document problem of mixing apply and include
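
Basic usage is unchanged; a minimal invocation (with a placeholder journal name) looks like this, and the new --version option reports the installed version:

ledger2beancount journal.ledger > journal.beancount
ledger2beancount --version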

Thanks to Kirill Goncharov for pointing out one of the regressions, to Taylor R Campbell for a patch, to Stefano Zacchiroli for some input, and finally to Simon Michael for input on hledger!

You can get ledger2beancount from GitHub.

27 July, 2020 08:07AM by Martin Michlmayr

Russ Allbery

Summer haul

I'm buying rather too many books at the moment and not reading enough of them (in part because I got back into Minecraft and in part because I got a bit stuck on a few difficult books). I think I've managed to get myself unstuck again, though, and have started catching up on reviews.

2020. It's kind of a lot. And I'm not even that heavily affected.

Katherine Addison — The Angel of the Crows (sff)
Marie Brennan — A Natural History of Dragons (sff)
Kacen Callender — Queen of the Conquered (sff)
Jo Clayton — Diadem from the Stars (sff)
Jo Clayton — Lamarchos (sff)
Jo Clayton — Irsud (sff)
Clifford D. Conner — The Tragedy of American Science (nonfiction)
Kate Elliott — Unconquerable Sun (sff)
Rory Fanning & Craig Hodges — Long Shot (nonfiction)
Michael Harrington — Socialism: Past & Future (nonfiction)
Nalo Hopkinson — Brown Girl in the Ring (sff)
Kameron Hurley — The Stars Are Legion (sff)
N.K. Jemisin — Emergency Skin (sff)
T. Kingfisher — A Wizard's Guide to Defensive Baking (sff)
T. Kingfisher — Nine Goblins (sff)
Michael Lewis — The Fifth Risk (nonfiction)
Paul McAuley — War of the Maps (sff)
Gretchen McCulloch — Because Internet (nonfiction)
Hayao Miyazaki — Nausicaä of the Valley of the Wind (graphic novel)
Annalee Newitz — The Future of Another Timeline (sff)
Nick Pettigrew — Anti-Social (nonfiction)
Rivers Solomon, et al. — The Deep (sff)
Jo Walton — Or What You Will (sff)
Erik Olin Wright — Stardust to Stardust (nonfiction)

Of these, I've already read and reviewed The Fifth Risk (an excellent book).

27 July, 2020 04:31AM