February 25, 2018

hackergotchi for Norbert Preining

Norbert Preining

Debian/TeX Live 2017.20180225-1

To my big surprise, the big rework didn’t create any havoc at all, not one bug report regarding the change. That is good. OTOH, I took some time off due to various surprising (and sometimes disturbing) things that have happened in the last month, so the next release took a bit longer than expected.

I am giving here the list of new and updated packages extracted from my local tlmgr.log file, though I have some doubts: both lists seem a bit too long to me 😉 Anyway, it was a very busy month for TeX packages.

We are also moving at high speed towards TeX Live 2018. There will probably be one more release of Debian packages before we switch to the 2018 branch, which aligns nicely with the release planning of TeX Live.

Enjoy.

New packages

abnt, adigraph, algobox, algolrevived, aligned-overset, alkalami, amscls-doc, authorarchive, axodraw2, babel-azerbaijani, beamertheme-saintpetersburg, beilstein, bib2gls, biblatex-enc, biblatex-oxref, biochemistry-colors, blowup, bredzenie, bxcalc, bxjaprnind, bxorigcapt, bxtexlogo, cesenaexam, cheatsheet, childdoc, cje, cm-mf-extra-bold, cmsrb, coelacanth, collection-plaingeneric, combofont, context-handlecsv, crossreftools, ctan-o-mat, currency, dejavu-otf, dijkstra, draftfigure, ducksay, dviinfox, dynkin-diagrams, endofproofwd, eqnnumwarn, fancyhandout, fetchcls, fixjfm, fontawesome5, fontloader-luaotfload, forms16be, glossaries-finnish, gotoh, graphicxpsd, gridslides, hackthefootline, hagenberg-thesis, hecthese, hithesis, hlist, hyphen-belarusian, ifptex, ifxptex, invoice2, isopt, istgame, jfmutil, knowledge, komacv-rg, ku-template, labelschanged, ladder, latex-mr, latex-refsheet, lccaps, limecv, llncsconf, luapackageloader, lyluatex, maker, marginfit, mathfam256, mathfixs, mcexam, mensa-tex, modernposter, modular, mptrees, multilang, musicography, na-box, na-position, niceframe-type1, nicematrix, notestex, numnameru, octave, outlining, pdfprivacy, pdfreview, pixelart, plex, plex-otf, pm-isomath, poetry, polexpr, pst-antiprism, pst-calculate, pst-dart, pst-geometrictools, pst-poker, pst-rputover, pst-spinner, pst-vehicle, pxufont, rutitlepage, scientific-thesis-cover, scratch, scratchx, sectionbreak, sesstime, sexam, shobhika, short-math-guide, simpleinvoice, simplekv, spark-otf, spark-otf-fonts, spectralsequences, stealcaps, termcal-de, textualicomma, thaienum, thaispec, theatre, thesis-gwu, tikz-feynhand, tikz-karnaugh, tikz-ladder, tikz-layers, tikz-relay, tikz-sfc, tikzcodeblocks, tikzducks, timbreicmc, translator, typewriter, typoaid, uhhassignment, unitn-bimrep, univie-ling, upzhkinsoku, wallcalendar, witharrows, xechangebar, xii-lat, xltabular, xsim, xurl, zebra-goodies, zhlipsum.

Updated packages

ESIEEcv, FAQ-en, GS1, HA-prosper, IEEEconf, IEEEtran, MemoirChapStyles, academicons, achemso, acro, actuarialsymbol, adobemapping, afm2pl, aleph, algorithm2e, amiri, amscls, amsldoc-it, amsmath, amstex, amsthdoc-it, animate, aomart, apa6, appendixnumberbeamer, apxproof, arabi, arabluatex, arara, archaeologie, arsclassica, autosp, awesomebox, babel, babel-english, babel-french, babel-georgian, babel-hungarian, babel-latvian, babel-russian, babel-ukrainian, bangorcsthesis, bangorexam, baskervillef, bchart, beamer, beamerswitch, beebe, besjournals, beuron, bgteubner, biber, biblatex, biblatex-abnt, biblatex-anonymous, biblatex-apa, biblatex-archaeology, biblatex-arthistory-bonn, biblatex-bookinother, biblatex-cheatsheet, biblatex-chem, biblatex-chicago, biblatex-fiwi, biblatex-gb7714-2015, biblatex-gost, biblatex-iso690, biblatex-manuscripts-philology, biblatex-philosophy, biblatex-publist, biblatex-realauthor, biblatex-sbl, biblatex-shortfields, biblatex-source-division, biblatex-trad, biblatex-true-citepages-omit, bibleref, bibletext, bibtex, bibtexperllibs, bibtexu, bidi, bnumexpr, bookcover, bookhands, bpchem, br-lex, bxbase, bxjscls, bxnewfont, bxpapersize, bytefield, c90, callouts, calxxxx-yyyy, catechis, cbfonts-fd, ccicons, cdpbundl, cellspace, changebar, checkcites, chemfig, chemmacros, chemschemex, chet, chickenize, chktex, circuitikz, citeall, cjk-gs-integrate, cjkutils, classicthesis, cleveref, cm, cmexb, cmpj, cns, cochineal, collref, complexity, comprehensive, computational-complexity, context, context-filter, context-fullpage, context-letter, context-title, context-vim, contracard, cooking-units, correctmathalign, covington, cquthesis, crossrefware, cslatex, csplain, csquotes, css-colors, ctan-o-mat, ctanify, ctex, ctie, curves, cweb, cyrillic-bin, cyrplain, datatool, datetime2-bahasai, datetime2-german, datetime2-spanish, datetime2-ukrainian, dccpaper, dejavu-otf, delimset, detex, dnp, doclicense, docsurvey, dox, dozenal, drawmatrix, dtk, dtl, dvi2tty, dviasm, dvicopy, dvidvi, dviljk, dvipdfmx, dvipng, dvipos, dvips, e-french, easyformat, ebproof, eledmac, elements, elpres, elzcards, embrac, emisa, enotez, eplain, epstopdf, eqparbox, esami, etoc, etoolbox, europasscv, euxm, exam, expex, factura, fancyhdr, fancylabel, fbb, fei, feyn, fibeamer, fira, fithesis, fmtcount, fnspe, fontinst, fontname, fontools, fonts-tlwg, fontspec, fonttable, fontware, footnotehyper, forest, fvextra, genealogytree, genmisc, gfsdidot, glossaries, glossaries-extra, glyphlist, gost, graphbox, graphics, graphics-def, graphics-pln, gregoriotex, gsftopk, gtl, guide-to-latex, gustlib, gustprog, halloweenmath, handout, hepthesis, hobby, hvfloat, hvindex, hyperref, hyperxmp, hyph-utf8, hyphen-base, hyphen-churchslavonic, hyphen-german, hyphen-latin, ifluatex, ifplatform, ijsra, impatient-cn, inconsolata, ipaex, iscram, jadetex, japanese-otf, japanese-otf-uptex, jlreq, jmlr, jmn, jsclasses, kantlipsum, karnaugh-map, ketcindy, keyfloat, kluwer, koma-script, komacv, kpathsea, l3build, l3experimental, l3kernel, l3packages, lacheck, lambda, langsci, latex-bin, latex2e-help-texinfo, latex2e-help-texinfo-fr, latex2e-help-texinfo-spanish, latex2man, latex2nemeth, latexbug, latexconfig, latexdiff, latexindent, latexmk, lato, lcdftypetools, leadsheets, leipzig, lettre, libertine, libertinegc, libertinus, libertinust1math, limap, lion-msc, listofitems, lithuanian, lni, lollipop, lt3graph, lua-check-hyphen, lualatex-math, luamplib, luatex, luatexja, luatexko, luatodonotes, luaxml, lwarp, m-tx, macros2e, make4ht, 
makedtx, makeindex, mandi, manfnt-font, marginnote, markdown, math-into-latex-4, mathpunctspace, mathtools, mcf2graph, media9, metafont, metapost, mex, mfirstuc, mflua, mfnfss, mfware, mhchem, microtype, minted, mltex, morewrites, mpostinl, mptopdf, msu-thesis, multiexpand, musixtex, mwcls, mwe, nddiss, ndsu-thesis, newpx, newtx, newtxtt, nlctdoc, noto, novel, numspell, nwejm, oberdiek, ocgx2, omegaware, oplotsymbl, optidef, ot-tableau, otibet, overlays, overpic, pagecolor, patgen, pdflatexpicscale, pdfpages, pdftex, pdftools, pdfwin, pdfx, perfectcut, pgf, pgfgantt, pgfplots, phfqit, philokalia, phonenumbers, phonrule, pkgloader, placeat, platex, platex-tools, platexcheat, poemscol, polski, polynom, powerdot, prerex, presentations, preview, probsoln, program, ps2pk, pst-barcode, pst-cie, pst-circ, pst-dart, pst-eucl, pst-exa, pst-fit, pst-fractal, pst-func, pst-geo, pst-ghsb, pst-node, pst-ode, pst-ovl, pst-pdf, pst-pdgr, pst-plot, pst-pulley, pst-solarsystem, pst-solides3d, pst-tools, pst2pdf, pstool, pstools, pstricks, pstricks-add, psutils, ptex, ptex-base, ptex-fontmaps, ptex-fonts, ptex2pdf, pxbase, pxchfon, pxjahyper, pxrubrica, pythontex, qpxqtx, quran, ran_toks, randomlist, randomwalk, rec-thy, refenums, reledmac, repere, resphilosophica, revtex4, robustindex, roex, rubik, sasnrdisplay, screenplay-pkg, scsnowman, seetexk, siunitx, skak, skrapport, songs, spreadtab, srbook-mem, stage, struktex, svg, sympytexpackage, synctex, systeme, t1utils, tcolorbox, testidx, tetex, tex, tex-refs, tex4ebook, tex4ht, texconfig, texcount, texdef, texdirflatten, texdoc, texfot, texosquery, texshade, texsis, textgreek, texware, texworks, thalie, thesis-ekf, thuthesis, tie, tikz-kalender, tikz-timing, tikzmark, tikzpeople, tikzsymbols, tlcockpit, tlshell, tocloft, toptesi, tpic2pdftex, tqft, tracklang, translation-biblatex-de, translations, ttfutils, tudscr, tugboat, turabian-formatting, ucharclasses, udesoftec, ulthese, unfonts-core, unfonts-extra, unicode-data, unicode-math, uowthesistitlepage, updmap-map, uplatex, upmethodology, uptex-base, uptex-fonts, variablelm, varsfromjobname, velthuis, visualtikz, vlna, web, widetable, wordcount, xassoccnt, xcharter, xcjk2uni, xcntperchap, xdvi, xecjk, xepersian, xetex, xetexconfig, xetexko, xetexref, xgreek, xii, xindy, xint, xmltex, xmltexconfig, xpinyin, xsavebox, ycbook, yhmath, zhnumber, zxjatype.

25 February, 2018 01:17PM by Norbert Preining

hackergotchi for Daniel Lange

Daniel Lange

Debian Gitlab (salsa.debian.org) tricks

Debian is moving the git hosting from alioth.debian.org, an instance of Fusionforge, to salsa.debian.org which is a Gitlab instance.

There is some background reading available on https://wiki.debian.org/Salsa/. This also has pointers to an import script to ease migration for people that move repositories. It's definitely worth hanging out in #alioth on oftc, too, to learn more about salsa / gitlab in case you have a persistent irc connection.

As of now() salsa has 15,320 projects, 2,655 users in 298 groups.
Alioth has 29,590 git repositories (which is roughly equivalent to a project in Gitlab), 30,498 users in 1,154 projects (which is roughly equivalent to a group in Gitlab).

So we currently have 50% of the git repositories migrated. One month after leaving beta. This is very impressive.
As Alioth has naturally accumulated some cruft, Alexander Wirt (formorer) estimates that 80% of the repositories in use have already been migrated.

So it's time to update your local .git/config URLs!

Mehdi Dogguy has written nice scripts to ease handling salsa / gitlab via the (extensive and very well documented) API. Among them is list_projects, which gets you a nice overview of the projects in a specific group. This is especially useful for the "Debian" group, which contains the former collab-maint repositories, i.e. source code that can and shall be maintained by Debian Developers collectively.
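For the common case of just listing everything in a group, the same information can be pulled straight from the API; this is not Mehdi's script, just a hedged one-liner assuming the group path is plain debian and that one page of 100 results is enough:

$ curl --silent "https://salsa.debian.org/api/v4/groups/debian/projects?per_page=100" \
    | jq -r '.[].path_with_namespace'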

Finding migrated repositories

Salsa can search quite quickly via the Web UI: https://salsa.debian.org/search?utf8=✓&search=htop

Salsa search screenshot

but finding the URL to clone the repository from is more clicks and ~4MB of data each time (yeah, the modern web), so

$ curl --silent "https://salsa.debian.org/api/v4/projects?search=htop" | jq .
[
  {
    "id": 9546,
    "description": "interactive processes viewer",
    "name": "htop",
    "name_with_namespace": "Debian / htop",
    "path": "htop",
    "path_with_namespace": "debian/htop",
    "created_at": "2018-02-05T12:44:35.017Z",
    "default_branch": "master",
    "tag_list": [],
    "ssh_url_to_repo": "git@salsa.debian.org:debian/htop.git",
    "http_url_to_repo": "https://salsa.debian.org/debian/htop.git",
    "web_url": "https://salsa.debian.org/debian/htop",
    "avatar_url": null,
    "star_count": 0,
    "forks_count": 0,
    "last_activity_at": "2018-02-17T18:23:05.550Z"
  }
]

is a bit nicer.
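If all you are after is the clone URL, jq can pick it straight out of that response (the field name is visible in the JSON above):

$ curl --silent "https://salsa.debian.org/api/v4/projects?search=htop" | jq -r '.[].ssh_url_to_repo'
git@salsa.debian.org:debian/htop.git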

Please note that the git URL format is a bit odd; it's either
git@salsa.debian.org:debian/htop.git or
ssh://git@salsa.debian.org/debian/htop.git.

Notice the ":" -> "/" after the hostname. Bit me once.

Finding repositories to update

At this time I found it useful to check which of the repositories I have cloned had not yet been updated in the local .git/config:

find ~/debconf ~/my_sources ~/shared -ipath '*.git/config' -exec grep -H 'url.*git\.debian' '{}' \;

Thanks to Jörg Jaspert (Ganneff) the Debconf repositories have all been moved to Salsa now.
Hint: Bug him for his scripts if you need to do complex moves.

Updating the URLs has been an hour's work on my side and there is little you can do to speed that up if - as in the Debconf case - teams have used the opportunity to clean up and things are not as easy as using sed -i.

But there is no reason to do this more than once, so for the laptops...

Speeding up migration on multiple devices

rsync -armuvz --existing --include="*/" --include=".git/config" --exclude="*" ~/debconf/ laptop:debconf/

will rsync the .git/config files that you changed to other systems where you keep partial copies.

On these a simple git pull to get up to remote HEAD or using the git_pull_all one-liner from https://daniel-lange.com/archives/99-Managing-a-project-consisting-of-multiple-git-repositories.html will suffice.

Git short URL

Stefano Rivera (tumbleweed) shared this clever trick:

git config --global url."ssh://git@salsa.debian.org/".insteadOf salsa:

This way you can git clone salsa:debian/htop.
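The shorthand also works for repointing clones you already have, assuming the remote is called origin:

$ git remote set-url origin salsa:debian/htop
$ git remote -v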

25 February, 2018 11:30AM by Daniel Lange (nospam@example.com)

Enrico Zini

Automatic deploy from gitlab/salsa CI

At SnowCamp I migrated Front Desk-related repositories to Salsa gitlab and worked on setting up Continuous Integration for the web applications I maintain in Debian.

The result is a reusable Django app that integrates with gitlab's webhooks.

It is currently working for https://contributors.debian.org and I'll soon reuse it for https://nm.debian.org and https://debtags.debian.org.

The only setup needed on DSA side is to enable systemd linger on the deploy user.
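For reference, lingering is what lets the deploy user's systemd user units run without an open login session; it is switched on with loginctl (the user name deploy is only a placeholder here):

loginctl enable-linger deploy   # run as root or via sudo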

The CI/deploy workflow is this:

  • gitlab runs tests in the CI
  • gitlab notifies pipeline status changes via a webhook
  • when a selected pipeline changes status to success, the application queues a deploy for that shasum by creating a shasum.deploy file in a queue directory
  • a systemd .path unit running as the deploy user triggers when the new file is created and runs manage.py deploy as the deploy user
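A minimal sketch of what such a pair of user units could look like (unit names and paths here are invented, not taken from the app):

# ~/.config/systemd/user/deploy-queue.path (hypothetical name)
[Path]
# fire whenever a new .deploy file shows up in the queue directory
PathExistsGlob=/srv/deploy/queue/*.deploy

[Install]
WantedBy=default.target

# ~/.config/systemd/user/deploy-queue.service, started by the .path unit above
[Service]
Type=oneshot
ExecStart=/srv/app/manage.py deploy

# enable with: systemctl --user enable --now deploy-queue.path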

And manage.py deploy does this:

  • git fetch
  • abort if the shasum of the head of the deploy branch does not match one of the .deploy files in the queue directory
  • abort if the head of the deploy branch is not signed by a gpg key present in a deploy keyring
  • abort if the head of the deploy branch is not a successor of the currently deployed commit
  • update the working copy
  • run a deploy script
  • remove all .deploy files seen when the script was called
  • send an email to the site admins with a log of the whole deploy process, whether it succeeded or it was aborted
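Two of those checks (the signature and the ancestry one) map onto single git invocations; a rough shell sketch, where the branch name master, the keyring path and $DEPLOYED (the currently deployed commit) are all placeholders:

# head must carry a good signature from the deploy keyring
GNUPGHOME=/srv/deploy/keyring git verify-commit origin/master || exit 1
# the currently deployed commit must be an ancestor of the new head
git merge-base --is-ancestor "$DEPLOYED" origin/master || exit 1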

For more details, see the app's README.md

I find it wonderful that we got to a stage where we can have this in Debian, and I am very grateful to all the work that has been done and is being done in setting up and maintaining Salsa.

25 February, 2018 10:24AM

hackergotchi for Junichi Uekawa

Junichi Uekawa

My chrome extension became useful for me.

My chrome extension became useful for me. It's nice. The chrome.tabs API is a little weird: I want to open and manage tabs and then auto-process some things, but the executeScript interface is strange. Also, I don't know how I would detect page transitions without polling.

25 February, 2018 01:26AM by Junichi Uekawa

hackergotchi for Nicolas Dandrimont

Nicolas Dandrimont

Report from Debian SnowCamp: day 3

[Previously: day 1, day 2]

Thanks to Valhalla and other members of LIFO, a bunch of fine Debian folks have convened in Laveno, on the shores of Lake Maggiore, for a nice weekend of relaxing and sprinting on various topics, a SnowCamp.

As a starter, and on request from Valhalla, please enjoy an attempt at a group picture (unfortunately, missing a few people). Yes, the sun even showed itself for a few moments today!

One of the numerous SnowCamp group pictures

As for today’s activities… I’ve cheated a bit by doing stuff after sending yesterday’s report and before sleep: I reviewed some of Stefano’s dc18 pull requests; I also fixed (well, papered over) the debexpo uscan bug.

After keeping eyes closed for a few hours, the day was then spent tickling the python-gitlab module, packaged by Federico, in an attempt to resolve https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=890594 in a generic way.

The features I intend to implement are mostly inspired from jcowgill’s multimedia-cli:

  • per-team yaml configuration of “expected project state” (access level, hooks and other integrations, enablement of issues, merge requests, CI, …)
  • new repository creation (according to a team config or a personal config, e.g. for collab-maint, the Debian group)
  • audit of project configurations
  • mass-configuration changes for projects
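None of this exists yet, but as a rough illustration of the audit/mass-change idea, python-gitlab keeps that kind of loop short (the group name, token variable and the checked setting are all invented for the example):

import os
import gitlab

gl = gitlab.Gitlab("https://salsa.debian.org", private_token=os.environ["GITLAB_TOKEN"])
group = gl.groups.get("debian")
for gp in group.projects.list(all=True):
    project = gl.projects.get(gp.id)   # group listings return slim objects, fetch the full one
    if not project.issues_enabled:     # example "expected state" check
        project.issues_enabled = True
        project.save()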

There could also be some use for bits of group management, e.g. to handle the access control of the DebConf group and its subgroups, although I hear Ganneff prefers shell scripts.

My personal end goal is to (finally) do the 3D printer team repository migration, but e.g. the Python team would like to update configuration of all repos to use the new KGB hook instead of irker, so some generic interest in the tool exists.

As the tool has a few dependencies (because I really have better things to do than reimplement another wrapper over the GitLab API) I’m not convinced devscripts is the right place for it to live… We’ll see when I have something that does more than print a list of projects to show!

In the meantime, I have the feeling Stefano has lined up a new batch of DebConf website pull requests for me, so I guess that’s what I’m eating for breakfast “tomorrow”… Stay tuned!

My attendance to SnowCamp is in part made possible by donations to the Debian project. If you want to keep the project going, please consider donating, joining the Debian partners program, or sponsoring the upcoming Debian Conference.

25 February, 2018 12:49AM by olasd

February 24, 2018

John Goerzen

Remembering Tom Wallis, The System Administrator That Made The World Better

I never asked Tom why he hired me.

I was barely 17 at the time – already a Debian developer, full of more enthusiasm than experience, and Tom offered me a job. It was my first real sysadmin job, and to my delight, I got to work with Unix. For two years, I was the part-time assistant systems administrator for the Computer Science department at Wichita State University. And Tom was my boss, mentor, and old friend. Tom was walking proof that a system administrator can make the world a better place.

That amazing time was two decades ago now. And in the time since, every so often Tom and I would exchange emails. I enjoyed occasionally dropping by his office at work and surprising him.

So it was a shock to get an email this week that Tom had married for the first time at age 54, and passed away four days later due to a boating accident while on his honeymoon.

Tom was a man with a big laugh and an even bigger heart. When I started a Linux Users Group (LUG) on campus, there was Tom – helping to arrange a place to meet and Internet access when we needed it, and giving his evenings to simply be present as a supporter.

I had (and still have) a passion for Free/Open Source software. Linux was just new at the time, and was barely present in the department when I started. I was fortunate that CS was the “little dept. that could” back then, with wonderful people but not a lot of money, so a free operating system helped with a lot of problems. Tom supported me in my enthusiasm to introduce Debian all over the place. His trust meant much, and brought out the best in me.

I learned a lot from Tom, and more than just technology. A state university can be a heavily bureaucratic place at times. Tom was friends with every “can-do” person on campus, it seemed, and they all managed to pull through and get things done – sometimes working around policies that were a challenge.

I have sometimes wondered if I am doing enough, making a big enough difference in the world. Does a programmer really make a difference in people’s lives?

Tom Wallis is proof that the answer is yes. From the stories I heard at his funeral today, I can only guess how many other lives he touched.

This week, Tom gave me one final gift: a powerful reminder that sysadmins and developers can make the world a better place, can touch people’s lives. I hope Tom knew how much I appreciated him. If I find a way to make a difference in someone’s life — maybe an intern I’ve hired, or someone I take flying — then I will have found a way to pass on Tom’s gift to another, and I hope I can.

tom-wallis-penguin

(This penguin was sitting out on the table of memorabilia from Tom today. I remember it from a shelf in his office.)

24 February, 2018 08:54PM by John Goerzen

hackergotchi for Vincent Bernat

Vincent Bernat

OPL2LPT: an AdLib sound card for the parallel port

The AdLib sound card was the first popular sound card for IBM PC—prior to that, we were pampered by the sound of the PC speaker. Connected to an 8-bit ISA slot, it is powered by a Yamaha YM3812 chip, also known as OPL2. This chip can drive 9 sound channels whose characteristics can be fine tuned through 244 write-only registers.

AdLib sound card

I had one but I am unable to locate it anymore. Models on eBay are quite rare and expensive. It is possible to build one yourself (either Sergey’s one or this faithful reproduction). However, you still need an ISA port. The limitless imagination of some hackers can still help here. For example, you can combine Sergey’s Xi 8088 processor board, with his ISA 8-bit backplane, his Super VGA card and his XT-CF-Lite card to get your very own modernish IBM PC compatible. Alternatively, you can look at the AdLib sound card on a parallel port from Raphaël Assénat.

The OPL2LPT sound card🔗

Recently, the 8-Bit Guy released a video about an AdLib sound card for the parallel port, the OPL2LPT. While current motherboards don’t have a parallel port anymore, it’s easy to add a PCI-Express one. So, I bought a pre-soldered OPL2LPT and a few days later, it was at my doorstep:

OPL2LPT sound card

The expected mode of operation for such a device is to plug it to an ISA parallel port (accessible through I/O port 0x378), load a DOS driver to intercept calls to AdLib’s address and run some AdLib-compatible game. While correctly supported by Linux, the PCI-Express parallel port doesn’t operate like an ISA one. QEMU comes with a parallel port emulation but, due to timing issues, cannot correctly drive the OPL2LPT. However, VirtualBox emulation is good enough.1

On Linux, the OPL2LPT can be programmed almost like an actual AdLib. The following code writes a value to a register:

static void lpt_write(uint8_t data, uint8_t ctrl) {
  /* Present the byte on the data lines, then pulse the INIT control line
   * to latch it into the OPL2LPT. */
  ieee1284_write_data(port, data);
  ieee1284_write_control(port, (ctrl | C1284_NINIT) ^ C1284_INVERTED);
  ieee1284_write_control(port,  ctrl                ^ C1284_INVERTED);
  ieee1284_write_control(port, (ctrl | C1284_NINIT) ^ C1284_INVERTED);
}

void opl_write(uint8_t reg, uint8_t value) {
  /* Select the register… */
  lpt_write(reg, C1284_NSELECTIN | C1284_NSTROBE);
  usleep(4); // 3.3 microseconds: the OPL2 needs that long after an address write

  /* …then write its value; the OPL2 needs 23 µs before the next write. */
  lpt_write(value, C1284_NSELECTIN);
  usleep(23);
}
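As a quick sanity check of the wiring, the classic test tone from the AdLib programming documentation can be sent through the helper above; the register values below follow that well-known example and are not taken from any of the patches:

/* Key a simple tone on channel 0 (values from the classic AdLib example). */
opl_write(0x20, 0x01);  /* modulator: frequency multiplier 1     */
opl_write(0x40, 0x10);  /* modulator: output level               */
opl_write(0x60, 0xF0);  /* modulator: attack/decay               */
opl_write(0x80, 0x77);  /* modulator: sustain/release            */
opl_write(0x23, 0x01);  /* carrier: frequency multiplier 1       */
opl_write(0x43, 0x00);  /* carrier: full volume                  */
opl_write(0x63, 0xF0);  /* carrier: attack/decay                 */
opl_write(0x83, 0x77);  /* carrier: sustain/release              */
opl_write(0xA0, 0x98);  /* F-number, low byte                    */
opl_write(0xB0, 0x31);  /* key on, block 4, F-number high bits   */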

To “natively” use the OPL2LPT, I have modified the following applications:

  • ScummVM, an emulator for classic point-and-click adventure games, including many LucasArts games—patch
  • QEMU, a quick generic emulator—patch with a minimal emulation for timers and hard-coded sleep delays 🙄
  • DOSBox, an x86 emulator bundled with DOS—patch with a complete emulation for timers and a dedicated working thread2

You can compare the results in the following video, with the introduction of Indiana Jones and the Last Crusade, released in 1989:3

  • 0:00, DOSBox with an emulated PC speaker
  • 0:58, DOSBox with an emulated AdLib
  • 1:51, VirtualBox with the OPL2LPT (on an emulated parallel port)
  • 2:42, patched ScummVM with the OPL2LPT (native)
  • 3:33, patched QEMU with the OPL2LPT (native)
  • 4:24, patched DOSBox with the OPL2LPT (native)
  • 5:17, patched DOSBox with an improved OPL3 emulator (Nuked OPL3)
  • 6:10, ScummVM with the CD track (FM Towns version)

I let you judge how good each option is! There are two ways to buy an OPL2LPT: in Europe, from Serdashop or in North America, from the 8-Bit Guy.

Addendum🔗

Indiana Jones and the Fate of Atlantis🔗

Here is another video featuring Indiana Jones and the Fate of Atlantis, released in 1992, running in DOSBox with the OPL2LPT. It’s the second game using the iMUSE sound system: music is synchronized with actions and transitions are done seamlessly. Quite a feat at the time!

Monkey Island 2🔗

The first game featuring iMuse is Monkey Island 2, released in 1991. The video below displays the first minutes of the game, running in DOSBox with the OPL2LPT.

Notably, at 5:33, when Guybrush is in Woodtick, a small town on Scabb Island, the music plays around a variation of a basic theme with a different instrument for each building without any interruption.

How the videos were recorded🔗

With a VGA adapter, many games use Mode 13h, a 256-color mode with a 320×200 resolution. On a 4:3 display, this mode doesn’t feature square pixels: they are stretched vertically by a factor of 1.2.

The above videos were recorded with FFmpeg (and edited with Blender). It packs a lot of useful filters making it easy to automate video capture. Here is an example:

FONT="font=Monkey Island 1991 refined:
      fontcolor=OrangeRed:
      fontsize=16:
      x=w-text_w-10"
ffmpeg -y \
 -thread_queue_size 64 \
 -f x11grab -draw_mouse 0 -r 30 -s 640x400 -i :0+844,102 \
 -thread_queue_size 64 \
 -f pulse -ac 1 -i default \
 -filter_complex "[0:v]pad=854:400:0:0,
      drawtext=${FONT}:y= 10:text=Indiana Jones 3,
      drawtext=${FONT}:y= 34:text=Intro,
      drawtext=${FONT}:y=100:text=DOSBox,
      drawtext=${FONT}:y=124:text=VGA,
      drawtext=${FONT}:y=148:text=PC speaker,
      scale=854:480[game];
      [1:a]showwaves=s=214x100:colors=OrangeRed:scale=lin[waves];
      [1:a]showcqt=s=214x100[spectrum];
      [waves][spectrum]vstack[vis];
      [game][vis]overlay=x=640:y=280" \
 -pix_fmt yuv420p -c:v libx264 -qp 0 -preset ultrafast \
 indy3-dosbox-pcspkr.mkv

The interesting part is the filter_complex argument. The input video is padded from 640×400 to 854×400 as a first step to a 16:9 aspect ratio.4 Using The Secret Font of Monkey Island, some text is added to the right of the video. The result is then scaled to 854×480 to get the final aspect ratio while stretching pixels to the expected 1.2 factor. The video up to this point is sent to a stream named game. As a second step, from the input audio, we build two visualisations: a waveform and a spectrum. They are stacked vertically and the result is a stream named vis. The last step is to overlay the visualisation stream over the gameplay stream.


  1. There is no dialog to configure a parallel port. This needs to be done from the command-line after the instance creation:

    $ VBoxManage modifyvm "FreeDOS (games)" --lptmode1 /dev/parport0
    $ VBoxManage modifyvm "FreeDOS (games)" --lpt1 0x378 7
    

    ↩︎

  2. With QEMU or DOSBox, it should be the responsibility of the executing game to respect the required delays for the OPL2 to process the received bytes. However, QEMU doesn’t seem to try to emulate I/O delays, while DOSBox seems to not be precise enough. For the latter, to overcome this shortcoming, the OPL2LPT is managed from a dedicated thread receiving the writes and ensuring the required delays are met. ↩︎

  3. Indiana Jones and the Last Crusade was the first game I tried after plugging in the brand new AdLib sound card I compelled my parents to buy on a trip to Canada in 1992. At the time, no brick and mortar retailer sold this card in my French city and online purchases (through the Minitel) were limited to consumer goods (like a VHS player). Hard times. 😏 ↩︎

  4. A common method to extend a 4:3 video to a 16:9 aspect ratio without adding black bars is to add a blurred background using the same video as a source. I didn’t do this here but it is also possible with FFmpeg↩︎

24 February, 2018 07:16PM by Vincent Bernat

Dima Kogan

Vnlog integration with feedgnuplot

This is mostly a continuation of the last post, but it's so nice!

As feedgnuplot reads data, it interprets it into separate datasets with IDs that can be used to refer to these datasets. For instance you can pass feedgnuplot --autolegend to create a legend for each dataset, labelling each with its ID. Or you can set specific directives for one dataset but not another: feedgnuplot --style position 'with lines' --y2 temperature would plot the position data with lines, and the temperature data on the second y axis.

Let's say we were plotting a data stream

1 1
2 4
3 9
4 16
5 25

Without --domain this data would be interpreted like this:

  • without --dataid. This stream would be interpreted as two data sets: IDs 0 and 1. There're 5 points in each one
  • with --dataid. This stream would be interpreted as 5 different datasets with IDs 1, 2, 3, 4 and 5. Each of these datasets would contain one point.

This is a silly example for --dataid, obviously. You'd instead have a dataset like

temperature 34 position 4
temperature 35 position 5
temperature 36 position 6
temperature 37 position 7

and this would mean two datasets: temperature and position. This is nicely flexible because it can be as sparse as you want: each row doesn't need to have one temperature and one position, although in many datasets you would have exactly this. Real datasets are often more complex:

1 temperature1 34 temperature2 35 position 4
2 temperature1 35 temperature2 36
3 temperature1 36 temperature2 33
4 temperature1 37 temperature2 32 position 7

Here the first column could be a domain of some sort, time for instance. And we have two different temperature sensors. And we don't always get a position report for whatever reason. This works fine, but is verbose, and usually the data is never stored in this way; I'd use awk to convert the data from its original form into this form for plotting. Now that vnlog is a thing, feedgnuplot has direct support for it, and this works like a 3rd way to get dataset IDs: vnlog headers. I'd represent the above like this:

# time temperature1 temperature2 position
1 34 35 4
2 35 36 -
3 36 33 -
4 37 32 7

This would be the working representation; I'd log directly to this format, and work with this data even before plotting it. But I can now plot this directly:

$ < data.vnl \
  feedgnuplot --domain --vnlog --autolegend --with lines \
              --style position 'with points pt 7' --y2 position

I think the command above makes it clear what was intended. It looks like this:

vnl1.svg

The input data is now much more concise, I don't need a different format for plotting, and the amount of typing has been greatly reduced. And I can do the normal vnlog things. What if I want to plot only temperatures:

$ < data.vnl \
  vnl-filter -p time,temp |
  feedgnuplot --domain --vnlog --autolegend --with lines

Nice!

24 February, 2018 05:24PM by Dima Kogan

hackergotchi for Martín Ferrari

Martín Ferrari

Report from SnowCamp #1

As Nicolas already reported, a bunch of Debian folk gathered in the North of Italy for a long weekend of work and socialisation.

Valhalla had the idea of taking the SunCamp concept and doing it in another location, and along with people from LIFO they made it happen. Thanks to all who worked on this!


I arrived late on Wednesday, after a very relaxed car journey from Lyon. Sadly, on Thursday I had to attend to some unexpected personal issues, and it was not very productive for Debian work. Luckily, between Friday and today, I managed to get back on track.

I uploaded new versions of Prometheus-related packages to stretch-backports, so they are in line with the current versions in testing:

  • prometheus-alertmanager, which also provides a fix for #891202: False owner/group for /var/lib/prometheus.
  • python-prometheus-client, carrying some useful updates for users.

I fixed two RC bugs in important Go packages, both caused by the recent upload of Golang 1.10:

  • #890927: golang-golang-x-tools: FTBFS and Debci failure with golang-1.10-go
  • #890938: golang-google-cloud FTBFS: FAIL: TestAgainstJSONEncodingNoTags

I also had useful chats about continuous testing of Go packages, and improvements to git-buildpackage to better support our workflow. I plan to try and write some code for it.

Finally, I had some long discussions about joining an important team in Debian, but I can't still report on that :-)


24 February, 2018 03:59PM

hackergotchi for Nicolas Dandrimont

Nicolas Dandrimont

Report from Debian SnowCamp: day 2

[Previously: day 1]

Thanks to Valhalla and other members of LIFO, a bunch of fine Debian folks have convened in Laveno, on the shores of Lake Maggiore, for a nice weekend of relaxing and sprinting on various topics, a SnowCamp.

Today’s pièce de résistance was the long overdue upgrade of the machine hosting mentors.debian.net to (jessie then) stretch. We’ve spent most of the afternoon doing the upgrades with Mattia.

The first upgrade to jessie was a bit tricky because we had to clean up a lot of cruft that accumulated over the years. I even managed to force an unexpected database restore test 😇. After a few code fixes, and getting annoyed at apache2.4 for ignoring VirtualHost configs that don’t end with .conf (and losing an hour of debugging time in the process…), we managed to restore the functionality of the website.

We then did the stretch upgrade, which was somewhat smooth sailing in comparison… We had to remove some functionality which depended on packages that didn’t make it to stretch: fedmsg, and the SOAP interface. We also noticed that the gpg2 transition completely broke the… “interesting” GPG handling of mentors… An install of gnupg1 later everything should be working as it was before.

We’ve also tried to tackle our current need for a patched FTP daemon. To do so, we’re switching the default upload queue directory from / to /pub/UploadQueue/. Mattia has submitted bugs for dput and dupload, and will upload an updated dput-ng to switch the default. Hopefully we can do the full transition by the next time we need to upgrade the machine.

Known bugs: the uscan plugin now fails to parse the uscan output… But at least it “supports” version=4 now 🙃

Of course, we’re still sorely lacking volunteers who would really care about mentors.debian.net; the codebase is a pile of hacks upon hacks upon hacks, all relying on an old version of a deprecated Python web framework. A few attempts have been made at a smooth transition to a more recent framework, without really panning out, mostly for lack of time on the part of the people running the service. I’m still convinced things should restart from scratch, but I don’t currently have the energy or time to drive it… Ugh.

More stuff will happen tomorrow, but probably not on mentors.debian.net. See you then!

My attendance to SnowCamp is in part made possible by donations to the Debian project. If you want to keep the project going, please consider donating, joining the Debian partners program, or sponsoring the upcoming Debian Conference.

24 February, 2018 12:15AM by olasd

February 23, 2018

hackergotchi for Benjamin Mako Hill

Benjamin Mako Hill

“Stop Mang Fun of Me”

Somebody recently asked me if I am the star of bash.org quote #75514 (a snippet of online chat from a large collaboratively built collection):

<mako> my letter "eye" stopped worng
<luca> k, too?
<mako> yeah
<luca> sounds like a mountain dew spill
<mako> and comma
<mako> those three
<mako> ths s horrble
<luca> tme for a new eyboard
<luca> 've successfully taen my eyboard apart
       and fxed t by cleanng t wth alcohol
<mako> stop mang fun of me
<mako> ths s a laptop!!

It was me. A circuit on my laptop had just blown out my I, K, ,, and 8 keys. At the time I didn’t think it was very funny.

I had no idea anyone had saved a log, and I had forgotten about the experience until I saw the bash.org quote. I appreciate it now, so I’m glad somebody did!

This was unrelated to the time that I poured water into two computers in front of 1,500 people and the time that I carefully placed my laptop into a full bucket of water.

23 February, 2018 07:45PM by Benjamin Mako Hill

hackergotchi for Gunnar Wolf

Gunnar Wolf

Material for my UNL course, «Security in application development», available on GitLab

I have left this blog to linger without much activity... My life has got quite a bit busy. So, I'll try to put some life back here ☺

During the last trimester last year, I was invited as a distance professor to teach «Security in application development» in the «TUSL (Technical University degree on Free Software)» short career taught by the online studies branch of Universidad Nacional del Litoral, based in Santa Fé, Argentina. The career is a three-year-long program that provides a facilitating, professional, terminal degree according to current Argentinian regulations (that demand people providing professional services on informatics to be "matriculated"). It is not a full Bachelor's degree, as it does not allow graduated students to continue on to postgraduate studies; I have sometimes seen such programs offered as Associate degrees in some USA circles.

Anyway - I am most proud to say I had already a bit of experience giving traditional university courses, but this is my first time actually designing a course that's completely taken in writing; I have distance-taught once, but it was completely video-based, with forums used mostly for student participation.

So, I wrote quite a bit of material for my course. And, not to brag, but I think I did it nicely. The material is completely in Spanish, but some of you might be interested in it. And the most natural venue to share it with is, of course, the TUSL group in GitLab.

The TUSL group is quite interesting; when I made my yearly pilgrimage to Argentina in December, we met and chatted, even had a small conference for students and interested people in the region. I hope to continue to be involved in their efforts.

Anyway, as for my material — Strange as it might seem, I wrote mostly using the Moodle editor. I have been translating my writings to a more flexible Markdown, but you will find parts of it are still just HTML dumps taken with wget (taken as I don't want the course to be cleaned and forgotten!) The repository is split between the reading materials I gave the students (links to external material and to material written by myself) and the activities, where I basically just mirrored/statified the interactions through the forums.

I hope this material is interesting to some of you. And, of course, feel free to fix my errors and send merge requests ☺

23 February, 2018 06:26PM by gwolf

Andrew Shadura

How to stop gnome-settings-daemon messing with keyboard layouts

In case you, just like me, want to have a heavily customised keyboard layout configuration, possibly with different layouts on different input devices (I recommend inputplug to make that work), you probably don’t want your desktop environment to mess with your settings or, worse, re-set them to some default from time to time. Unfortunately, that’s exactly what gnome-settings-daemon does by default in GNOME and Unity. While I could modify inputplug to detect that and undo the changes immediately, it turned out this behaviour can be disabled with an underdocumented option:

gsettings set org.gnome.settings-daemon.plugins.keyboard active false
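To check the current value first, or to put things back later, the matching calls are:

gsettings get org.gnome.settings-daemon.plugins.keyboard active
gsettings reset org.gnome.settings-daemon.plugins.keyboard active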

Thanks to Sebastien Bacher for helping me with this two years ago.

23 February, 2018 03:23PM

hackergotchi for Jo Shields

Jo Shields

Update on MonoDevelop Linux releases

Once upon a time, mono-project.com had two package repositories – one for RPM files, one for Deb files. This, as it turned out, was untenable – just building on an old distribution was insufficient to offer “works on everything” packages, due to dependent library APIs not being necessarily forward-compatible. For example, openSUSE users could not install MonoDevelop, because the versions of libgcrypt, libssl, and libcurl on their systems were simply incompatible with those on CentOS 7. MonoDevelop packages were essentially abandoned as unmaintainable.

Then, nearly 2 years ago, a reprieve – a trend towards development of cross-distribution packaging systems made it viable to offer MonoDevelop in a form which did not care about openSUSE or CentOS or Ubuntu or Debian having incompatible libraries. A release was made using Flatpak (born xdg-app). And whilst this solved a host of distribution problems, it introduced new usability problems. Flatpak means sandboxing, and without explicit support for sandbox escape at the appropriate moment, users would be faced with a different experience than the one they expected (e.g. not being able to P/Invoke libraries in /usr/lib, as the sandbox’s /usr/lib is different).

In 2 years of on-off development (mostly off – I have a lot of responsibilities and this was low priority), I wasn’t able to add enough sandbox awareness to the core of MonoDevelop to make the experience inside the sandbox feel as natural as the experience outside it. The only community contribution to make the process easier was this pull request against DBus#, which helped me make a series of improvements, but not at a sufficient rate to make a “fully Sandbox-capable” version any time soon.

In the interim between giving up on MonoDevelop packages and now, I built infrastructure within our CI system for building and publishing packages targeting multiple distributions (not the multi-distribution packages of yesteryear). And so to today, when recent MonoDevelop .debs and .rpms are or will imminently be available in our Preview repositories. Yes it’s fully installed in /usr, no sandboxing. You can run it as root if that’s your deal.

MonoDevelop 7.4.0.1026 on CentOS 6

Where’s the ARM builds?

https://github.com/mono/monodevelop/pull/3923

Where’s the ARM64 builds?

https://github.com/ericsink/SQLitePCL.raw/issues/199

Why aren’t you offering builds for $DISTRIBUTION?

It’s already an inordinate amount of work to support the 10(!) distributions I already do. Especially when, due to an SSL state engine bug in all versions of Mono prior to 5.12, nuget restore in the MonoDevelop project fails about 40% of the time. With 12 (currently) builds running concurrently, the likelihood of a successful publication of a known-good release is about 0.2%. I’m on build attempt 34 since my last packaging fix, at time of writing.

Can this go into my distribution now?

Oh God no. make dist should generate tarballs which at least work now, but they’re very much not distribution-quality. See here.

What about Xamarin Studio/Visual Studio for Mac for Linux?

Probably dead, for now. Not that it ever existed, of course. *cough*. But if it did exist, a major point of concern for making something capital-S-Supportable (VS Enterprise is about six thousand dollars) is being able to offer a trustworthy, integration-tested product. There are hundreds of lines of patches applied to “the stack” in Mac releases of Visual Studio for Mac, Xamarin.Whatever, and Mono. Hundreds to Gtk+2 alone. How can we charge people money for a product which might glitch randomly because the version of Gtk+2 in the user’s distribution behaves weirdly in some circumstances? If we can’t control the stack, we can’t integration test, and if we can’t integration test, we can’t make a capital-P Product. The frustrating part of it all is that the usability issues of MonoDevelop in a sandbox don’t apply to the project types used by Xamarin Studio/VSfM developers. Android development end-to-end works fine. Better than Mac/Windows in some cases, in fact (e.g. virtualization on AMD processors). But because making Gtk#2 apps sucks in MonoDevelop, users aren’t interested. And without community buy-in on MonoDevelop, there’s just no scope for making MonoDevelop-plus-proprietary-bits.

Why does the web stuff not work?

WebkitGtk dropped support for Gtk+2 years ago. It worked in Flatpak MonoDevelop because we built an old WebkitGtk, for use by widgets.

Aren’t distributions talking about getting rid of Gtk+2?

Yes 😬

23 February, 2018 02:19PM by directhex

February 22, 2018

hackergotchi for Nicolas Dandrimont

Nicolas Dandrimont

Report from Debian SnowCamp: day 1

Thanks to Valhalla and other members of LIFO, a bunch of fine Debian folks have convened in Laveno, on the shores of Lake Maggiore, for a nice weekend of relaxing and sprinting on various topics, a SnowCamp.

This morning, I arrived in Milan at “omfg way too early” (5:30AM, thanks to a 30 minute early (!) night train), and used the opportunity to walk the empty streets around the Duomo while the Milanese .oO(mapreri) were waking up. This gave me the opportunity to take very nice pictures of monuments without people, which is always appreciated!

 

After a short train ride to Laveno, we arrived at the Hostel at around 10:30. Some people had already arrived the day before, so there already was a hacking kind of mood in the air.  I’d post a panorama but apparently my phone generated a corrupt JPEG 🙄

After rearranging the tables in the common spaces to handle power distribution correctly (♥ Gaffer Tape), we could start hacking!

Today’s efforts were focused on the DebConf website: there were a bunch of pull requests made by Stefano that I reviewed and merged:

I’ve also written a modicum of code.

Finally, I have created the Debian 3D printing team on salsa in preparation for migrating our packages to git. But now is time to do the sleep thing. See you tomorrow?

My attendance to SnowCamp is in part made possible by donations to the Debian project. If you want to keep the project going, please consider donating, joining the Debian partners program, or sponsoring the upcoming Debian Conference.

22 February, 2018 10:40PM by olasd

hackergotchi for Jonathan Dowland

Jonathan Dowland

A Nice looking Blog

I stumbled across this rather nicely-formatted blog by Alex Beal and thought I'd share it. It's a particular kind of minimalist style that I like, because it puts the content first. It reminds me of Mark Pilgrim's old blog.

I can't remember which post in particular I came across first, but the one that I thought I would share was this remarkably detailed personal research project on tracking mood.

That would have been the end of it, but I then stumbled across this great review of "Type Driven Development with Idris", a book by Edwin Brady. I bought this book during the Christmas break but I haven't had much of a chance to deep dive into it yet.

22 February, 2018 08:00PM

Russell Coker

Dell PowerEdge T30

I just did a Debian install on a Dell PowerEdge T30 for a client. The Dell web site is a bit broken at the moment; it didn’t list the price of that server or give useful specs when I was ordering it. I was under the impression that the server was limited to 8G of RAM; that’s unusually small, but it wouldn’t be the first time a vendor crippled a low end model to drive sales of more expensive systems. It turned out that the T30 model I got has 4*DDR4 sockets with only one used for an 8G DIMM. It apparently can handle up to 64G of RAM.

It has space for 4*3.5″ SATA disks but only has 4*SATA connectors on the motherboard. As I never use the DVD in a server this isn’t a problem for me, but if you want 4 disks and a DVD then you need to buy a PCI or PCIe SATA card.

Compared to the PowerEdge T130 I’m using at home the new T30 is slightly shorter and thinner while seeming to have more space inside. This is partly due to better design and partly due to having 2 hard drives in the top near the DVD drive which are a little inconvenient to get to. The T130 I have (which isn’t the latest model) has 4*3.5″ SATA drive bays at the bottom which are very convenient for swapping disks.

It has two PCIe*16 slots (one of which is apparently quad speed), one shorter PCIe slot, and a PCI slot. For a cheap server a PCI slot is a nice feature, it means I can use an old PCI Ethernet card instead of buying a PCIe Ethernet card. The T30 cost $1002 so using an old Ethernet card saved 1% of the overall cost.

The T30 seems designed to be more of a workstation or personal server than a straight server. The previous iterations of the low end tower servers from Dell didn’t have built in sound and had PCIe slots that were adequate for a RAID controller but vastly inadequate for video. This one has built in line in and out for audio and has two DisplayPort connectors on the motherboard (presumably for dual-head support). Apart from the CPU (an E3-1225 which is slower than some systems people are throwing out nowadays) the system would be a decent gaming system.

It has lots of USB ports which is handy for a file server, I can attach lots of backup devices. Also most of the ports support “super speed”, I haven’t yet tested out USB devices that support such speeds but I’m looking forward to it. It’s a pity that there are no USB-C ports.

One deficiency of the T30 is the lack of a VGA port. It has one HDMI and two DisplayPort sockets on the motherboard, this is really great for a system on or under your desk, any monitor you would want on your desk will support at least one of those interfaces. But in a server room you tend to have an old VGA monitor that’s there because no-one wants it on their desk. Not supporting VGA may force people to buy a $200 monitor for their server room. That increases the effective cost of the system by 20%. It has a PC serial port on the motherboard which is a nice server feature, but that doesn’t make up for the lack of VGA.

The BIOS configuration has an option displayed for enabling charging devices from USB sockets when a laptop is in sleep mode. It’s disappointing that they didn’t either make a BIOS build for a non-laptop or have the BIOS detect at run-time that it’s not on laptop hardware and hide that.

Conclusion

The PowerEdge T30 is a nice low-end workstation. If you want a system with ECC RAM because you need it to be reliable and you don’t need the greatest performance then it will do very well. It has Intel video on the motherboard with HDMI and DisplayPort connectors, this won’t be the fastest video but should do for most workstation tasks. It has a PCIe*16 quad speed slot in case you want to install a really fast video card. The CPU is slow by today’s standards, but Dell sells plenty of tower systems that support faster CPUs.

It’s nice that it has a serial port on the motherboard. That could be used for a serial console or could be used to talk to a UPS or other server-room equipment. But that doesn’t make up for the lack of VGA support IMHO.

One could say that a tower system is designed to be a desktop or desk-side system not run in any sort of server room. However it is cheaper than any rack mounted systems from Dell so it will be deployed in lots of small businesses that have one server for everything – I will probably install them in several other small businesses this year. Also tower servers do end up being deployed in server rooms, all it takes is a small business moving to a serviced office that has a proper server room and the old tower servers end up in a rack.

Rack vs Tower

One reason for small businesses to use tower servers when rack servers are more appropriate is the issue of noise. If your “server room” is the room that has your printer and fax then it typically won’t have a door and you just can’t have the noise of a rack mounted server in there. 1RU systems are inherently noisy because the small diameter of the fans means that they have to spin fast. 2RU systems can be made relatively quiet if you don’t have high-end CPUs but no-one seems to be trying to do that.

I think it would be nice if a company like Dell sold low-end servers in a rack mount form-factor (19 inches wide and 2RU high) that were designed to be relatively quiet. Then instead of starting with a tower server and ending up with tower systems in racks a small business could start with a 19 inch wide system on a shelf that gets bolted into a rack if they move into a better office. Any laptop CPU from the last 10 years is capable of running a file server with 8 disks in a ZFS array. Any modern laptop CPU is capable of running a file server with 8 SSDs in a ZFS array. This wouldn’t be difficult to design.

22 February, 2018 02:06PM by etbe

February 21, 2018

Renata D'Avila

How to use the EventCalendar ical

Hello!

If you follow this blog, you should probably know by now that I have been working with my mentors to contribute to the MoinMoin EventCalendar macro, adding the possibility to export the events' data to an icalendar file.

A screenshot of the code, with the function definition for creating the ical file from events from the macro

The code (which can be found on this Github repository) isn't quite ready yet, because I'm still working to convert the recurrence rule to the icalendar format, but other than that, it should be working. Hopefully.

The icalendar file is now generated as an attachment the moment the macro is loaded. I created an "ical" link at the bottom of the calendar. When activated, this link prompts the download of the ical attachment of the page. Being an attachment, there is still the possibility to just view the ical file using the "attachment" menu if the user wishes to do so.
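For readers who have not touched the format before, this is roughly what assembling such a file looks like with the python icalendar package; the event and the weekly rule below are invented for illustration, they are not the macro's code:

from datetime import datetime
from icalendar import Calendar, Event

cal = Calendar()
cal.add('prodid', '-//MoinMoin EventCalendar//EN')
cal.add('version', '2.0')

event = Event()
event.add('summary', 'Team meeting')                 # sample event data
event.add('dtstart', datetime(2018, 3, 1, 14, 0))
event.add('dtend', datetime(2018, 3, 1, 15, 0))
event.add('rrule', {'FREQ': 'WEEKLY', 'COUNT': 4})   # the recurrence part still being worked on
cal.add_component(event)

ics_data = cal.to_ical()                             # the bytes that end up in the attachment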

Wiki page showing the calendar, with the 'ical' link at the bottom

There are two ways of importing this calendar on Thunderbird. The first one is to download the file by clicking on the link and then proceeding to import it manually to Thunderbird.

Thunderbird screenshot, with the menus "Events and Tasks" and "Import" selected

The second option is to "Create a new calendar / On the network" and to use the URL address from the ical link as the "location", as it is shown below:

Thunderbird screenshot, showing the new calendar dialog and the ical URL pasted into the "location" textboxd

As usual, it's possible to customize the name for the calendar, the color for the events and such...

Thunderbird screenshot, showing the new calendar with its events

I noticed a few Wikis that use the EventCalendar, such as Debian wiki itself and the FSFE wiki. Python wiki also seems to be using MoinMoin and EventCalendar, but it seems that they use a Google service to export the event data to iCal.

If you read this and are willing to try the code in your wiki and give me feedback, I would really appreciate. You can find the ways to contact me in my Debian Wiki profile.

21 February, 2018 10:49PM by Renata

hackergotchi for Jonathan McDowell

Jonathan McDowell

Getting Debian booting on a Lenovo Yoga 720

I recently got a new work laptop, a 13” Yoga 720. It proved difficult to install Debian on; pressing F12 would get a boot menu allowing me to select a USB stick I have EFI GRUB on, but after GRUB loaded the kernel and the initrd it would just sit there never outputting anything else that indicated the kernel was even starting. I found instructions about Ubuntu 17.10 which helped but weren’t the complete picture. What seems to be the situation is that the kernel won’t happily boot if “Legacy Support” is not enabled - enabling this (and still booting as EFI) results in a happier experience. However in order to be able to enable legacy boot you have to switch the SATA controller from RAID to AHCI, which can cause Windows to get unhappy about its boot device going away unless you warn it first.

  • Fire up an admin shell in Windows (right click on the start menu)
  • bcdedit /set safeboot minimal
  • Reboot into the BIOS
  • Change the SATA Controller mode from RAID to AHCI (dire warnings about “All data will be erased”. It’s not true, but you’ve backed up first, right?) Set “Boot Mode” to “Legacy Support”.
  • Save changes and let Windows boot to Safe Mode
  • Fire up an admin shell in Windows (right click on the start menu again)
  • bcdedit /deletevalue safeboot
  • Reboot again and Windows will load in normal mode with the AHCI drivers

Additionally I had problems getting the GRUB entry added to the BIOS; efibootmgr shows it fine but it never appears in the BIOS boot list. I ended up using Windows to add it as the primary boot option using the following (<guid> gets replaced with whatever the new “Debian” section guid is):

bcdedit /enum firmware
bcdedit /copy "{bootmgr}" /d "Debian"
bcdedit /set "{<guid>}" path \EFI\Debian\grubx64.efi
bcdedit /set "{fwbootmgr}" displayorder "{<guid>}" /addfirst

Even with that at one point the BIOS managed to “forget” about the GRUB entry and require me to re-do the final “displayorder” command.
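For completeness, the equivalent entry can normally also be (re)created from the Linux side; a sketch assuming the EFI system partition is the first partition on the NVMe disk and the loader path matches the one used above:

efibootmgr --create --disk /dev/nvme0n1 --part 1 \
           --label "Debian" --loader '\EFI\Debian\grubx64.efi'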

Once you actually have the thing installed and booting it seems fine - I’m running Buster due to the fact it’s a Skylake machine with lots of bits that seem to want a newer kernel, but claimed battery life is impressive, the screen is very shiny (though sometimes a little too shiny and reflective) and the NVMe SSD seems pretty nippy as you’d expect.

21 February, 2018 09:46PM

hackergotchi for MJ Ray

MJ Ray

How hard can typing æ, ø and å be?

Petter Reinholdtsen: How hard can æ, ø and å be? comments on the rubbish state of till printers and their mishandling of foreign characters.

Last week, I was trying to type an email, on a tablet, in Dutch. The tablet was running something close to Android and I was using a Bluetooth keyboard, which seemed to be configured correctly for my location in England.

Dutch doesn’t even have many accents. I wanted an e acute (é). If you use the on screen keyboard, this is actually pretty easy, just press and hold e and slide to choose the accented one… but holding e on a Bluetooth keyboard? eeeeeeeeeee!

Some guides suggest Alt and e, then e. Apparently that works, but not on keyboards set to Great British… because, I guess, we don’t want any of that foreign muck since the Brexit vote, or something(!)

Even once you figure out that madness and switch the keyboard back to international, which also enables Alt plus i, u, n and so on for other accents, I still can’t find grave, check, breve or several other accents. I managed to send the emails in Dutch but I’d struggle with various other languages.

Have I missed a trick or what are the Android developers thinking? Why isn’t there a Compose key by default? Is there any way to get one?

21 February, 2018 04:14PM by mjr

Sam Hartman

Tools of Love

From my spiritual blog

I have been quiet lately. My life has been filled with gentle happiness, work, and less gentle wedding planning. How do you write about quiet happiness without sounding like the least contemplative aspects of Facebook? How do I share this part of the journey in a way that others can learn from? I was offering thanks the other day and was reminded of one of my early experiences at Fires of Venus. Someone was talking about how they were there working to do the spiritual work they needed in order to achieve their dream of opening a restaurant. I'll admit that when I thought of going to a multi-day retreat focused on spiritual connection to love, opening a restaurant had not been at the forefront of my mind. And yet, this was their dream, and surely dreams are the stuff of love. As they continued, they talked about finding self love deep enough to have the confidence to believe in dreams.



As I recalled this experience, I offered thanks for all the tools I've found to use as a lover. Every time I approach something with joy and awe, I gain new insight into the beauty of the world around us. In my work within the IETF I saw the beauty of the digital world we're working to create. Standing on sacred land, I can find the joy and love of nature and the moment.



I can share the joy I find and offer it to others. I've been mentoring someone at work. They're at a point where they're appreciating some of the great mysteries of computing like “Reflections on Trusting Trust” or two's complement arithmetic. I’ve had the pleasure of watching their moments of discovery and also helping them understand the complex history of how we’ve built the digital world we have. Each moment of delight reinforces the idea that we live in a world where we expect to find this beauty and connect with it. Each experience reinforces the idea that we live in a world filled with things to love.



And so, I’ve turned even my experiences as a programmer into tools for teaching love and joy. I’ve been learning another new tool lately. I’ve been putting together the dance mix for my wedding. Between that and a project last year, I’ve learned a lot about music. I will never be a professional DJ or song producer. However, I have always found joy in music and dance, and I absolutely can be good enough to share that with my friends. I can be good enough to let music and rhythm be tools I use to tell stories and share joy. In learning skills and improving my ability with music, I better appreciate the music I hear.



The same is true with writing: both my work here and my fiction. I’m busy enough with other things that I am unlikely to even attempt writing as my livelihood. Even so, I have more tools for sharing the love I find and helping people find the love and joy in their world.



These are all just tools. Words and song won’t suddenly bring us all together any more than physical affection and our bodies. However, words, song, and the joy we find in each other and in the world we build can help us find connection and empathy. We can learn to see the love that is there between us. All these tools can help us be vulnerable and open together. And that—the changes we work within ourselves using these tools—can bring us to a path of love. And so how do I write about happiness? I give thanks for the things it allows me to explore. I find value in growing and trying new things. In my best moments, each seems a lens through which I can grow as a lover as I walk Venus’s path.


21 February, 2018 01:00AM

February 20, 2018

Reproducible builds folks

Reproducible Builds: Weekly report #147

Here's what happened in the Reproducible Builds effort between Sunday February 11 and Saturday February 17 2018:

Media coverage

Reproducible work in other projects

Packages reviewed and fixed, and bugs filed

Various previous patches were merged upstream:

Reviews of unreproducible packages

38 package reviews have been added, 27 have been updated and 13 have been removed this week, adding to our knowledge about identified issues.

4 issue types have been added:

One issue type has been updated:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (24)
  • Boyuan Yang (1)
  • Cédric Boutillier (1)
  • Jeremy Bicha (1)
  • Matthias Klose (1)

diffoscope development

  • Chris Lamb:
    • Add support for comparing Berkeley DB files (#890528). This is currently incomplete because the Berkeley DB libraries do not return the same uid/hash reliably (they return "random" memory contents), so we must strip those from the human-readable output.

Website development

Misc.

This week's edition was written by Chris Lamb and Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

20 February, 2018 08:56PM

hackergotchi for Benjamin Mako Hill

Benjamin Mako Hill

Lookalikes

Hippy/mako lookalikes

Did I forget a period of my life when I grew a horseshoe mustache and dreadlocks, walked around topless, and illustrated this 2009 article in the Economist on the economic boon that hippy festivals represent to rural American communities?


Previous lookalikes are here.

20 February, 2018 07:45PM by Benjamin Mako Hill

hackergotchi for Rapha&#235;l Hertzog

Raphaël Hertzog

Time to Join Extended Long Term Support for Debian 7 Wheezy

The Debian 7 Wheezy LTS period ends on May 31st and some companies asked Freexian if they could get security support past this date. Since about half of the current team of paid LTS contributors is willing to continue to provide security updates for Wheezy, I have started to work on making this possible.

I just initiated a discussion on debian-devel with multiple Debian teams to see whether it is possible to continue to use debian.org infrastructure to host the wheezy security updates that would be prepared in this extended LTS period.

From the sponsor side, this extended LTS will not work like the regular LTS. It is unrealistic to continue to support all packages and all architectures, so only the packages/architectures requested by sponsors will be supported. The amount invoiced to each sponsor will be directly related to the package list that they ask us to support. We made an estimation (based on history) of how much it costs to support each package, and we split that cost between all the sponsors that are requesting support for this package. That cost is re-evaluated quarterly and will likely increase over time as sponsors stop their support (when they have finished migrating all their machines, for example).
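
To illustrate the cost-splitting with made-up numbers: if supporting a given package were estimated at €600 per quarter and three sponsors asked for it, each of them would be invoiced €200 for that package for the quarter; if one of them later dropped out, the remaining two would each pay €300 after the next quarterly re-evaluation.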

This extended LTS will also have some restrictions in terms of packages that we can support. For instance, we will no longer support the Linux kernel from wheezy; you will have to switch to the kernel used in jessie (or maybe we will maintain a backport ourselves in wheezy). It is also not yet clear whether we can support OpenJDK, since upstream support of version 7 stops at the end of June, and switching to OpenJDK 8 is likely non-trivial. There are likely other unsupportable packages too.

Anyway, if your company needs wheezy security support past the end of May, now is the time to worry about it. Please send us a mail with the list of source packages that you would like to see supported. The more companies get involved, the less it will cost each of them. Our plan is to gather the required data from interested companies in the next few weeks and make a first estimation, by mid-March, of the price they will have to pay for the first quarter. Once they confirm that they are OK with the offer, we will issue invoices in April so that they can be paid before the end of May.

Note however that we decided that it would not be possible to sponsor extended wheezy support (and thus influence which packages are supported) if you are not among the regular LTS sponsors (at bronze level at least). Extended LTS would not be possible without the regular LTS so if you need the former, you have to support the latter too.


20 February, 2018 04:57PM by Raphaël Hertzog

hackergotchi for Michal &#268;iha&#345;

Michal Čihař

Weblate 2.19.1

Weblate 2.19.1 has been released today. This is a bugfix-only release, mostly to fix the problematic migration from 2.18 which some users have observed.

Full list of changes:

  • Fixed migration issue on upgrade from 2.18.
  • Improved file upload API validation.

If you are upgrading from older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org; the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. Weblate is also being used on https://hosted.weblate.org/ as the official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is just being prepared; you can influence it by expressing support for individual issues, either with comments or by providing a bounty for them.


20 February, 2018 02:00PM

hackergotchi for Nicolas Dandrimont

Nicolas Dandrimont

Listing and loading of Debian repositories: now live on Software Heritage

Software Heritage is the project I’ve been working on for the past two and a half years now. The grand vision of the project is to build the universal software archive, which will collect, preserve and share the Software Commons.

Today, we’ve announced that Software Heritage is archiving the contents of Debian daily. I’m reposting this article on my blog as it will probably be of interest to readers of Planet Debian.

TL;DR: Software Heritage now archives all source packages of Debian as well as its security archive daily. Everything is ready for archival of other Debian derivatives as well. Keep on reading to get details of the work that made this possible.

History

When we first announced Software Heritage, back in 2016, we had archived the historical contents of Debian as present on the snapshot.debian.org service, as a one-shot proof of concept import.

This code was then left in a drawer and never touched again, until last summer when Sushant came to do an internship with us. We’ve had the opportunity to rework the code that was originally written, and to make it more generic: instead of being tied to the specifics of snapshot.debian.org, the code can now work with any Debian repository. Which means that we can now archive any of the numerous Debian derivatives that are available out there.

This has been live for a few months, and you can find Debian package origins in the Software Heritage archive now.

Mapping a Debian repository to Software Heritage

The main challenge in listing and saving Debian source packages in Software Heritage is mapping the content of the repository to the generic source history data model we use for our archive.

Organization of a Debian repository

Before we start looking at a bunch of unpacked Debian source packages, we need to know how a Debian repository is actually organized.

At the top level of a Debian repository lies a set of suites, representing versions of the distribution, that is to say a set of packages that have been tested and are known to work together. For instance, Debian currently has 6 active suites, from wheezy (“old old stable” version), all the way up to experimental; Ubuntu has 8, from precise (12.04 LTS), up to bionic (the future 18.04 release), as well as a devel suite. Each of those suites also has a bunch of “overlay” suites, such as backports, which are made available in the archive alongside full suites.

Under the suites, there’s another level of subdivision, which Debian calls components, and Ubuntu calls areas. Debian uses its components to segregate packages along licensing terms (main, contrib and non-free), while Ubuntu uses its areas to denote the level of support of the packages (main, universe, multiverse, …).

Finally, components contain source packages, which merge upstream sources with distribution-specific patches, as well as machine-readable instructions on how to build the package.

Organization of the Software Heritage archive

The Software Heritage archive is project-centric rather than version-centric. What this means is that we are interested in keeping the history of what was available in software origins, which can be thought of as a URL of a repository containing software artifacts, tagged with a type representing the means of access to the repository.

For instance, the origin for the GitHub mirror of the Linux kernel repository has the following data:

For each visit of an origin, we take a snapshot of all the branches (and tagged versions) of the project that were visible during that visit, complete with their full history. See for instance one of the latest visits of the Linux kernel. For the specific case of GitHub, pull requests are also visible as virtual branches, so we fetch those as well (as branches named refs/pull/<pull request number>/head).

Bringing them together

As we’ve seen, Debian archives (just as well as archives for other “traditional” Linux distributions) are release-centric rather than package-centric. Mapping distributions to the Software Heritage archive therefore takes a little bit of gymnastics, to transpose the list of source packages available in each suite to a list of available versions per source package. We do this step by step:

  1. Download the Sources indices for all the suites and components known in the Debian repository
  2. Parse the Sources indices, listing all source packages inside
  3. For each source package, tell the Debian loader to load all the available versions (grouped by name), generating a complete snapshot of the state of the source package across the Debian repository
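
As a rough illustration of the first two steps (this is not the actual Software Heritage lister code; the mirror URL and the compressed index name are just an example, and some suites ship the index as .xz rather than .gz), the list of versions per source package for a single suite and component can be pulled straight out of its Sources index:

# Sketch only: list the versions of every source package in stretch/main.
curl -s http://deb.debian.org/debian/dists/stretch/main/source/Sources.gz \
  | zcat \
  | awk '/^Package:/ {p=$2} /^Version:/ {print p, $2}' \
  | sort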

The source packages are mapped to origins using the following format:

  • type: deb
  • url: deb://<repository name>/packages/<source package name> (e.g. deb://Debian/packages/linux)

We use a repository name rather than the actual URL to a repository so that links can persist even if a given mirror disappears.

Loading Debian source packages

To load Debian source packages into the Software Heritage archive, we have to convert them: Debian-based distributions distribute source packages as a set of files, a dsc (Debian Source Control) and a set of tarballs (usually, an upstream tarball and a Debian-specific overlay). On the other hand, Software Heritage only stores version-control information such as revisions, directories, files.

Unpacking the source packages

Our philosophy at Software Heritage is to store the source code of software in the precise form that allows a developer to start working on it. For Debian source packages, this is the unpacked source code tree, with all patches applied. After checking that the files we have downloaded match the checksums published in the index files, we simply use dpkg-source -x to extract the source package, with patches applied, ready to build. This also means that we currently fail to import packages that don’t extract with the version of dpkg-source available in Debian Stretch.
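
For instance, this is roughly what that looks like when done by hand; dget comes from the devscripts package, and the particular package and version in the URL are only an example:

# Illustration only: fetch one source package and unpack it with all patches
# applied, which is the form the archive stores.
dget -d -u http://deb.debian.org/debian/pool/main/h/hello/hello_2.10-1.dsc
dpkg-source -x hello_2.10-1.dsc    # yields an unpacked, patched hello-2.10/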

Generating a synthetic revision

After walking the extracted source package tree and computing identifiers for all its contents, we get the identifier of the top-level tree, which we will reference in the synthetic revision.

The synthetic revision contains the “reproducible” metadata that is completely intrinsic to the Debian source package. With the current implementation, this means:

  • the author of the package, and the date of modification, as referenced in the last entry of the source package changelog (referenced as author and committer)
  • the original artifact (i.e. the information about the original source package)
  • basic information about the history of the package (using the parsed changelog)

However, we never set parent revisions in the synthetic commits, for two reasons:

  • there is no guarantee that packages referenced in the changelog have been uploaded to the distribution, or imported by Software Heritage (our update frequency is lower than that of the Debian archive)
  • even if this guarantee existed, and all versions of all packages were available in Software Heritage, there would be no guarantee that the version referenced in the changelog is indeed the version we imported in the first place

This makes the information stored in the synthetic revision fully intrinsic to the source package, and reproducible. In turn, this allows us to keep a cache, mapping the original artifacts to synthetic revision ids, so that we avoid loading packages again once we have already loaded them.

Storing the snapshot

Finally, we can generate the top-level object in the Software Heritage archive, the snapshot. For instance, you can see the snapshot for the latest visit of the glibc package.

To do so, we generate a list of branches by concatenating the suite, the component, and the version number of each detected source package (e.g. stretch/main/2.24-10 for version 2.24-10 of the glibc package available in stretch/main). We then point each branch to the synthetic revision that was generated when loading the package version.

In case a version of a package fails to load (for instance, if the package version disappeared from the mirror between the moment we listed the distribution, and the moment we could load the package), we still register the branch name, but we make it a “null” pointer.

There are still some improvements to make to the Debian-specific lister: it currently hardcodes the list of components/areas in the distribution, as the repository format provides no programmatic way of eliciting them. Currently, only Debian and its security repository are listed.

Looking forward

We believe that the model we developed for the Debian use case is generic enough to capture not only Debian-based distributions, but also RPM-based ones such as Fedora, Mageia, etc. With some extra work, it should also be possible to adapt it for language-centric package repositories such as CPAN, PyPI or Crates.

Software Heritage is now well on the way to providing the foundations for a generic and unified source browser for the history of traditional package-based distributions.

We’ll be delighted to welcome contributors that want to lend a hand to get there.

20 February, 2018 01:52PM by olasd

hackergotchi for Daniel Pocock

Daniel Pocock

Hacking at EPFL Toastmasters, Lausanne, tonight

As mentioned in my earlier blog, I am giving a talk about Hacking at the Toastmasters club at EPFL tonight. Please feel free to join us, and remember to turn off your mobile device or leave it at home - you never know when it might ring or become part of a demonstration.

20 February, 2018 11:39AM by Daniel.Pocock

February 19, 2018

hackergotchi for Rapha&#235;l Hertzog

Raphaël Hertzog

Freexian’s report about Debian Long Term Support, January 2018

A Debian LTS logo

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In January, about 160 work hours have been dispatched among 11 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours increased slightly, to 187 hours per month. It would be nice if the slow growth could continue, as the amount of work seems to be slowly growing too.

The security tracker currently lists 23 packages with a known CVE and the dla-needed.txt file lists 23 as well. The number of open issues seems stable compared to last month, which is a good sign.

Thanks to our sponsors

New sponsors are in bold.


19 February, 2018 05:18PM by Raphaël Hertzog

hackergotchi for Steve Kemp

Steve Kemp

How we care for our child

This post is a departure from the regular content, which is supposed to be "Debian and Free Software", but has accidentally turned into a hardware blog recently!

Anyway, we have a child who is now about 14 months old. The way that my wife and I care for him seems logical to us, but often amuses local people. So in the spirit of sharing, this is what we do:

  • We divide the day into chunks of time.
  • At any given time one of us is solely responsible for him.
    • The other parent might be nearby, and might help a little.
    • But there is always a designated person who will be changing nappies, feeding, and playing at any given point in the day.
  • The end.

So our weekend routine, covering Saturday and Sunday, looks like this:

  • 07:00-08:00: Husband
  • 08:01-13:00: Wife
  • 13:01-17:00: Husband
  • 17:01-18:00: Wife
  • 18:01-19:30: Husband

Our child, Oiva, seems happy enough with this and he sometimes starts walking from one parent to the other at the appropriate time. But the real benefit is that each of us gets some time off - in my case I get "the morning" off, and my wife gets the afternoon off. We can hide in our bedroom, go shopping, eat cake, or do anything we like.

Week-days are similar, but with the caveat that we both have jobs. I take the morning and the evenings; in exchange, if he wakes up overnight my wife helps him sleep and settle between 8PM and 5AM, and if he wakes up later than 5AM I deal with him.

Most of the time our child sleeps through the night, but if he does wake up it tends to be in the 4:30AM/5AM timeframe. I'm "happy" to wake up at 5AM and stay up until I go to work because I'm a morning person and I tend to go to bed early these days.

Day-care is currently a complex process. There are three families with small children, and ourselves. Each day of the week one family hosts all the children, and the baby-sitter arrives there too (all the families live within a few blocks of each other).

All of the parents go to work, leaving one carer in charge of 4 babies for the day, from 08:15-16:15. On the days when we're hosting the children I greet the carer then go to work - on the days the children are at a different family's house I take him there in the morning, on my way to work, and then my wife collects him in the evening.

At the moment things are a bit terrible because most of the children have been a bit sick, and the carer too. When a single child is sick it's mostly OK, unless it is the child whose house is supposed to be the host venue. If that child is sick we have to panic and pick another house for that day.

Unfortunately if the child-carer is sick then everybody is screwed, and one parent has to stay home from each family. I guess this is the downside compared to sending the children to public-daycare.

This is private day-care, Finnish-style. The social services (Kela) will reimburse each family €700/month if you're in such a scheme, and carers are limited to a maximum of 4 children. The net result is that prices are stable, averaging €900-€1000 per child, per month.

(The €700 is refunded after a month or two, so in real terms people like us pay €200-€300/month for Monday-Friday day-care. Plus a bit of bureaucracy over deciding which family is hosting, and which parents are providing food. With the size being capped, and the fees being pretty standard, the carers earn €3600-€4000/month, which is a good amount. To be a school-teacher you need to be very qualified, but to do this caring is much simpler. It turns out that being an English-speaker can be a bonus too, for some families ;)

Currently our carer has a sick-note for three days, so I'm staying home today, and will likely stay tomorrow too. Then my wife will skip work on Wednesday. (We usually take it in turns but sometimes that can't happen easily.)

But all of this is due to change in the near future, because we've had too many sick days, and both of us have missed too much work.

More news on that in the future, unless I forget.

19 February, 2018 10:15AM

February 18, 2018

hackergotchi for Daniel Pocock

Daniel Pocock

SwissPost putting another nail in the coffin of Swiss sovereignty

A few people have recently asked me about the SwissID, as SwissPost has just been sending spam emails out to people telling them "Link your Swiss Post user account to SwissID".

This coercive new application of technology demands users' email addresses and mobile phone numbers "for security". A web site coercing people to use text messages "for security" has quickly become a red flag for most people, and many blogs have already covered why it is only an illusion of security, putting your phone account at risk so companies can profit from another vector for snooping on you.

SwissID is not the only digital identity solution in Switzerland but as it is run by SwissPost and has a name similar to another service it is becoming very well known.

In 2010 they began offering a solution which they call SuisseID (notice the difference? They are pronounced the same way.) based on digital certificates and compliant with Swiss legislation. Public discussion focussed on the obscene cost with little comment about the privacy consequences and what this means for Switzerland as a nation.

Digital certificates often embed an email address in the certificate.

With SwissID, however, they have a web site that looks like little more than vaporware, giving no details at all about whether certificates are used. It appears they are basically promoting an app that is designed to harvest the email addresses and phone numbers of any Swiss people who install it, lulling them into that folly by using a name that looks like their original SuisseID. If it looks like phishing, if it feels like phishing and if it smells like phishing to any expert who takes a brief sniff of their FAQ, then what else is it?

The thing is, the original SuisseID runs on a standalone smartcard, so it doesn't need your mobile phone number, doesn't need permissions to all the data in your phone, and isn't limited to working in areas with mobile phone signal.

The emails currently being sent by SwissPost tell people they must "Please use a private e-mail address for this purpose" but they don't give any information about the privacy consequences of creating such an account or what their app will do when it has access to read all the messages and contacts in your phone.

The actions you can take that they didn't tell you about

  • You can post a registered letter to SwissPost and tell them that for privacy reasons, you are immediately retracting the email addresses and mobile phone numbers they currently hold on file and that you are exercising your right not to give an email address or mobile phone number to them in future.
  • If you do decide you want a SwissID, create a unique email address for it and only use that email address with SwissPost so that it can't be cross-referenced with other companies. This email address is also like a canary in a coal mine: if you start receiving spam on that email address then you know SwissPost/SwissID may have been hacked or the data has been leaked or sold.
  • Don't install their app and if you did, remove it and you may want to change your mobile phone number.

Oddly enough, none of these privacy-protecting ideas were suggested in the email from SwissPost. Whose side are they on?

Why should people be concerned?

SwissPost, like every postal agency, has seen traditional revenues drop and so they seek to generate more revenue from direct marketing and they are constantly looking for ways to extract and profit from data about the public. They are also a huge company with many employees: when dealing with vast amounts of data in any computer system, it only takes one employee to compromise everything: just think of how Edward Snowden was able to act alone to extract many of the NSA's most valuable secrets.

SwissPost is going to great lengths to get accurate data on every citizen and resident in Switzerland, including deploying an app to get your mobile phone number and demanding an email address when you use their web site. That also allows them to cross-reference with your IP addresses.

  • Any person or organization who has your email address or mobile number may find it easier to get your home address.
  • Any person or organization who has your home address may be able to get your email address or mobile phone number.
  • When you call a company from your mobile phone and their system recognizes your phone number, it becomes easier for them to match it to your home address.
  • If SwissPost and the SBB successfully convince a lot of people to use a SwissID, some other large web sites may refuse to allow access without getting you to link them to your SwissID and all the data behind it too. Think of how many websites already try to coerce you to give them your mobile phone number and birthday to "secure" your account, but worse.

The Google factor

The creepiest thing is that over seventy percent of people in Switzerland are apparently using Gmail addresses, and these will be a dependency of their registration for SwissID.

Given that SwissID is being promoted as a solution compliant with ZertES legislation that can act as an interface between citizens and the state, the intersection with such a powerful foreign actor as Gmail is extraordinary. For example, if people are registering to vote in Switzerland's renowned referendums and their communication is under the surveillance of a foreign power like the US, that is a mockery of democracy and it makes the allegations of Russian election hacking look like child's play.

Switzerland's referendums, decentralized system of Government, part-time army and privacy regime are all features that maintain a balance between citizen and state: by centralizing power in the hands of SwissID and foreign IT companies, doesn't it appear that the very name SwissID is a mockery of the Swiss identity?

Yellow in motion

No canaries were harmed in the production of this blog.

18 February, 2018 10:17PM by Daniel.Pocock

Dima Kogan

Vnlog!

In the last few jobs I've worked at I ended up writing a tool to store data in a nice format, and to be able to manipulate it easily. I'd rewrite this from scratch each time partly because I was never satisfied with the previous version. Each iteration was an improvement on the previous one, and this version is the good one. I wrote it at NASA/JPL, went through the release process (this thing was called asciilog then), added a few more features, and I'm now releasing it. The toolkit lives here and here's the initial README:

Summary

Vnlog (pronounced "vanillog") is a trivially-simple log format:

  • A whitespace-separated table of ASCII human-readable text
  • Lines beginning with # are comments
  • The first line that begins with a single # (not ## or #!) is a legend, naming each column

Example:

#!/usr/bin/whatever
# a b c
1 2 3
## another comment
4 5 6

Such data works very nicely with normal UNIX tools (awk, sort, join), can be easily read by fancier tools (numpy, matlab (yuck), excel (yuck), etc), and can be plotted with feedgnuplot. This toolkit provides some tools to manipulate vnlog data and a few libraries to read/write it. The core philosophy is to keep everything as simple and light as possible, and to provide methods to enable existing (and familiar!) tools and workflows to be utilized in nicer ways.

Synopsis

In one terminal, sample the CPU temperature over time, and write the data to a file as it comes in, at 1Hz:

$ ( echo '# time temp1 temp2 temp3';
    while true; do echo -n "`date +%s` "; < /proc/acpi/ibm/thermal awk '{print $2,$3,$4; fflush()}';
    sleep 1; done ) > /tmp/temperature.vnl

In another terminal, I sample the consumption of CPU resources, and log that to a file:

$ (echo "# user system nice idle waiting hardware_interrupt software_interrupt stolen";
   top -b -d1 | awk '/%Cpu/ {print $2,$4,$6,$8,$10,$12,$14,$16; fflush()}')
   > /tmp/cpu.vnl

These logs are now accumulating, and I can do stuff with them. The legend and the last few measurements:

$ vnl-tail /tmp/temperature.vnl
# time temp1 temp2 temp3
1517986631 44 38 34
1517986632 44 38 34
1517986633 44 38 34
1517986634 44 38 35
1517986635 44 38 35
1517986636 44 38 35
1517986637 44 38 35
1517986638 44 38 35
1517986639 44 38 35
1517986640 44 38 34

I grab just the first temperature sensor, and align the columns

$ < /tmp/temperature.vnl vnl-tail |
    vnl-filter -p time,temp=temp1 |
    vnl-align
#  time    temp
1517986746 45
1517986747 45
1517986748 46
1517986749 46
1517986750 46
1517986751 46
1517986752 46
1517986753 45
1517986754 45
1517986755 45

I do the same, but read the log data in realtime, and feed it to a plotting tool to get a live reporting of the cpu temperature. This plot updates as data comes in. I then spin a CPU core (while true; do true; done), and see the temperature climb. Here I'm making an ASCII plot that's pasteable into the docs.

$ < /tmp/temperature.vnl vnl-tail -f           |
    vnl-filter --unbuffered -p time,temp=temp1 |
     feedgnuplot --stream --domain                               \
       --lines --timefmt '%s' --set 'format x "%M:%S"' --ymin 40 \
       --unset grid --terminal 'dumb 80,40'

  70 +----------------------------------------------------------------------+
     |      +      +      +      +       +      +      +      +      +      |
     |                                                                      |
     |                                                                      |
     |                                                                      |
     |                      **                                              |
  65 |-+                   ***                                            +-|
     |                    ** *                                              |
     |                    *  *                                              |
     |                    *  *                                              |
     |                   *   *                                              |
     |                  **   **                                             |
  60 |-+                *     *                                           +-|
     |                 *      *                                             |
     |                 *      *                                             |
     |                 *      *                                             |
     |                **      *                                             |
     |                *       *                                             |
  55 |-+              *       *                                           +-|
     |                *       *                                             |
     |                *       **                                            |
     |                *        *                                            |
     |               **        *                                            |
     |               *         **                                           |
  50 |-+             *          **                                        +-|
     |               *           **                                         |
     |               *            ***                                       |
     |               *              *                                       |
     |               *              ****                                    |
     |               *                 *****                                |
  45 |-+             *                     ***********                    +-|
     |    ************                               ********************** |
     |          * **                                                        |
     |                                                                      |
     |                                                                      |
     |      +      +      +      +       +      +      +      +      +      |
  40 +----------------------------------------------------------------------+
   21:00  22:00  23:00  24:00  25:00   26:00  27:00  28:00  29:00  30:00  31:00

Cool. I can then join the logs, pull out simultaneous CPU consumption and temperature numbers, and plot the path in the temperature-cpu space:

$ vnl-join -j time /tmp/temperature.vnl /tmp/cpu.vnl |
  vnl-filter -p temp1,user                           |
  feedgnuplot --domain --lines \
    --unset grid --terminal 'dumb 80,40'

  45 +----------------------------------------------------------------------+
     |           +           +           +          +           +           |
     |                                       *                              |
     |                                       *                              |
  40 |-+                                    **                            +-|
     |                                      **                              |
     |                                     * *                              |
     |                                     * *      *    *    *             |
  35 |-+               ****      *********** **** * **** ***  ******      +-|
     |        *********   ********       *   *  *****  *** * ** *  *        |
     |        *    *                            * * *  * * ** * *  *        |
     |        *    *                                   *   *  *    *        |
  30 |-+      *                                                    *      +-|
     |        *                                                    *        |
     |        *                                                    *        |
     |        *                                                    *        |
  25 |-+      *                                                    *      +-|
     |        *                                                    *        |
     |        *                                                    *        |
     |        *                                                    *        |
  20 |-+      *                                                    *      +-|
     |        *                                                    *        |
     |        *                                                    *        |
     |      * *                                                    *        |
  15 |-+    * *  *                                                 *      +-|
     |      * *  *                                                 *        |
     |      ***  *                                                 *        |
     |      ***  *                                                 *        |
  10 |-+    ***  *                                                 *      +-|
     |      ***  *                                                 *        |
     |      ***  *                                                 *        |
     |      ***  *                                                 *        |
   5 |-+    ***  *                                                 *      +-|
     |      ***  *                                                 *        |
     |      * *  * *                                               *        |
     |      **** * ** *****  *********** +       *******       *****        |
   0 +----------------------------------------------------------------------+
     40          45          50          55         60          65          70

Description

As stated before, vnlog tools are designed to be very simple and light. There exist other tools that are similar. For instance:

These all provide facilities to run various analyses, and are neither simple nor light. Vnlog by contrast doesn't analyze anything, but makes it easy to write simple bits of awk or perl to process stuff to your heart's content. The main envisioned use case is one-liners, and the tools are geared for that purpose. The above mentioned tools are much more powerful than vnlog, so they could be a better fit for some use cases.

In the spirit of doing as little as possible, the provided tools are wrappers around tools you already have and are familiar with. The provided tools are:

  • vnl-filter is a tool to select a subset of the rows/columns in a vnlog and/or to manipulate the contents. This is effectively an awk wrapper where the fields can be referenced by name instead of index. 20-second tutorial:
vnl-filter -p col1,col2,colx=col3+col4 'col5 > 10' --has col6

will read the input, and produce a vnlog with 3 columns: col1 and col2 from the input and a column colx that's the sum of col3 and col4 in the input. Only those rows for which the col5 > 10 is true will be output. Finally, only those rows that have a non-null value for col6 will be selected. A null entry is signified by a single - character.

vnl-filter --eval '{s += x} END {print s}'

will evaluate the given awk program on the input, but the column names work as you would hope they do: if the input has a column named x, this would produce the sum of all values in this column. (A further sketch using this form appears just after this list.)

  • vnl-sort, vnl-join, vnl-tail are wrappers around the corresponding GNU Coreutils tools. These work exactly as you would expect also: the columns can be referenced by name, and the legend comment is handled properly. These are wrappers, so all the commandline options those tools have "just work" (except options that don't make sense in the context of vnlog). As an example, vnl-tail -f will follow a log: data will be read by vnl-tail as it is written into the log (just like tail -f, but handling the legend properly). And you already know how to use these tools without even reading the manpages! Note: these were written for and have been tested with the Linux kernel and GNU Coreutils sort, join and tail. Other kernels and tools probably don't (yet) work. Send me patches.
  • vnl-align aligns vnlog columns for easy interpretation by humans. The meaning is unaffected.
  • Vnlog::Parser is a simple Perl library to read a vnlog.
  • libvnlog is a C library to simplify writing a vnlog. Clearly all you really need is printf(), but this is useful if we have lots of columns, many containing null values in any given row, and/or if we have parallel threads writing to a log.
  • vnl-make-matrix converts a one-point-per-line vnlog to a matrix of data. I.e.
$ cat dat.vnl
# i j x
0 0 1
0 1 2
0 2 3
1 0 4
1 1 5
1 2 6
2 0 7
2 1 8
2 2 9
3 0 10
3 1 11
3 2 12

$ < dat.vnl vnl-filter -p i,x | vnl-make-matrix --outdir /tmp
Writing to '/tmp/x.matrix'

$ cat /tmp/x.matrix
1 2 3
4 5 6
7 8 9
10 11 12

All the tools have manpages that contain more detail. And tools will probably be added with time.
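
For instance, here is a small sketch tying the --eval form back to the temperature log from the synopsis above; the column name temp1 is the one defined there:

# Average of the temp1 column, referencing the column by name:
< /tmp/temperature.vnl vnl-filter --eval '{s += temp1; n++} END {print s/n}'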

C interface

For most uses, these logfiles are simple enough to be generated with plain prints. But then each print statement has to know which numeric column we're populating, which becomes effortful with many columns. In my usage it's common to have a large parallelized C program that's writing logs with hundreds of columns where any one record would contain only a subset of the columns. In such a case, it's helpful to have a library that can output the log files. This is available. Basic usage looks like this:

In a shell:

$ vnl-gen-header 'int w' 'uint8_t x' 'char* y' 'double z' 'void* binary' > vnlog_fields_generated.h

In a C program test.c:

#include "vnlog_fields_generated.h"

int main()
{
    vnlog_emit_legend();

    vnlog_set_field_value__w(-10);
    vnlog_set_field_value__x(40);
    vnlog_set_field_value__y("asdf");
    vnlog_emit_record();

    vnlog_set_field_value__z(0.3);
    vnlog_set_field_value__x(50);
    vnlog_set_field_value__w(-20);
    vnlog_set_field_value__binary("\x01\x02\x03", 3);
    vnlog_emit_record();

    vnlog_set_field_value__w(-30);
    vnlog_set_field_value__x(10);
    vnlog_set_field_value__y("whoa");
    vnlog_set_field_value__z(0.5);
    vnlog_emit_record();

    return 0;
}

Then we build and run, and we get

$ cc -o test test.c -lvnlog

$ ./test

# w x y z binary
-10 40 asdf - -
-20 50 - 0.2999999999999999889 AQID
-30 10 whoa 0.5 -

The binary field is base64-encoded. This is a rarely-used feature, but sometimes you really need to log binary data for later processing, and this makes it possible.

So you

  1. Generate the header to define your columns
  2. Call vnlog_emit_legend()
  3. Call vnlog_set_field_value__...() for each field you want to set in that row.
  4. Call vnlog_emit_record() to write the row and to reset all fields for the next row. Any fields that were not set with a vnlog_set_field_value__...() call are written as null: -

This is enough for 99% of the use cases. Things get a bit more complex if we have threading or if we have multiple vnlog output streams in the same program. For both of these we use vnlog contexts.

To support reentrant writing into the same vnlog by multiple threads, each log-writer should create a context, and use it when talking to vnlog. The context functions will make sure that the fields in each context are independent and that the output records won't clobber each other:

void child_writer( // the parent context also writes to this vnlog. Pass NULL to
                   // use the global one
                   struct vnlog_context_t* ctx_parent )
{
    struct vnlog_context_t ctx;
    vnlog_init_child_ctx(&ctx, ctx_parent);

    while(records)
    {
        vnlog_set_field_value_ctx__xxx(&ctx, ...);
        vnlog_set_field_value_ctx__yyy(&ctx, ...);
        vnlog_set_field_value_ctx__zzz(&ctx, ...);
        vnlog_emit_record_ctx(&ctx);
    }
}

If we want to have multiple independent vnlog writers to different streams (with different columns and legends), we do this instead:

file1.c:

#include "vnlog_fields_generated1.h"

void f(void)
{
    // Write some data out to the default context and default output (STDOUT)
    vnlog_emit_legend();
    ...
    vnlog_set_field_value__xxx(...);
    vnlog_set_field_value__yyy(...);
    ...
    vnlog_emit_record();
}

file2.c:

#include "vnlog_fields_generated2.h"

void g(void)
{
    // Make a new session context, send output to a different file, write
    // out legend, and send out the data
    struct vnlog_context_t ctx;
    vnlog_init_session_ctx(&ctx);
    FILE* fp = fopen(...);
    vnlog_set_output_FILE(&ctx, fp);
    vnlog_emit_legend_ctx(&ctx);
    ...
    vnlog_set_field_value__a(...);
    vnlog_set_field_value__b(...);
    ...
    vnlog_emit_record();
}

Note that it's the user's responsibility to make sure the new sessions go to a different FILE by invoking vnlog_set_output_FILE(). Furthermore, note that the included vnlog_fields_....h file defines the fields we're writing to; and if we have multiple different vnlog field definitions in the same program (as in this example), then the different writers must live in different source files. The compiler will barf if you try to #include two different vnlog_fields_....h files in the same source.

More APIs are:

vnlog_printf(...) and vnlog_printf_ctx(ctx, ...) write to a pipe like printf() does. This exists for comments.

vnlog_clear_fields_ctx(ctx, do_free_binary): Clears out the data in a context and makes it ready to be used for the next record. It is rare for the user to have to call this manually. The most common case is handled automatically (clearing out a context after emitting a record). One area where this is useful is when making a copy of a context:

struct vnlog_context_t ctx1;
// .... do stuff with ctx1 ... add data to it ...

struct vnlog_context_t ctx2 = ctx1;
// ctx1 and ctx2 now both have the same data, and the same pointers to
// binary data. I need to get rid of the pointer references in ctx1

vnlog_clear_fields_ctx(&ctx1, false);

vnlog_free_ctx(ctx):

Frees memory for a vnlog context. Do this before throwing the context away. Currently this is only needed for contexts that have binary fields, but it should be called for all contexts, just in case.

numpy interface

The built-in numpy.loadtxt and numpy.savetxt functions work well to read and write these files. For example, to write to standard output a vnlog with fields a, b and c:

numpy.savetxt(sys.stdout, array, fmt="%g", header="a b c")

Note that numpy automatically adds the # to the header. To read a vnlog from a file on disk, do something like

array = numpy.loadtxt('data.vnl')

These functions know that # lines are comments, but don't interpret anything as field headers. That's easy to do, so I'm not providing any helper libraries. I might do that at some point, but in the meantime, patches are welcome.

Caveats and bugs

The tools that wrap GNU coreutils (vnl-sort, vnl-join, vnl-tail) are written specifically to work with the Linux kernel and the GNU coreutils. None of these have been tested with BSD tools or with non-Linux kernels, and I'm sure things don't just work. It's probably not too effortful to get that running, but somebody needs to at least bug me for that. Or better yet, send me nice patches :)

These tools are meant to be simple, so some things are hard requirements. A big one is that columns are whitespace-separated. There is no mechanism for escaping or quoting whitespace into a single field. I think supporting something like that is more trouble than it's worth.

Author

Dima Kogan (dima@secretsauce.net)

License and copyright

This library is free software; you can redistribute it and/or modify it under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1 of the License, or (at your option) any later version.

Copyright 2016-2017 California Institute of Technology

Copyright 2017-2018 Dima Kogan (dima@secretsauce.net)

b64_cencode.c comes from cencode.c in the libb64 project. It is written by Chris Venter (chris.venter@gmail.com) who placed it in the public domain. The full text of the license is in that file.

18 February, 2018 11:58AM by Dima Kogan

Petter Reinholdtsen

The SysVinit upstream project just migrated to git

Surprising as it might sound, there are still computers using the traditional Sys V init system, and there probably will be until systemd starts working on Hurd and FreeBSD. The upstream project still exists, though, and up until today, the upstream source was available from Savannah via subversion. I am happy to report that this just changed.

The upstream source is now in Git, and consists of three repositories:

I do not really spend much time on the project these days, and I have mostly retired, but I found it best to migrate the source to a good version control system to help those willing to move it forward.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

18 February, 2018 08:20AM

February 17, 2018

hackergotchi for Joey Hess

Joey Hess

futures of distributions

Seems Debian is talking about why they are unable to package whole categories of modern software, such as anything using npm. It's good they're having a conversation about that, and I want to give a broader perspective.

Lars Wirzenius's blog post about it explains the problem well from the Debian perspective. In short: The granularity at which software is built has fundamentally changed. It's now typical for hundreds of small libraries to be used by any application, often pegged to specific versions. Language-specific tools manage all the resulting complexity automatically, but distributions can't muster the manpower to package a fraction of this stuff.

Lars lists some ideas for incremental improvements, but the space within which a Linux distribution exists has changed, and that calls not for incremental changes, but for a fundamental rethink from the ground up. Whether Debian is capable of making such fundamental changes at this point in its lifecycle is up to its developers to decide.

Perhaps other distributions are dealing with the problem better? One way to evaluate this is to look at how a given programming language community feels about a distribution's handling of their libraries. Do they generally see the distribution as a road block that must be worked around, or is the distribution a useful part of their workflow? Do they want their stuff included in the distribution, or does that seem like a lot of pointless bother?

I can only speak about the Haskell community. While there are some exceptions, it generally is not interested in Debian containing Haskell packages, and indeed system-wide installations of Haskell packages can be an active problem for development. This is despite Debian having done a much better job at packaging a lot of Haskell libraries than it has at say, npm libraries. Debian still only packages one version of anything, and there is lag and complex process involved, and so friction with the Haskell community.

On the other hand, there is a distribution that the Haskell community broadly does like, and that's Nix. A subset of the Haskell community uses Nix to manage and deploy Haskell software, and there's generally a good impression of it. Nix seems to be doing something right, that Debian is not doing.

It seems that Nix also has pretty good support for working with npm packages, including ingesting a whole dependency chain into the package manager with a single command, and thousands of npm libraries included in the distribution. I don't know how the npm community feels about Nix, but my guess is they like it better than Debian.

Nix is a radical rethink of the distribution model. And it's jettisoned a lot of things that Debian does, like manually packaging software, or extreme license vetting. It's interesting that Guix, which uses the same technologies as Nix, but seems in many ways more Debian-like with its care about licensing etc, has also been unable to manage npm packaging. This suggests to me that at least some of the things that Nix has jettisoned need to be jettisoned in order to succeed in the new distribution space.

But. Nix is not really exploding in popularity from what I can see. It seems to have settled into a niche of its own, and is perhaps expanding here and there, but not rapidly. It's insignificant compared with things like Docker, that also radically rethink the distribution model.

We could easily end up with some nightmare of lithification, as described by Robert "r0ml" Lefkowitz in his Linux.conf.au talk. Endlessly copied and compacted layers of code, contained or in the cloud. Programmer-archeologists right out of a Vinge SF novel.

r0ml suggests that we assume that's where things are going (or indeed where they already are outside little hermetic worlds like Debian), and focus on solving technical problems, like deployment of modifications of cloud apps, that prevent users from exercising software freedoms.

In a way, r0ml's ideas are what led me to thinking about extending Scuttlebutt with Annah, and indeed if you squint at that right, it's an idea for a radically different kind of distribution.

Well, that's all I have. No answers of course.

17 February, 2018 08:51PM

John Goerzen

The downfall of… Trump or Democracy?

The future of the United States as a democracy is at risk. That’s plenty scary. More scary is that many Americans know this, but don’t care. And even more astonishing is that this same thing happened 45 years ago.

I remember it clearly. January 30, just a couple weeks ago. On that day, we had the news that FBI deputy director McCabe — a frequent target of apparently-baseless Trump criticism — had been pushed out. The Trump administration refused to enforce the bipartisan set of additional sanctions on Russia. And the House Intelligence Committee voted on party lines to release what we all knew then, and since have seen confirmed, was a memo filled with errors designed to smear people investigating the president, but which nonetheless contained enough classified material to cause an almighty kerfuffle in Washington.

I told my wife that evening, “I think today will be remembered as a turning point. Either to the downfall of Trump, or the downfall of our democracy, but I don’t know which.”

I have not written much about this scandal, because so many quality words have already been written. But it is time to add something.

I was interested in Watergate years ago. Back in middle school, I read All the President’s Men. I wondered what it must have been like to live through those events — corruption at the highest level of government, dirty tricks, not knowing how it would play out. I wished I could have experienced it.

A couple of decades later, I have got my wish and I am not amused. After all:

“If these allegations prove to be true, what they were seeking to steal was not the jewels, money or other property of American citizens, but something much more valuable — their most precious heritage, the right to vote in a free election…

If the allegations… are substantiated, there has been a very serious subversion of the integrity of the electoral process, and the committee will be obliged to consider the manner in which such a subversion affects the continued existence of this nation as a representative democracy, and how, if we are to survive, such subversions may be prevented in the future.”

Sen. Sam Ervin Jr, May 17, 1973

That statement from 45 years ago captures accurately my contemporary fears. If foreign interference in our elections is not only tolerated but embraced, where does that leave us? Are we really a republic anymore?

I have been diving back into Watergate. In One Man Against The World: The Tragedy of Richard Nixon, written in 2015, Tim Weiner dives into the Nixon story in unprecedented detail, thanks to the release of many more files from that time. On his very first page, he writes:

[Nixon] made war in pursuit of peace. He committed crimes in the name of the law. He tore the country apart while trying to unite it. He sabotaged his presidency by violating the Constitution. He destroyed himself and damaged the nation through deliberate acts of folly…

He practiced geopolitics without subtlety; he preferred subterfuge and brutality. He dropped bombs and napalm without remorse; he believed they delivered a political message beyond flood and fire. He charted the course of the war without a strategy; he delivered victory to his adversaries.

His gravest decisions undermined his allies abroad. His grandest delusions armed his enemies at home…

The truth was not in him; secrecy and deception were his touchstones.

That these words describe another American president, one that I’m sure Weiner had not foreseen, is jarring. The parallels between Nixon and Trump in the pages of Weiner’s book are so strong that one sometimes wonders if Weiner has a more accurate story of Trump than Wolff got – and also if the pages of his book let us see what’s in store for us this year.

Today I started listening to the excellent podcast Slow Burn. If you have time for nothing else, listen to episode 5: True Believers. It discusses the politicization of the Senate Watergate committee, and more ominously, the efforts of reporters to understand the people who still supported Nixon — despite all the damning testimony already out there.

Gail Sheehy went to a bar where Nixon supporters gathered, wanting to get their reaction to the Watergate hearings. The supporters didn’t want to watch. They thought the hearings were just an attempt by liberals to take down Nixon. Sheehy found the president’s people to be “angry, demoralized, and disconcertingly comfortable with the idea of a police state run by Richard Nixon.”

These guys felt they were nobodies… except Richard Nixon gave them an identity. He was a tough guy who was “going to get rid of all those anti-war people, anarchists, terrorists… the people that were tearing down our country!”

Art Buchwald’s tongue-in-cheek handy excuses for Nixon backers seem to be copied almost verbatim by Fox News (substitute Hillary’s emails for Chappaquiddick).

And what happened to the scum of Richard Nixon’s era? Yes, some went to jail, but not all.

  • Steve King, one of Nixon’s henchmen who kidnapped Martha Mitchell (wife of Attorney General and Nixon henchman John Mitchell) for a week to keep her from spilling the beans on Watergate, beat her up, and had her drugged — well, he was appointed by Trump to be ambassador to the Czech Republic and confirmed by the Senate.
  • The man that said that the Watergate burglars were “not criminal at heart” because “their only aim was to re-elect the president” later got elected president himself, and pardoned one of the burglars. (Ronald Reagan)
  • The man that said “just let the president do his job!” was also elected president. (George H. W. Bush)
  • The man that finally carried out Nixon’s order to fire special prosecutor Archibald Cox was nominated to the Supreme Court, but his nomination was blocked in the Senate. (Robert Bork) He was, however, on the United States Court of Appeals for 6 years.
  • And in an odd conspiracy-laden introduction to a reprint of a youth’s history book on Watergate, none other than Roger Stone, wrapped up in Trump’s shenanigans, was trying to defend Nixon. Oh, and he was a business partner with Paul Manafort and lobbyist for Ferdinand Marcos.

One comfort from all of this is the knowledge that we have been here before. We lived through an era of great progress in civil rights, and right after that elected a dictatorial crook president. We survived the president’s fervent supporters refusing to believe overwhelming evidence of his crookedness. We survived.

And yet, that is no guarantee. After all, as John Dean put it, Nixon “might have survived if there’d been a Fox News.”

17 February, 2018 08:36PM by John Goerzen

hackergotchi for Lars Wirzenius

Lars Wirzenius

What is Debian all about, really? Or: friction, packaging complex applications

Another weekend, another big mailing list thread

This weekend, those interested in Debian development have been having a discussion on the debian-devel mailing list about "What can Debian do to provide complex applications to its users?". I'm commenting on that in my blog rather than the mailing list, since this got a bit too long to be usefully done in an email.

directhex's recent blog post "Packaging is hard. Packager-friendly is harder." is also relevant.

The problem

To start with, I don't think the email that started this discussion poses the right question. The problem is not really about complex applications; we already have those in Debian. See, for example, LibreOffice. The discussion is really about how Debian should deal with the way some types of applications are developed upstream these days. They're not all complex, and they're not all big, but as usual, things only get interesting when n is big.

A particularly clear example is the whole nodejs ecosystem, but it's not limited to that and it's not limited to web applications. This is also not the first time this topic arises, but we've never come to any good conclusion.

My understanding of the problem is as follows:

A current trend in software development is to use programming languages, often interpreted high-level languages, combined with heavy use of third-party libraries and a language-specific package manager for installing those libraries, both for the developer and sometimes also for the sysadmin installing the software for production. This bypasses the Linux distributions entirely. The benefit is that it has created ecosystems for specific programming languages in which there is very little friction for developers using libraries written in that language, speeding up development cycles a lot.

When I was young(er) the world was horrible

In comparison, in the old days, which for me means the 1990s, and before Debian took over my computing life, the cycle was something like this:

I would be writing an application, and would need to use a library to make some part of my application easier to write. To use that library, I would download the source code archive of the latest release, and laboriously decipher and follow the build and installation instructions, fix any problems, rinse, repeat. After getting the library installed, I would get back to developing my application. Often the installation of the dependency would take hours, so not a thing to be undertaken lightly.

Debian made some things better

With Debian, and apt, and having access to hundreds upon hundreds of libraries packaged for Debian, this became a much easier process. But only for the things packaged for Debian.

For those developing and publishing libraries, Debian didn't make the process any easier. They would still have to publish a source code archive, but also hope that it would eventually be included in Debian. And updates to libraries in the Debian stable release would not get into the hands of users until the next Debian stable release. This is a lot of friction. For C libraries, that friction has traditionally been tolerable. The effort of making the library in the first place is considerable, so any friction added by Debian is small by comparison.

The world has changed around Debian

In the modern world, developing a new library is much easier, and so also the friction caused by Debian is much more of a hindrance. My understanding is that things now happen more like this:

I'm developing an application. I realise I could use a library. I run the language-specific package manager (pip, cpan, gem, npm, cargo, etc.), it downloads the library, installs it in my home directory or my application source tree, and in less than the time it takes to have a sip of tea, I can get back to developing my application.

This has a lot less friction than the Debian route. The attraction to application programmers is clear. For library authors, the process is also much more streamlined. Writing a library, especially in a high-level language, is fairly easy, and publishing it for others to use is quick and simple. This can lead to a virtuous cycle where I write a useful little library, you use it and tell me about a bug or a missing feature, I add it, publish the new version, you use it, and we're both happy as can be. Where this might have taken weeks or months in the old days, it can now happen in minutes.
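
As a concrete illustration of that cycle, the "add a dependency" step is a single, generic command in most of these ecosystems (the commands are ordinary package-manager invocations, and the library names are arbitrary examples, not anything from the thread):

# One command per ecosystem pulls a library in, ready to use:
pip install --user requests     # Python
npm install left-pad            # nodejs (installs into ./node_modules)
gem install rack                # Ruby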

The big question: why Debian?

In this brave new world, why would anyone bother with Debian anymore? Or any traditional Linux distribution, since this isn't particularly specific to Debian. (But I mention Debian specifically, since it's what I know best.)

A number of things have been mentioned or alluded to in the discussion mentioned above, but I think it's good for the discussion to be explicit about them. As a computer user, software developer, system administrator, and software freedom enthusiast, I see the following reasons to continue to use Debian:

  • The freeness of software included in Debian has been vetted. I have a strong guarantee that software included in Debian is free software. This goes beyond the licence of that particular piece of software and includes practical considerations, like whether the software can actually be built using free tooling, and whether I have access to that tooling, because the tooling, too, is included in Debian.

    • There was a time when Debian debated (with itself) whether it was OK to include a binary that needed to be built using a proprietary C compiler. We decided that it isn't, or not in the main package archive.

    • These days we have the question of whether "minimised Javascript" is OK to be included in Debian, if it can't be produced using tools packaged in Debian. My understanding is that we have already decided that it's not, but the discussion continues. To me, this seems equivalent to the above case.

  • I have a strong guarantee that software in a stable Debian release won't change underneath me in incompatible ways, except in special circumstances. This means that if I'm writing my application and targeting Debian stable, the library API won't change, at least not until the next Debian stable release. Likewise for every other bit of software I use. Having things continue to work without having to worry is a good thing.

    • Note that a side effect of the low friction of library development in current ecosystems is that library APIs sometimes change. That would mean my application needs to change to adapt to the API change. That's friction for my work.
  • I have a strong guarantee that a dependency won't just disappear. Debian has a large mirror network of its package archive, and there are easy tools to run my own mirror, if I want to. While running my own mirror is possible for other package management systems, each one adds to the friction.

    • The nodejs NPM ecosystem seems to be especially vulnerable to this. More than once, packages have gone missing, causing other projects that depend on the missing packages to start failing.

    • The way the Debian project is organised, it is almost impossible for this to happen in Debian. Not only are package removals carefully co-ordinated, but packages that are depended on by other packages aren't removed.

  • I have a strong guarantee that a Debian package I get from a Debian mirror is the official package from Debian: either the actual package uploaded by a Debian developer or a binary package built by a trusted Debian build server. This is because Debian uses cryptographic signatures of the package lists and I have a trust path to the Debian signing key.

    • At least some of the language-specific package managers fail to have such a trust path. This means that I have no guarantee that the library package I download today is the same code uploaded by the library author.

    • Note that https does not help here. It protects the transfer from the package manager's web server to me, but makes absolutely no guarantees about the validity of the package. There have been enough cases of package repositories being attacked that this matters to me. Debian's signatures protect against malicious changes on mirror hosts.

  • I have a reasonably strong guarantee that any problem I find can be fixed, by me or someone else. This is not a strong guarantee, because Debian can't do anything about insanely complicated code, for example, but at least I can rely on being able to rebuild the software. That's a basic requirement for fixing a bug.

  • I have a reasonably strong guarantee that, after upgrading to the next Debian stable release, my stuff continues to work. Upgrades may always break, but at least Debian tests them and treats it as a bug if an upgrade doesn't work, or loses user data.

These are the reasons why I think Debian and the way it packages and distributes software is still important and relevant. (You may disagree. I'm OK with that.)

What about non-Linux free operating systems?

I don't have much personal experience with non-Linux systems, so I've only talked about Linux here. I don't think the BSD systems, for example, are actually all that different from Linux distributions. Feel free to substitute "free operating system" for "Linux" throughout.

What is it Debian tries to do, anyway?

The previous section is one level of abstraction too low. It's important, but it's beneficial to take a further step back and consider what it is Debian actually tries to achieve. Why does Debian exist?

The primary goal of Debian is to enable its users to use their computers using only free software. The freedom aspect is fundamentally important and a principle that Debian is not willing to compromise on.

The primary approach to achieving this goal is to produce a "distribution" of free software, making it feasible for our users to install a free software operating system and applications, and to maintain such a computer.

This leads to secondary goals, such as:

  • Making it easy to install Debian on a computer. (For values of easy that should be compared to toggling boot sector bytes manually.)

    We've achieved this, though of course things can always be improved.

  • Making it easy to install applications on a computer with Debian. (Again, compared to the olden days, when that meant configuring and compiling everything from scratch, with no guidance.)

    We've achieved this, too.

  • A system with Debian installed is reasonably secure, and easy to keep reasonably secure.

    This means Debian will provide security support for software it distributes, and has ways in which to install security fixes. We've achieved this, though this, too, can always be improved.

  • A system with Debian installed should keep working for extended periods of time. This is important to make using Debian feasible. If it takes too much effort to keep a computer running Debian, it's not feasible for many people to do that, and then Debian fails its primary goal.

    This is why Debian has stable releases with years of security support. We've achieved this.

The disconnect

On the one hand, we have Debian, which pretty much has achieved what I declare to be its primary goal. On the other hand, a lot of developers now expect much less friction than what Debian offers. This disconnect is, I believe, the cause of the debian-devel discussion, and of variants of that discussion all over the open source landscape.

These discussions often go one of two ways, depending on which community is talking.

  • In the distribution and more old-school communities, the low-friction approach of language-specific package managers is often considered to be a horror, and an abandonment of all the good things that the Linux world has achieved. "Young saplings, who do they think they are, all agile and bendy and with no principles at all, get off our carefully cultivated lawn."

  • In the low-friction communities, Linux distributions are something only old, stodgy, boring people care about. "Distributions are dead, they only get in the way, nobody bothers with them anymore."

This disconnect will require effort by both sides to close the gap.

On the one hand, so much new software is being written by people using the low-friction approach, that Linux distributions may fail to attract new users and especially new developers, and this will hurt them and their users.

On the other hand, the low-friction people may be sawing off the tree branch they're sitting on. If distributions suffer, the base on which low-friction development relies will wither away, and we'll be left with running low-friction free software on proprietary platforms.

Things for low-friction proponents to improve

Here's a few things I've noticed that go wrong in the various communities oriented towards the low-friction approach.

  • Not enough care is given to copyright licences. This is a boring topic, but it's the legal foundation that all of free software and open source is built on. If copyright licences are violated, or copyrights are not respected, or copyrights or licences are not expressed well enough, or incompatible licences are mixed, the result is very easily not actually either free software or open source.

    It's boring, but be sufficiently pedantic here. It's not even all that difficult.

  • Do provide actual source. It seems quite a number of Javascript projects only distribute "minimised" versions of code. That's not actually source code, any more than, say, Java byte code is, even if a de-compiler can make it kind of editable. If source isn't available, it's not free software or open source.

  • Please try to be careful with API changes. What used to work should still work with a new version of a library. If you need to make an API change that breaks compatibility, find a way to still support those who rely on the old API, using whatever mechanisms are available to you. Ideally, support the old API for a long time, years. Two weeks is really not enough.

  • Do be careful with your dependencies. Locking dependencies to a specific version makes things difficult for distributions, because they often can only provide one or a very small number of versions of any one package. Likewise, avoid embedding dependencies in your own source tree, because that explodes the amount of work distributions have to do to patch security holes. (No, distributions can't rely on tens of thousands of upstreams to each do the patching correctly and promptly.)

Things for Debian to improve

There are many sources of friction that come from Debian itself. Some of them are unavoidable: if upstream projects don't take care of copyright licence hygiene, for example, then Debian will impose that on them and that can't be helped. Other things are more avoidable, however. Here's a list off the top of my head:

  • A lot of stuff in Debian happens over email, which might happen using a web application, if it were not for historical reasons. For example, the Debian bug tracking system (bugs.debian.org) requires using email, and given delays caused by spam filtering, this can cause delays of more than fifteen minutes. This is a source of friction that could be avoided.

  • Likewise, Debian voting happens over email, which can cause friction from delays.

  • Debian lets its package maintainers use any version control system, any packaging helper tooling, and any packaging workflow they want. This means that every package is, to some extent, new territory for someone other than its primary maintainers. Even when the same tools are used, they can be used in a variety of different ways. Consistency should reduce friction.

  • There's too little infrastructure to do things like collecting copyright information into debian/copyright. This really shouldn't be a manual task.

  • Debian packaging uses arcane file formats, loosely based on email headers. More standard formats might make things easier, and reduce friction.

  • There's not enough automated testing, or it's too hard to use, making it too hard to know whether a new package will work, or whether a modified package breaks anything that used to work.

  • Overall, making a Debian package tends to require too much manual work. Packaging helpers like dh certainly help, but not enough. I don't have a concrete suggestion for how to reduce it, but it seems like an area Debian should work on.

  • Maybe consider supporting installing multiple versions of a package, even if only for, say, Javascript libraries. Possibly with a caveat that only specific versions will be security supported, and a way to alert the sysadmin if vulnerable packages are installed. Dunno, this is a difficult one.

  • Maybe consider providing something where the source package gets automatically updated to every new upstream release (or commit), with binary packages built from that, and those automatically tested. This might be a separate section of the archive, and packages would be included into the normal part of the archive only by manual decision.

  • There's more, but mostly not relevant to this discussion, I think. For example, Debian is a big project, and the mere size is a cause of friction.

Comments?

I don't allow comments on my blog, and I don't want to debate this in private. If you have comments on anything I've said above, please post to the debian-devel mailing list. Thanks.

Baits

To ensure I get some responses, I will leave these baits here:

Anyone who's been programming less than 12332 days is a young whipper-snapper and shouldn't be taken seriously.

Depending on the latest commit of a library is too slow. The proper thing to do for really fast development is to rely on the version in the unsaved editor buffer of the library developer.

You shouldn't have read any of this. I'm clearly a troll.

17 February, 2018 05:13PM

hackergotchi for Martín Ferrari

Martín Ferrari

OSM in IkiWiki

For about 15 years, I have been thinking of creating a geo-referenced wiki of pubs, with loads of structured data to help searching. I don't know if that would be useful for anybody else, but I know I would use it!

Sadly, the many times I started coding something towards that goal, I ended up blocked by something, and I kept postponing my dream project.

Independently of that, for the past two years I have been driving a regular social meeting in Dublin for CouchSurfers, called the Dublin Mingle. The idea is pretty simple: to go every week to a different pub, and make friends.

I wanted to make a map marking all the places visited. Completely useless, but pretty! So, I went back to looking into IkiWiki internals, as the current osm plugin would not fulfill all my needs, and has a few annoying bugs.

After a few days of work, I made it: a refurbished osm plugin that uses the modern and pretty Leaflet library. If the javascript is not lost on the way (because you are reading from an aggregator, for example), below you should see the result. Otherwise, you can see it in action on its own page: Mingle.

[Embedded Leaflet map]

The code is still not ready for merging into Ikiwiki, as I need to write tests and documentation. But you can find the changes in my GitHub repo.

There is still a long way to go before I can create my pubs wiki, but this is the first building block! Now I need a way to easily import and sync data from OSM, and then to create a structured search function.


17 February, 2018 03:11PM

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Downloading all the Critical Role podcasts in one batch

I've been watching Critical Role [1] for a while now and since I've started my master's degree I haven't had much time to sit down and watch the show on YouTube as I used to do.

I thus started listening to the podcasts instead; that way, I can listen to the show while I'm doing other productive tasks. Pretty quickly, I grew tired of manually downloading every episode each time I finished the last one. To make things worse, the podcast is hosted on PodBean and they won't let you download episodes on a mobile device without their app. Grrr.

After the 10th time opening the terminal on my phone to download the podcast using some wget magic I decided enough was enough: I was going to write a dumb script to download them all in one batch.

I'm a little ashamed to say it took me more time than I had intended... The PodBean website uses semi-randomized URLs, so I could not figure out a way to guess the paths to the hosted audio files. I considered using youtube-dl to get the DASH version of the show on YouTube, but Google has been heavily throttling DASH streams recently. Not cool Google.

I then had the idea to use the iTunes RSS feed to get the audio files. Surely they would somehow be included there? Of course Apple doesn't give you a simple RSS feed link on the iTunes podcast page, so I had to rummage around and eventually found out this is the link you have to use:

https://itunes.apple.com/lookup?id=1243705452&entity=podcast

Surprise surprise, from the JSON file this link points to, I found out the main Critical Role podcast page has a proper RSS feed. In my defense, the RSS button on the main podcast page brings you to some PodBean crap page.
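
For the record, extracting the feed address from that JSON can itself be a one-liner; jq is my addition here, and feedUrl is the field name the iTunes lookup API uses for podcast entries:

# Ask the iTunes lookup API for the podcast entry and print its RSS feed URL
curl -s "https://itunes.apple.com/lookup?id=1243705452&entity=podcast" \
    | jq -r '.results[0].feedUrl'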

Anyway, once you have the RSS feed, it's only a matter of using grep and sed until you get what you want.

Around 20 minutes later, I had downloaded all the episodes, for a total of 22 GB! Victory dance!

Script

Here's the bash script I wrote. You will need recode to run it, as the RSS feed includes some HTML entities.

#!/bin/bash

# Get the whole RSS feed
wget -qO /tmp/criticalrole.rss http://criticalrolepodcast.geekandsundry.com/feed/

# Extract the URLS and the episode titles
mp3s=( $(grep -o "http.\+mp3" /tmp/criticalrole.rss) )
titles=( $(tail -n +45 /tmp/criticalrole.rss | grep -o "<title>.\+</title>" \
           | sed -r 's@</?title>@@g; s@ @\\@g' | recode html..utf8) )

# Download all the episodes under their titles
for i in ${!titles[*]}
do
  wget -qO "$(sed -e "s@\\\@\\ @g" <<< "${titles[$i]}").mp3" ${mp3s[$i]}
done

[1] For those of you not familiar with Critical Role, it's a web series where a group of voice actresses and actors from LA play Dungeons & Dragons. It's so good even people like me who never played D&D can enjoy it.

17 February, 2018 05:00AM by Louis-Philippe Véronneau

Sergio Durigan Junior

Hello, Planet Debian

Hey, there. This is long overdue: my entry in Planet Debian! I’m creating this post because, until now, I didn’t have a debian tag in my blog! Well, not anymore.

Stay tuned!

17 February, 2018 05:00AM

hackergotchi for Benjamin Mako Hill

Benjamin Mako Hill

My Kuro5hin Diary Entries

Kuro5hin logo

Kuro5hin (pronounced “corrosion” and abbreviated K5) was a website created in 1999 that was popular in the early 2000s. K5 users could post stories to be voted upon as well as entries to their personal diaries.

I posted a couple dozen diary entries between 2002 and 2003 during my final year of college and the months immediately after.

K5 was taken off-line in 2016 and the Internet Archive doesn’t seem to have snagged comments or full texts of most diary entries. Luckily, someone managed to scrape most of them before they went offline.

Thanks to this archive, you can now once again hear from 21-year-old me in the form of my old K5 diary entries, which I've imported to my blog Copyrighteous. I fixed the obvious spelling errors but otherwise restrained myself and left them intact.

If you’re interested in preserving your own K5 diaries, I wrote some Python code to parse the K5 HTML files for diary pages and import them into WordPress using it’s XML-RPC API. You’ll need to tweak the code to use it but it’s pretty straightforward.

17 February, 2018 03:23AM by Benjamin Mako Hill

February 16, 2018

hackergotchi for Steve Kemp

Steve Kemp

Updated my package-repository

Yesterday I overhauled my Debian package-hosting repository, in response to user-complaints.

I started down the rabbit hole due to:

  W: No Hash entry in Release file /.._._Release which is considered strong enough for security purposes

I fixed that by changing my hashes from SHA1 to SHA256 + SHA512, but that was only a little progress, because of the more serious problem: my repository-signing key was DSA-based and "small". I replaced it with a modern key, then changed how I generate my packages, and all is well.

In the past I was generating the Release files manually, via a silly shell-script. Anyway, here is my trivial Makefile for making the per-project and per-distribution archive; no doubt it could be improved:

   all: repo

   clean:
       @rm -f InRelease Packages Sources Packages.gz Sources.gz Release Release.gpg

   Packages: $(wildcard *.deb)
       @apt-ftparchive packages . > Packages 2>/dev/null
       @gzip -c Packages > Packages.gz

   Sources: $(wildcard *.tar.gz)
       @apt-ftparchive sources . > Sources 2>/dev/null
       @gzip -c Sources > Sources.gz

   repo: Packages Sources
       @apt-ftparchive release . > Release
       @gpg --yes --clearsign -o InRelease Release
       @gpg --yes -abs -o Release.gpg Release

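As an aside, a rough sketch of what the client side of a flat, signed repository like this looks like; the URL and key file name below are placeholders, not the real ones:

   # Import the repository signing key, add the flat ("./") repository,
   # and let apt verify the InRelease/Release.gpg files generated by the Makefile above.
   wget -qO- https://packages.example.org/signing-key.asc | sudo apt-key add -
   echo "deb https://packages.example.org/debian ./" | sudo tee /etc/apt/sources.list.d/example.list
   sudo apt-get update
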
In conclusion, in the unlikely event you're using my packages, please see the GPG-instructions. I've also hidden any packages which were solely for Squeeze and Wheezy, but they continue to exist to avoid breaking links.

16 February, 2018 10:00PM

February 15, 2018

hackergotchi for Erich Schubert

Erich Schubert

Disable Web Notification Prompts

Recently, tons of websites ask you for permission to display browser notifications. 99% of the time, you will not want these. In fact, all the notifications increase stress, so you should try to get rid of them for your own productivity. Eliminate distractions.

I find even the prompt for these notifications very annoying. With Chrome/Chromium it is even worse than with Firefox.

In Chrome, you can disable the functionality by going to the location chrome://settings/content/notifications and toggling the switch (the label will turn to “blocked”, from “ask”).

In Firefox, going to about:config and toggling dom.webnotifications.enabled is supposed to help, but it does not disable the prompts here. You even need to disable dom.push.enabled completely. That may break some services you want, but I have not noticed anything yet.
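
If you would rather persist those two preferences in the profile than flip them in about:config, here is a minimal sketch; the profile directory name is a placeholder, check about:profiles for the real one:

# Append the prefs to a Firefox profile's user.js; Firefox applies user.js at startup.
PROFILE=~/.mozilla/firefox/xxxxxxxx.default   # adjust to your actual profile directory
cat >> "$PROFILE/user.js" <<'EOF'
user_pref("dom.webnotifications.enabled", false);
user_pref("dom.push.enabled", false);
EOF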

15 February, 2018 09:41PM by Erich Schubert

hackergotchi for Joachim Breitner

Joachim Breitner

Interleaving normalizing reduction strategies

A little, not very significant, observation about lambda calculus and reduction strategies.

A reduction strategy determines, for every lambda term with redexes left, which redex to reduce next. A reduction strategy is normalizing if this procedure terminates for every lambda term that has a normal form.

A fun fact is: If you have two normalizing reduction strategies s1 and s2, consulting them alternately may not yield a normalizing strategy.

Here is an example. Consider the lambda-term o = (λx.xxx), and note that oo → ooo → oooo → …. Let Mi = (λx.(λx.x))(o…o) (with i occurrences of o). Mi has two redexes, and reduces to either (λx.x) or Mi+1. In particular, Mi has a normal form.
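
Spelled out with explicit subscripts (just a restatement of the definitions above, in LaTeX):

% o, M_i, and the two possible one-step reductions of M_i
\[
  o = \lambda x.\, x\,x\,x,
  \qquad
  M_i = (\lambda x.(\lambda x.\, x))\,(\underbrace{o \cdots o}_{i\ \text{occurrences}})
\]
\[
  M_i \to (\lambda x.\, x) \quad \text{(outer redex)},
  \qquad
  M_i \to M_{i+1} \quad \text{(inner redex, since } o\,o \to o\,o\,o \text{)}
\]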

The two reduction strategies are:

  • s1, which picks the second redex if given Mi for an even i, and the first (left-most) redex otherwise.
  • s2, which picks the second redex if given Mi for an odd i, and the first (left-most) redex otherwise.

Both strategies are normalizing: If during a reduction we come across Mi, then the reduction terminates in one or two steps; otherwise we are just doing left-most reduction, which is known to be normalizing.

But if we alternatingly consult s1 and s2 while trying to reduce M2, we get the sequence


M2 → M3 → M4 → …

which shows that this strategy is not normalizing.

Afterthought: The interleaved strategy is not actually a reduction strategy in the usual definition, as it is not a pure (stateless) function from lambda term to redex.

15 February, 2018 07:17PM by Joachim Breitner (mail@joachim-breitner.de)

hackergotchi for Holger Levsen

Holger Levsen

20180215-mini-debconf-hamburg

Everything about the Mini-DebConf in Hamburg in May 2018

Moin!

With great joy we are finally officially announcing the Debian MiniDebConf which will take place in Hamburg (Germany) from May 16 to 20, with three days of Debcamp-style hacking, followed by two days of talks, workshops and more hacking. And then, Monday the 21st is also a holiday in Germany, so you might choose to extend your stay by a day! (Though there will not be an official schedule for the 21st.)

tl;dr: We're having a MiniDebConf in Hamburg on May 16-20. It's going to be awesome. You should all come! Register now!

the longer version:

Registration

Please register now, registration is free and now open until May 1st.

In order to register, add your name and details to the registration page in the Debian wiki.

There's space for approximately 150 people due to limited space in the main auditorium.

Please register ASAP, as we need this information for planning food and for calculating how much hacking space we need.

Talks wanted (CfP)

We have assembled a content team (consisting of Margarita Manterola, Michael Banck and Lee Garrett), who soon will publish an extra post for the CfP. Though you don't need to wait for that and can already send your proposals to

    cfp@minidebconfhamburg.debian.net

We will have talks on Saturday and Sunday, the exact slots are yet to be determined by the content team.

We expect submissions and talks to be held in English, as this is the working language in Debian and at this event.

Debian Sprints

The miniDebcamp from Wednesday to Friday is a perfect opportunity to host Debian sprints. We would welcome it if teams assembled and worked together on their projects.

Location

The event will be hosted in the former Victoria Kaserne, now called Fux (or Frappant), which is a collective art space located in a historical monument. It is located between S-Altona and S-Holstenstraße, so there is a direct subway connection to/from the Hamburg Airport (HAM) and Altona is also a long distance train station.

There's a Gigabit-Fiber uplink connection and wireless coverage (almost) everywhere in the venue and in the outside areas. (And then, we can also fix locations without wireless coverage.)

Within the venue, there are three main areas we will use, plus the garden and corridors:

dock europe

dock europe is an international educational centre with a meeting space within the venue, offering three rooms which can be combined into one big one. During the Mini-DebCamp from Wednesday to Friday we will probably use the rooms in the split configuration, while on Saturday and Sunday it will be one big room hosting presentations and such. There are also two small rooms we can use as small hacklabs for 4-6 people.

dock europe also provides accommodation for some of us, see further below.

CCCHH hackerspace

Just down two corridors, on the same floor and in the same building as dock europe, there is the CCC Hamburg Hackerspace, which will be open for us on all five days and which can be used for "regular Debian hacking" or, if you find some nice CCCHH members to help you, you might also be able to use the laser cutter, 3D printer, regular printer and many other tools and devices. It's definitely also suitable for smaller ad-hoc workshops but beware, it will also somewhat be the noisy hacklab, as it will also be open to regular CCC folks when we are there.

fux und ganz

The Fux also has a cantina called "fux und ganz" which will serve us (and other visitors of the venue) lunch and dinner. Please register by May 1st to ease their planning as well!

Accommodation

The Mini-DebConf will take place in the center of Hamburg, so there are many accommodation options available. Some suggestions for housing options are given in the wiki and you might want to share your findings there too.

There is also limited on-site accommodation available: dock europe provides 36 beds in double rooms in the venue. The rooms are nice, small, clean, have a locker, wireless and are just one floor away from our main spaces. There's also a sufficient number of showers and toilets, and breakfast is available (for those 36 people) as well.

Thankfully nattie has agreed to be in charge of distributing these 36 beds, so please mail her if you want a bed. The beds will be distributed from two buckets on a first-come, first-served basis:

  • 24 beds for anyone, first come, first served, costing 27€/night.
  • 12 beds for the video team, front desk, talk meisters, etc., also first come, first served; nattie decides whether you indeed qualify. These also cost 27€/night.

Sponsors wanted

Making a Mini DebConf happen costs money: we need to rent the venue and video gear, we hope to pay for hard-working volunteers' lunch and dinner, and maybe also sponsor some travel. So we really appreciate companies willing to support this meeting!

We have three sponsor categories:

  • 1000€ = sponsor, listed as such in all material.

  • 2500€ = gold sponsor, listed as such in all material, logo featured in the videos.

  • 5000€ = platinum sponsor, listed as such prominently in all material, logo featured prominently in the videos

Plus, there's corporate registration as an option too, where we will charge you 250€ for the registration. Please contact us if you are interested in that!

More volunteers wanted

Some things still need more helping hands:

So far we thankfully have Nattie volunteering for frontdesk duties. In turn, she'd be very thankful if some people joined her in staffing the frontdesk, because shared work is more joyful!

The same goes for the video team. So far, we know the gear will arrive, and probably a person who knows how to operate it, but that's it. Please consider making sure we'll have videos released! ;) (And streams hopefully too.)

Also, please consider submitting a talk or holding a workshop! cfp@minidebconfhamburg.debian.net is waiting for you!

Finally, we would also very much welcome a nice logo and t-shirts with it being printed. Can you come up with a logo? Print shirts?

Contact

If you want to help, need help, have comments or want to contact us for other reasons, there are several ways:

Looking forward to see you in Hamburg!

Holger, for the 2018 Mini DebConf Hamburg team

15 February, 2018 04:16PM

hackergotchi for Michal &#268;iha&#345;

Michal Čihař

Weblate 2.19

Weblate 2.19 has been released today. The biggest improvement is probably the addons to customize translation workflow, but there are some other enhancements as well.

Full list of changes:

  • Fixed imports across some file formats.
  • Display human friendly browser information in audit log.
  • Added TMX exporter for files.
  • Various performance improvements for loading translation files.
  • Added option to disable access management in Weblate in favor of Django one.
  • Improved glossary lookup speed for large strings.
  • Compatibility with django_auth_ldap 1.3.0.
  • Configuration errors are now stored and reported persistently.
  • Honor ignore flags in whitespace autofixer.
  • Improved compatibility with some Subversion setups.
  • Improved built in machine translation service.
  • Added support for SAP Translation Hub service.
  • Added support for Microsoft Terminology service.
  • Removed support for advertisement in notification mails.
  • Improved translation progress reporting at language level.
  • Improved support for different plural formulas.
  • Added support for Subversion repositories not using stdlayout.
  • Added addons to customize translation workflows.

If you are upgrading from older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org, the code is hosted on Github. If you are curious how it looks, you can try it out on the demo server. Weblate is also being used on https://hosted.weblate.org/ as official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations, thanks to everybody who has helped so far! The roadmap for the next release is just being prepared; you can influence this by expressing support for individual issues, either by comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

15 February, 2018 03:30PM

Arturo Borrero González

New round of GSoC: 2018

GSoC goodies

The other day Google published the list of accepted projects for this year's round of Google Summer of Code. Many organizations were accepted, and there are 3 that are especially interesting to me: Netfilter, Wikimedia Foundation and Debian.

The GSoC initiative is a great opportunity to enter the professional FLOSS world, getting to know more about your favorite project, having a mentor and earning a stipend along the way.

The Netfilter project (check the dashboard) has published a list of ideas for students to work on. I will likely be mentoring here. Be aware that students who submit patches as part of the warmup period are more likely to be selected.

The Debian project (check the dashboard) also has a great list of proposals in a variety of different technologies, from packaging to Android, also with some web and backend project ideas. It is great to see Debian participating in GSoC again; last year we weren't present.

The Wikimedia Foundation (check the dashboard) has around 8 projects for students to work on, also with different scopes, including an interesting project for improving Toolforge.

So, students, don’t be afraid to participate! There are a lot of projects, different technologies and people to work with, so there should be one waiting for you.

15 February, 2018 09:22AM

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Xerox printers on Debian - an update

This blog post is 90% rant and 10% update on how I made our new Xerox Altalink printer work on Debian. Skip the rant by clicking here.

I think the lamest part of my current job is that we heavily rely on multifunction printers. We need to print a high volume of complicated documents on demand. You know, 1500 copies of a color booklet printed on 11x17 paper folded in 3 stapled in the middle kind of stuff.

Pardon my French, but printers suck big time. The printer market is an oligopoly clusterfuck and it seems it keeps getting worse (looking at you, Fuji-Xerox merger). None of the drivers support Linux properly, all the printers are big piles of proprietary code and somehow the corporations selling them keep adding features no one needs.

My reaction when I learnt we needed to replace our current printer

Good job Fuji Xerox, the new shiny printer you forced us to rent [1] comes with an app that lets you print directly from Dropbox. I guess they expect people:

  • not to use a password manager
  • not to use long randomly generated passwords
  • not to use 2FA
  • to use proprietary services like Dropbox

Oh wait, I guess that's what people actually do. My bad.

As for fixing their stupid bugs, try again.

Xerox Altalink C8045

Rant aside, here's a short follow-up on the blog post I wrote two years ago on how to install Xerox printers on Debian.

As far as I can see, the Xerox Altalink C8045 seems to be working properly with the x86_64-5.20.606.3946 version of the Xerox driver for Debian. Make sure that you use the bi-directional setup or else you might have trouble. Sadly, all the gimmicks I wrote about two years ago still stand.

If you find that interesting, I also rewrote our Puppet module that manages all of this for you to be compatible with Puppet 4. Yay?


[1] Long story short, we used to have a Xerox ColorQube printer that used wax instead of toner to print, but Xerox doesn't support them anymore and bought back our support contract. E-waste FTW.

15 February, 2018 05:00AM by Louis-Philippe Véronneau

February 14, 2018

Bits from Debian

hackergotchi for Daniel Silverstone

Daniel Silverstone

Epic Journey in my Ioniq

This weekend just-gone was my father's 90th birthday, so since we don't go to Wales very often, we figured we should head down to visit. As this would be our first major journey in the Ioniq (I've done Manchester to Cambridge a few times now, but this is almost 3 times further) we took an additional day off (Friday) so that we could easily get from our home in southern Manchester to my parent's house in St Davids, Pembrokeshire.

I am not someone to enter into these experiences lightly. I spent several hours consulting with zap-map and also Google maps, looking at chargers en-route. In the UK there's a significant number of chargers on the motorway system provided by Ecotricity but this infrastructure is not pervasive and doesn't really extend beyond the motorway service stations (and some IKEAs). I made my plan for the journey to Wales, ensuring that each planned stop was simply the first in a line of possible stops in order that if something went wrong, I'd have enough charge to move forwards from there.

First leg took us from our home to the Ecotricity charger at Hilton Park Southbound services. My good and dear friend Tim very kindly offered to charge us for free and he used one of his fifty-two free charges to top us up. This went flawlessly and set us in a very good mood for the journey to come. Since we would then have a very long jump from the M5 to the M4, we decided that our second charge would be to top-up at Chateau Impney which has a Polar charger. Unfortunately by this point the wind and rain were up and the charger failed to work properly, eventually telling us that its input voltages were unbalanced and then powering itself off entirely. We decided to head to the other Polar charger at Webbs of Wychbold. That charger started up fine so we headed in, had a loo visit, grabbed some lunch, watched the terrapins swimming around, and when a sufficient time had passed for the car to charge, headed back only to discover that it had emergency-stopped mere moments after we'd left the car, so we had no charge for the entire time we were there. No matter we thought - we'd sit in the car while it charged, and eat our lunch. Sadly we were defeated, the charger repeatedly e-stopped, so we gave up.

Our fallback position was to charge at the Strensham services at the M5/M50 junction. Sadly the southbound services have no chargers at all (they're under a lot of building work right now, so perhaps that's part of it) so we had to get to the northbound services and charge there. That charge went fine, and with a £2.85 bill from Ecotricity automatically paid, we snuck our way along back-roads and secret junctions to the southbound services, and headed off down the M50. Sadly we're now a lot later than we should have been, having lost about ninety minutes in total to the wasted time at the two Polar chargers, which meant that we hit a lot of congestion at Monmouth and around Newport on the M4.

We made it to Cardiff Gate where we plugged in, set it charging, and then headed into the service area where we happened to meet my younger brother who was heading home too. He went off, and I looked at the Ecotricity app on my phone which had decided at that point that I wasn't charging at all. I went out to check, the charger was still delivering current, so, chalking it up to a bit of a de-sync, we went in, had a coffee and a relax, and then headed out to the car to wait for it to finish charging. It finished, we unplugged, and headed out. But to this day I've not been charged by Ecotricity for that so "yay".

Our final stop along the M4 was Swansea West. Unfortunately the Pont Abraham services don't have a rapid charger compatible with my car so we have to stop earlier. Fortunately there are three chargers at Swansea West. Unfortunately the CCS was plugged into an i3 which wasn't charging but was set to keep the connector locked in so I couldn't snarf it. I plugged into a slower (AC) charger to get a bit of juice while we went in to wait for the i3 owner to leave. I nipped out after 10 minutes and conveniently they'd gone, so I swapped the car over to the CCS charger and set it going. 37 minutes later and that charger had actually worked, charged me up, and charged me a princely £5.52 for the privilege.

From here we nipped along the A48/A40, dropped in on my sister-in-law to collect a gift for my father, and then got to St Davids at around nine pm. A mere eleven hours after we left Manchester. By comparison, when I drove a Passat, I would leave Manchester at 3pm, drive 100 fewer miles, and arrive at around 9pm, having had a few nice stops for loo breaks and dinner.

Saturday it had been raining quite hard overnight, St Davids has one (count it, ONE) charger compatible with my car (type 2 in this instance) but fortunately it's free to use (please make donation in the tourist-information-office). Unfortunately after the rain, the parking space next to the charger was under a non-trivial amount of water, so poor Rob had to mountaineer next to the charger to plug in without drowning. We set the car charging and went to have a nice breakfast in St Davids. A few hours later, I wandered back up to the car park with Rob and we unplugged and retrieved the car. Top marks for the charger, but a pity the space was a swimming pool.

Sunday morning dawned bright and early, and we headed out to Llandewi Velfrey to visit my brother who runs Silverstone Green Energy. We topped up there and then headed to Sarn Parc at his suggestion. It's a nice service area; unfortunately the AC/Chademo charger was giving 'Remote Start Error' so the Leaf there was on the Chademo/CCS charger. However as luck would have it, that charger was on free-vend, so once we got on the charger (30m later or so) we got to charge for free. Thanks Ecotricity.

From Sarn Parc, we decided that since we'd had such a good experience at Strensham North, we'd go directly there. We arrived with 18m to spare in the "tank" but unfortunately the CCS/Chademo charger was broken (with an error along the lines of PWB1 is 0x0008) and there was an eGolf there which also had wanted to use CCS but had to charge slowly in order to get enough range to get to another charger. As a result we had to sit there for an hour to wait for him to have enough in his 'tank' that he was prepared to let us charge. We then got a "full" 45 minute charge (£1.56, 5.2kWh) which gave us enough to get north again to Chateau Impney (which had been marked working again on Zap-map).

The charge there worked fine (yay) so we drove on north to Keele services. We arrived in the snow/hail/rain (yay northern weather) found the charger, plugged in, tried to set it going using the app, and we were told "Unable to contact charger". So I went through the process again and we were told "Charger in use". It bloody well wasn't in use, because I was plugged into it and it definitely wasn't charging my car. We waited for the rain to die down again and looked at the charger, which at that moment said "Connect vehicle" and then it started up charging the car (yay). We headed in for a loo and dinner break. Unfortunately the app couldn't report on progress but it had started charging so we were confident we'd be fine. More fool us. It had stopped charging moments after we'd left the car and once again we wasted time because it wasn't charging when we thought it was. We returned, discovered the car hadn't charged, but then discovered the charger had switched to free-vend so we charged up again for free, but that was another 40 minute wait.

Finally we got home (via a short stop at the pub) and on Monday I popped along to a GMEV rapid charger, and it worked perfectly as it has every single time I've used it.

So, in conclusion, the journey was reasonably cheap, which is nice, but we had two failed charge attempts on Polar, and several Ecotricity cockups (though they did mostly end up in our favour in terms of money) which cost us around 90 to 120 minutes in each direction. The driving itself (in the Ioniq) was fine and actually meant I wasn't frazzled and unhappy the whole time, but the charging infrastructure is simply not good enough. It's unreliable, Ecotricity don't have support lines at the weekend (or evenings/early mornings), and is far too sparse to be useful when one wishes to travel somewhere not on the motorway network. If I'd tried to drive my usual route, I'd have had to spend four hours in Aberystwyth using my granny charger to put about 40 miles in the tank from a public 3 pin socket.

14 February, 2018 09:18PM by Daniel Silverstone

hackergotchi for Erich Schubert

Erich Schubert

Online Dating Cannot Work Well

Daniel Pocock (via planet.debian.org) points out what tracking services online dating services expose you to. This certainly is an issue, and of course to be expected by a free service (you are the product – advertisers are the customer). Oh, and in case you forgot already: some sites employ fake profiles to retain you as long as possible on their site… But I’d like to point out how deeply flawed online dating is. It is surprising that some people meet successfully there; and I am not surprised that so many dates turn out to not work: they earn money if you remain single, and waste time on their site, not if you are successful.

I am clearly not an expert on online dating, because I am happily married. I met my wife in a very classic setting: offline, in my extended social circle. The motivation for this post is that I am concerned about seeing people waste their time. If you want to improve your life, eliminate apps and websites that are just distraction! And these days, we see more online/app distraction than ever. Smartphone zombie apocalypse.

There are some obvious issues with online dating:

  • you treat people as if they were objects in an online shop. If you want to find a significant other, don’t treat him/her like a shoe.
  • you get too many choices. So if one turns out to be just 99% okay, you will pass them over in favor of another, supposedly 100%, potential match.
  • you get to choose exactly what you want. No need to be tolerant. And of course you know exactly what fits you, don’t you? No, actually we are pretty bad at that, and a good relationship will require you to be tolerant.
  • inflated expectations: in reality, the supposed 100% matches turn out to be more like 55% matches, because the picture was photoshopped, they are too nervous, and their profile was written by a ghostwriter. Oh, and some of them will simply be chatbots, or employees, or already married, or …. So they don’t even exist.
  • because you, too, are just a 99% match, everybody seems to prefer someone else, and you are only the second choice, if chosen at all. You don’t get picked.
  • you will never be comfortable on the actual first date. Because of inflated expectations, it will be disappointing, and you just want to get away.
  • the companies earn money if you are online at their site, not if you are successful.

And yes, there is scientific research backing up these things. For example:

Online Dating: A Critical Analysis From the Perspective of Psychological Science
Eli J. Finkel, Paul W. Eastwick, Benjamin R. Karney, Harry T. Reis, Susan Sprecher, Psychological Science in the Public Interest, 13(1), 3-66.
“the ready access to a large pool of potential partners can elicit an evaluative, assessment-oriented mindset that leads online daters to objectify potential partners and might even undermine their willingness to commit to one of them”

and

Dating preferences and meeting opportunities in mate choice decisions
Belot, Michèle, and Marco Francesconi, Journal of Human Resources 48.2 (2013): 474-508.
“[in speed dating] suggesting that a highly popular individual is almost 5 times more likely to get a date with another highly popular mate than with a less popular individual”

which means that if you are not in the top most attractive accounts, you probably just get swiped away.

If you want to maximize your chances of meeting someone, you probably have to use this approach (vimeo.com).

And you can find many more reports on “Generation Tinder” and its hard time finding partners because of inflated expectations. It is also because these apps and online services make you unhappy, and that in turn makes you unattractive.

Instead, I suggest you extend your offline social circle.

For example, I used to go dancing a lot. Not the “drunken, music too loud to talk” kind, but ballroom. Not only can this drastically improve your social and communication skills (in particular non-verbal communication, but also just being natural rather than nervous), it also provides great opportunities to meet new people with a shared interest. And quite a lot of my friends from dancing married a partner they first met at a dance.

For others, some other social sport does the job (although many find chit-chat at the gym or yoga annoying). Walk your dog in a new area - you may meet some new faces there. But it is best if you get to talk. Apparently, some people love meeting strangers for cooking (where you’d cook and eat antipasti, main dishes, and dessert in different places). Go to some board game nights, etc. I think anything will do that lets you meet new people with at least some shared interest or social connection, and where you are not going just for the dating (because then you’ll be stressed out), but where you can relax. If you are authentically relaxed and happy, this will make you attractive. And hey, maybe someone will want to meet you a second time.

Spending all that time chatting or swiping online certainly will not improve your social skills for when you actually meet someone face-to-face… it is the worst thing to do, unless you are already a very open person who easily chats up strangers (and then you won’t need it anyway).

Forget all that online crap you get pulled into all the time. Don’t let technology hijack your social life and make you addicted to scrolling through online profiles of people you are never going to meet. Don’t be the product, and don’t let your significant other be one either.

They earn money if you spend time on their website, not if you meet your significant other.

So don’t expect them to work. They don’t need to, and they don’t intend to. Dating is something you need to do offline.

14 February, 2018 07:46PM by Erich Schubert

hackergotchi for Sean Whitton

Sean Whitton

A better defence against the evil maid attack on a laptop

The attack

Laptops need full disc encryption. Indeed, my university has explicitly banned us keeping any information on our students’ grades on our laptops unless we use FDE. Not even comments on essays, apparently, as that counts as grade information.

There must, though, exist unencrypted code that tells the computer how to decrypt everything else. Otherwise you can’t turn your laptop on. If you’re only trying to protect your data from your laptop being permanently stolen, it’s fine for this to be in an unencrypted partition on the laptop’s HDD: when your laptop is stolen, the data you are trying to protect remains encrypted.

An evil maid attack involves the replacement of this unencrypted code with something malicious – perhaps it e-mails data from the encrypted partition to someone who wants it. Of course, if someone has access to your laptop without your knowledge, they can always work around any security scheme you might develop. They might add a hardware keylogger, for example. So why might we want to try to protect against the evil maid attack – haven’t we already lost if someone is able to modify the contents of the unencrypted partition of our hard drive?

Well, the thing about the evil maid attack is that it is very quick and easy to modify the contents of a laptop’s hard drive, as compared to other security-bypassing hardware modifications, which take much longer to perform without detection. Users expect to be able to easily replace their hard drives, so they are usually accessible with the removal of just a single screw. It could take less than five minutes to deploy the evil maid payload.

Laptops are often left unattended for the two or three minutes it would take to deliver an evil maid payload; they are less often left for long enough that deeper hardware modifications could be made. So it is worth taking steps to prevent evil maid attacks.

The best solution

UEFI Secure Boot. But

  • Debian does not support this yet; and
  • my laptop does not have the hardware support anyway.

My current solution

The standard solution is to put the unencrypted hard drive partition on a USB key, and keep that on one’s keychain. Then there is no unencrypted code on the laptop at all; you boot from the USB, which decrypts the root partition, and then you unmount the USB key.

Problem with this solution

The big problem with this is kernel and bootloader upgrades. You have to ensure your USB key is mounted before your package manager upgrades the kernel. This effectively rules out using unattended-upgrades to get security upgrades for the kernel. They must be applied manually. Further, you probably want a backup USB key with the kernel and bootloader on it. Now you have to upgrade both, using commands like apt-get --reinstall.

This is a real maintenance burden and is likely to delay your security upgrades. And the whole point of putting /boot on a USB key was to improve security!

Something better

Recent GRUB is able to decrypt partitions itself. So /boot can reside within your encrypted root partition. GRUB’s setup scripts are smart enough that you can switch over to this in just a few steps:

  1. Move contents of /boot from USB drive into root partition.
  2. Remove/comment /boot from /etc/fstab.
  3. Set GRUB_ENABLE_CRYPTODISK=y in /etc/default/grub.
  4. grub-install /dev/sda
  5. update-grub
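
A condensed sketch of those five steps, assuming (as in step 4 above) that the internal drive is /dev/sda:

# 1. copy the contents of /boot from the USB key into /boot on the root filesystem
# 2. remove or comment out the /boot entry in /etc/fstab
# 3. edit /etc/default/grub and add the line GRUB_ENABLE_CRYPTODISK=y
$ sudo grub-install /dev/sda   # 4. reinstall GRUB; the core image now includes cryptodisk support
$ sudo update-grub             # 5. regenerate grub.cfg, which now lives inside the encrypted root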

It’s still true that there must be unencrypted code that knows how to decrypt the root partition. Where does that go? grub-install is the command that installs that code; where does it put it? The ArchLinux wiki has the answer. If you’re using EFI, it will go in the EFI system partition (ESP). Under BIOS, if your drive is formatted with an MBR, it goes in the “post-MBR gap” between the MBR and the first partition (on drives partitioned with very old tools, this post-MBR gap might be too small to accommodate the larger GRUB image that contains the decryption code; however, drives partitioned with recent tools that “support 1 MiB partition alignment” (including the Debian stretch installer) will be fine – to check, run fdisk -l and look at where your first partition starts). Under BIOS, if your drive is formatted with a GPT, you have to add a 1MiB BIOS boot partition, and the code goes there.
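
As a rough illustration of that check (the device name and the sizes below are made up), a drive partitioned with 1 MiB alignment reports something like:

$ sudo fdisk -l /dev/sda
...
Device     Boot Start       End   Sectors   Size Id Type
/dev/sda1  *     2048 976773167 976771120 465.8G 83 Linux

With 512-byte sectors, a first partition starting at sector 2048 leaves a 1 MiB post-MBR gap, which is plenty of room for the larger GRUB image containing the decryption code.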

We’ve resolved the issue of package updates modifying /boot, which now resides in the encrypted root partition. However, this is not all of the picture. If we are using EFI, we now have unencrypted code in the EFI system partition which is subject to the evil maid attack. And if we respond by moving the EFI system partition onto a USB drive, the package update problem reoccurs: the EFI system partition will not always be mounted. If we are using BIOS, the evil maid attack reoccurs, since it is not that much harder to modify the code in the post-MBR gap or the BIOS boot partition.

My proposed solution, pending UEFI Secure Boot, is to use BIOS boot with an MBR partition table, keep /boot in the encrypted root partition and grub-install to the USB drive. Then dpkg-reconfigure grub-pc and tell it never to grub-install to anything, and set the laptop’s boot order to never try to boot from the HDD, only from USB. (There’s no real advantage to GPT with my simple partitioning setup, but I think that would also work fine.)
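
A sketch of that setup, assuming the internal drive is /dev/sda and the USB drive shows up as /dev/sdb (device names will differ on your machine):

$ sudo grub-install /dev/sdb      # install GRUB's boot code to the USB drive, not the HDD
$ sudo dpkg-reconfigure grub-pc   # when asked for install devices, select none
# finally, set the firmware boot order so the machine only ever boots from USB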

How does this solve the various issues I’ve raised? Well, the amount of code on the USB drive is very small (less than 1MiB), so it is much less likely to require manual updates. Kernel updates will modify /boot; only bootloader updates could require me to manually run grub-install to modify the contents of the post-MBR gap, and those are very infrequent.

Of course, the BIOS could be cracked such that the laptop will boot from the HDD no matter what USB I have plugged in, or even only when some USB is plugged in, but that’s a hardware modification beyond the evil maid, against which we are not trying to protect.

As a nice bonus, the USB drive’s single FAT32 partition is now usable for sneakernet.

14 February, 2018 07:22PM

Renata D'Avila

Debugging MoinMoin and using an IDE

Debugging

When I was creating the cal_action, I didn't quite know how to debug MoinMoin. Could I use pudb with the wiki? I wasn't sure how. To figure out if the code I was writing worked, I ended up consulting the error logs from Apache. It sort of worked, but of course that was very far from ideal. What if I wanted to check something that wasn't an error?

Well, MoinMoin supposedly has a logging module, which lives in moin-1.V.V/wiki/config/logging/, but I simply couldn't get it to work the way I wanted.

I searched some more and found a guide on setting up Winpdb Source Level Debugger, but I don't use Windows (really, where is the GNU/Linux guide to debug?), so that didn't help. 😭

But... MoinMoin does offer a guide on setting up a development environment with Eclipse that I ended up following.

Using an IDE

Up until this point, most of the code I had created in Python were simple scripts that could be run and debugged in the terminal. I had used IDLE while I was taking the Python para Zumbis (Python for Zombies) course, but other than that, I just used a code editor (Sublime, then Vim and, finally, Atom) when programming in Python.

When I was taking a tech vocational course, I used Eclipse, an Integrated Development Environment (IDE), to code in Java, but that was it. After I passed the course and didn't need to code in Java again, I simply let go of the IDE.

As it turns out, going back to Eclipse, along with the PyDev plugin - both free software - was what actually helped me in debugging and figuring my way around the MoinMoin macro.

The steps I had to take:

  1. Install eclipse-pydev and its dependencies using Synaptic (Debian package manager)
  2. Define Python 2.7 as the interpreter in preferences
  3. Create a new workspace
  4. Create a new project
  5. Import the installed MoinMoin into the new project
  6. Configure the new wiki
  7. Run wikiserver.py

To develop the plugins (macro and actions):

  1. Create a new workdir for the plugins that goes alongside Moin
  2. Copy the contents from the plugin directory of the wiki to the new directory

On step 2, though, instead of copying I just created a symbolic link to the files I had been working on, which were in another directory. It would make no sense to have two copies of the same file in different places on the computer - besides, it would just complicate tracking what changes had been made and where. To create a symbolic link:

$ ln -s PATH-TO-THE-ORIGINAL-FILE PATH-TO-THE-DESTINATION/FILE_ON_DESTINATION

More on symbolic links can be found using the command man ln on Debian's terminal.
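
For example, linking an action file kept in a separate project directory into the wiki's plugin directory might look like this (both paths here are invented for illustration):

$ ln -s ~/projects/foss-events/cal_action.py ~/moin-plugins/action/cal_action.py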

With the Eclipse console, I could use print help(request) to figure out what methods would be available to me with the request provided by the macro. With this, I finally began to figure out how to create the response we want (without returning the whole wiki page with it, just the event information in the icalendar format).

Eclipse IDE running the wikiserver.py from MoinMoin and showing the debug output in a terminal

If you don't know what I mean by request/response: in simple terms, when you click something on a webpage (for instance, my ical link at the bottom of the calendar) in your internet browser, you are requesting a resource (the icalendar file). It's up to the server to respond with the appropriate resource (the file) or with a status code explaining why it can't fulfill your request (for instance, you get a 404 error when the page - the resource - you're trying to access - requesting - can't be found).

A simplified diagram of a static web server showing the request-response cycle.

Here you can find more information in the Client-Server overview by the Mozilla web docs.

So now I'm working on constructing that response. Thanks to the Eclipse console, I now know that if I just pass the return value of my method to the response.write() method, I get a TypeError: Expected bytes. I will probably have to transform the result of the method that generates the icalendar into bytes instead of InstanceClass. Well, at least I can say that the choices made when writing the ExportPDF macro are much clearer to me now.

14 February, 2018 07:19PM by Renata

hackergotchi for Daniel Pocock

Daniel Pocock

What is the best online dating site and the best way to use it?

Somebody recently shared this with me: it is what happens when you attempt to access Parship, an online dating site, from the anonymous Tor Browser.

Experian is basically a private spy agency. Their website boasts about how they can:

  • Know who your customers are regardless of channel or device
  • Know where and how to reach your customers with optimal messages
  • Create and deliver exceptional experiences every time

Is that third objective, an "exceptional experience", what you were hoping for with their dating site honey trap? You are out of luck: you are not the customer, you are the product.

When the Berlin wall came down, people were horrified at what they found in the archives of the Stasi. Don't companies like Experian and Facebook gather far more data than this?

So can you succeed with online dating?

There are only three strategies that are worth mentioning:

  • Access sites you can't trust (which includes all dating sites, whether free or paid for) using anonymous services like Tor Browser and anonymous email addresses. Use fake photos and fake all other data. Don't send your real phone number through the messaging or chat facility in any of these sites because they can use that to match your anonymous account to a real identity: instead, get an extra SIM card that you pay for and top up with cash. One person told me they tried this for a month as an experiment, expediently cutting and pasting a message to each contact to arrange a meeting for coffee. At each date they would give the other person a card that apologized for their completely fake profile photos and offered to start over now that they could communicate beyond the prying eyes of the corporation.
  • Join online communities that are not primarily about dating and if a relationship comes naturally, it is a bonus.
  • If you really care about your future partner and don't want your photo to be a piece of bait used to exploit and oppress them, why not expand your real-world activities?

14 February, 2018 05:25PM by Daniel.Pocock

hackergotchi for Jo Shields

Jo Shields

Packaging is hard. Packager-friendly is harder.

Releasing software is no small feat, especially in 2018. You could just upload your source code somewhere (a Git, Subversion, CVS, etc, repo – or tarballs on Sourceforge, or whatever), but it matters what that source looks like and how easy it is to consume. What does the required build environment look like? Are there any dependencies on other software, and if so, which versions? What if the versions don’t match exactly?

Most languages feature solutions to the build environment dependency – Ruby has Gems, Perl has CPAN, Java has Maven. You distribute a manifest with your source, detailing the versions of the dependencies which work, and users who download your source can just use those.

Then, however, we have distributions. If openSUSE or Debian wants to include your software, then it’s not just a case of calling into CPAN during the packaging process – distribution builds need to be repeatable, and work offline. And it’s not feasible for packagers to look after 30 versions of every library – generally a distribution will contain 1-3 versions of a given library, and all software in the distribution will be altered one way or another to build against their version of things. It’s a long, slow, arduous process.

Life is easier for distribution packagers the more the released software adheres to their perfect model – no non-source files in the distribution, minimal or well-formed dependencies on third parties, swathes of #ifdefs to handle changes in dependency APIs between versions, etc.

Problem is, this can actively work against upstream development.

Developers love npm or NuGet because it’s so easy to consume – asking them to abandon those tools is a significant impediment to developer flow. And it doesn’t scale – maybe a friendly upstream can drop one or two dependencies. But 10? 100? If you’re consuming a LOT of packages via the language package manager, as a developer, being told “stop doing that” isn’t just going to slow you down – it’s going to require a monumental engineering effort. And there’s the other side effect – moving from Yarn or Pip to a series of separate download/build/install steps will slow down CI significantly – and if your project takes hours to build as-is, slowing it down is not going to improve the project.

Therein lies the rub. When a project has limited developer time allocated to it, spending that time on an effort which will literally make development harder and worse, for the benefit of distribution maintainers, is a hard sell.

So, a concrete example: MonoDevelop. MD in Debian is pretty old. Why isn’t it newer? Well, because the build system moved so far away from a packager ideal that it’s basically impossible at current community & company staffing levels to claw it back. Build-time dependency downloads went from a half dozen in the 5.x era (somewhat easily patched away in distributions) to over 110 today. The underlying build system changed from XBuild (Mono’s reimplementation of Microsoft MSBuild, a build system for Visual Studio projects) to real MSBuild (now FOSS, but an enormous shipping container of worms of its own when it comes to distribution-shippable releases, for all the same reasons & worse). It’s significant work for the MonoDevelop team to spend time on ensuring all their project files work on XBuild with Mono’s compiler, in addition to MSBuild with Microsoft’s compiler (and any mix thereof). It’s significant work to strip out the use of NuGet and Paket packages – especially when their primary OS, macOS, doesn’t have “distribution packages” to depend on.

And then there’s the integration testing problem. When a distribution starts messing with your dependencies, all your QA goes out the window – users are getting a combination of literally hundreds of pieces of software which might carry your app’s label, but you have no idea what the end result of that combination is. My usual anecdote here is when Ubuntu shipped Banshee built against a new, not-regression-tested version of SQLite, which caused a huge performance regression in random playback. When a distribution ships a broken version of an app with your name on it – broken by their actions, because you invested significant engineering resources in enabling them to do so – users won’t blame the distribution, they’ll blame you.

Releasing software is hard.

14 February, 2018 11:21AM by directhex

Petter Reinholdtsen

Using VLC to stream bittorrent sources

A few days ago, a new major version of VLC was announced, and I decided to check out if it now supported streaming over bittorrent and webtorrent. Bittorrent is one of the most efficient ways to distribute large files on the Internet, and Webtorrent is a variant of Bittorrent using WebRTC as its transport channel, allowing web pages to stream and share files using the same technique. The network protocols are similar but not identical, so a client supporting one of them can not talk to a client supporting the other. I was a bit surprised by what I discovered when I started to look. Looking at the release notes did not help answer this question, so I started searching the web. I found several news articles from 2013, most of them tracing the news back to Torrentfreak ("Open Source Giant VLC Mulls BitTorrent Streaming Support"), about an initiative to pay someone to create a VLC patch for bittorrent support. To figure out what happened with this initiative, I headed over to the #videolan IRC channel and asked if there were some bug or feature request tickets tracking such a feature. I got an answer from lead developer Jean-Baptiste Kempf, telling me that there was a patch but neither he nor anyone else knew where it was. So I searched a bit more, and came across an independent VLC plugin to add bittorrent support, created by Johan Gunnarsson in 2016/2017. Again according to Jean-Baptiste, this is not the patch he was talking about.

Anyway, to test the plugin, I made a working Debian package from the git repository, with some modifications. After installing this package, I could stream videos from The Internet Archive using VLC commands like this:

vlc https://archive.org/download/LoveNest/LoveNest_archive.torrent

The plugin is supposed to handle magnet links too, but since The Internet Archive does not have magnet links and I did not want to spend time tracking down another source, I have not tested it. It can take quite a while before the video starts playing, without any indication from VLC of what is going on. It took 10-20 seconds when I measured it. Sometimes the plugin seems unable to find the correct video file to play, and shows the metadata XML file name in the VLC status line instead. I have no idea why.
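
For reference, a magnet link would presumably be passed the same way (untested, as noted above; the info hash below is just a placeholder):

vlc "magnet:?xt=urn:btih:INFOHASH"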

I have created a request for a new package in Debian (RFP) and asked if the upstream author is willing to help make this happen. Now we wait to see what comes out of this. I do not want to maintain a package that is not maintained upstream, nor do I really have time to maintain more packages myself, so I might leave it at this. But I really hope someone steps up to do the packaging, and that upstream is still maintaining the source. If you want to help, please update the RFP request or the upstream issue.

I have not found any traces of webtorrent support for VLC.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

14 February, 2018 07:00AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

BH 1.66.0-1

A new release of the BH package arrived on CRAN a little earlier: now at release 1.66.0-1. BH provides a sizeable portion of the Boost C++ libraries as a set of template headers for use by R, possibly with Rcpp as well as other packages.

This release upgrades the version of Boost to the recently released Boost 1.66.0, and also adds one exciting new library: Boost compute, which provides a C++ interface to multi-core CPU and GPGPU computing platforms based on OpenCL.

Besides the usual small patches we need to make (i.e., we cannot call abort() etc. pp., to satisfy CRAN Policy), we made one significant new change in response to a relatively recent CRAN Policy change: compiler diagnostics are no longer suppressed for clang and g++. This may make builds somewhat noisy, so we may all want to keep our ~/.R/Makevars finely tuned to suppress a bunch of warnings...
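
For example, a ~/.R/Makevars along these lines (the particular flags are only an illustration, not a recommendation) keeps the build output quieter:

$ cat ~/.R/Makevars
CXXFLAGS += -Wno-unused-variable -Wno-unused-function
CXX11FLAGS += -Wno-unused-variable -Wno-unused-function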

Changes in version 1.66.0-1 (2018-02-12)

  • Upgraded to Boost 1.66.0 (plus the few local tweaks)

  • Added Boost compute (as requested in #16)

Via CRANberries, there is a diffstat report relative to the previous release.

Comments and suggestions are welcome via the mailing list or the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

14 February, 2018 01:37AM

February 13, 2018

hackergotchi for Gunnar Wolf

Gunnar Wolf

Is it an upgrade, or a sidegrade?

I first bought a netbook shortly after the term was coined, in 2008. I got one of the original 8.9" Acer Aspire Ones. Around 2010, my Dell laptop was stolen, so the AAO ended up being my main computer at home — and my favorite computer for convenience, not just for when I needed to travel light. Back then, Regina used to work in a national park and had to cross her province (~6hr by a combination of buses) twice a week, so she had one as well. When she came to Mexico, she naturally brought it along. Over the years, we bought new batteries and chargers, as they died over time...

Five years later, it started feeling too slow, and I remember starting to have keyboard issues. Time for a change.

Sadly, 9" computers were no longer to be found. Even though I am a touch typist, and a big person, I miss several things about the Acer's tiny keyboard (such as being able to cover the diagonal with a single hand, something useful when you are typing while standing). But, anyway, I got the closest I could to it — In July 2013, I bought the successor to the Acer Aspire One: An 10.5" Acer Aspire One Nowadays, the name that used to identify just the smallest of the Acer Family brethen covers at least up to 15.6" (which is not exactly helpful IMO).

Anyway, for close to five years I was also very happy with it. A light laptop that wasn't a burden to me. Also, very important: a computer I could take with me without ever thinking twice. I often tell people I use a computer I got at a supermarket and that, bought new, cost me under US$300. That way, were I to lose it (say, if it falls from my bike, if somebody steals it, if it gets in any way damaged, whatever), it's not a big blow. Quite a difference from my two former laptops, both over US$1000.

I enjoyed this computer a lot. So much, I ended up buying four of them (mine, Regina's, and two for her family members).

Over the last few months, I have started being nagged by unresponsiveness, mainly in the browser (blame me, as I typically keep ~40 tabs open). Some keyboard issues... I had started thinking about replacing my trusty laptop. Would I want a newfangled laptop-and-tablet-in-one? Just thinking about fiddling with the OS to get it to recognize everything was a sort of turn-off...

This weekend we had an incident with spilled water. After opening and carefully ensuring the computer was dry, it would not turn on. Waited an hour or two, and no changes. Clear sign, a new computer is needed ☹

I went to a nearby store, looked at the offers... And, in part due to the attitude of the salesguy, I decided not to (installing Linux will void any warranty, WTF‽ In 2018‽). Came back home, and... My Acer works again!

But, I know five years are enough. I decided to keep looking for a replacement. After some hesitation, I decided to join what seems to be the elite group in Debian, and go for a refurbished Thinkpad X230.

And that's why I feel this is some sort of "sidegrade" — I am replacing a five-year-old computer with another five-year-old computer. Of course, a much sturdier one, built to last, originally sold as an "Ultrabook" (that is, meant for a higher user segment), and much more expandable... I'm paying ~US$250, which I'm comfortable with. Looking at several online forums, it is a model quite popular with "knowledgeable" people, AFAICT even now. I was hoping, just for the sake of it, to find an X230t (foldable and usable as a tablet)... But I won't put too much time into looking for it.

The Thinkpad is 12", which I expect will still fit in my smallish satchel I take to my classes. The machine looks as tweakable as I can expect. Spare parts for replacement are readily available. I have 4GB I bought for the Acer I will probably be able to carry on to this machine, so I'm ready with 8GB. I'm eager to feel the keyboard, as it's often repeated it's the best in the laptop world (although it's not the classic one anymore) I'm just considering to pop ~US$100 more and buy an SSD drive, and... Well, lets see how much does this new sidegrade make me smile!

13 February, 2018 07:43PM by gwolf

hackergotchi for Riku Voipio

Riku Voipio

Making sense of /proc/cpuinfo on ARM

Ever stared at the output of /proc/cpuinfo and wondered what the CPU actually is?

...
processor : 7
BogoMIPS : 2.40
Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
CPU implementer : 0x41
CPU architecture: 8
CPU variant : 0x0
CPU part : 0xd03
CPU revision : 3
Or maybe like:

$ cat /proc/cpuinfo
processor : 0
model name : ARMv7 Processor rev 2 (v7l)
BogoMIPS : 50.00
Features : half thumb fastmult vfp edsp thumbee vfpv3 tls idiva idivt vfpd32 lpae
CPU implementer : 0x56
CPU architecture: 7
CPU variant : 0x2
CPU part : 0x584
CPU revision : 2
...
The bits "CPU implementer" and "CPU part" could be mapped to human understandable strings. But the Kernel developers are heavily against the idea. Therefor, to the next idea: Parse in userspace. Turns out, there is a common tool almost everyone has installed does similar stuff. lscpu(1) from util-linux. So I proposed a patch to do ID mapping on arm/arm64 to util-linux, and it was accepted! So using lscpu from util-linux 2.32 (hopefully to be released soon) the above two systems look like:

Architecture: aarch64
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 2
NUMA node(s): 1
Vendor ID: ARM
Model: 3
Model name: Cortex-A53
Stepping: r0p3
CPU max MHz: 1200.0000
CPU min MHz: 208.0000
BogoMIPS: 2.40
L1d cache: unknown size
L1i cache: unknown size
L2 cache: unknown size
NUMA node0 CPU(s): 0-7
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
And

$ lscpu
Architecture: armv7l
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
Vendor ID: Marvell
Model: 2
Model name: PJ4B-MP
Stepping: 0x2
CPU max MHz: 1333.0000
CPU min MHz: 666.5000
BogoMIPS: 50.00
Flags: half thumb fastmult vfp edsp thumbee vfpv3 tls idiva idivt vfpd32 lpae
As we can see, lscpu is quite versatile and can show more information than just what is available in cpuinfo.

13 February, 2018 07:18PM by Riku Voipio (noreply@blogger.com)

Reproducible builds folks

Reproducible Builds: Weekly report #146

Here's what happened in the Reproducible Builds effort between Sunday February 4 and Saturday February 10 2018:

Media coverage

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

63 package reviews have been added, 26 have been updated and 19 have been removed this week, adding to our knowledge about identified issues.

A new issue type has been added:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (34)
  • Antonio Terceiro (1)
  • James Cowgill (1)
  • Matthias Klose (1)

diffoscope development

In addition, Juliana—our Outreachy intern—continues her work on parallel processing.

disorderfs development

jenkins.debian.net development

Misc.

This week's edition was written by Chris Lamb, Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

13 February, 2018 06:00PM

Petter Reinholdtsen

Version 3.1 of Cura, the 3D print slicer, is now in Debian

A new version of the 3D printer slicer software Cura, version 3.1.0, is now available in Debian Testing (aka Buster) and Debian Unstable (aka Sid). I hope you find it useful. It was uploaded over the last few days, and the last update will enter testing tomorrow. See the release notes for the list of bug fixes and new features. Version 3.2 was announced 6 days ago. We will try to get it into Debian as well.

More information related to 3D printing is available on the 3D printing and 3D printer wiki pages in Debian.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

13 February, 2018 05:20AM