Debian is a trademark of Software in the Public Interest, Inc. This site is operated independently in the spirit of point three of the Debian Social Contract, which tells us: “We will not hide problems.”


April 01, 2025


Colin Watson

Free software activity in March 2025

Most of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay.

OpenSSH

Changes in dropbear 2025.87 broke OpenSSH’s regression tests. I cherry-picked the fix.

I reviewed and merged patches from Luca Boccassi to send and accept the COLORTERM and NO_COLOR environment variables.
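Environment pass-through in OpenSSH is opt-in on both ends; as a sketch using the stock SendEnv/AcceptEnv mechanism (the merged patches may ship different defaults), the equivalent manual configuration would look like:

```
# Client side (~/.ssh/config): offer the variables to the server.
Host *
    SendEnv COLORTERM NO_COLOR

# Server side (/etc/ssh/sshd_config): allow them through.
AcceptEnv COLORTERM NO_COLOR
```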

Python team

Following up on last month, I fixed some more uscan errors:

  • python-ewokscore
  • python-ewoksdask
  • python-ewoksdata
  • python-ewoksorange
  • python-ewoksutils
  • python-processview
  • python-rsyncmanager

I upgraded these packages to new upstream versions:

  • bitstruct
  • django-modeltranslation (maintained by Freexian)
  • django-yarnpkg
  • flit
  • isort
  • jinja2 (fixing CVE-2025-27516)
  • mkdocstrings-python-legacy
  • mysql-connector-python (fixing CVE-2025-21548)
  • psycopg3
  • pydantic-extra-types
  • pydantic-settings
  • pytest-httpx (fixing a build failure with httpx 0.28)
  • python-argcomplete
  • python-cymem
  • python-djvulibre
  • python-ecdsa
  • python-expandvars
  • python-holidays
  • python-json-log-formatter
  • python-keycloak (fixing a build failure with httpx 0.28)
  • python-limits
  • python-mastodon (in the course of which I found #1101140 in blurhash-python and proposed a small cleanup to slidge)
  • python-model-bakery
  • python-multidict
  • python-pip
  • python-rsyncmanager
  • python-service-identity
  • python-setproctitle
  • python-telethon
  • python-trio
  • python-typing-extensions
  • responses
  • setuptools-scm
  • trove-classifiers
  • zope.testrunner

In bookworm-backports, I updated python-django to 3:4.2.19-1.

Although Debian’s upgrade to python-click 8.2.0 was reverted for the time being, I fixed a number of related problems anyway since we’re going to have to deal with it eventually:

dh-python dropped its dependency on python3-setuptools in 6.20250306, which was long overdue, but it had quite a bit of fallout; in most cases this was simply a question of adding build-dependencies on python3-setuptools, but in a few cases there was a missing build-dependency on python3-typing-extensions which had previously been pulled in as a dependency of python3-setuptools. I fixed these bugs resulting from this:
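In most cases the fix was a one- or two-line addition to debian/control; a hypothetical sketch (package shown is illustrative, not one of the actual bugs):

```
 Build-Depends:
  debhelper-compat (= 13),
  dh-python,
+ python3-setuptools,
+ python3-typing-extensions,
  python3-all,
```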

We agreed to remove python-pytest-flake8. In support of this, I removed unnecessary build-dependencies from pytest-pylint, python-proton-core, python-pyzipper, python-tatsu, python-tatsu-lts, and python-tinycss, and filed #1101178 on eccodes-python and #1101179 on rpmlint.

There was a dnspython autopkgtest regression on s390x. I independently tracked that down to a pylsqpack bug and came up with a reduced test case before realizing that Pranav P had already been working on it; we then worked together on it and I uploaded their patch to Debian.

I fixed various other build/test failures:

I enabled more tests in python-moto and contributed a supporting fix upstream.

I sponsored Maximilian Engelhardt to reintroduce zope.sqlalchemy.

I fixed various odds and ends of bugs:

I contributed a small documentation improvement to pybuild-autopkgtest(1).

Rust team

I upgraded rust-asn1 to 0.20.0.

Science team

I finally gave in and joined the Debian Science Team this month, since it often has a lot of overlap with the Python team, and Freexian maintains several packages under it.

I fixed a uscan error in hdf5-blosc (maintained by Freexian), and upgraded it to a new upstream version.

I fixed “python-vispy: missing dependency on numpy abi”.

Other bits and pieces

I fixed “debconf should automatically be noninteractive if input is /dev/null”.
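The behaviour can be approximated as: if standard input cannot prompt the user, fall back to the noninteractive frontend. A simplified sketch (the real frontend-selection logic in debconf is more involved, and the frontend names here are illustrative):

```python
import sys

def choose_frontend(stdin_is_tty):
    """Fall back to the noninteractive frontend when input cannot
    prompt the user (e.g. stdin redirected from /dev/null)."""
    return "dialog" if stdin_is_tty else "noninteractive"

print(choose_frontend(sys.stdin.isatty()))
```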

I fixed a build failure with GCC 15 in yubihsm-shell (maintained by Freexian).

Prompted by a CI failure in debusine, I submitted a large batch of spelling fixes and some improved static analysis to incus (#1777, #1778) and distrobuilder.

After regaining access to the repository, I fixed telegnome: missing app icon in ‘About’ dialogue and made a new 0.3.7 release.

01 April, 2025 12:17PM by Colin Watson


Guido Günther

Free Software Activities March 2025

Another short status update of what happened on my side last month. Some more ModemManager bits landed, Phosh 0.46 is out, haptic feedback is now better tunable plus some more. See below for details (no April 1st joke in there, I promise):

phosh

  • Fix swapped arguments in ABI check (MR)
  • Sync packaging with Debian so testing packages becomes easier (MR)
  • Fix crash when primary output goes away (MR)
  • More consistent button press feedback (MR)
  • Undraft the lockscreen wallpaper branch (MR) - another ~2y old MR out of the way.
  • Indicate ongoing WiFi scans (MR)
  • Limit ABI compliance check to public headers (MR)
  • Document most gsettings in a manpage (MR)
  • (Hopefully) make integration test more robust (MR)
  • Drop superfluous build invocation in CI by fixing the missing dep (MR)
  • Fix top-panel icon size (MR)
  • Release 0.46~rc1, 0.46.0
  • Simplify adding new symbols (MR)
  • Fix crash when taking screenshot on I/O starved system (MR)
  • Split media-player and mpris-manager (MR)
  • Handle Cell Broadcast notification categories (MR)

phoc

  • xwayland: Allow views to use opacity: (MR)
  • Track wlroots 0.19.x (MR)
  • Initial support for workspaces (MR)
  • Don't crash when gtk-layer-shell wants to reposition popups (MR)
  • Some cleanups split out of other MRs (MR)
  • Release 0.46~rc1, 0.46.0
  • Add meson dist job and work around meson not applying patches in meson dist (MR, MR)
  • Small render fix to allow the Vulkan renderer to work (MR)
  • Fix possible crash when closing applications (MR)
  • Rename XdgSurface to XdgToplevel to prevent errors like the above (MR)

phosh-osk-stub

  • Make switching into (and out of) symbol2 level more pleasant (MR)
  • Simplify UI files as prep for the GTK4 switch (MR)
  • Release 0.46~rc1, 0.46.0

phosh-mobile-settings

  • Format meson files (MR)
  • Allow to set lockscreen wallpaper (MR)
  • Allow to set maximum haptic feedback (MR)
  • Release 0.46~rc1, 0.46.0
  • Avoid warnings when running CI/autopkgtest (MR)

phosh-tour

pfs

  • Add search when opening files (MR)
  • Show loading state when opening folders (MR)
  • Move demo to its own folder (MR)
  • Release 0.0.2

xdg-desktop-portal-gtk

  • Add some support for v2 of the notification portal (MR)
  • Make two functions static (MR)

xdg-desktop-portal-phosh

  • Add preview for lockscreen wallpapers (MR)
  • Update to newer pfs to support search (MR)
  • Release 0.46~rc1, 0.46.0
  • Add initial support for notification portal v2 (MR) thus finally allowing flatpaks to submit proper feedback.
  • Style consistency (MR, MR)
  • Add Cell Broadcast categories (MR)

meta-phosh

  • Small release helper tweaks (MR)

feedbackd

  • Allow for vibra patterns with different magnitudes (MR)
  • Allow to tweak maximum haptic feedback strength (MR)
  • Split out libfeedback.h and check more things in CI (MR)
  • Tweak haptic in default profile a bit (MR)
  • dev-vibra: Allow to use full magnitude range (MR)
  • vibra-periodic: Use [0.0, 1.0] as ranges for magnitude (MR)
  • Release 0.8.0, 0.8.1
  • Only cancel feedback if ever initialized (MR)

feedbackd-device-themes

  • Increase button feedback for sarge (MR)

gmobile

  • Release 0.2.2
  • Format and validate meson files (MR)

livi

  • Don't emit properties changed on position changes (MR)

Debian

  • libmbim: Update to 1.31.95 (MR)
  • libmbim: Upload to unstable and add autopkgtest (MR)
  • libqmi: Update to 1.35.95 (MR)
  • libqmi: Upload to unstable and add autopkgtest (MR)
  • modemmanager: Update to 1.23.95 in experimental and add autopkgtest (MR)
  • modemmanager: Upload to unstable (MR)
  • modemmanager: Add missing nodoc build deps (MR)
  • Package osmo-cbc (Repo)
  • feedbackd: Depend on adduser (MR)
  • feedbackd: Release 0.8.0, 0.8.1
  • feedbackd-device-themes: Release 0.8.0, 0.8.1
  • phosh: Release 0.46~rc1, 0.46.0
  • phoc: Release 0.46~rc1, 0.46.0
  • phosh-osk-stub: Release 0.46~rc1, 0.46.0
  • xdg-desktop-portal-phosh: Release 0.46~rc1, 0.46.0
  • phosh-mobile-settings: Release 0.46~rc1, 0.46.0, fix autopkgtest
  • phosh-tour: Release 0.46.0
  • gmobile: Release 0.2.2-1
  • gmobile: Ensure udev rules are applied on updates (MR)

git-buildpackage

  • Ease creating packages from scratch and document that better (MR, Testcase MR)

feedbackd-device-themes

  • Tweak some haptic for oneplus,fajita (MR)
  • Drop superfluous periodic feedbacks and cleanup CI (MR)

wlroots

  • xwm: Allow to set opacity (MR)

ModemManager

  • Fix typos (MR)
  • Add support for setting channels via libmm-glib and mmcli (MR)

Tuba

  • Set input-hint for OSK word completion (MR)

xdg-spec

  • Propose _NET_WM_WINDOW_OPACITY (which is around since ages) (MR)
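For reference, _NET_WM_WINDOW_OPACITY is conventionally a 32-bit cardinal property where 0xffffffff means fully opaque; a minimal sketch of the scaling (setting the property itself, e.g. via XChangeProperty, is only indicated in a comment):

```python
def opacity_to_cardinal(opacity):
    """Map an opacity in [0.0, 1.0] to the 32-bit cardinal stored in
    _NET_WM_WINDOW_OPACITY (0xffffffff = fully opaque)."""
    opacity = min(max(opacity, 0.0), 1.0)
    return int(opacity * 0xffffffff)

# A client would then set this cardinal on the window, e.g. with
# XChangeProperty using type CARDINAL and format 32.
print(f"{opacity_to_cardinal(0.5):08x}")  # → 7fffffff
```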

gnome-calls

  • Help startup ordering (MR)

Reviews

This is not code by me but reviews of other people's code. The list is (as usual) slightly incomplete. Thanks for the contributions!

  • phosh: Remove usage of phosh_{app_grid,overview}_handle_search (MR)
  • phosh: app-grid-button: Prepare for GTK 4 by using gestures and other migrations (MR) - merged
  • phosh: valign search results (MR) - merged
  • phosh: top-panel: Hide setting's details on fold (MR) - merged
  • phosh: Show frame with an animation (MR) - merged
  • phosh: Use gtk_widget_set_visible (MR) - merged
  • phosh: Thumbnail aspect ratio tweak (MR) - merged
  • phosh: Add clang/llvm ci step (MR)
  • mobile-broadband-provider-info: Bild APN (MR) - merged
  • iio-sensor-proxy: Buffer driver probing fix (MR) - merged
  • iio-sensor-proxy: Double free (MR) - merged
  • debian: Autopkgtests for ModemManager (MR)
  • debian: gitignore: phosh-pim debian build directory (MR)
  • debian: Better autopkgtests for MM (MR) - merged
  • feedbackd: tests: Depend on daemon for integration test (MR) - merged
  • libcmatrix: Various improvements (MR)
  • gmobile/hwdb: Add Sargo (MR) - merged
  • gmobile/hwdb: Add xiaomi-daisy (MR) - merged
  • gmobile/hwdb: Add SHIFT6mq (MR) - merged
  • meta-phosh: Add reproducibility check (MR) - merged
  • git-buildpackage: Dependency fixes (MR) - merged
  • git-buildpackage: Rename tracking (MR)

Help Development

If you want to support my work see donations.

Comments?

Join the Fediverse thread

01 April, 2025 08:05AM

March 31, 2025


Dirk Eddelbuettel

Rblpapi 0.3.16 on CRAN: Several Refinements


Version 0.3.16 of the Rblpapi package arrived on CRAN today. Rblpapi provides a direct interface between R and the Bloomberg Terminal via the C++ API provided by Bloomberg (but note that a valid Bloomberg license and installation is required).

This is the sixteenth release since the package first appeared on CRAN in 2016. It contains several enhancements. Two contributed PRs improve an error message and extend the connection options. We cleaned up a bit of internal code. And this release also makes the build conditional on having a valid build environment. This has been driven by the fact that CRAN continues to build under macOS 13 for x86_64, but Bloomberg no longer supplies a library and headers. And our repeated requests to be able to opt out of the build were, well, roundly ignored. So now the builds will succeed, but on unviable platforms such as that one we will only offer ‘empty’ functions. But no more build ERRORS yelling at us for three configurations.

The detailed list of changes follows below.

Changes in Rblpapi version 0.3.16 (2025-03-31)

  • A quota error message is now improved (Rodolphe Duge in #400)

  • Convert remaining throw into Rcpp::stop (Dirk in #402 fixing #401)

  • Add optional appIdentityKey argument to blpConnect (Kai Lin in #404)

  • Rework build as function of Blp library availability (Dirk and John in #406, #409, #410 fixing #407, #408)

Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is at the Rblpapi repo or the Rblpapi page. Questions, comments etc should go to the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

31 March, 2025 10:00PM

RProtoBuf 0.4.24 on CRAN: Minor Polish

A new maintenance release 0.4.24 of RProtoBuf arrived on CRAN today. RProtoBuf provides R with bindings for the Google Protocol Buffers (“ProtoBuf”) data encoding and serialization library used and released by Google, and deployed very widely in numerous projects as a language and operating-system agnostic protocol.

This release brings both an upstream API update affecting one function, and an update to our use of the C API of R, also in one function. Nothing user-facing, and no surprises expected.

The following section from the NEWS.Rd file has full details.

Changes in RProtoBuf version 0.4.24 (2025-03-31)

  • Add bindings to EnumValueDescriptor::name (Mike Kruskal in #108)

  • Replace EXTPTR_PTR with R_ExternalPtrAddr (Dirk)

Thanks to my CRANberries, there is a diff to the previous release. The RProtoBuf page has copies of the (older) package vignette, the ‘quick’ overview vignette, and the pre-print of our JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

31 March, 2025 09:29PM

Russell Coker

Simon Josefsson

On Binary Distribution Rebuilds

I rebuilt (the top-50 popcon) Debian and Ubuntu packages, on amd64 and arm64, and compared the results a couple of months ago. Since then the Reproduce.Debian.net effort has been launched. Unlike my small experiment, that effort is a full-scale rebuild with more architectures. Their goal is to reproduce what is published in the Debian archive.

One difference between these two approaches is the build inputs: the Reproduce.Debian.net effort uses the same build inputs that were used to build the published packages, while I am using the latest versions of published packages for the rebuild.

What does that difference imply? I believe reproduce.debian.net will be able to reproduce more of the packages in the archive. If you build a C program using one version of GCC you will get some binary output; and if you use a later GCC version you are likely to end up with a different binary output. This is a good thing: we want GCC to evolve and produce better output over time. However it means in order to reproduce the binaries we publish and use, we need to rebuild them using whatever build dependencies were used to prepare those binaries. The conclusion is that we need to use the old GCC to rebuild the program, and this appears to be the Reproduce.Debian.Net approach.

It would be a huge success if the Reproduce.Debian.net effort were to reach 100% reproducibility, and this seems to be within reach.

However I argue that we need to go further than that. Being able to rebuild the packages reproducibly using older binary packages only raises the question: can we rebuild those older packages? I fear attempting to do so ultimately leads to a need to rebuild 20+ year old packages, a non-negligible number of which are illegal to distribute or can no longer be built due to bit-rot. We won't solve the Trusting Trust concern if our rebuild effort assumes some initial binary blob that we can no longer build from source code.

I’ve made an illustration of the effort I’m thinking of, to reach something that is stronger than reproducible rebuilds. I am calling this concept an Idempotent Rebuild, an old concept which I believe is the same one John Gilmore described many years ago.

The illustration shows how the Debian main archive is used as input to rebuild another “stage #0” archive. This stage #0 archive can be compared with diffoscope to the main archive, and all differences are things that would be nice to resolve. The packages in the stage #0 archive are used to prepare a new container image with build tools, and the stage #0 archive is used as input to rebuild another version of itself, called the “stage #1” archive. The differences between stage #0 and stage #1 are also useful to analyse and resolve. This process can be repeated many times. I believe it would be a useful property if this process terminated at some point, with the stage #N archive identical to the stage #N-1 archive. If this were to happen, I would label the output archive an Idempotent Rebuild of the distribution.
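The iteration can be sketched as a fixed-point search. Here is a toy model (not the real rebuild machinery: an archive is just a dict of package name to binary fingerprint, and toy_rebuild stands in for “rebuild everything using the previous stage as build input”):

```python
def find_fixpoint(archive, rebuild, max_stages=10):
    """Iterate stage #1, #2, ... until stage #N equals stage #N-1."""
    prev = archive
    for n in range(1, max_stages + 1):
        cur = rebuild(prev)
        if cur == prev:
            return n, cur  # converged: stage #n == stage #n-1
        prev = cur
    return None, prev  # no convergence within max_stages

def toy_rebuild(prev):
    # gcc built from its own source is assumed reproducible; hello's
    # binary records which compiler binary built it, mirroring how a
    # newer GCC changes the output of everything it compiles.
    return {"gcc": "gcc@13", "hello": f"hello[built-with {prev['gcc']}]"}

stage0 = {"gcc": "gcc@12", "hello": "hello[built-with gcc@12]"}
n, final = find_fixpoint(stage0, toy_rebuild)
# n == 3: stage #3 is identical to stage #2, so this toy archive is
# an Idempotent Rebuild after three iterations.
```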

How big is N today? The simplest assumption is that it is infinity. Any build timestamp embedded into binary packages will change on every iteration. This will cause the process to never terminate. Fixing embedded timestamps is something that the Reproduce.Debian.Net effort will also run into, and will have to resolve.

What other causes for differences could there be? It is easy to see that generally if some output is not deterministic, such as the sort order of assembler object code in binaries, then the output will be different. Trivial instances of this problem will be caught by the reproduce.debian.net effort as well.

Could there be higher-order chains that lead to infinite N? It is easy to imagine that these exist, but I don’t know what they would look like in practice.

An ideal would be if we could get down to N=1. Is that technically possible? Compare building GCC: it performs an initial stage 0 build using the system compiler to produce a stage 1 intermediate, which is used to build itself again to stage 2. Stages 1 and 2 are compared, and on success (identical binaries) the compilation succeeds. Here N=2. But this is performed using some unknown system compiler that is normally different from the GCC version being built. When rebuilding a binary distribution, you start with the same source versions. So it seems N=1 could be possible.

I’m unhappy not to be able to report any further technical progress now. The next step in this effort is to publish the stage #0 build artifacts in a repository, so they can be used to build stage #1. I already showed that stage #0 was around ~30% reproducible compared to the official binaries, but I didn’t save the artifacts in a reusable repository. Since the official binaries were not built using the latest versions, it is to be expected that the reproducibility number is low. But what happens at stage #1? The percentage should go up: we are now comparing the rebuilds with an earlier rebuild, using the same build inputs. I’m eager to see this materialize, and hope to eventually make progress on this. However, to build stage #1 I believe I need to rebuild a much larger number of packages in stage #0; it could be roughly similar to the “build-essentials-depends” package set.

I believe the ultimate end goal of Idempotent Rebuilds is to be able to re-bootstrap a binary distribution like Debian from some other bootstrappable environment like Guix. In parallel to working on achieving the 100% Idempotent Rebuild of Debian, we can set up a Guix environment that builds Debian packages using Guix binaries. These builds ought to eventually converge to the same Debian binary packages, or there is something deeply problematic happening. This approach to re-bootstrapping a binary distribution like Debian seems simpler than rebuilding all binaries going back to the beginning of time for that distribution.

What do you think?

PS. I fear that Debian main may already have gone into a state where it is not able to rebuild itself at all anymore: the presence and assumption of non-free firmware and non-Debian signed binaries may have already corrupted the ability of Debian main to rebuild itself. To be able to complete the idempotent and bootstrapped rebuild of Debian, this needs to be worked out.

31 March, 2025 08:21AM by simon

Russ Allbery

Review: Ghostdrift

Review: Ghostdrift, by Suzanne Palmer

Series: Finder Chronicles #4
Publisher: DAW
Copyright: May 2024
ISBN: 0-7564-1888-7
Format: Kindle
Pages: 378

Ghostdrift is a science fiction adventure and the fourth (and possibly final) book of the Finder Chronicles. You should definitely read this series in order and not start here, even though the plot of this book would stand alone.

Following The Scavenger Door, in which he made enemies even more dramatically than he had in the previous books, Fergus Ferguson has retired to the beach on Coralla to become a tea master and take care of his cat. It's a relaxing, idyllic life and a much-needed total reset. Also, he's bored. The arrival of his alien friend Qai, in some kind of trouble and searching for him, is a complex balance between relief and disappointment.

Bas Belos is one of the most notorious pirates of the Barrens. He has someone he wants Fergus to find: his twin sister, who disappeared ten years ago. Fergus has an unmatched reputation for finding things, so Belos kidnapped Qai's partner to coerce her into finding Fergus. It's not an auspicious beginning to a relationship, and Qai was ready to fight once they got her partner back, but Belos makes Fergus an offer of payment that, startlingly, is enough for him to take the job mostly voluntarily.

Ghostdrift feels a bit like a return to Finder. Fergus is once again alone among strangers, on an assignment that he's mostly not discussing with others, piecing together clues and navigating tricky social dynamics. I missed his friends, particularly Ignatio, and while there are a few moments with AI ships, they play less of a role.

But Fergus is so very good at what he does, and Palmer is so very good at writing it. This continues to be competence porn at its best. Belos's crew thinks Fergus is a pirate recruited from a prison colony, and he quietly sets out to win their trust with a careful balance of self-deprecation and unflappable skill, helped considerably by the hidden gift he acquired in Finder. The character development is subtle, but this feels like a Fergus who understands friendship and other people at a deeper and more satisfying level than the Fergus we first met three books ago.

Palmer has a real talent for supporting characters and Ghostdrift is no exception. Belos's crew are criminals and murderers, and Palmer does remind the reader of that occasionally, but they're also humans with complex goals and relationships. Belos has earned their loyalty by being loyal and competent in a rough world where those attributes are rare. The morality of this story reminds me of infiltrating a gang: the existence of the gang is not a good thing, and the things they do are often indefensible, but they are an understandable reaction to a corrupt social system. The cops (in this case, the Alliance) are nearly as bad, as we've learned over the past couple of books, and considerably more insufferable. Fergus balances the ethical complexity in a way that I found satisfyingly nuanced, while quietly insisting on his own moral lines.

There is a deep science fiction plot here, possibly the most complex of the series so far. The disappearance of Belos's sister is the tip of an iceberg that leads to novel astrophysics, dangerous aliens, mysterious ruins, and an extended period on a remote and wreck-strewn planet. I groaned a bit when the characters ended up on the planet, since treks across primitive alien terrain with jury-rigged technology are one of my least favorite science fiction tropes, but I need not have worried. Palmer knows what she's doing; the pace of the plot does slow a bit at first, but it quickly picks up again, adding enough new setting and plot complications that I never had a chance to be bored by alien plants. It helps that we get another batch of excellent supporting characters for Fergus to observe and win over.

This series is such great science fiction. Each book becomes my new favorite, and Ghostdrift is no exception. The skeleton of its plot is a satisfying science fiction mystery with multiple competing factions, hints of fascinating galactic politics, complicated technological puzzles, and a sense of wonder that reminds me of reading Larry Niven's Known Space series. But the characters are so much better and more memorable than classic SF; compared to Fergus, Niven's Louis Wu barely exists and is readily forgotten as soon as the story is over. Fergus starts as a quiet problem-solver, but so much character depth unfolds over the course of this series. The ending of this book was delightfully consistent with everything we've learned about Fergus, but also the sort of ending that it's hard to imagine the Fergus from Finder knowing how to want.

Ghostdrift, like each of the books in this series, reaches a satisfying stand-alone conclusion, but there is no reason within the story for this to be the last of the series. The author's acknowledgments, however, say that this is the end. I admit to being disappointed, since I want to read more about Fergus and there are numerous loose ends that could be explored. More importantly, though, I hope Palmer will write more novels in any universe of her choosing so that I can buy and read them.

This is fantastic stuff. This review comes too late for the Hugo nominating deadline, but I hope Palmer gets a Best Series nomination for the Finder Chronicles as a whole. She deserves it.

Rating: 9 out of 10

31 March, 2025 04:21AM

March 30, 2025


Steinar H. Gunderson

It's always the best ones that die first

Berge Schwebs Bjørlo, aged 40, died on March 4th in an avalanche together with his friend Ulf, while on winter holiday.

When writing about someone who recently died, it is common to make lists. Lists of education, of where they worked, on projects they did.

But Berge wasn't common. Berge was an outlier. A paradox, even.

Berge was one of my closest friends; someone who always listened, someone you could always argue with (“I'm a pacifist, but I'm aware that this is an extreme position”) but could rarely be angry at. But if you ask around, you'll see many who say similar things; how could someone be so close to so many at the same time?

Berge had running jokes going on 20 years or more. Many of them were related to his background from Bergen; he'd often talk about “the un-central east” (aka Oslo), yet he had to admit at some point that he had actually started liking the city. Or about his innate positivity (“I'm in on everything but suicide and marriage!”). I know a lot of people have described his humor as dry, but I found him anything but. Just a free flow of living.

He lived his life in free software, but rarely in actually writing code; I don't think I've seen a patch from him, and only the occasional bug report. Instead, he would spend his time guiding others; he spent a lot of time in PostgreSQL circles, helping people with installation or writing queries or chiding them for using an ORM (“I don't understand why people love to make life so hard for themselves”) or just discussing life, love and everything. Somehow, some people's legacy is just the number of others they touched, and Berge touched everyone he met. Kindness is not something we do well in the free software community, but somehow, it came naturally to him. I didn't understand until after he died why he was so chronically bad at reading backlog and hard to get hold of; he was interacting with so many people, always in the present and never caring much about the past.

I remember that Berge once visited my parents' house, and was greeted by our dog, who after a pat promptly went back to relaxing lazily on the floor. “Awh! If I were a dog, that's the kind of dog I'd be.” In retrospect, for someone who lived a lot of his life in 300 km/h (at times quite literally), it was an odd thing to say, but it was just one of those paradoxes.

Berge loved music. He'd argue for intensely political punk, but would really consume everything with great enthusiasm and interest. One of the last albums I know he listened to was Thomas Dybdahl's “… that great October sound”:

Tear us in different ways but leave a thread throughout the maze
In case I need to find my way back home
All these decisions make for people living without faith
Fumbling in the dark nowhere to roam

Dreamweaver
I'll be needing you tomorrow and for days to come
Cause I'm no daydreamer
But I'll need a place to go if memory fails me & let you slip away

Berge wasn't found by a lazy dog. He was found by Shane, a very good dog.

Somehow, I think he would have approved of that, too.

Picture of Berge

30 March, 2025 10:45PM

Swiss JuristGate

Link between institutional abuse, Swiss jurists, Debianism and FSFE

Friday, an expert in the subject of persecution asked me a question: is there a link between the paedophiles and the Swiss jurists?

I reflected on the subject and I found several links between the cultural problems.

At the same time, the BBC published a report on Justin Welby, former head of the Anglican church. He resigned because of the John Smyth QC scandal. John Smyth was a powerful lawyer who had also been a judge for six years. Smyth simultaneously had the role of Reader in the church.

I wrote several blogs about the links between the Code of Conduct gaslighting and the document Crimen Sollicitationis.

The pope sent the Crimen Sollicitationis to each diocese in 1962. It was a secret document. Article 70 insists on total secrecy about abuse:

70. All these official communications shall always be made under the secret of the Holy Office; and, since they are of the utmost importance for the common good of the Church, the precept to make them is binding under pain of grave [sin].

Moreover, we are not even supposed to discuss the existence of the document:

TO BE KEPT CAREFULLY IN THE SECRET ARCHIVE OF THE CURIA FOR INTERNAL USE.

Gerhard Ulrich was a human rights activist. He published a list of judges, their mistakes and their conflicts of interest. In Switzerland, the conflicts of interest are a private subject under article 173(3) of the Swiss criminal code:

The accused is not permitted to lead evidence in support of and is criminally liable for statements that are made or disseminated with the primary intention of accusing someone of disreputable conduct without there being any public interest or any other justified cause, and particularly where such statements refer to a person’s private or family life.

In 2001, the secret document became widely known.

When FINMA published a judgment against the Swiss jurists in 2023, they redacted the names, they redacted the dates and, even worse, they redacted most of the paragraphs.

FINMA jugement, décision, Parreaux Thiebaud & Partners, Justicia SA, Justiva SA, Mathieu Parreaux, Gaelle Jeanmonod


A little bit like Crimen Sollicitationis, certainly.

We find the same problems in the case of John Smyth, Anglican church and Justin Welby:

From 2013, the Church of England knew “at the highest level” about the abuse, the report says, but failed to refer it either to the police or to the relevant authorities in South Africa, where Smyth died while under investigation by the police.

The Swiss authorities, the bar association of Geneva and FINMA had knowledge of Parreaux, Thiébaud & Partners since 2021 or earlier. Why did they redact the majority of paragraphs from the judgment? They wanted to hide their pre-existing knowledge of the scandal.

Why did the church authorities or Swiss authorities protect people like John Smyth and Mathieu Parreaux? Men like that have knowledge of all the scandals and institutional failings throughout their career. We discussed the same question in the (leaked) debian-private mailing list. Each time somebody is bullied and punished by the overlords, there is a risk that they will publish a full copy of debian-private.

We have filled in the secrets in the JuristGate web site.

When I acquired Swiss citizenship, I had to take an oath (la promesse solennelle vaudoise, Loi sur le droit de cité vaudois 2018).

You promise to be true to the federal constitution and the constitution of the Canton of Vaud. You promise to maintain and defend on every occasion and with all your powers the rights, freedoms and independence of your new country, to develop and advance her reputation and wealth and equally to avoid all that could cause her loss or damage.

There is a problem: after the death of my father, racist Swiss women started writing gossip. The rudeness of selfish and arrogant people who impose upon my family at a time of grief outweighs the seriousness of the oath.

Swiss authorities closed the legal protection insurance. The director of FINMA resigned with a payout of 581,000 Swiss francs, but he did not provide replacement lawyers for the clients.

At the same time, the racist Swiss women demanded a publication.

FSFE uses the trademark of the FSF without authorisation. They received a bequest of EUR 150,000. They declared the bequest a secret subject.

The Swiss citizenship oath implies that we have to find a private solution to the Debian crisis, but the racist Swiss women wanted a publication of lies after the death of my father.

cut your face off

The truth is clear, my father died.

In 2018, the intern Renata D'Avila published a blog about the risks of location tracking services. She chose a quote from the French philosopher Pierre-Joseph Proudhon:

To be GOVERNED is to be watched, inspected, spied upon, directed, ... then, at the slightest resistance, the first word of complaint, to be repressed, fined, vilified, harassed, hunted down, abused, clubbed, disarmed, bound, choked, imprisoned, judged, condemned, shot, deported, sacrificed, sold, betrayed; and to crown all, mocked, ridiculed, derided, outraged, dishonored.

In Australia, Lilie James rejected her boyfriend's demand for location services on her phone and she was clubbed to death. My intern had predicted the crime with her use of the quote alongside her description of problems with Google.

According to the Debian Social Contract, point 3:

We will not hide problems.

but the Debianists threatened my intern by sending her secret emails:

Debian "Community Team" (political police) to Renata, private email of 13 June 2018: Reinforcing positive attitudes and steps that you see in Debian towards women inclusion can also motivate yourself, the other Debian contributors, and possible newcomers, to go on working in that strategy and foster diversity in Debian. This does not mean to avoid criticism or hiding problems, but providing a more accurate vision of how the Brazilian Debian community works towards our common goals.

The threat:

Finally, we would like to say a word about the participation in Debian events that is financed (at least in part) by Debian. We believe that a matching fund, (Mini)DebConf bursary or any other financial help to attend a Debian event is a big endorsement from Debian to the person who receives it, and we believe that your behaviour in MiniDebConf Curitiba 2018 did not match the excellence that we expect for a bursary applicant. Thus, we are considering requesting a rejection of your application to the bursaries team.

Renata did not come back to any Debian events.

In Australia's Royal Commission into Institutional Abuse, we find the same thing:

Culture of secrecy

We are satisfied that there was a prevailing culture within the Archdiocese, led by Archbishop Little, of dealing with complaints internally and confidentially to avoid scandal to the Church.

and again between the priests and their victims:

He said that, on both occasions, Father Daniel encouraged BTH to remain silent by reference to the seal of confession.

The seal of confession is like the Code of Conduct gaslighting in the free software projects. If the victim doesn't maintain the silence, they will be blocked from becoming a priest or a developer.

Frans Pop published his suicide note the night before the Debian Day anniversary. When colleagues discussed his death outside the secret debian-private mailing list, they suffered immediate reprisals.

Unfortunately we also had the misfortune to read information on twitter that, to our current knowledge, must have been gathered from this -private list. We think this a very unfortunate event that is missing every kind of common sense and decency, and therefore saw forced to suspend the accounts of the people leaked from membership on this list.

Pascal/Lia Daobing: You both are no longer subscribed to debian-private, for the next 4 weeks. The one and only reason this list exists is to have a place where we can share information that is not immediately leaked into the public. Twitter is not -private. Please, in the future, respect the rules of the environment you are in, especially in such a special case like this.

The next death was Adrian von Bidder. It was Palm Sunday, the same day as the marriage between Carla and me. Adrian von Bidder died in Switzerland; the official report has not been published. Yet.

Switzerland and the Catholic Church both openly promote their culture of secrecy. Debianists have claimed to be committed to transparency so the imposition of secrecy in Debian demonstrates an even greater lack of integrity.

Joerg Jaspert wrote about Frans Pop:

He has done a lot of work for the project, invested a lot of his time and appearently (read the statement of his parents) the Debian project was a very important part of his life.

The last words in the last email of Frans Pop, sent the night before Debian Day:

All mails I ever sent to d-private (and mails quoting them) shall remain private.

Chris Lamb, Debian, Reproducible Builds, Google

30 March, 2025 09:30PM


Dirk Eddelbuettel

RcppSpdlog 0.0.21 on CRAN: New Upstream

Version 0.0.21 of RcppSpdlog arrived on CRAN today and has been uploaded to Debian. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library written by Gabi Melman with all the bells and whistles you would want, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site.

This release updates the code to the version 1.15.2 of spdlog which was released this weekend as well.

The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.21 (2025-03-30)

  • Upgraded to upstream release spdlog 1.15.2 (including fmt 11.1.4)

Courtesy of my CRANberries, there is also a diffstat report. More detailed information is on the RcppSpdlog page, or the package documentation site.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

30 March, 2025 08:43PM

RcppZiggurat 0.1.8 on CRAN: Build Refinements

ziggurats

A new release 0.1.8 of RcppZiggurat is now on the CRAN network for R, following up on the 0.1.7 release last week which was the first release in four and a half years.

The RcppZiggurat package updates the code for the Ziggurat generator by Marsaglia and others which provides very fast draws from a Normal (or Exponential) distribution. The package provides a simple C++ wrapper class for the generator improving on the very basic macros, and permits comparison among several existing Ziggurat implementations. This can be seen in the figure where Ziggurat from this package dominates accessing the implementations from the GSL, QuantLib and Gretl—all of which are still way faster than the default Normal generator in R (which is of course of higher code complexity).

This release switches the vignette to the standard trick of premaking it as a pdf and including it in a short Sweave document that imports it via pdfpages; this minimizes build-time dependencies on other TeXLive components. It also incorporates a change contributed by Tomas to rely on the system build of the GSL on Windows as well if Rtools 42 or later is found. No other changes.

The NEWS file entry below lists all changes.

Changes in version 0.1.8 (2025-03-30)

  • The vignette is now premade and rendered as Rnw via pdfpage to minimize the need for TeXLive package at build / install time (Dirk)

  • Windows builds now use the GNU GSL when Rtools is 42 or later (Tomas Kalibera in #25)

Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the RcppZiggurat page or the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

30 March, 2025 02:01PM

Russ Allbery

Review: Cascade Failure

Review: Cascade Failure, by L.M. Sagas

Series: Ambit's Run #1
Publisher: Tor
Copyright: 2024
ISBN: 1-250-87126-3
Format: Kindle
Pages: 407

Cascade Failure is a far-future science fiction adventure with a small helping of cyberpunk vibes. It is the first of a (so far) two-book series, and was the author's first novel.

The Ambit is an old and small Guild ship, not much to look at, but it holds a couple of surprises. One is its captain, Eoan, who is an AI with a deep and insatiable curiosity that has driven them and their ship farther and farther out into the Spiral. The other is its surprisingly competent crew: a battle-scarred veteran named Saint who handles the fighting, and a talented engineer named Nash who does literally everything else. The novel opens with them taking on supplies at Aron Outpost. A supposed Guild deserter named Jalsen wanders into the ship looking for work.

An AI ship with a found-family crew is normally my catnip, so I wanted to love this book. Alas, I did not.

There were parts I liked. Nash is great: snarky, competent, and direct. Eoan is a bit distant and slightly more simplistic of a character than I was expecting, but I appreciated the way Sagas put them firmly in charge of the ship and departed from the conventional AI character presentation. Once the plot starts in earnest (more on that in a moment), we meet Anke, the computer hacker, whose charming anxiety reaction is a complete inability to stop talking and who adds some needed depth to the character interactions. There's plenty of action, a plot that makes at least some sense, and a few moments that almost achieved the emotional payoff the author was attempting.

Unfortunately, most of the story focuses on Saint and Jal, and both of them are irritatingly dense cliches.

The moment Jal wanders onto the Ambit in the first chapter, the reader is informed that Jal, Saint, and Eoan have a history. The crew of the Ambit spent a year looking for Jal and aren't letting go of him now that they've found him. Jal, on the other hand, clearly blames Saint for something and is not inclined to trust him. Okay, fine, a bit generic of a setup but the writing moved right along and I was curious enough.

It then takes a full 180 pages before the reader finds out what the hell is going on with Saint and Jal. Predictably, it's a stupid misunderstanding that could have been cleared up with one conversation in the second chapter.

Cascade Failure does not contain a romance (and to the extent that it hints at one, it's a sapphic romance), but I swear Saint and Jal are both the male protagonist from a certain type of stereotypical heterosexual romance novel. They're both the brooding man with the past, who is too hurt to trust anyone and assumes the worst because he's unable to use his words or ask an open question and then listen to the answer. The first half of this book is them being sullen at each other at great length while both of them feel miserable. Jal keeps doing weird and suspicious things to resolve a problem that would have been far more easily resolved by the rest of the crew if he would offer any explanation at all. It's not even suspenseful; we've read about this character enough times to know that he'll turn out to have a heart of gold and everything will be a misunderstanding. I found it tedious. Maybe people who like slow burn romances with this character type will have a less negative reaction.

The real plot starts at about the time Saint and Jal finally get their shit sorted out. It turns out to have almost nothing to do with either of them. The environmental control systems of worlds are suddenly failing (hence the book title), and Anke, the late-arriving computer programmer and terraforming specialist, has a rather wild theory about what's happening. This leads to a lot of action, some decent twists, and a plot that felt very cyberpunk to me, although unfortunately it culminates in an absurdly-cliched action climax.

This book is an action movie that desperately wants to make you feel all the feels, and it worked about as well as that typically works in action movies for me. Jaded cynicism and an inability to communicate are not the ways to get me to have an emotional reaction to a book, and Jal (once he finally starts talking) is so ridiculously earnest that it's like reading the adventures of a Labrador puppy. There was enough going on that it kept me reading, but not enough for the story to feel satisfying. I needed a twist, some depth, way more Nash and Anke and way less of the men, something.

Everyone is going to compare this book to Firefly, but Firefly had better banter, created more complex character interactions due to the larger and more varied crew, and played the cynical mercenary for laughs instead of straight, all of which suited me better. This is not a bad book, particularly once it gets past the halfway point, but it's not that memorable either, at least for me. If you're looking for a space adventure with heavy action hero and military SF vibes that wants to be about Big Feelings but gets there in mostly obvious ways, you could do worse. If you're looking for a found-family starship crew story more like Becky Chambers, I think you'll find this one a bit too shallow and obvious.

Not really recommended, although there's nothing that wrong with it and I'm sure other people's experience will differ.

Followed by Gravity Lost, which I'm unlikely to read.

Rating: 6 out of 10

30 March, 2025 04:42AM

March 29, 2025


Dirk Eddelbuettel

tinythemes 0.0.3 at CRAN: Nags

tinythemes demo

A second maintenance release of our still young-ish package tinythemes arrived on CRAN today. tinythemes provides the theme_ipsum_rc() function from hrbrthemes by Bob Rudis in a zero (added) dependency way. A simple example (also available as a demo inside the package) contrasts the default style (on the left) with the one added by this package (on the right):

This version responds solely to things CRAN now nags about. As these are all package quality improvements we generally oblige happily (and generally fix in the respective package repo when we notice them). I am currently on a quest to get most/all of my nags down, so new releases are sometimes the way to go even when not under a ‘deadline’ gun (as with other releases this week).

The full set of changes since the last release (a little over a year ago) follows.

Changes in tinythemes version 0.0.3 (2025-03-29)

  • Updated a badge URL in README.md

  • Updated manual pages with proper anchor links

  • Rewrote one example without pipe to not require minimum R version

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the repo where comments and suggestions are welcome.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

29 March, 2025 03:22PM

March 28, 2025


Daniel Pocock

Banned evidence: Ars Technica forums censored email predicting DebConf23 death, Abraham Raji & Debian cover-up

Various blogs have referred to an email predicting the death of Abraham Raji. The email was published for the first time in the Ars Technica forum and then the whole thread was locked and deleted.

The email is real. I made some inquiries with various coroners who handled previous Debian-related deaths. I wrote back to the Cambridgeshire Coroner's office on 9 September 2023, the first day of DebConf23. I predicted there was a higher risk in this group and three days later Abraham Raji died on the DebConf day trip.

Why doesn't Ars Technica want anybody to see this email? Quite simply, the email proves once again that I was right about the toxic culture.

Subject: Re: Inquest Christopher Rutter - Information Request
Date: Sat, 9 Sep 2023 18:59:26 +0200
From: Daniel Pocock <daniel@pocock.pro>
To: Coroners <Coroners@cambridgeshire.gov.uk>


Hi [redacted],

I've updated the document with some extra email evidence and two more
deaths, both of those being under management from a doctoral candidate
at Cambridge.

Based on my own experience of both Debian culture, the Pell situation
and the evidence in these emails, I feel that there is an ongoing risk
to the health of people who engage with this culture.

Please kindly confirm if the coroner can escalate this to the relevant
people or whether you need somebody to present the document in person.

Regards,

Daniel

The key emails from various web sites, including the suicide discussions, have been placed in a single document that can be forwarded to the relevant police or coroner each time a new victim dies in similar circumstances.

Employers and families are totally unaware of what some people are doing in the debian-private (leaked) conflict zone. The brother of Frans Pop told me that Debianists came to the funeral but they kept him in the dark. Not any more.

Ars Technica moderators suggested the conversation with the coroner could be spam.

It is a creepy coincidence that earlier in the same year, I had been talking to the Carabinieri about the tactics used to silence victims of blackmail and abuse. We were having the conversation in the very same hour that Cardinal George Pell was having his surgery. He died the same day. Pell's name was mentioned again to the Cambridgeshire coroner and Abraham Raji died.

Please see the chronological history of how the Debian harassment and abuse culture evolved.

Ars Technica, banned, censored, evidence

Don't forget that this latest discussion only came up after we realized that my former Outreachy intern had predicted the circumstances involved in the death of Lilie James.

Lilie James, graduation

28 March, 2025 07:30PM

Ian Jackson

Rust is indeed woke

Rust, and resistance to it in some parts of the Linux community, has been in my feed recently. One undercurrent seems to be the notion that Rust is woke (and should therefore be rejected as part of culture wars).

I’m going to argue that Rust, the language, is woke. So the opponents are right, in that sense. Of course, as ever, dissing something for being woke is nasty and fascist-adjacent.

Community

The obvious way that Rust may seem woke is that it has the trappings, and many of the attitudes and outcomes, of a modern, nice, FLOSS community. Rust certainly does better than toxic environments like the Linux kernel, or Debian. This is reflected in a higher proportion of contributors from various kinds of minoritised groups. But Rust is not outstanding in this respect. It certainly has its problems. Many other projects do as well or better.

And this is well-trodden ground. I have something more interesting to say:

Technological values - particularly, compared to C/C++

Rust is woke technology that embodies a woke understanding of what it means to be a programming language.

Ostensible values

Let’s start with Rust’s strapline:

A language empowering everyone to build reliable and efficient software.

Surprisingly, this motto is not mere marketing puff. For Rustaceans, it is a key goal which strongly influences day-to-day decisions (big and small).

Empowering everyone is a key aspect of this, which aligns with my own personal values. In the Rust community, we care about empowerment. We are trying to help liberate our users. And we want to empower everyone because everyone is entitled to technological autonomy. (For a programming language, empowering individuals means empowering their communities, of course.)

This is all very airy-fairy, but it has concrete consequences:

Attitude to the programmer’s mistakes

In Rust we consider it a key part of our job to help the programmer avoid mistakes; to limit the consequences of mistakes; and to guide programmers in useful directions.

If you write a bug in your Rust program, Rust doesn’t blame you. Rust asks “how could the compiler have spotted that bug”.

This is in sharp contrast to C (and C++). C nowadays is an insanely hostile programming environment. A C compiler relentlessly scours your program for any place where you may have violated C’s almost incomprehensible rules, so that it can compile your apparently-correct program into a buggy executable. And then the bug is considered your fault.
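A concrete sketch of the difference (an illustrative toy example of mine, assuming nothing beyond the standard toolchain): returning a reference to a local variable is exactly the kind of bug a C compiler will turn into a dangling pointer, while rustc rejects it outright and points you at the fix.

```rust
// The commented-out version is what the C habit suggests; rustc refuses
// it with error[E0106] ("missing lifetime specifier"), explaining that
// the returned reference would outlive the data it points to.
//
// fn greeting() -> &str {
//     let s = String::from("hello");
//     &s // would dangle once `s` is dropped at the end of this function
// }

// The compiler-guided fix: hand ownership to the caller instead.
fn greeting() -> String {
    let s = String::from("hello");
    s // moved out; no dangling pointer is possible
}

fn main() {
    assert_eq!(greeting(), "hello");
    println!("{}", greeting());
}
```

The point is not the toy example but the attitude it embodies: the error message explains the problem and suggests a repair, instead of silently compiling the mistake.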

These aren’t just attitudes implicitly embodied in the software. They are concrete opinions expressed by compiler authors, and also by language proponents. In other words:

Rust sees programmers writing bugs as a systemic problem, which must be addressed by improvements to the environment and the system. The toxic parts of the C and C++ community see bugs as moral failings by individual programmers.

Sound familiar?

The ideology of the hardcore programmer

Programming has long suffered from the myth of the “rockstar”. Silicon Valley techbro culture loves this notion.

In reality, though, modern information systems are far too complicated for a single person. Developing systems is a team sport. Nontechnical, and technical-adjacent, skills are vital: clear but friendly communication; obtaining and incorporating the insights of every member of your team; willingness to be challenged. Community building. Collaboration. Governance.

The hardcore C community embraces the rockstar myth: they imagine that a few super-programmers (or super-reviewers) are able to spot bugs, just by being so brilliant. Of course this doesn’t actually work at all, as we can see from the atrocious bugfest that is the Linux kernel.

These “rockstars” want us to believe that there is a steep hierarchy in programming; that they are at the top of this hierarchy; and that being nice isn’t important.

Sound familiar?

Memory safety as a power struggle

Much of the modern crisis of software reliability arises from memory-unsafe programming languages, mostly C and C++.

Addressing this is a big job, requiring many changes. This threatens powerful interests; notably, corporations who want to keep shipping junk. (See also, conniptions over the EU Product Liability Directive.)

The harms of this serious problem mostly fall on society at large, but the convenience of carrying on as before benefits existing powerful interests.

Sound familiar?

Memory safety via Rust as a power struggle

Addressing this problem via Rust is a direct threat to the power of established C programmers such as gatekeepers in the Linux kernel. Supplanting C means they will have to learn new things, and jostle for status against better Rustaceans, or be replaced. More broadly, Rust shows that it is practical to write fast, reliable, software, and that this does not need (mythical) “rockstars”.

So established C programmer “experts” are existing vested interests, whose power is undermined by (this approach to) tackling this serious problem.

Sound familiar?

Notes

This is not a RIIR manifesto

I’m not saying we should rewrite all the world’s C in Rust. We should not try to do that.

Rust is often a good choice for new code, or when a rewrite or substantial overhaul is needed anyway. But we’re going to need other techniques to deal with all of our existing C. CHERI is a very promising approach. Sandboxing, emulation and automatic translation are other possibilities. The problem is a big one and we need a toolkit, not a magic bullet.

But as for Linux: it is a scandal that substantial new drivers and subsystems are still being written in C. We could have been using Rust for new code throughout Linux years ago, and avoided very many bugs. Those bugs are doing real harm. This is not OK.

Disclosure

I first learned C from K&R I in 1989. I spent the first three decades of my life as a working programmer writing lots and lots of C. I’ve written C++ too. I used to consider myself an expert C programmer, but nowadays my C is a bit rusty and out of date. Why is my C rusty? Because I found Rust, and immediately liked and adopted it (despite its many faults).

I like Rust because I care that the software I write actually works: I care that my code doesn’t do harm in the world.

On the meaning of “woke”

The original meaning of “woke” is something much more specific, to do with racism. For the avoidance of doubt, I don’t think Rust is particularly antiracist.

I’m using “woke” (like Rust’s opponents are) in the much broader, and now much more prevalent, culture wars sense.

Pithy conclusion

If you’re a senior developer who knows only C/C++, doesn’t want their authority challenged, and doesn’t want to have to learn how to write better software, you should hate Rust.

Also you should be fired.


Edited 2025-03-28 17:10 UTC to fix minor problems and add a new note about the meaning of the word "woke".




28 March, 2025 05:09PM

John Goerzen

Why You Should (Still) Use Signal As Much As Possible

As I write this in March 2025, there is a lot of confusion about Signal messenger due to the recent news of people using Signal in government, and subsequent leaks.

The short version is: there was no problem with Signal here. People were using it because they understood it to be secure, not the other way around.

Both the government and the Electronic Frontier Foundation recommend people use Signal. This is an unusual alliance, and in the case of the government, was prompted because it understood other countries had a persistent attack against American telephone companies and SMS traffic.

So let’s dive in. I’ll cover some basics of what security is, what happened in this situation, and why Signal is a good idea.

This post isn’t for programmers that work with cryptography every day. Rather, I hope it can make some of these concepts accessible to everyone else.

What makes communications secure?

When most people are talking about secure communications, they mean some combination of these properties:

  1. Privacy - nobody except the intended recipient can decode a message.
  2. Authentication - guarantees that the person you are chatting with really is the intended recipient.
  3. Ephemerality - preventing a record of the communication from being stored. That is, making it more like a conversation around the table than a written email.
  4. Anonymity - keeping your set of contacts to yourself and even obfuscating the fact that communications are occurring.

If you think about it, most people care the most about the first two. In fact, authentication is a key part of privacy. There is an attack known as man in the middle in which somebody pretends to be the intended recipient. The interceptor reads the messages, and then passes them on to the real intended recipient. So we can’t really have privacy without authentication.

I’ll have more to say about these later. For now, let’s discuss attack scenarios.

What compromises security?

There are a number of ways that security can be compromised. Let’s think through some of them:

Communications infrastructure snooping

Let’s say you used no encryption at all, and connected to public WiFi in a coffee shop to send your message. Who all could potentially see it?

  • The owner of the coffee shop’s WiFi
  • The coffee shop’s Internet provider
  • The recipient’s Internet provider
  • Any Internet providers along the network between the sender and the recipient
  • Any government or institution that can compel any of the above to hand over copies of the traffic
  • Any hackers that compromise any of the above systems

Back in the early days of the Internet, most traffic had no encryption. People were careful about putting their credit cards into webpages and emails because they knew it was easy to intercept them. We have been on a decades-long evolution towards more pervasive encryption, which is a good thing.

Text messages (SMS) follow a similar path to the above scenario, and are unencrypted. We know that all of the above are ways people’s texts can be compromised; for instance, governments can issue search warrants to obtain copies of texts, and China is believed to have a persistent hack into western telcos. SMS fails all four of our attributes of secure communication above (privacy, authentication, ephemerality, and anonymity).

Also, think about what information is collected from SMS and by who. Texts you send could be retained in your phone, the recipient’s phone, your phone company, their phone company, and so forth. They might also live in cloud backups of your devices. You only have control over your own phone’s retention.

So defenses against this involve things like:

  • Strong end-to-end encryption, so no intermediate party – even the people that make the app – can snoop on it.
  • Using strong authentication of your peers
  • Taking steps to prevent even app developers from being able to see your contact list or communication history

You may see some other apps saying they use strong encryption or use the Signal protocol. But while they may do that for some or all of your message content, they may still upload your contact list, history, location, etc. to a central location where it is still vulnerable to these kinds of attacks.

When you think about anonymity, think about it like this: if you send a letter to a friend every week, every postal carrier that transports it – even if they never open it or attempt to peek inside – will be able to read the envelope and know that you communicate on a certain schedule with that friend. The same can be said of SMS, email, or most encrypted chat operators. Signal’s design prevents it from retaining even this information, though nation-states or ISPs might still be able to notice patterns (every time you send something via Signal, your contact receives something from Signal a few milliseconds later). It is very difficult to provide perfect anonymity from well-funded adversaries, even if you can provide very good privacy.

Device compromise

Let’s say you use an app with strong end-to-end encryption. This takes away some of the easiest ways someone could get to your messages. But it doesn’t take away all of them.

What if somebody stole your phone? Perhaps the phone has a password, but if an attacker pulled out the storage unit, could they access your messages without a password? Or maybe they somehow trick or compel you into revealing your password. Now what?

An even simpler attack doesn’t require them to steal your device at all. All they need is a few minutes with it to steal your SIM card. Now they can receive any texts sent to your number - whether from your bank or your friend. Yikes, right?

Signal stores your data in an encrypted form on your device. It can protect it in various ways. One of the most important protections is ephemerality - it can automatically delete your old texts. A text that is securely erased can never fall into the wrong hands if the device is compromised later.

An actively-compromised phone, though, could still give up secrets. For instance, what if a malicious keyboard app sent every keypress to an adversary? Signal is only as secure as the phone it runs on – but still, it protects against a wide variety of attacks.

Untrustworthy communication partner

Perhaps you are sending sensitive information to a contact, but that person doesn’t want to keep it in confidence. There is very little you can do about that technologically; with pretty much any tool out there, nothing stops them from taking a picture of your messages and handing the picture off.

Environmental compromise

Perhaps your device is secure, but a hidden camera still captures what’s on your screen. You can take some steps against things like this, of course.

Human error

Sometimes humans make mistakes. For instance, the reason a reporter got copies of messages recently was because a participant in a group chat accidentally added him (presumably that participant meant to add someone else and just selected the wrong name). Phishing attacks can trick people into revealing passwords or other sensitive data. Humans are, quite often, the weakest link in the chain.

Protecting yourself

So how can you protect yourself against these attacks? Let’s consider:

  • Use a secure app like Signal that uses strong end-to-end encryption where even the provider can’t access your messages
  • Keep your software and phone up-to-date
  • Be careful about phishing attacks and who you add to chat rooms
  • Be aware of your surroundings; don’t send sensitive messages where people might be looking over your shoulder with their eyes or cameras

There are other methods besides Signal. For instance, you could install GnuPG (GPG) on a laptop that has no WiFi card or any other way to connect it to the Internet. You could always type your messages on that laptop, encrypt them, copy the encrypted text to a floppy disk (or USB device), take that USB drive to your Internet computer, and send the encrypted message by email or something. It would be exceptionally difficult to break the privacy of messages in that case (though anonymity would be mostly lost). Even if someone got the password to your “secure” laptop, it wouldn’t do them any good unless they physically broke into your house or something. In some ways, it is probably safer than Signal. (For more on this, see my article How gapped is your air?)
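The offline-encryption step of that workflow can be sketched with GnuPG. Everything below is a throwaway illustration, not a real setup: the temporary keyring, the friend@example.org identity, and the message are all made up, and on a genuine air-gapped machine you would import your friend's real public key instead of generating one.

```shell
# Create a throwaway keyring and key purely for demonstration.
export GNUPGHOME=$(mktemp -d)
chmod 700 "$GNUPGHOME"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key friend@example.org default default never 2>/dev/null

# On the air-gapped laptop: write the message and encrypt it to the
# recipient's public key, producing ASCII text safe to carry on a disk.
printf 'meet at noon\n' > "$GNUPGHOME/message.txt"
gpg --batch --armor --trust-model always -r friend@example.org \
    -o "$GNUPGHOME/message.txt.asc" --encrypt "$GNUPGHOME/message.txt"

# message.txt.asc is what travels on the USB stick to the online machine.
head -n1 "$GNUPGHOME/message.txt.asc"   # -----BEGIN PGP MESSAGE-----
```

Only the armored ciphertext ever touches the Internet-connected computer; the private key and plaintext stay on the offline laptop.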

But, that approach is hard to use. Many people aren’t familiar with GnuPG. You don’t have the convenience of sending a quick text message from anywhere. Security that is hard to use most often simply isn’t used. That is, you and your friends will probably just revert back to using insecure SMS instead of this GnuPG approach because SMS is so much easier.

Signal strikes a unique balance of providing very good security while also being practical, easy, and useful. For most people, it is the most secure option available.

Signal is also open source; you don’t have to trust that it is as secure as it says, because you can inspect it for yourself. Also, while it’s not federated, I previously addressed that.

Government use

If you are a government, particularly one that is highly consequential to the world, you can imagine that you are a huge target. Other nations are likely spending billions of dollars to compromise your communications. Signal itself might be secure, but if some other government can add spyware to your phones, or conduct a successful phishing attack, you can still have your communications compromised.

I have no direct knowledge, but I think it is generally understood that the US government maintains communications networks that are entirely separate from the Internet and can only be accessed from secure physical locations and secure rooms. These can be even more secure than the average person using Signal because they can protect against things like environmental compromise, human error, and so forth. The scandal in March of 2025 happened because government employees were using Signal rather than official government tools for sensitive information, had taken advantage of Signal’s ephemerality (even though laws require records to be kept), and through apparent human error had directly shared this information with a reporter. Presumably a reporter would have lacked access to the restricted communications networks in the first place, so that wouldn’t have been possible.

This doesn’t mean that Signal is bad. It just means that somebody that can spend billions of dollars on security can be more secure than you. Signal is still a great tool for people, and in many cases defeats even those that can spend lots of dollars trying to defeat it.

And remember - to use those restricted networks, you have to go to specific rooms in specific buildings. They are still not as convenient as what you carry around in your pocket.

Conclusion

Signal is practical security. Do you want phone companies reading your messages? How about Facebook or X? Have those companies demonstrated that they are completely trustworthy throughout their entire history?

I say no. So, go install Signal. It’s the best, most practical tool we have.


This post is also available on my website, where it may be periodically updated.

28 March, 2025 02:51AM by John Goerzen

March 27, 2025

hackergotchi for Bits from Debian

Bits from Debian

Viridien Platinum Sponsor of DebConf25

viridien-logo

We are pleased to announce that Viridien has committed to sponsor DebConf25 as a Platinum Sponsor.

Viridien is an advanced technology, digital and Earth data company that pushes the boundaries of science for a more prosperous and sustainable future.

Viridien has been using Debian-based systems to power most of its HPC infrastructure and its cloud platform since 2009 and currently employs two active Debian Project Members.

As a Platinum Sponsor, Viridien is contributing to the Debian annual Developers' conference, directly supporting the progress of Debian and Free Software. Viridien's contribution helps strengthen the community that collaborates on the Debian project from around the world throughout the year.

Thank you very much, Viridien, for your support of DebConf25!

Become a sponsor too!

DebConf25 will take place from 14 to 20 July 2025 in Brest, France, and will be preceded by DebCamp, from 7 to 13 July 2025.

DebConf25 is accepting sponsors! Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, and visit the DebConf25 website at https://debconf25.debconf.org/sponsors/become-a-sponsor/.

27 March, 2025 10:50AM by Sahil Dhiman

March 24, 2025

hackergotchi for Daniel Pocock

Daniel Pocock

Anticipated in 2018: Lilie James & Location tracking, Googlists complained

The ABC published a summary of the Lilie James inquest with a focus on the Location Tracking issues. The coroner hasn't completed their official report yet and these news reports are only summaries of the evidence. (See my previous blog about what people hide from coroners).

In the winter 2017/2018 round of Outreachy, I selected Renata D'Avila from Brazil to be an intern for Debian ( Debian official announcement and Outreachy project list).

During the application process, we ask each applicant to do a small programming task and submit the results. I was startled to see Renata giving help to the women she was competing against. It turns out that while tech industry diversity programs try to attract interns who are fresh out of university, Renata had already worked as a schoolteacher for a number of years and helping the other women was just part of her nature.

In the middle of the program, Renata published a blog post with the title The right to be included in the conversation. Renata's blog post features a screenshot of Google Maps with lines marked on it showing how Googlists have tracked her movements around Porto Alegre, here it is again, along with some analysis:

Google, Stalking, Harassment, women, interns, Outreachy

 

Renata's blog opens with a quote from Pierre-Joseph Proudhon that anticipates the prospect of being clubbed to death:

To be GOVERNED is to be watched, inspected, spied upon, directed, ... then, at the slightest resistance, the first word of complaint, to be repressed, fined, vilified, harassed, hunted down, abused, clubbed, disarmed, bound, choked, imprisoned, judged, condemned, shot, deported, sacrificed, sold, betrayed; and to crown all, mocked, ridiculed, derided, outraged, dishonored.

The ABC report mimics the quote chosen by Renata:

The inquest examining Ms James's death at St Andrew's Cathedral School in 2023, heard she had tried to set boundaries with Thijssen the weekend before.

Creepy. But it gets worse.

The Googlists couldn't stand this. Google is one of the companies that contributes money to these diversity internships. When women are selected for the program, their blog posts are syndicated into various web sites where they are seen by many Google employees and their followers. It was really shocking for them when this blog about how creepy they are suddenly appeared all over the open source ecosystem.

Various rumors appeared. They created rumors about "behavior", rumors about "harassment" and rumors about "abuse". They created rumors that I was dating an intern.

They sent threats to Renata, which she didn't tell me about until a few months later. I published some of those communications.

Debian "Community Team" (political police) to Renata, private email of 13 June 2018: Reinforcing positive attitudes and steps that you see in Debian towards women inclusion can also motivate yourself, the other Debian contributors, and possible newcomers, to go on working in that strategy and foster diversity in Debian. This does not mean to avoid criticism or hiding problems, but providing a more accurate vision of how the Brazilian Debian community works towards our common goals.

In other words, we can tell fairy tales but nobody is allowed to speculate about the negative risks associated with location tracking or anything else that comes from Googlists. Because now that it actually happened to a location tracking victim, we can all say I chose the right woman for the internship.

Please watch Renata speaking in this video. They continue spending vast sums of money on "diversity" internships but diversity of thought is not welcome.

Related: the Code of Conduct gaslighting in open-source software hobbyist groups may violate the new coercive control laws too.

Even more scary are the predictions I made when Donald Trump was elected for the first time.

Googlists have spent over US$130,000 to try and discredit me, to stop women telling me stuff and to stop us making predictions that are uncomfortably close to the truth.

RIP Lilie James

Lilie James, graduation

24 March, 2025 09:30PM

hackergotchi for Jonathan McDowell

Jonathan McDowell

Who pays the cost of progress in software?

I am told, by friends who have spent time at Google, about the reason Google Reader finally disappeared. Apparently it had become a 20% Project for those who still cared about it internally, and there was some major change happening to one of its upstream dependencies that was either going to cause a significant amount of work rearchitecting Reader to cope, or create additional ongoing maintenance burden. It was no longer viable to support it as a side project, so it had to go. This was a consequence of an internal culture at Google where service owners are able to make changes that can break downstream users, and the downstream users are the ones who have to adapt.

My experience at Meta goes the other way. If you own a service or other dependency and you want to make a change that will break things for the users, it’s on you to do the migration, or at the very least provide significant assistance to those who own the code. You don’t just get to drop your new release and expect others to clean up; doing that tends to lead to changes being reverted. The culture flows the other way; if you break it, you fix it (nothing is someone else’s problem).

There are pluses and minuses to both approaches. Users having to drive the changes to the things they own stops them from blocking progress. Service/code owners having to drive the changes avoids the situation where a widely used component drops a new release that causes a lot of high-priority work for folk in order to adapt.

I started thinking about this in the context of Debian a while back, and a few incidents since have resulted in my feeling that we’re closer to the Google model than the Meta model. Anyone can upload a new version of their package to unstable, and that might end up breaking all the users of it. It’s not quite as extreme as rolling out a new service, because it’s unstable that gets affected (the clue is in the name, I really wish more people would realise that), but it can still result in release critical bugs for lots of other Debian contributors.

A good example of this is toolchain changes. Major updates to GCC and friends regularly result in FTBFS issues in lots of packages. Now in this instance the maintainer is usually diligent about a heads up before the default changes, but it’s still a whole bunch of work for other maintainers to adapt (see the list of FTBFS bugs for GCC 15 for instance - these are important, but not serious yet). Worse is when a dependency changes and its maintainer hasn’t managed to catch everyone who might be affected, so by the time it’s discovered it’s release critical, because at least one package no longer builds in unstable.

Commercial organisations try to avoid this with a decent CI/CD setup that either vendors all dependencies, or tracks changes to them and tries rebuilds before allowing things to land. This is one of the instances where a monorepo can really shine; if everything you need is in there, it’s easier to track the interconnections between different components. Debian doesn’t have a CI/CD system that runs for every upload, allowing us to track exact causes of regressions. Instead we have Lucas, who does a tremendous job of running archive wide rebuilds to make sure we can still build everything. Unfortunately that means I am often unfairly grumpy at him; my heart sinks when I see a bug come in with his name attached, because it often means one of my packages has a new RC bug where I’m going to have to figure out what changed elsewhere to cause it. However he’s just (very usefully) surfacing an issue someone else created, rather than actually being the cause of the problem.

I don’t know if I have a point to this post. I think it’s probably that I wish folk in Free Software would try and be mindful of the incompatible changes they might be introducing, and the toil they create for other volunteer developers, often not directly visible to the person making the change. The approach taken by the Debian toolchain maintainers strikes me as a good balance; they do a bunch of work up front to try and flag all the places that might need to make changes, far enough in advance of the breaking change actually landing. However they don’t then allow a tardy developer to block progress.

24 March, 2025 09:11PM

hackergotchi for Bits from Debian

Bits from Debian

New Debian Developers and Maintainers (January and February 2025)

The following contributors got their Debian Developer accounts in the last two months:

  • Bo Yu (vimer)
  • Maytham Alsudany (maytham)
  • Rebecca Natalie Palmer (mpalmer)

The following contributors were added as Debian Maintainers in the last two months:

  • NoisyCoil
  • Arif Ali
  • Julien Plissonneau Duquène
  • Maarten Van Geijn
  • Ben Collins

Congratulations!

24 March, 2025 03:00PM by Jean-Pierre Giraud

Simon Josefsson

Reproducible Software Releases

Around a year ago I discussed two concerns with software release archives (tarball artifacts) that could be improved to increase confidence in the supply-chain security of software releases. Repeating the goals for simplicity:

  • Release artifacts should be built in a way that can be reproduced by others
  • It should be possible to build a project from a source tarball that doesn’t contain any generated or vendor files (e.g., in the style of git-archive).

While implementing these ideas for a small project was accomplished within weeks – see my announcement of Libntlm version 1.8 – addressing this in complex projects uncovered tooling concerns that had to be fixed first, and things stalled for many months pending that work.

I had the notion that these two goals were easy and shouldn’t be hard to accomplish. I still believe that, but have had to realize that improving tooling to support these goals takes time. It seems clear that these concepts are not universally agreed on and implemented generally.

I’m now happy to recap some of the work that led to releases of libtasn1 v4.20.0, inetutils v2.6, libidn2 v2.3.8, libidn v1.43. These releases all achieve these goals. I am working on a bunch more projects to support these ideas too.

What have the obstacles so far been to make this happen? It may help others who are in the same process of addressing these concerns to have a high-level introduction to the issues I encountered. Source code for the projects above is available, and anyone can look at the solutions to learn how the problems are addressed.

First let’s look at the problems we need to solve to make “git-archive” style tarballs usable:

Version Handling

To build usable binaries from a minimal tarball, the build needs to know which version number it is for. Traditionally this information was stored inside configure.ac in git. However I use gnulib’s git-version-gen to infer the version number from the git tag or git commit instead. The git tag information is not available in a git-archive tarball. My solution to this was to make use of the export-subst feature of the .gitattributes file. I store the file .tarball-version-git in git containing the magic cookie like this:

$Format:%(describe)$

With this, git-archive will replace the cookie with a useful version identifier on export, see the libtasn1 patch to achieve this. To make use of this information, the git-version-gen script was enhanced to read this information, see the gnulib patch. This is invoked by ./configure to figure out which version number the package is for.
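A minimal demonstration of the export-subst mechanism, using a throwaway repository (the names, identity, and v1.0 tag are illustrative; %(describe) support in export-subst requires git >= 2.32):

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git config user.email you@example.org
git config user.name Demo

# Mark the file for substitution on export, and store the magic cookie.
printf '.tarball-version-git export-subst\n' > .gitattributes
printf '$Format:%%(describe)$\n' > .tarball-version-git

git add -A && git commit -q -m 'initial commit'
git tag -a v1.0 -m 'release v1.0'

# git-archive expands the cookie on export; the work tree copy is untouched.
git archive --prefix=demo-v1.0/ v1.0 | tar -x
cat demo-v1.0/.tarball-version-git   # v1.0
```

The extracted tarball now carries the describe output even though no .git directory is present.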

Translations

We want translations to be included in the minimal source tarball for it to be buildable. Traditionally these files are retrieved by the maintainer from the Translation project when running ./bootstrap, however there are two problems with this. The first one is that there is no strong authentication or versioning information on this data, the tools just download and place whatever wget downloaded into your source tree (printf-style injection attack anyone?). We could improve this (e.g., publish GnuPG signed translations messages with clear versioning), however I did not work on that further. The reason is that I want to support offline builds of packages. Downloading random things from the Internet during builds does not work when building a Debian package, for example. The translation project could solve this by making a monthly tarball with their translations available, for distributors to pick up and provide as a separate package that could be used as a build dependency. However that is not how these tools and projects are designed. Instead I reverted back to storing translations in git, something that I did for most projects back when I was using CVS 20 years ago. Hooking this into ./bootstrap and gettext workflow can be tricky (ideas for improvement most welcome!), but I used a simple approach to store all directly downloaded po/*.po files directly as po/*.po.in and make the ./bootstrap tool move them in place, see the libidn2 commit followed by the actual ‘make update-po’ commit with all the translations where one essential step is:

# Prime po/*.po from fall-back copy stored in git.
for poin in po/*.po.in; do
    po=$(echo $poin | sed 's/.in//')
    test -f $po || cp -v $poin $po
done
ls po/*.po | sed 's|.*/||; s|\.po$||' > po/LINGUAS

Fetching vendor files like gnulib

Most build dependencies are in the shape of “You need a C compiler”. However some come in the shape of “source-code files intended to be vendored”, and gnulib is a huge repository of such files. The latter is a problem when building from a minimal git archive. It is possible to consider translation files as a class of vendor files, since they need to be copied verbatim into the project build directory for things to work. The same goes for *.m4 macros from the GNU Autoconf Archive. However I’m not confident that the solution for all vendor files must be the same. For translation files and for Autoconf Archive macros, I have decided to put these files into git and merge them manually occasionally. For gnulib files, in some projects like OATH Toolkit I also store all gnulib files in git, which effectively resolves this concern. (Incidentally, the reason for doing so was originally that running ./bootstrap took forever since there were five gnulib instances in use, which is no longer the case since gnulib-tool was rewritten in Python.) For most projects, however, I rely on ./bootstrap to fetch a gnulib git clone when building. I like this model, however it doesn’t work offline. One way to resolve this is to make the gnulib git repository available for offline use, and I’ve made some effort to make this happen via a Gnulib Git Bundle and have explained how to implement this approach for Debian packaging. I don’t think that is sufficient as a generic solution though; it is mostly applicable to building old releases that use old gnulib files. It won’t work when building from CI/CD pipelines, for example, where I have settled on a crude way of fetching and unpacking a particular gnulib snapshot, see this Libntlm patch. This is much faster than working with git submodules and cloning gnulib during ./bootstrap. Essentially this is doing:

GNULIB_REVISION=$(. bootstrap.conf >&2; echo $GNULIB_REVISION)
wget -nv https://gitlab.com/libidn/gnulib-mirror/-/archive/$GNULIB_REVISION/gnulib-mirror-$GNULIB_REVISION.tar.gz
gzip -cd gnulib-mirror-$GNULIB_REVISION.tar.gz | tar xf -
rm -fv gnulib-mirror-$GNULIB_REVISION.tar.gz
export GNULIB_SRCDIR=$PWD/gnulib-mirror-$GNULIB_REVISION
./bootstrap --no-git
./configure
make

Test the git-archive tarball

This goes without saying, but if you don’t test that building from a git-archive style tarball works, you are likely to regress at some point. Use CI/CD techniques to continuously test that a minimal git-archive tarball leads to a usable build.
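As a sketch, such a check could look like the following GitLab CI job. The job name, image, and the bootstrap/configure/make steps are illustrative and depend on the project; the important part is that the build runs from an exported archive, not from the git checkout itself:

```yaml
# Hypothetical .gitlab-ci.yml fragment: build from a git-archive tarball.
test-git-archive:
  image: debian:stable
  script:
    - apt-get update -q && apt-get install -qy git make gcc autoconf automake
    # Export the tree exactly as a minimal source tarball would contain it.
    - git archive --prefix=src/ HEAD | tar -x
    - cd src && ./bootstrap && ./configure && make check
```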

Mission Accomplished

So that wasn’t hard, was it? You should now be able to publish a minimal git-archive tarball and users should be able to build your project from it.

I recommend naming these archives as PROJECT-vX.Y.Z-src.tar.gz replacing PROJECT with your project name and X.Y.Z with your version number. The archive should have only one sub-directory named PROJECT-vX.Y.Z/ containing all the source-code files. This differentiates it from traditional PROJECT-X.Y.Z.tar.gz tarballs in that it embeds the git tag (which typically starts with v) and contains a wildcard-friendly -src substring. Alas there is no consistency around this naming pattern, and GitLab, GitHub, Codeberg etc all seem to use their own slightly incompatible variant.
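For example, producing an archive under that naming convention could look like this; "libfoo", the identity, and the v1.2.3 tag are made-up stand-ins for a real project:

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q libfoo && cd libfoo
git config user.email you@example.org
git config user.name Demo
echo 'int main(void) { return 0; }' > main.c
git add -A && git commit -q -m 'initial' && git tag v1.2.3

# One tar.gz, one top-level directory, both embedding the git tag.
git archive --format=tar.gz --prefix=libfoo-v1.2.3/ \
    -o libfoo-v1.2.3-src.tar.gz v1.2.3
tar tzf libfoo-v1.2.3-src.tar.gz   # all entries under libfoo-v1.2.3/
```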

Let’s go on to see what is needed to achieve reproducible “make dist” source tarballs. This is the release artifact that most users use, and they often contain lots of generated files and vendor files. These files are included to make it easy to build for the user. What are the challenges to make these reproducible?

Build dependencies causing different generated content

The first part is to realize that if you use tool X with version A to generate a file that goes into the tarball, version B of that tool may produce different outputs. This is a generic concern and it cannot be solved. We want our build tools to evolve and produce better outputs over time. What can be addressed is to avoid needless differences. For example, many tools store timestamps and versioning information in the generated files. This causes needless differences, which makes audits harder. I have worked on some of these, like Autoconf Archive timestamps, but solving all of these examples will take a long time, and some upstreams are reluctant to incorporate these changes. My approach meanwhile is to build things using similar environments, and compare the outputs for differences. I’ve found that the various closely related forks of GNU/Linux distributions are useful for this. Trisquel 11 is based on Ubuntu 22.04, and building my projects using both and comparing the results gives me only the relevant differences to improve. This can be extended to compare AlmaLinux with RockyLinux (for both versions 8 and 9), Devuan 5 against Debian 12, PureOS 10 with Debian 11, and so on.
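The comparison itself can be as simple as a bit-by-bit check of the two artifacts. In this sketch the two files are stand-ins for tarballs built on two different distributions, with made-up names:

```shell
tmp=$(mktemp -d)
# Stand-ins for, e.g., a Trisquel-built and an Ubuntu-built tarball.
printf 'identical bytes\n' > "$tmp/libfoo-1.0-trisquel.tar.gz"
printf 'identical bytes\n' > "$tmp/libfoo-1.0-ubuntu.tar.gz"

if cmp -s "$tmp/libfoo-1.0-trisquel.tar.gz" "$tmp/libfoo-1.0-ubuntu.tar.gz"; then
  echo reproducible
else
  echo 'differs: inspect with a tool like diffoscope'
fi
```

When the artifacts do differ, a structural diff tool such as diffoscope pinpoints exactly which embedded file or timestamp is responsible.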

Timestamps

Sometimes tools store timestamps in files in a way that is harder to fix. Two notable examples of this are *.po translation files and Texinfo manuals. For translation files, I have resolved this by making sure the files use a predictable POT-Creation-Date timestamp, and I set it to the modification timestamp of the NEWS file in the repository (which I set to the time of the latest git commit elsewhere) like this:

dist-hook: po-CreationDate-to-mtime-NEWS
.PHONY: po-CreationDate-to-mtime-NEWS
po-CreationDate-to-mtime-NEWS: mtime-NEWS-to-git-HEAD
  $(AM_V_GEN)for p in $(distdir)/po/*.po $(distdir)/po/$(PACKAGE).pot; do \
    if test -f "$$p"; then \
      $(SED) -e 's,POT-Creation-Date: .*\\n",POT-Creation-Date: '"$$(env LC_ALL=C TZ=UTC0 stat --format=%y $(srcdir)/NEWS | cut -c1-16,31-)"'\\n",' < $$p > $$p.tmp && \
      if cmp $$p $$p.tmp > /dev/null; then \
        rm -f $$p.tmp; \
      else \
        mv $$p.tmp $$p; \
      fi \
    fi \
  done

Similarly, I set a predictable modification time of the texinfo source file like this:

dist-hook: mtime-NEWS-to-git-HEAD
.PHONY: mtime-NEWS-to-git-HEAD
mtime-NEWS-to-git-HEAD:
  $(AM_V_GEN)if test -e $(srcdir)/.git \
                && command -v git > /dev/null; then \
    touch -m -t "$$(git log -1 --format=%cd \
      --date=format-local:%Y%m%d%H%M.%S)" $(srcdir)/NEWS; \
  fi

However I’ve realized that this needs to happen earlier and probably has to be run during ./configure time, because the doc/version.texi file is generated on first build before running ‘make dist‘ and for some reason the file is not rebuilt at release time. The Automake texinfo integration is a bit inflexible about providing hooks to extend the dependency tracking.

The method to address these differences isn’t really important, and they change over time depending on preferences. What is important is that the differences are eliminated.

ChangeLog

Traditionally ChangeLog files were manually prepared, and still are for some projects. I maintain git2cl but recently I’ve settled with gnulib’s gitlog-to-changelog because doing so avoids another build dependency (although the output formatting is different and arguably worse for my git commit style). So the ChangeLog files are generated from git history. This means a shallow clone will not produce the same ChangeLog file depending on how deep it was cloned. For Libntlm I simply disabled use of a generated ChangeLog because I wanted to support an even more extreme form of reproducibility: I wanted to be able to reproduce the full “make dist” source archives from a minimal “git-archive” source archive. However for other projects I’ve settled with a middle ground. I realized that for ‘git describe‘ to produce reproducible outputs, the shallow clone needs to include the last release tag. So it felt acceptable to assume that the clone is not minimal, but instead has some but not all of the history. I settled with the following recipe to produce ChangeLogs covering all changes since the last release.

dist-hook: gen-ChangeLog
.PHONY: gen-ChangeLog
gen-ChangeLog:
  $(AM_V_GEN)if test -e $(srcdir)/.git; then			\
    LC_ALL=en_US.UTF-8 TZ=UTC0					\
    $(top_srcdir)/build-aux/gitlog-to-changelog			\
       --srcdir=$(srcdir) --					\
       v$(PREV_VERSION)~.. > $(distdir)/cl-t &&			\
       { printf '\n\nSee the source repo for older entries\n'	\
         >> $(distdir)/cl-t &&					\
         rm -f $(distdir)/ChangeLog &&				\
         mv $(distdir)/cl-t $(distdir)/ChangeLog; }		\
  fi

I’m undecided about the usefulness of generated ChangeLog files within ‘make dist‘ archives. Before we have stable and secure archival of git repositories widely implemented, I can see some utility of this in case we lose all copies of the upstream git repositories. I can sympathize with the view that the concept of ChangeLog files died when we started to generate them from git logs: the files no longer serve any purpose, and we can ask people to go look at the git log instead of reading these generated non-source files.

Long-term reproducible trusted build environment

Distributions come and go, and old releases go out of support and often stop working. Which build environment should I choose to build the official release archives? To my knowledge only Guix offers a reliable way to re-create an older build environment (guix time-machine) that has bootstrappable properties for additional confidence. However I had two difficult problems here. The first one was that I needed Guix container images that were usable in GitLab CI/CD pipelines, and this side-tracked me for a while. The second one delayed my effort for many months, and I was inclined to give up. Libidn distributes a C# implementation. Some of the C# source code files included in the release tarball are generated. By what? You guessed it: by a C# program, with the source code included in the distribution. This means nobody could reproduce the source tarball of Libidn without trusting someone else's C# compiler binaries, which were built from binaries of earlier releases, chaining back into something that nobody ever attempts to build any more and that would likely fail to build due to bit-rot. I had two basic choices: either remove the C# implementation from Libidn (which may be a good idea for other reasons, since the C and C# implementations are unrelated) or build the source tarball on some binary-only distribution like Trisquel. Neither felt appealing to me, but a late Christmas gift of a reproducible Mono came to Guix and resolved this.

Embedded images in Texinfo manual

For Libidn one section of the manual has an image illustrating some concepts. The PNG, PDF and EPS outputs were generated via fig2dev from a *.fig file (hello 1985!) that I had stored in git. Over time, I had also started to store the generated outputs because of build issues. At some point, it was possible to post-process the PDF outputs with grep to remove some timestamps, however with compression this is no longer possible and actually the grep command I used resulted in a 0-byte output file. So my embedded binaries in git were no longer reproducible. I first set out to fix this by post-processing things properly, however I then realized that the *.fig file is not really easy to work with in a modern world. I wanted to create an image from some text-file description of the image. Eventually, via the Guix manual on guix graph, I came to re-discover the graphviz language and tool called dot (hello 1993!). All well then? Oh no, the PDF output embeds timestamps. Binary editing of PDFs no longer works through simple grep, remember? I was back where I started, and after some (soul- and web-) searching I discovered that Ghostscript (hello 1988!) pdfmarks could be used to modify things here. Cooperating with automake’s texinfo rules related to make dist proved once again a worthy challenge, and eventually I ended up with a Makefile.am snippet to build images that could be condensed into:

info_TEXINFOS = libidn.texi
libidn_TEXINFOS += libidn-components.png
imagesdir = $(infodir)
images_DATA = libidn-components.png
EXTRA_DIST += components.dot
DISTCLEANFILES = \
  libidn-components.eps libidn-components.png libidn-components.pdf
libidn-components.eps: $(srcdir)/components.dot
  $(AM_V_GEN)$(DOT) -Nfontsize=9 -Teps < $< > $@.tmp
  $(AM_V_at)! grep %%CreationDate $@.tmp
  $(AM_V_at)mv $@.tmp $@
libidn-components.pdf: $(srcdir)/components.dot
  $(AM_V_GEN)$(DOT) -Nfontsize=9 -Tpdf < $< > $@.tmp
# A simple sed on CreationDate is no longer possible due to compression.
# 'exiftool -CreateDate' is alternative to 'gs', but adds ~4kb to file.
# Ghostscript add <1kb.  Why can't 'dot' avoid setting CreationDate?
  $(AM_V_at)printf '[ /ModDate ()\n  /CreationDate ()\n  /DOCINFO pdfmark\n' > pdfmarks
  $(AM_V_at)$(GS) -q -dBATCH -dNOPAUSE -sDEVICE=pdfwrite -sOutputFile=$@.tmp2 $@.tmp pdfmarks
  $(AM_V_at)rm -f $@.tmp pdfmarks
  $(AM_V_at)mv $@.tmp2 $@
libidn-components.png: $(srcdir)/components.dot
  $(AM_V_GEN)$(DOT) -Nfontsize=9 -Tpng < $< > $@.tmp
  $(AM_V_at)mv $@.tmp $@
pdf-recursive: libidn-components.pdf
dvi-recursive: libidn-components.eps
ps-recursive: libidn-components.eps
info-recursive: $(top_srcdir)/.version libidn-components.png

Surely this can be improved, but I'm not yet certain which way forward is best. I like having a text representation as the source of the image. I'm sad that the new image size is ~48kb compared to the old image size of ~1kb. I tried using exiftool -CreateDate as an alternative to Ghostscript, but using it to remove the timestamp added ~4kb to the file size, and naturally I was appalled by this ignorance of impending doom.

Test reproducibility of tarball

Again, you need to continuously test the properties you desire. This means building your project twice using different environments and comparing the results. I've settled on a small GitLab CI/CD pipeline job that performs bit-by-bit comparison of generated 'make dist' archives. It also performs bit-by-bit comparison of generated 'git-archive' artifacts. See the Libidn2 .gitlab-ci.yml 0-compare job, which essentially is:

0-compare:
  image: alpine:latest
  stage: repro
  needs: [ B-AlmaLinux8, B-AlmaLinux9, B-RockyLinux8, B-RockyLinux9, B-Trisquel11, B-Ubuntu2204, B-PureOS10, B-Debian11, B-Devuan5, B-Debian12, B-gcc, B-clang, B-Guix, R-Guix, R-Debian12, R-Ubuntu2404, S-Trisquel10, S-Ubuntu2004 ]
  script:
  - cd out
  - sha256sum */*.tar.* */*/*.tar.* | sort | grep    -- -src.tar.
  - sha256sum */*.tar.* */*/*.tar.* | sort | grep -v -- -src.tar.
  - sha256sum */*.tar.* */*/*.tar.* | sort | uniq -c -w64 | sort -rn
  - sha256sum */*.tar.* */*/*.tar.* | grep    -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
  - sha256sum */*.tar.* */*/*.tar.* | grep -v -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
# Confirm modern git-archive tarball reproducibility
  - cmp b-almalinux8/src/*.tar.gz b-almalinux9/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz b-rockylinux8/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz b-rockylinux9/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz b-debian12/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz b-devuan5/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz r-guix/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz r-debian12/src/*.tar.gz
  - cmp b-almalinux8/src/*.tar.gz r-ubuntu2404/src/*v2.*.tar.gz
# Confirm old git-archive (export-subst but long git describe) tarball reproducibility
  - cmp b-trisquel11/src/*.tar.gz b-ubuntu2204/src/*.tar.gz
# Confirm really old git-archive (no export-subst) tarball reproducibility
  - cmp b-debian11/src/*.tar.gz b-pureos10/src/*.tar.gz
# Confirm 'make dist' generated tarball reproducibility
  - cmp b-almalinux8/*.tar.gz b-rockylinux8/*.tar.gz
  - cmp b-almalinux9/*.tar.gz b-rockylinux9/*.tar.gz
  - cmp b-pureos10/*.tar.gz b-debian11/*.tar.gz
  - cmp b-devuan5/*.tar.gz b-debian12/*.tar.gz
  - cmp b-trisquel11/*.tar.gz b-ubuntu2204/*.tar.gz
  - cmp b-guix/*.tar.gz r-guix/*.tar.gz
# Confirm 'make dist' from git-archive tarball reproducibility
  - cmp s-trisquel10/*.tar.gz s-ubuntu2004/*.tar.gz

Notice that I discovered that 'git archive' outputs differ over time too, which is natural but a bit of a nuisance. The output of the job is illuminating in that all SHA256 checksums of generated tarballs are included; see for example the libidn2 v2.3.8 job log:

$ sha256sum */*.tar.* */*/*.tar.* | sort | grep -v -- -src.tar.
368488b6cc8697a0a937b9eb307a014396dd17d3feba3881e6911d549732a293  b-trisquel11/libidn2-2.3.8.tar.gz
368488b6cc8697a0a937b9eb307a014396dd17d3feba3881e6911d549732a293  b-ubuntu2204/libidn2-2.3.8.tar.gz
59db2d045fdc5639c98592d236403daa24d33d7c8db0986686b2a3056dfe0ded  b-debian11/libidn2-2.3.8.tar.gz
59db2d045fdc5639c98592d236403daa24d33d7c8db0986686b2a3056dfe0ded  b-pureos10/libidn2-2.3.8.tar.gz
5bd521d5ecd75f4b0ab0fc6d95d444944ef44a84cad859c9fb01363d3ce48bb8  s-trisquel10/libidn2-2.3.8.tar.gz
5bd521d5ecd75f4b0ab0fc6d95d444944ef44a84cad859c9fb01363d3ce48bb8  s-ubuntu2004/libidn2-2.3.8.tar.gz
7f1dcdea3772a34b7a9f22d6ae6361cdcbe5513e3b6485d40100b8565c9b961a  b-almalinux8/libidn2-2.3.8.tar.gz
7f1dcdea3772a34b7a9f22d6ae6361cdcbe5513e3b6485d40100b8565c9b961a  b-rockylinux8/libidn2-2.3.8.tar.gz
8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06  b-clang/libidn2-2.3.8.tar.gz
8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06  b-debian12/libidn2-2.3.8.tar.gz
8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06  b-devuan5/libidn2-2.3.8.tar.gz
8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06  b-gcc/libidn2-2.3.8.tar.gz
8031278157ce43b5813f36cf8dd6baf0d9a7f88324ced796765dcd5cd96ccc06  r-debian12/libidn2-2.3.8.tar.gz
acf5cbb295e0693e4394a56c71600421059f9c9bf45ccf8a7e305c995630b32b  r-ubuntu2404/libidn2-2.3.8.tar.gz
cbdb75c38100e9267670b916f41878b6dbc35f9c6cbe60d50f458b40df64fcf1  b-almalinux9/libidn2-2.3.8.tar.gz
cbdb75c38100e9267670b916f41878b6dbc35f9c6cbe60d50f458b40df64fcf1  b-rockylinux9/libidn2-2.3.8.tar.gz
f557911bf6171621e1f72ff35f5b1825bb35b52ed45325dcdee931e5d3c0787a  b-guix/libidn2-2.3.8.tar.gz
f557911bf6171621e1f72ff35f5b1825bb35b52ed45325dcdee931e5d3c0787a  r-guix/libidn2-2.3.8.tar.gz
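One recurring source of such drift is compression metadata: gzip stores a modification time (and the input file name) in its header unless invoked with -n, so two otherwise identical archives can differ by a few bytes. A minimal sketch with throwaway files (not the project's actual tarballs):

```shell
# Compress the same payload twice, a second apart; gzip -n omits the
# embedded timestamp and name, so the two outputs are byte-identical.
# Drop the -n flags and the two files will usually differ.
set -eu
printf 'demo payload\n' > demo.txt
gzip -n -9 -c demo.txt > a.gz
sleep 1
touch demo.txt
gzip -n -9 -c demo.txt > b.gz
cmp a.gz b.gz && echo reproducible
rm -f demo.txt a.gz b.gz
```

Run as-is it prints reproducible; without -n, cmp typically reports a difference early in the file, where the header timestamp lives.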

I'm sure I have forgotten or suppressed some challenges (sprinkling LANG=C TZ=UTC0 helps) related to these goals, but my hope is that this discussion of solutions will inspire you to implement these concepts for your software project too. Please share your thoughts and additional insights in a comment below. Enjoy, and happy hacking!

24 March, 2025 11:09AM by simon

Arnaud Rebillout

Build container images with buildah/podman in GitLab CI

Oh no, it broke again!

Today, this .gitlab-ci.yml file no longer works in GitLab CI:

build-container-image:
  stage: build
  image: debian:testing
  before_script:
    - apt-get update
    - apt-get install -y buildah ca-certificates
  script:
    - buildah build -t $CI_REGISTRY_IMAGE .

The command buildah build ... fails with this error message:

STEP 2/3: RUN  apt-get update
internal:0:0-0: Error: Could not process rule: No such file or directory
internal:0:0-0: Error: Could not process rule: No such file or directory
error running container: did not get container start message from parent: EOF
Error: building at STEP "RUN apt-get update": setup network: netavark: nftables error: nft did not return successfully while applying ruleset

After some investigation, it turns out to be caused by the recent upload of netavark 1.14.0-2. In this version, netavark switched from iptables to nftables as the default firewall driver. That doesn't really fly on GitLab SaaS shared runners.

For the complete background, refer to https://discussion.fedoraproject.org/t/125528. Note that the issue with GitLab was reported back in November, but at this point the conversation had died out.

Fortunately, it's easy to work around: we can tell netavark to keep using iptables via the environment variable NETAVARK_FW. The .gitlab-ci.yml file above becomes:

build-container-image:
  stage: build
  image: debian:testing
  variables:
    # Cf. https://discussion.fedoraproject.org/t/125528/7
    NETAVARK_FW: iptables
  before_script:
    - apt-get update
    - apt-get install -y buildah ca-certificates
  script:
    - buildah build -t $CI_REGISTRY_IMAGE .

And everything works again!

If you're interested in this issue, feel free to fork https://gitlab.com/arnaudr/gitlab-build-container-image and try it by yourself.

24 March, 2025 12:00AM by Arnaud Rebillout

March 22, 2025

hackergotchi for Luke Faraone

Luke Faraone

I'm running for the OSI board... maybe

The Open Source Initiative has two classes of board seats: Affiliate seats, and Individual Member seats. 

In the upcoming election, each affiliate can nominate a candidate, and each affiliate can cast a vote for the Affiliate candidates, but there's only 1 Affiliate seat available. I initially expressed interest in being nominated as an Affiliate candidate via Debian. But since Bradley Kuhn is also running for an Affiliate seat with a platform similar to mine, especially with regard to the OSAID, I decided to run as part of an aligned "ticket" as an Individual Member to avoid contention for the 1 Affiliate seat.

Bradley and I discussed running on a similar ticket around 8/9pm Pacific, and I submitted my candidacy around 9pm PT on 17 February. 

I was dismayed when I received the following mail from Nick Vidal:

Dear Luke,

Thank you for your interest in the OSI Board of Directors election. Unfortunately, we are unable to accept your application as it was submitted after the official deadline of Monday Feb 17 at 11:59 pm UTC. To ensure a fair process, we must adhere to the deadline for all candidates.

We appreciate your enthusiasm and encourage you to stay engaged with OSI’s mission. We hope you’ll consider applying in the future or contributing in other meaningful ways.

Best regards,
OSI Election Teams

Nowhere on the "OSI’s board of directors in 2025: details about the elections" page do they list a timezone for closure of nominations; they simply list Monday 17 February. 

The OSI's contact address is in California, so it seems arbitrary and capricious to retroactively define all of these processes as being governed by UTC.

Accordingly, I was not able to participate in the "potential board director" info sessions, but people who attended heard that the importance of accommodating differing timezones was discussed during the info session, and that OSI representatives mentioned they try to accommodate everyone's timezones. This seems in sharp contrast with the above policy.

I urge the OSI to reconsider this policy and allow me to stand for an Individual seat in the current cycle. 

Update, N.B.: to people writing about this, I use they/them pronouns

22 March, 2025 04:30PM by Luke Faraone (noreply@blogger.com)

Antoine Beaupré

Losing the war for the free internet

Warning: this is a long ramble I wrote after an outage of my home internet. You'll get your regular scheduled programming shortly.

I didn't realize this until relatively recently, but we're at war.

Fascists and capitalists are trying to take over the world, and it's bringing utter chaos.

We're more numerous than them, of course: this is only a handful of people screwing everyone else over, but they've accumulated so much wealth and media control that it's getting really, really hard to move around.

Everything is surveilled: people are carrying tracking and recording devices in their pockets at all times, or they drive around in surveillance machines. Payments are all turning digital. There are cameras everywhere, including in cars. Personal data leaks are so common people kind of assume their personal address, email address, and other personal information has already been leaked.

The internet itself is collapsing: most people are using the network only as a channel to reach a "small" set of "hyperscalers": mind-bogglingly large datacenters that don't really operate like the old internet. Once you reach the local endpoint, you're not on the internet anymore. Netflix, Google, Facebook (Instagram, Whatsapp, Messenger), Apple, Amazon, Microsoft (Outlook, Hotmail, etc), all those things are not really the internet anymore.

Those companies operate over the "internet" (as in the TCP/IP network), but they are not an "interconnected network" as much as their own, gigantic silos so much bigger than everything else that they essentially dictate how the network operates, regardless of standards. You access it over "the web" (as in "HTTP") but the fabric is not made of interconnected links that cross sites: all those sites are trying really hard to keep you captive on their platforms.

Besides, you think you're writing an email to the state department, for example, but you're really writing to Microsoft Outlook. That app your university or border agency tells you to install, the backend is not hosted by those institutions, it's on Amazon. Heck, even Netflix is on Amazon.

Meanwhile I've been operating my own mail server first under my bed (yes, really) and then in a cupboard or the basement for almost three decades now. And what for?

So I can tell people I can? Maybe!

I guess the reason I'm doing this is the same reason people are suddenly asking me about the (dead) mesh again. People are worried and scared that the world has been taken over, and they're right: we have gotten seriously screwed.

It's the same reason I keep doing radio, minimally know how to grow food, ride a bike, build a shed, paddle a canoe, archive and document things, talk with people, host an assembly. Because, when push comes to shove, there's no one else who's going to do it for you, at least not the way that benefits the people.

The Internet is one of humanity's greatest accomplishments. Obviously, oligarchs and fascists are trying to destroy it. I just didn't expect the tech bros to be flipping to that side so easily. I thought we were friends, but I guess we are, after all, enemies.

That said, that old internet is still around. It's getting harder to host your own stuff at home, but it's not impossible. Mail is tricky because of reputation, but it's also tricky in the cloud (don't get fooled!), so it's not that much easier (or cheaper) there.

So there's things you can do, if you're into tech.

Share your wifi with your neighbours.

Build a LAN. Throw a wire over to your neighbour too, it works better than wireless.

Use Tor. Run a relay, a snowflake, a webtunnel.

Host a web server. Build a site with a static site generator and throw it in the wind.

Download and share torrents, and why not a tracker.

Run an IRC server (or Matrix, if you want to federate and lose high availability).

At least use Signal, not Whatsapp or Messenger.

And yes, why not, run a mail server, join a mesh.

Don't write new software, there's plenty of that around already.

(Just kidding, you can write code, cypherpunk.)

You can do many of those things just by setting up a FreedomBox.

That is, after all, the internet: people doing their own thing for their own people.

Otherwise, it's just like sitting in front of the television and watching the ads. Opium of the people, like the good old time.

Let a billion droplets build the biggest multitude of clouds that will storm over this world and rip apart this fascist conspiracy.

Disobey. Revolt. Build.

We are more than them.

22 March, 2025 03:00PM

Minor outage at Teksavvy business

This morning, internet was down at home. The last time I had such an issue was in February 2023, when my provider was Oricom. Now I'm with a business service at Teksavvy Internet (TSI), where I pay 100$ per month for a 250/50 mbps business package, with a static IP address, on which I run, well, everything: email services, this website, etc.

Mitigation

Email

The main problem when the service goes down for a prolonged outage like this is email. Mail is pretty resilient to such failures, but after some delay (which varies according to the other end), mail starts to drop. I am actually not sure what the various settings are among different providers, but I would assume mail is typically kept for about 24h, so that's our mark.
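For what it's worth, the retry window on the sending side is configurable; in Postfix, for example, it is governed by maximal_queue_lifetime, which ships with a 5-day default, though there is no telling what any given remote MTA is actually configured to use. The relevant main.cf knobs:

```
# How long Postfix keeps retrying deferred mail before bouncing it.
# 5d is the shipped default; some sites shorten it considerably.
maximal_queue_lifetime = 5d
bounce_queue_lifetime = 5d
```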

Last time, I set up VMs at Linode and Digital Ocean to deal better with this. I have actually kept those VMs running as DNS servers until now, so that part is already done.

I had fantasized about Puppetizing the mail server configuration so that I could quickly spin up mail exchangers on those machines. But now I am realizing that my Puppet server is one of the services that's down, so this would not work, at least not unless the manifests can be applied without a Puppet server (say with puppet apply).

Thankfully, my colleague groente did amazing work to refactor our Postfix configuration in Puppet at Tor, and that gave me the motivation to reproduce the setup in the lab. So I have finally Puppetized part of my mail setup at home. That used to be hand-crafted experimental stuff documented in a couple of pages in this wiki, but is now being deployed by Puppet.

It's not complete yet: spam filtering (including DKIM checks and graylisting) is not implemented yet, but that's the next step, presumably to be done during the next outage. The setup should be deployable with puppet apply, however, and I have refined that mechanism a little bit, with the run script.

Heck, it's not even deployed yet. But the hard part / grunt work is done.
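The masterless mechanism itself is simple; a hypothetical minimal manifest (file and module paths invented, the real setup manages far more):

```
# mail.pp -- applied locally, no Puppet server involved:
#   puppet apply --modulepath=/srv/puppet/modules mail.pp
package { 'postfix':
  ensure => installed,
}
service { 'postfix':
  ensure  => running,
  require => Package['postfix'],
}
```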

Other

The outage was "short" enough (5 hours) that I didn't take time to deploy the other mitigations I had deployed in the previous incident.

But I'm starting to seriously consider deploying a web (and caching) reverse proxy so that I endure such problems more gracefully.
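A sketch of what that could look like in nginx (names and timings invented); proxy_cache_use_stale is the piece that keeps serving cached pages while the origin is unreachable:

```
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=home:10m
                 max_size=1g inactive=7d;
server {
    listen 443 ssl;
    server_name home.example.net;        # placeholder
    location / {
        proxy_pass https://origin.example.net;
        proxy_cache home;
        proxy_cache_valid 200 10m;
        proxy_cache_use_stale error timeout updating
                              http_500 http_502 http_503 http_504;
    }
}
```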

Side note on proper services

Typically, I tend to think of a properly functioning service as having four things:

  1. backups
  2. documentation
  3. monitoring
  4. automation
  5. high availability

Yes, I miscounted. This is why you have high availability.

Backups

Duh. If data is maliciously or accidentally destroyed, you need a copy somewhere. Preferably in a way that malicious joe can't get to.

This is harder than you think.

Documentation

I have an entire template for this. Essentially, it boils down to using https://diataxis.fr/ and this "audit" guide. For me, the most important parts are:

  • disaster recovery (includes backups, probably)
  • playbook
  • install/upgrade procedures (see automation)

You probably know this is hard, and this is why you're not doing it. Do it anyways, you'll think it sucks, but you'll be really grateful for whatever scraps you wrote when you're in trouble.

Monitoring

If you don't have monitoring, you'll find out about failures too late, and you won't know when they recover. Consider high availability, work hard to reduce noise, and don't have machines wake people up; that's literally torture and is against the Geneva Convention.

Consider predictive algorithms to prevent failures, like "add storage within 2 weeks before this disk fills up".
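That kind of predictive threshold is what PromQL's predict_linear() is for; a hypothetical alerting rule (metric from node_exporter, window and severity invented):

```
groups:
  - name: capacity
    rules:
      - alert: DiskFullWithin2Weeks
        # Linear extrapolation of the last 6h of usage: fire if the
        # filesystem would hit zero free bytes within 14 days.
        expr: predict_linear(node_filesystem_avail_bytes[6h], 14 * 24 * 3600) < 0
        for: 1h
        labels:
          severity: warning
```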

This is harder than you think.

Automation

Make it easy to redeploy the service elsewhere.

Yes, I know you have backups. That is not enough: that typically restores data and while it can also include configuration, you're going to need to change things when you restore, which is what automation (or call it "configuration management" if you will) will do for you anyways.

This also means you can do unit tests on your configuration, otherwise you're building legacy.

This is probably as hard as you think.

High availability

Make it not fail when one part goes down.

Eliminate single points of failures.

This is easier than you think, except for storage and DNS (which, I guess, means it's harder than you think too).

Assessment

In the above 5 items, I check two:

  1. backups
  2. documentation

And barely: I'm not happy about the offsite backups, and my documentation is much better at work than at home (and even there, I have a 15 year backlog to catchup on).

I barely have monitoring: Prometheus is scraping parts of the infra, but I don't have any sort of alerting -- by which I don't mean "electrocute myself when something goes wrong", I mean "there's a set of thresholds and conditions that define an outage and I can look at it".

Automation is wildly incomplete. My home server is a random collection of old experiments and technologies, ranging from Apache with Perl and CGI scripts to Docker containers running Golang applications. Most of it is not Puppetized (but the ratio is growing). Puppet itself introduces a huge attack vector with kind of catastrophic lateral movement if the Puppet server gets compromised.

And, fundamentally, I am not sure I can provide high availability in the lab. I'm just this one guy running my home network, and I'm growing older. I'm thinking more about winding things down than building things now, and that's just really sad, because I feel we're losing (well that escalated quickly).

Resolution

In the end, I didn't need any mitigation and the problem fixed itself. I did do quite a bit of cleanup so that feels somewhat good, although I despaired quite a bit at the amount of technical debt I've accumulated in the lab.

Timeline

Times are in UTC-4.

  • 6:52: IRC bouncer goes offline
  • 9:20: called TSI support, waited on the line 15 minutes then was told I'd get a call back
  • 9:54: outage apparently detected by TSI
  • 11:00: no response, tried calling back support again
  • 11:10: confirmed bonding router outage, no official ETA but "today", source of the 9:54 timestamp above
  • 12:08: TPA monitoring notices service restored
  • 12:34: call back from TSI; service restored, problem was with the "bonder" configuration on their end, which was "fighting between Montréal and Toronto"

22 March, 2025 04:25AM

March 21, 2025

hackergotchi for Daniel Pocock

Daniel Pocock

Heathrow power failure, UK skills shortage, Brexit & Russia

Heathrow airport was shut down for almost the whole day due to a loss of electrical supply.

People were quick to ask if it involved foul play and if it might involve Russia.

The answer was already there in broad daylight.

The UK's national grid has been publicly appealing for help with the skills shortage for some time.

Brexit and Covid both hit the UK in early 2020. The skills shortage became more dire. There are many who argue that Brexit played a much bigger role than Covid in the long term skills shortage.

Who brought on Brexit? There were many actors involved. The role of Russia was significant enough that there is a dedicated Wikipedia page about Russian interference in the 2016 Brexit referendum.

Understanding critical infrastructure

The air traffic control facilities are critical infrastructure. The passenger check-in facilities are not. It was uncomfortable to see that UK ministers commenting on the crisis were unable to explain this distinction to the public or even comment on the state of the air traffic control.

Nonetheless, whether every part of the facility is critical infrastructure or not, it was simply bad for the UK's national brand.

The UK knows better

The last time energy supplies at Heathrow were cut so dramatically was the fire at the Buncefield fuel depot. I was living about 5km away from the depot when it exploded at about 3am on a Sunday morning. It turns out that both Heathrow and Luton airports were obtaining fuel through pipelines from Buncefield and airlines had to arrange alternative stops for refuelling.

During the cold war, British Telecom (BT) built nuclear-proof telephone exchange bunkers deep under UK cities. As luck would have it, I was living in Manchester at the time their indestructible telephone exchange caught fire. They discovered that everything from the cash machines and payment terminals to the dispatch system for ambulances depended on the same telephone exchange. Many of those things were not working for about two weeks. Nobody ever found evidence linking Russians to the fire.

Incidents like these are well known in the UK; they have prompted inquiries at Westminster. It would be wise to dig up the reports from these previous incidents, see how many of the recommendations have really been implemented, and consider whether the lessons from previous incidents can be generalized to help prevent today's catastrophe from repeating itself.

Help from France

The day before the crisis, Tour de France organizers announced that the UK will host the grand depart and the first three stages of the race in 2027. As the saying goes, on your bikes.

21 March, 2025 10:00PM

Jamie McClelland

AI's Actual Impact

Two years after OpenAI launched ChatGPT 3.5, humanity is not on the cusp of extinction and Elon Musk seems more responsible for job loss than any AI agent.

However, ask any web administrator and you will learn that large language models are having a significant impact on the world wide web (or, for a less technical account, see Forbes articles on bots). At May First, a membership organization that has been supporting thousands of web sites for over 20 years, we have never seen anything like this before.

It started in 2023. Web sites that performed quite well with a steady viewership started having traffic spikes. These were relatively easy to diagnose, since most of the spikes came from visitors that properly identified themselves as bots, allowing us to see that the big players - OpenAI, Bing, Google, Facebook - were increasing their efforts to scrape as much content from web sites as possible.
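Diagnosing that phase was as simple as counting user agents; a toy sketch against a combined-format access log (sample lines invented):

```shell
# Count the self-identified bots in a combined-format access log by
# extracting the quoted user-agent field (the sixth '"'-delimited field).
cat > access.log <<'EOF'
1.2.3.4 - - [01/Apr/2025:00:00:00 +0000] "GET / HTTP/1.1" 200 123 "-" "Mozilla/5.0 (compatible; Googlebot/2.1)"
1.2.3.4 - - [01/Apr/2025:00:00:01 +0000] "GET /a HTTP/1.1" 200 123 "-" "Mozilla/5.0 (compatible; Googlebot/2.1)"
5.6.7.8 - - [01/Apr/2025:00:00:02 +0000] "GET /b HTTP/1.1" 200 123 "-" "Mozilla/5.0 (X11; Linux x86_64)"
EOF
awk -F'"' '{print $6}' access.log | grep -iE 'bot|crawl|spider' \
  | sort | uniq -c | sort -rn | head
rm -f access.log
```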

Small brochure sites were mostly unaffected because they could be scraped in a matter of minutes. But large sites with an archive of high quality human written content were getting hammered. Any web site with a search feature, a calendar, or any other interface that generates a combinatorial explosion of followable links was particularly vulnerable.

But hey, that’s what robots.txt is for, right? To tell robots to back off if you don’t want them scraping your site?

Eventually, the cracks began to show. Bots were ignoring robots.txt (did they ever pay that much attention to it in the first place?). Furthermore, rate limiting requests by user agent also began to fail. When you post a link on Facebook, a bot identifying itself as "facebookexternalhit" is invoked to preview the page so it can show a picture and other metadata. We don't want to rate limit that bot, right? Except, Facebook is also using this bot to scrape your site, often bringing your site to its knees. And don't get me started on TwitterBot.
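Before the bots stopped identifying themselves, per-user-agent rate limiting could be sketched roughly like this in nginx (zone names and rates invented); note that limiting facebookexternalhit this way also throttles legitimate link previews, which is exactly the dilemma described above:

```
# Requests with an empty key are not rate limited, so ordinary
# browsers pass through untouched.
map $http_user_agent $bot_limit_key {
    default                "";
    ~*facebookexternalhit  $binary_remote_addr;
    ~*Twitterbot           $binary_remote_addr;
}
limit_req_zone $bot_limit_key zone=bots:10m rate=30r/m;

server {
    location / {
        limit_req zone=bots burst=10 nodelay;
    }
}
```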

Eventually, it became clear that the majority of the armies of bots scraping our sites have completely given up on identifying themselves as bots and are instead using user agents indistinguishable from regular browsers. By using thousands of different IP addresses, it has become really hard to separate the real humans from the bots.

Now what?

So, no, unfortunately, your web site is not suddenly getting really popular. And, you are blessed with a whole new set of strategic decisions.

Fortunately, May First has undergone a major infrastructure transition, resulting in centralized logging of all web sites and a fleet of web proxy servers that intercept all web traffic. Centralized logging means we can analyze traffic and identify bots more easily, and a web proxy fleet allows us to more easily implement rules across all web sites.

However, even with all of our latest changes and hours upon hours of work to keep out the bots, our members are facing some hard decisions about maintaining an open web.

One member of May First provides Google translations of their web site to every language available. But wow, that is now a disaster: instead of having every bot under the sun scraping all 843 (a made-up number) pieces of unique content on their site, the same bots are scraping 843 * (number of available languages) pieces of content on their site. Should they stop providing this translation service in order to ensure people can access their site in the site's primary language?

Should web sites turn off their search features that include drop down options of categories to prevent bots from systematically refreshing the search page with every possible combination of search terms?

Do we need to alter our calendar software to avoid providing endless links into the future (ok, that is an easy one)?

What’s next?

Something has to change.

  • Lock down web 2.0. Web 2.0 brought us wonderful dynamic web sites, which Drupal and WordPress and many other pieces of amazing software have supported for over a decade. This is the software that is getting bogged down by bots. Maybe we need to figure out a way to lock down the dynamic aspects of this software to logged in users and provide static content for everyone else?

  • Paywalls and accounts everywhere. There’s always been an amazing non-financial reward to providing a web site with high quality movement oriented content for free. It populates the search engines, provides links to inspiring and useful content in moments of crises, and can galvanize movements. But these moments of triumph happen between long periods of hard labor that now seems to mostly feed capitalist AI scumbags. If we add a new set of expenses and labor to keep the sites running for this purpose, how sustainable is that? Will our treasure of free movement content have to move behind paywalls or logins? If we provide logins, will that keep the bots out or just create a small hurdle for them to automate the account creation process? What happens when we can’t search for this kind of content via search engines?

  • Cutting deals. What if our movement content providers are forced to cut deals with the AI entrepreneurs to allow the paying scumbags to fund the content creation. Eww. Enough said.

  • Bot detection. Maybe we just need to get better at bot detection? This will surely be an arms race, but would have some good benefits. Bots have also been filling out our forms and populating our databases with spam, testing credit cards against our donation pages, conducting denial of service attacks and all kinds of other irritating acts of vandalism. If we were better at stopping bots automatically it would have a lot of benefits. But what impact would it have on our web sites and the experience of using them? What about “good” bots (RSS feed readers, payment processors, web hooks, uptime detectors)? Will we cut the legs off any developer trying to automate something?

I’m not really sure where this is going, but it seems that the world wide web is about to head in a new direction.

21 March, 2025 12:27PM

March 20, 2025

hackergotchi for C.J. Adams-Collier

C.J. Adams-Collier

Installing a desktop environment on the HP Omen

`dmidecode | grep -A8 '^System Information'`

tells me that the Manufacturer is HP and Product Name is OMEN Transcend Gaming Laptop 14-fb0xxx

I’m provisioning a new piece of hardware for my eng consultant and it’s proving more difficult than I expected. I must admit guilt for some of this difficulty. Instead of installing using the debian installer on my keychain, I dd’d the pv block device of the 16 inch 2023 version onto the partition set aside from it. I then rebooted into rescue mode and cleaned up the grub config, corrected the EFI boot partition’s path in /etc/fstab, ran the grub installer from the rescue menu, and rebooted.
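For anyone attempting the same stunt: after a dd from another machine, the copied /etc/fstab still points at the old machine's EFI system partition, so it needs an entry along these lines (UUID invented; the real one comes from blkid on the new disk):

```
# /etc/fstab -- EFI system partition on the new hardware
UUID=1A2B-3C4D  /boot/efi  vfat  umask=0077  0  1
```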

On the initial boot of the system, X or Wayland or whatever is supposed to be talking to this vast array of GPU hardware in this device was unable to do more than create a black screen on vt1. It’s easy enough to switch to vt2 and get a shell on the installed system. So I’m doing that and investigating what’s changed in Trixie. It seems like it’s pretty significant. Did they just throw out Keith Packard’s and Behdad Esfahbod’s work on font rendering? I don’t understand what’s happening in this effort to abstract to a simpler interface. I’ll probably end up reading more about it.

In an effort to have Debian re-configure the system for desktop use, I have uninstalled as many packages as I could find that were in the display and human interface category, or were firmware/drivers for devices not present in this laptop’s SoC. Some commands I used to clear these packages and re-install cinnamon follow:

```
dpkg -S /etc/X11
dpkg -S /usr/lib/firmware
apt-get purge $(dpkg -l | grep -i \
  -e gnome -e gtk -e x11-common -e xfonts- -e libvdpau -e dbus-user-session -e gpg-agent \
  -e bluez -e colord -e cups -e fonts -e drm -e xf86 -e mesa -e nouveau -e cinnamon \
  -e avahi -e gdk -e pixel -e desktop -e libreoffice -e x11 -e wayland -e xorg \
  -e firmware-nvidia-graphics -e firmware-amd-graphics -e firmware-mediatek -e firmware-realtek \
  | awk '{print $2}')
apt-get autoremove
apt-get purge $(dpkg -l | grep '^r' | awk '{print $2}')
tasksel install cinnamon-desktop
```

And then I rebooted. When it came back up, I was greeted with a login prompt, and Trixie looks to be fully functional on this device, including the attached wifi radio, tethering to my android, and the thunderbolt-attached Marvell SFP+ enclosure.

I’m also installing libvirt and fetching the DVD ISO material for Debian, Ubuntu and Rocky in case we need to build VMs during the development process. These are the platforms I target at work with GCP Dataproc, so I’m pretty good at performing maintenance operations on them at this point.

20 March, 2025 11:06PM by C.J. Collier


Daniel Pocock

Hidden from coroners and the public: tech industry cultural contagion

A coroner in Australia is currently examining the tragic death of Lilie James. There has been intense focus on her ex-boyfriend, Paul Thijssen, but experts have said they are unable to confirm why he suddenly engaged in such a violent crime, followed by his own suicide. Reports are also examining the role of location tracking services and the way generation Z is living a social-media, dopamine-fuelled reality-TV lifestyle.

British coroner Andrew Walker wrote that online content influenced the teenager Molly Russell to the point of suicide. Could exposure to online content and the related culture have been a factor in the way men like Paul Thijssen and Kyle Clifford suddenly snap and go down this path?

If Thijssen and Clifford were both under influence from the same content, for example, Andrew Tate, could we call them a suicide cluster, despite the fact they were on opposite sides of the planet? Here is a suggested definition of a suicide cluster in New South Wales.

Suicide clusters have a number of different definitions (Johansson et al, 2006; Larkin & Beautrais, 2012; Niedzwiedz et al, 2014). A widely accepted definition is a ‘group of suicides or suicide attempts, or both, that occur closer together in time and space than would normally be expected on the basis of statistical prediction or community expectation’ (O'Carroll et al, 1988). The majority of studies exclude attempts at suicide (Johansson et al, 2006). Although there seems to be some indication that suicide clusters should involve at least three suicide cases, there appears to be less agreement about their closeness in time and space.

The people who created the technology

Before answering that, we need to delve into the culture of the engineers who create the technology. If it is not even safe for us then it is probably not safe for the public at large.

Looking at the Debian suicide cluster, I couldn't help wondering how many of the cases were examined by a coroner and whether anybody submitted any of the hidden emails to the coroner. After enormous email chains on debian-private, Frans Pop sent his resignation the night before Debian Day in 2010. It reads like a suicide note, but we didn't know that until later.

I contacted Pop's brother in 2022. Pop's brother told me that some Debian and Ubuntu people came to the funeral, they took away computer equipment, they never told family members the "resignation" coincided with Debian Day and they never disclosed the debian-private emails to the family or the coroner.

DP: Hi [redacted], are you the brother of Frans? I am sorry for your loss. Debian has hidden thousands of emails about this.

Pop: Hi. Yes. Who is this?

DP: https://danielpocock.com/debian-open-source-volunteer-suicides-compensation/

Pop: This is quite shocking to me. I did not know anything about this.

Pop: Are you Daniel?

The cover-up of the Frans Pop suicide is symptomatic of the industry at large.

Silicon Valley overlords don't want to send lawyers to represent them at an inquest if they can simply sweep things under the carpet by maintaining a culture of fear.

The next death was on our wedding day and it looked like a copy-cat suicide.

In Australia, a coroner can talk about these patterns publicly. In Switzerland, where Adrian von Bidder died, the police help companies to cover up the cause of death.

Did Swiss police help Google and Ubuntu cover up a suicide cluster that included our wedding day?

Recording one: "this is going to get worse and worse"

Recording two: "you are going to be alone against us"

The Debian group has a history of over 30 years of threats, intimidation, harassment and abuse.

Rogue employers have free rein in Switzerland

The man who made these threats and threw the punch is Indian. But it happened in Switzerland. Is it any surprise that colleagues are committing suicide? Swiss jurists told me this type of behavior is acceptable in the workplace.

Recording one: "I mean I don't know what makes you stay here after I am so abusive to you"

Recording two: the punch

Coercive control is a product of cultural contagion

Reading through the news reports about the lives of these people who died far too young, I see a way of life that is totally different to the way we grew up before social control media.

Journalists have gone to great lengths to write about a small group of men like Paul Thijssen stalking their former partner, but what about the companies who demand we install apps on our phones so they can track us more intensively than if we only used their web site? If we care about stalking, we need to confront all forms of it in equal proportion.

Paul Thijssen, like the British crossbow murderer, had no history of criminal conduct. Many of the men working in Google, Facebook and Twitter/X also have no criminal record but they have vast amounts of data about people at their fingertips. Any one of those men could be the next wildcard.

Some women spend more time taking selfies and arranging them in their social control media profile than they spend on hair and makeup. The social control media platforms encourage women to present themselves like this and they know that men won't use the platforms if men can't find this type of content from the women they are curious about. In other words, the whole system has created a culture of stalking and despite the fact that Paul Thijssen's actions were incredibly sinister, Thijssen may be nothing more than a symptom of this culture.

What can a coroner do?

The coroner's office only becomes involved after the worst has come to pass and somebody has died.

British coroner Andrew Walker made headlines when he became the first coroner in the world to send warnings to Google, Facebook, Twitter and other large companies. The full report is published online by UK authorities. Walker took those extraordinary steps after the inquest into the death of Molly Russell.

Warnings from a coroner are not the same as orders from a judge or the government. However, if a company ignores a warning and further deaths occur in similar circumstances then there is a bigger possibility that managers could be prosecuted on charges of corporate manslaughter or gross negligence manslaughter for the deaths that occurred after the warning.

Coroners can give voice to independent experts

In the example of Frans Pop, I demonstrated one of the techniques used to hide information from the families of victims.

The job of the coroner is to bring information into the public domain.

The tech industry has made big claims about being open, free and empowering people. Companies operating in the open source ecosystem are notorious for making such claims. Elon Musk frequently boasts about free speech but the journalists asking about Musk himself found their Twitter/X accounts shut down.

The overlords do not want to allow any debate to begin. They seek to either censor their critics using technical means, such as account suspensions or they humiliate their critics with girlish rumors about "harassment" or "behavior".

One of the things that a coroner can do is to give a platform to the independent expert critics who Silicon Valley overlords are afraid of.

In 2021, Frances Haugen disclosed tens of thousands of internal documents from Facebook. She has been asked to testify in numerous public inquiries and parliaments around the world. The public attacks against my family and me started around the same time that Google sacked Timnit Gebru, co-head of their ethics department.

If Google is going out of their way to discredit critics like Timnit Gebru and me, then there is only one reason for that: we were right about Google.

When society wants to prevent this culture from getting worse, when grieving families are searching for answers, why would you not give them the opportunity to hear from people who have already risked their careers to expose the inconvenient truth? Nothing could be more sincere.

Political leaders failing to lead on social control media

Political leaders simultaneously try to cultivate support on social control media while trying not to be the one who wakes the wrong dragon in Silicon Valley.

The nature of a politician's profession, how they engage volunteers and voters prevents them from being objective in relation to social control media.

This is a gap that needs to be filled by other institutions. Whenever the opportunity arises in the context of a coronial inquest, it is vital to allow some time for subject matter experts to contribute insights about the dark side of this industry.

Individual tragedies are part of a widescale mental health pandemic

The coronial inquest tends to focus on the specific personal circumstances of a death or a group of deaths.

We need to remember that these deaths are the tip of the iceberg. In 2024 I had a conversation with a schoolteacher who left the profession after working just one year. She told me it was all about the behavior created by social control media. Children have become unteachable.

There is similar conflict in the workplace. Employers are expressing concern about employees who stay up all night on their devices, arrive at work tired, and are disturbed throughout the day by notifications from all their apps. Employees have similar concerns about employers who call them or send them messages in their personal time. Many people feel like they are on standby seven nights per week without any remuneration for being constantly connected.

While the coroner is asked to focus on a specific sequence of events leading up to a death, the Silicon Valley overlords are thinking in terms of how they can manipulate the behavior of the entire population.

Did Silicon Valley listen to the British coroner?

Have another look at the video from the United Nations Forum on Business and Human Rights, 2018, where I predicted somebody undesirable would take over Facebook or Twitter.

 

Then look at the photos released by the White House on St Patrick's day. Donald Trump and Elon Musk seem to know who Conor McGregor is. In November, a jury found McGregor responsible for a serious injury to a woman.

Are there men like Paul Thijssen and Conor McGregor working inside companies like Google, Facebook and Twitter/X, poring over the data harvested from all their subjects? Techrights seems to think so.

Does Musk listen to the British coroner Andrew Walker who sent recommendations to social control media companies about protecting the public?

Elon Musk, Donald Trump, Conor McGregor

20 March, 2025 10:00PM

Sven Hoexter

Purpose A Wellbeing Economies Film

The film is centered around the idea of establishing an alternative to GDP as the metric for measuring the success of a country or society. It mostly follows Katherine Trebeck on her journey of convincing countries to look beyond GDP. I very much enjoyed watching this documentary to get a first impression of the idea itself and the effort involved. I had the chance to watch the German version online. There is now another virtual screening offered by the Permaculture Film Club on the 29th and 30th of March 2025. This screening is on a pay-as-you-like-and-can basis and includes a Q&A session with Katherine Trebeck.

Trailer 1 and Trailer 2 are available on YouTube if you'd like to get a first impression.

20 March, 2025 03:12PM

k8s deployment build-in preStop sleep

It seems the k8s world has enough race conditions in shutting down pods and removing them from endpoint slices in time. Thus people started all kinds of workarounds, like adding a statically linked sleep binary to otherwise "distroless" and rather empty OCI images just to run a sleep command on shutdown before really shutting down. Or even base64-encoding the sleep binary and shipping it via a configMap. Or whatever else. Eventually the situation was so severe that upstream decided to implement a sleep feature in the deployment resource directly.
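For contrast, the older exec-based workaround looked roughly like the following (illustrative sketch; the image then has to actually contain a sleep binary at that path, which was the whole problem for distroless images):

```yaml
# Old workaround: an exec preStop hook shelling out to a sleep binary
# that must exist inside the (otherwise empty) container image.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
spec:
  template:
    spec:
      containers:
        - name: foo
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sleep", "10"]
```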

In short it looks like this:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
spec:
  template:
    spec:
      containers:
        - name: foo
          lifecycle:
            preStop:
              sleep:
                seconds: 10

Maybe highlighting that "feature" helps some more people to get rid of their own preStop sleep commands and make some deployments a tiny bit simpler.

20 March, 2025 02:15PM

March 19, 2025


Daniel Pocock

Debian Pregnancy Cluster, when I stopped using IRC

In my last blog, I started to explore the phenomenon of a Debian pregnancy cluster that occurred in the lead-up to DebConf13, which was held in Switzerland.

Ana, from Spain, was another case. Around the time that Marga started the conspiratorial emails, Ana started contacting me almost every day on IRC. The plotters saw me as the most open-minded person in Switzerland and hoped that I would be able to exert influence on other members of the local team.

If Ana's employer was a client I would have had to bill an hour per day, approximately five hours per week, for the discussions that appeared on IRC and elsewhere. I eventually stopped using IRC:

Subject: Re: not on IRC
Date: Tue, 27 Nov 2012 13:18:31 +0100
From: Ana Guerrero <ana@debian.org>
To: Daniel Pocock <daniel@pocock.com.au>

On Tue, Nov 27, 2012 at 09:03:49AM +0000, Daniel Pocock wrote:
> 
> 
> 
> Hi Ana,
> 
> I'm on the road for a few more days, so I am not monitoring IRC at all
> and only intermittently accessing email
> 
> Can you please email me if there is any urgent question, etc?
> 
> What do you think of Marga's current direction?  Do you think other
> people may get involved now?
>

Don't worry too much and keep a low profile. We have almost gotten they
postpone the contract signing tomorrow. that's a first step.
I'm serious abotu the low profile thing, they like blaming things on you
and if you're not active, they cann't.

A few months later, the email appeared on debian-private confirming that Ana had been part of the Debian pregnancy cluster. The baby arrived just two months before DebConf13. Now we know the real reason women were pushing to change the venue.

As a senior developer, I felt that IRC did not give me a net benefit for the time I was online. Whether it is on IRC, on social control media or on a self-managed social media/forum like Discourse, there is a tendency for everybody to focus their questions on the senior developers and we get swamped.

Subject: A different kind of release
Date: Sun, 12 May 2013 18:53:56 +0200
From: Aurelien Jarno <aurel32@debian.org>
To: debian-private@lists.debian.org
CC: Ana Guerrero <ana@debian.org>

Hi,

We are happy parents of a son called Manuel, born a few days ago.
Everybody is fine, and we are trying to get used to our new schedule.
Therefore our time to contribute to Debian might vary from day to day.

Ana & Aurelien

[ This message is to be kept private forever ]

-- 
Aurelien Jarno	                        GPG: 1024D/F1BCDB73
aurelien@aurel32.net                 http://www.aurel32.net

Privacy & Debian hypocrisy

The email concludes with a footnote, [ This message is to be kept private forever ]. When my father died, rogue Debianists demonstrated utter contempt for the privacy of my family. Yet they impose upon the rest of us to hide their conflicts of interest and the notorious Debian suicide cluster.

It is important to look at the full email and the pressure on DebConf13 and then take a fresh look at the attacks on José Manuel Santamaría Lema ("santa"). Looking at the attacks on Santa, Ana sent the mail to start the lynching just a few weeks before she went on maternity leave.

Then they chose the name Manuel for their new son.

I feel that other people have paid a price for Ana's insecurities during her first pregnancy, yet the pregnancy was hidden from most people, especially from Santa. When they named their son Manuel, did they realize they were borrowing from the name of the volunteer they lynched?

Subject: Removing a non-DD member of the Qt/KDE team
Date: Thu, 28 Mar 2013 10:00:57 +0100
From: Ana Guerrero <ana@debian.org>
To: debian-private@lists.debian.org, nm@debian.org, leader@debian.org

[sorry for the duplicated email if you receive it twice]

Hi,

This email is about the decision of the Qt/KDE team [1] of banning
José Manuel Santamaria Lema, aka santa, from the team. This means removing
him from the pkg-kde alioth group and removing him from all the group
maintained packages.
After that there are only 4 packages maintained only by him, that are KDE
related and you can find plans at [2].
José is also a DM [3], at the moment, he's only allowed to upload a single
package that as seen in [2] will go to the hands of the KDE team. The DD
who authorized him the rights for uploading this package is revoking them.
With this email, we are also asking FD/NM committee to remove him as DM.

First of all, none of us have any doubt of the technical skills of José.
However, he has shown he doesn't know how to work inside a team,
... [ snip defamation ] ...

That is from the woman who didn't tell other team members that she was in the final stages of her first pregnancy.

It looks like Ana and other team members couldn't keep up with the pace at which Santa was working. Ana's availability was changing due to her pregnancy and other team members would have had extra effort because of that. Is it fair that other developers are punished at a time like that?

I fully support the right of women to participate in technical hobbies like Debian. Employers have certain obligations to women during pregnancy. Volunteers collaborating remotely over the Internet often have no idea whether a woman is pregnant and even if we did know, it isn't fair that other volunteers have suffered adverse consequences in that period.

Santa's activity completely stopped, so the Qt/KDE team simultaneously lost two people:

José Manuel Santamaría Lema, Debian, character assassination, vendetta, expulsion, lynching

Resignations followed

Subject: Resignation
Date: Fri, 19 Apr 2013 14:59:27 -0500
From: John Hasler <jhasler@newsguy.com>
Reply-To: John Hasler <john@dhh.gt.org>
Organization: Dancing Horse Hill
To: debian-private@lists.debian.org

I've resigned.  Your resignation procedure says I must announce that
fact to this list. I've sent the requisite message to
keyring@rt.debian.org and orphaned my packages.  Please notify me if
there is anything I've missed.  Otherwise please do not respond.
-- 
John Hasler jhasler@newsguy.com
Elmwood, WI USA

and some people just stepped back for a while:

Subject: [VAC] -> end of April
Date: Sat, 20 Apr 2013 12:04:05 +0200
From: intrigeri <intrigeri@debian.org>
To: debian-private@lists.debian.org

Hi,

I'll be offline until the end of the month. I'll be planting a garden
with vegetable seeds... and more generally trying to take care of
myself and avoid burn-out while it's still possible.

Regarding the Wheezy release, I won't be able to do any further work
on #704754 ("release-notes: [wheezy] mention AppArmor support") that
the release-notes manager will deem required, so what's left to do in
this area will have to be done by someone else, if at all.

Other than that, my packages should be in good enough state for
Wheezy. In any case, most are team-maintained and I'm on the Low
Threshold NMU list.

Cheers,
-- 
  intrigeri
  | GnuPG key @ https://gaffer.ptitcanardnoir.org/intrigeri/intrigeri.asc
  | OTR fingerprint @ https://gaffer.ptitcanardnoir.org/intrigeri/otr.asc

Does Debian vendetta culture impact other families?

Subject: [VAC] 14/04-??/04
Date: Sat, 13 Apr 2013 10:44:18 +0530
From: Kartik Mistry <kartik@debian.org>
To: debian-private@lists.debian.org

Hi,

I need to fix,
1. Shifting my home --> new home.
2. Fix health of my wife.
3. Fix myself to get enough motivation.

#3 is not issue, it pops up time to time. #2 is serious and needs urgent
attention and medical advices.

I'm also travelling from tonight till Friday.

So, please feel free to release!

To be kept private forever. Apart from #2, many people outside this list knows
about my this plan/travel.

--
Kartik Mistry | IRC: kart_
{0x1f1f, kartikm}.wordpress.com

How do you recognize a cult?

Public Health England and the UN DRR tell us that you only need two suicides to declare a suicide cluster. There is no similar threshold for a pregnancy cluster but people were keeping score. Christian Perrier tells us two couples had babies but there were at least three in 12 months. It looks like he didn't know about Marga because she was being even more secretive about it.

Subject: Re: A different kind of release
Date: Mon, 13 May 2013 07:14:14 +0200
From: Christian PERRIER <bubulle@debian.org>
To: debian-private@lists.debian.org
CC: Aurelien Jarno <aurel32@debian.org>, Ana Guerrero <ana@debian.org>

Quoting Aurelien Jarno (aurel32@debian.org):
> Hi,
> 
> We are happy parents of a son called Manuel, born a few days ago.
> Everybody is fine, and we are trying to get used to our new schedule.
> Therefore our time to contribute to Debian might vary from day to day.

\o/ to one more future DD....:-)

Good luck in your new life schedule, A&A...things might change a bit
in the upcoming months..

You guys know I like stupid stats, so you won't be surprised to learn
that, so far, and unless I'm missing something, that makes two "2nd
generation DD" we're aware about....

Lynching on the day you give birth

Ana's baby was announced on the same day that debian-private started the lynching of Josselin Mouette.

Subject: nomination and call for supporters for expulsion of Josselin Mouette
Date: Sun, 12 May 2013 00:15:29 +0300
From: Eugene V. Lyubimkin 
To: da-manager@debian.org

Hello,

[ Josselin and debian-private@ BCC'ed ]

...

Let's not forget that Adrian von Bidder died on our wedding day and it was discussed like a suicide. As he died in Switzerland, the coroner's report and the official cause of death were kept hidden.

They tried to tell us Adrian von Bidder's death was a heart attack. They never provided a coroner's report for Ray Dassen either, but informally they suggested it was a heart attack too. It was right in the middle of the conflicts about DebConf13, Daniel Baumann, Santa and the Debian pregnancy cluster:

Subject: Ray Dassen passed away
Date: Wed, 22 May 2013 13:32:04 +0200
From: Thijs Kinkhorst <thijs@debian.org>
To: debian-private@lists.debian.org

Hi all,

This morning I've received the sad news that long-time Debian Developer, Ray Dassen, died last weekend from a heart attack, just 40 years old.

Ray has been a Debian Developer for an incredible 19 years, joining our project in 1994, and continued to be an active contributor just until recently.

They remembered how it was done for Frans Pop (Debian Day suicide)

They went to great lengths to hush up the Debian Day volunteer suicide. Those tactics were recalled when Ray Dassen died.

Subject: Re: Ray Dassen passed away
Date: Sat, 25 May 2013 11:04:49 -0700
From: Russ Allbery <rra@debian.org>
Organization: The Eyrie
To: debian-private@lists.debian.org

Dmitry Smirnov <onlyjob@debian.org> writes:

> By the way why is this information only in -private?
> I mean when I die please let everybody know so they won't waste their
> time trying to reach me...

Usually the initial coordination of our reaction to an announcement of
this sort is done in private just in case there are privacy concerns from
the family, anything we have to coordinate about how they want to handle
notification, etc.  That way, we can be careful about how we phrase our
first public notice, which has been important for a few instances in the
past.

-- 
Russ Allbery (rra@debian.org)               <http://www.eyrie.org/~eagle/>

Not only women get baby brain

OPW is the Outreach Program for Women, now known as Outreachy. A lot of money is spent on recruiting women to the program. No reports have ever been presented about the economic benefit of those expenditures.

Subject: Re: OPW Student in Kingston, Jamaica
Date: Mon, 25 Nov 2013 18:37:36 +0000
From: Joachim Breitner <nomeata@debian.org>
To: debian-private@lists.debian.org

Hi,

Am Montag, den 25.11.2013, 13:18 -0500 schrieb Paul Tagliamonte:
> She's got a PhD, so I think this could also be a good beersigning, if
> she drinks.

not having a PhD yet I wonder what expects me: Will I be a better
drinker after I get the degree? Or a better keysigner? /me is confused.

Greetings,
Joachim

-- 
Joachim "nomeata" Breitner
Debian Developer
  nomeata@debian.org | ICQ# 74513189 | GPG-Keyid: 4743206C
  JID: nomeata@joachim-breitner.de | http://people.debian.org/~nomeata

Debian Account Manager bootstrapped the pregnancy cluster

It looks like Marga and Ana got pregnant immediately after Joerg Jaspert (Ganneff) and Pei-Hua Tseng had their baby.

Familiar pattern,

"I first met my wife at the “International Conference on OpenSource” 2009 in Taiwan. So OpenSource, Debian and me being some tiny wheel in the system wasn’t entirely news to her."

That appears to coincide with a trip to the MiniDebConf Taiwan 2009.

Subject: Little Ganneff says: You are all REJECTED
Date: Thu, 03 May 2012 16:38:19 +0200
From: Joerg Jaspert 
Organization: Goliath-BBS
To: debian-private@lists.debian.org

Hi

litte Ganneff, born 2. May, listening to "[redacted]" (sometimes,
I hope), while presenting his mother with the finest set of "You don't
want to have this type of labour"-36hours labour, completly ignoring
that a day earlier his mother would have been in labour during labour
day, showing that he has no sense of humor, just told me, backed with
the full weight of his 3780 grams and the fearful height of 54cm, that I
should use that magic *m* key[fn:1] and reject all of your approaches to
my time, and as such going into a kind of Vacation over the next few
days to weeks, though he left an option that it will be forever,
forbidding me to talk about the evil grin he had while issuing this
order (and yo, I haven't talked about it, so all is fine).

He also turned on a special-for-you-english-natives mode to let me write
extra long and slightly complicated sentences.

But to show his evil overlord graciousness, he allowed me to show you
http://kosh.ganneff.de/~joerg/paste/2012-05-03-Hd8a1JVfC84/[redacted].png
(mind, this is -private).

Footnotes:

[fn:1] In daks process-new it means "Manual reject"

-- 
bye, Joerg
[-private note: All my parts in this post (or citation of them in
		  		another) are forbidden to be made public later.]

Will little Ganneff become a Debian Developer?

When he is old enough, he will have access to the full archives. Some of them have already been leaked anyway.

When DebConf went back to the Balkans in Kosovo, why did local women avoid it?

Subject: Re: Little Ganneff says: You are all REJECTED
Date: Sat, 5 May 2012 12:07:12 +0200
From: Christian PERRIER <bubulle@debian.org>
To: debian-private@lists.debian.org

Quoting martin f krafft (madduck@debian.org):
> also sprach Gerfried Fuchs <rhonda@deb.at> [2012.05.04.1936 +0200]:
> > > It's called "spring" ;)
> > 
> >  I fear you are misguided.  It's called "start of summer break", if you
> > manage to calculate properly.  :P
> 
> "DebConf"


We need to amend the Debconf11 report: "Little Ganneff: yet another
achievement made in Banja Luka"

/me runs far away

Please see the chronological history of how the Debian harassment and abuse culture evolved.

19 March, 2025 07:30PM

Mark Brown

Seoul Trail revamp

I regularly visit Seoul, and for the last couple of years I've been doing segments from the Seoul Trail, a series of walks that add up to a 150km circuit around the outskirts of Seoul. If you like hiking I recommend it, it's mostly through the hills and wooded areas surrounding the city or parks within the city and the bits I've done thus far have mostly been very enjoyable. Everything is generally well signposted and easy to follow, with varying degrees of difficulty from completely flat paved roads to very hilly trails.

The trail had been divided into eight segments, but just after I last visited it was reorganised into 21 smaller ones. This was very sensible, the original segments mostly being about 10-20km and taking 3-6 hours (with the notable exception of section 8, which was 36km), which can be a bit much (especially that section 8, or section 1, which had about 1km of ascent in it overall). It does complicate matters if you're trying to keep track of what you've done already though, so I've put together a quick table:

Original  Revised
1         1-3
2         4-5
3         6-8
4         9-10
5         11-12
6         13-14
7         15-16
8         17-21

This is all straightforward, the original segments had all been arranged to start and stop at metro stations (which I think explains the length of 8, the metro network is thin around Bukhansan what with it being an actual mountain) and the new segments are all straight subdivisions, but it's handy to have it written down and I figured other people might find it useful.

19 March, 2025 12:18AM by Mark Brown

March 18, 2025


Matthew Garrett

Failing upwards: the Twitter encrypted DM failure

Almost two years ago, Twitter launched encrypted direct messages. I wrote about their technical implementation at the time, and to the best of my knowledge nothing has changed. The short story is that the actual encryption primitives used are entirely normal and fine - messages are encrypted using AES, and the AES keys are exchanged via NIST P-256 elliptic curve asymmetric keys. The asymmetric keys are each associated with a specific device or browser owned by a user, so when you send a message to someone you encrypt the AES key with all of their asymmetric keys and then each device or browser can decrypt the message again. As long as the keys are managed appropriately, this is infeasible to break.

But how do you know what a user's keys are? I also wrote about this last year - key distribution is a hard problem. In the Twitter DM case, you ask Twitter's server, and if Twitter wants to intercept your messages they replace your key. The documentation for the feature basically admits this - if people with guns showed up there, they could very much compromise the protection in such a way that all future messages you sent were readable. It's also impossible to prove that they're not already doing this without every user verifying that the public keys Twitter hands out to other users correspond to the private keys they hold, something that Twitter provides no mechanism to do.

This isn't the only weakness in the implementation. Twitter may not be able to read the messages, but every encrypted DM is sent through exactly the same infrastructure as the unencrypted ones, so Twitter can see the time a message was sent, who it was sent to, and roughly how big it was. And because pictures and other attachments in Twitter DMs aren't sent in-line but are instead replaced with links, the implementation would encrypt the links but not the attachments - this is "solved" by simply blocking attachments in encrypted DMs. There's no forward secrecy - if a key is compromised it allows access not only to all new messages created with that key, but also to all previous messages. If you log out of Twitter the keys are still stored by the browser, so they can potentially be extracted and used to decrypt your communications. And there's no group chat support at all, which is more a functional restriction than a conceptual one.

To be fair, these are hard problems to solve! Signal solves all of them, but Signal is the product of a large number of highly skilled experts in cryptography, and even so it's taken years to achieve all of this. When Elon announced the launch of encrypted DMs he indicated that new features would be developed quickly - he's since publicly mentioned the feature a grand total of once, promising further feature development that just didn't happen. None of the limitations mentioned in the documentation have been addressed in the 22 months since the feature was launched.

Why? Well, it turns out that the feature was developed by a total of two engineers, neither of whom is still employed at Twitter. The tech lead for the feature was Christopher Stanley, who was actually a SpaceX employee at the time. Since then he's ended up at DOGE, where he apparently set off alarms when attempting to install Starlink, and who today is apparently being appointed to the board of Fannie Mae, a government-backed mortgage company.

Anyway. Use Signal.

comment count unavailable comments

18 March, 2025 11:58PM

Dima Kogan

Eigen macro specializations crashes

There's an issue in the Eigen linear algebra library where linking together objects compiled with different flags causes the resulting binary to crash. Some details are written-up in this mailing list thread.

I just encountered a situation where a large application sometimes crashes for unknown reasons, and needed a method to determine whether this Eigen issue could be the cause. I ended up doing this by using the DWARF data to see if the linked binary contains the different incompatible flavors of malloc / free or not.

I downloaded the small demo program showing the problem. I built it:

CCXXFLAGS=-g make

Here if you run ./main, the bug is triggered, and a crash occurs. I looked at the debug info for the code in question:

for o (main lib.so) {
  echo "======== $o";
  readelf --debug-dump=decodedline $o \
  | awk \
    '$1 ~ /^Memory.h/
     {
       if(180 <= $2 && $2 <= 186) {
         have["malloc_glibc"]=1
       }
       if(188 == $2) {
         have["malloc_handmade"]=1
       }
       if(201 <= $2 && $2 <= 204) {
         have["free_glibc"]=1
       }
       if(206 == $2) {
         have["free_handmade"]=1
       }
     }
     END
     {
       for (var in have) {
         print(var);
       }
     }'
}

It says:

======== main
free_handmade
======== lib.so
malloc_glibc
free_glibc

Here I looked at main and lib.so (the build products from this little demo). In a real case you'd look at every shared library linked into the binary and the binary itself. On my machine /usr/include/eigen3/Eigen/src/Core/util/Memory.h looks like this, starting on line 174:

174 EIGEN_DEVICE_FUNC inline void* aligned_malloc(std::size_t size)
175 {
176   check_that_malloc_is_allowed();
177 
178   void *result;
179   #if (EIGEN_DEFAULT_ALIGN_BYTES==0) || EIGEN_MALLOC_ALREADY_ALIGNED
180 
181     EIGEN_USING_STD(malloc)
182     result = malloc(size);
183 
184     #if EIGEN_DEFAULT_ALIGN_BYTES==16
185     eigen_assert((size<16 || (std::size_t(result)%16)==0) && "System's malloc returned an unaligned pointer. Compile with EIGEN_MALLOC_ALREADY_ALIGNED=0 to fallback to handmade aligned memory allocator.");
186     #endif
187   #else
188     result = handmade_aligned_malloc(size);
189   #endif
190 
191   if(!result && size)
192     throw_std_bad_alloc();
193 
194   return result;
195 }
196 
197 /** \internal Frees memory allocated with aligned_malloc. */
198 EIGEN_DEVICE_FUNC inline void aligned_free(void *ptr)
199 {
200   #if (EIGEN_DEFAULT_ALIGN_BYTES==0) || EIGEN_MALLOC_ALREADY_ALIGNED
201 
202     EIGEN_USING_STD(free)
203     free(ptr);
204 
205   #else
206     handmade_aligned_free(ptr);
207   #endif
208 }

The above awk script looks at the two malloc() paths and the two free() paths, and we can clearly see that this binary only ever calls malloc_glibc(), but has both flavors of free(). So this can crash. We want the whole executable (shared libraries and all) to have only one flavor of malloc() and free(); that would guarantee no crashing.

There are more functions in that header that should be instrumented (realloc() for instance), and the different alignment paths should be instrumented similarly (as described in the mailing list thread above), but here we see that this technique works.
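The same classification can also be expressed as a small Python helper, which may be easier to extend to realloc() or to loop over every shared library. This is a sketch under two assumptions: the line ranges are the ones from the Memory.h listing above (they will differ on other Eigen versions), and the input is the output of readelf --debug-dump=decodedline, parsed the same way the awk script parses it.

```python
# Classify which Eigen aligned_malloc/aligned_free flavors an object's DWARF
# line info mentions. RANGES must match the Memory.h on the machine in question;
# these values come from the listing above (assumption, not universal).
RANGES = {
    "malloc_glibc":    range(180, 187),
    "malloc_handmade": range(188, 189),
    "free_glibc":      range(201, 205),
    "free_handmade":   range(206, 207),
}

def flavors(decodedline_text: str) -> set[str]:
    # decodedline_text: output of `readelf --debug-dump=decodedline OBJECT`
    found = set()
    for row in decodedline_text.splitlines():
        fields = row.split()
        if len(fields) >= 2 and fields[0].startswith("Memory.h"):
            try:
                lineno = int(fields[1])
            except ValueError:
                continue
            for name, rng in RANGES.items():
                if lineno in rng:
                    found.add(name)
    return found

def mixed_free(per_object: dict[str, set[str]]) -> bool:
    # The danger sign from the post: both free() flavors present across
    # the binary and all of its shared libraries
    all_flavors = set().union(*per_object.values())
    return {"free_glibc", "free_handmade"} <= all_flavors
```

Feeding each object's readelf output through flavors() and the combined dict through mixed_free() flags the crash-prone configuration seen in the demo.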

18 March, 2025 03:52AM by Dima Kogan

March 17, 2025

Vincent Bernat

Offline PKI using 3 YubiKeys and an ARM single board computer

An offline PKI enhances security by physically isolating the certificate authority from network threats. A YubiKey is a low-cost solution to store a root certificate. You also need an air-gapped environment to operate the root CA.

PKI relying on a set of 3 YubiKeys: 2 for the root CA and 1 for the intermediate CA.
Offline PKI backed up by 3 YubiKeys

This post describes an offline PKI system using the following components:

  • 2 YubiKeys for the root CA (with a 20-year validity),
  • 1 YubiKey for the intermediate CA (with a 5-year validity), and
  • 1 Libre Computer Sweet Potato as an air-gapped SBC.

It is possible to add more YubiKeys as a backup of the root CA if needed. This is not needed for the intermediate CA as you can generate a new one if the current one gets destroyed.

The software part

offline-pki is a small Python application to manage an offline PKI. It relies on yubikey-manager to manage YubiKeys and cryptography for cryptographic operations not executed on the YubiKeys. The application has some opinionated design choices. Notably, the cryptography is hard-coded to use NIST P-384 elliptic curve.

The first step is to reset all your YubiKeys:

$ offline-pki yubikey reset
This will reset the connected YubiKey. Are you sure? [y/N]: y
New PIN code:
Repeat for confirmation:
New PUK code:
Repeat for confirmation:
New management key ('.' to generate a random one):
WARNING[pki-yubikey] Using random management key: e8ffdce07a4e3bd5c0d803aa3948a9c36cfb86ed5a2d5cf533e97b088ae9e629
INFO[pki-yubikey]  0: Yubico YubiKey OTP+FIDO+CCID 00 00
INFO[pki-yubikey] SN: 23854514
INFO[yubikit.management] Device config written
INFO[yubikit.piv] PIV application data reset performed
INFO[yubikit.piv] Management key set
INFO[yubikit.piv] New PUK set
INFO[yubikit.piv] New PIN set
INFO[pki-yubikey] YubiKey reset successful!

Then, generate the root CA and create as many copies as you want:

$ offline-pki certificate root --permitted example.com
Management key for Root X:
Plug YubiKey "Root X"...
INFO[pki-yubikey]  0: Yubico YubiKey CCID 00 00
INFO[pki-yubikey] SN: 23854514
INFO[yubikit.piv] Data written to object slot 0x5fc10a
INFO[yubikit.piv] Certificate written to slot 9C (SIGNATURE), compression=True
INFO[yubikit.piv] Private key imported in slot 9C (SIGNATURE) of type ECCP384
Copy root certificate to another YubiKey? [y/N]: y
Plug YubiKey "Root X"...
INFO[pki-yubikey]  0: Yubico YubiKey CCID 00 00
INFO[pki-yubikey] SN: 23854514
INFO[yubikit.piv] Data written to object slot 0x5fc10a
INFO[yubikit.piv] Certificate written to slot 9C (SIGNATURE), compression=True
INFO[yubikit.piv] Private key imported in slot 9C (SIGNATURE) of type ECCP384
Copy root certificate to another YubiKey? [y/N]: n

You can inspect the result:

$ offline-pki yubikey info
INFO[pki-yubikey]  0: Yubico YubiKey CCID 00 00
INFO[pki-yubikey] SN: 23854514
INFO[pki-yubikey] Slot 9C (SIGNATURE):
INFO[pki-yubikey]   Private key type: ECCP384
INFO[pki-yubikey]   Public key:
INFO[pki-yubikey]     Algorithm:  secp384r1
INFO[pki-yubikey]     Issuer:     CN=Root CA
INFO[pki-yubikey]     Subject:    CN=Root CA
INFO[pki-yubikey]     Serial:     1
INFO[pki-yubikey]     Not before: 2024-07-05T18:17:19+00:00
INFO[pki-yubikey]     Not after:  2044-06-30T18:17:19+00:00
INFO[pki-yubikey]     PEM:
-----BEGIN CERTIFICATE-----
MIIBcjCB+aADAgECAgEBMAoGCCqGSM49BAMDMBIxEDAOBgNVBAMMB1Jvb3QgQ0Ew
HhcNMjQwNzA1MTgxNzE5WhcNNDQwNjMwMTgxNzE5WjASMRAwDgYDVQQDDAdSb290
IENBMHYwEAYHKoZIzj0CAQYFK4EEACIDYgAERg3Vir6cpEtB8Vgo5cAyBTkku/4w
kXvhWlYZysz7+YzTcxIInZV6mpw61o8W+XbxZV6H6+3YHsr/IeigkK04/HJPi6+i
zU5WJHeBJMqjj2No54Nsx6ep4OtNBMa/7T9foyMwITAPBgNVHRMBAf8EBTADAQH/
MA4GA1UdDwEB/wQEAwIBhjAKBggqhkjOPQQDAwNoADBlAjEAwYKy/L8leJyiZSnn
xrY8xv8wkB9HL2TEAI6fC7gNc2bsISKFwMkyAwg+mKFKN2w7AjBRCtZKg4DZ2iUo
6c0BTXC9a3/28V5aydZj6rvx0JqbF/Ln5+RQL6wFMLoPIvCIiCU=
-----END CERTIFICATE-----

Then, you can create an intermediate certificate with offline-pki yubikey intermediate and use it to sign certificates by providing a CSR to offline-pki certificate sign. Be careful and inspect the CSR before signing it, as only the subject name can be overridden. Check the documentation for more details. Get the available options using the --help flag.

The hardware part

To ensure the operations on the root and intermediate CAs are air-gapped, a cost-efficient solution is to use an ARM64 single board computer. The Libre Computer Sweet Potato SBC is a more open alternative to the well-known Raspberry Pi.1

Libre Computer Sweet Potato single board computer relying on the Amlogic S905X SOC
Libre Computer Sweet Potato SBC, powered by the AML-S905X SOC

I interact with it through a USB-to-TTL UART converter:

$ tio /dev/ttyUSB0
[16:40:44.546] tio v3.7
[16:40:44.546] Press ctrl-t q to quit
[16:40:44.555] Connected to /dev/ttyUSB0
GXL:BL1:9ac50e:bb16dc;FEAT:ADFC318C:0;POC:1;RCY:0;SPI:0;0.0;CHK:0;
TE: 36574

BL2 Built : 15:21:18, Aug 28 2019. gxl g1bf2b53 - luan.yuan@droid15-sz

set vcck to 1120 mv
set vddee to 1000 mv
Board ID = 4
CPU clk: 1200MHz
[…]

The Nix glue

To bring everything together, I am using Nix with a Flake providing:

  • a package for the offline-pki application, with shell completion,
  • a development shell, including an editable version of the offline-pki application,
  • a NixOS module to setup the offline PKI, resetting the system at each boot,
  • a QEMU image for testing, and
  • an SD card image to be used on the Sweet Potato or another ARM64 SBC.
# Execute the application locally
nix run github:vincentbernat/offline-pki -- --help
# Run the application inside a QEMU VM
nix run github:vincentbernat/offline-pki\#qemu
# Build a SD card for the Sweet Potato or for the Raspberry Pi
nix build --system aarch64-linux github:vincentbernat/offline-pki\#sdcard.potato
nix build --system aarch64-linux github:vincentbernat/offline-pki\#sdcard.generic
# Get a development shell with the application
nix develop github:vincentbernat/offline-pki

  1. The key for the root CA is not generated by the YubiKey. Using an air-gapped computer is all the more important. Put it in a safe with the YubiKeys when done! ↩︎

17 March, 2025 08:12AM by Vincent Bernat

Antoine Beaupré

testing the fish shell

I have been testing fish for a couple months now (this file started on 2025-01-03T23:52:15-0500 according to stat(1)), and those are my notes. I suspect people will have Opinions about my comments here. Do not comment unless you have some Constructive feedback to provide: I don't want to know if you think I am holding it Wrong. Consider that I might have used UNIX shells for longer than you have lived.

I'm not sure I'll keep using fish, but so far it's the first shell that survived heavy use outside of zsh(1) (unless you count tcsh(1), but that was in another millennium).

My normal shell is bash(1), and it's still the shell I use everywhere other than my laptop, as I haven't switched on all the servers I manage, although fish has been available since August 2022 on torproject.org servers. I first got interested in fish because it was ported to Rust, making it one of the rare shells out there written in a "safe" and modern programming language, released after an impressive ~2 years of work with Fish 4.0.

Cool things

Current directory gets shortened, ~/wikis/anarc.at/software/desktop/wayland shows up as ~/w/a/s/d/wayland

Autocompletion rocks.

Default prompt rocks. Doesn't seem vulnerable to command injection assaults, at least it doesn't trip on the git-landmine.

It even includes pipe status output, which was a huge pain to implement in bash. It made me realize that if the last command succeeds, we don't see the other failures, which is the case with my current prompt anyway! Signal reporting is better than my bash implementation too.

So far the only modification I have made to the prompt is to add a printf '\a' to output a bell.

By default, fish keeps a directory history (but separate from the pushd stack), that can be navigated with cdh, prevd, and nextd, dirh shows the history.

Less cool

I feel there's visible latency in the prompt creation.

POSIX-style functions (foo() { true }) are unsupported. Instead, fish uses whitespace-sensitive definitions like this:

function foo
    true
end

This means my (modest) collection of POSIX functions need to be ported to fish. Workaround: simple functions can be turned into aliases, which fish supports (but implements using functions).

EOF heredocs are considered to be "minor syntactic sugar". I find them frigging useful.

Process substitution is split on newlines, not whitespace. You need to pipe through string split -n " " to get the equivalent.

<(cmd) doesn't exist: they claim you can use cmd | foo - as a replacement, but that's not correct: I used <(cmd) mostly where foo does not support - as a magic character to say 'read from stdin'.

Documentation is... limited. It seems mostly geared towards the web docs, which are... okay (but I couldn't find out about ~/.config/fish/conf.d there!), but this is really inconvenient when you're trying to browse the manual pages. For example, fish thinks there's a fish_prompt manual page, according to its own completion mechanism, but man(1) cannot find that manual page. I can't find the manual for the time command (which is actually a keyword!)

Fish renders multi-line commands with newlines. So if your terminal looks like this, say:

anarcat@angela:~> sq keyring merge torproject-keyring/lavamind-
95F341D746CF1FC8B05A0ED5D3F900749268E55E.gpg torproject-keyrin
g/weasel-E3ED482E44A53F5BBE585032D50F9EBC09E69937.gpg | wl-copy

... but it's actually one line. When you copy-paste the above in foot(1), it will show up exactly like this, newlines and all:

sq keyring merge torproject-keyring/lavamind-
95F341D746CF1FC8B05A0ED5D3F900749268E55E.gpg torproject-keyrin
g/weasel-E3ED482E44A53F5BBE585032D50F9EBC09E69937.gpg | wl-copy

Whereas it should show up like this:

sq keyring merge torproject-keyring/lavamind-95F341D746CF1FC8B05A0ED5D3F900749268E55E.gpg torproject-keyring/weasel-E3ED482E44A53F5BBE585032D50F9EBC09E69937.gpg | wl-copy

Note that this is an issue specific to foot(1), alacritty(1) and gnome-terminal(1) don't suffer from that issue. I have already filed it upstream in foot and it is apparently fixed already.

Globbing is driving me nuts. You can't pass a * to a command unless fish agrees it's going to match something. You need to escape it if it doesn't immediately match, and then you need the called command to actually support globbing. 202[345] doesn't match folders named 2023, 2024, 2025; fish will send the literal string 202[345] to the command.

Blockers

() is like $(): it's process substitution, and not a subshell. This is really impractical: I use ( cd foo ; do_something) all the time to avoid losing the current directory... I guess I'm supposed to use pushd for this, but ouch. This wouldn't be so bad if it was just for cd though. Clean constructs like this:

( git grep -l '^#!/.*bin/python' ; fdfind .py ) | sort -u

Turn into what i find rather horrible:

begin; git grep -l '^#!/.*bin/python' ; fdfind .py ; end | sort -u

It... works, but it goes back to "oh dear, now there's a new language again". I only found out about this construct while trying:

{ git grep -l '^#!/.*bin/python' ; fdfind .py } | sort -u 

... which fails and suggests using begin/end, at which point: why not just support the curly braces?

FOO=bar is not allowed. It's actually recognized syntax, but creates a warning. We're supposed to use set foo bar instead. This really feels like a needless divergence from the standard.

Aliases are... peculiar. Typical constructs like alias mv="\mv -i" don't work because fish treats aliases as a function definition, and \ is not magical there. This can be worked around by specifying the full path to the command, with e.g. alias mv="/bin/mv -i". Another problem is trying to override a built-in, which seems completely impossible. In my case, I like the time(1) command the way it is, thank you very much, and fish provides no way to bypass that builtin. It is possible to call time(1) with command time, but it's not possible to replace the command keyword so that means a lot of typing.

Again: you can't use \ to bypass aliases. This is a huge annoyance for me. I would need to learn to type command in long form, and I use that stuff pretty regularly. I guess I could alias command to c or something, but this is one of those huge muscle memory challenges.

alt-. doesn't always work the way I expect.

17 March, 2025 01:51AM

March 16, 2025

Russell Coker

Article Recommendations via FOSS

Google tracking everything we read is bad, particularly since Google abandoned the “don’t be evil” plan and are presumably open to being somewhat evil.

The article recommendations on Chrome on Android are useful and I’d like to be able to get the same quality of recommendations without Google knowing about everything I read. Ideally without anything other than the device I use knowing what interests me.

An ML system to map between sources of news that are of interest should be easy to develop and run on end-user devices. The model could be published and, when given articles you like as input, give as output sites that contain other articles you like. Then an agent on the end-user system could spider the sites in question and run a local model to determine which articles to present to the user.

Mapping for hate following is possible for such a system (Google doesn’t do that), the user could have 2 separate model runs for regular reading and hate-following and determine how much of each content to recommend. It could also give negative weight to entries that match the hate criteria.

Some sites with articles (like Medium) give an estimate of reading time. An article recommendation system should have a fixed limit (both in article count and in reading time) to support the "I spend half an hour reading during lunch" model, not doom-scrolling.
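As a toy illustration of the ideas above, here is a sketch combining a negative weight for hate-follow matches with a greedy pick under a reading-time budget. Everything in it (keyword-overlap scoring, the hate_weight value, the budget strategy) is made up for illustration, not a description of any existing recommender.

```python
# Toy recommender sketch: positive score for overlap with liked topics,
# penalty for overlap with hate-follow topics, greedy selection under a
# reading-time budget. All weights and data are illustrative assumptions.
def score(article_words: set[str], liked: set[str], hated: set[str],
          hate_weight: float = 1.5) -> float:
    return len(article_words & liked) - hate_weight * len(article_words & hated)

def recommend(articles, liked, hated, minutes_budget=30, max_items=5):
    # articles: list of (title, topic_words, reading_minutes) tuples
    ranked = sorted(articles, key=lambda a: score(a[1], liked, hated), reverse=True)
    picked, used = [], 0
    for title, words, minutes in ranked:
        if score(words, liked, hated) <= 0:
            break  # nothing relevant left; stop rather than pad the feed
        if used + minutes <= minutes_budget and len(picked) < max_items:
            picked.append(title)
            used += minutes
    return picked
```

A separate model run for hate-following, as suggested above, would amount to calling recommend() a second time with liked and hated swapped and mixing a small quota of those results in.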

For getting news using only FOSS it seems that the best option at the moment is to use the Lemmy FOSS social network, which is like Reddit [1], to recommend articles etc.

The Lemoa client for Lemmy uses GTK [2] but it’s no longer maintained. The Lemonade client for Lemmy is written in Rust [3]. It would be good if one of those was packaged for Debian, preferably one that’s maintained.

16 March, 2025 04:19AM by etbe

March 15, 2025

hackergotchi for Bits from Debian

Bits from Debian

Debian Med Sprint in Berlin

Debian Med sprint in Berlin on 15 and 16 February

The Debian Med team works on software packages that are associated with medicine, pre-clinical research, and life sciences, and makes them available for the Debian distribution. Seven Debian developers and contributors to the team gathered for their annual Sprint, in Berlin, Germany on 15 and 16 February 2025. The purpose of the meeting was to tackle bugs in Debian-Med packages, enhance the quality of the team's packages, and coordinate the efforts of team members overall.

This sprint allowed participants to fix dozens of bugs, including release-critical ones. New upstream versions were uploaded, and the participants took some time to modernize some packages. Additionally, they discussed the long-term goals of the team, prepared a forthcoming invited talk for a conference, and enjoyed working together.

More details on the event and individual agendas/reports can be found at https://wiki.debian.org/Sprints/2025/DebianMed.

15 March, 2025 11:00PM by Pierre Gruet, Jean-Pierre Giraud, Joost van Baal-Ilić

March 14, 2025

Dima Kogan

Getting precise timings out of RS-232 output

For uninteresting reasons I need very regular 58Hz pulses coming out of an RS-232 Tx line: the time between each pulse should be as close to 1/58s as possible. I produce each pulse by writing an \xFF byte to the device. The start bit is the only active-voltage bit being sent, and that produces my pulse. I wrote this obvious C program:

#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <fcntl.h>
#include <termios.h>
#include <stdint.h>
#include <sys/time.h>

static uint64_t gettimeofday_uint64()
{
    struct timeval tv;
    gettimeofday(&tv, NULL);
    return (uint64_t) tv.tv_sec * 1000000ULL + (uint64_t) tv.tv_usec;
}

int main(int argc, char* argv[])
{
    // open the serial device, and make it as raw as possible
    const char* device = "/dev/ttyS0";
    const speed_t baud = B9600;

    int fd = open(device, O_WRONLY|O_NOCTTY);
    tcflush(fd, TCIOFLUSH);

    struct termios options = {.c_iflag = IGNBRK,
                              .c_cflag = CS8 | CREAD | CLOCAL};
    cfsetspeed(&options, baud);
    tcsetattr(fd, TCSANOW, &options);

    const uint64_t T_us = (uint64_t)(1e6 / 58.);

    const uint64_t t0 = gettimeofday_uint64();
    for(int i=0; ; i++)
    {
        const uint64_t t_target = t0 + T_us*i;
        const uint64_t t1       = gettimeofday_uint64();

        if(t_target > t1)
            usleep(t_target - t1);

        write(fd, &((char){'\xff'}), 1);
    }
    return 0;
}

This tries to make sure that each write() call happens at 58Hz. I need these pulses to be regular, so I need to also make sure that the time between each userspace write() and when the edge actually hits the line is as short as possible or, at least, stable.

Potential reasons for timing errors:

  1. The usleep() doesn't wake up exactly when it should. This is subject to the Linux scheduler waking up the trigger process
  2. The write() almost certainly ends up scheduling a helper task to actually write the \xFF to the hardware. This helper task is also subject to the Linux scheduler waking it up.
  3. Whatever the hardware does. RS-232 doesn't give you any guarantees about byte-byte timings, so this could be an unfixable source of errors
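The absolute-deadline pacing in the C program above (t_target = t0 + T_us*i) is what keeps the long-run rate honest: a naive "sleep one period per iteration" loop accumulates every bit of per-iteration overhead as drift, while scheduling against precomputed deadlines does not. A small simulation with a fake clock shows the difference (overhead_us is an arbitrary stand-in for scheduler wakeup plus write() cost):

```python
# Compare two pacing strategies with a simulated clock (all times in us).
# overhead_us is a made-up constant standing in for wakeup + write() cost.
def simulate(n, T_us=17241, overhead_us=50):
    naive_t, t = 0, 0
    for i in range(n):
        # naive: sleep a full period every iteration, then pay the overhead;
        # the overhead is never compensated and accumulates as drift
        naive_t += T_us + overhead_us
        # absolute: sleep only until the precomputed deadline (i+1)*T_us;
        # overhead only matters if it pushes past the next deadline
        target = (i + 1) * T_us
        t = max(t + overhead_us, target)
    ideal = n * T_us
    return naive_t - ideal, t - ideal

drift_naive, drift_abs = simulate(1000)
# naive drift grows linearly with n; absolute drift stays bounded
```

This only addresses long-run drift; the per-pulse jitter from effects 1 and 2 above still has to be measured, which is what the bpftrace instrumentation below does.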

The scheduler-related questions are observable without any extra hardware, so let's do that first.

I run the ./trigger program, and look at diagnostics while that's running.

I look at some device details:

# ls -lh /dev/ttyS0
crw-rw---- 1 root dialout 4, 64 Mar  6 18:11 /dev/ttyS0

# ls -lh /sys/dev/char/4:64/
total 0
-r--r--r-- 1 root root 4.0K Mar  6 16:51 close_delay
-r--r--r-- 1 root root 4.0K Mar  6 16:51 closing_wait
-rw-r--r-- 1 root root 4.0K Mar  6 16:51 console
-r--r--r-- 1 root root 4.0K Mar  6 16:51 custom_divisor
-r--r--r-- 1 root root 4.0K Mar  6 16:51 dev
lrwxrwxrwx 1 root root    0 Mar  6 16:51 device -> ../../../0000:00:16.3:0.0
-r--r--r-- 1 root root 4.0K Mar  6 16:51 flags
-r--r--r-- 1 root root 4.0K Mar  6 16:51 iomem_base
-r--r--r-- 1 root root 4.0K Mar  6 16:51 iomem_reg_shift
-r--r--r-- 1 root root 4.0K Mar  6 16:51 io_type
-r--r--r-- 1 root root 4.0K Mar  6 16:51 irq
-r--r--r-- 1 root root 4.0K Mar  6 16:51 line
-r--r--r-- 1 root root 4.0K Mar  6 16:51 port
drwxr-xr-x 2 root root    0 Mar  6 16:51 power
-rw-r--r-- 1 root root 4.0K Mar  6 16:51 rx_trig_bytes
lrwxrwxrwx 1 root root    0 Mar  6 16:51 subsystem -> ../../../../../../../class/tty
-r--r--r-- 1 root root 4.0K Mar  6 16:51 type
-r--r--r-- 1 root root 4.0K Mar  6 16:51 uartclk
-rw-r--r-- 1 root root 4.0K Mar  6 16:51 uevent
-r--r--r-- 1 root root 4.0K Mar  6 16:51 xmit_fifo_size

Unsurprisingly, this is a part of the tty subsystem. I don't want to spend the time to really figure out how this works, so let me look at all the tty kernel calls and also at all the kernel tasks scheduled by the trigger process, since I suspect that the actual hardware poke is happening in a helper task. I see this:

# bpftrace -e 'k:*tty* /comm=="trigger"/
               { printf("%d %d %s\n",pid,tid,probe); }
               t:sched:sched_wakeup /comm=="trigger"/
               { printf("switching to %s(%d); current backtrace:", args.comm, args.pid); print(kstack());  }'

...

3397345 3397345 kprobe:tty_ioctl
3397345 3397345 kprobe:tty_check_change
3397345 3397345 kprobe:__tty_check_change
3397345 3397345 kprobe:tty_wait_until_sent
3397345 3397345 kprobe:tty_write
3397345 3397345 kprobe:file_tty_write.isra.0
3397345 3397345 kprobe:tty_ldisc_ref_wait
3397345 3397345 kprobe:n_tty_write
3397345 3397345 kprobe:tty_hung_up_p
switching to kworker/0:1(3400169); current backtrace:
        ttwu_do_activate+268
        ttwu_do_activate+268
        try_to_wake_up+605
        kick_pool+92
        __queue_work.part.0+582
        queue_work_on+101
        rpm_resume+1398
        __pm_runtime_resume+75
        __uart_start+85
        uart_write+150
        n_tty_write+1012
        file_tty_write.isra.0+373
        vfs_write+656
        ksys_write+109
        do_syscall_64+130
        entry_SYSCALL_64_after_hwframe+118

3397345 3397345 kprobe:tty_update_time
3397345 3397345 kprobe:tty_ldisc_deref

... repeated with each pulse ...

Looking at the sources I see that uart_write() calls __uart_start(), which schedules a task to call serial_port_runtime_resume() which eventually calls serial8250_tx_chars(), which calls some low-level functions to actually send the bits.

I look at the time between two of those calls to quantify the scheduler latency:

pulserate=58

sudo zsh -c \
  '( echo "# dt_write_ns dt_task_latency_ns";
     bpftrace -q -e "k:vfs_write /comm==\"trigger\" && arg2==1/
                     {\$t=nsecs(); if(@t0) { @dt_write = \$t-@t0; } @t0=\$t;}
                     k:serial8250_tx_chars /@dt_write/
                     {\$t=nsecs(); printf(\"%d %d\\n\", @dt_write, \$t-@t0);}"
   )' \
| vnl-filter                  \
    --stream -p dt_write_ms="dt_write_ns/1e6 - 1e3/$pulserate",dt_task_latency_ms=dt_task_latency_ns/1e6 \
| feedgnuplot  \
    --stream   \
    --lines    \
    --points   \
    --xlen 200 \
    --vnl      \
    --autolegend \
    --xlabel 'Pulse index' \
    --ylabel 'Latency (ms)'

Here I'm making a realtime plot showing

  • The offset from 58Hz of when each write() call happens. This shows effect #1 from above: how promptly the trigger process wakes up
  • The latency of the helper task. This shows effect #2 above.

The raw data as I tweak things lives here. Initially I see big latency spikes:

timings.scheduler.1.noise.svg

These can be fixed by adjusting the priority of the trigger task. This tells the scheduler to wake that task up first, even if something else is currently using the CPU. I do this:

sudo chrt -p 90 `pidof trigger`

And I get better-looking latencies:

timings.scheduler.2.clean.svg

During some experiments (not in this dataset) I would see high helper-task timing instabilities as well. These could be fixed by prioritizing the helper task. In this kernel (6.12) the helper task is called kworker/N where N is the CPU index. I tie the trigger process to cpu 0, and prioritize all the relevant helpers:

taskset -c 0 ./trigger 58

pgrep -f kworker/0 | while { read pid } { sudo chrt -p 90 $pid }

This fixes the helper-task latency spikes.

OK, so it looks like on the software side we're good to within 0.1ms of the true period. This is in the ballpark of the precision I need; even this might be too high. It's possible to try to push the software to do better: one could look at the kernel sources a bit more, to do smarter things with priorities or to try an -rt kernel. But all this doesn't matter if the serial hardware adds unacceptable delays. Let's look.

Let's look at it with a logic analyzer. I use a Saleae logic analyzer with sigrok. The tool spits out the samples as it gets them, and an awk script finds the edges and reports the timings to give me a realtime plot.

samplerate=500000;
pulserate=58.;
sigrok-cli -c samplerate=$samplerate -O csv --continuous -C D1 \
| mawk -Winteractive  \
    "prev_logic==0 && \$0==1 \
     { 
       iedge = NR;
       if(prev_iedge)
       {
         di = iedge -prev_iedge;
         dt = di/$samplerate;
         print(dt*1000);
       }
       prev_iedge = iedge;
     }
     {
       prev_logic=\$0;
     } " | feedgnuplot --stream --ylabel 'Period (ms)' --equation "1000./$pulserate title \"True ${pulserate}Hz period\""

On the server I was using (physical RS-232 port, ancient 3.something kernel):

timings.hw.serial-server.svg

OK… This is very discrete for some reason, and generally worse than 0.1ms. What about my laptop (physical RS-232 port, recent 6.12 kernel)?

timings.hw.serial-laptop.svg

Not discrete anymore, but not really any more precise. What about using a usb-serial converter? I expect this to be worse.

timings.hw.usbserial.svg

Yeah, looks worse. For my purposes, an accuracy of 0.1ms is marginal, and the hardware adds non-negligible errors. So I cut my losses, and use an external signal generator:

timings.hw.generator.svg

Yeah. That's better, so that's what I use.

14 March, 2025 12:47PM by Dima Kogan

hackergotchi for Junichi Uekawa

Junichi Uekawa

Filing tax this year was really painful.

Filing tax this year was really painful, but mostly because of my home network: ipv4 over ipv6 was not working correctly. First I swapped the router, which was trying to reinitialize the MAP-E table every time there was a DHCP client reconfiguration, overwhelming the server. Then I changed the DNS configuration to not use ipv4 UDP lookups, which were overwhelming the ipv4 ports. The tax return itself is a painful process; debugging network issues at the same time just made everything more painful.

14 March, 2025 01:27AM by Junichi Uekawa

March 10, 2025

hackergotchi for Joachim Breitner

Joachim Breitner

Extrinsic termination proofs for well-founded recursion in Lean

A few months ago I explained that one reason why this blog has become more quiet is that all my work on Lean is covered elsewhere.

This post is an exception, because it is an observation that is (arguably) interesting, but does not lead anywhere, so where else to put it than my own blog…

Want to share your thoughts about this? Please join the discussion on the Lean community zulip!

Background

When defining a function recursively in Lean that has nested recursion, e.g. a recursive call that is in the argument to a higher-order function like List.map, then extra attention used to be necessary so that Lean can see that xs.map applies its argument only to elements of the list xs. The usual idiom is to write xs.attach.map instead, where List.attach attaches to the list elements a proof that they are in that list. You can read more about this in my Lean blog post on recursive definitions and in our shiny new reference manual; look for the example “Nested Recursion in Higher-order Functions”.

To make this step less tedious I taught Lean to automatically rewrite xs.map to xs.attach.map (where suitable) within the construction of well-founded recursion, so that nested recursion just works (issue #5471). We already do such a rewriting to change if c then … else … to the dependent if h : c then … else …, but the attach-introduction is much more ambitious (the rewrites are not definitionally equal, there are higher-order arguments etc.) Rewriting the terms in a way that we can still prove the connection later when creating the equational lemmas is hairy at best. Also, we want the whole machinery to be extensible by the user, setting up their own higher order functions to add more facts to the context of the termination proof.

I implemented it like this (PR #6744) and it ships with 4.18.0, but in the course of this work I thought about a quite different and maybe better™ way to do this, and well-founded recursion in general:

A simpler fix

Recall that to use WellFounded.fix

WellFounded.fix : (hwf : WellFounded r) (F : (x : α) → ((y : α) → r y x → C y) → C x) (x : α) : C x

we have to rewrite the functorial of the recursive function, which naturally has type

F : ((y : α) →  C y) → ((x : α) → C x)

to the one above, where all recursive calls take the termination proof r y x. This is a fairly hairy operation, mangling the type of matcher’s motives and whatnot.

Things are simpler for recursive definitions using the new partial_fixpoint machinery, where we use Lean.Order.fix

Lean.Order.fix : [CCPO α] (F : β → β) (hmono : monotone F) : β

so the functorial’s type is unmodified (here β will be ((x : α) → C x)), and everything else is in the propositional side-condition monotone F. For this predicate we have a syntax-guided compositional tactic, and it’s easily extensible, e.g. by

theorem monotone_mapM (f : γ → α → m β) (xs : List α) (hmono : monotone f) :
    monotone (fun x => xs.mapM (f x)) 

Once given, we don’t care about the content of that proof. In particular proving the unfolding theorem only deals with the unmodified F that closely matches the function definition as written by the user. Much simpler!
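For contrast, this is roughly what the user-facing side of partial_fixpoint looks like (my transcription of the standard nested example, so treat it as a sketch): the function recurses in the Option monad, no termination argument is given, and the monotone side condition is discharged behind the scenes.

```lean
def ack (n m : Nat) : Option Nat :=
  match n, m with
  | 0,     m     => some (m + 1)
  | n + 1, 0     => ack n 1
  | n + 1, m + 1 => do ack n (← ack (n + 1) m)
partial_fixpoint
```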

Isabelle has it easier

Isabelle also supports well-founded recursion, and has great support for nested recursion. And it’s much simpler!

There, all you have to do to make nested recursion work is to define a congruence lemma of the right form; for List.map, it would be something like our List.map_congr_left

List.map_congr_left : (h : ∀ a ∈ l, f a = g a) :
    List.map f l = List.map g l

This is because in Isabelle, too, the termination proof is a side-condition that essentially states “the functorial F calls its argument f only on smaller arguments”.

Can we have it easy, too?

I had wished we could do the same in Lean for a while, but that form of congruence lemma just isn’t strong enough for us.

But maybe there is a way to do it, using an existential to give a witness that F can alternatively be implemented using the more restrictive argument. The following callsOn P F predicate can express that F calls its higher-order argument only on arguments that satisfy the predicate P:

section setup

variable {α : Sort u}
variable {β : α → Sort v}
variable {γ : Sort w}

def callsOn (P : α → Prop) (F : (∀ y, β y) → γ) :=
  ∃ (F': (∀ y, P y → β y) → γ), ∀ f, F' (fun y _ => f y) = F f

variable (R : α → α → Prop)
variable (F : (∀ y, β y) → (∀ x, β x))

local infix:50 " ≺ " => R

def recursesVia : Prop := ∀ x, callsOn (· ≺ x) (fun f => F f x)

noncomputable def fix (wf : WellFounded R) (h : recursesVia R F) : (∀ x, β x) :=
  wf.fix (fun x => (h x).choose)

def fix_eq (wf : WellFounded R) h x :
    fix R F wf h x = F (fix R F wf h) x := by
  unfold fix
  rw [wf.fix_eq]
  apply (h x).choose_spec

This allows nice compositional lemmas to discharge callsOn predicates:

theorem callsOn_base (y : α) (hy : P y) :
    callsOn P (fun (f : ∀ x, β x) => f y) := by
  exists fun f => f y hy
  intros; rfl

@[simp]
theorem callsOn_const (x : γ) :
    callsOn P (fun (_ : ∀ x, β x) => x) :=
  ⟨fun _ => x, fun _ => rfl⟩

theorem callsOn_app
    {γ₁ : Sort uu} {γ₂ : Sort ww}
    (F₁ :  (∀ y, β y) → γ₂ → γ₁) -- can this also support dependent types?
    (F₂ :  (∀ y, β y) → γ₂)
    (h₁ : callsOn P F₁)
    (h₂ : callsOn P F₂) :
    callsOn P (fun f => F₁ f (F₂ f)) := by
  obtain ⟨F₁', h₁⟩ := h₁
  obtain ⟨F₂', h₂⟩ := h₂
  exists (fun f => F₁' f (F₂' f))
  intros; simp_all

theorem callsOn_lam
    {γ₁ : Sort uu}
    (F : γ₁ → (∀ y, β y) → γ) -- can this also support dependent types?
    (h : ∀ x, callsOn P (F x)) :
    callsOn P (fun f x => F x f) := by
  exists (fun f x => (h x).choose f)
  intro f
  ext x
  apply (h x).choose_spec

theorem callsOn_app2
    {γ₁ : Sort uu} {γ₂ : Sort ww}
    (g : γ₁ → γ₂ → γ)
    (F₁ :  (∀ y, β y) → γ₁) -- can this also support dependent types?
    (F₂ :  (∀ y, β y) → γ₂)
    (h₁ : callsOn P F₁)
    (h₂ : callsOn P F₂) :
    callsOn P (fun f => g (F₁ f) (F₂ f)) := by
  apply_rules [callsOn_app, callsOn_const]

With this setup, we can have the following, possibly user-defined, lemma expressing that List.map calls its arguments only on elements of the list:

theorem callsOn_map (δ : Type uu) (γ : Type ww)
    (P : α → Prop) (F : (∀ y, β y) → δ → γ) (xs : List δ)
    (h : ∀ x, x ∈ xs → callsOn P (fun f => F f x)) :
    callsOn P (fun f => xs.map (fun x => F f x)) := by
  suffices callsOn P (fun f => xs.attach.map (fun ⟨x, h⟩ => F f x)) by
    simpa
  apply callsOn_app
  · apply callsOn_app
    · apply callsOn_const
    · apply callsOn_lam
      intro ⟨x', hx'⟩
      dsimp
      exact (h x' hx')
  · apply callsOn_const

end setup

So here is the (manual) construction of a nested map for trees:

section examples

structure Tree (α : Type u) where
  val : α
  cs : List (Tree α)

-- essentially
-- def Tree.map (f : α → β) : Tree α → Tree β :=
--   fun t => ⟨f t.val, t.cs.map Tree.map⟩)
noncomputable def Tree.map (f : α → β) : Tree α → Tree β :=
  fix (sizeOf · < sizeOf ·) (fun map t => ⟨f t.val, t.cs.map map⟩)
    (InvImage.wf (sizeOf ·) WellFoundedRelation.wf) <| by
  intro ⟨v, cs⟩
  dsimp only
  apply callsOn_app2
  · apply callsOn_const
  · apply callsOn_map
    intro t' ht'
    apply callsOn_base
    -- ht' : t' ∈ cs -- !
    -- ⊢ sizeOf t' < sizeOf { val := v, cs := cs }
    decreasing_trivial
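If this were wired up, equational lemmas would come almost for free. An untested sketch (still inside the example section, relying on definitional unfolding of Tree.map, so treat it with suspicion):

```lean
-- Hypothetical: the unfolding equation follows from fix_eq alone;
-- the termination proof is never inspected.
example (f : α → β) (t : Tree α) :
    t.map f = ⟨f t.val, t.cs.map (Tree.map f)⟩ :=
  fix_eq _ _ _ _ t
```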

end examples

This makes me happy!

All details of the construction are now contained in a proof that can proceed by a syntax-driven tactic and that’s easily and (likely robustly) extensible by the user. It also means that we can share a lot of code paths (e.g. everything related to equational theorems) between well-founded recursion and partial_fixpoint.

I wonder if this construction is really as powerful as our current one, or whether there are certain (likely dependently typed) functions where this doesn’t fit; but the β above is dependent, so it looks good.

With this construction, functions defined by well-founded recursion will reduce even worse in the kernel, I assume. This may be a good thing.

The cake is a lie

What unfortunately kills this idea, though, is the generation of the functional induction principles, which I believe is not (easily) possible with this construction: The functional induction principle is proved by massaging F to return a proof, but since the extra assumptions (e.g. for ite or List.map) only exist in the termination proof, they are not available in F.

Oh wey, how anticlimactic.

PS: Path dependencies

Curiously, if we didn’t have functional induction at this point yet, then very likely I’d change Lean to use this construction, and then we’d either not get functional induction, or it would be implemented very differently, maybe a more syntactic approach that would re-prove termination. I guess that’s called path dependence.

10 March, 2025 05:47PM by Joachim Breitner (mail@joachim-breitner.de)

Thorsten Alteholz

My Debian Activities in February 2025

Debian LTS

This was my hundred-twenty-eighth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:

  • [DLA 4072-1] xorg-server security update to fix eight CVEs related to possible privilege escalation in X.
  • [DLA 4073-1] ffmpeg security update to fix three CVEs related to out-of-bounds read, assert errors and NULL pointer dereferences. This was the second update that I announced last month.

Last but not least I did some days of FD this month and attended the monthly LTS/ELTS meeting.

Debian ELTS

This month was the seventy-ninth ELTS month. During my allocated time I uploaded or worked on:

  • [ELA-1337-1] xorg-server security update to fix eight CVEs in Buster, Stretch and Jessie, related to possible privilege escalation in X.
  • [ELA-882-2] amanda regression update to improve a fix for privilege escalation. This old regression was detected by Beuc during his work as FD and now finally fixed.

Last but not least I did some days of FD this month and attended the monthly LTS/ELTS meeting.

Debian Printing

This month I uploaded new packages or new upstream or bugfix versions of:

  • hplip to fix some bugs and let hplip migrate to testing again.

This work is generously funded by Freexian!

Debian Matomo

This month I uploaded new packages or new upstream or bugfix versions of:

Finally, matomo was uploaded. Thanks a lot to Utkarsh Gupta and William Desportes for doing most of the work to make this happen.

This work is generously funded by Freexian!

Debian Astro

Unfortunately I didn’t find any time to upload packages.

Have you ever heard of poliastro? It was a package to do calculations related to astrodynamics and orbital mechanics. It was archived by upstream at the end of 2023. I am now trying to revive it under the new name boinor and hope to get it back into Debian over the next months.

This is almost the last month that Patrick, our Outreachy intern for the Debian Astro project, is handling his tasks. He is working on automatic updates of the indi 3rd-party driver.

Debian IoT

Unfortunately I didn’t find any time to work on this topic.

Debian Mobcom

This month I uploaded new packages or new upstream or bugfix versions of:

misc

Unfortunately I didn’t find any time to work on this topic.

FTP master

This month I accepted 437 and rejected 64 packages. The overall number of packages that got accepted was 445.

10 March, 2025 03:33PM by alteholz

March 08, 2025

Swiss JuristGate

French woman (frontalière) trafficked to promote unauthorised cross border Swiss insurance

Today, I published a blog on human trafficking and modern slavery. In effect, all forms of exploitation, combined with a voyage of any kind, can be considered a possible case of human trafficking, even if the voyage is completed using a regular passenger train or flight.

The victim, a French woman had worked seven years in an insurance company in Lyon.

According to the FINMA judgment, the Swiss jurists who were selling the insurance without authorisation were under surveillance since 2021 or earlier.

At the last minute, before FINMA shut down their scam in 2023, the Swiss jurists had created a new company Justiva SA and they employed the cross-border worker (frontalière) to help them.

The victim had acquired various rights due to seven years of service at her previous employer and all those benefits are foregone if a worker quits to change employer. This is especially true if a worker in France quits their job to take a higher salary in Geneva.

The victim started her new job in Geneva in February 2023 and FINMA closed the insurance company in the first week of April 2023. She was still in her probation period when FINMA belatedly shut down this scam.

The FINMA judgment says the rogue firm was immediately shut at the beginning of April. During the probationary period, the victim would have been entitled to one week's pay in lieu of notice. According to the victim's LinkedIn profile, she continued with the new firm Justiva SA for three months until the end of June 2023. We found details of this woman in a backup copy of the Justiva SA web site, the Justicia SA web site and LinkedIn.

Anybody who wants to work in a Swiss insurance company normally needs to complete a three year apprenticeship or diploma in Switzerland. The selection process is normally highly competitive. This woman clearly did not have the Swiss qualifications. When the Swiss jurists offered her a job promoting legal insurance, without any Swiss qualifications, she must have felt she had won the lottery.

The financial regulator went to great lengths to obfuscate the failure of the insurance company run by Swiss jurists. Did they make any effort to prevent former employees talking about the failure?

The outgoing director of FINMA had previously worked for Zurich Insurance. Miraculously, in the same month that Urban Angehrn departed FINMA for health reasons, the woman was given a job at his former employer.

It feels miraculous that Zurich would spontaneously offer this opportunity to somebody who had not passed through the same Swiss training as other employees.

Promoting an unauthorized insurance for cross border workers appears to simultaneously violate laws in both Switzerland and France. If the woman was tricked into leaving a stable job in Lyon and doing this illegal work then she has been exploited. As the exploitation occurred over an international border, it is a clear case of human trafficking.

French legal code Art. 225-4-1:

Human trafficking is the act of recruiting a person, transporting them, hosting them or welcoming them for the purposes of exploitation using any of the means following ...

4. for any transaction or payment or another promise of future remuneration or advantage

The exploitation mentioned in the first point can be any of the following ... or to compel the victim to commit any crime or offence

For the purposes of the French law on human trafficking, creating a situation where a cross-border worker is compelled to sell unauthorised insurance is just as bad as using them for sexual slavery. The loss of their previous employment benefits and the terms of the probationary period have the effect of compelling them to continue working for their new master even if they discover it is a scam.

Did FINMA realize that a French woman was human trafficked in violation of modern slavery laws right under their noses in a firm they had been so slow to shut down? Is that the reason the woman appears to have found a new job so conveniently while the clients got nothing?

When I saw this, I remembered my research into the Catholic abuse scandal and the practices used to make secret pay-offs.

LinkedIn, human trafficking, Switzerland

08 March, 2025 09:00PM

hackergotchi for Gunnar Wolf

Gunnar Wolf

The author has been doctored.

Almost exactly four years after I started with this project, yesterday I presented my PhD defense.

My thesis was what I’ve been presenting advances of all around since ≈2022: «A certificate-poisoning-resistant protocol for the synchronization of Web of Trust networks»

Lots of paperwork is still on the road for me. But at least in the immediate future, I can finally use this keyring my friend Raúl Gómez 3D-printed for me:

08 March, 2025 06:31PM

Vincent Bernat

Auto-expanding aliases in Zsh

To avoid needless typing, the fish shell features command abbreviations to expand some words after pressing space. We can emulate such a feature with Zsh:

# Definition of abbrev-alias for auto-expanding aliases
typeset -ga _vbe_abbrevations
abbrev-alias() {
    alias $1
    _vbe_abbrevations+=(${1%%\=*})
}
_vbe_zle-autoexpand() {
    local -a words; words=(${(z)LBUFFER})
    if (( ${#_vbe_abbrevations[(r)${words[-1]}]} )); then
        zle _expand_alias
    fi
    zle magic-space
}
zle -N _vbe_zle-autoexpand
bindkey -M emacs " " _vbe_zle-autoexpand
bindkey -M isearch " " magic-space

# Correct common typos
(( $+commands[git] )) && abbrev-alias gti=git
(( $+commands[grep] )) && abbrev-alias grpe=grep
(( $+commands[sudo] )) && abbrev-alias suod=sudo
(( $+commands[ssh] )) && abbrev-alias shs=ssh

# Save a few keystrokes
(( $+commands[git] )) && abbrev-alias gls="git ls-files"
(( $+commands[ip] )) && {
  abbrev-alias ip6='ip -6'
  abbrev-alias ipb='ip -brief'
}

# Hard to remember options
(( $+commands[mtr] )) && abbrev-alias mtrr='mtr -wzbe'

Here is a demo where gls is expanded to git ls-files after pressing space:

Auto-expanding gls to git ls-files

I don’t auto-expand all aliases. I keep using regular aliases when slightly modifying the behavior of a command or for well-known abbreviations:

alias df='df -h'
alias du='du -h'
alias rm='rm -i'
alias mv='mv -i'
alias ll='ls -ltrhA'

08 March, 2025 09:58AM by Vincent Bernat

March 07, 2025

Paul Wise

FLOSS Activities February 2025

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Sponsors

The SWH work was sponsored. All other work was done on a volunteer basis.

07 March, 2025 07:26AM

Antoine Beaupré

Nix Notes

Meta

In case you haven't noticed, I'm trying to post more often, and one of the things that entails is to just dump over the fence a bunch of draft notes. In this specific case, I had a set of rough notes about NixOS and particularly Nix, the package manager.

In this case, you can see the very birth of an article, what it looks like before it becomes the questionable prose it is now, by looking at the Git history of this file, particularly its birth. I have a couple of those left, and it would be pretty easy to publish them as is, but I feel I'd be doing others (and myself! I write for my own documentation too after all) a disservice by not going the extra mile on those.

So here's the long version of my experiment with Nix.

Nix

A couple friends are real fans of Nix. Just like I work with Puppet a lot, they deploy and maintain servers (if not fleets of servers) with NixOS and its declarative package management system. Essentially, they use it as a configuration management system, which is pretty awesome.

That, however, is a bit too high of a bar for me. I rarely try new operating systems these days: I'm a Debian developer and it takes most of my time to keep that functional. I'm not going to go around messing with other systems as I know that would inevitably get me dragged down into contributing into yet another free software project. I'm mature now and know where to draw the line. Right?

So I'm just testing Nix, the package manager, on Debian, because I learned from my friend that nixpkgs is the largest package repository out there, a mind-boggling 100,000 packages at the time of writing (with 88% of packages up to date), compared to around 40,000 in Debian (or 72,000 if you count binary packages, with 72% up to date). I naively thought Debian was the largest, perhaps competing with Arch, and I was wrong: Arch is larger than Debian too.

What brought me there is I wanted to run Harper, a fast spell-checker written in Rust. The logic behind using Nix instead of just downloading the source and running it myself is that I delegate the work of supply-chain integrity checking to a distributor, a bit like you trust Debian developers like myself to package things in a sane way. I know this widens the attack surface to a third party of course, but the rationale is that I shift cryptographic verification to another stack than just "TLS + GitHub" (although that is somewhat still involved) that's linked with my current chain (Debian packages).

I have since then stopped using Harper for various reasons and also wrapped up my Nix experiment, but felt it worthwhile to jot down some observations on the project.

Hot take

Overall, Nix is hard to get into, with a complicated learning curve. I have found the documentation to be a bit confusing, since there are many ways to do certain things. I particularly tripped on "flakes" and, frankly, incomprehensible error reporting.

It didn't help that I tried to run nixpkgs on Debian which is technically possible, but you can tell that I'm not supposed to be doing this. My friend who reviewed this article expressed surprise at how easy this was, but then he only saw the finished result, not me tearing my hair out to make this actually work.

Nix on Debian primer

So here's how I got started. First I installed the nix binary package:

apt install nix-bin

Then I had to add myself to the right group and logout/log back in to get the rights to deploy Nix packages:

adduser anarcat nix-users

That wasn't easy to find, but is mentioned in the README.Debian file shipped with the Debian package.

Then, I didn't write this down, but the README.Debian file above mentions it, so I think I added a "channel" like this:

nix-channel --add https://nixos.org/channels/nixpkgs-unstable nixpkgs
nix-channel --update

And I likely installed the Harper package with:

nix-env --install harper

At this point, harper was installed in a ... profile? Not sure.

I had to add ~/.nix-profile/bin (a symlink to /nix/store/sympqw0zyybxqzz6fzhv03lyivqqrq92-harper-0.10.0/bin) to my $PATH environment variable for this to actually work.
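For reference, the line I ended up adding to my shell startup file looked roughly like this (a sketch; the exact profile path is whatever nix-env created on your system):

```shell
# Make binaries from the default Nix profile visible to the shell;
# ~/.nix-profile/bin is a symlink farm pointing into /nix/store.
export PATH="$HOME/.nix-profile/bin:$PATH"

# After this, the shell should resolve Nix-installed commands, e.g.:
# command -v harper
```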

Side notes on documentation

Those last two commands (nix-channel and nix-env) were hard to figure out, which is kind of amazing because you'd think a tutorial on Nix would feature something like this prominently. But three different tutorials failed to bring me up to that basic setup, even the README.Debian didn't spell that out clearly.

The tutorials all show me how to develop packages for Nix, not plainly how to install Nix software. This is presumably because "I'm doing it wrong": you shouldn't just "install a package", you should setup an environment declaratively and tell it what you want to do.

But here's the thing: I didn't want to "do the right thing". I just wanted to install Harper, and documentation failed to bring me to that basic "hello world" stage. Here's what one of the tutorials suggests as a first step, for example:

curl -L https://nixos.org/nix/install | sh
nix-shell --packages cowsay lolcat
nix-collect-garbage

... which, when you follow through, leaves you with almost precisely nothing left installed (apart from Nix itself, set up with a nasty "curl pipe bash"). So while that works in testing Nix, you're not much better off than when you started.

Rolling back everything

Now that I have stopped using Harper, I don't need Nix anymore, which I'm sure my Nix friends will be sad to read about. Don't worry, I have notes now, and can try again!

But still, I wanted to clear things out, so I did this, as root:

deluser anarcat nix-users
apt purge nix-bin
rm -rf /nix ~/.nix*

I think this cleared things out, but I'm not actually sure.

Side note on Nix drama

This blurb wouldn't be complete without a mention that the Nix community has been somewhat tainted by the behavior of its founder. I won't bother you too much with this; LWN covered it well in 2024, and made a followup article about spinoffs and forks that's worth reading as well.

I did want to say that everyone I have been in contact with in the Nix community was absolutely fantastic. So I am really sad that the behavior of a single individual can pollute a community in such a way.

As a leader, if you have but one responsibility, it's to behave properly towards the people around you. It's actually really, really hard to do that, because yes, it means you need to act differently than others, and no, you just don't get to be upset at others like you would normally do with friends, because you're in a position of authority.

It's a lesson I'm still learning myself, to be fair. But at least I don't work with arms manufacturers or, if I would, I would be sure as hell to accept the nick (or nix?) on the chin when people would get upset, and try to make amends.

So long live the Nix people! I hope the community recovers from that dark moment, so far it seems like it will.

And thanks for helping me test Harper!

07 March, 2025 01:41AM

March 06, 2025

Russell Coker

8k Video Cards

I previously blogged about getting an 8K TV [1]. Now I’m working on getting 8K video out of a computer that talks to it. I borrowed an NVidia RTX A2000 card which according to its specs can do 8K [2], using a mini-DisplayPort to HDMI cable rated at 8K, but on both Windows and Linux the two highest resolutions on offer are 3840*2160 (regular 4K) and 4096*2160, which is strange and not useful.

The various documents on the A2000 differ on whether it has DisplayPort version 1.4 or 1.4a. According to the DisplayPort Wikipedia page [3] both versions 1.4 and 1.4a have a maximum of HBR3 speed and the difference is what version of DSC (Display Stream Compression [4]) is in use. DSC apparently causes no noticeable loss of quality for movies or games but apparently can be bad for text. According to the DisplayPort Wikipedia page version 1.4 can do 8K uncompressed at 30Hz or 24Hz with high dynamic range. So this should be able to work.
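As a sanity check on that claim (my own back-of-the-envelope arithmetic, ignoring blanking intervals and link-encoding overhead): 8K at 30Hz with 24 bits per pixel needs roughly 23.9 Gbit/s of raw pixel data, which fits under HBR3's roughly 25.92 Gbit/s payload capacity.

```shell
# Raw pixel data rate for 7680x4320 @ 30 Hz, 24 bits per pixel
# (no blanking, no DSC); compare with HBR3's ~25.92 Gbit/s payload.
echo $(( 7680 * 4320 * 30 * 24 ))   # 23887872000 bits/s, i.e. ~23.9 Gbit/s
```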

My theories as to why it doesn’t work are:

  • NVidia specs lie
  • My 8K cable isn’t really an 8K cable
  • Something weird happens converting DisplayPort to HDMI
  • The video card can only handle refresh rates for 8K that don’t match supported input for the TV

To get some more input on this issue I posted on Lemmy, here is the Lemmy post [5]. I signed up to lemmy.ml because it was the first one I found that seemed reasonable and was giving away free accounts, I haven’t tried any others and can’t review it but it seems to work well enough and it’s free. It’s described as “A community of privacy and FOSS enthusiasts, run by Lemmy’s developers” which is positive, I recommend that everyone who’s into FOSS create an account there or some other Lemmy server.

My Lemmy post was about what video cards to buy. I was looking at the Gigabyte RX 6400 Eagle 4G as a cheap card from a local store that does 8K, it also does DisplayPort 1.4 so might have the same issues, also apparently FOSS drivers don’t support 8K on HDMI because the people who manage HDMI specs are jerks. It’s a $200 card at MSY and a bit less on ebay so it’s an amount I can afford to risk on a product that might not do what I want, but it seems to have a high probability of getting the same result. The NVidia cards have the option of proprietary drivers which allow using HDMI and there are cards with DisplayPort 1.4 (which can do 8K@30Hz) and HDMI 2.1 (which can do 8K@50Hz). So HDMI is a better option for some cards just based on card output and has the additional benefit of not needing DisplayPort to HDMI conversion.

The best option apparently is the Intel cards which do DisplayPort internally and convert to HDMI in hardware which avoids the issue of FOSS drivers for HDMI at 8K. The Intel Arc B580 has nice specs [6], HDMI 2.1a and DisplayPort 2.1 output, 12G of RAM, and being faster than the low end cards like the RX 6400. But the local computer store price is $470 and the ebay price is a bit over $400. If it turns out to not do what I need it still will be a long way from the worst way I’ve wasted money on computer gear. But I’m still hesitating about this.

Any suggestions?

06 March, 2025 10:53AM by etbe

March 05, 2025

Reproducible Builds

Reproducible Builds in February 2025

Welcome to the second report in 2025 from the Reproducible Builds project. Our monthly reports outline what we’ve been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. As usual, however, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website.

Table of contents:

  1. Reproducible Builds at FOSDEM 2025
  2. Reproducible Builds at PyCascades 2025
  3. Does Functional Package Management Enable Reproducible Builds at Scale?
  4. reproduce.debian.net updates
  5. Upstream patches
  6. Distribution work
  7. diffoscope & strip-nondeterminism
  8. Website updates
  9. Reproducibility testing framework

Reproducible Builds at FOSDEM 2025

Similar to last year’s event, there was considerable activity regarding Reproducible Builds at FOSDEM 2025, held on 1st and 2nd February this year in Brussels, Belgium. We count at least four talks related to reproducible builds. (You can also read our news report from last year’s event, in which Holger Levsen presented in the main track.)


Jelle van der Waa, Holger Levsen and kpcyrd presented in the Distributions track on A Tale of several distros joining forces for a common goal. In this talk, three developers from two different Linux distributions (Arch Linux and Debian) discuss this goal — which is, of course, reproducible builds. The presenters discuss both what is shared and different between the two efforts, touching on the history and future challenges alike. The slides of this talk are available to view, as is the full video (30m02s). The talk was also discussed on Hacker News.


Zbigniew Jędrzejewski-Szmek presented in the ever-popular Python track on Rewriting .pyc files for fun and reproducibility, i.e. the bytecode files generated by Python in order to speed up module imports: “It’s been known for a while that those are not reproducible: on different architectures, the bytecode for exactly the same sources ends up slightly different.” The slides of this talk are available, as is the full video (28m32s).


In the Nix and NixOS track, Julien Malka presented on the Saturday asking How reproducible is NixOS: “We know that the NixOS ISO image is very close to be perfectly reproducible thanks to reproducible.nixos.org, but there doesn’t exist any monitoring of Nixpkgs as a whole. In this talk I’ll present the findings of a project that evaluated the reproducibility of Nixpkgs as a whole by mass rebuilding packages from revisions between 2017 and 2023 and comparing the results with the NixOS cache.” Unfortunately, no video of the talk is available, but there is a blog and article on the results.


Lastly, Simon Tournier presented in the Open Research track on the confluence of GNU Guix and Software Heritage: Source Code Archiving to the Rescue of Reproducible Deployment. Simon’s talk “describes design and implementation we came up and reports on the archival coverage for package source code with data collected over five years. It opens to some remaining challenges toward a better open and reproducible research.” The slides for the talk are available, as is the full video (23m17s).


Reproducible Builds at PyCascades 2025

Vagrant Cascadian presented at this year’s PyCascades conference, which was held on February 8th and 9th in Portland, OR, USA. PyCascades is a regional instance of PyCon held in the Pacific Northwest. Vagrant’s talk, entitled Re-Py-Ducible Builds, caught the audience’s attention with the following abstract:

Crank your Python best practices up to 11 with Reproducible Builds! This talk will explore Reproducible Builds by highlighting issues identified in Python projects, from the simple to the seemingly inscrutable. Reproducible Builds is basically the crazy idea that when you build something, and you build it again, you get the exact same thing… or even more important, if someone else builds it, they get the exact same thing too.

More info is available on the talk’s page.


“Does Functional Package Management Enable Reproducible Builds at Scale?”

On our mailing list last month, Julien Malka, Stefano Zacchiroli and Théo Zimmermann of Télécom Paris’ in-house research laboratory, the Information Processing and Communications Laboratory (LTCI), announced that they had published an article asking the question: Does Functional Package Management Enable Reproducible Builds at Scale? (PDF).

This month, however, Ludovic Courtès followed up to the original announcement on our mailing list mentioning, amongst other things, the Guix Data Service and how it shows the reproducibility of GNU Guix over time, as described in a GNU Guix blog post back in March 2024.


reproduce.debian.net updates

The last few months have seen the introduction of reproduce.debian.net. Announced first at the recent Debian MiniDebConf in Toulouse, reproduce.debian.net is an instance of rebuilderd operated by the Reproducible Builds project.

Powering this work is rebuilderd, our server which monitors the official package repositories of Linux distributions and attempts to reproduce the observed results there. This month, however, Holger Levsen:

  • Split packages that are not specific to any architecture away from amd64.reproducible.debian.net service into a new all.reproducible.debian.net page.

  • Increased the number of riscv64 nodes to a total of 4, and added a new amd64 node thanks to our (now 10-year) sponsor IONOS.

  • Discovered an issue in the Debian build service where some new ‘incoming’ build-dependencies do not end up historically archived.

  • Uploaded the devscripts package, incorporating changes from Jochen Sprickerhof to the debrebuild script — specifically to fix the handling of the Rules-Requires-Root header in Debian source packages.

  • Uploaded a number of Rust dependencies of rebuilderd (rust-libbz2-rs-sys, rust-actix-web, rust-actix-server, rust-actix-http, rust-actix-web-codegen and rust-time-tz) after they were prepared by kpcyrd.

Jochen Sprickerhof also updated the sbuild package to:

  • Obey requests from the user/developer for a different temporary directory.
  • Use the root/superuser for some values of Rules-Requires-Root.
  • Don’t pass --root-owner-group to old versions of dpkg.

… and additionally requested that many Debian packages be rebuilt by the build servers in order to work around bugs found on reproduce.debian.net. […][…][…]


Lastly, kpcyrd has also worked towards getting rebuilderd packaged in NixOS, and Jelle van der Waa picked up the existing pull request for Fedora support within rebuilderd and made it work with the existing Koji rebuilderd script. The server is currently being packaged for Fedora in an unofficial ‘copr’ repository, and will land in the official repositories once all the dependencies are packaged.


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


Distribution work

There has been the usual work in various distributions this month, such as:

In Debian, 17 reviews of Debian packages were added, 6 were updated and 8 were removed this month adding to our knowledge about identified issues.


Fedora developers Davide Cavalca and Zbigniew Jędrzejewski-Szmek gave a talk on Reproducible Builds in Fedora (PDF), touching on SRPM-specific issues as well as the current status and future plans.


Thanks to an investment from the Sovereign Tech Agency, the FreeBSD project’s work on unprivileged and reproducible builds continued this month. Notable fixes include:


The Yocto Project has been struggling to upgrade to the latest Go and Rust releases due to reproducibility problems in the newer versions. Hongxu Jia tracked down the issue with Go which meant that the project could upgrade from the 1.22 series to 1.24, with the fix being submitted upstream for review (see above). For Rust, however, the project was significantly behind, but has made recent progress after finally identifying the blocking reproducibility issues. At time of writing, the project is at Rust version 1.82, with patches under review for 1.83 and 1.84 and fixes being discussed with the Rust developers. The project hopes to improve the tests for reproducibility in the Rust project itself in order to try and avoid future regressions.

Yocto continues to maintain its ability to binary reproduce all of the recipes in OpenEmbedded-Core, regardless of the build host distribution or the current build path.


Finally, Douglas DeMaio published an article on the openSUSE blog announcing that the Reproducible-openSUSE (RBOS) Project Hits [Significant] Milestone. In particular:

The Reproducible-openSUSE (RBOS) project, which is a proof-of-concept fork of openSUSE, has reached a significant milestone after demonstrating a usable Linux distribution can be built with 100% bit-identical packages.

This news was also announced on our mailing list by Bernhard M. Wiedemann, who also published another report for openSUSE as well.


diffoscope & strip-nondeterminism

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 288 and 289 to Debian:

  • Add asar to DIFFOSCOPE_FAIL_TESTS_ON_MISSING_TOOLS in order to address Debian bug #1095057. […]
  • Catch a CalledProcessError when calling html2text. […]
  • Update the minimal Black version. […]

Additionally, Vagrant Cascadian updated diffoscope in GNU Guix to version 287 […][…] and 288 […][…] as well as submitted a patch to update to 289 […]. Vagrant also fixed an issue that was breaking reprotest on Guix […][…].

strip-nondeterminism is our sister tool to remove specific non-deterministic results from a completed build. This month version 1.14.1-2 was uploaded to Debian unstable by Holger Levsen.


Website updates

There were a large number of improvements made to our website this month, including:


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In January, a number of changes were made by Holger Levsen, including:

In addition:

  • kpcyrd fixed the /all/api/ API endpoints on reproduce.debian.net by altering the nginx configuration. […]

  • James Addison updated reproduce.debian.net to display the so-called ‘bad’ reasons hyperlink inline […] and merged the “Categorized issues” links into the “Reproduced builds” column […].

  • Jochen Sprickerhof also made some reproduce.debian.net-related changes, adding support for detecting a bug in the mmdebstrap package […] as well as updating some documentation […].

  • Roland Clobus continued their work on reproducible ‘live’ images for Debian, making changes related to new clustering of jobs in openQA. […]

And finally, both Holger Levsen […][…][…] and Vagrant Cascadian performed significant node maintenance. […][…][…][…][…]


If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:

05 March, 2025 01:31PM

Dima Kogan

Shop scheduling with PuLP

I recently used the PuLP modeler to solve a work scheduling problem to assign workers to shifts. Here are notes about doing that. This is a common use case, but isn't explicitly covered in the case studies in the PuLP documentation.

Here's the problem:

  • We are trying to put together a schedule for one week
  • Each day has some set of work shifts that need to be staffed
  • Each shift must be staffed with exactly one worker
  • The shift schedule is known beforehand, and the workers each declare their preferences beforehand: they mark each shift in the week as one of:
    • PREFERRED (if they want to be scheduled on that shift)
    • NEUTRAL
    • DISFAVORED (if they don't love that shift)
    • REFUSED (if they absolutely cannot work that shift)

The tool allocates workers to shifts so as to cover all the shifts, give everybody work, and match their preferences as closely as possible. Here's the implementation:

#!/usr/bin/python3

import re

def report_solution_to_console(vars):
    for w in days_of_week:
        annotation = ''
        if human_annotate is not None:
            for s in shifts.keys():
                m = re.match(rf'{w} - ', s)
                if not m: continue
                if vars[human_annotate][s].value():
                    annotation = f" ({human_annotate} SCHEDULED)"
                    break
            if not len(annotation):
                annotation = f" ({human_annotate} OFF)"

        print(f"{w}{annotation}")

        for s in shifts.keys():
            m = re.match(rf'{w} - ', s)
            if not m: continue

            annotation = ''
            if human_annotate is not None:
                annotation = f" ({human_annotate} {shifts[s][human_annotate]})"
            print(f"    ---- {s[m.end():]}{annotation}")

            for h in humans:
                if vars[h][s].value():
                    print(f"         {h} ({shifts[s][h]})")

def report_solution_summary_to_console(vars):
    print("\nSUMMARY")

    for h in humans:
        print(f"-- {h}")
        print(f"   benefit: {benefits[h].value():.3f}")

        counts = dict()
        for a in availabilities:
            counts[a] = 0

        for s in shifts.keys():
            if vars[h][s].value():
                counts[shifts[s][h]] += 1

        for a in availabilities:
            print(f"   {counts[a]} {a}")


human_annotate = None

days_of_week = ('SUNDAY',
                'MONDAY',
                'TUESDAY',
                'WEDNESDAY',
                'THURSDAY',
                'FRIDAY',
                'SATURDAY')

humans = ['ALICE', 'BOB',
          'CAROL', 'DAVID', 'EVE', 'FRANK', 'GRACE', 'HEIDI', 'IVAN', 'JUDY']

shifts = {'SUNDAY - SANDING 9:00 AM - 4:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'DAVID': 'PREFERRED',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'DISFAVORED',
           'HEIDI': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'JUDY':  'NEUTRAL'},
          'WEDNESDAY - SAWING 7:30 AM - 2:30 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'DAVID': 'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'EVE':   'REFUSED',
           'JUDY':  'REFUSED'},
          'THURSDAY - SANDING 9:00 AM - 4:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'DAVID': 'PREFERRED',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'PREFERRED',
           'HEIDI': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'JUDY':  'PREFERRED'},
          'SATURDAY - SAWING 7:30 AM - 2:30 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'DAVID': 'PREFERRED',
           'FRANK': 'PREFERRED',
           'HEIDI': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'EVE':   'REFUSED',
           'JUDY':  'REFUSED',
           'GRACE': 'REFUSED'},
          'SUNDAY - SAWING 9:00 AM - 4:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'DAVID': 'PREFERRED',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'JUDY':  'PREFERRED',
           'HEIDI': 'REFUSED'},
          'MONDAY - SAWING 9:00 AM - 4:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'DAVID': 'PREFERRED',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'PREFERRED',
           'IVAN':  'PREFERRED',
           'JUDY':  'PREFERRED',
           'HEIDI': 'REFUSED'},
          'TUESDAY - SAWING 9:00 AM - 4:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'DAVID': 'PREFERRED',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'NEUTRAL',
           'IVAN':  'PREFERRED',
           'JUDY':  'PREFERRED',
           'HEIDI': 'REFUSED'},
          'WEDNESDAY - PAINTING 7:30 AM - 2:30 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'EVE':   'REFUSED',
           'JUDY':  'REFUSED',
           'DAVID': 'REFUSED'},
          'THURSDAY - SAWING 9:00 AM - 4:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'DAVID': 'PREFERRED',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'PREFERRED',
           'IVAN':  'PREFERRED',
           'JUDY':  'PREFERRED',
           'HEIDI': 'REFUSED'},
          'FRIDAY - SAWING 9:00 AM - 4:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'DAVID': 'PREFERRED',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'PREFERRED',
           'IVAN':  'PREFERRED',
           'JUDY':  'DISFAVORED',
           'HEIDI': 'REFUSED'},
          'SATURDAY - PAINTING 7:30 AM - 2:30 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'FRANK': 'PREFERRED',
           'HEIDI': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'EVE':   'REFUSED',
           'JUDY':  'REFUSED',
           'GRACE': 'REFUSED',
           'DAVID': 'REFUSED'},
          'SUNDAY - PAINTING 9:45 AM - 4:45 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'JUDY':  'PREFERRED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'MONDAY - PAINTING 9:45 AM - 4:45 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'PREFERRED',
           'IVAN':  'PREFERRED',
           'JUDY':  'NEUTRAL',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'TUESDAY - PAINTING 9:45 AM - 4:45 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'NEUTRAL',
           'IVAN':  'PREFERRED',
           'JUDY':  'PREFERRED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'WEDNESDAY - SANDING 9:45 AM - 4:45 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'DAVID': 'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'JUDY':  'NEUTRAL',
           'EVE':   'REFUSED'},
          'THURSDAY - PAINTING 9:45 AM - 4:45 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'NEUTRAL',
           'IVAN':  'PREFERRED',
           'JUDY':  'PREFERRED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'FRIDAY - PAINTING 9:45 AM - 4:45 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'PREFERRED',
           'FRANK': 'PREFERRED',
           'GRACE': 'PREFERRED',
           'IVAN':  'PREFERRED',
           'JUDY':  'DISFAVORED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'SATURDAY - SANDING 9:45 AM - 4:45 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'DAVID': 'PREFERRED',
           'FRANK': 'PREFERRED',
           'HEIDI': 'DISFAVORED',
           'IVAN':  'PREFERRED',
           'EVE':   'REFUSED',
           'JUDY':  'REFUSED',
           'GRACE': 'REFUSED'},
          'SUNDAY - PAINTING 11:00 AM - 6:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'PREFERRED',
           'IVAN':  'NEUTRAL',
           'JUDY':  'NEUTRAL',
           'DAVID': 'REFUSED'},
          'MONDAY - PAINTING 12:00 PM - 7:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'PREFERRED',
           'IVAN':  'NEUTRAL',
           'JUDY':  'NEUTRAL',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'TUESDAY - PAINTING 12:00 PM - 7:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'NEUTRAL',
           'HEIDI': 'REFUSED',
           'JUDY':  'REFUSED',
           'DAVID': 'REFUSED'},
          'WEDNESDAY - PAINTING 12:00 PM - 7:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'NEUTRAL',
           'JUDY':  'PREFERRED',
           'EVE':   'REFUSED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'THURSDAY - PAINTING 12:00 PM - 7:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'NEUTRAL',
           'JUDY':  'PREFERRED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'FRIDAY - PAINTING 12:00 PM - 7:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'NEUTRAL',
           'JUDY':  'DISFAVORED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'SATURDAY - PAINTING 12:00 PM - 7:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'NEUTRAL',
           'FRANK': 'NEUTRAL',
           'IVAN':  'NEUTRAL',
           'JUDY':  'DISFAVORED',
           'EVE':   'REFUSED',
           'HEIDI': 'REFUSED',
           'GRACE': 'REFUSED',
           'DAVID': 'REFUSED'},
          'SUNDAY - SAWING 12:00 PM - 7:00 PM':
          {'ALICE': 'PREFERRED',
           'BOB':   'PREFERRED',
           'CAROL': 'NEUTRAL',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'NEUTRAL',
           'JUDY':  'PREFERRED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'MONDAY - SAWING 2:00 PM - 9:00 PM':
          {'ALICE': 'PREFERRED',
           'BOB':   'PREFERRED',
           'CAROL': 'DISFAVORED',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'DISFAVORED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'TUESDAY - SAWING 2:00 PM - 9:00 PM':
          {'ALICE': 'PREFERRED',
           'BOB':   'PREFERRED',
           'CAROL': 'DISFAVORED',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'HEIDI': 'REFUSED',
           'JUDY':  'REFUSED',
           'DAVID': 'REFUSED'},
          'WEDNESDAY - SAWING 2:00 PM - 9:00 PM':
          {'ALICE': 'PREFERRED',
           'BOB':   'PREFERRED',
           'CAROL': 'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'DISFAVORED',
           'EVE':   'REFUSED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'THURSDAY - SAWING 2:00 PM - 9:00 PM':
          {'ALICE': 'PREFERRED',
           'BOB':   'PREFERRED',
           'CAROL': 'DISFAVORED',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'DISFAVORED',
           'HEIDI': 'REFUSED',
           'DAVID': 'REFUSED'},
          'FRIDAY - SAWING 2:00 PM - 9:00 PM':
          {'ALICE': 'PREFERRED',
           'BOB':   'PREFERRED',
           'CAROL': 'DISFAVORED',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'HEIDI': 'REFUSED',
           'JUDY':  'REFUSED',
           'DAVID': 'REFUSED'},
          'SATURDAY - SAWING 2:00 PM - 9:00 PM':
          {'ALICE': 'PREFERRED',
           'BOB':   'PREFERRED',
           'CAROL': 'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'DISFAVORED',
           'EVE':   'REFUSED',
           'HEIDI': 'REFUSED',
           'GRACE': 'REFUSED',
           'DAVID': 'REFUSED'},
          'SUNDAY - PAINTING 12:15 PM - 7:15 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'PREFERRED',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'NEUTRAL',
           'DAVID': 'REFUSED'},
          'MONDAY - PAINTING 2:00 PM - 9:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'DISFAVORED',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'DISFAVORED',
           'DAVID': 'REFUSED'},
          'TUESDAY - PAINTING 2:00 PM - 9:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'DISFAVORED',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'REFUSED',
           'DAVID': 'REFUSED'},
          'WEDNESDAY - PAINTING 2:00 PM - 9:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'DISFAVORED',
           'EVE':   'REFUSED',
           'DAVID': 'REFUSED'},
          'THURSDAY - PAINTING 2:00 PM - 9:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'DISFAVORED',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'DISFAVORED',
           'DAVID': 'REFUSED'},
          'FRIDAY - PAINTING 2:00 PM - 9:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'DISFAVORED',
           'EVE':   'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'GRACE': 'NEUTRAL',
           'HEIDI': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'REFUSED',
           'DAVID': 'REFUSED'},
          'SATURDAY - PAINTING 2:00 PM - 9:00 PM':
          {'ALICE': 'NEUTRAL',
           'BOB':   'NEUTRAL',
           'CAROL': 'DISFAVORED',
           'FRANK': 'NEUTRAL',
           'HEIDI': 'NEUTRAL',
           'IVAN':  'DISFAVORED',
           'JUDY':  'DISFAVORED',
           'EVE':   'REFUSED',
           'GRACE': 'REFUSED',
           'DAVID': 'REFUSED'}}

availabilities = ['PREFERRED', 'NEUTRAL', 'DISFAVORED']



import pulp
prob = pulp.LpProblem("Scheduling", pulp.LpMaximize)

vars = pulp.LpVariable.dicts("Assignments",
                             (humans, shifts.keys()),
                             None,None, # bounds; unused, since these are binary variables
                             pulp.LpBinary)

# Everyone works at least 2 shifts
Nshifts_min = 2
for h in humans:
    prob += (
        pulp.lpSum([vars[h][s] for s in shifts.keys()]) >= Nshifts_min,
        f"{h} works at least {Nshifts_min} shifts",
    )

# each shift is ~ 8 hours, so I limit everyone to 40/8 = 5 shifts
Nshifts_max = 5
for h in humans:
    prob += (
        pulp.lpSum([vars[h][s] for s in shifts.keys()]) <= Nshifts_max,
        f"{h} works at most {Nshifts_max} shifts",
    )

# all shifts staffed and not double-staffed
for s in shifts.keys():
    prob += (
        pulp.lpSum([vars[h][s] for h in humans]) == 1,
        f"{s} is staffed",
    )

# each human can work at most one shift on any given day
for w in days_of_week:
    for h in humans:
        prob += (
            pulp.lpSum([vars[h][s] for s in shifts.keys() if re.match(rf'{w} ',s)]) <= 1,
            f"{h} cannot be double-booked on {w}"
        )


#### Some explicit constraints; as an example
# DAVID can't work any PAINTING shift and is off on Thu and Sun
h = 'DAVID'
prob += (
    pulp.lpSum([vars[h][s] for s in shifts.keys() if re.search(r'- PAINTING',s)]) == 0,
    f"{h} can't work any PAINTING shift"
)
prob += (
    pulp.lpSum([vars[h][s] for s in shifts.keys() if re.match(r'THURSDAY|SUNDAY',s)]) == 0,
    f"{h} is off on Thursday and Sunday"
)

# Do not assign any "REFUSED" shifts
for s in shifts.keys():
    for h in humans:
        if shifts[s][h] == 'REFUSED':
            prob += (
                vars[h][s] == 0,
                f"{h} is not available for {s}"
            )


# Objective. I try to maximize the "happiness". Each human sees each shift as
# one of:
#
#   PREFERRED
#   NEUTRAL
#   DISFAVORED
#   REFUSED
#
# I set a hard constraint to handle "REFUSED", and arbitrarily, I set these
# benefit values for the others
benefit_availability = dict()
benefit_availability['PREFERRED']  = 3
benefit_availability['NEUTRAL']    = 2
benefit_availability['DISFAVORED'] = 1

# Not used, since this is a hard constraint. But the code needs this to be a
# part of the benefit. I can ignore these in the code, but let's keep this
# simple
benefit_availability['REFUSED' ] = -1000

benefits = dict()
for h in humans:
    benefits[h] = \
        pulp.lpSum([vars[h][s] * benefit_availability[shifts[s][h]] \
                    for s in shifts.keys()])

benefit_total = \
    pulp.lpSum([benefits[h] \
                for h in humans])

prob += (
    benefit_total,
    "happiness",
)

prob.solve()

if pulp.LpStatus[prob.status] == "Optimal":
    report_solution_to_console(vars)
    report_solution_summary_to_console(vars)

The set of workers is in the humans variable, and the shift schedule and the workers' preferences are encoded in the shifts dict. The problem is defined by vars, a dict of dicts of boolean variables, each indicating whether a particular worker is scheduled for a particular shift. We define a set of constraints on these worker allocations to restrict ourselves to valid solutions. And among these valid solutions, we try to find the one that maximizes some benefit function, defined here as:

benefit_availability = dict()
benefit_availability['PREFERRED']  = 3
benefit_availability['NEUTRAL']    = 2
benefit_availability['DISFAVORED'] = 1

benefits = dict()
for h in humans:
    benefits[h] = \
        pulp.lpSum([vars[h][s] * benefit_availability[shifts[s][h]] \
                    for s in shifts.keys()])

benefit_total = \
    pulp.lpSum([benefits[h] \
                for h in humans])

So for instance each shift that was scheduled as somebody's PREFERRED shift gives us 3 benefit points. And if all the shifts ended up being PREFERRED, we'd have a total benefit value of 3*Nshifts. This is impossible, however, because that would violate some constraints in the problem.

The exact trade-off between the different preferences is set in the benefit_availability dict. With the above numbers, it's equally good for somebody to have a NEUTRAL shift and a day off as it is for them to have two DISFAVORED shifts. If we really want to encourage the program to work people as much as possible (days off discouraged), we'd want to raise the DISFAVORED benefit value.
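To see the scoring arithmetic concretely, here is a minimal sketch — with a hypothetical toy worker and shifts, not data from the real schedule — that computes one worker's benefit the same way the lpSum objective does, as a sum of benefit values over assigned shifts:

```python
# Benefit values from the model above
benefit_availability = {'PREFERRED': 3, 'NEUTRAL': 2, 'DISFAVORED': 1}

# Hypothetical toy data: one worker's preferences, and which shifts
# the solver hypothetically assigned to them
preferences = {'MONDAY - SAWING':   'PREFERRED',
               'TUESDAY - SANDING': 'NEUTRAL',
               'FRIDAY - PAINTING': 'DISFAVORED'}
assigned = ['MONDAY - SAWING', 'TUESDAY - SANDING']  # FRIDAY shift not assigned

# Same computation as the objective: score each assigned shift by how
# the worker rated it, then sum
benefit = sum(benefit_availability[preferences[s]] for s in assigned)
print(benefit)  # 3 (PREFERRED) + 2 (NEUTRAL) = 5
```

With these numbers, dropping the TUESDAY shift (a day off) costs 2 points — exactly as much as swapping a NEUTRAL shift for two DISFAVORED ones would gain, which is why the weights control the work-vs-rest trade-off.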

I run this program and I get:

....
Result - Optimal solution found

Objective value:                108.00000000
Enumerated nodes:               0
Total iterations:               0
Time (CPU seconds):             0.01
Time (Wallclock seconds):       0.01

Option for printingOptions changed from normal to all
Total time (CPU seconds):       0.02   (Wallclock seconds):       0.02

SUNDAY
    ---- SANDING 9:00 AM - 4:00 PM
         EVE (PREFERRED)
    ---- SAWING 9:00 AM - 4:00 PM
         IVAN (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM
         FRANK (PREFERRED)
    ---- PAINTING 11:00 AM - 6:00 PM
         HEIDI (PREFERRED)
    ---- SAWING 12:00 PM - 7:00 PM
         ALICE (PREFERRED)
    ---- PAINTING 12:15 PM - 7:15 PM
         CAROL (PREFERRED)
MONDAY
    ---- SAWING 9:00 AM - 4:00 PM
         DAVID (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM
         IVAN (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM
         GRACE (PREFERRED)
    ---- SAWING 2:00 PM - 9:00 PM
         ALICE (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM
         HEIDI (NEUTRAL)
TUESDAY
    ---- SAWING 9:00 AM - 4:00 PM
         DAVID (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM
         EVE (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM
         FRANK (NEUTRAL)
    ---- SAWING 2:00 PM - 9:00 PM
         BOB (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM
         HEIDI (NEUTRAL)
WEDNESDAY
    ---- SAWING 7:30 AM - 2:30 PM
         DAVID (PREFERRED)
    ---- PAINTING 7:30 AM - 2:30 PM
         IVAN (PREFERRED)
    ---- SANDING 9:45 AM - 4:45 PM
         FRANK (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM
         JUDY (PREFERRED)
    ---- SAWING 2:00 PM - 9:00 PM
         BOB (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM
         ALICE (NEUTRAL)
THURSDAY
    ---- SANDING 9:00 AM - 4:00 PM
         GRACE (PREFERRED)
    ---- SAWING 9:00 AM - 4:00 PM
         CAROL (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM
         EVE (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM
         JUDY (PREFERRED)
    ---- SAWING 2:00 PM - 9:00 PM
         BOB (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM
         ALICE (NEUTRAL)
FRIDAY
    ---- SAWING 9:00 AM - 4:00 PM
         DAVID (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM
         FRANK (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM
         GRACE (NEUTRAL)
    ---- SAWING 2:00 PM - 9:00 PM
         BOB (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM
         HEIDI (NEUTRAL)
SATURDAY
    ---- SAWING 7:30 AM - 2:30 PM
         CAROL (PREFERRED)
    ---- PAINTING 7:30 AM - 2:30 PM
         IVAN (PREFERRED)
    ---- SANDING 9:45 AM - 4:45 PM
         DAVID (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM
         FRANK (NEUTRAL)
    ---- SAWING 2:00 PM - 9:00 PM
         ALICE (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM
         BOB (NEUTRAL)

SUMMARY
-- ALICE
   benefit: 13.000
   3 PREFERRED
   2 NEUTRAL
   0 DISFAVORED
-- BOB
   benefit: 14.000
   4 PREFERRED
   1 NEUTRAL
   0 DISFAVORED
-- CAROL
   benefit: 9.000
   3 PREFERRED
   0 NEUTRAL
   0 DISFAVORED
-- DAVID
   benefit: 15.000
   5 PREFERRED
   0 NEUTRAL
   0 DISFAVORED
-- EVE
   benefit: 9.000
   3 PREFERRED
   0 NEUTRAL
   0 DISFAVORED
-- FRANK
   benefit: 13.000
   3 PREFERRED
   2 NEUTRAL
   0 DISFAVORED
-- GRACE
   benefit: 8.000
   2 PREFERRED
   1 NEUTRAL
   0 DISFAVORED
-- HEIDI
   benefit: 9.000
   1 PREFERRED
   3 NEUTRAL
   0 DISFAVORED
-- IVAN
   benefit: 12.000
   4 PREFERRED
   0 NEUTRAL
   0 DISFAVORED
-- JUDY
   benefit: 6.000
   2 PREFERRED
   0 NEUTRAL
   0 DISFAVORED

So we have a solution! We have 108 total benefit points. But it looks a bit uneven: Judy only works 2 days, while some people work many more: David works 5, for instance. Why is that? I update the program with human_annotate = 'JUDY', run it again, and it tells me more about Judy's preferences:

Objective value:                108.00000000
Enumerated nodes:               0
Total iterations:               0
Time (CPU seconds):             0.01
Time (Wallclock seconds):       0.01

Option for printingOptions changed from normal to all
Total time (CPU seconds):       0.01   (Wallclock seconds):       0.02

SUNDAY (JUDY OFF)
    ---- SANDING 9:00 AM - 4:00 PM (JUDY NEUTRAL)
         EVE (PREFERRED)
    ---- SAWING 9:00 AM - 4:00 PM (JUDY PREFERRED)
         IVAN (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM (JUDY PREFERRED)
         FRANK (PREFERRED)
    ---- PAINTING 11:00 AM - 6:00 PM (JUDY NEUTRAL)
         HEIDI (PREFERRED)
    ---- SAWING 12:00 PM - 7:00 PM (JUDY PREFERRED)
         ALICE (PREFERRED)
    ---- PAINTING 12:15 PM - 7:15 PM (JUDY NEUTRAL)
         CAROL (PREFERRED)
MONDAY (JUDY OFF)
    ---- SAWING 9:00 AM - 4:00 PM (JUDY PREFERRED)
         DAVID (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM (JUDY NEUTRAL)
         IVAN (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM (JUDY NEUTRAL)
         GRACE (PREFERRED)
    ---- SAWING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
         ALICE (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
         HEIDI (NEUTRAL)
TUESDAY (JUDY OFF)
    ---- SAWING 9:00 AM - 4:00 PM (JUDY PREFERRED)
         DAVID (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM (JUDY PREFERRED)
         EVE (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM (JUDY REFUSED)
         FRANK (NEUTRAL)
    ---- SAWING 2:00 PM - 9:00 PM (JUDY REFUSED)
         BOB (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM (JUDY REFUSED)
         HEIDI (NEUTRAL)
WEDNESDAY (JUDY SCHEDULED)
    ---- SAWING 7:30 AM - 2:30 PM (JUDY REFUSED)
         DAVID (PREFERRED)
    ---- PAINTING 7:30 AM - 2:30 PM (JUDY REFUSED)
         IVAN (PREFERRED)
    ---- SANDING 9:45 AM - 4:45 PM (JUDY NEUTRAL)
         FRANK (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM (JUDY PREFERRED)
         JUDY (PREFERRED)
    ---- SAWING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
         BOB (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
         ALICE (NEUTRAL)
THURSDAY (JUDY SCHEDULED)
    ---- SANDING 9:00 AM - 4:00 PM (JUDY PREFERRED)
         GRACE (PREFERRED)
    ---- SAWING 9:00 AM - 4:00 PM (JUDY PREFERRED)
         CAROL (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM (JUDY PREFERRED)
         EVE (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM (JUDY PREFERRED)
         JUDY (PREFERRED)
    ---- SAWING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
         BOB (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
         ALICE (NEUTRAL)
FRIDAY (JUDY OFF)
    ---- SAWING 9:00 AM - 4:00 PM (JUDY DISFAVORED)
         DAVID (PREFERRED)
    ---- PAINTING 9:45 AM - 4:45 PM (JUDY DISFAVORED)
         FRANK (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM (JUDY DISFAVORED)
         GRACE (NEUTRAL)
    ---- SAWING 2:00 PM - 9:00 PM (JUDY REFUSED)
         BOB (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM (JUDY REFUSED)
         HEIDI (NEUTRAL)
SATURDAY (JUDY OFF)
    ---- SAWING 7:30 AM - 2:30 PM (JUDY REFUSED)
         CAROL (PREFERRED)
    ---- PAINTING 7:30 AM - 2:30 PM (JUDY REFUSED)
         IVAN (PREFERRED)
    ---- SANDING 9:45 AM - 4:45 PM (JUDY REFUSED)
         DAVID (PREFERRED)
    ---- PAINTING 12:00 PM - 7:00 PM (JUDY DISFAVORED)
         FRANK (NEUTRAL)
    ---- SAWING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
         ALICE (PREFERRED)
    ---- PAINTING 2:00 PM - 9:00 PM (JUDY DISFAVORED)
         BOB (NEUTRAL)

SUMMARY
-- ALICE
   benefit: 13.000
   3 PREFERRED
   2 NEUTRAL
   0 DISFAVORED
-- BOB
   benefit: 14.000
   4 PREFERRED
   1 NEUTRAL
   0 DISFAVORED
-- CAROL
   benefit: 9.000
   3 PREFERRED
   0 NEUTRAL
   0 DISFAVORED
-- DAVID
   benefit: 15.000
   5 PREFERRED
   0 NEUTRAL
   0 DISFAVORED
-- EVE
   benefit: 9.000
   3 PREFERRED
   0 NEUTRAL
   0 DISFAVORED
-- FRANK
   benefit: 13.000
   3 PREFERRED
   2 NEUTRAL
   0 DISFAVORED
-- GRACE
   benefit: 8.000
   2 PREFERRED
   1 NEUTRAL
   0 DISFAVORED
-- HEIDI
   benefit: 9.000
   1 PREFERRED
   3 NEUTRAL
   0 DISFAVORED
-- IVAN
   benefit: 12.000
   4 PREFERRED
   0 NEUTRAL
   0 DISFAVORED
-- JUDY
   benefit: 6.000
   2 PREFERRED
   0 NEUTRAL
   0 DISFAVORED

This tells us that on Monday Judy does not work, although she marked the SAWING shift as PREFERRED. Instead David got that shift. What would happen if David gave that shift to Judy? He would lose 3 points, she would gain 3 points, and the total would remain exactly the same at 108.

How would we favor a more even distribution? We need some sort of tie-break. I want to add a nonlinearity to strongly disfavor people getting a low number of shifts. But PuLP is very explicitly a linear programming solver, and cannot solve nonlinear problems. We can get around this by enumerating each specific case and assigning it a nonlinear benefit function. The most obvious approach is to define another set of boolean variables, vars_Nshifts[human][N], and then use them to add extra benefit terms with values nonlinearly related to Nshifts. Something like this:

benefit_boost_Nshifts = \
    {2: -0.8,
     3: -0.5,
     4: -0.3,
     5: -0.2}
for h in humans:
    benefits[h] = \
        ... + \
        pulp.lpSum([vars_Nshifts[h][n] * benefit_boost_Nshifts[n] \
                    for n in benefit_boost_Nshifts.keys()])

So in the previous example we considered giving David's 5th shift to Judy, for her 3rd shift. In that scenario, David's extra benefit would change from -0.2 to -0.3 (a shift of -0.1), while Judy's would change from -0.8 to -0.5 (a shift of +0.3). So balancing out the shifts in this way would work: the solver would favor the solution with the higher benefit function.

Great. In order for this to work, we need the vars_Nshifts[human][N] variables to function as intended: they need to be binary indicators of whether a specific person has that many shifts or not. That would need to be implemented with constraints. Let's plot it like this:

#!/usr/bin/python3
import numpy as np
import gnuplotlib as gp

Nshifts_eq  = 4
Nshifts_max = 10

Nshifts = np.arange(Nshifts_max+1)
i0 = np.nonzero(Nshifts != Nshifts_eq)[0]
i1 = np.nonzero(Nshifts == Nshifts_eq)[0]

gp.plot( # True value: var_Nshifts4==0, Nshifts!=4
         ( np.zeros(i0.shape),
           Nshifts[i0],
           dict(_with     = 'points pt 7 ps 1 lc "red"') ),
         # True value: var_Nshifts4==1, Nshifts==4
         ( np.ones(i1.shape),
           Nshifts[i1],
           dict(_with     = 'points pt 7 ps 1 lc "red"') ),
         # False value: var_Nshifts4==1, Nshifts!=4
         ( np.ones(i0.shape),
           Nshifts[i0],
           dict(_with     = 'points pt 7 ps 1 lc "black"') ),
         # False value: var_Nshifts4==0, Nshifts==4
         ( np.zeros(i1.shape),
           Nshifts[i1],
           dict(_with     = 'points pt 7 ps 1 lc "black"') ),
        unset=('grid'),
        _set = (f'xtics ("(Nshifts=={Nshifts_eq}) == 0" 0, "(Nshifts=={Nshifts_eq}) == 1" 1)'),
        _xrange = (-0.1, 1.1),
        ylabel = "Nshifts",
        title = "Nshifts equality variable: not linearly separable",
        hardcopy = "/tmp/scheduling-Nshifts-eq.svg")

[Figure: scheduling-Nshifts-eq.svg]

So a hypothetical vars_Nshifts[h][4] variable (plotted on the x axis of this plot) would need to be defined by a set of linear AND constraints to linearly separate the true (red) values of this variable from the false (black) values. As can be seen in this plot, this isn't possible. So this representation does not work.

How do we fix it? We can use inequality variables instead. I define a different set of variables vars_Nshifts_leq[human][N] that are 1 iff Nshifts <= N. The equality variable from before can be expressed as a difference of these inequality variables: vars_Nshifts[human][N] = vars_Nshifts_leq[human][N]-vars_Nshifts_leq[human][N-1]
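This difference construction can be sanity-checked outside the solver. Here is a small numeric sketch of the identity (not part of the original program); leq() stands in for the values a correctly-constrained vars_Nshifts_leq[h][n] variable would take:

```python
# Numeric check of the identity: for any shift count,
#   (Nshifts == n)  ==  (Nshifts <= n) - (Nshifts <= n-1)
Nshifts_max = 10   # assumed bound, matching the plots in this post

def leq(nshifts, n):
    # Value a correctly-constrained vars_Nshifts_leq[h][n] would take
    return 1 if nshifts <= n else 0

for nshifts in range(Nshifts_max + 1):
    for n in range(1, Nshifts_max + 1):
        eq = leq(nshifts, n) - leq(nshifts, n - 1)
        assert eq == (1 if nshifts == n else 0)
print("equality-from-inequality identity holds")
```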

Can these vars_Nshifts_leq variables be defined by a set of linear AND constraints? Yes:

#!/usr/bin/python3
import numpy as np
import numpysane as nps
import gnuplotlib as gp

Nshifts_leq = 4
Nshifts_max = 10

Nshifts = np.arange(Nshifts_max+1)
i0 = np.nonzero(Nshifts >  Nshifts_leq)[0]
i1 = np.nonzero(Nshifts <= Nshifts_leq)[0]

def linear_slope_yintercept(xy0,xy1):
    m = (xy1[1] - xy0[1])/(xy1[0] - xy0[0])
    b = xy1[1] - m * xy1[0]
    return np.array(( m, b ))
x01     = np.arange(2)
x01_one = nps.glue( nps.transpose(x01), np.ones((2,1)), axis=-1)
y_lowerbound = nps.inner(x01_one,
                         linear_slope_yintercept( np.array((0, Nshifts_leq+1)),
                                                  np.array((1, 0)) ))
y_upperbound = nps.inner(x01_one,
                         linear_slope_yintercept( np.array((0, Nshifts_max)),
                                                  np.array((1, Nshifts_leq)) ))
y_lowerbound_check = (1-x01) * (Nshifts_leq+1)
y_upperbound_check = Nshifts_max - x01*(Nshifts_max-Nshifts_leq)

gp.plot( # True value: var_Nshifts_leq4==0, Nshifts>4
         ( np.zeros(i0.shape),
           Nshifts[i0],
           dict(_with     = 'points pt 7 ps 1 lc "red"') ),
         # True value: var_Nshifts_leq4==1, Nshifts<=4
         ( np.ones(i1.shape),
           Nshifts[i1],
           dict(_with     = 'points pt 7 ps 1 lc "red"') ),
         # False value: var_Nshifts_leq4==1, Nshifts>4
         ( np.ones(i0.shape),
           Nshifts[i0],
           dict(_with     = 'points pt 7 ps 1 lc "black"') ),
         # False value: var_Nshifts_leq4==0, Nshifts<=4
         ( np.zeros(i1.shape),
           Nshifts[i1],
           dict(_with     = 'points pt 7 ps 1 lc "black"') ),

         ( x01, y_lowerbound, y_upperbound,
           dict( _with     = 'filledcurves lc "green"',
                 tuplesize = 3) ),
         ( x01, nps.cat(y_lowerbound_check, y_upperbound_check),
           dict( _with     = 'lines lc "green" lw 2',
                 tuplesize = 2) ),

        unset=('grid'),
        _set = (f'xtics ("(Nshifts<={Nshifts_leq}) == 0" 0, "(Nshifts<={Nshifts_leq}) == 1" 1)',
                'style fill transparent pattern 1'),
        _xrange = (-0.1, 1.1),
        ylabel = "Nshifts",
        title = "Nshifts inequality variable: linearly separable",
        hardcopy = "/tmp/scheduling-Nshifts-leq.svg")

[Figure: scheduling-Nshifts-leq.svg]

So we can use two linear constraints to make each of these variables work properly. To use these in the benefit function we can use the equality constraint expression from above, or we can use these directly:

# I want to favor people getting more extra shifts at the start to balance
# things out: somebody getting one more shift on their pile shouldn't take
# shifts away from under-utilized people
benefit_boost_leq_bound = \
    {2: .2,
     3: .3,
     4: .4,
     5: .5}

# Constrain vars_Nshifts_leq variables to do the right thing
for h in humans:
    for b in benefit_boost_leq_bound.keys():
        prob += (pulp.lpSum([vars[h][s] for s in shifts.keys()])
                 >= (1 - vars_Nshifts_leq[h][b])*(b+1),
                 f"{h} at least {b} shifts: lower bound")
        prob += (pulp.lpSum([vars[h][s] for s in shifts.keys()])
                 <= Nshifts_max - vars_Nshifts_leq[h][b]*(Nshifts_max-b),
                 f"{h} at least {b} shifts: upper bound")

benefits = dict()
for h in humans:
    benefits[h] = \
        ... + \
        pulp.lpSum([vars_Nshifts_leq[h][b] * benefit_boost_leq_bound[b] \
                    for b in benefit_boost_leq_bound.keys()])

In this scenario, David would get a boost of 0.4 from giving up his 5th shift, while Judy would lose a boost of 0.2 from getting her 3rd, for a net gain of 0.2 benefit points. The exact numbers will need to be adjusted on a case-by-case basis, but this works.
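As a quick arithmetic check of those numbers, here is a standalone sketch (not part of the scheduling program); boost() just sums the boosts that a person with a given shift count collects from the benefit_boost_leq_bound table above:

```python
# Net benefit change when David (5 shifts) gives a shift to Judy (2 shifts)
benefit_boost_leq_bound = {2: .2, 3: .3, 4: .4, 5: .5}

def boost(nshifts):
    # Sum of boosts for every satisfied "Nshifts <= b" indicator
    return sum(v for b, v in benefit_boost_leq_bound.items() if nshifts <= b)

delta_david = boost(4) - boost(5)   # gives up his 5th shift: +0.4
delta_judy  = boost(3) - boost(2)   # gains her 3rd shift:    -0.2
net = delta_david + delta_judy
print(f"David: {delta_david:+.1f}  Judy: {delta_judy:+.1f}  net: {net:+.1f}")
```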

The full program, with this and other extra features is available here.

05 March, 2025 12:02PM by Dima Kogan

March 03, 2025

hackergotchi for Bits from Debian

Bits from Debian

Bits from the DPL

Dear Debian community,

this is bits from the DPL for February.

The ftpmaster team is seeking new members

In December, Scott Kitterman announced his retirement from the project. I personally regret this, as I vividly remember his invaluable support during the Debian Med sprint at the start of the COVID-19 pandemic. He even took time off to ensure new packages cleared the queue in under 24 hours. I want to take this opportunity to personally thank Scott for his contributions during that sprint and for all his work in Debian.

With one fewer FTP assistant, I am concerned about the increased workload on the remaining team. I encourage anyone in the Debian community who is interested to consider reaching out to the FTP masters about joining their team.

If you're wondering about the role of the FTP masters, I'd like to share a fellow developer's perspective:

"My read on the FTP masters is:

  • In truth, they are the heart of the project.
  • They know it.
  • They do a fantastic job."

I fully agree and see it as part of my role as DPL to ensure this remains true for Debian's future.

If you're looking for a way to support Debian in a critical role where many developers will deeply appreciate your work, consider reaching out to the team. It's a great opportunity for any Debian Developer to contribute to a key part of the project.

Project Status: Six Months of Bug of the Day

In my Bits from the DPL talk at DebConf24, I announced the Tiny Tasks effort, which I intended to start with a Bug of the Day project. Another idea was an Autopkgtest of the Day, but this has been postponed due to limited time resources; I cannot run both projects in parallel.

The original goal was to provide small, time-bound examples for newcomers. To put it bluntly: in terms of attracting new contributors, it has been a failure so far. My offer to explain individual bug-fixing commits in detail, if needed, received no response, and despite my efforts to encourage questions, none were asked.

However, the project has several positive aspects: experienced developers actively exchange ideas, collaborate on fixing bugs, assess whether packages are worth fixing or should be removed, and work together to find technical solutions for non-trivial problems.

So far, the project has been engaging and rewarding every day, bringing new discoveries and challenges-not just technical, but also social. Fortunately, in the vast majority of cases, I receive positive responses and appreciation from maintainers. Even in the few instances where help was declined, it was encouraging to see that in two cases, maintainers used the ping as motivation to work on their packages themselves. This reflects the dedication and high standards of maintainers, whose work is essential to the project's success.

I once used the metaphor that this project is like wandering through a dark basement with a lone flashlight: exploring aimlessly and discovering a wide variety of things that have accumulated over the years. Among them are true marvels with popcon >10,000, ingenious tools, and delightful games that I only recently learned about. There are also some packages whose time may have come to an end, but each of them reflects the dedication and effort of those who maintained them, and that deserves the utmost respect.

Leaving aside the challenge of attracting newcomers, what have we achieved since August 1st last year?

  • Fixed more than one package per day, typically addressing multiple bugs.
  • Added and corrected numerous Homepage fields and watch files.
  • The most frequently patched issue was "Fails To Cross-Build From Source" (all including patches).
  • Migrated several packages from cdbs/debhelper to dh.
  • Rewrote many d/copyright files to DEP5 format and thoroughly reviewed them.
  • Integrated all affected packages into Salsa and enabled Salsa CI.
  • Approximately half of the packages were moved to appropriate teams, while the rest are maintained within the Debian or Salvage teams.
  • Regularly performed team uploads, ITS, NMUs, or QA uploads.
  • Filed several RoQA bugs to propose package removals where appropriate.
  • Reported multiple maintainers to the MIA team when necessary.

With some goodwill, you can see a slight impact on the trends.debian.net graphs (thank you Lucas for the graphs), but I would never claim that this project alone is responsible for the progress. What I have also observed is the steady stream of daily uploads to the delayed queue, demonstrating the continuous efforts of many contributors. This ongoing work often remains unseen by most, including myself, if not for my regular check-ins on this list. I would like to extend my sincere thanks to everyone pushing fixes there, contributing to the overall quality and progress of Debian's QA efforts.

If you examine the graphs for "Version Control System" and "VCS Hosting" with the goodwill mentioned above, you might notice a positive trend since mid-last year. The "Package Smells" category has also seen reductions in several areas: "no git", "no DEP5 copyright", "compat <9", and "not salsa". I'd also like to acknowledge the NMUers who have been working hard to address the "format != 3.0" issue. Thanks to all their efforts, this specific issue never surfaced in the Bug of the Day effort, but their contributions deserve recognition here.

The experience I gathered in this project taught me a lot and inspired some follow-up ideas that we should discuss at a sprint at DebCamp this year.

Finally, if any newcomer finds this information interesting, I'd be happy to slow down and patiently explain individual steps as needed. All it takes is asking questions on the Matrix channel to turn this into a "teaching by example" session.

By the way, for newcomers who are interested: I used quite a few abbreviations, all of which are explained in the Debian Glossary.

Sneak Peek at Upcoming Conferences

I will join two conferences in March; feel free to talk to me if you spot me there.

  1. FOSSASIA Summit 2025 (March 13-15, Bangkok, Thailand) Schedule: https://eventyay.com/e/4c0e0c27/schedule

  2. Chemnitzer Linux-Tage (March 22-23, Chemnitz, Germany) Schedule: https://chemnitzer.linux-tage.de/2025/de/programm/vortraege

Both events will have a Debian booth; come say hi!

Kind regards, Andreas.

03 March, 2025 11:00PM by Andreas Tille

March 02, 2025

hackergotchi for Jonathan McDowell

Jonathan McDowell

RIP: Steve Langasek

[I’d like to stop writing posts like this. I’ve been trying to work out what to say now for nearly 2 months (writing the mail to -private to tell the Debian project about his death is one of the hardest things I’ve had to write, and I bottled out and wrote something that was mostly just factual, because it wasn’t the place), and I’ve decided I just have to accept this won’t be the post I want it to be, but posted is better than languishing in drafts.]

Last weekend I was in Portland, for the Celebration of Life of my friend Steve, who sadly passed away at the start of the year. It wasn’t entirely unexpected, but that doesn’t make it any easier.

I’ve struggled to work out what to say about Steve. I’ve seen many touching comments from others in Debian about their work with him, but what that’s mostly brought home to me is that while I met Steve through Debian, he was first and foremost my friend rather than someone I worked with in Debian. And so everything I have to say is more about that friendship (and thus feels a bit self-centred).

My first memory of Steve is getting lost with him in Porto Alegre, Brazil, during DebConf4. We’d decided to walk to a local mall to meet up with some other folk (I can’t recall how they were getting there, but it wasn’t walking), ended up deep in conversation (ISTR it was about shared library transitions), and then it took a bit longer than we expected. I don’t know how that managed to cement a friendship (neither of us saw it as the near-death experience others feared we’d had), but it did.

Unlike others I never texted Steve much; we’d occasionally chat on IRC, but nothing major. That didn’t seem to matter when we actually saw each other in person though, we just picked up like we’d seen each other the previous week. DebConf became a recurring theme of when we’d see each other. Even outside DebConf we went places together. The first time I went somewhere in the US that wasn’t the Bay Area, it was to Portland to see Steve. He, and his family, came to visit me in Belfast a couple of times, and I did a road trip from Dublin to Cork with him. He took me to a volcano.

Steve saw injustice in the world and actually tried to do something about it. I still have a copy of the US constitution sitting on my desk that he gave me. He made me want to be a better person.

The world is a worse place without him in it, and while I am better for having known him, I am sadder for the fact he’s gone.

02 March, 2025 04:56PM

hackergotchi for Colin Watson

Colin Watson

Free software activity in February 2025

Most of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay.

OpenSSH

OpenSSH upstream released 9.9p2 with fixes for CVE-2025-26465 and CVE-2025-26466. I got a heads-up on this in advance from the Debian security team, and prepared updates for all of testing/unstable, bookworm (Debian 12), bullseye (Debian 11), buster (Debian 10, LTS), and stretch (Debian 9, ELTS). jessie (Debian 8) is also still in ELTS for a few more months, but wasn’t affected by either vulnerability.

Although I’m not particularly active in the Perl team, I fixed a libnet-ssleay-perl build failure because it was blocking openssl from migrating to testing, which in turn was blocking the above openssh fixes.

I also sent a minor sshd -T fix upstream, simplified a number of autopkgtests using the newish Restrictions: needs-sudo facility, and prepared for removing the obsolete slogin symlink.

PuTTY

I upgraded to the new upstream version 0.83.

GCC 15 build failures

I fixed build failures with GCC 15 in a few packages:

Python team

A lot of my Python team work is driven by its maintainer dashboard. Now that we’ve finished the transition to Python 3.13 as the default version, and inspired by a recent debian-devel thread started by Santiago, I thought it might be worth spending a bit of time on the “uscan error” section. uscan is typically scraping upstream web sites to figure out whether new versions are available, and so it’s easy for its configuration to become outdated or broken. Most of this work is pretty boring, but it can often reveal situations where we didn’t even realize that a Debian package was out of date. I fixed these packages:

  • cssutils (this in particular was very out of date due to a new and active upstream maintainer since 2021)
  • django-assets
  • django-celery-email
  • django-sass
  • django-yarnpkg
  • json-tricks
  • mercurial-extension-utils
  • pydbus
  • pydispatcher
  • pylint-celery
  • pyspread
  • pytest-pretty
  • python-apptools
  • python-django-libsass (contributed a packaging fix upstream in passing)
  • python-django-postgres-extra
  • python-django-waffle
  • python-ephemeral-port-reserve
  • python-ifaddr
  • python-log-symbols
  • python-msrest
  • python-msrestazure
  • python-netdisco
  • python-pathtools
  • python-user-agents
  • sinntp
  • wchartype

I upgraded these packages to new upstream versions:

  • cssutils (contributed a packaging tweak upstream)
  • django-iconify
  • django-sass
  • domdf-python-tools
  • extra-data (fixing a numpy 2.0 failure)
  • flufl.i18n
  • json-tricks
  • jsonpickle
  • mercurial-extension-utils
  • mod-wsgi
  • nbconvert
  • orderly-set
  • pydispatcher (contributed a Python 3.12 fix upstream)
  • pylint
  • pytest-rerunfailures
  • python-asyncssh
  • python-box (contributed a packaging fix upstream)
  • python-charset-normalizer
  • python-django-constance
  • python-django-guid
  • python-django-pgtrigger
  • python-django-waffle
  • python-djangorestframework-simplejwt
  • python-formencode
  • python-holidays (contributed a test fix upstream)
  • python-legacy-cgi
  • python-marshmallow-polyfield (fixing a test failure)
  • python-model-bakery
  • python-mrcz (fixing a numpy 2.0 failure)
  • python-netdisco
  • python-npe2
  • python-persistent
  • python-pkginfo (fixing a test failure)
  • python-proto-plus
  • python-requests-ntlm
  • python-roman
  • python-semantic-release
  • python-setproctitle
  • python-stdlib-list
  • python-trustme
  • python-typeguard (fixing a test failure)
  • python-tzlocal
  • pyzmq
  • setuptools-scm
  • sqlfluff
  • stravalib
  • tomopy
  • trove-classifiers
  • xhtml2pdf (fixing CVE-2024-25885)
  • xonsh
  • zodbpickle
  • zope.deprecation
  • zope.testrunner

In bookworm-backports, I updated python-django to 3:4.2.18-1 (issuing BSA-121) and added new backports of python-django-dynamic-fixture and python-django-pgtrigger, all of which are dependencies of debusine.

I went through all the build failures related to python-click 8.2.0 (which was confusingly tagged but not fully released upstream) and posted an analysis.

I fixed or helped to fix various other build/test failures:

I dropped support for the old setup.py ftest command from zope.testrunner upstream.

I fixed various odds and ends of bugs:

Installer team

Following up on last month, I merged and uploaded Helmut’s /usr-move fix.

02 March, 2025 01:49PM by Colin Watson

March 01, 2025

hackergotchi for Junichi Uekawa

Junichi Uekawa

Network is unreliable.

Network is unreliable. It seems like my router is trying to reconnect every 20 seconds after something triggers it.

01 March, 2025 10:01PM by Junichi Uekawa

hackergotchi for Guido Günther

Guido Günther

Free Software Activities February 2025

Another short status update of what happened on my side last month. One larger block was the Phosh 0.45 release; reviews also took a considerable amount of time. On the fun side, debugging bananui and coming up with a fix in phoc, as well as setting up a small GSM network using osmocom to test more Cell Broadcast thingies, were likely the most fun parts.

phosh

  • Release 0.45~beta1, 0.45~rc1, 0.45.0
  • Don't hide player when track is stopped (MR) - helps with e.g. Shortwave
  • Fetch cover art via http (MR)
  • Update CI images (MR)
  • Robustify symbol file generation (MR)
  • Handle cutouts in the indicators area (MR)
  • Reduce flicker when opening overview (MR)
  • Select less noisy default background (MR)

phoc

  • Release 0.45~beta1, 0.45~rc1, 0.45.0
  • Add support for ext-foreign-toplevel-v1 (MR)
  • Keep wlroots-0.19.x in shape and add support for ext-image-copy-capture-v1 (MR)
  • Fix geometry with scale when rendering to a buffer (MR)
  • Allow to tweak log domains at runtime (MR)
  • Print more useful information on startup (MR)
  • Provide PID of toplevel for phosh (MR)
  • Improve detection for hardware keyboards (MR) (mostly to help bananui)
  • Make tests a bit more flexible (MR)
  • Use wlr_damage_ring_rotate_buffer (MR). Another prep for 0.19.x.
  • Support wp-alpha-modifier-v1 protocol (MR)

phosh-osk-stub

phosh-tour

phosh-mobile-settings

pfs

  • Add common checks and check meson files (MR)

libphosh-rs

meta-phosh

  • Add common dot files and job to check meson formatting (MR)
  • Add l10n modules to string freeze announcement (based on suggestion by Alexandre Franke) (MR)
  • Bring over mk-gitlab-rel and improve it for alpha, beta, RCs (MR)

libcmatrix

Debian

  • Upload phoc 0.45~beta1, 0.45~rc1, 0.45.0
  • Upload phosh 0.45~beta1, 0.45~rc1, 0.45.0
  • Upload feedbackd 0.7.0
  • Upload xdg-desktop-portal-phosh 0.45.0
  • Upload phosh-tour 0.45~rc1, 0.45.0
  • Upload phosh-osk-stub 0.45~rc1, 0.45.0
  • Upload phosh-mobile-settings 0.45~rc1, 0.45.0
  • phosh: Fix dependencies of library dev package (MR) (and add a test)
  • Update libphosh-rs to 0.0.6 (MR)
  • Update iio-sensor-proxy to 3.6 (MR)
  • Backport qbootctl RDONLY patch (MR) to make generating the boot image more robust
  • libssc: Update to 0.2.1 (MR)
  • dom-tools: Write errors to stderr (MR)
  • dom-tools: Use underscored version to drop the branch ~ (MR)
  • libmbim: Upload 1.31.6 to experimental (MR)
  • ModemManager: Upload 1.23.12 to experimental (MR)

gmobile

  • data: Add display-panel for Furilabs FLX1 (MR)

feedbackd

grim

  • Allow to force screen capture protocol (MR)

Wayland protocols

  • Address multiple rounds of review comments in the xdg-occlusion (now xdg-cutouts) protocol (MR)

g4music

  • Set prefs parent (MR)

wlroots

  • Backport touch up fix to 0.18 (MR)

qbootctl

  • Don't recreate all partitions on read operations (MR)

bananui-shell

  • Check for keyboard caps before using them (Patch, issue)

libssc

  • Allow for python3 as interpreter as well (MR)
  • Don't leak unprefixed symbols into ABI (MR)
  • Improve info on test failures (MR)
  • Support multiarch when loading libqrtr (MR)

ModemManager

  • Cell Broadcast: Allow to set channel list via API (MR)

Waycheck

  • Add Phosh's protocols (MR)

Bug reports

  • Support haptic feedback on Linux in Firefox (Bug)
  • Get pmOS to boot again on Nokia 2780 (Bug)

Reviews

This is not code by me but reviews of other people's code. The list is slightly incomplete. Thanks for the contributions!

  • Debian: qcom-phone-utils rework (MR)
  • Simplify ui files (MR) - partially merged
  • calls: Implement ussd interface for ofono (MR)
  • chatty: Build docs using gi-docgen (MR)
  • chatty: Search related improvements (MR)
  • chatty: Fix crash on stuck SMS removal (MR)
  • feedbackd: stop flash when "prefer flash" is disabled (MR) - merged
  • gmobile: Support for nothingphone notch (MR)
  • iio-sensor-proxy: polkit for compass (MR) - merged
  • libcmatrix: Improved error code (MR) - merged
  • libcmatrix: Load room members is current (MR) - merged
  • libcmatrix: Start 0.0.4 cycle (MR) - merged
  • libhosh-rs: Update to 0.45~rc1 (MR) - merged
  • libphosh-rs: Update to reduced API surface (MR) - merged
  • phoc: Use color-rect for shields: (MR) - merged
  • phoc: unresponsive toplevel state (MR)
  • phoc: view: Don't multiply by scale in get_geometry_default (MR)
  • phoc: render: Fix subsurface scaling when rendering to buffer (MR)
  • phoc: render: Avoid rendering textures with alpha set to zero (MR)
  • phoc: Render a spinner on output shield (MR)
  • phosh: Manage libpohsh API version separately (MR) - merged
  • phosh: Prepare container APIs for GTK4 (MR)
  • phosh: Reduce API surface further (MR) - merged
  • phosh: Simplify UI files for GTK4 migration (MR) - merged
  • phosh: Simplify gvc-channel bar (MR) - merged
  • phosh: Simplify parent lookup (MR) - merged
  • phosh: Split out private header for LF (MR) - merged
  • phosh: Use symbols file for libphosh (MR) - merged
  • phosh: stylesheet: Improve legibility of app grid and top bar (MR)
  • several mobile-broadband-provider-info updates under (MR) - mostly merged

Help Development

If you want to support my work see donations.

Comments?

Join the Fediverse thread

01 March, 2025 07:53AM

February 28, 2025

hackergotchi for Jonathan Dowland

Jonathan Dowland

printables.com feed

I wanted to follow new content posted to Printables.com with a feed reader, but Printables.com doesn't provide one. Neither do the other obvious 3d model catalogues. So, I started building one.

I have something that spits out an Atom feed and a couple of beta testers gave me some valuable feedback. I had planned to make it public, with the ultimate goal being to convince Printables.com to implement feeds themselves.

Meanwhile, I stumbled across someone else who has done basically the same thing. Here are 3rd party feeds for

The format of their feeds is JSON Feed, which is new to me. FreshRSS and NetNewsWire seem happy with it. (I went with Atom.) I may still release my take, if I find time to make one improvement that my beta-testers suggested.

28 February, 2025 12:26PM

hackergotchi for Joey Hess

Joey Hess

WASM Wayland Web (WWW)

So there are only 2 web browser engines, and it seems likely there will soon only be 1, and making a whole new web browser from the ground up is effectively impossible because the browser vendors have weaponized web standards complexity against any newcomers. Maybe eventually someone will succeed and there will be 2 again. Best case. What a situation.

So throw out all the web standards. Make a browser that just runs WASM blobs, and gives them a surface to use, sorta like Wayland does. It has tabs, and a throbber, and urls, but no HTML, no javascript, no CSS. Just HTTP of WASM blobs.

This is where the web browser is going eventually anyway, except in the current line of evolution it will be WASM with all the web standards complexity baked in and reinforcing the current situation.

Would this be a mass of proprietary software? Have you looked at any corporate website's "source" lately? But what's important is that this would make it easy enough to build new browsers that they would stop being a point of control.

Want a browser that natively supports RSS? Poll the feeds, make a UI, download the WASM enclosures to view the posts. Want a browser that supports IPFS or gopher? Fork any browser and add it, the maintenance load will be minimal. Want to provide access to GPIO pins or something? Add an extension that can be accessed via the WASI component model. This would allow for so many things like that which won't and can't happen with the current market duopoly browser situation.

And as for your WASM web pages, well you can still use HTML if you like. Use the WASI component model to pull in an HTML engine. It doesn't need to support everything, just the parts of web standards that you want to use. Or you can do something entirely different in your WASM that is not HTML based at all but a better paradigm (oh hi Spritely or display postscript or gemini capsules or whatever).

Dual innovation sources or duopoly? I know which I'd prefer. This is not my project to build though.

28 February, 2025 06:41AM

Antoine Beaupré

Qalculate hacks

This is going to be a controversial statement because some people are absolute nerds about this, but, I need to say it.

Qalculate is the best calculator that has ever been made.

I am not going to try to convince you of this, I just wanted to put out my bias out there before writing down those notes. I am a total fan.

This page will collect my notes of cool hacks I do with Qalculate. Most examples are copy-pasted from the command-line interface (qalc(1)), but I typically use the graphical interface as it's slightly better at displaying complex formulas. Discoverability is obviously also better for the cornucopia of features this fantastic application ships.

Qalc commandline primer

On Debian, Qalculate's CLI interface can be installed with:

apt install qalc

Then you start it with the qalc command, and end up on a prompt:

anarcat@angela:~$ qalc
> 

Then it's a normal calculator:

anarcat@angela:~$ qalc
> 1+1

  1 + 1 = 2

> 1/7

  1 / 7 ≈ 0.1429

> pi

  pi ≈ 3.142

> 

There's a bunch of variables to control display, approximation, and so on:

> set precision 6
> 1/7

  1 / 7 ≈ 0.142857
> set precision 20
> pi

  pi ≈ 3.1415926535897932385

When I need more, I typically browse around the menus. One big issue I have with Qalculate is that there are a lot of menus and features. I had to fiddle quite a bit to figure out that set precision command above. I might add more examples here as I find them.

Bandwidth estimates

I often use the data units to estimate bandwidths. For example, here's what 1 megabit per second is over a month ("about 300 GiB"):

> 1 megabit/s * 30 day to gibibyte 

  (1 megabit/second) × (30 days) ≈ 301.7 GiB

Or, "how long will it take to download X", in this case, 1GiB over a 100 mbps link:

> 1GiB/(100 megabit/s)

  (1 gibibyte) / (100 megabits/second) ≈ 1 min + 25.90 s
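
These conversions are easy to sanity-check outside Qalc too. Here is a minimal Python sketch of the same two estimates, using the constants from above:

```python
# 1 megabit/s sustained for 30 days, expressed in GiB
GIB = 2 ** 30  # bytes per gibibyte
bits = 1_000_000 * 30 * 86_400
print(bits / 8 / GIB)  # ≈ 301.7 GiB

# time to download 1 GiB over a 100 mbps link
seconds = (1 * GIB * 8) / (100 * 1_000_000)
print(seconds)  # ≈ 85.9 s, i.e. 1 min + 25.9 s
```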

Password entropy

To calculate how much entropy (in bits) a given password structure has, you count the number of possibilities for each entry (say, [a-z] is 26 possibilities, "one word in an 8k dictionary" is 8000), take the base-2 logarithm, and multiply by the number of entries.

For example, an alphabetic 14-character password is:

> log2(26*2)*14

  log₂(26 × 2) × 14 ≈ 79.81

... 80 bits of entropy. To get the equivalent in a Diceware password with an 8000-word dictionary, you would need:

> log2(8k)*x = 80

  (log₂(8 × 1000) × x) = 80 ≈

  x ≈ 6.170

... about 6 words, which gives you:

> log2(8k)*6

  log₂(8 × 1000) × 6 ≈ 77.79

78 bits of entropy.
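
The same arithmetic is one-liner territory in most languages; here it is in Python, with the figures from above:

```python
from math import log2

# entropy of a 14-character mixed-case alphabetic password
print(log2(26 * 2) * 14)  # ≈ 79.81 bits

# Diceware words needed (8000-word dictionary) to reach 80 bits
print(80 / log2(8000))    # ≈ 6.17, so 6 words

# entropy of those 6 words
print(log2(8000) * 6)     # ≈ 77.79 bits
```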

Exchange rates

You can convert between currencies!

> 1 EUR to USD

  1 EUR ≈ 1.038 USD

Even fake ones!

> 1 BTC to USD

  1 BTC ≈ 96712 USD

This relies on a database pulled from the internet (typically the European Central Bank rates, see the source). It will prompt you if it's too old:

It has been 256 days since the exchange rates last were updated.
Do you wish to update the exchange rates now? y

As a reader pointed out, you can set the refresh rate for currencies, as some countries will require way more frequent exchange rates.

The graphical version has a little graphical indicator that, when you mouse over, tells you where the rate comes from.

Other conversions

Here are other neat conversions extracted from my history:

> teaspoon to ml

  teaspoon = 5 mL

> tablespoon to ml

  tablespoon = 15 mL

> 1 cup to ml 

  1 cup ≈ 236.6 mL

> 6 L/100km to mpg

  (6 liters) / (100 kilometers) ≈ 39.20 mpg

> 100 kph to mph

  100 kph ≈ 62.14 mph

> (108km - 72km) / 110km/h

  ((108 kilometers) − (72 kilometers)) / (110 kilometers/hour) ≈
  19 min + 38.18 s

Completion time estimates

This is a more involved example I often do.

Background

Say you have started a long-running copy job and you don't have the luxury of having a pipe you can insert pv(1) into to get a nice progress bar. For example, rsync or cp -R can have that problem (but not tar!).

(Yes, you can use --info=progress2 in rsync, but that estimate is incremental and therefore inaccurate unless you disable the incremental mode with --no-inc-recursive, but then you pay a huge up-front wait cost while the entire directory gets crawled.)

Extracting a process start time

First step is to gather data. Find the process start time. If you were unfortunate enough to forget to run date --iso-8601=seconds before starting, you can get a similar timestamp with stat(1) on the process tree in /proc with:

$ stat /proc/11232
  File: /proc/11232
  Size: 0               Blocks: 0          IO Block: 1024   directory
Device: 0,21    Inode: 57021       Links: 9
Access: (0555/dr-xr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2025-02-07 15:50:25.287220819 -0500
Modify: 2025-02-07 15:50:25.287220819 -0500
Change: 2025-02-07 15:50:25.287220819 -0500
 Birth: -

So our start time is 2025-02-07 15:50:25; we shave off the nanoseconds, as they're below our precision noise floor.

If you're not dealing with an actual UNIX process, you need to figure out a start time: this can be a SQL query, a network request, whatever, exercise for the reader.

Saving a variable

This is optional, but for the sake of demonstration, let's save this as a variable:

> start="2025-02-07 15:50:25"

  save("2025-02-07T15:50:25"; start; Temporary; ; 1) =
  "2025-02-07T15:50:25"

Estimating data size

Next, estimate your data size. That will vary wildly with the job you're running: number of files, documents being processed, rows to be destroyed in a database, whatever. In this case, rsync tells me how many bytes it has transferred so far:

# rsync -ASHaXx --info=progress2 /srv/ /srv-zfs/
2.968.252.503.968  94%    7,63MB/s    6:04:58  xfr#464440, ir-chk=1000/982266) 

Strip off the weird dots in there, because they will confuse Qalculate, which will count this as:

  2.968252503968 bytes ≈ 2.968 B

Or, essentially, three bytes. We actually transferred almost 3TB here:

  2968252503968 bytes ≈ 2.968 TB

So let's use that. If you had the misfortune of making rsync silent, but were lucky enough to transfer entire partitions, you can use df (without -h! we want to be more precise here), in my case:

Filesystem              1K-blocks       Used  Available Use% Mounted on
/dev/mapper/vg_hdd-srv 7512681384 7258298036  179205040  98% /srv
tank/srv               7667173248 2870444032 4796729216  38% /srv-zfs

(Otherwise, of course, you use du -sh $DIRECTORY.)

Digression over bytes

Those are 1K blocks, which are actually (and rather unfortunately) KiB, or "kibibytes" (1024 bytes), not "kilobytes" (1000 bytes). Ugh.

> 2870444032 KiB

  2870444032 kibibytes ≈ 2.939 TB
> 2870444032 kB

  2870444032 kilobytes ≈ 2.870 TB

At this scale, those details matter quite a bit, we're talking about a 69GB (64GiB) difference here:

> 2870444032 KiB - 2870444032 kB

  (2870444032 kibibytes) − (2870444032 kilobytes) ≈ 68.89 GB
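
To double-check that gap outside Qalc, a quick Python equivalent:

```python
# difference between reading df's 1K-blocks as KiB vs kB
n = 2870444032  # blocks reported by df for /srv-zfs
diff_bytes = n * 1024 - n * 1000
print(diff_bytes / 1e9)  # ≈ 68.89 GB
```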

Anyways. Let's take 2968252503968 bytes as our current progress.

Our entire dataset is 7258298064 KiB, as seen above.

Solving a cross-multiplication

We have three out of four variables for our equation here, so we can already solve:

> (now-start)/x = (2996538438607 bytes)/(7258298064 KiB) to h

  ((actual − start) / x) = ((2996538438607 bytes) / (7258298064
  kibibytes))

  x ≈ 59.24 h

The entire transfer will take about 60 hours to complete! Note that's not the time left, that is the total time.

To break this down step by step, we could calculate how long it has taken so far:

> now-start

  now − start ≈ 23 h + 53 min + 6.762 s

> now-start to s

  now − start ≈ 85987 s

... and do the cross-multiplication manually, it's basically:

x/(now-start) = (total/current)

so:

x = (total/current) * (now-start)

or, in Qalc:

> ((7258298064  kibibytes) / ( 2996538438607 bytes) ) *  85987 s

  ((7258298064 kibibytes) / (2996538438607 bytes)) × (85987 secondes) ≈
  2 d + 11 h + 14 min + 38.81 s

It's interesting that it gives us different units here! Not sure why.
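
If you'd rather script the cross-multiplication than do it in Qalc, a rough Python sketch with the same figures as above:

```python
# estimate total job duration from progress so far
total_bytes = 7258298064 * 1024  # dataset size, KiB -> bytes
done_bytes = 2996538438607       # transferred so far
elapsed_s = 85987                # seconds since the job started

total_s = (total_bytes / done_bytes) * elapsed_s
print(total_s / 3600)  # ≈ 59.24 hours total
```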

Now and built-in variables

The now here is actually a built-in variable:

> now

  now ≈ "2025-02-08T22:25:25"

There is a bewildering list of such variables, for example:

> uptime

  uptime = 5 d + 6 h + 34 min + 12.11 s

> golden

  golden ≈ 1.618

> exact

  golden = (√(5) + 1) / 2

Computing dates

In any case, yay! We know the transfer is going to take roughly 60 hours total, and we've already spent around 24h of that, so, we have 36h left.

But I did that all in my head, we can ask more of Qalc yet!

Let's make another variable, for that total estimated time:

> total=(now-start)/x = (2996538438607 bytes)/(7258298064 KiB)

  save(((now − start) / x) = ((2996538438607 bytes) / (7258298064
  kibibytes)); total; Temporary; ; 1) ≈
  2 d + 11 h + 14 min + 38.22 s

And we can plug that into another formula with our start time to figure out when we'll be done!

> start+total

  start + total ≈ "2025-02-10T03:28:52"

> start+total-now

  start + total − now ≈ 1 d + 11 h + 34 min + 48.52 s

> start+total-now to h

  start + total − now ≈ 35 h + 34 min + 32.01 s

That transfer has ~1d left, or 35h34m32s, and should complete around 4 in the morning on February 10th.

But that's icing on top. I typically only do the cross-multiplication and calculate the remaining time in my head.

I mostly did the last bit to show Qalculate could compute dates and time differences, as long as you use ISO timestamps. Although it can also convert to and from UNIX timestamps, it cannot parse arbitrary date strings (yet?).
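
For completeness, the same date arithmetic works in Python's datetime module (timestamps copied from the session above; the ETA comes out slightly different from the Qalc run, because now had moved on between commands there):

```python
from datetime import datetime, timedelta

start = datetime.fromisoformat("2025-02-07T15:50:25")
total = timedelta(seconds=213_279)  # ≈ 2 d + 11 h + 14 min
eta = start + total
print(eta.isoformat())  # 2025-02-10T03:05:04
```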

Other functionality

Qalculate can:

  • Plot graphs;
  • Use RPN input;
  • Do all sorts of algebraic, calculus, matrix, statistics, trigonometry functions (and more!);
  • ... and so much more!

I have a hard time finding things it cannot do. When I get there, I typically resort to writing Python code or using a spreadsheet; others will turn to more complete engines like Maple, Mathematica or R.

But for daily use, Qalculate is just fantastic.

And it's pink! Use it!

Further reading and installation

This is just scratching the surface, the fine manual has more information, including more examples. There is also of course a qalc(1) manual page which also ships an excellent EXAMPLES section.

Qalculate is packaged for over 30 Linux distributions, but also ships packages for Windows and macOS. There are third-party derivatives as well, including a web version and an Android app.

Updates

Colin Watson liked this blog post and was inspired to write his own hacks, similar to what's here, but with extras, check it out!

28 February, 2025 05:31AM

February 26, 2025

Swiss JuristGate

Urban Angehrn (FINMA director) announced his resignation at the same moment the Parreaux, Thiébaud & Partners decision was published

FINMA closed the business of Parreaux, Thiébaud & Partners / Justicia SA in April 2023 but they did not publish their judgment at the same time.

On 6 September 2023, they announced the resignation of Urban Angehrn, who had been director of FINMA since 1 November 2021.

Challenges that were successfully overcome under his leadership include, in particular, [ ... snip ... ], the supervision of supplementary health insurance geared towards client protection and the conclusion of complex enforcement proceedings.

On 19 September 2023, FINMA published the judgment against Parreaux, Thiébaud & Partners / Justicia SA. They suppressed the names and dates in the document.

Mr Angehrn departed on 30 September.

They wrote that Mr Angehrn had closed the business of Parreaux, Thiébaud & Partners / Justicia SA "successfully". But the clients received nothing.

He had shut down just one single scam in almost two years as director of FINMA.

From the financial accounts for 2023, we found the director's salary (CHF 602,000) and the termination payment (CHF 581,000).

”Being able to contribute to the sustainable improvement of the quality of the Swiss financial centre as CEO of FINMA was a unique challenge for me, and one that I tackled with all my might. However, the high and permanent stress level had health consequences. I have considered my decision carefully and have now decided to step down,” says Urban Angehrn.

He resigned due to stress and he received CHF 581,000 as a leaving gift. The clients received nothing.

26 February, 2025 06:30PM