Debian is a trademark of Software in the Public Interest, Inc. This site is operated independently in the spirit of point three of the Debian Social Contract, which tells us: “We will not hide problems.”


August 09, 2025

Thorsten Alteholz

My Debian Activities in July 2025

Debian LTS

This was my one-hundred-and-thirty-third month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:

  • [DLA 4255-1] audiofile security update to fix two CVEs related to an integer overflow and a memory leak.
  • [DLA 4256-1] libetpan security update to fix one CVE related to a null pointer dereference.
  • [DLA 4257-1] libcaca security update to fix two CVEs related to heap buffer overflows.
  • [DLA 4258-1] libfastjson security update to fix one CVE related to an out of bounds write.
  • [#1106867] kmail-account-wizard was marked as accepted

I also continued my work on suricata, which turned out to be more challenging than expected. This month I also did a week of FD duties and attended the monthly LTS/ELTS meeting.

Debian ELTS

This month was the eighty-fourth ELTS month. Unfortunately my allocated hours were far less than expected, so I couldn’t do as much work as planned.

Most of the time I spent with FD tasks and I also attended the monthly LTS/ELTS meeting. I further listened to the debusine talks during debconf. On the one hand I would like to use debusine to prepare uploads for embargoed ELTS issues, on the other hand I would like to use debusine to run the version of lintian that is used in the different releases. At the moment some manual steps are involved here and I tried to automate things. Of course like for LTS, I also continued my work on suricata.

Debian Printing

This month I uploaded a new upstream version of:

Guess what, I also started to work on a new version of hplip and intend to upload it in August.

This work is generously funded by Freexian!

Debian Astro

This month I uploaded new upstream versions of:

  • supernovas (sponsored upload to experimental)
  • calceph (sponsored upload to experimental)

I also uploaded the new package boinor. This is a fork of poliastro, which was retired by upstream and removed from Debian some months ago. I adopted it and rebranded it at the request of upstream. boinor is an abbreviation of BOdies IN ORbit, and I hope this software is still useful.

Debian Mobcom

Unfortunately I didn’t find any time to work on this topic.

misc

In my fight against outdated RFPs, I closed 31 of them in July. Their number is down to 3447 (how dare you open new RFPs? :-)). Don’t be afraid of them; they don’t bite and are happy to be released to a closed state.

FTP master

The peace will soon come to an end, so this month I accepted 87 and rejected 2 packages. The overall number of packages that got accepted was 100.

09 August, 2025 11:35AM by alteholz

August 08, 2025

David Bremner

Using git-annex for email and notmuch metadata

Introducing git-remote-notmuch

Based on an idea and Ruby implementation by Felipe Contreras, I have been developing a git remote helper for notmuch. I will soon post an updated version of the patchset to the notmuch mailing list (I wanted to refer to this post in my email). In this blog post I'll outline my experiments with using that tool, together with git-annex, to store (and sync) a moderately sized email store along with its notmuch metadata.

WARNING

The rest of this post describes some relatively complex operations using (at best) alpha level software (namely git-remote-notmuch). git-annex is good at not losing your files, but git-remote-notmuch can (and did several times during debugging) wipe out your notmuch database. If you have a backup (e.g. made with notmuch-dump), this is much less annoying, and in particular you can decide to walk away from this whole experiment and restore your database.

Why git-annex?

I currently have about 31GiB of email, spread across more than 830,000 files. I want to maintain the ability to search and read my email offline, so I need to maintain a copy on several workstations and at least one server (which is backed up explicitly). I am somewhat committed to maintaining synchronization of tags to git since that is how the notmuch bug tracker works. Committing the email files to git seems a bit wasteful: by design notmuch does not modify email files, and even with compression, the extra copy adds a fair amount of overhead (in my case, 17G of git objects, about 57% overhead). It is also notoriously difficult to completely delete files from a git repository. git-annex offers potential mitigation for these two issues, at the cost of a somewhat more complex mental model. The main idea is that instead of committing every version of a file to the git repository, git-annex tracks the filename and metadata, with the file content being stored in a key-value store outside git. Conceptually this is similar to git-lfs. For our current purposes, the important point is that instead of a second (compressed) copy of the file, we store one copy, along with a symlink and a couple of directory entries.
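
To make the "one copy plus a symlink" point concrete, here is a minimal sketch of annexing a single message by hand (the filename is invented for the example, and the exact object path under .git/annex/objects depends on the key backend in use):

$ git annex add cur/1690000000.M1P1.example    # content moves under .git/annex/objects/
$ readlink cur/1690000000.M1P1.example         # the working tree now holds only a symlink
$ git commit -m 'add message'                  # git records the symlink, not the file content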

What to annex

For sufficiently small files, the overhead of a symlink and a couple of directory entries is greater than the cost of a compressed second copy. When this happens depends on several variables, and will probably depend on the file content in a particular collection of email. I did a few trials of different settings for annex.largefiles to come to a threshold of largerthan=32k 1. For the curious, my experimental results are below. One potentially surprising aspect is that annexing even a small fraction of the (largest) files yields a big drop in storage overhead.

Threshold   Fraction annexed   Overhead
0           100%               30%
8k          29%                13%
16k         12%                9.4%
32k         7%                 8.9%
48k         6%                 8.9%
100k        3%                 9.1%
∞ (git)     0%                 57%

In the end I chose to err on the side of annexing more files (for the flexibility of deletion) rather than potentially faster operations with fewer annexed files at the same level of overhead.

Summarizing the configuration settings for git-annex (some of these are actually defaults, but not in my environment).

$ git config annex.largefiles largerthan=32k
$ git config annex.dotfiles true
$ git config annex.synccontent true

Delivering mail

To get new mail, I do something like

# compute a date based folder under $HOME/Maildir
$ dest=$(folder)
# deliver mail to ${dest} (somehow).
$ notmuch new
$ git -C $HOME/Maildir add ${dest}
$ git -C $HOME/Maildir diff-index --quiet HEAD || git -C $HOME/Maildir commit -m 'mail delivery'

The call to diff-index is just an optimization for the case when nothing was delivered. The default configuration of git-annex will automagically annex any files larger than my threshold. At this point the git-annex repo knows nothing about tags.

There is some git configuration that can speed up the "git add" above, namely

$ git config core.untrackedCache true
$ git config core.fsmonitor true

See git-status(1) under "UNTRACKED FILES AND PERFORMANCE"

Defining notmuch as a git remote

Assuming git-remote-notmuch is somewhere in your path, you can define a remote to connect to the default notmuch database.

$ git remote add database notmuch::
$ git fetch database
$ git merge --allow-unrelated-histories database

The --allow-unrelated-histories option should be needed only the first time.

The many small files used to represent the tags (one per message) use a noticeable amount of disk space (in my case, about the same amount of space as the xapian database).

Once you start merging from the database to the git repo, you will likely have some conflicts, and most conflict resolution tools leave junk lying around. I added the following .gitignore file to the top level of the repo

*.orig
*~

This prevents our cavalier use of git add from adding these files to our git history (and prevents pushing random junk to the notmuch database).

To push the tags from git to notmuch, you can run

$ git push database master

You might need to run notmuch new first, so that the database knows about all of the messages (currently git-remote-notmuch can't index files, only update metadata).

git annex sync should work with the new remote, but pushing back will be very slow 2. I disable automatic pushing as follows

$ git config remote.database.annex-push false

Unsticking the database remote

If you are debugging git-remote-notmuch, or just unlucky, you may end up in a situation where git thinks the database is ahead of your git remote. You can delete the database remote (and associated stuff) and re-create it. Although I cannot promise this will never cause problems (because, computers), it will not modify your local copy of the tags in the git repo, nor modify your notmuch database.

$ git remote rm database
$ git update-ref -d notmuch/master
$ rm -r .git/notmuch

Fine tuning notmuch config

  • In order to avoid dealing with file renames, I have

      notmuch config maildir.synchronize_flags false
    
  • I have added the following to new.ignore:

       .git;_notmuch_metadata;.gitignore
    

  1. I also had to set annex.dotfiles to true, as many of my maildirs follow the qmail style convention of starting with a .
  2. I'm not totally clear on why it is so slow, but certainly git-annex tries to push several more branches, and these are ignored by git-remote-annex.

08 August, 2025 05:08PM

August 06, 2025

Reproducible Builds

Reproducible Builds in July 2025

Welcome to the seventh report from the Reproducible Builds project in 2025. Our monthly reports outline what we’ve been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. If you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.

In this report:

  1. Reproducible Builds Summit 2025
  2. Reproducible Builds an official goal for SUSE Enterprise Linux
  3. Reproducible Builds at FOSSY 2025
  4. New OSS Rebuild project from Google
  5. New extension of Python setuptools to support reproducible builds
  6. diffoscope
  7. New library to patch system functions for reproducibility
  8. Independently Reproducible Git Bundles
  9. Website updates
  10. Distribution work
  11. Reproducibility testing framework
  12. Upstream patches

Reproducible Builds Summit 2025

We are extremely pleased to announce the upcoming Reproducible Builds Summit, set to take place from October 28th — 30th 2025 in Vienna, Austria!

We are thrilled to host the eighth edition of this exciting event, following the success of previous summits in various iconic locations around the world, including Venice, Marrakesh, Paris, Berlin, Hamburg and Athens. Our summits are a unique gathering that brings together attendees from diverse projects, united by a shared vision of advancing the Reproducible Builds effort.

During this enriching event, participants will have the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. Our aim is to create an inclusive space that fosters collaboration, innovation and problem-solving.

If you’re interested in joining us this year, please make sure to read the event page, which has more details about the event and location. Registration is open until 20th September 2025, and we are very much looking forward to seeing many readers of these reports there!


Reproducible Builds an official goal for SUSE Enterprise Linux

On our mailing list this month, Bernhard M. Wiedemann revealed the big news that reproducibility is now an official goal for SUSE Linux Enterprise Server (SLES) 16:

[Everything] changed earlier this year when reproducible-builds for SLES-16 became an official goal for the product. More people are talking about digital sovereignty and supply-chain security now. […] Today, only 9 of 3319 (source) packages have significant problems left (plus 7 with pending fixes), so 99.5% of packages have reproducible builds.


Reproducible Builds at FOSSY 2025

On Saturday 2nd August, Vagrant Cascadian and Chris Lamb presented at this year’s FOSSY 2025. Their talk, titled Never Mind the Checkboxes, Here’s Reproducible Builds!, was introduced as follows:

There are numerous policy compliance and regulatory processes being developed that target software development… but do they solve actual problems? Does it improve the quality of software? Do Software Bill of Materials (SBOMs) actually give you the information necessary to verify how a given software artifact was built? What is the goal of all these compliance checklists anyways… or more importantly, what should the goals be? If a software object is signed, who should be trusted to sign it, and can they be trusted … forever?

Hosted by the Software Freedom Conservancy and taking place in Portland, Oregon, USA, FOSSY aims to be a community-focused event: “Whether you are a long time contributing member of a free software project, a recent graduate of a coding bootcamp or university, or just have an interest in the possibilities that free and open source software bring, FOSSY will have something for you”. More information on the event is available on the FOSSY 2025 website, including the full programme schedule.

Vagrant and Chris also staffed a table, where they were available to answer any questions about Reproducible Builds and discuss collaborations with other projects.


New OSS Rebuild project from Google

The Google Open Source Security Team (GOSST) published an article this month announcing OSS Rebuild, “a new project to strengthen trust in open source package ecosystems by reproducing upstream artifacts.” As the post itself documents, the new project comprises four facets:

  • Automation to derive declarative build definitions for existing PyPI (Python), npm (JS/TS), and Crates.io (Rust) packages.
  • SLSA Provenance for thousands of packages across our supported ecosystems, meeting SLSA Build Level 3 requirements with no publisher intervention.
  • Build observability and verification tools that security teams can integrate into their existing vulnerability management workflows.
  • Infrastructure definitions to allow organizations to easily run their own instances of OSS Rebuild to rebuild, generate, sign, and distribute provenance.

Unlike most projects that aim for bit-for-bit reproducibility, OSS Rebuild aims for a kind of “semantic” reproducibility:

Through automation and heuristics, we determine a prospective build definition for a target package and rebuild it. We semantically compare the result with the existing upstream artifact, normalizing each one to remove instabilities that cause bit-for-bit comparisons to fail (e.g. archive compression).

The extensive post includes examples about how to access OSS Rebuild attestations using the Go-based command-line interface.


New extension of Python setuptools to support reproducible builds

Wim Jeantine-Glenn has written a PEP 517 Build backend in order to enable reproducible builds when building Python projects that use setuptools.

Called setuptools-reproducible, the project’s README file contains the following:

Setuptools can create reproducible wheel archives (.whl) by setting SOURCE_DATE_EPOCH at build time, but setting the env var is insufficient for creating reproducible sdists (.tar.gz). setuptools-reproducible [therefore] wraps the hooks build_sdist build_wheel with some modifications to make reproducible builds by default.
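
As a rough illustration of the SOURCE_DATE_EPOCH convention mentioned above (a sketch only: it assumes the build frontend, python3 -m build, is installed and that the project's pyproject.toml already selects setuptools-reproducible as its PEP 517 backend):

$ export SOURCE_DATE_EPOCH=$(git log -1 --pretty=%ct)   # pin timestamps to the last commit
$ python3 -m build --sdist --wheel
$ sha256sum dist/*    # re-running the build should reproduce identical checksums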


diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 301, 302 and 303 to Debian:

  • Improvements:

    • Use Difference.from_operation in an attempt to pipeline the output of the extract-vmlinux script, potentially avoiding holding it all in memory. []
    • Memoize a number of calls to --version, saving a very large number of external subprocess calls.
  • Bug fixes:

    • Don’t check for PyPDF version 3 specifically, check for versions greater than 3. []
    • Ensure that Java class files are named .class on the filesystem before passing them to javap(1). []
    • Mask stderr from extract-vmlinux script. [][]
    • Avoid spurious differences in h5dump output caused by exposure of absolute internal extraction paths. (#1108690)
  • Misc:

    • Use our_check_output in the ODT comparator. []
    • Update copyright years. []

In addition:

Lastly, Chris Lamb added a tmpfs to try.diffoscope.org so that diffoscope has a non-trivial temporary area to unpack archives, etc. []

Elsewhere in our tooling, reprotest is our tool for building the same source code twice in different environments and then checking the binaries produced by each build for any differences. This month, reprotest version 0.7.30 was uploaded to Debian unstable by Holger Levsen, chiefly including a change by Rebecca N. Palmer to not call sudo with the -h flag in order to fix Debian bug #1108550. []


New library to patch system functions for reproducibility

Nicolas Graves has written and published libfate, a simple collection of tiny libraries to patch system functions deterministically using LD_PRELOAD. According to the project’s README:

libfate provides deterministic replacements for common non-deterministic system functions that can break reproducible builds. Instead of relying on complex build systems or apps or extensive patching, libfate uses the LD_PRELOAD trick to intercept system calls and return fixed, predictable values.

Describing why he wrote it, Nicolas writes:

I originally used the OpenSUSE dettrace approach to make Emacs reproducible in Guix. But when Guix switched to GCC@14, dettrace stopped working as expected. dettrace is a complex piece of software, and my need was much less heavy: I don’t need to systematically patch all sources of nondeterminism, just the ones that make a process/binary unreproducible in a container/chroot.
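
For readers unfamiliar with the underlying trick, here is a minimal, self-contained sketch of LD_PRELOAD interposition in general; this is not libfate's code, and the file names are invented for the example:

$ cat > fixed_time.c <<'EOF'
#include <time.h>
/* Replace libc's time() so every preloaded program sees the epoch. */
time_t time(time_t *t) { if (t) *t = 0; return 0; }
EOF
$ gcc -shared -fPIC -o libfixedtime.so fixed_time.c
$ LD_PRELOAD=$PWD/libfixedtime.so perl -e 'print time(), "\n"'   # prints 0 instead of the real clock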


Independently Reproducible Git Bundles

Simon Josefsson has published another interesting article this month. Titled Independently Reproducible Git Bundles, the blog post describes why you might want a reproducible bundle, and the pitfalls that can arise when trying to create one:

One desirable property is that someone else should be able to reproduce the same git bundle, and not only that a single individual is able to reproduce things on one machine. It surprised me to see that when I ran the same set of commands on a different machine (started from a fresh git clone), I got a different checksum. The different checksums occurred even when nothing had been committed on the server side between the two runs.


Website updates

Once again, there were a number of improvements made to our website this month including:


Distribution work

In Debian this month:

Debian contributors have made significant progress toward ensuring package builds produce byte-for-byte reproducible results. You can check the status for packages installed on your system using the new package debian-repro-status, or visit reproduce.debian.net for Debian’s overall statistics for trixie and later. You can contribute to these efforts by joining #debian-reproducible on IRC to discuss fixes, or verify the statistics by installing the new rebuilderd package and setting up your own instance.
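
For example (a sketch; it assumes the binary shipped by the package is also named debian-repro-status):

$ sudo apt install debian-repro-status
$ debian-repro-status    # checks the reproducibility status of the packages installed on this system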


The IzzyOnDroid Android APK repository made further progress in July, crossing the 50% reproducibility threshold — congratulations. Furthermore, a new version of the Neo Store was released, which exposes the reproducible status directly next to the version of each app.


In GNU Guix, a series of patches intended to fix the reproducibility for the Mono programming language was merged, fixing reproducibility in Mono versions 1.9 [], 2.4 [] and 2.6 [].


Lastly, in addition to the news that SUSE Linux Enterprise now has an official goal of reproducibility (https://lists.reproducible-builds.org/pipermail/rb-general/2025-July/003846.html), Bernhard M. Wiedemann posted another monthly update for their work there.


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In July, a number of changes were made by Holger Levsen, including:

  • Switch the URL for the Tails package set. []
  • Make the dsa-check-packages output more useful. []
  • Set up the ppc64el architecture again, as it has returned — this time with a 2.7 GiB database instead of 72 GiB. []

In addition, Jochen Sprickerhof improved the reproducibility statistics generation:

  • Enable caching of statistics. [][][]
  • Add some common non-reproducible patterns. []
  • Change output to directory. []
  • Add a page sorted by diffoscope size. [][]
  • Switch to Python’s argparse module and separate output(). []

Holger also submitted a number of Debian bugs against rebuilderd and rebuilderd-worker:

  • Config files and scripts for a simple one machine setup. [][]
  • Create a rebuilderd user. []
  • Create rebuilderd-worker user with sbuild. []

Lastly, Mattia Rizzolo added a scheduled job to renew some SSL certificates [] and Vagrant Cascadian performed some node maintenance [][].


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

There were a number of other patches from openSUSE developers:



Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. You can also get in touch with us via:

06 August, 2025 08:56PM

Colin Watson

Free software activity in July 2025

About 90% of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay or GitHub Sponsors.

DebConf

I attended DebConf for the first time in 11 years (my last one was DebConf 14 in Portland). It was great! For once I had a conference where I had a fairly light load of things I absolutely had to do, so I was able to spend time catching up with old friends, making some new friends, and doing some volunteering - a bit of Front Desk, and quite a lot of video team work where I got to play with sound desks and such. Apparently one of the BoFs (“birds of a feather”, i.e. relatively open discussion sessions) where I was talkmeister managed to break the automatic video cutting system by starting and ending precisely on time, to the second, which I’m told has never happened before. I’ll take that.

I gave a talk about Debusine, along with helping Enrico run a Debusine BoF. We still need to process some of the feedback from this, but are generally pretty thrilled about the reception. My personal highlight was getting a shout-out in a talk from CERN (in the slide starting at 32:55).

Other highlights for me included a Python team BoF, Ian’s tag2upload talk and some very useful follow-up discussions, a session on archive-wide testing, a somewhat brain-melting whiteboard session about the “multiarch interpreter problem”, several useful discussions about salsa.debian.org, Matthew’s talk on how Wikimedia automates their Debian package builds, and many others. I hope I can start attending regularly again!

OpenSSH

Towards the end of a release cycle, people tend to do more upgrade testing, and this sometimes results in interesting problems. Manfred Stock reported “No new SSH connections possible during large part of upgrade to Debian Trixie”, and after a little testing in a container I confirmed that this was a reproducible problem that would have affected many people upgrading from Debian 12 (bookworm), with potentially severe consequences for people upgrading remote systems. In fact, there were two independent problems that each led to much the same symptom:

  • OpenSSH 9.8 split the monolithic sshd listener process into two pieces: a minimal network listener (still called sshd), and an sshd-session process dealing with each individual session. (OpenSSH 10.0 further split sshd-session, adding an sshd-auth process that deals with the user authentication phase of the protocol.) This hardens the OpenSSH server by using different address spaces for privileged and unprivileged code.

    Before this change, when sshd received an incoming connection, it forked and re-executed itself with some special parameters to deal with it. After this change, it forks and executes sshd-session instead, and sshd no longer accepts the parameters it used to accept for this.

    Debian package upgrades happen in two phases: first we unpack the new files onto disk, and then we run some package-specific configuration steps which usually include things like restarting services. (I’m simplifying, but this is good enough for this post.) Normally this is fine, and in fact desirable: the old service keeps on working, and this approach often allows breaking what would otherwise be difficult cycles by ensuring that the system is in a more coherent state before trying to restart services. However, in this case, unpacking the new files onto disk immediately means that new SSH connections no longer work: the old sshd receives the connection and tries to hand it off to a freshly-executed copy of the new sshd binary on disk, which no longer supports this.

    If you’re just upgrading OpenSSH on its own or with a small number of other packages, this isn’t much of a problem as the listener will be restarted quite soon; but if you’re upgrading from bookworm to trixie, there may be a long gap when you can’t SSH to the system any more, and if something fails in the middle of the upgrade then you could be in trouble.

    So, what to do? I considered keeping a copy of the old sshd around temporarily and patching the new sshd to re-execute it if it’s being run to handle an incoming connection, but that turned out to fail in my first test: dependencies are normally only checked when configuring a package, so it’s possible to unpack openssh-server before unpacking a newer libc6 that it depends on, at which point you can’t execute the new sshd at all. (That also means that the approach of restarting the service at unpack time instead of configure time is a non-starter.) We needed a different idea.

    dpkg, the core Debian package manager, has a specialized facility called “diversions”: you can tell it that when it's unpacking a particular file it should put it somewhere else instead. This is normally used by administrators when they want to install a locally-modified version of a particular file at their own risk, or by packages that knowingly override a file normally provided by some other package. However, in this case it turns out to be useful for openssh-server to temporarily divert one of its own files! When upgrading from before 9.8, it now diverts /usr/sbin/sshd to /usr/sbin/sshd.session-split before the new version is unpacked, then removes the diversion and moves the new file into place once it's ready to restart the service; this reduces the period when incoming connections fail to a minimum. (We actually have to pretend that the diversion is being performed on behalf of a slightly different package since we're using dpkg-divert in a strange way here, but it all works.) A rough sketch of this diversion dance follows this list.

  • Most OpenSSH processes, including sshd, check for a compatible version of the OpenSSL library when they start up. This check used to be very picky, among other things requiring both the major and minor number to match. OpenSSL 3 has a better versioning policy, and so OpenSSH 9.4p1 relaxed this check.

    Unfortunately, bookworm shipped with OpenSSH 9.2p1, which means that as soon as you unpack the new libssl3 during an upgrade (actually libssl3t64 due to the 64-bit time_t transition), sshd stops working. This couldn’t be fixed by a change in trixie; we needed to change bookworm in advance of the upgrade so that it would tolerate newer versions of OpenSSL. And time was tight if we wanted to maximize the chance that people would apply that stable update before upgrading to trixie; there isn’t going to be another point release of Debian 12 before the release of Debian 13.

    Fortunately, there’s a stable-updates mechanism for exactly this sort of thing, and the stable release managers kindly accepted my proposal to fix this there.
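
To make the diversion dance described in the first item above more concrete, here is a hand-run sketch of the mechanism. The real logic lives in openssh-server's maintainer scripts, and the diverting package name below is an invented stand-in for the "slightly different package" trick mentioned in the post:

# before the new openssh-server is unpacked: make its sshd land at a temporary name,
# leaving the old binary at /usr/sbin/sshd for the running listener to re-execute
$ dpkg-divert --package openssh-server-divert --no-rename \
      --divert /usr/sbin/sshd.session-split --add /usr/sbin/sshd
# ... the new openssh-server is unpacked; its sshd arrives as /usr/sbin/sshd.session-split ...
# at configure time: drop the diversion, move the new binary into place, restart the listener
$ dpkg-divert --package openssh-server-divert --no-rename --remove /usr/sbin/sshd
$ mv /usr/sbin/sshd.session-split /usr/sbin/sshd
$ systemctl restart ssh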

The net result is that if you apply updates to bookworm (including stable-updates / bookworm-updates, which is enabled by default) before starting the upgrade to trixie, everything should be fine. Many thanks to Manfred for reporting this with just enough time to spare that we were able to fix it before Debian 13 is released in a few days!

debmirror

I did my twice-yearly refresh of debmirror’s mirror_size documentation, and applied a patch from Christoph Goehre to improve mirroring of installer files.

madison-lite

I proposed renaming this project along with the rmadison tool in devscripts, although I’m not yet sure what a good replacement name would be.

Python team

I upgraded python-expandvars, python-typing-extensions (in experimental), and webtest to new upstream versions.

I backported fixes for some security vulnerabilities to unstable:

I fixed or helped to fix a number of release-critical bugs:

I fixed some other bugs, mostly Severity: important:

I reinstated python3-mastodon’s build-dependency on and recommendation of python3-blurhash, now that the latter has been fixed to use the correct upstream source.

06 August, 2025 10:41AM by Colin Watson

Matthew Palmer

I'm trying an open source funding experiment

As I’m currently somewhat underemployed, and could do with some extra income, I’m starting an open source crowd-funding experiment. My hypothesis is that the open source community, and perhaps a community-minded company or two, really wants more open source code in the world, and is willing to put a few dollars my way to make that happen.

To begin with, I’m asking for contributions to implement a bunch of feature requests on action-validator, a Rust CLI tool I wrote to validate the syntax of GitHub actions and workflows. The premise is quite simple: for every AU$150 (about US$100) I receive in donations, I’ll implement one of the nominated feature requests. If people want a particular feature implemented, they can nominate a feature in their donation message, otherwise when “general” donations get to AU$150, I’ll just pick a feature that looks interesting. More details are on my code fund page.

In the same spirit of simplicity, donations can be made through my Ko-fi page, and I’ll keep track of the various totals in a hand-written HTML table.

So, in short, if you want more open source code to exist, now would be a good time to visit my Ko-fi page and chip in a few dollars. If you’re curious to know more, my code fund page has a list of Foreseeably Anticipated Questions that might address your curiosity. Otherwise, ask your questions in the comments or email me.

06 August, 2025 12:00AM by Matt Palmer (mpalmer@hezmatt.org)

August 05, 2025

Thomas Lange

FAIme service new features: Linux Mint support and data storage for USB

Build your own customized Linux Mint ISO

Using the FAIme service [1] you can now build your own customized installation ISO for the Xfce edition of Linux Mint 22.1 'Xia'.

You can select the language, add a list of additional packages, and set the username and passwords. In the advanced settings you may add your SSH public key, a GRUB option and a postinst script to be executed.

Add writable data partition for USB sticks

For all variants of ISOs (all live and all install ISOs) you can add a data partition to the ISO by just clicking a checkbox. This writable partition can be used when booting from a USB stick. FAI will use it to search for a config space and to store the logs when this partition is detected.

The logs will be stored in the subdirectory logs on this partition. To use a different config space than the one on the ISO (which is read-only), create a subdirectory config and copy a FAI config space into that directory. Then set FAI_CONFIG_SRC=detect:// (which is the default) and FAI will search for a config space on the data partition and use it. More info about this is available at [2].
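
A minimal sketch of preparing that partition from a running Linux system (the device name, mount point and config-space path are assumptions for illustration):

$ mount /dev/sdb3 /mnt                   # the writable data partition on the USB stick
$ mkdir -p /mnt/config /mnt/logs
$ cp -a /srv/fai/config/. /mnt/config/   # copy your own FAI config space here
$ umount /mnt
# with FAI_CONFIG_SRC=detect:// (the default), FAI finds config/ on this partition
# and stores its logs under logs/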

You can also store some local packages in your config space, which will be installed automatically, without the need to recreate the ISO.

05 August, 2025 10:11AM

Matthew Garrett

Cordoomceps - replacing an Amiga's brain with Doom

There's a lovely device called a pistorm, an adapter board that glues a Raspberry Pi GPIO bus to a Motorola 68000 bus. The intended use case is that you plug it into a 68000 device and then run an emulator that reads instructions from hardware (ROM or RAM) and emulates them. You're still limited by the ~7MHz bus that the hardware is running at, but you can run the instructions as fast as you want.

These days you're supposed to run a custom built OS on the Pi that just does 68000 emulation, but initially it ran Linux on the Pi and a userland 68000 emulator process. And, well, that got me thinking. The emulator takes 68000 instructions, emulates them, and then talks to the hardware to implement the effects of those instructions. What if we, well, just don't? What if we just run all of our code in Linux on an ARM core and then talk to the Amiga hardware?

We're going to ignore x86 here, because it's weird - but most hardware that wants software to be able to communicate with it maps itself into the same address space that RAM is in. You can write to a byte of RAM, or you can write to a piece of hardware that's effectively pretending to be RAM[1]. The Amiga wasn't unusual in this respect in the 80s, and to talk to the graphics hardware you speak to a special address range that gets sent to that hardware instead of to RAM. The CPU knows nothing about this. It just indicates it wants to write to an address, and then sends the data.

So, if we are the CPU, we can just indicate that we want to write to an address, and provide the data. And those addresses can correspond to the hardware. So, we can write to the RAM that belongs to the Amiga, and we can write to the hardware that isn't RAM but pretends to be. And that means we can run whatever we want on the Pi and then access Amiga hardware.

And, obviously, the thing we want to run is Doom, because that's what everyone runs in fucked up hardware situations.

Doom was Amiga kryptonite. Its entire graphical model was based on memory directly representing the contents of your display, and being able to modify that by just moving pixels around. This worked because at the time VGA displays supported having a memory layout where each pixel on your screen was represented by a byte in memory containing an 8 bit value that corresponded to a lookup table containing the RGB value for that pixel.

The Amiga was, well, not good at this. Back in the 80s, when the Amiga hardware was developed, memory was expensive. Dedicating that much RAM to the video hardware was unthinkable - the Amiga 1000 initially shipped with only 256K of RAM, and you could fill all of that with a sufficiently colourful picture. So instead of having the idea of each pixel being associated with a specific area of memory, the Amiga used bitmaps. A bitmap is an area of memory that represents the screen, but only represents one bit of the colour depth. If you have a black and white display, you only need one bitmap. If you want to display four colours, you need two. More colours, more bitmaps. And each bitmap is stored in an independent area of RAM. You never use more memory than you need to display the number of colours you want to.

But that means that each bitplane contains packed information - every byte of data in a bitplane contains the bit value for 8 different pixels, because each bitplane contains one bit of information per pixel. To update one pixel on screen, you need to read from every bitmap, update one bit, and write it back, and that's a lot of additional memory accesses. Doom, but on the Amiga, was slow not just because the CPU was slow, but because there was a lot of manipulation of data to turn it into the format the Amiga wanted and then push that over a fairly slow memory bus to have it displayed.

The CDTV was an aesthetically pleasing piece of hardware that absolutely sucked. It was an Amiga 500 in a hi-fi box with a caddy-loading CD drive, and it ran software that was just awful. There's no path to remediation here. No compelling apps were ever released. It's a terrible device. I love it. I bought one in 1996 because a local computer store had one and I pointed out that the company selling it had gone bankrupt some years earlier and literally nobody in my farming town was ever going to have any interest in buying a CD player that made a whirring noise when you turned it on because it had a fan and eventually they just sold it to me for not much money, and ever since then I wanted to have a CD player that ran Linux and well spoiler 30 years later I'm nearly there. That CDTV is going to be our test subject. We're going to try to get Doom running on it without executing any 68000 instructions.

We're facing two main problems here. The first is that all Amigas have a firmware ROM called Kickstart that runs at powerup. No matter how little you care about using any OS functionality, you can't start running your code until Kickstart has run. This means even documentation describing bare metal Amiga programming assumes that the hardware is already in the state that Kickstart left it in. This will become important later. The second is that we're going to need to actually write the code to use the Amiga hardware.

First, let's talk about Amiga graphics. We've already covered bitmaps, but for anyone used to modern hardware that's not the weirdest thing about what we're dealing with here. The CDTV's chipset supports a maximum of 64 colours in a mode called "Extra Half-Brite", or EHB, where you have 32 colours arbitrarily chosen from a palette and then 32 more colours that are identical but with half the intensity. For 64 colours we need 6 bitplanes, each of which can be located arbitrarily in the region of RAM accessible to the chipset ("chip RAM", distinguished from "fast ram" that's only accessible to the CPU). We tell the chipset where our bitplanes are and it displays them. Or, well, it does for a frame - after that the registers that pointed at our bitplanes no longer do, because when the hardware was DMAing through the bitplanes to display them it was incrementing those registers to point at the next address to DMA from. Which means that every frame we need to set those registers back.

Making sure you have code that's called every frame just to make your graphics work sounds intensely irritating, so Commodore gave us a way to avoid doing that. The chipset includes a coprocessor called "copper". Copper doesn't have a large set of features - in fact, it only has three. The first is that it can program chipset registers. The second is that it can wait for a specific point in screen scanout. The third (which we don't care about here) is that it can optionally skip an instruction if a certain point in screen scanout has already been reached. We can write a program (a "copper list") for the copper that tells it to program the chipset registers with the locations of our bitplanes and then wait until the end of the frame, at which point it will repeat the process. Now our bitplane pointers are always valid at the start of a frame.

Ok! We know how to display stuff. Now we just need to deal with not having 256 colours, and the whole "Doom expects pixels" thing. For the first of these, I stole code from ADoom, the only Amiga doom port I could easily find source for. This looks at the 256 colour palette loaded by Doom and calculates the closest approximation it can within the constraints of EHB. ADoom also includes a bunch of CPU-specific assembly optimisation for converting the "chunky" Doom graphic buffer into the "planar" Amiga bitplanes, none of which I used because (a) it's all for 68000 series CPUs and we're running on ARM, and (b) I have a quad core CPU running at 1.4GHz and I'm going to be pushing all the graphics over a 7.14MHz bus, the graphics mode conversion is not going to be the bottleneck here. Instead I just wrote a series of nested for loops that iterate through each pixel and update each bitplane and called it a day. The set of bitplanes I'm operating on here is allocated on the Linux side so I can read and write to them without being restricted by the speed of the Amiga bus (remember, each byte in each bitplane is going to be updated 8 times per frame, because it holds bits associated with 8 pixels), and then copied over to the Amiga's RAM once the frame is complete.

And, kind of astonishingly, this works! Once I'd figured out where I was going wrong with RGB ordering and which order the bitplanes go in, I had a recognisable copy of Doom running. Unfortunately there were weird graphical glitches - sometimes blocks would be entirely the wrong colour. It took me a while to figure out what was going on and then I felt stupid. Recording the screen and watching in slow motion revealed that the glitches often showed parts of two frames displaying at once. The Amiga hardware is taking responsibility for scanning out the frames, and the code on the Linux side isn't synchronised with it at all. That means I could update the bitplanes while the Amiga was scanning them out, resulting in a mashup of planes from two different Doom frames being used as one Amiga frame. One approach to avoid this would be to tie the Doom event loop to the Amiga, blocking my writes until the end of scanout. The other is to use double-buffering - have two sets of bitplanes, one being displayed and the other being written to. This consumes more RAM but since I'm not using the Amiga RAM for anything else that's not a problem. With this approach I have two copper lists, one for each set of bitplanes, and switch between them on each frame. This improved things a lot but not entirely, and there's still glitches when the palette is being updated (because there's only one set of colour registers), something Doom does rather a lot, so I'm going to need to implement proper synchronisation.

Except. This was only working if I ran a 68K emulator first in order to run Kickstart. If I tried accessing the hardware without doing that, things were in a weird state. I could update the colour registers, but accessing RAM didn't work - I could read stuff out, but anything I wrote vanished. Some more digging cleared that up. When you turn on a CPU it needs to start executing code from somewhere. On modern x86 systems it starts from a hardcoded address of 0xFFFFFFF0, which was traditionally a long way from any RAM. The 68000 family instead reads its start address from address 0x00000004, which overlaps with where the Amiga chip RAM is. We can't write anything to RAM until we're executing code, and we can't execute code until we tell the CPU where the code is, which seems like a problem. This is solved on the Amiga by powering up in a state where the Kickstart ROM is "overlayed" onto address 0. The CPU reads the start address from the ROM, which causes it to jump into the ROM and start executing code there. Early on, the code tells the hardware to stop overlaying the ROM onto the low addresses, and now the RAM is available. This is poorly documented because it's not something you need to care about if you execute Kickstart, which every actual Amiga does, and I'm only in this position because I've made poor life choices, but ok that explained things. To turn off the overlay you write to a register in one of the Complex Interface Adaptor (CIA) chips, and things start working like you'd expect.

Except, they don't. Writing to that register did nothing for me. I assumed that there was some other register I needed to write to first, and went to the extent of tracing every register access that occurred when running the emulator and replaying those in my code. Nope, still broken. What I finally discovered is that you need to pulse the reset line on the board before some of the hardware starts working - powering it up doesn't put you in a well defined state, but resetting it does.

So, I now have a slightly graphically glitchy copy of Doom running without any sound, displaying on an Amiga whose brain has been replaced with a parasitic Linux. Further updates will likely make things even worse. Code is, of course, available.

[1] This is why we had trouble with late era 32 bit systems and 4GB of RAM - a bunch of your hardware wanted to be in the same address space and so you couldn't put RAM there so you ended up with less than 4GB of RAM


05 August, 2025 12:30AM

August 03, 2025

Daniel Pocock

Jubilee of Digital Missionaries, Catholic Influencers concert (photos)

If you would like the original high-resolution photograph then please contact me by email and include the filename of the photo(s) you want.

Jubilee of Digital Missionaries, Catholic Influencers (photo gallery)

Filenames: DSC_6823.JPG, DSC_6824.JPG, DSC_6825.JPG, DSC_6827.JPG, DSC_6828.JPG, DSC_6829.JPG, DSC_6830.JPG, DSC_6831.JPG, DSC_6832.JPG, DSC_6833.JPG, DSC_6834.JPG, DSC_6835.JPG, DSC_6836.JPG, DSC_6837.JPG, DSC_6840.JPG, DSC_6842.JPG, DSC_6844.JPG, DSC_6852.JPG, DSC_6857.JPG, DSC_6863.JPG, DSC_6867.JPG, DSC_6868.JPG, DSC_6869.JPG, DSC_6870.JPG, DSC_6880.JPG, DSC_6882.JPG, DSC_6885.JPG, DSC_6887.JPG, DSC_6894.JPG, DSC_6896.JPG, DSC_6900.JPG, DSC_6903.JPG, DSC_6904.JPG, DSC_6911.JPG, DSC_6915.JPG, DSC_6917.JPG, DSC_6920.JPG, DSC_6922.JPG, DSC_6923.JPG, DSC_6924.JPG, DSC_6931.JPG, DSC_6933.JPG, DSC_6934.JPG, DSC_6935.JPG, DSC_6939.JPG, DSC_6940.JPG, DSC_6948.JPG, DSC_6951.JPG, DSC_6954.JPG, DSC_6957.JPG, DSC_6958.JPG, DSC_6960.JPG

If you would like the original high-resolution photograph then please contact me by email and include the filename of the photo(s) you want.

Related blogs about the Jubilee of Digital Missionaries and Catholic Influencers

Related blogs about the church

Related blogs about social control media

Catholic.Community

Please follow the Catholic.Community web site and make it your home page.

03 August, 2025 07:00PM

Jubilee of Digital Missionaries, Vatican gardens (photos)

If you would like the original high-resolution photograph then please contact me by email and include the filename of the photo(s) you want.

Jubilee of Digital Missionaries, Catholic Influencers (photo gallery)

Filenames: DSC_6645.JPG, DSC_6647.JPG, DSC_6648.JPG, DSC_6649.JPG, DSC_6650.JPG, DSC_6651.JPG, DSC_6652.JPG, DSC_6653.JPG, DSC_6655.JPG, DSC_6659.JPG, DSC_6661.JPG, DSC_6662.JPG, DSC_6666.JPG, DSC_6667.JPG, DSC_6668.JPG, DSC_6678.JPG, DSC_6679.JPG, DSC_6680.JPG, DSC_6682.JPG, DSC_6683.JPG, DSC_6684.JPG, DSC_6685.JPG, DSC_6686.JPG, DSC_6687.JPG, DSC_6688.JPG, DSC_6690.JPG, DSC_6691.JPG, DSC_6692.JPG, DSC_6693.JPG, DSC_6694.JPG, DSC_6695.JPG, DSC_6700.JPG, DSC_6705.JPG, DSC_6711.JPG, DSC_6730.JPG, DSC_6737.JPG, DSC_6738.JPG, DSC_6745.JPG, DSC_6760.JPG, DSC_6766.JPG, DSC_6770.JPG, DSC_6771.JPG, DSC_6780.JPG, DSC_6784.JPG, DSC_6785.JPG, DSC_6787.JPG, DSC_6788.JPG, DSC_6790.JPG, DSC_6797.JPG, DSC_6798.JPG, DSC_6803.JPG, DSC_6815.JPG, DSC_6818.JPG, DSC_6819.JPG, DSC_6820.JPG, DSC_6821.JPG, DSC_6822.JPG

If you would like the original high-resolution photograph then please contact me by email and include the filename of the photo(s) you want.

Related blogs about the Jubilee of Digital Missionaries and Catholic Influencers

Related blogs about the church

Related blogs about social control media

Catholic.Community

Please follow the Catholic.Community web site and make it your home page.

03 August, 2025 07:00PM

Jubilee of Digital Missionaries, Tuesday afternoon sessions (photos)

If you would like the original high-resolution photograph then please contact me by email and include the filename of the photo(s) you want.

Jubilee of Digital Missionaries, Catholic Influencers (photo gallery)

Filenames: DSC_6606.JPG, DSC_6608.JPG, DSC_6610.JPG, DSC_6612.JPG, DSC_6615.JPG, DSC_6619.JPG, DSC_6620.JPG, DSC_6621.JPG, DSC_6624.JPG, DSC_6625.JPG, DSC_6628.JPG, DSC_6629.JPG, DSC_6640.JPG, DSC_6641.JPG

If you would like the original high-resolution photograph then please contact me by email and include the filename of the photo(s) you want.

Related blogs about the Jubilee of Digital Missionaries and Catholic Influencers

Related blogs about the church

Related blogs about social control media

Catholic.Community

Please follow the Catholic.Community web site and make it your home page.

03 August, 2025 07:00PM

Jubilee of Digital Missionaries, Holy Door and mass with the Pope (photos)

If you would like the original high-resolution photograph then please contact me by email and include the filename of the photo(s) you want.

Photo gallery: Jubilee of Digital Missionaries, Catholic Influencers
Filenames: DSC_6491.JPG, DSC_6494.JPG, DSC_6501.JPG, DSC_6503.JPG, DSC_6508.JPG, DSC_6511.JPG, DSC_6514.JPG, DSC_6516.JPG, DSC_6518.JPG, DSC_6520.JPG, DSC_6521.JPG, DSC_6532.JPG, DSC_6535.JPG, DSC_6536.JPG, DSC_6538.JPG, DSC_6539.JPG, DSC_6544.JPG, DSC_6545.JPG, DSC_6547.JPG, DSC_6550.JPG, DSC_6559.JPG, DSC_6562.JPG, DSC_6564.JPG, DSC_6566.JPG, DSC_6573.JPG, DSC_6582.JPG, DSC_6590.JPG, DSC_6591.JPG, DSC_6593.JPG, DSC_6600.JPG

Monsignor Marc Aillet, Eveque de Bayonne, Daniel Pocock, Jubilee of Digital Missionaries, Catholic Influencers
Filename: DSC_6603.JPG

If you would like the original high-resolution photograph then please contact me by email and include the filename of the photo(s) you want.

Related blogs about the Jubilee of Digital Missionaries and Catholic Influencers

Related blogs about the church

Related blogs about social control media

Catholic.Community

Please follow the Catholic.Community web site and make it your home page.

03 August, 2025 07:00PM

Jubilee of Digital Missionaries, adoration of the Eucharist (photos)

If you would like the original high-resolution photograph then please contact me by email and include the filename of the photo(s) you want.

Photo gallery: Jubilee of Digital Missionaries, Catholic Influencers
Filenames: DSC_6352.JPG, DSC_6374.JPG, DSC_6376.JPG, DSC_6377.JPG, DSC_6378.JPG, DSC_6382.JPG, DSC_6385.JPG, DSC_6388.JPG, DSC_6390.JPG, DSC_6392.JPG, DSC_6394.JPG, DSC_6401.JPG, DSC_6402.JPG, DSC_6404.JPG, DSC_6406.JPG, DSC_6416.JPG, DSC_6417.JPG, DSC_6418.JPG, DSC_6429.JPG, DSC_6431.JPG, DSC_6432.JPG, DSC_6434.JPG, DSC_6440.JPG, DSC_6448.JPG, DSC_6455.JPG, DSC_6458.JPG, DSC_6461.JPG, DSC_6467.JPG

If you would like the original high-resolution photograph then please contact me by email and include the filename of the photo(s) you want.

Related blogs about the Jubilee of Digital Missionaries and Catholic Influencers

Related blogs about the church

Related blogs about social control media

Catholic.Community

Please follow the Catholic.Community web site and make it your home page.

03 August, 2025 07:00PM

hackergotchi for Ben Hutchings

Ben Hutchings

FOSS activity in July 2025

In July I attended DebCamp and DebConf in Brest, France. I very much enjoyed the opportunity to reconnect with other Debian contributors in person. I had a number of interesting and fruitful conversations there, besides the formally organised BoFs and talks.

I also gave my own talk on What’s new in the Linux kernel (and what’s missing in Debian).

Here’s the usual categorisation of activity:

03 August, 2025 01:27PM by Ben Hutchings

August 02, 2025

Russell Coker

Server CPU Sockets

I am always looking for ways of increasing the compute power I have at a reasonable price. I am very happy with my HP z840 dual CPU workstation [1] that I’m using as a server and my HP z640 single CPU workstation [2]. Both of them were available second hand at quite reasonable prices and could be cheaply upgraded to faster CPUs. But if I can get something a lot faster for a reasonable price then I’ll definitely get it.

Socket LGA2011-v3

The home server and home workstation I currently use have socket LGA2011-v3 [3] which supports the E5-2699A v4 CPU, which gives a rating of 26,939 according to Passmark [4]. That Passmark score is quite decent; you can get CPUs using DDR4 RAM that go up to almost double that, but it's a reasonable speed and it works in systems that are readily available at low prices. The z640 is regularly on sale for less than $400AU and the z840 is occasionally below $600.

The Dell PowerEdge T430 is an ok dual-CPU tower server using the same socket. One thing that's not well known is that it is limited to something like 135W per CPU when run with two CPUs. So it will work correctly with a single E5-2697A v4 with 145W TDP (I've tested that) but will refuse to boot with two of them. In my test system I tried replacing the 495W PSUs with 750W PSUs and it made no difference; the limit is in the motherboard. With only a single CPU you only get 8/12 DIMM sockets and not all PCIe slots work. There are many second hand T430s on sale with only a single CPU presumably because the T330 sucks. My T430 works fine with a pair of E5-2683 v4 CPUs.

The Dell PowerEdge T630 also takes the same CPUs but supports higher TDP than the T430. They also support 18*3.5″ disks or 32*2.5″ but they are noisy. I wouldn’t buy one for home use.

AMD

There are some nice AMD CPUs manufactured around the same time and AMD has done a better job of making multiple CPUs that fit the same socket. The reason I don’t generally use AMD CPUs is that they are used in a minority of the server grade systems so as I want ECC RAM and other server features I generally can’t find AMD systems at a reasonable price on ebay etc. There are people who really want second hand server grade systems with AMD CPUs and outbid me. This is probably a region dependent issue, maybe if I was buying in the US I could get some nice workstations with AMD CPUs at low prices.

Socket LGA1151

Socket LGA1151 [5] is used in the Dell PowerEdge T330. It only supports 2 memory channels and 4 DIMMs compared to the 4 channels and 8 DIMMs in LGA2011, and it also has a limit of 64G total RAM for most systems and 128G for some systems. By today’s standards even 128G is a real limit for server use, DDR4 RDIMMs are about $1/GB and when spending $600+ on system and CPU upgrade you wouldn’t want to spend less than $130 on RAM. The CPUs with decent performance for that socket like the i9-9900K aren’t supported by the T330 (possibly they don’t support ECC RAM). The CPUs that Dell supports perform very poorly. I suspect that Dell deliberately nerfed the T330 to drive sales of the T430.

The Lenovo P330 uses socket LGA1151-2 but has the same issues of taking slow CPUs in addition to using UDIMMs which are significantly more expensive on the second hand market.

Socket LGA2066

The next Intel socket after LGA2011-v3 is LGA2066 [6]. That is in the Dell Precision 5820 and HP Z4 G4. It takes an i9-10980XE for 32,404 on Passmark or a W-2295 for 30,906. The variant of the Dell 5820 that supports the i9 CPUs doesn't seem to support ECC RAM so it's not a proper workstation. The single thread performance difference between the W-2295 and the E5-2699A v4 is 2640 to 2055, a 28% increase for the W-2295. There are "High Frequency Optimized" CPUs for socket LGA2011-v3 but they all deliver less than 2,300 on the Passmark single-thread tests, which is much less than what you can get from socket LGA2066. The W-2295 costs $1000 on ebay and the E5-2699A v4 is readily available for under $400; a few months ago I got a matched pair for a bit over $400. Note that getting a matched pair of Intel CPUs is a major pain [7].

Comparing sockets LGA2011-v3 and LGA2066 for a single-CPU system is a $300 system (HP z640) + $400 CPU (E5-2699A v4) vs $500 system (Dell Precision 5820) + $1000 CPU (W-2295), so more than twice the price for a 30% performance benefit on some tasks. The LGA2011-v3 and USB-C both launched in 2014 so LGA2011-v3 systems don't have USB-C sockets, but a $20 USB-C PCIe card doesn't change the economics.

Socket LGA3647

Socket LGA3647 [8] is used in the Dell PowerEdge T440. It supports 6 channels of DDR4 RAM which is a very nice feature for bigger systems. According to one Dell web page the best CPU Dell officially supports for this is the Xeon Gold 5120 which gives performance only slightly better than the E5-2683 v4 which has a low enough TDP that a T430 can run two of them. But according to another Dell web page they support 16 core CPUs which means performance better than a T430 but less than a HP z840. The T440 doesn’t seem like a great system, if I got one cheap I could find a use for it but I wouldn’t pay the prices that they go for on ebay. The Dell PowerEdge T640 has the same socket and is described as supporting up to 28 core CPUs. But I anticipate that it would be as loud as the T630 and it’s also expensive.

This socket is also used in the HP Z6 G4 which takes a W-3265 or Xeon Gold 6258R CPU for the high end options. The HP Z6 G4 systems on ebay are all above $1500 and the Xeon Gold 6258R is also over $1000, so while the Xeon Gold 6258R in a Z6 G4 will give 50% better performance on multithreaded operations than the systems I currently have, it's costing almost 3* as much. It has 6 DIMM sockets which is a nice improvement over the 4 in the z640. The Z6 G4 takes a maximum of 768G of RAM with the optional extra CPU board (which is very expensive both new and on ebay) compared to my z840 which has 512G and half its DIMM slots empty. The HP Z8 G4 has the same socket and takes up to 3TB of RAM if used with CPUs that support it (most CPUs only support 768G and you need a "M" variant to support more). The higher performance CPUs supported in the Z6 G4 and Z8 G4 don't have enough entries in the Passmark database to be accurate, but going from 22 cores in the E5-2699A v4 to 28 in the Xeon Platinum 8180 when using the same RAM technology doesn't seem like a huge benefit. The Z6 and Z8 G4 systems run DDR4 RAM at up to 2666 speed while the z640 and z840 only go up to 2400; a 10% increase in RAM speed is nice but not a huge difference.

I don’t think that any socket LGA3647 systems will ever be ones I want to buy. They don’t offer much over LGA2011-v3 but are in newer and fancier systems that will go for significantly higher prices.

DDR5

I think that DDR5 systems will be my next step up in tower server and workstation performance after the socket LGA2011-v3 systems. I don’t think anything less will offer me enough of a benefit to justify a change. I also don’t think that they will be in the price range I am willing to pay until well after DDR6 is released, some people are hoping for DDR6 to be released late this year but next year seems more likely. So maybe in 2027 there will be some nice DDR5 systems going cheap.

CPU Benchmark Results

Here are the benchmark results of CPUs I mentioned in this post according to passmark.com [9]. I didn’t reference results of CPUs that only had 1 or 2 results posted as they aren’t likely to be accurate.

CPU               Single Thread   Multi Thread   TDP
E5-2683 v4        1,713           17,591         120W
Xeon Gold 5120    1,755           18,251         105W
i9-9900K          2,919           18,152         95W
E5-2697A v4       2,106           21,610         145W
E5-2699A v4       2,055           26,939         145W
W-3265            2,572           30,105         205W
W-2295            2,642           30,924         165W
i9-10980XE        2,662           32,397         165W
Xeon Gold 6258R   2,080           40,252         205W

02 August, 2025 11:43AM by etbe

August 01, 2025

hackergotchi for Jonathan Dowland

Jonathan Dowland

School of Computing Technical Reports

(You wait ages for an archiving blog post and two come along at once!)

Between 1969 and 2019, the Newcastle University School of Computing published a Technical Reports Series. Until 2017-ish, the full list of individually-numbered reports was available on the School's website, as well as full text PDFs for every report.

At some time around 2014 I was responsible for migrating the School's website from self-managed to centrally-managed. The driver was to improve the website from the perspective of student recruitment. The TR listings (as well as full listings and texts for awarded PhD theses, MSc dissertations, Director's reports and various others) survived the initial move. After I left (as staff) in 2015, anything not specifically about student recruitment degraded and by 2017 the listings were gone.

I've been trying, on and off, to convince different parts of the University to restore and take ownership of these lists ever since. For one reason or another each avenue I've pursued has gone nowhere.

Recently the last remaining promising way forward failed, so I gave up and did it myself. The list is now hosted by the Historic Computing Committee, here:

https://nuhc.ncl.ac.uk/computing/techreports/

It's not complete (most of the missing entries are towards the end of the run), but it's a start. The approach that finally yielded results was simply scraping the Internet Archive Wayback Machine for various pages from back when the material was represented on the School website, and then filling in the gaps from some other sources.

What I envisage in the future: per-page reports with the relevant metadata (including abstracts); authors de-duplicated and cross-referenced; PDFs OCRd; providing access to the whole metadata DB (probably as a lump of JSON); a mechanism for people to report errors; a platform for students to perform data mining projects: perhaps some kind of classification/tagging by automated content analysis; cross-referencing copies of papers in other venues (lots of TRs are pre-prints).

01 August, 2025 03:55PM

Debian Chronicles

I recently learned that, about 6 months ago, the Debian webteam deleted all news articles from the main website older than 2022. There have been several complaints from people in and outside of Debian, notably Joe Brockmeier of LWN, and this really sad one from the nephew of a deceased developer, wondering where the obituary had gone, but the team have not been swayed and are not prepared to reinstate the news.

It feels very important to me, too, that historic news, and their links, are not broken. So, I hastily built a new Debian service, The Chronicles of Debian, as a permanent home for historic web content.

$ HEAD -S -H "Accept-Language: de" https://www.debian.org/News/1997/19971211
HEAD https://www.debian.org/News/1997/19971211
302 Found
HEAD https://chronicles.debian.org/www/News/1997/19971211
200 OK
…
Content-Language: de
Content-Location: 19971211.de.html
…

This was thrown up in a hurry to get something working as fast as possible, and there is plenty of room for improvement. Get in touch if there's an enhancement you would like or you would like to get involved!

01 August, 2025 12:23PM

hackergotchi for Guido Günther

Guido Günther

Free Software Activities July 2025

Another short status update of what happened on my side last month - a lot shorter than usual due to real life events (that will also affect August) but there was some progress on stevia and it landed in Debian too now.

See below for details on the above and more:

phosh

  • Use new (rust based) phrosh portal too (MR)
  • Consistently format meson files (MR)

phoc

  • Add sysprof support (MR)
  • Reject input based on shell's state (MR)
  • Avoid zero serial (MR)
  • Allow to damage whole output on each frame (MR)
  • Avoid possible crash on unlock (MR)

phosh-mobile-settings

  • Use newer gmobile and make CI more flexible (MR)
  • Fix nightly build (MR)
  • Allow to configure the OSK's automatic scaling properties (MR)

stevia (formerly phosh-osk-stub)

  • Portrait keyboard scaling (MR)
  • Fix translation of completer descriptions in mobile settings (MR)
  • Use key-pressed events (MR)
  • Fix additional completions like emojis with hunspell completer (MR)
  • Document layout testing (MR)

phosh-vala-plugins

  • Drop vapi files, they've made it into a phosh release now (MR)

xdg-desktop-portal-phosh

  • Bump rustc dependency and simplify CI (MR)

feedbackd-device-themes

  • Add key-{pressed,released} (MR)

livi

  • Make single click play/pause video (MR)
  • Update screenshot and metinfo for better display on flathub (MR)
  • Release 0.3.2 (MR)
  • Update on Flathub (MR)

Debian

  • Upload current stevia to experimental
  • phosh: Don't forget to install vapi files (MR)
  • meta-phosh: Update to 0.48.0: (MR)
  • Update to stevia 0.48 (MR)
  • Update xkbcommon to 0.10.0 (MR)
  • iio-sensor-proxy: Backport buffer mode fixes for trixie (MR), Unblock request
  • livi: Update to 0.3.2 (MR)

foliate

  • Don't let session go idle when in fullscreen (MR)

Cellbroadcastd

  • Fix packaging build (MR)

git-buildpackage

  • pull: Allow to convert local repo when remote got switched to DEP-14 (MR)

wayland-protocols

  • Respin cutout protocol MR

Reviews

This is not code by me but reviews of other people's code. The list is (as usual) slightly incomplete. Thanks for the contributions!

  • phosh-mobile-settings: Disable Xwayland to help e.g. distrobox (MR) - merged
  • phosh-mobile-settings: Allow to search (MR) - merged
  • phosh-mobile-settings: Allow to configure terminal layout shortcuts (MR) - merged
  • feedbackd: Legacy led support (MR) - merged
  • phosh: upcoming-events: Allow to hide days without events (MR)
  • m-b-p-i: Add emergency numbers for JP (MR)
  • xdg-desktop-portal-phosh: bootstrap pure rust portal (MR) - merged
  • xdg-desktop-portal-phosh: portal avoidance (MR) - merged

Help Development

If you want to support my work see donations.

Comments?

Join the Fediverse thread

01 August, 2025 10:37AM

hackergotchi for Junichi Uekawa

Junichi Uekawa

What a surprise it's August already.

What a surprise it's August already.

01 August, 2025 07:19AM by Junichi Uekawa

Birger Schacht

Status update, July 2025

At the beginning of July I got my 12" framework laptop and installed Debian on it. During that setup I made some updates to my base setup scripts that I use to install Debian machines.

Due to the freeze I did not do much package related work. But I was at DebConf and I uploaded a new release of labwc to experimental, mostly to test the tag2upload workflow.

I started working on packaging wlr-sunclock, which is a small Wayland widget that displays the sun's shadows on the earth. I also created an ITP for wayback. Wayback is an X11 compatibility layer that allows running X11 desktop environments on Wayland.

In my dayjob I did my usual work on apis-core-rdf, which is our Django application for managing prosopographic data. I implemented a password change interface and did some restructuring of the templates. We released a new version which was followed by a bugfix release a couple of days later.

I also implemented a rather big refactoring in pfp-api. PFP-API is a FastAPI based REST API that uses rdfproxy to fetch data from a Triplestore, converts the data to Pydantic models and then ships the models as JSON. Most of the work is done by rdfproxy in the background, but I adapted the existing pfp-api code to make it easier to add new entity types.

01 August, 2025 05:28AM

Paul Wise

FLOSS Activities July 2025

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Sponsors

All work was done on a volunteer basis.

01 August, 2025 02:24AM

Iustin Pop

Our Grand Japan 2025 vacation is over 😭

As I’m writing this, we’re one hour away from landing, and thus our Grand (with a capital G for sure) Japan 2025 vacation is over. Planning started about nine months ago, plane tickets bought six months in advance, most hotels booked about four months ahead, and then a wonderful, even if a bit packed, almost 3 weeks in Japan. And now we’re left with lots of good memories, some mishaps that we’re going to laugh about in a few months’ time, and quite a few thousand pictures to process and filter, so that they can be viewed in a single session.

Oh, and I’m also left with a nice bottle of plum wine, thanks to inflight shopping. I was planning to buy one in the airport but didn’t manage to, as Haneda International departures, after the security check, is a bit small. But in 15 hours of flying, there was enough time to implement 2 tiny Corydalis features and browse the shopping catalog. I only learned on the flight that some items need to be preordered, a lesson for next time…

Thanks to the wonders of inflight internet, I can write and publish this, but since it is not Starlink, Visual Studio Code managed to download an update for the UI, but now the remote server package is too big? slow? and can’t be downloaded. It started downloading 5 times, and aborted at about 80% each time. Thankfully my blog is lightweight and I can write it in vi and push it. And pushing the above-mentioned features to GitHub was also possible.

A proper blog post will follow, once I can select some pictures and manage to condense three weeks into an overall summary… And in the meantime, back to the real world!

01 August, 2025 12:00AM

July 31, 2025

hackergotchi for Matthew Garrett

Matthew Garrett

Secure boot certificate rollover is real but probably won't hurt you

LWN wrote an article which opens with the assertion "Linux users who have Secure Boot enabled on their systems knowingly or unknowingly rely on a key from Microsoft that is set to expire in September". This is, depending on interpretation, either misleading or just plain wrong, but also there's not a good source of truth here, so.

First, how does secure boot signing work? Every system that supports UEFI secure boot ships with a set of trusted certificates in a database called "db". Any binary signed with a chain of certificates that chains to a root in db is trusted, unless either the binary (via hash) or an intermediate certificate is added to "dbx", a separate database of things whose trust has been revoked[1]. But, in general, the firmware doesn't care about the intermediate or the number of intermediates or whatever - as long as there's a valid chain back to a certificate that's in db, it's going to be happy.

That's the conceptual version. What about the real world one? Most x86 systems that implement UEFI secure boot have at least two root certificates in db - one called "Microsoft Windows Production PCA 2011", and one called "Microsoft Corporation UEFI CA 2011". The former is the root of a chain used to sign the Windows bootloader, and the latter is the root used to sign, well, everything else.

What is "everything else"? For people in the Linux ecosystem, the most obvious thing is the Shim bootloader that's used to bridge between the Microsoft root of trust and a given Linux distribution's root of trust[2]. But that's not the only third party code executed in the UEFI environment. Graphics cards, network cards, RAID and iSCSI cards and so on all tend to have their own unique initialisation process, and need board-specific drivers. Even if you added support for everything on the market to your system firmware, a system built last year wouldn't know how to drive a graphics card released this year. Cards need to provide their own drivers, and these drivers are stored in flash on the card so they can be updated. But since UEFI doesn't have any sandboxing environment, those drivers could do pretty much anything they wanted to. Someone could compromise the UEFI secure boot chain by just plugging in a card with a malicious driver on it, and have that hotpatch the bootloader and introduce a backdoor into your kernel.

This is avoided by enforcing secure boot for these drivers as well. Every plug-in card that carries its own driver has it signed by Microsoft, and up until now that's been a certificate chain going back to the same "Microsoft Corporation UEFI CA 2011" certificate used in signing Shim. This is important for reasons we'll get to.

The "Microsoft Windows Production PCA 2011" certificate expires in October 2026, and the "Microsoft Corporation UEFI CA 2011" one in June 2026. These dates are not that far in the future! Most of you have probably at some point tried to visit a website and got an error message telling you that the site's certificate had expired and that it's no longer trusted, and so it's natural to assume that the outcome of time's arrow marching past those expiry dates would be that systems will stop booting. Thankfully, that's not what's going to happen.

First up: if you grab a copy of the Shim currently shipped in Fedora and extract the certificates from it, you'll learn it's not directly signed with the "Microsoft Corporation UEFI CA 2011" certificate. Instead, it's signed with a "Microsoft Windows UEFI Driver Publisher" certificate that chains to the "Microsoft Corporation UEFI CA 2011" certificate. That's not unusual; intermediates are commonly used and rotated. But if we look more closely at that certificate, we learn that it was issued in 2023 and expired in 2024. Older versions of Shim were signed with older intermediates. A very large number of Linux systems are already booting binaries signed with certificates that have expired, and yet things keep working. Why?
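
If you want to check this on your own machine, a minimal sketch using sbverify (from the sbsigntool package) is below; the path to the Shim binary is an assumption and varies by distribution (Debian, for instance, keeps it under /boot/efi/EFI/debian/).

$ # list the Authenticode signatures and certificate chain embedded in a Shim binary
$ # (the path is a guess for a Fedora layout; adjust for your distribution)
$ sbverify --list /boot/efi/EFI/fedora/shimx64.efi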

Let's talk about time. In the ways we care about in this discussion, time is a social construct rather than a meaningful reality. There's no way for a computer to observe the state of the universe and know what time it is - it needs to be told. It has no idea whether that time is accurate or an elaborate fiction, and so it can't with any degree of certainty declare that a certificate is valid from an external frame of reference. The failure modes of getting this wrong are also extremely bad! If a system has a GPU that relies on an option ROM, and if you stop trusting the option ROM because either its certificate has genuinely expired or because your clock is wrong, you can't display any graphical output[3] and the user can't fix the clock and, well, crap.

The upshot is that nobody actually enforces these expiry dates - here's the reference code that disables it. In a year's time we'll have gone past the expiration date for "Microsoft Windows UEFI Driver Publisher" and everything will still be working, and a few months later "Microsoft Windows Production PCA 2011" will also expire and systems will keep booting Windows despite being signed with a now-expired certificate. This isn't a Y2K scenario where everything keeps working because people have done a huge amount of work - it's a situation where everything keeps working even if nobody does any work.

So, uh, what's the story here? Why is there any engineering effort going on at all? What's all this talk of new certificates? Why are there sensationalist pieces about how Linux is going to stop working on old computers or new computers or maybe all computers?

Microsoft will shortly start signing things with a new certificate that chains to a new root, and most systems don't trust that new root. System vendors are supplying updates[4] to their systems to add the new root to the set of trusted keys, and Microsoft has supplied a fallback that can be applied to all systems even without vendor support[5]. If something is signed purely with the new certificate then it won't boot on something that only trusts the old certificate (which shouldn't be a realistic scenario due to the above), but if something is signed purely with the old certificate then it won't boot on something that only trusts the new certificate.
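
To see whether your own firmware's db already includes a newer Microsoft root, a rough sketch using mokutil follows; the assumption that the new certificates carry "2023" in their names is mine, not something confirmed by this article.

$ # dump the certificates enrolled in the UEFI db and look for Microsoft CAs
$ # (matching on "CA 2023" is an assumption about the new certificate names)
$ mokutil --db | grep -E 'CA 2011|CA 2023'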

How meaningful a risk is this? We don't have an explicit statement from Microsoft as yet as to what's going to happen here, but we expect that there'll be at least a period of time where Microsoft signs binaries with both the old and the new certificate, and in that case those objects should work just fine on both old and new computers. The problem arises if Microsoft stops signing things with the old certificate, at which point new releases will stop booting on systems that don't trust the new key (which, again, shouldn't happen). But even if that does turn out to be a problem, nothing is going to force Linux distributions to stop using existing Shims signed with the old certificate, and having a Shim signed with an old certificate does nothing to stop distributions signing new versions of grub and kernels. In an ideal world we have no reason to ever update Shim[6] and so we just keep on shipping one signed with two certs.

If there's a point in the future where Microsoft only signs with the new key, and if we were to somehow end up in a world where systems only trust the old key and not the new key[7], then those systems wouldn't boot with new graphics cards, wouldn't be able to run new versions of Windows, wouldn't be able to run any Linux distros that ship with a Shim signed only with the new certificate. That would be bad, but we have a mechanism to avoid it. On the other hand, systems that only trust the new certificate and not the old one would refuse to boot older Linux, wouldn't support old graphics cards, and also wouldn't boot old versions of Windows. Nobody wants that, and for the foreseeable future we're going to see new systems continue trusting the old certificate and old systems have updates that add the new certificate, and everything will just continue working exactly as it does now.

Conclusion: Outside some corner cases, the worst case is you might need to boot an old Linux to update your trusted keys to be able to install a new Linux, and no computer currently running Linux will break in any way whatsoever.

[1] (there's also a separate revocation mechanism called SBAT which I wrote about here, but it's not relevant in this scenario)

[2] Microsoft won't sign GPLed code for reasons I think are unreasonable, so having them sign grub was a non-starter, but also the point of Shim was to allow distributions to have something that doesn't change often and be able to sign their own bootloaders and kernels and so on without having to have Microsoft involved, which means grub and the kernel can be updated without having to ask Microsoft to sign anything and updates can be pushed without any additional delays

[3] It's been a long time since graphics cards booted directly into a state that provided any well-defined programming interface. Even back in 90s, cards didn't present VGA-compatible registers until card-specific code had been executed (hence DEC Alphas having an x86 emulator in their firmware to run the driver on the card). No driver? No video output.

[4] There's a UEFI-defined mechanism for updating the keys that doesn't require a full firmware update, and it'll work on all devices that use the same keys rather than being per-device

[5] Using the generic update without a vendor-specific update means it wouldn't be possible to issue further updates for the next key rollover, or any additional revocation updates, but I'm hoping to be retired by then and I hope all these computers will also be retired by then

[6] I said this in 2012 and it turned out to be wrong then so it's probably wrong now sorry, but at least SBAT means we can revoke vulnerable grubs without having to revoke Shim

[7] Which shouldn't happen! There's an update to add the new key that should work on all PCs, but there's always the chance of firmware bugs


31 July, 2025 04:12PM

Simon Josefsson

Independently Reproducible Git Bundles

The gnulib project publish a git bundle as a stable archival copy of the gnulib git repository once in a while.

Why? We don’t know exactly what this may be useful for, but I’m promoting this to see if we can establish some good use-cases.

A git bundle may help to establish provenance in case of an attack on the Savannah hosting platform that compromises the gnulib git repository.

Another use is in the Debian gnulib package: that gnulib bundle is git cloned when building some Debian packages, to get to exactly the gnulib commit used by each upstream project – see my talk on gnulib at Debconf24 – and this approach reduces the amount of vendored code that is part of Debian’s source code, which is relevant to mitigate XZ-style attacks.

The first time we published the bundle, I wanted it to be possible to re-create it bit-by-bit identically by others.

At the time I discovered a well-written blog post by Paul Beacher on reproducible git bundles and thought he had solved the problem for me. Essentially it boils down to disable threading during compression when producing the bundle, and his final example show this results in a predictable bit-by-bit identical output:

$ for i in $(seq 1 100); do \
> git -c 'pack.threads=1' bundle create -q /tmp/bundle-$i --all; \
> done
$ md5sum /tmp/bundle-* | cut -f 1 -d ' ' | uniq -c
    100 4898971d4d3b8ddd59022d28c467ffca

So what remains to be said about this? It seems reproducibility goes deeper than that. One desirable property is that someone else should be able to reproduce the same git bundle, and not only that a single individual is able to reproduce things on one machine.

It surprised me to see that when I ran the same set of commands on a different machine (started from a fresh git clone), I got a different checksum. The different checksums occurred even when nothing had been committed on the server side between the two runs.

I thought the reason had to do with other sources of unpredictable data, and I explored several ways to work around this but eventually gave up. I settled for the following sequence of commands:

REV=ac9dd0041307b1d3a68d26bf73567aa61222df54 # master branch commit to package
git clone https://git.savannah.gnu.org/git/gnulib.git
cd gnulib
git fsck # attempt to validate input
# inspect that the new tree matches a trusted copy
git checkout -B master $REV # put $REV at master
for b in $(git branch -r | grep origin/stable- | sort --version-sort); do git checkout ${b#origin/}; done
git remote remove origin # drop some unrelated branches
git gc --prune=now # drop any commits after $REV
git -c 'pack.threads=1' bundle create gnulib.bundle --all
V=$(env TZ=UTC0 git show -s --date=format:%Y%m%d --pretty=%cd master)
mv gnulib.bundle gnulib-$V.bundle
build-aux/gnupload --to ftp.gnu.org:gnulib gnulib-$V.bundle

At the time it felt more important to publish something than to reach for perfection, so we did so using the above snippet. Afterwards I reached out to the git community on this and there were good discussion about my challenge.

At the end of that thread you see that I was finally able to reproduce a bit-by-bit identical bundles from two different clones, by using an intermediate git -c pack.threads=1 repack -adF step. I now assume that the unpredictable data I got earlier was introduced during the ‘git clone’ steps, compressing the pack differently each time due to threaded compression. The outcome could also depend on what content the server provided, so if someone ran git gc, git repack on the server side things would change for the user, even if the user forced threading to 1 during cloning — more experiments on what kind of server-side alterations results in client-side differences would be good research.

A couple of months passed and it is now time to publish another gnulib bundle – somewhat paired to the bi-yearly stable gnulib branches – so let’s walk through the commands and explain what they do. First clone the repository:

REV=225973a89f50c2b494ad947399425182dd42618c   # master branch commit to package
S1REV=475dd38289d33270d0080085084bf687ad77c74d # stable-202501 branch commit
S2REV=e8cc0791e6bb0814cf4e88395c06d5e06655d8b5 # stable-202507 branch commit
git clone https://git.savannah.gnu.org/git/gnulib.git
cd gnulib
git fsck # attempt to validate input

I believe the git fsck will validate that the chain of SHA1 commits is linked together, preventing someone from smuggling in unrelated commits earlier in the history without having to do a SHA1 collision. SHA1 collisions are economically feasible today, so this isn’t much of a guarantee of anything though.

git checkout -B master $REV # put $REV at master
# Add all stable-* branches locally:
for b in $(git branch -r | grep origin/stable- | sort --version-sort); do git checkout ${b#origin/}; done
git checkout -B stable-202501 $S1REV
git checkout -B stable-202507 $S2REV
git remote remove origin # drop some unrelated branches
git gc --prune=now # drop any unrelated commits, not clear this helps

This establish a set of branches pinned to particular commits. The older stable-* branches are no longer updated, so they shouldn’t be moving targets. In case they are modified in the future, the particular commit we used will be found in the official git bundle.

time git -c pack.threads=1 repack -adF

That’s the new magic command to repack and recompress things in a hopefully more predictable way. This leads to a 72MB git pack under .git/objects/pack/ and a 62MB git bundle. The runtime on my laptop is around 5 minutes.

I experimented with -c pack.compression=1 and -c pack.compression=9 but the size was roughly the same; 76MB and 66MB for level 1 and 72MB and 62MB for level 9. Runtime still around 5 minutes.

Git uses zlib by default, which isn’t the most optimal compression around. I tried -c pack.compression=0 and got a 163MB git pack and a 153MB git bundle. The runtime is still around 5 minutes, indicating that compression is not the bottleneck for the git repack command.

That 153MB uncompressed git bundle compresses to 48MB with gzip default settings and 46MB with gzip -9; to 39MB with zstd defaults and 34MB with zstd -9; and to 28MB using xz defaults and a smaller 26MB using xz -9.

Still, the inconvenience of having to uncompress a 30-40MB archive into the much larger 153MB is probably not worth the savings compared to shipping and using the (still relatively modest) 62MB git bundle.

Now finally prepare the bundle and ship it:

git -c 'pack.threads=1' bundle create gnulib.bundle --all
V=$(env TZ=UTC0 git show -s --date=format:%Y%m%d --pretty=%cd master)
mv gnulib.bundle gnulib-$V.bundle
build-aux/gnupload --to ftp.gnu.org:gnulib gnulib-$V.bundle

Yay! Another gnulib git bundle snapshot is available from
https://ftp.gnu.org/gnu/gnulib/.

The essential part of the git repack command is the -F parameter. In the thread -f was suggested, which translates into the git pack-objects --no-reuse-delta parameter:

--no-reuse-delta

When creating a packed archive in a repository that has existing packs, the command reuses existing deltas. This sometimes results in a slightly suboptimal pack. This flag tells the command not to reuse existing deltas but compute them from scratch.

When reading the man page, I thought that using -F, which translates into --no-reuse-object, would be slightly stronger:

--no-reuse-object

This flag tells the command not to reuse existing object data at all, including non deltified object, forcing recompression of everything. This implies --no-reuse-delta. Useful only in the obscure case where wholesale enforcement of a different compression level on the packed data is desired.

On the surface, without --no-reuse-object, some amount of earlier compression could taint the final result. Still, I was able to get bit-by-bit identical bundles by using -f, so possibly reaching for -F is not necessary.

All the commands were done using git 2.51.0 as packaged by Guix. I fear the result may be different with other git versions and/or zlib libraries. I was able to reproduce the same bundle on a Trisquel 12 aramo (derived from Ubuntu 22.04) machine, which uses git 2.34.1. This suggests there is some chances of this being possible to reproduce in 20 years time. Time will tell.

I also fear these commands may be insufficient if something is moving on the server-side of the git repository of gnulib (even something as simple as a new commit). I tried to make some experiments with this but let’s aim for incremental progress here. At least I have now been able to reproduce the same bundle on different machines, which wasn’t the case last time.
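
For completeness, here is a minimal sketch of how someone could verify and use a published bundle; the filename is a placeholder for whatever gnulib-$V.bundle was actually uploaded.

$ # check that the bundle is self-contained and lists valid refs
$ git bundle verify gnulib-YYYYMMDD.bundle
$ # clone from the bundle file instead of the network, then re-check integrity
$ git clone gnulib-YYYYMMDD.bundle gnulib
$ cd gnulib && git fsck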

Happy Reproducible Git Bundle Hacking!

31 July, 2025 02:50PM by simon

July 30, 2025

hackergotchi for David Bremner

David Bremner

Hibernate on the pocket reform 9/n

Context

Sidequest: Install u-boot-menu

Don't be like me and reboot without configuring u-boot-menu. Although the defaults make sense for most people, in my case I lost access to the serial console (because that custom config needed to be re-done), and the default delay was not enough to choose a backup kernel. In consfigurator notation:

  (on-change
      (file:has-content "/etc/u-boot-menu/conf.d/reform.conf"
        '("U_BOOT_TIMEOUT=50"
          "U_BOOT_PARAMETERS=\"ro no_console_suspend cryptomgr.notests \\${bootargs} console=ttyS2,1500000 keep_bootcon console=tty1\""))
    (cmd:single "u-boot-update"))

The panel, alive.

  • Thanks to a hint from joschc (and a bit of luck) I realized the
    issue I filed was nonsense. Yay?

  • The panel driver is not added by the rk3588 patches (since the build process applies all the patches, this is not a problem for building from reform-debian-packages).

  • After applying the two patches in reform-debian-packages/linux/patches6.15/imx8mp-mnt-pocket-reform/pocket-panel, the patched 6.16 kernel boots, and seems to work, including the panel.

  • The updated source is on branch reform-patches at

    https://salsa.debian.org/bremner/collabora-rockchip-3588

  • Unsurprisingly hibernate is not working out of the box with 6.16. My next mission is to apply the recommended pci-reset patches on top of 6.16.

previous episode next episode

30 July, 2025 03:33PM

Hibernate on the pocket reform 10/n

Context

Finally applying the pci reset series.

$ b4 am 20250715-pci-port-reset-v6-0-6f9cce94e7bb@oss.qualcomm.com
$ git am -3 v6_20250715_manivannan_sadhasivam_pci_add_support_for_resetting_the_root_ports_in_a_platform_specifi.mbx

There is quite a scary looking conflict between the last patch in the series and https://lore.kernel.org/r/1744940759-23823-1-git-send-email-shawn.lin@rock-chips.com which is now upstream (collabora) in rockchip-devel. I resolved the second basically by taking both, as it seemed like two independent sets of additions to the same parts of the file. For the first, it looks like Shawn's commit referenced above should prevail.

  • If anyone is curious about the (possibly incorrectly) rebased patches, they are at

    https://salsa.debian.org/bremner/collabora-rockchip-3588

    (reform-patches is the default, and relevant branch).

testing

  • The new (6.16~rc7+) kernel boots
  • It successfully reboots

  • devices test passes, although the UBSAN warning / error is still there

[  174.559032] UBSAN: array-index-out-of-bounds in net/mac80211/rc80211_minstrel_ht.c:409:33
[  174.559830] index 15 is out of range for type 'minstrel_rate_stats [10]'
[  174.560462] CPU: 7 UID: 0 PID: 213 Comm: kworker/u32:10 Tainted: G        WC OE       6.16.0-rc7+ #6 NONE
[  174.560470] Tainted: [W]=WARN, [C]=CRAP, [O]=OOT_MODULE, [E]=UNSIGNED_MODULE
[  174.560472] Hardware name: MNT Pocket Reform with RCORE RK3588 Module (DT)
[  174.560474] Workqueue: mt76 mt76u_tx_status_data [mt76_usb]
[  174.560489] Call trace:
[  174.560491]  show_stack+0x34/0x98 (C)
[  174.560501]  dump_stack_lvl+0x60/0x80
[  174.560508]  dump_stack+0x18/0x24
[  174.560514]  ubsan_epilogue+0x10/0x48
[  174.560520]  __ubsan_handle_out_of_bounds+0xa0/0xd0
[  174.560526]  minstrel_ht_tx_status+0x890/0xc68 [mac80211]
[  174.560633]  rate_control_tx_status+0xbc/0x180 [mac80211]
[  174.560730]  ieee80211_tx_status_ext+0x1d8/0x9a0 [mac80211]
[  174.560822]  mt76_tx_status_unlock+0x188/0x2a0 [mt76]
[  174.560844]  mt76x02_send_tx_status+0x130/0x4a0 [mt76x02_lib]
[  174.560860]  mt76x02_tx_status_data+0x64/0xa8 [mt76x02_lib]
[  174.560872]  mt76u_tx_status_data+0x84/0x120 [mt76_usb]
[  174.560879]  process_one_work+0x178/0x3c8
[  174.560885]  worker_thread+0x208/0x400
[  174.560890]  kthread+0x120/0x220
[  174.560894]  ret_from_fork+0x10/0x20
[  174.560898] ---[ end trace ]---
  • "platform" test still fails with
[   88.484072] rk_gmac-dwmac fe1b0000.ethernet end0: Link is Down
[   88.597026] rockchip-dw-pcie a40c00000.pcie: Failed to receive PME_TO_Ack
[   88.598523] PM: hibernation: hibernation debug: Waiting for 5 second(s).
[   94.667723] rockchip-dw-pcie a40c00000.pcie: Phy link never came up
[   94.668281] rockchip-dw-pcie a40c00000.pcie: fail to resume
[   94.668783] rockchip-dw-pcie a40c00000.pcie: PM: dpm_run_callback(): genpd_restore_noirq returns -110
[   94.669594] rockchip-dw-pcie a40c00000.pcie: PM: failed to restore noirq: error -110
[  120.035426] watchdog: CPU4: Watchdog detected hard LOCKUP on cpu 5
[  120.035978] Modules linked in: xt_CHECKSUM xt_tcpudp nft_chain_nat xt_MASQUERADE nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nft_compat bridge stp llc nf_tables aes_neon_bs aes_neon_blk ccm snd_seq_dummy snd_hrtimer snd_seq snd_seq_device dwmac_rk binfmt_misc mt76x2_common mt76x02_usb mt76_usb mt76x02_lib mt76 mac80211 libarc4 snd_soc_simple_card rockchip_saradc industrialio_triggered_buffer cfg80211 snd_soc_tlv320aic31xx rk805_pwrkey kfifo_buf reform2_lpc(OE) industrialio rockchip_thermal rfkill rockchip_rng hantro_vpu cdc_acm rockchip_rga v4l2_vp9 snd_soc_rockchip_i2s_tdm rockchip_vdec2 panthor videobuf2_dma_sg v4l2_jpeg drm_gpuvm v4l2_h264 drm_exec snd_soc_audio_graph_card snd_soc_simple_card_utils joydev evdev dm_mod nvme_fabrics efi_pstore configfs nfnetlink ip_tables x_tables autofs4 ext4 crc16 mbcache jbd2 btrfs blake2b_generic xor xor_neon raid6_pq mali_dp snd_soc_meson_axg_toddr snd_soc_meson_axg_fifo snd_soc_meson_codec_glue panfrost drm_shmem_helper gpu_sched ao_cec_g12a meson_vdec(C)
[  120.036066]  videobuf2_dma_contig hid_generic videobuf2_memops v4l2_mem2mem videobuf2_v4l2 videodev videobuf2_common mc dw_hdmi_i2s_audio meson_drm meson_canvas meson_dw_mipi_dsi meson_dw_hdmi usbhid hid mxsfb mux_mmio panel_edp imx_dcss ti_sn65dsi86 nwl_dsi mux_core pwm_imx27 xhci_plat_hcd xhci_hcd onboard_usb_dev snd_soc_hdmi_codec snd_soc_core micrel snd_pcm_dmaengine nvme snd_pcm nvme_core snd_timer snd nvme_keyring nvme_auth soundcore stmmac_platform stmmac pcs_xpcs phylink mdio_devres of_mdio sdhci_of_dwcmshc fixed_phy sdhci_pltfm phy_rockchip_usbdp dw_mmc_rockchip fwnode_mdio ehci_platform typec phy_rockchip_samsung_hdptx phy_rockchip_naneng_combphy rk808_regulator pwm_rockchip dwc3 dw_wdt libphy fan53555 ohci_platform sdhci ehci_hcd ulpi rtc_pcf8523 dw_mmc_pltfm udc_core ohci_hcd dw_mmc cqhci mdio_bus rockchip_dfi rockchipdrm dw_hdmi_qp analogix_dp i2c_rk3x usbcore phy_rockchip_inno_usb2 dw_mipi_dsi dw_mipi_dsi2 usb_common cpufreq_dt drm_dp_aux_bus [last unloaded: mt76x2u]
[  120.036150] Sending NMI from CPU 4 to CPUs 5:
  • The results are similar if I uncomment the unloading of the dwc3 module
set -x
echo platform >  /sys/power/pm_test
echo reboot > /sys/power/disk
sleep 2
rmmod mt76x2u
sleep 2
#rmmod dwc3
#sleep 2
echo disk >  /sys/power/state
sleep 2
#modprobe dwc3
#sleep 2
modprobe mt76x2u
  • Unsurprisingly, if I try an actual resume (instead of a "platform" test), I get the same messages about "Phy link never came up" and the system needs a hard reboot after trying to resume.

  • Barring inspiration, my next move will be to report my lack of success to the appropriate kernel mailing list(s).

previous episode

30 July, 2025 03:33PM

hackergotchi for Bits from Debian

Bits from Debian

New Debian Developers and Maintainers (May and June 2025)

The following contributors got their Debian Developer accounts in the last two months:

  • Cordell Bloor (cgmb)
  • Enkelena Haxhija (enkelenah)

The following contributors were added as Debian Maintainers in the last two months:

  • Karsten Schöke
  • Lorenzo Puliti
  • Nick Rosbrook
  • Nicolas Peugnet
  • Yifei Zhan
  • Glenn Strauss
  • Fab Stz
  • Matheus Polkorny
  • Manuel Elias Guerra Figueroa

Congratulations!

30 July, 2025 12:00PM by Jean-Pierre Giraud

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Superimposed codes, take three

After I wrote last week that OEIS A286874 would stop at a(12) and that computing (verifying) a(13) would take about 4–5000 CPU years, the changes have finally been approved, and… the sequence includes a(13) = 26. What happened?

Well, first of all, I am indeed not a mathematical genius (the last post even forgot the “not”); I had a stupid conversion error in the estimation, causing a factor 25 or so. But the rest came from actual speedups.

First of all, I improved one of the existing symmetry detectors a bit (the one described last in the previous post was not fully rejecting the possible symmetries when multiple new bits were introduced in one value). But I also made a more universal symmetry detector; if switching the order of certain neighboring bits and re-sorting the sequence made it lexicographically smaller, then we can abort the search. This is pretty expensive and only rejects ~5% of candidates, so it's only worth it at higher levels, but it's much cheaper than checking all n! arbitrary permutations and catches maybe 90% of a full rejection. (Also, if you can reject 5% at multiple levels, those percentages tend to add up. We're down from hundreds of thousands of duplicate solutions, to only a bit over 100, so the amount of speedup available from reducing symmetries is rapidly dwindling.)

Also, surprisingly to me, before going on to run the next level, doing a population count to check if there were too few bits to ever be a solution was seemingly a large win (e.g. we have three values so far, but only 21 bits left; we can never generate a sequence larger than 24 even if all the stars align, and can abort immediately). You would think that this counting, which takes very real CPU time even with vectorization, wouldn't be worth it compared to just running through the base layers of the recursion very quickly, but evidently, it is by a large margin. I guess it's a very common case to have many more than 1 bit left but less than 26-n, and it also means you can just stop iterating a bit before you get to the end.

But perhaps the most impactful optimization was a microoptimization. Recall that we spent most of our time ANDing 8192-bit vectors (which would be 16384-bit vectors for a(13)) with each other. Some looking at performance metrics suggested that the RAM bandwidth was completely maxed out, with ~80% of theoretical bandwidth in use; only faster RAM or more memory channels would have made a reasonable dent in the performance of this kind of architecture.

But pretty early, most of those bits will be zero. If you've already decided on the first five values in a sequence, you will not have 8187 options left; in most cases, you'll have more like 3–400. And since the bit sets only ever shrink, we can simply compress away all those known zeros. For most of our purposes, it doesn't really decide what each bit signifies (an important exception is the point where we have a valid solution and need to print it out, but it's not hard to store the mapping), as we mostly use the values for looking up pregenerated vectors to AND together. This means that when we start a new sub-job, we can find which future values are possible, and then map those into new numbers 0 through 511 (or whatever). This means we can use 512-bit vectors instead of 8192-bit vectors, with all the obvious advantages; less ALU work, less memory traffic, and better cache locality. (It's interesting that we started by being extremely ALU-bound, then moved to being very RAM-bound, and then ended up in fairly normal territory.)

Of course, there are situations where you could have more than 512 valid values. In that case, you can either recompile with larger bit sets (typically a multiple of 128, to get good use of SIMD), or you can split into smaller sub-jobs; find all valid ways of extending the sequence by one element (trivial; we already did that to make the bit sets), and then make one job for each. This splitting is also good for variance; no longer do you have some sub-jobs that finish in milliseconds and some that require days.

There are some downsides too, of course. In particular, we can no longer pregenerate one universal 8192*8192*8192 bit LUT (well, 8192*8191/2*8192); every sub-job needs to make its own set of LUTs before starting. But since this is O(n³) and we just cut n from 8192 to 512, it's not really a blocker (although of course far from zero); and importantly, it cuts our total RAM usage. For n=8192, we already needed a bit over 32 GB (though sharable between all jobs), and each next element in the sequence (a(13), a(14), etc.) is a factor 8 extra, so it starts becoming a real problem fast. But on the flipside, I think this extra precalc makes the algorithm much less amenable to a theoretical GPU implementation (~8 MB private data per instance, as opposed to one large shared static pool of constants and then just 1 kB of state per instance), which would otherwise be nontrivial but probably possible (the problem itself is so parallel). Interestingly enough, it's possible to use bitslicing to speed up this precalc, which is a technique I cannot remember when I last used.

All in all, it took only about 114 CPU-days (or, well, thread-days, as hyperthreading now makes sense again) to calculate a(13), which was eminently possible; and many of the optimizations came late in the process, so a rerun would be faster than that. So, could we get to a(14)? Well, maybe. I'm less convinced that it would be impossible than I was with a(13) earlier. :-) But I started looking at it, and it turns out there are literally trillions (possibly more!) of sub-jobs if you want to split deeply enough to get each down into the 512-bit range. And even at ~8 ms per core per job (ignoring the cost of splitting and just looking at the cost of processing the jobs themselves), it just becomes too unwieldy for me, especially since Postgres isn't really that great at storing billions of rows efficiently. But impossible? Definitely not.

30 July, 2025 08:32AM

July 28, 2025

Dimitri John Ledkov

Achieving actual full disk encryption of the UEFI ESP at rest with TCG OPAL, FIPS, LUKS

Achieving full disk encryption using FIPS, TCG OPAL and LUKS to encrypt the UEFI ESP on bare metal and in VMs

Many security standards, such as CIS and STIG, require protecting information at rest. For example, NIST SP 800-53r5 SC-28 advocates using cryptographic protection, offline storage and TPMs to enhance protection of information confidentiality and/or integrity.

Traditionally, to satisfy such controls on portable devices such as laptops, one would use software-based full disk encryption - Mac OS X FileVault, Windows BitLocker, Linux cryptsetup LUKS2. In cases where FIPS cryptography is required, an additional burden is placed on these systems to operate their kernels in FIPS mode.

The Trusted Computing Group works on establishing many industry standards and specifications, which are widely adopted to improve the safety and security of computing whilst keeping it easy to use. One of their most famous specifications is TCG TPM 2.0 (Trusted Platform Module). TPMs are now widely available on most devices and help to protect secret keys and attest systems. For example, most software full disk encryption solutions can use a TCG TPM to store full disk encryption keys, providing passwordless, biometric or PIN-based ways to unlock the drives, as well as attesting that systems have not been modified or compromised whilst offline.

The TCG Storage Security Subsystem Class: Opal Specification is a set of specifications for features of data storage devices. The authors of and contributors to OPAL include leading and well-trusted storage manufacturers such as Samsung, Western Digital, Seagate Technologies, Dell, Google, Lenovo, IBM and Kioxia, among others. One of the features the Opal Specification enables is self-encrypting drives, which become very powerful when combined with pre-boot authentication. Out of the box, such drives always and transparently encrypt all disk data using hardware acceleration. To protect data, one can enter the UEFI firmware setup (BIOS) and set an NVMe single user password (or user + administrator/recovery passwords) to encrypt the disk encryption key. If one's firmware didn't come with such features, one can also use SEDutil to inspect and configure all of this. The latest releases of major Linux distributions already package SEDutil.

Once a password is set, pre-boot authentication will ask for it on startup, prior to booting any operating system. This means the full disk is actually encrypted, including the UEFI ESP and, in the case of dual- or multi-boot installations, all installed operating systems. It also prevents tampering with the ESP, UEFI bootloaders and kernels, which with traditional software-based encryption often remain unencrypted and accessible. And it means one doesn't have to do special OS-level repartitioning or installation steps to ensure all data is encrypted at rest.

What about FIPS compliance? Well, the good news is that the majority of OPAL-compliant hard drives and/or their security sub-chips do have FIPS 140-3 certification, meaning they have been tested by independent laboratories to ensure they do in fact encrypt data. On the CMVP website one can search for module names containing "OPAL" or "NVMe", or for the name of a hardware vendor, to locate FIPS certificates.

Are such drives widely available? Yes. For example, a common Thinkpad X1 gen 11 has OPAL NVMe drives as standard, and they have FIPS certification too. Thus, it is likely that these are already widely available in your hardware fleet. Use sedutil to check if the MediaEncrypt and LockingSupported features are available.

Well, this is great for laptops and physical servers, but you may ask - what about public or private cloud? Actually, more or less the same is already in place in both. On the CMVP website, all major clouds have their disk encryption hardware certified, and all of them always encrypt all virtual machines with FIPS-certified cryptography, without an ability to opt out. One is, however, in full control of how the encryption keys are managed: cloud-provider-managed or self-managed (either with a cloud HSM or KMS, or bring your own / external). See the relevant encryption options and key management docs for GCP, Azure and AWS. But the key takeaway is that, without doing anything, VMs in the public cloud are always encrypted at rest and satisfy NIST SP 800-53 controls.

What about private cloud? Most Linux-based private clouds ultimately use QEMU, typically with qcow2 virtual disk images. QEMU supports user-space encryption of qcow2 disks; see this manpage. Such encryption covers the full virtual machine disk, including the bootloader and the ESP, and it is handled entirely outside of the VM on the host - meaning the VM never has access to the disk encryption keys. QEMU implements this encryption entirely in userspace using GnuTLS, Nettle or libgcrypt, depending on how it was compiled. This also means one can satisfy FIPS requirements entirely in userspace, without a Linux kernel in FIPS mode. Higher-level APIs built on top of QEMU, such as libvirt and OpenStack Cinder, also support qcow2 disk encryption.

If you read the docs carefully, you may notice that agent support is sometimes explicitly called out as not supported, or not mentioned at all. Quite often, agents running inside the OS do not have enough observability to assess whether there is external encryption. This means that monitoring the above encryption options requires different approaches - for example, monitoring your cloud configuration using tools such as Wiz and Orca, rather than using agents inside individual VMs. For laptop / endpoint security agents, I do wish they would start gaining the capability to report OPAL SED availability and whether it is active or not.

What about using software encryption on top of the above solutions nonetheless? This is commonly referred to as double or multiple encryption. There will be an additional performance impact, but it can be worthwhile. It really depends on what you define as data at rest for yourself and which controls you need. If one has a dual-boot laptop and wants to keep one OS encrypted whilst booted into the other, it can be perfectly reasonable to encrypt the two using separate software encryption keys, in addition to the OPAL encryption of the ESP. For more targeted per-file / per-folder encryption, one can look into using gocryptfs, which is the best successor to the once popular but now deprecated eCryptfs (an amazing tool, but it has fallen behind in development and can lead to data loss).

All of the above mostly concerns cryptographic encryption, which only provides confidentiality, not data integrity. To protect integrity, one needs to choose how to maintain it. dm-verity is a good choice for read-only and rigid installations. For read-write workloads, it may be easier to deploy ZFS or Btrfs instead. If one is using a filesystem without built-in integrity support, such as XFS or Ext4, one can retrofit an integrity layer by using dm-integrity (either standalone, or via LUKS/cryptsetup's --integrity option).

If one has a large estate and a lot of encryption keys to keep track of, a key management solution is likely needed. The most popular solution is likely the one from Thales Group marketed as the CipherTrust Data Security Platform (previously Vormetric), but there are many others, including OEM / vendor / hardware / cloud specific or agnostic solutions.

I hope this crash-course guide piques your interest in learning about and discovering modern confidentiality and integrity solutions, and in re-affirming or changing your existing controls w.r.t. data protection at rest.

Full disk encryption, including the UEFI ESP (/boot/efi), is now widely achievable by default both on bare-metal machines and in VMs, including with FIPS certification. To discuss more, let's connect on LinkedIn.

28 July, 2025 11:13AM by Dimitri John Ledkov (noreply@blogger.com)

Russ Allbery

Review: Cyteen

Review: Cyteen, by C.J. Cherryh

Series: Cyteen #1
Publisher: Warner Aspect
Copyright: 1988
Printing: September 1995
ISBN: 0-446-67127-4
Format: Trade paperback
Pages: 680

The main text below is an edited version of my original review of Cyteen written on 2012-01-03. Additional comments from my re-read are after the original review.

I've reviewed several other C.J. Cherryh books somewhat negatively, which might give the impression I'm not a fan. That is an artifact of when I started reviewing. I first discovered Cherryh with Cyteen some 20 years ago, and it remains one of my favorite SF novels of all time. After finishing my reading for 2011, I was casting about for what to start next, saw Cyteen on my parents' shelves, and decided it was past time for my third reading, particularly given the recent release of a sequel, Regenesis.

Cyteen is set in Cherryh's Alliance-Union universe following the Company Wars. It references several other books in that universe, most notably Forty Thousand in Gehenna but also Downbelow Station and others. It also has mentions of the Compact Space series (The Pride of Chanur and sequels). More generally, almost all of Cherryh's writing is loosely tied together by an overarching future history. One does not need to read any of those other books before reading Cyteen; this book will fill you in on all of the politics and history you need to know. I read Cyteen first and have never felt the lack.

Cyteen was at one time split into three books for publishing reasons: The Betrayal, The Rebirth, and The Vindication. This is an awful way to think of the book. There are no internal pauses or reasonable volume breaks; Cyteen is a single coherent novel, and Cherryh has requested that it never be broken up that way again. If you happen to find all three portions as your reading copy, they contain all the same words and are serviceable if you remember it's a single novel under three covers, but I recommend against reading the portions in isolation.

Human colonization of the galaxy started with slower-than-light travel sponsored by the private Sol Corporation. The inhabitants of the far-flung stations and the crews of the merchant ships that supplied them have formed their own separate cultures, but initially remained attached to Earth. That changed with the discovery of FTL travel and a botched attempt by Earth to reassert its authority. At the time of Cyteen, there are three human powers: distant Earth (which plays little role in this book), the merchanter Alliance, and Union.

The planet Cyteen is one of only a few Earth-like worlds discovered by human expansion, and is the seat of government and the most powerful force in Union. This is primarily because of Reseune: the Cyteen lab that produces the azi.

If Cyteen is about any one thing, it's about azi: genetically engineered human clones who are programmed via intensive psychological conditioning starting before birth. The conditioning uses a combination of drugs to make them receptive and "tape," specific patterns of instruction and sensory stimulation. They are designed for specific jobs or roles, they're conditioned to be obedient to regular humans, and they're not citizens. They are, in short, slaves.

In a lot of books, that's as deep as the analysis would go. Azi are slaves, and slavery is certainly bad, so there would probably be a plot around azi overthrowing their conditioning, or around the protagonists trying to free them from servitude. But Cyteen is not just any SF novel, and azi are considerably more complex and difficult than that analysis. We learn over the course of the book that the immensely powerful head of Reseune Labs, Ariane Emory, has a specific broader purpose in mind for the azi. One of the reasons why Reseune fought for and gained the role of legal protector of all azi in Union, regardless of where they were birthed, is so that Reseune could act to break any permanent dependence on azi as labor. And yet, they are slaves; one of the protagonists of Cyteen is an experimental azi, which makes him the permanent property of Reseune and puts him in constant jeopardy of being used as a political prisoner and lever of manipulation against those who care about him.

Cyteen is a book about manipulation, about programming people, about what it means to have power over someone else's thoughts, and what one can do with that power. But it's also a book about connection and identity, about what makes up a personality, about what constitutes identity and how people construct the moral codes and values that they hold at their core. It's also a book about certainty. Azi are absolutely certain, and are capable of absolute trust, because that's part of their conditioning. Naturally-raised humans are not. This means humans can do things that azi can't, but the reverse is also true. The azi are not mindless slaves, nor are they mindlessly programmed, and several of the characters, both human and azi, find a lot of appeal in the core of certainty and deep self-knowledge of their own psychological rules that azis can have. Cyteen is a book about emotions, and logic, and where they come from and how to balance them. About whether emotional pain and uncertainty is beneficial or damaging, and about how one's experiences make up and alter one's identity.

This is also a book about politics, both institutional and personal. It opens with Ariane Emory, Councilor for Science for five decades and the head of the ruling Union Expansionist party. She's powerful, brilliant, dangerously good at reading people, and dangerously willing to manipulate and control people for her own ends. What she wants, at the start of the book, is to completely clone a Special (the legal status given to the most brilliant minds of Union). This was attempted before and failed, but Ariane believes it's now possible, with a combination of tape, genetic engineering, and environmental control, to reproduce the brilliance of the original mind. To give Union another lifespan of work by their most brilliant thinkers.

Jordan Warrick, another scientist at Reseune, has had a long-standing professional and personal feud with Ariane Emory. As the book opens, he is fighting to be transferred out from under her to the new research station that would be part of the Special cloning project, and he wants to bring his son Justin and Justin's companion azi Grant with them. Justin is a PR, a parental replicate, meaning he shares Jordan's genetic makeup but was not an attempt to reproduce the conditions of Jordan's rearing. Grant was raised as his brother. And both have, for reasons that are initially unclear, attracted the attention of Ariane, who may be using them as pawns.

This is just the initial setup, and along with this should come a warning: the first 150 pages set up a very complex and dangerous political situation and build the tension that will carry the rest of the book, and they do this by, largely, torturing Justin and Grant. The viewpoint jumps around, but Justin and Grant are the primary protagonists for this first section of the book. While one feels sympathy for both of them, I have never, in my multiple readings of the book, particularly liked them. They're hard to like, as opposed to pity, during this setup; they have very little agency, are in way over their heads, are constantly making mistakes, and are essentially having their lives destroyed.

Don't let this turn you off on the rest of the book. Cyteen takes a dramatic shift about 150 pages in. A new set of protagonists are introduced who are some of the most interesting, complex, and delightful protagonists in any SF novel I have read, and who are very much worth waiting for. While Justin has his moments later on (his life is so hard that his courage can be profoundly moving), it's not necessary to like him to love this book. That's one of the reasons why I so strongly dislike breaking it into three sections; that first section, which is mostly Justin and Grant, is not representative of the book.

I can't talk too much more about the plot without risking spoiling it, but it's a beautiful, taut, and complex story that is full of my favorite things in both settings and protagonists. Cyteen is a book about brilliant people who think on their feet. Cherryh succeeds at showing this through what they do, which is rarely done as well as it is here. It's a book about remembering one's friends and remembering one's enemies, and waiting for the most effective moment to act, but it also achieves some remarkable transformations. About 150 pages in, you are likely to loathe almost everyone in Reseune; by the end of the book, you find yourself liking, or at least understanding, nearly everyone. This is extremely hard, and Cherryh pulls it off in most cases without even giving the people she's redeeming their own viewpoint sections. Other than perhaps George R.R. Martin I've not seen another author do this as well.

And, more than anything else, Cyteen is a book with the most wonderful feeling of catharsis. I think this is one of the reasons why I adore this book and have difficulties with some of Cherryh's other works. She's always good at ramping up the tension and putting her characters in awful, untenable positions. Less frequently does she provide the emotional payoff of turning the tables, where you get to watch a protagonist do everything you've been wanting them to do for hundreds of pages, except even better and more delightfully than you would have come up with. Cyteen is one of the most emotionally satisfying books I've ever read.

I could go on and on; there is just so much here that I love. Deep questions of ethics and self-control, presented in a way that one can see the consequences of both bad decisions and good ones and contrast them. Some of the best political negotiations in fiction. A wonderful look at friendship and loyalty from several directions. Two of the best semi-human protagonists I've seen, who one can see simultaneously as both wonderful friends and utterly non-human and who put nearly all of the androids in fiction to shame by being something trickier and more complex. A wonderful unfolding sense of power. A computer that can somewhat anticipate problems and somewhat can't, and that encapsulates much of what I love about semi-intelligent bases in science fiction. Cyteen has that rarest of properties of SF novels: Both the characters and the technology meld in a wonderful combination where neither could exist without the other, where the character issues are illuminated by the technology and the technology supports the characters.

I have, for this book, two warnings. The first, as previously mentioned, is that the first 150 pages of setup is necessary but painful to read, and I never fully warmed to Justin and Grant throughout. I would not be surprised to hear that someone started this book but gave up on it after 50 or 100 pages. I do think it's worth sticking out the rocky beginning, though. Justin and Grant continue to be a little annoying, but there's so much other good stuff going on that it doesn't matter.

The other warning is that part of the setup of the story involves the rape of an underage character. This is mostly off-camera, but the emotional consequences are significant (as they should be) and are frequently discussed throughout the book. There is also rather frank discussion of adolescent sexuality later in the book. I think both of these are relevant to the story and handled in a way that isn't gratuitous, but they made me uncomfortable and I don't have any past history with those topics.

Those warnings notwithstanding, this is simply one of the best SF novels ever written. It uses technology to pose deep questions about human emotions, identity, and interactions, and it uses complex and interesting characters to take a close look at the impact of technology on lives. And it does this with a wonderfully taut, complicated plot that sustains its tension through all 680 pages, and with characters whom I absolutely love. I have no doubt that I'll be reading it for a fourth and fifth time some years down the road.

Followed by Regenesis, although Cyteen stands well entirely on its own and there's no pressing need to read the sequel.

Rating: 10 out of 10


Some additional thoughts after re-reading Cyteen in 2025:

I touched on this briefly in my original review, but I was really struck during this re-read how much the azi are a commentary on and a complication of the role of androids in earlier science fiction. Asimov's Three Laws of Robotics were an attempt to control the risks of robots, but can also be read as turning robots into slaves. Azis make the slavery more explicit and disturbing by running the programming on a human biological platform, but they're more explicitly programmed and artificial than a lot of science fiction androids.

Artificial beings and their relationship to humans have been a recurring theme of SF since Frankenstein, but I can't remember a novel that makes the comparison to humans this ambiguous and conflicted. The azi not only like being azi, they can describe why they prefer it. It's clear that Union made azi for many of the same reasons that humans enslave other humans, and that Ariane Emory is using them as machinery in a larger (and highly ethically questionable) plan, but Cherryh gets deeper into the emergent social complications and societal impact than most SF novels manage. Azi are apparently closer to humans than the famous SF examples such as Commander Data, but the deep differences are both more subtle and more profound.

I've seen some reviewers who are disturbed by the lack of a clear moral stance by the protagonists against the creation of azi. I'm not sure what to think about that. It's clear the characters mostly like the society they've created, and the groups attempting to "free" azi from their "captivity" are portrayed as idiots who have no understanding of azi psychology. Emory says she doesn't want azi to be a permanent aspect of society but clearly has no intention of ending production any time soon. The book does seem oddly unaware that the production of azi is unethical per se and, unlike androids, has an obvious exit ramp: Continue cloning gene lines as needed to maintain a sufficient population for a growing industrial civilization, but raise the children as children rather than using azi programming. If Cherryh included some reason why that was infeasible, I didn't see it, and I don't think the characters directly confronted it.

I don't think societies in books need to be ethical, or that Cherryh intended to defend this one. There are a lot of nasty moral traps that civilizations can fall into that make for interesting stories. But the lack of acknowledgment of the problem within the novel did seem odd this time around.

The other part of this novel that was harder to read past in this re-read is the sexual ethics. There's a lot of adolescent sexuality in this book, and even apart from the rape scene — which was more on-the-page than I had remembered and which is quite (intentionally) disturbing — there is a whole lot of somewhat dubious consent. Maybe I've gotten older or just more discriminating, but it felt weirdly voyeuristic to know this much about the sex lives of characters who are, at several critical points in the story, just a bunch of kids.

All that being said, and with the repeated warning that the first 150 pages of this novel are just not very good, there is still something magic about the last two-thirds of this book. It has competence porn featuring a precociously brilliant teenager who I really like, it has one of the more interesting non-AI programmed computer systems that I've read in SF, it has satisfying politics that feel like modern politics (media strategy and relationships and negotiated alliances, rather than brute force and ideology), and it has a truly excellent feeling of catharsis. The plot resolution is a bit too abrupt and a bit insufficiently explained (there's more in Regenesis), but even though this was my fourth time through this book, the pacing grabbed me again and I could barely put down the last part of the story.

Ethics aside (and I realize that's quite the way to start a sentence), I find the azi stuff fascinating. I know the psychology in this book is not real and is hopelessly simplified compared to real psychology, but there's something in the discussions of value sets and flux and self-knowledge that grabs my interest and makes me want to ponder. I think it's the illusion of simplicity and control, the what-if premise of thought where core motivations and moral rules could be knowable instead of endlessly fluid the way they are in us humans. Cherryh's azi are some of the most intriguing androids in science fiction to me precisely because they don't start with computers and add the humanity in, but instead start with humanity and overlay a computer-like certainty of purpose that's fully self-aware. The result is more subtle and interesting than anything Star Trek managed.

I was not quite as enamored with this book this time around, but it's still excellent once the story gets properly started. I still would recommend it, but I might add more warnings about the disturbing parts.

Re-read rating: 9 out of 10

28 July, 2025 03:53AM

July 27, 2025

Review: The Dragon's Banker

Review: The Dragon's Banker, by Scott Warren

Publisher: Scott Warren
Copyright: September 2019
ISBN: 0-578-55292-2
Format: Kindle
Pages: 263

The Dragon's Banker is a self-published stand-alone fantasy novel, set in a secondary world with roughly Renaissance levels of technology and primarily alchemical magic. The version I read includes an unrelated novelette, "Forego Quest." I have the vague impression that this novel shares a world with other fantasy novels by the same author, but I have not read them and never felt like I was missing something important.

Sailor Kelstern is a merchant banker. He earns his livelihood by financing caravans and sea voyages and taking a cut of the profits. He is not part of the primary banking houses of the city; instead, he has a small, personal business with a loyal staff that looks for opportunities the larger houses may have overlooked. As the story opens, he has fallen on hard times due in part to a spectacular falling-out with a previous client and is in desperate need of new opportunities. The jewel-bedecked Lady Arkelai and her quest for private banking services for her father, Lord Alkazarian, may be exactly what he needs. Or it may be a dangerous trap; Sailor has had disastrous past experience with nobles attempting to strong-arm him into their service.

Unbeknownst to Sailor, Lord Alkazarian is even more dangerous than he first appears. He is sitting on a vast hoard of traditional riches whose value is endangered by the rise of new-fangled paper money. He is not at all happy about this development. He is also a dragon.

I, and probably many other people who read this book, picked it up because it was recommended by Matt Levine as a fantasy about finance instead of the normal magical adventuring. I knew it was self-published going in, so I wasn't expecting polished writing. My hope was for interesting finance problems in a fantasy context, similar to the kind of things Matt Levine's newsletter is about: schemes for financing risky voyages, complications around competing ideas of money, macroeconomic risks from dragon hoards, complex derivatives, principal-agent problems, or something similar that goes beyond the (annoyingly superficial) treatment of finance in most fantasy novels.

Unfortunately, what I got was a rather standard fantasy setting and a plot that revolves mostly around creative uses for magical devices, some conventional political skulduggery, and a lot of energetic but rather superficial business hustling. The protagonist is indeed a merchant banker who is in no way a conventional fantasy hero (one of the most taxing parts of Sailor's occasional visits to the dragon is the long hike down to the hoard, or rather the long climb back out), but the most complex financial instrument that appears in this book is straightforward short-selling. Alas. I was looking forward to the book that I hoped this was.

Given my expectations, this was a disappointment. I kept waiting for the finances to get more complicated and interesting, and that kept not happening. Without that expectation, this is... okay, I guess. The writing is adequate but kind of stilted, presumably in an effort to make it sound slightly archaic, and has a strong self-published feel. Sailor is not a bad protagonist, but neither is he all that memorable. I did like some of the world-building, which has an attention to creative uses of bits of magic that readers who like gadget fantasy may appreciate. There are a lot of plot conveniences and coincidences, though, and very little of this is going to feel original to a long-time fantasy reader.

Putting some of the complexity of real Renaissance banking and finance systems into a fantasy world is a great idea, but I've yet to read one that lived up to the potential of the premise. (Neal Stephenson's Baroque Cycle comes the closest; unfortunately, the non-economic parts of that over-long series are full of Stephenson's worst writing habits.) Part of the problem is doubtless that I am reasonably well-read in economics, so my standards are high. Maybe the average reader would be content with a few bits on the perils of investment, a simple treatment of trust in currency, and a mention or two of short-selling, which is what you get in this book.

I am not altogether sorry that I read this, but I wouldn't recommend it. I encourage Matt Levine to read more genre fiction and find some novels with more interesting financial problems!

"Forego Quest": This included novelette, on the other hand, was surprisingly good and raised my overall rating for the book by a full point.

Arturus Kingson is the Chosen One. He is not the Chosen One of a single prophecy or set of prophecies; no, he's the Chosen One of, apparently, all of them, no matter how contradictory, and he wants absolutely nothing to do with any of them. Magical swords litter his path. He has so many scars and birthmarks that they look like a skin condition. Beautiful women approach him in bars. Mysterious cloaked strangers die dramatically in front of him. Owls try to get into his bedroom window. It's all very exhausting, since the universe absolutely refuses to take no for an answer.

There isn't much more to the story than this, but Warren writes it in the first person with just the right tone of exasperated annoyance and gives Arturus a real problem to solve and enough of a plot to provide some structure. I'm usually not a fan of parody stories because too many of them feel like juvenile slapstick. This one is sarcastic instead, which is much more to my taste.

"Forego Quest" goes on perhaps a bit too long, and the ending was not as successful as the rest of the book, but this was a lot of fun and made me laugh. (7)

Rating: 6 out of 10

27 July, 2025 03:47AM

July 26, 2025

hackergotchi for Bits from Debian

Bits from Debian

DebConf25 closes in Brest and DebConf26 announced

DebConf25 group photo - click to enlarge

On Saturday 19 July 2025, the annual Debian Developers and Contributors Conference came to a close.

Over 443 attendees representing 50 countries from around the world came together for a combined 169 events (including some which took place during the DebCamp) including more than 50 Talks, 39 Short Talks, 5 Discussions, 59 Birds of a Feather sessions ("BoF" – informal meeting between developers and users), 10 workshops, and activities in support of furthering our distribution and free software, learning from our mentors and peers, building our community, and having a bit of fun.

The conference was preceded by the annual DebCamp hacking session held 7 through 13 July where Debian Developers and Contributors convened to focus on their individual Debian-related projects or work in team sprints geared toward in-person collaboration in developing Debian.

This year, a session was dedicated to preparing the BoF "Dealing with Dormant Packages: Ensuring Debian's High Standards"; another, at the initiative of our DPL, to preparing suggestions for the BoF "Package Acceptance in Debian: Challenges and Opportunities"; and an afternoon was devoted to Salsa-CI.

As has been the case for several years, a special effort was made to welcome newcomers and help them become familiar with Debian and DebConf by organizing a "New Contributors Onboarding" sprint every day of DebCamp, followed more informally by mentorship during DebConf.

The actual Debian Developers Conference started on Monday 14 July 2025.

In addition to the traditional "Bits from the DPL" talk, the continuous key-signing party, lightning talks, and the announcement of next year's DebConf26, there were several update sessions shared by internal projects and teams.

Many of the hosted discussion sessions were presented by our technical core teams with the usual and useful "Meet the Technical Committee", the "What's New in the Linux Kernel" session, and a set of BoFs about Debian packaging policy and Debian infrastructure. Thus, more than a quarter of the discussions dealt with this theme, including talks about our tools and Debian's archive processes. Internationalization and Localization have been the subject of several talks. The Python, Perl, Ruby, Go, and Rust programming language teams also shared updates on their work and efforts. Several talks have covered Debian Blends and Debian-derived distributions and other talks addressed the issue of Debian and AI.

More than 17 BoFs and talks about community, diversity, and local outreach highlighted the work of various teams involved in not just the technical but also the social aspect of our community; four women who have made contributions to Debian through their artwork in recent years presented their work.

The one-day session "DebConf 2025 Academic Track!", organized in collaboration with the IRISA laboratory, was the first session welcoming fellow academics at DebConf, bringing together around ten presentations.

The schedule was updated each day with planned and ad hoc activities introduced by attendees over the course of the conference. Several traditional activities took place: a job fair, a poetry performance, the traditional Cheese and Wine party (this year with cider as well), the Group Photos, and the Day Trips.

For those who were not able to attend, most of the talks and sessions were broadcast live and recorded; currently the videos are made available through this link.

Almost all of the sessions facilitated remote participation via IRC and Matrix messaging apps or online collaborative text documents which allowed remote attendees to "be in the room" to ask questions or share comments with the speaker or assembled audience.

DebConf25 saw over 441 T-shirts, 3 day trips, and up to 315 meals planned per day.

All of these events, activities, conversations, and streams coupled with our love, interest, and participation in Debian and F/OSS certainly made this conference an overall success both here in Brest, France and online around the world.

The DebConf25 website will remain active for archival purposes and will continue to offer links to the presentations and videos of talks and events.

Next year, DebConf26 will be held in Santa Fe, Argentina, likely in July. Following tradition, before the next DebConf the local organizers in Argentina will start the conference activities with DebCamp, with a particular focus on individual and team work towards improving the distribution.

DebConf is committed to a safe and welcoming environment for all participants. See the web page about the Code of Conduct on the DebConf25 website for more details on this.

Debian thanks the commitment of numerous sponsors to support DebConf25, particularly our Platinum Sponsors: AMD, EDF, Infomaniak, Proxmox, and Viridien.

We also wish to thank our Video and Infrastructure teams, the DebConf25 and DebConf committees, our host nation of France, and each and every person who helped contribute to this event and to Debian overall.

Thank you all for your work in helping Debian continue to be "The Universal Operating System".

See you next year!

About Debian

The Debian Project was founded in 1993 by Ian Murdock to be a truly free community project. Since then the project has grown to be one of the largest and most influential Open Source projects. Thousands of volunteers from all over the world work together to create and maintain Debian software. Available in 70 languages, and supporting a huge range of computer types, Debian calls itself the universal operating system.

About DebConf

DebConf is the Debian Project's developer conference. In addition to a full schedule of technical, social and policy talks, DebConf provides an opportunity for developers, contributors and other interested people to meet in person and work together more closely. It has taken place annually since 2000 in locations as varied as Scotland, Bosnia and Herzegovina, India, and Korea. More information about DebConf is available from https://debconf.org/.

About AMD

The AMD ROCm platform includes programming models, tools, compilers, libraries, and runtimes for AI and HPC solution development on AMD GPUs. Debian is an officially supported platform for AMD ROCm and a growing number of components are now included directly in the Debian distribution. For more than 55 years AMD has driven innovation in high-performance computing, graphics and visualization technologies. AMD is deeply committed to supporting and contributing to open-source projects, foundations, and open-standards organizations, taking pride in fostering innovation and collaboration within the open-source community.

About EDF

EDF is a leading global utility company focused on low-carbon power generation. The group uses advanced engineering and scientific computing tools to drive innovation and efficiency in its operations, especially in nuclear power plant design and safety assessment. Since 2003, the EDF Group has been using Debian as its main scientific computing environment. Debian's focus on stability and reproducibility ensures that EDF's calculations and simulations produce consistent and accurate results.

About Infomaniak

Infomaniak is Switzerland's leading developer of Web technologies. With operations all over Europe and based exclusively in Switzerland, the company designs and manages its own data centers powered by 100% renewable energy, and develops all its solutions locally, without outsourcing. With millions of users and the trust of public and private organizations across Europe - such as RTBF, the United Nations, central banks, over 3,000 radio and TV stations, as well as numerous cities and security bodies - Infomaniak stands for sovereign, sustainable and independent digital technology. The company offers a complete suite of collaborative tools, cloud hosting, streaming, marketing and events solutions, while being owned by its employees and self-financed exclusively by its customers.

About Proxmox

Proxmox develops powerful, yet easy-to-use Open Source server software. The product portfolio from Proxmox, including server virtualization, backup, and email security, helps companies of any size, sector, or industry to simplify their IT infrastructures. The Proxmox solutions are built on Debian; we are happy that they give back to the community by sponsoring DebConf25.

About Viridien

Viridien is an advanced technology, digital and Earth data company that pushes the boundaries of science for a more prosperous and sustainable future. Viridien has been using Debian-based systems to power most of its HPC infrastructure and its cloud platform since 2009 and currently employs two active Debian Project Members.

Contact Information

For further information, please visit the DebConf25 web page at https://debconf25.debconf.org/ or send mail to press@debian.org.

26 July, 2025 09:50PM by Publicity team

Birger Schacht

My DebConf 25 review

DebConf 25 happened between 14th July and 19th July and I was there. It was my first DebConf (the big one - I was at a Mini DebConf in Hamburg a couple of years ago) and it was interesting. DebConf 25 took place on a university campus on the outskirts of Brest, and I was rather reluctant to go at first (EuroPython 25 was happening at the same time in Prague), but I decided to use the chance of DebConf happening in Europe, reachable by train from Vienna. We took the night train to Paris, found our way through the maze that is the Paris underground system, and then got to Brest with the TGV. On our way to the conference site we made a detour to a supermarket, which wasn't that easy because it was a national holiday in France and most of the shops were closed. But we weren't sure about the food situation at DebConf and we also wanted to get some beer.

At the conference we were greeted by very friendly people at the badge station and the front desk and got our badges, swag and, most importantly, the keys to pretty nice rooms on the campus. Our rooms each had a small private bathroom with a toilet and a shower, and between the two rooms was a shared kitchen with a refrigerator and a microwave. All in all, the accommodation was simple but provided everything we needed, especially a space to have some privacy.

During the next days I watched a lot of talks, met new people, caught up with old friends and also had a nice time with my travel buddies. There was a beach near the campus which I used nearly every day. It was mostly sunny except for the last day of the conference, which apparently was not common for the Brest area, so we got lucky regarding the weather.

Landscape view of the sea at Dellec beach

Given that we only arrived in the evening of the first day of DebConf, I missed the talk When Free Software Communities Unite: Tails, Tor, and the Fight for Privacy (recording), but I watched it on the way home and it was also covered by LWN.

On Tuesday I started the day by attending a talk about tag2upload (recording). The same day there was also an academic track and I watched the talk titled Integrating Knowledge Graphs into the Debian Ecosystem (recording) which presented a property graph showing relationships between various entities like packages, maintainers or bugs (there is a repository with parts of a paper, but not much other information). The speaker also mentioned the graphcast framework and the ontocast framework which sound interesting - we might have use for something like this at $dayjob.

In the afternoon there was a talk about the ArchWiki (recording) which gave a comprehensive insight into how the ArchWiki and the community behind it work. Right after that was a Debian Wiki BoF. There are various technical limitations with the current wiki software and there are not enough helping hands to maintain the service and do content curation. But the BoF had some nice results: there is now a new debian-wiki mailing list and an IRC channel, a MediaWiki installation was set up during DebConf, there are efforts to migrate the data and, most importantly, there is a handful of people who want to maintain the service and organize the content of the wiki. I think the input from the ArchWiki folks gave some ideas of how that team could operate.

Tag at the wall at Dellec beach

Wednesday was the day of the daytrip. I did not sign up for any of the trips and used the time to try out tag2upload, uploaded the latest labwc release to experimental and spent the rest of the day at the beach.

Other noteworthy sessions I attended were the Don't fear the TPM talk (recording), which showed me a lot of stuff to try out, the session about lintian-ng (no recording), an experimental approach to making lintian faster, the review of the first year of wcurl's existence (no recording yet) and the summary of Rust packaging in Debian (no recording yet). In between the sessions I started working on packaging wlr-sunclock (#1109230).

What did not work

Vegan food.

I might be spoiled by other conferences. Both at EuroPython last year (definitely bigger, a lot more commercial) and at PyCon CZ 23 (similar in size, a lot more DIY) there was catering with explicitly vegan options.

As I’ve mentioned in the beginning, we went to a supermarket before we went to the conference and we had to go there one more time during the conference. I think there was a mixture between a total lack of awareness and a LOT of miscommunication. The breakfasts at the conference consisted of pastries and baguettes - I asked at the first day what the vegan options were and the answer was “I don’t know, maybe the baguette?” and we were asked to only take as much baguette as the people who also got pastries.

The lunch was prepared by the "Restaurant associatif de Kernévent", a canteen on the university campus. When we asked if there was vegan food, the people there said that there was only a vegetarian option, so we only ate salad. Only later did we hear via word of mouth that one has to explicitly ask for a vegan meal, which was apparently prepared separately, and you had to find the right person who knows about it (I think that's very Debian-like 😉). But even then a person once got a vegetarian option offered as vegan food.

One problem was also the missing / confusing labeling of the food. At the conference dinner there was apparently vegan food, but it was mixed with all the other food. There were some labels but with hundreds of hungry people around and caterers removing empty plates and dropping off plates with other stuff, everything gets mixed up. In the end we ate bread soaked in olive oil, until the olive oil got taken away by the catering people literally while we were dipping the bread in it.

And when these issues were raised, some of the reactions can be summarized as “You’re holding it wrong” which was really frustrating.

The dinners at the conference hall were similar. At some point I had the impression that “vegan” and “vegetarian” was simply seen as the same thing.

Dinner menu at the conference

If the menus would be written like a debian/copyright file it would probably have looked like this:

Food: *
Diet: Vegan or Vegetarian

But the thing is that Vegan and Vegetarian cannot be mixed. It's similar to incompatible licenses: once you mix vegan food with vegetarian food, it's not vegan anymore.

Don’t get me wrong, I know it's hard to organize food for hundreds of people. But if you don't know what it means to provide a vegan option, just communicate that fact so people can look for alternatives in advance. During the week some of the vegan people shared food, which was really nice, and there were also a lot of non-vegan people who tried to help, organized extra food or simply listened to the hangry rants. Thanks for that!

Paris

Saturday was the last day of DebConf and it was a rainy day. On Sunday morning we took the TGV back to Paris and then stayed there for one night because the next night train back to Vienna was on Monday. Luckily the weather was better in Paris. The first thing we did was to look up a vegan burger place. In the evening we strolled along the Seine and had a couple of beers at the Jardins du Trocadéro. Monday the rain also arrived in Paris and we mostly went from one cafe to the next, but also managed to visit Notre Dame.

Conclusio

The next DebConf will be in Argentina and I think it's likely that DebConf 27 will also not happen anywhere within train-travelling distance. But even if it did, I think the Mini DebConfs are more my style of happening (there is one planned in Hamburg next spring, and a couple of days ago I learned that there will be a Back to the Future musical show in Hamburg during that time). Nonetheless I had a nice time and I stumbled over some projects I might get more involved in. Thanks also to my travel buddies who put up with me 😋

26 July, 2025 05:28AM

hackergotchi for Matthew Palmer

Matthew Palmer

Object deserialization attacks using Ruby's Oj JSON parser

tl;dr: there is an attack in the wild which is triggering dangerous-but-seemingly-intended behaviour in the Oj JSON parser when used in the default and recommended manner, which can lead to everyone’s favourite kind of security problem: object deserialization bugs! If you have the oj gem anywhere in your Gemfile.lock, the quickest mitigation is to make sure you have Oj.default_options = { mode: :strict } somewhere, and that no library is overwriting that setting to something else.

Prologue

As a sensible sysadmin, all the sites I run send me a notification if any unhandled exception gets raised. Mostly, what I get sent is error-handling corner cases I missed, but now and then… things get more interesting.

In this case, it was a PG::UndefinedColumn exception, which looked something like this:

PG::UndefinedColumn: ERROR:  column "xyzzydeadbeef" does not exist

This is weird on two fronts: firstly, this application has been running for a while, and if there was a schema problem, I’d expect it to have made itself apparent long before now. And secondly, while I don’t profess to perfection in my programming, I’m usually better at naming my database columns than that.

Something is definitely hinky here, so let’s jump into the mystery mobile!

The column name is coming from outside the building!

The exception notifications I get sent include a whole lot of information about the request that caused the exception, including the request body. In this case, the request body was JSON, and looked like this:

{"name":":xyzzydeadbeef", ...}

The leading colon looks an awful lot like the syntax for a Ruby symbol, but it’s in a JSON string. Surely there’s no way a JSON parser would be turning that into a symbol, right? Right?!?

Immediately, I thought that that possibly was what was happening, because I use Sequel for my SQL database access needs, and Sequel treats symbols as database column names. It seemed like too much of a coincidence that a vaguely symbol-shaped string was being sent in, and the exact same name was showing up as a column name.

But how the flying fudgepickles was a JSON string being turned into a Ruby symbol, anyway? Enter… Oj.

Oj? I barely know… aj

A long, long time ago, the “standard” Ruby JSON library had a reputation for being slow. Thus did many competitors flourish, claiming more features and better performance. Strong amongst the contenders was oj (for “Optimized JSON”), touted as “The fastest JSON parser and object serializer”. Given the history, it’s not surprising that people who wanted the best possible performance turned to Oj, leading to it being found in a great many projects, often as a sub-dependency of a dependency of a dependency (which is how it ended up in my project).

You might have noticed in Oj’s description that, in addition to claiming “fastest”, it also describes itself as an “object serializer”. Anyone who has kept an eye on the security bug landscape will recall that “object deserialization” is a rich vein of vulnerabilities to mine. Libraries that do object deserialization, especially ones with a history that goes back to before the vulnerability class was well-understood, are likely to be trouble magnets.

And thus, it turns out to be with Oj.

By default, Oj will happily turn any string that starts with a colon into a symbol:


>> require "oj"
>> Oj.load('{"name":":xyzzydeadbeef","username":"bob","answer":42}')
=> {"name"=>:xyzzydeadbeef, "username"=>"bob", "answer"=>42}

How that gets exploited is only limited by the creativity of an attacker. Which I’ll talk about more shortly – but first, a word from my rant cortex.

Insecure By Default is a Cancer

While the object of my ire today is Oj and its fast-and-loose approach to deserialization, it is just one example of a pervasive problem in software: insecurity by default. Whether it’s a database listening on 0.0.0.0 with no password as soon as it’s installed, or a library whose default behaviour is to permit arbitrary code execution, it all contributes to a software ecosystem that is an appalling security nightmare.

When a user (in this case, a developer who wants to parse JSON) comes across a new piece of software, they have – by definition – no idea what they’re doing with that software. They’re going to use the defaults, and follow the most easily-available documentation, to achieve their goal. It is unrealistic to assume that a new user of a piece of software is going to do things “the right way”, unless that right way is the only way, or at least the by-far-the-easiest way.

Conversely, the developer(s) of the software is/are the domain experts. They have knowledge of the problem domain, through their exploration while building the software, and unrivalled expertise in the codebase.

Given this disparity in knowledge, it is tantamount to malpractice for the experts – the developer(s) – to off-load the responsibility for the safe and secure use of the software to the party that has the least knowledge of how to do that (the new user).

To apply this general principle to the specific case, take the “Using” section of the Oj README. The example code there calls Oj.load, with no indication that this code will, in fact, parse specially-crafted JSON documents into Ruby objects. The brand-new user of the library, no doubt being under pressure to Get Things Done, is almost certainly going to look at this “Using” example, get the apparent result they were after (a parsed JSON document), and call it a day.

It is unlikely that a brand-new user will, for instance, scroll down to the “Further Reading” section, find the second last (of ten) listed documents, “Security.md”, and carefully peruse it. If they do, they’ll find an oblique suggestion that parsing untrusted input is “never a good idea”. While that’s true, it’s also rather unhelpful, because I’d wager that by far the majority of JSON parsed in the world is “untrusted”, in one way or another, given the predominance of JSON as a format for serializing data passing over the Internet. This guidance is roughly akin to putting a label on a car’s airbags that “driving at speed can be hazardous to your health”: true, but unhelpful under the circumstances.

The solution is for default behaviours to be secure, and any deviation from that default that has the potential to degrade security must, at the very least, be clearly labelled as such. For example, the Oj.load function should be named Oj.unsafe_load, and the Oj.load function should behave as the Oj.safe_load function does presently. By naming the unsafe function as explicitly unsafe, developers (and reviewers) have at least a fighting chance of recognising they’re doing something risky. We put warning labels on just about everything in the real world; the same should be true of dangerous function calls.

OK, rant over. Back to the story.

But how is this exploitable?

So far, I’ve hopefully made it clear that Oj does some Weird Stuff with parsing certain JSON strings. It caused an unhandled exception in a web application I run, which isn’t cool, but apart from bombing me with exception notifications, what’s the harm?

For starters, let’s look at our original example: when presented with a symbol, Sequel will interpret that as a column name, rather than a string value. Thus, if our “save an update to the user” code looked like this:


# request_body has the JSON representation of the form being submitted
body = Oj.load(request_body)
DB[:users].where(id: user_id).update(name: body["name"])

In normal operation, this will issue an SQL query along the lines of UPDATE users SET name='Jaime' WHERE id=42. If the name given is “Jaime O’Dowd”, all is still good, because Sequel quotes string values, etc etc. All’s well so far.

But, imagine there is a column in the users table that normally users cannot read, perhaps admin_notes. Or perhaps an attacker has gotten temporary access to an account, and wants to dump the user’s password hash for offline cracking. So, they send an update claiming that their name is :admin_notes (or :password_hash).

In JSON, that’ll look like {"name":":admin_notes"}, and Oj.load will happily turn that into a Ruby object of {"name"=>:admin_notes}. When run through the above “update the user” code fragment, it’ll produce the SQL UPDATE users SET name=admin_notes WHERE id=42. In other words, it’ll copy the contents of the admin_notes column into the name column – which the attacker can then read out just by refreshing their profile page.

But Wait, There’s More!

That an attacker can read other fields in the same table isn’t great, but that’s barely scratching the surface.

Remember before I said that Oj does “object serialization”? That means that, in general, you can create arbitrary Ruby objects from JSON. Since objects contain code, it’s entirely possible to trigger arbitrary code execution by instantiating an appropriate Ruby object. I’m not going to go into details about how to do this, because it’s not really my area of expertise, and many others have covered it in detail. But rest assured, if an attacker can feed input of their choosing into a default call to Oj.load, they’ve been handed remote code execution on a platter.

Mitigations

As Oj’s object deserialization is intended and documented behaviour, don’t expect a future release to make any of this any safer. Instead, we need to mitigate the risks. Here are my recommended steps:

  1. Look in your Gemfile.lock (or SBOM, if that’s your thing) to see if the oj gem is anywhere in your codebase. Remember that even if you don’t use it directly, it’s popular enough that it is used in a lot of places. If you find it in your transitive dependency tree anywhere, there’s a chance you’re vulnerable, limited only by the ingenuity of attackers to feed crafted JSON into a deeply-hidden Oj.load call.
  2. If you depend on oj directly and use it in your project, consider not doing that. The json gem is acceptably fast, and JSON.parse won’t create arbitrary Ruby objects.
  3. If you really, really need to squeeze the last erg of performance out of your JSON parsing, and decide to use oj to do so, find all calls to Oj.load in your code and switch them to call Oj.safe_load (see the sketch after this list).
  4. It is a really, really bad idea to ever use Oj to deserialize JSON into objects, as it lacks the safety features needed to mitigate the worst of the risks of doing so (for example, restricting which classes can be instantiated, as is provided by the permitted_classes argument to Psych.load). I’d make it a priority to move away from using Oj for that, and switch to something somewhat safer (such as the aforementioned Psych). At the very least, audit and comment heavily to minimise the risk of user-provided input sneaking into those calls somehow, and pass mode: :object as the second argument to Oj.load, to make it explicit that you are opting-in to this far more dangerous behaviour only when it’s absolutely necessary.
  5. To secure any unsafe uses of Oj.load in your dependencies, consider setting the default Oj parsing mode to :strict, by putting Oj.default_options = { mode: :strict } somewhere in your initialization code (and make sure no dependencies are setting it to something else later!). There is a small chance that this change of default might break something, if a dependency is using Oj to deliberately create Ruby objects from JSON, but the overwhelming likelihood is that Oj’s just being used to parse “ordinary” JSON, and these calls are just RCE vulnerabilities waiting to give you a bad time.
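
To make steps 3 and 5 concrete, here is a minimal sketch. Oj.load, Oj.safe_load and Oj.default_options are Oj’s actual API as discussed above, but the initializer location and the sample input are purely illustrative:

# config/initializers/oj.rb (hypothetical location)
require "oj"

# Make plain JSON parsing the process-wide default, so stray Oj.load calls in
# dependencies no longer instantiate arbitrary Ruby objects.
Oj.default_options = { mode: :strict }

# Untrusted input, parsed explicitly with the safe variant:
body = Oj.safe_load('{"name":":admin_notes"}')
body["name"]  # => ":admin_notes" -- a plain String, not the Symbol :admin_notes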

Is Your Bacon Saved?

If I’ve helped you identify and fix potential RCE vulnerabilities in your software, or even just opened your eyes to the risks of object deserialization, please help me out by buying me a refreshing beverage. I would really appreciate any support you can give. Alternately, if you’d like my help in fixing these (and many other) sorts of problems, I’m looking for work, so email me.

26 July, 2025 12:00AM by Matt Palmer (mpalmer@hezmatt.org)

July 23, 2025

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

qlcal 0.0.16 on CRAN: Regular Update

The sixteenth release of the qlcal package arrived at CRAN today, once again following the QuantLib 1.39 release this morning.

qlcal delivers the calendaring parts of QuantLib. It is provided (for the R package) as a set of included files, so the package is self-contained and does not depend on an external QuantLib library (which can be demanding to build). qlcal covers over sixty country / market calendars and can compute holiday lists, its complement (i.e. business day lists) and much more. Examples are in the README at the repository, the package page, and of course at the CRAN package page.

This release mainly synchronizes qlcal with the QuantLib release 1.39.

Changes in version 0.0.16 (2025-07-23)

  • Synchronized with QuantLib 1.39 released today

  • Calendar updates for Israel, minor utility functions update

  • Minor package maintenance updates

Courtesy of my CRANberries, there is a diffstat report for this release. See the project page and package documentation for more details, and more examples.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

23 July, 2025 08:45PM

Abhijith PA

Removing spams from your local maildir

I have been using Disroot as my primary email ever since openmailbox.org stopped. I am very grateful for Disroot’s service and I occasionally donate to them.

Recently, my Disroot inbox has been flooded with spam. On an average day, I used to receive around 90% spams on entire email count. However, the situation has improved since then. I contacted the Disroot team, and they informed me that they are aware of the situation and planning to migrate to Rspamd from Spamassassin.

I don’t know whether they have deployed Rspamd yet; even if so, that will only process incoming mail. I was looking for a way to identify and purge the spam that had already entered my IMAP folders.

Later I found this script, nh2/rspamd-move[1], which seems to fit my needs.

I made a couple of trivial changes to the script for my use case. I wasn’t sure about running this directly on my Mail/ dir, so I cloned my entire local mail directory to another directory and made it available to a podman container where the script and an rspamd instance live. I trained rspamd from the /Spam folder. Later, I manually moved a couple of mails to the /Spam folder. I asked friends to share their spam folders in the #debian-in channel, but that didn’t happen :P

$ podman run -it --mount \
    type=bind,source=/home/abhijith/$MAILS/,target=/container-mail-clone \
    id:latest
$ script.py

(It took some time since I have around 10000+ emails)

Wow, it was quite a successful attempt: it caught most of the spam and moved it to spam/, with only a couple of false positives ending up in a different folder. Now I wanted to do the same in the actual maildir, yet I was very skeptical. While going through the cloned folder with mutt -f I remembered that the mails are already indexed by notmuch.

So all I needed to do was tag and delete with notmuch, and it would be synced back to the original maildir. Ta-da, I cleaned my Inbox.
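
For the record, the commands involved look roughly like this (a sketch; the folder and tag names are whatever your own setup uses, and the rm step is obviously destructive):

# tag everything the classifier moved into the spam folder
notmuch tag +spam -inbox -- folder:spam
# then remove those messages from the maildir and re-index
notmuch search --output=files --format=text0 tag:spam | xargs -0 rm
notmuch new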

[1] - https://github.com/nh2/rspamd-move

23 July, 2025 08:26AM

Russell Coker

Annoying Wrongness on TV

One thing that annoys me on TV shows and movies is getting the details wrong. Yes it’s fiction and yes some things can’t be done correctly and in some situations correctly portraying things goes against the plot. But otherwise I think they should try to make it accurate.

I was just watching The Americans (a generally good show that I recommend watching) and in Season 4 Episode 9 there’s a close-up of a glass of wine which clearly shows that the Tears of Wine effect is missing; the liquid in the glass obviously has the surface tension of water, not of wine. When they run a show about spies they have to expect that the core audience will be the type of detail oriented people who notice these things. Having actors not actually drink alcohol on set is a standard practice, if they have to do 10 takes of someone drinking a glass of wine then that would be a problem if they actually drank real wine. But they could substitute real wine for the close-up shots, and of course just getting it right the first time is a good option.

Some ridiculous inaccuracy we just need to deal with, like knives making a schwing sound when pulled out of scabbards and “silenced” guns usually still being quite loud (so many people are used to it being wrong). Organisations like the KGB had guns that were actually silent, but they generally looked obviously different to regular guns and had a much lower effective range.

The gold coins shown on TV are another ridiculous thing. The sound of metal hitting something depends on how hard it is and how dense it is. Surely most people have heard the sounds of dropping steel nuts and ball bearings and the sound of dropping lead sinkers, and know that the sounds of items of similar size and shape differ greatly based on density and hardness. A modern coin made of copper, cupro-nickel (the current “silver” coins), or copper-aluminium (the current “gold” coins) sounds very different to a gold coin when dropped on a bench. For a show like The Witcher it wouldn’t be difficult to make actual gold coins of a similar quality to iron age coin production, any jeweller could make the blanks and making stamps hard enough to press gold isn’t an engineering challenge (stamping copper coins would be much more difficult). The coins used for the show could be sold to fans afterwards.

Once coins are made they can’t be just heaped up. Even if you are a sorcerer you probably couldn’t fill a barrel a meter high with gold coins and not have it break from the weight and/or have the coins at the bottom cold welded. Gold coins are supposed to have a precise amount of gold and if you pile them up too high then the cold welding process will transfer gold between coins changing the value. If someone was going to have a significant quantity of gold stored then it would be in gold ingots with separators between layers to prevent cold welding.

Movies tend not to show coins close up, I presume that’s because they considered it too difficult to make coins and they just use some random coins from their own country.

Another annoying thing is shows that don’t match up the build dates of objects used. It’s nice when they get it right like the movie Titanic featuring a M1911 pistol which is something that a rich person in 1912 would likely have. The series Carnival Row (which I recommend) has weapons that mostly match our WW1 era, everything that doesn’t involve magic seems legit. One of the worst examples of this is the movie Anna (by Luc Besson which is mostly a recreation of his film Nikita but in the early 90s and with the KGB). That film features laptops with color screens and USB ports before USB was invented and when color screens weren’t common on laptops, as an aside military spec laptops tend to have older designs than consumer spec ones.

I’ve mostly given up on hoping that movies will get “hacking” scenes that are any more accurate than knives making a “schwing” sound. But it shouldn’t be that hard for them to find computer gear that was manufactured in the right year to use for the film.

Why can’t they hire experts on technology to check everything?

23 July, 2025 06:36AM by etbe

July 22, 2025

Iustin Pop

Watching website scanning bots

Ever since I put up http://demo.corydalis.io and set up logcheck, I’m inadvertently keeping up with recent exploits in common CMS frameworks, or maybe even normal web framework issues, by seeing what 404s I get from the logs.

Now, I didn’t intend to do this per se, I just wanted to make sure I don’t have any 500s, and at one point I did actually catch a bug by seeing seemingly valid URLs, with my own pages as referrer, leading to 404s. But besides that, it’s mainly that a couple of times per week a bot finds the site and then tries, in fast succession, something like this (real log entries, with the source IP address removed):

[21/Jul/2025:09:27:09 +0200] "GET /pms?module=logging&file_name=../../../../../../~/.aws/credentials&number_of_lines=10000 HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:11 +0200] "GET /admin/config?cmd=cat+/root/.aws/credentials HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:11 +0200] "GET /.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:13 +0200] "GET /.env.local HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:13 +0200] "GET /.env.production HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:16 +0200] "GET /.env.dev HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:17 +0200] "GET /.env.development HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:19 +0200] "GET /.env.prod HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:19 +0200] "GET /.env.stage HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:22 +0200] "GET /.env.test HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:23 +0200] "GET /.env.example HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:25 +0200] "GET /.env.bak HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:26 +0200] "GET /.env.old HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:28 +0200] "GET /.envs/.production/.django HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:28 +0200] "GET /blog.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:31 +0200] "GET /wp-content/.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:32 +0200] "GET /application/.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:34 +0200] "GET /app/.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:35 +0200] "GET /apps/.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:37 +0200] "GET /config/.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:38 +0200] "GET /config/config.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:40 +0200] "GET /config/.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:41 +0200] "GET /api/.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:43 +0200] "GET /vendor/.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:44 +0200] "GET /backend/.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:46 +0200] "GET /server/.env HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:46 +0200] "GET /home/user/.aws/credentials HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:49 +0200] "GET /aws/credentials HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:50 +0200] "GET /.aws/credentials HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:52 +0200] "GET /.aws/config HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:52 +0200] "GET /config/aws.yml HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:55 +0200] "GET /config/aws.json HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:55 +0200] "GET /.env.production HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:58 +0200] "GET /config.json HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:27:59 +0200] "GET /config/config.json HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:01 +0200] "GET /config/settings.json HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:02 +0200] "GET /config/secrets.json HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:04 +0200] "GET /config.yaml HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:04 +0200] "GET /config.yml HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:07 +0200] "GET /config.py HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:08 +0200] "GET /secrets.json HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:10 +0200] "GET /secrets.yml HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:11 +0200] "GET /credentials.json HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:13 +0200] "GET /.git-credentials HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:14 +0200] "GET /.git/config HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:16 +0200] "GET /.gitignore HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:18 +0200] "GET /.gitlab-ci.yml HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:19 +0200] "GET /.github/workflows HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:22 +0200] "GET /.idea/workspace.xml HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:22 +0200] "GET /.vscode/settings.json HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:25 +0200] "GET /docker-compose.yml HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:25 +0200] "GET /docker-compose.override.yml HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:28 +0200] "GET /docker-compose.prod.yml HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:28 +0200] "GET /docker-compose.dev.yml HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:32 +0200] "GET /phpinfo HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:32 +0200] "GET /_profiler/phpinfo HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:34 +0200] "GET /phpinfo.php HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:34 +0200] "GET /info.php HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:37 +0200] "GET /storage/logs/laravel.log HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:37 +0200] "GET /storage/logs/error.log HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:40 +0200] "GET /logs/debug.log HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:40 +0200] "GET /logs/app.log HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:49 +0200] "GET /debug.log HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:51 +0200] "GET /error.log HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:53 +0200] "GET /.DS_Store HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:55 +0200] "GET /backup.zip HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:28:58 +0200] "GET /.backup HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:29:00 +0200] "GET /db.sql HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:29:03 +0200] "GET /dump.sql HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:29:06 +0200] "GET /database.sql HTTP/1.1" 404 - "" "Mozilla/5.0"
[21/Jul/2025:09:29:09 +0200] "GET /backup.tar.gz HTTP/1.1" 404 - "" "Mozilla/5.0"

Now, this example is actually trying to catch a bit more things, but many times it’s focused on some specific thing, or two things. Here we have docker, MacOS .DS_Store (I’m not sure how that’s useful - to find more filenames?), VSCode settings, various secrets, GitHub workflows, log output, database dumps, AWS credentials, and still—I guess from the wp filename—WordPress settings. The first few years were full of WordPress scanners, now it seems to have quieted down; I haven’t seen a bot scanning 200 WP potential filenames in ages. And this bot even bothers to put in “Mozilla/5.0” as browser identification 😅.

Side-note: I don’t think the filename path in the first log entry, i.e. ../../../../../../~/, ever properly resolves to the home directory of any user. So I’m not sure that particular scanner ever works, but who knows? Maybe some framework does bad tilde expansion, but at least bash will not expand ~ inside a path, it seems—that path is passed as-is to an invoked command (strace confirms it).

What’s surprising here is that these are usually plain dumb scanners, from the same IP address, no concern for throttling, no attempt to hide, just 2 minutes of brute-forcing a random list of known “treasures”, then moving on. For this to be worth it, it means there are still victims found using this method, sadly. Well, sometimes I get a single, one-off "GET /wp-login.php HTTP/1.1", which is strange enough that it might not even be a bot, who knows. But in general, periods of activity of this type come and go, probably aligned with new CVEs.
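
If you just want a quick summary of what such bots are probing for, a one-liner over the access log is enough (a sketch that assumes the standard combined log format and an Apache-style log location):

# count the most frequently requested paths that ended in a 404
awk '$9 == 404 {print $7}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head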

And another surprising thing is that for this type of scanning to work (and I’ve seen many over the years), the website framework/configuration must allow random file download. Corydalis itself is written in Haskell, using Yesod, and it has a hardcoded (built at compile time) list of static resources it will serve. I haven’t made the switch to fully embedding in the binary, but at that point, it won’t need to read from the filesystem at all. Right now it will serve a few CSS and JS files, plus fonts, but that’s it, no arbitrary filesystem traversal. Strange that some frameworks allow it.

This is not productively spent time, but it is fun, especially seeing how this changes over time. And probably the most use anyone gets out of http://demo.corydalis.io 😄.

22 July, 2025 10:47PM

hackergotchi for David Bremner

David Bremner

Hibernate on the pocket reform 8/n

Context

Sidequest: Fix patches continued

  • 1001-pci_dw_rockchip_enable_l0s_capability.patch doesn't apply cleanly either

  • b4 am 1744594051-209255-1-git-send-email-shawn.lin@rock-chips.com

  • This one has a usable blob index 21dc99c

  • git log --raw --all --find-object=21dc99c finds the patch already applied as 198e69cc4150aba1e7af740a2111ace6a267779e

  • 1002-v2-media_verisilicon_fix_av1_decoder_clock_frequency.patch applies cleanly

Build kernel with backported patches

Back following the upstream bisect instructions from reform-debian-packages/README.md

$  apt-get install git gpg gpgv build-essential bc rsync kmod cpio bison flex libelf-dev libssl-dev debhelper libdw-dev
$ cp /boot/config-6.15.4-mnt-reform-arm64 .config
$ make olddefconfig
$ yes '' | make localmodconfig
$ make KBUILD_IMAGE=arch/arm64/boot/Image bindeb-pkg -j$(nproc)

One thing not documented there is that you need the pocket-reform dtb as well. Copy that file from reform-debian-packages, and update the relevant Makefile.

diff --git a/arch/arm64/boot/dts/rockchip/Makefile b/arch/arm64/boot/dts/rockchip/Makefile
index 26533be1dd86..83ef850cd113 100644
--- a/arch/arm64/boot/dts/rockchip/Makefile
+++ b/arch/arm64/boot/dts/rockchip/Makefile
@@ -163,6 +163,7 @@ dtb-$(CONFIG_ARCH_ROCKCHIP) += rk3588-h96-max-v58.dtb
 dtb-$(CONFIG_ARCH_ROCKCHIP) += rk3588-jaguar.dtb
 dtb-$(CONFIG_ARCH_ROCKCHIP) += rk3588-jaguar-pre-ict-tester.dtbo
 dtb-$(CONFIG_ARCH_ROCKCHIP) += rk3588-mnt-reform2.dtb
+dtb-$(CONFIG_ARCH_ROCKCHIP) += rk3588-mnt-pocket-reform.dtb
 dtb-$(CONFIG_ARCH_ROCKCHIP) += rk3588-nanopc-t6.dtb
 dtb-$(CONFIG_ARCH_ROCKCHIP) += rk3588-nanopc-t6-lts.dtb
 dtb-$(CONFIG_ARCH_ROCKCHIP) += rk3588-ok3588-c.dtb
diff --git a/arch/arm64/boot/dts/rockchip/rk3588-mnt-pocket-reform.dts b/arch/arm64/boot/dts/rockchip/rk3588-mnt-pocket-reform.dts
new file mode 100644
index 000000000000..81533cedc200
  • With these changes I can boot into 6.16~rc6 and log in on the serial console, but the LCD display seems blank (though with some backlight power). That is probably related to the following warnings from device tree compilation:
DTC     arch/arm64/boot/dts/rockchip/rk3588-mnt-pocket-reform.dtb
arch/arm64/boot/dts/rockchip/rk3588-mnt-pocket-reform.dts:1020.3-13: Warning (reg_format): /dsi@fde30000/panel:reg: property has invalid length (4 bytes) (#address-cells == 2, #size-cells == 1)
arch/arm64/boot/dts/rockchip/rk3588-mnt-pocket-reform.dtb: Warning (pci_device_reg): Failed prerequisite 'reg_format'
arch/arm64/boot/dts/rockchip/rk3588-mnt-pocket-reform.dtb: Warning (pci_device_bus_num): Failed prerequisite 'reg_format'
arch/arm64/boot/dts/rockchip/rk3588-mnt-pocket-reform.dtb: Warning (i2c_bus_reg): Failed prerequisite 'reg_format'
arch/arm64/boot/dts/rockchip/rk3588-mnt-pocket-reform.dtb: Warning (spi_bus_reg): Failed prerequisite 'reg_format'
arch/arm64/boot/dts/rockchip/rk3588-mnt-pocket-reform.dts:1018.8-1033.4: Warning (avoid_default_addr_size): /dsi@fde30000/panel: Relying on default #address-cells value
arch/arm64/boot/dts/rockchip/rk3588-mnt-pocket-reform.dts:1018.8-1033.4: Warning (avoid_default_addr_size): /dsi@fde30000/panel: Relying on default #size-cells value
  • The current source is on

    https://salsa.debian.org/bremner/collabora-rockchip-3588

    The branch "reform-patches" is subject to rebase (and may make your computer explode).

  • For now I'm blocked on the panel, I suspect the dts file needs an update.

previous episode|next episode

22 July, 2025 11:41AM

July 21, 2025

hackergotchi for Jonathan Dowland

Jonathan Dowland

Nine Inch Nails, Paris, 2025-07-07

On July 7th I did a quick one-day trip to Paris to watch Nine Inch Nails, who were performing one of their last shows for the European leg of the tour. I'd missed all the UK dates, which were announced after I was committed to a family holiday.

The setup for these shows differs a lot from the norm. First, the support act was German-Iraqi techno DJ Boys Noize, who TR&AR had collaborated with on their soundtrack for the recent movie Challengers. BN did a 75-minute opening set:

Stripped down Set 1

The main gig was divided into four acts. Act 1 took place on a square "B Stage", situated in the middle of the main floor. Trent and 2-3 band members performed three stripped-down songs, with interpolations to other songs, alternative lyrics, and the like. I enjoyed this section more than I expected to.

Main stage lighting

They then transitioned to the main stage for a six-song full-band set. The main stage was draped in layers of transparent curtains. One of the team was on stage, walking around with a video camera, and the live camera footage was processed and projected onto the curtains in real time. The visuals were truly fantastic.

For the third act, Trent and Atticus returned to the B stage, to be joined by Boys Noize, who then performed four songs in a reworked, remixed, electronic dance format. This ended up being the highlight of the gig for me. I felt the reworkings breathed new life into some old songs.

remix Set 3

Finally, back to the main stage to close out with a final seven songs. The traditional triad of closers was adjusted, substituting "The Hand That Feeds" for "The Perfect Drug"; a song that wasn't played at all from 1997 to 2018, and has now become a regular fixture.

Several tapers recorded the show, and their recordings are up at nin.live.

Overall it was great fun: I ended up flying via London and the airport time was a bit of a pain, but I did get a good start on my corrections. I was home the next afternoon. I'm trying not to think of the cost.

21 July, 2025 01:54PM

July 20, 2025

hackergotchi for Michael Prokop

Michael Prokop

What to expect from Debian/trixie #newintrixie

Trixie Banner, Copyright 2024 Elise Couper

Update on 2025-07-28: added note about Debian 13/trixie support for OpenVox (thanks, Ben Ford!)

Debian v13 with codename trixie is scheduled to be published as the new stable release on the 9th of August 2025.

I was the driving force at several of my customers to be well prepared for the upcoming stable release (my efforts for trixie started in August 2024). On the one hand, to make sure packages we care about are available and actually make it into the release. On the other hand, to ensure there are no severe issues that make it into the release and to get proper and working upgrades. So far everything is looking pretty good and working fine; the efforts seem to have paid off. :)

As usual with major upgrades, there are some things to be aware of, and hereby I’m starting my public notes on trixie that might be worth for other folks. My focus is primarily on server systems and looking at things from a sysadmin perspective.

Further readings

As usual start at the official Debian release notes, make sure to especially go through What’s new in Debian 13 + issues to be aware of for trixie (strongly recommended read!).

Package versions

As a starting point, let’s look at some selected packages and their versions in bookworm vs. trixie as of 2025-07-20 (mainly having amd64 in mind):

Package bookworm/v12 trixie/v13
ansible 2.14.3 2.19.0
apache 2.4.62 2.4.64
apt 2.6.1 3.0.3
bash 5.2.15 5.2.37
ceph 16.2.11 18.2.7
docker 20.10.24 26.1.5
dovecot 2.3.19 2.4.1
dpkg 1.21.22 1.22.21
emacs 28.2 30.1
gcc 12.2.0 14.2.0
git 2.39.5 2.47.2
golang 1.19 1.24
libc 2.36 2.41
linux kernel 6.1 6.12
llvm 14.0 19.0
lxc 5.0.2 6.0.4
mariadb 10.11 11.8
nginx 1.22.1 1.26.3
nodejs 18.13 20.19
openjdk 17.0 21.0
openssh 9.2p1 10.0p1
openssl 3.0 3.5
perl 5.36.0 5.40.1
php 8.2+93 8.4+96
podman 4.3.1 5.4.2
postfix 3.7.11 3.10.3
postgres 15 17
puppet 7.23.0 8.10.0
python3 3.11.2 3.13.5
qemu/kvm 7.2 10.0
rsync 3.2.7 3.4.1
ruby 3.1 3.3
rust 1.63.0 1.85.0
samba 4.17.12 4.22.3
systemd 252.36 257.7-1
unattended-upgrades 2.9.1 2.12
util-linux 2.38.1 2.41
vagrant 2.3.4 2.3.7
vim 9.0.1378 9.1.1230
zsh 5.9 5.9

Misc unsorted

  • The asterisk package once again didn’t make it into trixie (see #1031046)
  • The new debian-repro-status package provides the identically named command-line tool debian-repro-status to query the reproducibility status of your installed Debian packages
  • The Grml live system project got more of their packages into Debian. Available as of trixie are now also grml-keyring (OpenPGP certificates used for signing the Grml repositories), grml-hwinfo (a tool which collects information about the hardware) + grml-paste (command line interface for paste.debian.net)
  • If you use pacemaker, be aware that its fence-agents package is now a transitional package. All the fence-agents got split into separate packages (fence-agents-$whatever). If you want to have all the fence-agents available, make sure to install the fence-agents-all package. If you have Recommends disabled, you definitely should be aware of this.
  • usrmerge is finalized (also see dpkg warning issue in release notes)
  • For an overview of the XMPP/Jabber situation in trixie see xmpp-team’s blog post
  • The curl package now includes the wcurl command line tool, being a simple wrapper around curl to easily download files

apt

The new apt version 3.0 brings several new features, including:

  • support for colors (f.e. green for installs/upgrades, yellow for downgrades, red for removals, can be disabled via --no-color, APT_NO_COLOR=1 or NO_COLOR=1 and customized via e.g. APT::Color::Action::Install “cyan”)
  • organizes output in more readable sections and shows removals more prominently
  • uses sequoia to verify signatures
  • includes a new solver
  • the new apt modernize-sources command converts /etc/apt/sources.list.d/*.list files into the new .sources format (AKA DEB822)
  • the new apt distclean command removes all files under $statedir/lists except Release, Release.gpg, and InRelease (it can be used for example, when finalizing images distributed to users)
  • new configuration option APT::NeverAutoRemove::KernelCount for keeping a configurable amount of kernels, f.e. setting APT::NeverAutoRemove::KernelCount 3 will keep 3 kernels (including the running, and most recent); see the sketch after this list
  • new command line option --snapshot, and configuration option APT::Snapshot, controlling the snapshot chosen for archives with Snapshot: enable
  • new command line option --update to run the update command before the specified command, like apt --update install zsh, apt --update remove foobar or apt --update safe-upgrade
  • apt-key is gone, and there’s no replacement for it available (if you need an interface for listing present keys)
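
As a quick sketch of the KernelCount and modernize-sources features mentioned above (the config file name is just an example):

# keep 3 kernels (including the running and the most recent one)
echo 'APT::NeverAutoRemove::KernelCount "3";' > /etc/apt/apt.conf.d/01-kernelcount

# convert remaining legacy .list files to the DEB822 .sources format
apt modernize-sources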

systemd

systemd got upgraded from v252.36-1~deb12u1 to 257.7-1 and there are lots of changes.

Be aware that systemd v257 has a new net.naming_scheme: with v257 the PCI slot number is now read from the firmware_node/sun sysfs file. The naming scheme based on devicetree aliases was extended to support aliases for individual interfaces of controllers with multiple ports. This might affect you, see e.g. #1092176 and #1107187; the Debian Wiki provides further useful information.

There are new systemd tools available:

  • run0: temporarily and interactively acquire elevated or different privileges (serves a similar purpose as sudo); see the example after this list
  • systemd-ac-power: Report whether we are connected to an external power source
  • systemd-confext: Activates System Extension Images
  • systemd-vpick: Resolve paths to ‘.v/’ versioned directories
  • varlinkctl: Introspect with and invoke Varlink services
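
A quick sketch of two of the new tools in everyday use:

# sudo-like privilege elevation via the service manager
run0 apt update

# the exit code tells you whether the machine is on external power
systemd-ac-power && echo "running on AC"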

The tools provided by systemd gained several new options:

  • busctl: new option --limit-messages=NUMBER (Stop monitoring after receiving the specified number of messages)
  • hostnamectl: new option -j (same as --json=pretty on tty, --json=short otherwise)
  • journalctl: new options --image-policy=POLICY (Specify disk image dissection policy), --invocation=ID (Show logs from the matching invocation ID), -I (Show logs from the latest invocation of unit), --exclude-identifier=STRING (Hide entries with the specified syslog identifier), --truncate-newline (Truncate entries by first newline character), --list-invocations (Show invocation IDs of specified unit), --list-namespaces (Show list of journal namespaces)
  • kernel-install: new commands add-all + list and plenty of new command line options
  • localectl: new option --full (Do not ellipsize output)
  • loginctl: new options --json=MODE (Generate JSON output for list-sessions/users/seats) + -j (Same as --json=pretty on tty, --json=short otherwise)
  • networkctl: new commands edit FILES|DEVICES… (Edit network configuration files), cat [FILES|DEVICES…] (Show network configuration files), mask FILES… (Mask network configuration files) + unmask FILES… (Unmask network configuration files) + persistent-storage BOOL (Notify systemd-networkd if persistent storage is ready), and new options --no-ask-password (Do not prompt for password), --no-reload (Do not reload systemd-networkd or systemd-udevd after editing network config), --drop-in=NAME (Edit specified drop-in instead of main config file), --runtime (Edit runtime config files) + --stdin (Read new contents of edited file from stdin)
  • systemctl: new commands list-paths [PATTERN] (List path units currently in memory, ordered by path), whoami [PID…] (Return unit caller or specified PIDs are part of), soft-reboot (Shut down and reboot userspace) + sleep (Put the system to sleep), and new options --capsule=NAME (Connect to service manager of specified capsule), --before (Show units ordered before with ‘list-dependencies’), --after (Show units ordered after with ‘list-dependencies’), --kill-value=INT (Signal value to enqueue), --no-warn (Suppress several warnings shown by default), --message=MESSAGE (Specify human readable reason for system shutdown), --image-policy=POLICY (Specify disk image dissection policy), --reboot-argument=ARG (Specify argument string to pass to reboot()), --drop-in=NAME (Edit unit files using the specified drop-in file name), --when=TIME (Schedule halt/power-off/reboot/kexec action after a certain timestamp) + --stdin (Read new contents of edited file from stdin)
  • systemd-analyze: new commands architectures [NAME…] (List known architectures), smbios11 (List strings passed via SMBIOS Type #11), image-policy POLICY… (Analyze image policy string), fdstore SERVICE… (Show file descriptor store contents of service), malloc [D-BUS SERVICE…] (Dump malloc stats of a D-Bus service), has-tpm2 (Report whether TPM2 support is available), pcrs [PCR…] (Show TPM2 PCRs and their names) + srk [>FILE] (Write TPM2 SRK (to FILE)) and new options --no-legend (Disable column headers and hints in plot with either --table or --json=), --instance=NAME (Specify fallback instance name for template units), --unit=UNIT (Evaluate conditions and asserts of unit), --table (Output plot’s raw time data as a table), --scale-svg=FACTOR (Stretch x-axis of plot by FACTOR (default: 1.0)), --detailed (Add more details to SVG plot), --tldr (Skip comments and empty lines), --image-policy=POLICY (Specify disk image dissection policy) + --mask (Parse parameter as numeric capability mask)
  • systemd-ask-password: new options --user (Ask only our own user’s agents) + --system (Ask agents of the system and of all users)
  • systemd-cat: new option --namespace=NAMESPACE (Connect to specified journal namespace)
  • systemd-creds: new options --user (Select user-scoped credential encryption), --uid=UID (Select user for scoped credentials) + --allow-null (Allow decrypting credentials with empty key)
  • systemd-detect-virt: new options --cvm (Only detect whether we are run in a confidential VM) + --list-cvm (List all known and detectable types of confidential virtualization)
  • systemd-firstboot: new options --image-policy=POLICY (Specify disk image dissection policy), --kernel-command-line=CMDLINE (Set kernel command line) + --reset (Remove existing files)
  • systemd-id128: new commands var-partition-uuid (Print the UUID for the /var/ partition) + show [NAME|UUID] (Print one or more UUIDs), and new options --no-pager (Do not pipe output into a pager), --no-legend (Do not show the headers and footers), --json=FORMAT (Output inspection data in JSON), -j (Equivalent to --json=pretty (on TTY) or --json=short (otherwise)) + -P --value (Only print the value)
  • systemd-inhibit: new option --no-ask-password (Do not attempt interactive authorization)
  • systemd-machine-id-setup: new option --image-policy=POLICY (Specify disk image dissection policy)
  • systemd-mount: new options --json=pretty|short|off (Generate JSON output) + --tmpfs (Create a new tmpfs on the mount point)
  • systemd-notify: new options --reloading (Inform the service manager about configuration reloading), --stopping (Inform the service manager about service shutdown), --exec (Execute command line separated by ‘;’ once done), --fd=FD (Pass specified file descriptor along with message) + --fdname=NAME (Name to assign to passed file descriptor(s))
  • systemd-path: new option --no-pager (Do not pipe output into a pager)
  • systemd-run: new options --expand-environment=BOOL (Control expansion of environment variables), --json=pretty|short|off (Print unit name and invocation id as JSON), --ignore-failure (Ignore the exit status of the invoked process) + --background=COLOR (Set ANSI color for background)
  • systemd-sysext: new options --mutable=yes|no|auto|import|ephemeral|ephemeral-import (Specify a mutability mode of the merged hierarchy), --no-reload (Do not reload the service manager), --image-policy=POLICY (Specify disk image dissection policy) + --noexec=BOOL (Whether to mount extension overlay with noexec)
  • systemd-sysusers: new options --tldr (Show non-comment parts of configuration) + --image-policy=POLICY (Specify disk image dissection policy)
  • systemd-tmpfiles: new command --purge (Delete files and directories marked for creation in specified configuration files (careful!)), and new options --user (Execute user configuration), --tldr (Show non-comment parts of configuration files), --graceful (Quietly ignore unknown users or groups), --image-policy=POLICY (Specify disk image dissection policy) + --dry-run (Just print what would be done)
  • systemd-umount: new options --json=pretty|short|off (Generate JSON output) + --tmpfs (Create a new tmpfs on the mount point)
  • timedatectl: new commands ntp-servers INTERFACE SERVER (Set the interface specific NTP servers) + revert INTERFACE (Revert the interface specific NTP servers) and new option -P NAME (Equivalent to --value --property=NAME)

Debian’s systemd ships new binary packages:

  • systemd-boot-efi-amd64-signed (Tools to manage UEFI firmware updates (signed))
  • systemd-boot-tools (simple UEFI boot manager – tools)
  • systemd-cryptsetup (Provides cryptsetup, integritysetup and veritysetup utilities)
  • systemd-netlogd (journal message forwarder)
  • systemd-repart (Provides the systemd-repart and systemd-sbsign utilities)
  • systemd-standalone-shutdown (standalone shutdown binary for use in exitrds)
  • systemd-ukify (tool to build Unified Kernel Images)

Linux Kernel

The trixie release ships a Linux kernel based on latest longterm version 6.12. As usual there are lots of changes in the kernel area, including better hardware support, and this might warrant a separate blog entry. To highlight some changes with Debian trixie:

See Kernelnewbies.org for further changes between kernel versions.

Configuration management

For puppet users, Debian provides the puppet-agent (v8.10.0), puppetserver (v8.7.0) and puppetdb (v8.4.1) packages. Puppet’s upstream does not provide packages for trixie, yet. Given how long it took them for Debian bookworm, and with their recent Plans for Open Source Puppet in 2025, it’s unclear when (and whether at all) we might get something. As a result of upstream’s behavior, the OpenVox project also evolved, and they already provide Debian 13/trixie support (https://apt.voxpupuli.org/openvox8-release-debian13.deb). FYI: the AIO puppet-agent package for bookworm (v7.34.0-1bookworm) so far works fine for me on Debian/trixie. Be aware that due to the apt-key removal you need a recent version of the puppetlabs-apt module for usage with trixie. The puppetlabs-ntp module isn’t yet ready for trixie (regarding ntp/ntpsec), in case you depend on that.
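
If you want to go the OpenVox route, the setup is roughly the usual release-package dance; note that the openvox-agent package name below is an assumption on my side, so double-check against the OpenVox documentation:

wget https://apt.voxpupuli.org/openvox8-release-debian13.deb
dpkg -i openvox8-release-debian13.deb
apt update
apt install openvox-agent   # package name assumed, verify with the OpenVox docs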

ansible is available and made it with version 2.19 into trixie.

Prometheus stack

Prometheus server was updated from v2.42.0 to v2.53, and all the exporters that got shipped with bookworm are still around (in more recent versions of course). Trixie gained some new exporters:

Virtualization

docker (v26.1.5), ganeti (v3.1.0), libvirt (v11.3.0, be aware of significant changes to libvirt packaging), lxc (v6.0.4), podman (v5.4.2), openstack (see openstack-team on Salsa), qemu/kvm (v10.0.2), xen (v4.20.0) are all still around.

Proxmox already announced their PVE 9.0 BETA, being based on trixie and providing 6.14.8-1 kernel, QEMU 10.0.2, LXC 6.0.4, OpenZFS 2.3.3.

Vagrant is available in version 2.3.7, but Vagrant upstream does not provide packages for trixie yet. Given that HashiCorp adopted the BSL, the future of vagrant in Debian is unclear.

If you’re relying on VirtualBox, be aware that upstream doesn’t provide packages for trixie, yet. VirtualBox is available from Debian/unstable (version 7.1.12-dfsg-1 as of 2025-07-20), but has not been shipped with a stable release for quite some time (due to lack of cooperation from upstream on security support for older releases, see #794466). Be aware that starting with Linux kernel 6.12, KVM initializes virtualization on module loading by default. This prevents VirtualBox VMs from starting. In order to avoid this, either add the “kvm.enable_virt_at_load=0” parameter to the kernel command line or unload the corresponding kvm_intel / kvm_amd module.
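
In shell terms the workaround looks like this (shown for Intel CPUs; the modprobe.d line is my reading of the usual equivalent of the kernel command line parameter mentioned above):

# unload KVM so VirtualBox can use VT-x again (use kvm_amd on AMD CPUs)
modprobe -r kvm_intel
# or disable virtualization at module load time persistently
echo 'options kvm enable_virt_at_load=0' > /etc/modprobe.d/kvm-virtualbox.conf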

If you want to use Vagrant with VirtualBox on trixie, be aware that Debian’s vagrant package as present in trixie doesn’t support the VirtualBox package version 7.1 as present in Debian/unstable (manually patching vagrant’s meta.rb and rebuilding the package without Breaks: virtualbox (>= 7.1) is known to be working).

util-linux

The are plenty of new options available in the tools provided by util-linux:

  • blkdiscard: new option --quiet (suppress warning messages)
  • blockdev: new options --getdiskseq (get disk sequence number) + --getzonesz (get zone size)
  • dmesg: new option --kmsg-file … (use the file in kmsg format), new --time-format … argument ‘raw’
  • findmnt: new options --list-columns (list the available columns), --dfi (imitate the output of df(1) with -i option; see the example after this list), --id … (filter by mount node ID), --filter … (apply display filter) + --uniq-id … (filter by mount node 64-bit ID)
  • fstrim: new option --types … (limit the set of filesystem types)
  • hardlink: new options --respect-dir (directory names have to be identical), --exclude-subtree … (regular expression to exclude directories), --prioritize-trees (files found in the earliest specified top-level directory have higher priority), --list-duplicates (print every group of duplicate files), --mount (stay within the same filesystem) + --zero (delimit output with NULs instead of newlines)
  • ipcmk: new options --posix-shmem … (create POSIX shared memory segment of size), --posix-semaphore … (create POSIX semaphore), --posix-mqueue … (create POSIX message queue) + --name … (name of the POSIX resource)
  • ipcrm: new options --posix-shmem … (remove POSIX shared memory segment by name), --posix-mqueue … (remove POSIX message queue by name), --posix-semaphore (remove POSIX semaphore by name) + --all=… (remove all in specified category)
  • lsblk: new options --ct-filter … (restrict the next counter), --ct … (define a custom counter), --highlight … (colorize lines matching the expression), --list-columns (list the available columns), --nvme (output info about NVMe devices), --properties-by … (methods used to gather data), --filter … (print only lines matching the expression), --virtio (output info about virtio devices)
  • lscpu: new options --raw (use raw output format (for -e, -p and -C)) + --hierarchic=… (use subsections in summary (auto, never, always))
  • lsipc: new options --posix-shmems (POSIX shared memory segments), --posix-mqueues (POSIX message queues), --posix-semaphores (POSIX semaphores), --name … (POSIX resource identified by name)
  • lslocks: new option --list-columns (list the available columns)
  • lslogins: new option --lastlog2 … (set an alternate path for lastlog2)
  • lsns: new options --persistent (namespaces without processes), --filter … (apply display filter) + --list-columns (list the available columns)
  • mkswap: new options --endianness=… (specify the endianness to use (native, little or big)), --offset … (specify the offset in the device), --size … (specify the size of a swap file in bytes) + --file (create a swap file)
  • namei: --context (print any security context of each file)
  • nsenter: new options --net-socket … (enter socket’s network namespace), --user-parent (enter parent user namespace), --keep-caps (retain capabilities granted in user namespaces), --env (inherit environment variables from target process) + --join-cgroup (join the cgroup of the target process)
  • runuser: new option --no-pty (do not create a new pseudo-terminal)
  • setarch: new option --show=… (show current or specific personality and exit)
  • setpriv: new options --ptracer … (allow ptracing from the given process), --landlock-access … (add Landlock access), --landlock-rule … (add Landlock rule) + --seccomp-filter … (load seccomp filter from file)
  • su: new option --no-pty (do not create a new pseudo-terminal)
  • unshare: new option --load-interp … (load binfmt definition in the namespace)
  • whereis: new option -g (interpret name as glob (pathnames pattern))
  • wipefs: new optional argument for the --backup=… option to specify a directory (instead of the default $HOME)
  • zramctl: new option --algorithm-params … (algorithm parameters to use)
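
Two small examples of the new options from the list above:

# imitate the output of `df -i` via findmnt
findmnt --dfi
# show only NVMe devices
lsblk --nvme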

Now no longer present in util-linux as of trixie:

  • addpart (tell the kernel about the existence of a specified partition): use partx instead
  • delpart (tell the kernel to forget about a specified partition): use partx instead
  • last (show a listing of last logged in users, binary got moved to wtmpdb), lastb (show a listing of last logged in users), mesg (control write access of other users to your terminal), utmpdump (dump UTMP and WTMP files in raw format): see Debian release notes for details

The following binaries got moved from util-linux to the util-linux-extra package:

  • ctrlaltdel (set the function of the Ctrl-Alt-Del combination)
  • mkfs.bfs (make an SCO bfs filesystem)
  • fsck.cramfs + mkfs.cramfs (compressed ROM file system)
  • fsck.minix + mkfs.minix (Minix filesystem)
  • resizepart (tell the kernel about the new size of a partition)

And the util-linux-extra package also provides new tools:

  • bits: convert bit masks from/to various formats
  • blkpr: manage persistent reservations on a device
  • coresched: manage core scheduling cookies for tasks
  • enosys: utility to make syscalls fail with ENOSYS
  • exch: atomically exchanges paths between two files
  • fadvise: utility to use the posix_fadvise system call
  • pipesz: set or examine pipe buffer sizes and optionally execute command.
  • waitpid: utility to wait for arbitrary processes

OpenSSH

OpenSSH was updated from v9.2p1 to 10.0p1-5, so if you’re interested in all the changes, check out the release notes between those versions (9.3, 9.4, 9.5, 9.6, 9.7, 9.8, 9.9 + 10.0).

Let’s highlight some notable behavior changes in Debian:

There are some notable new features:

  • allow forwarding Unix Domain sockets via ssh -W
  • OpenSSH penalty behavior: visit my separate blog post for more details
  • add support for reading ED25519 private keys in PEM PKCS8 format. Previously only the OpenSSH private key format was supported.
  • the new hybrid post-quantum algorithm mlkem768x25519-sha256 (based on the FIPS 203 Module-Lattice Key Encapsulation mechanism (ML-KEM) combined with X25519 ECDH) is now used by default for key agreement. This algorithm is considered to be safe against attack by quantum computers, is guaranteed to be no less strong than the popular curve25519-sha256 algorithm, has been standardised by NIST and is considerably faster than the previous default.
  • the ssh-agent will now delete all loaded keys when signaled with SIGUSR1. This allows deletion of keys without having access to $SSH_AUTH_SOCK (see the example after this list).
  • support systemd-style socket activation in ssh-agent using the LISTEN_PID/LISTEN_FDS mechanism. Activated when these environment variables are set, the agent is started with the -d or -D option and no socket path is set.
  • add a sshd -G option that parses and prints the effective configuration without attempting to load private keys and perform other checks. (This allows usage of the option before keys have been generated and for configuration evaluation and verification by unprivileged users.)
  • add support for configuration tags to ssh(1). This adds a ssh_config(5) “Tag” directive and corresponding “Match tag” predicate that may be used to select blocks of configuration.
  • add a “match localnetwork” predicate. This allows matching on the addresses of available network interfaces and may be used to vary the effective client configuration based on network location.
  • add a %j token that expands to the configured ProxyJump hostname
  • add support for “Match sessiontype” to ssh_config. Allows matching on the type of session initially requested, either “shell” for interactive sessions, “exec” for command execution sessions, “subsystem” for subsystem requests, such as sftp, or “none” for transport/forwarding-only sessions.
  • allow glob(3) patterns to be used in sshd_config AuthorizedKeysFile and AuthorizedPrincipalsFile directives.
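
Two of these are immediately handy from the shell (a quick sketch):

# wipe all keys from a running ssh-agent, without access to $SSH_AUTH_SOCK
pkill -USR1 -x ssh-agent

# print the effective sshd configuration without loading host keys (run as root)
/usr/sbin/sshd -G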

Thanks to everyone involved in the release, looking forward to trixie and happy upgrading!
Let’s continue with working towards Debian/forky. :)

20 July, 2025 04:32PM by mika

hackergotchi for David Bremner

David Bremner

Hibernate on the pocket reform 7/n

Context

Building upstream-ish kernel

  • Roughly following the "bisecting upstream linux" section of reform-debian-packages/README.md
$ git clone https://gitlab.collabora.com/hardware-enablement/rockchip-3588/linux.git collabora
$ cd collabora && git switch -c rockchip-devel origin/rockchip-devel
  • The next step is to apply the non-collabora patches from reform-debian-packages/linux/patches6.15/rk3588-mnt-reform2

  • Unfortunately these are missing the proper metadata to work with git-am

Sidequest: Fix patches

  • 1000-v3-pci_dw_rockchip_add_system_pm_support.patch doesn't apply, even with added metadata. Start again upstream.

  • Thanks to joshc for the suggestion of the b4 tool.

      b4 am 1744940759-23823-1-git-send-email-shawn.lin@rock-chips.com
    

    This creates a .mbx file ready for git am (roughly equivalent to fetching the /raw version from lore, with some extra checks).

  • Brute force finding a base for the patch:

git rev-list --no-merges --since 2025-01-01 refs/heads/rockchip-devel | \
    while read ref
    do
        echo trying $ref
        git checkout $ref
        git apply --check v3_20250418_shawn_lin_pci_dw_rockchip_add_system_pm_support.mbx && echo SUCCESS && break
    done
  • 122 steps later this yields 9dff55ebaef7 (bisect would be better if we knew a "good" commit).
$ git branch -D tmp ; git switch -c tmp 9dff55ebaef7
$ git am v3_20250418_shawn_lin_pci_dw_rockchip_add_system_pm_support.mbx
$ git rebase -i rockchip-devel

This fails with 3 conflicts. The first is easy, as the one non-comment line just moves around. The other two involve a new function rockchip_pcie_unmask_dll_indicator used to reduce code duplication, and in all 3 cases I just took the version of the code from Shawn's patch.

EDIT This rebase turns out to miss (at least) changes in the names of the PCI* constants. By amusing(?) coincidence, as I was discovering that, the patch was being rebased by someone more competent at collabora, and is now in the rockchip-devel branch.

previous episode|next episode

20 July, 2025 12:18PM

July 19, 2025

hackergotchi for Jonathan Carter

Jonathan Carter

DebConf25

The last two weeks I attended DebConf and DebCamp in Brest, France.

Usually, I like to do a more detailed write-up of DebConf, but I was already quite burnt out when I got here, so I’ll circle back to a few things that were important to me in later posts.

In the meantime, thanks to everyone who made this DebConf possible, whether you volunteered for one task or were part of the organisation team. Also a special thanks to the wonderful sponsors who made this entire event possible!

See you next year in Argentina!

Jellyfish, taken during daytrip at aquarium.

19 July, 2025 05:12PM by jonathan

July 18, 2025

Sven Hoexter

Debian can Now Drop Xorg

even I managed to migrate my last setup to sway a few weeks ago. I was the last one you've been waiting for, dear X Strike Force, right?

Multi display support just works, no more modeline hackery. Oh, and we can also remove those old clipboard managers.

One oddity with sway I could not yet solve is that I had to delete the default wallpaper /usr/share/backgrounds/sway/Sway_Wallpaper_Blue_1920x1080.png to allow it to load the Debian wallpaper via

output * bg /usr/share/desktop-base/active-theme/wallpaper/contents/images/1920x1080.svg fill

Update: Thanks to Birger and Sebastian, who could easily explain that. The sway-backgrounds package ships a config snippet in /etc/sway/config.d, and if that's included, e.g. via include /etc/sway/config.d/* after setting the background in your ~/.config/sway/config, it does the obvious and overrides your own background configuration again. Didn't expect that, but it makes sense. So the right fix is to just remove the sway-backgrounds package.

I also had a bit of a fist fight with sway to make sure I have as much screen space available as possible. So I tried to shrink fonts and remove borders.

default_border none
default_floating_border none
titlebar_padding 1
titlebar_border_thickness 0
font pango: monospace 9

Rest I guess is otherwise well documented. I settled on wofi as menu tool, cliphist for clipboard access, of course waybar to be able to use the nm-applet, swayidle and swaylock are probably also more or less standard for screen locking.
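
A minimal swayidle/swaylock setup along those lines looks roughly like this (a sketch; timeout and color picked arbitrarily):

# lock after 5 minutes of inactivity and before going to sleep
exec swayidle -w timeout 300 'swaylock -f -c 000000' before-sleep 'swaylock -f -c 000000'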

Having

for_window [app_id="firefox"] inhibit_idle fullscreen

is also sensible for video streaming, to avoid the idle locking.

18 July, 2025 08:37PM

July 17, 2025

hackergotchi for Gunnar Wolf

Gunnar Wolf

About our proof-of-concept LLM tool for navigating Debian's manpages

So, for the first year, this year’s DebConf had the “DebConf Academic Track”, that is, content for a one-day-long set of short sessions, for each of which there was a written article presenting the content — often with a very academic format, but not necessarily. I hope that for future DebConfs we will continue to hold this track, and that we can help bridge the gap: to get people who are not usually from the academic / university world to prepare content that will be formally expressed and included in a long-lasting, indexed book of proceedings. We did have (informal) proceedings in many of the early DebConfs, and I’m very happy to regain that, but with 20 years of better practices.

Anyway, of course, I took the opportunity to join this experiment, together with my Mexican colleague Tzolkin Garduño, who is finishing her PhD here in France (or should I say, second to her, as she is the true lead author of our work). And here you can see the paper we submitted to the DebConf Academic Track, which will be published soon:

A retrieval-augmented-generation pipeline to help users query system-provided documentation

The corresponding source code is all available at Tzolkin’s repository in GitHub.

So, what is it about, in shorter words and layman terms?

Debian has lots of documentation, but lacks discoverability. We targeted our venerable manpages: it is well known that manpages are relevant, well-organized documentation describing how to use each of the binaries we ship in Debian. Eric Raymond wrote in his well-known essay “The Art of Unix Programming” (2003) that the Unix cultural style is “telegraphic but complete. It does not hold you by the hand, but it usually points in the right direction.”

Our original intent was to digest all of the information in our manpages, but we narrowed it to only the first “section” of the manual due to the limitations of the hardware a good friend lent us to play with LLMs. We took four different base, redistributable (although, yes, non-DFSG-free) Large Language Models downloaded from HuggingFace (T5-small, MiniLM-L6-v2, BGE-small-en and Multilingual-e5-small), and trained them with the 34579 pages found inside /usr/share/man/man1 of all of the existing Debian packages. We did some interesting fine-tuning (for further details, please refer to the paper itself or to our GitHub repository).

The idea is to present an interactive tool that understands natural language queries, and answers with the specific manpage to which they best relate (I’d like to say “they answer best”, but Tzolkin has repeatedly tried to correct my understanding of the resulting vectorial distances).

I had prepared some images showing what you would get for some example queries, but I’ll wrap up this post with something even better 😉

We were happy to present like this. During DebCamp, however, I was able to devote some time to translating my client into a Web-accessible system. Do note that it’s a bit brittle, and might break now and then. But you are welcome to play with it!

Play with the Web interface for our RAG for Debian manuals!

I find it worth sharing that, while we used quite a bit of GPU for the training (not too much — a couple of nights on an old-although-powerful nVidia 1070 card lent to us by the Felipe Esquivel Fund for Science and Cultural Support), all querying takes place in the CPU, and I usually get between 0.1 and 0.3 seconds per query. My server’s architecture is far from rock-solid, and it can break now and then… but at least it should respawn upon failure 😉 And it should at least be available there for a couple of weeks into August.

Anyway, this has been a very interesting journey getting my toes wet in the waters of LLM, and while current results are a bit crude, I think this can be made into an interesting tool for Debian exploration.

17 July, 2025 11:08AM

Birger Schacht

My first tag2upload upload

Following the DebConf25 talk by Ian Jackson tag2upload - upload simply by pushing a signed git tag I decided to use the quiet time during the day of the DayTrip on DebConf 25 to try out uploading a package using tag2upload.

Given the current freeze, a couple of the packages I maintain have new releases waiting. I decided on uploading the new version of labwc to experimental. Labwc is a Wayland compositor based on the wlroots compositor library (the library that sway is using). Labwc is inspired by the Openbox window manager. The upstream team of Labwc released version 0.9.0 last week, the first version that is based on wlroots 0.19.x. Wlroots 0.19.x is also only available in experimental, so that was a good fit for trying an upload with tag2upload.

I first used my usual workflow, going into my package repository, doing git fetch origin, checking out the tag of the new release and tagging that with git tag upstream/0.9.0. Then I bumped the version in the debian/experimental branch, adapted the debian/control file for the changed wlroots dependency, committed and built the package using git-buildpackage to check if it builds fine and there are no lintian errors. Then I moved on to look at tag2upload.

As a starting point for using tag2upload I read the blogpost by Jonathan Carter My first tag2upload upload. It pointed me to one very important option of git debpush, namely the --baredebian option which I have to use because I use the bare Debian git layout. Given that the last upload of labwc I did was to unstable, I also had to add the --force=changed-suite.

I also started right away to use the --tag-only option, because for my first tests I only wanted to have local changes and nothing pushed to anywhere. I also used the --dry-run option. This led to the following command:

> git debpush --baredebian --force=changed-suite --dry-run --tag-only
tags 0.9.0, upstream/0.9.0 all exist in this repository
tell me which one you want to make an orig.tar from: git deborig --just-print '--version=0.9.0-1' TAG
git-debpush: git-deborig failed; maybe try git-debpush --upstream=TAG

This was a bit confusing, because the error message talked about git-deborig, but I was using git-debpush. I also did not want to make an orig.tar! The --upstream option in the git-debpush(1) manual gave an explanation for this:

When pushing a non-native package, git-debpush needs a tag for the upstream part of your package.

By default git-debpush asks git-deborig(1), which searches for a suitable tag based on the upstream version in debian/changelog.

So apparently git-debpush can not find out what the correct tag for the upstream version is, because git-deborig can not find out what the correct tag for the upstream version is. git-debpush simply calls git deborig --just-print --version="$version" in line 437. This fails because I had initially created a second tag, upstream/0.9.0, in addition to the existing 0.9.0 release tag. I do this so that git-buildpackage can find the upstream sources, but with multiple tags git-deborig is not sure which one it should use (although both point to the same commit).

So I removed the upstream/0.9.0 tag and ran git debpush again, and now there was no error message (besides the warning regarding the changed suite), but it also did not give any feedback about what was happening. So I tried without the --dry-run option. Again, no output whatsoever, other than the warning about the changed release, BUT my gnupg asked me for permission to sign via my yubikey! And when I looked at the list of tags, I saw that there was now a debian/0.9.0-1 tag that was not there before! Looking at the tag I saw that it was a tag in the format described in the tag2upload(5) manual page, containing the following lines:

labwc release 0.9.0-1 for experimental

[dgit distro=debian split --quilt=baredebian]
[dgit please-upload source=labwc version=0.9.0-1 upstream-tag=0.9.0 upstream=4beee3851f75b53afc2e8699c594c0cc222115bd]

and the tag was signed by me. The 4beee3851f75b53afc2e8699c594c0cc222115bd commit ID is the commit the 0.9.0 tag points to.

Now that I had a signed commit tag in the correct format, I went to the labwc packaging repository on salsa and enabled the webhook to trigger the tag2upload service (I just saw that the documentation was updated and there is now a global webhook on salsa, so this step is not needed anymore).

Finally I pushed the tags using git push --tags. I could also have used git-debpush for this step, but I’d rather use git directly. I then looked at the tag2upload queue and saw how a worker built and uploaded the newest labwc release, and I also got an email from the tag2upload service: [tag2upload 275] uploaded labwc 0.9.0-1. And a couple of minutes later I got the confirmation that labwc 0.9.0-1 was accepted into experimental. Great!

So, to conclude: for tag2upload to work we simply need a git tag in the correct format. The tag2upload service now gets triggered by every pushed tag on salsa but only acts on tags that adhere to the tag2upload(5) format. git-debpush is a simple bash script that creates such a tag and by default also pushes the tag.
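Putting it all together, a minimal sketch of the workflow for a bare Debian layout looks like this (add --force=changed-suite when the target suite changed, as in my case):

git debpush --baredebian --tag-only   # creates and signs the debian/0.9.0-1 tag locally
git push --tags                       # pushing the tag is what triggers the tag2upload service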

I think the script could be a bit more verbose, for example by telling me that it created a tag and the name of that tag. I think the dependency on git-deborig is also a problem. I use git-buildpackage to build my packages and by default git-buildpackage assumes upstream tags are of the form upstream/%(version)s (source). I could now change that for all the packages I maintain, but I also think it makes sense to control the tag myself and not use a tag that is controlled by upstream. Upstream could change or delete that tag or I might need to create a tag for a version that is not tagged by upstream.

I also think git-debpush is a rather misleading command name, given that the main task of the script is to create a tag in the correct format.

Other than that, I’m pretty happy about this service. I have a rather crappy uplink at home and it is not so uncommon for my uploads to fail because the connection dropped during dput. Using a simple git based upload approach makes these problems a thing of the past. I might look into other ways to create the needed tag, though.

17 July, 2025 08:28AM

Arnaud Rebillout

Acquire-By-Hash for APT packages repositories, and the lack of it in Kali Linux

This is a lengthy blog post. It features a long introduction that explains how apt update acquires various files from a package repository, what Acquire-By-Hash is, and how it all works for Kali Linux: a Debian-based distro that doesn't support Acquire-By-Hash, and which is distributed via a network of mirrors and a redirector.

In a second part, I explore some "Hash Sum Mismatch" errors that we can hit with Kali Linux, errors that would not happen if only Acquire-By-Hash was supported. If anything, this blog post supports the case for adding Acquire-By-Hash support in reprepro, as requested at https://bugs.debian.org/820660.

All of this could have just remained some personal notes for myself, but I got carried away and turned it into a blog post, dunno why... Hopefully others will find it interesting, but you really need to like troubleshooting stories, packed with details, and poorly written at that. You've been warned!

Introducing Acquire-By-Hash

Acquire-By-Hash is a feature of APT package repositories, that might or might not be supported by your favorite Debian-based distribution. A repository that supports it says so, in the Release file, by setting the field Acquire-By-Hash: yes.

It's easy to check. Debian and Ubuntu both support it:

$ wget -qO- http://deb.debian.org/debian/dists/sid/Release | grep -i ^Acquire-By-Hash:
Acquire-By-Hash: yes

$ wget -qO- http://archive.ubuntu.com/ubuntu/dists/devel/Release | grep -i ^Acquire-By-Hash:
Acquire-By-Hash: yes

What about other Debian derivatives?

$ wget -qO- http://http.kali.org/kali/dists/kali-rolling/Release | grep -i ^Acquire-By-Hash: || echo not supported
not supported

$ wget -qO- https://archive.raspberrypi.com/debian/dists/trixie/Release | grep -i ^Acquire-By-Hash: || echo not supported
not supported

$ wget -qO- http://packages.linuxmint.com/dists/faye/Release | grep -i ^Acquire-By-Hash: || echo not supported
not supported

$ wget -qO- https://apt.pop-os.org/release/dists/noble/Release | grep -i ^Acquire-By-Hash: || echo not supported
not supported

Huhu, Acquire-By-Hash is not ubiquitous. But wait, what is Acquire-By-Hash to start with? To answer that, we have to take a step back and cover some basics first.

The HTTP requests performed by 'apt update'

What happens when one runs apt update? APT first requests the Release file from the repository(ies) configured in the APT sources. This file is a starting point, it contains a list of other files (sometimes called "Index files") that are available in the repository, along with their hashes. After fetching the Release file, APT proceeds to request those Index files. To give you an idea, there are many kinds of Index files, among which:

  • Packages files: list the binary packages that are available in the repository
  • Sources files: list the source packages that are available in the repository
  • Contents files: list files provided by each package (used by the command apt-file)
  • and even more, such as PDiff, Translations, DEP-11 metadata, etc etc...

There's an excellent Wiki page that details the structure of a Debian package repository, it's there: https://wiki.debian.org/DebianRepository/Format.

Note that APT doesn't necessarily download ALL of those Index files. For simplicity, we'll limit ourselves to the minimal scenario, where apt update downloads only the Packages files.

Let's try to make it more visual: here's a representation of an apt update transaction, assuming that all the components of the repository are enabled:

apt update -> Release -> Packages (main/amd64)
                      -> Packages (contrib/amd64)
                      -> Packages (non-free/amd64)
                      -> Packages (non-free-firmware/amd64)

Meaning that, in a first step, APT downloads the Release file, reads its content, and then in a second step it downloads the Index files in parallel.

You can actually see that happen with a command such as apt -q -o Debug::Acquire::http=true update 2>&1 | grep ^GET. For Kali Linux you'll see something pretty similar to what I described above. Try it!

$ podman run --rm kali-rolling apt -q -o Debug::Acquire::http=true update 2>&1 | grep ^GET
GET /kali/dists/kali-rolling/InRelease HTTP/1.1    # <- returns a redirect, that is why the file is requested twice
GET /kali/dists/kali-rolling/InRelease HTTP/1.1
GET /kali/dists/kali-rolling/non-free/binary-amd64/Packages.gz HTTP/1.1
GET /kali/dists/kali-rolling/main/binary-amd64/Packages.gz HTTP/1.1
GET /kali/dists/kali-rolling/non-free-firmware/binary-amd64/Packages.gz HTTP/1.1
GET /kali/dists/kali-rolling/contrib/binary-amd64/Packages.gz HTTP/1.1

However, and it's now becoming interesting, for Debian or Ubuntu you won't see the same kind of URLs:

$ podman run --rm debian:sid apt -q -o Debug::Acquire::http=true update 2>&1 | grep ^GET
GET /debian/dists/sid/InRelease HTTP/1.1
GET /debian/dists/sid/main/binary-amd64/by-hash/SHA256/22709f0ce67e5e0a33a6e6e64d96a83805903a3376e042c83d64886bb555a9c3 HTTP/1.1

APT doesn't download a file named Packages, instead it fetches a file named after a hash. Why? This is due to the field Acquire-By-Hash: yes that is present in the Debian's Release file.

What does Acquire-By-Hash mean for 'apt update'

The idea with Acquire-By-Hash is that the Index files are named after their hash on the repository, so if the MD5 sum of main/binary-amd64/Packages is 77b2c1539f816832e2d762adb20a2bb1, then the file will be stored at main/binary-amd64/by-hash/MD5Sum/77b2c1539f816832e2d762adb20a2bb1. The path main/binary-amd64/Packages still exists (it's the "Canonical Location" of this particular Index file), but APT won't use it, instead it downloads the file located in the by-hash/ directory.
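To illustrate (a sketch, assuming you have a local copy of that Packages file at hand), the by-hash location can be derived like this:

md5=$(md5sum main/binary-amd64/Packages | awk '{print $1}')
echo "main/binary-amd64/by-hash/MD5Sum/$md5"
# -> main/binary-amd64/by-hash/MD5Sum/77b2c1539f816832e2d762adb20a2bb1 (reusing the example above)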

Why does it matter? This has to do with repository updates, and allowing the package repository to be updated atomically, without interruption of service, and without risk of failure client-side.

It's important to understand that the Release file and the Index files are part of a whole, a set of files that go together, given that Index files are validated by their hash (as listed in the Release file) after download by APT.

If those files are simply named "Release" and "Packages", it means they are not immutable: when the repository is updated, all of those files are updated "in place". And it causes problems. A typical failure mode for the client, during a repository update, is that: 1) APT requests the Release file, then 2) the repository is updated and finally 3) APT requests the Packages files, but their checksums don't match, causing apt update to fail. There are variations of this error, but you get the idea: updating a set of files "in place" is problematic.

The Acquire-By-Hash mechanism was introduced exactly to solve this problem: now the Index files have a unique, immutable name. When the repository is updated, first the new Index files are added in the by-hash/ directory, and only afterwards is the Release file updated. Old Index files in by-hash/ are retained for a while, so there's a grace period during which both the old and the new Release files are valid and working: the Index files that they refer to are available in the repo. As a result: no interruption of service, no failure client-side during repository updates.
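In pseudo-shell, the publication order on the server side looks roughly like this (a simplified sketch, not what any particular repository tool literally does):

# 1. add the new Index file under its hash; the old one stays in place
cp Packages.new "by-hash/SHA256/$(sha256sum Packages.new | awk '{print $1}')"
# 2. refresh the Canonical Location, and switch the Release file last
mv Packages.new Packages
mv Release.new Release
# 3. prune old by-hash/ entries only after a grace period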

This is explained in more details at https://www.chiark.greenend.org.uk/~cjwatson/blog/no-more-hash-sum-mismatch-errors.html, which is the blog post from Colin Watson that came out at the time Acquire-By-Hash was introduced in... 2016. This is still an excellent read in 2025.

So you might be wondering why I'm rambling about a problem that was solved 10 years ago, but then, as I've shown in the introduction, the problem is not solved for everyone. Server-side support for Acquire-By-Hash can't be taken for granted, and unfortunately it never landed in reprepro, as one can see at https://bugs.debian.org/820660.

reprepro is a popular tool for creating APT package repositories. In particular, at Kali Linux we use reprepro, and that's why there's no Acquire-By-Hash: yes in the Kali Release file. As one can guess, it leads to subtle issues during those moments when the repository is updated. However... we're not ready to talk about that yet! There's still another topic that we need to cover: this window of time during which a repository is being updated, and during which apt update might fail.

The window for Hash Sum Mismatches, and the APT trick that saves the day

Pay attention! In this section, we're now talking about packages repositories that do NOT support Acquire-By-Hash, such as the Kali Linux repository.

As I've said above, it's only when the repository is being updated that there is a "Hash Sum Mismatch Window", ie. a moment when apt update might fail for some unlucky clients, due to invalid Index files.

Surely, it's a very very short window of time, right? I mean, it can't take that long to update files on a server, especially when you know that a repository is usually updated via rsync, and rsync goes to great lengths to update files as atomically as it can (with the option --delay-updates). So if apt update fails for me, I've been very unlucky, but I can just retry in a few seconds and it should be fixed, shouldn't it? The answer is: it's not that simple.

So far I pictured the "package repository" as a single server, for simplicity. But it's not always what it is. For Kali Linux, by default users have http.kali.org configured in their APT sources, and it is a redirector, ie. a web server that redirects requests to mirrors that are nearby the client. Some context that matters for what comes next: the Kali repository is synced with ~70 mirrors all around the world, 4 times a day. What happens if your apt update requests are redirected to 2 mirrors close-by, and one was just synced, while the other is still syncing (or even worse, failed to sync entirely)? You'll get a mix of old and new Index files. Hash Sum Mismatch!

As you can see, with this setup the "Hash Sum Mismatch Window" becomes much longer than a few seconds: as long as nearby mirrors are syncing the window is opened. You could have a fast and a slow mirror next to you, and they can be out of sync with each other for several minutes every time the repository is updated, for example.

For Kali Linux in particular, there's a "detail" in our network of mirrors that, as a side-effect, almost guarantees that this window lasts several minutes at least. This is because the pool of mirrors includes kali.download which is in fact the Cloudflare CDN, and from the redirector point of view, it's seen as a "super mirror" that is present in every country. So when APT fires a bunch of requests against http.kali.org, it's likely that some of them will be redirected to the Kali CDN, and others will be redirected to a mirror nearby you. So far so good, but there's another point of detail to be aware of: the Kali CDN is synced first, before the other mirrors. Another thing: usually the mirrors that are the farthest from the Tier-0 mirror are the longest to sync. Packing all of that together: if you live somewhere in Asia, it's not uncommon for your "Hash Sum Mismatch Window" to be as long as 30 minutes, between the moment the Kali CDN is synced, and the moment your nearby mirrors catch up and are finally in sync as well.

Having said all of that, and assuming you're still reading (anyone here?), you might be wondering... Does that mean that apt update is broken 4 times a day, for around 30 minutes, for every Kali user out there? How can they bear with that? Answer is: no, of course not, it's not broken like that. It works despite all of that, and this is thanks to yet another detail that we didn't go into yet. This detail lies in APT itself.

APT is in fact "redirector aware", in a sense. When it fetches a Release file, and if ever the request is redirected, it then fires the subsequent requests against the server to which it was initially redirected. So you are guaranteed that the Release file and the Index files are retrieved from the same mirror! Which brings back our "Hash Sum Mismatch Window" to the window for a single server, ie. something like a few seconds at worst, hopefully. And that's what makes it work for Kali, literally. Without this trick, everything would fall apart.

For reference, this feature was implemented in APT back in... 2016! A busy year it seems! Here's the link to the commit: use the same redirection mirror for all index files.

To finish, a dump from the console. You can see this behaviour play out easily, again with APT debugging turned on. Below we can see that only the first request hits the Kali redirector:

$ podman run --rm kali-rolling apt -q -o Debug::Acquire::http=true update 2>&1 | grep -e ^Answer -e ^HTTP
Answer for: http://http.kali.org/kali/dists/kali-rolling/InRelease
HTTP/1.1 302 Found
Answer for: http://mirror.freedif.org/kali/dists/kali-rolling/InRelease
HTTP/1.1 200 OK
Answer for: http://mirror.freedif.org/kali/dists/kali-rolling/non-free-firmware/binary-amd64/Packages.gz
HTTP/1.1 200 OK
Answer for: http://mirror.freedif.org/kali/dists/kali-rolling/contrib/binary-amd64/Packages.gz
HTTP/1.1 200 OK
Answer for: http://mirror.freedif.org/kali/dists/kali-rolling/main/binary-amd64/Packages.gz
HTTP/1.1 200 OK
Answer for: http://mirror.freedif.org/kali/dists/kali-rolling/non-free/binary-amd64/Packages.gz
HTTP/1.1 200 OK

Interlude

Believe it or not, we're done with the introduction! At this point, we have a good understanding of what apt update does (in terms of HTTP requests), we know that Release files and Index files are part of a whole, and we know that a repository can be updated atomically thanks to the Acquire-By-Hash feature, so that users don't experience interruption of service or failures of any sort, even with a rolling repository that is updated several times a day, like Debian sid.

We've also learnt that, despite the fact that Acquire-By-Hash landed almost 10 years ago, some distributions like Kali Linux are still doing without it... and yet it works! But the reason why it works is more complicated to grasp, especially when you add a network of mirrors and a redirector to the picture. Moreover, it doesn't work as flawlessly as with the Acquire-By-Hash feature: we still expect some short (seconds at worst) "Hash Sum Mismatch Windows" for those unlucky users that run apt update at the wrong moment.

This was a long intro, but that really sets the stage for what comes next: the edge cases. Some situations in which we can hit some Hash Sum Mismatch errors with Kali. Error cases that I've collected and investigated over time...

If anything, it supports the case that Acquire-By-Hash is really something that should be implemented in reprepro. More on that in the conclusion, but for now, let's look at those edge cases.

Edge Case 1: the caching proxy

If you put a caching proxy (such as approx, my APT caching proxy of choice) between yourself and the actual packages repository, then obviously it's the caching proxy that performs the HTTP requests, and therefore APT will never know about the redirections returned by the server, if any. So the APT trick of downloading all the Index files from the same server in case of redirect doesn't work anymore.

It was rather easy to confirm that by building a Kali package during a mirror sync, and watching it fail at the "Update chroot" step:

$ sudo rm /var/cache/approx/kali/dists/ -fr
$ gbp buildpackage --git-builder=sbuild

+------------------------------------------------------------------------------+
| Update chroot                                Wed, 11 Jun 2025 10:33:32 +0000 |
+------------------------------------------------------------------------------+

Get:1 http://http.kali.org/kali kali-dev InRelease [41.4 kB]
Get:2 http://http.kali.org/kali kali-dev/contrib Sources [81.6 kB]
Get:3 http://http.kali.org/kali kali-dev/main Sources [17.3 MB]
Get:4 http://http.kali.org/kali kali-dev/non-free Sources [122 kB]
Get:5 http://http.kali.org/kali kali-dev/non-free-firmware Sources [8297 B]
Get:6 http://http.kali.org/kali kali-dev/non-free amd64 Packages [197 kB]
Get:7 http://http.kali.org/kali kali-dev/non-free-firmware amd64 Packages [10.6 kB]
Get:8 http://http.kali.org/kali kali-dev/contrib amd64 Packages [120 kB]
Get:9 http://http.kali.org/kali kali-dev/main amd64 Packages [21.0 MB]
Err:9 http://http.kali.org/kali kali-dev/main amd64 Packages
  File has unexpected size (20984689 != 20984861). Mirror sync in progress? [IP: ::1 9999]
  Hashes of expected file:
   - Filesize:20984861 [weak]
   - SHA256:6cbbee5838849ffb24a800bdcd1477e2f4adf5838a844f3838b8b66b7493879e
   - SHA1:a5c7e557a506013bd0cf938ab575fc084ed57dba [weak]
   - MD5Sum:1433ce57419414ffb348fca14ca1b00f [weak]
  Release file created at: Wed, 11 Jun 2025 07:15:10 +0000
Fetched 17.9 MB in 9s (1893 kB/s)
Reading package lists...
E: Failed to fetch http://http.kali.org/kali/dists/kali-dev/main/binary-amd64/Packages.gz  File has unexpected size (20984689 != 20984861). Mirror sync in progress? [IP: ::1 9999]
   Hashes of expected file:
    - Filesize:20984861 [weak]
    - SHA256:6cbbee5838849ffb24a800bdcd1477e2f4adf5838a844f3838b8b66b7493879e
    - SHA1:a5c7e557a506013bd0cf938ab575fc084ed57dba [weak]
    - MD5Sum:1433ce57419414ffb348fca14ca1b00f [weak]
   Release file created at: Wed, 11 Jun 2025 07:15:10 +0000
E: Some index files failed to download. They have been ignored, or old ones used instead.
E: apt-get update failed

The obvious workaround is to NOT use the redirector in the approx configuration. Either use a mirror close by, or the Kali CDN:

$ grep kali /etc/approx/approx.conf 
#kali http://http.kali.org/kali <- do not use the redirector!
kali  http://kali.download/kali

Edge Case 2: debootstrap struggles

What if one tries to debootstrap Kali while mirrors are being synced? It can give you some ugly logs, but it might not be fatal:

$ sudo debootstrap kali-dev kali-dev http://http.kali.org/kali
[...]
I: Target architecture can be executed
I: Retrieving InRelease 
I: Checking Release signature
I: Valid Release signature (key id 827C8569F2518CC677FECA1AED65462EC8D5E4C5)
I: Retrieving Packages 
I: Validating Packages 
W: Retrying failed download of http://http.kali.org/kali/dists/kali-dev/main/binary-amd64/Packages.gz
I: Retrieving Packages 
I: Validating Packages 
W: Retrying failed download of http://http.kali.org/kali/dists/kali-dev/main/binary-amd64/Packages.gz
I: Retrieving Packages 
I: Validating Packages 
W: Retrying failed download of http://http.kali.org/kali/dists/kali-dev/main/binary-amd64/Packages.gz
I: Retrieving Packages 
I: Validating Packages 
W: Retrying failed download of http://http.kali.org/kali/dists/kali-dev/main/binary-amd64/Packages.gz
I: Retrieving Packages 
I: Validating Packages 
I: Resolving dependencies of required packages...
I: Resolving dependencies of base packages...
I: Checking component main on http://http.kali.org/kali...
I: Retrieving adduser 3.152
[...]

To understand this one, we have to go and look at the debootstrap source code. How does debootstrap fetch the Release file and the Index files? It uses wget, and it retries up to 10 times in case of failure. It's not as sophisticated as APT: it doesn't detect when the Release file is served via a redirect.

As a consequence, what happens above can be explained as such:

  1. debootstrap requests the Release file, gets redirected to a mirror, and retrieves it from there
  2. then it requests the Packages file, gets redirected to another mirror that is not in sync with the first one, and retrieves it from there
  3. validation fails, since the checksum is not as expected
  4. try again and again

Since debootstrap retries up to 10 times, at some point it's lucky enough to get redirected to the same mirror as the one it got its Release file from, and this time it gets the right Packages file, with the expected checksum. So ultimately it succeeds.
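In shell terms, the behaviour boils down to something like this (a simplified sketch, not the actual debootstrap code; validate_checksum stands in for the check against the hashes listed in the Release file):

for try in $(seq 1 10); do
    wget -q -O Packages.gz http://http.kali.org/kali/dists/kali-dev/main/binary-amd64/Packages.gz \
        && validate_checksum Packages.gz && break   # each attempt may land on a different mirror
    echo "W: Retrying failed download of http://http.kali.org/kali/dists/kali-dev/main/binary-amd64/Packages.gz"
done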

Edge Case 3: post-debootstrap failure

I like this one, because it gets us to yet another detail that we didn't talk about yet.

So, what happens after we successfully debootstrapped Kali? We have only the main component enabled, and only the Index file for this component has been retrieved. It looks like this:

$ sudo debootstrap kali-dev kali-dev http://http.kali.org/kali
[...]
I: Base system installed successfully.

$ cat kali-dev/etc/apt/sources.list
deb http://http.kali.org/kali kali-dev main

$ ls -l kali-dev/var/lib/apt/lists/
total 80468
-rw-r--r-- 1 root root    41445 Jun 19 07:02 http.kali.org_kali_dists_kali-dev_InRelease
-rw-r--r-- 1 root root 82299122 Jun 19 07:01 http.kali.org_kali_dists_kali-dev_main_binary-amd64_Packages
-rw-r--r-- 1 root root    40562 Jun 19 11:54 http.kali.org_kali_dists_kali-dev_Release
-rw-r--r-- 1 root root      833 Jun 19 11:54 http.kali.org_kali_dists_kali-dev_Release.gpg
drwxr-xr-x 2 root root     4096 Jun 19 11:54 partial

So far so good. Next step would be to complete the sources.list with other components, then run apt update: APT will download the missing Index files. But if you're unlucky, that might fail:

$ sudo sed -i 's/main$/main contrib non-free non-free-firmware/' kali-dev/etc/apt/sources.list

$ cat kali-dev/etc/apt/sources.list
deb http://http.kali.org/kali kali-dev main contrib non-free non-free-firmware

$ sudo chroot kali-dev apt update
Hit:1 http://http.kali.org/kali kali-dev InRelease
Get:2 http://kali.download/kali kali-dev/contrib amd64 Packages [121 kB]
Get:4 http://mirror.sg.gs/kali kali-dev/non-free-firmware amd64 Packages [10.6 kB]
Get:3 http://mirror.freedif.org/kali kali-dev/non-free amd64 Packages [198 kB]
Err:3 http://mirror.freedif.org/kali kali-dev/non-free amd64 Packages
  File has unexpected size (10442 != 10584). Mirror sync in progress? [IP: 66.96.199.63 80]
  Hashes of expected file:
   - Filesize:10584 [weak]
   - SHA256:71a83d895f3488d8ebf63ccd3216923a7196f06f088461f8770cee3645376abb
   - SHA1:c4ff126b151f5150d6a8464bc6ed3c768627a197 [weak]
   - MD5Sum:a49f46a85febb275346c51ba0aa8c110 [weak]
  Release file created at: Fri, 23 May 2025 06:48:41 +0000
Fetched 336 kB in 4s (77.5 kB/s)  
Reading package lists... Done
E: Failed to fetch http://mirror.freedif.org/kali/dists/kali-dev/non-free/binary-amd64/Packages.gz  File has unexpected size (10442 != 10584). Mirror sync in progress? [IP: 66.96.199.63 80]
   Hashes of expected file:
    - Filesize:10584 [weak]
    - SHA256:71a83d895f3488d8ebf63ccd3216923a7196f06f088461f8770cee3645376abb
    - SHA1:c4ff126b151f5150d6a8464bc6ed3c768627a197 [weak]
    - MD5Sum:a49f46a85febb275346c51ba0aa8c110 [weak]
   Release file created at: Fri, 23 May 2025 06:48:41 +0000
E: Some index files failed to download. They have been ignored, or old ones used instead.

What happened here? Again, we need APT debugging options to have a hint:

$ sudo chroot kali-dev apt -q -o Debug::Acquire::http=true update 2>&1 | grep -e ^Answer -e ^HTTP
Answer for: http://http.kali.org/kali/dists/kali-dev/InRelease
HTTP/1.1 304 Not Modified
Answer for: http://http.kali.org/kali/dists/kali-dev/contrib/binary-amd64/Packages.gz
HTTP/1.1 302 Found
Answer for: http://http.kali.org/kali/dists/kali-dev/non-free/binary-amd64/Packages.gz
HTTP/1.1 302 Found
Answer for: http://http.kali.org/kali/dists/kali-dev/non-free-firmware/binary-amd64/Packages.gz
HTTP/1.1 302 Found
Answer for: http://kali.download/kali/dists/kali-dev/contrib/binary-amd64/Packages.gz
HTTP/1.1 200 OK
Answer for: http://mirror.sg.gs/kali/dists/kali-dev/non-free-firmware/binary-amd64/Packages.gz
HTTP/1.1 200 OK
Answer for: http://mirror.freedif.org/kali/dists/kali-dev/non-free/binary-amd64/Packages.gz
HTTP/1.1 200 OK

As we can see above, for the Release file we get a 304 (aka. "Not Modified") from the redirector. Why is that?

This is due to If-Modified-Since, the conditional request mechanism specified in RFC 7232. APT supports this feature when it retrieves the Release file: it basically says to the server "Give me the Release file, but only if it's newer than what I already have". If the file on the server is not newer than that, it answers with a 304, which basically says to the client "You have the latest version already". So APT doesn't get a new Release file, it uses the Release file that is already present locally in /var/lib/apt/lists/, and then it proceeds to download the missing Index files. And as we can see above: it then hits the redirector for each request, and might be redirected to different mirrors for each Index file.
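You can reproduce that behaviour by hand with curl (a sketch, reusing the cached InRelease file from the chroot above; expect the 304 as long as the repository hasn't been updated since):

stamp=$(LC_ALL=C date -u -r kali-dev/var/lib/apt/lists/http.kali.org_kali_dists_kali-dev_InRelease '+%a, %d %b %Y %H:%M:%S GMT')
curl -sI -H "If-Modified-Since: $stamp" http://http.kali.org/kali/dists/kali-dev/InRelease | head -n 1
# HTTP/1.1 304 Not Modified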

So the important bit here is: the APT "trick" of downloading all the Index files from the same mirror only works if the Release file is served via a redirect. If it's not, as in this case, then APT hits the redirector for each file it needs to download, and it's subject to the "Hash Sum Mismatch" error again.

In practice, for the casual user running apt update every now and then, it's not an issue. If they have the latest Release file, no extra requests are done, because they also have the latest Index files, from a previous apt update transaction. So APT doesn't re-download those Index files. The only reason why they'd have the latest Release file, and would miss some Index files, would be that they added new components to their APT sources, like we just did above. Not so common, and then they'd need to run apt update at an unlucky moment. I don't think many users are affected in practice.

Note that this issue is rather new for Kali Linux. The redirector running on http.kali.org is mirrorbits, and support for If-Modified-Since just landed in the latest release, version 0.6. This feature was added by none other than me, a great example of the expression "shooting oneself in the foot".

An obvious workaround here is to empty /var/lib/apt/lists/ in the chroot after debootstrap completed. Or we could disable support for If-Modified-Since entirely for Kali's instance of mirrorbits.
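Concretely, the first workaround is a two-liner (reusing the chroot from the example above):

# drop the cached lists so that the next apt update fetches a fresh Release file,
# follows the redirect, and then sticks to a single mirror for all the Index files
sudo rm -rf kali-dev/var/lib/apt/lists/*
sudo chroot kali-dev apt update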

Summary and Conclusion

The Hash Sum Mismatch failures above are caused by a combination of things:

  • Kali uses a redirector + a network of mirrors
  • Kali repo doesn't support Acquire-By-Hash
  • The fact that the redirector honors If-Modified-Since makes the matter a bit worse

At the same time:

  • Regular users that just use APT to update their system or install packages are not affected by those issues
  • Power users (that set up a caching proxy) or developers (that use debootstrap) are the most likely to hit those issues every now and then
  • It only happens during those specific windows of time, when mirrors might not be in sync with each other, 4 times a day
  • It's rather easy to work around on the user side, by NOT using the redirector
  • However, unless you're deep into these things, you're unlikely to understand what caused the issues, and to magically guess the workarounds

All in all, it seems that all those issues would go away if only Acquire-By-Hash was supported in the Kali packages repository.

Now is not a bad moment to try to land this feature in reprepro. After development halted in 2019, there's now a new upstream, and patches are being merged again. But it won't be easy: reprepro is a C codebase of around 50k lines of code, and it will take time and effort for the newcomer to get acquainted with the codebase, to the point of being able to implement a significant feature like this one.

As an alternative, aptly is another popular tool to manage APT package repositories. And it seems to support Acquire-By-Hash already.

Another alternative: I was told that debusine has (experimental) support for package repositories, and that Acquire-By-Hash is supported as well.

Options are on the table, and I hope that Kali will eventually get support for Acquire-By-Hash, one way or another.

To finish, due credits: this blog post exists thanks to my employer OffSec.

Thanks for reading!

17 July, 2025 12:00AM by Arnaud Rebillout

July 16, 2025

Swiss JuristGate

Exclusive: corruption in Tribunals, Greffiers, from protection rackets to cat whisperers

In 2022, the Transparency International Corruption Perception Index (CPI) ranked Switzerland at number seven on their list, meaning it is the seventh least corrupt country based on the methodology used for ranking. Did Switzerland achieve this favorable score due to genuine attempts to be clean or due to the effectiveness with which Swiss laws and Swiss culture help to hide the wrongdoing?

The favorable ranking from Transparency International was reported widely in the media. At the same time, most media reports also noted Transparency International's country report card had included caveats about nepotism, lobbyists and vulnerability of whistleblowers.

According to Transparency International, their scoring is based on the perception of corruption. Swiss laws on criminal speech tend to prevent the media from reporting any bad news at all. This gives the public a false sense of security. In earlier blogs, we looked at a series of positive news reports about Parreaux, Thiébaud & Partners when the firm was launched in 2017/2018. Yet when regulators took disciplinary action against the firm in 2023, there was not one news report about the enforcement action. Without news reporting, the public perception of corruption is likely to be totally disconnected from reality. Given that Transparency International's rankings are based on the public perception, the Swiss legal system has gamed the rankings and allowed Switzerland to earn a ranking that it may not deserve.

When people do try to document the reality, they are sent to prison. Many multinational companies operate a three hundred and sixty degree review system whereby employees can give each other feedback. The human rights activist Gerhard Ulrich created a web site where Swiss citizens could write three sixty degree reviews of decisions made by their local judges. The web site was censored and a SWAT team, the elite TIGRIS unit was sent to arrest Gerhard Ulrich and take him to prison.

Trevor Kitchen is another well-known spokesperson for investors' rights. In the 1990s Kitchen discovered Swiss people taking credit for his work and not properly attributing his share. Some time later he discovered the FX scandal. During Mr Kitchen's retirement in Portugal, Swiss persecutors used the European Arrest Warrant (EAW) to punish him from afar. Amnesty International published a report noting he was subjected to physical and sexual abuse by Swiss authorities in 1993, and then, using the EAW, they tricked the police in Portugal into repeating the abuse 25 years later in 2018.

By publishing the facts below, I face the same risk of physical and sexual abuse by corrupt police and lawyerists.

If the Swiss public were fully aware of these details, would Switzerland still rate so highly on Transparency International's scale of public perception?

If Transparency International's system can be fooled so easily by states with criminal speech laws, why doesn't Transparency International develop a better methodology for ranking corruption?

Every fact I am reporting here can be found using various sources on the Internet, including the Wayback Machine and the social media profiles of the various people named below. Yet when these facts are assembled in the same article they reveal the inconvenient truth about the Swiss legal system as a whole.

In 2015, the Swiss attorney Benjamin Suter went to New Zealand to complete an LLM at Victoria University. The Victoria University of Wellington Law Review published an article by Mr Suter "Appointment, Discipline and Removal of Judges: a Comparison of the Swiss and New Zealand Judiciaries".

At the time, Mr Suter may have felt that writing the article was a rather abstract exercise. Five years later in 2020, scandal broke out in the Swiss parliament when the fascist SVP / UDC party announced they would try to remove a judge because his "behavior" was not submissive enough for their liking:

On September 23, both houses of parliament are set to appoint a new crop of judges to the Federal Court. But in the lead-up to this, the rightwing Swiss People’s Party has dropped a bombshell.

“We’re proposing to vote judge Yves Donzallaz out of office,” the leader of the party’s parliamentary group Thomas Aeschi has announced.

It reminds me of an incident from 1978 in Australia. In a previous blog, I looked at the prison escape of James Richard Loughnan and the help he received from Fr Sean Patrick O'Connell of St Paul's, Coburg.

Loughnan's next chance to win freedom came a year later when another young criminal, Mark Brandon Read, walked into a courtroom with his shotgun and kidnapped a judge to have Loughnan released. Read went on to become one of Australia's most notorious criminals, using the name Chopper Read. The movie Chopper helps us get to know him better.

Escape bid: police

28 January 1978

A man who menaced a County Court judge with a shotgun on Thursday was a "comic character Charles Chaplin would have portrayed sympathetically", a barrister told Melbourne magistrates court yesterday.

Ironically, Charlie Chaplin was accused of being a communist and fled the US to take refuge in Switzerland. He is buried at Corsier-sur-Vevey in the Canton of Vaud.

... Read had planned to hold the judge hostage while Loughnan was brought to the court and given an automatic car and a magnum pistol.

Chopper Read, kidnapping judge

 

Isn't it remarkable to find the Swiss fascist party ( SVP / UDC) and Chopper Read both using the same tactics, kidnapping and blackmailing judges, to get their way?

Suter had anticipated that moment five years prior in the introduction of his paper:

The author explains how, in Switzerland, openly political and other considerations are weighed in the course of electing judges and how the appointment of lay judges is balanced with an active role of law clerks (Greffier). In contrast, New Zealand has a proud tradition of apolitical judicial appointments that are made solely based on merit. The author criticises that Swiss judges are elected for a term of office, whereas New Zealand judges enjoy the security of tenure and thus, a greater judicial independence.

Mr Suter asserts that the judges are effectively an extension of the political parties and the law clerks (Greffier) take a more active role to prevent the judges indulging themselves. In fact, the word judge looks similar in English and French but it is not really the same thing at all. The term law clerk is used for convenience in English but it is not really a perfect translation either. The role performed by a law clerk in an English-derived courtroom is very different to the role performed by a Greffier in a Swiss courtroom. Therefore, using the term law clerk is confusing and it is better to simply refer to them by the native name, Greffier in French or Gerichtsschreiber in German.

In section IV, appointment of judges, Suter tells us:

The formal requirements to be a federal court judge are scant: any person eligible to vote, that is to say, anyone over the age of 18 who is not incapacitated, may be appointed as a federal court judge.

In other words, a judge does not need to have a law degree or any experience working in a court.

Suter goes on

Typically, lay judges will only be part of a panel of judges, together with judges holding a law degree. It may happen though that a lay judge must act as a single judge as was the case in X v Canton of Thurgau, where both the President and the Vice-President of the District Court had recused themselves. The Federal Supreme Court held that to have a case adjudicated by a lay judge is not in violation of the right to a fair trial as long as a trained law clerk participates in the management of the proceedings and the decision making. The court noted that in the Canton of Thurgau – as in many other cantons – the law clerk may actively participate in the deliberations on the judgment.

In Switzerland, it is intended that these lay judges, without legal qualifications, bring some diversity to the system and avoid the problem of career jurists ruling over society like royal princes.

In English-speaking countries, trials have a jury and the people in the jury are non-lawyers.

The judges in Switzerland are appointed by a political party for a period of four to ten years. Members of a jury in English-speaking countries are selected randomly and replaced for each new trial. Both lay judges and juries are alternative ways of bringing non-lawyers into the decision making process of the tribunal.

The idea that lay judges make the tribunal more in touch with the community is something of a myth. The judges, including lay judges, are under some control from their political party. The political parties are under control from their most significant donors. Look at Elon Musk and his attempt to create the America Party.

Caroline Kuhnlein-Hofmann was the judge in charge of the civil court in the Canton of Vaud. In another blog post, I demonstrated how Kuhnlein-Hofmann is a member of the Green Party along with one of my competitors, Gerhard Andrey of the company Liip SA. Moreover, Mr Andrey is also a politician for the Green party in the federal parliament. One of Mr Andrey's employees, Didier Raboud is another Debian Developer. It is an incestuous web of corruption indeed.

Look specifically at the payments from the so-called judge's salary into the Green Party's Swiss bank account. In Australia, when a politician is elected, they have a similar obligation to give some of their salary back to their political party. While this woman is using the title "judge", she is more like a politician and a servant of her political party. The payments to the Green Party demonstrate that she has an obligation to the party, she has to give them money and judgments. This is not speculation, the SVP / UDC party said the same thing very loudly in 2020.

Caroline Kuhnlein-Hofmann, Gerhard Andrey, Didier Raboud, Liip SA, Greens, Les Vertes Suisses

 

In the specific analysis of Kuhnlein-Hofmann, I presented the minutes from the meeting of her fellow politicians who promoted her to be a judge on 3 March 2010.

Was she working as a lawyer before she was appointed as a judge?

The Wayback machine has a snapshot of the website for the Ordre des Avocats Vaudois (bar association Canton Vaud) from before her appointment to the tribunal. Searching for the name Kuhnlein we only found her husband.

Vivian Kuhnlein, Caroline Kuhnlein-Hofmann

 

Suter has reminded us again of the importance of the Greffier to complement the work of the unqualified lay judges. But what if the judges are not real lawyers and the Greffiers were not trustworthy either?

Look out for the blind leading the blind.

Caroline Kuhnlein-Hofmann, Melanie Bron, Vaud

 

Suter tells us that the Greffier participates in the deliberations of the judge or judges. In cases where a single lay judge is hearing a trial, the Federal Supreme Court requires the Greffier to be involved in the deliberations. Therefore, the ability for rogue Greffiers to participate in deliberations would bring the whole system and all the judgements into disrepute. It all comes down like a house of cards.

house of cards

 

Benjamin Suter, the author of the report, works for Walder Wyss, the same legal firm that acted as liquidator for Parreaux, Thiébaud & Partners. Suter tells us:

In some cantons, law clerks are even allowed to act in place of judges in some respects, for instance in matters of urgency. In the Canton of Valais/Wallis, law clerks (Greffier) may substitute district court judges.

Remarkably, Mathieu Parreaux, the founder of Parreaux, Thiébaud & Partners was also a Greffier in the Canton of Valais, the same Canton where a Greffier can act as a judge and pass judgments on their own without any involvement of the real judge.

A snapshot of Mathieu Parreaux's biography, captured by the Wayback Machine, tells us that Parreaux was still working as a Greffier at the same time that he was selling legal fees insurance to the public.

Mathieu Parreaux began his career in 2010, training in accounting and tax law in a fiduciary capacity at the renowned Scheizerweg Finance. Following this experience, he held a KYC officer position at several private banks in Geneva, such as Safra Sarasin and Audi Bank.

After completing his banking experience, he worked at law firms, notably at Ochsner et associés in Geneva and Besselegal in Nyon. Finally, he gained further experience at the Daudin&CIE real estate agency in Geneva.

In 2017, pursuing his desire to bring an innovative perspective and practice to the field of law, Mathieu founded his law firm, Parreaux&Associés. His clientele includes individuals and legal entities, both nationally and internationally.

That same year, Mathieu took up his duties as lead Greffier at the Tribunal of Monthey in Canton Valais, thus expanding the Municipality's conciliation authority.

He also began teaching law at the private Moser College in Geneva.

In early 2018, Parreaux & Partners merged with Mr. François Thiébaud's service company. By combining their assets and expertise, the new firm Parreaux, Thiébaud & Partners established additional tools to achieve its primary goal: to represent your interests in all legal matters, while providing a personal service.

Mathieu practices primarily in corporate law, namely contract law, tax law, corporate law, and banking and finance law.

Mathieu also practices health law (medical law, pharmaceutical law, and forensic medicine).

Therefore, by giving Mr Parreaux payments of legal fees protection insurance, people would feel they are gaining influence over somebody with the power of a judge.

In the tribunal of Monthey, the judge was Antoine Pitteloud ( left / socialist party) and the Deputy Judge was Roland Maire ( PDC / the Center party).

Notice in 2021, Mr Parreaux was putting his own name at the bottom of the renewal invoices sent to clients. In 2022, he changed the business name to Justicia SA and had one of his employees put their name at the bottom of the invoice letters.

When thinking about the incredible conflict of interest, it is a good moment to remember the story of John Smyth QC, the British barrister who achieved the role of Recorder, a low-ranking judge, in the British courts while simultaneously being a Reader in the Church of England and a prolific pedophile.

While Walder Wyss was the liquidator of Parreaux, Thiébaud & Partners they were simultaneously involved in litigation against clients of Parreaux, Thiébaud & Partners. This is another outrageous conflict of interest.

After gaining access to client records through the liquidation, they had unreasonable advantages in using those records during unrelated litigation.

When FINMA publicly banned Mathieu Parreaux from selling insurance for two years, they did not make any public comment on his role or disqualification as a Greffier. Does this mean he can continue working as a Greffier as long as he does not sell insurance at the same time?

In the Lawyer X scandal in Australia, hundreds of judgments had to be overturned due to a miscarriage of justice. If the Swiss public were aware of the full circumstances then every judgment involving Mathieu Parreaux or Walder Wyss could also be invalidated. This appears to be one of the reasons for the intense secrecy about the JuristGate affair.

During my research, I found two other employees of the legal fee insurance scheme who were also employed in a tribunal as a Greffier. It looks like there was a revolving door between the illegal legal insurance scheme and the tribunal.

What about the Greffier who created an invalid judgment trying to transfer a trademark that had already been canceled? The signature of the Greffier Mélanie Bron appears beside the signature of Caroline Kuhnlein-Hofmann in the invalid judgment. Does Madame Bron have any conflicts of interest, political engagement or side businesses?

Caroline Kuhnlein-Hofmann, Mélanie Bron

 

Mélanie Bron

 

The Cantonal Archives tell us they have a copy of Madame Bron's thesis on family law. There is no academic record of her working on trademark law. The thesis is cited in various other research works and it is mentioned in a 2004 edition of Uniscope, the UNIL newsletter.

A news report from 2018 tells us that Madame Bron was trying to divert more Cantonal police to Blonay, the village where she resides. She is pictured with other local figures Jean-Marc Nicolet, André Grivel and mayor Bertrand Cherix of the Parti libéral-radical (PLR). That is the same political party as the judge Richard Oulevay.

Jean-Marc Nicolet, Mélanie Bron, André Grivel, Bertrand Cherix

 

Is it appropriate for somebody with the powers of a judge to try and influence the deployment of police resources to suit their personal circumstances or should they be concerned with distributing police resources throughout the canton at large?

In the abstract of Benjamin Suter's report, he told us that the Greffier is meant to help keep the politically-affiliated judges honest. If the Greffiers are not honest either, the system described by Suter is a sham.

We found Mélanie Bron listed as a teacher for l'Ecole de la Conscience which claims to be the only French-speaking school of animal communication. Here is her biography from the web site:

Mélanie Bron - Teacher/trainer in CAI

A lawyer by training, Mélanie devotes much of her time to the animal world, practicing animal communication consultations since 2012 and offering feline/canine behavioral counseling since 2018. Her training as an animal masseuse (cats, dogs, and small mammals) furthers her understanding of animal physical sensations. Since 2020, she has combined her expertise in traditional Feng Shui to harmonize the living spaces of animals and their human companions. She teaches Cycle 1 in French-speaking Switzerland. You can find her internship dates and locations on our calendar.

Imagine for a moment that you are in the middle of a legal dispute and your brother calls up the Greffier / cat whisperer and asks her to take his cat for a walk. Hypothetically, he pays ten thousand Swiss francs for her to interpret his cat and you cross your fingers and hope that your company's trademark will be granted well-known status like Coca-cola or Disney.

It needs to be emphasized that the book value of your trademark increases by millions of francs with a declaration of well known status under the Paris convention. Any fee that is hypothetically paid for cat whispering is trivial in comparison to the profit for your organization.

Martine Richoz keeps a list of links where she tells us that Mélanie Bron is offering services in Chernex VD. Mélanie Bron has her own web site, HomeChance.ch where she displays horse-whispering pictures.

I will be happy to help you convey the messages you want to send to your pet and to receive their messages for you, or to help you create an interior that is conducive to your well-being and that of your loved ones.

In other countries, judges and senior employees of a tribunal are prohibited from running businesses on the side. When a jury is deliberating they are usually sequestered in a hotel to prevent any messages being conveyed through family and the media.

We see her accepting money from people though Facebook:

[OUR GRADUATES]��🧑�� Discover the career path of Mélanie Bron, a professional trained by Fabienne Maillefer and who collaborates with her in Switzerland 🇨🇭��
�� All of Fabienne's trained communicators have completed a comprehensive course as animal interpreters (practitioner level) at the École de la Conscience 🗣�👌
�� The average price of a consultation is CHF 100 / €80
�� Questions? Visit www.ecoledelaconscience.com
�� Fabienne Maillefer has been teaching for over 17 years. In 2018, she founded the École de la conscience. It is the first school of its kind to be recognized as a continuing education organization, delivering quality education by obtaining an international label reserved for adult education 💯👌
Mélanie Bron, horse whisperer, Blonay, Chernex, Vaud

 

Judgments in Switzerland are typically secret. Many more disputes are settled secretly at mediation without any judgment at all. The Greffier has access to all these secret settlement accords. What if Mathieu Parreaux was editing secret settlement documents on the same personal laptop he was using for Parreaux, Thiébaud & Partners? Where did the laptops go when the firm was shut down?

When FINMA published their judgment against Parreaux, Thiébaud & Partners, they redacted almost every paragraph.

By joining this scandal directly into the cantonal tribunal, using this 2015 report from Benjamin Suter at Walder Wyss, liquidator of Parreaux, Thiébaud & Partners, we have proven, beyond reasonable doubt, the entire Swiss legal system smells like a protection racket.

They pretend to be a country and they act like a student union. I graduated from the National Union of Students in Australia, traveled half way around the world to Switzerland, I thought student politics was behind me and I found a bona fide kangaroo court system at work in the alps.

A kangaroo court with a cat whisperer.

In Zurich, they tried to tell us that there was a smell coming from our cats. I recorded the discussion in the tribunal and published a full report about the black cat harassment judgment.

Dog ate the judgment

Here is the letter from the Swiss Intellectual Property Institute (IGE / IPI) telling the judges, greffiers and cat whisperers that the so-called Debian "judgment" can not be processed:

Debian, trademark, judgment

 

The judge Richard Oulevey sent another letter acknowledging that their so-called judgment is impossible to follow, in other words, it is on par with witchcraft.

Debian, trademark, judgment

 

It was the responsibility of the Greffier, Mélanie Bron to communicate with all the parties in the case and communicate with the Intellectual Property Institute to make sure there really was a trademark registration in effect. If she can communicate with cats and dogs, why couldn't she communicate with the IPI?

If running a side business as a pet psychic undermines her work at the tribunal, is it time to give up one job or the other?

Remember, over $120,000 from Debian bank accounts went to these kangaroo courts but when volunteers arrived at DebConf23 in India, they were asked to contribute their own money to some expenses. Abraham Raji apparently didn't have a few extra dollars for the day trip, he was left alone without a life jacket and he drowned.

The Parreaux, Thiébaud & Partners scandal went on for six years from 2017 to 2023 and everybody in the legal profession seemed to know about it from the beginning. Therefore, what other scandals are going on right now that the rest of us don't know about yet?

Footnote: enforcing the law in Pentridge Prison, Melbourne, Australia

From the Wikipedia article about Chopper Read:

While in Pentridge Prison's H division in the late 1970s, Read launched a prison war. "The Overcoat Gang" wore long coats all year round to conceal their weapons, and were involved in several hundred acts of violence against a larger gang during this period. Around this time, Read had a fellow inmate cut both of his ears off to be able to leave H division temporarily. ...

In 1978, while Read was incarcerated, his associate Amos Atkinson held 30 people hostage at The Waiters Restaurant in Melbourne while demanding Read's release. After shots were fired, the siege was lifted when Atkinson's mother, in her dressing gown, arrived at the restaurant to act as go-between. Atkinson's mother hit him over the head with her handbag and told him to "stop being so stupid". Atkinson then surrendered.

16 July, 2025 01:30PM

Sven Hoexter

Windows 10 to Ubuntu Migration

I know that Canonical / Ubuntu people are sometimes not well received due to promotion of Canonical tooling (some might remember upstart and Mir, or more recently snap and netplan). Thus for some positive vibes consider that I could hand out the Ubuntu Desktop image on a USB flash drive to a family member, and the family member could just replace Windows 10 without any assistance. It just worked. This was prompted by the wish to keep a slightly dated ThinkPad in use, which is not supported by Windows 11.

I have to admit that I never looked at Ubuntu Desktop before, but the user experience is on par with everything else I know. Thanks to all the folks at Canonical who made that possible! Luckily the times when you had to fiddle with modelines for XFree86, and the sleepless nights spent configuring lpd to get printing up and running, are long gone. Now that Microsoft is doing Microsoft things with rolling Windows updates that force users to replace perfectly fine working hardware, I believe this is the time to encourage more people to move to open operating systems, and Ubuntu Desktop seems to be a very suitable choice.

Things to Improve

Although I think the out-of-the-box experience is great, there are a few niche topics where things could improve.

Default Access to Apt / Ubuntu Universe

Snaps are promoted as the primary application source, but having a graphical interface like synaptic available by default to install from Ubuntu Universe would be helpful. In this case we wanted to install keepass2 to access the user's keepass file kept from the Windows setup. Having to tell someone to open the terminal and type sudo apt install (as sketched below) is something that requires support.
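For reference, the terminal fallback boils down to two commands. This is only a sketch and assumes the package is named keepass2 and that the universe component is enabled:

$ sudo apt update
$ sudo apt install keepass2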

Snaps and Isolation Overrides

I'm fine with snaps having least privileges, but it would be nice if one could add overrides easily. Here the family member was playing with an Arduino Uno and there is one sample in the workbook that utilizes a Java application called Processing. It's available as a snap, but that one doesn't have access to the required serial port device file. I tried hard to make it work - full details in the snapcraft forum - but failed, and opted to use --devmode to install it without isolation enforcement. As far as I understand snap, that results in no more automatic updates for the application. If someone from the Ubuntu crowd with more snap debugging experience has additional hints on how to narrow down which change is required, I would love to improve that and create a PR for the Processing developers. Either reply in the forum or reach out via mail sven at stormbind dot net.
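For anyone who wants to dig into this, a rough sketch of the commands involved; treat the snap name processing as an assumption, and note that connecting serial-port only works if a matching slot (e.g. provided via hotplug) exists at all:

$ snap connections processing                 # inspect which plugs the snap declares
$ sudo snap connect processing:serial-port    # only possible if a serial-port slot is offered
$ sudo snap install processing --devmode      # last resort: no confinement, no automatic updates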

16 July, 2025 12:50PM

Google Cloud Oddities Summer 2025 Edition

Latest oddities I ran into with Google Cloud products before I start to forget about them again.

e2 Compute Instances vs CloudNAT

Years ago I already had a surprising encounter with the Google Cloud e2 instances. Back then we observed CPU steal time from 20-60%, which made the instances unusable for anything remotely latency sensitive. Now someone started to run a workload which creates many outbound connections to the same destination IP:Port. To connect to the internet we utilize the Google Cloud product "CloudNAT" which implements a NAT solution somewhere in the network layer.

Starting the workload led after a few seconds to all sorts of connection issues, and of course logs from CloudNAT that it dropped connections. The simplest reproducer I could find was while true; do curl http://sven.stormbind.net; done which already led to connection drops on CloudNAT.

We stared a bit at the output of gcloud compute routers get-nat-mapping-info our-default-natgw, but allocating additional ports still worked fine in general. Further investigation led to two differences between a project which was fine and those that failed:

  1. c2d or n2d machine types instead of e2 and
  2. usage of gVNIC.

Moving away from the e2 instances instantly fixed our issue. Only some connection drops could be observed on CloudNAT if we set the min_ports_per_vm value too low and it could not allocate new ports in time. Thus we did some additional optimizations:

  • raised min_ports_per_vm to 256
  • raised max_ports_per_vm to 32768 (the sensible maximum because CloudNAT will always double its allocation)
  • set nat_tcp_timewait_sec to 30, default is 120, reclaim of ports is only running every 30s, thus ports can be re-used after 30-60s

See also upstream documentation regarding timeouts.
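For completeness, the same tuning can be done with the gcloud cli. This is only a sketch with placeholder router/region names, and max ports require dynamic port allocation to be enabled; double check the flag names against gcloud compute routers nats update --help:

  gcloud compute routers nats update our-default-natgw \
    --router=our-default-router --region=europe-west1 \
    --enable-dynamic-port-allocation \
    --min-ports-per-vm=256 \
    --max-ports-per-vm=32768 \
    --tcp-time-wait-timeout=30s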

To complete the setup alignment we also enabled gVNIC on all GKE pools. Noteworthy detail a colleague figured out: If you use terraform to provision GKE pools make sure to use at least google provider v6.33.0 to avoid a re-creation of your node pool.

GKE LoadBalancer Force allPorts: true on Forwarding Rule

Technically it's possible to configure a forwarding rule to listen on some or all ports. That gets more complicated if you do not configure the forwarding rule via terraform or the gcloud cli, but use a GKE resource kind: Service with spec.type: LoadBalancer. The logic documented by Google Cloud is that the forwarding rule will have a per-port configuration if there are five or fewer ports, and above that it will open all ports. Sadly that does not work e.g. in cases where you have an internal load balancer and a serviceAttachment attached to the forwarding rule. In my experience reconfiguring was also unreliable in cases without a serviceAttachment and required a manual deletion of the service load balancer to have the operator reconcile it and create it correctly.

Given that we wanted to have all ports open to allow us to dynamically add more ports on a specific load balancer, but there is no annotation for that, I worked around with this beauty:

      ports:
        - name: dummy-0
          port: 2342
          protocol: TCP
          targetPort: 2342
        - name: dummy-1
          port: 2343
          protocol: TCP
          targetPort: 2343
        - name: dummy-2
          port: 2344
          protocol: TCP
          targetPort: 2344
        - name: dummy-3
          port: 2345
          protocol: TCP
          targetPort: 2345
        - name: service-1
          port: 4242
          protocol: TCP
          targetPort: 4242
        - name: service-2
          port: 4343
          protocol: TCP
          targetPort: 4343

If something in that area does not work out, there are basically two things to check (a cli sketch follows below the list):

  1. Is the port open on the forwarding rule / is the forwarding rule configured with allPorts: true?
  2. Did the VPC firewall rule created by the service operator in GKE get updated to open all required ports?
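Both checks can be done from the cli; a sketch with placeholder names, assuming a regional (internal) load balancer:

  gcloud compute forwarding-rules describe my-svc-fwd-rule --region=europe-west1 \
    --format="yaml(allPorts,ports,portRange)"
  gcloud compute firewall-rules list --filter="name~k8s" --format="table(name,allowed)"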

Rate Limiting with Cloud Armor on Global TCP Proxy Load Balancer

According to Google Cloud support, rate limiting on a TCP proxy is a preview feature. That seems to be the excuse for why it's all very inconsistent right now, but it works.

  • The Google Cloud Web Console is 100% broken and unable to deal with it. Don't touch it via the web.
  • If you configure an exceed_action in a google_compute_security_policy terraform resource you must use a value with response code, e.g. exceed_action = "deny(429)". The response code will be ignored. In all other cases I know you must use a deny without response code if you want to be able to assign the policy to a L3/L4 load balancer.
  • If you use config-connector (kcc) you can already use exceedAction: deny albeit it's not documented. Neither for config-connector itself nor for the API.
  • If you use the gcloud cli you can use --exceed-action=deny which is already documented if you call gcloud beta compute security-policies create --help, but it also works in the non-beta mode. Also export / import via the gcloud cli work with a deny without defining a response code. A gcloud sample snippet follows below.
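gcloud CLI Sample Snippet

Again only a sketch; policy name and priority are placeholders and the flag names are written down from memory, so verify them against gcloud compute security-policies rules create --help:

  gcloud compute security-policies rules create 2342 \
    --security-policy=my-l4-policy \
    --src-ip-ranges="*" \
    --action=rate-based-ban \
    --rate-limit-threshold-count=320 \
    --rate-limit-threshold-interval-sec=60 \
    --ban-duration-sec=240 \
    --ban-threshold-count=320 \
    --ban-threshold-interval-sec=60 \
    --conform-action=allow \
    --exceed-action=deny \
    --enforce-on-key=ip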

Terraform Sample Snippet

  rule {
    description = "L3-L4 Rate Limit"
    action      = "rate_based_ban"
    priority    = "2342"
    match {
      versioned_expr = "SRC_IPS_V1"
      config {
        src_ip_ranges = ["*"]
      }
    }
    rate_limit_options {
      enforce_on_key = "IP"
      # exceed_action only supports deny() with a response code
      exceed_action = "deny(429)"
      rate_limit_threshold {
        count        = 320
        interval_sec = 60
      }
      ban_duration_sec = 240
      ban_threshold {
        count        = 320
        interval_sec = 60
      }
      conform_action = "allow"
    }
  }

Config-Connector Sample Snippet

  - action: rate_based_ban
    description: L3-L4 Rate Limit
    match:
      config:
        srcIpRanges:
          - "*"
      versionedExpr: SRC_IPS_V1
    preview: false
    priority: 2342
    rateLimitOptions:
      banDurationSec: 240
      banThreshold:
        count: 320
        intervalSec: 60
      conformAction: allow
      enforceOnKey: IP
      exceedAction: deny
      rateLimitThreshold:
         count: 320
         intervalSec: 60

16 July, 2025 12:06PM

July 15, 2025

hackergotchi for Alberto García

Alberto García

Converting QEMU qcow2 images directly to stdout

Introduction

Some months ago, my colleague Madeeha Javed and I wrote a tool to convert QEMU disk images into qcow2, writing the result directly to stdout.

This tool is called qcow2-to-stdout.py and can be used for example to create a new image and pipe it through gzip and/or send it directly over the network without having to write it to disk first.

This program is included in the QEMU repository: https://github.com/qemu/qemu/blob/master/scripts/qcow2-to-stdout.py

If you simply want to use it then all you need to do is have a look at these examples:

$ qcow2-to-stdout.py source.raw > dest.qcow2
$ qcow2-to-stdout.py -f dmg source.dmg | gzip > dest.qcow2.gz

If you’re interested in the technical details, read on.

A closer look under the hood

QEMU uses disk images to store the contents of the VM’s hard drive. Images are often in qcow2, QEMU’s native format, although a variety of other formats and protocols are also supported.

I have written in detail about the qcow2 format in the past (for example, here and here), but the general idea is very easy to understand: the virtual drive is divided into clusters of a certain size (64 KB by default), and only the clusters containing non-zero data need to be physically present in the qcow2 image. So what we have is essentially a collection of data clusters and a set of tables that map guest clusters (what the VM sees) to host clusters (what the qcow2 file actually stores).

A qcow2 file is a collection of data clusters plus some metadata to map them to what the guest VM sees.

qemu-img is a powerful and versatile tool that can be used to create, modify and convert disk images. It has many different options, but one question that sometimes arises is whether it can use stdin or stdout instead of regular files when converting images.

The short answer is that this is not possible in general. qemu-img convert works by checking the (virtual) size of the source image, creating a destination image of that same size and finally copying all the data from start to finish.

Reading a qcow2 image from stdin doesn’t work because data and metadata blocks can come in any arbitrary order, so it’s perfectly possible that the information that we need in order to start writing the destination image is at the end of the input data¹.

Writing a qcow2 image to stdout doesn’t work either because we need to know in advance the complete list of clusters from the source image that contain non-zero data (this is essential because it affects the destination file’s metadata). However, if we do have that information then writing a new image directly to stdout is technically possible.

The bad news is that qemu-img won’t help us here: it uses the same I/O code as the rest of QEMU. This generic approach makes total sense because it’s simple, versatile and is valid for any kind of source and destination image that QEMU supports. However, it needs random access to both images.

If we want to write a qcow2 file directly to stdout we need new code written specifically for this purpose, and since it cannot reuse the logic present in the QEMU code this was written as a separate tool (a Python script).

The process itself goes like this:

  • Read the source image from start to finish in order to determine which clusters contain non-zero data. These are the only clusters that need to be present in the new image.
  • Write to stdout all the metadata structures of the new image. This is now possible because after the previous step we know how much data we have and where it is located.
  • Read the source image again and copy the clusters with non-zero data to stdout.

Images created with this program always have the same layout: header, refcount tables and blocks, L1 and L2 tables, and finally all data clusters.

One problem here is that, while QEMU can read many different image formats, qcow2-to-stdout.py is an independent tool that does not share any of the code and therefore can only read raw files. The solution here is to use qemu-storage-daemon. This program is part of QEMU and it can use FUSE to export any file that QEMU can read as a raw file. The usage of qemu-storage-daemon is handled automatically and the user only needs to specify the format of the source file:

$ qcow2-to-stdout.py -f dmg source.dmg > dest.qcow2

qcow2-to-stdout.py can only create basic qcow2 files and does not support features like compression or encryption. However, a few parameters can be adjusted, like the cluster size (-c), the width of the reference count entries (-r) and whether the new image is created with the input as an external data file (-d and -R).
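For illustration, a hypothetical invocation combining a couple of those switches; the exact argument syntax is an assumption on my part, so check qcow2-to-stdout.py --help before relying on it:

$ qcow2-to-stdout.py -c 32768 -r 64 source.raw | gzip > dest.qcow2.gz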

And this is all, I hope that you find this tool useful and this post informative. Enjoy!

Acknowledgments

This work has been developed by Igalia and sponsored by Outscale, a Dassault Systèmes brand.

Logos of Igalia and Outscale

¹ This problem would not happen if the input data was in raw format but in this case we would not know the size in advance.

15 July, 2025 05:17PM by berto

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

anytime 0.3.12 on CRAN: Minor Bugfix and Maintenance

A maintenance release 0.3.12 of the anytime package arrived on CRAN today. The package is fairly feature-complete, and code and functionality remain mature and stable.

anytime is a very focused package aiming to do just one thing really well: to convert anything in integer, numeric, character, factor, ordered, … input format to either POSIXct (when called as anytime) or Date objects (when called as anydate) – and to do so without requiring a format string as well as accommodating different formats in one input vector. See the anytime page, or the GitHub repo for a few examples, and the beautiful documentation site for all documentation.

This release covers a corner case reported in a GitHub issue: the (nonsensical but possible) input of zero-length (floating point or integer) vectors was not dealt with properly which lead to an error. We now return the requested type (POSIXct or Date, depending on the call) also with length zero. Two minor maintenance tasks were also addressed since the last release six months ago.
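A quick way to exercise this corner case from the shell (a small sketch; with 0.3.12 installed both calls should return zero-length POSIXct and Date objects instead of raising an error):

$ Rscript -e 'anytime::anytime(numeric(0)); anytime::anydate(integer(0))'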

The short list of changes follows.

Changes in anytime version 0.3.12 (2025-07-14)

  • Continuous integration now uses r-ci action with embedded bootstrap

  • The versioned depends on Rcpp now requires 1.0.8 or newer to support use of the updated header file structure

  • The corner-case of an empty (numeric or integer) vector argument is now addressed, new tests have been added (#135)

Courtesy of my CRANberries, there is also a diffstat report of changes relative to the previous release. The issue tracker of the GitHub repo can be used for questions and comments. More information about the package is at the package page, the GitHub repo and the documentation site.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

15 July, 2025 01:58AM

July 13, 2025

hackergotchi for Bits from Debian

Bits from Debian

DebConf25 starts today in Brest on Monday, July 14, 2025

DebConf25, the 25th annual Debian Developer Conference, is taking place in Brest, France from 14 to 19 July 2025. Debian contributors from all over the world have come together at the Campus of IMT Atlantique Bretagne-Pays de la Loire, Brest, to participate and work in a conference exclusively run by volunteers.

As the conference begins on July 14, the French National Day, Debian can make France's motto its own: "Liberté, égalité, fraternité", Freedom for Free and Open Source Software, Equity for the equal rights (and duties) of everyone to use, modify and share Free Software, Fraternity, which perfectly covers what our code of conduct defines.

Today the main conference starts with around 500 expected attendees and over 140 scheduled activities, including 45-minute and 20-minute talks, Bird of a Feather ("BoF") team meetings, workshops, a job fair, as well as a variety of other events. The full schedule is updated each day, including activities planned ad-hoc by attendees over the course of the conference.

If you would like to engage remotely, you can follow the video streams available from the DebConf25 website for the events happening in the three talk rooms: Méridienne, Grand amphi and Petit amphi accessible from the DebConf25 homepage. Or you can join the conversations happening inside the talk rooms via the OFTC IRC network in the #debconf-meridienne, #debconf-grandamphi, #debconf-petitamphi, and #debconf-bof channels. Please also join us in the #debconf channel for common discussions related to DebConf.

You can also follow the live coverage of news about DebConf25 provided by our micronews service or the @debian profile on your favorite social network.

DebConf is committed to a safe and welcoming environment for all participants. Please see our Code of Conduct page for more information on this.

Debian thanks the commitment of numerous sponsors to support DebConf25, particularly our Platinum Sponsors: Infomaniak, Proxmox, Viridien, EDF, and AMD.

DebConf25 logo

13 July, 2025 10:50PM by The Debian Publicity Team

July 12, 2025

Debconf25 welcomes its sponsors

DebConf25 logo

DebConf25, the 26th edition of the Debian conference, is taking place at the Brest Campus of IMT Atlantique Bretagne-Pays de la Loire, France. We appreciate the organizers' hard work, and hope this event will be highly beneficial for those who attend in person as well as online.

This event would not be possible without the help from our generous sponsors. We would like to warmly welcome the sponsors of DebConf 25, and introduce them to you.

We have five Platinum sponsors.

  • Our first Platinum sponsor is Infomaniak. Infomaniak is Switzerland’s leading developer of Web technologies. With operations all over Europe and based exclusively in Switzerland, the company designs and manages its own data centers powered by 100% renewable energy, and develops all its solutions locally, without outsourcing. With millions of users and the trust of public and private organizations across Europe - such as RTBF, the United Nations, central banks, over 3,000 radio and TV stations, as well as numerous cities and security bodies - Infomaniak stands for sovereign, sustainable and independent digital technology. The company offers a complete suite of collaborative tools, cloud hosting, streaming, marketing and events solutions, while being owned by its employees and self-financed exclusively by its customers.

  • Proxmox is the second Platinum sponsor. Proxmox develops powerful, yet easy-to-use Open Source server software. The product portfolio from Proxmox, including server virtualization, backup, and email security, helps companies of any size, sector, or industry to simplify their IT infrastructures. The Proxmox solutions are built on Debian, we are happy that they give back to the community by sponsoring DebConf25.

  • Viridien is our third Platinum sponsor. Viridien is an advanced technology, digital and Earth data company that pushes the boundaries of science for a more prosperous and sustainable future. Viridien has been using Debian-based systems to power most of its HPC infrastructure and its cloud platform since 2009 and currently employs two active Debian Project Members.

  • EDF is our fourth Platinum sponsor. EDF is a leading global utility company focused on low-carbon power generation. The group uses advanced engineering and scientific computing tools to drive innovation and efficiency in its operations, especially in nuclear power plant design and safety assessment. Since 2003, the EDF Group has been using Debian as its main scientific computing environment. Debian's focus on stability and reproducibility ensures that EDF's calculations and simulations produce consistent and accurate results.

  • AMD is our fifth Platinum sponsor. The AMD ROCm platform includes programming models, tools, compilers, libraries, and runtimes for AI and HPC solution development on AMD GPUs. Debian is an officially supported platform for AMD ROCm and a growing number of components are now included directly in the Debian distribution. For more than 55 years AMD has driven innovation in high-performance computing, graphics and visualization technologies. AMD is deeply committed to supporting and contributing to open-source projects, foundations, and open-standards organizations, taking pride in fostering innovation and collaboration within the open-source community.

Our Gold sponsors are:

  • Ubuntu, the Operating System delivered by Canonical.

  • Freexian, a services company specialized in Free Software and in particular Debian GNU/Linux, covering consulting, custom developments, support and training.

  • Lenovo, a global technology leader manufacturing a wide portfolio of connected products including smartphones, tablets, PCs and workstations as well as AR/VR devices, smart home/office and data center solutions.

  • Collabora, a global consultancy delivering Open Source software solutions to the commercial world.

  • Vyos Networks provides a free routing platform that competes directly with other commercially available solutions and VyOS is an open source network operating system based on Debian.

Our Silver sponsors are:

  • Google, one of the largest technology companies in the world, providing a wide range of Internet-related services and products such as online advertising technologies, search, cloud computing, software, and hardware.
  • Arm: leading technology provider of processor IP, Arm powered solutions have been supporting innovation for more than 30 years and are deployed in over 280 billion chips to date.
  • The Bern University of Applied Sciences with around 7,925 students enrolled, located in the Swiss capital.
  • Siemens is a technology company focused on industry, infrastructure and transport.
  • Linagora develops ethical Free and Open Source Software, supports Open Source products that helps humans and boosts digital progress. They promote Free Software in France.
  • Pexip brings the ease of commercial video platforms to secure and sovereign environments without compromising control or performance.
  • Sipgate offers cloud telephony that automates routine tasks, recognizes customer needs, and enables deep CRM integrations.
  • evolix is a French company specialized in hosting and managed services (MSP), working only with Open Source software.
  • Civil Infrastructure Platform, a collaborative project hosted by the Linux Foundation, establishing an open source “base layer” of industrial grade software.
  • credativ: a consulting and services company offering comprehensive services and technical support for the implementation and operation of Open Source Software in business applications.
  • Wind River is a leader in delivering the highest levels of secure, safe, and reliable solutions for mission-critical intelligent systems.
  • NovaCustom is a company that lets you configure your own laptop with various hardware and software options, focusing on privacy and security.
  • Microsoft who enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.
  • McKay Brothers is the acknowledged leader in providing extreme low latency microwave private bandwidth for firms trading in financial markets.
  • Matanel Foundation, whose first concern is to preserve the cohesion of a society and a nation plagued by divisions.
  • Qualcomm Technologies one of the world's leading companies in field of mobile technology, sponsors and contributes to Open Source developer communities that drive collaboration.

Bronze sponsors:

And finally, our Supporter level sponsors:

A special thanks to the IMT Atlantique Bretagne-Pays de la Loire, our Venue Partner and our Network Partner ResEl!

Thanks to all our sponsors for their support! Their contributions enable a diverse global community of Debian developers and maintainers to collaborate, support one another, and share knowledge at DebConf25.

12 July, 2025 09:45PM by The Debian Publicity Team

Reproducible Builds

Reproducible Builds in June 2025

Welcome to the 6th report from the Reproducible Builds project in 2025. Our monthly reports outline what we’ve been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. If you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.

In this report:

  1. Reproducible Builds at FOSSY 2025
  2. Distribution work
  3. diffoscope
  4. OSS Rebuild updates
  5. Website updates
  6. Upstream patches
  7. Reproducibility testing framework

Reproducible Builds at FOSSY 2025

On Saturday 2nd August, Vagrant Cascadian and Chris Lamb will be presenting at this year’s FOSSY 2025. Their talk, titled Never Mind the Checkboxes, Here’s Reproducible Builds!, is being introduced as follows:

There are numerous policy compliance and regulatory processes being developed that target software development… but do they solve actual problems? Does it improve the quality of software? Do Software Bill of Materials (SBOMs) actually give you the information necessary to verify how a given software artifact was built? What is the goal of all these compliance checklists anyways… or more importantly, what should the goals be? If a software object is signed, who should be trusted to sign it, and can they be trusted … forever?

The talk will introduce the audience to Reproducible Builds as a set of best practices which allow users and developers to verify that software artifacts were built from the source code, but also allows auditing for license compliance, providing security benefits, and removes the need to trust arbitrary software vendors.

Hosted by the Software Freedom Conservancy and taking place in Portland, Oregon, USA, FOSSY aims to be a community-focused event: “Whether you are a long time contributing member of a free software project, a recent graduate of a coding bootcamp or university, or just have an interest in the possibilities that free and open source software bring, FOSSY will have something for you”. More information on the event is available on the FOSSY 2025 website, including the full programme schedule.

Vagrant and Chris will also be staffing a table this year, where they will be available to answer any questions about Reproducible Builds and discuss collaborations with other projects.



Distribution work

In Debian this month:

  • Holger Levsen has discovered that it is now possible to bootstrap a minimal Debian trixie using 100% reproducible packages. This result can itself be reproduced, using the debian-repro-status tool and mmdebstrap’s support for hooks:

      $ mmdebstrap --variant=apt --include=debian-repro-status \
           --chrooted-customize-hook=debian-repro-status \
           trixie /dev/null 2>&1 | grep "Your system has"
       INFO  debian-repro-status > Your system has 100.00% been reproduced.
    
  • On our mailing list this month, Helmut Grohne wrote an extensive message raising an issue related to Uploads with conflicting buildinfo filenames:

    Having several .buildinfo files for the same architecture is something that we plausibly want to have eventually. Imagine running two sets of buildds and assembling a single upload containing buildinfo files from both buildds in the same upload. In a similar vein, as a developer I may want to supply several .buildinfo files with my source upload (e.g. for multiple architectures). Doing any of this is incompatible with current incoming processing and with reprepro.

  • 5 reviews of Debian packages were added, 4 were updated and 8 were removed this month adding to our ever-growing knowledge about identified issues.


In GNU Guix, Timothee Mathieu reported that a long-standing issue with reproducibility of shell containers across different host operating systems has been solved. In their message, Timothee mentions:

I discovered that pytorch (and maybe other dependencies) has a reproducibility problem of order 1e-5 when on AVX512 compared to AVX2. I first tried to solve the problem by disabling AVX512 at the level of pytorch, but it did not work. The dev of pytorch said that it may be because some components dispatch computation to MKL-DNN, I tried to disable AVX512 on MKL, and still the results were not reproducible, I also tried to deactivate in openmpi without success. I finally concluded that there was a problem with AVX512 somewhere in the dependencies graph but I gave up identifying where, as this seems very complicated.


The IzzyOnDroid Android APK repository made more progress in June. Not only have they just passed 48% reproducibility coverage, Ben started making their reproducible builds more visible, by offering rbtlog shields, a kind of badge that has been quickly picked up by many developers who are proud to present their applications’ reproducibility status.


Lastly, in openSUSE news, Bernhard M. Wiedemann posted another monthly update for their work there.


diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 298, 299 and 300 to Debian:

  • Add python3-defusedxml to the Build-Depends in order to include it in the Docker image. []
  • Handle the RPM format’s HEADERSIGNATURES and HEADERIMMUTABLE as a special-case to avoid unnecessarily large diffs. Thanks to Daniel Duan for the report and suggestion. [][]
  • Update copyright years. []

In addition, @puer-robustus fixed a regression introduced in an earlier commit which resulted in some differences being lost. [][]

Lastly, Vagrant Cascadian updated diffoscope in GNU Guix to version 299 [][] and 300 [][].


OSS Rebuild updates

OSS Rebuild has added a new network analyzer that provides transparent HTTP(S) interception during builds, capturing all network traffic to monitor external dependencies and identify suspicious behavior, even in unmodified maintainer-controlled build processes.

The text-based user interface now features automated failure clustering that can group similar rebuild failures and provides natural language failure summaries, making it easier to identify and understand patterns across large numbers of build failures.

OSS Rebuild has also improved the local development experience with a unified interface for build execution strategies, allowing for more extensible environment setup for build execution. The team also designed a new website and logo.


Website updates

Once again, there were a number of improvements made to our website this month including:



Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:



Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In June, however, a number of changes were made by Holger Levsen, including:


  • reproduce.debian.net-related:

    • Installed and deployed rebuilderd version 0.24 from Debian unstable in order to make use of the new compression feature added by Jarl Gullberg for the database. This resulted in massive decrease of the SQLite databases:

      • 79G → 2.8G (all)
      • 84G → 3.2G (amd64)
      • 75G → 2.9G (arm64)
      • 45G → 2.1G (armel)
      • 48G → 2.2G (armhf)
      • 73G → 2.8G (i386)
      • 72G → 2.7G (ppc64el)
      • 45G → 2.1G (riscv64)

      … for a combined saving from 521G → 20.8G. This naturally reduces the requirements to run an independent rebuilderd instance and will permit us to add more Debian suites as well.

    • During migration to the latest version of rebuilderd, make sure several services are not started. []
    • Actually run rebuilderd from /usr/bin. []
    • Raise temperatures for NVME devices on some riscv64 nodes that should be ignored. [][]
    • Use a 64KB kernel page size on the ppc64el architecture (see #1106757). []
    • Improve ordering of some “failed to reproduce” statistics. []
    • Detect a number of potential causes of build failures within the statistics. [][]
    • Add support for manually scheduling for the any architecture. []
  • Misc:

    • Update the Codethink nodes as there are now many kernels installed. [][]
    • Install linux-sysctl-defaults on Debian trixie systems as we need ping functionality. []
    • Limit the fs.nr_open kernel tunable. []
    • Stop submitting results to deprecated buildinfo.debian.net service. [][]

In addition, Jochen Sprickerhof greatly improved the statistics and the logging functionality, including adapting to the new database format of rebuilderd version 0.24.0 [] and temporarily increasing maximum log size in order to debug a nettlesome build []. Jochen also dropped the CPUSchedulingPolicy=idle systemd flag on the workers. []



Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

12 July, 2025 04:08PM

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Adulting

In the past few weeks, I have done something I had been meaning to do for a while but always pushed to the bottom of my TODO pile: prepare for my death.

I am still quite young and perfectly healthy (mentally and physically) and I do plan to live a long and full life, but death is something that comes for us all and can strike anytime. Having witnessed friends and colleagues lose loved ones who did not prepare adequately for their passing, dealing with all this legal stuff ahead of time seems like the best gift you can leave them.

Writing my will was the easiest part of this "preparation for death" process. I have few material possessions and I'm leaving everything to my SO. As for the copyright for my code, I have decided everything I wrote will be licensed under CC0 (public domain) when I die. Quebec — where I live — also accepts holograph wills, which means I didn't have to hire a notary.

Apart from the will, I also wrote a protection mandate1, filled out Quebec's organ donation form2, took a contract for prearranged funeral services3 and finally, wrote a disaster recovery plan.

This recovery plan was by far the longest and most tedious part of this entire ordeal. If all your machines use full-disk encryption and you die or forget your passwords (for example after a head injury), can your data be recovered? How do you arbitrate between easy recovery and data security? If all your local devices burn down and you also pass away in the process, how will your next of kin access your remote backups and extract the relevant data (in my case, my password manager)?

I had to ask myself many complex questions in this process and although I won't be sharing my disaster recovery plan here (security through obscurity), I urge you to take the time to do something similar yourself and make sure you will leave a house in order when you go away.


  1. in case I become incapacitated and can't make choices by myself anymore. 

  2. it's sadly still opt-in here... 

  3. you pay now for the services you want, the money is kept in a trust in your name and you can't be charged extra when you do pass away. This protects you from inflation and is a great way to make sure your next of kin don't have to deal with the complexities of funeral services while grieving. 

12 July, 2025 04:00AM by Louis-Philippe Véronneau

July 11, 2025

Jamie McClelland

Avoiding Apache Max Request Workers Errors

Wow, I hate this error:

AH00484: server reached MaxRequestWorkers setting, consider raising the MaxRequestWorkers setting

For starters, it means I have to relearn how MaxRequestWorkers functions in Apache:

For threaded and hybrid servers (e.g. event or worker), MaxRequestWorkers restricts the total number of threads that will be available to serve clients. For hybrid MPMs, the default value is 16 (ServerLimit) multiplied by the value of 25 (ThreadsPerChild). Therefore, to increase MaxRequestWorkers to a value that requires more than 16 processes, you must also raise ServerLimit.

Ok… remind me what ServerLimit refers to?

For the prefork MPM, this directive sets the maximum configured value for MaxRequestWorkers for the lifetime of the Apache httpd process. For the worker and event MPMs, this directive in combination with ThreadLimit sets the maximum configured value for MaxRequestWorkers for the lifetime of the Apache httpd process. For the event MPM, this directive also defines how many old server processes may keep running and finish processing open connections. Any attempts to change this directive during a restart will be ignored, but MaxRequestWorkers can be modified during a restart. Special care must be taken when using this directive. If ServerLimit is set to a value much higher than necessary, extra, unused shared memory will be allocated. If both ServerLimit and MaxRequestWorkers are set to values higher than the system can handle, Apache httpd may not start or the system may become unstable. With the prefork MPM, use this directive only if you need to set MaxRequestWorkers higher than 256 (default). Do not set the value of this directive any higher than what you might want to set MaxRequestWorkers to. With worker, use this directive only if your MaxRequestWorkers and ThreadsPerChild settings require more than 16 server processes (default). Do not set the value of this directive any higher than the number of server processes required by what you may want for MaxRequestWorkers and ThreadsPerChild. With event, increase this directive if the process number defined by your MaxRequestWorkers and ThreadsPerChild settings, plus the number of gracefully shutting down processes, is more than 16 server processes (default).

Got it? In other words, you can “consider” raising the MaxRequestWorkers setting all you want, but you can’t just change that setting, you have to read about several other complicated settings, do some math, and spend a lot of time wondering if you are going to remember what you just did and how to undo it if you blow up your server.

On the plus side, typically, nobody should increase this limit - because if the server runs out of connections, it usually means something else is wrong.

In our case, on a shared web server running Apache2 and PHP-FPM, it’s usually because a single web site has gone out of control.

But wait! How can that happen, we are using PHP-FPM’s max_children setting to prevent a single PHP web site from taking down the server?

After years of struggling with this problem I have finally made some headway.

Our PHP pool configuration typically looks like this:

user = site342999writer
group = site342999writer
listen = /run/php/8.1-site342999.sock
listen.owner = www-data
listen.group = www-data
pm = ondemand
pm.max_children = 12
pm.max_requests = 500
php_admin_value[memory_limit] = 256M

And we invoke PHP-FPM via this apache snippet:

<FilesMatch \.php$>
        SetHandler "proxy:unix:/var/run/php/8.1-site342999.sock|fcgi://localhost"
</FilesMatch>

With these settings in place, what happens when we use up all 12 max_children?

According to the docs:

By default, mod_proxy will allow and retain the maximum number of connections that could be used simultaneously by that web server child process. Use the max parameter to reduce the number from the default. The pool of connections is maintained per web server child process, and max and other settings are not coordinated among all child processes, except when only one child process is allowed by configuration or MPM design.

The max parameter seems to default to the ThreadsPerChild, so it seems that the default here is to allow any web site to consume ThreadsPerChild (25) x ServerLimit (16), which is also the maximum number of overall connections. Not great.

To make matter worse, there is another setting available which is mysteriously called acquire:

If set, this will be the maximum time to wait for a free connection in the connection pool, in milliseconds. If there are no free connections in the pool, the Apache httpd will return SERVER_BUSY status to the client.

By default this is not set which seems to suggest Apache will just hang on to connections forever until a free PHP process becomes available (or some other time out happens).

So, let’s try something different:

 <Proxy "fcgi://localhost">
    ProxySet acquire=1 max=12
  </proxy>

This snippet is the way you can configure the proxy configuration we setup in the SetHandler statement above. It’s documented on the Apache mod_proxy page.

Now we limit the maximum pool size per process to half of what is available for the entire server and we tell Apache to immediately throw a 503 error if we have exceeded our maximum number of connections.

Now, if a site is overwhelmed with traffic, instead of maxing out the available Apache connections while leaving users with constantly spinning browsers, the users will get 503’ed and the server will be able to serve other sites.
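A quick way to observe the new behaviour from the outside (example.org stands in for the affected site): hammer it in a loop and watch the status codes; once all twelve PHP children are busy you should see 503 responses instead of requests that hang.

while true; do curl -s -o /dev/null -w "%{http_code}\n" https://example.org/; done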

11 July, 2025 12:27PM

July 10, 2025

Russell Coker

Bad Product Comparisons and EVs

When companies design products a major concern seems to be what the reviewers will have to say about it. For any product of significant value the users are unable to perform any reasonable test before buying, for a casual user some problems may only be apparent after weeks of use so professional reviews are important to many people. The market apparently doesn’t want reviews of the form “here’s a list of products that are quite similar and all do the job well, you can buy any of them, it’s no big deal” which would be the most technically accurate way of doing it.

So the reviewers compare the products on the criteria that are easiest to measure, which has led to phones being compared by how light and thin they are. I think it’s often the case that users would be better served by thicker, heavier phones that have larger batteries, but instead they are being sold phones that have good battery life in a fresh installation but which don’t last a day with a full load of apps installed.

The latest issue with bad reviews driving poor product design is electric cars. For a while the advocates of old fashioned cars have touted the range of petrol cars which has become an issue for comparing EVs. I have been driving cars for 35 years and so far I have never driven anywhere that’s out of range of the current electric charging network, even with the range of the LEAF (which is smaller than many other EVs). If I ever felt the need to drive across the Nullarbor Plain then I could rent a car to do that and the costs of such car rental would be small compared to the money I’m saving by driving an EV and also small when compared to the premium I would have to pay for an EV with a larger range.

Some of the recent articles I’ve seen about EVs have covered vehicles with a battery range over 700Km which is greater than the legal distance a commercial driver can drive without a break. I’ve also seen articles about plans to have a small petrol or Diesel motor in an EV to recharge the battery without directly driving the wheels. A 9KW Diesel motor could provide enough electricity on average to keep the charge maintained in a LEAF battery and according to the specs of Diesel generators would take about 55Kg of fuel to provide the charge a LEAF needs to drive 1000Km. The idea of a mostly electric hybrid car that can do 1000Km on one tank of fuel is interesting as a thought experiment but doesn’t seem to have much actual use. Apparently a Chinese company is planning to release a car that can do 1400Km on one tank of fuel using such technology which is impressive but not particularly useful.
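As a rough sanity check of that fuel figure (the consumption numbers are my assumptions, not from the article: roughly 17KWh of battery per 100Km for a LEAF and roughly 0.3Kg of Diesel per KWh of electricity from a small generator):

$ python3 -c 'kwh = 10 * 17; print(kwh, "KWh ->", round(kwh * 0.3), "Kg of Diesel")'
170 KWh -> 51 Kg of Diesel

That is in the same ballpark as the 55Kg quoted above.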

The next issue of unreasonable competition is in charge speed. Charging a car at 2KW from a regular power socket is a real limit to what you can do with a car. It’s a limit that hasn’t bothered me so far because the most driving I typically do in a week is less than one full charge, so at most I have to charge overnight twice in a week. But if I was going to drive to another city without hiring a car that has better range I’d need a fast charger. Most current models of the Nissan LEAF support charging speeds up to 50KW which means fully charging the battery in under an hour (or slightly over an hour for the long range version). If I was to drive from Melbourne to Canberra in my LEAF I’d have to charge twice which would be an annoyance at those speeds. There are a variety of EVs that can charge at 100KW and some as high as 350KW. 350KW is enough to fully charge the largest EV batteries in half an hour which seems to be as much as anyone would need. But there are apparently plans for 1MW car chargers which would theoretically be able to charge a Hummer (the EV with the largest battery) in 12 minutes. One obvious part of the solution to EV charging times is to not drive a Hummer! Another thing to note is that batteries can’t be charged at a high rate for all charge levels, this is why advertising for fast chargers makes claims like “80% charge in half an hour” which definitely doesn’t mean “100% charge in 37.5 minutes”!

There are significant engineering issues with high power applications. A 1MW cable is not just a bigger version of a regular power cable, there are additional safety issues, user training is required and cooling of the connector is probably required. That’s a lot to just get a better number in the table at the end of a review. There is research in progress on the Megawatt Charging System which is designed to charge heavy vehicles (presumably trucks and buses) at up to 3.75MW. Charging a truck at that rate is reasonable as the process of obtaining and maintaining a heavy vehicle license requires a significant amount of effort and some extra training in 3.75MW charging probably doesn’t make much difference.

A final issue with fast charging is the capacity of the grid. A few years ago I attended a lecture by an electrical engineer who works for the Victorian railway system which was very interesting. The Vic rail power setup involved about 100MW of grid connectivity with special contracts with the grid operators due to the fact that 1MW trains suddenly starting and stopping causes engineering problems that aren’t trivial to solve. They were also working on battery packs and super capacitors to deal with regenerative braking and to avoid brownouts in long sections of track. For a medium size petrol station 14 bays for fuelling cars is common. If 6 such petrol stations were replaced with fast charging stations that can charge cars at 1MW each that would draw the same power as the train network for the entire state! There is a need for significant engineering work to allow most cars to be electric no matter how it’s done, but we don’t need to make that worse just for benchmarks.

10 July, 2025 09:19AM by etbe

Tianon Gravi

Yubi Whati? (YubiKeys, ECDSA, and X.509)

Off-and-on over the last several weeks, I've been spending time trying to learn/understand YubiKeys better, especially from the perspective of ECDSA and signing.

I had a good mental model for how "slots" work (canonically referenced by their hexadecimal names such as 9C), but found that it had a gap related to "objects"; while closing that, I was annoyed that the main reference table for this gap lives primarily in either a PDF or inside several implementations, so I figured I should create the reference I want to see in the world, but that it would also be useful to write down some of my understanding for my own (and maybe others') future reference.

So, to that end, I'm going to start with a bit of background information, with the heavy caveat that this only applies to "PIV" ("FIPS 201") usage of YubiKeys, and that I only actually care about ECDSA, although I've been reassured that it's the same for at least RSA (anything outside this is firmly Here Be Not Tianon; "gl hf dd").

(Incidentally, learning all this helped me actually appreciate the simplicity of cloud-based KMS solutions, which was an unexpected side effect. 😬)

At a really high level, ECDSA is like many other (asymmetric) cryptographic solutions – you've got a public key and a private key, the private key can be used to "sign" data (tiny amounts of data, in fact, like P-256 can only reasonably sign 256 bits of data, which is where cryptographic hashes like SHA256 come in as secure analogues for larger data in small bit sizes), and the public key can then be used to verify that the data was indeed signed by the private key, and only someone with the private key could've done so. There's some complex math and RNGs involved, but none of that's actually relevant to this post, so find that information elsewhere. 🙈

Unfortunately, this is where things go off the rails: PIV is X.509 ("x509") heavy, and there's no X.509 in the naïve view of my use case.

In a YubiKey (or any other PIV-signing-supporting smart card? do they actually have competitors in this specific niche? 🤔), a given "slot" can hold one single private key. There are ~24 slots which can hold a private key and be used for signing, although "Slot 9c" is officially designated as the "Digital Signature" slot and is encouraged for signing purposes. 🌈

One of the biggest gotchas is that with pure-PIV (and older YubiKey firmware 🤬) the public key for a given slot is only available at the time the key is generated, and the whole point of the device in the first place is that the private key is never, ever available from it (all cryptographic operations happen inside the device), so if you don't save that public key when you first ask the device to generate a private key in a particular slot, the public key is lost forever (asterisk). 🙊

$ # generate a new ECDSA P-256 key in "slot 9c" ("Digital Signature")
$ # WARNING: THIS WILL GLEEFULLY WIPE SLOT 9C WITHOUT PROMPTING
$ yubico-piv-tool --slot 9c --algorithm ECCP256 --action generate
-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEtGoWRGyjjUlJFXpu8BL6Rnx8jjKR
5+Mzl2Vepgor+k7N9q7ppOtSMWefjFVR0SEPmXqXINNsCi6LpLtNEigIRg==
-----END PUBLIC KEY-----
Successfully generated a new private key.
$ # this is the only time/place we (officially) get this public key

With that background, now let's get to the second aspect of "slots" and how X.509 fits. For every aforementioned slot, there is a corresponding "object" (read: place to store arbitrary data) which is corresponding only by convention. For all these "key" slots the (again, by convention) corresponding "object" is explicitly supposed to be an X.509 certificate (see also the PDF reference linked above). 🙉

It turns out this is a useful and topical place to store that public key we need to keep handy! It's also an interesting place to shove additional details about what the key in a given slot is being used for, if that's your thing. Converting the raw public key into a (likely self-signed) X.509 certificate is an exercise for the reader, but if you want to follow the conventions, you need some way to convert a given "slot" to the corresponding "object", and that is the lookup table I wish existed in more forms. 🕳

So, without further ado, here is the anti-climax: 💫

Slot Object Description
0x9A 0x5FC105 X.509 Certificate for PIV Authentication
0x9E 0x5FC101 X.509 Certificate for Card Authentication
0x9C 0x5FC10A X.509 Certificate for Digital Signature
0x9D 0x5FC10B X.509 Certificate for Key Management
0x82 0x5FC10D Retired X.509 Certificate for Key Management 1
0x83 0x5FC10E Retired X.509 Certificate for Key Management 2
0x84 0x5FC10F Retired X.509 Certificate for Key Management 3
0x85 0x5FC110 Retired X.509 Certificate for Key Management 4
0x86 0x5FC111 Retired X.509 Certificate for Key Management 5
0x87 0x5FC112 Retired X.509 Certificate for Key Management 6
0x88 0x5FC113 Retired X.509 Certificate for Key Management 7
0x89 0x5FC114 Retired X.509 Certificate for Key Management 8
0x8A 0x5FC115 Retired X.509 Certificate for Key Management 9
0x8B 0x5FC116 Retired X.509 Certificate for Key Management 10
0x8C 0x5FC117 Retired X.509 Certificate for Key Management 11
0x8D 0x5FC118 Retired X.509 Certificate for Key Management 12
0x8E 0x5FC119 Retired X.509 Certificate for Key Management 13
0x8F 0x5FC11A Retired X.509 Certificate for Key Management 14
0x90 0x5FC11B Retired X.509 Certificate for Key Management 15
0x91 0x5FC11C Retired X.509 Certificate for Key Management 16
0x92 0x5FC11D Retired X.509 Certificate for Key Management 17
0x93 0x5FC11E Retired X.509 Certificate for Key Management 18
0x94 0x5FC11F Retired X.509 Certificate for Key Management 19
0x95 0x5FC120 Retired X.509 Certificate for Key Management 20

See also "piv-objects.json" for a machine-readable copy of this data. 👀🤖💻💾
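If a lookup table in code is handier than JSON, the table above can be rendered as a small Python dictionary (the retired slots map to consecutive object IDs, so they can be generated):

# Slot -> object ID, as per the table above.
SLOT_TO_OBJECT = {
    0x9A: 0x5FC105,  # X.509 Certificate for PIV Authentication
    0x9E: 0x5FC101,  # X.509 Certificate for Card Authentication
    0x9C: 0x5FC10A,  # X.509 Certificate for Digital Signature
    0x9D: 0x5FC10B,  # X.509 Certificate for Key Management
}
# Retired Key Management slots 0x82..0x95 map, in order, to 0x5FC10D..0x5FC120.
SLOT_TO_OBJECT.update({slot: 0x5FC10D + i for i, slot in enumerate(range(0x82, 0x96))})

assert SLOT_TO_OBJECT[0x82] == 0x5FC10D
assert SLOT_TO_OBJECT[0x95] == 0x5FC120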

(Major thanks to paultag and jon gzip johnson for helping me learn and generally putting up with me, but especially dealing with my live-stream-of-thoughts while I stumble through the dark. 💖)

10 July, 2025 07:00AM by Tianon Gravi (admwiggin@gmail.com)

July 08, 2025

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Superimposed codes, take two

After my last post on superimposed codes, I discovered that OEIS already had a sequence for it (I had just missed it due to a slightly different convention), namely A286874 (and its sister sequence A303977, which lists the number of distinct maximal solutions). However, very few terms of this sequence were known; in particular, it was known that a(12) >= 20 (easily proved by simply demonstrating a set of twenty 12-bit numbers with the desired property), but it wasn't known if the value could be higher (i.e., whether there existed a 12-bit set with 21 elements or more). The SAT solver wasn't really working well for this anymore, so I thought: can I just bruteforce it? I.e., can I enumerate all 12-bit 20-element sets and then see if any of them have room for a 21st element?

Now, obviously you cannot run a completely dumb bruteforce. The raw state space is 12*20 = 240 bits, and going through 2^240 different options is far out of reach. But it's a good place to start, and then we can start employing tricks from there. (I'm sure there are fancier ways somehow, but this one was what I chose. I'm no genius with mathematics, but I can write code.)

So I started with a 20-level deep for loop, with each element counting from 0 to 4095 (inclusive). Now, there are some speedups that are obvious; for instance, once you have two elements, you can check that neither is a subset of the other (which is, except in some edge cases with small sets that we don't need to worry about here, a looser condition than what we're trying to test for), and then skip the remaining 18 levels. Similarly, once we have the first three elements, we can start testing whether one is a subset of the OR of the two others, and abort similarly.
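Spelled out in code, the property being enforced (and the cheap pairwise early-out) might look roughly like this minimal Python sketch, with ints standing in for bit vectors; this is just the test itself, not the search:

from itertools import combinations

def is_subset(a, b):
    # True if every bit set in a is also set in b.
    return a & ~b == 0

def valid_superimposed(elements):
    # No element may be covered by the OR of any two other elements.
    for i, x in enumerate(elements):
        others = elements[:i] + elements[i+1:]
        for a, b in combinations(others, 2):
            if is_subset(x, a | b):
                return False
    return True

# The pairwise "neither is a subset of the other" test is a cheap early-out:
# if a is a subset of b, then a is a subset of (b | c) for any third element c.
assert valid_superimposed([0b0011, 0b0101, 0b1001])
assert not valid_superimposed([0b0011, 0b0101, 0b0110])  # 0011 is covered by 0101 | 0110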

Furthermore, we can start considering symmetries. We only care about solutions that are qualitatively distinct, in that the ordering of the elements doesn't matter and the ordering of the bits also doesn't matter. So we can simply consider only sequences where the elements are in (strictly increasing) order, which is extremely simple, very cheap, and nets us a speedup of 20! ~= 2.4 * 10^18. We have to be a bit careful, though, because this symmetry can conflict with other symmetries that we'd like to use for speedup. For instance, it would be nice to impose the condition that the elements must be in order of increasing population count (number of set bits), but if we do this at the same time as the “strictly increasing” condition, we'll start missing valid solutions. (I did use a very weak variant of it, though; no element can have smaller popcount than the first one. Otherwise, you could just swap those two elements and shuffle columns around, and it wouldn't be in conflict.)

However, there is more that we can do which isn't in conflict. In particular, let's consider (writing only 5-bit elements for brevity) that we are considering candidates for the first element:

00011
00101
00110
10010

These are all, obviously, the same (except that the latter ones will be more restrictive); we could just shuffle bits around and get the same thing. So we impose a new symmetry: Whenever we introduce new bits (bit positions that were previously always unset), they need to start from the right. So now this start of a sequence is valid:

00011
00101

but this is not:

00011
01001

The reason is, again, that we could get the first sequence from the second by swapping the second and third bits (counting from the left). This is cheap and easy to test for, and is not in conflict with our “increasing” criterion as long as we make this specific choice.
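One possible way to express that test, as a sketch (this assumes the rule means that any columns a new element uses for the first time must be the lowest-numbered unused columns, which is how the examples read):

def new_bits_start_from_the_right(prev_elements, candidate, nbits=12):
    # Check that any bit columns used for the first time by `candidate`
    # are the rightmost columns not used by any previous element.
    used = 0
    for e in prev_elements:
        used |= e
    new = candidate & ~used            # columns candidate introduces
    if new == 0:
        return True
    unused = ((1 << nbits) - 1) & ~used
    highest_new = new.bit_length() - 1
    # every unused column at or below the highest new column must be new
    mask_up_to_highest = (1 << (highest_new + 1)) - 1
    return (unused & mask_up_to_highest) == new

# The examples above (5-bit values):
assert new_bits_start_from_the_right([0b00011], 0b00101, nbits=5)      # fills from the right
assert not new_bits_start_from_the_right([0b00011], 0b01001, nbits=5)  # skips column 2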

But we can extend this even further. Look at these two alternatives:

00111
01001

and

00111
01010

They are also obviously equivalent as prefixes (just swap the fourth and fifth bits), so we don't want to keep both. We make a restriction very similar to the one before; if all previous bits in a position are the same, then we need to fill bits from the right. (If they're not, then we cannot impose a restriction.) This is also fairly easy to do with some bit fiddling, although my implementation only considers consecutive bits. (It's not in conflict with the strictly-increasing criterion, again because it only makes values lower, not higher. It is, in a sense, a non-decreasing criterion on the columns.)

And finally, consider these two sequences (with some other elements in-between):

00111
01001
.....
10011

and

00111
01011
.....
10001

They are also equivalent; if you exchange the first and second bit columns and then swap the order of the two affected elements, you end up with the same thing. So this brings us to the last symmetry: If you introduce a new bit (or more generally N new bits), then you are not allowed to later introduce a value that is the same bit shifted further to the left with the other bits being lower. So the second sequence would be outlawed.

Now, how do we do all of these tests efficiently? (In particular, the last symmetry, while it helped a lot in reducing the number of duplicate solutions, wasn't a speed win at first.) My first choice was to just generate code that did all the tests, and did them as fast as possible. This was actually quite efficient, although it took GCC several minutes to compile (and Clang even more, although the resulting code wasn't much faster). Amusingly, this code ended up with an IPC above 6 on my Zen 3 (5950X); no need for hyperthreading here! I don't think I've ever seen real-life code this taxing on the execution units, even though this code is naturally extremely branch-heavy. Modern CPUs are amazing beasts.

It's a bit wasteful that we have 64-bit ALUs (and 256-bit SIMD ALUs) and use them to do AND/OR on 12 bits at a time. So I tried various tricks with packing the values to do more tests at a time, but unfortunately, it only led to slowdowns. So eventually, I settled on a very different solution: Bitsets. At any given time, we have a 4096-bit set of valid future values for the inner for loops. Whenever we decide on a value, we look up in a set of pregenerated tables and just AND them into our set. For instance, if we just picked the value 3 (00011), we look up in the “3” table and it will instantly tell us that values like 7 (00111), 11 (01011), and many others are going to be invalid for all inner iterations and we can just avoid considering them altogether. (Iterating over only the set bits in a bitset is pretty fast in general, using only standard tricks.) This saves us from testing any further value against these illegals, so it's super-fast. The resulting tables are large (~4 GB), since we need to look up pairs of values in them, so this essentially transforms our high-ALU problem into a memory-bound problem, but it's still easily worth it (I think it gave a speedup of something like 80x). The actual ANDing is easily done with AVX2, 256 bits at a time.
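As a rough illustration of the structure (not the speed), here is a Python sketch with big ints standing in for the 4096-bit sets; pair_mask is computed on demand here as a stand-in for the pregenerated ~4 GB of pair tables, and there is no AVX2:

NBITS = 12
UNIVERSE = 1 << NBITS           # 4096 candidate values
FULL = (1 << UNIVERSE) - 1      # bitset with every candidate still allowed

def covered(x, a, b):
    # True if x is a subset of a | b (the situation we must rule out).
    return x & ~(a | b) == 0

def pair_mask(a, b):
    # Bitset of every c that survives the triple test against the pair (a, b).
    # The real code precomputes all of these up front.
    mask = 0
    for c in range(UNIVERSE):
        if not (covered(c, a, b) or covered(a, c, b) or covered(b, a, c)):
            mask |= 1 << c
    return mask

def extend(chosen, valid):
    # Yield (new_chosen, new_valid) for each still-allowed next element,
    # keeping elements strictly increasing as per the ordering symmetry.
    start = chosen[-1] + 1 if chosen else 0
    for c in range(start, UNIVERSE):
        if not (valid >> c) & 1:
            continue
        new_valid = valid
        for a in chosen:
            new_valid &= pair_mask(a, c)
        yield chosen + [c], new_valid

# The search starts from extend([], FULL) and recurses on each yielded state;
# once an element is in `valid`, all triples with previously chosen pairs are OK.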

This optimization not only made the last symmetry-breaking feasible, but also sped up the entire process enough (you essentially get O(n) bitset intersections instead of O(n²) new tests per level) that it went from a “multiple machines, multiple months” project to running comfortably within a day on my 5950X (~6 core-days). I guess maybe a bit anticlimactic; I had to move the database I used for work distribution locally to the machine or else the latency would be killing me. It found the five different solutions very quickly and then a couple of thousand duplicates of them (filtering those out efficiently is a kind of tricky problem in itself!), and then confirmed there were no others. I submitted it to OEIS, and it should hopefully go through the editing process fairly fast.

The obvious next question is: Can we calculate a(13) in the same way? Unfortunately, it seems the answer is no. Recompiling the same code with 13-bit parameters (taking the LUTs up to ~31 GB, still within the amount of RAM I've got) and making a 25-deep instead of 20-level deep for loop, and then running for a while, it seems that we're looking at roughly 4–5000 core years. Which is infeasible unless you've got a lot of money to burn (with spot VMs on GCE, you're talking about roughly half a million dollars, give or take) on something that isn't a very important problem in computer science.

In theory, there's still hope, though: The fact that we're still finding the same solution ~1000x (down from ~100000x before the last symmetries were added!) indicates that there's some more symmetry that we could in theory exploit and break (and that factor 1000 is likely to be much larger for 25 elements than for 20). So if someone more creative than me could invent code for identifying them—or some other way of rejecting elements early—we could perhaps identify a(13). But I don't think that's happening anytime soon. Brute force found its sweet spot and I'm happy about that, but it doesn't scale forever. :-)

08 July, 2025 07:34PM