Debian is a trademark of Software in the Public Interest, Inc. This site is operated independently in the spirit of point three of the Debian Social Contract, which tells us: "We will not hide problems."

Feeds

January 09, 2026

Simon Josefsson

Debian Taco – Towards a GitSecDevOps Debian

One of my holiday projects was to understand and gain more trust in how Debian binaries are built, and as the holidays are coming to an end, I’d like to introduce a new research project called Debian Taco. I apparently need more holidays, because there is still more work to be done here, so at the end I’ll summarize some pending work.

Debian Taco, or TacOS, is a GitSecDevOps rebuild of Debian GNU/Linux.

The Debian Taco project publishes rebuilt binary packages, package repository metadata (InRelease, Packages, etc.), container images, cloud images and live images.

All packages are built from pristine source packages in the Debian archive. Debian Taco does not modify any Debian source code nor add or remove any packages found in Debian.

No servers are involved! Everything is built in GitLab pipelines and the results are published through modern GitDevOps mechanisms like GitLab Pages and S3 object storage. You can fork the individual projects below on GitLab.com and you will have your own Debian-derived OS available for tweaking. (Of course, at some level, servers are always involved, so this claim is a bit of hyperbole.)

Goals

The goal of TacOS is to be bit-by-bit identical to official Debian GNU/Linux, and until that has been achieved, to publish diffoscope output with the differences.
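
For example, a rebuilt package could be compared with the official one like this (a minimal sketch; the package name and version are illustrative):

$ apt-get download hello
$ diffoscope --html hello.diff.html hello_2.10-3_amd64.deb rebuilt/hello_2.10-3_amd64.deb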

The idea is to further categorize all artifact differences into one of the following categories:

1) An obvious bug in Debian. For example, if a package does not build reproducibly.

2) An obvious bug in TacOS. For example, if our build environment does not manage to build a package.

3) Something else. This would be input for further research and consideration. This category also includes things where it isn’t obvious whether it is a bug in Debian or in TacOS. Known examples:

3A) Packages in TacOS are rebuilt from the latest available source code, not the (potentially) older packages that were used to build the Debian packages. This could lead to differences in the packages. These differences may be useful to analyze to identify supply-chain attacks. See some discussion about idempotent rebuilds.

Our packages are all built from source code, unless we have not yet managed to build something. In the latter situation, Debian Taco falls back to using the official Debian artifact. This allows an incremental publication of Debian Taco that is still 100% complete without requiring that everything be rebuilt instantly. The goal is that everything should be rebuilt, and until that has been achieved, to publish a list of artifacts that we use verbatim from Debian.
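
A minimal POSIX shell sketch of that fallback logic (paths and file names are illustrative, not the actual Debian Taco code):

# Prefer our rebuilt artifact; otherwise fall back to the official Debian one.
pkg=pool/main/h/hello/hello_2.10-3_amd64.deb
mkdir -p "$(dirname "archive/$pkg")"
if [ -f "rebuilt/$pkg" ]; then
    cp "rebuilt/$pkg" "archive/$pkg"
else
    curl -fsSL "https://deb.debian.org/debian/$pkg" -o "archive/$pkg"
    echo "$pkg" >> verbatim-from-debian.list
fi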

Debian Taco Archive

The Debian Taco Archive project generates and publishes the package archive (dists/tacos-trixie/InRelease, dists/tacos-trixie/main/binary-amd64/Packages.gz, pool/*, etc.), similar to what is published at https://deb.debian.org/debian/.

The output of the Debian Taco Archive is available from https://debdistutils.gitlab.io/tacos/archive/.
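
If the archive follows the standard Debian layout, pointing apt at it might look something like this (a sketch: the suite name comes from above, but key verification is not shown and the archive’s signing key would need to be fetched and checked first):

$ echo 'deb https://debdistutils.gitlab.io/tacos/archive tacos-trixie main' \
>     | sudo tee /etc/apt/sources.list.d/tacos.list
$ sudo apt update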

Debian Taco Container Images

The Debian Taco Container Images project provides container images of Debian Taco for trixie, forky and sid on the amd64, arm64, ppc64el and riscv64 architectures.

These images allow quick and simple interactive use of Debian Taco, but also make it easy to deploy with container orchestration frameworks.
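
Assuming the images are published in the project’s GitLab container registry (the path below is hypothetical), trying one out could be as simple as:

$ podman run --rm -it registry.gitlab.com/debdistutils/tacos/container-images/trixie:latest /bin/bash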

Debian Taco Cloud Images

The Debian Taco Cloud Images project provides cloud images of Debian Taco for trixie, forky and sid on the amd64, arm64, ppc64el and riscv64 architectures.

Launch and install Debian Taco for your cloud environment!

Debian Taco Live Images

The Debian Taco Live Images project provides live images of Debian Taco for trixie, forky and sid on the amd64 and arm64 architectures.

These images allow running Debian Taco on physical hardware (or virtual machines), and even installing it for permanent use.

Debian Taco Build Images and Packages

Packages are built using debdistbuild, which was introduced in a blog post, Build Debian in a GitLab Pipeline.

The first step is to prepare build images, which is done by the Debian Taco Build Images project. They are similar to the Debian Taco containers but have build-essential and debdistbuild installed on them.

Debdistbuild is launched in a per-architecture, per-suite CI/CD project. Currently only trixie-amd64 is available. That project has built some essential early packages like base-files, debian-archive-keyring and hostname. They are stored in Git LFS backed by an S3 object storage. These packages were all built reproducibly, so Debian Taco is still 100% bit-by-bit identical to Debian, except for the renaming.

I have yet to launch a massive wide-scale package rebuild; some outstanding issues need to be resolved first. I earlier rebuilt around 7000 packages from Trixie on amd64, so I know that the method scales easily.

Remaining work

Where are the diffoscope outputs and the list of package differences? That’s for another holiday! Clearly this is an important remaining work item.

Another important outstanding issue is how to orchestrate launching the build of all packages. Clearly a list of packages is needed, and some trigger mechanism to understand when new packages are added to Debian.

One goal was to build packages from the tag2upload browse.dgit.debian.org archive, before checking the Debian Archive. This ought to be really simple to implement, but other matters came first.

GitLab or Codeberg?

Everything is written using basic POSIX /bin/sh shell scripts. Debian Taco uses the GitLab CI/CD pipeline mechanism together with Hetzner S3 object storage to serve packages. The scripts rely only weakly on GitLab-specific principles, and were designed with the intention of supporting other platforms. I believe reliance on a particular CI/CD platform is a limitation, so I’d like to explore shipping Debian Taco through a Forgejo-based architecture, possibly via Codeberg, as soon as I manage to deploy reliable Forgejo runners.

The important aspects that are required are:

1) Pipelines that can build and publish web sites, similar to GitLab Pages. Codeberg has a pipeline mechanism. I’ve successfully used Codeberg Pages to publish the OATH Toolkit homepage. Gluing this together seems feasible.

2) Container Registry. It seems Forgejo supports a Container Registry but I’ve not worked with it at Codeberg to understand if there are any limitations.

3) Package Registry. The Debian Taco live images are uploaded into a package registry, because they are too big to be served through GitLab Pages. This may be converted to using a Pages mechanism, or possibly Release Artifacts, if multi-GB artifacts are supported on other platforms.

I hope to continue this work and explain more details in a series of posts. Stay tuned!

09 January, 2026 04:33PM by simon

Russell Coker

LEAF ZE1 After 6 Months

About 6 months ago I got a Nissan LEAF ZE1 (2019 model) [1]. Generally it’s going well and I’m happy with most things about it.

One issue is that, as there isn’t a lot of weight in the front (the batteries are in the centre of the car), the front wheels slip easily when accelerating. It’s a minor thing but a good reason for wanting AWD in an electric car.

When I got the car I got two charging devices: one to charge from a regular 240V 10A power point (often referred to as a “granny charger”) and a cable with a special EV charging connector on each end. The cable with an EV connector on each end is designed for charging that’s faster than the “granny charger” but not as fast as the rapid chargers, which have the cable connected to the supply so the cable temperature can be monitored and/or controlled. That cable can be used if you get a fast charger set up at your home (which I never plan to do) and apparently at some small hotels and other places with home-style EV charging. I’m considering just selling that cable on eBay, as I don’t think I have any need to personally own a cable other than the “granny charger”.

The key fob for the LEAF has a battery installed; it’s either CR2032 or CR2025 – mine has a CR2025. Some reports on the Internet suggest that you can stuff a CR2032 battery in anyway, but that didn’t work for me as the thickness of the battery stopped some of the contacts from making a good connection. I think I could have got it going by putting some metal in between, but the batteries aren’t expensive enough to make it worth the effort and risk. It would be nice if I could use batteries from my stockpile of CR2032 batteries that came from old PCs, but I can afford to spend a few dollars on it.

My driveway is short and if I left the charger out it would be visible from the street and at risk of being stolen. I’m thinking of chaining the charger to a tree and having some sort of waterproof enclosure for it so I don’t have to go to the effort of taking it out of the boot every time I use it. Then I could also configure the car to only charge during the peak sunlight hours when the solar power my home feeds into the grid has a negative price (we have so much solar power that it’s causing grid problems).

The cruise control is a pain to use, so much so that I have never yet got it to work usefully. The features look good in the documentation, but in practice it’s not as good as the Kia system I’ve used previously, where I could just press one button to turn it on, another button to set the current speed as the cruise control speed, and then have it just work.

The electronic compass built into the dash turned out to be surprisingly useful. I regret not gluing a compass to the dash of previous cars. One example is when I start Google navigation for a journey and it says “go South on street X” and I need to know which direction is South so I don’t start in the wrong direction. Another example is when I know that I’m North of a major road that I need to take to get to my destination, so I just need to go roughly South and that is enough to get me to a road I recognise.

In the past, when there was a bird in the way I didn’t do anything different: I kept driving at the same speed and relied on the bird to see me and move out of the way. Birds have faster reactions than humans and have evolved to cope with the speeds cars travel at on all roads other than freeways; birds that are on roads usually have an eye on each side of their head, so they cannot fail to see my car approaching. For decades this worked, but recently a bird just stood on the road and got squashed. So I guess that I should honk when there are birds on the road.

Generally everything about the car is fine and I’m happy to keep driving it.

09 January, 2026 03:32AM by etbe

January 08, 2026

Dima Kogan

Meshroom packaged for Debian

Like the title says, I just packaged Meshroom (and all the adjacent dependencies) for Debian! This is a fancy photogrammetry toolkit that uses modern software development methods. "Modern" meaning that it has a multitude of dependencies that come from lots of disparate places, which makes it impossible for a mere mortal to build the thing. The Linux "installer" is 13GB and is probably some sort of container, or something.

But now, if you have a Debian/sid box with the non-free repos enabled, you can

sudo apt install meshroom

And then you can generate and 3D-print a life-size, geometrically-accurate statue of your cat. The colmap package does a similar thing, and has been in Debian for a while. I think it can't do as many things, but it's good to have both tools easily available.
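
Upstream also ships a headless meshroom_batch pipeline alongside the GUI; assuming the Debian package installs it, a minimal run over a directory of photos might look like this (a sketch, paths illustrative):

$ meshroom_batch --input ~/photos/cat --output ~/cat-model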

These packages are all in contrib, because they depend on a number of non-free things, most notably CUDA.

This is currently in Debian/sid, but should be picked up by the downstream distros as they're released. The next noteworthy one is Ubuntu 26.04. Testing and feedback welcome.

08 January, 2026 11:34PM by Dima Kogan

Reproducible Builds

Reproducible Builds in December 2025

Welcome to the December 2025 report from the Reproducible Builds project!

Our monthly reports outline what we’ve been up to over the past month, highlighting items of news from elsewhere in the increasingly-important area of software supply-chain security. As ever, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.

  1. New orig-check service to validate Debian upstream tarballs
  2. Distribution work
  3. disorderfs updated to FUSE 3
  4. Mailing list updates
  5. Three new academic papers published
  6. Website updates
  7. Upstream patches

New orig-check service to validate Debian upstream tarballs

This month, Debian Developer Lucas Nussbaum announced the orig-check service, which attempts to automatically reproduce the generation of upstream tarballs (i.e. the “original source” component of a Debian source package), comparing the result to the upstream tarball actually shipped with Debian.

As of the time of writing, it is possible for a Debian developer to upload a source archive that does not actually correspond to upstream’s version. Whilst this is not inherently malicious (it typically indicates some tooling/process issue), the very possibility that a maintainer’s version may differ potentially permits a maintainer to make (malicious) changes that would be misattributed to upstream.

This service therefore nicely complements the whatsrc.org service, which was covered in our reports for both April and August. The orig-check service is dedicated to Lunar, who sadly passed away a year ago.


Distribution work

In Arch Linux this month, Robin Candau and Mark Hegreberg worked on making the Arch Linux WSL image bit-for-bit reproducible. Robin also shared some implementation details and future related work on our mailing list.

Continuing a series reported in these reports for March, April and July 2025 (etc.), Simon Josefsson has published another interesting article this month, itself a followup to a post Simon published in December 2024 regarding GNU Guix Container Images that are hosted on GitLab.

In Debian this month, Micha Lenk posted to the debian-backports-announce mailing list with the news that the Backports archive will now discard binaries generated and uploaded by maintainers: “The benefit is that all binary packages [will] get built by the Debian buildds before we distribute them within the archive.”

Felix Moessbauer of Siemens then filed a bug in the Debian bug tracker to signal their intention to package debsbom, a software bill of materials (SBOM) generator for distributions based on Debian. This generated a discussion on the bug inquiring about the output format as well as a question about how these SBOMs might be distributed.

Holger Levsen merged a number of significant changes written by Alper Nebi Yasak to the Debian Installer in order to improve its reproducibility. As noted in Alper’s merge request, “These are the reproducibility fixes I looked into before bookworm release, but was a bit afraid to send as it’s just before the release, because the things like the xorriso conversion changes the content of the files to try to make them reproducible.”

In addition, 76 reviews of Debian packages were added, 8 were updated and 27 were removed this month, adding to our knowledge about identified issues. A new different_package_content_when_built_with_nocheck issue type was added by Holger Levsen.

Arnout Engelen posted to our mailing list reporting that they successfully reproduced the NixOS minimal installation ISO for the 25.11 release without relying on a pre-compiled package archive, with more details on their blog.

Lastly, Bernhard M. Wiedemann posted another openSUSE monthly update for his work there.


disorderfs updated to FUSE 3

disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into system calls to reliably flush out reproducibility issues.

This month, however, Roland Clobus upgraded disorderfs from FUSE 2 to FUSE 3 after its package was automatically removed from Debian testing. Some tests in Debian currently require disorderfs to make the Debian live images reproducible, although disorderfs is not a Debian-specific tool.


Mailing list updates

On our mailing list this month:

  • Luca Di Maio announced stampdalf, a “filesystem timestamp preservation” tool that wraps “arbitrary commands and ensures filesystem timestamp reproducibility”:

    stampdalf allows you to run any command that modifies files in a directory tree, then automatically resets all timestamps back to their original values. Any new files created during command execution are set to [the UNIX epoch] or a custom timestamp via SOURCE_DATE_EPOCH.

    The project’s GitHub page helpfully reveals that the project is “pronounced: stamp-dalf (stamp like time-stamp, dalf like Gandalf the wizard)” as “it’s a wizard of time and stamps”.

  • Lastly, Reproducible Builds developer cen1 posted to our list announcing that “early/experimental/alpha” support for FreeBSD was added to rebuilderd. In their post, cen1 reports that the “initial builds are in progress and look quite decent”. cen1 also interestingly notes that “since the upstream is currently not technically reproducible I had to relax the bit-for-bit identical requirement of rebuilderd [—] I consider the pkg to be reproducible if the tar is content-identical (via diffoscope), ignoring timestamps and some of the manifest files.”.


Three new academic papers published

Yogya Gamage and Benoit Baudry of Université de Montréal, Canada together with Deepika Tiwari and Martin Monperrus of KTH Royal Institute of Technology, Sweden published a paper on The Design Space of Lockfiles Across Package Managers:

Most package managers also generate a lockfile, which records the exact set of resolved dependency versions. Lockfiles are used to reduce build times; to verify the integrity of resolved packages; and to support build reproducibility across environments and time. Despite these beneficial features, developers often struggle with their maintenance, usage, and interpretation. In this study, we unveil the major challenges related to lockfiles, such that future researchers and engineers can address them. […]

A PDF of their paper is available online.

Benoit Baudry also posted an announcement to our mailing list, which generated a number of replies.


Betul Gokkaya, Leonardo Aniello and Basel Halak of the University of Southampton then published a paper, A taxonomy of attacks, mitigations and risk assessment strategies within the software supply chain:

While existing studies primarily focus on software supply chain attacks’ prevention and detection methods, there is a need for a broad overview of attacks and comprehensive risk assessment for software supply chain security. This study conducts a systematic literature review to fill this gap. By analyzing 96 papers published between 2015-2023, we identified 19 distinct SSC attacks, including 6 novel attacks highlighted in recent studies. Additionally, we developed 25 specific security controls and established a precisely mapped taxonomy that transparently links each control to one or more specific attacks. […]

A PDF of the paper is available online via the article’s canonical page.


Aman Sharma and Martin Monperrus of the KTH Royal Institute of Technology, Sweden along with Benoit Baudry of Université de Montréal, Canada published a paper this month on Causes and Canonicalization of Unreproducible Builds in Java. The abstract of the paper is as follows:

[Achieving] reproducibility at scale remains difficult, especially in Java, due to a range of non-deterministic factors and caveats in the build process. In this work, we focus on reproducibility in Java-based software, archetypal of enterprise applications. We introduce a conceptual framework for reproducible builds, we analyze a large dataset from Reproducible Central, and we develop a novel taxonomy of six root causes of unreproducibility. […]

A PDF of the paper is available online.


Website updates

Once again, there were a number of improvements made to our website this month including:


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:



Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

08 January, 2026 10:51PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppCCTZ 0.2.14 on CRAN: New Upstream, Small Edits

A new release 0.2.14 of RcppCCTZ is now on CRAN, in Debian and built for r2u.

RcppCCTZ uses Rcpp to bring CCTZ to R. CCTZ is a C++ library for translating between absolute and civil times using the rules of a time zone. In fact, it is two libraries: one for dealing with civil time (human-readable dates and times), and one for converting between absolute and civil times via time zones. And while CCTZ is made by Google(rs), it is not an official Google product. The RcppCCTZ page has a few usage examples and details. This package was the first CRAN package to use CCTZ; by now several other packages (four the last time we counted) include its sources too. Not ideal, but beyond our control.
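
To give a flavour of what the underlying library does, here is a minimal sketch against the upstream CCTZ C++ API (independent of the R bindings):

#include <cctz/civil_time.h>
#include <cctz/time_zone.h>
#include <iostream>

int main() {
    // Load a time zone and convert a civil time to an absolute time point.
    cctz::time_zone nyc;
    if (!cctz::load_time_zone("America/New_York", &nyc)) return 1;
    const auto tp = cctz::convert(cctz::civil_second(2026, 1, 8, 9, 0, 0), nyc);

    // Format the same instant as a civil time in another zone.
    cctz::time_zone tokyo;
    if (!cctz::load_time_zone("Asia/Tokyo", &tokyo)) return 1;
    std::cout << cctz::format("%Y-%m-%d %H:%M:%S %z", tp, tokyo) << "\n";
    return 0;
}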

This version updates to a new upstream release, and brings some small local edits. CRAN and R-devel stumbled over us still mentioning C++11 in SystemRequirements (yes, this package is old enough for that to have mattered once). As that is a false positive (the package compiles well under any recent standard), we removed the mention. The key changes since the last CRAN release are summarised below.

Changes in version 0.2.14 (2026-01-08)

  • Synchronized with upstream CCTZ (Dirk in #46).

  • Explicitly enumerate files to be compiled in src/Makevars* (Dirk in #47)

Courtesy of my CRANberries, there is a diffstat report relative to the previous version. More details are at the RcppCCTZ page; code, issue tickets etc. at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

08 January, 2026 03:42PM

RcppSpdlog 0.0.24 on CRAN: New Upstream

Version 0.0.24 of RcppSpdlog arrived on CRAN today, has been uploaded to Debian, and has also been built for r2u. This follows an upstream release on Sunday which we incorporated immediately, but CRAN was still closed for the winter break until yesterday. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want, written by Gabi Melman; it also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site.

This release updates the code to version 1.17.0 of spdlog, which was released yesterday morning, and includes version 12.1.0 of fmt. No other changes besides tweaks to the documentation site (which was updated to use altdoc last release) have been made.

The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.24 (2026-01-07)

  • Upgraded to upstream release spdlog 1.17.0 (including fmt 12.1.0)

Courtesy of my CRANberries, there is also a diffstat report detailing changes. More detailed information is on the RcppSpdlog page, or the package documentation site.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

08 January, 2026 01:59PM

Sven Hoexter

Moving from hexchat to Halloy

I'm not hanging around on IRC a lot these days, but when I do, I use hexchat (and used xchat before). Probably a bad habit of clinging to what I got used to over the past 25 years. But in light of the planned removal of GTK2, it felt like it was time to look for an alternative.

Halloy looked interesting, albeit not packaged for Debian. But upstream references a flatpak (another party I have not joined so far), which was good enough to give it a try.

$ sudo apt install flatpak
$ flatpak remote-add --if-not-exists flathub https://dl.flathub.org/repo/flathub.flatpakrepo
$ flatpak install org.squidowl.halloy
$ flatpak run org.squidowl.halloy

Configuration ends up at ~/.var/app/org.squidowl.halloy/config/halloy/config.toml, which I linked for convenience to ~/.halloy.toml.

Since I connect via ZNC in an odd old setup without those virtual networks, but with several accounts, and of course never bothered to replace the self-signed certificate, some additional configuration is required to be able to connect. Each account gets its own servers.<foo> block like this:

[servers.bnc-oftc]
nickname = "my-znc-user-for-this-network"
server = "sven.stormbind.net"
dangerously_accept_invalid_certs = true
password = "mypassword"
port = 4711
use_tls = true

Halloy has also a small ZNC guide.

I'm growing old, so a bigger font size is useful. Be aware that font changes require an application restart to take effect.

[font]
size = 16
family = "Noto Mono"

I also prefer the single-pane mode, whose configuration can be copied & pasted as documented.

Works well enough for now. hexchat was also the last non-Wayland application I was still using (the xlsclients output is finally empty).

08 January, 2026 10:35AM

Swiss JuristGate

Burn victims treated worse than suspects: Crans-Montana double-whammy, years of arguments over medical bills and compensation

The Nazis claimed they were humane killers. Victims of Zyklon B in the gas chambers were supposed to die within two minutes.

Those who survived the fire at Le Constellation, Crans-Montana, Canton Valais are set to face years of arguments about their medical bills and compensation.

Authorities have revealed that the vast majority of victims inhaled hot, poisonous fumes, burning their airways and lungs and requiring them to be placed in an induced coma and intubated.

We don't know if they are having dreams or nightmares in the coma. What we do know is that they will wake up to the nightmare of Swiss medical bills and insurance companies.

Tages Anzeiger has discussed the scars of medical debt with Swiss jurist Rolf Steinegger. Many other media outlets repeated his comments (also in Le Temps):

For victims, the process can be very grueling.

The victims of the fire in Crans-Montana could face a legal battle lasting years. A lawyer is therefore appealing to the federal government – ​​also to protect Switzerland's reputation.

...

Clarifying which party (managers, building inspectors, architect, builder) is really guilty after the club fire that killed 40 people could take years.

...

“It can often take years before they receive compensation, and even then, it can be very unsatisfactory.”

...

... prevent a repeat of the Kaprun fire disaster in November 2000. The tunnel fire, which claimed 155 lives, was followed by years of legal proceedings. "In the end, the victims were frustrated because they felt the process was unfair," Steinegger explains.

...

People who are disabled after the disaster and, for example, can no longer work, face enormous follow-up costs. "These costs are simply incalculable," says Steinegger.

The inspectors, architect and suppliers of insulation will all try to cover themselves and point blame at each other. This will frustrate victims.

This web site was originally founded when the insurance regulator itself failed. Look at the example of the email from Joao Rodrigues Boleiro. In the email, written in French, Rodrigues Boleiro is telling me that he is not responsible and it is all the fault of another partner in the firm: Mathieu Parreaux, the founder. In the last post that Parreaux himself published on LinkedIn, Parreaux told us that FINMA is responsible.

Look at Grenfell Tower and the insulation company. Internal documents show that the manufacturer, Kingspan, knew about fire risk and promoted the product in places where it shouldn't be used.

Yet in Canton Valais, they weren't even sure how many victims there were. It took a week for all the victims to be counted correctly before anyone could even begin thinking about legal action. First they told us there were 113 victims, then 119 victims, and now only 116 victims (also in the BBC report).

In the German language, we use nearly the same word for both: Schuldige for somebody who is guilty of a crime and Schuldner for somebody who has a big debt for medical expenses.

The owners of the bar, who are suspects, have been given unconditional bail and are free to come and go as they please while the investigation is underway, even while some of the victims may be involuntarily confined to hospital for months.

When somebody fell on Carla in Zurich, the Swiss authorities spent two years protecting the yoga studio and blaming Carla and me.

I hope the victims of the fire will not spend the next two years in arguments about insurance. The jurist Rolf Steinegger suggests it may be much longer than that.

While victims wait for the money questions to be resolved, they may have a black mark on their credit record. This is called a Betreibung in German or a poursuite in French. The same word is also used to describe stalking. Think of these black marks on the credit record as analogous to the life-long scars on the victims' faces.

The Holocaust kicked off with the Kristallnacht, or night of broken glass, on 9 November 1938. Incidentally, one of the most violent attacks against a young woman in Australia took place on 9 November 2005. Lauren Huxley was beaten with a wrench and then set on fire.

At the end of his sentence, the man responsible was released from prison in 2025. The media were keen to interview Lauren and publish before-and-after photos.

Lauren Huxley, 2005, 2025

 

Technically, on paper, Jacques Moretti and Jessica Maric may be responsible for the tragedy because they own the bar. In practice, most bar owners are not experts on the use of fire-resistant construction practices and they depend on the suppliers of construction materials, the local municipality and the building inspectors to provide specialist insights into fire-resistant design.

Please see the rest of the JuristGate reports.

08 January, 2026 07:30AM

January 07, 2026

hackergotchi for Gunnar Wolf

Gunnar Wolf

Artificial Intelligence • Play or break the deck

This post is an unpublished review for Artificial Intelligence • Play or break the deck

As a little disclaimer, I usually review books or articles written in English, and although I will offer this review to Computing Reviews as usual, it is likely it will not be published. The title of this book in Spanish is Inteligencia artificial: jugar o romper la baraja.

I was pointed at this book, published last October by Margarita Padilla García, a well-known Free Software activist from Spain who has long worked on analyzing (and shaping) aspects of socio-technological change. Like other books published by Traficantes de sueños, this one is published as Open Access, under a CC BY-NC license, and can be downloaded in full. I started casually looking at this book, with too long a backlog of material to read, but soon realized I just could not put it down: it completely captured me.

This book presents several aspects of Artificial Intelligence (AI), written for a general, non-technical audience. Many books with a similar target have been published, but this one is quite unique; first of all, it is written in a personal, informal tone. Contrary to what’s usual in my reading, the author made the explicit decision not to fill the book with references to her sources (“because searching on the Internet, it’s very easy to find things”), making the book easier to read linearly, a decision I somewhat regret but recognize helps develop the author’s style.

The book has seven sections, dealing with different aspects of AI. They are the “Visions” (historical framing of the development of AI); “Spectacular” (why do we feel AI to be so disrupting, digging particularly into game engines and search space); “Strategies”, explaining how multilayer neural networks work and linking the various branches of historic AI together, arriving at Natural Language Processing; “On the inside”, tackling technical details such as algorithms, the importance of training data, bias, discrimination; “On the outside”, presenting several example AI implementations with socio-ethical implications; “Philosophy”, presenting the works of Marx, Heidegger and Simondon in their relation with AI, work, justice, ownership; and “Doing”, presenting aspects of social activism in relation to AI. Each part ends with yet another personal note: Margarita Padilla includes a letter to one of her friends related to said part.

Totalling 272 pages (A5, or roughly half-letter, format), this is a rather small book; I read it over about a week. So, while the book did not provide me with lots of new information, the way it was written made reading it a very pleasing experience, and it will surely influence the way I understand or explain several concepts in this domain.

07 January, 2026 07:46PM

Thorsten Alteholz

My Debian Activities in December 2025

Debian LTS/ELTS

This was my hundred-and-thirty-eighth month doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. (As the LTS and ELTS teams have been merged now, there is only one paragraph left for both activities.)

During my allocated time I uploaded or worked on:

  • [cups] upload to unstable to fix an issue with the latest security upload
  • [libcoap3] uploaded to unstable to fix ten CVEs
  • [gcal] check whether security bug reports are really security bug reports (no, they are not and no CVEs have been issued yet)
  • [#1124284] trixie-pu for libcoap3 to fix ten CVEs in Trixie.
  • [#1121342] trixie-pu bug; debdiff has been approved and libcupsfilters uploaded.
  • [#1121391] trixie-pu bug; debdiff has been approved and cups-filter uploaded.
  • [#1121392] bookworm-pu bug; debdiff has been approved and cups-filter uploaded.
  • [#1121433] trixie-pu bug; debdiff has been approved and rlottie uploaded.
  • [#1121437] bookworm-pu bug; debdiff has been approved and rlottie uploaded.
  • [#1124284] trixie-pu bug; debdiff has been approved and libcoap3 uploaded.

I also tried to backport the libcoap3-patches to Bookworm, but I am afraid the changes would be too intrusive.

When I stumbled upon a comment for 7zip about how “finding the patches might be hard”, I couldn’t believe it. Well, Daniel was right and I didn’t find any.

Furthermore I worked on suricata, marked some CVEs as not-affected or ignored, and added some new patches. Unfortunately my allocated time was spent before I could do a new upload.

I also attended the monthly LTS/ELTS meeting.

Last but not least I injected some packages for uploads to security-master.

Debian Printing

This month I uploaded a new upstream version or a bugfix version of:

  • cups to unstable.

This work is generously funded by Freexian!

Debian Lomiri

I started to contribute to Lomiri packages, which are part of the Debian UBports Team. As a first step I took care of failing CI pipelines and tried to fix them. A next step would be to package some new Applications.

This work is generously funded by Fre(i)e Software GmbH!

Debian Astro

This month I uploaded a new upstream version or a bugfix version of:

Debian IoT

This month I uploaded a new upstream version or a bugfix version of:

Debian Mobcom

Unfortunately I didn’t find any time to work on this topic.

misc

This month I uploaded a new upstream version or a bugfix version of:

Last but not least, I wish (almost) everybody a Happy New Year and hope that you are able to stick to your New Year’s resolutions.

07 January, 2026 02:54PM by alteholz

January 06, 2026

Ingo Juergensmann

Outages on Nerdculture.de due to Ceph – Part 2

Last weekend I had “fun” with Ceph again on a Saturday evening. But let’s start at the beginning….

Before the weekend I announced a downtime/maintenance window to upgrade PostgreSQL from v15 to v17 – because of the Debian upgrade from Bookworm to Trixie. After some tests with a cloned VM I decided to use the quick path of pg_upgradecluster 15 main -v 17 -m upgrade --clone. As this would be my first time upgrading PostgreSQL that way, I made several backups. In the end everything went smoothly and the database is now on v17.

However, there was also a new Proxmox kernel and new packages, so I also upgraded one Proxmox node and rebooted it. And then the issues began:

But before that, I also encountered an issue with Redis for Mastodon. It complained about this:

Unable to obtain the AOF file appendonly.aof.4398.base.rdb

The solution to this was to change the Redis configuration to appendonly no.
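
For reference, the corresponding setting in /etc/redis/redis.conf looks like this (a sketch; note that this disables AOF persistence entirely, leaving only RDB snapshots):

# Disable the append-only file so Redis relies on RDB snapshots alone.
appendonly no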

And then CephFS was unavailable again, complaining about laggy MDS or no MDS at all, which – of course – was totally wrong. I searched for solutions and read many forum posts in the Proxmox forum, but nothing helped. I also read the official Ceph documentation. After a whole day of all services being offline for my thousands of users, I somehow managed to get the mount working again with systemctl reset-failed mnt-pve-cephfs && systemctl start mnt-pve-cephfs. Shortly before that I had followed the advice in the Ceph docs on RADOS health, especially the section about troubleshooting monitors.
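
For anyone in a similar spot, these are standard Ceph/systemd commands for inspecting the cluster state (a sketch of the debugging session, not a guaranteed fix):

$ ceph -s                                # overall cluster status
$ ceph health detail                     # expanded health warnings
$ ceph mds stat                          # MDS state for CephFS
$ ceph fs status                         # per-filesystem overview
$ systemctl reset-failed mnt-pve-cephfs  # clear the failed mount unit...
$ systemctl start mnt-pve-cephfs         # ...and retry the mount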

In the end, I can’t say which step exactly did the trick that CephFS was working again. But as it seems, I will have one or two more chances to find out, because only one server out of three is currently updated.

Another issue during the downtime was that one server crashed/rebooted and didn’t come back. It hung in the middle of an upgrade, at the update-grub step. Usually that wouldn’t be a big deal: just go to the IPMI web interface and reboot the server.

Nah! That’s too simple!

For some unknown reason the IPMI interfaces had lost their DHCP leases: the DHCP server at the colocation was not serving IPs. So I opened a ticket, got an acknowledgement from support, but also a statement of “maybe tomorrow or on Monday…”. Hmpf!

On Sunday evening I managed to bring back CephFS. As said: no idea which specific step did the trick. But the story continues: on Monday before lunchtime the IPMI DHCP was working again and I could access the web interfaces, logged in… and was forcefully locked out again:

Your session has timed out. You will need to open a new session

I hit the problem described here. But cold-resetting the BMC didn’t work. So there was still no working web interface to deal with the issue. But I have the “IPMIView” app on my phone, and that still worked and showed the KVM console. What I saw there didn’t make me happy either:

The reason for this is apparently the crash while running update-grub. Anyway, using the GRUB bootloader and selecting an older kernel works fine. The server boots, Proxmox shows the node as up and… the working CephFS is stalled again! Fsck!

Rebooting the node or stopping Ceph on that node immediately results in a working CephFS again.

Currently I’m moving everything off Ceph to the local disks of the two nodes. Once everything is on local disks I can work on debugging CephFS without interrupting the service for the users (hopefully). But this also means that there will be no redundancy for Mastodon and mail.

When I have more detailed information about possible reasons and such, I may post to the Proxmox forum.

06 January, 2026 03:57PM by ij

Swiss JuristGate

Scapegoats, Crans-Montana, Le Constellation: candles, managers, commune are not the only ones at fault

In a press conference on the morning of 6 January 2026, the mayor of the Commune of Crans-Montana admitted there had been no inspection for five years and that they only conduct inspections in areas suspected to be high risk, such as kitchens.

After the Pulse nightclub tragedy in North Macedonia, the risks of nightclub fires were re-examined and widely documented. Many European countries paid attention.

Industry magazines and web sites frequented by building inspectors and insurance specialists published extensive reports.

From a report by CROSS UK:

On 16th March 2025, a fire broke out at the Pulse nightclub in North Macedonia. Leaving 61 people dead, the fire was reportedly started by indoor fireworks igniting materials inside the club. Many people will remember similar incidents ...

... Netherlands, Romania, Russia, Argentina, the USA, Ecuador, China and Brazil. Around 1,000 people have died in such fires. ...

... the alarming global trend: mass fatalities are still occurring because of the use of pyrotechnics in nightclubs. Furthermore, the frequency and severity of these fires is a real cause for concern. ...

The British seem to get it. Their very detailed national guidance encourages business owners, building designers, inspectors and local authorities to take all measures to prevent mass death:

You need to be aware that certain functions e.g. discos, can present additional dangers for the audience, largely from the effects of over excitement and irrational behaviour; together with the higher noise level and flashing lights. In such circumstances, and particularly where there is a mainly younger audience, you should ensure that there are a sufficient number of competent and adequately trained attendants to cover an emergency situation; and a public address system which can over-ride the performance and be heard clearly in all parts of the premises will be required.

In Switzerland, however, responsibility for writing the standards and enforcing them is split across twenty six different cantons and hundreds of tiny little communes / municipalities.

How many Swiss cantons did an audit after Pulse nightclub? How many did nothing?

The CROSS-UK research report was shared widely and ignited a debate in the industry. International Fire & Safety Journal was one of many outlets to direct people towards the CROSS-UK report.

Is it possible that somebody in at least one Swiss canton read that report and decided to do an audit?

Even if the Swiss authorities find a defect, how long does it take before they protect the public? Look at the case of the illegal legal insurance firm, Parreaux, Thiebaud & Partners. It is alleged that authorities knew about the firm since 2018 but didn't take enforcement action until 2023.

Look at Grenfell Tower and the insulation company. Internal documents show that the manufacturer, Kingspan, knew about fire risk and covered it up.

From the BBC's news report about the inquiry:

Kingspan was dishonest and cynical, Grenfell Inquiry finds

The report published on Wednesday found Kingspan, which is headquartered in County Cavan, was not directly responsible for the fire but showed "complete disregard for fire safety" in how it marketed one of its products.

It also demonstrated "deeply entrenched and persistent dishonesty...in pursuit of commercial gain".

How many building control specialists followed the reports about the Grenfell tragedy and expanded their practice to include the insulation foams, accoustic foams and similar products?

In comments quoted by Le Temps and many other media outlets, I feel the canton's prosecutor, Béatrice Pilloud, is now being evasive about the width of the exit staircase:

En ce qui concerne l’étroitesse de l’escalier, j’ai vu l’escalier. J’ai été constaté également sur place, qu’est-ce qu’un escalier étroit, en fait, finalement?

Translated to English:

Concerning the narrowness of the staircase, I saw the staircase. I also observed it on site. What is a narrow staircase, in fact, ultimately?

Looking back at the UK regulations, they would simply take out a measuring tape and give us an answer in millimetres. Then they would compare it to the minimum widths in the regulations. Pilloud spent most of her career working as a defence lawyer, and when I look at the way she avoided a direct question about the width, I feel that rather than looking for every possible reason to advance the prosecution, she is falling into the habits she developed as a defence lawyer and trying to avoid putting any facts in the open that could become landmines for her political cousins who run the canton and the communes.

In fact, we saw similar phenomena when people made abuse reports. The church appointed a lawyer with the title of "Independent Commissioner" and encouraged victims to take their complaints to him for assessment. The Royal Commission heard allegations that he wasn't active enough in encouraging people to make police reports and some evidence was leaked to other lawyers acting to defend the church:

The Catholic Church's independent commissioner was unable to explain how he received confidential information from a victim of serial paedophile priest Kevin O'Donnell or why it was passed on to the church's lawyers in an apparent breach of confidentiality.

The Royal Commission also raised concerns about the independence of Peter O'Callaghan, QC, who has investigated allegations of clerical abuse for the past 18 years under the church's controversial Melbourne Response.

The church's law firm Corrs Chambers Westgarth was also questioned over its handling of files and sensitive information from three separate arms of the Melbourne Response, which claim to be independent of each other.

One of Mr O'Callaghan's first duties as a lawyer involved representing the concerns of my grandparents in the Liquor Royal Commission. This is ironic, of course, because the late Cardinal Pell would subsequently be prosecuted in relation to allegations involving two choir boys who went looking for the sacramental wine.

The Liquor Royal Commission sought to examine all aspects of the industry, from the viability of small businesses to the safety of the public.

In Switzerland, authorities were considering a proposal to relax fire safety regulations but that has now been put on hold and the consultation extended.

Stéphane Ganzer, the security minister who previously worked as a fireman, told us that they use the same candles / sparklers in every venue without a problem and there must have been a more serious defect in the building that allowed it to catch fire.

After looking at the videos of the moment when the fire started, the media speculated whether it was an acoustic foam that was not in conformance with fire regulations.

Now, with the media briefing of 6 January, the commune has admitted that the acoustic foam was inspected only a few weeks ago to ensure it was adequate for noise prevention, but they do not mention whether any fire-safety check is performed in the same audit.

This suggests that the neighbors and the commune were so concerned with enforcing Switzerland's super-strict regulations on noise emissions after 10pm that they forced the children into this dangerous underground bunker run by a convicted pimp.

Every Swiss parent knows where their fifteen year old hangs out on a Saturday night if for no other reason than the fact somebody has to drive there to pick them up. Le Constellation was no secret.

As noted in a previous report, the three most senior political figures in the canton all live in the immediate vicinity of Le Constellation. How many times have they been there to pick up their own children after a party?

Liquor licensing, police and fire inspectors normally make random visits to high-risk venues during trading hours. This is well known throughout the world. Does this type of inspection happen from time to time in Valais?

Did the owner, who appears to be a chef, really do the load-bearing walls and the electrical work entirely by himself or did an architect, engineer or electrician provide oversight at various moments during the project?

Journalists found various photos published by the owners during the construction works. Who are all the other people in the photos? Do any of them have professional responsibility or will they be given anonymity?

How many other premises in Canton Valais had assistance from the same engineer and architect? Will the owners of other premises be given anonymity too?

Are there any cantons in Switzerland where the architects and engineers are required to be qualified as in some other countries or can anybody work in these professions, even without a degree?

From a Swiss construction web site:

Job title

The professional title "architect" is not protected in Switzerland. This means that, in principle, anyone can call themselves an architect.

In this "uncontrolled growth", the Bachelor's degree in architecture from a federally accredited university (universities of applied sciences, universities) is a recognized seal of quality. It leads to the title "Bachelor of Arts [FH/UH] in Architecture".

Authorization to practice the profession

Who is allowed to work independently as an architect is regulated by the cantons. There are cantons (e.g. GE, VD, NE) in which the architectural profession is regulated and a license and proof of professional qualifications are required to practice. In other cantons, the profession is not regulated.

What is the role of the insurance company? Did they know they were insuring a night club full of teenagers or were they tricked? Do they require policyholders to have any inspections or certifications above and beyond those mandated by the local municipality?

Look at JuristGate: it looks like one of the women who worked for the rogue insurance company was immediately given a job at another insurance company. Were the insurance companies in collaboration with the Swiss authorities to cover up the extraordinary levels of incompetence?

Looking through the meeting minutes of the municipality, the canton and the federal parliament, can we find previous discussions about the regulation and inspection of night clubs and similar spaces?

Look at how the politicians are hyper-sensitive about foreign people coming to work in Switzerland, but they applied no scrutiny to these foreign people setting up a high-risk business where children go on Saturday nights and New Year's Eve.

Many other people interacted with Jacques Moretti and Jessica Maric throughout the ten years they spent renovating and operating businesses in Crans-Montana. A genuinely thorough investigation might raise uncomfortable questions about each of their collaborators and all the other premises where any of the same collaborators have had a role in design, construction or supervision. Switzerland is only a small country where everybody is related to everybody else. For that reason, many people would prefer not to open up that can of worms at all because they don't know how many other people might be brought down with the Morettis.

This was never just about the candles or the Morettis.

Please see the rest of the JuristGate reports.

06 January, 2026 01:30PM

January 05, 2026

hackergotchi for Matthew Garrett

Matthew Garrett

Not here

Hello! I am not posting here any more. You can find me here instead. Most Planets should be updated already (I've an MR open for Planet Gnome), but if you're subscribed to my feed directly please update it.

comment count unavailable comments

05 January, 2026 10:26PM

hackergotchi for Colin Watson

Colin Watson

Free software activity in December 2025

About 95% of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay or GitHub Sponsors.

Python packaging

I upgraded these packages to new upstream versions:

Python 3.14 is now a supported version in unstable, and we’re working to get that into testing. As usual this is a pretty arduous effort because it requires going round and fixing lots of odds and ends across the whole ecosystem. We can deal with a fair number of problems by keeping up with upstream (see above), but there tends to be a long tail of packages whose upstreams are less active and where we need to chase them, or where problems only show up in Debian for one reason or another. I spent a lot of time working on this:

Fixes for pytest 9:

I filed lintian: Report Python egg-info files/directories to help us track the migration to pybuild-plugin-pyproject.

I did some work on dh-python: Normalize names in pydist lookups and pyproject plugin: Support headers (the latter of which allowed converting python-persistent and zope.proxy to pybuild-plugin-pyproject, although it needed a follow-up fix).

I fixed or helped to fix several other build/test failures:

Other bugs:

Other bits and pieces

Code reviews

05 January, 2026 01:08PM by Colin Watson

hackergotchi for Bits from Debian

Bits from Debian

Debian welcomes Outreachy interns for December 2025-March 2026 round

Outreachy logo

Debian continues participating in Outreachy, and as you might have already noticed, Debian has selected two interns for the Outreachy December 2025 - March 2026 round.

After a busy contribution phase and a competitive selection process, Hellen Chemtai Taylor and Isoken Ibizugbe have officially been working for the past month as interns on Debian Images Testing with OpenQA, mentored by Tássia Camões Araújo, Roland Clobus and Philip Hands.

Congratulations and welcome Hellen Chemtai Taylor and Isoken Ibizugbe!

The team also congratulates all candidates for their valuable contributions, with special thanks to those who managed to continue participating as volunteers.

From the official website: Outreachy provides three-month internships for people from groups traditionally underrepresented in tech. Interns work remotely with mentors from Free and Open Source Software (FOSS) communities on projects ranging from programming, user experience, documentation, illustration and graphical design, to data science.

The Outreachy programme is possible in Debian thanks to the efforts of Debian developers and contributors who dedicate their free time to mentor students and outreach tasks, and the Software Freedom Conservancy's administrative support, as well as the continued support of Debian's donors, who provide funding for the internships.

Join us and help to improve Debian! You can follow the work of the Outreachy interns reading their blog posts (syndicated in Planet Debian), and chat with the team at the debian-openqa matrix channel. For Outreachy matters, the programme admins can be reached on #debian-outreach IRC/matrix channel and mailing list.

05 January, 2026 09:00AM by Anupa Ann Joseph, Tássia Camões Araújo

Vincent Bernat

Using eBPF to load-balance traffic across UDP sockets with Go

Akvorado collects sFlow and IPFIX flows over UDP. Because UDP does not retransmit lost packets, it needs to process them quickly. Akvorado runs several workers listening to the same port. The kernel should load-balance received packets fairly between these workers. However, this does not work as expected. A couple of workers exhibit high packet loss:

$ curl -s 127.0.0.1:8080/api/v0/inlet/metrics \
> | sed -n s/akvorado_inlet_flow_input_udp_in_dropped//p
packets_total{listener="0.0.0.0:2055",worker="0"} 0
packets_total{listener="0.0.0.0:2055",worker="1"} 0
packets_total{listener="0.0.0.0:2055",worker="2"} 0
packets_total{listener="0.0.0.0:2055",worker="3"} 1.614933572278264e+15
packets_total{listener="0.0.0.0:2055",worker="4"} 0
packets_total{listener="0.0.0.0:2055",worker="5"} 0
packets_total{listener="0.0.0.0:2055",worker="6"} 9.59964121598348e+14
packets_total{listener="0.0.0.0:2055",worker="7"} 0

eBPF can help by implementing an alternate balancing algorithm.

Options for load-balancing

There are three methods to load-balance UDP packets across workers:

  1. One worker receives the packets and dispatches them to the other workers.
  2. All workers share the same socket.
  3. Each worker has its own socket, listening to the same port, with the SO_REUSEPORT socket option.

SO_REUSEPORT option

Tom Herbert added the SO_REUSEPORT socket option in Linux 3.9. The cover letter for his patch series explains why this new option is better than the two existing ones from a performance point of view:

SO_REUSEPORT allows multiple listener sockets to be bound to the same port. […] Received packets are distributed to multiple sockets bound to the same port using a 4-tuple hash.

The motivating case for SO_RESUSEPORT in TCP would be something like a web server binding to port 80 running with multiple threads, where each thread might have it’s own listener socket. This could be done as an alternative to other models:

  1. have one listener thread which dispatches completed connections to workers, or
  2. accept on a single listener socket from multiple threads.

In case #1, the listener thread can easily become the bottleneck with high connection turn-over rate. In case #2, the proportion of connections accepted per thread tends to be uneven under high connection load. […] We have seen the disproportion to be as high as a 3:1 ratio between the thread accepting the most connections and the one accepting the fewest. With SO_REUSEPORT the distribution is uniform.

The motivating case for SO_REUSEPORT in UDP would be something like a DNS server. An alternative would be to receive on the same socket from multiple threads. As in the case of TCP, the load across these threads tends to be disproportionate and we also see a lot of contention on the socket lock.

Akvorado uses the SO_REUSEPORT option to dispatch the packets across the workers. However, because the distribution uses a 4-tuple hash, a single socket handles all the flows from one exporter.

SO_ATTACH_REUSEPORT_EBPF option

In Linux 4.5, Craig Gallek added the SO_ATTACH_REUSEPORT_EBPF option to attach an eBPF program to select the target UDP socket. In Linux 4.6, he extended it to support TCP. The socket(7) manual page documents this mechanism:1

The BPF program must return an index between 0 and N-1 representing the socket which should receive the packet (where N is the number of sockets in the group). If the BPF program returns an invalid index, socket selection will fall back to the plain SO_REUSEPORT mechanism.
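Concretely, the pre-4.19 mechanism attaches a plain socket-filter program whose return value is the chosen index. A minimal sketch of such a program (mine, not Akvorado's code; the group size of 8 is a made-up example):

SEC("socket")
int select_by_index(struct __sk_buff *skb)
{
    /* The return value is interpreted as an index into the reuseport
     * group; an out-of-range value falls back to the default 4-tuple
     * hash, as documented in socket(7). */
    return bpf_get_prandom_u32() % 8;
}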

In Linux 4.19, Martin KaFai Lau added the BPF_PROG_TYPE_SK_REUSEPORT program type. Such an eBPF program selects the socket from a BPF_MAP_TYPE_REUSEPORT_SOCKARRAY map instead. This new approach is more reliable when switching target sockets from one instance to another—for example, when upgrading, a new instance can add its sockets and remove the old ones.

Load-balancing with eBPF and Go

Altering the load-balancing algorithm for a group of sockets requires two steps:

  1. write and compile an eBPF program in C,2 and
  2. load it and attach it in Go.

eBPF program in C

A simple load-balancing algorithm is to randomly choose the destination socket. The kernel provides the bpf_get_prandom_u32() helper function to get a pseudo-random number.

volatile const __u32 num_sockets; // ❶

struct {
    __uint(type, BPF_MAP_TYPE_REUSEPORT_SOCKARRAY);
    __type(key, __u32);
    __type(value, __u64);
    __uint(max_entries, 256);
} socket_map SEC(".maps"); // ❷

SEC("sk_reuseport")
int reuseport_balance_prog(struct sk_reuseport_md *reuse_md)
{
    __u32 index = bpf_get_prandom_u32() % num_sockets; // ❸
    bpf_sk_select_reuseport(reuse_md, &socket_map, &index, 0); // ❹
    return SK_PASS; // ❺
}

char _license[] SEC("license") = "GPL";

In ❶, we declare a volatile constant for the number of sockets in the group. We will initialize this constant before loading the eBPF program into the kernel. In ❷, we define the socket map. We will populate it with the socket file descriptors. In ❸, we randomly select the index of the target socket.3 In ❹, we invoke the bpf_sk_select_reuseport() helper to record our decision. Finally, in ❺, we accept the packet.

Header files

If you compile the C source with clang, you get errors due to missing headers. The recommended way to solve this is to generate a vmlinux.h file with bpftool:

$ bpftool btf dump file /sys/kernel/btf/vmlinux format c > vmlinux.h

Then, include the following headers:4

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

For my 6.17 kernel, the generated vmlinux.h is quite big: 2.7 MiB. Moreover, bpf/bpf_helpers.h is shipped with libbpf. This adds another dependency for users. As the eBPF program is quite small, I prefer to put the strict minimum in vmlinux.h by cherry-picking the definitions I need.
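To illustrate, a hand-trimmed vmlinux.h for this program could look roughly like the following. This is a sketch, not the post's actual file: the typedefs are what bpf/bpf_helpers.h needs as far as I can tell, and the enum values must match the kernel UAPI, so verify them against the generated vmlinux.h.

/* vmlinux.h, hand-trimmed to what reuseport_kern.c needs. */
typedef unsigned char __u8;
typedef unsigned short __u16;
typedef int __s32;
typedef unsigned int __u32;
typedef long long __s64;
typedef unsigned long long __u64;
typedef __u16 __be16;
typedef __u32 __be32;
typedef __u32 __wsum;

enum bpf_map_type {
    BPF_MAP_TYPE_REUSEPORT_SOCKARRAY = 20, /* check against your vmlinux.h */
};

enum sk_action {
    SK_DROP = 0,
    SK_PASS = 1,
};

/* Only ever used through a pointer, so an incomplete type suffices. */
struct sk_reuseport_md;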

Compilation

The eBPF Library for Go ships bpf2go, a tool to compile eBPF programs and to generate some scaffolding code. We create a gen.go file with the following content:

package main

//go:generate go tool bpf2go -tags linux reuseport reuseport_kern.c

After running go generate ./..., we can inspect the resulting objects with readelf and llvm-objdump:

$ readelf -S reuseport_bpfeb.o
There are 14 section headers, starting at offset 0x840:
  [Nr] Name              Type             Address           Offset
[…]
  [ 3] sk_reuseport      PROGBITS         0000000000000000  00000040
  [ 6] .maps             PROGBITS         0000000000000000  000000c8
  [ 7] license           PROGBITS         0000000000000000  000000e8
[…]
$ llvm-objdump -S reuseport_bpfeb.o
reuseport_bpfeb.o:  file format elf64-bpf
Disassembly of section sk_reuseport:
0000000000000000 <reuseport_balance_prog>:
; {
       0:   bf 61 00 00 00 00 00 00     r6 = r1
;     __u32 index = bpf_get_prandom_u32() % num_sockets;
       1:   85 00 00 00 00 00 00 07     call 0x7
[…]

Usage from Go

Let’s set up 10 workers listening to the same port.5 Each socket enables the SO_REUSEPORT option before binding:6

var (
    err error
    fds []uintptr
    conns []*net.UDPConn
)
workers := 10
listenAddr := "127.0.0.1:0"
listenConfig := net.ListenConfig{
    Control: func(_, _ string, c syscall.RawConn) error {
        c.Control(func(fd uintptr) {
            err = unix.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_REUSEPORT, 1)
            fds = append(fds, fd)
        })
        return err
    },
}
for range workers {
    pconn, err := listenConfig.ListenPacket(t.Context(), "udp", listenAddr)
    if err != nil {
        t.Fatalf("ListenPacket() error:\n%+v", err)
    }
    udpConn := pconn.(*net.UDPConn)
    listenAddr = udpConn.LocalAddr().String()
    conns = append(conns, udpConn)
}

The second step is to load the eBPF program, initialize the num_sockets variable, populate the socket map, and attach the program to the first socket.7

// Load the eBPF collection.
spec, err := loadReuseport()
if err != nil {
    t.Fatalf("loadVariables() error:\n%+v", err)
}

// Set "num_sockets" global variable to the number of file descriptors we will register
if err := spec.Variables["num_sockets"].Set(uint32(len(fds))); err != nil {
    t.Fatalf("NumSockets.Set() error:\n%+v", err)
}

// Load the map and the program into the kernel.
var objs reuseportObjects
if err := spec.LoadAndAssign(&objs, nil); err != nil {
    t.Fatalf("loadReuseportObjects() error:\n%+v", err)
}
t.Cleanup(func() { objs.Close() })

// Assign the file descriptors to the socket map.
for worker, fd := range fds {
    if err := objs.reuseportMaps.SocketMap.Put(uint32(worker), uint64(fd)); err != nil {
        t.Fatalf("SocketMap.Put() error:\n%+v", err)
    }
}

// Attach the eBPF program to the first socket.
socketFD := int(fds[0])
progFD := objs.reuseportPrograms.ReuseportBalanceProg.FD()
if err := unix.SetsockoptInt(socketFD, unix.SOL_SOCKET, unix.SO_ATTACH_REUSEPORT_EBPF, progFD); err != nil {
    t.Fatalf("SetsockoptInt() error:\n%+v", err)
}

We are now ready to process incoming packets. Each worker is a goroutine incrementing a counter for each received packet:8

var wg sync.WaitGroup
receivedPackets := make([]int, workers)
for worker := range workers {
    conn := conns[worker]
    packets := &receivedPackets[worker]
    wg.Go(func() {
        payload := make([]byte, 9000)
        for {
            if _, err := conn.Read(payload); err != nil {
                if errors.Is(err, net.ErrClosed) {
                    return
                }
                t.Logf("Read() error:\n%+v", err)
            }
            *packets++
        }
    })
}

Let’s send 1000 packets:

sentPackets := 1000
conn, err := net.Dial("udp", conns[0].LocalAddr().String())
if err != nil {
    t.Fatalf("Dial() error:\n%+v", err)
}
defer conn.Close()
for range sentPackets {
    if _, err := conn.Write([]byte("hello world!")); err != nil {
        t.Fatalf("Write() error:\n%+v", err)
    }
}

If we print the content of the receivedPackets array, we can check the balancing works as expected, with each worker getting about 100 packets:

=== RUN   TestUDPWorkerBalancing
    balancing_test.go:84: receivedPackets[0] = 107
    balancing_test.go:84: receivedPackets[1] = 92
    balancing_test.go:84: receivedPackets[2] = 99
    balancing_test.go:84: receivedPackets[3] = 105
    balancing_test.go:84: receivedPackets[4] = 107
    balancing_test.go:84: receivedPackets[5] = 96
    balancing_test.go:84: receivedPackets[6] = 102
    balancing_test.go:84: receivedPackets[7] = 105
    balancing_test.go:84: receivedPackets[8] = 99
    balancing_test.go:84: receivedPackets[9] = 88

    balancing_test.go:91: receivedPackets = 1000
    balancing_test.go:92: sentPackets     = 1000

Graceful restart

You can also use SO_ATTACH_REUSEPORT_EBPF to gracefully restart an application. A new instance of the application binds to the same address and prepares its own version of the socket map. Once it attaches the eBPF program to the first socket, the kernel steers incoming packets to this new instance. The old instance then needs to drain the packets it has already received before shutting down.

To check that we are not losing any packets, we spawn a goroutine that sends as many packets as possible:

sentPackets := 0
notSentPackets := 0
done := make(chan bool)
conn, err := net.Dial("udp", conns1[0].LocalAddr().String())
if err != nil {
    t.Fatalf("Dial() error:\n%+v", err)
}
defer conn.Close()
go func() {
    for {
        if _, err := conn.Write([]byte("hello world!")); err != nil {
            notSentPackets++
        } else {
            sentPackets++
        }
        select {
        case <-done:
            return
        default:
        }
    }
}()

Then, while the goroutine runs, we start the second set of workers. Once they are running, they start receiving packets. If we gracefully stop the initial set of workers, not a single packet is lost!9

=== RUN   TestGracefulRestart
    graceful_test.go:135: receivedPackets1[0] = 165
    graceful_test.go:135: receivedPackets1[1] = 195
    graceful_test.go:135: receivedPackets1[2] = 194
    graceful_test.go:135: receivedPackets1[3] = 190
    graceful_test.go:135: receivedPackets1[4] = 213
    graceful_test.go:135: receivedPackets1[5] = 187
    graceful_test.go:135: receivedPackets1[6] = 170
    graceful_test.go:135: receivedPackets1[7] = 190
    graceful_test.go:135: receivedPackets1[8] = 194
    graceful_test.go:135: receivedPackets1[9] = 155

    graceful_test.go:139: receivedPackets2[0] = 1631
    graceful_test.go:139: receivedPackets2[1] = 1582
    graceful_test.go:139: receivedPackets2[2] = 1594
    graceful_test.go:139: receivedPackets2[3] = 1611
    graceful_test.go:139: receivedPackets2[4] = 1571
    graceful_test.go:139: receivedPackets2[5] = 1660
    graceful_test.go:139: receivedPackets2[6] = 1587
    graceful_test.go:139: receivedPackets2[7] = 1605
    graceful_test.go:139: receivedPackets2[8] = 1631
    graceful_test.go:139: receivedPackets2[9] = 1689

    graceful_test.go:147: receivedPackets = 18014
    graceful_test.go:148: sentPackets     = 18014

Unfortunately, gracefully shutting down a UDP socket is not trivial in Go.10 Previously, we were terminating workers by closing their sockets. However, if we close them too soon, the application loses packets that were assigned to them but not yet processed. Before stopping, a worker needs to call conn.Read() until there are no more packets. A solution is to set a deadline for conn.Read() and check whether we should stop the goroutine when the deadline is exceeded:

payload := make([]byte, 9000)
for {
    conn.SetReadDeadline(time.Now().Add(50 * time.Millisecond))
    if _, err := conn.Read(payload); err != nil {
        if errors.Is(err, os.ErrDeadlineExceeded) {
            select {
            case <-done:
                return
            default:
                continue
            }
        }
        t.Logf("Read() error:\n%+v", err)
    }
    *packets++
}
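For comparison, footnote 10 describes the classic C approach. Here is a minimal sketch of it (mine, not code from Akvorado):

#include <poll.h>
#include <sys/socket.h>

static void worker_loop(int sock, int shutdown_rd)
{
    char buf[9000];
    struct pollfd fds[2] = {
        { .fd = sock,        .events = POLLIN },
        { .fd = shutdown_rd, .events = POLLIN },
    };
    for (;;) {
        if (poll(fds, 2, -1) < 0)
            continue; /* interrupted by a signal: retry */
        if (fds[0].revents & POLLIN)
            recv(sock, buf, sizeof(buf), 0); /* process one packet */
        if (fds[1].revents & POLLIN) {
            /* Shutdown signalled: drain what the kernel already queued. */
            while (recv(sock, buf, sizeof(buf), MSG_DONTWAIT) >= 0)
                ; /* process each drained packet */
            return; /* recv() failed with EWOULDBLOCK: queue is empty */
        }
    }
}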

With TCP, this aspect is simpler: after enabling the net.ipv4.tcp_migrate_req sysctl, the kernel automatically migrates waiting connections to a random socket in the same group. Alternatively, eBPF can also control this migration. Both features are available since Linux 5.14.

Addendum

After implementing this strategy in Akvorado, all workers now drop packets! 😱

$ curl -s 127.0.0.1:8080/api/v0/inlet/metrics \
> | sed -n s/akvorado_inlet_flow_input_udp_in_dropped//p
packets_total{listener="0.0.0.0:2055",worker="0"} 838673
packets_total{listener="0.0.0.0:2055",worker="1"} 843675
packets_total{listener="0.0.0.0:2055",worker="2"} 837922
packets_total{listener="0.0.0.0:2055",worker="3"} 841443
packets_total{listener="0.0.0.0:2055",worker="4"} 840668
packets_total{listener="0.0.0.0:2055",worker="5"} 850274
packets_total{listener="0.0.0.0:2055",worker="6"} 835488
packets_total{listener="0.0.0.0:2055",worker="7"} 834479

The root cause is the default limit of 32 records for Kafka batch sizes. This limit is too low because the brokers incur a large overhead for each batch: they need to make sure a batch is correctly persisted before acknowledging it. Increasing the limit to 4096 records fixes this issue.

While load-balancing incoming flows with eBPF remains useful, it did not solve the main issue. At least the even distribution of dropped packets helped identify the real bottleneck. 😅


  1. The current version of the manual page is incomplete and does not cover the evolution introduced in Linux 4.19. There is a pending patch about this. ↩

  2. Rust is another option. However, the program we use is so trivial that it does not make sense to use Rust. ↩

  3. As bpf_get_prandom_u32() returns a pseudo-random 32-bit unsigned value, this method exhibits a very slight bias towards the first indexes. This is unlikely to be worth fixing. ↩

  4. Some examples include <linux/bpf.h> instead of "vmlinux.h". This makes your eBPF program dependent on the installed kernel headers. ↩

  5. listenAddr is initially set to 127.0.0.1:0 to allocate a random port. After the first iteration, it is updated with the allocated port. ↩

  6. This is the setupSockets() function in fixtures_test.go. ↩

  7. This is the setupEBPF() function in fixtures_test.go. ↩

  8. The complete code is in balancing_test.go. ↩

  9. The complete code is in graceful_test.go. ↩

  10. In C, we would poll() both the socket and a pipe used to signal for shutdown. When the second condition is triggered, we drain the socket by executing a series of non-blocking read() until we get EWOULDBLOCK. ↩

05 January, 2026 08:51AM by Vincent Bernat

Jonathan McDowell

Free Software Activities for 2025

Given we’ve entered a new year it’s time for my annual recap of my Free Software activities for the previous calendar year. For previous years see 2019, 2020, 2021, 2022, 2023 + 2024.

Conferences

My first conference of the year was FOSDEM. I’d submitted a talk proposal about system attestation in production environments for the attestation devroom, but they had a lot of good submissions and mine was a bit more “this is how we do it” rather than “here’s some neat Free Software that does it”. I’m still trying to work out how to make some of the bits we do more open, but the problem is a lot of the neat stuff is about taking internal knowledge about what should be running and making sure that’s the case, and what you end up with if you abstract that is a toolkit that still needs a lot of work to get something useful.

I had more luck at DebConf25 where I gave a talk (Don’t fear the TPM) trying to explain how TPMs could be useful in a Debian context. Naturally the comments section descended into a discussion about UEFI Secure Boot, which is a separate, if related, thing. DebConf also featured the usual catch up with fellow team members, hanging out with folk I hadn’t seen in ages, and generally feeling a bit more invigorated about Debian.

Other conferences I considered, but couldn’t justify, were All Systems Go! and the Linux Plumbers Conference. I’ve no doubt both would have had a bunch of interesting and relevant talks + discussions, but not enough this year.

I’m going to have to miss FOSDEM this year, due to travel later in the month, and I’m uncertain if I’m going to make DebConf (for a variety of reasons). That means I don’t have a Free Software conference planned for 2026. Ironically FOSSY moving away from Portland makes it a less appealing option (I have Portland friends it would be good to visit). Other than potential Debian MiniConfs, anything else European I should consider?

Debian

I continue to try and keep RetroArch in shape, with 1.22.2+dfsg-1 (and, shortly after, 1.22.2+dfsg-2; git-buildpackage in trixie seems stricter about Build-Depends existing in the outside environment, and I keep forgetting I need Build-Depends-Arch and Build-Depends-Indep to be pretty much the same, with a minimal Build-Depends that just has enough for the clean target) getting uploaded in December, and 1.20.0+dfsg-1, 1.20+dfsg-2 + 1.20+dfsg-3 all being uploaded earlier in the year. retroarch-assets had 1.20.0+dfsg-1 uploaded back in April. I need to find some time to get 1.22.0 packaged. libretro-snes9x got updated to 1.63+dfsg-1.
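For readers who haven't hit this, a minimal sketch of the split in debian/control (the package names here are placeholders, not RetroArch's actual build dependencies):

# Just enough to run the clean target:
Build-Depends: debhelper-compat (= 13)
# Everything needed for the actual builds, kept (pretty much) in sync:
Build-Depends-Arch: libfoo-dev, libbar-dev
Build-Depends-Indep: libfoo-dev, libbar-dev, docbook-xsl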

sdcc saw 4.5.0+dfsg-1, 4.5.0+dfsg-2, 4.5.0+dfsg-3 (I love major GCC upgrades) and 4.5.0-dfsg-4 uploads. There’s an outstanding bug around a LaTeX error building the manual, but this turns out to be a bug in the 2.5 RC for LyX. Huge credit to Tobias Quathamer for engaging with this, and Pavel Sanda + Jürgen Spitzmüller from the LyX upstream for figuring out the issue + a fix.

Pulseview saw 0.4.2-4 uploaded to fix issues with the GCC 15 + CMake upgrades. I should probably chase the sigrok upstream about new releases; I think there are a bunch of devices that have gained support in git without seeing a tagged release yet.

I did an Electronics Team upload for gputils 1.5.2-2 to fix compilation with GCC 15.

While I don’t do a lot with storage devices these days if I can help it I still pay a little bit of attention to sg3-utils. That resulted in 1.48-2 and 1.48-3 uploads in 2025.

libcli got a 1.10.7-3 upload to deal with the libcrypt-dev split out.

Finally I got more up-to-date versions of libtorrent (0.15.7-1) and rtorrent (also 0.15.7-1) uploaded to experimental. There’s a ppc64el build failure in libtorrent, but having asked on debian-powerpc this looks like a flaky test/code and I should probably go ahead and upload to unstable.

I sponsored some uploads for Michel Lind - the initial uploads of plymouth-theme-hot-dog, and the separated out pykdumpfile package.

Recognising the fact I wasn’t contributing in a useful fashion to the Data Protection Team, I set about trying to resign in an orderly manner - see Andreas’ call for volunteers that went out in the last week. Shout out to Enrico for pointing out in the past that we should gracefully step down from things we’re not actually managing to do, to avoid the perception it’s all fine and no one else needs to step up. It took me too long to act on it.

The Debian keyring team continues to operate smoothly, maintaining our monthly release cadence with a 3 month rotation ensuring all team members stay familiar with the process, and ensure their setups are still operational (especially important after Debian releases). I handled the 2025.03.23, 2025.06.24, 2025.06.27, 2025.09.18, 2025.12.08 + 2025.12.26 pushes.

Linux

TPM related fixes were the theme of my kernel contributions in 2025, all within a work context. Some were just cleanups, but several fixed real problems that were affecting us. I’ve also tried to be more proactive about reviewing diffs in the TPM subsystem; it feels like a useful way to contribute, as well as making me pay more active attention to what’s going on there.

Personal projects

I did some work on onak, my OpenPGP keyserver. That resulted in a 0.6.4 release, mainly driven by fixes for building with more recent CMake + GCC versions in Debian. I’ve got a set of changes that should add RFC9580 (v6) support, but there aren’t a lot of test keys out there at present to make sure I’m handling things properly. Equally there’s a plan to remove Berkeley DB from Debian, which I’m completely down with, but that means I need a new primary backend. I’ve got a draft of LMDB support to replace that, but I need to go back and confirm I’ve got all the important bits implemented before publishing it and committing to a DB layout. I’d also like to add sqlite support as an option, but that needs some thought about trying to take proper advantage of its features, rather than just treating it as a key-value store.

(I know everyone likes to hate on OpenPGP these days, but I continue to be interested by the whole web-of-trust piece of it, which nothing else I’m aware of offers.)

That about wraps up 2025. Nothing particularly earth shaking in there, more a case of continuing to tread water on the various things I’m involved in. I highly doubt 2026 will be much different, but I think that’s ok. I scratch my own itches, and if that helps out other folk too then that’s lovely, but not the primary goal.

05 January, 2026 07:57AM

Russell Coker

Phone Charging Speeds With Debian/Trixie

One of the problems I encountered with the PinePhone Pro (PPP) when I tried using it as a daily driver [1] was the charge speed, both slow charging and a bad ratio of charge speed to discharge speed. I also tried using a One Plus 6 (OP6) which had a better charge speed and battery life but I never got VoLTE to work [2] and VoLTE is a requirement for use in Australia and an increasing number of other countries. In my tests with the Librem 5 from Purism I had similar issues with charge speed [3].

What I want to do is get an acceptable ratio of charge time to use time for a free software phone. I don’t necessarily object to a phone that can’t last an 8 hour day on a charge, but I can’t use a phone that needs to be on charge for 4 hours during the day. For this part I’m testing the charge speed and will test the discharge speed when I have solved some issues with excessive CPU use.

I tested with a cheap USB power monitoring device that is inline between the power cable and the phone. The device has no method of export so I just watched it and when the numbers fluctuated I tried to estimate the average. I only give the results to two significant digits which is about all the accuracy that is available, as I copied the numbers separately the V*A might not exactly equal the W. I idly considered rounding off Voltages to the nearest Volt and current to the half amp but the way the PC USB ports have voltage drop at higher currents is interesting.

This post should be useful for people who want to try out FOSS phones but don’t want to buy the range of phones and chargers that I have bought.

Phones Tested

I have seen claims about improvements with charging speed on the Librem 5 with recent updates so I decided to compare a number of phones running Debian/Trixie as well as some Android phones. I’m comparing an old Samsung phone (which I tried running Droidian on but is now on Android) and a couple of Pixel phones with the three phones that I currently have running Debian for charging.

Chargers Tested

HP Z640

The Librem 5 had problems with charging on a port on the HP ML110 Gen9 I was using as a workstation. I have sold the ML110 and can’t repeat that exact test, but I tested on the HP z640 that I use now. The z640 is a much better machine for this purpose (quieter, with better support for audio and other desktop features) and, unlike the ML110, is actually sold as a workstation.

The z640 documentation says that of the front USB ports the top one can do “fast charge (up to 1.5A)” with “USB Battery Charging Specification 1.2”. The only phone that would draw 1.5A on that port was the Librem 5 but the computer would only supply 4.4V at that current which is poor. For every phone I tested the bottom port on the front (which apparently doesn’t have USB-BC or USB-PD) charged at least as fast as the top port and every phone other than the OP6 charged faster on the bottom port. The Librem 5 also had the fastest charge rate on the bottom port. So the rumours about the Librem 5 being updated to address the charge speed on PC ports seem to be correct.

The Wikipedia page about USB Hardware says that the only way to get more than 1.5A from a USB port while operating within specifications is via USB-PD so as USB 3.0 ports the bottom 3 ports should be limited to 5V at 0.9A for 4.5W. The Librem 5 takes 2.0A and the voltage drops to 4.6V so that gives 9.2W. This shows that the z640 doesn’t correctly limit power output and the Librem 5 will also take considerably more power than the specs allow. It would be really interesting to get a powerful PSU and see how much power a Librem 5 will take without negotiating USB-PD and it would also be interesting to see what happens when you short circuit a USB port in a HP z640. But I recommend not doing such tests on hardware you plan to keep using!

Of the phones I tested the only one that was within specifications on the bottom port of the z640 was the OP6. I think that is more about it just charging slowly in every test than conforming to specs.

Monitor

The next test target is my 5120*2160 Kogan monitor with a USB-C port [4]. This worked quite well and apart from being a few percent slower on the PPP it outperformed the PC ports for every device due to using USB-PD (the only way to get more than 5V) and due to just having a more powerful PSU that doesn’t have a voltage drop when more than 1A is drawn.

Ali Charger

The Ali Charger, a device I bought from AliExpress, is a 240W GaN charger supporting multiple USB-PD devices. I tested with the top USB-C port that can supply 100W to laptops.

On the Ali charger the Librem 5’s charging repeatedly cuts out and it doesn’t charge properly. It’s also the only charger for which the Librem 5 requests a higher voltage than 5V, so it seems that the Librem 5 has some issues with USB-PD. It would be interesting to know why this problem happens, but I expect that a USB signal debugger is needed to find that out. On AliExpress, USB 2.0 sniffers go for about $50 each, and with a quick search I couldn’t see a USB 3.x or USB-C sniffer. So I’m not going to spend my own money on a sniffer, but if anyone in Melbourne Australia owns a sniffer and wants to visit me and try it out then let me know. I’ll also bring it to Everything Open 2026.

Generally the Ali charger was about the best charger from my collection apart from the case of the Librem 5.

Dell Dock

I got a number of free Dell WD15 (aka K17A) USB-C powered docks as they are obsolete. They have VGA ports among other connections, and the HDMI and DisplayPort ports don’t support resolutions higher than FullHD if both ports are in use, or 4K if a single port is in use. The resolutions aren’t directly relevant to the charging, but they do indicate the age of the design.

The Dell dock seems to not support any voltages other than 5V for phones and 19V (20V requested) for laptops. Certainly not the 9V requested by the Pixel 7 Pro and Pixel 8 phones. I wonder if not supporting most fast charging speeds for phones was part of the reason why other people didn’t want those docks and I got some for free. I hope that the newer Dell docks support 9V, a phone running Samsung Dex will display 4K output on a Dell dock and can productively use a keyboard and mouse. Getting equivalent functionality to Dex working properly on Debian phones is something I’m interested in.

Battery

The “Battery” I tested with is a Chinese battery for charging phones and laptops, it’s allegedly capable of 67W USB-PD supply but so far all I’ve seen it supply is 20V 2.5A for my laptop. I bought the 67W battery just in case I need it for other laptops in future, the Thinkpad X1 Carbon I’m using now will charge from a 30W battery.

There seems to be an overall trend of the most shonky devices giving the best charging speeds. Dell and HP make quality gear although my tests show that some HP ports exceed specs. Kogan doesn’t make monitors, they just put their brand on something cheap. Buying one of the cheapest chargers from AliExpress and one of the cheaper batteries from China I don’t expect the highest quality and I am slightly relieved to have done enough tests with both of those that a fire now seems extremely unlikely. But it seems that the battery is one of the fastest charging devices I own and with the exception of the Librem 5 (which charges slowly on all ports and unreliably on several ports) the Ali charger is also one of the fastest ones. The Kogan monitor isn’t far behind.

Conclusion

Voltage and Age

The Samsung Galaxy Note 9 was released in 2018 as was the OP6. The PPP was first released in 2022 and the Librem 5 was first released in 2020, but I think they are both at a similar technology level to the Note 9 and OP6 as the companies that specialise in phones have a pipeline for bringing new features to market.

The Pixel phones are newer and support USB-PD voltage selection, while the other phones either don’t support USB-PD or support it but only want 5V, apart from the Librem 5, which requests a higher voltage but draws a low current and repeatedly disconnects.

Idle Power

One of the major problems I had in the past, which prevented me from using a Debian phone as my daily driver, is the ratio of idle power use to charging power. Now that the phones seem to charge faster, they will be usable if I can get the idle power use under control.

Currently the Librem 5 running Trixie is using 6% CPU time (24% of a core) while idle with the screen off (but “Caffeine” mode is enabled so no deep sleep). On the PPP the CPU use varies between about 2% and 20% (12% to 120% of one core), mainly from plasmashell and kwin_wayland. The OP6 has idle CPU use a bit under 1% CPU time, which means a bit under 8% of one core.

The Librem 5 and PPP seem to have configuration issues with KDE Mobile and Pipewire that result in needless CPU use. With those issues addressed I might be able to make a Librem 5 or PPP a usable phone if I have a battery to charge it.

The OP6 is an interesting point of comparison as a Debian phone but is not a viable option as a daily driver due to problems with VoLTE and also some instability – it sometimes crashes or drops off Wifi.

The Librem 5 charges at 9.2W from a PC that doesn’t obey specs and 10W from a battery. That’s a reasonable charge rate, and the fact that it can request 12V (unsuccessfully) opens the possibility of higher charge rates in future. That could allow a reasonable ratio of charge time to use time.

The PPP has lower charging speeds than the Librem 5 but works more consistently, as there was no charger I found that wouldn’t work well with it. This is useful for the common case of charging from a random device in the office. But the fact that the Librem 5 takes 10W from the battery while the PPP only takes 6.3W would be an issue if using the phone while charging.

Now I know the charge rates for different scenarios I can work on getting the phones to use significantly less power than that on average.

Specifics for a Usable Phone

The 67W battery or something equivalent is something I think I will always need to have around when using a PPP or Librem 5 as a daily driver.

The ability to charge fast while at a desk is also an important criterion. The charge speed from my home PC is good in that regard and the charge speed from my monitor is even better. Getting something equivalent at a desk in an office I work in is a possibility.

Improving the Debian distribution for phones is necessary. That’s something I plan to work on although the code is complex and in many cases I’ll have to just file upstream bug reports.

I have also ordered a FuriLabs FLX1s [5] which I believe will be better in some ways. I will blog about it when it arrives.

Phone       | Top z640        | Bottom z640     | Monitor         | Ali Charger     | Dell Dock       | Battery         | Best            | Worst
Note9       | 4.8V 1.0A 5.2W  | 4.8V 1.6A 7.5W  | 4.9V 2.0A 9.5W  | 5.1V 1.9A 9.7W  | 4.8V 2.1A 10W   | 5.1V 2.1A 10W   | 5.1V 2.1A 10W   | 4.8V 1.0A 5.2W
Pixel 7 Pro | 4.9V 0.80A 4.2W | 4.8V 1.2A 5.9W  | 9.1V 1.3A 12W   | 9.1V 1.2A 11W   | 4.9V 1.8A 8.7W  | 9.0V 1.3A 12W   | 9.1V 1.3A 12W   | 4.9V 0.80A 4.2W
Pixel 8     | 4.7V 1.2A 5.4W  | 4.7V 1.5A 7.2W  | 8.9V 2.1A 19W   | 9.1V 2.7A 24W   | 4.8V 2.3A 11.0W | 9.1V 2.6A 24W   | 9.1V 2.7A 24W   | 4.7V 1.2A 5.4W
PPP         | 4.7V 1.2A 6.0W  | 4.8V 1.3A 6.8W  | 4.9V 1.4A 6.6W  | 5.0V 1.2A 5.8W  | 4.9V 1.4A 5.9W  | 5.1V 1.2A 6.3W  | 4.8V 1.3A 6.8W  | 5.0V 1.2A 5.8W
Librem 5    | 4.4V 1.5A 6.7W  | 4.6V 2.0A 9.2W  | 4.8V 2.4A 11.2W | 12V 0.48A 5.8W  | 5.0V 0.56A 2.7W | 5.1V 2.0A 10W   | 4.8V 2.4A 11.2W | 5.0V 0.56A 2.7W
OnePlus6    | 5.0V 0.51A 2.5W | 5.0V 0.50A 2.5W | 5.0V 0.81A 4.0W | 5.0V 0.75A 3.7W | 5.0V 0.77A 3.7W | 5.0V 0.77A 3.9W | 5.0V 0.81A 4.0W | 5.0V 0.50A 2.5W
Best        | 4.4V 1.5A 6.7W  | 4.6V 2.0A 9.2W  | 8.9V 2.1A 19W   | 9.1V 2.7A 24W   | 4.8V 2.3A 11.0W | 9.1V 2.6A 24W   |                 |

05 January, 2026 07:21AM by etbe

January 03, 2026

Joerg Jaspert

AI Shit, go away; iocaine to the rescue

As a lot of people do, I have some content that is reachable using webbrowsers. There is the password manager Vaultwarden, an instance of Immich, ForgeJo for some personal git repos, my blog and some other random pages here and there.

All of this had never been a problem: running a webserver is a relatively simple task, no matter if you use apache2, nginx or any of the other possibilities. And the things mentioned above bring their own daemon to serve the users.

AI crap

And then some idiot somewhere had the idea to ignore every law, every copyright and every normal behaviour and run some shit AI bot. And more idiots followed. And now we have more AI bots than humans generating traffic.

And those AI shit crawlers do not respect any limits. robots.txt, slow servers, anything to keep your meager little site up and alive? Them idiots throw more resources onto them to steal content. No sense at all.

iocaine to the rescue

So them AI bros want to ignore everything and just fetch the whole internet? Without any consideration if thats even wanted? Or legal? There are people who dislike this. I am one of them, but there are some who got annoyed enough to develop tools to fight the AI craziness. One of those tools is iocaine - it says about itself that it is The deadliest poison known to AI.

Feed AI bots sh*t

So you want content? You do not accept any Go away? Then here is content. It is crap, but apparently you don’t care. So have fun.

What iocaine does is (cite from their webpage) “not made for making the Crawlers go away. It is an aggressive defense mechanism that tries its best to take the brunt of the assault, serve them garbage, and keep them off of upstream resources”.

That is, instead of the expensive webapp using a lot of resources that are basically wasted for nothing, iocaine generates a small static page (with some links back to itself, so the crawler shit stays happy). Which takes a hell of a lot less resource than any fullblown app.

iocaine setup

The website has documentation at https://iocaine.madhouse-project.org/documentation/ and it is not hard to set up. Still, I had to adjust some things for my setup, as I use Caddy Docker Proxy (https://github.com/lucaslorentz/caddy-docker-proxy) nowadays and wanted to keep the config within the docker setup, that is, within the labels.

Caddy container

So my container setup for the caddy itself contains the following extra lines:

    labels:
      caddy_0.email: email@example.com
      caddy_1: (iocaine)
      caddy_1.0_@read: method GET HEAD
      caddy_1.1_reverse_proxy: "@read iocaine:42069"
      "caddy_1.1_reverse_proxy.@fallback": "status 421"
      caddy_1.1_reverse_proxy.handle_response: "@fallback"

This will be translated to the following Caddy config snippet:

(iocaine) {
        @read method GET HEAD
        reverse_proxy @read iocaine:42069 {
                @fallback status 421
                handle_response @fallback
        }
}

Any container that should be protected by iocaine

All the containers that are “behind” the Caddy reverse proxy can now get protected by iocaine with just one more line in their docker-compose.yaml. So now we have

   labels:
      caddy: service.example.com
      caddy.reverse_proxy: "{{upstreams 3000}}"
      caddy.import: iocaine

which translates to

service.example.com {
        import iocaine
        reverse_proxy 172.18.0.6:3000
}

So with one simple extra label for the docker container I have iocaine activated.

Result? ByeBye (most) AI Bots

Looking at the services that got hammered most from those crap bots - deploying this iocaine container and telling Caddy about it solved the problem for me. 98% of the requests from the bots now go to iocaine and no longer hog resources in the actual services.

I wish it wouldn’t be necessary to run such tools. But as long as we have shitheads doing the AI hype there is no hope. I wish they all would end up in jail for all their various stealing they do. And if someone with a little more brain left would set things up sensibly, then the AI thing could maybe turn out something good and useful.

But currently it is all crap.

03 January, 2026 01:23PM

Benjamin Mako Hill

Effects of Algorithmic Flagging on Fairness: Quasi-experimental Evidence from Wikipedia

Note: I have not published blog posts about my academic papers over the past few years. To ensure that my blog contains a more comprehensive record of my published papers and to surface these for folks who missed them, I will be periodically (re)publishing blog posts about some “older” published projects. This particular post is closely based on a previously published post by Nate TeBlunthuis from the Community Data Science Blog.

Many online platforms are adopting AI and machine learning as a tool to maintain order and high-quality information in the face of massive influxes of user-generated content. Of course, AI algorithms can be inaccurate, biased, or unfair. How do signals from AI predictions shape the fairness of online content moderation? How can we measure an algorithmic flagging system’s effects?

In our paper published at CSCW, Nate TeBlunthuis, together with myself and Aaron Halfaker, analyzed the RCFilters system: an add-on to Wikipedia that highlights and filters edits that a machine learning algorithm called ORES identifies as likely to be damaging to Wikipedia. This system has been deployed on large Wikipedia language editions and is similar to other algorithmic flagging systems that are becoming increasingly widespread. Our work measures the causal effect of being flagged in the RCFilters user interface.

Screenshot of Wikipedia edit metadata on Special:RecentChanges with RCFilters enabled. Highlighted edits with a colored circle to the left side of other metadata are flagged by ORES. Different circle and highlight colors (white, yellow, orange, and red in the figure) correspond to different levels of confidence that the edit is damaging. RCFilters does not specifically flag edits by new accounts or unregistered editors, but does support filtering changes by editor types.

Our work takes advantage of the fact that RCFilters, like many algorithmic flagging systems, create discontinuities in the relationship between the probability that a moderator should take action and whether a moderator actually does. This happens because the output of machine learning systems like ORES is typically a continuous score (in RCFilters, an estimated probability that a Wikipedia edit is damaging), while the flags (in RCFilters, the yellow, orange, or red highlights) are either on or off and are triggered when the score crosses some arbitrary threshold. As a result, edits slightly above the threshold are both more visible to moderators and appear more likely to be damaging than edits slightly below. Even though edits on either side of the threshold have virtually the same likelihood of truly being damaging, the flagged edits are substantially more likely to be reverted. This fact lets us use a method called regression discontinuity to make causal estimates of the effect of being flagged in RCFilters.

Charts showing the probability that an edit will be reverted as a function of ORES scores in the neighborhood of the discontinuous threshold that triggers the RCfilters flag. The jump in the increase in reversion chances is larger for registered editors compared to unregistered editors at both thresholds.

To understand how this system may affect the fairness of Wikipedia moderation, we estimate the effects of flagging on edits on different groups of editors. Comparing the magnitude of these estimates lets us measure how flagging is associated with several different definitions of fairness. Surprisingly, we found evidence that these flags improved fairness for categories of editors that have been widely perceived as troublesome—particularly unregistered (anonymous) editors. This occurred because flagging has a much stronger effect on edits by the registered than on edits by the unregistered.

We believe that our results are driven by the fact that algorithmic flags are especially helpful for finding damage that can’t be easily detected otherwise. Wikipedia moderators can see the editor’s registration status in the recent changes, watchlists, and edit history. Because unregistered editors are often troublesome, Wikipedia moderators’ attention is often focused on their contributions, with or without algorithmic flags. Algorithmic flags make damage by registered editors (in addition to unregistered editors) much more detectable to moderators and so help moderators focus on damage overall, not just damage by suspicious editors. As a result, the algorithmic flagging system decreases the bias that moderators have against unregistered editors.

This finding is particularly surprising because the ORES algorithm we analyzed was itself demonstrably biased against unregistered editors (i.e., the algorithm tended to greatly overestimate the probability that edits by these editors were damaging). Despite the fact that the algorithms were biased, their introduction could still lead to less biased outcomes overall.

Our work shows that although it is important to design predictive algorithms to avoid such biases, it is equally important to study fairness at the level of the broader sociotechnical system. Since we first published a preprint of our paper, a follow-up piece by Leijie Wang and Haiyi Zhu replicated much of our work and showed that differences between different Wikipedia communities may be another important factor driving the effect of the system. Overall, this work suggests that social signals and social context can interact with algorithmic signals, and together these can influence behavior in important and unexpected ways.


The full citation for the paper is: TeBlunthuis, Nathan, Benjamin Mako Hill, and Aaron Halfaker. 2021. “Effects of Algorithmic Flagging on Fairness: Quasi-Experimental Evidence from Wikipedia.” Proceedings of the ACM on Human-Computer Interaction 5 (CSCW): 56:1-56:27. https://doi.org/10.1145/3449130.

We have also released replication materials for the paper, including all the data and code used to conduct the analysis and compile the paper itself.

03 January, 2026 12:34PM by Benjamin Mako Hill

Russ Allbery

Review: Challenges of the Deeps

Review: Challenges of the Deeps, by Ryk E. Spoor

Series: Arenaverse #3
Publisher: Baen
Copyright: March 2017
ISBN: 1-62579-564-5
Format: Kindle
Pages: 438

Challenges of the Deeps is the third book in the throwback space opera Arenaverse series. It is a direct sequel to Spheres of Influence, but Spoor provides a substantial recap of the previous volumes for those who did not read the series in close succession (thank you!).

Ariane has stabilized humanity's position in the Arena with yet another improbable victory. (If this is a spoiler for previous volumes, so was telling you the genre of the book.) Now is a good opportunity to fulfill the promise humanity made to their ally Orphan: accompaniment on a journey into the uncharted deeps of the Arena for reasons that Orphan refuses to explain in advance. Her experienced crew provide multiple options to serve as acting Leader of Humanity until she gets back. What can go wrong?

The conceit of this series is that as soon as a species achieves warp drive technology, their ships are instead transported into the vast extradimensional structure of the Arena where a godlike entity controls the laws of nature and enforces a formal conflict resolution process that looks alternatingly like a sporting event, a dueling code, and technology-capped total war. Each inhabitable system in the real universe seems to correspond to an Arena sphere, but the space between them is breathable atmosphere filled with often-massive storms.

In other words, this is an airship adventure as written by E.E. "Doc" Smith. Sort of. There is an adventure, and there are a lot of airships (although they fight mostly like spaceships), but much of the action involves tense mental and physical sparring with a previously unknown Arena power with unclear motives.

My general experience with this series is that I find the Arena concept fascinating and want to read more about it, Spoor finds his much-less-original Hyperion Project in the backstory of the characters more fascinating and wants to write about that, and we reach a sort of indirect, grumbling (on my part) truce where I eagerly wait for more revelations about the Arena and roll my eyes at the Hyperion stuff. Talking about Hyperion in detail is probably a spoiler for at least the first book, but I will say that it's an excuse to embed versions of literary characters into the story and works about as well as most such excuses (not very). The characters in question are an E.E. "Doc" Smith mash-up, a Monkey King mash-up, and a number of other characters that are obviously references to something but for whom I lack enough hints to place (which is frustrating).

Thankfully we get far less human politics and a decent amount of Arena world-building in this installment. Hyperion plays a role, but mostly as foreshadowing for the next volume and the cause of a surprising interaction with Arena rules. One of the interesting wrinkles of this series is that humanity has an odd edge against the other civilizations in part because we're borderline insane sociopaths from the perspective of the established powers. That's an old science fiction trope, but I prefer it to the Campbell-style belief in inherent human superiority.

Old science fiction tropes are what you need to be in the mood for to enjoy this series. This is an unapologetic and intentional throwback to early pulp: individuals who can be trusted with the entire future of humanity because they're just that moral, super-science, psychic warfare, and even coruscating beams that would make E.E. "Doc" Smith proud. It's an occasionally glorious but mostly silly pile of technobabble, but Spoor takes advantage of the weird, constructed nature of the Arena to provide more complex rules than competitive superlatives.

The trick is that while this is certainly science fiction pulp, it's also a sort of isekai novel. There's a lot of anime and manga influence just beneath the surface. I'm not sure why it never occurred to me before reading this series that melodramatic anime and old SF pulps have substantial aesthetic overlap, but of course they do. I loved the Star Blazers translated anime that I watched as a kid precisely because it had the sort of dramatic set pieces that make the Lensman novels so much fun.

There is a bit too much Wu Kong in this book for me (although the character is growing on me a little), and some of the maneuvering around the mysterious new Arena actor drags on longer than was ideal, but the climax is great stuff if you're in the mood for dramatic pulp adventure. The politics do not bear close examination and the writing is serviceable at best, but something about this series is just fun. I liked this book much better than Spheres of Influence, although I wish Spoor would stop being so coy about the nature of the Arena and give us more substantial revelations. I'm also now tempted to re-read Lensman, which is probably a horrible idea. (Spoor leaves the sexism out of his modern pulp.)

If you got through Spheres of Influence with your curiosity about the Arena intact, consider this one when you're in the mood for modern pulp, although don't expect any huge revelations. It's not the best-written book, but it sits squarely in the center of a genre and mood that's otherwise a bit hard to find.

Followed by the Kickstarter-funded Shadows of Hyperion, which sadly looks like it's going to concentrate on the Hyperion Project again. I will probably pick that up... eventually.

Rating: 6 out of 10

03 January, 2026 05:23AM

Louis-Philippe Véronneau

2025 — A Musical Retrospective

2026 already! The winter weather here has really been beautiful and I always enjoy this time of year. Writing this yearly musical retrospective has now become a beloved tradition of mine1 and I enjoy retracing the year's various events through albums I listened to and concerts I went to.

Albums

In 2025, I added 141 new albums to my collection, around 60% more than last year's haul. I think this might have been too much? I feel like I didn't have time to properly enjoy all of them and as such, I decided to slow down my acquisition spree sometime in early December, around the time I normally do the complete opposite.

This year again, I bought the vast majority of my music on Bandcamp. Most of the other albums I bought as CDs and ripped them.

Concerts

In 2025, I went to the following 25 (!!) concerts:

  • January 17th: Uzu, Young Blades, She came to quit, Fever Visions
  • February 1st: Over the Hill, Jail, Mortier, Ain't Right
  • February 7th: Béton Armé, Mulchulation II, Ooz
  • February 15th: The Prowlers, Ultra Razzia, Sistema de Muerte, Trauma Bond
  • February 28th: Francbâtards
  • March 28th: Conflit Majeur, to Even Exist, Crachat
  • April 12th: Jetsam, Mortier, NIIVI, Canette
  • April 26th-27th (Montreal Oi! Fest 2025): The Buzzers, Bad Terms, Sons of Pride, Liberty and Justice, Flatfoot 56, The Beltones, Mortier, Street Code, The Stress, Alternate Action
  • May 1st: Bauxite, Atomic threat, the 351's
  • May 30th: Uzu, Tenaz, Extraña Humana, Sistema de muerte
  • June 7th: Ordures Ioniques, Tulaviok, Fucking Raymonds, Voyou
  • June 18th: Tiken Jah Fakoly
  • June 21st: Saïan Supa Celebration
  • June 26th: Taxi Girls, Death Proof, Laura Krieg
  • July 4th: Frente Cumbiero
  • July 12th: Montreal's Big Fiesta DJ Set
  • August 16th: Guerilla Poubelle
  • September 11th: No Suicide Act, Mortier
  • September 20th: Hors Contrôle, Union Thugs, Barricade Mentale
  • October 20th: Ezra Furman, The Golden Dregs
  • October 24th: Overbass, Hommage à Bérurier Noir, Self Control, Vermin Kaos
  • November 6th: Béton Armé, Faze, Slash Need, Chain Block
  • November 28th (Blood Moon Ritual 2025): Bhatt, Channeler, Pyrocene Death Cult, Masse d'Armes
  • December 13th (Stomp Records' 30th Anniversary Bash): The Planet Smashers, The Flatliners, Wine Lips, The Anti-Queens, Crash ton rock

Although I haven't touched metalfinder's code in a good while, my instance still works very well and I get the occasional match when a big-name artist in my collection comes to town. Most of the venues that advertise on Bandsintown are tied to Ticketmaster though, which means most underground artists (i.e. most of the music I listen to) end up playing elsewhere.

As such, shout out again to the Gancio project and to the folks running the Montreal instance. It continues to be a smash hit and most of the interesting concerts end up being advertised there.

See you all in 2026!


  1. see the 2022, 2023 and 2024 entries 

03 January, 2026 12:32AM by Louis-Philippe Véronneau

January 02, 2026

Joachim Breitner

Seemingly impossible programs in Lean

In 2007, Martin Escardo wrote an often-read blog post about “Seemingly impossible functional programs”. One such seemingly impossible function is find, which takes a predicate on infinite sequences of bits and returns an infinite sequence for which that predicate holds (unless the predicate is just always false, in which case it returns some arbitrary sequence).

Inspired by conversations with and experiments by Massin Guerdi at the dinner of LeaningIn 2025 in Berlin (yes, this blog post has been in my pipeline for far too long), I wanted to play around with these concepts in Lean.

Let’s represent infinite sequences of bits as functions from Nat to Bit, and give them a nice name, and some basic functionality, including a binary operator for consing an element to the front:

import Mathlib.Data.Nat.Find

abbrev Bit := Bool

def Cantor : Type := Nat → Bit

def Cantor.head (a : Cantor) : Bit := a 0

def Cantor.tail (a : Cantor) : Cantor := fun i => a (i + 1)

@[simp, grind] def Cantor.cons (x : Bit) (a : Cantor) : Cantor
  | 0 => x
  | i+1 => a i

infix:60 " # " => Cantor.cons

With this in place, we can write Escardo’s function in Lean. His blog post discusses a few variants; I’ll focus on just one of them:

mutual
  partial def forsome (p : Cantor → Bool) : Bool :=
    p (find p)

  partial def find (p : Cantor → Bool) : Cantor :=
    have b := forsome (fun a => p (true # a))
    (b # find (fun a => p (b # a)))
end

We define find together with forsome, which checks whether the predicate p holds for some sequence. Using forsome, find sets the first element of the result to true if there exists a sequence starting with true and to false otherwise, and then recursively finds the rest of the sequence.

It is a bit of a brain twister that this code works, but it does:

def fifth_false : Cantor → Bool := fun a => not (a 5)

/-- info: [true, true, true, true, true, false, true, true, true, true] -/
#guard_msgs in
#eval List.ofFn (fun (i : Fin 10) => find fifth_false i)

Of course, in Lean we don’t just want to define these functions, but we want to prove that they do what we expect them to do.

Above we defined them as partial functions, even though we hope that they are not actually partial: The partial keyword means that we don’t have to do a termination proof, but also that we cannot prove anything about these functions.

So can we convince Lean that these functions are total after all? We can, but it’s a bit of a puzzle, and we have to adjust the definitions.

First of all, these “seemingly impossible functions” are only possible because we assume that the predicate we pass to them, p, is computable and total. This is where the whole magic comes from, and I recommend reading Escardo’s blog posts and papers for more on this fascinating topic. In particular, you will learn that a predicate on Cantor that is computable and total necessarily only looks at some initial fragment of the sequence. The length of that prefix is called the “modulus”. So if we hope to prove termination of find and forsome, we have to restrict their argument p to only such computable predicates.

To that end I introduce HasModulus and the subtype of predicates on Cantor that have such a modulus:

-- Extensional (!) modulus of uniform continuity
def HasModulus (p : Cantor → α) := ∃ n, ∀ a b : Cantor, (∀ i < n, a i = b i) → p a = p b

@[ext] structure CantorPred where
  pred : Cantor → Bool
  hasModulus : HasModulus pred
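As a quick sanity check (my addition, not from the original post; the proof may need minor tweaking), the fifth_false predicate from above fits this structure: only index 5 matters, so a prefix of length 6 determines the result:

def fifth_false' : CantorPred where
  pred := fun a => not (a 5)
  -- Any two sequences agreeing on the first 6 elements agree at index 5.
  hasModulus := ⟨6, fun a b h => by simp [h 5 (by omega)]⟩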

The modulus of such a predicate is now the least prefix length that determines the predicate. In particular, if the modulus is zero, the predicate is constant:

namespace CantorPred

variable (p : CantorPred)

noncomputable def modulus : Nat :=
  open Classical in Nat.find p.hasModulus

theorem eq_of_modulus : ∀a b : Cantor, (∀ i < p.modulus, a i = b i) → p a = p b := by
  open Classical in
  unfold modulus
  exact Nat.find_spec p.hasModulus

theorem eq_of_modulus_eq_0 (hm : p.modulus = 0) : ∀ a b, p a = p b := by
  intro a b
  apply p.eq_of_modulus
  simp [hm]

Because we want to work with CantorPred and not Cantor → Bool, I have to define some operations on that new type; in particular, the “cons element before predicate” operation that we saw above in find:

def comp_cons (b : Bit) : CantorPred where
  pred := fun a => p (b # a)
  hasModulus := by
    obtain ⟨n, h_n⟩ := p.hasModulus
    cases n with
    | zero => exists 0; grind
    | succ m =>
      exists m
      intro a b heq
      simp
      apply h_n
      intro i hi
      cases i
      · rfl
      · grind

@[simp, grind =] theorem comp_cons_pred (x : Bit) (a : Cantor) :
  (p.comp_cons x) a = p (x # a) := rfl

For this operation we know that the modulus decreases (if it wasn’t already zero):

theorem comp_cons_modulus (x : Bit) :
    (p.comp_cons x).modulus ≤ p.modulus - 1 := by
  open Classical in
  apply Nat.find_le
  intro a b hab
  apply p.eq_of_modulus
  cases hh : p.modulus
  · simp
  · intro i hi
    cases i
    · grind
    · grind
grind_pattern comp_cons_modulus => (p.comp_cons x).modulus

We can rewrite the find function above to use these operations:

mutual
  partial def forsome (p : CantorPred) : Bool := p (find p)

  partial def find (p : CantorPred) : Cantor := fun i =>
    have b := forsome (p.comp_cons true)
    (b # find (p.comp_cons b)) i
end

I have also eta-expanded the Cantor function returned by find; there is now a fun i => … i around the body. We’ll shortly see why that is needed.

Now we have everything in place to attempt a termination proof. Before we do that proof, we could step back and try to come up with an informal termination argument.

  • The recursive call from forsome to find doesn’t decrease any argument at all. This is ok if all calls from find to forsome are decreasing.

  • The recursive call from find to find decreases the index i as the recursive call is behind the Cantor.cons operation that shifts the index. Good.

  • The recursive call from find to forsome decreases the modulus of the argument p, if it wasn’t already zero.

    But if it was zero, it does not decrease it! But if it is zero, then the call from forsome to find doesn’t actually need to call find, because then p doesn’t look at its argument.

We can express all this reasoning as a termination measure in the form of a lexicographic triple. The 0 and 1 in the middle component mean that for zero modulus, we can call forsome from find “for free”.

mutual
  def forsome (p : CantorPred) : Bool := p (find p)
  termination_by (p.modulus, if p.modulus = 0 then 0 else 1, 0)
  decreasing_by grind

  def find (p : CantorPred) : Cantor := fun i =>
    have b := forsome (p.comp_cons true)
    (b # find (p.comp_cons b)) i
  termination_by i => (p.modulus, if p.modulus = 0 then 1 else 0, i)
  decreasing_by all_goals grind
end

The termination proof doesn’t go through just yet: Lean is not able to see that (_ # a) i will call a with i - 1, and it does not see that p (find p) only uses find p if the modulus of p is non-zero. We can use the wf_preprocess feature to tell it about that:

The following theorem replaces a call to p f, where p is a function parameter, with the slightly more complex but provably equivalent expression on the right, where the call to f is now in the else branch of an if-then-else and thus has ¬p.modulus = 0 in scope:

@[wf_preprocess]
theorem coe_wf (p : CantorPred) :
    (wfParam p) f = p (if _ : p.modulus = 0 then fun _ => false else f) := by
  split
  next h => apply p.eq_of_modulus_eq_0 h
  next => rfl

And similarly we replace (_ # a) i with a variant that extends the context with information on how a is called:

def cantor_cons' (x : Bit) (i : Nat) (a : ∀ j, j + 1 = i → Bit) : Bit :=
  match i with
  | 0 => x
  | j + 1 => a j (by grind)

@[wf_preprocess] theorem cantor_cons_congr (b : Bit) (a : Cantor) (i : Nat) :
  (b # a) i = cantor_cons' b i (fun j _ => a j) := by cases i <;> rfl

After these declarations, the above definition of forsome and find goes through!

It now remains to prove that they do what they should, by a simple induction on the modulus of p:

@[simp, grind =] theorem tail_cons_eq (a : Cantor) : (x # a).tail = a := by
  funext i; simp [Cantor.tail, Cantor.cons]

@[simp, grind =] theorem head_cons_tail_eq (a : Cantor) : a.head # a.tail = a := by
  funext i; cases i <;> rfl

theorem find_correct (p : CantorPred) (h_exists : ∃ a, p a) : p (find p) := by
  by_cases h0 : p.modulus = 0
  · obtain ⟨a, h_a⟩ := h_exists
    rw [← h_a]
    apply p.eq_of_modulus_eq_0 h0
  · rw [find.eq_unfold, forsome.eq_unfold]
    dsimp -zeta
    extract_lets b
    change p (_ # _)
    by_cases htrue : ∃ a, p (true # a)
    next =>
      have := find_correct (p.comp_cons true) htrue
      grind
    next =>
      have : b = false := by grind
      clear_value b; subst b
      have hfalse : ∃ a, p (false # a) := by
        obtain ⟨a, h_a⟩ := h_exists
        cases h : a.head
        · exists Cantor.tail a
          grind
        · exfalso
          apply htrue
          exists Cantor.tail a
          grind
      clear h_exists
      exact find_correct (p.comp_cons false) hfalse
termination_by p.modulus
decreasing_by all_goals grind

theorem forsome_correct (p : CantorPred) :
    forsome p ↔ (∃ a, p a) where
  mp hfind := by unfold forsome at hfind; exists find p
  mpr hex := by unfold forsome; exact find_correct p hex

This is pretty nice! However there is more to do. For example, Escardo has a “massively faster” variant of find that we can implement as a partial function in Lean:

def findBit (p : Bit → Bool) : Bit :=
  if p false then false else true

def branch (x : Bit) (l r : Cantor) : Cantor :=
  fun n =>
    if n = 0      then x
    else if 2 ∣ n then r ((n - 2) / 2)
                  else l ((n - 1) / 2)
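
As a quick sanity check (my addition, not from the post, and assuming Bit is Bool as in the examples above), branch puts x first and then interleaves the two sequences:

/-- info: [true, false, true, false, true, false, true] -/
#guard_msgs in
#eval List.ofFn (fun (i : Fin 7) => branch true (fun _ => false) (fun _ => true) i)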

mutual
  partial def forsome (p : Cantor -> Bool) : Bool :=
    p (find p)

  partial def find (p : Cantor -> Bool) : Cantor :=
    let x := findBit (fun x => forsome (fun l => forsome (fun r => p (branch x l r))))
    let l := find (fun l => forsome (fun r => p (branch x l r)))
    let r := find (fun r => p (branch x l r))
    branch x l r
end

But can we get this past Lean’s termination checker? In order to prove that the modulus of p is decreasing, we’d have to know that, for example, find (fun r => p (branch x l r)) is behaving nicely. Unfortunately, it is rather hard to do a termination proof for a function that relies on the behaviour of the function itself.

So I’ll leave this open as a future exercise.

I have dumped the code for this post at https://github.com/nomeata/lean-cantor.

02 January, 2026 02:30PM by Joachim Breitner (mail@joachim-breitner.de)

Steinar H. Gunderson

Rewriting Git merge history, part 1

I remember that when Git was new and hip (around 2005), one of the supposed advantages was that “merging is so great!”. Well, to be honest, the competition at the time (mostly CVS and Subversion) wasn't fantastic, so I guess it was a huge improvement, but it's still… problematic. And this is even more visible when trying to rewrite history.

The case in question was that I needed to bring Stockfish's cluster (MPI) branch up to date with master, which nobody had done for a year and a half because there had been a number of sort-of tricky internal refactorings that caused merge conflicts. I fairly quickly realized that just doing “git merge master” would create a huge mess of unrelated conflicts that would be impossible to review and bisect, so I settled on a different strategy: Take one conflict at a time.

So I basically merged up as far as I could without any conflicts (essentially by bisecting), noted that as a merge commit, then merged one conflicting commit, noted that as another merge (with commit notes if the merge was nontrivial, e.g., if it required new code or a new approach), and then repeated. Git doesn't seem to have any kind of native support for this flow; I did it manually at first, and only later realized that there were so many segments (20+) that I should write a script to get everything consistent. Notably, this approach means that a merge commit can have significant new code that was not in either parent. (Git's data model does support this, because a commit is just a list of zero or more parent commits and then the contents of the entire tree; git show does a diff on-the-fly, and object deduplication and compression make this work without ballooning the size. But it is still surprising to those who don't do a lot of merges.)
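In shell, that loop might look roughly like this (my sketch, not the author's actual script; find_last_clean_merge, a helper that would bisect master's first-parent history for the newest commit that still merges cleanly, is hypothetical):

# Upmerge one conflict at a time until master is fully merged in.
while ! git merge-base --is-ancestor master HEAD; do
    # Merge as far as we can get without conflicts...
    clean=$(find_last_clean_merge master)   # hypothetical helper
    [ -n "$clean" ] && git merge --no-edit "$clean"
    # ...then merge the first conflicting commit on its own.
    next=$(git rev-list --first-parent HEAD..master | tail -n 1)
    if ! git merge "$next"; then
        # Resolve conflicts by hand, then record the merge commit.
        git add -A
        git commit --no-edit
    fi
done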

That's where the nice parts ended, and the problems began. (Even ignoring that a conflict-free merge could break the compile, of course.) Because I realized that while I had merged everything, it wasn't actually done; the MPI support didn't even compile, for one, and once I had fixed that, I realized that I wanted to fix typos in commit messages, fix bugs pointed out to me by reviewers, and so on. In short, I wanted to rewrite history. And that's not where Git shines.

Everyone who works with a patch-based review flow (as opposed to having a throwaway branch per feature with lots of commits like “answer review comments #13” and then squash-merging it or similar) will know that git's basic answer to this is git rebase. rebase essentially sets up a script of the commits you've done, then executes that script (potentially at a different starting point, so you could get conflicts). Interactive rebase simply lets you edit that script in various ways, so that you can e.g. modify a commit message on the way, or take out a commit, or (more interestingly) make changes to a commit before continuing.

However, when merges are involved, regular interactive rebase just breaks down completely. It assumes that you don't really want merges; you just want a nice linear series of commits. And that's nice, except that in this case, I wanted the merges because the entire point was to upmerge. So then I needed to invoke git rebase --rebase-merges, which turns the script language into one that's subtly different and vastly more complicated (it basically sets up a list of ephemeral branches as “labels” to specify the trees that are merged into the various merge commits). And this is fine—until you want to edit that script.

In particular, let's take a fairly trivial change: Modifying a commit message. The merge command in the rebase script takes in a commit hash that's only used for the commit message and nothing else (the contents of the tree are ignored), and you can choose to either use a different hash or modify the message in an editor after-the-fact. And you can try to do this, but… then you get a merge conflict later in the rebase. What?

It turns out that git has native machinery for remembering conflict resolutions. It basically remembers that you tried to merge commits A and B and ended up committing C (possibly after manual conflict resolution); so any merge of A and B will cause git to look that up and just use C. But that's not what really happened; since you modified the commit message of A (or even just its commit date), it changed its hash and became A', and now you're trying to merge A' and B, for which git has no conflict resolution remembered, and you're back to square one and have to do the resolution yourself. I had assumed that this mechanism remembered how to merge trees, but evidently it's keyed on entire commits.

But wait, I hear you say; the solution for this is git-rerere! rerere exists precisely for this purpose; it remembers conflict resolutions you've done before and tries to reapply them. It only remembers merge conflicts you did when rerere was actually active, but there's a contrib script to “learn” from before that time, which works OK. So I tried to run the learn script and run the rebase… and it stopped with a merge conflict. You see, git rerere doesn't stop the conflicts, it just resolves them and then you have to continue the rebase yourself from the shell as usual. So I did that 20+ times (I can tell you, this gets tedious real quick)… and ended up with a different result. The tree simply wasn't the same as before the merge, even though I had only changed a commit message.

See, the problem is that rerere remembers conflicts, not merges. It has to, in order to reach its goal of being able to reapply conflict resolutions even if other parts of the file have changed. (Otherwise, it would be only marginally more useful than git's existing native support, which we discussed earlier.) But in this case, two or more conflicts in the rebase looked too similar to each other, yet needed different resolutions. So it picked the wrong resolution and ended up with a silent mismerge. And there's no way to guide it towards which one should apply when, so rerere was also out of the question.

This post is already long enough as it is; next time, we'll discuss the (horrible) workaround I used to actually (mostly) solve the problem.

02 January, 2026 09:50AM

Birger Schacht

Status update, December 2025

December 2025 started off with a nice event, namely a small gathering of Vienna based DDs. Some of us were at DebConf25 in Brest and we thought it might be nice to have a get-together of DDs in Vienna. A couple of months after DebConf25 I picked up the idea, let someone else ping the DDs, booked a table at a local cafe and in the end we were a group of 6 DDs. It was nice to put faces to names, names to nicknames and to hear what people are up to. We are definitely planning to repeat that!

December also ended with a meeting of nerds: the 39th Chaos Communication Congress in Hamburg. As usual, I did not really have that much time to watch many talks. I tend to bookmark a lot of them in the scheduling app in advance, but once I’m at the congress the social aspect is much more important, and I try to only attend workshops or talks that are not recorded. Watching the recordings afterward is possible anyway (and I actually try to do that!).

There was also a Debian Developers meetup on day 3, combined with the usual time confusion regarding UTC and CET. We talked about having a Debian table at 40c3, so maybe the timezone won’t be that much of a problem next time.

Two talks I recommend are CSS Clicker Training: Making games in a “styling” language and To sign or not to sign: Practical vulnerabilities in GPG & friends.

Regarding package uploads, not much happened this month; I only uploaded the new version (0.9.3) of labwc.

I created two new releases for carl. First a 0.5 release that adds Today and SpecifiedDate as properties. I forwarded an issue about dates not being parsed correctly to the icalendar issue tracker and this was fixed a couple of days later (thanks!). I then created a 0.5.1 release containing that fix. I also started planning to move the carl repository back to codeberg, because Github feels more and more like an AI Slop platform.

The work on debiverse also continued. I removed the tailwind CSS framework, and it was actually not that hard to reproduce all the needed CSS classes with custom CSS. I think that CSS frameworks make sense up to a point, but once you start implementing stuff that the framework does not provide, it is easier if everything comes out of one set of rules. There was also the article Vanilla CSS is all you need, which goes in the same direction and gave me some ideas on how to organize the CSS directives.

I also refactored the filter generation for the listing filters and the HTML filter form is now generated from the FastAPI Query Parameter Model.

Screenshot of the filter form

For navigation I implemented a sidebar that is hidden on small screens but can be toggled using a burger menu.

Screenshot of the navigation bar

I also stumbled upon An uncomfortable but necessary discussion about the Debian bug tracker, which raises some valid points. I think debiverse could be a solution to the first point of “What could be a way forward?”, namely: “Create a new web service that parses the existing bug data and displays it in a “rich” format”.

But if there is ever a way other than email to interact with bugs.debian.org, then this approach should not rely on passing the commands along via mail. If I click a button in a web interface to raise the severity, the severity should be raised right away - not 10 minutes later when the email is received. I think the individual parts (web, database, mail interface) should be decoupled and talk to each other via APIs.

02 January, 2026 05:28AM

January 01, 2026

Dima Kogan

Using libpython3 without linking it in; and old Python, g++ compatibility patches

I just released mrcal 2.5; much more about that in a future post. Here, I'd like to talk about some implementation details.

libpython3 and linking

mrcal is a C library and a Python library. Much of mrcal itself bridges the C and Python libraries. And it is common for external libraries to want to pass Python mrcal.cameramodel objects to their C code. The obvious way to do this is in a converter function in an O& argument to PyArg_ParseTupleAndKeywords(). I wrote this mrcal_cameramodel_converter() function, which opened a whole can of worms when thinking about how to compile, link, and distribute this thing.

mrcal_cameramodel_converter() is meant to be called by code that implements Python-wrapping of C code. This function will be called by the PyArg_ParseTupleAndKeywords() Python library function, and it uses the Python C API itself. Since it uses the Python C API, it would normally link against libpython. However:

  • The natural place to distribute this is in libmrcal.so, but this library doesn't touch Python, and I'd rather not pull in all of libpython for this utility function, even in the 99% case when that function won't even be called
  • In some cases linking to libpython actually breaks things, so I never do that anymore anyway. This is fine: since this code will only ever be called by libpython itself, we're guaranteed that libpython will already be loaded, and we don't need to ask for it.

OK, let's not link to libpython then. But if we do that, we're going to have unresolved references to our libpython calls, and the loader will complain when loading libmrcal.so, even if we're not actually calling those functions. This has an obvious solution: the references to the libpython calls should be marked weak. That won't generate unresolved-reference errors, and everything will be great.

OK, how do we mark things weak? There are two usual methods:

  1. We mark the declaration (or definition?) of the relevant functions with __attribute__((weak))
  2. We weaken the symbols after the compile with objcopy --weaken.

Method 1 is more work: I don't want to keep track of which Python API calls I'm actually making. This is non-trivial, because some of the Py_...() invocations in my code are actually macros that internally call functions that I must weaken. Furthermore, all the functions are declared in Python.h, which I don't control. I can re-declare stuff with __attribute__((weak)), but then I have to match the prototypes. And I have to hope that re-declaring these will make __attribute__((weak)) actually work.

So clearly I want method 2. I implemented it:

python-cameramodel-converter.o: %.o:%.c
        $(c_build_rule); mv $@ _$@
        $(OBJCOPY) --wildcard --weaken-symbol='Py*' --weaken-symbol='_Py*' _$@ $@

Works great on my machine! But it doesn't work on other people's machines, because only the most recent objcopy actually handles weakening references. Apparently the older tools only weaken definitions, which isn't useful to me; the tool only started handling references very recently.

Well that sucks. I guess I will need to mark the symbols with __attribute__((weak)) after all. I use the nm tool to find the symbols that should be weakened, and I apply the attribute with this macro:

#define WEAKEN(f) extern __typeof__(f) f __attribute__((weak));
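
As a sketch of how this gets used (my addition, not mrcal's actual code; PyLong_FromLong stands in for whatever libpython functions the real code references):

#include <Python.h>

#define WEAKEN(f) extern __typeof__(f) f __attribute__((weak));

// Re-declare the libpython function as weak; __typeof__ copies the
// prototype from the declaration in Python.h.
WEAKEN(PyLong_FromLong)

// A weak undefined symbol resolves to NULL when libpython isn't loaded,
// so it can even be tested at runtime. mrcal doesn't need this test,
// since its converter is only ever called from inside libpython.
PyObject* maybe_box(long x)
{
    if (&PyLong_FromLong == NULL)
        return NULL;  // libpython not loaded
    return PyLong_FromLong(x);
}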

The prototypes are handled by __typeof__. So are we done? With gcc, we are done. With clang, we are not. Apparently this macro does not weaken symbols generated by inline function calls when using clang; I have no idea if this is a bug. The Python internal machinery has some of these, so this doesn't weaken all the symbols. I give up on the people that both have a too-old objcopy and are using clang, and declare victory. So the logic ends up being:

  1. Compile
  2. objcopy --weaken
  3. nm to find the non-weak Python references
  4. If there aren't any, our objcopy call worked and we're done!
  5. Otherwise, compile again, but explicitly asking to weaken those symbols
  6. nm again to see if the compiler didn't do it
  7. If any non-weak references still remain, complain and give up.

Whew. This logic appears here and here. There were even more things to deal with here: calling nm and objcopy needed special attention and build-system support in case we were cross-building. I took care of it in mrbuild.

This worked for a while. Until the converter code started to fail. Because ….

Supporting old Python

…. I was using PyTuple_GET_ITEM(). This is a macro to access PyTupleObject data. So the layout of PyTupleObject ended up encoded in libmrcal.so. But apparently this wasn't stable, and changed between Python3.13 and Python3.14. As described above, I'm not linking to libpython, so there's no NEEDED tag to make sure we pull in the right version. The solution was to call the PyTuple_GetItem() function instead. This is unsatisfying, and means that in theory other stuff here might stop working in some Python 3.future, but I'm ready to move on for now.
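
The difference, roughly, as a sketch (my addition, not mrcal's code):

#include <Python.h>

// PyTuple_GET_ITEM pokes directly into PyTupleObject, so the struct
// layout at build time gets baked into our .so:
static PyObject* first_fragile(PyObject* t)
{
    return PyTuple_GET_ITEM(t, 0);
}

// PyTuple_GetItem() is a real libpython function, so the field access
// happens inside whichever libpython is loaded at runtime:
static PyObject* first_stable(PyObject* t)
{
    return PyTuple_GetItem(t, 0);  // borrowed reference
}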

There were other annoying gymnastics that had to be performed to make this work with old-but-not-super old tooling.

The Python people deprecated PyModule_AddObject(), and added PyModule_Add() as a replacement. I want to support Pythons before and after this happened, so I needed some if statements. Today the old function still works, but eventually it will stop, and I will have needed to do this typing sooner or later.
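
A compatibility shim might look like this (my sketch, assuming PyModule_Add() first appeared in CPython 3.13):

#include <Python.h>

// Add obj to module m under name, stealing the reference to obj in all
// cases, on Pythons before and after PyModule_Add() was introduced.
static int module_add(PyObject* m, const char* name, PyObject* obj)
{
#if PY_VERSION_HEX >= 0x030d0000
    return PyModule_Add(m, name, obj);  // steals obj, even on failure
#else
    if (PyModule_AddObject(m, name, obj) < 0)
    {
        Py_XDECREF(obj);  // the old API steals obj only on success
        return -1;
    }
    return 0;
#endif
}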

Supporting old C++ compilers

mrcal is a C project, but it is common for people to want to #include the headers from C++. I widely use C99 designated initializers (27 years old in C!), which causes issues with not-very-old C++ compilers. I worked around this initialization in one spot, and disabled a feature for a too-old compiler in another spot. Fortunately, semi-recent tooling supports my usages, so this is becoming a non-issue as time goes on.
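
For reference, the kind of initializer involved (my sketch, not mrcal's code): C99 allows designated initializers in any order, while C++ rejected them entirely until C++20, which only allows them in declaration order:

typedef struct { int x, y, z; } point_t;

point_t a = { .x = 1, .z = 3 };  // valid C99; also valid C++20 (in order)
point_t b = { .z = 3, .x = 1 };  // valid C99; ill-formed C++ (out of order)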

01 January, 2026 09:52PM by Dima Kogan

Russ Allbery

2025 Book Reading in Review

In 2025, I finished and reviewed 32 books, not counting another five books I've finished but not yet reviewed and which will therefore roll over to 2026.

This was not a great reading year, although not my worst reading year since I started keeping track. I'm not entirely sure why, although part of the explanation was that I hit a bad stretch of books in spring of 2025 and got into a bit of a reading slump. Mostly, though, I shifted a lot of reading this year to short non-fiction (newsletters and doom-scrolling) and spent rather more time than I intended watching YouTube videos, and sadly each hour in the day can only be allocated one way.

This year felt a bit like a holding pattern. I have some hopes of being more proactive and intentional in 2026. I'm still working on finding a good balance between all of my hobbies and the enjoyment of mindless entertainment.

The best book I read this year was also the last book I reviewed (and yes, I snuck the review under the wire for that reason): Bethany Jacobs's This Brutal Moon, the conclusion of the Kindom Trilogy that started with These Burning Stars. I thought the first two books of the series were interesting but flawed, but the conclusion blew me away and improved the entire trilogy in retrospect. Like all books I rate 10 out of 10, I'm sure a large part of my reaction is idiosyncratic, but two friends of mine also loved the conclusion so it's not just me.

The stand-out non-fiction book of the year was Rory Stewart's Politics on the Edge. I have a lot of disagreements with Stewart's political positions (the more I listen to him, the more disagreements I find), but he is an excellent memoirist who skewers the banality, superficiality, and contempt for competence that have become so prevalent in centrist and right-wing politics. It's hard not to read this book and despair of electoralism and the current structures of government, but it's bracing to know that even some people I disagree with believe in the value of expertise.

I also finished Suzanne Palmer's excellent Finder Chronicles series, reading The Scavenger Door and Ghostdrift. This series is some of the best science fiction I've read in a long time and I'm sad it is over (at least for now). Palmer has a new, unrelated book coming in 2026 (Ode to the Half-Broken), and I'm looking forward to reading that.

This year, I experimented with re-reading books I had already reviewed for the first time since I started writing reviews. After my reading slump, I felt like revisiting something I knew I liked, and therefore re-read C.J. Cherryh's Cyteen and Regenesis. Cyteen mostly held up, but Regenesis was worse than I had remembered. I experimented with a way to add on to my previous reviews, but I didn't like the results and the whole process of re-reading and re-reviewing annoyed me. I'm counting this as a failed experiment, which means I've still not solved the problem of how to revisit series that I read long enough ago that I want to re-read them before picking up the new book. (You may have noticed that I've not read the new Jacqueline Carey Kushiel novel, for example.)

You may have also noticed that I didn't start a new series re-read, or continue my semi-in-progress re-reads of Mercedes Lackey or David Eddings. I have tentative plans to kick off a new series re-read in 2026, but I'm not ready to commit to that yet.

As always, I have no firm numeric goals for the next year, but I hope to avoid another reading slump and drag my reading attention back from lower-quality and mostly-depressing material in 2026.

The full analysis includes some additional personal reading statistics, probably only of interest to me.

01 January, 2026 09:12PM

Swiss JuristGate

Stardust to Crans-Montana, Le Constellation: cover-up already under way

The author of these reports is Daniel Pocock, a member of Engineers Australia, Engineers Ireland and an elected representative of the Free Software Fellowship.

Mr Pocock is a citizen of Australia, Ireland and Switzerland.

Mr Pocock was granted Swiss citizenship in the Canton of Vaud, which is adjacent to the bilingual Canton of Valais where the tragedy occurred.

Valais can be thought of as the New Hampshire of Switzerland, a region where businessmen enjoy minimal regulation.

Valais is the third largest canton but it only has a population of 350,000 people. The population is relatively wealthy by European standards but on the other hand, they don't have the depth and breadth of skills that can be found in more populated cantons like Zurich and Geneva.

In Switzerland, each Canton operates with significant autonomy and the federal government has a rather insignificant role in comparison to other countries.

While standards in construction are agreed at a national level, it is the responsibility of each canton to enforce the standards in local premises.

Pocock's first blog after the fire found evidence the night club's owner had done the internal fit-out himself back in 2016:

(Translated to English) It was love at first sight! This Corsican businessman decided to open a business. Conveniently, Le Constellation, in the center of Crans, was for sale. He had to wait until June 2015 to sign the agreement. For six months, Jacques Moretti rolled up his sleeves and completely renovated the establishment. "I did almost everything myself. Look at these walls, there are 14 tons of dry-stone masonry, and the stones come from Saint-Léonard!" Since December 2015, Le Constellation has been showcasing Corsican products: cured meats, wines, beers, myrtle liqueur, and even chestnut-flavored whisky. "But mind you, I'm also keen to feature local Valais products. You have some excellent wines here; it's a pleasure to serve them to my customers." The Corsican admits he feels very much at home here. "You know, we're alike. We're both mountain people at heart. Stubborn, perhaps, but above all, very kind."

French TV channel BFMTV subsequently published statements from witnesses. The statements were repeated by Le Temps in Geneva:

The Hypothesis of a flame on a bottle

The testimony of two French women, gathered by BFMTV, is striking. The two witnesses, who were in the bar that night, recount the panic that seized the customers of Le Constellation and the stampede triggered by the fire.

They mention the use of "candles" placed in champagne bottles. One of them was reportedly held close to the ceiling. "In a few dozen seconds, the entire ceiling was on fire. Everything was made of wood," the two customers recount. They describe how a loud order had been given and a female member of the staff climbed onto a male colleague's shoulders. The fire then spread up to the ground floor of the establishment, they say. Panicked, the customers tried to escape through the exit door. "It was quite small compared to the number of people present. Someone broke a window so that people could get out," one of them adds.

The Canton of Valais shares borders with France and Italy. Shortly before the fire, the new French ambassador was presented to Switzerland in a ceremony in Valais, at the Opal Foundation, the very same location where the local authorities conducted their media briefing after the fire.

Marion Paradas, Clément Leclerc, Mathias Reynard, Franz Ruppen

 

On the morning after the fire, four local Swiss officials attended the media briefing. News reports emphasized their roles: the president of the canton, the minister for security, the attorney general and the chief of police. All except the last, the police chief, are political appointments.

By coincidence, three of these four people live in immediate proximity to Crans, where the fire occurred, and Lens, where the business owner has another restaurant, Le Vieux Chalet 1978. People need to ask how many times they visited these businesses personally. How well did they know the proprietor and staff?

The attorney general's husband, Francois Pilloud, operates a local winery. In a small canton like Valais, these families have a common interest in promoting the canton as a destination for tourists. The night club owner told the media that he likes to serve wines from local producers in Valais.

During the course of the rescue operation, authorities made their own videos of the police and firemen at work. In the morning, they published a two minute montage of videos emphasizing all the equipment and people mobilized to respond to the crisis.

However, the video is uncomfortable to watch. It feels like a public relations exercise was under way even before the bodies started to be removed and counted. The video showcases all the expensive equipment possessed by canton Valais but the video doesn't tell us anything about their competence for building inspections.

In July 2024, scaffolding collapsed outside a shopping center in Lausanne, canton Vaud. People died. There were international news reports about the tragedy. Searching online today, I couldn't find any evidence that anybody ever conducted a public inquiry or published a report.

In October 2022, a massive construction crane collapsed beside the University of Lausanne (UNIL). The crane collapsed onto the concrete foundations, sending a shockwave through the region. Many people who felt the shockwave feared an accident in the EPFL nuclear reactor. The authorities eventually told us that only one person died. Once again, I can find no further reports about inquiries or public reports about the accident. The accident occurred in Chavannes-près-Renens, the same commune where I was naturalized as a Swiss citizen. I was in the region on the very day of the accident.

Remember the referendum on corporate accountability (Responsible business initiative)? Canton Valais was one of the Cantons that voted No to accountability.

Look at how three of the four senior officials managing the crisis are personally domiciled in the immediate neighbourhood of the nightclub. In a canton with such a small population, it is very likely that some or all of these officials have frequented the businesses owned by Jacques Moretti.

Sion, Crans-Montana, Valais

 

Mathias Reynard is the president of the canton Valais. His official profile tells us that he is domiciled in Savièse, which is adjacent to Lens and Crans. He is a member of the Parti socialiste.

Stéphane Ganzer is the canton's minister for security. His official profile tells us he is domiciled in Veyras, which is halfway up the hill going to the Crans-Montana resort. Ganzer has twenty years' experience as a fireman, and in the circumstances this is more valuable than his job as a politician. He is a member of the PLR.Les Libéraux-Radicaux political party.

Frédéric Gisler is the police chief for canton Valais. A public announcement about his appointment gives us details about his career history. The report notes that he has worked as a police inspector, as a prosecutor and as a greffier. In a previous report about corruption, we demonstrated how a greffier is able to exercise the powers of a judge. A prosecutor exercising judge-like powers is a huge conflict of interest. The previous report made the remarkable revelation that Mathieu Parreaux, founder of the illegal legal insurance scheme, had worked as the greffier, similar to a judge, in the tribunal of Monthey. Coincidentally, Monthey is also in the Canton of Valais and Gisler is from Vernayaz, which is much closer to Monthey than the other individuals mentioned here.

Béatrice Pilloud is the attorney general or chief prosecutor for the canton. Her declaration of interests tells us she is a member of the PLR.Les Libéraux-Radicaux political party, that is, the same party as the canton's security minister. Is it appropriate for both of these roles to be linked politically? Her husband is Francois Pilloud. He is a co-owner of the PaP Vins winery and tourism business.

Béatrice Pilloud spent more than twenty years working as a criminal defence lawyer before she decided to work as the attorney general. Therefore, she has switched sides. Is this fair to all the clients she represented and defended over the years?

When people have all these powers and conflicts of interest, it is easy to see how they could use their powers to either cover something up, to punish whistleblowers or to designate somebody as a scapegoat and deceive the public about who is really at fault.

Over 100 survivors have suffered severe and critical injuries from burns and the inhalation of smoke. The crisis teams in three of Switzerland's biggest hospitals are fully occupied treating these unexpected casualties on a public holiday. Yet the Canton of Valais has told visitors in other ski resorts they can continue to ski as long as they don't have accidents.

The earlier blog about the subject identifies Jacques Moretti and his spouse Jessica Maric as co-proprietors/founders of the business where the tragedy occurred.

Ireland suffered a similar tragedy, the Stardust nightclub fire in 1981. Forty eight people died. Forty five years later, the families are still waiting for answers. As a citizen of both Ireland and Switzerland, it bothers me that the same lightning has struck two countries.

The JuristGate investigation found much the same phenomena. FINMA never gave any information to people who purchased the illegal legal insurance. They only published a redacted and anonymized version of their judgment six months later. The JuristGate web site has unredacted it so people can see how Switzerland covers up crime and incompetence.

It is entirely possible that people have committed suicide due to the financial and insurance crisis in Switzerland. These deaths are every bit as bad as the deaths in Le Constellation.

When people like Mr Pocock try to offer professional advice, for example, after the Debian suicide cluster, they are subject to public humiliation and threats of violence (recorded). The lawyers, politicians and small business owners are a group of kissing-cousins. Protecting the reputations and business interests of their families and friends is inextricably intertwined with covering up all the people who failed to prevent this disaster.

Look at the mayor of Basel, Diana von Bidder-Senn. They tell us that she has a PhD in cybersecurity from ETH Zurich but she didn't realize that her own husband was under the influence of social engineering. Can there be any more extreme example of social engineering than a victim who dies by suicide? IBM's annual Cost of a Data Breach report regularly concludes that social engineering is the number one risk for their clients.

JuristGate reports have been published in the hope that Swiss authorities can raise their standards in relation to both cybersecurity and fire safety.

Please see the rest of the JuristGate reports.

01 January, 2026 04:00PM

Crans-Montana, Le Constellation: journalists, victims' families, ProtonMail users at risk, police raids

A news report appeared about the Geneva airport setting up a zone to welcome family members coming to search for missing and injured victims of the tragic fire at Le Constellation in Crans-Montana, Valais.

My first thought upon reading this report is that it could be a trap, or at the very least, part of a public relations exercise to divert these family members from contact with the journalists and lawyers waiting to greet them.

Incidentally, a lot of journalists are now using ProtonMail, which is based in Switzerland and subject to surveillance by Swiss police. Remember how Arjen Kamphuis disappeared.

My fears about the "welcome zone" for families are well founded: this is exactly what they did to victims of Parreaux, Thiebaud & Partners, the JuristGate scandal.

In April 2023, the Swiss financial regulator, FINMA, made a judgment shutting down the illegal legal insurance scheme. They did not publish the judgment, they did not inform the clients and they did not offer any compensation to the clients whatsoever. To minimize media attention on the ponzi scheme run by Swiss jurists, they issued the judgment simultaneously with the much bigger Credit Suisse crisis.

In September 2023, as the FINMA director was resigning for health reasons, FINMA published a heavily redacted document about an enforcement action that had taken place at some unspecified moment during his tenure. Despite the effort to redact the document, they had inadvertently included an acronym in the filename allowing us to prove the link to JuristGate.

Within days, Daniel Pocock published his first damning exposé on the cross-border scandal.

It looks like at least one of the former employees of the illegal legal insurance company was given help to find a new job so that she would stay quiet about the scandal.

FINMA has never offered any compensation to Mr Pocock and his family.

On 1 November 2023, Mr Pocock was granted Swiss citizenship in the Canton of Vaud.

Shortly after the publication, Mr Pocock began receiving increasingly nasty threats of violence. Mr Pocock recorded people coming to his home uninvited and making threats.

Here is one of the threats from a lawyerist:

Subject: Justcia SA
Date: Wed, 13 Dec 2023 17:59:27 +0100
From: Joao Rodrigues Boleiro <joaoboleiro@hotmail.com>
To: daniel@pocock.pro

Monsieur,

Je viens de voir que vous me citez dans votre blog sous le nom de João Boleiro et sans avoir obtenu mon autorisation.

Je ne veux pas mêler aux affaires de M. Parreaux. Mon activité en qualité de juriste, titulaire du brevet d’avocat, s’est terminée avec la liquidation de la société Justicia SA.

Dès lors, je vous serai gré de retirer immédiatement mon nom de votre blog, faute de quoi je dépose une plainte pénale à votre encontre.

Cordialement,

Joao RODRIGUES
Chemin De-Maisonneuve, 5
1219 Châtelaine

(Translated to English) Sir, I have just seen that you cite me in your blog under the name João Boleiro, without having obtained my authorization. I do not wish to be mixed up in the affairs of Mr Parreaux. My work as a jurist, holder of the lawyer's licence (brevet d'avocat), ended with the liquidation of the company Justicia SA. I would therefore be grateful if you would immediately remove my name from your blog, failing which I will file a criminal complaint against you. Regards, Joao RODRIGUES, Chemin De-Maisonneuve, 5, 1219 Châtelaine

Mr Pocock replied and received more threats, just days before Christmas:

Subject: Re: Justcia SA
Date: Wed, 13 Dec 2023 17:10:12 +0000
From: joao rodrigues <joaoboleiro@hotmail.com>
To: Daniel Pocock <daniel@pocock.pro>

Le problème que vous citez concerne directement M. Parreaux et je vous invite à lui formuler vos doléances. Toutefois, en droit Suisse, vous ne pouvez pas citer une personne sans son autorisation, d'autant plus si cela lui porte atteinte à son honneur (art. 173ss CP).

Partant, je maintiens ma demande et n'hésiterai pas à déposer une plainte pénale si vous ne retirez mon de votre blog.

Cordialement,

João Rodrigues

(Translated to English) The problem you mention directly concerns Mr Parreaux and I invite you to address your grievances to him. However, under Swiss law, you cannot name a person without their authorization, all the more so if it harms their honour (art. 173ss CP). Accordingly, I maintain my request and will not hesitate to file a criminal complaint if you do not remove my [name] from your blog. Regards, João Rodrigues

At 08:30 on 1 January 2026, Mr Pocock's blog was the first to find and publish links to old news reports about owners of Le Constellation doing the renovations by themselves.

Within two days, every media outlet was running the story that these people have been charged with negligent homicide.

Mr Pocock warned the public that they have at least two other businesses, Le Senso in Crans-Montana and Le Vieux Chalet in Lens (VS) and they have done the construction work themselves in at least one of those businesses.

According to a report in Le Temps, the maximum sentence these people would face is four and a half years in prison. That is less than the sentence given to legendary human rights activist Gerhard Ulrich for creating the Swiss-Corruption.com web site.

The news reports today tell us that the Crans-Montana suspects, facing charges relating to forty deaths and over one hundred seriously injured victims, have been released on unconditional bail. They are even free to travel outside Switzerland. Yet when Gerhard Ulrich dared to publish information about the absurd behaviour of Swiss jurists, they sent the TIGRIS commando unit to molest him and lock him up.

This is grim news: if the foreign journalists, Swiss journalists, the victims or their family members make any "critical commentary" about the Swiss authorities, the building inspections or the suspects, even daring to publish the names of the suspects on social media, they could all be arrested on "criminal speech" charges.

This is no exaggeration: that is exactly what the jurist Rodrigues-Boleiro wrote above with his reference to Art. 173ss CP of the Swiss Criminal Code.

In 2011, Adrian von Bidder-Senn died on the same day Mr Pocock got married. The widow, Diana von Bidder-Senn, became mayor of Basel. There have been countless vendettas against Mr Pocock and his family over more than fifteen years. Mr Pocock has not committed any crime. His presence is a reminder of the possibility that a Swiss citizen, Adrian von Bidder, could be so unhappy that they would take their own life. So they want to snuff out Mr Pocock too so they can pretend everybody is happy. What does it mean to be Swiss?

The Debian suicide cluster, as a whole, is just as bad as the deaths in Le Constellation.

Some of the suicide victims had referred to the behaviour of the overlords. The overlords will do anything to avoid scrutiny of their behaviour. The Swiss jurists habitually gravitate towards the side that has the most money.

The von Bidders are both graduates of the elite ETH Zurich university. On the anniversary of the September 11 attacks, another ETH Zurich employee and Debianist, Axel Beckert, signed a demand asking Swiss authorities to violently attack Mr Pocock and destroy all of his computers and other evidence about the possibility that a Swiss graduate of the ETH Zurich had committed suicide on Mr Pocock's wedding day.

Well done ETH Zurich. Well done Switzerland.

Look again at the emails from Joao Rodrigues-Boleiro and compare them to the demands from ETH Zurich. They are not complaining about any real crimes. They are sooking hysterically about an unpaid volunteer who put his finger on the truth and dared to publish it.

Axel Beckert, ETH Zurich, Debianism, manslaughter, suicide cluster

 

Axel Beckert, ETH Zurich, Debianism, manslaughter, suicide cluster

 

Let's go back to the case of Joao Rodrigues Boleiro. His emails to Mr Pocock do not disclose his relationship with the police. In fact, according to his LinkedIn profile, he has also worked as an in-house jurist for the police departments in both canton Valais and canton Geneva.

Joao Rodrigues Boleiro, Geneva police

 

The founder of the illegal legal insurance scheme, Mathieu Parreaux had worked as a greffier in the tribunal of Monthey, Valais at the same time that he was selling illegal legal protection insurance. That was also around the same time that Joao Rodrigues Boleiro was working with the police and attorney general (procureur / Ministère public) in Sierre and Sion, canton Valais.

Therefore, by identifying the history of Joao Rodrigues Boleiro today, I want to ask the question: how long did Rodrigues-Boleiro's police colleagues know that he quit his police job to go and work for a scam?

After all, Mathieu Parreaux told us on his own LinkedIn post that the authorities knew his company was a scam since 2018, well before the time Joao Rodrigues Boleiro decided to quit his police job and join the insurance racket.

FINMA shut down the illegal legal insurance in April 2023 but even more than a year later, Joao Rodrigues Boleiro was still listed in the Canton Geneva police directory as one of their own jurists.

Joao Rodrigues Boleiro, Geneva police

 

When he sent the email threats to unpaid volunteer Mr Pocock in December 2023, Joao Rodrigues Boleiro did not show Mr Pocock his police badge or reveal anything about his connections to the police department.

When UBS was being prosecuted in 2010, Mr Pocock went to Switzerland in good faith to help them improve their business processes. But some of the native Swiss people are the most deeply racist crackpots Mr Pocock has ever encountered. Some of these people are deeply jealous of professionals like Mr Pocock who were born in other countries and speak Swiss languages with an accent. They are too stubborn to listen and they use the police to hide their own sheer incompetence. Now forty of their own children are dead and over one hundred have been scarred for life by third degree burns.

On 15 January 2023, Mr Pocock dared to speculate what a model police officer looks like. A few weeks later, the police officer identified by Mr Pocock was awarded the National Emergency Medal for his efforts during the Australian bushfires. In other words, Victoria Police have had their best and brightest watching Mr Pocock for a long time. A rogue employee of ETH Zurich, which claims to be a leading research institution, tried to organize a lynching because Mr Pocock is a reliable judge of character.

That point needs to be emphasized: Mr Pocock called out his old friend as an exemplary police officer and the person in question won the National Emergency Medal. On 9 September 2023, Mr Pocock told the Cambridgeshire coroner about the risks of the Debian toxic culture and three days after that warning, the next victim died. Mr Pocock has an uncanny habit of being right about people. Snobby people have a fit when you hold a mirror up to them.

Look at how Joao Rodrigues Boleiro from the Canton Valais police and Axel Beckert from ETH Zurich tried to bully Mr Pocock into silence in the documents cited above.

Amnesty International published a report about the Swiss police using violence to "arrest" and sexually abuse Trevor Kitchen for his reports about financial corruption.

Trevor Kitchen, a 41-year-old British citizen resident in Switzerland, was arrested by police in Chiasso (canton of Ticino) on the morning of 25 December 1992 in connection with offences of defamation and insults against private individuals. In a letter addressed to the Head of the Federal Department of Justice and Police in Berne and to the Tribunal in Bellinzona (Ticino) on 3 June 1993 he alleged that two police officers arrested him in a bar in Chiasso and, after handcuffing him, accompanied him to their car in the street outside. They then bent him over the car and hit him around the head approximately seven times and carried out a body search during which his testicles were squeezed. He claimed he was then punched hard between the shoulder blades several times. He said he offered no resistance during the arrest.

He was then taken to a police station in Chiasso where he was questioned in Italian (a language he does not understand) and stated that during the questioning "The same policeman that arrested me came into the office to shout at me and hit me once again around the head. Another policeman forced me to remove all of my clothes. I was afraid that they would use physical force again; they continued to shout at me. The one policeman was pulling at my clothes and took my trouser belt off and removed my shoe laces. Now I stood in the middle of an office completely naked (for 10 minutes) with the door wide open and three policemen staring at me, one of the policemen put on a pair of rubber surgical gloves and instructed me to crouch into a position so that he could insert his fingers into my anus, I refused and they all became angry and started shouting and demonstrating to me the position which they wanted me to take, laughing, all were laughing, these police were having a good time. They pointed at my penis, making jokes, hurling abuse and insults at me, whilst I stood completely still and naked. Finally, when they finished laughing, one of the policemen threw my clothes onto the floor in front of me. I got dressed."

He was transferred to prison some hours later and in his letter claimed that during the night he started to experience severe pains in his chest, back and arms. He asked a prison guard if he could see a doctor but the request was refused and he claimed the guard kicked him. He was released on 30 December 1993. Medical reports indicated that since his release he had been experiencing recurrent pain in the area of his chest and right shoulder and had been receiving physiotherapy for an injury to the upper thoracic spine and his right shoulder girdle.

Read more about the JuristGate scandal.

01 January, 2026 04:00PM

Daniel Pocock

Crans-Montana: Le Constellation ownership, Jacques Moretti and Jessica Maric, Lens (CH)

News reports have appeared about an explosion at the bar Le Constellation in Crans-Montana, Switzerland.

A 2016 news report from Le Nouvelliste quotes the owner of the bar about his acquisition of the establishment:

Coup de foudre! Ce commerçant corse décide d’ouvrir une affaire. Ça tombe bien, le Constellation, au centre de Crans, est à remettre. Il faudra attendre juin 2015 pour signer un accord. Durant six mois, Jacques Moretti retrousse ses manches et relooke l’établissement. «J’ai quasiment tout fait moi-même. Regardez ces murs, il y a 14 tonnes de pierres sèches, elles viennent de Saint-Léonard!» Depuis décembre 2015, le Constellation sert d’écrin aux produits corses. Charcuteries, vins, bières, liqueur de myrte et même whisky au parfum de châtaigne. «Mais attention, j’ai à cœur de présenter aussi le terroir valaisan. Vous avez de très bons vins, c’est un plaisir de les servir à mes clients.» Le Corse avoue se sentir très bien chez nous. «Vous savez, on est pareil. On est d’abord des montagnards. Avec la tête dure, mais surtout avec beaucoup de gentillesse.»

Translated to English:

(Translated to English) It was love at first sight! This Corsican businessman decided to open a business. Conveniently, Le Constellation, in the center of Crans, was for sale. He had to wait until June 2015 to sign the agreement. For six months, Jacques Moretti rolled up his sleeves and completely renovated the establishment. "I did almost everything myself. Look at these walls, there are 14 tons of dry-stone masonry, and the stones come from Saint-Léonard!" Since December 2015, Le Constellation has been showcasing Corsican products: cured meats, wines, beers, myrtle liqueur, and even chestnut-flavored whisky. "But mind you, I'm also keen to feature local Valais products. You have some excellent wines here; it's a pleasure to serve them to my customers." The Corsican admits he feels very much at home here. "You know, we're alike. We're both mountain people at heart. Stubborn, perhaps, but above all, very kind."

Jacques Moretti on LinkedIn.

The news report notes he did everything himself but doesn't ask whether controlled works, such as the electrical, gas and fire safety systems, were also DIY.

The Facebook page for the bar has been taken down. The bar published a profile, with pictures and contact details, on the site of the local tourist office.

These are the details the owners chose to make public:

Rue Centrale 35
3963 Crans-Montana
constellationcransmontana@gmail.com
+41 78 717 14 86
www.facebook.com/leconstellation

Switzerland has 26 cantons and each canton maintains its own business register.

I previously had to research the scandal involving an illegal legal insurance scheme being operated across the border between Switzerland and France. The presence of records about multiple nominee owners and business entities in different cantons made it hard to find the truth. Nonetheless, the truth came out on the JuristGate web site.

Le Constellation is in the Canton of Valais and the business records can be searched in this public database.

The search reveals the owners are Jacques Moretti, a French citizen domiciled in Lens, and Jessica Maric, his spouse, who is also a French citizen.

The records mention they are domiciled in the Swiss village Lens, not to be confused with the French city of the same name.

Jessica Maric on LinkedIn

Searching for their names finds other businesses, including Le Vieux Chalet 1978, and Le Senso.

More links

Article about the couple in Altitude Immobilier magazine

Blog about their other restaurant from food critic Gilles Pudlowski.

To read more about researching businesses in Switzerland, please see the JuristGate web site.

01 January, 2026 07:30AM

Russ Allbery

Review: This Brutal Moon

Review: This Brutal Moon, by Bethany Jacobs

Series: Kindom Trilogy #3
Publisher: Orbit
Copyright: December 2025
ISBN: 0-316-46373-6
Format: Kindle
Pages: 497

This Brutal Moon is a science fiction thriller with bits of cyberpunk and space opera. It concludes the trilogy begun with These Burning Stars. The three books tell one story in three volumes, and ideally you would read all three in close succession.

There is a massive twist in the first book that I am still not trying to spoil, so please forgive some vague description.

At the conclusion of These Burning Stars, Jacobs had moved a lot of pieces into position, but it was not yet clear to me where the plot was going, or even if it would come to a solid ending in three volumes as promised by the series title. It does. This Brutal Moon opens with some of the political maneuvering that characterized These Burning Stars, but once things start happening, the reader gets all of the action they could wish for and then some.

I am pleased to report that, at least as far as I'm concerned, Jacobs nails the ending. Not only is it deeply satisfying, the characterization in this book is so good, and adds so smoothly to the characterization of the previous books, that I saw the whole series in a new light. I thought this was one of the best science fiction series finales I've ever read. Take that with a grain of salt, since some of those reasons are specific to me and the mood I was in when I read it, but this is fantastic stuff.

There is a lot of action at the climax of this book, split across at least four vantage points and linked in a grand strategy with chaotic surprises. I kept all of the pieces straight and understood how they were linked thanks to Jacobs's clear narration, which is impressive given the number of pieces in motion. That's not the heart of this book, though. The action climax is payoff for the readers who want to see some ass-kicking, and it does contain some moving and memorable moments, but it relies on some questionable villain behavior and a convenient plot device introduced only in this volume. The action-thriller payoff is competent but not, I think, outstanding.

What put this book into a category of its own were the characters, and specifically how Jacobs assembles sweeping political consequences from characters who, each alone, would never have brought about such a thing, and in some cases had little desire for it.

Looking back on the trilogy, I think Jacobs has captured, among all of the violence and action-movie combat and space-opera politics, the understanding that political upheaval is a relay race. The people who have the personalities to start it don't have the personality required to nurture it or supply it, and those who can end it are yet again different. This series is a fascinating catalog of political actors — the instigator, the idealist, the pragmatist, the soldier, the one who supports her friends, and several varieties and intensities of leaders — and it respects all of them without anointing any of them as the One True Revolutionary. The characters are larger than life, yes, and this series isn't going to win awards for gritty realism, but it's saying something satisfyingly complex about where we find courage and how a cause is pushed forward by different people with different skills and emotions at different points in time. Sometimes accidentally, and often in entirely unexpected ways.

As before, the main story is interwoven with flashbacks. This time, we finally see the full story of the destruction of the moon of Jeve. The reader has known about this since the first volume, but Jacobs has a few more secrets to show (including, I will admit, setting up a plot device) and some pointed commentary on resource extraction economies. I think this part of the book was a bit obviously constructed, although the characterization was great and the visible junction points of the plot didn't stop me from enjoying the thrill when the pieces came together.

But the best part of this book was the fact there was 10% of it left after the climax. Jacobs wrote an actual denouement, and it was everything I wanted and then some. We get proper story conclusions for each of the characters, several powerful emotional gut punches, some remarkably subtle and thoughtful discussion of political construction for a series that tended more towards space-opera action, and a conclusion for the primary series relationship that may not be to every reader's taste but was utterly, perfectly, beautifully correct for mine. I spent a whole lot of the last fifty pages of this book trying not to cry, in the best way.

The character evolution over the course of this series is simply superb. Each character ages like fine wine, developing more depth, more nuance, but without merging. They become more themselves, which is an impressive feat across at least four very different major characters. You can see the vulnerabilities and know what put them there, you can see the strengths they developed to compensate, and you can see why they need the support the other characters provide. And each of them is so delightfully different.

This was so good. This was so precisely the type of story that I was in the mood for, with just the type of tenderness for its characters that I wanted, that I am certain I am not objective about it. It will be one of those books where other people will complain about flaws that I didn't see or didn't care about because it was doing the things I wanted from it so perfectly. It's so good that it elevated the entire trilogy; the journey was so worth the ending.

I'm afraid this review will be less than helpful because it's mostly nonspecific raving. This series is such a spoiler minefield that I'd need a full spoiler review to be specific, but my reaction is so driven by emotion that I'm not sure that would help if the characters didn't strike you the way that they struck me. I think the best advice I can offer is to say that if you liked the emotional tone of the end of These Burning Stars (not the big plot twist, the character reaction to the political goal that you learn drove the plot), stick with the series, because that's a sign of the questions Jacobs is asking. If you didn't like the characters at the end (not the middle) of the first novel, bail out, because you're going to get a lot more of that.

Highly, highly recommended, and the best thing I've read all year, with the caveats that you should read the content notes, and that some people are going to bounce off this series because it's too intense and melodramatic. That intensity will not let up, so if that's not what you're in the mood for, wait on this trilogy until you are.

Content notes: Graphic violence, torture, mentions of off-screen child sexual assault, a graphic corpse, and a whole lot of trauma.

One somewhat grumbly postscript: This is the sort of book where I need to not read other people's reviews because I'll get too defensive of it (it's just a book I liked!). But there is one bit of review commentary I've seen about the trilogy that annoys me enough I have to mention it. Other reviewers seem to be latching on to the Jeveni (an ethnic group in the trilogy) as Space Jews and then having various feelings about that.

I can see some parallels, I'm not going to say that it's completely wrong, but I also beg people to read about a fictional oppressed ethnic and religious minority and not immediately think "oh, they must be stand-ins for Jews." That's kind of weird? And people from the US, in particular, perhaps should not read a story about an ethnic group enslaved due to their productive skill and economic value and think "they must be analogous to Jews, there are no other possible parallels here." There are a lot of other comparisons that can be made, including to the commonalities between the methods many different oppressed minorities have used to survive and preserve their culture.

Rating: 10 out of 10

01 January, 2026 05:27AM

December 31, 2025

hackergotchi for Junichi Uekawa

Junichi Uekawa

Happy new year.

Happy new year.

31 December, 2025 10:42PM by Junichi Uekawa

hackergotchi for Bits from Debian

Bits from Debian

DebConf26 dates announced

DebConf26 by Romina Molina

As announced in Brest, France, in July, the Debian Conference is heading to Santa Fe, Argentina.

The DebConf26 team and the local organizing team in Argentina are excited to announce the dates of DebConf26, the 27th edition of the Debian Developers and Contributors Conference:

DebCamp, the annual hacking session, will run from Monday July 13th to Sunday July 19th 2026, followed by DebConf from Monday July 20th to Saturday July 25th 2026.

For all those who wish to meet us in Santa Fe, the next step will be the opening of registration on January 26, 2026. The call for proposals period for anyone wishing to submit a conference or event proposal will be launched on the same day.

DebConf26 is looking for sponsors; if you are interested or think you know of others who would be willing to help, please have a look at our sponsorship page and get in touch with sponsors@debconf.org.

About Debian

The Debian Project was founded in 1993 by Ian Murdock to be a truly free community project. Since then the project has grown to be one of the largest and most influential Open Source projects. Thousands of volunteers from all over the world work together to create and maintain Debian software. Available in 70 languages, and supporting a huge range of computer types, Debian calls itself the universal operating system.

About DebConf

DebConf is the Debian Project's developer conference. In addition to a full schedule of technical, social and policy talks, DebConf provides an opportunity for developers, contributors and other interested people to meet in person and work together more closely. It has taken place annually since 2000 in locations as varied as Scotland, Bosnia and Herzegovina, India, and Korea. More information about DebConf is available from https://debconf.org/.

For further information, please visit the DebConf26 web page at https://debconf26.debconf.org/ or send mail to press@debian.org.

DebConf26 is made possible by Proxmox and others.

31 December, 2025 05:00PM by Publicity team

hackergotchi for Chris Lamb

Chris Lamb

Favourites of 2025

Here are my favourite books and movies that I read and watched throughout 2025.

§

Books

Eliza Clark: Boy Parts (2020)
Rachel Cusk: The Outline Trilogy (2014—2018)
Edith Wharton: The House of Mirth (1905)
Michael Finkel: The Art Thief (2023)
Tony Judt: When the Facts Change: Essays 1995-2010 (2010)
Jennette McCurdy: I'm Glad My Mom Died (2022)
Joan Didion: The Year of Magical Thinking (2005)
Jill Lepore: These Truths: A History of the United States (2018)

§

Films

Recent releases

Disappointments this year included 28 Years Later (Danny Boyle, 2025), Cover-Up (Laura Poitras & Mark Obenhaus, 2025), Bugonia (Yorgos Lanthimos, 2025) and Caught Stealing (Darren Aronofsky, 2025).


Older releases

i.e. films released before 2024, and not including rewatches from previous years.

Distinctly unenjoyable watches included War of the Worlds (Rich Lee, 2025), Highest 2 Lowest (Spike Lee, 2025), Elizabethtown (Cameron Crowe, 2005), Crazy Rich Asians (Jon M. Chu, 2018) and Spinal Tap II: The End Continues (Rob Reiner, 2025).

On the other hand, unforgettable cinema experiences this year included big-screen rewatches of Chinatown (Roman Polanski, 1974), Koyaanisqatsi (Godfrey Reggio, 1982), Heat (Michael Mann, 1995) and Night of the Hunter (Charles Laughton, 1955).


31 December, 2025 08:58AM

December 30, 2025

hackergotchi for Daniel Pocock

Daniel Pocock

Invitation to live next door to George and Amal Clooney

George and Amal Clooney have been in the news today after receiving French citizenship.

An interesting observation is that their villa is almost right next door to the Benedictine monastery established by Dom Alcuin Reid.

( Directions)

Dom Alcuin Reid is from Melbourne. He left the seminary for reasons that have not been well explained. He was invited to operate the English-speaking Benedictine monastery in the south of France. In 2022, a bishop outside France secretly ordained Dom Alcuin Reid as a priest. One of the other monks was secretly ordained as a deacon. The public and the Catholic faithful remain in the dark.

The story of secret ordinations taking place in the church reminded me of the secret demotions used to hide the Debian suicide cluster. What an uncanny coincidence. One of the victims died on our wedding day and it was Palm Sunday.

Nonetheless, the monastery invites male visitors over 18 to come and live with them and discover the monastic lifestyle. The advertisement doesn't mention that the next-door neighbors are the Clooneys:

Our classical Benedictine monastic observance is centred upon the solemn celebration of the Sacred Liturgy according to the older, classical forms of the Roman and Monastic Rites and is supported through our manual and intellectual work.

The Monastère Saint-Benoît welcomes all to their celebrations of the Monastic Office and Holy Mass (celebrated in Latin according to the usus antiquior) in our beautiful 10th century Romanesque church. The weekly schedule is posted here.

Men of 18 years of age or over are welcome to ask to stay in the monastery guest accommodation for a time of retreat and should contact the Guest Master. Our guest accommodation is limited. Ladies and families are welcome to arrange their own accommodation locally and are able to participate in the monastic offices, all of which are open to the public and are celebrated in the monastery church.

Monks are available for Confession or spiritual advice. The Monks welcome requests for prayer and accept intentions for Mass to be offered. Please contact us.

Men discerning the possibility of a Benedictine vocation are welcome to visit the monastery and to share in its daily life and work for an extended period. In the first instance they should write to the Prior.

The Monastery welcomes those who wish to associate themselves more formally with the prayer and the work of the community as Oblates. For further details, contact the Master of Oblates.

George Clooney, Le Canadel, Monastère Saint-Benoît, Dom Alcuin Reid, Brignoles, Var, France

By coincidence, I had the fortunate opportunity to meet Mgr Rey, the former bishop of this region, at a recent event in Lyon.

Daniel Pocock, Monseigneur Dominique Rey, Réseau Vie, Basilique Saint-Bonaventure, Lyon

Related blogs about the church

Catholic.Community

Please follow the Catholic.Community web site and make it your home page.

30 December, 2025 10:30PM

From the ABC to Techrights: recognizing fake news about economics, finance and investment

On Sunday, I published some brief comments about the Eurozone, Bulgaria, fake news, inflation and bullion. Shortly afterwards, various reports appeared which contradict my own comments.

Techrights has asked: Debt as the new currency?

In fact, most banks are expected to hold some form of reserve. The reserve is actually a loan to the government. In countries with central banks, the central bank lends money to the government and it is authorized to print banknotes and make other loans. The founding of the Bank of England famously involved a loan of GBP 1.2 million to the British Government. The loan is a liability for the British government and an asset for the Bank of England. Off the back of this asset, the Bank can print banknotes.

Therefore, the Techrights headline was wrong: there is nothing new about debts backing up currency. Interested readers can discover more by reading about money supply or the history of central banking, in which the British have a special status.

Australia's ABC has gone on to comment that silver prices declined and palladium crashed on Monday, 29 December. They justify the comment by explaining that professional investors regard any downward price move of more than ten percent as a crash.

Most traders view such a price plunge as a market crash.

Fact check: traders assign a volatility rating to every stock, every commodity and every corporate bond.

People can look at the volatility rating before they decide to purchase a stock, a metal or a bond. If you choose to put your money in an investment with a very high volatility rating, then you cannot use the word "crash" every time it swings ten percent in a single day.

Despite the high volatility of precious metals, people have been buying these things as long-term investments to protect against inflation. Here is one of those Irish photos demonstrating how many silver coins you can buy with EUR 1,000. There is a stack of coins for every two-year interval since 2007. There are ten stacks, implying the purchaser spent 10 x 1,000 = EUR 10,000 in total to buy the 607 coins in the photo.

At the silver price peak on 26 December, the coins were worth EUR 70 each, for a total of EUR 42,490.

After the ABC's "crash" on Monday, the coins were worth EUR 64 each, for a total of EUR 38,848. That is less than the peak value, but still a lot more than the long-term cost of buying the coins.

Silver coins, inflation, Eurozone

When I saw the article about a crash, I wondered whether these over-generalized comments were created by a real journalist or by artificial intelligence.

Last week, the same ABC web site published a report about an "up-crash" in the markets as people speculate on the artificial intelligence stocks.

Every stock and every metal has a bubble from time to time. Good investment may be nothing more than picking the bubble that is less wrong than the other bubbles.

Do we say that the Euro has crashed now that the number of silver coins we can purchase has fallen by more than fifty percent in twelve months?

If you took the four grams of free silver mentioned in my previous blog, did you lose anything at all when the price changed, given that it was free? That depends on which metric you use to measure the price. The free grams of bullion are still available today, but it feels like it won't be long before they reduce the size of the promotion.

More reports about economic subjects.

30 December, 2025 10:30AM

Russ Allbery

Review: Dark Ambitions

Review: Dark Ambitions, by Michelle Diener

Series: Class 5 #4.5
Publisher: Eclipse
Copyright: 2020
ISBN: 1-7637844-2-8
Format: Kindle
Pages: 81

Dark Ambitions is a science fiction romance novella set in Michelle Diener's Class 5 series, following the events of Dark Matters. It returns to Rose as the protagonist and in that sense is a sequel to Dark Horse, but you don't have to remember that book in detail to read this novella.

Rose and Dav (and the Class 5 ship Sazo) are escorting an exploration team to a planet that is being evaluated for settlement. Rose has her heart set on going down to the planet, feeling the breeze, and enjoying the plant life. Dav and his ship are called away to deal with a hostage situation. He tries to talk her out of going down without him, but Rose is having none of it. Predictably, hijinks ensue.

This is a very slight novella dropped into the middle of the series but not (at least so far as I can tell) important in any way to the overall plot. It provides a bit of a coda to Rose's story from Dark Horse, but given that Rose has made cameos in all of the other books, readers aren't going to learn much new here. According to the Amazon blurb, it was originally published in the Pets in Space 5 anthology. The pet in question is a tiny creature a bit like a flying squirrel that Rose rescues and that then helps Rose in exactly the way that you would predict in this sort of story.

This is so slight and predictable that it's hard to find enough to say about it to write a review. Dav is protective in a way that I found annoying and kind of sexist. Rose doesn't let that restrict her decisions, but seems to find this behavior more charming than I did. There is a tiny bit of Rose being awesome but a bit more damsel in distress than the series usually goes for. The cute animal is cute. There's the obligatory armory scene with another round of technomagical weapons that I think has appeared in every book in this series. It all runs on rather obvious rails.

There is a subplot involving Rose feeling some mysterious illness while on the planet that annoyed me entirely out of proportion to how annoying it is objectively, mostly because mysterious illnesses tend to ramp up my anxiety, which is not a pleasant reading emotion. This objection is probably specific to me.

This is completely skippable. I was told that in advance and thus only have myself to blame, but despite my completionist streak, I wish I'd skipped it. We learn one piece of series information that will probably come up in the future, but it's not the sort of information that would lead me to seek out a story about it. Otherwise, there's nothing wrong with it, really, but it would be a minor and entirely forgettable chapter in a longer novel, padded out with a cute animal and Dav trying to be smothering.

Not recommended just because you probably have something better to do with that reading time (reading the next full book of the series, for example), but there's nothing wrong with this if you want to read it anyway.

Followed by Dark Class.

Rating: 5 out of 10

30 December, 2025 06:19AM

December 28, 2025

hackergotchi for Daniel Pocock

Daniel Pocock

Eurozone: Bulgaria, Russian dirty tricks, Gold & Silver bullion

Bulgaria joined the European Union in 2007 and is set to adopt the Euro as its currency on 1 January 2026.

The decision to use the Euro has been divisive. Polls suggest that a majority of citizens would prefer to defer or completely cancel the decision to adopt the Euro. Everybody from the political parties to the Russians is getting in on the conflict. At the beginning of December, the Bulgarian parliament was asked to consider putting the Euro to a referendum.

The ECB is prepared for anything

The European authorities understand that if Bulgaria changes its mind or if the Euro transition flops badly, it could have major ramifications for every other country that is already part of the Euro.

In particular, ever since banks began using currencies that have no link to gold and silver, the currencies have been entirely dependent on public perception. A Euro rejection or flop in Bulgaria would undermine perception in unpredictable ways. Other countries would think twice before joining in the future.

With that in mind, the banks have well and truly prepared for every imaginable disaster, whether it is the accidental death of a prime minister or Russian cyberattacks.

Gold and silver: the silent referendum

While the Bulgarian public did not get to vote on the Euro per se, they are voting with their wallets. Reports claim that Bulgaria, the poorest country in Europe, now ranks third in the table of private bullion ownership.

Eurozone inflation and the price police

As in previous Euro changeovers, the authorities have promised that price police will check the prices of essential goods and services in a range of businesses before and after 1 January. Businesses who obviously increase their prices and blame the Euro have been threatened with punishment.

In practice, we've seen that businesses in other countries have found indirect ways to work around inflation. For example, in Ireland, a lot of restaurants periodically make a complete overhaul of their menu. They change the ingredients and serving sizes and there is no easy way to make a like-for-like comparison to the menu before Ireland got the Euro many years ago.

On top of that, businesses that don't increase their prices will probably fail completely. New businesses will appear and replace the old businesses. The new businesses will charge new prices and the price police will not be able to punish them because they didn't exist before the Euro changeover.

Inflation in a picture: gold and silver prices

The gold and silver prices in the media typically show a chart that is always going up.

A far better way to look at the prices of these metals is to ask: if you took one thousand Euro from your salary and invested it in silver every December, how many coins would you get?

Somebody made a "chart" by stacking the silver coins they acquired over more than twenty years. It has got people talking in Ireland. In 2007, when Bulgaria joined the European Union, one thousand Euros could buy 92 silver coins. Today, one thousand Euro only buys fifteen coins.

Silver coins, inflation, Eurozone

BullionVault may stop giving away free silver

The well-known BullionVault web site currently puts 4 grams of free silver into each new account. The new normal for silver prices may make them downgrade this policy, and future customers may only get 2 or 3 grams free.

The terms and conditions discourage people from creating multiple accounts in their own name. There appears to be no reason why multiple people in the same family (for example, the husband, wife and each child) can't each open an account and claim 4 grams of silver in their own name.

Now that I've speculated about that, look out for new terms and conditions that limit the free 4 grams of silver to one person at the same address.

More reports about economic subjects.

28 December, 2025 09:00PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

Our study, 2025

We’re currently thinking of renovating our study/home office. I’ll likely write more about that project. Embarking on it reminded me that I’d taken a photo of the state of it nearly a year ago and forgot to post it, so here it is.

Home workspace, January 2025

When I took that pic last January, it had been three years since the last one, and the major difference was a reduction in clutter. I've added a lava lamp (charity shop find) and a Rob Sheridan print. We got rid of the POÄNG chair (originally bought for breastfeeding) so we currently have no alternate seating besides the desk chair.

As much as I love my vintage mahogany writing desk, our current thinking is it’s likely to go. I’m exploring whether we could fit in two smaller desks: one main one for the computer, and another “workbench” for play: the synthesiser, Amiga, crafting and 3d printing projects, etc.

28 December, 2025 08:25AM

Balasankar 'Balu' C

Granting Namespace-Specific Access in GKE Clusters

Heyo,

In production Kubernetes environments, access control becomes critical when multiple services share the same cluster. I recently faced this exact scenario: a GKE cluster hosting multiple services across different namespaces, where a new team needed access to maintain and debug their service, but only their service.

The requirement was straightforward yet specific: grant external users the ability to exec into pods, view logs, and forward ports, but restrict this access to a single namespace within a single GKE cluster. No access to other clusters in the Google Cloud project, and no access to other namespaces.

The Solution

Achieving this granular access control requires combining Google Cloud IAM with Kubernetes RBAC (Role-Based Access Control). Here’s how to implement it:

Step 1: Tag Your GKE Cluster

First, apply a unique tag to your GKE cluster. This tag will serve as the identifier for IAM policies.
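
As a rough sketch of what this could look like with gcloud, using placeholder names throughout (organization 123456789012, tag key team-access, tag value service-a, cluster my-cluster in us-central1 of project my-project) that you would adapt to your own resources:

# Create a tag key and value at the organization level
gcloud resource-manager tags keys create team-access \
    --parent=organizations/123456789012
gcloud resource-manager tags values create service-a \
    --parent=123456789012/team-access

# Attach the tag value to the GKE cluster
gcloud resource-manager tags bindings create \
    --tag-value=123456789012/team-access/service-a \
    --parent=//container.googleapis.com/projects/my-project/locations/us-central1/clusters/my-cluster \
    --location=us-central1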

Step 2: Grant IAM Access via Tags

Add an IAM policy binding that grants users access to resources with your specific tag. The Kubernetes Engine Viewer role (roles/container.viewer) provides sufficient base permissions without granting excessive access.
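
Continuing the sketch above, the binding can carry an IAM condition that matches the tag. The resource.matchTag expression below reflects my understanding of the condition syntax and is worth verifying against the current IAM documentation; all names remain placeholders:

# tag-condition.yaml
title: service-a-clusters-only
description: Only grant access on clusters carrying the service-a tag
expression: resource.matchTag('123456789012/team-access', 'service-a')

gcloud projects add-iam-policy-binding my-project \
    --member=user:myuser@gmail.com \
    --role=roles/container.viewer \
    --condition-from-file=tag-condition.yaml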

Step 3: Create a Kubernetes ClusterRole

Define a ClusterRole that specifies the exact permissions needed:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: custom-access-role
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/exec", "pods/attach", "pods/portforward", "pods/log"]
    verbs: ["get", "list", "watch", "create"]

Note: While you could use a namespace-scoped Role, a ClusterRole offers better reusability if you need similar permissions for other namespaces later.

Step 4: Bind the Role to Users

Create a RoleBinding to connect the role to specific users and namespaces:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: custom-rolebinding
  namespace: my-namespace
subjects:
  - kind: User
    name: myuser@gmail.com
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: custom-access-role
  apiGroup: rbac.authorization.k8s.io

Apply both configurations using kubectl apply -f <filename>.
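
To sanity-check the RBAC half, kubectl's impersonation support (run with credentials that are themselves allowed to impersonate) should show the boundary holding; my-namespace and myuser@gmail.com are the placeholders from the manifests above:

kubectl auth can-i create pods/exec -n my-namespace --as myuser@gmail.com      # expect: yes
kubectl auth can-i create pods/exec -n other-namespace --as myuser@gmail.com   # expect: no
kubectl auth can-i list secrets -n my-namespace --as myuser@gmail.com          # expect: no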

How It Works

This approach creates a two-layer security model:

  • GCP IAM controls which clusters users can access using resource tags
  • Kubernetes RBAC controls what users can do within the cluster and limits their scope to specific namespaces

The result is a secure, maintainable solution that grants teams the access they need without compromising the security of other services in your cluster.

28 December, 2025 06:00AM

December 25, 2025

Russ Allbery

Review: Machine

Review: Machine, by Elizabeth Bear

Series: White Space #2
Publisher: Saga Press
Copyright: October 2020
ISBN: 1-5344-0303-5
Format: Kindle
Pages: 485

Machine is a far-future space opera. It is a loose sequel to Ancestral Night, but you do not have to remember the first book to enjoy this book and they have only a couple of secondary characters in common. There are passing spoilers for Ancestral Night in the story, though, if you care.

Dr. Brookllyn Jens is a rescue paramedic on Synarche Medical Vessel I Race To Seek the Living. That means she goes into dangerous situations to get you out of them, patches you up enough to not die, and brings you to doctors who can do the slower and more time-consuming work. She was previously a cop (well, Judiciary, which in this universe is mostly the same thing) and then found that medicine, and specifically the flagship Synarche hospital Core General, was the institution in all the universe that she believed in the most.

As Machine opens, Jens is boarding the Big Rock Candy Mountain, a generation ship launched from Earth during the bad era before right-minding and joining the Synarche, back when it looked like humanity on Earth wouldn't survive. Big Rock Candy Mountain was discovered by accident in the wrong place, going faster than it was supposed to be going and not responding to hails. The Synarche ship that first discovered and docked with it is also mysteriously silent. It's the job of Jens and her colleagues to get on board, see if anyone is still alive, and rescue them if possible.

What they find is a corpse and a disturbingly servile early AI guarding a whole lot of people frozen in primitive cryobeds, along with odd artificial machinery that seems to be controlled by the AI. Or possibly controlling the AI.

Jens assumes her job will be complete once she gets the cryobeds and the AI back to Core General where both the humans and the AI can be treated by appropriate doctors. Jens is very wrong.

Machine is Elizabeth Bear's version of a James White Sector General novel. If one reads this book without any prior knowledge, the way that I did, you may not realize this until the characters make it to Core General, but then it becomes obvious to anyone who has read White's series. Most of the standard Sector General elements are here: A vast space station with rings at different gravity levels and atmospheres, a baffling array of species, and the ability to load other people's personalities into your head to treat other species at the cost of discomfort and body dysmorphia. There's a gruff supervisor, a fragile alien doctor, and a whole lot of idealistic and well-meaning people working around complex interspecies differences. Sadly, Bear does drop White's entertainingly oversimplified species classification codes; this is the correct call for suspension of disbelief, but I kind of missed them.

I thoroughly enjoy the idea of the Sector General series, so I was delighted by an updated version that drops the sexism and the doctor/nurse hierarchy and adds AIs, doctors for AIs, and a more complicated political structure. The hospital is even run by a sentient tree, which is an inspired choice.

Bear, of course, doesn't settle for a relatively simple James White problem-solving plot. There are interlocking, layered problems here, medical and political, immediate and structural, that unwind in ways that I found satisfyingly twisty. As with Ancestral Night, Bear has some complex points to make about morality. I think that aspect of the story was a bit less convincing than Ancestral Night, in part because some of the characters use rather bizarre tactics (although I will grant they are the sort of bizarre tactics that I could imagine would be used by well-meaning people who didn't think through all of the possible consequences). I enjoyed the ethical dilemmas here, but they didn't grab me the way that Ancestral Night did. The setting, though, is even better: An interspecies hospital was a brilliant setting when James White used it, and it continues to be a brilliant setting in Bear's hands.

It's also worth mentioning that Jens has a chronic inflammatory disease and uses an exoskeleton for mobility, and (as much as I can judge while not being disabled myself) everything about this aspect of the character was excellent. It's rare to see characters with meaningful disabilities in far-future science fiction. When present at all, they're usually treated like Geordi's sight: something little different than the differential abilities of the various aliens, or even a backdoor advantage. Jens has a true, meaningful disability that she has to manage and that causes a constant cognitive drain, and the treatment of her assistive device is complex and nuanced in a way that I found thoughtful and satisfying.

The one structural complaint that I will make is that Jens is an astonishingly talkative first-person protagonist, particularly for an Elizabeth Bear novel. This is still better than being inscrutable, but she is prone to such extended philosophical digressions or infodumps in the middle of a scene that I found myself wishing she'd get on with it already in a few places. This provides good characterization, in the sense that the reader certainly gets inside Jens's head, but I think Bear didn't get the balance quite right.

That complaint aside, this was very fun, and I am certainly going to keep reading this series. Recommended, particularly if you like James White, or want to see why other people do.

The most important thing in the universe is not, it turns out, a single, objective truth. It's not a hospital whose ideals you love, that treats all comers. It's not a lover; it's not a job. It's not friends and teammates.

It's not even a child that rarely writes me back, and to be honest I probably earned that. I could have been there for her. I didn't know how to be there for anybody, though. Not even for me.

The most important thing in the universe, it turns out, is a complex of subjective and individual approximations. Of tries and fails. Of ideals, and things we do to try to get close to those ideals.

It's who we are when nobody is looking.

Followed by The Folded Sky.

Rating: 8 out of 10

25 December, 2025 03:05AM

December 23, 2025

hackergotchi for Jonathan Dowland

Jonathan Dowland

Remarkable

My Remarkable tablet, displaying my 2025 planner.

During my PhD, on a sunny summer’s day, I copied some papers to read onto an iPad and cycled down to an outdoor cafe next to the beach. Armed with a coffee and an ice cream, I sat and enjoyed the warmth. The only problem was that, due to the bright sunlight, I couldn’t see a damn thing.

In 2021 I decided to take the plunge and buy the Remarkable 2 that had been heavily advertised at the time. Over the next four or so years, I made good use of it to read papers; read drafts of my own papers and chapters; read a small number of technical books; use it as a daily planner; and take meeting notes for work, my PhD and, later, personal matters.

I didn’t buy the Remarkable stylus or folio cover, instead opting for a (at the time, slightly cheaper) LAMY AL-star EMR, and a fantastic fabric sleeve cover from Emmerson Gray.

I installed a hack which let me use the Lamy’s button to activate an eraser and also added a bunch of other tweaks. I wouldn’t recommend that specific hack anymore as there are safer alternatives (personally untested, but see e.g. https://github.com/isaacwisdom/RemarkableLamyEraser).

Pros: the writing experience is unparalleled. Excellent. I enjoy writing with fountain pens on good paper, but that experience comes with inky fingers, dried-up nibs, and a growing pile of paper notebooks. The Remarkable is very nearly as good without those drawbacks.

Cons: lower contrast than black on white paper, and no built-in illumination. It needs good light to read. Almost the opposite problem to the iPad! I’ve tried a limited number of external clip-on lights but nothing is frictionless to use.

The traditional two-column, wide-margin formatting for academic papers is a bad fit for the Remarkable’s size (just as it is for computer display sizes; really, is it good for anything people use anymore?). You can pinch to zoom, which is OK, or pre-process papers (with e.g. Briss) to reframe them to be more suitable, but that’s laborious.

The newer model, the Remarkable Paper Pro, might address both those issues: it’s bigger, has illumination, and has also added colour, which would be nice to have. It’s also a lot more expensive.

I had considered selling on the tablet after I finished my PhD. My current plan, inspired to some extent by my former colleague Aleksey Shipilëv, who makes great use of his, is to have a go at using it more often, to see if it continues to provide value for me: more noodling out thoughts for work tasks, more drawings (e.g. plans for 3D models) and more reading of tech books.

23 December, 2025 10:58AM

hackergotchi for Daniel Kahn Gillmor

Daniel Kahn Gillmor

AI and Secure Messaging Don't Mix

Over on the ACLU's Free Future blog, I just published an article titled AI and Secure Messaging Don't Mix.

The blogpost assumes for the sake of the argument that people might actually want to have an AI involved in their personal conversations, and explores why Meta's Private Processing doesn't offer the level of assurance that they want it to offer.

In short, the promises of "confidential cloud computing" are built on shaky foundations, especially against adversaries as powerful as Meta themselves.

If you really want AI in your chat, the baseline step for privacy preservation is to include it in your local compute base, not to use a network service! But these operators clearly don't value private communication as much as they value binding you to their services.

But let's imagine some secure messenger that actually does put message confidentiality first -- and imagine they had integrated some sort of AI capability into the messenger. That at least bypasses the privacy questions around AI use.

Would you really want to talk with your friends, as augmented by their local AI, though? Would you want an AI, even one running locally with perfect privacy, intervening in your social connections?

What if it summarized your friend's messages to you in a way that led you to misunderstand (or ignore) an important point your friend had made? What if it encouraged you to make an edgy joke that comes across wrong? Or to say something that seriously upsets a friend? How would you respond? How would you even know that it had happened?

My handle is dkg. More times than i can count, i've had someone address me in a chat as "dog" and then cringe and apologize and blame their spellchecker/autocorrect. I can laugh these off because the failure mode is so obvious and transparent -- and repeatable. (also, dogs are awesome, so i don't really mind!)

But when our attention (and our responses!) are being shaped and molded by these plausibility engines, how will we even know that mistakes are being made? What if the plausibility engine you've hooked into your messenger embeds subtle (or unsubtle!) bias?

Don't we owe it to each other to engage with actual human attention?

23 December, 2025 05:00AM by Daniel Kahn Gillmor

December 22, 2025

hackergotchi for Jonathan McDowell

Jonathan McDowell

NanoKVM: I like it

I bought a NanoKVM. I’d heard some of the stories about how terrible it was beforehand, and some I didn’t learn about until afterwards, but at £52, including VAT + P&P, that seemed like an excellent bargain for something I was planning to use in my home network environment.

Let’s cover the bad press first. apalrd did a video, entitled NanoKVM: The S stands for Security (Armen Barsegyan has a write up recommending a PiKVM instead that lists the objections raised in the video). Matej Kovačič wrote an article about the hidden microphone on a Chinese NanoKVM. Various other places have picked up both of these and still seem to be running with them, 10 months later.

Next, let me explain where I’m coming from here. I have over 2 decades of experience with terrible out-of-band access devices. I still wince when I think of the Sun Opteron servers that shipped with an iLOM that needed a 32-bit Windows browser in order to access it (IIRC some 32 bit binary JNI blob). It was a 64 bit x86 server from a company who, at the time, still had a major non-Windows OS. Sheesh. I do not assume these devices are fit for exposure to the public internet, even if they come from “reputable” vendors. Add into that the fact the NanoKVM is very much based on a development board (the LicheeRV Nano), and I felt I knew what I was getting into here.

And, as a TL;DR, I am perfectly happy with my purchase. Sipeed have actually dealt with a bunch of apalrd’s concerns (GitHub ticket), which I consider to be an impressive level of support for this price point. Equally the microphone is explained by the fact this is a £52 device based on a development board. You’re giving it USB + HDMI access to a host on your network, if you’re worried about the microphone then you’re concentrating on the wrong bit here.

I started out by hooking the NanoKVM up to my Raspberry Pi classic, which I use as a serial console / network boot tool for working on random bits of hardware. That meant the NanoKVM had no access to the outside world (the Pi is not configured to route, or NAT, for the test network interface), and I could observe what went on. As it happens you can do an SSH port forward of port 80 with this sort of setup and it all works fine - no need for the NanoKVM to have any external access, and it copes happily with being accessed as http://localhost:8000/ (though you do need to choose MJPEG as the video mode; more forwarding or enabling HTTPS is needed for an H.264 WebRTC session).
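
For reference, the forward in question is nothing exotic; something like the following does the job (placeholder addresses: assume the Pi answers as pi@raspberrypi and handed the NanoKVM 192.168.42.2 on the test network):

ssh -L 8000:192.168.42.2:80 pi@raspberrypi

after which the web interface appears at http://localhost:8000/.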

IPv6 is enabled in the kernel. My test setup doesn't have router advertisements configured, but I could connect to the web application over the v6 link-local address that came up automatically.

My device reports:

Image version:              v1.4.1
Application version:        2.2.9

That’s recent, but the GitHub releases page has 2.3.0 listed as more recent.

Out of the box it’s listening on TCP port 80. SSH is not running, but there’s a toggle to turn it on and the web interface offers a web based shell (with no extra authentication over the normal login). On first use I was asked to set a username + password. Default access, as you’d expect from port 80, is HTTP, but there’s a toggle to enable HTTPS. It generates a self signed certificate - for me it had the CN localhost but that might have been due to my use of port forwarding. Enabling HTTPS does not disable HTTP, but HTTP just redirects to the HTTPS URL.

As others have discussed it does a bunch of DNS lookups, primarily for NTP servers but also for cdn.sipeed.com. The DNS servers are hard coded:

~ # cat /etc/resolv.conf
nameserver 192.168.0.1
nameserver 8.8.4.4
nameserver 8.8.8.8
nameserver 114.114.114.114
nameserver 119.29.29.29
nameserver 223.5.5.5

This is actually restored on boot from /boot/resolv.conf, so if you want changes to persist you can just edit that file. NTP is configured with a standard set of pool.ntp.org servers in /etc/ntp.conf (this does not get restored on reboot, so can just be edited in place). I had dnsmasq on the Pi set up to hand out DNS + NTP servers, but both were ignored (though actually udhcpc does write the DNS details to /etc/resolv.conf.dhcp).

My assumption is the lookup to cdn.sipeed.com is for firmware updates (as I bought the NanoKVM cube it came fully installed, so no need for a .so download to make things work); when working DNS was provided I witnessed attempts to connect over HTTPS. I've not bothered digging further into this. I did go grab the latest.zip being served from the URL, which turned out to be v2.2.9, matching what I have installed, not the latest on GitHub.

I note there’s an iptables setup (with nftables underneath) that’s not fully realised - it seems to be trying to allow inbound HTTP + WebRTC, as well as outbound SSH, but everything is default accept so none of it gets hit. Setting up a default deny outbound and tweaking a little should provide a bit more reassurance it’s not going to try and connect out somewhere it shouldn’t.

It looks like updates focus solely on the KVM application, so I wanted to take a look at the underlying OS. This is buildroot based:

~ # cat /etc/os-release
NAME=Buildroot
VERSION=-g98d17d2c0-dirty
ID=buildroot
VERSION_ID=2023.11.2
PRETTY_NAME="Buildroot 2023.11.2"

The kernel reports itself as 5.10.4-tag-. Somewhat ancient, but actually an LTS kernel. Except we’re now up to 5.10.247, so it obviously hasn’t been updated in some time.

TBH, this is what I expect (and fear) from embedded devices. They end up with some ancient base OS revision and a kernel with a bunch of hacks that mean it’s not easily updated. I get that the margins on this stuff are tiny, but I do wish folk would spend more time upstreaming. Or at least updating to the latest LTS point release for their kernel.

The SSH client/daemon is full-fat OpenSSH:

~ # sshd -V
OpenSSH_9.6p1, OpenSSL 3.1.4 24 Oct 2023

There are a number of CVEs fixed in later OpenSSL 3.1 versions, though at present nothing that looks too concerning from the server side. Yes, the image has tcpdump + aircrack installed. I’m a little surprised at aircrack (the device has no WiFi and even though I know there’s a variant that does, it’s not a standard debug tool the way tcpdump is), but there’s a copy of GNU Chess in there too, so it’s obvious this is just a kitchen-sink image. FWIW it looks like the buildroot config is here.

Sadly the UART that I believe the bootloader/kernel are talking to is not exposed externally - the UART pin headers are for UART1 + 2, and I’d have to open up the device to get to UART0. I’ve not yet done this (but doing so would also allow access to the SD card, which would make trying to compile + test my own kernel easier).

In terms of actual functionality it did what I’d expect. 1080p HDMI capture was fine. I’d have gone for a lower resolution, but I think that would have required tweaking on the client side. It looks like the 2.3.0 release allows EDID tweaking, so I might have to investigate that. The keyboard defaults to a US layout, which caused some problems with the | symbol until I reconfigured the target machine not to expect a GB layout.

There’s also the potential to share out images via USB. I copied a Debian trixie netinst image to /data on the NanoKVM and was able to select it in the web interface and have it appear on the target machine easily. There’s also the option to fetch direct from a URL in the web interface, but I was still testing without routable network access, so didn’t try that. There’s plenty of room for images:

~ # df -h
Filesystem                Size      Used Available Use% Mounted on
/dev/mmcblk0p2            7.6G    823.3M      6.4G  11% /
devtmpfs                 77.7M         0     77.7M   0% /dev
tmpfs                    79.0M         0     79.0M   0% /dev/shm
tmpfs                    79.0M     30.2M     48.8M  38% /tmp
tmpfs                    79.0M    124.0K     78.9M   0% /run
/dev/mmcblk0p1           16.0M     11.5M      4.5M  72% /boot
/dev/mmcblk0p3           22.2G    160.0K     22.2G   0% /data
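
Getting an image into /data is just a normal file copy once SSH is enabled; for example (placeholder address and filename):

scp debian-13.1.0-amd64-netinst.iso root@192.168.0.50:/data/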

The NanoKVM also appears as an RNDIS USB network device, with udhcpd running on the interface. IP forwarding is not enabled, and there’s no masquerading rules setup, so this doesn’t give the target host access to the “management” LAN by default. I guess it could be useful for copying things over to the target host, as a more flexible approach than a virtual disk image.

One thing to note is this makes for a bunch of devices over the composite USB interface. There are 3 HID devices (keyboard, absolute mouse, relative mouse), the RNDIS interface, and the USB mass storage. I had a few occasions where the keyboard input got stuck after I’d been playing about with big data copies over the network and using the USB mass storage emulation. There is a HID-only mode (no network/mass storage) to try and help with this, and a restart of the NanoKVM generally brought things back, but something to watch out for. Again I see that the 2.3.0 application update mentions resetting the USB hardware on a HID reset, which might well help.

As I stated at the start, I’m happy with this purchase. Would I leave it exposed to the internet without suitable firewalling? No, but then I wouldn’t do so for any KVM. I wanted a lightweight KVM suitable for use in my home network, something unlikely to see heavy use but that would save me hooking up an actual monitor + keyboard when things were misbehaving. So far everything I’ve seen says I’ve got my money’s worth from it.

22 December, 2025 05:38PM

Russell Coker

Samsung 65″ QN900C 8K TV

As a follow-up to my last post about my 8K TV [1], I tested out a Samsung 65″ QN900C Neo QLED 8K that's on sale at JB Hifi. According to the JB employee I spoke to they are running out the last 8K TVs and have no plans to get more.

In my testing of that 8K TV YouTube had a 3840*2160 viewport which is better than the 1920*1080 of my Hisense TV. When running a web browser the codeshack page reported it as 1920*1080 with a 1.25* pixel density (presumably a configuration option) that gave a usable resolution of 1536*749.

The JB Hifi employee wouldn’t let me connect my own device via HDMI but said that it would work at 8K. I said “so if I buy it I can return it if it doesn’t do 8K HDMI?” and then he looked up the specs and found that it would only do 4K input on HDMI. It seems that actual 8K resolution might work on a Samsung streaming device but that’s not very useful particularly as there probably isn’t much 8K content on any streaming service.

Basically that Samsung allegedly 8K TV only works at 4K at best.

It seems to be impossible to buy an 8K TV or monitor in Australia that will actually display 8K content. ASUS has a 6K 32″ monitor with 6016*3384 resolution for $2016 [2]. Accounting for inflation, $2016 wouldn't be the most expensive monitor I've ever bought, and hopefully prices will continue to drop.

Rumour has it that there are 8K TVs available in China that actually take 8K input. Getting one to Australia might not be easy but it’s something that I will investigate.

Also I’m trying to sell my allegedly 8K TV.

22 December, 2025 07:52AM by etbe

François Marier

LXC setup on Debian forky

Similar to what I wrote for Ubuntu 18.04, here is how to set up an LXC container on Debian forky.

Installing the required packages

Start by installing the necessary packages on the host:

apt install lxc libvirt-clients debootstrap

Network setup

Ensure the veth kernel module is loaded by adding the following to /etc/modules-load.d/lxc-local.conf:

veth

and then loading it manually for now:

modprobe veth

Enable IPv4 forwarding by putting this in /etc/sysctl.d/lxc-local.conf:

net.ipv4.ip_forward=1

and applying it:

sysctl -p /etc/sysctl.d/lxc-local.conf

Restart the LXC network bridge:

systemctl restart lxc-net.service

Ensure that container traffic is not blocked by the host firewall, for example by adding the following to /etc/network/iptables.up.rules:

-A FORWARD -d 10.0.3.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 10.0.3.0/24 -j ACCEPT
-A INPUT -d 224.0.0.251 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 239.255.255.250 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 10.0.3.255 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 10.0.3.1 -s 10.0.3.0/24 -j ACCEPT

and applying the rules:

iptables-apply

Creating a container

To see all available images, run:

lxc-create -n foo --template=download -- --list

and then create a Debian forky container using:

lxc-create -n forky -t download -- -d debian -r forky -a amd64

Start and stop the container like this:

lxc-start -n forky
lxc-stop -n forky

Connecting to the container

Attach to the running container's console:

lxc-attach -n forky

Inside the container, you can change the root password by typing:

passwd

and install some essential packages:

apt install openssh-server vim

To find the container's IP address (for example, so that you can ssh to it from the host):

lxc-ls --fancy

22 December, 2025 02:47AM

hackergotchi for C.J. Adams-Collier

C.J. Adams-Collier

I’m learning about perlguts today.


im-learning-about-perlguts-today.png


## 0.23	2025-12-20

commit be15aa25dea40aea66a8534143fb81b29d2e6c08
Author: C.J. Collier 
Date:   Sat Dec 20 22:40:44 2025 +0000

    Fixes C-level test infrastructure and adds more test cases for upb_to_sv conversions.
    
    - **Makefile.PL:**
        - Allow `extra_src` in `c_test_config.json` to be an array.
        - Add ASan flags to CCFLAGS and LDDLFLAGS for better debugging.
        - Corrected echo newlines in `test_c` target.
    - **c_test_config.json:**
        - Added missing type test files to `deps` and `extra_src` for `convert/sv_to_upb` and `convert/upb_to_sv` test runners.
    - **t/c/convert/upb_to_sv.c:**
        - Fixed a double free of `test_pool`.
        - Added missing includes for type test headers.
        - Updated test plan counts.
    - **t/c/convert/sv_to_upb.c:**
        - Added missing includes for type test headers.
        - Updated test plan counts.
        - Corrected Perl interpreter initialization.
    - **t/c/convert/types/**:
        - Added missing `test_util.h` include in new type test headers.
        - Completed the set of `upb_to_sv` test cases for all scalar types by adding optional and repeated tests for `sfixed32`, `sfixed64`, `sint32`, and `sint64`, and adding repeated tests to the remaining scalar type files.
    - **Documentation:**
        - Updated `01-xs-testing.md` with more debugging tips, including ASan usage and checking for double frees and typos.
        - Updated `xs_learnings.md` with details from the recent segfault.
        - Updated `llm-plan-execution-instructions.md` to emphasize debugging steps.


## 0.22	2025-12-19

commit 2c171d9a5027e0150eae629729c9104e7f6b9d2b
Author: C.J. Collier 
Date:   Fri Dec 19 23:41:02 2025 +0000

    feat(perl,testing): Initialize C test framework and build system
    
    This commit sets up the foundation for the C-level tests and the build system for the Perl Protobuf module:
    
    1.  **Makefile.PL Enhancements:**
        *   Integrates `Devel::PPPort` to generate `ppport.h` for better portability.
        *   Object files now retain their path structure (e.g., `xs/convert/sv_to_upb.o`) instead of being flattened, improving build clarity.
        *   The `MY::postamble` is significantly revamped to dynamically generate build rules for all C tests located in `t/c/` based on the `t/c/c_test_config.json` file.
        *   C tests are linked against `libprotobuf_common.a` and use `ExtUtils::Embed` flags.
        *   Added `JSON::MaybeXS` to `PREREQ_PM`.
        *   The `test` target now also depends on the `test_c` target.
    
    2.  **C Test Infrastructure (`t/c/`):
        *   Introduced `t/c/c_test_config.json` to configure individual C test builds, specifying dependencies and extra source files.
        *   Created `t/c/convert/test_util.c` and `.h` for shared test functions like loading descriptors.
        *   Initial `t/c/convert/upb_to_sv.c` and `t/c/convert/sv_to_upb.c` test runners.
        *   Basic `t/c/integration/030_protobuf_coro.c` for Coro safety testing on core utils using `libcoro`.
        *   Basic `t/c/integration/035_croak_test.c` for testing exception handling.
        *   Basic `t/c/integration/050_convert.c` for integration testing conversions.
    
    3.  **Test Proto:** Updated `t/data/test.proto` with more field types for conversion testing and regenerated `test_descriptor.bin`.
    
    4.  **XS Test Harness (`t/c/upb-perl-test.h`):** Added `like_n` macro for length-aware regex matching.
    
    5.  **Documentation:** Updated architecture and plan documents to reflect the C test structure.
    6.  **ERRSV Testing:** Note that the C tests (`t/c/`) will primarily check *if* a `croak` occurs (i.e., that the exception path is taken), but will not assert on the string content of `ERRSV`. Reliably testing `$@` content requires the full Perl test environment with `Test::More`, which will be done in the `.t` files when testing the Perl API.
    
    This provides a solid base for developing and testing the XS and C components of the module.


## 0.21	2025-12-18

commit a8b6b6100b2cf29c6df1358adddb291537d979bc
Author: C.J. Collier 
Date:   Thu Dec 18 04:20:47 2025 +0000

    test(C): Add integration tests for Milestone 2 components
    
    - Created t/c/integration/030_protobuf.c to test interactions
      between obj_cache, arena, and utils.
    - Added this test to t/c/c_test_config.json.
    - Verified that all C tests for Milestones 2 and 3 pass,
      including the libcoro-based stress test.


## 0.20	2025-12-18

commit 0fcad68680b1f700a83972a7c1c48bf3a6958695
Author: C.J. Collier 
Date:   Thu Dec 18 04:14:04 2025 +0000

    docs(plan): Add guideline review reminders to milestones
    
    - Added a "[ ] REFRESH: Review all documents in @perl/doc/guidelines/**"
      checklist item to the start of each component implementation
      milestone (C and Perl layers).
    - This excludes Integration Test milestones.


## 0.19	2025-12-18

commit 987126c4b09fcdf06967a98fa3adb63d7de59a34
Author: C.J. Collier 
Date:   Thu Dec 18 04:05:53 2025 +0000

    docs(plan): Add C-level and Perl-level Coro tests to milestones
    
    - Added checklist items for `libcoro`-based C tests
      (e.g., `t/c/integration/050_convert_coro.c`) to all C layer
      integration milestones (050 through 220).
    - Updated `030_Integration_Protobuf.md` to standardise checklist
      items for the existing `030_protobuf_coro.c` test.
    - Removed the single `xt/author/coro-safe.t` item from
      `010_Build.md`.
    - Added checklist items for Perl-level `Coro` tests
      (e.g., `xt/coro/240_arena.t`) to each Perl layer
      integration milestone (240 through 400).
    - Created `perl/t/c/c_test_config.json` to manage C test
      configurations externally.
    - Updated `perl/doc/architecture/testing/01-xs-testing.md` to describe
      both C-level `libcoro` and Perl-level `Coro` testing strategies.


## 0.18	2025-12-18

commit 6095a5a610401a6035a81429d0ccb9884d53687b
Author: C.J. Collier 
Date:   Thu Dec 18 02:34:31 2025 +0000

    added coro testing to c layer milestones


## 0.17	2025-12-18

commit cc0aae78b1f7f675fc8a1e99aa876c0764ea1cce
Author: C.J. Collier 
Date:   Thu Dec 18 02:26:59 2025 +0000

    docs(plan): Refine test coverage checklist items for SMARTness
    
    - Updated the "Tests provide full coverage" checklist items in
      C layer plan files (020, 040, 060, 080, 100, 120, 140, 160, 180, 200)
      to explicitly mention testing all public functions in the
      corresponding header files.
    - Expanded placeholder checklists in 140, 160, 180, 200.
    - Updated the "Tests provide full coverage" and "Add coverage checks"
      checklist items in Perl layer plan files (230, 250, 270, 290, 310, 330,
      350, 370, 390) to be more specific about the scope of testing
      and the use of `Test::TestCoverage`.
    - Expanded Well-Known Types milestone (350) to detail each type.


## 0.16	2025-12-18

commit e4b601f14e3817a17b0f4a38698d981dd4cb2818
Author: C.J. Collier 
Date:   Thu Dec 18 02:07:35 2025 +0000

    docs(plan): Full refactoring of C and Perl plan files
    
    - Split both ProtobufPlan-C.md and ProtobufPlan-Perl.md into
      per-milestone files under the `perl/doc/plan/` directory.
    - Introduced Integration Test milestones after each component
      milestone in both C and Perl plans.
    - Numbered milestone files sequentially (e.g., 010_Build.md,
      230_Perl_Arena.md).
    - Updated main ProtobufPlan-C.md and ProtobufPlan-Perl.md to
      act as Tables of Contents.
    - Ensured consistent naming for integration test files
      (e.g., `t/c/integration/030_protobuf.c`, `t/integration/260_descriptor_pool.t`).
    - Added architecture review steps to the end of all milestones.
    - Moved Coro safety test to C layer Milestone 1.
    - Updated Makefile.PL to support new test structure and added Coro.
    - Moved and split t/c/convert.c into t/c/convert/*.c.
    - Moved other t/c/*.c tests into t/c/protobuf/*.c.
    - Deleted old t/c/convert.c.


## 0.15	2025-12-17

commit 649cbacf03abb5e7293e3038bb451c0406e9d0ce
Author: C.J. Collier 
Date:   Wed Dec 17 23:51:22 2025 +0000

    docs(plan): Refactor and reset ProtobufPlan.md
    
    - Split the plan into ProtobufPlan-C.md and ProtobufPlan-Perl.md.
    - Reorganized milestones to clearly separate C layer and Perl layer development.
    - Added more granular checkboxes for each component:
      - C Layer: Create test, Test coverage, Implement, Tests pass.
      - Perl Layer: Create test, Test coverage, Implement Module/XS, Tests pass, C-Layer adjustments.
    - Reset all checkboxes to `[ ]` to prepare for a full audit.
    - Updated status in architecture/api and architecture/core documents to "Not Started".
    
    feat(obj_cache): Add unregister function and enhance tests
    
    - Added `protobuf_unregister_object` to `xs/protobuf/obj_cache.c`.
    - Updated `xs/protobuf/obj_cache.h` with the new function declaration.
    - Expanded tests in `t/c/protobuf_obj_cache.c` to cover unregistering,
      overwriting keys, and unregistering non-existent keys.
    - Corrected the test plan count in `t/c/protobuf_obj_cache.c` to 17.


## 0.14	2025-12-17

commit 40b6ad14ca32cf16958d490bb575962f88d868a1
Author: C.J. Collier 
Date:   Wed Dec 17 23:18:27 2025 +0000

    feat(arena): Complete C layer for Arena wrapper
    
    This commit finalizes the C-level implementation for the Protobuf::Arena wrapper.
    
    - Adds `PerlUpb_Arena_Destroy` for proper cleanup from Perl's DEMOLISH.
    - Enhances error checking in `PerlUpb_Arena_Get`.
    - Expands C-level tests in `t/c/protobuf_arena.c` to cover memory allocation
      on the arena and lifecycle through `PerlUpb_Arena_Destroy`.
    - Corrects embedded Perl initialization in the C test.
    
    docs(plan): Refactor ProtobufPlan.md
    
    - Restructures the development plan to clearly separate "C Layer" and
      "Perl Layer" tasks within each milestone.
    - This aligns the plan with the "C-First Implementation Strategy" and improves progress tracking.


## 0.13	2025-12-17

commit c1e566c25f62d0ae9f195a6df43b895682652c71
Author: C.J. Collier 
Date:   Wed Dec 17 22:00:40 2025 +0000

    refactor(perl): Rename C tests and enhance Makefile.PL
    
    - Renamed test files in `t/c/` to better match the `xs` module structure:
        - `01-cache.c` -> `protobuf_obj_cache.c`
        - `02-arena.c` -> `protobuf_arena.c`
        - `03-utils.c` -> `protobuf_utils.c`
        - `04-convert.c` -> `convert.c`
        - `load_test.c` -> `upb_descriptor_load.c`
    - Updated `perl/Makefile.PL` to reflect the new test names in `MY::postamble`'s `$c_test_config`.
    - Refactored the `$c_test_config` generation in `Makefile.PL` to reduce repetition by using a default flags hash and common dependencies array.
    - Added a `fail()` macro to `perl/t/c/upb-perl-test.h` for consistency.
    - Modified `t/c/upb_descriptor_load.c` to use the `t/c/upb-perl-test.h` macros, making its output consistent with other C tests.
    - Added a skeleton for `t/c/convert.c` to test the conversion functions.
    - Updated documentation in `ProtobufPlan.md` and `architecture/testing/01-xs-testing.md` to reflect new test names.


## 0.12	2025-12-17

commit d8cb5dd415c6c129e71cd452f78e29de398a82c9
Author: C.J. Collier 
Date:   Wed Dec 17 20:47:38 2025 +0000

    feat(perl): Refactor XS code into subdirectories
    
    This commit reorganizes the C code in the `perl/xs/` directory into subdirectories, mirroring the structure of the Python UPB extension. This enhances modularity and maintainability.
    
    - Created subdirectories for each major component: `convert`, `descriptor`, `descriptor_containers`, `descriptor_pool`, `extension_dict`, `map`, `message`, `protobuf`, `repeated`, and `unknown_fields`.
    - Created skeleton `.h` and `.c` files within each subdirectory to house the component-specific logic.
    - Updated top-level component headers (e.g., `perl/xs/descriptor.h`) to include the new sub-headers.
    - Updated top-level component source files (e.g., `perl/xs/descriptor.c`) to include their main header and added stub initialization functions (e.g., `PerlUpb_InitDescriptor`).
    - Moved code from the original `perl/xs/protobuf.c` to new files in `perl/xs/protobuf/` (arena, obj_cache, utils).
    - Moved code from the original `perl/xs/convert.c` to new files in `perl/xs/convert/` (upb_to_sv, sv_to_upb).
    - Updated `perl/Makefile.PL` to use a glob (`xs/*/*.c`) to find the new C source files in the subdirectories.
    - Added `perl/doc/architecture/core/07-xs-file-organization.md` to document the new structure.
    - Updated `perl/doc/ProtobufPlan.md` and other architecture documents to reference the new organization.
    - Corrected self-referential includes in the newly created .c files.
    
    This restructuring provides a solid foundation for further development and makes it easier to port logic from the Python implementation.


## 0.11	2025-12-17

commit cdedcd13ded4511b0464f5d3bdd72ce6d34e73fc
Author: C.J. Collier 
Date:   Wed Dec 17 19:57:52 2025 +0000

    feat(perl): Implement C-first testing and core XS infrastructure
    
    This commit introduces a significant refactoring of the Perl XS extension, adopting a C-first development approach to ensure a robust foundation.
    
    Key changes include:
    
    -   **C-Level Testing Framework:** Established a C-level testing system in `t/c/` with a dedicated Makefile, using an embedded Perl interpreter. Initial tests cover the object cache (`01-cache.c`), arena wrapper (`02-arena.c`), and utility functions (`03-utils.c`).
    -   **Core XS Infrastructure:**
        -   Implemented a global object cache (`xs/protobuf.c`) to manage Perl wrappers for UPB objects, using weak references.
        -   Created an `upb_Arena` wrapper (`xs/protobuf.c`).
        -   Consolidated common XS helper functions into `xs/protobuf.h` and `xs/protobuf.c`.
    -   **Makefile.PL Enhancements:** Updated to support building and linking C tests, incorporating flags from `ExtUtils::Embed`, and handling both `.c` and `.cc` source files.
    -   **XS File Reorganization:** Restructured XS files to mirror the Python UPB extension's layout (e.g., `message.c`, `descriptor.c`). Removed older, monolithic `.xs` files.
    -   **Typemap Expansion:** Added extensive typemap entries in `perl/typemap` to handle conversions between Perl objects and various `const upb_*Def*` pointers.
    -   **Descriptor Tests:** Added a new test suite `t/02-descriptor.t` to validate descriptor loading and accessor methods.
    -   **Documentation:** Updated development plans and guidelines (`ProtobufPlan.md`, `xs_learnings.md`, etc.) to reflect the C-first strategy, new testing methods, and lessons learned.
    -   **Build Cleanup:** Removed `ppport.h` from `.gitignore` as it's no longer used, due to `-DPERL_NO_PPPORT` being set in `Makefile.PL`.
    
    This C-first approach allows for more isolated and reliable testing of the core logic interacting with the UPB library before higher-level Perl APIs are built upon it.


## 0.10	2025-12-17

commit 1ef20ade24603573905cb0376670945f1ab5d829
Author: C.J. Collier 
Date:   Wed Dec 17 07:08:29 2025 +0000

    feat(perl): Implement C-level tests and core XS utils
    
    This commit introduces a C-level testing framework for the XS layer and implements key components:
    
    1.  **C-Level Tests (`t/c/`)**:
        *   Added `t/c/Makefile` to build standalone C tests.
        *   Created `t/c/upb-perl-test.h` with macros for TAP-compliant C tests (`plan`, `ok`, `is`, `is_string`, `diag`).
        *   Implemented `t/c/01-cache.c` to test the object cache.
        *   Implemented `t/c/02-arena.c` to test `Protobuf::Arena` wrappers.
        *   Implemented `t/c/03-utils.c` to test string utility functions.
        *   Corrected include paths and diagnostic messages in C tests.
    
    2.  **XS Object Cache (`xs/protobuf.c`)**:
        *   Switched to using stringified pointers (`%p`) as hash keys for stability.
        *   Fixed a critical double-free bug in `PerlUpb_ObjCache_Delete` by removing an extra `SvREFCNT_dec` on the lookup key.
    
    3.  **XS Arena Wrapper (`xs/protobuf.c`)**:
        *   Corrected `PerlUpb_Arena_New` to use `newSVrv` and `PTR2IV` for opaque object wrapping.
        *   Corrected `PerlUpb_Arena_Get` to safely unwrap the arena pointer.
    
    4.  **Makefile.PL (`perl/Makefile.PL`)**:
        *   Added `-Ixs` to `INC` to allow C tests to find `t/c/upb-perl-test.h` and `xs/protobuf.h`.
        *   Added `LIBS` to link `libprotobuf_common.a` into the main `Protobuf.so`.
        *   Added C test targets `01-cache`, `02-arena`, `03-utils` to the test config in `MY::postamble`.
    
    5.  **Protobuf.pm (`perl/lib/Protobuf.pm`)**:
        *   Added `use XSLoader;` to load the compiled XS code.
    
    6.  **New files `xs/util.h`**:
        *   Added initial type conversion function.
    
    These changes establish a foundation for testing the C-level interface with UPB and fix crucial bugs in the object cache implementation.


## 0.09	2025-12-17

commit 07d61652b032b32790ca2d3848243f9d75ea98f4
Author: C.J. Collier 
Date:   Wed Dec 17 04:53:34 2025 +0000

    feat(perl): Build system and C cache test for Perl XS
    
    This commit introduces the foundational pieces for the Perl XS implementation, focusing on the build system and a C-level test for the object cache.
    
    -   **Makefile.PL:**
        -   Refactored C test compilation rules in `MY::postamble` to use a hash (`$c_test_config`) for better organization and test-specific flags.
        -   Integrated `ExtUtils::Embed` to provide necessary compiler and linker flags for embedding the Perl interpreter, specifically for the `t/c/01-cache.c` test.
        -   Correctly constructs the path to the versioned Perl library (`libperl.so.X.Y.Z`) using `$Config{archlib}` and `$Config{libperl}` to ensure portability.
        -   Removed `VERSION_FROM` and `ABSTRACT_FROM` to avoid dependency on `.pm` files for now.
    
    -   **C Cache Test (t/c/01-cache.c):**
        -   Added a C test to exercise the object cache functions implemented in `xs/protobuf.c`.
        -   Includes tests for adding, getting, deleting, and weak reference behavior.
    
    -   **XS Cache Implementation (xs/protobuf.c, xs/protobuf.h):**
        -   Implemented `PerlUpb_ObjCache_Init`, `PerlUpb_ObjCache_Add`, `PerlUpb_ObjCache_Get`, `PerlUpb_ObjCache_Delete`, and `PerlUpb_ObjCache_Destroy`.
        -   Uses a Perl hash (`HV*`) for the cache.
        -   Keys are string representations of the C pointers, created using `snprintf` with `"%llx"`.
        -   Values are weak references (`sv_rvweaken`) to the Perl objects (`SV*`).
        -   `PerlUpb_ObjCache_Get` now correctly returns an incremented reference to the original SV, not a copy.
        -   `PerlUpb_ObjCache_Destroy` now clears the hash before decrementing its refcount.
    
    -   **t/c/upb-perl-test.h:**
        -   Updated `is_sv` to perform direct pointer comparison (`got == expected`).
    
    -   **Minor:** Added `util.h` (currently empty), updated `typemap`.
    
    These changes establish a working C-level test environment for the XS components.


## 0.08	2025-12-17

commit d131fd22ea3ed8158acb9b0b1fe6efd856dc380e
Author: C.J. Collier 
Date:   Wed Dec 17 02:57:48 2025 +0000

    feat(perl): Update docs and core XS files
    
    - Explicitly add TDD cycle to ProtobufPlan.md.
    - Clarify mirroring of Python implementation in upb-interfacing.md for both C and Perl layers.
    - Branch and adapt python/protobuf.h and python/protobuf.c to perl/xs/protobuf.h and perl/xs/protobuf.c, including the object cache implementation. Removed old cache.* files.
    - Create initial C test for the object cache in t/c/01-cache.c.


## 0.07	2025-12-17

commit 56fd6862732c423736a2f9a9fb1a2816fc59e9b0
Author: C.J. Collier 
Date:   Wed Dec 17 01:09:18 2025 +0000

    feat(perl): Align Perl UPB architecture docs with Python
    
    Updates the Perl Protobuf architecture documents to more closely align with the design and implementation strategies used in the Python UPB extension.
    
    Key changes:
    
    -   **Object Caching:** Mandates a global, per-interpreter cache using weak references for all UPB-derived objects, mirroring Python's `PyUpb_ObjCache`.
    -   **Descriptor Containers:** Introduces a new document outlining the plan to use generic XS container types (Sequence, ByNameMap, ByNumberMap) with vtables to handle collections of descriptors, similar to Python's `descriptor_containers.c`.
    -   **Testing:** Adds a note to the testing strategy to port relevant test cases from the Python implementation to ensure feature parity.


## 0.06	2025-12-17

commit 6009ce6ab64eccce5c48729128e5adf3ef98e9ae
Author: C.J. Collier 
Date:   Wed Dec 17 00:28:20 2025 +0000

    feat(perl): Implement object caching and fix build
    
    This commit introduces several key improvements to the Perl XS build system and core functionality:
    
    1.  **Object Caching:**
        *   Introduces `xs/protobuf.c` and `xs/protobuf.h` to implement a caching mechanism (`protobuf_c_to_perl_obj`) for wrapping UPB C pointers into Perl objects. This uses a hash and weak references to ensure object identity and prevent memory leaks.
        *   Updates the `typemap` to use `protobuf_c_to_perl_obj` for `upb_MessageDef *` output, ensuring descriptor objects are cached.
        *   Corrected `sv_weaken` to the correct `sv_rvweaken` function.
    
    2.  **Makefile.PL Enhancements:**
        *   Switched to using the Bazel-generated UPB descriptor sources from `bazel-bin/src/google/protobuf/_virtual_imports/descriptor_proto/google/protobuf/`.
        *   Updated `INC` paths to correctly locate the generated headers.
        *   Refactored `MY::dynamic_lib` to ensure the static library `libprotobuf_common.a` is correctly linked into each generated `.so` module, resolving undefined symbol errors.
        *   Overrode `MY::test` to use `prove -b -j$(nproc) t/*.t xt/*.t` for running tests.
        *   Cleaned up `LIBS` and `LDDLFLAGS` usage.
    
    3.  **Documentation:**
        *   Updated `ProtobufPlan.md` to reflect the current status and design decisions.
        *   Reorganized architecture documents into subdirectories.
        *   Added `object-caching.md` and `c-perl-interface.md`.
        *   Updated `llm-guidance.md` with notes on `upb/upb.h` and `sv_rvweaken`.
    
    4.  **Testing:**
        *   Fixed `xt/03-moo_immutable.t` to skip tests if no Moo modules are found.
    
    This resolves the build issues and makes the core test suite pass.


## 0.05	2025-12-16

commit 177d2f3b2608b9d9c415994e076a77d8560423b8
Author: C.J. Collier 
Date:   Tue Dec 16 19:51:36 2025 +0000

    Refactor: Rename namespace to Protobuf, build system and doc updates
    
    This commit refactors the primary namespace from `ProtoBuf` to `Protobuf`
    to align with the style guide. This involves renaming files, directories,
    and updating package names within all Perl and XS files.
    
    **Namespace Changes:**
    
    *   Renamed `perl/lib/ProtoBuf` to `perl/lib/Protobuf`.
    *   Moved and updated `ProtoBuf.pm` to `Protobuf.pm`.
    *   Moved and updated `ProtoBuf::Descriptor` to `Protobuf::Descriptor` (.pm & .xs).
    *   Removed other `ProtoBuf::*` stubs (Arena, DescriptorPool, Message).
    *   Updated `MODULE` and `PACKAGE` in `Descriptor.xs`.
    *   Updated `NAME`, `*_FROM` in `perl/Makefile.PL`.
    *   Replaced `ProtoBuf` with `Protobuf` throughout `perl/typemap`.
    *   Updated namespaces in test files `t/01-load-protobuf-descriptor.t` and `t/02-descriptor.t`.
    *   Updated namespaces in all documentation files under `perl/doc/`.
    *   Updated paths in `perl/.gitignore`.
    
    **Build System Enhancements (Makefile.PL):**
    
    *   Included `xs/*.c` files in the common object files list.
    *   Added `-I.` to the `INC` paths.
    *   Switched from `MYEXTLIB` to `LIBS => ['-L$(CURDIR) -lprotobuf_common']` for linking.
    *   Removed custom keys passed to `WriteMakefile` for postamble.
    *   `MY::postamble` now sources variables directly from the main script scope.
    *   Added `all :: ${common_lib}` dependency in `MY::postamble`.
    *   Added `t/c/load_test.c` compilation rule in `MY::postamble`.
    *   Updated `clean` target to include `blib`.
    *   Added more modules to `TEST_REQUIRES`.
    *   Removed the explicit `PM` and `XS` keys from `WriteMakefile`, relying on `XSMULTI => 1`.
    
    **New Files:**
    
    *   `perl/lib/Protobuf.pm`
    *   `perl/lib/Protobuf/Descriptor.pm`
    *   `perl/lib/Protobuf/Descriptor.xs`
    *   `perl/t/01-load-protobuf-descriptor.t`
    *   `perl/t/02-descriptor.t`
    *   `perl/t/c/load_test.c`: Standalone C test for UPB.
    *   `perl/xs/types.c` & `perl/xs/types.h`: For Perl/C type conversions.
    *   `perl/doc/architecture/upb-interfacing.md`
    *   `perl/xt/03-moo_immutable.t`: Test for Moo immutability.
    
    **Deletions:**
    
    *   Old test files: `t/00_load.t`, `t/01_basic.t`, `t/02_serialize.t`, `t/03_message.t`, `t/04_descriptor_pool.t`, `t/05_arena.t`, `t/05_message.t`.
    *   Removed `lib/ProtoBuf.xs` as it's not needed with `XSMULTI`.
    
    **Other:**
    
    *   Updated `test_descriptor.bin` (binary change).
    *   Significant content updates to markdown documentation files in `perl/doc/architecture` and `perl/doc/internal` reflecting the new architecture and learnings.


## 0.04	2025-12-14

commit 92de5d482c8deb9af228f4b5ce31715d3664d6ee
Author: C.J. Collier 
Date:   Sun Dec 14 21:28:19 2025 +0000

    feat(perl): Implement Message object creation and fix lifecycles
    
    This commit introduces the basic structure for `ProtoBuf::Message` object
    creation, linking it with `ProtoBuf::Descriptor` and `ProtoBuf::DescriptorPool`,
    and crucially resolves a SEGV by fixing object lifecycle management.
    
    Key Changes:
    
    1.  **`ProtoBuf::Descriptor`:** Added `_pool` attribute to hold a strong
        reference to the parent `ProtoBuf::DescriptorPool`. This is essential to
        prevent the pool and its C `upb_DefPool` from being garbage collected
        while a descriptor is still in use.
    
    2.  **`ProtoBuf::DescriptorPool`:**
        *   `find_message_by_name`: Now passes the `$self` (the pool object) to the
            `ProtoBuf::Descriptor` constructor to establish the lifecycle link.
        *   XSUB `pb_dp_find_message_by_name`: Updated to accept the pool `SV*` and
            store it in the descriptor's `_pool` attribute.
        *   XSUB `_load_serialized_descriptor_set`: Renamed to avoid clashing with the
            Perl method name. The Perl wrapper now correctly calls this internal XSUB.
        *   `DEMOLISH`: Made safer by checking for attribute existence.
    
    3.  **`ProtoBuf::Message`:**
        *   Implemented using Moo with lazy builders for `_upb_arena` and
            `_upb_message`.
        *   `_descriptor` is a required argument to `new()`.
        *   XS functions added for creating the arena (`pb_msg_create_arena`) and
            the `upb_Message` (`pb_msg_create_upb_message`).
        *   `pb_msg_create_upb_message` now extracts the `upb_MessageDef*` from the
            descriptor and uses `upb_MessageDef_MiniTable()` to get the minitable
            for `upb_Message_New()`.
        *   `DEMOLISH`: Added to free the message's arena.
    
    4.  **`Makefile.PL`:**
        *   Added `-g` to `CCFLAGS` for debugging symbols.
        *   Added Perl CORE include path to `MY::postamble`'s `base_flags`.
    
    5.  **Tests:**
        *   `t/04_descriptor_pool.t`: Updated to check the structure of the
            returned `ProtoBuf::Descriptor`.
        *   `t/05_message.t`: Now uses a descriptor obtained from a real pool to
            test `ProtoBuf::Message->new()`.
    
    6.  **Documentation:**
        *   Updated `ProtobufPlan.md` to reflect progress.
        *   Updated several files in `doc/architecture/` to match the current
            implementation details, especially regarding arena management and object
            lifecycles.
        *   Added `doc/internal/development_cycle.md` and `doc/internal/xs_learnings.md`.
    
    With these changes, the SEGV is resolved, and message objects can be successfully
    created from descriptors.


## 0.03	2025-12-14

commit 6537ad23e93680c2385e1b571d84ed8dbe2f68e8
Author: C.J. Collier 
Date:   Sun Dec 14 20:23:41 2025 +0000

    Refactor(perl): Object-Oriented DescriptorPool with Moo
    
    This commit refactors the `ProtoBuf::DescriptorPool` to be fully object-oriented using Moo, and resolves several issues related to XS, typemaps, and test data.
    
    Key Changes:
    
    1.  **Moo Object:** `ProtoBuf::DescriptorPool.pm` now uses `Moo` to define the class. The `upb_DefPool` pointer is stored as a lazy attribute `_upb_defpool`.
    2.  **XS Lifecycle:** `DescriptorPool.xs` now has `pb_dp_create_pool` called by the Moo builder and `pb_dp_free_pool` called from `DEMOLISH` to manage the `upb_DefPool` lifecycle per object.
    3.  **Typemap:** The `perl/typemap` file has been significantly updated to handle the conversion between the `ProtoBuf::DescriptorPool` Perl object and the `upb_DefPool *` C pointer. This includes:
        *   Mapping `upb_DefPool *` to `T_PTR`.
        *   An `INPUT` section for `ProtoBuf::DescriptorPool` to extract the pointer from the object's hash, triggering the lazy builder if needed via `call_method`.
        *   An `OUTPUT` section for `upb_DefPool *` to convert the pointer back to a Perl integer, used by the builder.
    4.  **Method Renaming:** `add_file_descriptor_set_binary` is now `load_serialized_descriptor_set`.
    5.  **Test Data:**
        *   Added `perl/t/data/test.proto` with a sample message and enum.
        *   Generated `perl/t/data/test_descriptor.bin` using `protoc`.
        *   Removed `t/data/` from `.gitignore` to ensure test data is versioned.
    6.  **Test Update:** `t/04_descriptor_pool.t` is updated to use the new OO interface, load the generated descriptor set, and check for message definitions.
    7.  **Build Fixes:**
        *   Corrected `#include` paths in `DescriptorPool.xs` to be relative to the `upb/` directory (e.g., `upb/wire/decode.h`).
        *   Added `-I../upb` to `CCFLAGS` in `Makefile.PL`.
        *   Reordered `INC` paths in `Makefile.PL` to prioritize local headers.
    
    **Note:** While tests now pass in some environments, a SEGV issue persists in `make test` runs, indicating a potential memory or lifecycle issue within the XS layer that needs further investigation.


## 0.02	2025-12-14

commit 6c9a6f1a5f774dae176beff02219f504ea3a6e07
Author: C.J. Collier 
Date:   Sun Dec 14 20:13:09 2025 +0000

    Fix(perl): Correct UPB build integration and generated file handling
    
    This commit resolves several issues to achieve a successful build of the Perl extension:
    
    1.  **Use Bazel Generated Files:** Switched from compiling UPB's stage0 descriptor.upb.c to using the Bazel-generated `descriptor.upb.c` and `descriptor.upb_minitable.c` located in `bazel-bin/src/google/protobuf/_virtual_imports/descriptor_proto/google/protobuf/`.
    2.  **Updated Include Paths:** Added the `bazel-bin` path to `INC` in `WriteMakefile` and to `base_flags` in `MY::postamble` to ensure the generated headers are found during both XS and static library compilation.
    3.  **Removed Stage0:** Removed references to `UPB_STAGE0_DIR` and no longer include headers or source files from `upb/reflection/stage0/`.
    4.  **-fPIC:** Explicitly added `-fPIC` to `CCFLAGS` in `WriteMakefile` and ensured `$(CCFLAGS)` is used in the custom compilation rules in `MY::postamble`. This guarantees all object files in the static library are compiled with position-independent code, resolving linker errors when creating the shared objects for the XS modules.
    5.  **Refined UPB Sources:** Used `File::Find` to recursively find UPB C sources, excluding `/conformance/` and `/reflection/stage0/` to avoid conflicts and unnecessary compilations.
    6.  **Arena Constructor:** Modified `ProtoBuf::Arena::pb_arena_new` XSUB to accept the class name argument passed from Perl, making it a proper constructor.
    7.  **.gitignore:** Added patterns to `perl/.gitignore` to ignore generated C files from XS (`lib/*.c`, `lib/ProtoBuf/*.c`), the copied `src_google_protobuf_descriptor.pb.cc`, and the `t/data` directory.
    8.  **Build Documentation:** Updated `perl/doc/architecture/upb-build-integration.md` to reflect the new build process, including the Bazel prerequisite, include paths, `-fPIC` usage, and `File::Find`.
    
    Build Steps:
    1.  `bazel build //src/google/protobuf:descriptor_upb_proto` (from repo root)
    2.  `cd perl`
    3.  `perl Makefile.PL`
    4.  `make`
    5.  `make test` (Currently has expected failures due to missing test data implementation).


## 0.01	2025-12-14

commit 3e237e8a26442558c94075766e0d4456daaeb71d
Author: C.J. Collier 
Date:   Sun Dec 14 19:34:28 2025 +0000

    feat(perl): Initialize Perl extension scaffold and build system
    
    This commit introduces the `perl/` directory, laying the groundwork for the Perl Protocol Buffers extension. It includes the essential build files, linters, formatter configurations, and a vendored Devel::PPPort for XS portability.
    
    Key components added:
    
    *   **`Makefile.PL`**: The core `ExtUtils::MakeMaker` build script. It's configured to:
        *   Build a static library (`libprotobuf_common.a`) from UPB, UTF8_Range, and generated protobuf C/C++ sources.
        *   Utilize `XSMULTI => 1` to create separate shared objects for `ProtoBuf`, `ProtoBuf::Arena`, and `ProtoBuf::DescriptorPool`.
        *   Link each XS module against the common static library.
        *   Define custom compilation rules in `MY::postamble` to handle C vs. C++ flags and build the static library.
        *   Set up include paths for the project root, UPB, and other dependencies.
    
    *   **XS Stubs (`.xs` files)**:
        *   `lib/ProtoBuf.xs`: Placeholder for the main module's XS functions.
        *   `lib/ProtoBuf/Arena.xs`: XS interface for `upb_Arena` management.
        *   `lib/ProtoBuf/DescriptorPool.xs`: XS interface for `upb_DefPool` management.
    
    *   **Perl Module Stubs (`.pm` files)**:
        *   `lib/ProtoBuf.pm`: Main module, loads XS.
        *   `lib/ProtoBuf/Arena.pm`: Perl class for Arenas.
        *   `lib/ProtoBuf/DescriptorPool.pm`: Perl class for Descriptor Pools.
        *   `lib/ProtoBuf/Message.pm`: Base class for messages (TBD).
    
    *   **Configuration Files**:
        *   `.gitignore`: Ignores build artifacts, editor files, etc.
        *   `.perlcriticrc`: Configures Perl::Critic for static analysis.
        *   `.perltidyrc`: Configures perltidy for code formatting.
    
    *   **`Devel::PPPort`**: Vendored version 3.72 to generate `ppport.h` for XS compatibility across different Perl versions.
    
    *   **`typemap`**: Custom typemap for XS argument/result conversion.
    
    *   **Documentation (`doc/`)**: Initial architecture and plan documents.
    
    This provides a solid foundation for developing the UPB-based Perl extension.


22 December, 2025 01:32AM by C.J. Collier

December 21, 2025

Ian Jackson

Debian’s git transition

tl;dr:

There is a Debian git transition plan. It’s going OK so far but we need help, especially with outreach and updating Debian’s documentation.

Goals of the Debian git transition project

  1. Everyone who interacts with Debian source code should be able to do so entirely in git.

That means, more specifically:

  1. All examination and editing of the source should be performed via normal git operations.

  2. Source code should be transferred and exchanged as git data, not tarballs. git should be the canonical form everywhere.

  3. Upstream git histories should be re-published, traceably, as part of formal git releases published by Debian.

  4. No-one should have to learn about Debian Source Packages, which are bizarre, and have been obsoleted by modern version control.

This is very ambitious, but we have come a long way!

Achievements so far, and current status

We have come a very long way. But, there is still much to do - especially, the git transition team needs your help with adoption, developer outreach, and developer documentation overhaul.

We’ve made big strides towards goals 1 and 4. Goal 2 is partially achieved: we currently have dual running. Goal 3 is within our reach but depends on widespread adoption of tag2upload (and/or dgit push).

Downstreams and users can obtain the source code of any Debian package in git form. (dgit clone, 2013). They can then work with this source code completely in git, including building binaries, merging new versions, even automatically (eg Raspbian, 2016), and all without having to deal with source packages at all (eg Wikimedia 2025).
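
As a concrete sketch of that consumer-side workflow (the package name and suite here are just examples):

$ dgit clone hello sid   # fetch the canonical git view of the package
$ cd hello
$ dgit build             # build it entirely from git, no tarballs needed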

A Debian maintainer can maintain their own package entirely in git. They can obtain upstream source code from git, and do their packaging work in git (git-buildpackage, 2006).

Every Debian maintainer can (and should!) release their package from git reliably and in a standard form (dgit push, 2013; tag2upload, 2025). This is not only more principled, but also more convenient, with a better UX, than pre-dgit tooling like dput.

Indeed a Debian maintainer can now often release their changes to Debian, from git, using only git branches (so no tarballs). Releasing to Debian can be simply pushing a signed tag (tag2upload, 2025).
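
For a package whose git branch is already in good shape, that release step can be as small as this sketch (git-debpush provides the git debpush command; the directory name is hypothetical):

$ cd hello
$ git debpush   # signs and pushes a tag; the tag2upload service builds and uploads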

A Debian maintainer can maintain a stack of changes to upstream source code in git (gbp pq, 2009). They can even maintain such a delta series as a rebasing git branch, directly buildable, and use normal git-rebase-style operations to edit their changes (git-dpm, 2010; git-debrebase, 2018).

An authorised Debian developer can do a modest update to any package in Debian, even one maintained by someone else, working entirely in git in a standard and convenient way (dgit, 2013).

Debian contributors can share their work-in-progress on git forges and collaborate using merge requests, git based code review, and so on. (Alioth, 2003; Salsa, 2018.)

Core engineering principle

The Debian git transition project is based on one core engineering principle:

Every Debian Source Package can be losslessly converted to and from git.

In order to transition away from Debian Source Packages, we need to gateway between the old dsc approach, and the new git approach.

This gateway obviously needs to be bidirectional: source packages uploaded with legacy tooling like dput need to be imported into a canonical git representation; and of course git branches prepared by developers need to be converted to source packages for the benefit of legacy downstream systems (such as the Debian Archive and apt source).

This bidirectional gateway is implemented in src:dgit, and is allowing us to gradually replace dsc-based parts of the Debian system with git-based ones.

Correspondence between dsc and git

A faithful bidirectional gateway must define an invariant:

The canonical git tree, corresponding to a .dsc, is the tree resulting from dpkg-source -x.

This canonical form is sometimes called the “dgit view”. It’s sometimes not the same as the maintainer’s git branch, because many maintainers are still working with “patches-unapplied” git branches. More on this below.

(For 3.0 (quilt) .dscs, the canonical git tree doesn’t include the quilt .pc directory.)
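
A rough way to see the invariant in action (package and version hypothetical): extract the source package, fetch the canonical git view, and compare:

$ dpkg-source -x hello_2.10-3.dsc hello-dsc
$ dgit clone hello sid ./hello-git
$ diff -r --exclude=.git --exclude=.pc hello-dsc hello-git   # should show no differences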

Patches-applied vs patches-unapplied

The canonical git format is “patches applied”. That is:

If Debian has modified the upstream source code, a normal git clone of the canonical branch gives the modified source tree, ready for reading and building.

Many Debian maintainers keep their packages in a different git branch format, where the changes made by Debian, to the upstream source code, are in actual patch files in a debian/patches/ subdirectory.

Patches-applied has a number of important advantages over patches-unapplied:

  • It is familiar to, and doesn’t trick, outsiders to Debian. Debian insiders radically underestimate how weird “patches-unapplied” is. Even expert software developers can get very confused or even accidentally build binaries without security patches!

  • Making changes can be done with just normal git commands, eg git commit. Many Debian insiders working with patches-unapplied are still using quilt(1), a footgun-rich contraption for working with patch files!

  • When developing, one can make changes to upstream code, and to Debian packaging, together, without ceremony. There is no need to switch back and forth between patch queue and packaging branches (as with gbp pq), no need to “commit” patch files, etc. One can always edit every file and commit it with git commit.

The downside is that, with the (bizarre) 3.0 (quilt) source format, the patch files in debian/patches/ must somehow be kept up to date. Nowadays, though, tools like git-debrebase and git-dpm (and dgit for NMUs) make it very easy to work with patches-applied git branches. git-debrebase can deal very ergonomically even with big patch stacks.

(For smaller packages which usually have no patches, plain git merge with an upstream git branch, and a much simpler dsc format, sidesteps the problem entirely.)
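
To make the contrast concrete, here is a sketch of the same one-line fix in both worlds (file and patch names are made up):

# On a patches-applied branch: just edit and commit.
$ $EDITOR src/foo.c
$ git commit -am "Fix hypothetical bug in foo"

# On a patches-unapplied branch: the same change via quilt.
$ quilt push -a
$ quilt new fix-foo.patch
$ quilt add src/foo.c
$ $EDITOR src/foo.c
$ quilt refresh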

Prioritising Debian’s users (and other outsiders)

We want everyone to be able to share and modify the software that they interact with. That means we should make source code truly accessible, on the user’s terms.

Many of Debian’s processes assume everyone is an insider. It’s okay that there are Debian insiders and that people feel part of something that they worked hard to become involved with. But lack of perspective can lead to software which fails to uphold our values.

Our source code practices — in particular, our determination to share properly (and systematically) — are a key part of what makes Debian worthwhile at all. Like Debian’s installer, we want our source code to be useable by Debian outsiders.

This is why we have chosen to privilege a git branch format which is more familiar to the world at large, even if it’s less popular in Debian.

Consequences, some of which are annoying

The requirement that the conversion be bidirectional, lossless, and context-free can be inconvenient.

For example, we cannot support .gitattributes which modify files during git checkin and checkout. .gitattributes cause the meaning of a git tree to depend on the context, in possibly arbitrary ways, so the conversion from git to source package wouldn’t be stable. And, worse, some source packages might not be representable in git at all.

Another example: Maintainers often have existing git branches for their packages, generated with pre-dgit tooling which is less careful and less principled than ours. That can result in discrepancies between git and dsc, which need to be resolved before a proper git-based upload can succeed.

That some maintainers use patches-unapplied, and some patches-applied, means that there has to be some kind of conversion to a standard git representation. Choosing the less-popular patches-applied format as the canonical form means that many packages need their git representation converted. It also means that user- and outsider-facing branches from {browse,git}.dgit.d.o and dgit clone are not always compatible with maintainer branches on Salsa. User-contributed changes need cherry-picking rather than merging, or conversion back to the maintainer format. The good news is that dgit can automate much of this, and the manual parts are usually easy git operations.

Distributing the source code as git

Our source code management should be normal, modern, and based on git. That means the Debian Archive is obsolete and needs to be replaced with a set of git repositories.

The replacement repository for source code formally released to Debian is *.dgit.debian.org. This contains all the git objects for every git-based upload since 2013, including the signed tag for each released package version.

The plan is that it will contain a git view of every uploaded Debian package, by centrally importing all legacy uploads into git.
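
If I have the naming convention right (treat the exact URL pattern as an assumption), getting the released history of a package is then an ordinary clone:

$ git clone https://git.dgit.debian.org/dgit
$ cd dgit
$ git tag | head   # one signed tag per released version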

Tracking the relevant git data, when changes are made in the legacy Archive

Currently, many critical source code management tasks are done by changes to the legacy Debian Archive, which works entirely with dsc files (and the associated tarballs etc). The contents of the Archive are therefore still an important source of truth. But, the Archive’s architecture means it cannot sensibly directly contain git data.

To track changes made in the Archive, we added the Dgit: field to the .dsc of a git-based upload (2013). This declares which git commit this package was converted from, and where those git objects can be obtained.

Thus, given a Debian Source Package from a git-based upload, it is possible for the new git tooling to obtain the equivalent git objects. If the user is going to work in git, there is no need for any tarballs to be downloaded: the git data could be obtained from the depository using the git protocol.

The signed tags, available from the git depository, have standardised metadata which gives traceability back to the uploading Debian contributor.

Why *.dgit.debian.org is not Salsa

We need a git depository - a formal, reliable and permanent git repository of source code actually released to Debian.

Git forges like Gitlab can be very convenient. But Gitlab is not sufficiently secure, and too full of bugs, to be the principal and only archive of all our source code. (The “open core” business model of the Gitlab corporation, and the constant-churn development approach, are critical underlying problems.)

Our git depository lacks forge features like Merge Requests. But:

  • It is dependable, both in terms of reliability and security.
  • It is append-only: once something is pushed, it is permanently recorded.
  • Its access control is precisely that of the Debian Archive.
  • Its ref namespace is standardised and corresponds to Debian releases.
  • Pushes are authorised by PGP signatures, not ssh keys, so traceable.

The dgit git depository outlasted Alioth and it may well outlast Salsa.

We need both a good forge, and the *.dgit.debian.org formal git depository.

Roadmap

In progress

Right now we are quite focused on tag2upload.

We are working hard on eliminating the remaining issues that we feel need to be addressed before declaring the service out of beta.

Future Technology

Whole-archive dsc importer

Currently, the git depository only has git data for git-based package updates (tag2upload and dgit push). Legacy dput-based uploads are not currently present there. This means that the git-based and legacy uploads must be resolved client-side, by dgit clone.

We will want to start importing legacy uploads to git.

Then downstreams and users will be able to get the source code for any package simply with git clone, even if the maintainer is using legacy upload tools like dput.

Support for git-based uploads to security.debian.org

Security patching is a task which would particularly benefit from better and more formal use of git. git-based approaches to applying and backporting security patches are much more convenient than messing about with actual patch files.

Currently, one can use git to help prepare a security upload, but it often involves starting with a dsc import (which lacks the proper git history) or figuring out a package maintainer’s unstandardised git usage conventions on Salsa.

And it is not yet possible to properly perform the security release itself in git.

Internal Debian consumers switch to getting source from git

Buildds, QA work such as lintian checks, and so on, could be simpler if they don’t need to deal with source packages.

And since git is actually the canonical form, we want them to use it directly.

Problems for the distant future

For decades, Debian has been built around source packages. Replacing them is a long and complex process. Certainly source packages are going to continue to be supported for the foreseeable future.

There are no doubt going to be unanticipated problems. There are also foreseeable issues: for example, perhaps there are packages that work very badly when represented in git. We think we can rise to these challenges as they come up.

Mindshare and adoption - please help!

We and our users are very pleased with our technology. It is convenient and highly dependable.

dgit in particular is superb, even if we say so ourselves. As technologists, we have been very focused on building good software, but it seems we have fallen short in the marketing department.

A rant about publishing the source code

git is the preferred form for modification.

Our upstreams are overwhelmingly using git. We are overwhelmingly using git. It is a scandal that for many packages, Debian does not properly, formally and officially publish the git history.

Properly publishing the source code as git means publishing it in a way that means that anyone can automatically and reliably obtain and build the exact source code corresponding to the binaries. The test is: could you use that to build a derivative?

Putting a package in git on Salsa is often a good idea, but it is not sufficient. No standard branch structure is enforced for git on Salsa, nor should it be (so the source can’t be automatically and reliably obtained); the tree is not in a standard form (so it can’t be automatically built); and it is not necessarily identical to the source package. So Vcs-Git fields, and git from Salsa, will never be sufficient to make a derivative.

Debian is not publishing the source code!

The time has come for proper publication of source code by Debian to no longer be a minority sport. Every maintainer of a package whose upstream is using git (which is nearly all packages nowadays) should be basing their work on upstream git, and properly publishing that via tag2upload or dgit.

And it’s not even difficult! The modern git-based tooling provides a far superior upload experience.

A common misunderstanding

dgit push is not an alternative to gbp pq or quilt. Nor is tag2upload. These upload tools complement your existing git workflow. They replace and improve source package building/signing and the subsequent dput. If you are using one of the usual git layouts on Salsa, and your package is in good shape, you can adopt tag2upload and/or dgit push right away.

git-debrebase is distinct, and does provide an alternative way to manage your git packaging, do your upstream rebases, etc.

Documentation

Debian’s documentation all needs to be updated, including particularly instructions for packaging, to recommend use of git-first workflows. Debian should not be importing git-using upstreams’ “release tarballs” into git. (Debian outsiders who discover this practice are typically horrified.) We should use only upstream git, work only in git, and properly release (and publish) in git form.

We, the git transition team, are experts in the technology, and can provide good suggestions. But we do not have the bandwidth to also engage in the massive campaigns of education and documentation updates that are necessary — especially given that (as with any programme for change) many people will be sceptical or even hostile.

So we would greatly appreciate help with writing and outreach.

Personnel

We consider ourselves the Debian git transition team.

Currently we are:

  • Ian Jackson. Author and maintainer of dgit and git-debrebase. Co-creator of tag2upload. Original author of dpkg-source, and inventor in 1996 of Debian Source Packages. Alumnus of the Debian Technical Committee.

  • Sean Whitton. Co-creator of the tag2upload system; author and maintainer of git-debpush. Co-maintainer of dgit. Debian Policy co-Editor. Former Chair of the Debian Technical Committee.

We do most of our heavy-duty development on Salsa.

Thanks

Particular thanks are due to Joey Hess, who, in the now-famous design session in Vaumarcus in 2013, helped invent dgit. Since then we have had a lot of support: most recently political support to help get tag2upload deployed, but also, over the years, helpful bug reports and kind words from our users, as well as translations and code contributions.

Many other people have contributed more generally to support for working with Debian source code in git. We particularly want to mention Guido Günther (git-buildpackage); and of course Alexander Wirt, Joerg Jaspert, Thomas Goirand and Antonio Terceiro (Salsa administrators); and before them the Alioth administrators.




21 December, 2025 11:24PM

December 20, 2025

Ritesh Raj Sarraf

Immutable Debian

Immutable Atomic Linux Distributions

Of late, I’ve been hearing a lot of (good) things about Immutable Linux Distributions from friends, colleagues and mentors. Exploring them has been on my plate for some time, but given the nature of the subject, it kept getting delayed. The reasons are simple: I can only really judge such a product if I use it for some time, and it has to be on my primary daily-driver machine.

Personal life, this year, has been quite challenging as well. Thus it got pushed until now.

Chrome OS

I’ve realized that I’ve been quite late to a lot of Linux parties. Containers, Docker, Kubernetes, Golang, Rust, Immutable Linux and many many more.

Late to the extent that I’ve had a Chromebook lying at home for many months but never got to tinker with it at all.

Having used it for just around 2 weeks now, I can see what a great product Google built with it. In short, this is exactly how Linux desktop integration should be. The GUI integration is just top notch. There’s consistency across all applications rendered on Chrome OS.

The integration of [X]Wayland and friends is equally good. Maybe Google should consider open-sourcing all those components. IIRC, exo, sommelier, xwayland, ash and many more.

I was equally happy to see their Linux Development Environment offering on supported hardware. While tightly integrated, it still allows power users to tinker with things. I was quite impressed to see nested containers in Crostini. Job well done.

All of this explains why there’s much buzz about Immutable Atomic Linux Distributions these days.

Then, there’s the Android integration, which is just awesome in case you care about it. Both libndk and libhoudini are well integrated and nicely usable.

Immutable Linux Distributions

This holiday season I wanted to find and spend some time catching up on stuff I had been putting off.

I chose to explore this subject while trying to remain in familiar Debian land. So my first look was to see whether there was any product derived from the Debian base.

That brought me to Vanilla OS Orchid. This is a fresh-out-of-the-oven project, recently switched to being based on Debian Sid. The previous iteration used Ubuntu as the base.

Vanilla OS turned out to be quite a good experience. The stock offering is built well enough to serve a general audience. And the framework is so wonderfully structured that seasoned users can tinker with it without much fuss.

Vanilla OS uses an A/B partition model for rolling out system updates. When a new OTA update is pushed, it gets applied to the inactive A/B partition and is activated at the next boot. If things break, the user has the option to switch back to the previous state. Just the usual set of expectations one would have of an immutable distribution.

What they’ve done beautifully is:

  • Integration of Device Mapper (LVM) for the A/B partitions
  • Linux Container (OCI) images to provision/flash the A/B partitions
  • The abroot utility, developed for A/B partition management
  • APX (Distrobox) integration for container workflows, with multiple Linux flavors
  • No sudo; everything is done via pkexec

But the thing I liked most in Vanilla OS is custom images. These allow power users to easily tinker with the developer workflow and generate new images tailored to their specific use cases, all of it leveraging GitHub/GitLab CI/CD workflows, which I think is just plain awesome. Given that the payload is in the OCI format, the CI/CD workflow simply generates new OCI images and publishes them to a registry; the same image is then pulled to the client as an OTA update.

Hats off to this small team/community for doing such nice integration work, ultimately producing a superb Immutable Atomic Linux Distribution on a Debian base.

Immutable Linux

My primary work machine has grown over the years on the rolling Debian Testing/Unstable channel. And I never feel much of an itch to format my (primary) machine, no matter how great the counter-offer is.

So that got me wondering how to get some of the bling of the immutable world that I’ve tasted (thanks, Chrome OS and Vanilla OS). With a fair idea of what they offer in features, I drew a line at what I’d want on my primary machine:

  • read-only rootfs
  • read-only /etc/

This also somewhat hardens my system, to the extent that I can’t accidentally cause catastrophic damage to it.

The feature I’m letting go of is the A/B partition (rpm-ostree in Fedora land). While a good feature, integrating it into my current machine would be very, very challenging.

I actually feel that the core assumption the immutable distros make, that all hardware is going to Just Work, is flawed. While Linux has improved substantially over the past years, it is still hit-or-miss with very recent hardware.

Immutable Linux is targeted at the novice user, who won’t accidentally mess with the system. But what would the novice user do if they have issues with recently purchased hardware that they are attempting to run (Immutable) Linux on?

Ritesh’s Immutable Debian

With the premise set, on to sailing in immutable land.

There’s another groundbreaking innovation that has been happening, which I think everyone is aware of, and may be using as well, directly or indirectly.

Artificial Intelligence

While I’ve only been a user for a couple of months as I draft this post, I’m now very much impressed with all this innovation. Being at the consumer end has me appreciating it for what it has offered thus far. And I haven’t even scratched the surface. I’m making attempts at developing an understanding of Machine Learning and Artificial Intelligence, but there’s a looonnngg way to go still.

What I appreciate most is the availability of AI technology. It has helped me be more efficient, and thus I get to spend the gained time with family.

To wrap up: what I tailored my primary OS into wouldn’t have been possible without assistance from AI.

With that, I disclaim that the rest of this article was primarily drafted by my AI companion. It is going to serve me as a reference for the future, when I forget how all of this was structured.

System Architecture: Immutable Debian (Btrfs + MergerFS)

This system is a custom-hardened Immutable Workstation based on Debian Testing/Unstable. It utilizes native Btrfs properties and surgical VFS mounting to isolate the Operating System from persistent data.

1. Storage Strategy: Subvolume Isolation

The system resides on a LUKS-encrypted NVMe partition, using a flattened subvolume layout to separate the “Gold Master” OS from volatile and persistent data.

Mount Point   Subvolume Path       State   Purpose
/             /ROOTVOL             RO      The core OS image.
/etc          /ROOTVOL/etc         RO      System configuration (snapshot-capable).
/home/rrs     /ROOTVOL/home/rrs    RW      User data and Kitty terminal configs.
/var/lib      /ROOTVOL/var/lib     RW      Docker, Apt state, and system DBs.
/var/spool    /ROOTVOL/var/spool   RW      Mail queues and service state.
/swap         /ROOTVOL/swap        RW      Isolated path for No_COW swapfile.
/disk-tmp     /ROOTVOL/disk-tmp    RW      MergerFS overflow tier.
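
A layout like this can be prepared with ordinary btrfs-progs commands; a minimal sketch, assuming the top-level volume (subvolid=5) is mounted at /mnt:

$ sudo btrfs subvolume create /mnt/ROOTVOL/swap       # one subvolume per persistence layer
$ sudo btrfs subvolume create /mnt/ROOTVOL/disk-tmp
$ sudo btrfs subvolume list /mnt                      # note the subvolids for the fstab below
$ sudo btrfs property set /mnt/ROOTVOL ro true        # seal the "Gold Master"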

1.1 /etc/fstab

$ cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# --- ROOT & BOOT ---
/dev/mapper/nvme0n1p3_crypt / btrfs autodefrag,compress=zstd,discard=async,noatime,defaults,ro 0 0
/dev/nvme0n1p2 /boot ext4 defaults 0 2
/dev/nvme0n1p1 /boot/efi vfat umask=0077 0 1
# --- SWAP ---
# Mount the "Portal" to the swap subvolume using UUID (Robust)
UUID=4473b40b-bb46-43d6-b69c-ef17bfcac41c /swap btrfs subvol=/ROOTVOL/swap,defaults,noatime 0 0
# Activate the swap file by path (Correct for files)
/swap/swapfile none swap defaults 0 0
# --- DATA / MEDIA ---
UUID=439e297a-96a5-4f81-8b3a-24559839539d /media/rrs/TOSHIBA btrfs noauto,compress=zstd,space_cache=v2,subvolid=5,subvol=/,user
# --- MERGERFS ---
# --- DISK-TMP (MergerFS Overflow Tier) ---
# Ensure this ID matches your actual disk-tmp subvolume
UUID=4473b40b-bb46-43d6-b69c-ef17bfcac41c /disk-tmp btrfs subvolid=417,discard=async,defaults,noatime,compress=zstd 0 0
tmpfs /ram-tmp tmpfs defaults 0 0
/ram-tmp:/disk-tmp /tmp fuse.mergerfs x-systemd.requires=/ram-tmp,x-systemd.requires=/disk-tmp,defaults,allow_other,use_ino,nonempty,minfreespace=1G,category.create=all,moveonenospc=true 0 0
# --- IMMUTABILITY PERSISTENCE LAYERS ---
# We explicitly mount these subvolumes so they remain Writable later.
# UUID is the same as your /var/lib entry (your main Btrfs volume).
# 1. /var/lib (Docker, Apt state) - ID 50659
UUID=4473b40b-bb46-43d6-b69c-ef17bfcac41c /var/lib btrfs subvolid=50659,discard=async,defaults,noatime,compress=zstd 0 0
# 2. /home/rrs (User Data) - ID 13032
UUID=4473b40b-bb46-43d6-b69c-ef17bfcac41c /home/rrs btrfs subvolid=13032,discard=async,defaults,noatime,compress=zstd 0 0
# 3. /etc (System Config) - ID 13030
UUID=4473b40b-bb46-43d6-b69c-ef17bfcac41c /etc btrfs subvolid=13030,discard=async,defaults,noatime,compress=zstd,ro 0 0
# 4. /var/log (Logs) - ID 406
UUID=4473b40b-bb46-43d6-b69c-ef17bfcac41c /var/log btrfs subvolid=406,discard=async,defaults,noatime,compress=zstd 0 0
# 5. /var/cache (Apt Cache) - ID 409
UUID=4473b40b-bb46-43d6-b69c-ef17bfcac41c /var/cache btrfs subvolid=409,discard=async,defaults,noatime,compress=zstd 0 0
# 6. /var/tmp (Temp files) - ID 401
UUID=4473b40b-bb46-43d6-b69c-ef17bfcac41c /var/tmp btrfs subvolid=401,discard=async,defaults,noatime,compress=zstd 0 0
# /var/spool
UUID=4473b40b-bb46-43d6-b69c-ef17bfcac41c /var/spool btrfs subvolid=50689,discard=async,defaults,noatime,compress=zstd 0 0
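
After a reboot, the resulting layout can be sanity-checked with findmnt:

$ findmnt -t btrfs,fuse.mergerfs,tmpfs -o TARGET,FSTYPE,OPTIONS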

2. Tiered Memory Model (/tmp)

To balance performance and capacity, /tmp is managed via MergerFS:

  • Tier 1 (RAM): tmpfs mounted at /ram-tmp.
  • Tier 2 (Disk): Btrfs subvolume mounted at /disk-tmp.
  • Logic: Files are written to RAM first. If RAM falls below 1GB available, files spill over to the Btrfs disk tier.
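
The same tiering can be reproduced by hand, which helps when debugging the fstab entry above (a sketch; the option set is copied from that entry):

$ sudo mount -t tmpfs tmpfs /ram-tmp
$ sudo mount -t fuse.mergerfs /ram-tmp:/disk-tmp /tmp \
      -o allow_other,use_ino,minfreespace=1G,category.create=all,moveonenospc=true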

3. Hibernation & Swap Logic

  • Size: 33 GiB (Configured for Suspend-to-Disk with 24GB RAM).
  • Attribute: The /swap subvolume is marked No_COW (+C).
  • Kernel Integration:
    • resume=UUID=... (Points to the unlocked LUKS container).
    • resume_offset=... (Physical extent mapping for Btrfs; a creation sketch follows below).
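For reference, here is how such a swapfile could be created and the resume offset obtained. This is a sketch of mine, not a capture from the machine, and it assumes btrfs-progs 6.1+ (for mkswapfile and map-swapfile):

$ sudo chattr +C /swap                          # No_COW on the subvolume, so new files inherit +C
$ sudo btrfs filesystem mkswapfile --size 33g /swap/swapfile
$ sudo swapon /swap/swapfile
$ sudo btrfs inspect-internal map-swapfile -r /swap/swapfile   # prints the value for resume_offset=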

3.1 systemd sleep/Hibernation

$ cat /etc/systemd/sleep.conf.d/sleep.conf
[Sleep]
HibernateDelaySec=12min

and

$ cat /etc/systemd/logind.conf.d/logind.conf
[Login]
HandleLidSwitch=suspend-then-hibernate
HandlePowerKey=suspend-then-hibernate
HandleSuspendKey=suspend-then-hibernate
SleepOperation=suspend-then-hibernate

4. Immutability & Safety Mechanisms

The system state is governed by two key components:

A. The Control Script (immutectl)

Handles the state transition by flipping Btrfs properties and VFS mount flags in the correct order.

  • sudo immutectl unlock: Sets ro=false and remounts rw.
  • sudo immutectl lock: Sets ro=true and remounts ro.
$ cat /usr/local/bin/immutectl
#!/bin/bash
# Ensure script is run as root
if [[ $EUID -ne 0 ]]; then
    echo "This script must be run as root (sudo)."
    exit 1
fi

ACTION=$1

case $ACTION in
    unlock)
        echo "🔓 Unlocking / and /etc for maintenance..."
        # 1. First, tell the Kernel to allow writes to the mount point
        mount -o remount,rw /
        mount -o remount,rw /etc
        # 2. Now that the VFS is RW, Btrfs will allow you to change the property
        btrfs property set / ro false
        btrfs property set /etc ro false
        echo "Status: System is now READ-WRITE."
        ;;
    lock)
        echo "🔒 Locking / and /etc (Immutable Mode)..."
        sync
        btrfs property set / ro true
        btrfs property set /etc ro true
        # We still attempt remount, but we ignore failure since Property is the Hard Lock
        mount -o remount,ro / 2>/dev/null
        mount -o remount,ro /etc 2>/dev/null
        echo "Status: System is now READ-ONLY (Btrfs Property Set)."
        ;;
    status)
        echo "--- System Immutability Status ---"
        for dir in "/" "/etc"; do
            # Get VFS state
            VFS_STATE=$(grep " $dir " /proc/mounts | awk '{print $4}' | cut -d, -f1)
            # Get Btrfs Property state
            BTRFS_PROP=$(btrfs property get "$dir" ro | cut -d= -f2)
            # Determine overall health
            if [[ "$BTRFS_PROP" == "true" ]]; then
                FINAL_STATUS="LOCKED (RO)"
            else
                FINAL_STATUS="UNLOCKED (RW)"
            fi
            echo "Path: $dir"
            echo " - VFS Layer (Mount): $VFS_STATE"
            echo " - Btrfs Property: ro=$BTRFS_PROP"
            echo " - Effective State: $FINAL_STATUS"
            # Check for mismatch (The "Busy" scenario)
            if [[ "$VFS_STATE" == "rw" && "$BTRFS_PROP" == "true" ]]; then
                echo " ⚠️ NOTICE: VFS is RW but Btrfs is RO. System is effectively Immutable."
            fi
            echo ""
        done
        ;;
    *)
        echo "Usage: $0 {lock|unlock|status}"
        exit 1
        ;;
esac

B. The Smart Seal (immutability-seal.service)

A systemd one-shot service that ensures the system is locked on boot.

  • Fail-safe: The service checks /proc/cmdline for the standalone word rw. If found (via GRUB manual override), the seal is aborted to allow emergency maintenance.
$ cat /etc/systemd/system/immutability-seal.service
[Unit]
Description=Ensure Btrfs Immutable Properties are set on Boot (unless rw requested)
DefaultDependencies=no
After=systemd-remount-fs.service
Before=local-fs.target
# Don't run in emergency/rescue modes
#ConditionPathExists=!/run/systemd/seats/seat0
[Service]
Type=oneshot
# The robust check: exit if 'rw' exists as a standalone word
ExecStartPre=/bin/sh -c '! grep -qE "\brw\b" /proc/cmdline'
ExecStartPre=mount -o remount,rw /
ExecStart=/usr/bin/btrfs property set / ro true
ExecStart=/usr/bin/btrfs property set /etc ro true
ExecStartPost=mount -o remount,ro /
RemainAfterExit=yes
[Install]
WantedBy=local-fs.target

5. Monitoring & Maintenance

  • Nagging: A systemd user-timer runs immutability-nag every 15 minutes to notify the desktop session if the system is currently in an “Unlocked” state.
  • Verification: Use sudo immutectl status to verify that both the VFS Layer and Btrfs Properties are in sync.

5.1 Nagging

$ cat ~/bin/immutability-nag
#!/bin/bash
# Check Btrfs property
BTRFS_STATUS=$(btrfs property get / ro | cut -d= -f2)
if [[ "$BTRFS_STATUS" == "false" ]]; then
    # Use notify-send (Standard, fast, non-intrusive)
    notify-send -u critical -i security-low \
        "🔓 System Unlocked" \
        "Root is currently WRITABLE. Run 'immutectl lock' when finished."
fi

and

$ usystemctl cat immutability-nag.service
# /home/rrs/.config/systemd/user/immutability-nag.service
[Unit]
Description=Check Btrfs immutability and notify user
# Ensure it doesn't run before the graphical session is ready
After=graphical-session.target
[Service]
Type=oneshot
ExecStart=%h/bin/immutability-nag
# Standard environment for notify-send to find the DBus session
Environment=DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/%U/bus
[Install]
WantedBy=default.target
   ~   20:35:15
$ usystemctl cat immutability-nag.timer
# /home/rrs/.config/systemd/user/immutability-nag.timer
[Unit]
Description=Check immutability every 15 mins
[Timer]
OnStartupSec=5min
OnUnitActiveSec=15min
[Install]
WantedBy=timers.target
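To arm it, the timer presumably gets enabled in the user session (my assumption; this step is not shown above):

$ systemctl --user daemon-reload
$ systemctl --user enable --now immutability-nag.timer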

And the resultant nag in action.

Immutable Debian Nag

5.2 Verification

$ sudo immutectl status
[sudo] password for rrs:
--- System Immutability Status ---
Path: /
- VFS Layer (Mount): rw
- Btrfs Property: ro=false
- Effective State: UNLOCKED (RW)
Path: /etc
- VFS Layer (Mount): rw
- Btrfs Property: ro=false
- Effective State: UNLOCKED (RW)
   ~   21:14:08
$ sudo immutectl lock
🔒 Locking / and /etc (Immutable Mode)...
Status: System is now READ-ONLY (Btrfs Property Set).
   ~   21:14:15
$ sudo immutectl status
--- System Immutability Status ---
Path: /
- VFS Layer (Mount): rw
- Btrfs Property: ro=true
- Effective State: LOCKED (RO)
⚠️ NOTICE: VFS is RW but Btrfs is RO. System is effectively Immutable.
Path: /etc
- VFS Layer (Mount): rw
- Btrfs Property: ro=true
- Effective State: LOCKED (RO)
⚠️ NOTICE: VFS is RW but Btrfs is RO. System is effectively Immutable.

Date Configured: December 2025
Philosophy: The OS is a diagnostic tool. If an application fails to write to a locked path, the application is the variable, not the system.

Wrap

Overall, I’m very, very happy with the result of a day of working together with AI. I wouldn’t have gotten so much done so quickly if it weren’t around. Such is the greatness of this age of AI.

20 December, 2025 12:00AM by Ritesh Raj Sarraf (rrs@researchut.com)

December 19, 2025

hackergotchi for Kartik Mistry

Kartik Mistry

KDE Needs You!

* Support the KDE Randa Meetings and make a donation!

I know that my contributions to KDE are minimal at this stage, but hey, I’m doing my part this time for sure!

19 December, 2025 01:44PM by કાર્તિક

December 18, 2025

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

dang 0.0.17: New Features, Plus Maintenance

dang image

A new release of my mixed collection of things, the dang package, arrived at CRAN earlier today. The dang package regroups a few functions of mine that had no other home, for example lsos() from a StackOverflow question from 2009 (!!), the overbought/oversold price band plotter from an older blog post, the market monitor also blogged about, as well as the checkCRANStatus() function tweeted about by Tim Taylor. And more, so take a look.

This release retires two functions: the social media site nobody ever visits anymore shut down its API, so there is no longer a way to mute posts by a given handle. Similarly, the (never official) ability by Google to supply financial data is no more, so the function to access data this way is gone too. But we also have two new ones: one that helps with CRAN entries for ORCiD ids, and another little helper to re-order microbenchmark results by a summary column (defaulting to the median). Beyond that, there are the usual updates to continuous integration, a switch to Authors@R (which will result in CRAN nagging me less about this), and another argument update.

The detailed NEWS entry follows.

Changes in version 0.0.17 (2025-12-18)

  • Added new function reorderMicrobenchmarkResults with alias rmr

  • Use tolower on email argument to checkCRANStatus

  • Added new function cranORCIDs bootstrapped from two emails by Kurt Hornik

  • Switched to using Authors@R in DESCRIPTION and added ORCIDs where available

  • Switched to r-ci action with included bootstrap step; updated the checkout action (twice); added (commented-out) log accessor

  • Removed googleFinanceData as the (unofficial) API access point no longer works

  • Removed muteTweeters because the API was turned off

Via my CRANberries, there is a comparison to the previous release. For questions or comments use the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

18 December, 2025 09:14PM

hackergotchi for Colin Watson

Colin Watson

Preparing a transition in Debusine

We announced a public beta of Debusine repositories recently (Freexian blog, debian-devel-announce). One thing I’m very keen on is being able to use these to prepare “transitions”: changes to multiple packages that need to be prepared together in order to land in testing. As I said in my DebConf25 talk:

We have distribution-wide CI in unstable, but there’s only one of it and it’s shared between all of us. As a result it’s very possible to get into tangles when multiple people are working on related things at the same time, and we only avoid that as much as we do by careful coordination such as transition bugs. Experimental helps, but again, there’s only one of it and setting up another one is far from trivial.

So, what we want is a system where you can run experiments on possible Debian changes at a large scale without a high setup cost and without fear of breaking things for other people. And then, if it all works, push the whole lot into Debian.

Time to practice what I preach.

Setup

The setup process is documented on the Debian wiki. You need to decide whether you’re working on a short-lived experiment, in which case you’ll run the create-experiment workflow and your workspace will expire after 60 days of inactivity, or something that you expect to keep around for longer, in which case you’ll run the create-repository workflow. Either one of those will create a new workspace for you. Then, in that workspace, you run debusine archive suite create for whichever suites you want to use. For the case of a transition that you plan to land in unstable, you’ll most likely use create-experiment and then create a single suite with the pattern sid-<name>.
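For the example in this post, the suite creation step would presumably look something like the following; the subcommand is the one named above, but passing the suite name as a positional argument is my assumption:

$ debusine archive suite create sid-pylint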

The situation I was dealing with here was moving to Pylint 4. Tests showed that we needed this as part of adding Python 3.14 as a supported Python version, and I knew that I was going to need newer upstream versions of the astroid and pylint packages. However, I wasn’t quite sure what the fallout of a new major version of pylint was going to be. Fortunately, the Debian Python ecosystem has pretty good autopkgtest coverage, so I thought I’d see what Debusine said about it. I created an experiment called cjwatson-pylint (resulting in https://debusine.debian.net/debian/developers-cjwatson-pylint/ - I’m not making that a proper link since it will expire in a couple of months) and a sid-pylint suite in it.

Iteration

From this starting point, the basic cycle involved an upload like this for each package I’d prepared:

$ dput -O debusine_workspace=developers-cjwatson-pylint \
       -O debusine_workflow=publish-to-sid-pylint \
       debusine.debian.net foo.changes

I could have made a new dput-ng profile to cut down on typing, but it wasn’t worth it here.
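For completeness, such a profile might look something like this hypothetical sketch, relying on dput-ng merging same-named profile files in ~/.dput.d over the system-wide ones (untested; the keys are simply the -O options from above):

$ cat ~/.dput.d/profiles/debusine.debian.net.json
{
    "debusine_workspace": "developers-cjwatson-pylint",
    "debusine_workflow": "publish-to-sid-pylint"
}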

Then I looked at the workflow results, figured out which other packages I needed to fix based on those, and repeated until the whole set looked coherent. Debusine automatically built each upload against whatever else was currently in the repository, as you’d expect.

I should probably have used version numbers with tilde suffixes (e.g. 4.0.2-1~test1) in case I needed to correct anything, but fortunately that was mostly unnecessary. I did at least run initial test-builds locally of just the individual packages I was directly changing to make sure that they weren’t too egregiously broken, just because I usually find it quicker to iterate that way.

I didn’t take screenshots as I was going along, but here’s what the list of top-level workflows in my workspace looked like by the end:

Workflows

You can see that not all of the workflows are successful. This is because we currently just show everything in every workflow; we don’t consider whether a task was retried and succeeded on the second try, or whether there’s now a newer version of a reverse-dependency so tests of the older version should be disregarded, and so on. More fundamentally, you have to look through each individual workflow, which is a bit of a pain: we plan to add a dashboard that shows you the current state of a suite as a whole rather than the current workflow-oriented view, but we haven’t started on that yet.

Drilling down into one of these workflows, it looks something like this:

astroid workflow

This was the first package I uploaded. The first pass of failures told me about pylint (expected), pylint-flask (an obvious consequence), and python-sphinx-autodoc2 and sphinx-autoapi (surprises). The slightly odd pattern of failures and errors is because I retried a few things, and we sometimes report retries in a slightly strange way, especially when there are workflows involved that might not be able to resolve their input parameters any more.

The next level was:

pylint workflow

Again, there were some retries involved here, and also some cases where packages were already failing in unstable so the failures weren’t the fault of my change; for now I had to go through and analyze these by hand, but we’ll soon have regression tracking to compare with reference runs and show you where things have got better or worse.

After excluding those, that left pytest-pylint (not caused by my changes, but I fixed it anyway in unstable to clear out some noise) and spyder. I’d seen people talking about spyder on #debian-python recently, so after a bit of conversation there I sponsored a rope upload by Aeliton Silva, upgraded python-lsp-server, and patched spyder. All those went into my repository too, exposing a couple more tests I’d forgotten in spyder.

Once I was satisfied with the results, I uploaded everything to unstable. The next day, I looked through the tracker as usual starting from astroid, and while there are some test failures showing up right now it looks as though they should all clear out as pieces migrate to testing. Success!

Conclusions

We still have some way to go before this is a completely smooth experience that I’d be prepared to say every developer can and should be using; there are all sorts of fit-and-finish issues that I can easily see here. Still, I do think we’re at the point where a tolerant developer can use this to deal with the common case of a mid-sized transition, and get more out of it than they put in.

Without Debusine, either I’d have had to put much more effort into searching for and testing reverse-dependencies myself, or (more likely, let’s face it) I’d have just dumped things into unstable and sorted them out afterwards, resulting in potentially delaying other people’s work. This way, everything was done with as little disruption as possible.

This works best when the packages likely to be involved have reasonably good autopkgtest coverage (even if the tests themselves are relatively basic). This is an increasingly good bet in Debian, but we have plans to add installability comparisons (similar to how Debian’s testing suite works) as well as optional rebuild testing.

If this has got you interested, please try it out for yourself and let us know how it goes!

18 December, 2025 01:21PM by Colin Watson

December 17, 2025

hackergotchi for Jonathan McDowell

Jonathan McDowell

21 years of blogging

21 years ago today I wrote my first blog post. Did I think I’d still be writing all this time later? I’ve no idea to be honest. I’ve always had the impression my readership is small, mostly people who know me in some manner, and I post to let them know what I’m up to in more detail than snippets of IRC conversation can capture. Or I write to make notes for myself (I frequently refer back to things I’ve documented here). I write less about my personal life than I used to, but I still occasionally feel the need to mark some event.

From a software PoV I started out with Blosxom, migrated to MovableType in 2008, ditched that, when the Open Source variant disappeared, for Jekyll in 2015 (when I also started putting it all in git). And have stuck there since. The static generator format works well for me, and I outsource comments to Disqus - I don’t get a lot, I can’t be bothered with the effort of trying to protect against spammers, and folk who don’t want to use it can easily email or poke me on the Fediverse. If I ever feel the need to move from Jekyll I’ll probably take a look at Hugo, but thankfully at present there’s no push factor to switch.

It’s interesting to look at my writing patterns over time. I obviously started keen, and peaked with 81 posts in 2006 (I’ve no idea how on earth that happened), while 2013 had only 2. Generally I write less when I’m busy, or stressed, or unhappy, so it’s kinda interesting to see how that lines up with various life events.

Blog posts over time

During that period I’ve lived in 10 different places (well, 10 different houses/flats, I think it’s only 6 different towns/cities), on 2 different continents, working at 6 different employers, as well as a period where I was doing my Masters in law. I’ve travelled around the world, made new friends, lost contact with folk, started a family. In short, I have lived, even if lots of it hasn’t made it to these pages.

At this point, do I see myself stopping? No, not really. I plan to still be around, like Flameeyes, to the end. Even if my posts are unlikely to hit the frequency from back when I started out.

17 December, 2025 05:06PM

Sven Hoexter

exfatprogs: Do not try defrag.exfat / mkfs.exfat Windows compatibility in Trixie

exfatprogs 1.3.0 added a new defrag.exfat utility which turned out to be unreliable and can cause data loss. exfatprogs 1.3.1 disabled the utility, and I followed that decision with the upload to Debian/unstable yesterday. But as usual it will take some time until it migrates to testing. Thus if you use testing do not try defrag.exfat! At least not without a vetted and current backup.

Besides that, there is a compatibility issue with the way mkfs.exfat, as shipped in trixie (exfatprogs 1.2.9), handles drives which have a physical sector size of 4096 bytes but emulate a logical size of 512 bytes. With exfatprogs 1.2.6 a change was implemented to prefer the physical sector size on those devices. That turned out to be incompatible with Windows, and was reverted in exfatprogs 1.3.0. Sadly John Ogness ran into the issue and spent some time debugging it. I have to admit that I missed the relevance of that change. Huge kudos to John for the bug report. Based on that I prepared an update for the next trixie point release.

If you hit that issue on trixie with exfatprogs 1.2.9-1 you can work around it by formatting with mkfs.exfat -s 512 /dev/sdX to get Windows compatibility. If you use exfatprogs 1.2.9-1+deb13u1 or later, and want the performance gain back, and do not need Windows compatibility, you can format with mkfs.exfat -s 4096 /dev/sdX.

17 December, 2025 02:38PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 15.2.3-1 on CRAN: Upstream Update

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1272 other packages on CRAN, downloaded 43.2 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 661 times according to Google Scholar.

This version updates to the 15.2.3 upstream Armadillo release from yesterday. It brings minor changes over the RcppArmadillo 15.2.2 release made last month (and described in this post). As noted previously, and due to the upstream transition to C++14 coupled with the CRAN move away from C++11, the package offers a transition by allowing packages to remain with the older, pre-15.0.0 ‘legacy’ Armadillo yet offering the current version as the default. If and when CRAN has nudged (nearly) all maintainers away from C++11 (and now also C++14 !!) we can remove the fallback. Our offer to help with the C++ modernization still stands, so please get in touch if we can be of assistance. As a reminder, the meta-issue #475 regroups all the resources for the C++11 transition.

There were no R-side changes in this release. The detailed changes since the last release follow.

Changes in RcppArmadillo version 15.2.3-1 (2025-12-16)

  • Upgraded to Armadillo release 15.2.3 (Medium Roast Deluxe)

    • Faster .resize() for vectors

    • Faster repcube()

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

17 December, 2025 02:11PM

hackergotchi for Matthew Garrett

Matthew Garrett

How did IRC ping timeouts end up in a lawsuit?

I recently won a lawsuit against Roy and Rianne Schestowitz, the authors and publishers of the Techrights and Tuxmachines websites. The short version of events is that they were subject to an online harassment campaign, which they incorrectly blamed me for. They responded with a large number of defamatory online posts about me, which the judge described as unsubstantiated character assassination and consequently awarded me significant damages. That's not what this post is about, as such. It's about the sole meaningful claim made that tied me to the abuse.

In the defendants' defence and counterclaim[1], 15.27 asserts in part The facts linking the Claimant to the sock puppet accounts include, on the IRC network: simultaneous dropped connections to the mjg59_ and elusive_woman accounts. This is so unlikely to be coincidental that the natural inference is that the same person posted under both names. "elusive_woman" here is an account linked to the harassment, and "mjg59_" is me. This is actually a surprisingly interesting claim to make, and it's worth going into in some more detail.

The event in question occurred on the 28th of April, 2023. You can see a line reading *elusive_woman has quit (Ping timeout: 2m30s), followed by one reading *mjg59_ has quit (Ping timeout: 2m30s). The timestamp listed for the first is 09:52, and for the second 09:53. Is that actually simultaneous? We can actually gain some more information - if you hover over the timestamp links on the right hand side you can see that the link is actually accurate to the second even if that's not displayed. The first event took place at 09:52:52, and the second at 09:53:03. That's 11 seconds apart, which is clearly not simultaneous, but maybe it's close enough. Figuring out more requires knowing what a "ping timeout" actually means here.

The IRC server in question is running Ergo (link to source code), and the relevant function is handleIdleTimeout(). The logic here is fairly simple - track the time since activity was last seen from the client. If that time is longer than DefaultIdleTimeout (which defaults to 90 seconds) and a ping hasn't been sent yet, send a ping to the client. If a ping has been sent and the timeout is greater than DefaultTotalTimeout (which defaults to 150 seconds), disconnect the client with a "Ping timeout" message. There's no special logic for handling the ping reply - a pong simply counts as any other client activity and resets the "last activity" value and timeout.

What does this mean? Well, for a start, two clients running on the same system will only have simultaneous ping timeouts if their last activity was simultaneous. Let's imagine a machine with two clients, A and B. A sends a message at 02:22:59. B sends a message 2 seconds later, at 02:23:01. The idle timeout for A will fire at 02:24:29, and for B at 02:24:31. A ping is sent for A at 02:24:29 and is responded to immediately - the idle timeout for A is now reset to 02:25:59, 90 seconds later. The machine hosting A and B has its network cable pulled out at 02:24:30. The ping to B is sent at 02:24:31, but receives no reply. A minute later, at 02:25:31, B quits with a "Ping timeout" message. A ping is sent to A at 02:25:59, but receives no reply. A minute later, at 02:26:59, A quits with a "Ping timeout" message. Despite both clients having their network interrupted simultaneously, the ping timeouts occur 88 seconds apart.

So, two clients disconnecting with ping timeouts 11 seconds apart is not incompatible with the network connection being interrupted simultaneously - depending on activity, simultaneous network interruption may result in disconnections up to 90 seconds apart. But another way of looking at this is that network interruptions may occur up to 90 seconds apart and generate simultaneous disconnections[2]. Without additional information it's impossible to determine which is the case.

This already casts doubt over the assertion that the disconnection was simultaneous, but if this is unusual enough it's still potentially significant. Unfortunately for the Schestowitzes, even looking just at the elusive_woman account, there were several cases where elusive_woman and another user had a ping timeout within 90 seconds of each other - including one case where elusive_woman and schestowitz[TR] disconnected 40 seconds apart. By the Schestowitzes' argument, it's also a natural inference that elusive_woman and schestowitz[TR] (one of Roy Schestowitz's accounts) are the same person.

We didn't actually need to make this argument, though. In England it's necessary to file a witness statement describing the evidence that you're going to present in advance of the actual court hearing. Despite being warned of the consequences on multiple occasions the Schestowitzes never provided any witness statements, and as a result weren't allowed to provide any evidence in court, which made for a fairly foregone conclusion.

[1] As well as defending themselves against my claim, the Schestowitzes made a counterclaim on the basis that I had engaged in a campaign of harassment against them. This counterclaim failed.

[2] Client A and client B both send messages at 02:22:59. A falls off the network at 02:23:00, has a ping sent at 02:24:29, and has a ping timeout at 02:25:29. B falls off the network at 02:24:28, has a ping sent at 02:24:29, and has a ping timeout at 02:25:29. Simultaneous disconnects despite over a minute of difference in the network interruption.

comment count unavailable comments

17 December, 2025 01:17PM

December 16, 2025

hackergotchi for Daniel Pocock

Daniel Pocock

Tom Silvagni sentencing: not Xavier College but DPP and social media to blame

After the recent abuse judgment, today we will find out about Tom Silvagni's sentence. Dad died shortly after his employer, the late Cardinal Pell, was sentenced to prison. The cardinal was subsequently acquitted in his appeal to the High Court. Dad was a Carlton supporter. He would be turning in his grave at the thought that Stephen Silvagni's son could be a rapist.

Suppression orders were lifted last week and the media have finally identified Tom Silvagni as the latest Australian football personality to be convicted of abuse.

News reports were quick to comment on the fact that the Silvagni brothers all attended Xavier College, the elite Catholic boys school attended by my father and me. As explained in a previous blog, after moving from Bendigo to Melbourne for my final year of school, I used to cycle from Pentridge Prison to Xavier College each day; Tom, by contrast, really is in prison for at least one Christmas and maybe more.

The alleged incident took place in January 2024. That appears to be four years after Tom completed year 12. References to the school are therefore not helpful in any specific way. In a general sense, all we can say is there is a correlation between wealth and abuse, just as there is a correlation between wealth and attendance at elite schools. But correlation is not causation. The Chanel Contos "petition" about consent demonstrated that incidents of this nature were alleged to happen in every elite school of every denomination. The Federal Court published the Katharine Thornton Dossier about their former boss, the attorney general Christian Porter. In his case, it is alleged that abuse took place while he was representing another elite school at the national debating contest. An allegation against a student on an official school trip is far more severe than an allegation against a former student.

Silvagni background

Tom had started a job as a player agent at Kapital Sports Management shortly before the incident. The Wayback machine has captured images of Tom with his colleagues as well as his profile:

Tom is a recently accredited AFL Player Agent and works closely with our team of experienced agents at the ground level. Tom has “lived” the industry through his family ties and is a great resource to Kapital given he has recently experienced playing in AFL pathways. Tom offers great perspective to the young draftees as they navigate the system and is a great relationship piece with our draft talent.

Polarizing and adversarial procedures are not solving abuse

After the conviction was announced, the victim was invited to make a unilateral victim impact statement. She used much of that opportunity to direct her comments against Tom. She made little reference to anybody else at the party and no reference to the cultural and legal problems around abuse in Australia.

Shiannon Corcoron writes a strongly man-hating piece about the trial:

was about how the rights of the wealthy and powerful can override the rights of the small, the weak, the vulnerable and the truth. This man-child ...

As the accuser is anonymous, we do not know if she was small or weak. The facts do confirm she was vulnerable at that particular moment in time: she had gone to sleep in a bed with another man. She believed he would stay the night. The other man left at 2am, leaving the complainant alone and vulnerable.

The polarizing nature of these comments can be further exposed with reference to a parallel case in the United Kingdom. On the same day as the judgment in Melbourne, a British police officer failed in their appeal to overturn dismissal for gross misconduct. In the British case, the attacker was not a male police officer, it was a female police officer, PC Pamela Pritchard. While the police sacked her, there is no mention of any criminal prosecution for her actions.

Look at the women running around the world of open source software encouraging people to gang up on each other:

 

Comments like that are incredibly dangerous. In the world of football, Tom may have seen the way the Director of Public Prosecutions (DPP) handled the case against Stephen Milne and he may have felt that a yes to one man is almost as good as a yes to both men.

Abuse is not about the offender's gender. It is about power, having a gang on your side or just being incredibly drunk and stupid.

There are at least two sides to every story. Looking at the falsified harassment claims in the world of open source software, I was able to find internal emails manipulating journalists to repeat false accusations against Dr Jacob Appelbaum. If somebody was really abused, why did they try to fight their battle through B-grade journalists rather than going directly to the police?

One of the more notable examples in Australia was the wrongful persecution of Alex Matters from the ALP. Sky News conducted an excellent interview with Mr Matters about what it is like to be wrongly accused of abuse.

Based on these cases, I feel it is wise to be very cautious when somebody raises an accusation. It is important to listen and write down evidence but it is foolish to repeat an accusation willy-nilly on social control media.

The mental health defence

Silvagni's lawyers argued that due to the high profile of his family and his young age, he would be at unreasonable risk of self-harm or suicide if the story of the trial was published by the media. On this basis, the entire trial was conducted in secret and his identity only revealed after he was convicted.

There have been vast discussions about privacy, censorship and the credibility of mental health concerns.

Research into the mental health issue suggests that everybody in proximity to bullying and persecution, including family members, team mates, Carlton fans, friends of Tom's mum and Xavier alumni are going to collectively suffer some stress due to the public denunciation of the Silvagni family.

Take a moment to think about Tom's brothers and their families.

Ben was dating Eve Markoski-Wood. Eve's biological father, inconveniently named Rapoff, was convicted and jailed on drug offences. Eve's mother is a reality TV star and Eve uses the name of her step-father. It looks like Eve broke off the relationship with Ben shortly after the charges were officially declared. Britain's Daily Mail tabloid speculated that the "tyranny of distance" had forced them apart but now we know the real reason.

Jack had a very successful few years playing for Carlton. He arrived in the club at the same time as Grace Phillips took up a role as a social media intern. Grace was fortunate to strike up a relationship with her new colleague, the star recruit and son of one of the club legends. They married in 2023 and not long after, in 2024, they had a baby son, Charlie. How is the child going to feel when he arrives for his first day at school and some other five-year-old asks about uncle Tom?

Tom's girlfriend, Alannah Iocanis, who was a friend of the accuser, is also one of these influencer/model personalities in the world of social control media. With her boyfriend in jail, will other celebrities be willing to date her? Will she be able to maintain the influencer/model lifestyle or will she have to get a job in a supermarket or coffee shop?

Alannah was chosen as a finalist in Miss Universe Australia 2025 even while her boyfriend was on trial for rape. Many pages about her participation have vanished as news got around.

Alannah's model agency, KHOO, has removed her profile.

Media is self-censoring even after suppression order lifted

Many of the media reports do not mention the names of the other people attending the party. It is vital to understand that Anthony Lo Giudice, the other man who had been in the room with the girl was a close relative of the Carlton football club president, Mark Lo Giudice. At the same time, it is important to understand that Tom's father, one of the legends of Carlton, had been refusing to speak to Mark Lo Giudice for a number of years.

Channel 7 report about Anthony Lo Giudice and the sequence of events and Anthony's LinkedIn.

When the reader is aware of all these challenging relationships, they can begin to contemplate the possibility that people have had a role in manipulating the girl or manipulating Tom or manipulating both of them to create a crisis.

Tom's girlfriend, Alannah Iocanis, had invited the victim to the four-way party and she arrived after midnight. Tom's best friend was having an open relationship with the victim. Think of the film Cruel Intentions from 1999. It remains a masterpiece of this genre.

The role of technology

Within minutes of the alleged abuse, the victim had used her mobile phone to alert multiple people that she was an abuse victim. Being only nineteen years old, she may not have realized the extent to which these text messages would change her life. The identities of abuse victims can't be published by the press in Australia, nonetheless, her name has been shared widely between people in her circle of friends, people she thought she could trust and the football clubs concerned.

Without a mobile phone, she may have had time to think about her response to the situation. Once she had gone down the path of telling multiple people, she was unable to turn back.

Deception and rape go together, from Chateau Silvagni to the FSFE & Debian lies

News reports were quick to emphasize that Tom is accused of using deception to gain access to the sleeping nineteen year old. He has admitted using deception, a falsified Uber receipt, to obfuscate the identities of those really in the house at the time of the alleged abuse.

I suspect many people would feel a sense of shock if accused of abuse and some may be tempted to put up barriers to protect themselves. The trial of Tom Silvagni found that his response was not merely a spontaneous lie made up on the spur of the moment, it was a co-ordinated deception involving at least one other person and a forged document.

During the nearly 10-day trial, Crown prosecutor Jeremy McWilliams told jurors the rapes were committed 'not through threats, not through force… but through deception,' with Silvagni impersonating his friend to trick the woman.

In Debianism, the former leader sent this email in December 2018:

You are well-aware that I have been nothing but scrupulous and gentlemanly with regards to your personal privacy and thus I would refuse to cite any outside or otherwise offer any objective rebuttals to your claims on a public forum.

Yet records show he had spent much of 2018 sending defamatory emails behind my back at a time when I lost two family members. Nothing could be a more hideous violation of privacy.

We've seen similar extremes as Matthias Kirschner uses the name FSFE to impersonate the real FSF in Boston. In a previous blog, I compared the FSFE to a Nigerian 419 scam.

Tom Silvagni is accused of using deception/impersonation to procure sex with one of his best friend's girlfriends. Chris Lamb and Matthias Kirschner used deception on a similar scale to procure victims' work while pretending to be independent voluntary organizations. In the latter case, we saw victims kill themselves in the Debian suicide cluster. One victim died on our wedding day.

16 December, 2025 09:30PM

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Lichess

I wish more pages on the Internet were like Lichess. It's fast. It feels like it only does one thing (even though it's really more like seven or eight)—well, perhaps except for the weird blogs. It does not feel like it's trying to sell me anything; in fact, it feels like it hardly even wants my money. (I've bought two T-shirts from their Spreadshirt, to support them.) It's super-efficient; I've seen their (public) balance sheets, and it feels like it runs off of a shoestring budget. (Take note, Wikimedia Foundation!) And, perhaps most relieving in this day and age, it does not try to grift any AI.

Yes, I know, chess.com is the juggernaut, and has probably done more for chess' popularity than FIDE ever did. But I still go to Lichess every now and then and just click that 2+1 button. (Generally without even logging in, so that I don't feel angry about it when I lose.) Be more like Lichess.

16 December, 2025 06:45PM