Debian is a trademark of Software in the Public Interest, Inc. This site is operated independently in the spirit of point three of the Debian Social Contract, which tells us: "We will not hide problems."

July 18, 2025

Sven Hoexter

Debian can Now Drop Xorg

even I managed to migrate my last setup to sway a few weeks ago. I was the last one you've been waiting for, dear X Strike Force, right?

Multi-display support just works, no more modeline hackery. Oh, and we can also remove those old clipboard managers.

One oddity with sway I could not yet solve is that I had to delete the default wallpaper /usr/share/backgrounds/sway/Sway_Wallpaper_Blue_1920x1080.png to allow it to load the Debian wallpaper via

output * bg /usr/share/desktop-base/active-theme/wallpaper/contents/images/1920x1080.svg fill

Update: Thanks to Birger and Sebastian, who could easily explain that. The sway-backgrounds package ships a config snippet in /etc/sway/config.d, and if that's included, e.g. via include /etc/sway/config.d/*, after setting the background in your ~/.config/sway/config, it does the obvious thing and overrides your own background configuration again. I didn't expect that, but it makes sense. So the right fix is to just remove the sway-backgrounds package.

I also had a bit of a fist fight with sway to make sure I have as much screen space available as possible. So I tried to shrink fonts and remove borders.

default_border none
default_floating_border none
titlebar_padding 1
titlebar_border_thickness 0
font pango: monospace 9

The rest is, I guess, otherwise well documented. I settled on wofi as menu tool and cliphist for clipboard access, plus of course waybar to be able to use the nm-applet. swayidle and swaylock are probably also more or less standard for screen locking; a minimal setup is sketched below.
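For reference, a minimal idle/lock setup could look like this in ~/.config/sway/config (a sketch along the lines of the swayidle(1) example; the timeouts and swaylock options are placeholders to adapt to taste):

exec swayidle -w \
    timeout 300 'swaylock -f -c 000000' \
    timeout 600 'swaymsg "output * power off"' resume 'swaymsg "output * power on"' \
    before-sleep 'swaylock -f -c 000000'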

Having

for_window [app_id="firefox"] inhibit_idle fullscreen

is also sensible for video streaming, to avoid the idle locking.

18 July, 2025 08:37PM

David Bremner

Hibernate on the pocket reform 7/n

Context

Building upstream-ish kernel

  • Roughly following the "bisecting upstream linux" section of reform-debian-packages/README.md
$ git clone https://gitlab.collabora.com/hardware-enablement/rockchip-3588/linux.git collabora
$ cd collabora && git switch -c rockchip-devel origin/rockchip-devel
  • The next step is to apply the non-collabora patches from reform-debian-packages/linux/patches6.15/rk3588-mnt-reform2

  • Unfortunately these are missing the proper metadata to work with git-am

Sidequest: Fix patches

  • 1000-v3-pci_dw_rockchip_add_system_pm_support.patch doesn't apply, even with added metadata. Start again upstream.

  • Thanks to joshc for the suggestion of the b4 tool.

      b4 am 1744940759-23823-1-git-send-email-shawn.lin@rock-chips.com
    

    This creates a .mbx file ready for git am (roughly equivalent to fetching the /raw version from lore, with some extra checks).

  • Brute force finding a base for the patch:

git rev-list --no-merges --since 2025-01-01 refs/heads/rockchip-devel | \
    while read ref
    do
        echo trying $ref
        git checkout $ref
        git apply --check v3_20250418_shawn_lin_pci_dw_rockchip_add_system_pm_support.mbx && echo SUCCESS && break
    done
  • 122 steps later this yields 9dff55ebaef7 (bisect would be better if we knew a "good" commit).
$ git branch -D tmp ; git switch -c tmp 9dff55ebaef7
$ git am v3_20250418_shawn_lin_pci_dw_rockchip_add_system_pm_support.mbx
$ git rebase -i rockchip-devel

This fails with 3 conflicts. The first is easy, as the one non-comment line just moves around. The other two involve a new function rockchip_pcie_unmask_dll_indicator used to reduce code duplication, and in all 3 cases I just took the version of the code from Shawn's patch.

Rebased patch is at 0001-PCI-dw-rockchip-Add-system-PM-support.patch

previous episode

18 July, 2025 07:30PM

July 17, 2025

Daniel Pocock

Security: Shane Wegner & Debian statement of incompetence

Debian's authorship database shows us Shane Wegner was changed to the "Removed" status on 25 June 2025. The snobby people in the Debian Account Managers team didn't give us any "Statement on Shane Wegner" so we can only guess what has happened.

Normally when someone resigns they are changed to the Emeritus status. I resigned from some of my voluntary activities around the time my father died and they spent seven years insulting my family. People don't care about families any more but even if you don't care about my family, if you care about your servers running Debian, it is time to ask these little pricks what the hell is going on with the keyring.

The little pricks did this with Dr Jacob Appelbaum in 2016. They bamboozled people with rumors about abuse. I researched the case and showed that they were leading us down the garden path.

Now it has happened again, they removed somebody and it is radio silence.

Looking in debian-private we can see that Wegner lost his key over 15 years ago.

The little pricks attacked me when my father died but when somebody's key is compromised they are asleep at the wheel.

Subject: [vac] Looking for keysigning Salt Lake City 10-jul - 15-jul
Date: Mon, 28 Jun 2010 17:05:42 -0700
From: Shane Wegner <shane@debian.org>
To: debian-private@lists.debian.org

Hi all,

I will be in Salt Lake on the above mentioned dates. If
anyone is available for key exchange, please contact me
privately for a meet-up. I am out of the keyring due to a
key compromize so looking to get recertified.

Cheers,
Shane

Shane Wegner sent that cry for help a few weeks before Frans Pop chose Debian Day for his suicide plan. Boo hoo, the little pricks were in denial about their role in that suicide and they failed to notice the keyring was compromised.

Wegner reminded us again in 2011:

Subject: [vac] keysigning Salt Lake City 29-Apr - 8-May
Date: Tue, 26 Apr 2011 12:01:42 -0700
From: Shane Wegner <shane@debian.org>
To: debian-private@lists.debian.org

Hello,

Subject says it all. Still looking for a keysigner or two
to get back into the keyring. I'll be in downtown SLC if
anyone wants to get together for keysigning.

Shane

That email was sent right in the middle of the Adrian von Bidder-Senn crisis. The cause of death is secret because Adrian died in Basel, Switzerland and his wife, Diana von Bidder is a politician. Adrian was 32 years old and he died on the same day as our wedding. Why are they covering this up and pretending to be a so-called community and a "family" when some of these deaths appear to be suicides?

They spent over $120,000 in legal fees to stop independent professionals like me from asking questions. But they can't cope without us. They are obsessed with their diversity crusades and pronouns but all those things are irrelevant if the project is compromised due to incompetence.

Remember the case of Edward J Brocklesby? Why did they remove him from the keyring? He was living on the road to GCHQ so they never made a "Statement on Edward Brocklesby". Is this a family or a bunch of snobs?

When Wegner joined in 2000, people were still talking about accepting scanned copies of passports as proof of identity. How many people on the keyring today can be traced back to a scanned, possibly forged, copy of a passport?

At DebConf25, there is a secret meeting about the scandal but they didn't give any report. The Debian Social Contract is dead.

Please see the chronological history of how the Debian harassment and abuse culture evolved.

17 July, 2025 08:30PM

Gunnar Wolf

About our proof-of-concept LLM tool for navigating Debian's manpages

So, for the first time, this year’s DebConf had the “DebConf Academic Track”, that is, content for a one-day-long set of short sessions, for each of which there was a written article presenting the content — often with a very academic format, but not necessarily. I hope that for future DebConfs we will continue to hold this track, and that we can help bridge the gap: to get people that are not usually from the academic / university world to prepare content that will be formally expressed and included in a long-lasting, indexed book of proceedings. We did have (informal) proceedings in many of the early DebConfs, and I’m very happy to regain that, but with 20 years of better practices.

Anyway, of course, I took the opportunity to join this experiment, together with my Mexican colleague Tzolkin Garduño, who is finishing her PhD here in France (or should I say, second to her, as she is the true lead author of our work). And here you can see the paper we submitted to the DebConf Academic Track, which will be published soon:

A retrieval-augmented-generation pipeline to help users query system-provided documentation

The corresponding source code is all available at Tzolkin’s repository in GitHub.

So, what is it about, in shorter words and layman terms?

Debian has lots of documentation, but it lacks discoverability. We targeted our venerable manpages: it is well known that manpages are relevant, well-organized documentation describing how to use each of the binaries we ship in Debian. Eric Raymond wrote in his well-known essay “The Art of Unix Programming” (2003) that the Unix cultural style is “telegraphic but complete. It does not hold you by the hand, but it usually points in the right direction.”

Our original intent was to digest all of the information in our manpages, but we narrowed it to only the first “section” of the manual due to the limitations of the hardware a good friend lent us to play with LLMs. We took four different base, redistributable (although, yes, non-DFSG-free) Large Language Models downloaded from HuggingFace (T5-small, MiniLM-L6-v2, BGE-small-en and Multilingual-e5-small), and trained them with the 34579 pages found inside /usr/share/man/man1 of all of the existing Debian packages. We did some interesting fine-tuning (for further details, please refer to the paper itself or to our GitHub repository).

The idea is to present an interactive tool that understands natural language queries, and answers with the specific manpage to which they relate best (I’d like to say “the manpage that answers best”, but Tzolkin has repeatedly tried to correct my understanding of the resulting vectorial distances).

I had prepared some images to present as interaction examples, but I’ll wrap up this post with something even better 😉 So, this is what you would get with the following queries:

We were happy to present it like this. During DebCamp, however, I was able to devote some time to turning my client into a Web-accessible system. Do note that it’s a bit brittle, and might break now and then. But you are welcome to play with it!

Play with the Web interface for our RAG for Debian manuals!

I find it worth sharing that, while we used quite a bit of GPU for the training (not too much — a couple of nights on an old-although-powerful nVidia 1070 card lent to us by the Felipe Esquivel Fund for Science and Cultural Support), all querying takes place in the CPU, and I usually get between 0.1 and 0.3 seconds per query. My server’s architecture is far from rock-solid, and it can break now and then… but at least it should respawn upon failure 😉 And it should at least be available there for a couple of weeks into August.

Anyway, this has been a very interesting journey getting my toes wet in the waters of LLM, and while current results are a bit crude, I think this can be made into an interesting tool for Debian exploration.

17 July, 2025 11:08AM

Birger Schacht

My first tag2upload upload

Following the DebConf25 talk by Ian Jackson tag2upload - upload simply by pushing a signed git tag I decided to use the quiet time during the day of the DayTrip on DebConf 25 to try out uploading a package using tag2upload.

Given the current freeze, a couple of the packages I maintain have new releases waiting. I decided on uploading the new version of labwc to experimental. Labwc is a Wayland compositor based on the wlroots compositor library (the library that sway is using). Labwc is inspired by the Openbox window manager. The upstream team of Labwc released version 0.9.0 last week, the first version that is based on wlroots 0.19.x. Wlroots 0.19.x is also only available in experimental, so that was a good fit for trying an upload with tag2upload.

I first used my usual workflow, going into my package repository, doing git fetch origin, checking out the tag of the new release and tagging that with git tag upstream/0.9.0. Then I bumped the version in the debian/experimental branch, adapted the debian/control file for the changed wlroots dependency, committed and built the package using git-buildpackage to check if it builds fine and there are no lintian errors. Then I moved on to look at tag2upload.

As a starting point for using tag2upload I read the blogpost by Jonathan Carter My first tag2upload upload. It pointed me to one very important option of git debpush, namely the --baredebian option which I have to use because I use the bare Debian git layout. Given that the last upload of labwc I did was to unstable, I also had to add the --force=changed-suite.

I also started right away to use the --tag-only option, because for my first tests I only wanted to have local changes and nothing pushed to anywhere. I also used the --dry-run option. This led to the following command:

> git debpush --baredebian --force=changed-suite --dry-run --tag-only
tags 0.9.0, upstream/0.9.0 all exist in this repository
tell me which one you want to make an orig.tar from: git deborig --just-print '--version=0.9.0-1' TAG
git-debpush: git-deborig failed; maybe try git-debpush --upstream=TAG

This was a bit confusing, because the error message talked about git-deborig, but I was using git-debpush. I also did not want to make an orig.tar! The --upstream option in the git-debpush(1) manual gave an explanation for this:

When pushing a non-native package, git-debpush needs a tag for the upstream part of your package.

By default git-debpush asks git-deborig(1), which searches for a suitable tag based on the upstream version in debian/changelog.

So apparently git-debpush can not find out what the correct tag for the upstream version is, because git-deborig can not find out what the correct tag for the upstream version is. git-debpush simply calls git deborig --just-print --version="$version" in line 437. This fails because I had initially created a second tag, upstream/0.9.0, pointing at the existing 0.9.0 release tag. I do this so that git-buildpackage finds the upstream sources, but with multiple tags git-deborig is not sure which one it should use (although both point to the same commit).

So I removed the upstream/0.9.0 tag and ran git debpush again, and now there was no error message (besides the warning regarding the changed suite), but it also did not give any feedback about what was happening. So I tried without the --dry-run option. Again, no output whatsoever, other than the warning about the changed release, BUT my gnupg asked me for permission to sign via my yubikey! And when I looked at the list of tags, I saw that there was now a debian/0.9.0-1 tag that was not there before! Looking at the tag I saw that it was a tag in the format described in the tag2upload(5) manual page, containing the following lines:

labwc release 0.9.0-1 for experimental

[dgit distro=debian split --quilt=baredebian]
[dgit please-upload source=labwc version=0.9.0-1 upstream-tag=0.9.0 upstream=4beee3851f75b53afc2e8699c594c0cc222115bd]

and the tag was signed by me. The 4beee3851f75b53afc2e8699c594c0cc222115bd commit ID is the commit the 0.9.0 tag points to.
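To recap, getting to this point boiled down to roughly the following (a sketch of the commands discussed above, with --tag-only still in place since I pushed the tag manually later):

> git tag -d upstream/0.9.0
> git debpush --baredebian --force=changed-suite --tag-only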

Now that I had a signed commit tag in the correct format, I went to the labwc packaging repository on salsa and enabled the webhook to trigger the tag2upload service (I just saw that the documentation was updated and there is now a global webhook on salsa, so this step is not needed anymore).

Finally I pushed the tags using git push --tags. I could also have used git-debpush for this step, but I’d rather use git directly. I then looked at the tag2upload queue and saw how a worker built and uploaded the newest labwc release and I also got an email from the tag2upload service [tag2upload 275] uploaded labwc 0.9.0-1. And a couple of minutes later I got the confirmation that labwc 0.9.0-1 was accepted into experimental. Great!

So, to conclude: for tag2upload to work we simply need a git tag in the correct format. The tag2upload service now gets triggered by every pushed tag on salsa but only acts on tags that adhere to the tag2upload(5) format. git-debpush is simply a bash script that creates such a tag and by default also pushes the tag.

I think the script could be a bit more verbose, for example telling me that it created a tag and the name of that tag. I think the dependency on git-deborig is also a problem. I use git-buildpackage to build my packages and by default git-buildpackage assumes upstream tags are of the form upstream/%(version)s (source). I could now change that for all the packages I maintain, but I also think it makes sense to control the tag myself and not use a tag that is controlled by upstream. Upstream could change or delete that tag or I might need to create a tag for a version that is not tagged by upstream.
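For completeness, the git-buildpackage side of that trade-off would be a one line change in debian/gbp.conf (just a sketch, I have not actually switched my packages over):

[DEFAULT]
upstream-tag = %(version)s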

I also think git-debpush is a rather misleading command name, given that the main task of the script is to create a tag in the correct format.

Other than that, I’m pretty happy about this service. I have a rather crappy uplink at home and it is not so uncommon for my uploads to fail because the connection dropped during dput. Using a simple git based upload approach makes these problems a thing of the past. I might look into other ways to create the needed tag, though.

17 July, 2025 08:28AM

Arnaud Rebillout

Acquire-By-Hash for APT packages repositories, and the lack of it in Kali Linux

This is a lengthy blog post. It features a long introduction that explains how apt update acquires various files from a package repository, what Acquire-By-Hash is, and how it all works for Kali Linux: a Debian-based distro that doesn't support Acquire-By-Hash, and which is distributed via a network of mirrors and a redirector.

In a second part, I explore some "Hash Sum Mismatch" errors that we can hit with Kali Linux, errors that would not happen if only Acquire-By-Hash was supported. If anything, this blog post supports the case for adding Acquire-By-Hash support in reprepro, as requested at https://bugs.debian.org/820660.

All of this could have just remained some personal notes for myself, but I got carried away and turned it into a blog post, dunno why... Hopefully others will find it interesting, but you really need to like troubleshooting stories, packed with details, and poorly written at that. You've been warned!

Introducing Acquire-By-Hash

Acquire-By-Hash is a feature of APT package repositories, that might or might not be supported by your favorite Debian-based distribution. A repository that supports it says so, in the Release file, by setting the field Acquire-By-Hash: yes.

It's easy to check. Debian and Ubuntu both support it:

$ wget -qO- http://deb.debian.org/debian/dists/sid/Release | grep -i ^Acquire-By-Hash:
Acquire-By-Hash: yes

$ wget -qO- http://archive.ubuntu.com/ubuntu/dists/devel/Release | grep -i ^Acquire-By-Hash:
Acquire-By-Hash: yes

What about other Debian derivatives?

$ wget -qO- http://http.kali.org/kali/dists/kali-rolling/Release | grep -i ^Acquire-By-Hash: || echo not supported
not supported

$ wget -qO- https://archive.raspberrypi.com/debian/dists/trixie/Release | grep -i ^Acquire-By-Hash: || echo not supported
not supported

$ wget -qO- http://packages.linuxmint.com/dists/faye/Release | grep -i ^Acquire-By-Hash: || echo not supported
not supported

$ wget -qO- https://apt.pop-os.org/release/dists/noble/Release | grep -i ^Acquire-By-Hash: || echo not supported
not supported

Huhu, Acquire-By-Hash is not ubiquitous. But wait, what is Acquire-By-Hash to start with? To answer that, we have to take a step back and cover some basics first.

The HTTP requests performed by 'apt update'

What happens when one runs apt update? APT first requests the Release file from the repository(ies) configured in the APT sources. This file is a starting point, it contains a list of other files (sometimes called "Index files") that are available in the repository, along with their hashes. After fetching the Release file, APT proceeds to request those Index files. To give you an idea, there are many kinds of Index files, among which:

  • Packages files: list the binary packages that are available in the repository
  • Sources files: list the source packages that are available in the repository
  • Contents files: list files provided by each package (used by the command apt-file)
  • and even more, such as PDiff, Translations, DEP-11 metadata, etc etc...

There's an excellent Wiki page that details the structure of a Debian package repository, it's there: https://wiki.debian.org/DebianRepository/Format.

Note that APT doesn't necessarily download ALL of those Index files. For simplicity, we'll limit ourselves to the minimal scenario, where apt update downloads only the Packages files.

Let's try to make it more visual: here's a representation of an apt update transaction, assuming that all the components of the repository are enabled:

apt update -> Release -> Packages (main/amd64)
                      -> Packages (contrib/amd64)
                      -> Packages (non-free/amd64)
                      -> Packages (non-free-firmware/amd64)

Meaning that, in a first step, APT downloads the Release file, reads its content, and then in a second step it downloads the Index files in parallel.

You can actually see that happen with a command such as apt -q -o Debug::Acquire::http=true update 2>&1 | grep ^GET. For Kali Linux you'll see something pretty similar to what I described above. Try it!

$ podman run --rm kali-rolling apt -q -o Debug::Acquire::http=true update 2>&1 | grep ^GET
GET /kali/dists/kali-rolling/InRelease HTTP/1.1    # <- returns a redirect, that is why the file is requested twice
GET /kali/dists/kali-rolling/InRelease HTTP/1.1
GET /kali/dists/kali-rolling/non-free/binary-amd64/Packages.gz HTTP/1.1
GET /kali/dists/kali-rolling/main/binary-amd64/Packages.gz HTTP/1.1
GET /kali/dists/kali-rolling/non-free-firmware/binary-amd64/Packages.gz HTTP/1.1
GET /kali/dists/kali-rolling/contrib/binary-amd64/Packages.gz HTTP/1.1

However, and it's now becoming interesting, for Debian or Ubuntu you won't see the same kind of URLs:

$ podman run --rm debian:sid apt -q -o Debug::Acquire::http=true update 2>&1 | grep ^GET
GET /debian/dists/sid/InRelease HTTP/1.1
GET /debian/dists/sid/main/binary-amd64/by-hash/SHA256/22709f0ce67e5e0a33a6e6e64d96a83805903a3376e042c83d64886bb555a9c3 HTTP/1.1

APT doesn't download a file named Packages, instead it fetches a file named after a hash. Why? This is due to the field Acquire-By-Hash: yes that is present in Debian's Release file.

What does Acquire-By-Hash mean for 'apt update'

The idea with Acquire-By-Hash is that the Index files are named after their hash on the repository, so if the MD5 sum of main/binary-amd64/Packages is 77b2c1539f816832e2d762adb20a2bb1, then the file will be stored at main/binary-amd64/by-hash/MD5Sum/77b2c1539f816832e2d762adb20a2bb1. The path main/binary-amd64/Packages still exists (it's the "Canonical Location" of this particular Index file), but APT won't use it, instead it downloads the file located in the by-hash/ directory.
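To make the path construction concrete, here's a rough sketch of how one could fetch an Index file by hash manually (just an illustration against Debian sid, not something you'd normally do by hand; APT does all of this for you):

$ url=http://deb.debian.org/debian/dists/sid
$ hash=$(wget -qO- $url/Release | sed -n '/^SHA256:/,$p' \
      | awk '$3 == "main/binary-amd64/Packages.gz" {print $1; exit}')
$ wget -q "$url/main/binary-amd64/by-hash/SHA256/$hash" -O Packages.gz
$ sha256sum Packages.gz    # should match $hash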

Why does it matter? This has to do with repository updates, and allowing the package repository to be updated atomically, without interruption of service, and without risk of failure client-side.

It's important to understand that the Release file and the Index files are part of a whole, a set of files that belong together, given that Index files are validated against their hashes (as listed in the Release file) after download by APT.

If those files are simply named "Release" and "Packages", it means they are not immutable: when the repository is updated, all of those files are updated "in place". And that causes problems. A typical failure mode for the client, during a repository update, is that: 1) APT requests the Release file, then 2) the repository is updated and finally 3) APT requests the Packages files, but their checksums don't match, causing apt update to fail. There are variations of this error, but you get the idea: updating a set of files "in place" is problematic.

The Acquire-By-Hash mechanism was introduced exactly to solve this problem: now the Index files have a unique, immutable name. When the repository is updated, the new Index files are first added to the by-hash/ directory, and only afterwards is the Release file updated. Old Index files in by-hash/ are retained for a while, so there's a grace period during which both the old and the new Release files are valid and working: the Index files that they refer to are available in the repo. As a result: no interruption of service, no failure client-side during repository updates.

This is explained in more details at https://www.chiark.greenend.org.uk/~cjwatson/blog/no-more-hash-sum-mismatch-errors.html, which is the blog post from Colin Watson that came out at the time Acquire-By-Hash was introduced in... 2016. This is still an excellent read in 2025.

So you might be wondering why I'm rambling about a problem that was solved 10 years ago, but then, as I've shown in the introduction, the problem is not solved for everyone. Support for Acquire-By-Hash server-side cannot be taken for granted, and unfortunately it never landed in reprepro, as one can see at https://bugs.debian.org/820660.

reprepro is a popular tool for creating APT package repositories. In particular, at Kali Linux we use reprepro, and that's why there's no Acquire-By-Hash: yes in the Kali Release file. As one can guess, it leads to subtle issues during those moments when the repository is updated. However... we're not ready to talk about that yet! There's still another topic that we need to cover: this window of time during which a repository is being updated, and during which apt update might fail.

The window for Hash Sum Mismatches, and the APT trick that saves the day

Pay attention! In this section, we're now talking about packages repositories that do NOT support Acquire-By-Hash, such as the Kali Linux repository.

As I've said above, it's only when the repository is being updated that there is a "Hash Sum Mismatch Window", ie. a moment when apt update might fail for some unlucky clients, due to invalid Index files.

Surely, it's a very very short window of time, right? I mean, it can't take that long to update files on a server, especially when you know that a repository is usually updated via rsync, and rsync goes to great lengths to update files as atomically as it can (with the option --delay-updates). So if apt update fails for me, I've been very unlucky, but I can just retry in a few seconds and it should be fixed, shouldn't it? The answer is: it's not that simple.

So far I pictured the "package repository" as a single server, for simplicity. But that's not always the case. For Kali Linux, by default users have http.kali.org configured in their APT sources, and it is a redirector, ie. a web server that redirects requests to mirrors near the client. Some context that matters for what comes next: the Kali repository is synced with ~70 mirrors all around the world, 4 times a day. What happens if your apt update requests are redirected to 2 mirrors close-by, and one was just synced, while the other is still syncing (or even worse, failed to sync entirely)? You'll get a mix of old and new Index files. Hash Sum Mismatch!

As you can see, with this setup the "Hash Sum Mismatch Window" becomes much longer than a few seconds: as long as nearby mirrors are syncing the window is opened. You could have a fast and a slow mirror next to you, and they can be out of sync with each other for several minutes every time the repository is updated, for example.

For Kali Linux in particular, there's a "detail" in our network of mirrors that, as a side-effect, almost guarantees that this window lasts several minutes at least. This is because the pool of mirrors includes kali.download which is in fact the Cloudflare CDN, and from the redirector point of view, it's seen as a "super mirror" that is present in every country. So when APT fires a bunch of requests against http.kali.org, it's likely that some of them will be redirected to the Kali CDN, and others will be redirected to a mirror near you. So far so good, but there's another point of detail to be aware of: the Kali CDN is synced first, before the other mirrors. Another thing: usually the mirrors that are the farthest from the Tier-0 mirror take the longest to sync. Packing all of that together: if you live somewhere in Asia, it's not uncommon for your "Hash Sum Mismatch Window" to be as long as 30 minutes, between the moment the Kali CDN is synced, and the moment your nearby mirrors catch up and are finally in sync as well.

Having said all of that, and assuming you're still reading (anyone here?), you might be wondering... Does that mean that apt update is broken 4 times a day, for around 30 minutes, for every Kali user out there? How can they bear with that? Answer is: no, of course not, it's not broken like that. It works despite all of that, and this is thanks to yet another detail that we didn't go into yet. This detail lies in APT itself.

APT is in fact "redirector aware", in a sense. When it fetches a Release file and the request is redirected, it then fires the subsequent requests against the server it was initially redirected to. So you are guaranteed that the Release file and the Index files are retrieved from the same mirror! Which brings our "Hash Sum Mismatch Window" back to the window for a single server, ie. something like a few seconds at worst, hopefully. And that's what makes it work for Kali, literally. Without this trick, everything would fall apart.

For reference, this feature was implemented in APT back in... 2016! A busy year it seems! Here's the link to the commit: use the same redirection mirror for all index files.

To finish, a dump from the console. You can see this behaviour play out easily, again with APT debugging turned on. Below we can see that only the first request hits the Kali redirector:

$ podman run --rm kali-rolling apt -q -o Debug::Acquire::http=true update 2>&1 | grep -e ^Answer -e ^HTTP
Answer for: http://http.kali.org/kali/dists/kali-rolling/InRelease
HTTP/1.1 302 Found
Answer for: http://mirror.freedif.org/kali/dists/kali-rolling/InRelease
HTTP/1.1 200 OK
Answer for: http://mirror.freedif.org/kali/dists/kali-rolling/non-free-firmware/binary-amd64/Packages.gz
HTTP/1.1 200 OK
Answer for: http://mirror.freedif.org/kali/dists/kali-rolling/contrib/binary-amd64/Packages.gz
HTTP/1.1 200 OK
Answer for: http://mirror.freedif.org/kali/dists/kali-rolling/main/binary-amd64/Packages.gz
HTTP/1.1 200 OK
Answer for: http://mirror.freedif.org/kali/dists/kali-rolling/non-free/binary-amd64/Packages.gz
HTTP/1.1 200 OK

Interlude

Believe it or not, we're done with the introduction! At this point, we have a good understanding of what apt update does (in terms of HTTP requests), we know that Release files and Index files are part of a whole, and we know that a repository can be updated atomically thanks to the Acquire-By-Hash feature, so that users don't experience interruption of service or failures of any sort, even with a rolling repository that is updated several times a day, like Debian sid.

We've also learnt that, despite the fact that Acquire-By-Hash landed almost 10 years ago, some distributions like Kali Linux are still doing without it... and yet it works! But the reason why it works is more complicated to grasp, especially when you add a network of mirrors and a redirector to the picture. Moreover, it doesn't work as flawlessly as with the Acquire-By-Hash feature: we still expect some short (seconds at worst) "Hash Sum Mismatch Windows" for those unlucky users that run apt update at the wrong moment.

This was a long intro, but it really sets the stage for what comes next: the edge cases. Some situations in which we can hit Hash Sum Mismatch errors with Kali. Error cases that I've collected and investigated over time...

If anything, it supports the case that Acquire-By-Hash is really something that should be implemented in reprepro. More on that in the conclusion, but for now, let's look at those edge cases.

Edge Case 1: the caching proxy

If you put a caching proxy (such as approx, my APT caching proxy of choice) between yourself and the actual packages repository, then obviously it's the caching proxy that performs the HTTP requests, and therefore APT will never know about the redirections returned by the server, if any. So the APT trick of downloading all the Index files from the same server in case of redirect doesn't work anymore.

It was rather easy to confirm that by building a Kali package during a mirror sync, and watching it fail at the "Update chroot" step:

$ sudo rm /var/cache/approx/kali/dists/ -fr
$ gbp buildpackage --git-builder=sbuild

+------------------------------------------------------------------------------+
| Update chroot                                Wed, 11 Jun 2025 10:33:32 +0000 |
+------------------------------------------------------------------------------+

Get:1 http://http.kali.org/kali kali-dev InRelease [41.4 kB]
Get:2 http://http.kali.org/kali kali-dev/contrib Sources [81.6 kB]
Get:3 http://http.kali.org/kali kali-dev/main Sources [17.3 MB]
Get:4 http://http.kali.org/kali kali-dev/non-free Sources [122 kB]
Get:5 http://http.kali.org/kali kali-dev/non-free-firmware Sources [8297 B]
Get:6 http://http.kali.org/kali kali-dev/non-free amd64 Packages [197 kB]
Get:7 http://http.kali.org/kali kali-dev/non-free-firmware amd64 Packages [10.6 kB]
Get:8 http://http.kali.org/kali kali-dev/contrib amd64 Packages [120 kB]
Get:9 http://http.kali.org/kali kali-dev/main amd64 Packages [21.0 MB]
Err:9 http://http.kali.org/kali kali-dev/main amd64 Packages
  File has unexpected size (20984689 != 20984861). Mirror sync in progress? [IP: ::1 9999]
  Hashes of expected file:
   - Filesize:20984861 [weak]
   - SHA256:6cbbee5838849ffb24a800bdcd1477e2f4adf5838a844f3838b8b66b7493879e
   - SHA1:a5c7e557a506013bd0cf938ab575fc084ed57dba [weak]
   - MD5Sum:1433ce57419414ffb348fca14ca1b00f [weak]
  Release file created at: Wed, 11 Jun 2025 07:15:10 +0000
Fetched 17.9 MB in 9s (1893 kB/s)
Reading package lists...
E: Failed to fetch http://http.kali.org/kali/dists/kali-dev/main/binary-amd64/Packages.gz  File has unexpected size (20984689 != 20984861). Mirror sync in progress? [IP: ::1 9999]
   Hashes of expected file:
    - Filesize:20984861 [weak]
    - SHA256:6cbbee5838849ffb24a800bdcd1477e2f4adf5838a844f3838b8b66b7493879e
    - SHA1:a5c7e557a506013bd0cf938ab575fc084ed57dba [weak]
    - MD5Sum:1433ce57419414ffb348fca14ca1b00f [weak]
   Release file created at: Wed, 11 Jun 2025 07:15:10 +0000
E: Some index files failed to download. They have been ignored, or old ones used instead.
E: apt-get update failed

The obvious workaround is to NOT use the redirector in the approx configuration. Either use a mirror close by, or the Kali CDN:

$ grep kali /etc/approx/approx.conf 
#kali http://http.kali.org/kali <- do not use the redirector!
kali  http://kali.download/kali

Edge Case 2: debootstrap struggles

What if one tries to debootstrap Kali while mirrors are being synced? It can give you some ugly logs, but it might not be fatal:

$ sudo debootstrap kali-dev kali-dev http://http.kali.org/kali
[...]
I: Target architecture can be executed
I: Retrieving InRelease 
I: Checking Release signature
I: Valid Release signature (key id 827C8569F2518CC677FECA1AED65462EC8D5E4C5)
I: Retrieving Packages 
I: Validating Packages 
W: Retrying failed download of http://http.kali.org/kali/dists/kali-dev/main/binary-amd64/Packages.gz
I: Retrieving Packages 
I: Validating Packages 
W: Retrying failed download of http://http.kali.org/kali/dists/kali-dev/main/binary-amd64/Packages.gz
I: Retrieving Packages 
I: Validating Packages 
W: Retrying failed download of http://http.kali.org/kali/dists/kali-dev/main/binary-amd64/Packages.gz
I: Retrieving Packages 
I: Validating Packages 
W: Retrying failed download of http://http.kali.org/kali/dists/kali-dev/main/binary-amd64/Packages.gz
I: Retrieving Packages 
I: Validating Packages 
I: Resolving dependencies of required packages...
I: Resolving dependencies of base packages...
I: Checking component main on http://http.kali.org/kali...
I: Retrieving adduser 3.152
[...]

To understand this one, we have to go and look at the debootstrap source code. How does debootstrap fetch the Release file and the Index files? It uses wget, and it retries up to 10 times in case of failure. It's not as sophisticated as APT: it doesn't detect when the Release file is served via a redirect.

As a consequence, what happens above can be explained as such:

  1. debootstrap requests the Release file, gets redirected to a mirror, and retrieves it from there
  2. then it requests the Packages file, gets redirected to another mirror that is not in sync with the first one, and retrieves it from there
  3. validation fails, since the checksum is not as expected
  4. try again and again

Since debootstrap retries up to 10 times, at some point it's lucky enough to get redirected to the same mirror as the one it got its Release file from, and this time it gets the right Packages file, with the expected checksum. So ultimately it succeeds.

Edge Case 3: post-debootstrap failure

I like this one, because it gets us to yet another detail that we didn't talk about yet.

So, what happens after we successfully debootstrapped Kali? We have only the main component enabled, and only the Index files for this component have been retrieved. It looks like that:

$ sudo debootstrap kali-dev kali-dev http://http.kali.org/kali
[...]
I: Base system installed successfully.

$ cat kali-dev/etc/apt/sources.list
deb http://http.kali.org/kali kali-dev main

$ ls -l kali-dev/var/lib/apt/lists/
total 80468
-rw-r--r-- 1 root root    41445 Jun 19 07:02 http.kali.org_kali_dists_kali-dev_InRelease
-rw-r--r-- 1 root root 82299122 Jun 19 07:01 http.kali.org_kali_dists_kali-dev_main_binary-amd64_Packages
-rw-r--r-- 1 root root    40562 Jun 19 11:54 http.kali.org_kali_dists_kali-dev_Release
-rw-r--r-- 1 root root      833 Jun 19 11:54 http.kali.org_kali_dists_kali-dev_Release.gpg
drwxr-xr-x 2 root root     4096 Jun 19 11:54 partial

So far so good. Next step would be to complete the sources.list with other components, then run apt update: APT will download the missing Index files. But if you're unlucky, that might fail:

$ sudo sed -i 's/main$/main contrib non-free non-free-firmware/' kali-dev/etc/apt/sources.list

$ cat kali-dev/etc/apt/sources.list
deb http://http.kali.org/kali kali-dev main contrib non-free non-free-firmware

$ sudo chroot kali-dev apt update
Hit:1 http://http.kali.org/kali kali-dev InRelease
Get:2 http://kali.download/kali kali-dev/contrib amd64 Packages [121 kB]
Get:4 http://mirror.sg.gs/kali kali-dev/non-free-firmware amd64 Packages [10.6 kB]
Get:3 http://mirror.freedif.org/kali kali-dev/non-free amd64 Packages [198 kB]
Err:3 http://mirror.freedif.org/kali kali-dev/non-free amd64 Packages
  File has unexpected size (10442 != 10584). Mirror sync in progress? [IP: 66.96.199.63 80]
  Hashes of expected file:
   - Filesize:10584 [weak]
   - SHA256:71a83d895f3488d8ebf63ccd3216923a7196f06f088461f8770cee3645376abb
   - SHA1:c4ff126b151f5150d6a8464bc6ed3c768627a197 [weak]
   - MD5Sum:a49f46a85febb275346c51ba0aa8c110 [weak]
  Release file created at: Fri, 23 May 2025 06:48:41 +0000
Fetched 336 kB in 4s (77.5 kB/s)  
Reading package lists... Done
E: Failed to fetch http://mirror.freedif.org/kali/dists/kali-dev/non-free/binary-amd64/Packages.gz  File has unexpected size (10442 != 10584). Mirror sync in progress? [IP: 66.96.199.63 80]
   Hashes of expected file:
    - Filesize:10584 [weak]
    - SHA256:71a83d895f3488d8ebf63ccd3216923a7196f06f088461f8770cee3645376abb
    - SHA1:c4ff126b151f5150d6a8464bc6ed3c768627a197 [weak]
    - MD5Sum:a49f46a85febb275346c51ba0aa8c110 [weak]
   Release file created at: Fri, 23 May 2025 06:48:41 +0000
E: Some index files failed to download. They have been ignored, or old ones used instead.

What happened here? Again, we need APT debugging options to have a hint:

$ sudo chroot kali-dev apt -q -o Debug::Acquire::http=true update 2>&1 | grep -e ^Answer -e ^HTTP
Answer for: http://http.kali.org/kali/dists/kali-dev/InRelease
HTTP/1.1 304 Not Modified
Answer for: http://http.kali.org/kali/dists/kali-dev/contrib/binary-amd64/Packages.gz
HTTP/1.1 302 Found
Answer for: http://http.kali.org/kali/dists/kali-dev/non-free/binary-amd64/Packages.gz
HTTP/1.1 302 Found
Answer for: http://http.kali.org/kali/dists/kali-dev/non-free-firmware/binary-amd64/Packages.gz
HTTP/1.1 302 Found
Answer for: http://kali.download/kali/dists/kali-dev/contrib/binary-amd64/Packages.gz
HTTP/1.1 200 OK
Answer for: http://mirror.sg.gs/kali/dists/kali-dev/non-free-firmware/binary-amd64/Packages.gz
HTTP/1.1 200 OK
Answer for: http://mirror.freedif.org/kali/dists/kali-dev/non-free/binary-amd64/Packages.gz
HTTP/1.1 200 OK

As we can see above, for the Release file we get a 304 (aka. "Not Modified") from the redirector. Why is that?

This is due to If-Modified-Since, also known as RFC 7232. APT supports this feature when it retrieves the Release file; it basically says to the server "Give me the Release file, but only if it's newer than what I already have". If the file on the server is not newer than that, it answers with a 304, which basically says to the client "You have the latest version already". So APT doesn't get a new Release file, it uses the Release file that is already present locally in /var/lib/apt/lists/, and then it proceeds to download the missing Index files. And as we can see above: it then hits the redirector for each request, and might be redirected to different mirrors for each Index file.
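You can reproduce this by hand with curl, which can send an If-Modified-Since header derived from a local file's mtime via the -z option (a quick sketch, assuming the list file from a previous apt update is still around):

$ curl -s -o /dev/null -w '%{http_code}\n' \
    -z /var/lib/apt/lists/http.kali.org_kali_dists_kali-dev_InRelease \
    http://http.kali.org/kali/dists/kali-dev/InRelease

This should print 304 as long as the InRelease on the server is not newer than the local copy (otherwise you'd see the usual 302 redirect to a mirror).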

So the important bit here is: the APT "trick" of downloading all the Index files from the same mirror only works if the Release file is served via a redirect. If it's not, like in this case, then APT hits the redirector for each file it needs to download, and it's subject to the "Hash Sum Mismatch" error again.

In practice, for the casual user running apt update every now and then, it's not an issue. If they have the latest Release file, no extra requests are made, because they also have the latest Index files, from a previous apt update transaction. So APT doesn't re-download those Index files. The only reason why they'd have the latest Release file, but would miss some Index files, would be that they added new components to their APT sources, like we just did above. Not so common, and then they'd need to run apt update at an unlucky moment. I don't think many users are affected in practice.

Note that this issue is rather new for Kali Linux. The redirector running on http.kali.org is mirrorbits, and support for If-Modified-Since just landed in the latest release, version 0.6. This feature was added by none other than me, a great example of the expression "shooting oneself in the foot".

An obvious workaround here is to empty /var/lib/apt/lists/ in the chroot after debootstrap completed. Or we could disable support for If-Modified-Since entirely for Kali's instance of mirrorbits.
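The first workaround is straightforward (a sketch, reusing the chroot from above):

$ sudo rm -f kali-dev/var/lib/apt/lists/http.kali.org_*
$ sudo chroot kali-dev apt update

With the cached lists gone, APT has to re-download the Release file, gets redirected, and then sticks to that single mirror for all the Index files.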

Summary and Conclusion

The Hash Sum Mismatch failures above are caused by a combination of things:

  • Kali uses a redirector + a network of mirrors
  • Kali repo doesn't support Acquire-By-Hash
  • The fact that the redirector honors If-Modified-Since makes the matter a bit worse

At the same time:

  • Regular users that just use APT to update their system or install packages are not affected by those issues
  • Power users (that setup a caching proxy) or developers (that use debootstrap) are the most likely to hit those issues every now and then
  • It only happens during those specific windows of time, when mirrors might not be in sync with each other, 4 times a day
  • It's rather easy to workaround on the user side, by NOT using the redirector
  • However, unless you're deep into these things, you're unlikely to understand what caused the issues, and to magically guess the workarounds

All in all, it seems that all those issues would go away if only Acquire-By-Hash was supported in the Kali packages repository.

Now is not a bad moment to try to land this feature in reprepro. After development halted in 2019, there's now a new upstream maintainer, and patches are being merged again. But it won't be easy: reprepro is a C codebase of around 50k lines of code, and it will take time and effort for a newcomer to get acquainted with the codebase, to the point of being able to implement a significant feature like this one.

As an alternative, aptly is another popular tool to manage APT package repositories. And it seems to support Acquire-By-Hash already.

Another alternative: I was told that debusine has (experimental) support for package repositories, and that Acquire-By-Hash is supported as well.

Options are on the table, and I hope that Kali will eventually get support for Acquire-By-Hash, one way or another.

To finish, due credits: this blog post exists thanks to my employer OffSec.

Thanks for reading!

17 July, 2025 12:00AM by Arnaud Rebillout

July 16, 2025

Swiss JuristGate

Exclusive: corruption in Tribunals, Greffiers, from protection rackets to cat whisperers

In 2022, the Transparency International Corruption Perception Index (CPI) ranked Switzerland at number seven on their list, meaning it is the seventh least corrupt country based on the methodology used for ranking. Did Switzerland achieve this favorable score due to genuine attempts to be clean or due to the effectiveness with which Swiss laws and Swiss culture help to hide the wrongdoing?

The favorable ranking from Transparency International was reported widely in the media. At the same time, most media reports also noted Transparency International's country report card had included caveats about nepotism, lobbyists and vulnerability of whistleblowers.

According to Transparency International, their scoring is based on the perception of corruption. Swiss laws on criminal speech tend to prevent the media from reporting any bad news at all. This gives the public a false sense of security. In earlier blogs, we looked at a series of positive news reports about Parreaux, Thiébaud & Partners when the firm was launched in 2017/2018. Yet when regulators took disciplinary action against the firm in 2023, there was not one news report about the enforcement action. Without news reporting, the public perception of corruption is likely to be totally disconnected from reality. Given that Transparency International's rankings are based on public perception, the Swiss legal system has gamed the rankings and allowed Switzerland to earn a ranking that it may not deserve.

When people do try to document the reality, they are sent to prison. Many multinational companies operate a three hundred and sixty degree review system whereby employees can give each other feedback. The human rights activist Gerhard Ulrich created a web site where Swiss citizens could write three sixty degree reviews of decisions made by their local judges. The web site was censored and a SWAT team, the elite TIGRIS unit was sent to arrest Gerhard Ulrich and take him to prison.

Trevor Kitchen is another well known spokesperson for investors' rights. In the 1990s Kitchen discovered Swiss people taking credit for his work and not properly attributing his share. Some time later he discovered the FX scandal. During Mr Kitchen's retirement in Portugal, Swiss persecutors used the European Arrest Warrant (EAW) to punish him from afar. Amnesty International published a report noting he was subject to physical and sexual abuse by Swiss authorities in 1993 and then using the EAW they tricked the police in Portugal to repeat the abuse 25 years later in 2018.

By publishing the facts below, I face the same risk of physical and sexual abuse by corrupt police and lawyerists.

If the Swiss public were fully aware of these details, would Switzerland still rate so highly on Transparency International's scale of public perception?

If Transparency International's system can be fooled so easily by states with criminal speech laws, why doesn't Transparency International develop a better methodology for ranking corruption?

Every fact I am reporting here can be found using various sources on the Internet, including the Wayback Machine and the social media profiles of the various people named below. Yet when these facts are assembled in the same article they reveal the inconvenient truth about the Swiss legal system as a whole.

In 2015, the Swiss attorney Benjamin Suter went to New Zealand to complete an LLM at Victoria University. The Victoria University of Wellington Law Review published an article by Mr Suter "Appointment, Discipline and Removal of Judges: a Comparison of the Swiss and New Zealand Judiciaries".

At the time, Mr Suter may have felt that writing the article was a rather abstract exercise. Five years later in 2020, scandal broke out in the Swiss parliament when the fascist SVP / UDC party announced they would try to remove a judge because his "behavior" was not submissive enough for their liking:

On September 23, both houses of parliament are set to appoint a new crop of judges to the Federal Court. But in the lead-up to this, the rightwing Swiss People’s Party has dropped a bombshell.

“We’re proposing to vote judge Yves Donzallaz out of office,” the leader of the party’s parliamentary group Thomas Aeschi has announced.

It reminds me of an incident from 1978 in Australia. In a previous blog, I looked at the prison escape of James Richard Loughnan and the help he received from Fr Sean Patrick O'Connell of St Paul's, Coburg.

Loughnan's next chance to win freedom came a year later when another young criminal, Mark Brandon Read, walked into a courtroom with his shotgun and kidnapped a judge to have Loughnan released. Read went on to become one of Australia's most notorious criminals, using the name Chopper Read. The movie Chopper helps us get to know him better.

Escape bid: police

28 January 1978

A man who menaced a County Court judge with a shotgun on Thursday was a "comic character Charles Chaplin would have portrayed sympathetically", a barrister told Melbourne magistrates court yesterday.

Ironically, Charlie Chaplin was accused of being a communist and fled the US to take refuge in Switzerland. He is buried at Corsier-sur-Vevey in the Canton of Vaud.

... Read had planned to hold the judge hostage while Loughnan was brought to the court and given an automatic car and a magnum pistol.

Chopper Read, kidnapping judge

Isn't it remarkable to find the Swiss fascist party ( SVP / UDC) and Chopper Read both using the same tactics, kidnapping and blackmailing judges, to get their way?

Suter had anticipated that moment five years prior in the introduction of his paper:

The author explains how, in Switzerland, openly political and other considerations are weighed in the course of electing judges and how the appointment of lay judges is balanced with an active role of law clerks (Greffier). In contrast, New Zealand has a proud tradition of apolitical judicial appointments that are made solely based on merit. The author criticises that Swiss judges are elected for a term of office, whereas New Zealand judges enjoy the security of tenure and thus, a greater judicial independence.

Mr Suter asserts that the judges are effectively an extension of the political parties and the law clerks (Greffier) take a more active role to prevent the judges indulging themselves. In fact, the word judge looks similar in English and French but it is not really the same thing at all. The term law clerk is used for convenience in English but it is not really a perfect translation either. The role performed by a law clerk in an English-derived courtroom is very different to the role performed by a Greffier in a Swiss courtroom. Therefore, using the term law clerk is confusing and it is better to simply refer to them by the native name, Greffier in French or Gerichtsschreiber in German.

In section IV, appointment of judges, Suter tells us:

The formal requirements to be a federal court judge are scant: any person eligible to vote, that is to say, anyone over the age of 18 who is not incapacitated, may be appointed as a federal court judge.

In other words, a judge does not need to have a law degree or any experience working in a court.

Suter goes on

Typically, lay judges will only be part of a panel of judges, together with judges holding a law degree. It may happen though that a lay judge must act as a single judge as was the case in X v Canton of Thurgau, where both the President and the Vice-President of the District Court had recused themselves. The Federal Supreme Court held that to have a case adjudicated by a lay judge is not in violation of the right to a fair trial as long as a trained law clerk participates in the management of the proceedings and the decision making. The court noted that in the Canton of Thurgau – as in many other cantons – the law clerk may actively participate in the deliberations on the judgment.

In Switzerland, it is intended that these lay judges, without legal qualifications, bring some diversity to the system and avoid the problem of career jurists ruling over society like royal princes.

In English-speaking countries, trials have a jury and the people in the jury are non-lawyers.

The judges in Switzerland are appointed by a political party for a period of four to ten years. Members of a jury in English-speaking countries are selected randomly and replaced for each new trial. Both lay judges and juries are alternative ways of bringing non-lawyers into the decision making process of the tribunal.

The idea that lay judges make the tribunal more in touch with the community is something of a myth. The judges, including lay judges, are under some control from their political party. The political parties are under control from their most significant donors. Look at Elon Musk and his attempt to create the America Party.

Caroline Kuhnlein-Hofmann was the judge in charge of the civil court in the Canton of Vaud. In another blog post, I demonstrated how Kuhnlein-Hofmann is a member of the Green Party along with one of my competitors, Gerhard Andrey of the company Liip SA. Moreover, Mr Andrey is also a politician for the Green party in the federal parliament. One of Mr Andrey's employees, Didier Raboud is another Debian Developer. It is an incestuous web of corruption indeed.

Look specifically at the payments from the so-called judge's salary into the Green Party's Swiss bank account. In Australia, when a politician is elected, they have a similar obligation to give some of their salary back to their political party. While this woman is using the title "judge", she is more like a politician and a servant of her political party. The payments to the Green Party demonstrate that she has an obligation to the party, she has to give them money and judgments. This is not speculation, the SVP / UDC party said the same thing very loudly in 2020.

Caroline Kuhnlein-Hofmann, Gerhard Andrey, Didier Raboud, Liip SA, Greens, Les Vertes Suisses

In the specific analysis of Kuhnlein-Hofmann, I presented the minutes from the meeting of her fellow politicians who promoted her to be a judge on 3 March 2010.

Was she working as a lawyer before she was appointed as a judge?

The Wayback machine has a snapshot of the website for the Ordre des Avocats Vaudois (bar association Canton Vaud) from before her appointment to the tribunal. Searching for the name Kuhnlein we only found her husband.

Vivian Kuhnlein, Caroline Kuhnlein-Hofmann

Suter has reminded us again of the importance of the Greffier to complement the work of the unqualified lay judges. But what if the judges are not real lawyers and the Greffiers were not trustworthy either?

Look out for the blind leading the blind.

Caroline Kuhnlein-Hofmann, Melanie Bron, Vaud

Suter tells us that the Greffier participates in the deliberations of the judge or judges. In cases where a single lay judge is hearing a trial, the Federal Supreme Court requires the Greffier to be involved in the deliberations. Therefore, the ability for rogue Greffiers to participate in deliberations would bring the whole system and all the judgements into disrepute. It all comes down like a house of cards.

house of cards

 

Benjamin Suter, the author of the report, works for Walder Wyss, the same legal firm that acted as liquidator for Parreaux, Thiébaud & Partners. Suter tells us:

In some cantons, law clerks are even allowed to act in place of judges in some respects, for instance in matters of urgency. In the Canton of Valais/Wallis, law clerks (Greffier) may substitute district court judges.

Remarkably, Mathieu Parreaux, the founder of Parreaux, Thiébaud & Partners, was also a Greffier in the Canton of Valais, the same Canton where a Greffier can act as a judge and pass judgments on their own without any involvement of the real judge.

A snapshot of Mathieu Parreaux's biography, captured by the Wayback Machine, tells us that Parreaux was still working as a Greffier at the same time that he was selling legal fees insurance to the public.

Mathieu Parreaux began his career in 2010, training in accounting and tax law in a fiduciary capacity at the renowned Scheizerweg Finance. Following this experience, he held a KYC officer position at several private banks in Geneva, such as Safra Sarasin and Audi Bank.

After completing his banking experience, he worked at law firms, notably at Ochsner et associés in Geneva and Besselegal in Nyon. Finally, he gained further experience at the Daudin&CIE real estate agency in Geneva.

In 2017, pursuing his desire to bring an innovative perspective and practice to the field of law, Mathieu founded his law firm, Parreaux&Associés. His clientele includes individuals and legal entities, both nationally and internationally.

That same year, Mathieu took up his duties as lead Greffier at the Tribunal of Monthey in Canton Valais, thus expanding the Municipality's conciliation authority.

He also began teaching law at the private Moser College in Geneva.

In early 2018, Parreaux & Partners merged with Mr. François Thiébaud's service company. By combining their assets and expertise, the new firm Parreaux, Thiébaud & Partners established additional tools to achieve its primary goal: to represent your interests in all legal matters, while providing a personal service.

Mathieu practices primarily in corporate law, namely contract law, tax law, corporate law, and banking and finance law.

Mathieu also practices health law (medical law, pharmaceutical law, and forensic medicine).

Therefore, by paying Mr Parreaux for legal fees protection insurance, people would feel they were gaining influence over somebody with the power of a judge.

In the tribunal of Monthey, the judge was Antoine Pitteloud (left / socialist party) and the Deputy Judge was Roland Maire (PDC / the Center party).

Notice in 2021, Mr Parreaux was putting his own name at the bottom of the renewal invoices sent to clients. In 2022, he changed the business name to Justicia SA and had one of his employees put their name at the bottom of the invoice letters.

When thinking about the incredible conflict of interest, it is a good moment to remember the story of John Smyth QC, the British barrister who achieved the role of Recorder, a low-ranking judge, in the British courts while simultaneously being a Reader in the Church of England and a prolific pedophile.

While Walder Wyss was the liquidator of Parreaux, Thiébaud & Partners they were simultaneously involved in litigation against clients of Parreaux, Thiébaud & Partners. This is another outrageous conflict of interest.

After gaining access to client records through the liquidation, they had unreasonable advantages in using those records during unrelated litigation.

When FINMA publicly banned Mathieu Parreaux from selling insurance for two years, they did not make any public comment on his role or disqualification as a Greffier. Does this mean he can continue working as a Greffier as long as he does not sell insurance at the same time?

In the Lawyer X scandal in Australia, hundreds of judgments had to be overturned due to a miscarriage of justice. If the Swiss public were aware of the full circumstances then every judgment involving Mathieu Parreaux or Walder Wyss could also be invalidated. This appears to be one of the reasons for the intense secrecy about the JuristGate affair.

During my research, I found two other employees of the legal fee insurance scheme who were also employed in a tribunal as a Greffier. It looks like there was a revolving door between the illegal legal insurance scheme and the tribunal.

What about the Greffier who created an invalid judgment trying to transfer a trademark that had already been canceled? The signature of the Greffier Mélanie Bron appears beside the signature of Caroline Kuhnlein-Hofmann in the invalid judgment. Does Madame Bron have any conflicts of interest, political engagement or side businesses?

Caroline Kuhnlein-Hofmann, Mélanie Bron

 

Mélanie Bron

 

The Cantonal Archives tell us they have a copy of Madame Bron's thesis on family law. There is no academic record of her working on trademark law. The thesis is cited in various other research works and it is mentioned in a 2004 edition of Uniscope, the UNIL newsletter.

A news report from 2018 tells us that Madame Bron was trying to divert more Cantonal police to Blonay, the village where she resides. She is pictured with other local figures Jean-Marc Nicolet, André Grivel and mayor Bertrand Cherix of the Parti libéral-radical (PLR). That is the same political party as the judge Richard Oulevay.

Jean-Marc Nicolet, Mélanie Bron, André Grivel, Bertrand Cherix

 

Is it appropriate for somebody with the powers of a judge to try and influence the deployment of police resources to suit their personal circumstances or should they be concerned with distributing police resources throughout the canton at large?

In the abstract of Benjamin Suter's report, he told us that the Greffier is meant to help keep the politically-affiliated judges honest. If the Greffiers are not honest either, the system described by Suter is a sham.

We found Mélanie Bron listed as a teacher for l'Ecole de la Conscience which claims to be the only French-speaking school of animal communication. Here is her biography from the web site:

Mélanie Bron - Teacher/trainer in CAI

A lawyer by training, Mélanie devotes much of her time to the animal world, practicing animal communication consultations since 2012 and offering feline/canine behavioral counseling since 2018. Her training as an animal masseuse (cats, dogs, and small mammals) furthers her understanding of animal physical sensations. Since 2020, she has combined her expertise in traditional Feng Shui to harmonize the living spaces of animals and their human companions. She teaches Cycle 1 in French-speaking Switzerland. You can find her internship dates and locations on our calendar.

Imagine for a moment that you are in the middle of a legal dispute and your brother calls up the Greffier / cat whisperer and asks her to take his cat for a walk. Hypothetically, he pays ten thousand Swiss francs for her to interpret his cat and you cross your fingers and hope that your company's trademark will be granted well-known status like Coca-cola or Disney.

It needs to be emphasized that the book value of your trademark increases by millions of francs with a declaration of well known status under the Paris convention. Any fee that is hypothetically paid for cat whispering is trivial in comparison to the profit for your organization.

Martine Richoz keeps a list of links where she tells us that Mélanie Bron is offering services in Chernex VD. Mélanie Bron has her own web site, HomeChance.ch where she displays horse-whispering pictures.

I will be happy to help you convey the messages you want to send to your pet and to receive their messages for you, or to help you create an interior that is conducive to your well-being and that of your loved ones.

In other countries, judges and senior employees of a tribunal are prohibited from running businesses on the side. When a jury is deliberating they are usually sequestered in a hotel to prevent any messages being conveyed through family and the media.

We see her accepting money from people through Facebook:

[OUR GRADUATES] Discover the career path of Mélanie Bron, a professional trained by Fabienne Maillefer and who collaborates with her in Switzerland 🇨🇭
All of Fabienne's trained communicators have completed a comprehensive course as animal interpreters (practitioner level) at the École de la Conscience 🗣👌
The average price of a consultation is CHF 100 / €80
Questions? Visit www.ecoledelaconscience.com
Fabienne Maillefer has been teaching for over 17 years. In 2018, she founded the École de la conscience. It is the first school of its kind to be recognized as a continuing education organization, delivering quality education by obtaining an international label reserved for adult education 💯👌
Mélanie Bron, horse whisperer, Blonay, Chernex, Vaud

 

Judgments in Switzerland are typically secret. Many more disputes are settled secretly at mediation without any judgment at all. The Greffier has access to all these secret settlement accords. What if Mathieu Parreaux was editing secret settlement documents on the same personal laptop he was using for Parreaux, Thiébaud & Partners? Where did the laptops go when the firm was shut down?

When FINMA published their judgment against Parreaux, Thiébaud & Partners, they redacted almost every paragraph.

By connecting this scandal directly to the cantonal tribunal, using this 2015 report from Benjamin Suter at Walder Wyss, liquidator of Parreaux, Thiébaud & Partners, we have proven, beyond reasonable doubt, that the entire Swiss legal system smells like a protection racket.

They pretend to be a country and they act like a student union. I graduated from the National Union of Students in Australia and traveled halfway around the world to Switzerland thinking student politics was behind me, only to find a bona fide kangaroo court system at work in the Alps.

A kangaroo court with a cat whisperer.

In Zurich, they tried to tell us that there was a smell coming from our cats. I recorded the discussion in the tribunal and published a full report about the black cat harassment judgment.

Dog ate the judgment

Here is the letter from the Swiss Intellectual Property Institute (IGE / IPI) telling the judges, greffiers and cat whisperers that the so-called Debian "judgment" can not be processed:

Debian, trademark, judgment

 

The judge Richard Oulevey sent another letter acknowledging that their so-called judgment is impossible to follow, in other words, it is on par with witchcraft.

Debian, trademark, judgment

 

It was the responsibility of the Greffier, Mélanie Bron to communicate with all the parties in the case and communicate with the Intellectual Property Institute to make sure there really was a trademark registration in effect. If she can communicate with cats and dogs, why couldn't she communicate with the IPI?

If running a side business as a pet psychic undermines her work at the tribunal, is it time to give up one job or the other?

Remember, over $120,000 from Debian bank accounts went to these kangaroo courts but when volunteers arrived at DebConf23 in India, they were asked to contribute their own money to some expenses. Abraham Raji apparently didn't have a few extra dollars for the day trip, he was left alone without a life jacket and he drowned.

The Parreaux, Thiébaud & Partners scandal went on for six years from 2017 to 2023 and everybody in the legal profession seemed to know about it from the beginning. Therefore, what other scandals are going on right now that the rest of us don't know about yet?

Footnote: enforcing the law in Pentridge Prison, Melbourne, Australia

From the Wikipedia article about Chopper Read:

While in Pentridge Prison's H division in the late 1970s, Read launched a prison war. "The Overcoat Gang" wore long coats all year round to conceal their weapons, and were involved in several hundred acts of violence against a larger gang during this period. Around this time, Read had a fellow inmate cut both of his ears off to be able to leave H division temporarily. ...

In 1978, while Read was incarcerated, his associate Amos Atkinson held 30 people hostage at The Waiters Restaurant in Melbourne while demanding Read's release. After shots were fired, the siege was lifted when Atkinson's mother, in her dressing gown, arrived at the restaurant to act as go-between. Atkinson's mother hit him over the head with her handbag and told him to "stop being so stupid". Atkinson then surrendered.

16 July, 2025 01:30PM

hackergotchi for David Bremner

David Bremner

Hibernate on the pocket reform 6/n

Context

Another kernel patch?

  • Confused about prerequisites, I wrote

  • A reply from Niklas Cassel suggested I look at

    https://lore.kernel.org/linux-pci/1744940759-23823-1-git-send-email-shawn.lin@rock-chips.com/

  • EDIT: It turns out that this patch is already shipped as part of the MNT Research kernel. It will need rebasing for 6.16.x.

Applying the prerequisites

  • Niklas also pointed me to

    https://lore.kernel.org/linux-pci/20250508-pcie-reset-slot-v4-0-7050093e2b50@linaro.org/

  • Since the new patch doesn't apply to linux master either, I guess I need to apply that series. But part of it is already applied, fun.

  • I'm not claiming this is the best way...

# index 31090770fffcc94e15 from the first patch in the series
$ git log --raw --all --find-object=31090770fffcc94e15
# The applied version of the first patch is `b06d125e6280603a34d9064cd9c12748ca2edb04`
$ git switch -c base b06d125e6280603a34d9064cd9c12748ca2edb04^
$ mbox-extract-patch < ~/Downloads/PATCH-v4-1-5-PCI-ERR-Remove-misleading-TODO-regarding-kernel-panic.mbox | git am
$ git rebase -i master  # two applied patches skipped
$ git switch master && git merge base
  • mbox-extract-patch is from package mailscripts.

  • git am -3 ~/tmp/PATCH-v3-PCI-dw-rockchip-Add-support-for-slot-reset-on-link-down-event.txt

  • Currently can't get the "Add system PM support" patch to apply, will test the others first.

  • Except that a test build tells me I need to rebase all of my patches against 6.15.x, rather than the current 6.16~rcX

previous episode

16 July, 2025 01:10PM

Sven Hoexter

Windows 10 to Ubuntu Migration

I know that Canonical / Ubuntu people are sometimes not well received due to promotion of Canonical tooling (some might remember upstart and Mir, or more recently snap and netplan). Thus, for some positive vibes, consider that I could hand out the Ubuntu Desktop image on a USB flash drive to a family member, and the family member could just replace Windows 10 without any assistance. It just worked. This was made possible by the will to keep a slightly dated ThinkPad in use, which is not supported by Windows 11.

I have to admit that I never looked at Ubuntu Desktop before, but the user experience is on par with everything else I know. Thanks to all the folks at Canonical who made that possible! Luckily the times when you had to fiddle with modelines for XFree86, and the sleepless nights spent configuring lpd to get printing up and running, are long gone. I believe that now, while Microsoft is doing Microsoft things with rolling Windows updates that force users to replace perfectly fine working hardware, is the time to encourage more people to move to open operating systems, and Ubuntu Desktop seems to be a very suitable choice.

Things to Improve

Although I think the out-of-the-box experience is great, there are a few niche topics where things could improve.

Default Access to Apt / Ubuntu Universe

Well, snaps are promoted as the primary application source, but having some graphical interface like synaptic available by default to just install from Ubuntu Universe would be helpful. In this case we wanted to install keepass2 to access the user's KeePass file kept from the Windows setup. Having to tell someone "open the terminal and type sudo apt install" is something that requires support.
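
For reference, the non-graphical route looks like this (keepass2 being the package name mentioned above):

$ sudo apt update
$ sudo apt install keepass2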

Snaps and Isolation Overrides

I'm fine with snaps having least privileges, but it would be nice if one could add overrides easily. Here the family member was playing with an Arduino Uno and there is one sample in the workbook that utilizes a Java application called Processing. It's available as a snap, but that one doesn't have access to the required serial port device file. I tried hard to make it work - full details in the snapcraft forum - but failed, and opted to use --devmode to install it without isolation enforcement. As far as I understood snap, that results in no more automatic updates for the application. If someone from the Ubuntu crowd with more snap debug experience has additional hints on how to narrow down which change is required, I would love to improve that and create a PR for the Processing developers. Either reply in the forum or reach out via mail sven at stormbind dot net.
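
For the record, a rough sketch of the commands involved. The snap name processing is an assumption here (double-check with snap find), and as noted above --devmode disables confinement and, as far as I understand, automatic refreshes:

$ sudo snap install processing              # strictly confined, no serial port access (hypothetical snap name)
$ snap connections processing               # shows which interfaces are (not) connected
$ sudo snap remove processing
$ sudo snap install processing --devmode    # fallback: no confinement, no auto-refresh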

16 July, 2025 12:50PM

Google Cloud Oddities Summer 2025 Edition

Latest oddities I ran into with Google Cloud products before I start to forget about them again.

e2 Compute Instances vs CloudNAT

Years ago I already had a surprising encounter with the Google Cloud e2 instances. Back then we observed CPU steal time from 20-60%, which made the instances unusable for anything remotely latency sensitive. Now someone started to run a workload which creates many outbound connections to the same destination IP:Port. To connect to the internet we utilize the Google Cloud product "CloudNAT" which implements a NAT solution somewhere in the network layer.

Starting the workload led after a few seconds to all sorts of connection issues, and of course logs from CloudNAT that it dropped connections. The simplest reproducer I could find was while true; do curl http://sven.stormbind.net; done which already led to connection drops on CloudNAT.

We stared a bit at the output of gcloud compute routers get-nat-mapping-info our-default-natgw, but allocating additional ports still worked fine in general. Further investigation led to two differences between a project which was fine and those that failed:

  1. c2d or n2d machine types instead of e2 and
  2. usage of gVNIC.

Moving away from the e2 instances instantly fixed our issue. Only some connection drops could be observed on CloudNAT if we set the min_ports_per_vm value too low and it could not allocate new ports in time. Thus we did some additional optimizations:

  • raised min_ports_per_vm to 256
  • raised max_ports_per_vm to 32768 (the sensible maximum because CloudNAT will always double its allocation)
  • set nat_tcp_timewait_sec to 30, default is 120, reclaim of ports is only running every 30s, thus ports can be re-used after 30-60s

See also upstream documentation regarding timeouts.
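
For reference, a rough sketch of how those knobs look when set via the gcloud cli. The NAT, router and region names are placeholders, and the exact flag names and value formats should be verified against gcloud compute routers nats update --help:

$ gcloud compute routers nats update my-natgw \
    --router=my-router --region=europe-west1 \
    --min-ports-per-vm=256 \
    --max-ports-per-vm=32768 \
    --enable-dynamic-port-allocation \
    --tcp-time-wait-timeout=30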

To complete the setup alignment we also enabled gVNIC on all GKE pools. A noteworthy detail a colleague figured out: if you use terraform to provision GKE pools, make sure to use at least google provider v6.33.0 to avoid a re-creation of your node pool.

GKE LoadBalancer Force allPorts: true on Forwarding Rule

Technically it's possible to configure a forwarding rule to listen on some or all ports. That gets more complicated if you do not configure the forwarding rule via terraform or gcloud cli, but use a GKE resource kind: Service with spec.type: LoadBalancer. The logic documented by Google Cloud is that the forwarding rule will have per port configuration if it's five or less, and above that it will open for all ports. Sadly that does not work e.g. in cases where you've an internal load balancer and a serviceAttachment attached to the forwarding rule. In my experience reconfiguring was also unreliable in cases without a serviceAttachment and required a manual deletion of the service load balancer to have the operator reconcile it and create it correctly.

Given that we wanted to have all ports open to allow us to dynamically add more ports on a specific load balancer, but there is no annotation for that, I worked around with this beauty:

      ports:
        - name: dummy-0
          port: 2342
          protocol: TCP
          targetPort: 2342
        - name: dummy-1
          port: 2343
          protocol: TCP
          targetPort: 2343
        - name: dummy-2
          port: 2344
          protocol: TCP
          targetPort: 2344
        - name: dummy-3
          port: 2345
          protocol: TCP
          targetPort: 2345
        - name: service-1
          port: 4242
          protocol: TCP
          targetPort: 4242
        - name: service-2
          port: 4343
          protocol: TCP
          targetPort: 4343

If something in that area did not work out, there are basically two things to check:

  1. Is the port open on the forwarding rule / is the forwarding rule configured with allPorts: true?
  2. Did the VPC firewall rule created by the service operator in GKE get updated to open all required ports?
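
For the first point, a quick way to check from the gcloud cli (rule name and region are placeholders):

$ gcloud compute forwarding-rules describe my-ilb-rule \
    --region=europe-west1 --format="yaml(allPorts, ports, portRange)"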

Rate Limiting with Cloud Armor on Global TCP Proxy Load Balancer

According to Google Cloud support, rate limiting on a TCP proxy is a preview feature. That seems to be the excuse why it's all very inconsistent right now, but it works.

  • The Google Cloud Web Console is 100% broken and unable to deal with it. Don't touch it via the web.
  • If you configure an exceed_action in a google_compute_security_policy terraform resource you must use a value with response code, e.g. exceed_action = "deny(429)". The response code will be ignored. In all other cases I know you must use a deny without response code if you want to be able to assign the policy to a L3/L4 load balancer.
  • If you use config-connector (kcc) you can already use exceedAction: deny albeit it's not documented. Neither for config-connector itself nor for the API.
  • If you use the gcloud cli you can use --exceed-action=deny which is already documented if you call gcloud beta compute security-policies create --help, but it also works in the non-beta mode. Also export / import via gcloud cli work with a deny without defining a response code.

Terraform Sample Snippet

  rule {
    description = "L3-L4 Rate Limit"
    action      = "rate_based_ban"
    priority    = "2342"
    match {
      versioned_expr = "SRC_IPS_V1"
      config {
        src_ip_ranges = ["*"]
      }
    }
    rate_limit_options {
      enforce_on_key = "IP"
      # exceed_action only supports deny() with a response code
      exceed_action = "deny(429)"
      rate_limit_threshold {
        count        = 320
        interval_sec = 60
      }
      ban_duration_sec = 240
      ban_threshold {
        count        = 320
        interval_sec = 60
      }
      conform_action = "allow"
    }
  }

Config-Connector Sample Snippet

  - action: rate_based_ban
    description: L3-L4 Rate Limit
    match:
      config:
        srcIpRanges:
          - "*"
      versionedExpr: SRC_IPS_V1
    preview: false
    priority: 2342
    rateLimitOptions:
      banDurationSec: 240
      banThreshold:
        count: 320
        intervalSec: 60
      conformAction: allow
      enforceOnKey: IP
      exceedAction: deny
      rateLimitThreshold:
         count: 320
         intervalSec: 60
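
gcloud CLI Sample Snippet

For completeness, a rough sketch of the same rule created via the gcloud cli. The policy name is a placeholder and the flag names should be checked against gcloud compute security-policies rules create --help:

$ gcloud compute security-policies rules create 2342 \
    --security-policy=my-l4-policy \
    --src-ip-ranges="*" \
    --action=rate-based-ban \
    --rate-limit-threshold-count=320 \
    --rate-limit-threshold-interval-sec=60 \
    --ban-duration-sec=240 \
    --ban-threshold-count=320 \
    --ban-threshold-interval-sec=60 \
    --conform-action=allow \
    --exceed-action=deny \
    --enforce-on-key=IP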

16 July, 2025 12:06PM

July 15, 2025

hackergotchi for Alberto García

Alberto García

Converting QEMU qcow2 images directly to stdout

Introduction

Some months ago, my colleague Madeeha Javed and I wrote a tool to convert QEMU disk images into qcow2, writing the result directly to stdout.

This tool is called qcow2-to-stdout.py and can be used for example to create a new image and pipe it through gzip and/or send it directly over the network without having to write it to disk first.

This program is included in the QEMU repository: https://github.com/qemu/qemu/blob/master/scripts/qcow2-to-stdout.py

If you simply want to use it then all you need to do is have a look at these examples:

$ qcow2-to-stdout.py source.raw > dest.qcow2
$ qcow2-to-stdout.py -f dmg source.dmg | gzip > dest.qcow2.gz
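
The output can also be sent straight over the network without an intermediate file, as mentioned above (the remote host and path are just an example):

$ qcow2-to-stdout.py source.raw | ssh backup-host 'cat > /srv/images/dest.qcow2'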

If you’re interested in the technical details, read on.

A closer look under the hood

QEMU uses disk images to store the contents of the VM’s hard drive. Images are often in qcow2, QEMU’s native format (although a variety of other formats and protocols are also supported).

I have written in detail about the qcow2 format in the past (for example, here and here), but the general idea is very easy to understand: the virtual drive is divided into clusters of a certain size (64 KB by default), and only the clusters containing non-zero data need to be physically present in the qcow2 image. So what we have is essentially a collection of data clusters and a set of tables that map guest clusters (what the VM sees) to host clusters (what the qcow2 file actually stores).

A qcow2 file is a collection of data clusters plus some metadata to map them to what the guest VM sees.

qemu-img is a powerful and versatile tool that can be used to create, modify and convert disk images. It has many different options, but one question that sometimes arises is whether it can use stdin or stdout instead of regular files when converting images.

The short answer is that this is not possible in general. qemu-img convert works by checking the (virtual) size of the source image, creating a destination image of that same size and finally copying all the data from start to finish.
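
For comparison, this is what the usual file-to-file conversion looks like (a standard qemu-img invocation, where both source and destination are regular, seekable files):

$ qemu-img convert -f raw -O qcow2 source.raw dest.qcow2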

Reading a qcow2 image from stdin doesn’t work because data and metadata blocks can come in any arbitrary order, so it’s perfectly possible that the information that we need in order to start writing the destination image is at the end of the input data¹.

Writing a qcow2 image to stdout doesn’t work either because we need to know in advance the complete list of clusters from the source image that contain non-zero data (this is essential because it affects the destination file’s metadata). However, if we do have that information then writing a new image directly to stdout is technically possible.

The bad news is that qemu-img won’t help us here: it uses the same I/O code as the rest of QEMU. This generic approach makes total sense because it’s simple, versatile and is valid for any kind of source and destination image that QEMU supports. However, it needs random access to both images.

If we want to write a qcow2 file directly to stdout we need new code written specifically for this purpose, and since it cannot reuse the logic present in the QEMU code this was written as a separate tool (a Python script).

The process itself goes like this:

  • Read the source image from start to finish in order to determine which clusters contain non-zero data. These are the only clusters that need to be present in the new image.
  • Write to stdout all the metadata structures of the new image. This is now possible because after the previous step we know how much data we have and where it is located.
  • Read the source image again and copy the clusters with non-zero data to stdout.

Images created with this program always have the same layout: header, refcount tables and blocks, L1 and L2 tables, and finally all data clusters.

One problem here is that, while QEMU can read many different image formats, qcow2-to-stdout.py is an independent tool that does not share any of the code and therefore can only read raw files. The solution here is to use qemu-storage-daemon. This program is part of QEMU and it can use FUSE to export any file that QEMU can read as a raw file. The usage of qemu-storage-daemon is handled automatically and the user only needs to specify the format of the source file:

$ qcow2-to-stdout.py -f dmg source.dmg > dest.qcow2

qcow2-to-stdout.py can only create basic qcow2 files and does not support features like compression or encryption. However, a few parameters can be adjusted, like the cluster size (-c), the width of the reference count entries (-r) and whether the new image is created with the input as an external data file (-d and -R).

And this is all, I hope that you find this tool useful and this post informative. Enjoy!

Acknowledgments

This work has been developed by Igalia and sponsored by Outscale, a Dassault Systèmes brand.

Logos of Igalia and Outscale

¹ This problem would not happen if the input data was in raw format but in this case we would not know the size in advance.

15 July, 2025 05:17PM by berto

hackergotchi for Daniel Pocock

Daniel Pocock

Conviction overturned: helped but not harbored by Catholic priest Fr Sean O’Connell

In an earlier blog, I published a copy of my baptism certificate signed by the late Father Sean Patrick O'Connell of St Paul's church in Coburg, beside the former prison.

Shortly before I was born, Fr O'Connell had been convicted of harboring an escaped prisoner. Fr O'Connell made an appeal and the conviction was overturned.

For people who started the gossip about police, it is a fascinating story.

Even more significant, the story shows us that the church explained their philosophy of forgiveness and redemption to the judge and to the media in 1977. Journalists traveled out to Coburg to meet Fr O'Connell and the story appeared in news reports all around Australia.

In Melbourne, the story and the philosophy was published on the front page of the newspaper under the heading "I'd do same again, says cleared priest".

The public and the court had the opportunity to ask Fr O'Connell how the church would handle a more dangerous criminal, for example, somebody convicted for abuse. Nobody asked these questions.

These observations don't exonerate institutions for their failings. However, if the wider public had this opportunity to examine the philosophy in 1977 then society at large has to share some responsibility for failing to scrutinize religious institutions.

The Age, 14 January 1977 gives us a view into Australian society and prison life in 1977, with letters about police tactics and abuse inside the prison.

Police behavior is the real scandal

Police go gay to lure homosexuals

The recent arrests of homosexuals by police in the Black Rock - Sandringham area and the type of "poofter bashing" mentality underlying them is cause for serious public concern.

...

Peeping-tommery

It was with some humor that I read the report of policemen baiting homosexuals at Black Rock. ...

...

Deplorable crime

The Minister for Social Welfare, Mr Dixon, confesses he is unaware whether or not a pack rape victim in J Division of Pentridge needed medical attention ("The Age," 6/1).

The victim endured these rapes for three successive nights without any restraint from those supposed to be in authority. Is Mr. Dixon concerned enough about such serious lapses of supervision to investigate the causes and reasons why such deplorable crime can exist in Pentridge?

Such victimisation between prisoners reads like the worst excesses of the early penal times of Tasmania.

Do we need another Elizabeth Fry to reform the prisons of the Hamer Government?

(Mrs.) LORNA BYRNE (Forest Hill)

The last letter is rather similar to my own email from 2013 resigning from the ALP over very similar cases of abuse in the concentration camps for asylum seekers.

Jeanette Towns, Mark Kron, homophobia, police behavior

 

Brian Dixon, Lorna Byrne, Pentridge, abuse

 

The chief of the prison resigned to pursue a PhD. The same newspaper interviewed him too. On 19 April 1977 they published it prominently at the top of page 3 with the headline "Pentridge shouldn't be escape-proof, says its former chief".

Pentridge prison is not escape-proof - and should not be escape-proof - according to the prison's former superintendent, Mr. John Van Groningen.

...

"If they knew the prison really was escape-proof, prisoners wouldn't be able to lie in bed at night and dream of how they could get out," he said.

"I'm serious. The vast majority of prisoners have these sorts of dreams and I believe they are healthy and therapeutic for them.

John Van Groningen, Pentridge, escape

 

It looks like the prisoners had a lot of respect for the former jailmaster. People threw copies of the newspaper over the walls for prisoners to read the interview. Three weeks later, dreams came true when somebody threw a rope over the wall and three prisoners escaped.

According to reports, one of the escapees, James Richard Loughnan, broke both of his legs and hid behind the church beside the prison while his fellow escapees, Allan Martin and Peter Dawson, made their getaway.

The reports don't tell us if the prisoners were wearing uniforms or if the sirens were activated to warn the community about an escape. Therefore, Fr O'Connell may not have had any hints that an escape had transpired or that the man he was about to meet was one of the suspects.

Pentridge Prison, Coburg, St Paul's Church

 

Fr O'Connell was leaving in his car and he came across Loughnan on the ground. Loughnan claimed he had been injured in a traffic accident and asked for transport to get medical assistance. Fr O'Connell obliged.

During the ride in Fr O'Connell's car, Loughnan asked to make confession. Fr O'Connell, like all Catholic priests, is unable to tell anybody what was said during confession. Nonetheless, it seems that he did come to realize he was transporting an escaped prisoner.

On 11 May 1977 The Age published a photo of Loughnan being carried into court by police.

James Richard Loughnan, Noel Thomas, Phil Glare

 

A few weeks later, the magistrate's court convicted Fr O'Connell for harboring an escaped prisoner. Fr O'Connell's punishment was a six month good behavior bond. The punishment seems bizarre as a priest is already meant to be a model of good behavior for everybody else.

Catholic Church defends right of priests to keep confidences

15 June 1977

The Roman Catholic Church yesterday defended the right of its priests to keep confidences after a Coburg priest was found guilty of harboring a Pentridge escaper.

...

"I was simply acting as a priest helping a man who was injured with no thought of harboring or breaking the law."

Nonetheless, Fr O'Connell lodged an appeal in the County Court. The judge decided that Fr O'Connell had not offered the prisoner shelter, therefore, the transport by car to a hospital could not be enough to justify a conviction for harboring. Fr O'Connell was liberated from the obligation of good behavior.

I'd do same again, says cleared priest

16 August 1977

A Roman Catholic priest cleared of harboring a Pentridge escaper said last night he would act exactly the same way if the situation arose again.

...

Father O'Connell, 36, of St. Paul's Church, Coburg, yesterday won an appeal in the County Court against a conviction and six-month good-behavior bond for harboring an escaper on May 9.

...

"People have to trust us. It's a trust which has been won over years and years. I felt that principle was at stake," he said.

He said he had taken confession from Loughnan and was therefore bound not to tell the police of his whereabouts.

Daniel Pocock, baptism, St Paul's, Coburg

 

Fr Sean Patrick O'Connell, St Paul's, Coburg, acquitted

 

A few weeks later, another news report appeared about escape dreams.

SM told jail had a sex hideaway

13 September 1977

A secret cubby-hole discovered at Pentridge was used for private homosexual acts and to hide contraband goods, Melbourne Magistrates Court was told yesterday.

The court was told the hole was behind a false wall in a stationary cupboard.

...

"We needed a place to go to where nobody would see us," ...

Paul Kurt Hazel, Pentridge prison

 

Read more about the police rumors.

The Catholic Church teaches us that God forgives. Artificial Intelligence doesn't.

Follow Catholic.Community.

15 July, 2025 11:00AM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

anytime 0.3.12 on CRAN: Minor Bugfix and Maintenance

A maintenance release 0.3.12 of the anytime package arrived on CRAN today. The package is fairly feature-complete, and code and functionality remain mature and stable.

anytime is a very focused package aiming to do just one thing really well: to convert anything in integer, numeric, character, factor, ordered, … input format to either POSIXct (when called as anytime) or Date objects (when called as anydate) – and to do so without requiring a format string as well as accommodating different formats in one input vector. See the anytime page, or the GitHub repo for a few examples, and the beautiful documentation site for all documentation.

This release covers a corner case reported in a GitHub issue: the (nonsensical but possible) input of zero-length (floating point or integer) vectors was not dealt with properly, which led to an error. We now return the requested type (POSIXct or Date, depending on the call) also with length zero. Two minor maintenance tasks were also addressed since the last release six months ago.

The short list of changes follows.

Changes in anytime version 0.3.12 (2025-07-14)

  • Continuous integration now uses r-ci action with embedded bootstrap

  • The versioned depends on Rcpp now requires 1.0.8 or newer to support use of the updated header file structure

  • The corner-case of an empty (numeric or integer) vector argument is now addressed, new tests have been added (#135)

Courtesy of my CRANberries, there is also a diffstat report of changes relative to the previous release. The issue tracker at the GitHub repo can be used for questions and comments. More information about the package is at the package page, the GitHub repo and the documentation site.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

15 July, 2025 01:58AM

July 14, 2025

hackergotchi for David Bremner

David Bremner

Hibernate on the pocket reform 5/n

Context

A Kernel Patch

  • The following patch looks potentially relevant:

https://patchwork.kernel.org/project/linux-rockchip/patch/20250509-b4-pci_dwc_reset_support-v3-1-37e96b4692e7@wdc.com/

  • git clone https://github.com/torvalds/linux.git (Is there a better place? kernel.org is pretty opaque)

  • are the pre-reqs in mnt kernel? The patch header contains

    base-commit: 08733088b566b58283f0f12fb73f5db6a9a9de30
    change-id: 20250430-b4-pci_dwc_reset_support-d720dbafb7ea
    prerequisite-change-id: 20250404-pcie-reset-slot-730bfa71a202:v4
    prerequisite-patch-id: 2dad85eb26838d89569b12c19d70f392fa592667
    prerequisite-patch-id: 6238a682bd8e9476e5911b7a59263c3fc618d63e
    prerequisite-patch-id: 37cab00bc255a62b1e8396a48a3afba5e1751abd
    prerequisite-patch-id: ff711f65cf9926374646b76cd38bdd823d576764
    prerequisite-patch-id: 1654cca919d024b9a9190b28e90f722975c797e8
  • First check and see what is upstream. I had to remember how to use git-patch-id and also how to split a long regex disjunction into multiple lines.
git log --patch --no-merges v6.13.. | \
  git patch-id --stable | \
  grep -F -e 2dad85eb26838d89569b12c19d70f392fa592667 \
    -e 6238a682bd8e9476e5911b7a59263c3fc618d63e \
    -e 37cab00bc255a62b1e8396a48a3afba5e1751abd \
    -e ff711f65cf9926374646b76cd38bdd823d576764 \
    -e 1654cca919d024b9a9190b28e90f722975c797e8

yields

37cab00bc255a62b1e8396a48a3afba5e1751abd d1c696dba120624256ab335ab8247f535b872309
2dad85eb26838d89569b12c19d70f392fa592667 b06d125e6280603a34d9064cd9c12748ca2edb04

The two commits that are actually found are only in tag 'v6.16~rc1'

  • The discussion on LKML mentions pci/slot-reset. Where does that branch live?
git remote add pci https://git.kernel.org/pub/scm/linux/kernel/git/pci/pci.git
git fetch pci
git for-each-ref refs/remotes/pci --format "%(refname)" | \
    while read branch
    do
        echo "checking $branch"
        git log --patch --no-merges --since 2025-01-01 $branch | \
            git patch-id --stable | \
            grep -F -e 2dad85eb26838d89569b12c19d70f392fa592667 \
                 -e 6238a682bd8e9476e5911b7a59263c3fc618d63e \
                 -e 37cab00bc255a62b1e8396a48a3afba5e1751abd \
                 -e ff711f65cf9926374646b76cd38bdd823d576764 \
                 -e 1654cca919d024b9a9190b28e90f722975c797e8
    done

This did not find any more commits, but I did learn how to use git-for-each-ref, so I guess not a total loss.

previous episode | next episode

14 July, 2025 10:49PM

hackergotchi for Daniel Pocock

Daniel Pocock

Outreachy & Debian pregnancy cluster, Meike Reichle evidence

Meike Reichle is the next case in the Debian pregnancy cluster. In most voluntary organizations, there is some privacy for the family lives of volunteers. Meike chose to share details on the mailing list with over a thousand strangers so we can talk about it in a general sense here.

Under copyright law, the money raised for a work of joint authorship is to be divided up equally between every co-author. The law is very clear on this point. At Columbia University in New York, the Kernochan Center for Law, Media and the Arts publishes a page about joint works (Co-Authorship) with this advice:

in the case of two co-authors, and absent an agreement to the contrary, a right to an accounting for 50 percent of the proceeds of the exploitation of a given work

The Debian maintainer database lists over one thousand seven hundred and twenty (1,720) co-authors (Debian Developers). These are all the people who have ever been on the Debian keyring. There are possibly other people that have not been tracked.

In 2023, when Abraham Raji went to DebConf23, he did a huge amount of work as an unpaid volunteer, including the design for the DebConf23 logo. When he arrived at the day trip, everybody was asked to contribute some of their own money. Abraham Raji didn't put in any money, he was left alone without a life jacket and he drowned.

In the same year over $32,000 was given to so-called diversity internships. It is listed in the summaries published by Software in the Public Interest.

         debit         credit          total
--------------------------------------------
  41559.14 USD       0.87 USD  -41558.27 USD  Expenses
    363.82 USD       0.87 USD    -362.95 USD    Bank-Fees
   8228.22 USD              0   -8228.22 USD    IT
     21.99 USD              0     -21.99 USD      Domains
   8206.23 USD              0   -8206.23 USD      Hardware
  32000.00 USD              0  -32000.00 USD    Internships
    967.10 USD              0    -967.10 USD    Travel
    271.08 USD              0    -271.08 USD      Accommodation
    696.02 USD              0    -696.02 USD      Transportation
    260.35 USD    5131.30 USD    4870.95 USD  Income
             0       3.95 USD       3.95 USD    Currency-Gain
      3.94 USD              0      -3.94 USD    Currency-Loss
    256.41 USD    5127.35 USD    4870.94 USD    Donations
--------------------------------------------
  41819.49 USD    5132.17 USD  -36687.32 USD

If we followed the advice from the experts at Columbia University we would divide the sum of $32,000 between all 1,720 joint authors. Each person (or their estate if they are dead) would receive $18.60. If some of the co-authors want to contribute their money to a fund for diversity then they can do so. Each co-author must make a personal decision whether they put their share of the money in the diversity fund or whether they keep it for themselves.

When the GNOME foundation created Outreach Program for Women in 2006, a little bit less than two percent of Debian Developers were female.

When Debian decided to start contributing money to the program in 2013, the percentage of women was still about two percent.

Today, in 2025, twelve years after Debian started contributing authors' money to the Outreachy internships, we still have less than two percent women.

Out of the entire history of the program only one of the women, Ulrike Uhlig became a Debian Developer. She participated for a couple of years and then she quit.

Therefore, what is Debian receiving in exchange for this money? Or what are the men in charge hoping to receive in exchange for $32,000 per year?

When we put the events in the correct order and look at the evidence about the Debian pregnancy cluster it becomes very clear.

The next pregnancy we look at is the email from Meike.

GNOME launched Outreach Program for Women and in the same year we saw Meike Reichle at DebConf6. Here is a snippet of the video:

14 July, 2025 08:00PM

July 13, 2025

hackergotchi for Bits from Debian

Bits from Debian

DebConf25 starts today in Brest on Monday, July 14, 2025

DebConf25, the 25th annual Debian Developer Conference, is taking place in Brest, France from 14 to 19 July 2025. Debian contributors from all over the world have come together at the Campus of IMT Atlantique Bretagne-Pays de la Loire, Brest, to participate and work in a conference exclusively run by volunteers.

As the conference begins on July 14, the French National Day, Debian can make France's motto its own: "Liberté, égalité, fraternité", Freedom for Free and Open Source Software, Equality for the equal rights (and duties) of everyone to use, modify and share Free Software, and Fraternity, which perfectly covers what our code of conduct defines.

Today the main conference starts with around 500 expected attendees and over 140 scheduled activities, including 45-minute and 20-minute talks, Bird of a Feather ("BoF") team meetings, workshops, a job fair, as well as a variety of other events. The full schedule is updated each day, including activities planned ad-hoc by attendees over the course of the conference.

If you would like to engage remotely, you can follow the video streams available from the DebConf25 website for the events happening in the three talk rooms: Méridienne, Grand amphi and Petit amphi accessible from the DebConf25 homepage. Or you can join the conversations happening inside the talk rooms via the OFTC IRC network in the #debconf-meridienne, #debconf-grandamphi, #debconf-petitamphi, and #debconf-bof channels. Please also join us in the #debconf channel for common discussions related to DebConf.

You can also follow the live coverage of news about DebConf25 provided by our micronews service or the @debian profile on your favorite social network.

DebConf is committed to a safe and welcoming environment for all participants. Please see our Code of Conduct page for more information on this.

Debian thanks the commitment of numerous sponsors to support DebConf25, particularly our Platinum Sponsors: Infomaniak, Proxmox, Viridien, EDF, and AMD.

DebConf25 logo

13 July, 2025 10:50PM by The Debian Publicity Team

hackergotchi for David Bremner

David Bremner

Hibernate on the pocket reform 3/n

Context

Serial console hardware

  • Manual is unclear about name of connector (J16 in schematics, J17 in manual).
  • Also numbering of pins is not given afaict.
  • Clone https://source.mnt.re/reform/pocket-reform.git
  • Look at pocket-reform-motherboard.kicad_pcb
  • From the PCB I can confirm J16 and pins numbered left (sysctl) to right.
  • attach "dtech" prolific PL2303 based serial to usb cable per serial console section of PR manual
  • lsusb shows ID 067b:23a3 Prolific Technology, Inc. ATEN Serial Bridge
  • install tio
  • add my user to group dialout
  • newgrp dialout
  • tio /dev/ttyUSB0 -b 1500000
  • A closer look at the PCB in kicad makes me realize the pin labels in the manual are wrong. 4 = GND, 5 = UART1_RX, 6= UART1_TX. With that change I have U-boot output on boot.

Serial console software

With some help from minute on ircs://irc.libera.chat:6697/#mnt-reform, I got the kernel boot arguments right to have not just u-boot output but linux kernel output on the serial console. In consfigurator notation

(on-change
      (file:has-content "/etc/flash-kernel/ubootenv.d/00reform2_serial_console"
        "setenv bootargs \"${bootargs} console=ttyS2,1500000 keep_bootcon\"")
    (cmd:single "flash-kernel"))

The filename should sort before "00reform2_ubootenv" so that the existing "console=tty1" still ends up at the end.

previous episode | next episode

13 July, 2025 02:50PM

July 12, 2025

hackergotchi for Bits from Debian

Bits from Debian

Debconf25 welcomes its sponsors

DebConf25 logo

DebConf25, the 26th edition of the Debian conference, is taking place at the Brest campus of IMT Atlantique Bretagne-Pays de la Loire, France. We appreciate the organizers for their hard work, and hope this event will be highly beneficial for those who attend in person as well as online.

This event would not be possible without the help from our generous sponsors. We would like to warmly welcome the sponsors of DebConf 25, and introduce them to you.

We have five Platinum sponsors.

  • Our first Platinum sponsor is Infomaniak. Infomaniak is Switzerland’s leading developer of Web technologies. With operations all over Europe and based exclusively in Switzerland, the company designs and manages its own data centers powered by 100% renewable energy, and develops all its solutions locally, without outsourcing. With millions of users and the trust of public and private organizations across Europe - such as RTBF, the United Nations, central banks, over 3,000 radio and TV stations, as well as numerous cities and security bodies - Infomaniak stands for sovereign, sustainable and independent digital technology. The company offers a complete suite of collaborative tools, cloud hosting, streaming, marketing and events solutions, while being owned by its employees and self-financed exclusively by its customers.

  • Proxmox is the second Platinum sponsor. Proxmox develops powerful, yet easy-to-use Open Source server software. The product portfolio from Proxmox, including server virtualization, backup, and email security, helps companies of any size, sector, or industry to simplify their IT infrastructures. The Proxmox solutions are built on Debian, we are happy that they give back to the community by sponsoring DebConf25.

  • Viridien is our third Platinum sponsor. Viridien is an advanced technology, digital and Earth data company that pushes the boundaries of science for a more prosperous and sustainable future. Viridien has been using Debian-based systems to power most of its HPC infrastructure and its cloud platform since 2009 and currently employs two active Debian Project Members.

  • EDF is our fourth Platinum sponsor. EDF is a leading global utility company focused on low-carbon power generation. The group uses advanced engineering and scientific computing tools to drive innovation and efficiency in its operations, especially in nuclear power plant design and safety assessment. Since 2003, the EDF Group has been using Debian as its main scientific computing environment. Debian's focus on stability and reproducibility ensures that EDF's calculations and simulations produce consistent and accurate results.

  • AMD is our fifth Platinum sponsor. The AMD ROCm platform includes programming models, tools, compilers, libraries, and runtimes for AI and HPC solution development on AMD GPUs. Debian is an officially supported platform for AMD ROCm and a growing number of components are now included directly in the Debian distribution. For more than 55 years AMD has driven innovation in high-performance computing, graphics and visualization technologies. AMD is deeply committed to supporting and contributing to open-source projects, foundations, and open-standards organizations, taking pride in fostering innovation and collaboration within the open-source community.

Our Gold sponsors are:

  • Ubuntu, the Operating System delivered by Canonical.

  • Freexian, a services company specialized in Free Software and in particular Debian GNU/Linux, covering consulting, custom developments, support and training.

  • Lenovo, a global technology leader manufacturing a wide portfolio of connected products including smartphones, tablets, PCs and workstations as well as AR/VR devices, smart home/office and data center solutions.

  • Collabora, a global consultancy delivering Open Source software solutions to the commercial world.

  • Vyos Networks provides a free routing platform that competes directly with other commercially available solutions and VyOS is an open source network operating system based on Debian.

Our Silver sponsors are:

  • Google, one of the largest technology companies in the world, providing a wide range of Internet-related services and products such as online advertising technologies, search, cloud computing, software, and hardware.
  • Arm: leading technology provider of processor IP, Arm powered solutions have been supporting innovation for more than 30 years and are deployed in over 280 billion chips to date.
  • The Bern University of Applied Sciences with around 7,925 students enrolled, located in the Swiss capital.
  • Siemens is a technology company focused on industry, infrastructure and transport.
  • Linagora develops ethical Free and Open Source Software, supports Open Source products that helps humans and boosts digital progress. They promote Free Software in France.
  • Pexip brings the ease of commercial video platforms to secure and sovereign environments without compromising control or performance.
  • Sipgate offers cloud telephony that automates routine tasks, recognizes customer needs, and enables deep CRM integrations.
  • evolix is a French company specialized in hosting and managed services (MSP), working only with Open Source software.
  • Civil Infrastructure Platform, a collaborative project hosted by the Linux Foundation, establishing an open source “base layer” of industrial grade software.
  • credativ: a consulting and services company offering comprehensive services and technical support for the implementation and operation of Open Source Software in business applications.
  • Wind River is a leader in delivering the highest levels of secure, safe, and reliable solutions for mission-critical intelligent systems.
  • NovaCustom is a company that lets you configure your own laptop with various hardware and software options, focusing on privacy and security.
  • Microsoft who enables digital transformation for the era of an intelligent cloud and an intelligent edge. Its mission is to empower every person and every organization on the planet to achieve more.
  • McKay Brothers is the acknowledged leader in providing extreme low latency microwave private bandwidth for firms trading in financial markets.
  • Matanel Foundation, whose first concern is to preserve the cohesion of a society and a nation plagued by divisions.
  • Qualcomm Technologies, one of the world's leading companies in the field of mobile technology, sponsors and contributes to Open Source developer communities that drive collaboration.

Bronze sponsors:

And finally, our Supporter level sponsors:

A special thanks to the IMT Atlantique Bretagne-Pays de la Loire, our Venue Partner and our Network Partner ResEl!

Thanks to all our sponsors for their support! Their contributions enable a diverse global community of Debian developers and maintainers to collaborate, support one another, and share knowledge at DebConf25.

12 July, 2025 09:45PM by The Debian Publicity Team

Reproducible Builds

Reproducible Builds in June 2025

Welcome to the 6th report from the Reproducible Builds project in 2025. Our monthly reports outline what we’ve been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. If you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.

In this report:

  1. Reproducible Builds at FOSSY 2025
  2. Distribution work
  3. diffoscope
  4. OSS Rebuild updates
  5. Website updates
  6. Upstream patches
  7. Reproducibility testing framework

Reproducible Builds at FOSSY 2025

On Saturday 2nd August, Vagrant Cascadian and Chris Lamb will be presenting at this year’s FOSSY 2025. Their talk, titled Never Mind the Checkboxes, Here’s Reproducible Builds!, is being introduced as follows:

There are numerous policy compliance and regulatory processes being developed that target software development… but do they solve actual problems? Does it improve the quality of software? Do Software Bill of Materials (SBOMs) actually give you the information necessary to verify how a given software artifact was built? What is the goal of all these compliance checklists anyways… or more importantly, what should the goals be? If a software object is signed, who should be trusted to sign it, and can they be trusted … forever?

The talk will introduce the audience to Reproducible Builds as a set of best practices which allow users and developers to verify that software artifacts were built from the source code, but also allows auditing for license compliance, providing security benefits, and removes the need to trust arbitrary software vendors.

Hosted by the Software Freedom Conservancy and taking place in Portland, Oregon, USA, FOSSY aims to be a community-focused event: “Whether you are a long time contributing member of a free software project, a recent graduate of a coding bootcamp or university, or just have an interest in the possibilities that free and open source software bring, FOSSY will have something for you”. More information on the event is available on the FOSSY 2025 website, including the full programme schedule.

Vagrant and Chris will also be staffing a table this year, where they will be available to answer any questions about Reproducible Builds and discuss collaborations with other projects.



Distribution work

In Debian this month:

  • Holger Levsen has discovered that it is now possible to bootstrap a minimal Debian trixie using 100% reproducible packages. This result can itself be reproduced, using the debian-repro-status tool and mmdebstrap’s support for hooks:

      $ mmdebstrap --variant=apt --include=debian-repro-status \
           --chrooted-customize-hook=debian-repro-status \
           trixie /dev/null 2>&1 | grep "Your system has"
       INFO  debian-repro-status > Your system has 100.00% been reproduced.
    
  • On our mailing list this month, Helmut Grohne wrote an extensive message raising an issue related to Uploads with conflicting buildinfo filenames:

    Having several .buildinfo files for the same architecture is something that we plausibly want to have eventually. Imagine running two sets of buildds and assembling a single upload containing buildinfo files from both buildds in the same upload. In a similar vein, as a developer I may want to supply several .buildinfo files with my source upload (e.g. for multiple architectures). Doing any of this is incompatible with current incoming processing and with reprepro.

  • 5 reviews of Debian packages were added, 4 were updated and 8 were removed this month, adding to our ever-growing knowledge about identified issues.


In GNU Guix, Timothee Mathieu reported that a long-standing issue with reproducibility of shell containers across different host operating systems has been solved. In their message, Timothee mentions:

I discovered that pytorch (and maybe other dependencies) has a reproducibility problem of order 1e-5 when on AVX512 compared to AVX2. I first tried to solve the problem by disabling AVX512 at the level of pytorch, but it did not work. The dev of pytorch said that it may be because some components dispatch computation to MKL-DNN, I tried to disable AVX512 on MKL, and still the results were not reproducible, I also tried to deactivate in openmpi without success. I finally concluded that there was a problem with AVX512 somewhere in the dependencies graph but I gave up identifying where, as this seems very complicated.


The IzzyOnDroid Android APK repository made more progress in June. Not only have they just passed 48% reproducibility coverage, but Ben also started making their reproducible builds more visible by offering rbtlog shields, a kind of badge that has quickly been picked up by many developers who are proud to present their applications’ reproducibility status.


Lastly, in openSUSE news, Bernhard M. Wiedemann posted another monthly update for their work there.


diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 298, 299 and 300 to Debian:

  • Add python3-defusedxml to the Build-Depends in order to include it in the Docker image. []
  • Handle the RPM format’s HEADERSIGNATURES and HEADERIMMUTABLE as a special-case to avoid unnecessarily large diffs. Thanks to Daniel Duan for the report and suggestion. [][]
  • Update copyright years. []

In addition, @puer-robustus fixed a regression introduced in an earlier commit which resulted in some differences being lost. [][]

Lastly, Vagrant Cascadian updated diffoscope in GNU Guix to version 299 [][] and 300 [][].


OSS Rebuild updates

OSS Rebuild has added a new network analyzer that provides transparent HTTP(S) interception during builds, capturing all network traffic to monitor external dependencies and identify suspicious behavior, even in unmodified maintainer-controlled build processes.

The text-based user interface now features automated failure clustering that can group similar rebuild failures and provides natural language failure summaries, making it easier to identify and understand patterns across large numbers of build failures.

OSS Rebuild has also improved the local development experience with a unified interface for build execution strategies, allowing for more extensible environment setup for build execution. The team also designed a new website and logo.


Website updates

Once again, there were a number of improvements made to our website this month including:



Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:



Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In June, however, a number of changes were made by Holger Levsen, including:


  • reproduce.debian.net-related:

    • Installed and deployed rebuilderd version 0.24 from Debian unstable in order to make use of the new compression feature added by Jarl Gullberg for the database. This resulted in massive decrease of the SQLite databases:

      • 79G → 2.8G (all)
      • 84G → 3.2G (amd64)
      • 75G → 2.9G (arm64)
      • 45G → 2.1G (armel)
      • 48G → 2.2G (armhf)
      • 73G → 2.8G (i386)
      • 72G → 2.7G (ppc64el)
      • 45G → 2.1G (riscv64)

      … for a combined saving from 521G → 20.8G. This naturally reduces the requirements to run an independent rebuilderd instance and will permit us to add more Debian suites as well.

    • During migration to the latest version of rebuilderd, make sure several services are not started. []
    • Actually run rebuilderd from /usr/bin. []
    • Raise the temperature thresholds for NVMe devices on some riscv64 nodes, as their readings should be ignored. [][]
    • Use a 64KB kernel page size on the ppc64el architecture (see #1106757). []
    • Improve ordering of some “failed to reproduce” statistics. []
    • Detect a number of potential causes of build failures within the statistics. [][]
    • Add support for manually scheduling builds for the any architecture. []
  • Misc:

    • Update the Codethink nodes as there are now many kernels installed. [][]
    • Install linux-sysctl-defaults on Debian trixie systems as we need ping functionality. []
    • Limit the fs.nr_open kernel tunable. []
    • Stop submitting results to deprecated buildinfo.debian.net service. [][]

In addition, Jochen Sprickerhof greatly improved the statistics and the logging functionality, including adapting to the new database format of rebuilderd version 0.24.0 [] and temporarily increasing the maximum log size in order to debug a nettlesome build []. Jochen also dropped the CPUSchedulingPolicy=idle systemd flag on the workers. []



Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

12 July, 2025 04:08PM

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Adulting

In the past few weeks, I have done something I had been meaning to do for a while but always pushed to the bottom of my TODO pile: prepare for my death.

I am still quite young and perfectly healthy (mentally and physically) and I do plan to live a long and full life, but death is something that comes for us all and can strike at any time. Having witnessed friends and colleagues who lost loved ones who did not prepare adequately for their passing, dealing with all this legal stuff ahead of time seems like the best gift you can leave them.

Writing my will was the easiest part of this "preparation for death" process. I have few material possessions and I'm leaving everything to my SO. As for the copyright for my code, I have decided everything I wrote will be licensed under CC0 (public domain) when I die. Quebec — where I live — also accepts holograph wills, which means I didn't have to hire a notary.

Apart from the will, I also wrote a protection mandate1, filled out Quebec's organ donation form2, took a contract for prearranged funeral services3 and finally, wrote a disaster recovery plan.

This recovery plan was by far the longest and most tedious part of this entire ordeal. If all your machines use full-disk encryption and you die or forget your passwords (for example after a head injury), can your data be recovered? How do you arbitrate between easy recovery and data security? If all your local devices burn down and you also pass away in the process, how will your next of kin access your remote backups and extract the relevant data (in my case, my password manager)?

I had to ask myself many complex questions in this process and although I won't be sharing my disaster recovery plan here (security through obscurity), I urge you to take the time to do something similar yourself and make sure you leave your house in order when you pass away.


  1. in case I become incapacitated and can't make choices by myself anymore. 

  2. it's sadly still opt-in here... 

  3. you pay now for the services you want, the money is kept in a trust in your name and you can't be charged extra when you do pass away. This protects you from inflation and is a great way to make sure your next of kin don't have to deal with the complexities of funeral services while grieving. 

12 July, 2025 04:00AM by Louis-Philippe Véronneau

July 11, 2025

hackergotchi for David Bremner

David Bremner

Hibernate on the pocket reform 4/n

Context

Log from (failed) platform test

After some fun I got the serial console working and re-ran the platform test.

After a bit of reading the serial console, I realized that rmmod dwc3 was causing more problems than it solved, in particular a reliable hard lockup on one of the CPUs.

My revised test script is

set -x
echo platform >  /sys/power/pm_test
echo reboot > /sys/power/disk
sleep 2
rmmod mt76x2u
sleep 2
echo disk >  /sys/power/state
sleep 2
modprobe mt76x2u

The current problem seems to be PCIe not resuming properly.

[   65.306842] usbcore: deregistering interface driver mt76x2u
[   65.343606] wlx000a5205eb2d: deauthenticating from 20:05:b7:00:2d:89 by local choice (Reason: 3=DEAUTH_LEAVING)
[   67.995239] PM: hibernation: hibernation entry
[   68.048103] Filesystems sync: 0.022 seconds
[   68.049005] Freezing user space processes
[   68.051075] Freezing user space processes completed (elapsed 0.001 seconds)
[   68.051760] OOM killer disabled.
[   68.052597] PM: hibernation: Basic memory bitmaps created
[   68.053108] PM: hibernation: Preallocating image memory
[   69.719040] PM: hibernation: Allocated 366708 pages for snapshot
[   69.719650] PM: hibernation: Allocated 1466832 kbytes in 1.66 seconds (883.63 MB/s)
[   69.720370] Freezing remaining freezable tasks
[   69.723558] Freezing remaining freezable tasks completed (elapsed 0.002 seconds)
[   69.728002] rk_gmac-dwmac fe1b0000.ethernet end0: Link is Down
[   69.992324] rockchip-dw-pcie a40c00000.pcie: Failed to receive PME_TO_Ack
[   69.993405] PM: hibernation: debug: Waiting for 5 seconds.
[   76.059484] rockchip-dw-pcie a40c00000.pcie: Phy link never came up
[   76.060043] rockchip-dw-pcie a40c00000.pcie: fail to resume
[   76.060546] rockchip-dw-pcie a40c00000.pcie: PM: dpm_run_callback(): genpd_restore_noirq returns -110
[   76.061363] rockchip-dw-pcie a40c00000.pcie: PM: failed to restore noirq: error -110

previous episode|next episode

11 July, 2025 05:16PM

Jamie McClelland

Avoiding Apache Max Request Workers Errors

Wow, I hate this error:

AH00484: server reached MaxRequestWorkers setting, consider raising the MaxRequestWorkers setting

For starters, it means I have to relearn how MaxRequestWorkers functions in Apache:

For threaded and hybrid servers (e.g. event or worker), MaxRequestWorkers restricts the total number of threads that will be available to serve clients. For hybrid MPMs, the default value is 16 (ServerLimit) multiplied by the value of 25 (ThreadsPerChild). Therefore, to increase MaxRequestWorkers to a value that requires more than 16 processes, you must also raise ServerLimit.

Ok… remind me what ServerLimit refers to?

For the prefork MPM, this directive sets the maximum configured value for MaxRequestWorkers for the lifetime of the Apache httpd process. For the worker and event MPMs, this directive in combination with ThreadLimit sets the maximum configured value for MaxRequestWorkers for the lifetime of the Apache httpd process. For the event MPM, this directive also defines how many old server processes may keep running and finish processing open connections. Any attempts to change this directive during a restart will be ignored, but MaxRequestWorkers can be modified during a restart.

Special care must be taken when using this directive. If ServerLimit is set to a value much higher than necessary, extra, unused shared memory will be allocated. If both ServerLimit and MaxRequestWorkers are set to values higher than the system can handle, Apache httpd may not start or the system may become unstable.

With the prefork MPM, use this directive only if you need to set MaxRequestWorkers higher than 256 (default). Do not set the value of this directive any higher than what you might want to set MaxRequestWorkers to.

With worker, use this directive only if your MaxRequestWorkers and ThreadsPerChild settings require more than 16 server processes (default). Do not set the value of this directive any higher than the number of server processes required by what you may want for MaxRequestWorkers and ThreadsPerChild.

With event, increase this directive if the process number defined by your MaxRequestWorkers and ThreadsPerChild settings, plus the number of gracefully shutting down processes, is more than 16 server processes (default).

Got it? In other words, you can “consider” raising the MaxRequestWorkers setting all you want, but you can’t just change that setting: you have to read about several other complicated settings, do some math, and spend a lot of time wondering if you are going to remember what you just did and how to undo it if you blow up your server.
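
To make the math concrete, here is a hypothetical mpm_event snippet (illustrative values only, not a recommendation) showing how the directives constrain each other:

<IfModule mpm_event_module>
        # MaxRequestWorkers may not exceed ServerLimit x ThreadsPerChild, so
        # raising it past the default 16 x 25 = 400 also means raising ServerLimit.
        ServerLimit          24
        ThreadsPerChild      25
        MaxRequestWorkers   600
</IfModule>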

On the plus side, typically nobody should need to increase this limit, because if the server runs out of connections, it usually means something else is wrong.

In our case, on a shared web server running Apache2 and PHP-FPM, it’s usually because a single web site has gone out of control.

But wait! How can that happen when we are using PHP-FPM’s max_children setting to prevent a single PHP web site from taking down the server?

After years of struggling with this problem I have finally made some headway.

Our PHP pool configuration typically looks like this:

user = site342999writer
group = site342999writer
listen = /run/php/8.1-site342999.sock
listen.owner = www-data
listen.group = www-data
pm = ondemand
pm.max_children = 12
pm.max_requests = 500
php_admin_value[memory_limit] = 256M

And we invoke PHP-FPM via this apache snippet:

<FilesMatch \.php$>
        SetHandler "proxy:unix:/var/run/php/8.1-site342999.sock|fcgi://localhost"
</FilesMatch>

With these settings in place, what happens when we use up all 12 max_children?

According to the docs:

By default, mod_proxy will allow and retain the maximum number of connections that could be used simultaneously by that web server child process. Use the max parameter to reduce the number from the default. The pool of connections is maintained per web server child process, and max and other settings are not coordinated among all child processes, except when only one child process is allowed by configuration or MPM design.

The max parameter seems to default to ThreadsPerChild, so it seems that the default here is to allow any web site to consume ThreadsPerChild (25) x ServerLimit (16) connections, which is also the overall maximum number of connections for the server. Not great.

To make matters worse, there is another setting available which is mysteriously called acquire:

If set, this will be the maximum time to wait for a free connection in the connection pool, in milliseconds. If there are no free connections in the pool, the Apache httpd will return SERVER_BUSY status to the client.

By default this is not set, which seems to suggest Apache will just hang on to connections forever until a free PHP process becomes available (or some other timeout happens).

So, let’s try something different:

<Proxy "fcgi://localhost">
    ProxySet acquire=1 max=12
</Proxy>

This snippet is how you configure the proxy we set up in the SetHandler statement above. It’s documented on the Apache mod_proxy page.

Now we limit the maximum pool size per process to half of what is available for the entire server and we tell Apache to immediately throw a 503 error if we have exceeded our maximum number of connections.

Now, if a site is overwhelmed with traffic, instead of maxing out the available Apache connections while leaving users with constantly spinning browsers, the users will get 503’ed and the server will be able to serve other sites.

11 July, 2025 12:27PM

July 10, 2025

Russell Coker

Bad Product Comparisons and EVs

When companies design products a major concern seems to be what the reviewers will have to say about it. For any product of significant value the users are unable to perform any reasonable test before buying, for a casual user some problems may only be apparent after weeks of use so professional reviews are important to many people. The market apparently doesn’t want reviews of the form “here’s a list of products that are quite similar and all do the job well, you can buy any of them, it’s no big deal” which would be the most technically accurate way of doing it.

So the reviewers compare the products on the criteria that are easiest to measure; this leads to phones being compared by how light and thin they are. I think it’s often the case that users would be better served by thicker heavier phones that have larger batteries but instead they are being sold phones that have good battery life in a fresh installation but which don’t last a day with a full load of apps installed.

The latest issue with bad reviews driving poor product design is electric cars. For a while the advocates of old fashioned cars have touted the range of petrol cars which has become an issue for comparing EVs. I have been driving cars for 35 years and so far I have never driven anywhere that’s out of range of the current electric charging network, even with the range of the LEAF (which is smaller than many other EVs). If I ever felt the need to drive across the Nullarbor Plain then I could rent a car to do that and the costs of such car rental would be small compared to the money I’m saving by driving an EV and also small when compared to the premium I would have to pay for an EV with a larger range.

Some of the recent articles I’ve seen about EVs have covered vehicles with a battery range over 700Km which is greater than the legal distance a commercial driver can drive without a break. I’ve also seen articles about plans to have a small petrol or Diesel motor in an EV to recharge the battery without directly driving the wheels. A 9KW Diesel motor could provide enough electricity on average to keep the charge maintained in a LEAF battery and according to the specs of Diesel generators would take about 55Kg of fuel to provide the charge a LEAF needs to drive 1000Km. The idea of a mostly electric hybrid car that can do 1000Km on one tank of fuel is interesting as a thought experiment but doesn’t seem to have much actual use. Apparently a Chinese company is planning to release a car that can do 1400Km on one tank of fuel using such technology which is impressive but not particularly useful.

The next issue of unreasonable competition is in charge speed. Charging a car at 2KW from a regular power socket is a real limit to what you can do with a car. It’s a limit that hasn’t bothered me so far because the most driving I typically do in a week is less than one full charge, so at most I have to charge overnight twice in a week. But if I was going to drive to another city without hiring a car that has better range I’d need a fast charger. Most current models of the Nissan LEAF support charging speeds up to 50KW which means fully charging the battery in under an hour (or slightly over an hour for the long range version). If I was to drive from Melbourne to Canberra in my LEAF I’d have to charge twice which would be an annoyance at those speeds. There are a variety of EVs that can charge at 100KW and some as high as 350KW. 350KW is enough to fully charge the largest EV batteries in half an hour which seems to be as much as anyone would need. But there are apparently plans for 1MW car chargers which would theoretically be able to charge a Hummer (the EV with the largest battery) in 12 minutes. One obvious part of the solution to EV charging times is to not drive a Hummer! Another thing to note is that batteries can’t be charged at a high rate for all charge levels, this is why advertising for fast chargers makes claims like “80% charge in half an hour” which definitely doesn’t mean “100% charge in 37.5 minutes”!

There are significant engineering issues with high power applications. A 1MW cable is not just a bigger version of a regular power cable, there are additional safety issues, user training is required and cooling of the connector is probably required. That’s a lot to just get a better number in the table at the end of a review. There is research in progress on the Megawatt Charging System which is designed to charge heavy vehicles (presumably trucks and buses) at up to 3.75MW. Charging a truck at that rate is reasonable as the process of obtaining and maintaining a heavy vehicle license requires a significant amount of effort and some extra training in 3.75MW charging probably doesn’t make much difference.

A final issue with fast charging is the capacity of the grid. A few years ago I attended a lecture by an electrical engineer who works for the Victorian railway system which was very interesting. The Vic rail power setup involved about 100MW of grid connectivity with special contracts with the grid operators due to the fact that 1MW trains suddenly starting and stopping causes engineering problems that aren’t trivial to solve. They were also working on battery packs and super capacitors to deal with regenerative braking and to avoid brownouts in long sections of track. For a medium size petrol station 14 bays for fuelling cars is common. If 6 such petrol stations were replaced with fast charging stations that can charge cars at 1MW each that would draw the same power as the train network for the entire state! There is a need for significant engineering work to allow most cars to be electric no matter how it’s done, but we don’t need to make that worse just for benchmarks.

10 July, 2025 09:19AM by etbe

Tianon Gravi

Yubi Whati? (YubiKeys, ECDSA, and X.509)

Off-and-on over the last several weeks, I've been spending time trying to learn/understand YubiKeys better, especially from the perspective of ECDSA and signing.

I had a good mental model for how "slots" work (canonically referenced by their hexadecimal names such as 9C), but found that it had a gap related to "objects"; while closing that, I was annoyed that the main reference table for this gap lives primarily in either a PDF or inside several implementations, so I figured I should create the reference I want to see in the world, but that it would also be useful to write down some of my understanding for my own (and maybe others') future reference.

So, to that end, I'm going to start with a bit of background information, with the heavy caveat that this only applies to "PIV" ("FIPS 201") usage of YubiKeys, and that I only actually care about ECDSA, although I've been reassured that it's the same for at least RSA (anything outside this is firmly Here Be Not Tianon; "gl hf dd").

(Incidentally, learning all this helped me actually appreciate the simplicity of cloud-based KMS solutions, which was an unexpected side effect. 😬)

At a really high level, ECDSA is like many other (asymmetric) cryptographic solutions – you've got a public key and a private key, the private key can be used to "sign" data (tiny amounts of data, in fact, like P-256 can only reasonably sign 256 bits of data, which is where cryptographic hashes like SHA256 come in as secure analogues for larger data in small bit sizes), and the public key can then be used to verify that the data was indeed signed by the private key, and only someone with the private key could've done so. There's some complex math and RNGs involved, but none of that's actually relevant to this post, so find that information elsewhere. 🙈

Unfortunately, this is where things go off the rails: PIV is X.509 ("x509") heavy, and there's no X.509 in the naïve view of my use case.

In a YubiKey (or any other PIV-signing-supporting smart card? do they actually have competitors in this specific niche? 🤔), a given "slot" can hold one single private key. There are ~24 slots which can hold a private key and be used for signing, although "Slot 9c" is officially designated as the "Digital Signature" slot and is encouraged for signing purposes. 🌈

One of the biggest gotchas is that with pure-PIV (and older YubiKey firmware 🤬) the public key for a given slot is only available at the time the key is generated, and the whole point of the device in the first place is that the private key is never, ever available from it (all cryptographic operations happen inside the device), so if you don't save that public key when you first ask the device to generate a private key in a particular slot, the public key is lost forever (asterisk). 🙊

$ # generate a new ECDSA P-256 key in "slot 9c" ("Digital Signature")
$ # WARNING: THIS WILL GLEEFULLY WIPE SLOT 9C WITHOUT PROMPTING
$ yubico-piv-tool --slot 9c --algorithm ECCP256 --action generate
-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEtGoWRGyjjUlJFXpu8BL6Rnx8jjKR
5+Mzl2Vepgor+k7N9q7ppOtSMWefjFVR0SEPmXqXINNsCi6LpLtNEigIRg==
-----END PUBLIC KEY-----
Successfully generated a new private key.
$ # this is the only time/place we (officially) get this public key

With that background, now let's get to the second aspect of "slots" and how X.509 fits. For every aforementioned slot, there is a corresponding "object" (read: place to store arbitrary data) which corresponds only by convention. For all these "key" slots the (again, by convention) corresponding "object" is explicitly supposed to be an X.509 certificate (see also the PDF reference linked above). 🙉

It turns out this is a useful and topical place to store that public key we need to keep handy! It's also an interesting place to shove additional details about what the key in a given slot is being used for, if that's your thing. Converting the raw public key into a (likely self-signed) X.509 certificate is an exercise for the reader, but if you want to follow the conventions, you need some way to convert a given "slot" to the corresponding "object", and that is the lookup table I wish existed in more forms. 🕳
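
One way that can look in practice, using yubico-piv-tool itself (a rough sketch; the file names and the subject are made up, and PIN / management key handling is glossed over), is to wrap the public key in a self-signed certificate and let the tool store it in the slot's corresponding object:

$ # hypothetical continuation of the generate example above
$ yubico-piv-tool --slot 9c --action verify-pin --action selfsign-certificate \
      --subject '/CN=digital-signature/' --input public.pem --output cert.pem
$ yubico-piv-tool --slot 9c --action import-certificate --input cert.pem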

So, without further ado, here is the anti-climax: 💫

Slot Object Description
0x9A 0x5FC105 X.509 Certificate for PIV Authentication
0x9E 0x5FC101 X.509 Certificate for Card Authentication
0x9C 0x5FC10A X.509 Certificate for Digital Signature
0x9D 0x5FC10B X.509 Certificate for Key Management
0x82 0x5FC10D Retired X.509 Certificate for Key Management 1
0x83 0x5FC10E Retired X.509 Certificate for Key Management 2
0x84 0x5FC10F Retired X.509 Certificate for Key Management 3
0x85 0x5FC110 Retired X.509 Certificate for Key Management 4
0x86 0x5FC111 Retired X.509 Certificate for Key Management 5
0x87 0x5FC112 Retired X.509 Certificate for Key Management 6
0x88 0x5FC113 Retired X.509 Certificate for Key Management 7
0x89 0x5FC114 Retired X.509 Certificate for Key Management 8
0x8A 0x5FC115 Retired X.509 Certificate for Key Management 9
0x8B 0x5FC116 Retired X.509 Certificate for Key Management 10
0x8C 0x5FC117 Retired X.509 Certificate for Key Management 11
0x8D 0x5FC118 Retired X.509 Certificate for Key Management 12
0x8E 0x5FC119 Retired X.509 Certificate for Key Management 13
0x8F 0x5FC11A Retired X.509 Certificate for Key Management 14
0x90 0x5FC11B Retired X.509 Certificate for Key Management 15
0x91 0x5FC11C Retired X.509 Certificate for Key Management 16
0x92 0x5FC11D Retired X.509 Certificate for Key Management 17
0x93 0x5FC11E Retired X.509 Certificate for Key Management 18
0x94 0x5FC11F Retired X.509 Certificate for Key Management 19
0x95 0x5FC120 Retired X.509 Certificate for Key Management 20

See also "piv-objects.json" for a machine-readable copy of this data. 👀🤖💻💾

(Major thanks to paultag and jon gzip johnson for helping me learn and generally putting up with me, but especially dealing with my live-stream-of-thoughts while I stumble through the dark. 💖)

10 July, 2025 07:00AM by Tianon Gravi (admwiggin@gmail.com)

July 08, 2025

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Superimposed codes, take two

After my last post on superimposed codes, I discovered that OEIS already had a sequence for it (I had just missed it due to a slightly different convention), namely A286874 (and its sister sequence A303977, which lists the number of distinct maximal solutions). However, very few terms of this sequence were known; in particular, it was known that a(12) >= 20 (easily proved by simply demonstrating a set of twenty 12-bit numbers with the desired property), but it wasn't known if the value could be higher (i.e., whether there existed a 12-bit set with 21 elements or more). The SAT solver wasn't really working well for this anymore, so I thought: can I just bruteforce it? I.e., can I enumerate all 12-bit 20-element sets and then see if any of them have room for a 21st element?

Now, obviously you cannot run a completely dumb bruteforce. The raw state space is 12*20 = 240 bits, and going through 2^240 different options is far out of reach. But it's a good place to start, and then we can start employing tricks from there. (I'm sure there are more fancy ways somehow, but this one was what I chose. I'm no genius with mathematics, but I can write code.)

So I started with a 20-level deep for loop, with each element counting from 0 to 4095 (inclusive). Now, there are some speedups that are obvious; for instance, once you have two elements, you can check that neither is a subset of the other (which is, except in some edge cases with small sets that we don't need to worry about here, a looser condition than what we're trying to test for), and then skip the remaining 18 levels. Similarly, once we have the first three elements, we can start testing whether one is a subset of the OR of the two others, and abort similarly.
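
As a rough sketch (not the actual generated code, which works on 12-bit values and is unrolled per loop level), the early-abort test for a new candidate against the ORs of earlier pairs can look like this:

#include <cstdint>
#include <vector>

// Reject candidate c if it is covered by a single earlier element or by the
// OR of a pair of earlier elements; such a prefix can never be extended into
// a valid solution, so the surrounding loops can skip ahead immediately.
// (The full property is symmetric; this shows only the new-element side.)
static bool covered(uint32_t c, const std::vector<uint32_t>& chosen) {
    for (size_t i = 0; i < chosen.size(); ++i) {
        if ((c & ~chosen[i]) == 0) return true;        // subset of one element
        for (size_t j = i + 1; j < chosen.size(); ++j)
            if ((c & ~(chosen[i] | chosen[j])) == 0)   // subset of an OR of two
                return true;
    }
    return false;
}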

Furthermore, we can start considering symmetries. We only care about solutions that are qualitatively distinct, in that the ordering of the elements doesn't matter and the ordering of the bits also doesn't matter. So we can simply only consider sequences where the elements are in order, which is extremely simple, very cheap, and nets us a speedup of 20! ~= 2.4 * 10^18. We have to be a bit careful, though, because this symmetry can conflict with other symmetries that we'd like to use for speedup. For instance, it would be nice to impose the condition that the elements must be in order of increasing population count (number of set bits), but if we do this at the same time as the “strictly increasing” condition, we'll start missing valid solutions. (I did use a very weak variant of it, though; no element can have smaller popcount than the first one. Otherwise, you could just swap those two elements and shuffle columns around, and it wouldn't be in conflict.)

However, there is more that we can do which isn't in conflict. In particular, let's consider (writing only 5-bit elements for brevity) that we are considering candidates for the first element:

00011
00101
00110
10010

These are all, obviously, the same (except that the latter ones will be more restrictive); we could just shuffle bits around and get the same thing. So we impose a new symmetry: Whenever we introduce new bits (bits that were previously always set), they need to start from the right. So now this start of a sequence is valid:

00011
00101

but this is not:

00011
01001

The reason is, again, that we could get the first sequence from the second by flipping the second and third bit (counting from the left). This is cheap and easy to test for, and is not in conflict with our “increasing” criterion as long as we make this specific choice.

But we can extend this even further. Look at these two alternatives:

00111
01001

and

00111
01010

They are also obviously equivalent as prefixes (just swap the fourth and fifth bits), so we don't want to keep both. We make a very similar restriction as before; if all previous bits in a position are the same, then we need to fill bits from the right. (If they're not, then we cannot impose a restriction.) This is also fairly easy to do with some bit fiddling, although my implementation only considers consecutive bits. (It's not in conflict with the strictly-increasing criterion, again because it only makes values lower, not higher. It is, in a sense, a non-decreasing criterion on the columns.)

And finally, consider these two sequences (with some other elements in-between):

00111
01001
.....
10011

and

00111
01011
.....
10001

They are also equivalent; if you exchange first and second bit and then swap the order of them, you end up with the same. So this brings us to the last symmetry: If you introduce a new bit (or more generally N new bits), then you are not allowed to introduce later a value that is the same bit shifted more to the left and with the other bits being lower. So the second sequence would be outlawed.

Now, how do we do all of these tests efficiently? (In particular, the last symmetry, while it helped a lot in reducing the number of duplicate solutions, wasn't a speed win at first.) My first choice was to just generate code that did all the tests, and did them as fast as possible. This was actually quite efficient, although it took GCC several minutes to compile (and Clang even more, although the resulting code wasn't much faster). Amusingly, this code ended up with an IPC above 6 on my Zen 3 (5950X); no need for hyperthreading here! I don't think I've ever seen real-life code this taxing on the execution units, even though this code is naturally extremely branch-heavy. Modern CPUs are amazing beasts.

It's a bit wasteful that we have 64-bit ALUs (and 256-bit SIMD ALUs) and use them to do AND/OR on 12 bits at a time. So I tried various tricks with packing the values to do more tests at a time, but unfortunately, it only led to slowdowns. So eventually, I settled on a very different solution: Bitsets. At any given time, we have a 4096-bit set of valid future values for the inner for loops. Whenever we decide on a value, we look up in a set of pregenerated tables and just AND them into our set. For instance, if we just picked the value 3 (00011), we look up into the “3” table and it will instantly tell us that values like 7 (00111), 11 (01011), and many others are going to be invalid for all inner iterations and we can just avoid considering them altogether. (Iterating over only the set bits in a bitset is pretty fast in general, using only standard tricks.) This saves us from testing any further value against these illegals, so it's super-fast. The resulting tables are large (~4 GB), since we need to look up pairs of values into it, so this essentially transforms our high-ALU problem into a memory-bound problem, but it's still easily worth it (I think it gave a speedup of something like 80x). The actual ANDing is easily done with AVX2, 256 bits at a time.
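
A toy version of the bitset idea might look roughly like this (schematic only; the real tables are indexed by pairs of values and the ANDing is done with AVX2):

#include <bitset>

constexpr int N = 1 << 12;  // 4096 possible 12-bit values

// valid[v] stays true only while value v can still appear at deeper levels.
// Committing to a new element just ANDs a precomputed mask into the running
// set instead of re-testing every remaining candidate one by one.
void restrict_candidates(std::bitset<N>& valid, const std::bitset<N>& mask) {
    valid &= mask;
}

// The inner loops then only visit values that are still marked valid:
//   for (int v = 0; v < N; ++v) if (valid[v]) { /* recurse with v */ }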

This optimization not only made the last symmetry-breaking feasible, but also sped up the entire process enough (you essentially get O(n) bitset intersections instead of O(n²) new tests per level) that it went from a “multiple machines, multiple months” project to running comfortably within a day on my 5950X (~6 core-days). I guess maybe a bit anticlimactic; I had to move the database I used for work distribution locally to the machine or else the latency would be killing me. It found the five different solutions very quickly and then a couple of thousand duplicates of them (filtering those out efficiently is a kind of tricky problem in itself!), and then confirmed there were no others. I submitted it to OEIS, and it should hopefully go through the editing process fairly fast.

The obvious next question is: Can we calculate a(13) in the same way? Unfortunately, it seems the answer is no. Recompiling the same code with 13-bit parameters (taking the LUTs up to ~31 GB, still within the amount of RAM I've got) and making a 25-deep instead of 20-level deep for loop, and then running for a while, it seems that we're looking at roughly 4–5000 core years. Which is infeasible unless you've got a lot of money to burn (with spot VMs on GCE, you're talking about roughly half a million dollars, give or take) on something that isn't a very important problem in computer science.

In theory, there's still hope, though: The fact that we're still finding the same solution ~1000x (down from ~100000x before the last symmetries were added!) indicates that there's some more symmetry that we could in theory exploit and break (and that factor 1000 is likely to be much larger for 25 elements than for 20). So if someone more creative than me could invent code for identifying them—or some other way of rejecting elements early—we could perhaps identify a(13). But I don't think that's happening anytime soon. Brute force found its sweet spot and I'm happy about that, but it doesn't scale forever. :-)

08 July, 2025 07:34PM

hackergotchi for Junichi Uekawa

Junichi Uekawa

Updated my timezone tool.

Updated my timezone tool. Hovering the mouse will change the color. Trying to make it more visible to me.

08 July, 2025 01:56AM by Junichi Uekawa

July 07, 2025

Thorsten Alteholz

My Debian Activities in June 2025

Debian LTS

This was my hundred-thirty-second month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:

  • [DLA 4221-1] libblockdev security update of one embargoed CVE related to obtaining full root privileges.
  • [hardening udisks2] uploaded new version of udisks2 with a hardening patch related to DLA 4221-1
  • [DLA 4235-1] sudo security update to fix one embargoed CVE related to preventing a local privilege escalation.
  • [#1106867] got permission to upload kmail-account-wizard; the package was marked as accepted in July.

This month I also did a week of FD duties and attended the monthly LTS/ELTS meeting.

Debian ELTS

This month was the eighty-third ELTS month. During my allocated time I uploaded or worked on:

  • [ELA-1465-1] libblockdev security update to fix one embargoed CVE in Buster, related to obtaining full root privileges.
  • [ELA-1475-1] gst-plugins-good1.0 security update to fix 16 CVEs in Stretch. This also includes cherry-picking other commits to make these fixes possible.
  • [ELA-1476-1] sudo security update to fix one embargoed CVE in Buster, Stretch and Jessie. The fix is related to preventing a local privilege escalation.

This month I also did a week of FD duties and attended the monthly LTS/ELTS meeting.

Debian Printing

This month I uploaded bugfix versions of:

  • lprng to update translations.
  • mtink to update translations
  • cups to fix a FTBFS introduced by changes to systemd

Thanks a lot again to the Release Team who quickly handled all my unblock bugs!

This work is generously funded by Freexian!

Debian Astro

This month I uploaded bugfix versions of:

  • siril (sponsored upload to experimental)
  • calceph (sponsored upload to experimental)

Debian Mobcom

Unfortunately I didn’t find any time to work on this topic.

misc

This month I uploaded bugfix versions of:

Unfortunately I stumbled over a discussion about RFPs. One part of those involved wanted to automatically close older RFPs, while the other part just wanted to keep them. But nobody suggested to really take care of those RFPs. Why is it easier to spend time on talking about something instead of solving the real problem? Anyway, I had a look at those open RFPs. Some of them can be just closed because they haven’t been closed when uploading the corresponding package. For some others the corresponding software has not seen any upstream activity for several years and depends on older software no longer in Debian (like Python 2). Such bugs can be just closed. Some requested software only works together with long gone technology (for example the open Twitter API). Such bugs can be just closed. Last but not least, even the old RFPs contain nice software that is still maintained upstream and useful. One example is ta-lib that I uploaded in June. So, please, let’s put our money where our mouths are. My diary of closed RFP bugs is on people.d.o. If only ten people follow suit, all bugs can be closed within a year.

FTP master

It is still this time of the year when just a few packages arrive in NEW: it is Hard Freeze. So please don’t hold it against me that I enjoy the sun more than processing packages in NEW. This month I accepted 104 and rejected 13 packages. The overall number of packages that got accepted was 105.

07 July, 2025 09:40AM by alteholz

Birger Schacht

Debian on Framework 12

For some time now I have been looking for a device to replace my Thinkpad. It’s a 14" device, but that’s too big for my taste. I am a big fan of small notebooks, so when frame.work announced their 12" laptop, I took the chance and ordered one right away.

I was in one of the very early batches and got my package a couple of days ago. When ordering, I chose the DIY edition, but in the end there was not that much DIY to do: I had to plug in the storage and the memory, put the keyboard in and tighten some screws. There are very detailed instructions with a lot of photos that tell you which part to put where, which is nice.

Image of the Framework 12 laptop, assembled but powered off

My first impressions of the device are good - it is heavier than I anticipated, but very well made. It is very easy to assemble and disassemble and it feels like it can take a hit.

When I started it the first time it took some minutes to boot because of the new memory module, but then it told me right away that it could not detect an operating system. As usual when I want to install a new system, I created a GRML live USB system and tried to boot from this USB device. But the Framework BIOS did not want to let me boot GRML, telling me it is blocked by the current security policy. So I started to look in the BIOS where I could find the SecureBoot configuration, but there was no such setting anywhere. I then resorted to a Debian Live image, which was allowed to boot.

Image of the screen of the Framework 12 laptop, saying it could not detect an operating system

I only learned later, that the SecureBoot setting is in a separate section that is not part of the main BIOS configuration dialog. There is an “Administer Secure Boot” icon which you can choose when starting the device, but apparently only before you try to load an image that is not allowed.

I always use my personal minimal install script to install my Debian systems, so it did not make that much of a difference to use Debian Live instead of GRML. I only had to apt install debootstrap before running the script.

I updated the install script to default to trixie and to also install shim-signed and after successful installation booted into Debian 13 on the Framework 12. Everything seems to work fine so far. WIFI works. For sway to start I had to install firmware-intel-graphics. The touchscreen works without me having to configure anything (though I don’t have a frame.work stylus, as they are not yet available), also changing the brightness of the screen worked right away. The keyboard feels very nice, likewise the touchpad, which I configured to allow tap-to-click using the tap enabled option of sway-input.
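
A tap-to-click snippet of that kind looks roughly like this (a minimal sketch, matching by input type rather than a specific device identifier):

input type:touchpad {
    tap enabled
}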

Image of a Framework 12 laptop, showing the default Sway background image

One small downside of the keyboard is that it does not have a backlight, which was a surprise. But given that this is a frame.work laptop, there are chances that a future generation of the keyboard will have backlight support.

The screen of the laptop can be turned all the way around to the back of the laptop’s body, so it can be used as a tablet. In this mode the keyboard gets disabled to prevent accidentally pushing keys when using the device in tablet mode.

For online meetings I still prefer using headphones with cables over bluetooth ones, so I’m glad that the laptop has a headphone jack on the side.

Above the screen there are a camera and a microphone, which both have separate physical switches to disable them.

I ordered a couple of expansion cards, in the current setup I use two USB-C, one HDMI and one USB-A. I also ordered a 1TB expansion card and only used this to transfer my /home, but I soon realized that the card got rather hot, so I probably won’t use it as a permanent expansion.

I can not yet say a lot about how long the battery lasts, but I will bring the laptop to DebConf 25, I guess there I’ll find out. There I might also have a chance to test if the screen is bright enough to be usable outdoors ;)

07 July, 2025 05:28AM

July 05, 2025

hackergotchi for Bits from Debian

Bits from Debian

Bits from the DPL

Dear Debian community,

This is bits from the DPL for June.

The Challenge of Mentoring Newcomers

In June there was an extended discussion about the ongoing challenges around mentoring newcomers in Debian. As many of you know, this is a topic I’ve cared about deeply--long before becoming DPL. In my view, the issue isn’t just a matter of lacking tools or needing to “try harder” to attract contributors. Anyone who followed the discussion will likely agree that it’s more complex than that.

I sometimes wonder whether Debian’s success contributes to the problem. From the outside, things may appear to “just work”, which can lead to the impression: “Debian is doing fine without me--they clearly have everything under control.” But that overlooks how much volunteer effort it takes to keep the project running smoothly.

We should make it clearer that help is always needed--not only in packaging, but also in writing technical documentation, designing web pages, reaching out to upstreams about license issues, finding sponsors, or organising events. (Speaking from experience, I would have appreciated help in patiently explaining Free Software benefits to upstream authors.) Sometimes we think too narrowly about what newcomers can do, and also about which tasks could be offloaded from overcommitted contributors.

In fact, one of the most valuable things a newcomer can contribute is better documentation. Those of us who’ve been around for years may be too used to how things work--or make assumptions about what others already know. A person who just joined the project is often in the best position to document what’s confusing, what’s missing, and what they wish they had known sooner.

In that sense, the recent "random new contributor’s experience" posts might be a useful starting point for further reflection. I think we can learn a lot from positive user stories, like this recent experience of a newcomer adopting the courier package. I'm absolutely convinced that those who just found their way into Debian have valuable perspectives--and that we stand to learn the most from listening to them.

We should also take seriously what Russ Allbery noted in the discussion: "This says bad things about the project's sustainability and I think everyone knows that." Volunteers move on--that’s normal and expected. But it makes it all the more important that we put effort into keeping Debian's contributor base at least stable, if not growing.

Project-wide LLM budget for helping people

Lucas Nussbaum has volunteered to handle the paperwork and submit a request on Debian’s behalf to LLM providers, aiming to secure project-wide access for Debian Developers. If successful, every DD will be free to use this access--or not--according to their own preferences.

Kind regards Andreas.

05 July, 2025 10:00PM by Andreas Tille

July 04, 2025

Russell Coker

Function Keys

For at least 12 years laptops have been defaulting to not having the traditional PC 101 key keyboard function key functionality and instead have had other functions like controlling the volume and have had a key labelled Fn to toggle the functions. It’s been a BIOS option to control whether traditional function keys or controls for volume etc are the default and for at least 12 years I’ve configured all my laptops to have the traditional function keys as the default.

Recently I’ve been working in corporate IT and having exposure to many laptops with the default BIOS settings for those keys to change volume etc and no reasonable option for addressing it. This has made me reconsider the options for configuring these things.

Here’s a page listing the standard uses of function keys [1]. Here is a summary of the relevant part of that page:

  • The F1 key launches help, which doesn’t seem to get much use. The main help option in practice is Google (I anticipate controversy about this and welcome comments) and all the software vendors are investigating LLM options for help which probably won’t involve F1.
  • F2 is for renaming files but doesn’t get much use. Probably most people who use graphical file managers use the right mouse button for it. I use it when sorting a selection of photos.
  • F3 is for launching a search (which is CTRL-F in most programs).
  • ALT-F4 is for closing a window which gets some use, although for me the windows I close are web browsers (via CTRL-W) and terminals (via CTRL-D).
  • F5 is for reloading a page which is used a lot in web browsers.
  • F6 moves the input focus to the URL field of a web browser.
  • F8 is for moving a file which in the degenerate case covers the rename functionality of F2.
  • F11 is for full-screen mode in browsers which is sometimes handy.

The keys F1, F3, F4, F7, F9, F10, and F12 don’t get much use for me and for the people I observe. The F2 and F8 keys aren’t useful in most programs, F6 is only really used in web browsers – but the web browser counts as “most programs” nowadays.

Here’s the description of Thinkpad Fn keys [2]. I use Thinkpads for fun and Dell laptops for work, so it would be nice if they both worked in similar ways but of course they don’t. Dell doesn’t document how their Fn keys are laid out, but the relevant bit is that F1 to F4 are the same as on Thinkpads which is convenient as they are the ones that are likely to be commonly used and needed in a hurry.

I have used the KDE settings on my Thinkpad to map the function F1 to F3 keys to the Fn equivalents which are F1 to mute-audio, F2 for vol-down, and F3 for vol-up to allow using them without holding down the Fn key while having other function keys such as F5 and F6 have their usual GUI functionality. Now I have to train myself to use F8 in situations where I usually use F2, at least when using a laptop.

The only other Fn combinations I use are F5 and F6 for controlling screen brightness, but that’s not something I use much.

It’s annoying that the laptop manufacturers forced me to this. Having a Fn key to get extra functions and not need 101+ keys on a laptop size device is a reasonable design choice. But they could have done away with the PrintScreen key to make space for something else. Also for Thinkpads a touch pad is something that could obviously be removed to gain some extra space as the Trackpoint does all that’s needed in that regard.

04 July, 2025 11:44AM by etbe

July 03, 2025

The Fuss About “AI”

There are many negative articles about “AI” (which is not actual Artificial Intelligence, also known as “AGI”), which I think are mostly overblown and often ridiculous.

Resource Usage

Complaints about resource usage are common, training Llama 3.1 could apparently produce as much pollution as “10,000 round trips by car between Los Angeles and New York City”. That’s not great but when you compare to the actual number of people doing such drives in the US and the number of people taking commercial flights on that route it doesn’t seem like such a big deal. Apparently commercial passenger jets cause CO2 emissions per passenger about equal to a car with 2 people. Why is it relevant whether pollution comes from running servers, driving cars, or steel mills? Why not just tax polluters for the damage they do and let the market sort it out? People in the US make a big deal about not being communist, so why not have a capitalist solution, make it more expensive to do undesirable things and let the market sort it out?

ML systems are a less bad use of compute resources than Bitcoin, at least ML systems give some useful results while Bitcoin has nothing good going for it.

The Dot-Com Comparison

People often complain about the apparent impossibility of “AI” companies doing what investors think they will do. But this isn’t anything new, that all happened before with the “dot com boom”. I’m not the first person to make this comparison, The Daily WTF (a high quality site about IT mistakes) has an interesting article making this comparison [1]. But my conclusions are quite different.

The result of that was a lot of Internet companies going bankrupt, the investors in those companies losing money, and other companies then bought up their assets and made profitable companies. The cheap Internet we now have was built on the hardware from bankrupt companies which was sold for far less than the manufacture price. That allowed it to scale up from modem speeds to ADSL without the users paying enough to cover the purchase of the infrastructure. In the early 2000s I worked for two major Dutch ISPs that went bankrupt (not my fault) and one of them continued operations in the identical manner after having the stock price go to zero (I didn’t get to witness what happened with the other one). As far as I’m aware random Dutch citizens and residents didn’t suffer from this and employees just got jobs elsewhere.

There are good things being done with ML systems and when companies like OpenAI go bankrupt other companies will buy the hardware and do good things.

NVidia isn’t ever going to have the future sales that would justify a market capitalisation of almost 4 Trillion US dollars. This market cap can support paying for new research and purchasing rights to patented technology in a similar way to the high stock price of Google supported buying YouTube, DoubleClick, and Motorola Mobility which are the keys to Google’s profits now.

The Real Upsides of ML

Until recently I worked for a company that used ML systems to analyse drivers for signs of fatigue, distraction, or other inappropriate things (smoking which is illegal in China, using a mobile phone, etc). That work was directly aimed at saving human lives with a significant secondary aim of saving wear on vehicles (in the mining industry drowsy drivers damage truck tires and that’s a huge business expense).

There are many applications of ML in medical research such as recognising cancer cells in tissue samples.

There are many less important uses for ML systems, such as recognising different types of pastries to correctly bill bakery customers – technology that was apparently repurposed for recognising cancer cells.

The ability to recognise objects in photos is useful. It can be used for people who want to learn about random objects they see and could be used for helping young children learn about their environment. It also has some potential for assistance for visually impaired people, it wouldn’t be good for safety critical systems (don’t cross a road because a ML system says there are no cars coming) but could be useful for identifying objects (is this a lemon or a lime). The Humane AI pin had some real potential to do good things but there wasn’t a suitable business model [2], I think that someone will develop similar technology in a useful way eventually.

Even without trying to do what the Humane AI Pin attempted, there are many ways for ML based systems to assist phone and PC use.

ML systems allow analysing large quantities of data and giving information that may be correct. When used by a human who knows how to recognise good answers this can be an efficient way of solving problems. I personally have solved many computer problems with the help of LLM systems while skipping over many results that were obviously wrong to me. I believe that any expert in any field that is covered in the LLM input data could find some benefits from getting suggestions from an LLM. It won’t necessarily allow them to solve problems that they couldn’t solve without it but it can provide them with a set of obviously wrong answers mixed in with some useful tips about where to look for the right answers.

Jobs and Politics

Noema Magazine has an insightful article about how “AI” can allow different models of work which can enlarge the middle class [3].

I don’t think it’s reasonable to expect ML systems to make as much impact on society as the industrial revolution, and the agricultural revolutions which took society from more than 90% farm workers to less than 5%. That doesn’t mean everything will be fine but it is something that can seem OK after the changes have happened. I’m not saying “apart from the death and destruction everything will be good”, the death and destruction are optional. Improvements in manufacturing and farming didn’t have to involve poverty and death for many people, improvements to agriculture didn’t have to involve overcrowding and death from disease. This was an issue of political decisions that were made.

The Real Problems of ML

Political decisions that are being made now have the aim of making the rich even richer and leaving more people in poverty and in many cases dying due to being unable to afford healthcare. The ML systems that aim to facilitate such things haven’t been as successful as evil people have hoped but it will happen and we need appropriate legislation if we aren’t going to have revolutions.

There are documented cases of suicide being inspired by ChatGPT systems [4]. There have been people inspired towards murder by ChatGPT systems but AFAIK no-one has actually succeeded in such a crime yet. There are serious issues that need to be addressed with the technology and with legal constraints about how people may use it. It’s interesting to consider the possible uses of ChatGPT systems for providing suggestions to a psychologist; maybe ChatGPT systems could be used to alleviate mental health problems.

The use of LLM systems for cheating on assignments etc isn’t a real issue. People have been cheating on assignments since organised education was invented.

There is a real problem of ML systems based on biased input data issuing decisions that are the average of the bigotry of the people who provided the input. That isn’t going to be worse than the current situation of bigoted humans making decisions based on hate and preconceptions but it will be more insidious. It is possible to test for that; for example a bank could test its mortgage approval ML system by changing one factor at a time (name, gender, age, address, etc) and seeing if it changes the answer. If it turns out that the ML system is biased on names then the input data could have names removed. If it turns out to be biased about address then there could be weights put in to oppose that.

For a long time there has been excessive trust in computers. Computers aren’t magic, they just do maths really fast and implement choices based on the work of programmers – who have all the failings of other humans. Excessive trust in a rule based system is less risky than excessive trust in an ML system where no-one really knows why it makes the decisions it makes.

Self driving cars kill people; this is the truth that Tesla stock holders don’t want people to know.

Companies that try to automate everything with “AI” are going to be in for some nasty surprises. Getting computers to do everything that humans do in any job would amount to a large portion of an actually intelligent computer, which, if it is ever achieved, will raise an entirely different set of problems.

I’ve previously blogged about ML Security [5]. I don’t think this will be any worse than all the other computer security problems in the long term, although it will be more insidious.

How Will It Go?

Companies spending billions of dollars without firm plans for how to make money are going to go bankrupt no matter what business they are in. Companies like Google and Microsoft can waste some billions of dollars on AI Chat systems and still keep going as successful businesses. Companies like OpenAI that do nothing other than such chat systems won’t do well. But their assets can be used by new companies when sold at less than 10% of the purchase price.

Companies like NVidia that have high stock prices based on the supposed ongoing growth in use of their hardware will have their stock prices crash. But the new technology they develop will be used by other people for other purposes. If hospitals can get cheap diagnostic ML systems because of unreasonable investment into “AI” then that could be a win for humanity.

Companies that bet their entire business on AI even when it’s not necessarily their core business (as Tesla has done with self driving) will have their stock price crash dramatically at a minimum and have the possibility of bankruptcy. Having Tesla go bankrupt is definitely better than having people try to use them as self driving cars.

03 July, 2025 10:21AM by etbe

July 02, 2025

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 14.6.0-1 on CRAN: New Upstream Minor Release

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1241 other packages on CRAN, downloaded 40.4 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 634 times according to Google Scholar.

Conrad released a minor version 14.6.0 yesterday which offers new accessors for non-finite values. And despite being in Beautiful British Columbia on vacation, I had wrapped up two rounds of reverse dependency checks preparing his 14.6.0 release, and shipped this to CRAN this morning where it passed with flying colours and no human intervention—even with over 1200 reverse dependencies. The changes since the last CRAN release are summarised below.

Changes in RcppArmadillo version 14.6.0-1 (2025-07-02)

  • Upgraded to Armadillo release 14.6.0 (Caffe Mocha)

    • Added balance() to transform matrices so that column and row norms are roughly the same

    • Added omit_nan() and omit_nonfinite() to extract elements while omitting NaN and non-finite values

    • Added find_nonnan() for finding indices of non-NaN elements

    • Added standalone replace() function

  • The fastLm() help page now mentions that options to solve() can control its behavior.

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

02 July, 2025 09:21PM

Rcpp 1.1.0 on CRAN: C++11 now Minimum, Regular Semi-Annual Update

rcpp logo

With a friendly Canadian hand wave from vacation in Beautiful British Columbia, and speaking on behalf of the Rcpp Core Team, I am excited to share that the (regularly scheduled bi-annual) update to Rcpp just brought version 1.1.0 to CRAN. Debian builds have been prepared and uploaded, Windows and macOS builds should appear at CRAN in the next few days, as will builds in different Linux distributions–and of course r2u should catch up tomorrow as well.

The key highlight of this release is the switch to C++11 as minimum standard. R itself did so in release 4.0.0 more than half a decade ago; if someone is really tied to an older version of R and an equally old compiler then using an older Rcpp with it has to be acceptable. Our own tests (using continuous integration at GitHub) still go back all the way to R 3.5.* and work fine (with a new-enough compiler). In the previous release post, we commented that we had only one reverse dependency (falsely) come up in the tests by CRAN; this time there was none among the well over 3000 packages using Rcpp at CRAN. Which really is quite amazing, and possibly also a testament to our rigorous continued testing of our development and snapshot releases on the key branch.

This release continues with the six-months January-July cycle started with release 1.0.5 in July 2020. As just mentioned, we do of course make interim snapshot ‘dev’ or ‘rc’ releases available. While we no longer regularly update the Rcpp drat repo, the r-universe page and repo now really fill this role admirably (and with many more builds besides just source). We continue to strongly encourage their use and testing—I run my systems with these versions which tend to work just as well, and are of course also fully tested against all reverse-dependencies.

Rcpp has long established itself as the most popular way of enhancing R with C or C++ code. Right now, 3038 packages on CRAN depend on Rcpp for making analytical code go faster and further. On CRAN, 13.6% of all packages depend (directly) on Rcpp, and 61.3% of all compiled packages do. From the cloud mirror of CRAN (which is but a subset of all CRAN downloads), Rcpp has been downloaded 100.8 million times. The two published papers (also included in the package as preprint vignettes) have, respectively, 2023 (JSS, 2011) and 380 (TAS, 2018) citations, while the book (Springer useR!, 2013) has another 695.

As mentioned, this release switches to C++11 as the minimum standard. The diffstat display in the CRANberries comparison to the previous release shows how several (generated) sources files with C++98 boilerplate have now been removed; we also flattened a number of if/else sections we no longer need to cater to older compilers (see below for details). We also managed more accommodation for the demands of tighter use of the C API of R by removing DATAPTR and CLOENV use. A number of other changes are detailed below.

The full list below details all changes, their respective PRs and, if applicable, issue tickets. Big thanks from all of us to all contributors!

Changes in Rcpp release version 1.1.0 (2025-07-01)

  • Changes in Rcpp API:

    • C++11 is now the required minimal C++ standard

    • The std::string_view type is now covered by wrap() (Lev Kandel in #1356 as discussed in #1357)

    • A last remaining DATAPTR use has been converted to DATAPTR_RO (Dirk in #1359)

    • Under R 4.5.0 or later, R_ClosureEnv is used instead of CLOENV (Dirk in #1361 fixing #1360)

    • Use of lsInternal switched to lsInternal3 (Dirk in #1362)

    • Removed compiler detection macro in a header cleanup setting C++11 as the minimum (Dirk in #1364 closing #1363)

    • Variadic templates are now used unconditionally given C++11 (Dirk in #1367 closing #1366)

    • Remove RCPP_USING_CXX11 as a #define as C++11 is now a given (Dirk in #1369)

    • Additional cleanup for __cplusplus checks (Iñaki in #1371 fixing #1370)

    • Unordered set construction no longer needs a macro for the pre-C++11 case (Iñaki in #1372)

    • Lambdas are supported in Rcpp Sugar functions (Iñaki in #1373)

    • The Date(time)Vector classes now have default ctor (Dirk in #1385 closing #1384)

    • Fixed an issue where Rcpp::Language would duplicate its arguments (Kevin in #1388, fixing #1386)

  • Changes in Rcpp Attributes:

    • The C++26 standard now has plugin support (Dirk in #1381 closing #1380)
  • Changes in Rcpp Documentation:

    • Several typos were corrected in the NEWS file (Ben Bolker in #1354)

    • The Rcpp Libraries vignette mentions PACKAGE_types.h to declare types used in RcppExports.cpp (Dirk in #1355)

    • The vignettes bibliography file was updated to current package versions, and now uses doi references (Dirk in #1389)

  • Changes in Rcpp Deployment:

    • Rcpp.package.skeleton() creates ‘URL’ and ‘BugReports’ if given a GitHub username (Dirk in #1358)

    • R 4.4.* has been added to the CI matrix (Dirk in #1376)

    • Tests involving NA propagation are skipped under linux-arm64 as they are under macos-arm (Dirk in #1379 closing #1378)

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues).

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

02 July, 2025 08:05PM

hackergotchi for Junichi Uekawa

Junichi Uekawa

Japan is now very hot.

Japan is now very hot. If you are coming to Banpaku, be prepared.

02 July, 2025 01:12AM by Junichi Uekawa

July 01, 2025

hackergotchi for Ben Hutchings

Ben Hutchings

FOSS activity in June 2025

01 July, 2025 07:08PM by Ben Hutchings

hackergotchi for Guido Günther

Guido Günther

Free Software Activities June 2025

Another short status update of what happened on my side last month. Phosh 0.48.0 is out with nice improvements, phosh.mobi e.V. is alive, helped a bit to get cellbroadcastd out, osk bugfixes and some more:

See below for details on the above and more:

phosh

  • Fix crash triggered by our mpris player refactor (MR)
  • Generate vapi file for libphosh (MR)
  • Backport fixes for 0.47 (MR)
  • Media players lockscreen plugin (MR), bugfix
  • Fix lockscreen clock when am/pm is localized (MR)
  • Another round of CI cleanups (MR)
  • Proper life cycle for MeatinfoCache in app-grid button tests (MR)
  • Enable cell broadcast display by default (MR)
  • Release 0.48~rc1, 0.48.0

phoc

  • Unify output config updates and support adaptive sync (MR)
  • Avoid crash on shutdown (MR)
  • Avoid use after free in gtk-shell (MR)
  • Simplify CI (MR)
  • Release 0.48~rc1, 0.48.0

phosh-mobile-settings

stevia (formerly phosh-osk-stub)

  • Release 0.48~rc1, 0.48.0
  • Reject non-UTF-8 dictionaries for hunspell to avoid a broken completion bar (MR)
  • Output tracking (MR) as prep for future work
  • Handle non-UTF-8 dictionaries for hunspell for input and output (MR)
  • Fix some leaks (MR)
  • Handle default completer changes right away (MR)

phosh-osk-data

  • Handle stevia rename (MR)
  • Supply ru presage data

phosh-vala-plugins

  • Add example plugin (MR)

pfs

  • Fix initial empty state (MR)
  • Use GNOME's mirror for fdo templates (MR)

xdg-desktop-portal-phosh

xdg-desktop-portal

  • Fix categories for cell broadcasts (MR)
  • Relax app-id requirement in app-chooser portal (MR)

phosh-debs

  • Switch from osk-stub to stevia (MR)

meta-phosh

  • Make installing from sid and experimental convenient (MR)

feedbackd

feedbackd-device-themes

gmobile

  • Release 0.4.0
  • Make gir and doc build warning free (MR)

GNOME clocks

  • Use libfeedback instead of GTK's media api: (MR). This way the alarm becomes more recognizable and users can tweak alarm sounds.
  • Fix flatpak build and CI in our branch that carries the needed patches for mobile

Debian

  • meta-phosh: Switch to 0.47 (MR)
  • libmbim: Upload 1.33.1 to experimental
  • libqmi: Upload 1.37.1 to experimental
  • modemmanager: Upload 1.23.1 to experimental
  • Update mobile-broadband-provider-info to 20250613 (MR) in experimental
  • Upload phoc 0.48~rc1, 0.48.0 to experimental
  • Upload gmobile 0.4.0 to experimental
  • Upload phosh-mobile-settings 0.48~rc1, 0.48.0 to experimental
  • Upload xdg-desktop-portal-phosh 0.48~rc1, 0.48.0 to experimental
  • Prepare stevia 0.48~rc1 and upload 0.48.0 to experimental
  • Upload feedbackd 0.8.3 to experimental
  • Upload feedbackd-device-themes 0.8.4 to experimental

Mobian

  • Add feedbackd and wakeup timer support (MR)

ModemManager

  • Release 1.25.1
  • Test and warning fixes (MR)
  • run asan in ci (MR) and fix more leaks

libmbim

libqmi

mobile-broadband-provider-info

Cellbroadcastd

  • Better handle empty operator (MR)
  • Use GApplication (MR)
  • Fix library init (MR)
  • Add desktop file (MR)
  • Allow to send notifications for cell broadcast messages (MR)
  • Build introspection data (MR)
  • Only indicate Cell Broadcast support for MM >= 1.25 (MR)
  • Implement duplication detection (MR)
  • Reduce API surface (MR)
  • Add symbols file (MR)
  • Support vala (MR)

iio-sensor-proxy

  • Add minimal gio dependency (MR)

twenty-twenty-hugo

  • Support Mastodon (MR)

gotosocial

  • Explain STARTTLS behavior in docs (MR)

Reviews

This is not code by me but reviews of other people's code. The list is (as usual) slightly incomplete. Thanks for the contributions!

  • cellbroadcastd: Message store (MR)
  • cellbroadcastd: Print severity (MR)
  • cellbroadcastd: Packaging (MR)
  • cellbroadcastd: Rename from cbd (MR)
  • cellbroadcastd: Release 0.0.1 (MR)
  • cellbroadcastd: Release 0.0.2 (MR)
  • cellbroadcastd: Close file descriptors (MR)
  • cellbroadcastd: Sort messages by timestamp (MR)
  • meta-phosh: Ignore subprojects in format check (MR)
  • p-m-s: pmOS tweaks ground work (MR)
  • p-m-s: osk popover switch (MR)
  • p-m-s: Add panel search (MR)
  • p-m-s: Add cellbroadcastd message history (MR)
  • phosh: Add search daemon and command line tool to query search results (MR)
  • phosh: App-grid: Set max-width entries (MR)
  • chatty: Keyboard navigation improvements (MR)
  • phosh: LTR QuickSettings and fix LTR in screenshot tests (MR)
  • iio-sensor-proxy: improve buffer sensor discovery: (MR)
  • Calls: allow favorites to ring (MR)
  • feedbackd: More haptic udev rules (MR)
  • feedbackd: Simplify udev rules (MR)
  • feedbackd: Support legacy LED naming scheme (MR)
  • gmobile: FLX1 wakeup key support (MR)
  • gmobile: FP6 support (MR)

Help Development

If you want to support my work see donations.

Comments?

Join the Fediverse thread

01 July, 2025 08:47AM

Paul Wise

FLOSS Activities June 2025

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Sponsors

All work was done on a volunteer basis.

01 July, 2025 02:00AM

June 30, 2025

hackergotchi for Colin Watson

Colin Watson

Free software activity in June 2025

My Debian contributions this month were all sponsored by Freexian. This was a very light month; I did a few things that were easy or that seemed urgent for the upcoming trixie release, but otherwise most of my energy went into Debusine. I’ll be giving a talk about that at DebConf in a couple of weeks; this is the first DebConf I’ll have managed to make it to in over a decade, so I’m pretty excited.

You can also support my work directly via Liberapay or GitHub Sponsors.

PuTTY

After reading a bunch of recent discourse about X11 and Wayland, I decided to try switching my laptop (a Framework 13 AMD running Debian trixie with GNOME) over to Wayland. I don’t remember why it was running X; I think I must have either inherited some configuration from my previous laptop (in which case it could have been due to anything up to ten years ago or so), or else I had some initial problem while setting up my new laptop and failed to make a note of it. Anyway, the switch was hardly noticeable, which was great.

One problem I did notice is that my preferred terminal emulator, pterm, crashed after the upgrade. I run a slightly-modified version from git to make some small terminal emulation changes that I really must either get upstream or work out how to live without one of these days, so it took me a while to notice that it only crashed when running from the packaged version, because the crash was in code that only runs when pterm has a set-id bit. I reported this upstream, they quickly fixed it, and I backported it to the Debian package.

groff

Upstream bug #67169 reported URLs being dropped from PDF output in some cases. I investigated the history both upstream and in Debian, identified the correct upstream patch to backport, and uploaded a fix.

libfido2

I upgraded libfido2 to 1.16.0 in experimental.

Python team

I upgraded pydantic-extra-types to a new upstream version, and fixed some resulting fallout in pendulum.

I updated python-typing-extensions in bookworm-backports, to help fix python3-tango: python3-pytango from bookworm-backports does not work (10.0.2-1~bpo12+1).

I upgraded twisted to a new upstream version in experimental.

I fixed or helped to fix a few release-critical bugs:

30 June, 2025 11:30PM by Colin Watson

hackergotchi for Gunnar Wolf

Gunnar Wolf

Get your personalized map of DebConf25 in Brest

As I often do, this year I have also prepared a set of personalized maps for your OpenPGP keysigning in DebConf25, in Brest!

What is that, dare you ask?

Partial view of my OpenPGP map

One of the not-to-be-missed traditions of DebConf is a Key-Signing Party (KSP) that spans the whole conference! Travelling from all the corners of the world to a single, large group gathering, we have the ideal opportunity to spread some trust (rather than communicable diseases) on your peers’ identities and strengthen Debian’s OpenPGP keyring.

But whom should you approach for keysigning?

Go find yourself in the nice listing I have prepared. By clicking on your long keyid (in my case, the link labeled 0x2404C9546E145360), anybody can download your certificate (public key + signatures). The SVG and PNG links will yield a graphic version of your position within the DC25 keyring, and the TXT link will give you a textual explanation of it. (of course, your links will differ, yada yada…)

Please note this is still a preview of our KSP information: You will notice there are several outstanding things for me to fix before marking the file as final. First, some names have encoding issues I will fix. Second, some keys might be missing — if you submitted your key as part of the conference registration form but it is not showing, it must be because my scripts didn’t find it in any of the queried keyservers. My scripts are querying the following servers:

hkps://keyring.debian.org/
hkps://keys.openpgp.org/
hkps://keyserver.computer42.org/
hkps://keyserver.ubuntu.com/
hkps://pgp.mit.edu/
hkps://pgp.pm/
hkps://pgp.surf.nl/
hkps://pgpkeys.eu/
hkps://the.earth.li/

Make sure your key is available in at least some of them; I will try to do a further run on Friday, before travelling, or shortly after arriving in France.

If you didn’t submit your key in time, but you will be at DC25, please mail me stating [DC25 KSP] in your mail title, and I will manually add it to the list.

On (hopefully!) Friday, I’ll post the final, canonical KSP coordination page which you should download and calculate its SHA256-sum. We will have printed out convenience sheets to help you do your keysigning at the front desk.
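
For example (the filename here is only a placeholder for whatever the final coordination page ends up being called), you can compute the checksum with the standard coreutils tool:

sha256sum ksp-dc25.txt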

30 June, 2025 11:07PM

Swiss JuristGate

Lawyer X, Law Firm X and Elon Musk's X: scandals linked by Old Xaverian

Just when you thought it was safe to go to court, think again. Your lawyer might not have your best interests at heart and even worse, they may be working for the other side.

In 2014, journalists discovered Victoria Police had a secret informer, a mole snitching on the underworld, identified by the code name Lawyer X.

Initially, police were so concerned they sought a restraining order to prevent the media from publishing anything about the scandal. Police even sought to have the code name Lawyer X suppressed from publication.

It was beyond embarrassing: not only did police have the burden of protecting their secret informer, they might also have to protect her relatives who share the same name. The most notable among them was the informer's uncle, James Gobbo, a supreme court judge who subsequently served as Governor for the State of Victoria.

There is absolutely no suggestion that Lawyer X's relatives had anything to do with her misdeeds. Nonetheless, the clients she betrayed were the biggest crooks in town, until, of course, her unethical behavior gave them the opportunity to have those convictions overturned and present themselves as model citizens once again. Any relatives or former business associates of Lawyer X, including the former governor, would be in danger for the rest of their lives.

James Gobbo and his son James Gobbo junior are both Old Xaverians, graduates of Melbourne's elite Jesuit school for boys, like my father and I.

Lawyer X was eventually revealed to be Nicola Gobbo, a graduate of the elite girls school Genazzano FCJ College. My aunt, that is my father's sister, also went to Genazzano.

Alumni communications typically refer to Old Xaverians with the symbols "OX" and the year of graduation, for example, "OX96" for somebody who graduated in 1996.

Whenever a scandal like this arises, if the suspect is a graduate of one of these elite schools, the newspapers will be very quick to dramatize the upper class background. The case of Lawyer X was head and shoulders above any other scandal: a former prefect and class captain who made a career out of partying with drug lords, having their children and simultaneously bugging their conversations for the police.

Stories like this are inconvenient for those elite schools but in reality, I don't feel the schools are responsible when one of these unlucky outcomes arises. The majority of students are getting a head start in life but there is simply nothing that any school can do to prevent one or two alumni going off the rails like this.

Having been through this environment myself, I couldn't believe what I was seeing in 2023 when the Swiss financial regulator (FINMA) voluntarily published a few paragraphs from a secret judgment, using the code name "X" to refer to a whole law office (cabinet juridique in French) of jurists in Geneva who had ripped off their clients.

FINMA Judgment, Parreaux Thiebaud & Partners, Justicia SA, Justiva SA, Mathieu Parreaux, Gaelle Jeanmonod

The Gobbo family, Genazzano FCJ College and alumni have finally been vindicated. The misdeeds of Lawyer X pale in comparison to the crimes of the Swiss law firm X.

Remember, Lawyer X operated in secrecy, her identity only known to a small number of handlers inside the police department. Thanks to my own research, I was able to prove that the activities of Law firm X were fully known to the bar association and the financial regulator for at least two years before they belatedly closed the firm.

Lawyer X claims she contributed evidence to the arrest of 386 suspects during her time as a police informer. Law firm X had over twenty thousand clients at the time they were shut down. They admit that client records fell into the hands of Walder Wyss, a rival law firm engaged in legal proceedings against some of the clients who were abandoned by the Swiss jurists.

Lawyer X was a woman and in her most recent bid for compensation, she claimed she was exploited by the police. Law firm X trafficked at least one woman from France to come and work in Geneva helping them promote an unauthorized insurance service to residents of both France and Switzerland.

Lawyer X was a former member of a political party. One of the jurists from Law firm X was working for the rogue law office at the same time that he was a member of Geneva city council. He is a member of the same political party as the Swiss president from that era.

In 1993, Lawyer X was an editor of Farrago, Australia's leading student newspaper. Law firm X used the Swiss media to write positive stories about their company. When the same company was outlawed, nanny-state laws prevented the media reporting anything at all about its downfall. Ironically, one of my former clients was also an editor of Farrago before he became Australia's Minister for Finance. The word Farrago gives a fascinating insight into the life of Lawyer X. Here is a sample sentence using the word Farrago in the Cambridge dictionary:

... told us a farrago of lies

When FINMA revealed the secret judgment shuttering Law Firm X, Urban Angehrn, the FINMA director, resigned citing health reasons. His dramatic resignation helped bury news stories about the Law firm X judgment. In Australia, a number of chief commissioners have resigned. In fact, Victoria Police have been through three leaders in the last year.

Who predicted Elon Musk would acquire Twitter?

In 2018, I attended the UN Forum on Business and Human Rights, where I made this brief intervention predicting the future of Facebook and Twitter. When Elon Musk purchased Twitter in 2022, he called it X. Go figure.

30 June, 2025 11:00PM

Russell Coker

June 27, 2025

hackergotchi for Jonathan Dowland

Jonathan Dowland

Viva

On Monday I had my Viva Voce (PhD defence), and passed (with minor corrections).

Post-viva refreshment

Post-viva refreshment

It's a relief to have passed after 8 years of work. I'm not quite done of course, as I have the corrections to make! Once those are accepted I'll upload my thesis here.

27 June, 2025 02:00PM

June 26, 2025

hackergotchi for Bits from Debian

Bits from Debian

AMD Platinum Sponsor of DebConf25

amd-logo

We are pleased to announce that AMD has committed to sponsor DebConf25 as a Platinum Sponsor.

The AMD ROCm platform includes programming models, tools, compilers, libraries, and runtimes for AI and HPC solution development on AMD GPUs. Debian is an officially supported platform for AMD ROCm and a growing number of components are now included directly in the Debian distribution.

For more than 55 years AMD has driven innovation in high-performance computing, graphics and visualization technologies. AMD is deeply committed to supporting and contributing to open-source projects, foundations, and open-standards organizations, taking pride in fostering innovation and collaboration within the open-source community.

With this commitment as Platinum Sponsor, AMD is contributing to the annual Debian Developers’ Conference, directly supporting the progress of Debian and Free Software. AMD contributes to strengthening the worldwide community that collaborates on Debian projects year-round.

Thank you very much, AMD, for your support of DebConf25!

Become a sponsor too!

DebConf25 will take place from 14 to 20 July 2025 in Brest, France, and will be preceded by DebCamp, from 7 to 13 July 2025.

DebConf25 is accepting sponsors! Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, and visit the DebConf25 website at https://debconf25.debconf.org/sponsors/become-a-sponsor/.

26 June, 2025 09:37PM by Daniel Lange

June 25, 2025

hackergotchi for Tollef Fog Heen

Tollef Fog Heen

Pronoun support in userdir-ldap

Debian uses LDAP for storing information about users, hosts and other objects. The wrapping around this is called userdir-ldap, or ud-ldap for short. It provides a mail gateway, web UI and a couple of schemas for different object types.

Back in late 2018 and early 2019, we (DSA) removed support for ISO5218 in userdir-ldap, and removed the corresponding data. This made some people upset, since they were using that information, as imprecise as it was, to infer people’s pronouns. ISO5218 has four values for sex: unknown, male, female and N/A. This might have been acceptable when the standard was new (in 1976), but it wasn’t acceptable any longer in 2018.

A couple of days ago, I finally got around to adding support to userdir-ldap to let people specify their pronouns. As it should be, it’s a free-form text field. (We don’t have localised fields in LDAP, so it probably makes sense for people to put the English version of their pronouns there, but the software does not try to control that.)

So far, it’s only exposed through the LDAP gateway, not in the web UI.
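
If you are curious what someone has set, an ordinary LDAP query against the directory should show it. This is only a sketch, assuming anonymous read access to the public attributes and that the attribute is exposed under the name pronouns; replace SOMEUSER with the uid you are interested in:

ldapsearch -x -H ldap://db.debian.org -b dc=debian,dc=org uid=SOMEUSER pronouns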

If you’re a Debian developer, you can set your pronouns using

echo "pronouns: he/him" | gpg --clearsign | mail changes@db.debian.org

I see that four people have already done so in the time I’ve taken to write this post.

25 June, 2025 08:00PM

June 24, 2025

hackergotchi for Evgeni Golov

Evgeni Golov

Using LXCFS together with Podman

JP was puzzled that using podman run --memory=2G … would not result in the 2G limit being visible inside the container. While we were able to identify this as a visualization problem — tools like free(1) only look at /proc/meminfo and that is not virtualized inside a container, you'd have to look at /sys/fs/cgroup/memory.max and friends instead — I couldn't leave it at that. And then I remembered there is actually something that can provide a virtual (cgroup-aware) /proc for containers: LXCFS!

But does it work with Podman?! I always used it with LXC, but there is technically no reason why it wouldn't work with a different container solution — cgroups are cgroups after all.

As we all know: there is only one way to find out!

Take a fresh Debian 12 VM, install podman and verify things behave as expected:

user@debian12:~$ podman run -ti --rm --memory=2G centos:stream9
bash-5.1# grep MemTotal /proc/meminfo
MemTotal:        6067396 kB
bash-5.1# cat /sys/fs/cgroup/memory.max
2147483648

And after installing (and starting) lxcfs, we can use the virtual /proc/meminfo it generates by bind-mounting it into the container (LXC does that part automatically for us):

user@debian12:~$ podman run -ti --rm --memory=2G --mount=type=bind,source=/var/lib/lxcfs/proc/meminfo,destination=/proc/meminfo centos:stream9
bash-5.1# grep MemTotal /proc/meminfo
MemTotal:        2097152 kB
bash-5.1# cat /sys/fs/cgroup/memory.max
2147483648

The same of course works with all the other proc entries lxcfs provides (cpuinfo, diskstats, loadavg, meminfo, slabinfo, stat, swaps, and uptime here), just bind-mount them.
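
For example (just a sketch; it merely repeats the bind-mount pattern shown above for a few more of those files under the default /var/lib/lxcfs path), a run could look like:

user@debian12:~$ podman run -ti --rm --memory=2G \
    --mount=type=bind,source=/var/lib/lxcfs/proc/meminfo,destination=/proc/meminfo \
    --mount=type=bind,source=/var/lib/lxcfs/proc/cpuinfo,destination=/proc/cpuinfo \
    --mount=type=bind,source=/var/lib/lxcfs/proc/loadavg,destination=/proc/loadavg \
    --mount=type=bind,source=/var/lib/lxcfs/proc/uptime,destination=/proc/uptime \
    centos:stream9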

And yes, free(1) now works too!

bash-5.1# free -m
               total        used        free      shared  buff/cache   available
Mem:            2048           3        1976           0          67        2044
Swap:              0           0           0

Just don't blindly mount the whole /var/lib/lxcfs/proc over the container's /proc. It did work (as in: "bash and free didn't crash") for me, but with /proc/$PID etc missing, I bet things will go south pretty quickly.

24 June, 2025 07:46PM by evgeni

hackergotchi for Daniel Pocock

Daniel Pocock

Social control media, technology and Catholicism: Synod on Synodality, review and feedback

The late Pope Francis asked a group of roughly four hundred bishops to work together from 2021 to 2024 to analyse the way Catholic faithful interact and develop as a movement. Formally, this committee of bishops was given the title Synod on Synodality. The term Synod is widely used across Christian religions to refer to committees, councils or meetings of such groups at any level of the church hierarchy. The term synodality is specific to the Catholic Church. The Synod has an official web page in which it tries to explain synodality.

Several working groups were created on a wide range of topics. In this analysis, I will limit myself to looking at working group number three, which examined the theme of mission in the digital environment. I will then provide some of my own evidence about the topics the working group is considering.

Another working group dealt with the topic of polygamy. The psychology and neuroscience of social control media usurp our relationships to the point that some people feel Facebook's algorithms exert as much influence as a third person in their marriage. I have created a separate blog post analysing the evidence around the phenomena of polygamy and cyberpolygamy. Given the ever wider spread of these technologies, it is essential to read it together with the contents of this analysis.

In a recent news report unrelated to the Synod, the diocese of Paderborn (north-central Germany) announced that it will try to use TikTok to engage young people. The scope of working group three is very broad and does not only concern social control media platforms. I believe the scope covers all forms of digital technology.

Amateur radio packet repeaters also fall within the scope, even though amateur radio licences do not permit the explicit transmission of religious material.

The Vatican was an early adopter of shortwave radio. Pope Leo XIV and Monsignor Lucio Adrian Ruiz, secretary of the Dicastery for Communication, visited the headquarters of Vatican Radio this week:

Pope Leo, Mons. Lucio Adrian Ruiz, Vatican Radio, Santa Maria di Galeria

Reading the findings of both the working group and the Synod as a whole, I feel the Church as a whole has decided neither to embrace nor to reject social control media. They acknowledge that it is part of the digital landscape and are trying to decide how the Church relates to it.

How the synodal process evolved at a high level

Before going into the details, here is an overview of the process and of the reports published at various points in time, with direct links to the translated editions.

The main Synod website is www.Synod.va and it is available in several languages. It appears the content was created in Italian and translated into English and other languages. This makes it a little harder to read.

In October 2023 a long meeting was held in Rome during which an initial draft of the report was produced.

The bishops returned to Rome in October 2024 and drew up a final report of the Synod.

The Vatican published the late Pope Francis' greeting at the conclusion of the Synod. News reports about the Pope's remarks quickly appeared.

They also published some pictures and videos.

Key points of the final report in relation to the digital environment

At point 58, the report observes that Christians may attempt to proclaim the Gospel through their participation in a digital environment.

58. ... Christians, each according to their diverse roles - within the family and other states of life; in the workplace and in their professions; engaged civilly, politically, socially or ecologically; in the development of a culture inspired by the Gospel, including the evangelisation of the digital environment - walk the paths of the world and proclaim the Gospel where they live, sustained by the gifts of the Spirit.

59. In doing so, they ask the Church not to abandon them, but rather to enable them to feel that they are sent and sustained in the mission.

This point appears to encourage the Church to think about the situation faced by those who are under the influence of a digital environment, but it does not necessarily imply that the digital environment is either good or bad.

At point 112, concerning mobility, which includes people at all levels of society, the report observes:

Some maintain strong bonds with their country of origin, especially with the help of digital media, and for this reason can find it difficult to form connections in their new country; others find themselves living without roots.

This is an excellent observation. In Europe, I have met couples whose relationships depend entirely on the devices they use for machine translation. When newcomers arrive in town, the WhatsApp culture encourages neighbours to spend weeks or months talking behind their backs without ever looking them in the eye.

113. The spread of digital culture, particularly evident among young people, is profoundly changing their experience of space and time, influencing their daily activities, communication and interpersonal relationships, including faith. The opportunities the internet offers are reshaping relationships, bonds and boundaries. Today we often experience loneliness and marginalisation, even though we are more connected than ever. Moreover, those with their own economic and political interests can use social media to spread ideologies and generate aggressive and manipulative forms of polarisation. We are not well prepared for this and must dedicate resources so that the digital environment becomes a prophetic space for mission and proclamation. Local Churches must encourage, support and accompany those who engage in mission in the digital environment. Christian digital communities and groups, young people in particular, are also called to reflect on the way they create bonds of belonging, promoting encounter and dialogue. They must offer formation to their peers, developing a synodal way of being Church. The internet, constituted as a web of connections, offers new opportunities to better live the synodal dimension of the Church.

This paragraph acknowledges the dangers of digital technology, in particular social control media, and the key words are "We are not well prepared for this". Nonetheless, it suggests that local churches should "encourage" reducing these online risks. I don't think "encourage" is the right word to use, but I don't think they should discourage either.

149. The synodal process has insistently drawn attention to some specific areas of forming the People of God in synodality. The first of these concerns the impact of the digital environment on learning processes, on concentration, on the perception of self and of the world, and on the building of interpersonal relationships. Digital culture constitutes a crucial dimension of the Church's witness in contemporary culture and an emerging missionary field. This requires ensuring that the Christian message is present online in reliable ways that do not ideologically distort its content. Although digital media has great potential to improve our lives, it can also cause harm and injury through bullying, misinformation, sexual exploitation and addiction. The Church's educational institutions must help children and adults develop critical skills to navigate the web safely.

These comments are very pertinent and very consistent with my own testimony, part of which is reproduced later in this report.

150. Another area of great importance is the promotion in all ecclesial contexts of a culture of safeguarding, making communities ever safer places for minors and vulnerable persons.

When I raised this topic in free software communities, my family was attacked mercilessly. See the emails I sent at the end of 2017 and the comments about IBM Red Hat later in this report.

Sources relating to working group three, mission in a digital environment

The Synod.va website has published the list of all the working groups. The website includes a short video about each group and a link to their most recent reports.

The video from working group three runs for just under two minutes. Here are some of the key quotes and my observations:

"Today people, especially the young, have learnt to live simultaneously and seamlessly in both digital and physical spaces."

I believe this statement is completely wrong. People have learnt to use digital spaces. Recent research suggests that almost seventy percent of young people feel worse after using social media. In other words, they feel pushed into using it. Therefore, they are not living seamlessly. People are suffering.

The statements made in the video are not the ones presented in the final report. We will get to that. Nonetheless, whenever social control media is discussed, there is a tendency to generalise that it is impossible to live without it. Whenever we see a claim like this, it is important to challenge it.

"How does the Church use and appropriate digital culture?"

The rhetorical question is interesting. In reality, the Silicon Valley superpowers use and appropriate whatever content we give them. The church doesn't use them, they use us. How do you think they became so rich?

A more appropriate question might be: "How does the Church make up for the shortcomings of digital cultures?".

Let us not forget that one of the other engineers died on the day of our wedding, and it was Palm Sunday, the day Jesus wept as he arrived in Jerusalem.

On the page listing the working groups, a link is provided to a three-page progress report from working group three. Here is an interesting quote from the late Pope Francis, picked up by the working group:

"This environment is now "indistinguishable from the sphere of everyday life"."

Pope Francis was an intelligent man and had intelligent people around him, including the late Cardinal Pell. We can trace this quote back to the thinking of Alan Turing. Turing is regarded as the father of computer science and a martyr. Turing gave us exactly the same concept in the legendary Turing test, which Turing himself called the imitation game in 1949.

Another way of interpreting this phenomenon is to say that the masses have been brainwashed by the overlords of Silicon Valley.

Facebook whistleblower Frances Haugen gave us the details in her testimony to Congress:

The choices being made by Facebook's leadership are a huge problem, for children, for public safety, for democracy, and that is why I came forward. And let's be clear: it doesn't have to be this way. We are here today because of deliberate choices Facebook has made.

The working group's summary continues...

To effectively proclaim the Gospel in our contemporary culture, we must discern the opportunities and challenges presented by this new dimension of "place"

That quote in particular acknowledges that there are both opportunities and challenges. The jubilee year is all about hope and I really do hope that the members of the working group read the material from whistleblowers, child psychologists and even coroners who warn us about the impact of Facebook and the like.

Nonetheless, the report includes the phrase "greater immersion" and I feel the Church should not take it for granted that this is a default course of action.

The summary also touches on the concept of jurisdiction. The Catholic Church has traditionally organised itself on a geographical basis. The internet allows people to connect and form virtual communities without any geographical connection.

Incidentally, before the internet, the Church could move high-risk priests from one parish to another without having to worry about the connections being made. I went meticulously through the documents of the Australian Royal Commission and found this note from the legendary Father X___:

This means that if anyone in Australia were to hear that Father Z___ is receiving treatment because of something that happened in Boston and went there to find out more, they would hit a dead end.

The letter in question was written shortly before the internet became mainstream. Reading those words again today, they are a stark reminder of how the internet is turning our lives upside down.

The working group goes on to say that it is looking for "practical recommendations or proposals" from across the community on any topic related to the Church's mission in the digital environment.

People involved in the free software movement, whether Catholic or not, can contact their local diocese to find out who is coordinating the local response to these challenges.

Another phrase that struck me:

"today we live in a digital culture"

Not exactly. Some would say a digital culture is being imposed on us. Institutions such as politics and the media are dependent on it and put it on a pedestal. It is therefore all the more vital that other institutions, such as the Church, take on the task of questioning every aspect of digital culture and promoting viable alternatives.

Life without mobile phones, life without apps

Mobile phones and apps are closely related. Some people choose to live without a smartphone; in other words, they have only half the problems of a full mobile phone. Some people also choose to have a smartphone without the Google or Apple app store, for example those who install Replicant or LineageOS and use the F-Droid app store to limit their phone to ethical apps.

In practical terms, there are people who cannot get around their own home town without using their phone. An interesting question for the church is: what percentage of the faithful are unable to identify the most direct route from their home to the nearest church without using an app? It would be interesting to analyse the answers according to various factors, such as age and years of residence in the parish.

Another key question, closely related to the previous one, is: how many parishioners can remember the mass times and the key events in the parish calendar without looking at their phone? It is great to have this information visible on the parish website; nonetheless, when people are truly involved in the parish and the community, this information is memorised. The more widely this information is known throughout a community, the more resilient that community is.

Authentication systems undermine human dignity

Nowadays we frequently see companies insisting that they need our mobile phone numbers to "authenticate" us or to "sign" documents by SMS.

This kind of thing is particularly disturbing. Many people are familiar with the Nazi practice of branding identification numbers onto the skin of Jewish prisoners. Mobile phone numbers serve a similar function. Even if the numbers are not branded onto our skin, it is often inconvenient for people to change their number.

There are many closely related phenomena, including websites that require users to authenticate through a Gmail or Facebook account.

At the level of Church, State, education, healthcare and financial services, it is vital to ensure that everybody can participate in the manner of their choosing without giving up their dignity.

The Church needs to speak out on these topics with the same voice it uses on topics such as abortion.

Consent needs to be emphasised

Concerns about consent and coercion have become a hot topic in today's world. Ironically, the social control platforms that pretend to help women find a platform violate the principle of consent in many other ways.

Consider, for example, people who put time into creating a profile on Facebook or Twitter, sometimes over many years, connecting with hundreds or thousands of followers, only to be told they have to add their mobile phone number to their account. If they don't, the account is blocked. There is no genuine technical reason to have a mobile phone number in the account, as many of these services worked in exactly the same way for many years before such demands became common.

People do not freely consent to sharing their phone numbers with Mark Zuckerberg and Elon Musk. The services have been bastardised to ambush their users with these demands.

It is significant that this culture of ambush and coercion creeps into society. In Australia, Chanel Contos launched a widely publicised petition/journal with stories from women at elite private schools who felt ambushed, bullied and coerced into unwanted physical encounters.

Ironically, Ms Contos publicised her concerns through the very same platforms that are undermining our understanding of consent and privacy.

The Church itself has had to do some deep soul-searching on the topics of consent and abuses of power. This puts it in an interesting position, where we can say that, even considering some of the most shocking revelations of abuse, those responsible are the lesser evil compared with the masters of Silicon Valley.

It is striking how quickly the Silicon Valley institutions have abandoned any system of checks and balances, seeing fit to do whatever pleases them. The Catholic Church and other religious institutions can now draw on what they have learnt from critically analysing their own failures and warn society how foolish it would be to go down the same path again with these digital gangsters.

Digital technology is much more than just social control media

The church is no stranger to technology. The first printing presses were installed on church premises. Caxton installed the first English printing press in Westminster Abbey. Other sites included Oxford and St Albans Abbey. Before printing, reading and writing were activities reserved for clerics and many of their works existed only in Latin. Printing enabled the mass production of Bibles in German and English. This, in turn, had an enormous impact on the standardisation of language, just as it helped standardise the moral attitudes that Silicon Valley is now destroying beneath us. The King James Bible is widely recognised for its impact on the English language.

The standardisation of language was only one side effect of this invention. The Reformation was another. As people gained books and the ability to read, they became less dependent on the clergy.

In the same way, social control media is having an impact on our culture, for better and for worse. Just as the printing press enabled the Reformation, social control media may lead to further changes in the way human beings organise themselves around religious structures and beliefs. The Silicon Valley overlords are actively contemplating these roles. Elon Musk has even dressed up as Satan. If the Catholic Church does not offer a convincing alternative to these shifts in power, they will slip out of its control.

Elon Musk, Satan

 

Frances Haugen (informatrice di Facebook): quasi nessuno al di fuori di Facebook sa cosa succede al suo interno. I vertici dell'azienda nascondono informazioni vitali al pubblico, al governo degli Stati Uniti, ai suoi azionisti e ai governi di tutto il mondo. I documenti che ho fornito dimostrano che Facebook ci ha ripetutamente ingannato su ciò che le sue stesse ricerche rivelano sulla sicurezza dei bambini, sul suo ruolo nella diffusione di messaggi d'odio e divisivi e molto altro ancora.

Mentre le generazioni precedenti si rivolgevano al clero per un consiglio, per poi leggere la Bibbia a loro volta, i giovani di oggi si rivolgono a un motore di ricerca e un domani potrebbero affidarsi all'intelligenza artificiale. Possiamo già osservare come motori di ricerca, social media e bot di intelligenza artificiale spingano le persone a livelli crescenti di conflitto con i vicini o le spingano su sentieri oscuri di isolamento, autolesionismo e suicidio.

Catholic Church resources relevant to the digital environment

The Catholic Church has a significant role in education and schools, so it can see the impact of social control media, it can enforce bans for children and it can provide training for staff and parents.

Teachers, whether employed by the Church or the state, have reported an increase in bullying from parents who gang up on messaging apps. In a recent case, British police sent six officers to humiliate a parent who had used WhatsApp to complain about the local school. The conflict, the adversarial nature of this environment and the enormous waste of police resources are all consequences of the way the technology is designed and used in society. Every incident like this gives the Catholic Church an opportunity to ask "is there a better way?".

Frances Haugen's words help explain why six police officers end up besieging the parents of young children:

I saw that Facebook repeatedly encountered conflicts between its own profits and our safety. Facebook consistently resolved those conflicts in favour of its own profits. The result has been a system that amplifies division, extremism and polarisation, undermining societies around the world.

The Catholic Church is a major employer in many countries. This gives it the authority to make decisions about the use of mobile phones and messaging apps within the employer/employee relationship. An employer cannot forbid employees from using these devices in their own time, but it can decide to eliminate their official use for work purposes. The employer/employee relationship offers a further opportunity to provide training on the importance of human dignity above the demands of our devices.

The public agenda in the digital environment, the abortion of our species

With many politicians and journalists now living their lives inside social control media, their ability to judge which topics deserve public debate is heavily influenced by the topics that appear to be trending online. Topics are assumed to trend online as a consequence of public interest, when in reality the operators of online platforms exercise their influence to ensure that some issues appear to grow organically, while significant but inconvenient topics are conveniently buried in the news feed.

In this context, the Catholic Church offers an alternative route for putting issues on the agenda of public debate, regardless of whether a particular issue appears to be "trending" or not. This power is most often used for issues close to the Church's teaching, such as lobbying on abortion, but there is no reason the Church could not use the same resources to lobby against the abortion of humankind by artificial intelligence.

Helping the victims of discrimination by the lords of Silicon Valley and the online mobs

The origins of the Catholic Church go back to the persecution of Jesus and of the martyrs Saint Peter and Saint Paul.

But let us leave aside the ancient examples and come to those who, in the times closest to us, have struggled for the faith. Let us take the noble examples of our own generation. Through jealousy and envy, the greatest and most righteous pillars of the Church were persecuted and contended even unto death. Let us set before our eyes the good Apostles. Peter, through unjust envy, endured not one or two but many labours, and at last, having given his testimony, departed to the place of glory that was his due. Paul too, through envy, showed by example the reward that is given to patience: seven times he was in chains; he was banished; he was stoned; having become a herald both in the East and in the West, he gained the noble renown due to his faith; and after preaching righteousness to the whole world, having reached the farthest limit of the West and borne witness before the rulers, he at last departed from the world and went to the holy place, having become the greatest example of patience. (First Epistle of Clement to the Corinthians, 5:1 - 5:7)

These words describe the persecution of Peter and Paul under the Emperor Nero, almost two thousand years ago.

Eight hundred years ago the Magna Carta was promulgated and, over time, it inspired the United States Bill of Rights, the Universal Declaration of Human Rights and the abolition of the death penalty.

Yet today we see the lords of Silicon Valley wanting to throw all of this out the window and take us back to the days of Nero.

Consider Article 27 of the Universal Declaration of Human Rights:

  1. Everyone has the right freely to participate in the cultural life of the community, to enjoy the arts and to share in scientific advancement and its benefits.
  2. Everyone has the right to the protection of the moral and material interests resulting from any scientific, literary or artistic production of which he is the author.

When we visit the websites of well-known free software projects such as Debian and Fedora, we see them openly declaring their desire to censor certain people. Anyone who speaks up about ethical issues in our industry has been subjected to these extreme reprisals from time to time.

Linus Torvalds (founder of Linux) was banned from DebConf. Dr Jacob Appelbaum was the subject of false rumours of sexual misconduct. Dr Richard Stallman was crucified by an online petition at Easter 2021. A rival professor, Gunnar Wolf, called for a vote on the night of Holy Thursday, the very day on which Judas betrayed Jesus. When I spoke up to defend Dr Stallman's civil rights, the same mob turned on me.

Dr Peter Eckersley and I were classmates at the University of Melbourne. Dr Eckersley was Chief Computer Scientist at the Electronic Frontier Foundation and later director of research at the Partnership on AI. After Dr Eckersley's death I restored his blog posts about the military use of AI, and the EFF itself censored links to those blogs, giving rise to a new civil rights lawsuit in which EFF and Google, oddly enough, sit together as defendants.

The similarities between these cases, and the growing list of victims, are clear evidence that these are not random occurrences. There is a coordinated effort to curtail or circumvent civil rights. If a digital space or a digital world exists, then it is disturbingly similar to the world in which Roman emperors resorted to gruesome executions to perpetuate control through fear.

The Catholic Church can go looking for the victims who have been cancelled, the victims who have been deplatformed and those who have something to say about human dignity in the age of artificial intelligence. Whether these people are Catholic or not, the concerns that independent experts have tried to investigate and publicise must be placed above the noise produced by public relations departments.

At the same time, the horrible impact inflicted on our families is often hidden from public view.

Children in the digital environment

It is significant that we found very similar tactics being used by Harvey Weinstein and by Chris Lamb, a former leader of the Debian project.

This is significant because Lamb was trained through the Google Summer of Code and was funded by Google, which also made a substantial payment of $300,000 shortly before three victims exposed the scandal. Despite Debian's promise of transparency, the money was only revealed more than six months later, and Google's name has never been publicly connected to those figures.

When Weinstein had concerns about the behaviour of certain women, he would send nasty gossip about their "behaviour" to others in the industry. There is something snobbish about these attitudes towards human behaviour.

When women made complaints to the police, the director Peter Jackson spoke up and confirmed that Weinstein had been using these dirty tricks, spreading rumours about the behaviour of women who were not submissive enough for his liking.

"I recall Miramax telling us they were a nightmare to work with and we should avoid them at all costs. This was probably in 1998," Jackson said.

"At the time, we had no reason to question what these people were telling us, but in hindsight I realise that this was very likely the Miramax smear campaign in full swing."

Several people have come forward demonstrating that Chris Lamb was doing exactly the same thing in his role in Debian. Under copyright law, co-authors have no obligations whatsoever towards the person elected to serve from time to time as Debian Project Leader. We are all equals.

Subject: Re: Debian Developer status
Date: Tue, 18 Dec 2018 10:36:09 +0900
From: Norbert Preining <norbert@preining.info>
To: Daniel Pocock <daniel@pocock.pro>

Hi Daniel,

even though fighting a lawsuit like this in the UK is beyond my
abilities and financial means,
I am afraid that Lamb has indeed ruined an application for a company
in New York, a Debian-related job. If that happened, and I can
reasonably document it, I would consider a defamation lawsuit.

> Lamb is resident in the UK and sends emails from the UK
> https://regainyourname.com/news/cyberbullying-cyberstalking-and-online-harassment-a-uk-study/

Thanks for the links, I will keep them in mind.

Norbert

--
PREINING Norbert http://www.preining.info
Accelia Inc. + JAIST + TeX Live + Debian Developer
GPG: 0x860CDC13 fp: F7D8 A928 26E3 16A1 9FA0 ACF0 6CAC A448 860C DC13

Even more disturbing is the fact that Lamb began attacking my family at the very time that Cardinal George Pell was convicted in 2018. One of my second cousins was a member of Cardinal George Pell's former choir in Melbourne. Lamb and his accomplices, funded by Google, spread anonymous rumours of abuse.

Several people came forward with evidence that Lamb was behaving like Weinstein, spreading rumours behind our backs. When Dr Preining and I spoke up, a third victim saw the scandal and identified himself publicly on Christmas Day:

Subject: Re: Censorship in Debian
Date: Tue, 25 Dec 2018 23:44:38 +0100
From: martin f krafft
Organisation: The Debian project
To: debian-project@lists.debian.org

Hello project,

It is very sad to read about what is going on.

I know there has been at least one other case in which DAM and AH
acted outside their mandate, threatening expulsion from the project
and choosing very selectively with whom they communicated.
I know this because I was the target.

Neither DAM nor AH (the same people still active today) made
a single attempt to hear my side. None of my e-mails to DAM or AH
were ever answered.

Instead, DAM issued a verdict and influenced other people to the
point that "because DAM has ruled" was given as a reason for further
measures. This was an unconstitutional abuse of DAM's powers, and in
AH's case the whole affair also bordered on defamation. Among others,
the current DPL Chris Lamb promised a review in due course, but
nothing ever happened.

... [snip] ...

But if it is not safe for the engineers who develop this technology, it is certainly not safe for children.

On 5 October 2021 I raised the concerns about children in this culture with the report Google, FSFE & Child Labor.

Red Hat, a subsidiary of IBM since 2019, started legal action to censor and discredit my concerns. They accused me of bad faith for publishing that article. However, the legal panel found that Red Hat was harassing me and committing an abuse of the administrative procedure.

The irony, of course, is that Cardinals wear red hats, like the name of the company Red Hat that was found to have mistreated me. Chris Lamb of Debian had spread the rumours about my family when Cardinal Pell was convicted.

The way all of this has intersected with our lives and our faith, the rumours of abuse after the conviction of the late Cardinal Pell, my visit to the Carabinieri on the day of the Cardinal's death, the wedding day, Palm Sunday, a copycat suicide (unconfirmed), the crucifixion of Dr Stallman at Easter and Debian's Christmas lynchings, is bewildering. As they say in detective films, follow the money.

Cardinal George Pell, Red Hat

 

The digital environment subjects parishioners to third-party surveillance

The Catholic Church was born out of persecution, and it must be remembered that surveillance is a pillar of persecution.

The fact that the largest services, such as Google, Facebook and Twitter, are all ostensibly free is proof that they derive all of their profits from their ability to conduct effective surveillance and manipulation of the population.

At one time, the Church performed similar roles. The faithful submitted to a form of surveillance through the sacrament of confession, where they received counsel from their priest. Priests sought to exert a certain influence from the pulpit, with the threat of excommunication and, from time to time, the inquisition or the persecution of someone who was ahead of their time, such as Galileo.

If technology companies can approximate all of these functions so effectively with algorithms, we run the risk that religion becomes redundant.

Therefore, trying to perform the role of the Church through a medium that substitutes itself for religion is very much like digging your own grave.

Through a series of public inquiries and whistleblower reports, we have learnt the extent to which these overlords are stripping us of our dignity. Their goal is to anticipate our every decision, to influence whom we talk to, how we vote and every single cent in our budget.

If each of these decisions is controlled and even micromanaged for us, with scientific precision, down to the last cent in our bank account each month, by the influence of algorithms, what space remains in our conscience for the influence of the Gospel?

Mission: staying relevant

Therefore, the question put to the working group on mission in the digital environment could be rephrased as follows: how does religion, of any kind, remain relevant at all?

Today, by tradition, in many families in wealthier cultures the church is a place for weddings, funerals and sometimes the education of children.

For the church to equip its parishioners with technology, rather than losing them to technology, we need to ask questions about some of the topics raised by the free software movement.

How do we ensure that every person has full control over their own devices, including the right to repair and the right to change the operating system?

How do we develop strategies to protect people from the risks of technology? For example, social control media allow small but very noisy groups to inflict serious harm on their victims through the deliberate and repeated spreading of gossip and defamation. It is becoming increasingly difficult to ensure that no person or minority is excluded by online vendettas. How do we provide support to people targeted by these toxic individuals? How do we ensure that every person and every group can take their turn to speak?

Mission: protecting society from the same mistakes

Australia set up a Royal Commission into abuse committed by a wide range of institutions, including the Church. Yet for many people who had died, or who had lost family members, their health or their careers, it came too late. Would it not be better to intervene with measures of that weight before catastrophic failures occur rather than after? The time has come to direct the same level of scrutiny at the executives of social control media and at the exploitation and manipulation of the public on multiple levels.

Conclusion

Social control media are rapidly becoming a front for artificial intelligence. As the Turing test (the imitation game) has suggested to us ever since 1950, it is inevitable that each new iteration of this phenomenon becomes harder and harder to distinguish from reality. As such, it may present itself not only as a substitute for our fellow human beings, but also as an alternative to the Church. People could be duped into accepting it as their God. In other words, social control media could make the Church irrelevant and, having done so, could go on to make humanity irrelevant.

Just look at how people smirk at me after my father's death. The rudeness I experience almost daily began at a moment of grief. People are conditioned to set aside even the most basic respect for human dignity, respect for a family in a time of mourning, and it becomes just another opportunity to exploit other people for entertainment. This aspect of my life was created entirely by social media and by the people who are defining that space in my profession.

In her testimony to Congress, Frances Haugen told us:

I believe what I did was right and necessary for the common good, but I know Facebook has infinite resources, which it could use to destroy me.

In 2018 I attended the UN Forum on Business and Human Rights in Geneva, where I made some brief comments about Facebook and Twitter having fallen into the wrong hands. The UN Forum took place at the same time as the jury was considering the charges against Cardinal George Pell. Pell was convicted, and these social control platforms filled up with rumours about me and my family, the very phenomena that Haugen herself appears to be afraid of.

In 2022, we can see that Software in the Public Interest, Inc. spent more than $120,000 in legal fees attacking me and my family.

Here is the video with the comments I made at the UN Forum. I spoke for barely forty-three seconds, and they spent $120,000 attacking my family.

24 June, 2025 07:00PM


Dirk Eddelbuettel

RcppRedis 0.2.6 on CRAN: Extensions

A new minor release 0.2.6 of our RcppRedis package arrived on CRAN today. RcppRedis is one of several packages connecting R to the fabulous Redis in-memory datastructure store (and much more). It works equally well with the newer fork Valkey. RcppRedis does not pretend to be feature complete, but it may do some things faster than the other interfaces, and also offers an optional coupling with MessagePack binary (de)serialization via RcppMsgPack. The package has been “deployed in production” as a risk / monitoring tool on a trading floor for several years. It also supports pub/sub dissemination of streaming market data as per this earlier example.

This update brings new functions del, lrem, and lmove (for the matching Redis / Valkey commands) which may be helpful in using Redis (or Valkey) as a job queue. We also extended the publish accessor by supporting text (i.e. string) mode along with raw or rds (the prior default which always serialized R objects) just as listen already worked with these three cases. The change makes it possible to publish from R to subscribers not running R as they cannot rely on the R deserializer. An example is provided by almm, a live market monitor, which we introduced in this blog post. Apart from that the continuous integration script received another mechanical update.
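For the consumer side of the new text mode, here is a small sketch (not part of the release, and with a hypothetical channel name) of a non-R subscriber using the Python redis-py client; when the R publisher sends plain text, no R deserialization is needed:

import redis  # redis-py client

r = redis.Redis(host="localhost", port=6379, decode_responses=True)
p = r.pubsub()
p.subscribe("quotes")              # hypothetical channel written to from R

for msg in p.listen():             # blocks, yielding one dict per event
    if msg["type"] == "message":
        # with the new text/string publishing mode the payload is plain text,
        # so any non-R client can consume it directly
        print(msg["data"])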

The detailed changes list follows.

Changes in version 0.2.6 (2025-06-24)

  • The commands DEL, LREM and LMOVE have been added

  • The continuous integration setup was updated once more

  • The pub/sub publisher now supports a type argument similar to the listener, this allows string message publishing for non-R subscribers

Courtesy of my CRANberries, there is also a diffstat report for this release. More information is on the RcppRedis page and at the repository and its issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

24 June, 2025 04:23PM

Uwe Kleine-König

Temperature and humidity sensor on OpenWrt

I have a SHT3x humidity and temperature sensor connected to the i2c bus of my Turris Omnia that runs OpenWrt.

To make it produce nice graphs shown in the webif I installed the packages collectd-mod-sensors, luci-app-statistics and kmod-hwmon-sht3x.

To make the sht3x driver bind to the device I added

echo 'sht3x 0x44' > /sys/bus/i2c/devices/0-0070/channel-6/new_device

to /etc/rc.local. After that I only had to enable the Sensors plugin below Statistics -> Setup -> General plugins and check 'Monitor all except specified' in its "Configure" dialog.
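If you want to read the values directly (say, as a quick sanity check before collectd picks them up), the driver exposes them through the hwmon interface in sysfs. Here is a small Python sketch of mine, assuming the usual temp1_input and humidity1_input attribute names (millidegrees and milli-percent):

from pathlib import Path

# Find the hwmon instance registered by the sht3x driver and print its values.
for hwmon in Path("/sys/class/hwmon").glob("hwmon*"):
    if (hwmon / "name").read_text().strip() != "sht3x":
        continue
    temp = int((hwmon / "temp1_input").read_text()) / 1000       # degrees C
    rh = int((hwmon / "humidity1_input").read_text()) / 1000     # percent RH
    print(f"{temp:.2f} degC, {rh:.1f} %RH")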

24 June, 2025 08:22AM

Matthew Garrett

Why is there no consistent single signon API flow?

Single signon is a pretty vital part of modern enterprise security. You have users who need access to a bewildering array of services, and you want to be able to avoid the fallout of one of those services being compromised and your users having to change their passwords everywhere (because they're clearly going to be using the same password everywhere), or you want to be able to enforce some reasonable MFA policy without needing to configure it in 300 different places, or you want to be able to disable all user access in one place when someone leaves the company, or, well, all of the above. There's any number of providers for this, ranging from it being integrated with a more general app service platform (eg, Microsoft or Google) or a third party vendor (Okta, Ping, any number of bizarre companies). And, in general, they'll offer a straightforward mechanism to either issue OIDC tokens or manage SAML login flows, requiring users present whatever set of authentication mechanisms you've configured.

This is largely optimised for web authentication, which doesn't seem like a huge deal - if I'm logging into Workday then being bounced to another site for auth seems entirely reasonable. The problem is when you're trying to gate access to a non-web app, at which point consistency in login flow is usually achieved by spawning a browser and somehow managing submitting the result back to the remote server. And this makes some degree of sense - browsers are where webauthn token support tends to live, and it also ensures the user always has the same experience.

But it works poorly for CLI-based setups. There are basically two options - you can use the device code authorisation flow, where you perform authentication on what is nominally a separate machine to the one requesting it (but in this case is actually the same) and as a result end up with a straightforward mechanism to have your users socially engineered into giving Johnny Badman a valid auth token despite webauthn nominally being unphishable (as described years ago), or you can reduce that risk somewhat by spawning a local server and POSTing the token back to it - which works locally but doesn't work well if you're dealing with trying to auth on a remote device. The user experience for both scenarios sucks, and it reduces a bunch of the worthwhile security properties that modern MFA supposedly gives us.
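For reference, the device code authorisation flow mentioned above boils down to something like this sketch of RFC 8628; the endpoint URLs and client_id are placeholders, not any particular provider's API:

import time
import requests

IDP = "https://idp.example.com"      # hypothetical identity provider
CLIENT_ID = "my-cli-client"          # hypothetical client registration

# Step 1: request a device code plus a URL/code the user must visit in a browser.
resp = requests.post(f"{IDP}/oauth2/device_authorization",
                     data={"client_id": CLIENT_ID, "scope": "openid"}).json()
print(f"Visit {resp['verification_uri']} and enter code {resp['user_code']}")

# Step 2: poll the token endpoint until the user has authenticated elsewhere.
while True:
    token = requests.post(f"{IDP}/oauth2/token",
                          data={"client_id": CLIENT_ID,
                                "device_code": resp["device_code"],
                                "grant_type": "urn:ietf:params:oauth:grant-type:device_code"}).json()
    if "access_token" in token:
        break                        # success; 'authorization_pending' until then
    time.sleep(resp.get("interval", 5))

Note that nothing in this flow proves the approving party is sitting at the machine that asked for the token, which is exactly the social engineering weakness described above.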

There's a third approach, which is in some ways the obviously good approach and in other ways is obviously a screaming nightmare. All the browser is doing is sending a bunch of requests to a remote service and handling the response locally. Why don't we just do the same? Okta, for instance, has an API for auth. We just need to submit the username and password to that and see what answer comes back. This is great until you enable any kind of MFA, at which point the additional authz step is something that's only supported via the browser. And basically everyone else is the same.

Of course, when we say "That's only supported via the browser", the browser is still just running some code of some form and we can figure out what it's doing and do the same. Which is how you end up scraping constants out of Javascript embedded in the API response in order to submit that data back in the appropriate way. This is all possible but it's incredibly annoying and fragile - the contract with the identity provider is that a browser is pointed at a URL, not that any of the internal implementation remains consistent.

I've done this. I've implemented code to scrape an identity provider's auth responses to extract the webauthn challenges and feed those to a local security token without using a browser. I've also written support for forwarding those challenges over the SSH agent protocol to make this work with remote systems that aren't running a GUI. This week I'm working on doing the same again, because every identity provider does all of this differently.

There's no fundamental reason all of this needs to be custom. It could be a straightforward "POST username and password, receive list of UUIDs describing MFA mechanisms, define how those MFA mechanisms work". That even gives space for custom auth factors (I'm looking at you, Okta Fastpass). But instead I'm left scraping JSON blobs out of Javascript and hoping nobody renames a field, even though I only care about extremely standard MFA mechanisms that shouldn't differ across different identity providers.

Someone, please, write a spec for this. Please don't make it be me.

24 June, 2025 06:03AM

June 23, 2025

Gunnar Wolf

Private key management • Oh, the humanity...

If we ever thought a couple of years or decades of constant use would get humankind to understand how an asymmetric key pair is to be handled… It’s time we moved back to square one.

I had to do an online tramit (a bureaucratic procedure) with the Mexican federal government to get a statement certifying I successfully finished my studies, and I found this jewel of a user interface:

E.firma

So… I have to:

  1. Submit the asymmetric key I use for tax purposes, as that’s the ID the government has registered for me. OK, I didn’t expect it to be used for this purpose as well, but I’ll accept it. Of course, in our tax system many people don’t require having a public key generated (“easier” regimes are authenticated by password only), but all professionals with a cédula profesional (everybody obtaining a university degree) are now compelled to do this step.
  2. Not only do I have to submit my certificate (public key)… but also the private part (and, of course, the password that secures it).

    I understand I’m interacting with a Javascript thingie that runs only client-side, and I trust it is not shipping my private key to their servers. But given it is an opaque script, I have no assurance about it. And, of course, this irks me because I am who I am and because I’ve spent several years thinking about cryptography. But for regular people, it just looks like a stupid inconvenience: they have to upload two weird files with odd names and provide a password. What for?

This is beyond stupid. I’m baffled.

(of course, I did it, because I need the fsckin’ document. Oh, and of course, I paid my MX$1770, ≈€80, for it… which does not make me too happy for a tramit that’s not even shuffling papers, only storing the right bits in the right corner of the right datacenter, but anyhow…)

23 June, 2025 07:40PM

Russell Coker

PFAs

For some time I’ve been noticing news reports about PFAs [1]. I hadn’t thought much about that issue, I grew up when leaded petrol was standard, when almost all thermometers had mercury, when all small batteries had mercury, and I had generally considered that I had already had so many nasty chemicals in my body that as long as I don’t eat bottom feeding seafood often I didn’t have much to worry about. I already had a higher risk of a large number of medical issues than I’d like due to decisions made before I was born and there’s not much to do about it given that there are regulations restricting the emissions of lead, mercury etc.

I just watched a Veritasium video about Teflon and the PFA poisoning related to its production [2]. This made me realise that it’s more of a problem than I thought, and it’s a problem that’s getting worse. PFA levels in the parts-per-trillion range in the environment can cause parts-per-billion levels in the body, which increases the risks of several cancers and causes other health problems. Fortunately there is some work being done on water filtering; you can get filters for home use now, and they are working on filters that can work at a sufficient scale for a city water plant.

There is a map showing PFAs in the environment in Australia which shows some sites with concerning levels that are near residential areas [3]. One of the major causes for that in Australia is fire retardant foam – Australia has never had much if any Teflon manufacturing AFAIK.

Also they noted that donating blood regularly can decrease levels of PFAs in the bloodstream. So presumably people who have medical conditions that require receiving donated blood regularly will have really high levels.

23 June, 2025 12:26PM by etbe

June 22, 2025

Iustin Pop

Coding, as we knew it, has forever changed

Back when I was terribly naïve

When I was younger, and definitely naïve, I was so looking forward to AI, which would help us write lots of good, reliable code faster. Well, principally me; I wasn’t thinking about what impact it would have industry-wide. Other, more general concerns, like societal issues and the role of humans in the future, were totally not on my radar.

At the same time, I didn’t expect this would actually happen. Even years later, things didn’t change dramatically. Even the first release of ChatGPT a few years back didn’t click for me, as the limitations were still significant.

Hints of serious change

The first hint of the change, for me, was when a few months ago (yes, behind the curve), I asked ChatGPT to re-explain a concept to me, and it just wrote a lot of words, but without a clear explanation. On a whim, I asked Grok—then recently launched, I think—to do the same. And for the first time, the explanation clicked and I felt I could have a conversation with it. Of course, now I forgot again that theoretical CS concept, but the first step was done: I can ask an LLM to explain something, and it will, and I can have a back and forth logical discussion, even if on some theoretical concept. Additionally, I learned that not all LLMs are the same, and that means there’s real competition and that leap frogging is possible.

Another tool I tried to adopt early but failed to get mileage out of was GitHub Copilot (in VSC). I tried it, and it helped, but I didn’t feel any speed-up at all. Then more recently, in May, I asked Grok what the state of the art in AI-assisted coding is. It said either Claude in a browser tab, or in VSC via the continue.dev extension.

The continue.dev extension/tooling is a bit of a strange/interesting thing. It seems to want to be a middle-man between the user and actual LLM services, i.e. you pay a subscription to continue.dev, not to Anthropic itself, and they manage the keys/APIs, for whatever backend LLMs you want to use. The integration with Visual Studio Code is very nice, but I don’t know if long-term their business model will make sense. Well, not my problem.

Claude: reverse engineering my old code and teaching new concepts

So I installed the latter and subscribed, thinking 20 CHF for a month is good for testing. I skipped the tutorial model/assistant, created a new one from scratch, just enabled Claude 3.7 Sonnet, and started using it. And then my mind was blown, not just by the LLM, but by the ecosystem. As said, I’ve used GitHub Copilot before, but it didn’t seem effective. I don’t know if a threshold has been reached, or if Claude (3.7 at that time) is just better than ChatGPT.

I didn’t use the AI to write (non-trivial) code for me, at most boilerplate snippets. But I used it both as a partner for discussion (“I want to do x, what do you think, A or B?”) and as a teacher, especially for frontend topics, which I’m not familiar with.

Since May, in mostly fragmented sessions, I’ve achieved more than in the last two years. Migration from old school JS to ECMA modules, a webpacker (reducing bundle size by 50%), replacing an old Javascript library with hand written code using modern APIs, implementing the zoom feature together with all of keyboard, mouse, touchpad and touchscreen support, simplifying layout from manually computed to automatic layout, and finding a bug in webkit for which it also wrote a cool minimal test (cool, as in, way better than I’d have ever, ever written, because for me it didn’t matter that much). And more. Could I have done all this? Yes, definitely, nothing was especially tricky here. But hours and hours of reading MDN, scouring Stack Overflow and Reddit, and lots of trial and error. So doable, but much more toily.

This, to me, feels like cheating. 20 CHF per month to make me 3x more productive is free money—well, except that I don’t make money on my code which is written basically for myself. However, I don’t get stuck anymore searching hours in the web for guidance, I ask my question, and I get at least direction if not answer, and I’m finished way earlier. I can now actually juggle more hobbies, in the same amount of time, if my personal code takes less time or differently said, if I’m more efficient at it.

Not all is roses, of course. Once, it did write code with such an endearing error that it made me laugh. It was so blatantly obvious that you shouldn’t keep other state in the array that holds pointer status, because that confuses the calculation of “how many pointers are down”, probably to itself too if I’d have asked. But I didn’t, since it felt a bit embarrassing to point out such a dumb mistake. Yes, I’m anthropomorphising again, because this is the easiest way to deal with things.

In general, it does an OK-to-good-to-sometimes-awesome job, and the best thing is that it summarises documentation and all of Reddit and Stack Overflow. And gives links to those.

Now, I have no idea yet what this means for the job of a software engineer. If on open source code, my own code, it makes me 3x faster—reverse engineering my code from 10 years ago is no small feat—for working on large codebases, it should do at least the same, if not more.

As an example of how open-ended the assistance can be, at one point, I started implementing a new feature—threading a new attribute to a large number of call points. This is not complex at all: just adding a new field to a Haskell record, and modifying everything to take it into account, populate it, merge it when merging the data structures, etc. The code is not complex, tending toward boilerplate a bit, and I was wondering about a few possible choices for implementation, so, with just a few lines of code written that were not even compiling, I asked “I want to add a new feature, should I do A or B if I want it to behave like this”, and the answer was something along the lines of “I see you want to add the specific feature I was working on, but the implementation is incomplete, you still need to do X, Y and Z”. My mind was blown at this point, as I thought, if the code doesn’t compile, surely the computer won’t be able to parse it, but this is not a program, this is an LLM, so of course it could read it kind of as a human would. Again, the code complexity is not great, but the fact that it was able to read a half-written patch, understand what I was working towards, and reason about it, was mind-blowing, and scary. Like always.

Non-code writing

Now, after all this, while writing a recent blog post, I thought—this is going to be public anyway, so let me ask Claude what it thinks about it. And I was very surprised, again: gone was all the pain of rereading my post three times to catch typos (easy) or phrasing structure issues. It gave me very clear points, and helped me cut 30-40% of the total time. So not only coding, but wordsmithing too is changed. If I were an author, I’d be delighted (and scared). Here is the overall reply it gave me:

  • Spelling and grammar fixes, all of them on point except one mistake (I claimed I didn’t capitalize one word, but I did). To the level of a good grammar checker.
  • Flow Suggestions, which was way beyond normal spelling and grammar. It felt like a teacher telling me to do better in my writing, i.e. nitpicking on things that actually were true even if they’d still work. I.e. lousy phrase structure, still understandable, but lousy nevertheless.
  • Other notes: an overall summary. This was mostly just praising my post 😅. I wish LLMs were not so focused on “praise the user”.

So yeah, this speeds me up to about 2x on writing blog posts, too. It definitely feels unfair.

Wither the future?

After all this, I’m a bit flabbergasted. Gone are the 2000’s with code without unittests, gone are the 2010’s without CI/CD, and now, mid-2020’s, gone is the lone programmer that scours the internet to learn new things, alone?

What this all means for our skills in software development, I have no idea, except I know things have irreversibly changed (a butlerian jihad aside). Do I learn better with a dedicated tutor even if I don’t fight with the problem for so long? Or is struggling in finding good docs the main method of learning? I don’t know yet. I feel like I understand the topics I’m discussing with the AI, but who knows in reality what it will mean long term in terms of “stickiness” of learning. For the better, or for worse, things have changed. After all the advances over the last five centuries in mechanical sciences, it has now come to some aspects of the intellectual work.

Maybe this is the answer to the ever-growing complexity of tech stacks? I.e. a return of the lone programmer that builds things end-to-end, but with AI taming the complexity added in the last 25 years? I can dream, of course, but this also means that the industry overall will increase in complexity even more, because large companies tend to do that, so maybe a net effect of not much…

One thing I did learn so far is that my expectation that AI (at this level) will only help junior/beginner people, i.e. it would flatten the skills band, is not true. I think AI can speed up at least the middle band, likely the middle top band, I don’t know about the 10x programmers (I’m not one of them). So, my question about AI now is how to best use it, not to lament how all my learning (90% self learning, to be clear) is obsolete. No, it isn’t. AI helps me start and finish one migration (that I delayed for ages), then start the second, in the same day.

At the end of this—a bit rambling—reflection on the past month and a half, I still have many questions about AI and humanity. But one has been answered: yes, “AI”, quotes or no quotes, already has changed this field (producing software), and we’ve not seen the end of it, for sure.

22 June, 2025 09:33PM

Steinar H. Gunderson

Superimposed codes

I had a peculiar question at work recently, and it went off of a tangent that was way too long and somewhat interesting, so I wanted to share.

The question is: Can you create a set of N-bit numbers (codes), so that

a) None of them is a subset of another, and
b) None of them is a subset of the OR of two of the others?

Of course, you can trivially do this (e.g., for N=5, choose 10000, 01000, 00100 and so on), but how many can you make for a given N? This is seemingly an open question, but at least I found that they are called (1,2) superimposed codes and have history at least back to this 1964 paper. They present a fairly elegant (but definitely non-optimal) way of constructing them for certain N; let me show an example for N=25:

We start by counting 3-digit numbers (k=3) in base 5 (q=5):

  • 000
  • 001
  • 002
  • 003
  • 004
  • 010
  • 011
  • etc…

Now we have 5^3 numbers. Let's set out to give them the property that we want.

This code (set of numbers) trivially has distance 1; that is, every number differs from every other number by at least one digit. We'd like to increase that distance so that it is at least as large as k. Reed-Solomon gives us an optimal way of doing that; for every number, we add two checksum digits and R-S will guarantee that the resulting code has distance 3. (Just trust me on this, I guess. It only works for q >= (k+1)/2, though, and q must be a power of an odd prime because otherwise the group theory doesn't work out.)

We now have a set of 5-digit numbers with distance 3. But if we now take any three numbers from this set, there is at least one digit where all three must differ, since the distance is larger than half the number of digits: Two numbers A and B differ from each other in at least 3 of the 5 digits, and A and C also have to differ from each other in at least 3 of the 5 digits. There just isn't room for A and B to be the same in all the places that A differs from C.

To modify this property into the one that we want, we encode each digit into binary using one-hot encoding (00001, 00010, 00100, etc.). Now our 5-digit numbers are 25-bit numbers. And due to the "all different" property in the previous paragraph, we also have our superimposition property; there's at least one 5-bit group where A|B shares no bits with C. So this gives us a 25-bit set with 125 different values and our desired property.
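As a quick, brute-force sanity check (mine, not from the paper), here is a small Python sketch that verifies the (1,2) superimposed property for a candidate set of codes, shown on the trivial N=5 example:

from itertools import combinations

def is_superimposed(codes):
    for c in codes:
        others = [x for x in codes if x != c]
        # (a) c must not be covered by any single other code
        if any(c & ~x == 0 for x in others):
            return False
        # (b) c must not be covered by the OR of any two other codes
        if any(c & ~(a | b) == 0 for a, b in combinations(others, 2)):
            return False
    return True

trivial = [0b10000, 0b01000, 0b00100, 0b00010, 0b00001]
print(is_superimposed(trivial))  # True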

This isn't necessarily an optimal code (and the authors are very clear on that), but it's at least systematic and easy to extend to larger sizes. (I used a SAT solver to extend this to 170 different values, just by keeping the 125 first and asking for 45 more that were not in conflict. 55 more was evidently hard.) The paper has tons more information, including some stuff based on Steiner systems that I haven't tried to understand. And of course, there are tons more later papers, including one by Erdős. :-)

I've applied for an account at OEIS so I can add a sequence for the maximum number of possible codes for each N. It doesn't have many terms known yet, because the SAT solver struggles hard with this (at least in my best formulation), but at least it will give the next person something to find when they are searching. :-)

22 June, 2025 11:45AM

Craig Small

epoll on pidfd

The Linux kernel has an interesting file descriptor called pidfd. As the name implies, it is a file descriptor to a pid or a specific process. The nice thing about it is that it is guaranteed to be for the specific process you expected when you got that pidfd. A process ID, or PID, has no reuse guarantees, which means what you think process 1234 is and what the kernel knows process 1234 to be could be different, because your process exited and the process IDs have looped around.

pidfds are *odd*, they’re half a “normal” file descriptor and half… something else. That means some file descriptor things work and some fail in odd ways. stat() works, but using them in the first parameter of openat() will fail.

One thing you can do with them is use epoll() on them to get process status, in fact the pidfd_open() manual page says:

A PID file descriptor returned by pidfd_open() (or by clone(2) with the CLONE_PIDFD flag) can be used for the following purposes:

A PID file descriptor can be monitored using poll(2), select(2), and epoll(7). When the process that it refers to terminates, these interfaces indicate the file descriptor as readable.

So if you want to wait until something terminates, then you can just find the pidfd of the process and sit an epoll_wait() onto it. Simple, right? Except it's not quite true.

procps issue #386 stated that if you had a list of processes, then pidwait only finds half of them. I’d like to thank Steve the issue reporter for the initial work on this. The odd thing is that for every exited process, you get two epoll events. You get an EPOLLIN first, then a EPOLLIN | EPOLLHUP after that. Steve suggested the first was when the process exits, the second when the process has been collected by the parent.

I have a collection of oddball processes, including ones that make zombies. A zombie is a child that has exited but has not been wait()ed on by its parent. In other words, if a parent doesn't collect its dead child, then the child becomes a zombie. The test program spawns a child, which exits after some seconds. The parent waits longer, calls wait(), waits some more, then exits. Running pidwait we can see the following epoll events:

  • When the child exits, EPOLLIN on the child is triggered. At this stage the child is a zombie.
  • When the parent calls wait(), then EPOLLIN | EPOLLHUP on the child is triggered.
  • When the parent exits, EPOLLIN then EPOLLIN | EPOLLHUP on the parent is triggered. That is, two events for the one thing.

If you want to use epoll() to know when a process terminates, then you need to decide on what you mean by that:

  • If you mean it has exited, but not collected yet (e.g. a zombie possibly) then you need to select on EPOLLIN only.
  • If you mean the process is fully gone, then EPOLLHUP is a better choice. You can even change the epoll_ctl() call to use this instead.

A “zombie trigger” (EPOLLIN with no subsequent EPOLLHUP) is a bit tricky to work out. There is no guarantee the two events have to be in the same epoll, especially if the parent is a bit tardy on their wait() call.
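To make the distinction concrete, here is a hedged sketch (Linux, Python 3.9 or newer, with a placeholder PID) of waiting on a pidfd with epoll and telling the two cases apart:

import os
import select

pid = 1234                      # hypothetical target process
pidfd = os.pidfd_open(pid)      # raises ProcessLookupError if the pid is already gone

ep = select.epoll()
ep.register(pidfd, select.EPOLLIN | select.EPOLLHUP)

# Blocks until the pidfd reports something. Per the observations above, a lone
# EPOLLIN means the process has exited but may still be an uncollected zombie,
# while EPOLLHUP means it is fully gone.
for fd, events in ep.poll():
    if events & select.EPOLLHUP:
        print(f"process {pid} is fully gone")
    elif events & select.EPOLLIN:
        print(f"process {pid} has exited, possibly still a zombie")

ep.close()
os.close(pidfd)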

22 June, 2025 07:32AM by dropbear

June 20, 2025

Sven Hoexter

Terraform: Validation Condition Cycles

Some time ago, Terraform 1.9 introduced the capability to reference other variables in an input variable's validation condition, not only the one you're validating.

What does not work is having two variables which validate each other, e.g.

variable "nat_min_ports" {
  description = "Minimal amount of ports to allocate for 'min_ports_per_vm'"
  default     = 32
  type        = number
  validation {
    condition = (
      var.nat_min_ports >= 32 &&
      var.nat_min_ports <= 32768 &&
      var.nat_min_ports < var.nat_max_ports
    )
    error_message = "Must be between 32 and 32768 and less than 'nat_max_ports'"
  }
}

variable "nat_max_ports" {
  description = "Maximal amount of ports to allocate for 'max_ports_per_vm'"
  default     = 16384
  type        = number
  validation {
    condition = (
      var.nat_max_ports >= 64 &&
      var.nat_max_ports <= 65536 &&
      var.nat_max_ports > var.nat_min_ports
    )
    error_message = "Must be between 64 and 65536 and above 'nat_min_ports'"
  }
}

That led directly to the following rather opaque error message: Received an error Error: Cycle: module.gcp_project_network.var.nat_max_ports (validation), module.gcp_project_network.var.nat_min_ports (validation)

Removing the somewhat redundant check var.nat_max_ports > var.nat_min_ports on nat_max_ports breaks the cycle.

20 June, 2025 10:00AM

Matthew Garrett

My a11y journey

23 years ago I was in a bad place. I'd quit my first attempt at a PhD for various reasons that were, with hindsight, bad, and I was suddenly entirely aimless. I lucked into picking up a sysadmin role back at TCM where I'd spent a summer a year before, but that's not really what I wanted in my life. And then Hanna mentioned that her PhD supervisor was looking for someone familiar with Linux to work on making Dasher, one of the group's research projects, more usable on Linux. I jumped.

The timing was fortuitous. Sun were pumping money and developer effort into accessibility support, and the Inference Group had just received a grant from the Gatsby Foundation that involved working with the ACE Centre to provide additional accessibility support. And I was suddenly hacking on code that was largely ignored by most developers, supporting use cases that were irrelevant to most developers. Being in a relatively green field space sounds refreshing, until you realise that you're catering to actual humans who are potentially going to rely on your software to be able to communicate. That's somewhat focusing.

This was, uh, something of an on-the-job learning experience. I had to catch up with a lot of new technologies very quickly, but that wasn't the hard bit - what was difficult was realising I had to cater to people who were dealing with use cases that I had no experience of whatsoever. Dasher was extended to allow text entry into applications without needing to cut and paste. We added support for introspection of the current application's UI so menus could be exposed via the Dasher interface, allowing people to fly through menu hierarchies and pop open file dialogs. Text-to-speech was incorporated so people could rapidly enter sentences and have them spoken out loud.

But what sticks with me isn't the tech, or even the opportunities it gave me to meet other people working on the Linux desktop and forge friendships that still exist. It was the cases where I had the opportunity to work with people who could use Dasher as a tool to increase their ability to communicate with the outside world, whose lives were transformed for the better because of what we'd produced. Watching someone use your code and realising that you could write a three line patch that had a significant impact on the speed they could talk to other people is an incomparable experience. It's been decades and in many ways that was the most impact I've ever had as a developer.

I left after a year to work on fruitflies and get my PhD, and my career since then hasn't involved a lot of accessibility work. But it's stuck with me - every improvement in that space is something that has a direct impact on the quality of life of more people than you expect, but is also something that goes almost unrecognised. The people working on accessibility are heroes. They're making all the technology everyone else produces available to people who would otherwise be blocked from it. They deserve recognition, and they deserve a lot more support than they have.

But when we deal with technology, we deal with transitions. A lot of the Linux accessibility support depended on X11 behaviour that is now widely regarded as a set of misfeatures. It's not actually good to be able to inject arbitrary input into an arbitrary window, and it's not good to be able to arbitrarily scrape out its contents. X11 never had a model to permit this for accessibility tooling while blocking it for other code. Wayland does, but suffers from the surrounding infrastructure not being well developed yet. We're seeing that happen now, though - Gnome has been performing a great deal of work in this respect, and KDE is picking that up as well. There isn't a full correspondence between X11-based Linux accessibility support and Wayland, but for many users the Wayland accessibility infrastructure is already better than with X11.

That's going to continue improving, and it'll improve faster with broader support. We've somehow ended up with the bizarre politicisation of Wayland as being some sort of woke thing while X11 represents the Roman Empire or some such bullshit, but the reality is that there is no story for improving accessibility support under X11 and sticking to X11 is going to end up reducing the accessibility of a platform.

When you read anything about Linux accessibility, ask yourself whether you're reading something written by either a user of the accessibility features, or a developer of them. If they're neither, ask yourself why they actually care and what they're doing to make the future better.

20 June, 2025 08:48AM

June 19, 2025

Jonathan Carter

My first tag2upload upload

Tag2upload?

The tag2upload service has finally gone live for Debian Developers in an open beta.

If you’ve never heard of tag2upload before, here is a great primer presented by Ian Jackson and prepared by Ian Jackson and Sean Whitton.

In short, the world has moved on to hosting and working with source code in Git repositories. In Debian, we work with source packages that are used to generate the binary artifacts that users know as .deb files. In Debian, there is so much tooling and culture built around this. For example, our workflow passes what we call the island test – you could take every source package in Debian along with you to an island with no Internet, and you’ll still be able to rebuild or modify every package. When changing the workflows, you risk losing benefits like this, and over the years there have been a number of different ideas on how to move to a purely or partially git flow for Debian, none of which really managed to gain enough momentum or project-wide support.

Tag2upload makes a lot of sense. It doesn’t take away any of the benefits of the current way of working (whether technical or social), but it does make some aspects of Debian packages significantly simpler and faster. Even so, if you’re a Debian Developer and more familiar with how the sausage is made, you’ll have noticed that this has been a very long road for the tag2upload maintainers; they’ve hit multiple speed bumps since 2019, but with a lot of patience and communication and persistence from all involved (and almost even a GR), it is finally materializing.

Performing my first tag2upload

So, first, I needed to choose which package I want to upload. We’re currently in hard freeze for the trixie release, so I’ll look for something simple that I can upload to experimental.

I chose bundlewrap, it’s quite a straightforward Python package, and updates are usually just as straightforward, so it’s probably a good package to work on without having to deal with extra complexities in learning how to use tag2upload.

So, I do the usual uscan and dch -i to update my package…

And then I realise that I still want to build a source package to test it in cowbuilder. Hmm, I remember that Helmut showed me that building a source package isn’t necessary in sbuild, but I have a habit of breaking my sbuild configs somehow, so I guess I should revisit that.

So, I do a dpkg-buildpackage -S -sa and test it out with cowbuilder, because that’s just how I roll (at least for now, fixing my local sbuild setup is yak shaving for another day, let’s focus!).

I end up with a binary that looks good, so I’m satisfied that I can upload this package to the Debian archives. So, time to configure tag2upload.

The first step is to set up the webhook in Salsa. I was surprised to find two webhooks already configured:

I know of KGB, which posts to IRC; I didn’t know before that this was the mechanism it uses to do that. Nice! I also don’t know what the tagpending one does, I’ll go look into that some other time.

Configuring a tag2upload webhook is quite simple, add a URL, call the name tag2upload, and select only tag push events:

I run the test webhook, and it returned a code 400 message about a missing ‘message’ header, which the documentation says is normal.

Next, I install git-debpush from experimental.

The wiki page simply states that you can use the git-debpush command to upload, but doesn’t give any examples on how to use it, and its manpage doesn’t either. And when I run just git-debpush I get:

jonathan@lapcloud:~/devel/debian/python-team/bundlewrap/bundlewrap-4.23.1$ git-debpush
git-debpush: check failed: upstream tag upstream/4.22.0 is not an ancestor of refs/heads/debian/master; probably a mistake ('upstream-nonancestor' check)
pristine-tar is /usr/bin/pristine-tar
git-debpush: some check(s) failed; you can pass --force to ignore them

I have no idea what that’s supposed to mean. I was also not sure whether I should tag anything to begin with, or if some part of the tag2upload machinery automatically does it. I think I might have tagged debian/4.23-1 before tagging upstream/4.23 and perhaps it didn’t like it, I reverted and did it the other way around and got a new error message. Progress!

jonathan@lapcloud:~/devel/debian/python-team/bundlewrap/bundlewrap-4.23.1$ git-debpush
git-debpush: could not determine the git branch layout
git-debpush: please supply a --quilt= argument

Looking at the manpage, it looks like --quilt=baredebian matches my package the best, so I try that:

jonathan@lapcloud:~/devel/debian/python-team/bundlewrap/bundlewrap-4.23.1$ git-debpush --quilt=baredebian
Enumerating objects: 70, done.
Counting objects: 100% (70/70), done.
Delta compression using up to 12 threads
Compressing objects: 100% (37/37), done.
Writing objects: 100% (37/37), 8.97 KiB | 2.99 MiB/s, done.
Total 37 (delta 30), reused 0 (delta 0), pack-reused 0 (from 0)
To salsa.debian.org:python-team/packages/bundlewrap.git
6f55d99..3d5498f debian/master -> debian/master

 * [new tag] upstream/4.23.1 -> upstream/4.23.1
 * [new tag] debian/4.23.1-1_exp1 -> debian/4.23.1-1_exp1

Ooh! That looked like it did something! And a minute later I received the notification of the upload in my inbox:

So, I’m not 100% sure that this makes things much easier for me than doing a dput, but, it’s not any more difficult or more work either (once you know how it works), so I’ll be using git-debpush from now on, and I’m sure as I get more used to the git workflow of doing things I’ll understand more of the benefits. And at last, my one last use case for using FTP is now properly dead. RIP FTP :)

19 June, 2025 07:49PM by jonathan

June 18, 2025

Sergio Durigan Junior

GCC, glibc, stack unwinding and relocations – A war story

I’ve been meaning to write a post about this bug for a while, so here it is (before I forget the details!).

First, I’d like to thank a few people:

  • My friend Gabriel F. T. Gomes, who helped with debugging and simply talking about the issue. I love doing some pair debugging, and I noticed that he also had a great time diving into the internals of glibc and libgcc.
  • My teammate Dann Frazier, who always provides invaluable insights and was there to motivate me to push a bit further in order to figure out what was going on.
  • The upstream GCC and glibc developers who finally drove the investigation to completion and came up with an elegant fix.

I’ll probably forget some details because it’s been more than a week (and life at $DAYJOB moves fast), but we’ll see.

The background story

Wolfi OS takes security seriously, and one of the things we have is a package which sets the hardening compiler flags for C/C++ according to the best practices recommended by OpenSSF. At the time of this writing, these flags are (in GCC’s spec file parlance):

*self_spec:
+ %{!O:%{!O1:%{!O2:%{!O3:%{!O0:%{!Os:%{!0fast:%{!0g:%{!0z:-O2}}}}}}}}} -fhardened -Wno-error=hardened -Wno-hardened %{!fdelete-null-pointer-checks:-fno-delete-null-pointer-checks} -fno-strict-overflow -fno-strict-aliasing %{!fomit-frame-pointer:-fno-omit-frame-pointer} -mno-omit-leaf-frame-pointer

*link:
+ --as-needed -O1 --sort-common -z noexecstack -z relro -z now

The important part for our bug is the usage of -z now and -fno-strict-aliasing.

As I was saying, these flags are set for almost every build, but sometimes things don’t work as they should and we need to disable them. Unfortunately, one of these problematic cases has been glibc.

There was an attempt to enable hardening while building glibc, but that introduced a strange breakage to several of our packages and had to be reverted.

Things stayed pretty much the same until a few weeks ago, when I started working on one of my roadmap items: figure out why hardening glibc wasn’t working, and get it to work as much as possible.

Reproducing the bug

I started off by trying to reproduce the problem. It’s important to mention this because I often see young engineers forgetting to check if the problem is even valid anymore. I don’t blame them; the anxiety to get the bug fixed can be really blinding.

Fortunately, I already had one simple test to trigger the failure. All I had to do was install the py3-matplotlib package and then invoke:

$ python3 -c 'import matplotlib'

This would result in an abort with a coredump.

I followed the steps above, and readily saw the problem manifesting again. OK, first step is done; I wasn’t getting out easily from this one.

Initial debug

The next step is to actually try to debug the failure. In an ideal world you get lucky and are able to spot what’s wrong after just a few minutes. Or even better: you also can devise a patch to fix the bug and contribute it to upstream.

I installed GDB, and then ran the py3-matplotlib command inside it. When the abort happened, I issued a backtrace command inside GDB to see where exactly things had gone wrong. I got a stack trace similar to the following:

#0  0x00007c43afe9972c in __pthread_kill_implementation () from /lib/libc.so.6
#1  0x00007c43afe3d8be in raise () from /lib/libc.so.6
#2  0x00007c43afe2531f in abort () from /lib/libc.so.6
#3  0x00007c43af84f79d in uw_init_context_1[cold] () from /usr/lib/libgcc_s.so.1
#4  0x00007c43af86d4d8 in _Unwind_RaiseException () from /usr/lib/libgcc_s.so.1
#5  0x00007c43acac9014 in __cxxabiv1::__cxa_throw (obj=0x5b7d7f52fab0, tinfo=0x7c429b6fd218 <typeinfo for pybind11::attribute_error>, dest=0x7c429b5f7f70 <pybind11::reference_cast_error::~reference_cast_error() [clone .lto_priv.0]>)
    at ../../../../libstdc++-v3/libsupc++/eh_throw.cc:93
#6  0x00007c429b5ec3a7 in ft2font__getattr__(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) [clone .lto_priv.0] [clone .cold] () from /usr/lib/python3.13/site-packages/matplotlib/ft2font.cpython-313-x86_64-linux-gnu.so
#7  0x00007c429b62f086 in pybind11::cpp_function::initialize<pybind11::object (*&)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >), pybind11::object, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, pybind11::name, pybind11::scope, pybind11::sibling>(pybind11::object (*&)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >), pybind11::object (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >), pybind11::name const&, pybind11::scope const&, pybind11::sibling const&)::{lambda(pybind11::detail::function_call&)#1}::_FUN(pybind11::detail::function_call&) [clone .lto_priv.0] ()
   from /usr/lib/python3.13/site-packages/matplotlib/ft2font.cpython-313-x86_64-linux-gnu.so
#8  0x00007c429b603886 in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) () from /usr/lib/python3.13/site-packages/matplotlib/ft2font.cpython-313-x86_64-linux-gnu.so
...

Huh. Initially this didn’t provide me with much information. There was something strange seeing the abort function being called right after _Unwind_RaiseException, but at the time I didn’t pay much attention to it.

OK, time to expand our horizons a little. Remember when I said that several of our packages would crash with a hardened glibc? I decided to look for another problematic package so that I could make it crash and get its stack trace. My thinking here is that maybe if I can compare both traces, something will come up.

I happened to find an old discussion where Dann Frazier mentioned that Emacs was also crashing for him. He and I share the Emacs passion, and I totally agreed with him when he said that “Emacs crashing is priority -1!” (I’m paraphrasing).

I installed Emacs, ran it, and voilà: the crash happened again. OK, that was good. When I ran Emacs inside GDB and asked for a backtrace, here’s what I got:

#0  0x00007eede329972c in __pthread_kill_implementation () from /lib/libc.so.6
#1  0x00007eede323d8be in raise () from /lib/libc.so.6
#2  0x00007eede322531f in abort () from /lib/libc.so.6
#3  0x00007eede262879d in uw_init_context_1[cold] () from /usr/lib/libgcc_s.so.1
#4  0x00007eede2646e7c in _Unwind_Backtrace () from /usr/lib/libgcc_s.so.1
#5  0x00007eede3327b11 in backtrace () from /lib/libc.so.6
#6  0x000059535963a8a1 in emacs_backtrace ()
#7  0x000059535956499a in main ()

Ah, this backtrace is much simpler to follow. Nice.

Hmmm. Now the crash is happening inside _Unwind_Backtrace. A pattern emerges! This must have something to do with stack unwinding (or so I thought… keep reading to discover the whole truth). You see, the backtrace function (yes, it’s a function) and C++’s exception handling mechanism use similar techniques to do their jobs, and it pretty much boils down to unwinding frames from the stack.

I looked into Emacs’ source code, specifically the emacs_backtrace function, but could not find anything strange over there. This bug was probably not going to be an easy fix…

The quest for a minimal reproducer

Being able to easily reproduce the bug is awesome and really helps with debugging, but even better is being able to have a minimal reproducer for the problem.

You see, py3-matplotlib is a huge package and pulls in a bunch of extra dependencies, so it’s not easy to ask other people to “just install this big package plus these other dependencies, and then run this command…”, especially if we have to file an upstream bug and talk to people who may not even run the distribution we’re using. So I set out to try and come up with a smaller recipe to reproduce the issue, ideally something that’s not tied to a specific package from the distribution.

Having all the information gathered from the initial debug session, especially the Emacs backtrace, I thought that I could write a very simple program that just invoked the backtrace function from glibc in order to trigger the code path that leads to _Unwind_Backtrace. Here’s what I wrote:

#include <execinfo.h>

int
main(int argc, char *argv[])
{
  void *a[4096];
  backtrace (a, 100);
  return 0;
}

After compiling it, I determined that yes, the problem did happen with this small program as well. There was only a small nuisance: the manifestation of the bug was not deterministic, so I had to execute the program a few times until it crashed. But that’s much better than what I had before, and a small price to pay. Having a minimal reproducer pretty much allows us to switch our focus to what really matters. I wouldn’t need to dive into Emacs’ or Python’s source code anymore.

At the time, I was sure this was a glibc bug. But then something else happened.

GCC 15

I had to stop my investigation efforts because something more important came up: it was time to upload GCC 15 to Wolfi. I spent a couple of weeks working on this (it involved rebuilding the whole archive, filing hundreds of FTBFS bugs, patching some programs, etc.), and by the end of it the transition went smooth. When the GCC 15 upload was finally done, I switched my focus back to the glibc hardening problem.

The first thing I did was to… yes, reproduce the bug again. It had been a few weeks since I had touched the package, after all. So I built a hardened glibc with the latest GCC and… the bug did not happen anymore!

Fortunately, the very first thing I thought was “this must be GCC”, so I rebuilt the hardened glibc with GCC 14, and the bug was there again. Huh, unexpected but very interesting.

Diving into glibc and libgcc

At this point, I was ready to start some serious debugging. And then I got a message on Signal. It was one of those moments where two minds think alike: Gabriel decided to check how I was doing, and I was thinking about him because this involved glibc, and Gabriel contributed to the project for many years. I explained what I was doing, and he promptly offered to help. Yes, there are more people who love low level debugging!

We spent several hours going through disassemblies of certain functions (because we didn’t have any debug information in the beginning), trying to make sense of what we were seeing. There was some heavy GDB involved; unfortunately I completely lost the session’s history because it was done inside a container running inside an ephemeral VM. But we learned a lot. For example:

  • It was hard to actually understand the full stack trace leading to uw_init_context_1[cold]. _Unwind_Backtrace obviously didn’t call it (it called uw_init_context_1, but what was that [cold] doing?). We had to investigate the disassembly of uw_init_context_1 in order to determine where uw_init_context_1[cold] was being called.

  • The [cold] suffix is a GCC function attribute that can be used to tell the compiler that the function is unlikely to be reached. When I read that, my mind immediately jumped to “this must be an assertion”, so I went to the source code and found the spot.

  • We were able to determine that the return code of uw_frame_state_for was 5, which means _URC_END_OF_STACK. That’s why the assertion was triggering.

After finding these facts without debug information, I decided to bite the bullet and recompiled GCC 14 with -O0 -g3, so that we could debug what uw_frame_state_for was doing. After banging our heads a bit more, we found that fde is NULL at this excerpt:

// ...
  fde = _Unwind_Find_FDE (context->ra + _Unwind_IsSignalFrame (context) - 1,
                          &context->bases);
  if (fde == NULL)
    {
#ifdef MD_FALLBACK_FRAME_STATE_FOR
      /* Couldn't find frame unwind info for this function.  Try a
         target-specific fallback mechanism.  This will necessarily
         not provide a personality routine or LSDA.  */
      return MD_FALLBACK_FRAME_STATE_FOR (context, fs);
#else
      return _URC_END_OF_STACK;
#endif
    }
// ...

We’re debugging on amd64, which means that MD_FALLBACK_FRAME_STATE_FOR is defined and therefore is called. But that’s not really important for our case here, because we had established before that _Unwind_Find_FDE would never return NULL when using a non-hardened glibc (or a glibc compiled with GCC 15). So we decided to look into what _Unwind_Find_FDE did.

The function is complex because it deals with .eh_frame, but we were able to pinpoint the exact location where find_fde_tail (one of the functions called by _Unwind_Find_FDE) is returning NULL:

if (pc < table[0].initial_loc + data_base)
  return NULL;

We looked at the addresses of pc and table[0].initial_loc + data_base, and found that the former fell within libgcc's text section, while the latter fell within /lib/ld-linux-x86-64.so.2's text.

At this point, we were already too tired to continue. I decided to keep looking at the problem later and see if I could get any further.

Bisecting GCC

The next day, I woke up determined to find what changed in GCC 15 that caused the bug to disappear. Unless you know GCC’s internals like they are your own home (which I definitely don’t), the best way to do that is to git bisect the commits between GCC 14 and 15.

I spent a few days running the bisect. It took me more time than I’d have liked to find the right range of commits to pass git bisect (because of how branches and tags are done in GCC’s repository), and I also had to write some helper scripts that:

  • Modified the gcc.yaml package definition to make it build with the commit being bisected.
  • Built glibc using the GCC that was just built.
  • Ran tests inside a docker container (with the recently built glibc installed) to determine whether the bug was present.

At the end, I had a commit to point to:

commit 99b1daae18c095d6c94d32efb77442838e11cbfb
Author: Richard Biener <rguenther@suse.de>
Date:   Fri May 3 14:04:41 2024 +0200

    tree-optimization/114589 - remove profile based sink heuristics

Makes sense, right?! No? Well, it didn’t for me either. Even after reading what was changed in the code and the upstream bug fixed by the commit, I was still clueless as to why this change “fixed” the problem (I say “fixed” because it may very well be an unintended consequence of the change, and some other problem might have been introduced).

Upstream takes over

After obtaining the commit that possibly fixed the bug, while talking to Dann and explaining what I did, he suggested that I should file an upstream bug and check with them. Great idea, of course.

I filed the following upstream bug:

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=120653

It’s a bit long, very dense and complex, but ultimately upstream was able to find the real problem and have a patch accepted in just two days. Nothing like knowing the code base. The initial bug became:

https://sourceware.org/bugzilla/show_bug.cgi?id=33088

In the end, the problem was indeed in how the linker defines __ehdr_start, which glibc uses as follows (from elf/dl-support.c):

if (_dl_phdr == NULL)
  {
    /* Starting from binutils-2.23, the linker will define the
       magic symbol __ehdr_start to point to our own ELF header
       if it is visible in a segment that also includes the phdrs.
       So we can set up _dl_phdr and _dl_phnum even without any
       information from auxv.  */

    extern const ElfW(Ehdr) __ehdr_start attribute_hidden;
    assert (__ehdr_start.e_phentsize == sizeof *GL(dl_phdr));
    _dl_phdr = (const void *) &__ehdr_start + __ehdr_start.e_phoff;
    _dl_phnum = __ehdr_start.e_phnum;
  }

But the following definition is the problematic one (from elf/rtld.c):

extern const ElfW(Ehdr) __ehdr_start attribute_hidden;

This symbol (along with its counterpart, __ehdr_end) was being run-time relocated when it shouldn’t be. The fix that was pushed added optimization barriers to prevent the compiler from doing the relocations.

I don’t claim to fully understand what was done here, and Jakub’s analysis is a thing to behold, but in the end I was able to confirm that the patch fixed the bug. And in the end, it was indeed a glibc bug.

Conclusion

This was an awesome bug to investigate. It’s one of those that deserve a blog post, even though some of the final details of the fix flew over my head.

I’d like to start blogging more about this sort of bug, because I’ve encountered my fair share of them throughout my career. And it was great being able to do some debugging with another person, exchange ideas, learn things together, and ultimately share that deep satisfaction when we figure out why a crash is happening.

I have at least one more bug on my TODO list to write about (another one with glibc, but this time I was able to get to the end of it and come up with a patch). Stay tuned.

P.S.: After having published the post I realized that I forgot to explain why the -z now and -fno-strict-aliasing flags were important.

-z now is the flag that I determined to be the root cause of the breakage. If I compiled glibc with every hardening flag except -z now, everything worked. So initially I thought that the problem had to do with how ld.so was resolving symbols at runtime. As it turns out, this ended up being more a symptom than the real cause of the bug.

As for -fno-strict-aliasing, a Gentoo developer who commented on the GCC bug above mentioned that this OpenSSF bug had a good point against using this flag for hardening. I still have to do a deep dive on what was discussed in the issue, but this is certainly something to take into consideration. There’s this very good write-up about strict aliasing in general if you’re interested in understanding it better.

18 June, 2025 03:29AM

June 17, 2025

Evgeni Golov

Arguing with an AI or how Evgeni tried to use CodeRabbit

Everybody is trying out AI assistants these days, so I figured I'd jump on that train and see how fast it derails.

I went with CodeRabbit because I've seen it on YouTube — ads work, I guess.

I am trying to answer the following questions:

  • Did the AI find things that humans did not find (or didn't bother to mention)?
  • Did the AI output help the humans with the review (useful summary etc)?
  • Did the AI output help the humans with the code (useful suggestions etc)?
  • Was the AI output misleading?
  • Was the AI output distracting?

To reduce the amount of output and avoid confusing contributors, CodeRabbit was configured to only do reviews on demand.

What follows is a rather unscientific evaluation of CodeRabbit based on PRs in two Foreman-related repositories, looking at the summaries CodeRabbit posted as well as the comments/suggestions it had about the code.

Ansible 2.19 support

PR: theforeman/foreman-ansible-modules#1848

summary posted

The summary CodeRabbit posted is technically correct.

This update introduces several changes across CI configuration, Ansible roles, plugins, and test playbooks. It expands CI test coverage to a new Ansible version, adjusts YAML key types in test variables, refines conditional logic in Ansible tasks, adds new default variables, and improves clarity and consistency in playbook task definitions and debug output.

Yeah, it does all of that, all right. But it kinda misses the point that the addition here is "Ansible 2.19 support", which starts with adding it to the CI matrix and then adjusting the code to actually work with that version. Also, the changes are not for "clarity" or "consistency"; they fix bugs in the code that older Ansible versions accepted but the new one is stricter about.

Then it adds a table with the changed files and what changed in there. To me, as the author, it felt redundant and IMHO doesn't add any clarity to the understanding of the changes. (And yes, the same "clarity" vs bugfix mistake here, but that makes sense as it apparently misidentified the reason for the change.)

And then the sequence diagrams… They probably help if you have a dedicated change to a library or a library consumer, but for this PR it's just noise, especially as it only covers two of the changes (addition of 2.19 to the test matrix and a change to the inventory plugin), completely ignoring other important parts.

Overall verdict: noise, don't need this.

comments posted

CodeRabbit also posted 4 comments/suggestions to the changes.

Guard against undefined result.task

IMHO a valid suggestion, even if on the picky side, as I am not sure how to make it undefined here. I ended up implementing it, albeit with slightly different (and IMHO more readable) syntax.

  • Valid complaint? Probably.
  • Useful suggestion? So-So.
  • Wasted time? No.

Inconsistent pipeline in when for composite CV versions

That one was funny! The original complaint was that the when condition manipulated the data slightly differently than the task that ran when the condition was true. The code was supposed to "clean up the data, but only if there are any items left after removing the first 5, as we always want to keep 5 items".

And I do agree with the analysis that it's badly maintainable code. But the suggested fix was to re-use the data in the variable we later use for performing the cleanup. While this is (to my surprise!) valid Ansible syntax, it didn't make the code much more readable as you need to go and look at the variable definition.

The better suggestion then came from Ewoud: to compare the length of the data with the number we want to keep. Humans, so smart!

But Ansible is not Ewoud's native turf, so he asked whether there is a more elegant way to count how much data we have than to use | list | count in Jinja (the data comes from a Python generator, so needs to be converted to a list first).

And the AI helpfully suggested to use | count instead!

However, count is just an alias for length in Jinja, so it behaves identically and needs a list.
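
A tiny standalone check with the jinja2 Python package (nothing to do with the PR itself) confirms this:

from jinja2 import Environment

env = Environment()

# "count" and "length" are registered as the very same filter (both are just len),
# so neither works on a bare generator without converting it to a list first.
print(env.filters["count"] is env.filters["length"])  # True

template = env.from_string("{{ items | list | count }}")
print(template.render(items=(i for i in range(7))))   # 7 -- dropping "| list" raises a TypeError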

Luckily the AI quickly apologized for being wrong after being pointed at the Jinja source and didn't try to waste my time any further. Had I not known about the count alias, we'd have committed that suggestion and let CI fail before reverting it again.

  • Valid complaint? Yes.
  • Useful suggestion? Nope.
  • Wasted time? Yes.

Apply the same fix for non-composite CV versions

The very same complaint was posted a few lines later, as the logic there is very similar — just slightly different data to be filtered and cleaned up.

Interestingly, here the suggestion also was to use the variable. But there is no variable with the data!

The text actually says one needs to "define" it, yet the "committable suggestion" doesn't contain that part.

Interestingly, when asked where it sees the "inconsistency" in that hunk, it said the inconsistency is with the composite case above. That however is nonsense, as while we want to keep the same number of composite and non-composite CV versions, the data used in the task is different — it even gets consumed by a totally different playbook — so there can't be any real consistency between the branches.

  • Valid complaint? Yes (the expression really could use some cleanup).
  • Useful suggestion? Nope.
  • Wasted time? Yes.

I ended up applying the same logic as suggested by Ewoud above, as that refactoring was possible in a consistent way.

Ensure consistent naming for Oracle Linux subscription defaults

One of the changes in Ansible 2.19 is that Ansible fails when there are undefined variables, even if they are only undefined for cases where they are unused.

CodeRabbit complains that the names of the defaults I added are inconsistent. And that is technically correct. But those names are already used in other places in the code, so I'd have to refactor more to make it work properly.

Once pointed at the fact that the variables already exist, the AI was, as usual, quick to apologize, yay.

  • Valid complaint? Technically, yes.
  • Useful suggestion? Nope.
  • Wasted time? Yes.

add new parameters to the repository module

PR: theforeman/foreman-ansible-modules#1860

summary posted

Again, the summary is technically correct

The repository module was updated to support additional parameters for repository synchronization and authentication. New options were added for ansible collections, ostree, Python packages, and yum repositories, including authentication tokens, filtering controls, and version retention settings. All changes were limited to module documentation and argument specification.

But it doesn't add anything you'd not get from looking at the diff, especially as it contains a large documentation chunk explaining those parameters.

No sequence diagram this time. That's a good thing!

Overall verdict: noise (even if the amount is small), don't need this.

comments posted

CodeRabbit generated two comments for this PR.

Interestingly, none of them overlapped with the issues ansible-lint and friends found.

get rid of the FIXMEs

Yepp, that's fair

  • Valid complaint? Yes.
  • Useful suggestion? Nope. (But it's not possible in this case!)
  • Wasted time? No.

add validation for the new parameters

Yepp, I forgot these (not intentionally!).

The diff it suggests is nonsense, as it doesn't take into account the existing Ansible and Yum validations, but it clearly has read them, as the style etc. of the new ones matches. It also managed to group the parameters correctly by repository type, so it's something.

 if module.foreman_params['content_type'] != 'ansible_collection':
     invalid_list = [key for key in ['ansible_collection_requirements'] if key in module.foreman_params]
     if invalid_list:
         module.fail_json(msg="({0}) can only be used with content_type 'ansible_collection'".format(",".join(invalid_list)))
+
+# Validate ansible_collection specific parameters
+if module.foreman_params['content_type'] != 'ansible_collection':
+    invalid_list = [key for key in ['ansible_collection_auth_token', 'ansible_collection_auth_url'] if key in module.foreman_params]
+    if invalid_list:
+        module.fail_json(msg="({0}) can only be used with content_type 'ansible_collection'".format(",".join(invalid_list)))
+
+# Validate ostree specific parameters
+if module.foreman_params['content_type'] != 'ostree':
+    invalid_list = [key for key in ['depth', 'exclude_refs', 'include_refs'] if key in module.foreman_params]
+    if invalid_list:
+        module.fail_json(msg="({0}) can only be used with content_type 'ostree'".format(",".join(invalid_list)))
+
+# Validate python package specific parameters
+if module.foreman_params['content_type'] != 'python':
+    invalid_list = [key for key in ['excludes', 'includes', 'package_types', 'keep_latest_packages'] if key in module.foreman_params]
+    if invalid_list:
+        module.fail_json(msg="({0}) can only be used with content_type 'python'".format(",".join(invalid_list)))
+
+# Validate yum specific parameter
+if module.foreman_params['content_type'] != 'yum' and 'upstream_authentication_token' in module.foreman_params:
+    module.fail_json(msg="upstream_authentication_token can only be used with content_type 'yum'")

Interestingly, it also said "Note: If 'python' is not a valid content_type, please adjust the validation accordingly.", which is quite a hint at a bug in itself. The module currently does not even allow creating content_type=python repositories. That should have been more prominent, as it's a BUG!

  • Valid complaint? Yes.
  • Useful suggestion? Mostly (I only had to merge the Yum and Ansible branches with the existing code).
  • Wasted time? Nope.

parameter persistence in obsah

PR: theforeman/obsah#72

summary posted

Mostly correct.

It did misinterpret the change to a test playbook as an actual "behavior" change: "Introduced new playbook variables for database configuration" — there is no database configuration in this repository, just the test playbook using the same metadata as a consumer of the library. Later on it does say "Playbook metadata and test fixtures", so… unclear whether this is a misinterpretation or just badly summarized. As long as you also look at the diff, it won't confuse you, but if you're using the summary as the sole source of information (bad!) it would.

This time the sequence diagram is actually useful, yay. Again, not 100% accurate: it's missing the fact that saving the parameters is hidden behind an "if enabled" flag — something it did represent correctly for loading them.

Overall verdict: not really useful, don't need this.

comments posted

Here I was a bit surprised, especially as the nitpicks were useful!

Persist-path should respect per-user state locations (nitpick)

My original code used os.environ.get('OBSAH_PERSIST_PATH', '/var/lib/obsah/parameters.yaml') for the location of the persistence file. CodeRabbit correctly pointed out that this won't work for non-root users and one should respect XDG_STATE_HOME.
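
To make the point concrete, here is a minimal sketch of what an XDG-aware default could look like; the helper name and the root special-casing are my own illustration, not the obsah code and not what CodeRabbit suggested:

import os
from pathlib import Path

def default_persist_path() -> Path:
    # Hypothetical helper: an explicit override wins, root keeps the static
    # system path, everyone else ends up under $XDG_STATE_HOME (or ~/.local/state).
    override = os.environ.get("OBSAH_PERSIST_PATH")
    if override:
        return Path(override)
    if os.geteuid() == 0:
        return Path("/var/lib/obsah/parameters.yaml")
    state_home = Path(os.environ.get("XDG_STATE_HOME", Path.home() / ".local" / "state"))
    return state_home / "obsah" / "parameters.yaml"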

Ewoud did point that out in his own review, so I am not sure whether CodeRabbit came up with this on its own, or also took the human comments into account.

The suggested code seems fine too — just doesn't use /var/lib/obsah at all anymore. This might be a good idea for the generic library we're working on here, and then be overridden to a static /var/lib path in a consumer (which always runs as root).

In the end I did not implement it, but mostly because I was lazy and was sure we'd override it anyway.

  • Valid complaint? Yes.
  • Useful suggestion? Yes.
  • Wasted time? Nope.

Positional parameters are silently excluded from persistence (nitpick)

The library allows you to generate both positional (foo without --) and non-positional (--foo) parameters, but the code I wrote would only ever persist non-positional parameters. This was intentional, but there is no documentation of the intent in a comment — which the rabbit thought would be worth pointing out.

It's a fair nitpick and I ended up adding a comment.

  • Valid complaint? Yes.
  • Useful suggestion? Yes.
  • Wasted time? Nope.

Enforce FQDN validation for database_host

The library has a way to perform type checking on passed parameters, and one of the supported types is "FQDN" — so a fully qualified domain name, with dots and stuff. The test playbook I added has a database_host variable, but I didn't bother adding a type to it, as I don't really need any type checking here.

While using "FQDN" might be a bit too strict here (technically a working database connection can also use a non-qualified name or an IP address), I was positively surprised by this suggestion. It shows that the rest of the repository was taken into context when preparing the suggestion.

  • Valid complaint? In the context of a test, no. Would that be a real command definition, yes.
  • Useful suggestion? Yes.
  • Wasted time? Nope.

reset_args() can raise AttributeError when a key is absent

This is a correct finding: the code is not written in a way that would survive trying to reset things that were never set. However, that's only true for the case where users pass in --reset-<parameter> without ever having set the parameter before. The complaint about the part where the parameter is part of the persisted set but not in the parsed args is wrong — as parsed args inherit from the persisted set.

The suggested code is not well readable, so I ended up fixing it slightly differently.

  • Valid complaint? Mostly.
  • Useful suggestion? Meh.
  • Wasted time? A bit.

Persisted values bypass argparse type validation

When persisting, I just yaml.safe_dump the parsed parameters, which means the YAML will contain native types like integers.

The argparse documentation warns that the type conversion argparse does is only applied to string values, so it is skipped for non-string defaults.
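
A quick standalone illustration of that argparse behaviour (unrelated to the obsah code itself):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--retries", type=int, default="3")  # string default: type() is applied, becomes int 3
parser.add_argument("--timeout", type=int, default=2.5)  # non-string default: passed through untouched

args = parser.parse_args([])
print(type(args.retries).__name__, type(args.timeout).__name__)  # int float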

While correct, it doesn't really hurt here as the persisting only happens after the values were type-checked. So there is not really a reason to type-check them again. Well, unless the type changes, anyway.

Not sure what I'll do with this comment.

  • Valid complaint? Nah.
  • Useful suggestion? Nope.
  • Wasted time? Not much.

consider using contextlib.suppress

This was added when I asked CodeRabbit for a re-review after pushing some changes. Interestingly, the PR already contained try: … except: pass code before, and it did not flag that.

Also, the code suggestion contained import contextlib in the middle of the code, instead of at the top of the file. Who would do that?!

But the comment as such was valid, so I fixed it in all places it is applicable, not only the one the rabbit found.
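
For reference, the pattern itself is trivial; this is just an illustration of the two spellings with a made-up file name, not the actual obsah code:

import contextlib
import os

# The old spelling:
try:
    os.remove("/tmp/parameters.yaml")
except FileNotFoundError:
    pass

# The same intent, spelled with contextlib.suppress:
with contextlib.suppress(FileNotFoundError):
    os.remove("/tmp/parameters.yaml")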

  • Valid complaint? Yes.
  • Useful suggestion? Nope.
  • Wasted time? Nope.

workaround to ensure LCE and CV are always sent together

PR: theforeman/foreman-ansible-modules#1867

summary posted

A workaround was added to the _update_entity method in the ForemanAnsibleModule class to ensure that when updating a host, both content_view_id and lifecycle_environment_id are always included together in the update payload. This prevents partial updates that could cause inconsistencies.

Partial updates are not a thing.

The workaround is purely for the fact that Katello expects both parameters to be sent, even if only one of them needs an actual update.
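
As a rough sketch of what such a workaround amounts to (my own illustration of the idea, not the actual foreman-ansible-modules code): if one of the two IDs is part of the update payload, copy the other one over from the current entity so Katello always sees both.

def ensure_cv_and_lce_sent_together(desired_entity, current_entity):
    # Hypothetical helper: Katello wants content_view_id and
    # lifecycle_environment_id together, even if only one of them changed.
    keys = ("content_view_id", "lifecycle_environment_id")
    if any(key in desired_entity for key in keys):
        for key in keys:
            desired_entity.setdefault(key, current_entity.get(key))
    return desired_entity

print(ensure_cv_and_lce_sent_together(
    {"content_view_id": 3},
    {"content_view_id": 2, "lifecycle_environment_id": 5},
))  # {'content_view_id': 3, 'lifecycle_environment_id': 5}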

No diagram, good.

Overall verdict: misleading summaries are bad!

comments posted

Given a small patch, there was only one comment.

Implementation looks correct, but consider adding error handling for robustness.

This reads correct at first glance. More error handling is always better, right?

But if you dig into the argumentation, you see it's wrong. Either:

  • we're working with a Katello setup and the host we're updating has content, so CV and LCE will be present
  • we're working with a Katello setup and the host has no content (yet), so CV and LCE will be "updated" and we're not running into the workaround
  • we're working with a plain Foreman, then both parameters are not even accepted by Ansible

The AI accepted defeat once I asked it to analyze things in more detail, but why did I have to ask in the first place?!

  • Valid complaint? Nope.
  • Useful suggestion? Nope.
  • Wasted time? Yes, as I actually tried to come up with a case where it can happen.

Summary

Well, idk, really.

Did the AI find things that humans did not find (or didn't bother to mention)?

Yes. It's debatable whether these were useful (see e.g. the database_host example), but I tend to be in the "better to nitpick/suggest more and dismiss than overlook" team, so IMHO a positive win.

Did the AI output help the humans with the review (useful summary etc)?

In my opinion it did not. The summaries were either "lots of words, no real value" or plain wrong. The sequence diagrams were not useful either.

Luckily all of that can be turned off in the settings, which is what I'd do if I were to continue using it.

Did the AI output help the humans with the code (useful suggestions etc)?

While the actual patches it posted were "meh" at best, there were useful findings that resulted in improvements to the code.

Was the AI output misleading?

Absolutely! The whole Jinja discussion would have been easier without the AI "help". The same applies to the "error handling" in the workaround PR.

Was the AI output distracting?

The output is certainly a lot, so yes I think it can be distracting. As mentioned, I think dropping the summaries can make the experience less distracting.

What does all that mean?

I will disable the summaries for the repositories, but will leave the @coderabbitai review trigger active if someone wants an AI-assisted review. This won't be something that I'll force on our contributors and maintainers, but they surely can use it if they want.

But I don't think I'll be using this myself on a regular basis.

Yes, it can be made "usable". But so can vim ;-)

Also, I'd prefer to have a junior human asking all the questions and making bad suggestions, so they can learn from it, and not some planet-burning machine.

17 June, 2025 03:19PM by evgeni