Debian is a trademark of Software in the Public Interest, Inc. This site is operated independently in the spirit of point three of the Debian Social Contract, which tells us: "We will not hide problems."

December 23, 2025

Jonathan Dowland

Remarkable

My Remarkable tablet, displaying my 2025 planner.

During my PhD, on a sunny summer’s day, I copied some papers to read onto an iPad and cycled down to an outdoor cafe next to the beach. Armed with a coffee and an ice cream, I sat and enjoyed the warmth. The only problem was that, due to the bright sunlight, I couldn’t see a damn thing.

In 2021 I decided to take the plunge and buy the Remarkable 2, which was being heavily advertised at the time. Over the next four or so years, I made good use of it: reading papers; reading drafts of my own papers and chapters; reading a small number of technical books; using it as a daily planner; and taking meeting notes for work, my PhD and, later, personal matters.

I didn’t buy the Remarkable stylus or folio cover, instead opting for a (at the time, slightly cheaper) LAMY AL-star EMR, and a fantastic fabric sleeve cover from Emmerson Gray.

I installed a hack which let me use the Lamy’s button to activate an eraser, and which also added a bunch of other tweaks. I wouldn’t recommend that specific hack any more, as there are safer alternatives (personally untested, but e.g. https://github.com/isaacwisdom/RemarkableLamyEraser)

Pros: the writing experience is unparalleled. Excellent. I enjoy writing with fountain pens on good paper, but that experience comes with inky fingers, dried-up nibs, and a growing pile of paper notebooks. The Remarkable is very nearly as good without those drawbacks.

Cons: lower contrast than black on white paper, and no built-in illumination. It needs good light to read. Almost the opposite problem to the iPad! I’ve tried a limited number of external clip-on lights, but nothing is frictionless to use.

The traditional two-column, wide-margin formatting for academic papers is a bad fit for the Remarkable’s size (just as it is for computer display sizes. Really, is it a good fit for anything people use any more?). You can pinch to zoom, which is OK, or pre-process papers (with e.g. Briss) to reframe them to be more suitable, but that’s laborious.

The newer model, the Remarkable Paper Pro, might address both of those issues: it’s bigger and has illumination, and it has also added colour, which would be a nice-to-have. It’s also a lot more expensive.

I had considered selling on the tablet after I finished my PhD. My current plan, inspired to some extent by my former colleague Aleksey Shipilëv, who makes great use of his, is to have a go at using it more often, to see if it continues to provide value for me: more noodling out thoughts for work tasks, more drawings (e.g. plans for 3D models) and more reading of tech books.

23 December, 2025 10:58AM

Daniel Kahn Gillmor

AI and Secure Messaging Don't Mix

Over on the ACLU's Free Future blog, I just published an article titled AI and Secure Messaging Don't Mix.

The blogpost assumes, for the sake of argument, that people might actually want to have an AI involved in their personal conversations, and explores why Meta's Private Processing doesn't offer the level of assurance that they want it to offer.

In short, the promises of "confidential cloud computing" are built on shaky foundations, especially against adversaries as powerful as Meta themselves.

If you really want AI in your chat, the baseline step for privacy preservation is to include it in your local compute base, not to use a network service! But these operators clearly don't value private communication as much as they value binding you to their services.

But let's imagine some secure messenger that actually does put message confidentiality first -- and imagine they had integrated some sort of AI capability into the messenger. That at least bypasses the privacy questions around AI use.

Would you really want to talk with your friends, as augmented by their local AI, though? Would you want an AI, even one running locally with perfect privacy, intervening in your social connections?

What if it summarized your friend's messages to you in a way that led you to misunderstand (or ignore) an important point your friend had made? What if it encouraged you to make an edgy joke that comes across wrong? Or to say something that seriously upsets a friend? How would you respond? How would you even know that it had happened?

My handle is dkg. More times than i can count, i've had someone address me in a chat as "dog" and then cringe and apologize and blame their spellchecker/autocorrect. I can laugh these off because the failure mode is so obvious and transparent -- and repeatable. (also, dogs are awesome, so i don't really mind!)

But when our attention (and our responses!) are being shaped and molded by these plausibility engines, how will we even know that mistakes are being made? What if the plausibility engine you've hooked into your messenger embeds subtle (or unsubtle!) bias?

Don't we owe it to each other to engage with actual human attention?

23 December, 2025 05:00AM by Daniel Kahn Gillmor

December 22, 2025

Jonathan McDowell

NanoKVM: I like it

I bought a NanoKVM. I’d heard some of the stories about how terrible it was beforehand, and some I didn’t learn about until afterwards, but at £52, including VAT + P&P, that seemed like an excellent bargain for something I was planning to use in my home network environment.

Let’s cover the bad press first. apalrd did a video, entitled NanoKVM: The S stands for Security (Armen Barsegyan has a write up recommending a PiKVM instead that lists the objections raised in the video). Matej Kovačič wrote an article about the hidden microphone on a Chinese NanoKVM. Various other places have picked up both of these and still seem to be running with them, 10 months later.

Next, let me explain where I’m coming from here. I have over two decades of experience with terrible out-of-band access devices. I still wince when I think of the Sun Opteron servers that shipped with an iLOM that needed a 32-bit Windows browser in order to access it (IIRC some 32-bit binary JNI blob). It was a 64-bit x86 server from a company who, at the time, still had a major non-Windows OS. Sheesh. I do not assume these devices are fit for exposure to the public internet, even if they come from “reputable” vendors. Add to that the fact that the NanoKVM is very much based on a development board (the LicheeRV Nano), and I felt I knew what I was getting into here.

And, as a TL;DR, I am perfectly happy with my purchase. Sipeed have actually dealt with a bunch of apalrd’s concerns (GitHub ticket), which I consider to be an impressive level of support for this price point. Equally the microphone is explained by the fact this is a £52 device based on a development board. You’re giving it USB + HDMI access to a host on your network, if you’re worried about the microphone then you’re concentrating on the wrong bit here.

I started out by hooking the NanoKVM up to my Raspberry Pi classic, which I use as a serial console / network boot tool for working on random bits of hardware. That meant the NanoKVM had no access to the outside world (the Pi is not configured to route, or NAT, for the test network interface), and I could observe what went on. As it happens you can do an SSH port forward of port 80 with this sort of setup and it all works fine - no need for the NanoKVM to have any external access, and it copes happily with being accessed as http://localhost:8000/ (though you do need to choose MJPEG as the video mode, more forwarding or enabling HTTPS is needed for an H.264 WebRTC session).
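
Roughly speaking, the forward looks something like this (the addresses here are made up):

ssh -L 8000:10.0.0.2:80 pi@raspberrypi    # 10.0.0.2 being the NanoKVM on the Pi's test network

and then point a browser at http://localhost:8000/, selecting MJPEG as the video mode.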

IPv6 is enabled in the kernel. My test setup doesn’t have router advertisements configured, but I could connect to the web application over the v6 link-local address that came up automatically.
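
Connecting to a link-local address means appending the zone (interface); with curl that looks something like this (the address is made up):

curl -g 'http://[fe80::5212:34ff:fe56:789a%25eth0]/'    # %25 is the encoded % before the zone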

My device reports:

Image version:              v1.4.1
Application version:        2.2.9

That’s recent, but the GitHub releases page has 2.3.0 listed as more recent.

Out of the box it’s listening on TCP port 80. SSH is not running, but there’s a toggle to turn it on, and the web interface offers a web-based shell (with no extra authentication over the normal login). On first use I was asked to set a username + password. Default access, as you’d expect from port 80, is HTTP, but there’s a toggle to enable HTTPS. It generates a self-signed certificate - for me it had the CN localhost, but that might have been due to my use of port forwarding. Enabling HTTPS does not disable HTTP, but HTTP just redirects to the HTTPS URL.

As others have discussed it does a bunch of DNS lookups, primarily for NTP servers but also for cdn.sipeed.com. The DNS servers are hard coded:

~ # cat /etc/resolv.conf
nameserver 192.168.0.1
nameserver 8.8.4.4
nameserver 8.8.8.8
nameserver 114.114.114.114
nameserver 119.29.29.29
nameserver 223.5.5.5

This is actually restored on boot from /boot/resolv.conf, so if you want changes to persist you can just edit that file. NTP is configured with a standard set of pool.ntp.org servers in /etc/ntp.conf (this does not get restored on reboot, so can just be edited in place). I had dnsmasq on the Pi set up to hand out DNS + NTP servers, but both were ignored (though actually udhcpc does write the DNS details to /etc/resolv.conf.dhcp).
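
So, for example, pointing it at a local resolver and making that stick is just this, run on the NanoKVM itself (resolver address made up):

echo 'nameserver 10.0.0.1' > /boot/resolv.conf    # restored to /etc/resolv.conf at boot
cp /boot/resolv.conf /etc/resolv.conf             # apply now, without a reboot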

My assumption is the lookup to cdn.sipeed.com is for firmware updates (as I bought the NanoKVM cube it came fully installed, so no need for a .so download to make things work); when working DNS was provided I witnessed attempts to connect over HTTPS. I’ve not bothered digging further into this. I did go grab the latest.zip being served from the URL, which turned out to be v2.2.9, matching what I have installed, not the latest on GitHub.

I note there’s an iptables setup (with nftables underneath) that’s not fully realised - it seems to be trying to allow inbound HTTP + WebRTC, as well as outbound SSH, but everything is default accept so none of it gets hit. Setting up a default deny outbound policy and tweaking a little should provide a bit more reassurance that it’s not going to try and connect out somewhere it shouldn’t.
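
Something along these lines would be a starting point (a sketch I’ve not tested on the device; adjust for whatever outbound traffic you actually want to allow):

iptables -P OUTPUT DROP
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p udp --dport 53 -j ACCEPT     # DNS
iptables -A OUTPUT -p udp --dport 123 -j ACCEPT    # NTP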

It looks like updates focus solely on the KVM application, so I wanted to take a look at the underlying OS. This is buildroot based:

~ # cat /etc/os-release
NAME=Buildroot
VERSION=-g98d17d2c0-dirty
ID=buildroot
VERSION_ID=2023.11.2
PRETTY_NAME="Buildroot 2023.11.2"

The kernel reports itself as 5.10.4-tag-. Somewhat ancient, but actually an LTS kernel. Except we’re now up to 5.10.247, so it obviously hasn’t been updated in some time.

TBH, this is what I expect (and fear) from embedded devices. They end up with some ancient base OS revision and a kernel with a bunch of hacks that mean it’s not easily updated. I get that the margins on this stuff are tiny, but I do wish folk would spend more time upstreaming. Or at least updating to the latest LTS point release for their kernel.

The SSH client/daemon is full-fat OpenSSH:

~ # sshd -V
OpenSSH_9.6p1, OpenSSL 3.1.4 24 Oct 2023

There are a number of CVEs fixed in later OpenSSL 3.1 versions, though at present nothing that looks too concerning from the server side. Yes, the image has tcpdump + aircrack installed. I’m a little surprised at aircrack (the device has no WiFi and even though I know there’s a variant that does, it’s not a standard debug tool the way tcpdump is), but there’s a copy of GNU Chess in there too, so it’s obvious this is just a kitchen-sink image. FWIW it looks like the buildroot config is here.

Sadly the UART that I believe the bootloader/kernel are talking to is not exposed externally - the UART pin headers are for UART1 + 2, and I’d have to open up the device to get to UART0. I’ve not yet done this (but doing so would also allow access to the SD card, which would make trying to compile + test my own kernel easier).

In terms of actual functionality it did what I’d expect. 1080p HDMI capture was fine. I’d have gone for a lower resolution, but I think that would have required tweaking on the client side. It looks like the 2.3.0 release allows EDID tweaking, so I might have to investigate that. The keyboard defaults to a US layout, which caused some problems with the | symbol until I reconfigured the target machine not to expect a GB layout.

There’s also the potential to share out images via USB. I copied a Debian trixie netinst image to /data on the NanoKVM and was able to select it in the web interface and have it appear on the target machine easily. There’s also the option to fetch direct from a URL in the web interface, but I was still testing without routable network access, so didn’t try that. There’s plenty of room for images:

~ # df -h
Filesystem                Size      Used Available Use% Mounted on
/dev/mmcblk0p2            7.6G    823.3M      6.4G  11% /
devtmpfs                 77.7M         0     77.7M   0% /dev
tmpfs                    79.0M         0     79.0M   0% /dev/shm
tmpfs                    79.0M     30.2M     48.8M  38% /tmp
tmpfs                    79.0M    124.0K     78.9M   0% /run
/dev/mmcblk0p1           16.0M     11.5M      4.5M  72% /boot
/dev/mmcblk0p3           22.2G    160.0K     22.2G   0% /data
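
Copying an image across is just a file transfer into /data; something like this (hostname and filename made up):

scp debian-trixie-netinst.iso root@nanokvm:/data/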

The NanoKVM also appears as an RNDIS USB network device, with udhcpd running on the interface. IP forwarding is not enabled, and there are no masquerading rules set up, so this doesn’t give the target host access to the “management” LAN by default. I guess it could be useful for copying things over to the target host, as a more flexible approach than a virtual disk image.
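
If you did want to give the target access that way, it would be along these lines on the NanoKVM (untested by me; the interface name is assumed):

echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE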

One thing to note is this makes for a bunch of devices over the composite USB interface. There are 3 HID devices (keyboard, absolute mouse, relative mouse), the RNDIS interface, and the USB mass storage. I had a few occasions where the keyboard input got stuck after I’d been playing about with big data copies over the network and using the USB mass storage emulation. There is a HID-only mode (no network/mass storage) to try and help with this, and a restart of the NanoKVM generally brought things back, but something to watch out for. Again I see that the 2.3.0 application update mentions resetting the USB hardware on a HID reset, which might well help.

As I stated at the start, I’m happy with this purchase. Would I leave it exposed to the internet without suitable firewalling? No, but then I wouldn’t do so for any KVM. I wanted a lightweight KVM suitable for use in my home network, something unlikely to see heavy use but that would save me hooking up an actual monitor + keyboard when things were misbehaving. So far everything I’ve seen says I’ve got my money’s worth from it.

22 December, 2025 05:38PM

Russell Coker

Samsung 65″ QN900C 8K TV

As a follow up from my last post about my 8K TV [1] I tested out a Samsung 65″ QN900C Neo QLED 8K that’s on sale in JB Hifi. According to the JB employee I spoke to they are running out the last 8K TVs and have no plans to get more.

In my testing of that 8K TV, YouTube had a 3840*2160 viewport, which is better than the 1920*1080 of my Hisense TV. When running a web browser, the codeshack page reported it as 1920*1080 with a 1.25* pixel density (presumably a configuration option), which gave a usable resolution of 1536*749.

The JB Hifi employee wouldn’t let me connect my own device via HDMI, but said that it would work at 8K. I said “so if I buy it I can return it if it doesn’t do 8K HDMI?” and then he looked up the specs and found that it would only do 4K input over HDMI. It seems that actual 8K resolution might work with a Samsung streaming device, but that’s not very useful, particularly as there probably isn’t much 8K content on any streaming service.

Basically, that allegedly-8K Samsung TV only works at 4K at best.

It seems to be impossible to buy an 8K TV or monitor in Australia that will actually display 8K content. ASUS has a 6K 32″ monitor with 6016*3384 resolution for $2016 [2]. Accounting for inflation, $2016 wouldn’t be the most expensive monitor I’ve ever bought, and hopefully prices will continue to drop.

Rumour has it that there are 8K TVs available in China that actually take 8K input. Getting one to Australia might not be easy but it’s something that I will investigate.

Also I’m trying to sell my allegedly 8K TV.

22 December, 2025 07:52AM by etbe

François Marier

LXC setup on Debian forky

Similar to what I wrote for Ubuntu 18.04, here is how to set up an LXC container on Debian forky.

Installing the required packages

Start by installing the necessary packages on the host:

apt install lxc libvirt-clients debootstrap

Network setup

Ensure the veth kernel module is loaded by adding the following to /etc/modules-load.d/lxc-local.conf:

veth

and then loading it manually for now:

modprobe veth

Enable IPv4 forwarding by putting this in /etc/sysctl.d/lxc-local.conf:

net.ipv4.ip_forward=1

and applying it:

sysctl -p /etc/sysctl.d/lxc-local.conf

Restart the LXC network bridge:

systemctl restart lxc-net.service

Ensure that container traffic is not blocked by the host firewall, for example by adding the following to /etc/network/iptables.up.rules:

-A FORWARD -d 10.0.3.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 10.0.3.0/24 -j ACCEPT
-A INPUT -d 224.0.0.251 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 239.255.255.250 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 10.0.3.255 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 10.0.3.1 -s 10.0.3.0/24 -j ACCEPT

and applying the rules:

iptables-apply

Creating a container

To see all available images, run:

lxc-create -n foo --template=download -- --list

and then create a Debian forky container using:

lxc-create -n forky -t download -- -d debian -r forky -a amd64

Start and stop the container like this:

lxc-start -n forky
lxc-stop -n forky

Connecting to the container

Attach to the running container's console:

lxc-attach -n forky

Inside the container, you can change the root password by typing:

passwd

and install some essential packages:

apt install openssh-server vim

To find the container's IP address (for example, so that you can ssh to it from the host):

lxc-ls --fancy
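
Once you have the address, the ssh step is then just this (the address here is only an example; use whatever lxc-ls reports):

ssh root@10.0.3.123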

22 December, 2025 02:47AM

C.J. Adams-Collier

I’m learning about perlguts today.


(screenshot: im-learning-about-perlguts-today.png)


## 0.23	2025-12-20

commit be15aa25dea40aea66a8534143fb81b29d2e6c08
Author: C.J. Collier 
Date:   Sat Dec 20 22:40:44 2025 +0000

    Fixes C-level test infrastructure and adds more test cases for upb_to_sv conversions.
    
    - **Makefile.PL:**
        - Allow `extra_src` in `c_test_config.json` to be an array.
        - Add ASan flags to CCFLAGS and LDDLFLAGS for better debugging.
        - Corrected echo newlines in `test_c` target.
    - **c_test_config.json:**
        - Added missing type test files to `deps` and `extra_src` for `convert/sv_to_upb` and `convert/upb_to_sv` test runners.
    - **t/c/convert/upb_to_sv.c:**
        - Fixed a double free of `test_pool`.
        - Added missing includes for type test headers.
        - Updated test plan counts.
    - **t/c/convert/sv_to_upb.c:**
        - Added missing includes for type test headers.
        - Updated test plan counts.
        - Corrected Perl interpreter initialization.
    - **t/c/convert/types/**:
        - Added missing `test_util.h` include in new type test headers.
        - Completed the set of `upb_to_sv` test cases for all scalar types by adding optional and repeated tests for `sfixed32`, `sfixed64`, `sint32`, and `sint64`, and adding repeated tests to the remaining scalar type files.
    - **Documentation:**
        - Updated `01-xs-testing.md` with more debugging tips, including ASan usage and checking for double frees and typos.
        - Updated `xs_learnings.md` with details from the recent segfault.
        - Updated `llm-plan-execution-instructions.md` to emphasize debugging steps.


## 0.22	2025-12-19

commit 2c171d9a5027e0150eae629729c9104e7f6b9d2b
Author: C.J. Collier 
Date:   Fri Dec 19 23:41:02 2025 +0000

    feat(perl,testing): Initialize C test framework and build system
    
    This commit sets up the foundation for the C-level tests and the build system for the Perl Protobuf module:
    
    1.  **Makefile.PL Enhancements:**
        *   Integrates `Devel::PPPort` to generate `ppport.h` for better portability.
        *   Object files now retain their path structure (e.g., `xs/convert/sv_to_upb.o`) instead of being flattened, improving build clarity.
        *   The `MY::postamble` is significantly revamped to dynamically generate build rules for all C tests located in `t/c/` based on the `t/c/c_test_config.json` file.
        *   C tests are linked against `libprotobuf_common.a` and use `ExtUtils::Embed` flags.
        *   Added `JSON::MaybeXS` to `PREREQ_PM`.
        *   The `test` target now also depends on the `test_c` target.
    
    2.  **C Test Infrastructure (`t/c/`):
        *   Introduced `t/c/c_test_config.json` to configure individual C test builds, specifying dependencies and extra source files.
        *   Created `t/c/convert/test_util.c` and `.h` for shared test functions like loading descriptors.
        *   Initial `t/c/convert/upb_to_sv.c` and `t/c/convert/sv_to_upb.c` test runners.
        *   Basic `t/c/integration/030_protobuf_coro.c` for Coro safety testing on core utils using `libcoro`.
        *   Basic `t/c/integration/035_croak_test.c` for testing exception handling.
        *   Basic `t/c/integration/050_convert.c` for integration testing conversions.
    
    3.  **Test Proto:** Updated `t/data/test.proto` with more field types for conversion testing and regenerated `test_descriptor.bin`.
    
    4.  **XS Test Harness (`t/c/upb-perl-test.h`):** Added `like_n` macro for length-aware regex matching.
    
    5.  **Documentation:** Updated architecture and plan documents to reflect the C test structure.
    6.  **ERRSV Testing:** Note that the C tests (`t/c/`) will primarily check *if* a `croak` occurs (i.e., that the exception path is taken), but will not assert on the string content of `ERRSV`. Reliably testing `$@` content requires the full Perl test environment with `Test::More`, which will be done in the `.t` files when testing the Perl API.
    
    This provides a solid base for developing and testing the XS and C components of the module.


## 0.21	2025-12-18

commit a8b6b6100b2cf29c6df1358adddb291537d979bc
Author: C.J. Collier 
Date:   Thu Dec 18 04:20:47 2025 +0000

    test(C): Add integration tests for Milestone 2 components
    
    - Created t/c/integration/030_protobuf.c to test interactions
      between obj_cache, arena, and utils.
    - Added this test to t/c/c_test_config.json.
    - Verified that all C tests for Milestones 2 and 3 pass,
      including the libcoro-based stress test.


## 0.20	2025-12-18

commit 0fcad68680b1f700a83972a7c1c48bf3a6958695
Author: C.J. Collier 
Date:   Thu Dec 18 04:14:04 2025 +0000

    docs(plan): Add guideline review reminders to milestones
    
    - Added a "[ ] REFRESH: Review all documents in @perl/doc/guidelines/**"
      checklist item to the start of each component implementation
      milestone (C and Perl layers).
    - This excludes Integration Test milestones.


## 0.19	2025-12-18

commit 987126c4b09fcdf06967a98fa3adb63d7de59a34
Author: C.J. Collier 
Date:   Thu Dec 18 04:05:53 2025 +0000

    docs(plan): Add C-level and Perl-level Coro tests to milestones
    
    - Added checklist items for `libcoro`-based C tests
      (e.g., `t/c/integration/050_convert_coro.c`) to all C layer
      integration milestones (050 through 220).
    - Updated `030_Integration_Protobuf.md` to standardise checklist
      items for the existing `030_protobuf_coro.c` test.
    - Removed the single `xt/author/coro-safe.t` item from
      `010_Build.md`.
    - Added checklist items for Perl-level `Coro` tests
      (e.g., `xt/coro/240_arena.t`) to each Perl layer
      integration milestone (240 through 400).
    - Created `perl/t/c/c_test_config.json` to manage C test
      configurations externally.
    - Updated `perl/doc/architecture/testing/01-xs-testing.md` to describe
      both C-level `libcoro` and Perl-level `Coro` testing strategies.


## 0.18	2025-12-18

commit 6095a5a610401a6035a81429d0ccb9884d53687b
Author: C.J. Collier 
Date:   Thu Dec 18 02:34:31 2025 +0000

    added coro testing to c layer milestones


## 0.17	2025-12-18

commit cc0aae78b1f7f675fc8a1e99aa876c0764ea1cce
Author: C.J. Collier 
Date:   Thu Dec 18 02:26:59 2025 +0000

    docs(plan): Refine test coverage checklist items for SMARTness
    
    - Updated the "Tests provide full coverage" checklist items in
      C layer plan files (020, 040, 060, 080, 100, 120, 140, 160, 180, 200)
      to explicitly mention testing all public functions in the
      corresponding header files.
    - Expanded placeholder checklists in 140, 160, 180, 200.
    - Updated the "Tests provide full coverage" and "Add coverage checks"
      checklist items in Perl layer plan files (230, 250, 270, 290, 310, 330,
      350, 370, 390) to be more specific about the scope of testing
      and the use of `Test::TestCoverage`.
    - Expanded Well-Known Types milestone (350) to detail each type.


## 0.16	2025-12-18

commit e4b601f14e3817a17b0f4a38698d981dd4cb2818
Author: C.J. Collier 
Date:   Thu Dec 18 02:07:35 2025 +0000

    docs(plan): Full refactoring of C and Perl plan files
    
    - Split both ProtobufPlan-C.md and ProtobufPlan-Perl.md into
      per-milestone files under the `perl/doc/plan/` directory.
    - Introduced Integration Test milestones after each component
      milestone in both C and Perl plans.
    - Numbered milestone files sequentially (e.g., 010_Build.md,
      230_Perl_Arena.md).
    - Updated main ProtobufPlan-C.md and ProtobufPlan-Perl.md to
      act as Tables of Contents.
    - Ensured consistent naming for integration test files
      (e.g., `t/c/integration/030_protobuf.c`, `t/integration/260_descriptor_pool.t`).
    - Added architecture review steps to the end of all milestones.
    - Moved Coro safety test to C layer Milestone 1.
    - Updated Makefile.PL to support new test structure and added Coro.
    - Moved and split t/c/convert.c into t/c/convert/*.c.
    - Moved other t/c/*.c tests into t/c/protobuf/*.c.
    - Deleted old t/c/convert.c.


## 0.15	2025-12-17

commit 649cbacf03abb5e7293e3038bb451c0406e9d0ce
Author: C.J. Collier 
Date:   Wed Dec 17 23:51:22 2025 +0000

    docs(plan): Refactor and reset ProtobufPlan.md
    
    - Split the plan into ProtobufPlan-C.md and ProtobufPlan-Perl.md.
    - Reorganized milestones to clearly separate C layer and Perl layer development.
    - Added more granular checkboxes for each component:
      - C Layer: Create test, Test coverage, Implement, Tests pass.
      - Perl Layer: Create test, Test coverage, Implement Module/XS, Tests pass, C-Layer adjustments.
    - Reset all checkboxes to `[ ]` to prepare for a full audit.
    - Updated status in architecture/api and architecture/core documents to "Not Started".
    
    feat(obj_cache): Add unregister function and enhance tests
    
    - Added `protobuf_unregister_object` to `xs/protobuf/obj_cache.c`.
    - Updated `xs/protobuf/obj_cache.h` with the new function declaration.
    - Expanded tests in `t/c/protobuf_obj_cache.c` to cover unregistering,
      overwriting keys, and unregistering non-existent keys.
    - Corrected the test plan count in `t/c/protobuf_obj_cache.c` to 17.


## 0.14	2025-12-17

commit 40b6ad14ca32cf16958d490bb575962f88d868a1
Author: C.J. Collier 
Date:   Wed Dec 17 23:18:27 2025 +0000

    feat(arena): Complete C layer for Arena wrapper
    
    This commit finalizes the C-level implementation for the Protobuf::Arena wrapper.
    
    - Adds `PerlUpb_Arena_Destroy` for proper cleanup from Perl's DEMOLISH.
    - Enhances error checking in `PerlUpb_Arena_Get`.
    - Expands C-level tests in `t/c/protobuf_arena.c` to cover memory allocation
      on the arena and lifecycle through `PerlUpb_Arena_Destroy`.
    - Corrects embedded Perl initialization in the C test.
    
    docs(plan): Refactor ProtobufPlan.md
    
    - Restructures the development plan to clearly separate "C Layer" and
      "Perl Layer" tasks within each milestone.
    - This aligns the plan with the "C-First Implementation Strategy" and improves progress tracking.


## 0.13	2025-12-17

commit c1e566c25f62d0ae9f195a6df43b895682652c71
Author: C.J. Collier 
Date:   Wed Dec 17 22:00:40 2025 +0000

    refactor(perl): Rename C tests and enhance Makefile.PL
    
    - Renamed test files in `t/c/` to better match the `xs` module structure:
        - `01-cache.c` -> `protobuf_obj_cache.c`
        - `02-arena.c` -> `protobuf_arena.c`
        - `03-utils.c` -> `protobuf_utils.c`
        - `04-convert.c` -> `convert.c`
        - `load_test.c` -> `upb_descriptor_load.c`
    - Updated `perl/Makefile.PL` to reflect the new test names in `MY::postamble`'s `$c_test_config`.
    - Refactored the `$c_test_config` generation in `Makefile.PL` to reduce repetition by using a default flags hash and common dependencies array.
    - Added a `fail()` macro to `perl/t/c/upb-perl-test.h` for consistency.
    - Modified `t/c/upb_descriptor_load.c` to use the `t/c/upb-perl-test.h` macros, making its output consistent with other C tests.
    - Added a skeleton for `t/c/convert.c` to test the conversion functions.
    - Updated documentation in `ProtobufPlan.md` and `architecture/testing/01-xs-testing.md` to reflect new test names.


## 0.12	2025-12-17

commit d8cb5dd415c6c129e71cd452f78e29de398a82c9
Author: C.J. Collier 
Date:   Wed Dec 17 20:47:38 2025 +0000

    feat(perl): Refactor XS code into subdirectories
    
    This commit reorganizes the C code in the `perl/xs/` directory into subdirectories, mirroring the structure of the Python UPB extension. This enhances modularity and maintainability.
    
    - Created subdirectories for each major component: `convert`, `descriptor`, `descriptor_containers`, `descriptor_pool`, `extension_dict`, `map`, `message`, `protobuf`, `repeated`, and `unknown_fields`.
    - Created skeleton `.h` and `.c` files within each subdirectory to house the component-specific logic.
    - Updated top-level component headers (e.g., `perl/xs/descriptor.h`) to include the new sub-headers.
    - Updated top-level component source files (e.g., `perl/xs/descriptor.c`) to include their main header and added stub initialization functions (e.g., `PerlUpb_InitDescriptor`).
    - Moved code from the original `perl/xs/protobuf.c` to new files in `perl/xs/protobuf/` (arena, obj_cache, utils).
    - Moved code from the original `perl/xs/convert.c` to new files in `perl/xs/convert/` (upb_to_sv, sv_to_upb).
    - Updated `perl/Makefile.PL` to use a glob (`xs/*/*.c`) to find the new C source files in the subdirectories.
    - Added `perl/doc/architecture/core/07-xs-file-organization.md` to document the new structure.
    - Updated `perl/doc/ProtobufPlan.md` and other architecture documents to reference the new organization.
    - Corrected self-referential includes in the newly created .c files.
    
    This restructuring provides a solid foundation for further development and makes it easier to port logic from the Python implementation.


## 0.11	2025-12-17

commit cdedcd13ded4511b0464f5d3bdd72ce6d34e73fc
Author: C.J. Collier 
Date:   Wed Dec 17 19:57:52 2025 +0000

    feat(perl): Implement C-first testing and core XS infrastructure
    
    This commit introduces a significant refactoring of the Perl XS extension, adopting a C-first development approach to ensure a robust foundation.
    
    Key changes include:
    
    -   **C-Level Testing Framework:** Established a C-level testing system in `t/c/` with a dedicated Makefile, using an embedded Perl interpreter. Initial tests cover the object cache (`01-cache.c`), arena wrapper (`02-arena.c`), and utility functions (`03-utils.c`).
    -   **Core XS Infrastructure:**
        -   Implemented a global object cache (`xs/protobuf.c`) to manage Perl wrappers for UPB objects, using weak references.
        -   Created an `upb_Arena` wrapper (`xs/protobuf.c`).
        -   Consolidated common XS helper functions into `xs/protobuf.h` and `xs/protobuf.c`.
    -   **Makefile.PL Enhancements:** Updated to support building and linking C tests, incorporating flags from `ExtUtils::Embed`, and handling both `.c` and `.cc` source files.
    -   **XS File Reorganization:** Restructured XS files to mirror the Python UPB extension's layout (e.g., `message.c`, `descriptor.c`). Removed older, monolithic `.xs` files.
    -   **Typemap Expansion:** Added extensive typemap entries in `perl/typemap` to handle conversions between Perl objects and various `const upb_*Def*` pointers.
    -   **Descriptor Tests:** Added a new test suite `t/02-descriptor.t` to validate descriptor loading and accessor methods.
    -   **Documentation:** Updated development plans and guidelines (`ProtobufPlan.md`, `xs_learnings.md`, etc.) to reflect the C-first strategy, new testing methods, and lessons learned.
    -   **Build Cleanup:** Removed `ppport.h` from `.gitignore` as it's no longer used, due to `-DPERL_NO_PPPORT` being set in `Makefile.PL`.
    
    This C-first approach allows for more isolated and reliable testing of the core logic interacting with the UPB library before higher-level Perl APIs are built upon it.


## 0.10	2025-12-17

commit 1ef20ade24603573905cb0376670945f1ab5d829
Author: C.J. Collier 
Date:   Wed Dec 17 07:08:29 2025 +0000

    feat(perl): Implement C-level tests and core XS utils
    
    This commit introduces a C-level testing framework for the XS layer and implements key components:
    
    1.  **C-Level Tests (`t/c/`)**:
        *   Added `t/c/Makefile` to build standalone C tests.
        *   Created `t/c/upb-perl-test.h` with macros for TAP-compliant C tests (`plan`, `ok`, `is`, `is_string`, `diag`).
        *   Implemented `t/c/01-cache.c` to test the object cache.
        *   Implemented `t/c/02-arena.c` to test `Protobuf::Arena` wrappers.
        *   Implemented `t/c/03-utils.c` to test string utility functions.
        *   Corrected include paths and diagnostic messages in C tests.
    
    2.  **XS Object Cache (`xs/protobuf.c`)**:
        *   Switched to using stringified pointers (`%p`) as hash keys for stability.
        *   Fixed a critical double-free bug in `PerlUpb_ObjCache_Delete` by removing an extra `SvREFCNT_dec` on the lookup key.
    
    3.  **XS Arena Wrapper (`xs/protobuf.c`)**:
        *   Corrected `PerlUpb_Arena_New` to use `newSVrv` and `PTR2IV` for opaque object wrapping.
        *   Corrected `PerlUpb_Arena_Get` to safely unwrap the arena pointer.
    
    4.  **Makefile.PL (`perl/Makefile.PL`)**:
        *   Added `-Ixs` to `INC` to allow C tests to find `t/c/upb-perl-test.h` and `xs/protobuf.h`.
        *   Added `LIBS` to link `libprotobuf_common.a` into the main `Protobuf.so`.
        *   Added C test targets `01-cache`, `02-arena`, `03-utils` to the test config in `MY::postamble`.
    
    5.  **Protobuf.pm (`perl/lib/Protobuf.pm`)**:
        *   Added `use XSLoader;` to load the compiled XS code.
    
    6.  **New files `xs/util.h`**:
        *   Added initial type conversion function.
    
    These changes establish a foundation for testing the C-level interface with UPB and fix crucial bugs in the object cache implementation.


## 0.09	2025-12-17

commit 07d61652b032b32790ca2d3848243f9d75ea98f4
Author: C.J. Collier 
Date:   Wed Dec 17 04:53:34 2025 +0000

    feat(perl): Build system and C cache test for Perl XS
    
    This commit introduces the foundational pieces for the Perl XS implementation, focusing on the build system and a C-level test for the object cache.
    
    -   **Makefile.PL:**
        -   Refactored C test compilation rules in `MY::postamble` to use a hash (`$c_test_config`) for better organization and test-specific flags.
        -   Integrated `ExtUtils::Embed` to provide necessary compiler and linker flags for embedding the Perl interpreter, specifically for the `t/c/01-cache.c` test.
        -   Correctly constructs the path to the versioned Perl library (`libperl.so.X.Y.Z`) using `$Config{archlib}` and `$Config{libperl}` to ensure portability.
        -   Removed `VERSION_FROM` and `ABSTRACT_FROM` to avoid dependency on `.pm` files for now.
    
    -   **C Cache Test (t/c/01-cache.c):**
        -   Added a C test to exercise the object cache functions implemented in `xs/protobuf.c`.
        -   Includes tests for adding, getting, deleting, and weak reference behavior.
    
    -   **XS Cache Implementation (xs/protobuf.c, xs/protobuf.h):**
        -   Implemented `PerlUpb_ObjCache_Init`, `PerlUpb_ObjCache_Add`, `PerlUpb_ObjCache_Get`, `PerlUpb_ObjCache_Delete`, and `PerlUpb_ObjCache_Destroy`.
        -   Uses a Perl hash (`HV*`) for the cache.
        -   Keys are string representations of the C pointers, created using `snprintf` with `"%llx"`.
        -   Values are weak references (`sv_rvweaken`) to the Perl objects (`SV*`).
        -   `PerlUpb_ObjCache_Get` now correctly returns an incremented reference to the original SV, not a copy.
        -   `PerlUpb_ObjCache_Destroy` now clears the hash before decrementing its refcount.
    
    -   **t/c/upb-perl-test.h:**
        -   Updated `is_sv` to perform direct pointer comparison (`got == expected`).
    
    -   **Minor:** Added `util.h` (currently empty), updated `typemap`.
    
    These changes establish a working C-level test environment for the XS components.


## 0.08	2025-12-17

commit d131fd22ea3ed8158acb9b0b1fe6efd856dc380e
Author: C.J. Collier 
Date:   Wed Dec 17 02:57:48 2025 +0000

    feat(perl): Update docs and core XS files
    
    - Explicitly add TDD cycle to ProtobufPlan.md.
    - Clarify mirroring of Python implementation in upb-interfacing.md for both C and Perl layers.
    - Branch and adapt python/protobuf.h and python/protobuf.c to perl/xs/protobuf.h and perl/xs/protobuf.c, including the object cache implementation. Removed old cache.* files.
    - Create initial C test for the object cache in t/c/01-cache.c.


## 0.07	2025-12-17

commit 56fd6862732c423736a2f9a9fb1a2816fc59e9b0
Author: C.J. Collier 
Date:   Wed Dec 17 01:09:18 2025 +0000

    feat(perl): Align Perl UPB architecture docs with Python
    
    Updates the Perl Protobuf architecture documents to more closely align with the design and implementation strategies used in the Python UPB extension.
    
    Key changes:
    
    -   **Object Caching:** Mandates a global, per-interpreter cache using weak references for all UPB-derived objects, mirroring Python's `PyUpb_ObjCache`.
    -   **Descriptor Containers:** Introduces a new document outlining the plan to use generic XS container types (Sequence, ByNameMap, ByNumberMap) with vtables to handle collections of descriptors, similar to Python's `descriptor_containers.c`.
    -   **Testing:** Adds a note to the testing strategy to port relevant test cases from the Python implementation to ensure feature parity.


## 0.06	2025-12-17

commit 6009ce6ab64eccce5c48729128e5adf3ef98e9ae
Author: C.J. Collier 
Date:   Wed Dec 17 00:28:20 2025 +0000

    feat(perl): Implement object caching and fix build
    
    This commit introduces several key improvements to the Perl XS build system and core functionality:
    
    1.  **Object Caching:**
        *   Introduces `xs/protobuf.c` and `xs/protobuf.h` to implement a caching mechanism (`protobuf_c_to_perl_obj`) for wrapping UPB C pointers into Perl objects. This uses a hash and weak references to ensure object identity and prevent memory leaks.
        *   Updates the `typemap` to use `protobuf_c_to_perl_obj` for `upb_MessageDef *` output, ensuring descriptor objects are cached.
        *   Corrected `sv_weaken` to the correct `sv_rvweaken` function.
    
    2.  **Makefile.PL Enhancements:**
        *   Switched to using the Bazel-generated UPB descriptor sources from `bazel-bin/src/google/protobuf/_virtual_imports/descriptor_proto/google/protobuf/`.
        *   Updated `INC` paths to correctly locate the generated headers.
        *   Refactored `MY::dynamic_lib` to ensure the static library `libprotobuf_common.a` is correctly linked into each generated `.so` module, resolving undefined symbol errors.
        *   Overrode `MY::test` to use `prove -b -j$(nproc) t/*.t xt/*.t` for running tests.
        *   Cleaned up `LIBS` and `LDDLFLAGS` usage.
    
    3.  **Documentation:**
        *   Updated `ProtobufPlan.md` to reflect the current status and design decisions.
        *   Reorganized architecture documents into subdirectories.
        *   Added `object-caching.md` and `c-perl-interface.md`.
        *   Updated `llm-guidance.md` with notes on `upb/upb.h` and `sv_rvweaken`.
    
    4.  **Testing:**
        *   Fixed `xt/03-moo_immutable.t` to skip tests if no Moo modules are found.
    
    This resolves the build issues and makes the core test suite pass.


## 0.05	2025-12-16

commit 177d2f3b2608b9d9c415994e076a77d8560423b8
Author: C.J. Collier 
Date:   Tue Dec 16 19:51:36 2025 +0000

    Refactor: Rename namespace to Protobuf, build system and doc updates
    
    This commit refactors the primary namespace from `ProtoBuf` to `Protobuf`
    to align with the style guide. This involves renaming files, directories,
    and updating package names within all Perl and XS files.
    
    **Namespace Changes:**
    
    *   Renamed `perl/lib/ProtoBuf` to `perl/lib/Protobuf`.
    *   Moved and updated `ProtoBuf.pm` to `Protobuf.pm`.
    *   Moved and updated `ProtoBuf::Descriptor` to `Protobuf::Descriptor` (.pm & .xs).
    *   Removed other `ProtoBuf::*` stubs (Arena, DescriptorPool, Message).
    *   Updated `MODULE` and `PACKAGE` in `Descriptor.xs`.
    *   Updated `NAME`, `*_FROM` in `perl/Makefile.PL`.
    *   Replaced `ProtoBuf` with `Protobuf` throughout `perl/typemap`.
    *   Updated namespaces in test files `t/01-load-protobuf-descriptor.t` and `t/02-descriptor.t`.
    *   Updated namespaces in all documentation files under `perl/doc/`.
    *   Updated paths in `perl/.gitignore`.
    
    **Build System Enhancements (Makefile.PL):**
    
    *   Included `xs/*.c` files in the common object files list.
    *   Added `-I.` to the `INC` paths.
    *   Switched from `MYEXTLIB` to `LIBS => ['-L$(CURDIR) -lprotobuf_common']` for linking.
    *   Removed custom keys passed to `WriteMakefile` for postamble.
    *   `MY::postamble` now sources variables directly from the main script scope.
    *   Added `all :: ${common_lib}` dependency in `MY::postamble`.
    *   Added `t/c/load_test.c` compilation rule in `MY::postamble`.
    *   Updated `clean` target to include `blib`.
    *   Added more modules to `TEST_REQUIRES`.
    *   Removed the explicit `PM` and `XS` keys from `WriteMakefile`, relying on `XSMULTI => 1`.
    
    **New Files:**
    
    *   `perl/lib/Protobuf.pm`
    *   `perl/lib/Protobuf/Descriptor.pm`
    *   `perl/lib/Protobuf/Descriptor.xs`
    *   `perl/t/01-load-protobuf-descriptor.t`
    *   `perl/t/02-descriptor.t`
    *   `perl/t/c/load_test.c`: Standalone C test for UPB.
    *   `perl/xs/types.c` & `perl/xs/types.h`: For Perl/C type conversions.
    *   `perl/doc/architecture/upb-interfacing.md`
    *   `perl/xt/03-moo_immutable.t`: Test for Moo immutability.
    
    **Deletions:**
    
    *   Old test files: `t/00_load.t`, `t/01_basic.t`, `t/02_serialize.t`, `t/03_message.t`, `t/04_descriptor_pool.t`, `t/05_arena.t`, `t/05_message.t`.
    *   Removed `lib/ProtoBuf.xs` as it's not needed with `XSMULTI`.
    
    **Other:**
    
    *   Updated `test_descriptor.bin` (binary change).
    *   Significant content updates to markdown documentation files in `perl/doc/architecture` and `perl/doc/internal` reflecting the new architecture and learnings.


## 0.04	2025-12-14

commit 92de5d482c8deb9af228f4b5ce31715d3664d6ee
Author: C.J. Collier 
Date:   Sun Dec 14 21:28:19 2025 +0000

    feat(perl): Implement Message object creation and fix lifecycles
    
    This commit introduces the basic structure for `ProtoBuf::Message` object
    creation, linking it with `ProtoBuf::Descriptor` and `ProtoBuf::DescriptorPool`,
    and crucially resolves a SEGV by fixing object lifecycle management.
    
    Key Changes:
    
    1.  **`ProtoBuf::Descriptor`:** Added `_pool` attribute to hold a strong
        reference to the parent `ProtoBuf::DescriptorPool`. This is essential to
        prevent the pool and its C `upb_DefPool` from being garbage collected
        while a descriptor is still in use.
    
    2.  **`ProtoBuf::DescriptorPool`:**
        *   `find_message_by_name`: Now passes the `$self` (the pool object) to the
            `ProtoBuf::Descriptor` constructor to establish the lifecycle link.
        *   XSUB `pb_dp_find_message_by_name`: Updated to accept the pool `SV*` and
            store it in the descriptor's `_pool` attribute.
        *   XSUB `_load_serialized_descriptor_set`: Renamed to avoid clashing with the
            Perl method name. The Perl wrapper now correctly calls this internal XSUB.
        *   `DEMOLISH`: Made safer by checking for attribute existence.
    
    3.  **`ProtoBuf::Message`:**
        *   Implemented using Moo with lazy builders for `_upb_arena` and
            `_upb_message`.
        *   `_descriptor` is a required argument to `new()`.
        *   XS functions added for creating the arena (`pb_msg_create_arena`) and
            the `upb_Message` (`pb_msg_create_upb_message`).
        *   `pb_msg_create_upb_message` now extracts the `upb_MessageDef*` from the
            descriptor and uses `upb_MessageDef_MiniTable()` to get the minitable
            for `upb_Message_New()`.
        *   `DEMOLISH`: Added to free the message's arena.
    
    4.  **`Makefile.PL`:**
        *   Added `-g` to `CCFLAGS` for debugging symbols.
        *   Added Perl CORE include path to `MY::postamble`'s `base_flags`.
    
    5.  **Tests:**
        *   `t/04_descriptor_pool.t`: Updated to check the structure of the
            returned `ProtoBuf::Descriptor`.
        *   `t/05_message.t`: Now uses a descriptor obtained from a real pool to
            test `ProtoBuf::Message->new()`.
    
    6.  **Documentation:**
        *   Updated `ProtobufPlan.md` to reflect progress.
        *   Updated several files in `doc/architecture/` to match the current
            implementation details, especially regarding arena management and object
            lifecycles.
        *   Added `doc/internal/development_cycle.md` and `doc/internal/xs_learnings.md`.
    
    With these changes, the SEGV is resolved, and message objects can be successfully
    created from descriptors.


## 0.03	2025-12-14

commit 6537ad23e93680c2385e1b571d84ed8dbe2f68e8
Author: C.J. Collier 
Date:   Sun Dec 14 20:23:41 2025 +0000

    Refactor(perl): Object-Oriented DescriptorPool with Moo
    
    This commit refactors the `ProtoBuf::DescriptorPool` to be fully object-oriented using Moo, and resolves several issues related to XS, typemaps, and test data.
    
    Key Changes:
    
    1.  **Moo Object:** `ProtoBuf::DescriptorPool.pm` now uses `Moo` to define the class. The `upb_DefPool` pointer is stored as a lazy attribute `_upb_defpool`.
    2.  **XS Lifecycle:** `DescriptorPool.xs` now has `pb_dp_create_pool` called by the Moo builder and `pb_dp_free_pool` called from `DEMOLISH` to manage the `upb_DefPool` lifecycle per object.
    3.  **Typemap:** The `perl/typemap` file has been significantly updated to handle the conversion between the `ProtoBuf::DescriptorPool` Perl object and the `upb_DefPool *` C pointer. This includes:
        *   Mapping `upb_DefPool *` to `T_PTR`.
        *   An `INPUT` section for `ProtoBuf::DescriptorPool` to extract the pointer from the object's hash, triggering the lazy builder if needed via `call_method`.
        *   An `OUTPUT` section for `upb_DefPool *` to convert the pointer back to a Perl integer, used by the builder.
    4.  **Method Renaming:** `add_file_descriptor_set_binary` is now `load_serialized_descriptor_set`.
    5.  **Test Data:**
        *   Added `perl/t/data/test.proto` with a sample message and enum.
        *   Generated `perl/t/data/test_descriptor.bin` using `protoc`.
        *   Removed `t/data/` from `.gitignore` to ensure test data is versioned.
    6.  **Test Update:** `t/04_descriptor_pool.t` is updated to use the new OO interface, load the generated descriptor set, and check for message definitions.
    7.  **Build Fixes:**
        *   Corrected `#include` paths in `DescriptorPool.xs` to be relative to the `upb/` directory (e.g., `upb/wire/decode.h`).
        *   Added `-I../upb` to `CCFLAGS` in `Makefile.PL`.
        *   Reordered `INC` paths in `Makefile.PL` to prioritize local headers.
    
    **Note:** While tests now pass in some environments, a SEGV issue persists in `make test` runs, indicating a potential memory or lifecycle issue within the XS layer that needs further investigation.


## 0.02	2025-12-14

commit 6c9a6f1a5f774dae176beff02219f504ea3a6e07
Author: C.J. Collier 
Date:   Sun Dec 14 20:13:09 2025 +0000

    Fix(perl): Correct UPB build integration and generated file handling
    
    This commit resolves several issues to achieve a successful build of the Perl extension:
    
    1.  **Use Bazel Generated Files:** Switched from compiling UPB's stage0 descriptor.upb.c to using the Bazel-generated `descriptor.upb.c` and `descriptor.upb_minitable.c` located in `bazel-bin/src/google/protobuf/_virtual_imports/descriptor_proto/google/protobuf/`.
    2.  **Updated Include Paths:** Added the `bazel-bin` path to `INC` in `WriteMakefile` and to `base_flags` in `MY::postamble` to ensure the generated headers are found during both XS and static library compilation.
    3.  **Removed Stage0:** Removed references to `UPB_STAGE0_DIR` and no longer include headers or source files from `upb/reflection/stage0/`.
    4.  **-fPIC:** Explicitly added `-fPIC` to `CCFLAGS` in `WriteMakefile` and ensured `$(CCFLAGS)` is used in the custom compilation rules in `MY::postamble`. This guarantees all object files in the static library are compiled with position-independent code, resolving linker errors when creating the shared objects for the XS modules.
    5.  **Refined UPB Sources:** Used `File::Find` to recursively find UPB C sources, excluding `/conformance/` and `/reflection/stage0/` to avoid conflicts and unnecessary compilations.
    6.  **Arena Constructor:** Modified `ProtoBuf::Arena::pb_arena_new` XSUB to accept the class name argument passed from Perl, making it a proper constructor.
    7.  **.gitignore:** Added patterns to `perl/.gitignore` to ignore generated C files from XS (`lib/*.c`, `lib/ProtoBuf/*.c`), the copied `src_google_protobuf_descriptor.pb.cc`, and the `t/data` directory.
    8.  **Build Documentation:** Updated `perl/doc/architecture/upb-build-integration.md` to reflect the new build process, including the Bazel prerequisite, include paths, `-fPIC` usage, and `File::Find`.
    
    Build Steps:
    1.  `bazel build //src/google/protobuf:descriptor_upb_proto` (from repo root)
    2.  `cd perl`
    3.  `perl Makefile.PL`
    4.  `make`
    5.  `make test` (Currently has expected failures due to missing test data implementation).


## 0.01	2025-12-14

commit 3e237e8a26442558c94075766e0d4456daaeb71d
Author: C.J. Collier 
Date:   Sun Dec 14 19:34:28 2025 +0000

    feat(perl): Initialize Perl extension scaffold and build system
    
    This commit introduces the `perl/` directory, laying the groundwork for the Perl Protocol Buffers extension. It includes the essential build files, linters, formatter configurations, and a vendored Devel::PPPort for XS portability.
    
    Key components added:
    
    *   **`Makefile.PL`**: The core `ExtUtils::MakeMaker` build script. It's configured to:
        *   Build a static library (`libprotobuf_common.a`) from UPB, UTF8_Range, and generated protobuf C/C++ sources.
        *   Utilize `XSMULTI => 1` to create separate shared objects for `ProtoBuf`, `ProtoBuf::Arena`, and `ProtoBuf::DescriptorPool`.
        *   Link each XS module against the common static library.
        *   Define custom compilation rules in `MY::postamble` to handle C vs. C++ flags and build the static library.
        *   Set up include paths for the project root, UPB, and other dependencies.
    
    *   **XS Stubs (`.xs` files)**:
        *   `lib/ProtoBuf.xs`: Placeholder for the main module's XS functions.
        *   `lib/ProtoBuf/Arena.xs`: XS interface for `upb_Arena` management.
        *   `lib/ProtoBuf/DescriptorPool.xs`: XS interface for `upb_DefPool` management.
    
    *   **Perl Module Stubs (`.pm` files)**:
        *   `lib/ProtoBuf.pm`: Main module, loads XS.
        *   `lib/ProtoBuf/Arena.pm`: Perl class for Arenas.
        *   `lib/ProtoBuf/DescriptorPool.pm`: Perl class for Descriptor Pools.
        *   `lib/ProtoBuf/Message.pm`: Base class for messages (TBD).
    
    *   **Configuration Files**:
        *   `.gitignore`: Ignores build artifacts, editor files, etc.
        *   `.perlcriticrc`: Configures Perl::Critic for static analysis.
        *   `.perltidyrc`: Configures perltidy for code formatting.
    
    *   **`Devel::PPPort`**: Vendored version 3.72 to generate `ppport.h` for XS compatibility across different Perl versions.
    
    *   **`typemap`**: Custom typemap for XS argument/result conversion.
    
    *   **Documentation (`doc/`)**: Initial architecture and plan documents.
    
    This provides a solid foundation for developing the UPB-based Perl extension.


22 December, 2025 01:32AM by C.J. Collier

December 21, 2025

Ian Jackson

Debian’s git transition

tl;dr:

There is a Debian git transition plan. It’s going OK so far but we need help, especially with outreach and updating Debian’s documentation.

Goals of the Debian git transition project

  1. Everyone who interacts with Debian source code should be able to do so entirely in git.

That means, more specifically:

  1. All examination and edits to the source should be performed via normal git operations.

  2. Source code should be transferred and exchanged as git data, not tarballs. git should be the canonical form everywhere.

  3. Upstream git histories should be re-published, traceably, as part of formal git releases published by Debian.

  4. No-one should have to learn about Debian Source Packages, which are bizarre, and have been obsoleted by modern version control.

This is very ambitious, but we have come a long way!

Achievements so far, and current status

We have come a very long way. But, there is still much to do - especially, the git transition team needs your help with adoption, developer outreach, and developer documentation overhaul.

We’ve made big strides towards goals 1 and 4. Goal 2 is partially achieved: we currently have dual running. Goal 3 is within our reach but depends on widespread adoption of tag2upload (and/or dgit push).

Downstreams and users can obtain the source code of any Debian package in git form. (dgit clone, 2013). They can then work with this source code completely in git, including building binaries, merging new versions, even automatically (eg Raspbian, 2016), and all without having to deal with source packages at all (eg Wikimedia 2025).
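
For example (package name purely illustrative):

dgit clone hello
cd hello
# from here on it is normal git: inspect history, build, merge a new upstream, and so on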

A Debian maintainer can maintain their own package entirely in git. They can obtain upstream source code from git, and do their packaging work in git (git-buildpackage, 2006).
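
For example, a common day-to-day pair of commands there is roughly this (a sketch; details depend on the package’s debian/gbp.conf):

gbp import-orig --uscan            # bring in a new upstream release
gbp buildpackage                   # build the package from the git tree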

Every Debian maintainer can (and should!) release their package from git reliably and in a standard form (dgit push, 2013; tag2upload, 2025). This is not only more principled, but also more convenient, with a better UX than pre-dgit tooling like dput.

Indeed a Debian maintainer can now often release their changes to Debian, from git, using only git branches (so no tarballs). Releasing to Debian can be simply pushing a signed tag (tag2upload, 2025).
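
In practice that can look like this (a sketch, assuming the git-debpush client from the tag2upload tooling):

git debpush                        # create and push a signed tag; the tag2upload service does the rest
# or, building the source package locally instead:
dgit push-source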

A Debian maintainer can maintain a stack of changes to upstream source code in git (gbp pq, 2009). They can even maintain such a delta series as a rebasing git branch, directly buildable, and use normal git-rebase-style operations to edit their changes (git-dpm, 2010; git-debrebase, 2018).
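
For instance, the gbp pq round trip is roughly this (a sketch, not a complete walkthrough):

gbp pq import                      # turn debian/patches/ into commits on a patch-queue branch
# edit upstream files and commit as normal, then:
gbp pq export                      # regenerate debian/patches/ from those commits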

An authorised Debian developer can do a modest update to any package in Debian, even one maintained by someone else, working entirely in git in a standard and convenient way (dgit, 2013).

Debian contributors can share their work-in-progress on git forges and collaborate using merge requests, git based code review, and so on. (Alioth, 2003; Salsa, 2018.)

Core engineering principle

The Debian git transition project is based on one core engineering principle:

Every Debian Source Package can be losslessly converted to and from git.

In order to transition away from Debian Source Packages, we need to gateway between the old dsc approach, and the new git approach.

This gateway obviously needs to be bidirectional: source packages uploaded with legacy tooling like dput need to be imported into a canonical git representation; and of course git branches prepared by developers need to be converted to source packages for the benefit of legacy downstream systems (such as the Debian Archive and apt source).

This bidirectional gateway is implemented in src:dgit, and is allowing us to gradually replace dsc-based parts of the Debian system with git-based ones.

Correspondence between dsc and git

A faithful bidirectional gateway must define an invariant:

The canonical git tree, corresponding to a .dsc, is the tree resulting from dpkg-source -x.

This canonical form is sometimes called the “dgit view”. It’s sometimes not the same as the maintainer’s git branch, because many maintainers are still working with “patches-unapplied” git branches. More on this below.

(For 3.0 (quilt) .dscs, the canonical git tree doesn’t include the quilt .pc directory.)
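To see the correspondence concretely (a sketch; package name arbitrary, and the diff should be empty apart from git metadata and the quilt .pc directory):

# legacy route: fetch and unpack the source package
$ apt-get source --download-only hello
$ dpkg-source -x hello_*.dsc
# git route: the canonical git tree of the same version
$ dgit clone hello
$ diff -r --exclude=.git --exclude=.pc hello-*/ hello/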

Patches-applied vs patches-unapplied

The canonical git format is “patches applied”. That is:

If Debian has modified the upstream source code, a normal git clone of the canonical branch gives the modified source tree, ready for reading and building.

Many Debian maintainers keep their packages in a different git branch format, where the changes made by Debian, to the upstream source code, are in actual patch files in a debian/patches/ subdirectory.

Patches-applied has a number of important advantages over patches-unapplied:

  • It is familiar to, and doesn’t trick, outsiders to Debian. Debian insiders radically underestimate how weird “patches-unapplied” is. Even expert software developers can get very confused or even accidentally build binaries without security patches!

  • Making changes can be done with just normal git commands, eg git commit. Many Debian insiders working with patches-unapplied are still using quilt(1), a footgun-rich contraption for working with patch files!

  • When developing, one can make changes to upstream code, and to Debian packaging, together, without ceremony. There is no need to switch back and forth between patch queue and packaging branches (as with gbp pq), no need to “commit” patch files, etc. One can always edit every file and commit it with git commit.

The downside is that, with the (bizarre) 3.0 (quilt) source format, the patch files in debian/patches/ must somehow be kept up to date. Nowadays, though, tools like git-debrebase and git-dpm (and dgit for NMUs) make it very easy to work with patches-applied git branches. git-debrebase can deal very ergonomically even with big patch stacks.
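A hedged sketch of the day-to-day feel of that (subcommand names from git-debrebase(1); details depend on the package):

# edit upstream files and packaging together, committing normally
$ git commit -a -m 'Fix build against the new toolchain'
# tidy or reorder the accumulated delta queue, rebase-style
$ git debrebase -i
# rebase the whole delta queue onto a new upstream release
$ git debrebase new-upstream 2.5.1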

(For smaller packages which usually have no patches, plain git merge with an upstream git branch, and a much simpler dsc format, sidesteps the problem entirely.)

Prioritising Debian’s users (and other outsiders)

We want everyone to be able to share and modify the software that they interact with. That means we should make source code truly accessible, on the user’s terms.

Many of Debian’s processes assume everyone is an insider. It’s okay that there are Debian insiders and that people feel part of something that they worked hard to become involved with. But lack of perspective can lead to software which fails to uphold our values.

Our source code practices — in particular, our determination to share properly (and systematically) — are a key part of what makes Debian worthwhile at all. Like Debian’s installer, we want our source code to be useable by Debian outsiders.

This is why we have chosen to privilege a git branch format which is more familiar to the world at large, even if it’s less popular in Debian.

Consequences, some of which are annoying

The requirement that the conversion be bidirectional, lossless, and context-free can be inconvenient.

For example, we cannot support .gitattributes which modify files during git checkin and checkout. .gitattributes cause the meaning of a git tree to depend on the context, in possibly arbitrary ways, so the conversion from git to source package wouldn't be stable. And, worse, some source packages might not be representable in git at all.
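For illustration, attributes like these (standard .gitattributes syntax; the filter name is a placeholder) make the checked-out files differ from the committed blobs, which is exactly what breaks the round trip:

# line-ending conversion on checkout
*.txt   text eol=crlf
# $Id$ keyword expansion on checkout
*.c     ident
# arbitrary clean/smudge filter, rewriting content in both directions
*.data  filter=somefilter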

Another example: Maintainers often have existing git branches for their packages, generated with pre-dgit tooling which is less careful and less principled than ours. That can result in discrepancies between git and dsc, which need to be resolved before a proper git-based upload can succeed.

That some maintainers use patches-applied, and some patches-unapplied, means that there has to be some kind of conversion to a standard git representation. Choosing the less-popular patches-applied format as the canonical form means that many packages need their git representation converted. It also means that user- and outsider-facing branches from {browse,git}.dgit.d.o and dgit clone are not always compatible with maintainer branches on Salsa. User-contributed changes need cherry-picking rather than merging, or conversion back to the maintainer format. The good news is that dgit can automate much of this, and the manual parts are usually easy git operations.

Distributing the source code as git

Our source code management should be normal, modern, and based on git. That means the Debian Archive is obsolete and needs to be replaced with a set of git repositories.

The replacement repository for source code formally released to Debian is *.dgit.debian.org. This contains all the git objects for every git-based upload since 2013, including the signed tag for each released package version.

The plan is that it will contain a git view of every uploaded Debian package, by centrally importing all legacy uploads into git.

Tracking the relevant git data, when changes are made in the legacy Archive

Currently, many critical source code management tasks are done by changes to the legacy Debian Archive, which works entirely with dsc files (and the associated tarballs etc). The contents of the Archive are therefore still an important source of truth. But, the Archive’s architecture means it cannot sensibly directly contain git data.

To track changes made in the Archive, we added the Dgit: field to the .dsc of a git-based upload (2013). This declares which git commit this package was converted from, and where those git objects can be obtained.

Thus, given a Debian Source Package from a git-based upload, it is possible for the new git tooling to obtain the equivalent git objects. If the user is going to work in git, there is no need for any tarballs to be downloaded: the git data could be obtained from the depository using the git protocol.

The signed tags, available from the git depository, have standardised metadata which gives traceability back to the uploading Debian contributor.

Why *.dgit.debian.org is not Salsa

We need a git depository - a formal, reliable and permanent git repository of source code actually released to Debian.

Git forges like Gitlab can be very convenient. But Gitlab is not sufficiently secure, and too full of bugs, to be the principal and only archive of all our source code. (The “open core” business model of the Gitlab corporation, and the constant-churn development approach, are critical underlying problems.)

Our git depository lacks forge features like Merge Requests. But:

  • It is dependable, both in terms of reliability and security.
  • It is append-only: once something is pushed, it is permanently recorded.
  • Its access control is precisely that of the Debian Archive.
  • Its ref namespace is standardised and corresponds to Debian releases.
  • Pushes are authorised by PGP signatures, not ssh keys, and are therefore traceable.

The dgit git depository outlasted Alioth and it may well outlast Salsa.

We need both a good forge, and the *.dgit.debian.org formal git depository.

Roadmap

In progress

Right now we are quite focused on tag2upload.

We are working hard on eliminating the remaining issues that we feel need to be addressed before declaring the service out of beta.

Future Technology

Whole-archive dsc importer

Currently, the git depository only has git data for git-based package updates (tag2upload and dgit push). Legacy dput-based uploads are not currently present there. This means that the git-based and legacy uploads must be resolved client-side, by dgit clone.

We will want to start importing legacy uploads to git.

Then downstreams and users will be able to get the source code for any package simply with git clone, even if the maintainer is using legacy upload tools like dput.

Support for git-based uploads to security.debian.org

Security patching is a task which would particularly benefit from better and more formal use of git. git-based approaches to applying and backporting security patches are much more convenient than messing about with actual patch files.

Currently, one can use git to help prepare a security upload, but it often involves starting with a dsc import (which lacks the proper git history) or figuring out a package maintainer’s unstandardised git usage conventions on Salsa.

And it is not possible to properly perform the security release itself in git.

Internal Debian consumers switch to getting source from git

Buildds, QA work such as lintian checks, and so on, could be simpler if they don’t need to deal with source packages.

And since git is actually the canonical form, we want them to use it directly.

Problems for the distant future

For decades, Debian has been built around source packages. Replacing them is a long and complex process. Certainly source packages are going to continue to be supported for the foreseeable future.

There are no doubt going to be unanticipated problems. There are also foreseeable issues: for example, perhaps there are packages that work very badly when represented in git. We think we can rise to these challenges as they come up.

Mindshare and adoption - please help!

We and our users are very pleased with our technology. It is convenient and highly dependable.

dgit in particular is superb, even if we say so ourselves. As technologists, we have been very focused on building good software, but it seems we have fallen short in the marketing department.

A rant about publishing the source code

git is the preferred form for modification.

Our upstreams are overwhelmingly using git. We are overwhelmingly using git. It is a scandal that for many packages, Debian does not properly, formally and officially publish the git history.

Properly publishing the source code as git means publishing it in a way that means that anyone can automatically and reliably obtain and build the exact source code corresponding to the binaries. The test is: could you use that to build a derivative?

Putting a package in git on Salsa is often a good idea, but it is not sufficient. No standard branch structure is enforced on Salsa, nor should it be (so the source can't be automatically and reliably obtained); the tree is not in a standard form (so it can't be automatically built); and it is not necessarily identical to the source package. So Vcs-Git fields, and git from Salsa, will never be sufficient to make a derivative.

Debian is not publishing the source code!

The time has come for proper publication of source code by Debian to no longer be a minority sport. Every maintainer of a package whose upstream is using git (which is nearly all packages nowadays) should be basing their work on upstream git, and properly publishing that via tag2upload or dgit.

And it’s not even difficult! The modern git-based tooling provides a far superior upload experience.

A common misunderstanding

dgit push is not an alternative to gbp pq or quilt. Nor is tag2upload. These upload tools complement your existing git workflow. They replace and improve source package building/signing and the subsequent dput. If you are using one of the usual git layouts on salsa, and your package is in good shape, you can adopt tag2upload and/or dgit push right away.
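For instance, a maintainer with a typical gbp-style patches-unapplied branch can (hedged sketch; see dgit(1) for the quilt-mode options) upload with something like:

$ dgit --gbp push-source

and keep using gbp pq, Salsa merge requests, and the rest of their existing workflow unchanged.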

git-debrebase is distinct and does provide an alternative way to manage your git packaging, do your upstream rebases, etc.

Documentation

Debian’s documentation all needs to be updated, including particularly instructions for packaging, to recommend use of git-first workflows. Debian should not be importing git-using upstreams’ “release tarballs” into git. (Debian outsiders who discover this practice are typically horrified.) We should use only upstream git, work only in git, and properly release (and publish) in git form.

We, the git transition team, are experts in the technology, and can provide good suggestions. But we do not have the bandwidth to also engage in the massive campaigns of education and documentation updates that are necessary — especially given that (as with any programme for change) many people will be sceptical or even hostile.

So we would greatly appreciate help with writing and outreach.

Personnel

We consider ourselves the Debian git transition team.

Currently we are:

  • Ian Jackson. Author and maintainer of dgit and git-debrebase. Co-creator of tag2upload. Original author of dpkg-source, and inventor in 1996 of Debian Source Packages. Alumnus of the Debian Technical Committee.

  • Sean Whitton. Co-creator of the tag2upload system; author and maintainer of git-debpush. Co-maintainer of dgit. Debian Policy co-Editor. Former Chair of the Debian Technical Committee.

We wear the following hats related to the git transition:

You can contact us:

We do most of our heavy-duty development on Salsa.

Thanks

Particular thanks are due to Joey Hess, who, in the now-famous design session in Vaumarcus in 2013, helped invent dgit. Since then we have had a lot of support: most recently political support to help get tag2upload deployed, but also, over the years, helpful bug reports and kind words from our users, as well as translations and code contributions.

Many other people have contributed more generally to support for working with Debian source code in git. We particularly want to mention Guido Günther (git-buildpackage); and of course Alexander Wirt, Joerg Jaspert, Thomas Goirand and Antonio Terceiro (Salsa administrators); and before them the Alioth administrators.




21 December, 2025 11:24PM

Russell Coker

December 20, 2025

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

Immutable Debian

Immutable Atomic Linux Distributions

Of late, I've been hearing a lot of (good) things about Immutable Linux Distributions from friends, colleagues and mentors. Exploring them has been on my plate for some time, but given the nature of the subject, it kept getting delayed. The reason is simple: I can only really judge such a product if I use it for some time, and it has to be on my primary daily-driver machine.

Personal life, this year, has been quite challenging as well. Thus it got pushed to until now.

Chrome OS

I’ve realized that I’ve been quite late to a lot of Linux parties. Containers, Docker, Kubernetes, Golang, Rust, Immutable Linux and many many more.

Late to the extent that I’ve had a Chrome Book lying at home for many months but never got to tinker with it at all.

Having used it for just around 2 weeks now, I can see what a great product Google built with it. In short, this is exactly how a Linux desktop integration should be. The GUI integration is just top notch. There's consistency across all applications rendered on Chrome OS.

The integration of [X]Wayland and friends is equally good. Maybe Google should consider opensourcing all those components. IIRC, exo, sommelier, xwayland, ash and many more.

I was equally happy to see their Linux Development Environment offering on supported hardware. While tightly integrated, it still allows power users to tinker things around. I was quite impressed to see nested containers in crostini. Job well done.

All of this explains why there’s much buzz about Immutable Atomic Linux Distributions these days.

Then, there's the Android integration, which is just awesome in case you care about it. Both libndk and libhoudini are well integrated and nicely usable.

Immutable Linux Distributions

This holiday season I wanted to find and spend some time catching up on stuff I had been prolonging.

I chose to explore this subject while trying to remain in familiar Debian land. So my first look was to see if there was any product derived out of the Debian base.

That brought me to Vanilla OS Orchid. This is a fresh-out-of-the-oven project, recently switched to being based on Debian Sid. The previous iteration used Ubuntu as the base.

Vanilla OS turned out to be quite a good experience. The stock offering is put together well enough to serve a general audience. And the framework is so wonderfully structured that seasoned users can tinker around with it without much fuss.

Vanilla OS uses an A/B partition model for rolling out system updates. When a new OTA update is pushed, it gets applied to the inactive A/B partition and is activated at the next boot. If things break, the user has the option to switch back to the previous state. Just the usual set of expectations one would have of an immutable distribution.

What they’ve done beautifully is:

  • Integration of Device Mapper (LVM) for the A/B partitions
  • Linux container OCI images to provision/flash the A/B partitions
  • The abroot utility, developed for A/B partition management
  • APX (Distrobox) integration for container workflows, with multiple Linux flavors
  • No sudo. Everything done via pkexec

But the most awesome thing I liked in Vanilla OS is custom images. This allows power users to easily tinker with the developer workflow and generate new images, tailored to their specific use cases. All of this is done leveraging the GitHub/GitLab CI/CD workflows, which I think is just plain awesome. Given that the payload is in OCI format, the CI/CD workflow simply generates new OCI images and publishes them to a registry, and the same images are then pulled to the client as an OTA update.

Hats off to this small team/community for doing such nice integration work, ultimately producing a superb immutable atomic Linux distribution built on a Debian base.

Immutable Linux

My primary work machine has grown over the years on the rolling Debian Testing/Unstable channel. And I never much feel the itch to reformat my (primary) machine, no matter how great the counter-offer is.

So that got me wondering how to get some of the bling of the immutable world that I've now tasted (thanks, Chrome OS and Vanilla OS). With a fair idea of what they offer in features, I drew a line around what I'd want on my primary machine.

  • read-only rootfs
  • read-only /etc/

This also kinda hardens my system, to the extent that I can't accidentally cause catastrophic damage to it.

The feature I’m letting go of is the A/B Partition (rpm-ostree for Fedora land). While a good feature, having to integrate it into my current machine is going to be very very challenging.

I actually feel that the core assumption the immutable distros make, that all hardware is going to Just Work, is flawed. While Linux has improved substantially over the past years, support is still hit-and-miss when very recent hardware is involved.

Immutable Linux is targeted at the novice user, who won't accidentally mess with the system. But what would the novice user do if they have issues with the recently purchased hardware that they are attempting to run (immutable) Linux on?

Ritesh’s Immutable Debian

With the premise set, on to sailing in immutable land.

There's another ground-breaking innovation that has been happening, which I think everyone is aware of, and may be using as well, directly or indirectly.

Artificial Intelligence

While I've only been a user for a couple of months as I draft this post, I'm now very much impressed with all this innovation. Being at the consumer end has me appreciating it for what it has offered thus far. And I haven't even scratched the surface. I'm making attempts at developing an understanding of Machine Learning and Artificial Intelligence, but there's a looonnngg way to go still.

What I’m appreciating the most is the availability of the AI Technology. It has helped me be more efficient. And thus I get to use the gain (time) with family.

To wrap up: what I tailored my primary OS into wouldn't have been possible without assistance from AI.

With that, I should disclose that the rest of this article was primarily drafted by my AI Companion. It is going to serve me as a reference for the future, when I forget how all of this was structured.

System Architecture: Immutable Debian (Btrfs + MergerFS)

This system is a custom-hardened Immutable Workstation based on Debian Testing/Unstable. It utilizes native Btrfs properties and surgical VFS mounting to isolate the Operating System from persistent data.

1. Storage Strategy: Subvolume Isolation

The system resides on a LUKS-encrypted NVMe partition, using a flattened subvolume layout to separate the “Gold Master” OS from volatile and persistent data.

Mount Point    Subvolume Path        State   Purpose
/              /ROOTVOL              RO      The core OS image.
/etc           /ROOTVOL/etc          RO      System configuration (Snapshot-capable).
/home/rrs      /ROOTVOL/home/rrs     RW      User data and Kitty terminal configs.
/var/lib       /ROOTVOL/var/lib      RW      Docker, Apt state, and system DBs.
/var/spool     /ROOTVOL/var/spool    RW      Mail queues and service state.
/swap          /ROOTVOL/swap         RW      Isolated path for No_COW Swapfile.
/disk-tmp      /ROOTVOL/disk-tmp     RW      MergerFS overflow tier.

1.1 /etc/fstab

$ cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# --- ROOT & BOOT ---
/dev/mapper/nvme0n1p3_crypt / btrfs autodefrag,compress=zstd,discard=async,noatime,defaults,ro 0 0
/dev/nvme0n1p2 /boot ext4 defaults 0 2
/dev/nvme0n1p1 /boot/efi vfat umask=0077 0 1
# --- SWAP ---
# Mount the "Portal" to the swap subvolume using UUID (Robust)
UUID=4473b40b-bb46-43d6-b69c-ef17bfcac41c /swap btrfs subvol=/ROOTVOL/swap,defaults,noatime 0 0
# Activate the swap file by path (Correct for files)
/swap/swapfile none swap defaults 0 0
# --- DATA / MEDIA ---
UUID=439e297a-96a5-4f81-8b3a-24559839539d /media/rrs/TOSHIBA btrfs noauto,compress=zstd,space_cache=v2,subvolid=5,subvol=/,user
# --- MERGERFS ---
# --- DISK-TMP (MergerFS Overflow Tier) ---
# Ensure this ID matches your actual disk-tmp subvolume
UUID=4473b40b-bb46-43d6-b69c-ef17bfcac41c /disk-tmp btrfs subvolid=417,discard=async,defaults,noatime,compress=zstd 0 0
tmpfs /ram-tmp tmpfs defaults 0 0
/ram-tmp:/disk-tmp /tmp fuse.mergerfs x-systemd.requires=/ram-tmp,x-systemd.requires=/disk-tmp,defaults,allow_other,use_ino,nonempty,minfreespace=1G,category.create=all,moveonenospc=true 0 0
# --- IMMUTABILITY PERSISTENCE LAYERS ---
# We explicitly mount these subvolumes so they remain Writable later.
# UUID is the same as your /var/lib entry (your main Btrfs volume).
# 1. /var/lib (Docker, Apt state) - ID 50659
UUID=4473b40b-bb46-43d6-b69c-ef17bfcac41c /var/lib btrfs subvolid=50659,discard=async,defaults,noatime,compress=zstd 0 0
# 2. /home/rrs (User Data) - ID 13032
UUID=4473b40b-bb46-43d6-b69c-ef17bfcac41c /home/rrs btrfs subvolid=13032,discard=async,defaults,noatime,compress=zstd 0 0
# 3. /etc (System Config) - ID 13030
UUID=4473b40b-bb46-43d6-b69c-ef17bfcac41c /etc btrfs subvolid=13030,discard=async,defaults,noatime,compress=zstd,ro 0 0
# 4. /var/log (Logs) - ID 406
UUID=4473b40b-bb46-43d6-b69c-ef17bfcac41c /var/log btrfs subvolid=406,discard=async,defaults,noatime,compress=zstd 0 0
# 5. /var/cache (Apt Cache) - ID 409
UUID=4473b40b-bb46-43d6-b69c-ef17bfcac41c /var/cache btrfs subvolid=409,discard=async,defaults,noatime,compress=zstd 0 0
# 6. /var/tmp (Temp files) - ID 401
UUID=4473b40b-bb46-43d6-b69c-ef17bfcac41c /var/tmp btrfs subvolid=401,discard=async,defaults,noatime,compress=zstd 0 0
# /var/spool
UUID=4473b40b-bb46-43d6-b69c-ef17bfcac41c /var/spool btrfs subvolid=50689,discard=async,defaults,noatime,compress=zstd 0 0

2. Tiered Memory Model (/tmp)

To balance performance and capacity, /tmp is managed via MergerFS:

  • Tier 1 (RAM): tmpfs mounted at /ram-tmp.
  • Tier 2 (Disk): Btrfs subvolume mounted at /disk-tmp.
  • Logic: Files are written to RAM first. If available RAM space falls below 1 GB, files spill over to the Btrfs disk tier (a quick check is shown below).
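A quick way to confirm the tiering is wired up as intended (output will of course vary):

$ findmnt /tmp
$ df -h /ram-tmp /disk-tmp /tmp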

3. Hibernation & Swap Logic

  • Size: 33 GiB (Configured for Suspend-to-Disk with 24GB RAM).
  • Attribute: The /swap subvolume is marked No_COW (+C).
  • Kernel Integration:
    • resume=UUID=... (Points to the unlocked LUKS container).
    • resume_offset=... (Physical extent mapping for Btrfs; see the sketch below for one way to obtain it).
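A hedged sketch of how such a swapfile and its resume offset can be produced on Btrfs (assumes a reasonably recent btrfs-progs; check the option names against your installed version):

# create a No_COW swapfile inside the dedicated subvolume and enable it
$ sudo btrfs filesystem mkswapfile --size 33g /swap/swapfile
$ sudo swapon /swap/swapfile
# print the physical offset to pass as resume_offset= on the kernel command line
$ sudo btrfs inspect-internal map-swapfile -r /swap/swapfile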

3.1 systemd sleep/Hibernation

$ cat /etc/systemd/sleep.conf.d/sleep.conf
[Sleep]
HibernateDelaySec=12min

and

$ cat /etc/systemd/logind.conf.d/logind.conf
[Login]
HandleLidSwitch=suspend-then-hibernate
HandlePowerKey=suspend-then-hibernate
HandleSuspendKey=suspend-then-hibernate
SleepOperation=suspend-then-hibernate

4. Immutability & Safety Mechanisms

The system state is governed by two key components:

A. The Control Script (immutectl)

Handles the state transition by flipping Btrfs properties and VFS mount flags in the correct order.

  • sudo immutectl unlock: Sets ro=false and remounts rw.
  • sudo immutectl lock: Sets ro=true and remounts ro.
$ cat /usr/local/bin/immutectl
#!/bin/bash
# Ensure script is run as root
if [[ $EUID -ne 0 ]]; then
    echo "This script must be run as root (sudo)."
    exit 1
fi

ACTION=$1

case $ACTION in
    unlock)
        echo "🔓 Unlocking / and /etc for maintenance..."
        # 1. First, tell the Kernel to allow writes to the mount point
        mount -o remount,rw /
        mount -o remount,rw /etc
        # 2. Now that the VFS is RW, Btrfs will allow you to change the property
        btrfs property set / ro false
        btrfs property set /etc ro false
        echo "Status: System is now READ-WRITE."
        ;;
    lock)
        echo "🔒 Locking / and /etc (Immutable Mode)..."
        sync
        btrfs property set / ro true
        btrfs property set /etc ro true
        # We still attempt remount, but we ignore failure since Property is the Hard Lock
        mount -o remount,ro / 2>/dev/null
        mount -o remount,ro /etc 2>/dev/null
        echo "Status: System is now READ-ONLY (Btrfs Property Set)."
        ;;
    status)
        echo "--- System Immutability Status ---"
        for dir in "/" "/etc"; do
            # Get VFS state
            VFS_STATE=$(grep " $dir " /proc/mounts | awk '{print $4}' | cut -d, -f1)
            # Get Btrfs Property state
            BTRFS_PROP=$(btrfs property get "$dir" ro | cut -d= -f2)
            # Determine overall health
            if [[ "$BTRFS_PROP" == "true" ]]; then
                FINAL_STATUS="LOCKED (RO)"
            else
                FINAL_STATUS="UNLOCKED (RW)"
            fi
            echo "Path: $dir"
            echo " - VFS Layer (Mount): $VFS_STATE"
            echo " - Btrfs Property: ro=$BTRFS_PROP"
            echo " - Effective State: $FINAL_STATUS"
            # Check for mismatch (The "Busy" scenario)
            if [[ "$VFS_STATE" == "rw" && "$BTRFS_PROP" == "true" ]]; then
                echo " ⚠️ NOTICE: VFS is RW but Btrfs is RO. System is effectively Immutable."
            fi
            echo ""
        done
        ;;
    *)
        echo "Usage: $0 {lock|unlock|status}"
        exit 1
        ;;
esac

B. The Smart Seal (immutability-seal.service)

A systemd one-shot service that ensures the system is locked on boot.

  • Fail-safe: The service checks /proc/cmdline for the standalone word rw. If found (via GRUB manual override), the seal is aborted to allow emergency maintenance.
$ cat /etc/systemd/system/immutability-seal.service
[Unit]
Description=Ensure Btrfs Immutable Properties are set on Boot (unless rw requested)
DefaultDependencies=no
After=systemd-remount-fs.service
Before=local-fs.target
# Don't run in emergency/rescue modes
#ConditionPathExists=!/run/systemd/seats/seat0
[Service]
Type=oneshot
# The robust check: exit if 'rw' exists as a standalone word
ExecStartPre=/bin/sh -c '! grep -qE "\brw\b" /proc/cmdline'
ExecStartPre=mount -o remount,rw /
ExecStart=/usr/bin/btrfs property set / ro true
ExecStart=/usr/bin/btrfs property set /etc ro true
ExecStartPost=mount -o remount,ro /
RemainAfterExit=yes
[Install]
WantedBy=local-fs.target

5. Monitoring & Maintenance

  • Nagging: A systemd user-timer runs immutability-nag every 15 minutes to notify the desktop session if the system is currently in an “Unlocked” state.
  • Verification: Use sudo immutectl status to verify that both the VFS Layer and Btrfs Properties are in sync.

5.1 Nagging

$ cat ~/bin/immutability-nag
#!/bin/bash
# Check Btrfs property
BTRFS_STATUS=$(btrfs property get / ro | cut -d= -f2)

if [[ "$BTRFS_STATUS" == "false" ]]; then
    # Use notify-send (Standard, fast, non-intrusive)
    notify-send -u critical -i security-low \
        "🔓 System Unlocked" \
        "Root is currently WRITABLE. Run 'immutectl lock' when finished."
fi

and

$ usystemctl cat immutability-nag.service
# /home/rrs/.config/systemd/user/immutability-nag.service
[Unit]
Description=Check Btrfs immutability and notify user
# Ensure it doesn't run before the graphical session is ready
After=graphical-session.target
[Service]
Type=oneshot
ExecStart=%h/bin/immutability-nag
# Standard environment for notify-send to find the DBus session
Environment=DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/%U/bus
[Install]
WantedBy=default.target
   ~   20:35:15
$ usystemctl cat immutability-nag.timer
# /home/rrs/.config/systemd/user/immutability-nag.timer
[Unit]
Description=Check immutability every 15 mins
[Timer]
OnStartupSec=5min
OnUnitActiveSec=15min
[Install]
WantedBy=timers.target

And the resultant nag in action.

Immutable Debian Nag


5.2 Verification

$ sudo immutectl status
[sudo] password for rrs:
--- System Immutability Status ---
Path: /
- VFS Layer (Mount): rw
- Btrfs Property: ro=false
- Effective State: UNLOCKED (RW)
Path: /etc
- VFS Layer (Mount): rw
- Btrfs Property: ro=false
- Effective State: UNLOCKED (RW)

$ sudo immutectl lock
🔒 Locking / and /etc (Immutable Mode)...
Status: System is now READ-ONLY (Btrfs Property Set).

$ sudo immutectl status
--- System Immutability Status ---
Path: /
- VFS Layer (Mount): rw
- Btrfs Property: ro=true
- Effective State: LOCKED (RO)
⚠️ NOTICE: VFS is RW but Btrfs is RO. System is effectively Immutable.
Path: /etc
- VFS Layer (Mount): rw
- Btrfs Property: ro=true
- Effective State: LOCKED (RO)
⚠️ NOTICE: VFS is RW but Btrfs is RO. System is effectively Immutable.

Date Configured: December 2025
Philosophy: The OS is a diagnostic tool. If an application fails to write to a locked path, the application is the variable, not the system.

Wrap

Overall, I'm very, very happy with the result of a day of working together with AI. I wouldn't have gotten things done so quickly in such a short time if it wasn't around. So great is this age of AI.

20 December, 2025 12:00AM by Ritesh Raj Sarraf (rrs@researchut.com)

December 19, 2025

hackergotchi for Kartik Mistry

Kartik Mistry

KDE Needs You!

* KDE Randa Meetings and make a donation!

I know that my contributions to KDE are minimal at this stage, but hey, I’m doing my part this time for sure!

19 December, 2025 01:44PM by કાર્તિક

December 18, 2025

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

dang 0.0.17: New Features, Plus Maintenance

dang image

A new release of my mixed collection of things package dang arrived at CRAN earlier today. The dang package regroups a few functions of mine that had no other home, as for example lsos() from a StackOverflow question from 2009 (!!), the overbought/oversold price band plotter from an older blog post, the market monitor blogged about as well, and the checkCRANStatus() function tweeted about by Tim Taylor. And more, so take a look.

This release retires two functions: the social media site nobody ever visits anymore shut down its API too, so there is no longer a way to mute posts by a given handle. Similarly, the (never official) ability by Google to supply financial data is no more, so the function to access data this way is gone too. But we also have two new ones: one that helps with CRAN entries for ORCiD ids, and another little helper to re-order microbenchmark results by summary column (defaulting to the median). Beyond that, there are the usual updates to continuous integration, a switch to Authors@R (which will result in CRAN nagging me less about this), and another argument update.

The detailed NEWS entry follows.

Changes in version 0.0.17 (2025-12-18)

  • Added new function reorderMicrobenchmarkResults with alias rmr

  • Use tolower on email argument to checkCRANStatus

  • Added new function cranORCIDs bootstrapped from two emails by Kurt Hornik

  • Switched to using Authors@R in DESCRIPTION and added ORCIDs where available

  • Switched to r-ci action with included bootstrap step; updated the checkout action (twice); added (commented-out) log accessor

  • Removed googleFinanceData as the (unofficial) API access point no longer works

  • Removed muteTweeters because the API was turned off

Via my CRANberries, there is a comparison to the previous release. For questions or comments use the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

18 December, 2025 09:14PM

hackergotchi for Colin Watson

Colin Watson

Preparing a transition in Debusine

We announced a public beta of Debusine repositories recently (Freexian blog, debian-devel-announce). One thing I’m very keen on is being able to use these to prepare “transitions”: changes to multiple packages that need to be prepared together in order to land in testing. As I said in my DebConf25 talk:

We have distribution-wide CI in unstable, but there’s only one of it and it’s shared between all of us. As a result it’s very possible to get into tangles when multiple people are working on related things at the same time, and we only avoid that as much as we do by careful coordination such as transition bugs. Experimental helps, but again, there’s only one of it and setting up another one is far from trivial.

So, what we want is a system where you can run experiments on possible Debian changes at a large scale without a high setup cost and without fear of breaking things for other people. And then, if it all works, push the whole lot into Debian.

Time to practice what I preach.

Setup

The setup process is documented on the Debian wiki. You need to decide whether you’re working on a short-lived experiment, in which case you’ll run the create-experiment workflow and your workspace will expire after 60 days of inactivity, or something that you expect to keep around for longer, in which case you’ll run the create-repository workflow. Either one of those will create a new workspace for you. Then, in that workspace, you run debusine archive suite create for whichever suites you want to use. For the case of a transition that you plan to land in unstable, you’ll most likely use create-experiment and then create a single suite with the pattern sid-<name>.

The situation I was dealing with here was moving to Pylint 4. Tests showed that we needed this as part of adding Python 3.14 as a supported Python version, and I knew that I was going to need newer upstream versions of the astroid and pylint packages. However, I wasn’t quite sure what the fallout of a new major version of pylint was going to be. Fortunately, the Debian Python ecosystem has pretty good autopkgtest coverage, so I thought I’d see what Debusine said about it. I created an experiment called cjwatson-pylint (resulting in https://debusine.debian.net/debian/developers-cjwatson-pylint/ - I’m not making that a proper link since it will expire in a couple of months) and a sid-pylint suite in it.
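Concretely, the suite-creation step was along these lines (a sketch based on the description above; I'm assuming the suite name is passed as an argument and that the workspace comes from the client configuration or a corresponding option, so check the debusine client help or the wiki page for the exact syntax):

$ debusine archive suite create sid-pylint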

Iteration

From this starting point, the basic cycle involved uploading each package like this for each package I’d prepared:

$ dput -O debusine_workspace=developers-cjwatson-pylint \
       -O debusine_workflow=publish-to-sid-pylint \
       debusine.debian.net foo.changes

I could have made a new dput-ng profile to cut down on typing, but it wasn’t worth it here.
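Even a throwaway shell alias wrapping the command above would have done the job; the name here is purely illustrative:

$ alias dput-pylint='dput -O debusine_workspace=developers-cjwatson-pylint -O debusine_workflow=publish-to-sid-pylint debusine.debian.net'
$ dput-pylint foo.changes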

Then I looked at the workflow results, figured out which other packages I needed to fix based on those, and repeated until the whole set looked coherent. Debusine automatically built each upload against whatever else was currently in the repository, as you’d expect.

I should probably have used version numbers with tilde suffixes (e.g. 4.0.2-1~test1) in case I needed to correct anything, but fortunately that was mostly unnecessary. I did at least run initial test-builds locally of just the individual packages I was directly changing to make sure that they weren’t too egregiously broken, just because I usually find it quicker to iterate that way.

I didn’t take screenshots as I was going along, but here’s what the list of top-level workflows in my workspace looked like by the end:

Workflows

You can see that not all of the workflows are successful. This is because we currently just show everything in every workflow; we don’t consider whether a task was retried and succeeded on the second try, or whether there’s now a newer version of a reverse-dependency so tests of the older version should be disregarded, and so on. More fundamentally, you have to look through each individual workflow, which is a bit of a pain: we plan to add a dashboard that shows you the current state of a suite as a whole rather than the current workflow-oriented view, but we haven’t started on that yet.

Drilling down into one of these workflows, it looks something like this:

astroid workflow

This was the first package I uploaded. The first pass of failures told me about pylint (expected), pylint-flask (an obvious consequence), and python-sphinx-autodoc2 and sphinx-autoapi (surprises). The slightly odd pattern of failures and errors is because I retried a few things, and we sometimes report retries in a slightly strange way, especially when there are workflows involved that might not be able to resolve their input parameters any more.

The next level was:

pylint workflow

Again, there were some retries involved here, and also some cases where packages were already failing in unstable so the failures weren’t the fault of my change; for now I had to go through and analyze these by hand, but we’ll soon have regression tracking to compare with reference runs and show you where things have got better or worse.

After excluding those, that left pytest-pylint (not caused by my changes, but I fixed it anyway in unstable to clear out some noise) and spyder. I’d seen people talking about spyder on #debian-python recently, so after a bit of conversation there I sponsored a rope upload by Aeliton Silva, upgraded python-lsp-server, and patched spyder. All those went into my repository too, exposing a couple more tests I’d forgotten in spyder.

Once I was satisfied with the results, I uploaded everything to unstable. The next day, I looked through the tracker as usual starting from astroid, and while there are some test failures showing up right now it looks as though they should all clear out as pieces migrate to testing. Success!

Conclusions

We still have some way to go before this is a completely smooth experience that I’d be prepared to say that every developer can and should be using; there are all sorts of fit-and-finish issues that I can easily see here. Still, I do think we’re at the point where a tolerant developer can use this to deal with the common case of a mid-sized transition, and get more out of it than they put in.

Without Debusine, either I’d have had to put much more effort into searching for and testing reverse-dependencies myself, or (more likely, let’s face it) I’d have just dumped things into unstable and sorted them out afterwards, resulting in potentially delaying other people’s work. This way, everything was done with as little disruption as possible.

This works best when the packages likely to be involved have reasonably good autopkgtest coverage (even if the tests themselves are relatively basic). This is an increasingly good bet in Debian, but we have plans to add installability comparisons (similar to how Debian’s testing suite works) as well as optional rebuild testing.

If this has got you interested, please try it out for yourself and let us know how it goes!

18 December, 2025 01:21PM by Colin Watson

December 17, 2025

hackergotchi for Jonathan McDowell

Jonathan McDowell

21 years of blogging

21 years ago today I wrote my first blog post. Did I think I’d still be writing all this time later? I’ve no idea to be honest. I’ve always had the impression my readership is small, and people who mostly know me in some manner, and I post to let them know what I’m up to in more detail than snippets of IRC conversation can capture. Or I write to make notes for myself (I frequently refer back to things I’ve documented here). I write less about my personal life than I used to, but I still occasionally feel the need to mark some event.

From a software PoV I started out with Blosxom, migrated to MovableType in 2008, ditched that, when the Open Source variant disappeared, for Jekyll in 2015 (when I also started putting it all in git). And have stuck there since. The static generator format works well for me, and I outsource comments to Disqus - I don’t get a lot, I can’t be bothered with the effort of trying to protect against spammers, and folk who don’t want to use it can easily email or poke me on the Fediverse. If I ever feel the need to move from Jekyll I’ll probably take a look at Hugo, but thankfully at present there’s no push factor to switch.

It’s interesting to look at my writing patterns over time. I obviously started keen, and peaked with 81 posts in 2006 (I’ve no idea how on earth that happened), while 2013 had only 2. Generally I write less when I’m busy, or stressed, or unhappy, so it’s kinda interesting to see how that lines up with various life events.

Blog posts over time

During that period I’ve lived in 10 different places (well, 10 different houses/flats, I think it’s only 6 different towns/cities), on 2 different continents, working at 6 different employers, as well as a period where I was doing my Masters in law. I’ve travelled around the world, made new friends, lost contact with folk, started a family. In short, I have lived, even if lots of it hasn’t made it to these pages.

At this point, do I see myself stopping? No, not really. I plan to still be around, like Flameeyes, to the end. Even if my posts are unlikely to hit the frequency from back when I started out.

17 December, 2025 05:06PM

Sven Hoexter

exfatprogs: Do not try defrag.exfat / mkfs.exfat Windows compatibility in Trixie

exfatprogs 1.3.0 added a new defrag.exfat utility which turned out to be unreliable and to cause data loss. exfatprogs 1.3.1 disabled the utility, and I followed that decision with the upload to Debian/unstable yesterday. But as usual it will take some time until it migrates to testing. Thus, if you use testing, do not try defrag.exfat! At least not without a vetted and current backup.

Besides that, there is a compatibility issue with the way mkfs.exfat, as shipped in trixie (exfatprogs 1.2.9), handles drives which have a physical sector size of 4096 bytes but emulate a logical size of 512 bytes. With exfatprogs 1.2.6 a change was implemented to prefer the physical sector size on those devices. That turned out not to be compatible with Windows, and was reverted in exfatprogs 1.3.0. Sadly John Ogness ran into the issue and spent some time debugging it. I have to admit that I missed the relevance of that change. Huge kudos to John for the bug report. Based on that I prepared an update for the next trixie point release.
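To check whether a drive is one of those 512e devices (4096-byte physical sectors behind 512-byte logical ones), you can look at the sizes the kernel reports, for example:

$ lsblk -o NAME,LOG-SEC,PHY-SEC /dev/sdX
$ cat /sys/block/sdX/queue/logical_block_size /sys/block/sdX/queue/physical_block_size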

If you hit that issue on trixie with exfatprogs 1.2.9-1, you can work around it by formatting with mkfs.exfat -s 512 /dev/sdX to get Windows compatibility. If you use exfatprogs 1.2.9-1+deb13u1 or later, want the performance gain back, and do not need Windows compatibility, you can format with mkfs.exfat -s 4096 /dev/sdX.

17 December, 2025 02:38PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 15.2.3-1 on CRAN: Upstream Update

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1272 other packages on CRAN, downloaded 43.2 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 661 times according to Google Scholar.

This versions updates to the 15.2.3 upstream Armadillo release from yesterday. It brings minor changes over the RcppArmadillo 15.2.2 release made last month (and described in this post). As noted previously, and due to both the upstream transition to C++14 coupled with the CRAN move away from C++11, the package offers a transition by allowing packages to remain with the older, pre-15.0.0 ‘legacy’ Armadillo yet offering the current version as the default. If and when CRAN will have nudged (nearly) all maintainers away from C++11 (and now also C++14 !!) we can remove the fallback. Our offer to help with the C++ modernization still stands, so please get in touch if we can be of assistance. As a reminder, the meta-issue #475 regroups all the resources for the C++11 transition.

There were no R-side changes in this release. The detailed changes since the last release follow.

Changes in RcppArmadillo version 15.2.3-1 (2025-12-16)

  • Upgraded to Armadillo release 15.2.3 (Medium Roast Deluxe)

    • Faster .resize() for vectors

    • Faster repcube()

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

17 December, 2025 02:11PM

hackergotchi for Matthew Garrett

Matthew Garrett

How did IRC ping timeouts end up in a lawsuit?

I recently won a lawsuit against Roy and Rianne Schestowitz, the authors and publishers of the Techrights and Tuxmachines websites. The short version of events is that they were subject to an online harassment campaign, which they incorrectly blamed me for. They responded with a large number of defamatory online posts about me, which the judge described as unsubstantiated character assassination and consequently awarded me significant damages. That's not what this post is about, as such. It's about the sole meaningful claim made that tied me to the abuse.

In the defendants' defence and counterclaim[1], 15.27 asserts in part The facts linking the Claimant to the sock puppet accounts include, on the IRC network: simultaneous dropped connections to the mjg59_ and elusive_woman accounts. This is so unlikely to be coincidental that the natural inference is that the same person posted under both names. "elusive_woman" here is an account linked to the harassment, and "mjg59_" is me. This is actually a surprisingly interesting claim to make, and it's worth going into in some more detail.

The event in question occurred on the 28th of April, 2023. You can see a line reading *elusive_woman has quit (Ping timeout: 2m30s), followed by one reading *mjg59_ has quit (Ping timeout: 2m30s). The timestamp listed for the first is 09:52, and for the second 09:53. Is that actually simultaneous? We can actually gain some more information - if you hover over the timestamp links on the right hand side you can see that the link is actually accurate to the second even if that's not displayed. The first event took place at 09:52:52, and the second at 09:53:03. That's 11 seconds apart, which is clearly not simultaneous, but maybe it's close enough. Figuring out more requires knowing what a "ping timeout" actually means here.

The IRC server in question is running Ergo (link to source code), and the relevant function is handleIdleTimeout(). The logic here is fairly simple - track the time since activity was last seen from the client. If that time is longer than DefaultIdleTimeout (which defaults to 90 seconds) and a ping hasn't been sent yet, send a ping to the client. If a ping has been sent and the timeout is greater than DefaultTotalTimeout (which defaults to 150 seconds), disconnect the client with a "Ping timeout" message. There's no special logic for handling the ping reply - a pong simply counts as any other client activity and resets the "last activity" value and timeout.

What does this mean? Well, for a start, two clients running on the same system will only have simultaneous ping timeouts if their last activity was simultaneous. Let's imagine a machine with two clients, A and B. A sends a message at 02:22:59. B sends a message 2 seconds later, at 02:23:01. The idle timeout for A will fire at 02:24:29, and for B at 02:24:31. A ping is sent for A at 02:24:29 and is responded to immediately - the idle timeout for A is now reset to 02:25:59, 90 seconds later. The machine hosting A and B has its network cable pulled out at 02:24:30. The ping to B is sent at 02:24:31, but receives no reply. A minute later, at 02:25:31, B quits with a "Ping timeout" message. A ping is sent to A at 02:25:59, but receives no reply. A minute later, at 02:26:59, A quits with a "Ping timeout" message. Despite both clients having their network interrupted simultaneously, the ping timeouts occur 88 seconds apart.

So, two clients disconnecting with ping timeouts 11 seconds apart is not incompatible with the network connection being interrupted simultaneously - depending on activity, simultaneous network interruption may result in disconnections up to 90 seconds apart. But another way of looking at this is that network interruptions may occur up to 90 seconds apart and generate simultaneous disconnections[2]. Without additional information it's impossible to determine which is the case.

This already casts doubt over the assertion that the disconnection was simultaneous, but if this is unusual enough it's still potentially significant. Unfortunately for the Schestowitzes, even looking just at the elusive_woman account, there were several cases where elusive_woman and another user had a ping timeout within 90 seconds of each other - including one case where elusive_woman and schestowitz[TR] disconnect 40 seconds apart. By the Schestowitzes argument, it's also a natural inference that elusive_woman and schestowitz[TR] (one of Roy Schestowitz's accounts) are the same person.

We didn't actually need to make this argument, though. In England it's necessary to file a witness statement describing the evidence that you're going to present in advance of the actual court hearing. Despite being warned of the consequences on multiple occasions the Schestowitzes never provided any witness statements, and as a result weren't allowed to provide any evidence in court, which made for a fairly foregone conclusion.

[1] As well as defending themselves against my claim, the Schestowitzes made a counterclaim on the basis that I had engaged in a campaign of harassment against them. This counterclaim failed.

[2] Client A and client B both send messages at 02:22:59. A falls off the network at 02:23:00, has a ping sent at 02:24:29, and has a ping timeout at 02:25:29. B falls off the network at 02:24:28, has a ping sent at 02:24:29, and has a ping timeout at 02:25:29. Simultaneous disconnects despite over a minute of difference in the network interruption.


17 December, 2025 01:17PM

December 16, 2025

hackergotchi for Daniel Pocock

Daniel Pocock

Tom Silvagni sentencing: not Xavier College but DPP and social media to blame

After the recent abuse judgment, today we will find out about Tom Silvagni's sentence. Dad died shortly after his employer, the late Cardinal Pell, was sentenced to prison. The cardinal was subsequently acquitted in his appeal to the High Court. Dad was a Carlton supporter. He would be turning in his grave at the thought that Stephen Silvagni's son could be a rapist.

Suppression orders were lifted last week and the media have finally identified Tom Silvagni as the latest Australian football personality to be convicted of abuse.

News reports were quick to comment on the fact the Silvagni brothers all attended Xavier College, the elite Catholic boys school attended by my father and I. As explained in a previous blog, after moving from Bendigo to Melbourne for my final year of school, I used to cycle from Pentridge Prison to Xavier College each day while Tom really is in prison for at least one Christmas and maybe more.

The alleged incident took place in January 2024. That appears to be four years after Tom completed year 12. References to the school are therefore not helpful in any specific way. In a general sense, all we can say is there is a correlation between wealth and abuse, just as there is a correlation between wealth and attendance at elite schools. But correlation is not causation. The Chanel Contos "petition" about consent demonstrated that incidents of this nature were alleged to happen in every elite school of every denomination. The Federal Court published the Katharine Thornton Dossier about their former boss, the attorney general Christian Porter. In his case, it is alleged that abuse took place while he was representing another elite school at the national debating contest. An allegation against a student on an official school trip is far more severe than an allegation against a former student.

Silvagni background

Tom had started a job as a player agent at Kapital Sports Management shortly before the incident. The Wayback machine has captured images of Tom with his colleagues as well as his profile:

Tom is a recently accredited AFL Player Agent and works closely with our team of experienced agents at the ground level. Tom has “lived” the industry through his family ties and is a great resource to Kapital given he has recently experienced playing in AFL pathways. Tom offers great perspective to the young draftees as they navigate the system and is a great relationship piece with our draft talent.

Polarizing and adversarial procedures are not solving abuse

After the conviction was announced, the victim was invited to make a unilateral victim impact statement. She used much of that opportunity to direct her comments against Tom. She made little reference to anybody else at the party and no reference to the cultural and legal problems around abuse in Australia.

Shiannon Corcoron writes a strongly man-hating piece about the trial:

was about how the rights of the wealthy and powerful can override the rights of the small, the weak, the vulnerable and the truth. This man-child ...

As the accuser is anonymous, we do not know if she was small or weak. The facts do confirm she was vulnerable at that particular moment in time: she had gone to sleep in a bed with another man. She believed he would stay the night. The other man left at 2am, leaving the complainant alone and vulnerable.

The polarizing nature of these comments can be further exposed with reference to a parallel case in the United Kingdom. On the same day as the judgment in Melbourne, a British police officer failed in their appeal to overturn dismissal for gross misconduct. In the British case, the attacker was not a male police officer, it was a female police officer, PC Pamela Pritchard. While the police sacked her, there is no mention of any criminal prosecution for her actions.

Look at the women running around the world of open source software encouraging people to gang up on each other:

 

Comments like that are incredibly dangerous. In the world of football, Tom may have seen the way the Director of Public Prosecutions (DPP) handled the case against Stephen Milne and he may have felt that a yes to one man is almost as good as a yes to both men.

Abuse is not about the offender's gender. It is about power, having a gang on your side or just being incredibly drunk and stupid.

There are at least two sides to every story. Looking at the falsified harassment claims in the world of open source software, I was able to find internal emails manipulating journalists to repeat false accusations against Dr Jacob Appelbaum. If somebody was really abused, why did they try to fight their battle through B-grade journalists rather than going directly to the police?

One of the more notable examples in Australia was the wrongful persecution of Alex Matters from the ALP. Sky News conducted an excellent interview with Mr Matters about what it is like to be wrongly accused of abuse.

Based on these cases, I feel it is wise to be very cautious when somebody raises an accusation. It is important to listen and write down evidence but it is foolish to repeat an accusation willy-nilly on social control media.

The mental health defence

Silvagni's lawyers argued that due to the high profile of his family and his young age, he would be at unreasonable risk of self-harm or suicide if the story of the trial was published by the media. On this basis, the entire trial was conducted in secret and his identity only revealed after he was convicted.

There have been vast discussions about privacy, censorship and the credibility of mental health concerns.

Research into the mental health issue suggests that everybody in proximity to bullying and persecution, including family members, team mates, Carlton fans, friends of Tom's mum and Xavier alumni are going to collectively suffer some stress due to the public denunciation of the Silvagni family.

Take a moment to think about Tom's brothers and their families.

Ben was dating Eve Markoski-Wood. Eve's biological father, inconveniently named Rapoff, was convicted and jailed on drug offences. Eve's mother is a reality TV star and Eve uses the name of her step-father. It looks like Eve broke off the relationship with Ben shortly after the charges were officially declared. Britain's Daily Mail tabloid speculated that the "tyranny of distance" had forced them apart but now we know the real reason.

Jack had a very successful few years playing for Carlton. He arrived in the club at the same time as Grace Phillips took up a role as a social media intern. Grace was fortunate to strike up a relationship with her new colleague, the star recruit and son of one of the club legends. They married in 2023 and not long after, in 2024 they had a baby son, Charlie. How is the child going to feel when it arrives for its first day at school and some other five year old asks about uncle Tom?

Tom's girlfriend, Alannah Iocanis, who was a friend of the accuser, is also one of these influencer/model personalities in the world of social control media. With her boyfriend in jail, will other celebrities be willing to date her? Will she be able to maintain the influencer/model lifestyle or will she have to get a job in a supermarket or coffee shop?

Alannah was chosen as a finalist in Miss Universe Australia 2025 even while her boyfriend was on trial for rape. Many pages about her participation have vanished as news got around.

Alannah's model agency, KHOO, has removed her profile.

Media is self-censoring even after suppression order lifted

Many of the media reports do not mention the names of the other people attending the party. It is vital to understand that Anthony Lo Giudice, the other man who had been in the room with the girl was a close relative of the Carlton football club president, Mark Lo Giudice. At the same time, it is important to understand that Tom's father, one of the legends of Carlton, had been refusing to speak to Mark Lo Giudice for a number of years.

Channel 7 report about Anthony Lo Giudice and the sequence of events and Anthony's LinkedIn.

When the reader is aware of all these challenging relationships, they can begin to contemplate the possibility that people have had a role in manipulating the girl or manipulating Tom or manipulating both of them to create a crisis.

Tom's girlfriend, Alannah Iocanis, had invited the victim to the four-way party and she arrived after midnight. Tom's best friend was having an open relationship with the victim. Think of the film Cruel Intentions from 1999. It remains a masterpiece of this genre.

The role of technology

Within minutes of the alleged abuse, the victim had used her mobile phone to alert multiple people that she was an abuse victim. Being only nineteen years old, she may not have realized the extent to which these text messages would change her life. The identities of abuse victims can't be published by the press in Australia, nonetheless, her name has been shared widely between people in her circle of friends, people she thought she could trust and the football clubs concerned.

Without a mobile phone, she may have had time to think about her response to the situation. Once she had gone down the path of telling multiple people, she was unable to turn back.

Deception and rape go together, from Chateau Silvagni to the FSFE & Debian lies

News reports were quick to emphasize that Tom is accused of using deception to gain access to the sleeping nineteen year old. He has admitted using deception, a falsified Uber receipt, to obfuscate the identities of those really in the house at the time of the alleged abuse.

I suspect many people would feel a sense of shock if accused of abuse and some may be tempted to put up barriers to protect themselves. The trial of Tom Silvagni found that his response was not merely a spontaneous lie made up on the spur of the moment, it was a co-ordinated deception involving at least one other person and a forged document.

During the nearly 10-day trial, Crown prosecutor Jeremy McWilliams told jurors the rapes were committed 'not through threats, not through force… but through deception,' with Silvagni impersonating his friend to trick the woman.

In Debianism, the former leader sent this email in December 2018:

You are well-aware that I have been nothing but scrupulous and gentlemanly with regards to your personal privacy and thus I would refuse to cite any outside or otherwise offer any objective rebuttals to your claims on a public forum.

Yet records show he had spent much of 2018 sending defamatory emails behind my back at a time when I lost two family members. Nothing could be a more hideous violation of privacy.

We've seen similar extremes as Matthias Kirschner uses the name FSFE to impersonate the real FSF in Boston. In a previous blog, I compared the FSFE to a Nigerian 911 scam.

Tom Silvagni is accused of using deception/impersonation to procure sex with one of his best friend's girlfriends. Chris Lamb and Matthias Kirschner used deception on a similar scale to procure victims' work while pretending to be independent voluntary organizations. In the latter case, we saw victims killed themselves in the Debian suicide cluster. One victim died on our wedding day.

16 December, 2025 10:30PM

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Lichess

I wish more pages on the Internet were like Lichess. It's fast. It feels like it only does one thing (even though it's really more like seven or eight)—well, perhaps except for the weird blogs. It does not feel like it's trying to sell me anything; in fact, it feels like it hardly even wants my money. (I've bought two T-shirts from their Spreadshirt, to support them.) It's super-efficient; I've seen their (public) balance sheets, and it feels like it runs off of a shoestring budget. (Take note, Wikimedia Foundation!) And, perhaps most relieving in this day and age, it does not try to push any AI grift.

Yes, I know, chess.com is the juggernaut, and has probably done more for chess' popularity than FIDE ever did. But I still go to Lichess every now and then and just click that 2+1 button. (Generally without even logging in, so that I don't feel angry about it when I lose.) Be more like Lichess.

16 December, 2025 06:45PM

December 15, 2025

hackergotchi for Gunnar Wolf

Gunnar Wolf

Unique security and privacy threats of large language models — a comprehensive survey

This post is an unpublished review for Unique security and privacy threats of large language models — a comprehensive survey

Much has been written about large language models (LLMs) being a risk to user security and privacy, including the issue that, being trained with datasets whose provenance and licensing are not always clear, they can be tricked into producing bits of data that should not be divulged. I took on reading this article as a means to gain a better understanding of this area. The article completely fulfilled my expectations.

This is a review article, which is not a common format for me to follow: instead of digging deep into a given topic, including an experiment or some way of proving the authors’ claims, a review article will contain a brief explanation and taxonomy of the issues at hand, and a large number of references covering the field. And, at 36 pages and 151 references, that’s exactly what we get.

The article is roughly split in two parts: The first three sections present the issue of security and privacy threats as seen by the authors, as well as the taxonomy within which the review will be performed, and sections 4 through 7 cover the different moments in the life cycle of a LLM model (at pre-training, during fine-tuning, when deploying systems that will interact with end-users, and when deploying LLM-based agents), detailing their relevant publications. For each of said moments, the authors first explore the nature of the relevant risks, then present relevant attacks, and finally close outlining countermeasures to said attacks.

The text is accompanied throughout by tables, pipeline diagrams and attack examples that visually guide the reader. While the examples presented are sometimes a bit simplistic, they are a welcome aid in following the explanations; the explanations for each of the attack models are necessarily not very deep, and I was often left wondering whether I had correctly understood a given topic, or wanting to dig deeper – but this being a review article, that is absolutely understandable.

The authors write in an easy-to-read prose, and this article fills an important spot in understanding this large, important, and emerging area of LLM-related study.

15 December, 2025 07:30PM

Russ Allbery

Review: Brigands & Breadknives

Review: Brigands & Breadknives, by Travis Baldree

Series: Legends & Lattes #3
Publisher: Tor
Copyright: 2025
ISBN: 1-250-33489-6
Format: Kindle
Pages: 325

Brigands & Breadknives is a secondary-world sword-and-sorcery fantasy and a sequel to both Legends & Lattes and Bookshops & Bonedust. It takes place shortly after Legends & Lattes chronologically, but Fern, the protagonist, was introduced in the Bookshops & Bonedust prequel.

You may have noticed I didn't describe this as cozy fantasy. That is intentional.

When we left Fern at the end of Bookshops & Bonedust, the rattkin was running a bookshop in the town of Murk. As Brigands & Breadknives opens, Fern is moving, for complicated and hard-to-describe personal reasons, to Thune where Viv has her coffee shop. Her plan is to open a new bookstore next door to Legends and Lattes. This is exactly the sort of plot one might expect from this series, and the first few chapters feel like yet another version of the first two novels. Then Fern makes an impulsive and rather inexplicable (even to herself) decision and the plot goes delightfully sideways.

Brigands & Breadknives is not, as Baldree puts it in the afterword, a book about fantasy small-business ownership as the answer to all of life's woes. It is, instead, a sword and sorcery story about a possibly immortal elven bounty hunter, her utterly baffling goblin prisoner, and a rattkin bookseller who becomes their unexpected travel companion for reasons she can't explain. It's a story about a mid-life crisis in a world and with supporting characters that I can only describe as inspired by a T. Kingfisher novel.

Baldree is not Ursula Vernon, of course. This book does not contain paladins or a romance, possibly to the relief of some readers. It's slower, a bit more introspective, and doesn't have as sharp of edges or the casual eerie unsettlingness. But there is a religious order that worships a tentacled space horror for entirely unexpected reasons, pompous and oleaginous talking swords with verbose opinions about everything, a mischievously chaotic orange-haired goblin who quickly became one of my favorite fantasy characters and then kept getting better, and a whole lot of heart. You may see why Kingfisher was my first thought for a comparison point.

Unlike Baldree's previous novels, there is a lot of combat and injury. I think some people will still describe this book as cozy, and I'm not going to argue too strongly because the conflicts are a bit lighter than the sort of rape and murder one would see in a Mercedes Lackey novel. But to me this felt like sword and sorcery in a Dungeons and Dragons universe made more interesting by letting the world-building go feral and a little bit sarcastic. Most of the book is spent traveling, there are a lot of random encounters that build into a connected plot, and some scenes (particularly the defense of the forest village) felt like they could have sold to the Swords and Sorceress anthology series.

Also, this was really good! I liked both Legends & Lattes and Bookshops & Bonedust, maybe a bit more than the prevailing opinion among reviewers since the anachronisms never bothered me, but I wasn't sure whether to dive directly into this book because I was expecting more of the same. This is not more of the same. I think it's clearly better writing and world-building than either of the previous books. It helps that Fern is the protagonist; as much as I like Viv, I think Fern is a more interesting character, and I am glad she got a book of her own.

Baldree takes a big risk on the emotional arc of this book. Fern starts the story in a bad state and makes some decisions to kick off the plot that are difficult to defend. She beats herself up for those decisions for most of the book, deservedly, and parts of that emotional turmoil are difficult to read. Baldree resists the urge to smooth everything over and instead provides a rather raw sense of depression, avoidance, and social anxiety that some readers are going to have to brace themselves for.

I respect the decision to not write the easy series book people probably expected, but I'm not sure Fern's emotional arc quite worked. Baldree is hinting at something that's hard to describe logically, and I'm not sure he was able to draw a clear enough map of Fern's thought process for the reader to understand her catharsis. The "follow your passion" self-help mindset has formed a gravitational singularity in the vicinity of this book's theme, it takes some skillful piloting to avoid being sucked into its event horizon, and I don't think Baldree quite managed to escape it. He made a valiant attempt, though, and it created a far more interesting book than one about safer emotions.

I wanted more of an emotional payoff than I got, but the journey, even with the moments of guilt and anxiety, was so worth it. The world-building is funnier and more interesting than the previous books of the series, and the supporting cast is fantastic. If you bailed on the series but you like sword and sorcery and T. Kingfisher novels, consider returning. You do probably need to read Bookshops & Bonedust first, if you haven't already, since it helps to know the start of Fern's story.

Recommended, and shortcomings aside, much better than I had expected.

Content notes: Bloody sword fights, major injury, some very raw emotions about letting down friends and destroying friendships.

Rating: 8 out of 10

15 December, 2025 03:25AM

December 14, 2025

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

BH 1.90.0-1 on CRAN: New Upstream

Boost

Boost is a very large and comprehensive set of (peer-reviewed) libraries for the C++ programming language, containing well over one hundred individual libraries. The BH package provides a sizeable subset of header-only libraries for (easier, no linking required) use by R. It is fairly widely used: the (partial) CRAN mirror logs (aggregated from the cloud mirrors) show over 41.5 million package downloads.

Version 1.90.0 of Boost was released a few days ago following the regular Boost release schedule of April, August and December releases. As before, we packaged it almost immediately and started testing following our annual update cycle which strives to balance being close enough to upstream and not stressing CRAN and the user base too much. The reverse depends check revealed only one really minor issue among the over three hundred direct reverse dependencies. And that issue was addressed yesterday within hours by a truly responsive maintainer (and it helped that a related issue had been addressed months earlier with version 1.89.0). So big thanks to Jean-Romain Roussel for the prompt fix, and to Andrew Johnson for the earlier test with 1.89.0.

As last year with 1.87.0, no new Boost libraries were added to BH so the (considerable) size is more or less unchanged. It led to CRAN doing a manual inspection, but as there were no other issues it sailed through and is now in the CRAN repository.

The short NEWS entry follows.

Changes in version 1.90.0-1 (2025-12-13)

  • Upgrade to Boost 1.90.0, patched as usual to comment-out diagnostic suppression messages per the request of CRAN

  • Minor upgrades to continuous integration

Via my CRANberries, there is a diffstat report relative to the previous release. Comments and suggestions about BH are welcome via the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

14 December, 2025 03:03PM

December 13, 2025

hackergotchi for Junichi Uekawa

Junichi Uekawa

I was wondering if there was some debian thread and noticed maybe something is broken in my mail setup.

I was wondering if there was some debian thread and noticed maybe something is broken in my mail setup. The amount of emails I am receiving seems to be very small.

13 December, 2025 01:41AM by Junichi Uekawa

December 11, 2025

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

#056: Running r-ci with R-devel

Welcome to post 56 in the R4 series.

The recent post #54 reviewed a number of earlier posts on r-ci, our small (but very versatile) runner for continuous integration (CI) with R. The post also introduced the notion of using a container in the ‘matrix’ of jobs defined and running in parallel. The initial motivation was the (still ongoing, and still puzzling) variation in run-times of GitHub Actions. So when running CI and relying on r2u for the ‘fast, easy, reliable: pick all three!’ provision of CRAN packages as Ubuntu binaries, a small amount of time is spent prepping a basic Ubuntu instance with the necessary setup. This can be as fast as maybe 20 to 30 seconds, but it can also stretch to almost two minutes when GitHub is busier or out of sorts for other reasons. When the CI job itself is short, that is a nuisance. We presented relying on a pre-made r2u4ci container that adds just a few commands to the standard r2u container to be complete for CI. And with that setup CI runs tend to be reliably faster.

This situation is still evolving. I have not converted any of my existing CI scripts (apart from a test instance or two), but I keep monitoring the situation. However, this also offered another perspective: why not rely on a different container for a different CI aspect? When discussing the CI approach with Jeff the other day (and helping add CI to his mmap repo), it occurred to me we could also use one of the Rocker containers for R-devel. A minimal change to the underlying run.sh script later, this was accomplished. An example is provided as both a test and an illustration in the repo for package RcppInt64 in its script ci.yaml:

    strategy:
      matrix:
        include:
          - { name: container, os: ubuntu-latest, container: rocker/r2u4ci }
          - { name: r-devel,   os: ubuntu-latest, container: rocker/drd }
          - { name: macos,     os: macos-latest }
          - { name: ubuntu,    os: ubuntu-latest }

    runs-on: ${{ matrix.os }}
    container: ${{ matrix.container }}

This runs both a standard Ubuntu setup (fourth entry) and the alternate just described relying on the container (first entry) along with the (usually commented-out) optional macOS setup (third entry). And line two brings the drd container from Rocker. The CI runner script now checks for a possible Rdevel binary as provided inside drd (along with alias RD) and uses it when present. And that is all that there is: no other change on the user side; tests now run under R-devel. You can see some of the initial runs at the rcppint64 repo actions log. Another example is now also at Jeff’s mmap repo.

It should be noted that this relies on R-devel running packages made with R-release. Every few years this breaks when R needs to break its binary API. If and when that happens this option will be costlier as the R-devel instance will then have to (re-)install its R package dependencies. This can be accommodated easily as a step in the yaml file. And under ‘normal’ circumstances it is not needed.

Having easy access to recent builds of R-devel (the container refreshes weekly on a schedule) with the convenience of r2u gives another option for package testing. I may continue to test locally with R-devel as my primary option, and most likely keep my CI small and lean (usually just one R-release run on Ubuntu), but having another option at GitHub Actions is also a good thing.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

11 December, 2025 06:29PM

December 08, 2025

Thorsten Alteholz

My Debian Activities in November 2025

Debian LTS/ELTS

This was my hundred-thirty-seventh month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian and my eighty-eighth ELTS month. As the LTS- and ELTS-teams have been merged now, there is only one paragraph left for both activities.

During my allocated time I uploaded or worked on:

  • [DLA 4381-1] net-snmp security update to fix two CVEs related to denial of service.
  • [DLA 4382-1] libsdl2 security update to fix one CVE related to a memory leak and a denial of service.
  • [DLA 4380-1] cups-filters security update to fix three CVEs related to out of bounds read or writes or a heap buffer overflow.
  • [ELA-1586-1] cups-filters security update to fix three CVEs in Buster and Stretch, related to out of bounds read or writes or a heap buffer overflow.
  • [libcupsfilters] upload to unstable to fix two CVEs
  • [cups-filters] upload to unstable to fix three CVEs
  • [cups] upload to unstable to fix two CVEs
  • [rlottie] upload to unstable to finally fix three CVEs
  • [rplay] upload to unstable to finally fix one CVE
  • [#1121342] trixie-pu bug for libcupsfilters to fix two CVEs in Trixie.
  • [#1121391] trixie-pu bug for cups-filter to fix three CVEs in Trixie.
  • [#1121392] bookworm-pu bug for cups-filter to fix three CVEs in Bookworm.
  • [#112433] trixie-pu bug for rlottie to finally fix three CVEs in Trixie.
  • [#112437] bookworm-pu bug for rlottie to finally fix three CVEs in Bookworm.

I also attended the monthly LTS/ELTS meeting and did a week of LTS/ELTS frontdesk duties. I also stumbled upon a bug in python3-paramiko, where the parsing of include statements in the ssh_config does not work. Rather annoying, but it is already fixed in the newest version, which only needs to find its way to my old VM.

Debian Printing

This month I uploaded a new upstream version or a bugfix version of:

I also uploaded cups to Trixie, to fix bug #1109471 related to a configuration problem with the admin panel.

This work is generously funded by Freexian!

Debian Astro

This month I uploaded a new upstream version or a bugfix version of:

  • siril to unstable (sponsored upload).
  • supernovas to unstable (sponsored upload).

Debian IoT

This month I uploaded a new upstream version or a bugfix version of:

Debian Mobcom

This month I uploaded a new upstream version or a bugfix version of:

misc

This month I uploaded a new upstream version or a bugfix version of:

On my fight against outdated RFPs, I closed 30 of them in November.

I started with about 3500 open RFP bugs, and after working six months on this project, I have closed 183 bugs. Of course new bugs appeared, so the overall number of bugs is only down to about 3360.

Though I view this as a successful project, I also have to admit that it is a bit boring to work on this daily. Therefore I am closing this diary again and will add the closed RFP bugs to my bug logbook now. I will also try to close some of these bugs by actually uploading some software, probably one package per month.

FTP master

This month I accepted 236 and rejected 16 packages. The overall number of packages that got accepted was 247.

08 December, 2025 03:20PM by alteholz

François Marier

Learning a new programming language with an LLM

I started learning Go this year. First, I picked a Perl project I wanted to rewrite, got a good book and ignored AI tools since I thought they would do nothing but interfere with learning. Eventually though, I decided to experiment a bit and ended up finding a few ways to use AI assistants effectively even when learning something new.

Searching more efficiently

The first use case that worked for me was search. Instead of searching on a traditional search engine and then ending up on Stack Overflow, I could get the answer I was looking for directly in an AI side-window in my editor. Of course, that's bad news for Stack Overflow.

I was however skeptical from the beginning since LLMs make mistakes, sometimes making up function signatures or APIs that don't exist. Therefore I got into the habit of going to the official standard library documentation to double-check suggestions. For example, if the LLM suggests using strings.SplitN, I verify the function signature and behaviour carefully before using it. Basically, "don't trust and do verify."
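
A minimal illustration of the kind of detail worth double-checking for strings.SplitN: the n argument caps the number of substrings (the last one keeps the unsplit rest), and a negative n means no limit.

package main

import (
  "fmt"
  "strings"
)

func main() {
  // SplitN(s, sep, n) returns at most n substrings;
  // the last one contains the rest of the string, unsplit.
  fmt.Println(strings.SplitN("a,b,c", ",", 2))  // [a b,c]
  // A negative n means no limit, i.e. the same as strings.Split.
  fmt.Println(strings.SplitN("a,b,c", ",", -1)) // [a b c]
}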

I stuck to the standard library in my project, but if an LLM recommends third-party dependencies for you, make sure they exist and that Socket doesn't flag them as malicious. Research has found that 5-20% of packages suggested by LLMs don't actually exist, making this a real attack vector (dubbed "slopsquatting").

Autocomplete is too distracting

A step I took early on was to disable AI autocomplete in my editor. When learning a new language, you need to develop muscle memory for the syntax. Also, Go is no Java. There's not that much boilerplate to write in general.

I found it quite distracting to see some almost correct code replace my thinking about the next step. I can see how one could go faster with these suggestions, but being a developer is not just about cranking out lines of code as fast as possible, it's also about constantly learning new things (and retaining them).

Asking about idiomatic code

One of the most useful prompts when learning a new language is "Is this the most idiomatic way to do this in Go?". Large language models are good at recognizing patterns and can point out when you're writing code that works but doesn't follow the conventions of the language. This is especially valuable early on when you don't yet have a feel for what "good" code looks like in that language.

It's usually pretty easy (at least for an experienced developer) to tell when the LLM suggestion is actually counterproductive or wrong. If it increases complexity or is harder to read/decode, it's probably not a good idea to do it.

Reviews

One way a new dev gets better is through code review. If you have access to a friend who's an expert in the language you're learning, then you can definitely gain a lot by asking for feedback on your code.

If you don't have access to such a valuable resource, or as a first step before you consult your friend, I found that AI-assisted code reviews can be useful:

  1. Get the model to write the review prompt for you. Describe what you want reviewed and let it generate a detailed prompt.
  2. Feed that prompt to multiple models. They each have different answers and will detect different problems.
  3. Be prepared to ignore 50% of what they recommend. Some suggestions will be stylistic preferences, others will be wrong, or irrelevant.

The value is in the other 50%: the suggestions that make you think about your code differently or catch genuine problems.

Similarly for security reviews:

  • A lot of what they flag will need to be ignored (false positives, or things that don't apply to your threat model).
  • Some of it may highlight areas for improvement that you hadn't considered.
  • Occasionally, they will point out real vulnerabilities.

But always keep in mind that AI chatbots are trained to be people-pleasers and often feel the need to suggest something when nothing was needed.

An unexpected benefit

One side effect of using AI assistants was that having them write the scaffolding for unit tests motivated me to increase my code coverage. Trimming unnecessary test cases and adding missing ones is pretty quick when the grunt work is already done, and I ended up testing more of my code (being a personal project written in my own time) than I might have otherwise.

Learning

In the end, I continue to believe in the value of learning from quality books (I find reading on paper most effective). In addition, I like to create Anki questions for common mistakes or things I find I have to look up often. Remembering something will always be faster than asking an AI tool.

So my experience this year tells me that LLMs can supplement traditional time-tested learning techniques, but I don't believe they make them obsolete.

P.S. I experimented with getting an LLM to ghost-write this post for me from an outline (+ a detailed style guide) and I ended up having to rewrite at least 75% of it. It was largely a waste of time.

08 December, 2025 12:32AM

December 07, 2025

Vincent Bernat

Compressing embedded files in Go

Go’s embed feature lets you bundle static assets into an executable, but it stores them uncompressed. This wastes space: a web interface with documentation can bloat your binary by dozens of megabytes. A proposal to optionally enable compression was declined because it is difficult to handle all use cases. One solution? Put all the assets into a ZIP archive! 🗜️

Code

The Go standard library includes a module to read and write ZIP archives. It contains a function that turns a ZIP archive into an io/fs.FS structure that can replace embed.FS in most contexts.1

package embed

import (
  "archive/zip"
  "bytes"
  _ "embed"
  "fmt"
  "io/fs"
  "sync"
)

//go:embed data/embed.zip
var embeddedZip []byte

var dataOnce = sync.OnceValue(func() *zip.Reader {
  r, err := zip.NewReader(bytes.NewReader(embeddedZip), int64(len(embeddedZip)))
  if err != nil {
    panic(fmt.Sprintf("cannot read embedded archive: %s", err))
  }
  return r
})

func Data() fs.FS {
  return dataOnce()
}
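
A minimal usage sketch (the asset path and the import path below are placeholders, not actual Akvorado names): the returned fs.FS works with the usual io/fs helpers, so callers need not care that the assets sit in a ZIP archive.

package main

import (
  "fmt"
  "io/fs"
  "log"

  "akvorado/common/embed" // placeholder import path
)

func main() {
  // fs.ReadFile works on any fs.FS, including the archive-backed one.
  data, err := fs.ReadFile(embed.Data(), "docs/intro.md") // placeholder path
  if err != nil {
    log.Fatal(err)
  }
  fmt.Printf("read %d bytes\n", len(data))
}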

We can build the embed.zip archive with a rule in a Makefile. We specify the files to embed as dependencies to ensure changes are detected.

common/embed/data/embed.zip: console/data/frontend console/data/docs
common/embed/data/embed.zip: orchestrator/clickhouse/data/protocols.csv 
common/embed/data/embed.zip: orchestrator/clickhouse/data/icmp.csv
common/embed/data/embed.zip: orchestrator/clickhouse/data/asns.csv
common/embed/data/embed.zip:
    mkdir -p common/embed/data && zip --quiet --recurse-paths --filesync $@ $^

The automatic variable $@ is the rule target, while $^ expands to all the dependencies, modified or not.

Space gain

Akvorado, a flow collector written in Go, embeds several static assets:

  • CSV files to translate port numbers, protocols or AS numbers, and
  • HTML, CSS, JS, and image files for the web interface, and
  • the documentation.

Breakdown of the space used by each component, shown as a treemap, before (left) and after (right) the introduction of embed.zip.

Embedding these assets into a ZIP archive reduced the size of the Akvorado executable by more than 4 MiB:

$ unzip -p common/embed/data/embed.zip | wc -c | numfmt --to=iec
7.3M
$ ll common/embed/data/embed.zip
-rw-r--r-- 1 bernat users 2.9M Dec  7 17:17 common/embed/data/embed.zip

Performance loss

Reading from a compressed archive is not as fast as reading a flat file. A simple benchmark shows it is more than 4× slower. It also allocates some memory.2

goos: linux
goarch: amd64
pkg: akvorado/common/embed
cpu: AMD Ryzen 5 5600X 6-Core Processor
BenchmarkData/compressed-12     2262   526553 ns/op   610 B/op   10 allocs/op
BenchmarkData/uncompressed-12   9482   123175 ns/op     0 B/op    0 allocs/op
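
A minimal sketch of how such a comparison can be set up (the actual benchmark in Akvorado may differ; rawFS and the asset path are placeholders for an uncompressed embed.FS counterpart):

package embed

import (
  "embed"
  "io/fs"
  "testing"
)

// rawFS is a plain embed.FS counterpart, used only for the comparison.
//go:embed data/docs
var rawFS embed.FS

func BenchmarkData(b *testing.B) {
  for _, bench := range []struct {
    name string
    fsys fs.FS
  }{
    {"compressed", Data()},
    {"uncompressed", rawFS},
  } {
    b.Run(bench.name, func(b *testing.B) {
      b.ReportAllocs()
      for i := 0; i < b.N; i++ {
        if _, err := fs.ReadFile(bench.fsys, "data/docs/usage.md"); err != nil {
          b.Fatal(err)
        }
      }
    })
  }
}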

Each access to an asset requires a decompression step, as seen in this flame graph:

CPU flame graph comparing the time spent on CPU when reading data from embed.zip (left) versus reading data directly (right). Because the Go testing framework executes the benchmark for uncompressed data 4 times more often, it uses the same horizontal space as the benchmark for compressed data.

While a ZIP archive has an index to quickly find the requested file, seeking inside a compressed file is currently not possible.3 Therefore, the files from a compressed archive do not implement the io.ReaderAt or io.Seeker interfaces, unlike directly embedded files. This prevents some features, like serving partial files or detecting MIME types when serving files over HTTP.
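
When in doubt, a caller can probe for the optional interfaces at run time (again with placeholder paths and import path):

package main

import (
  "io"
  "log"

  "akvorado/common/embed" // placeholder import path
)

func main() {
  f, err := embed.Data().Open("docs/intro.md") // placeholder path
  if err != nil {
    log.Fatal(err)
  }
  defer f.Close()
  // fs.File only guarantees Read, Stat and Close; seeking is optional.
  if _, ok := f.(io.Seeker); !ok {
    log.Println("no random access: read the whole file into memory instead")
  }
}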


For Akvorado, this is an acceptable compromise to save a few mebibytes from an executable of almost 100 MiB. Next week, I will continue this futile adventure by explaining how I prevented Go from disabling dead code elimination! 🦥


  1. You can safely read multiple files concurrently. However, it does not implement ReadDir() and ReadFile() methods. ↩︎

  2. You could keep frequently accessed assets in memory. This reduces CPU usage and trades cached memory for resident memory. ↩︎

  3. SOZip is a profile that enables fast random access in a compressed file. However, Go’s archive/zip module does not support it. ↩︎

07 December, 2025 11:05PM by Vincent Bernat

Iustin Pop

Yes, still alive!

Yeah, again three months have passed since my last (trivial) post, and I really don’t know where the time has flown.

I suppose the biggest problem was the long summer vacation, which threw me off-track, and then craziness started. Work work work, no time for anything, which kept me fully busy in August, and then “you should travel”.

So mid-September I went on my first business trip since Covid, again to Kirkland, which in itself was awesome. Flew out Sunday, and as I was concerned I was going to lose too much fitness—had a half-marathon planned on the weekend after the return—I ran every morning of the four days I was there. And of course, on the last day, I woke up even earlier (05:30 AM), went out to run before sunrise, intending to do a very simple “run along the road that borders the lake for 2.5K, then back”. And right at the farthest point, a hundred metres before my goal of turning around, I tripped, started falling, and as I was falling, I hit—sideways—a metal pole. I was in a bus station, it was the pole that has the schedule at the top, and I hit it at relatively full speed, right across my left-side ribs. The crash took the entire air out of my lungs, and I don’t remember if I ever felt pain/sensation like that—I was seriously not able to breathe for 20 seconds or so, and I was wondering if I’m going to pass out at this rate.

Only 20 seconds, because my Garmin started howling like a police siren, and the screen was saying something along the lines of: “Incident detected; contacting emergency services in 40…35…” and I was fumbling to cancel that, since a) I wasn’t that bad, b) notifying my wife that I had a crash would have not been a smart idea.

My left leg was scraped in a few places, my left hand pretty badly, or more than just scraped, so my focus was on limping back, and finding a fountain to wash my injuries, which I did, so I kept running with blood dripping down my hand. Fun fun, everything was hurting, I took an Uber for the ~1Km to the office, had many meetings, took another Uber and flew back to Zurich. Seattle → San Francisco → Zürich, I think 14 hours, with my ribs hurting pretty badly. But I got home (Friday afternoon), and was wondering if I can run or not on Saturday.

Saturday comes, I feel pretty OK, so I said let’s try, will stop if the pain is too great. I pick up my number, I go to the start, of course in the last block and not my normal block, and I start running. After 50 metres, I knew this won’t be good enough, but I said, let’s make it to the first kilometre. Then to the first fuelling point, then to the first aid point, at which moment I felt good enough to go to the second one.

Long story short, I ran the whole half marathon, with pain. Every stop for fuelling was mentally hard, as the pain stopped, and I knew I had to start running again, and the pain would resume. In the end, managed to finish: two and a half hours, instead of just two hours, but alive and very happy. Of course, I didn’t know what was waiting for me… Sunday I wake up in heavy pain, and despite painkillers, I was not feeling much better. The following night was terrible, Monday morning I went to the doctor, had X-rays, discussion with a radiologist. “Not really broken, but more than just bruised. See this angle here? Bones don’t have angles normally”. Painkillers, chest/abdomen wrapping, no running! So my attempts to “not lose fitness” put me off running for a couple of weeks.

Then October came, and I was getting better, but work was getting even more crazy. I don’t know where November passed, honestly, and now we’re already in December. I did manage to run, quite well, managed to bike a tiny bit and swim a little, but I’m not in a place where I can keep a regular and consistent schedule.

On the good side, I managed this year, for the first time since Covid, to not get sick. Hey, a sport injury is 100× better than a sickness, like I had in previous years, taking me out for two weeks. But life was crazy enough that I didn’t read some of my email accounts for months, and I’m just now starting to catch up to, well, baseline.

Of course, “the” rib—the lowest one on the left side—is long-healed, or so I thought. After some strength training early this week, I was very sore the next day, and I wanted to test whether my rib is still sore. I touched it at “the point”, and it hurt so badly I couldn’t believe. Two and a half months, and it’s not done-done.

And now it’s just two weeks before Christmas and New Year’s, and that time off will ruin my rhythm again. At least ski vacation is booked, ski service is done, and slowly, work is getting in good enough shape to actually enjoy thinking about vacation.

So, in the end, a very adventurous last third of the year, and that wasn’t even all. As I’m writing this, my right wrist is bandaged and for the past 24 hours it hasn’t hurt too much, but that’s another, and not so interesting, story.

I’ll close with a yay for always being behind/backlogged, but alive and relatively well. My sport injuries are “elective injuries” so to speak, and I’m very thankful for that. See you in the next post!

07 December, 2025 08:37PM

December 06, 2025

Simon Josefsson

Reproducible Guix Container Images

Around a year ago I wrote about Guix Container Images for GitLab CI/CD and these images have served the community well. Besides continuous use in CI/CD, these Guix container images are used to confirm reproducibility of the source tarball artifacts in the releases of Libtasn1 v4.20, InetUtils v2.6, Libidn2 v2.3.8, Libidn v1.43, SASL v2.2.2, Guile-GnuTLS v5.0.1, and OATH Toolkit v2.6.13. See how all those release announcements mention a Guix commit? That’s the essential supply-chain information about the Guix build environment that allows the artifacts to be re-created. To make sure this is repeatable, the release tarball artifacts are re-created from source code every week in the verify-reproducible-artifacts project, which I wrote about earlier. Guix’s time-travelling feature makes this sustainable to maintain, and hopefully it will continue to be able to reproduce the exact same tarball artifacts for years to come.

During the last year, unfortunately, Guix was removed from Debian stable. My Guix container images were created from Debian with that Guix package. My setup continued to work since the old stage0 Debian+Guix containers were still available. Such a setup is not sustainable, as there will be bit-rot and we don’t want to rely on old containers forever, which (after the removal of Guix in Debian) could not be reproduced any more. Let this be a reminder of how user-empowering a feature like Guix time-travelling is! I have reworked my Guix container image setup, and this post is an update on the current status of this effort.

The first step was to re-engineer Debian container images with Guix, and I realized these were useful on their own, and warrant a separate project. A more narrowly scoped project will hopefully make it easier to keep them working. Now instead of apt-get install guix they use the official Guix guix-install.sh approach. Read more about that effort in the announcement of Debian with Guix.

The second step was to reconsider my approach to generate the Guix images. The earlier design had several stages. First, Debian+Guix containers were created. Then from those containers, a pure Guix container was created. Finally, using the pure Guix container another pure Guix container was created. The idea behind that GCC-like approach was to get to reproducible images that were created from an image that had no Debian left on it. However, I never managed to finish this. Partially because I hadn’t realized that every time you build a Guix container image from Guix, you effectively go back in time. When using Guix version X to build a container with Guix on it, it will not put Guix version X into the container but will put whatever version of Guix is available in its package archive, which will be an earlier version, such as version X-N. I had hoped to overcome this somehow (running a guix pull in newly generated images may work), but never finished this before Guix was removed from Debian.

So what could a better design look like?

For efficiency, I had already started experimenting with generating the final images directly from the Debian+Guix images, and after reproducibility bugs were fixed I was able to get to reproducible images. However, I was still concerned that the Debian container could taint the process somehow, and was also concerned about the implied dependency on non-free software in Debian.

I’ve been using comparative rebuilds across “similar” distributions to confirm artifact reproducibility for my software projects, comparing builds on Trisquel 11 with Ubuntu 22.04, and AlmaLinux 9 with RockyLinux 9, for example. This works surprisingly well. Including one freedom-respecting distribution like Trisquel will detect if any non-free software has a bearing on the artifacts. Using different architectures, such as amd64 vs arm64, also helps with deeper supply-chain concerns.

My conclusion was that I wanted containers with the same Guix commit for both Trisquel and Ubuntu. Given the similarity with Debian, adapting and launching the Guix on Trisquel/Debian project was straightforward. So we now have Trisquel 11/12 and Ubuntu 22.04/24.04 images with the same Guix on them.

Do you see where the debian-with-guix and guix-on-dpkg projects are leading to?

We are now ready to look at the modernized Guix Container Images project. The tags are the same as before:

registry.gitlab.com/debdistutils/guix/container:latest
registry.gitlab.com/debdistutils/guix/container:slim
registry.gitlab.com/debdistutils/guix/container:extra
registry.gitlab.com/debdistutils/guix/container:gash

The method to create them is different. Now there is a “build” job that uses the earlier Guix+Trisquel container (for amd64) or Guix+Debian (for arm64, pending Trisquel arm64 containers). The build job creates the final containers directly. Next an Ubuntu “reproduce” job is launched that runs the same commands, failing if it cannot generate the bit-by-bit identical container. Then single-arch images are tested (installing/building GNU hello and building libksba), and then pushed to the GitLab registry, adding multi-arch images in the process. Then the final multi-arch containers are tested by building Guile-GnuTLS and, on success, uploaded to the Docker Hub.

How would you use them? A simple way to start the container is like this:

jas@kaka:~$ podman run -it --privileged --entrypoint=/bin/sh registry.gitlab.com/debdistutils/guix/container:latest
sh-5.2# env HOME=/ guix describe # https://issues.guix.gnu.org/74949
  guix 21ce6b3
    repository URL: https://git.guix.gnu.org/guix.git
    branch: master
    commit: 21ce6b392ace4c4d22543abc41bd7c22596cd6d2
sh-5.2# 

The need for --entrypoint=/bin/sh is because Guix’s pack command sets up the entry point differently than most other containers. This could probably be fixed if people want that, and there may be open bug reports about this.

The need for --privileged is more problematic, but is discussed upstream. The above example works fine without it, but running anything more elaborate with guix-daemon installing packages will trigger a fatal error. Speaking of that, here is a snippet of commands that allow you to install Guix packages in the container.

cp -rL /gnu/store/*profile/etc/* /etc/
echo 'root:x:0:0:root:/:/bin/sh' > /etc/passwd
echo 'root:x:0:' > /etc/group
groupadd --system guixbuild
for i in $(seq -w 1 10); do useradd -g guixbuild -G guixbuild -d /var/empty -s $(command -v nologin) -c "Guix build user $i" --system guixbuilder$i; done
env LANG=C.UTF-8 guix-daemon --build-users-group=guixbuild &
guix archive --authorize < /share/guix/ci.guix.gnu.org.pub
guix archive --authorize < /share/guix/bordeaux.guix.gnu.org.pub
guix install hello
GUIX_PROFILE="/var/guix/profiles/per-user/root/guix-profile"
. "$GUIX_PROFILE/etc/profile"
hello

This could be simplified, but we chose not to hard-code this in our containers because some of these are things that probably shouldn’t be papered over but fixed properly somehow. In some execution environments, you may need to pass --disable-chroot to guix-daemon.

To use the containers to build something in a GitLab pipeline, here is an example snippet:

test-amd64-latest-wget-configure-make-libksba:
  image: registry.gitlab.com/debdistutils/guix/container:latest
  before_script:
  - cp -rL /gnu/store/*profile/etc/* /etc/
  - echo 'root:x:0:0:root:/:/bin/sh' > /etc/passwd
  - echo 'root:x:0:' > /etc/group
  - groupadd --system guixbuild
  - for i in $(seq -w 1 10); do useradd -g guixbuild -G guixbuild -d /var/empty -s $(command -v nologin) -c "Guix build user $i" --system guixbuilder$i; done
  - export HOME=/
  - env LANG=C.UTF-8 guix-daemon --build-users-group=guixbuild &
  - guix archive --authorize < /share/guix/ci.guix.gnu.org.pub
  - guix archive --authorize < /share/guix/bordeaux.guix.gnu.org.pub
  - guix describe
  - guix install libgpg-error
  - GUIX_PROFILE="//.guix-profile"
  - . "$GUIX_PROFILE/etc/profile"
  script:
  - wget https://www.gnupg.org/ftp/gcrypt/libksba/libksba-1.6.7.tar.bz2
  - tar xfa libksba-1.6.7.tar.bz2
  - cd libksba-1.6.7
  - ./configure
  - make V=1
  - make check VERBOSE=t V=1

More help on the project page for the Guix Container Images.

That’s it for tonight folks, and remember, Happy Hacking!

06 December, 2025 10:22PM by simon

hackergotchi for Jonathan Dowland

Jonathan Dowland

thesis

It's done! It's over! I've graduated, I have the scroll, I'm staring at the eye-watering prices for the official photographer snap, I'm adjusting to post-thesis life.

My PhD thesis revisions have been accepted and my thesis is now available from Newcastle University Library's eThesis repository.

As part of submitting my corrections, I wrote a brief report detailing the changes I made from my thesis at the time of the viva. I also produced a latexdiff marked-up copy of the thesis to visualise the exact changes. In order to shed some light on the post-viva corrections process, at least at my institution, and in the hope that they are of some use to someone, I'm sharing those documents:

06 December, 2025 09:41PM

December 04, 2025

hackergotchi for Colin Watson

Colin Watson

Free software activity in November 2025

My Debian contributions this month were all sponsored by Freexian. I had a bit less time than usual, because Freexian collaborators gathered in Marseille this month for our yearly sprint, doing some planning for next year.

You can also support my work directly via Liberapay or GitHub Sponsors.

OpenSSH

I began preparing for the second stage of the GSS-API key exchange package split (some details have changed since that message). It seems that we’ll need to wait until Ubuntu 26.04 LTS has been released, but that’s close enough that it’s worth making sure we’re ready. This month I just did some packaging cleanups that would otherwise have been annoying to copy, such as removing support for direct upgrades from pre-bookworm. I’m considering some other package rearrangements to make the split easier to manage, but haven’t made any decisions here yet.

This also led me to start on a long-overdue bug triage pass, mainly consisting of applying usertags to lots of our open bugs to sort them by which program they apply to, and also closing a few that have been fixed, since some bugs will eventually need to be reassigned to GSS-API packages and it would be helpful to make them easier to find. At the time of writing, about 30% of the bug list remains to be categorized this way.

Python packaging

I upgraded these packages to new upstream versions:

I packaged django-pgtransaction and backported it to trixie, since we plan to use it in Debusine; and I adopted python-certifi for the Python team.

I fixed or helped to fix several other build/test failures:

I fixed a couple of other bugs:

Other bits and pieces

Code reviews

04 December, 2025 05:55PM by Colin Watson

hackergotchi for Ben Hutchings

Ben Hutchings

FOSS activity in November 2025

04 December, 2025 02:59PM by Ben Hutchings

December 03, 2025

Reproducible Builds

Reproducible Builds in November 2025

Welcome to the report for November 2025 from the Reproducible Builds project!

These monthly reports outline what we’ve been up to over the past month, highlighting items of news from elsewhere in the increasingly-important area of software supply-chain security. As always, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.

In this report:

  1. “10 years of Reproducible Builds” at SeaGL
  2. Distribution work
  3. Tool development
  4. Website updates
  5. Miscellaneous news
  6. Software Supply Chain Security of Web3
  7. Upstream patches

‘10 years of Reproducible Builds’ at SeaGL 2025

On Friday 8th November, Chris Lamb gave a talk called 10 years of Reproducible Builds at SeaGL in Seattle, WA.

Founded in 2013, SeaGL is a free, grassroots technical summit dedicated to spreading awareness and knowledge about free source software, hardware and culture. Chris’ talk:

[…] introduces the concept of reproducible builds, its technical underpinnings and its potentially transformative impact on software security and transparency. It is aimed at developers, security professionals and policy-makers who are concerned with enhancing trust and accountability in our software. It also provides a history of the Reproducible Builds project, which is approximately ten years old. How are we getting on? What have we got left to do? Aren’t all the builds reproducible now?


Distribution work

In Debian this month, Jochen Sprickerhof created a merge request to replace the use of reprotest in Debian’s Salsa Continuous Integration (CI) pipeline with debrebuild. Jochen cites the advantages as being threefold: firstly, that “only one extra build needed”; it “uses the same sbuild and ccache tooling as the normal build”; and “works for any Debian release”. The merge request was merged by Emmanuel Arias and is now active.

kpcyrd posted to our mailing list announcing the initial release of repro-threshold, which implements an APT transport that “defines a threshold of at least X of my N trusted rebuilders need to confirm they reproduced the binary” before installing Debian packages. “Configuration can be done through a config file, or through a curses-like user interface.”

Holger then merged two commits by Jochen Sprickerhof in order to address a fakeroot-related reproducibility issue in the debian-installer, and Jörg Jaspert deployed a patch by Ivo De Decker for a bug originally filed by Holger in February 2025 related to some Debian packages not being archived on snapshot.debian.org.

Elsewhere, Roland Clobus performed some analysis on the “live” Debian trixie images, which he determined were not reproducible. However, in a follow-up post, Roland happily reports that the issues have been handled. In addition, 145 reviews of Debian packages were added, 12 were updated and 15 were removed this month adding to our knowledge about identified issues.

Lastly, Jochen Sprickerhof filed a bug announcing their intention to “binary NMU” a very large number of packages written in the R programming language after a reproducibility-related toolchain bug was fixed.


Bernhard M. Wiedemann posted another openSUSE monthly update for their work there.


Julien Malka and Arnout Engelen launched the new hash collection server for NixOS. Aside from improved reporting to help focus reproducible builds efforts within NixOS, it collects build hashes as individually-signed attestations from independent builders, laying the groundwork for further tooling.


Tool development

diffoscope version 307 was uploaded to Debian unstable (as well as version 309). These changes included further attempts to automatically deploy to PyPI by liaising with the PyPI developers/maintainers (with this experimental feature). [][][]

In addition, reprotest versions 0.7.31 and 0.7.32 were uploaded to Debian unstable by Holger Levsen, who also made the following changes:

  • Do not vary the architecture personality if the kernel is not varied. (Thanks to Raúl Cumplido). []
  • Drop the debian/watch file, as Lintian now flags this as an error for ‘native’ Debian packages. [][]
  • Bump Standards-Version to 4.7.2, with no changes needed. []
  • Drop the Rules-Requires-Root header as it is no longer required. []

In addition, Vagrant Cascadian fixed a build failure by removing some extra whitespace from an older changelog entry. []


Website updates

Once again, there were a number of improvements made to our website this month including:


Miscellaneous news


Software Supply Chain Security of Web3

Via our mailing list, Martin Monperrus let us know about their recently-published page on the Software Supply Chain Security of Web3. The abstract of their paper is as follows:

Web3 applications, built on blockchain technology, manage billions of dollars in digital assets through decentralized applications (dApps) and smart contracts. These systems rely on complex, software supply chains that introduce significant security vulnerabilities. This paper examines the software supply chain security challenges unique to the Web3 ecosystem, where traditional Web2 software supply chain problems intersect with the immutable and high-stakes nature of blockchain technology. We analyze the threat landscape and propose mitigation strategies to strengthen the security posture of Web3 systems.

Their paper lists reproducible builds as one of the mitigating strategies. A PDF of the full text is available to download.


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:



Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

03 December, 2025 08:28PM

December 02, 2025

François Marier

Recovering from a broken update on the Turris Omnia

The recent Turris OS update from 7.2.3 to 9.0.0 took down my WiFi entirely. The wired network still works fine, but wireless is completely broken.

Factory reset

It turns out the Omnia has an extensive (and fast) factory reset / recovery mode via the hardware reset button.

Unfortunately, the factory image didn't work for me, possibly because I don't use the stock WiFi radios anymore.

Rolling back with schnapps

Thanks to the fact that the Omnia uses a btrfs root filesystem, and the liberal use of snapshots around updates, I was able to roll back to the pre-9.0.0 state.

First, I connected to the router using ssh:

ssh root@192.168.1.1

Then I listed the available snapshots:

$ schnapps list
# | Type      | Size        | Date                        | Description
------+-----------+-------------+-----------------------------+------------------------------------
  500 | post      |    15.98MiB | 2025-08-09 11:27:48 -0700   | Automatic post-update snapshot (TurrisOS 7.2.2 - hbs)
  506 | pre       |    17.92MiB | 2025-09-12 03:44:32 -0700   | Automatic pre-update snapshot (TurrisOS 7.2.2 - hbs)
  507 | post      |    17.88MiB | 2025-09-12 03:45:14 -0700   | Automatic post-update snapshot (TurrisOS 7.2.3 - hbs)
  515 | time      |    20.03MiB | 2025-11-02 01:05:01 -0700   | Snapshot created by cron
  516 | time      |    20.05MiB | 2025-11-09 01:05:01 -0800   | Snapshot created by cron
  517 | time      |    20.29MiB | 2025-11-16 01:05:00 -0800   | Snapshot created by cron
  518 | time      |    20.64MiB | 2025-11-23 01:05:01 -0800   | Snapshot created by cron
  519 | time      |    20.83MiB | 2025-11-30 01:05:00 -0800   | Snapshot created by cron
  520 | pre       |    87.91MiB | 2025-11-30 07:41:10 -0800   | Automatic pre-update snapshot (TurrisOS 7.2.3 - hbs)
  521 | post      |   196.32MiB | 2025-11-30 07:48:11 -0800   | Automatic post-update snapshot (TurrisOS 9.0.0 - hbs)
  523 | pre       |     4.44MiB | 2025-11-30 20:47:31 -0800   | Automatic pre-update snapshot
  524 | post      |   224.00KiB | 2025-11-30 20:47:43 -0800   | Automatic post-update snapshot
  525 | rollback  |   224.00KiB | 2025-12-01 04:56:32 +0000   | Rollback to snapshot factory
  526 | pre       |     4.44MiB | 2025-11-30 21:04:19 -0800   | Automatic pre-update snapshot
  527 | post      |   272.00KiB | 2025-11-30 21:04:31 -0800   | Automatic post-update snapshot
  528 | rollback  |   272.00KiB | 2025-12-01 05:13:38 +0000   | Rollback to snapshot factory
  529 | pre       |     4.52MiB | 2025-11-30 21:28:44 -0800   | Automatic pre-update snapshot
  530 | single    |   208.00KiB |                             | 
  531 | rollback  |   224.00KiB | 2025-12-01 05:29:47 +0000   | Rollback to snapshot factory

Finally, I rolled back to the exact state I was on before the 9.0.0 update:

$ schnapps rollback 520
Current state saved as snapshot number 532
Rolled back to snapshot 520

Full wipe

As an aside, it turns out that the factory reset functionality is implemented as a btrfs rollback to a special factory snapshot. This is why it is so fast, but it also means that doing a simple factory reset doesn't wipe the data on your router. If you are planning to sell your device or otherwise dispose of it, you also need to delete all btrfs snapshots.
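
For reference, a minimal sketch of that clean-up, assuming schnapps's delete subcommand takes snapshot numbers like the ones shown in the listing above (double-check with schnapps help on your router before deleting anything):

$ schnapps list
$ schnapps delete 500
$ schnapps delete 506
$ schnapps delete 507
# ...and so on for each remaining numbered snapshot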

Conclusion

While this update was very disappointing, especially since nothing like this had happened with previous major Turris OS updates, it made me discover just how great the recovery tools are. It would be pretty tricky to fully brick one of these devices.

02 December, 2025 11:23PM

Simon Josefsson

Guix on Trisquel & Ubuntu for Reproducible CI/CD Artifacts

Last week I published Guix on Debian container images that prepared for today’s announcement of Guix on Trisquel/Ubuntu container images.

I have published images with reasonably modern Guix for Trisquel 11 aramo, Trisquel 12 ecne, Ubuntu 22.04 and Ubuntu 24.04. The Ubuntu images are available for both amd64 and arm64, but unfortunately Trisquel arm64 containers aren’t available yet so they are only for amd64. Images for ppc64el and riscv64 are work in progress. The currently supported container names:

registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel11-guix
registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel12-guix
registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu22.04-guix
registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu24.04-guix

Or, if you prefer Docker Hub, the same images are available as guix-on-dpkg:

docker.io/jas4711/guix-on-dpkg:trisquel11-guix
docker.io/jas4711/guix-on-dpkg:trisquel12-guix
docker.io/jas4711/guix-on-dpkg:ubuntu22.04-guix
docker.io/jas4711/guix-on-dpkg:ubuntu24.04-guix

You may use them as follows. See the guix-on-dpkg README for how to start guix-daemon and install packages.

jas@kaka:~$ podman run -it --hostname guix --rm registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel11-guix
root@guix:/# head -1 /etc/os-release 
NAME="Trisquel GNU/Linux"
root@guix:/# guix describe
  guix 136fc8b
    repository URL: https://gitlab.com/debdistutils/guix/mirror.git
    branch: master
    commit: 136fc8bfe91a64d28b6c54cf8f5930ffe787c16e
root@guix:/# 

You may now be asking yourself: why? Fear not, gentle reader, because having two container images of roughly similar software is a great tool for attempting to build software artifacts reproducibly, and for comparing the results to spot differences. Obviously.

I have been using this pattern to get reproducible tarball artifacts of several software releases for around a year and a half, since libntlm 1.8.

Let’s walk through how to set up a CI/CD pipeline that will build a piece of software, in four different jobs for Trisquel 11/12 and Ubuntu 22.04/24.04. I am in the process of learning Codeberg/Forgejo CI/CD, so I am still using GitLab CI/CD here, but the concepts should be the same regardless of platform. Let’s start by defining a job skeleton:

.guile-gnutls: &guile-gnutls
  before_script:
  - /root/.config/guix/current/bin/guix-daemon --version
  - env LC_ALL=C.UTF-8 /root/.config/guix/current/bin/guix-daemon --build-users-group=guixbuild $GUIX_DAEMON_ARGS &
  - GUIX_PROFILE=/root/.config/guix/current; . "$GUIX_PROFILE/etc/profile"
  - type guix
  - guix --version
  - guix describe
  - time guix install --verbosity=0 wget gcc-toolchain autoconf automake libtool gnutls guile pkg-config
  - time apt-get update
  - time apt-get install -y make git texinfo
  - GUIX_PROFILE="/root/.guix-profile"; . "$GUIX_PROFILE/etc/profile"
  script:
  - git clone https://codeberg.org/guile-gnutls/guile-gnutls.git
  - cd guile-gnutls
  - git checkout v5.0.1
  - ./bootstrap
  - ./configure
  - make V=1
  - make V=1 check VERBOSE=t
  - make V=1 dist
  after_script:
  - mkdir -pv out/$CI_JOB_NAME_SLUG/src
  - mv -v guile-gnutls/*-src.tar.* out/$CI_JOB_NAME_SLUG/src/
  - mv -v guile-gnutls/*.tar.* out/$CI_JOB_NAME_SLUG/
  artifacts:
    paths:
    - out/**

This installs some packages, clones guile-gnutls (it could be any project, that’s just an example), builds it and returns tarball artifacts. The artifacts are the git-archive and make dist tarballs.

Let’s instantiate the skeleton into four jobs, running the Trisquel 11/12 jobs on amd64 and the Ubuntu 22.04/24.04 jobs on arm64 for fun.

guile-gnutls-trisquel11-amd64:
  tags: [ saas-linux-medium-amd64 ]
  image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel11-guix
  extends: .guile-gnutls

guile-gnutls-ubuntu22.04-arm64:
  tags: [ saas-linux-medium-arm64 ]
  image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu22.04-guix
  extends: .guile-gnutls

guile-gnutls-trisquel12-amd64:
  tags: [ saas-linux-medium-amd64 ]
  image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel12-guix
  extends: .guile-gnutls

guile-gnutls-ubuntu24.04-arm64:
  tags: [ saas-linux-medium-arm64 ]
  image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu24.04-guix
  extends: .guile-gnutls

Running this pipeline will result in artifacts that you want to confirm for reproducibility. Let’s add a pipeline job to do the comparison:

guile-gnutls-compare:
  image: alpine:latest
  needs: [ guile-gnutls-trisquel11-amd64,
           guile-gnutls-trisquel12-amd64,
           guile-gnutls-ubuntu22.04-arm64,
           guile-gnutls-ubuntu24.04-arm64 ]
  script:
  - cd out
  - sha256sum */*.tar.* */*/*.tar.* | sort | grep    -- -src.tar.
  - sha256sum */*.tar.* */*/*.tar.* | sort | grep -v -- -src.tar.
  - sha256sum */*.tar.* */*/*.tar.* | sort | uniq -c -w64 | sort -rn
  - sha256sum */*.tar.* */*/*.tar.* | grep    -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
  - sha256sum */*.tar.* */*/*.tar.* | grep -v -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
# Confirm modern git-archive tarball reproducibility
  - cmp guile-gnutls-trisquel12-amd64/src/*.tar.gz guile-gnutls-ubuntu24-04-arm64/src/*.tar.gz
# Confirm old git-archive (export-subst but long git describe) tarball reproducibility
  - cmp guile-gnutls-trisquel11-amd64/src/*.tar.gz guile-gnutls-ubuntu22-04-arm64/src/*.tar.gz
# Confirm 'make dist' generated tarball reproducibility
  - cmp guile-gnutls-trisquel11-amd64/*.tar.gz guile-gnutls-ubuntu22-04-arm64/*.tar.gz
  - cmp guile-gnutls-trisquel12-amd64/*.tar.gz guile-gnutls-ubuntu24-04-arm64/*.tar.gz
  artifacts:
    when: always
    paths:
    - ./out/**

Look how beautiful, almost like ASCII art! The commands print SHA256 checksums of the artifacts, sorted in a couple of ways, and then proceeds to compare relevant artifacts. What would the output of such a run be, you may wonder? You can look for yourself in the guix-on-dpkg pipeline but here is the gist of it:

$ cd out
$ sha256sum */*.tar.* */*/*.tar.* | sort | grep    -- -src.tar.
79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca  guile-gnutls-trisquel11-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca  guile-gnutls-ubuntu22-04-arm64/src/guile-gnutls-v5.0.1-src.tar.gz
b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f  guile-gnutls-trisquel12-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f  guile-gnutls-ubuntu24-04-arm64/src/guile-gnutls-v5.0.1-src.tar.gz
$ sha256sum */*.tar.* */*/*.tar.* | sort | grep -v -- -src.tar.
1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a  guile-gnutls-trisquel11-amd64/guile-gnutls-5.0.1.tar.gz
1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a  guile-gnutls-ubuntu22-04-arm64/guile-gnutls-5.0.1.tar.gz
bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84  guile-gnutls-trisquel12-amd64/guile-gnutls-5.0.1.tar.gz
bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84  guile-gnutls-ubuntu24-04-arm64/guile-gnutls-5.0.1.tar.gz
$ sha256sum */*.tar.* */*/*.tar.* | sort | uniq -c -w64 | sort -rn
      2 bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84  guile-gnutls-trisquel12-amd64/guile-gnutls-5.0.1.tar.gz
      2 b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f  guile-gnutls-trisquel12-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
      2 79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca  guile-gnutls-trisquel11-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
      2 1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a  guile-gnutls-trisquel11-amd64/guile-gnutls-5.0.1.tar.gz
$ sha256sum */*.tar.* */*/*.tar.* | grep    -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
      2 79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca  guile-gnutls-trisquel11-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
      2 b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f  guile-gnutls-trisquel12-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
$ sha256sum */*.tar.* */*/*.tar.* | grep -v -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
      2 1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a  guile-gnutls-trisquel11-amd64/guile-gnutls-5.0.1.tar.gz
      2 bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84  guile-gnutls-trisquel12-amd64/guile-gnutls-5.0.1.tar.gz
$ cmp guile-gnutls-trisquel12-amd64/src/*.tar.gz guile-gnutls-ubuntu24-04-arm64/src/*.tar.gz
$ cmp guile-gnutls-trisquel11-amd64/src/*.tar.gz guile-gnutls-ubuntu22-04-arm64/src/*.tar.gz
$ cmp guile-gnutls-trisquel11-amd64/*.tar.gz guile-gnutls-ubuntu22-04-arm64/*.tar.gz
$ cmp guile-gnutls-trisquel12-amd64/*.tar.gz guile-gnutls-ubuntu24-04-arm64/*.tar.gz

That’s it for today, but stay tuned for more updates on using Guix in containers, and remember; Happy Hacking!

02 December, 2025 10:01PM by simon

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

duckdb-mlpack 0.0.5: Added kmeans, version helpers, documentation

A new release of the still-recent duckdb extension for mlpack, the C++ header-only library for machine learning, was merged into the duckdb community extensions repo today, and has been updated at its duckdb ‘mlpack’ extension page.

This release 0.0.5 adds one new method: kmeans clustering. We also added two version accessors for both mlpack and armadillo. We found during the work on random forests (added in 0.0.4) that the multithreaded random number generation was not quite right in the respective upstream codes. This has by now been corrected in armadillo 15.2.2 as well as the trunk version of mlpack so if you build with those, and set a seed, then your forests and classification will be stable across reruns. We added a second state variable mlpack_silent that can be used to suppress even the minimal prediction quality summary some methods show, and expanded the documentation.

For more details, see the repo for code, issues and more, and the extension page for more about this duckdb community extension.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

02 December, 2025 05:40PM

Birger Schacht

Status update, November 2025

I started this month with a week of vacation which was followed by a small planned surgery and two weeks of sick leave. Nonetheless, I packaged and uploaded new releases of a couple of packages:

  • swayidle updated to version 1.9.0-1
  • swaylock updated to version 1.8.4-1
  • foot updated to version 1.25.0-1
  • swayimg updated to version 4.6-1
  • scdoc updated to version 1.11.4-1
  • wofi updated to version 1.5.1-1
  • xdg-desktop-portal-wlr updated to version 0.8.0-1

Besides that I reactivated a project I started in summer 2024: debiverse.org. The idea was to have interfaces to Debian bugs and packages that are usable on mobile devices (I know, ludicrous!). Back then I started with Flask and SQLAlchemy, but that soon got out of hand. I now switched the whole stack to FastAPI and SQLModel, which makes it a lot easier to manage. And the upside is that it comes with an API and OpenAPI docs. For the rendered HTML pages I use Jinja2 with Tailwind as a CSS framework. I am currently using udd-mirror as database backend, which works pretty well (for this single-user project). It would be nice to have some of the data in a faster index, like Typesense or Meilisearch. This way it would be possible to have faceted search or more performant full text search. But I haven’t found any software packaged in Debian that could provide this.

Screenshot of the debiverse bug report list

Screenshot of the debiverse swagger API

02 December, 2025 05:28AM

December 01, 2025

hackergotchi for Guido Günther

Guido Günther

Free Software Activities November 2025

Another short status update of what happened on my side last month. Hand holding the release machinery for Phosh 0.51.0 but there's more:

See below for details on the above and more:

phosh

  • Better auto brightness (MR)
  • Update CI to forky (MR)
  • Test mobile data connection in CI (MR)
  • Add DebugControl interface (MR)
  • Release 0.51~rc1
  • caffeine prefs: Fix resize when adding intervals (MR)
  • Robustify plugin-prefs screenshot tests (MR)
  • Another build systemd dependency fix (MR)
  • Gesture to tune brightness on lock screen (MR)

phoc

  • Update ci to forky (MR)
  • Exit cleanly on SIGTERM (MR)
  • Releases (0.51~rc1, 0.51.0)
  • Fix segfault triggered in alpine CI (MR)
  • Cancel preedit on submit (avoids resubmitted text in e.g. chatty or flare) (MR)

phosh-mobile-settings

  • Test suite robustness (MR)
  • Update CI (MR)
  • Release 0.51~rc1

stevia

xdg-desktop-portal-phosh

  • Release 0.51~rc1, 0.50.0
  • Unbreak nightly builds (MR)
  • Unbreak 32bit builds (MR)
  • Drop C file chooser impl (MR)

pfs

  • pfs-open: Allow to open arbitrary directories and start fixing clippy warnings (MR)
  • More clippy cleanups (MR)
  • Allow to ship schema (MR)
  • Run a smoke test in ci (MR)
  • Implement org.freedesktop.FileManager1 in the demo (MR, MR, MR)
  • dir-view: Don't thumbnail when disabled (MR)

Phrog

  • Fix osk dependencies (MR)

gmobile

  • run-phosh: Allow to run headless (MR)
  • Release 0.5.0 (MR)
  • display-panel: Allow to take screenshots (MR)
  • Add hwdb and udev rules for torch min brightness (MR)

feedbackd

feedbackd-device-themes

libcall-ui

  • Ignore callaudiod deprecations as we otherwise break compilation of downstreams (MR)
  • Same for 0.1.x branch (MR)
  • Release (0.1.5)

wireplumber

  • doc: Fix make run invocation (MR)

Chatty

mobile-broadband-provider-info

Debian

  • stevia: Upload 0.51~rc1, 0.51.0
  • phrog: Use stevia instead of osk-stub (MR)
  • meta-phosh: Modernize dependencies (MR)
  • phosh: Drop osk-stub (MR)
  • phosh: Upload 0.51~rc1
  • phoc: Upload 0.41~rc1
  • p-m-s: Upload 0.51~rc1
  • feedbackd-device-themes: Upload 0.8.7
  • m-b-p-i: Upload 20251101
  • debcargo-conf: Backport ashpd patch (MR)
  • xdg-desktop-portal-phosh: Get it into unstable (MR, MR)

Mobian

  • librem5: Drop exponential brightness (MR)

wlroots

  • input-method-unstable-v2: Fix two protocol issues (MR)

libqrtr-glib

  • Fix transfer annotation to unbreak usage from Python (MR)
  • Move doc build to gi-docgen (MR)

libadwaita-rs

  • Allow None for parent in adw_dialog_choose (MR)

phosh-site

  • Lint tools (MR)
  • Add slideshow to landing page (MR)
  • Add more videos (MR)
  • Fix typos and links (MR)
  • Update nightly details (MR)

bengalos-debs

  • Fix phrog build (MR, MR)
  • Enable arm64 builds (MR)

gtk

  • Drop unused defines (MR)

Reviews

This is not code by me but reviews on other peoples code. The list is (as usual) slightly incomplete. Thanks for the contributions!

  • pfs: Create folder support (MR)
  • portal: Create thumbnails via thumbnailer service (MR)
  • phosh: caffeine plugin prefs (MR)
  • phosh: lower torch brightness (MR)
  • phosh: wi-fi hotspot QR code (MR)
  • phosh/caffeine: Close status page when selecting an interval (MR)
  • phosh/caffeine: Use empty state (MR)
  • bengalos-recpipes: prep supporting multiple disk layouts (MR)
  • xdg-p-p: Longer test timeout (MR)
  • p-m-s: Volume slider for media roles (MR)

Help Development

If you want to support my work see donations.

Comments?

Join the Fediverse thread

01 December, 2025 06:52PM

Russ Allbery

Review: Forever and a Day

Review: Forever and a Day, by Haley Cass

Series: Those Who Wait #1.5
Publisher: Haley Cass
Copyright: 2020
ISBN: 979-8-5902-5966-3
Format: Kindle
Pages: 101

Forever and a Day is a coda to Haley Cass's self-published sapphic romance novel Those Who Wait. There is no point in reading it unless you have already read and enjoyed the full book and wanted more of a denouement.

Given that Those Who Wait is a romance novel, it is definitionally not a spoiler to reveal that Sutton and Charlotte ended up together. This novella is seven scenes sketching out the next few years of their lives, interspersed with press clippings and social media commentary. These tie up loose ends, give the characters a bit more time together, throw in one more conflict and resolution, add one more sex scene, and stick a few exclamation points after the happily ever after.

I am the sort of person who likes long denouements in stories, so I'm the target audience for this sort of sequel that's essentially additional chapters to the book. (The funniest version of this I've read is Jacqueline Carey's Saints Astray.) They are usually not great literature, since there are good reasons for not including these chapters in the book. That is exactly what this is: a few more chapters of the characters being happy, entirely forgettable, and of interest only to people who want that.

Cass does try to introduce a bit of a plot via some light family conflict, which was sweet and mostly worked, and some conflict over having children, which was very stereotyped and which I did not enjoy as much. I thought the earlier chapters of this novella were the stronger ones, although I do have to give the characters credit in the later chapters for working through conflict in a mature and fairly reasonable way. It does help, though, when the conflict is entirely resolved by one character being right and the other character being happily wrong. That's character conflict on easy mode.

I was happy to see that Sutton got a career, although as in the novel I wish Cass had put some more effort into describing Sutton's efforts in building that career. The details are maddeningly vague, which admittedly matches the maddeningly vague description of Charlotte's politics but which left me unsatisfied.

Charlotte's political career continues to be pure wish fulfillment in the most utterly superficial and trivialized way, and it bothered me even more in the novella than it did in the novel. We still have absolutely no idea what she stands for, what she wants to accomplish, and why anyone would vote for her, and yet we get endless soft-focus paeans to how wonderful she will be for the country. Her opponents are similarly vague to the point that the stereotypes Cass uses to signal their inferiority to Charlotte are a little suspect.

I'm more critical of this in 2025 than I would have been in 2015 because the last ten years have made clear the amount of damage an absolute refusal to stand for anything except hazy bromides causes, and I probably shouldn't be this annoyed that Cass chose to vaguely gesture towards progressive liberalism without muddying her romance denouement with a concrete political debate. But, just, gah. I found the last chapter intensely annoying, in part because the narrative of that chapter was too cliched and trite to sufficiently distract me from the bad taste of the cotton-candy politics.

Other than that, this was minor, sweet, and forgettable. If you want another few chapters of an already long novel, this delivers exactly what you would expect. If the novel was plenty, nothing about this novella is going to change your mind and you can safely skip it. I really liked the scene between Charlotte and Sutton's mom, though, and I'm glad I read the novella just for that.

Rating: 6 out of 10

01 December, 2025 04:20AM

hackergotchi for Junichi Uekawa

Junichi Uekawa

It's already December.

It's already December. I haven't figured out why suspend/resume no longer works on my workstation.

01 December, 2025 04:06AM by Junichi Uekawa

November 30, 2025

Russ Allbery

Review: The Last Soul Among Wolves

Review: The Last Soul Among Wolves, by Melissa Caruso

Series: The Echo Archives #2
Publisher: Orbit
Copyright: August 2025
ISBN: 0-316-30404-2
Format: Kindle
Pages: 355

The Last Soul Among Wolves is urban high fantasy with strong mystery vibes. It is a direct sequel to The Last Hour Between Worlds. You need the previous book for some character setup (and this book would spoil it badly), but you don't have to remember the first book in detail. Only the main plot outcomes are directly relevant and the characters will remind you of those.

Kembrel Thorne is a Hound, the equivalent of a police detective in the medieval-inspired city setting of this series, but this book does not open with an official assignment. Instead, she has been dragged by her childhood friend Jaycel Morningrey as company for a reading of the will of old lady Lovegrace, reclusive owner of a gothic mansion on an island connected to the city by an intermittent sandbar. A surprise reunion with her gang of childhood friends ensues, followed by the revelation that they are all in serious trouble.

Shortly after Kem left the group to become a Hound, the remaining four, plus several other apparently random people, got entangled with a powerful Echo artifact. Now that Lovegrace has died, one of them will inherit the artifact and the ability to make a wish, but only one. The rest will be killed at decreasing intervals until only the winner is left alive.

The Last Hour Between Worlds was fae fantasy built around a problem that was more of a puzzle than a mystery. The Last Soul Among Wolves is closer to a classic mystery: A cast of characters are brought together and semi-isolated in a rural house, they start dying, and it's up to the detective to solve the mystery of their death before it's too late. In this case, the initial mechanism of death is supernatural and not in doubt — the challenge instead is how to stop it from happening again — but Kem's problems quickly become more complicated.

As mystery plots go, this is more thriller than classical despite the setting. There are a few scenes of analyzing clues, but Kem is more likely to use the time-honored protagonist technique of throwing herself into danger and learning what's going on via the villain monologues. As readers of the previous book would expect, Rika Nonesuch is here too, hired by another of Kem's old friends, and the two navigate their personal feelings and the rivalry between their guilds in much the way that they did in the Last Hour Between Worlds. As in the first book, there is a sapphic romance subplot, but it's a very slow burn asexual romance.

The best part of this series continues to be the world-building. The previous book introduced the idea of the Echoes and sent the characters exploring into stranger and stranger depths. This book fleshes out the rules in more detail, creating something that feels partly like a fae realm and partly like high fantasy involving gods, but diverges from both into a logic of its own. The ending satisfyingly passes my test of fantasy mysteries: Resolving the mystery requires understanding and applying the rules of the setting, which are sufficiently strange to create interesting outcomes but coherent enough that the reader doesn't feel like the author is cheating.

There are some hissable villains here, but my favorite part of this book was the way Caruso added a lot of nuance and poignancy to the Echoes rather than showing them only as an uncanny threat. That choice made the world feel deeper and richer. It's not yet clear whether that element is setup for a longer-term series plot, but I hope Caruso will develop the story in that direction.

It felt to me like Caruso is aiming for an ongoing series rather than a multi-volume story with a definite ending. She avoids a full episodic reset — Rika, in particular, gets considerable character development and new complications that bode well for future volumes — but it doesn't feel like the series is building towards an imminent climax. This is not a complaint. I enjoy these characters and this world and will happily keep devouring each new series entry.

If you liked The Last Hour Between Worlds, I think you will like this. It doesn't have the same delight of initial discovery of the great world-building, but the plot is satisfying and a bit more complex and the supporting characters are even better than those in the first book. Once again, Caruso kept me turning the pages, and I'm now looking forward to a third volume. Recommended.

The third book in the series has not yet been announced, but there are indications on social media that it is coming.

Rating: 7 out of 10

30 November, 2025 04:03AM

November 28, 2025

hackergotchi for Clint Adams

Clint Adams

monkeying around bitrot

One of the servers to which I SSH ratcheted up its public key requirements and thus the Monkeysphere key I've been using for 15 years stopped working.

Unfortunately, monkeysphere gen-subkey hardcodes RSA keys and if I'm going to be forced to use a new subkey I want mine to be of the 25519 variety. Therefore, to add a subkey by hand:

gpg --expert --edit-key $KEYID

Follow roughly what's in /usr/share/monkeysphere/m/gen_subkey, but change the key type to 11 (ECC (set your own capabilities)), don't bother with Encrypt capability, and pick Curve25519.
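
A rough sketch of that interactive session on a recent GnuPG follows (prompts abbreviated and slightly paraphrased; the Authenticate capability is the one you want for SSH, which is what gen_subkey would normally set):

gpg --expert --edit-key $KEYID
gpg> addkey
Please select what kind of key you want:
   [...]
   (11) ECC (set your own capabilities)
Your selection? 11
Possible actions for this ECC key: Sign Authenticate
Your selection? S         (toggle Sign off)
Your selection? A         (toggle Authenticate on)
Your selection? Q         (finished selecting capabilities)
Please select which elliptic curve you want:
   (1) Curve 25519
Your selection? 1
[set an expiry or 0 for none, then confirm]
gpg> save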

monkeysphere subkey-to-ssh-agent and agent-transfer will be all happy with the "ed25519" subkey without any code modifications, and you won't need to rewrite monkeysphere from scratch to use Sequoia for the next 15 years.

Posted on 2025-11-28
Tags:

28 November, 2025 08:56PM

Simon Josefsson

Container Images for Debian with Guix

The debian-with-guix-container project builds and publishes container images of Debian GNU/Linux stable with GNU Guix installed.

The images are like normal Debian stable containers but have the guix tool and a reasonably fresh guix pull.

Supported architectures include amd64 and arm64. The multi-arch container is called:

registry.gitlab.com/debdistutils/guix/debian-with-guix-container:stable

It may also be accessed via debian-with-guix at Docker Hub as:

docker.io/jas4711/debian-with-guix:stable

The container images may be used like this:

$ podman run --privileged -it --hostname guix --rm registry.gitlab.com/debdistutils/guix/debian-with-guix-container:stable
root@guix:/# hello
bash: hello: command not found
root@guix:/# guix describe
  guix c9eb69d
    repository URL: https://gitlab.com/debdistutils/guix/mirror.git
    branch: master
    commit: c9eb69ddbf05e77300b59f49f4bb5aa50cae0892
root@guix:/# LC_ALL=C.UTF-8 /root/.config/guix/current/bin/guix-daemon --build-users-group=guixbuild &
[1] 21
root@guix:/# GUIX_PROFILE=/root/.config/guix/current; . "$GUIX_PROFILE/etc/profile"
root@guix:/# guix describe
Generation 2    Nov 28 2025 10:14:11    (current)
  guix c9eb69d
    repository URL: https://gitlab.com/debdistutils/guix/mirror.git
    branch: master
    commit: c9eb69ddbf05e77300b59f49f4bb5aa50cae0892
root@guix:/# guix install --verbosity=0 hello
accepted connection from pid 55, user root
The following package will be installed:
   hello 2.12.2

hint: Consider setting the necessary environment variables by running:

     GUIX_PROFILE="/root/.guix-profile"
     . "$GUIX_PROFILE/etc/profile"

Alternately, see `guix package --search-paths -p "/root/.guix-profile"'.

root@guix:/# GUIX_PROFILE="/root/.guix-profile"
root@guix:/# . "$GUIX_PROFILE/etc/profile"
root@guix:/# hello
Hello, world!
root@guix:/# 

Below is an example GitLab pipeline job that demonstrate how to run guix install to install additional dependencies, and then download and build a package that pick up the installed package from the system.

test-wget-configure-make-libksba-amd64:
  image: registry.gitlab.com/debdistutils/guix/debian-with-guix-container:stable
  before_script:
  - env LC_ALL=C.UTF-8 /root/.config/guix/current/bin/guix-daemon --build-users-group=guixbuild $GUIX_DAEMON_ARG &
  - GUIX_PROFILE=/root/.config/guix/current; . "$GUIX_PROFILE/etc/profile"
  - guix describe
  - guix install libgpg-error
  - GUIX_PROFILE="/root/.guix-profile"; . "$GUIX_PROFILE/etc/profile"
  - apt-get install --update -y --no-install-recommends build-essential wget ca-certificates bzip2
  script:
  - wget https://www.gnupg.org/ftp/gcrypt/libksba/libksba-1.6.7.tar.bz2
  - tar xfa libksba-1.6.7.tar.bz2
  - cd libksba-1.6.7
  - ./configure
  - make V=1
  - make check VERBOSE=t V=1

The images were initially created for use in GitLab CI/CD Pipelines but should work for any use.

The images are built in a GitLab CI/CD pipeline, see .gitlab-ci.yml.

The containers are derived from official Debian stable images with Guix installed and a successful run of guix pull, built using buildah invoked from build.sh using image/Containerfile that runs image/setup.sh.

The pipeline also push images to the GitLab container registry, and then also to Docker Hub.

Guix binaries are downloaded from the Guix binary tarballs project because of upstream download site availability and bandwidth concerns.

Enjoy these images! Hopefully they can help you overcome the loss of Guix in Debian, which previously made it a mere apt-get install guix away.

There are several things that may be improved further. An alternative to using podman --privileged is to use --security-opt seccomp=unconfined --cap-add=CAP_SYS_ADMIN,CAP_NET_ADMIN which may be slightly more fine-grained.
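
For example, the earlier podman invocation could then look roughly like this (same image, just with the narrower options in place of --privileged; a sketch, not tested beyond what is described above):

$ podman run --security-opt seccomp=unconfined --cap-add=CAP_SYS_ADMIN,CAP_NET_ADMIN -it --hostname guix --rm registry.gitlab.com/debdistutils/guix/debian-with-guix-container:stable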

For ppc64el support I ran into an error message that I wasn’t able to resolve:

guix pull: error: while setting up the build environment: cannot set host name: Operation not permitted

For riscv64, I can’t even find a Guix riscv64 binary tarball for download, is there one anywhere?

For arm64 containers, it seems that you need to start guix-daemon with --disable-chroot to get something to work, at least on GitLab.com’s shared runners, otherwise you will get this error message:

guix install: error: clone: Invalid argument
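
For reference, that workaround means starting the daemon roughly like this inside the arm64 container (the same invocation as in the example session above, with the extra flag added):

root@guix:/# LC_ALL=C.UTF-8 /root/.config/guix/current/bin/guix-daemon --build-users-group=guixbuild --disable-chroot &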

Building the images themselves also requires disabling some security functionality, and I was not able to build images with buildah without providing --cap-add=CAP_SYS_ADMIN,CAP_NET_ADMIN, otherwise there were errors like this:

guix pull: error: cloning builder process: Operation not permitted
guix pull: error: clone: Operation not permitted
guix pull: error: while setting up the build environment: cannot set loopback interface flags: Operation not permitted

Finally on amd64 it seems --security-opt seccomp=unconfined is necessary, otherwise there is an error message like this, even if you use --disable-chroot:

guix pull: error: while setting up the child process: in phase setPersonality: cannot set personality: Function not implemented

This particular error is discussed upstream, but I think generally that these errors suggest that guix-daemon could make its use of such features more optional: if some particular feature is not available, gracefully fall back to another mode of operation, instead of exiting with an error. Of course, it should never fall back to an insecure mode of operation, unless the user requests that.

Happy Hacking!

28 November, 2025 04:32PM by simon

Russell Coker

10gbit and 40gbit Home Networking

Aliexpress has a 4 port 2.5gbit switch with 2*SFP+ sockets for $34.35 delivered [1]. 4 ports isn’t very good for the more common use cases (if daisy chaining them then only 2 are available for devices) so this is really a device for use with a 10Gbit uplink.

Aliexpress has a pair of SFP+ 10Gbit devices with 1M of copper between them for $15.79 delivered [2]. That page also offers a pair of QSFP+ 40Gbit devices with 1M of copper between them for $27.79 delivered.

They have a dual port SFP+ card for a server with two of the pairs of SFP+ 10gbit devices with copper between them for $32.51 delivered [3].

So you can get a 2.5gbit switch with two 10gbit uplink cables to nearby servers for $66.86 including postage. I don’t need this but it is tempting. I spent $93.78 to get 2.5gbit networking [4] so spending $66.86 to get part of my network to 10gbit isn’t much.

It is $99.81 including postage for a Mellanox 2*40Gbit QSFP+ card and two QSFP+ adaptors with 3M of copper between them [5]. It is $55.81 including postage for the Mellanox card without the cable. So that’s $155.62 for a point to point 40gbit link between systems that are less than 3M apart, that’s affordable for a home lab. As an aside the only NVMe I’ve tested which can deliver such speeds was in a Thinkpad and the Thinkpad entered a thermal throttling state after a few seconds of doing that.

The best price I could see for a 40Gbit switch is $1280 for a L3 Managed switch with 2*40G QSFP+ slot ports, 4*10G SFP+ ports, and 48*2.5G RJ45 ports [6]. That’s quite affordable for the SME market but a bit expensive for home users (although I’m sure that someone on r/homelab has one).

I’m not going to get 40Gbit, that’s well above what I need and while a point to point link is quite affordable I don’t have servers in that range. But I am seriously considering 10Gbit, I get paid to do enough networking stuff that having some hands on experience with 10Gbit could be useful.

For a laptop a 5gbit ethernet USB device is $29.48 including delivery which isn’t too expensive [7]. The faster ones seem to be all Thunderbolt and well over $100, which is disappointing as USB 3.2 can do up to 20Gbit. If I start doing 10gbit over ethernet I’ll get one of those USB devices for testing.

For a single server it’s cheaper and easier to get a 4 port 2.5Gbit ethernet card for $55.61 [8].

28 November, 2025 08:13AM by etbe

November 27, 2025

Russ Allbery

Review: A Matter of Execution

Review: A Matter of Execution, by Nicholas & Olivia Atwater

Series: Tales of the Iron Rose #0
Publisher: Starwatch Press
Copyright: 2024
ISBN: 1-998257-08-8
Format: Kindle
Pages: 131

A Matter of Execution is the introductory novella that kicked off the Tales of the Iron Rose series. It is steampunk fantasy with airships. I previously read and reviewed the subsequent novel, Echoes of the Imperium.

As noted in that review, I read the novel first. That was a mistake; this is a much better place to start. A Matter of Execution was clearly intended as the introduction of all of these characters. More importantly, I think reading the novella first would have given me enough affinity with the characters to not mind the worst part of Echoes of the Imperium: the extremely slow first half that seemed filled with the protagonist's impostor syndrome.

A Matter of Execution opens, fittingly, with Captain William Blair, a goblin, former Imperial soldier, Oathbreaker, and series first-person protagonist being carted to his execution. He is not alone; in the same prison wagon is an arrogant (and racist) man named Strahl, the killer of one of the rulers of Lyonesse.

Strahl is rather contemptuous of Blair's claim to be a captain, given that he's both a goblin and an Oathbreaker. Strahl quickly revises that opinion when Blair's crew, somewhat predictably given that he is the series protagonist, creates a daring escape for both of them. The heat of action gives both a chance to gain some respect for the other, which explains why Blair is not only willing to invite Strahl to join his crew, but to go back for Strahl's companion.

Breaking out Strahl's companion will be a more difficult, and surprising, problem.

Nicholas Atwater is a role-playing game GM, something that you will learn in the "about the author" section at the end of this novella but probably will have guessed by then. Even more than Echoes of the Imperium, this novella feels like a (good) write-up of an RPG adventure. A wildly varied cast of characters come together and form a party with a well-defined objective that has some surrounding mysteries and surprises. Each of those characters get their individual moments to show off their specific skills. Readers with a certain gaming background will know exactly where to insert the Borderlands-style title card with a slightly demented description of each character.

This is not a complaint. You may be able to see the bones of the setup adventure for a long-running campaign, but I like this style of character introduction and the story moves right along. There are a ton of varied characters, some interesting villains and maybe-villains, a rather satisfying heist setup, and some good chemistry and a bit of banter. This is not a deep story — it's clearly an introductory episode for both the characters and the world background — but it's a fun way to spend a few hours.

I think the best part of this series is the world-building. If you have read my review of Echoes of the Imperium, you have unfortunately been mildly spoiled for the revelation in this novella. I don't think it hurt the story that much; you will be able to predict what obvious gaps in the novel backstory the novella is going to fill in, but it's just as enjoyable to see how that happens. But the Atwaters aren't going to drop any of the big world-building bombs in the introductory novella, of course. Instead, you get a gradual introduction to the nature of magic in this world, some of the political setup of the recent war, and a quick introduction to the capabilities of Strahl's mysterious companion.

If you've not yet read this series, I recommend starting here. It's a quick investment to see if you'll be interested. The novel is heavier and slower, and the pacing of the first half isn't great, but the world-building is even better.

If you've already read the novel, this is still worth reading as long as you enjoyed it. You'll have a few moments of "oh, that's how that happened," and it's a fun and fast-moving way to spend a bit more time with the characters.

Followed by Echoes of the Imperium. The back matter of the novella says that The Winds of Fortune is supposedly forthcoming.

Rating: 7 out of 10

27 November, 2025 05:34AM

Russell Coker

PineTime Band

I’ve had a PineTime for just over 2 years [1]. About a year ago I had a band break and replaced it from a spare PineTime and now I just had another break. Having the band only last one year isn’t that great, but it’s fortunate that the break only affects the inner layer of plastic so there is no risk of the watch suddenly falling off and being broken or lost. The Pine64 web site has a page about this with bad options, one broken link and a few Amazon items that have ridiculous postage [2].

I started writing this post while using the band from a Colmi P80 [3]. I bought one for a relative who wanted the metal band and the way the Aliexpress seller does it is to sell the package with the plastic band and include the metal band in the package, so I had a spare band. It fits quite well, with none of the reported problems of the PineTime having insufficient space between the spring bar and the watch. The Colmi band in question is described as “rose gold” but is more like “pinkish beige” and doesn’t match the style of the black PineTime.

I ordered a couple of cheap bands from AliExpress which cost $9.77 and $13.55 including postage while the ones that Pine64 recommend have over $15 postage from Amazon!

The 20mm Silicone Magnetic Buckle Watch Strap Band For Huawei GT2 Smart Watch Connected Bracelet Black Watchband Man [4] cost $13.55 including postage. It has a magnetic unfold mechanism which I find a bit annoying and it doesn’t allow easily changing the length. I don’t think I’ll choose that again. But it basically works and is comfortable.

The 20mm Metal Strap for Huawei Watch GT2 3 Quick Release Stainless Steel Watch Band for Samsung Galaxy Watch Bracelet [5] cost $9.77 including postage. I found this unreasonably difficult to put on and not particularly comfortable. But opinion will vary on that, it is cheap and will appeal to some people’s style.

Conclusion

There are claims that getting a replacement band for a PineTime is difficult. My experience is that every band with a 20mm attachment works as long as it’s designed for a square watch; some bands are designed to partly go around a round face and wouldn’t fit. I expect that some bands won’t fit, but I don’t think that it’s enough of a problem to be worried about buying a random band from AliExpress. The incidence of bands not fitting will probably be lower than the incidence of other AliExpress products not doing quite what you want (while meeting the legal criteria of doing what they are claimed to do) and not being used.

I’m now wearing the PineTime with the “Magnetic Buckle Watch Strap Band” and plan to wear it for the next year or so.

27 November, 2025 12:37AM by etbe

November 26, 2025

hackergotchi for Bits from Debian

Bits from Debian

New Debian Developers and Maintainers (September and October 2025)

The following contributors got their Debian Developer accounts in the last two months:

  • Evangelos Ribeiro Tzaras (devrts)
  • Andrea Bolognani (abologna)

The following contributors were added as Debian Maintainers in the last two months:

  • Rylie Pavlik
  • Yuchin Tsai
  • Daniel Markstedt
  • Guido Berhörster
  • Renzo Davoli

Congratulations!

26 November, 2025 04:00PM by Jean-Pierre Giraud

November 25, 2025

Russell Coker

EDID and my 8K TV

I previously blogged about buying a refurbished Hisense 65u80g 8K TV with the aim of making it a large monitor [1] and about searching for a suitable video card for 8k [2]. After writing the second post I bought an Intel Arc B580 which also did a maximum of 4096*2160 resolution.

This post covers many attempts to try and get the TV to work correctly and it doesn’t have good answers. The best answer might be to not buy Hisense devices but I still lack data.

Attempts to Force 8K

I posted on Lemmy again about this [3] and got a single response, which is OK as it was a good response. They didn’t give me the answer on a silver platter but pointed me in the right direction of EDID [4].

I installed the Debian packages read-edid, wxedid, and edid-decode.

The command “get-edid > out.edid” saves the binary form of the edid to a file. The command “wxedid out.edid” allows graphical analysis of the EDID data. The command “edid-decode out.edid” dumps a plain text representation of the output, the command “edid-decode out.edid|grep VIC|cut -d: -f2|sort -n” shows an ordered list of video modes, in my case the highest resolution is 4096×2160 which is the highest that Linux had allowed me to set with two different video cards and a selection of different cables (both HDMI and DisplayPort).

xrandr --newmode 7680x4320 1042.63  7680 7984 7760 7824  4320 4353 4323 4328
xrandr --addmode HDMI-3 7680x4320
xrandr --output HDMI-3 --mode 7680x4320

I ran the above commands and got the below error:

xrandr: Configure crtc 0 failed

At this time I don’t know how much of this is due to the video card and how much is due to the TV. The parameters for xrandr came from an LLM because I couldn’t find any Google results on what 8K parameters to use. As an aside, if you have a working 8K TV or monitor connected to a computer please publish the EDID data, xrandr output, and everything else you can think of.
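
Since there don’t seem to be published 8K modelines to copy, one possible way to generate candidate timings (a sketch I have not verified against this TV) is to let the cvt utility from the X server compute a CVT modeline and pass its values to xrandr; the HDMI-3 output name is taken from the commands above, and the mode name is arbitrary:

cvt 7680 4320 30
# cvt prints a line starting with "Modeline"; pass everything after the word
# "Modeline" to xrandr --newmode, for example:
xrandr --newmode "7680x4320_30" [pixel clock and timing values printed by cvt]
xrandr --addmode HDMI-3 "7680x4320_30"
xrandr --output HDMI-3 --mode "7680x4320_30"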

I found a Github repository for EDID data [5] but that didn’t have an entry for my TV and didn’t appear to have any other entry for an 8K device I could use.

Resolution for Web Browsing

I installed a browser on the TV; Chrome and Firefox aren’t available for a TV, and the Play Store program tells you that (but without providing a reason) when you search for them. I tried the site CodeShack What is my Screen Resolution [6] which said that my laptop is 2460*1353 while the laptop display is actually 2560*1440. So apparently I have 100 pixels used for the KDE panel at the left of the screen and 87 pixels used by the Chrome tabs and URL bar – which seems about right. My Note 9 phone reports 384*661 out of its 2960*1440 display so it seems that Chrome on my phone is running web sites at 4/15 of the native resolution and about 16% of the height of the screen is used by the system notification bar, the back/home/tasklist buttons (I choose buttons instead of swipe for navigation in system settings), and the URL bar when I have “Screen zoom” in system settings at 1/4. When I changed “Screen zoom” to 0/4 the claimed resolution changed to 411*717 (2/7 of the native resolution). Font size changes didn’t change the claimed resolution. The claimed “Browser Viewport Size” by CodeShack is 1280*720 which is 1/6 of the real horizontal resolution and slightly more than 1/6 of the vertical resolution; it claims that the Pixel Density is 2* and a screen resolution of 970*540, which implies that the browser is only working at 1920*1080 resolution!

Netflix

When I view Netflix shows using the Netflix app running on the TV it reports “4K”, which doesn’t happen on Linux PCs (as they restrict 4K content to platforms with DRM), and in the “Device” setting it reports “Device Model” as “Hisense_SmartTV 8K FFM”, so the Netflix app knows all about 4K content and knows the text string “8K”.

YouTube

When I view a YouTube video that’s described as being 8K I don’t get a prompt to pay for YouTube Premium, which is apparently what happens nowadays when you try to play actual 8K video. I turn on “Stats for Nerds” and one line has “Viewport / Frames 1920×1080*2.00” and another has “Current / Optimal Res 3840×2160@60 / 3840×2160@60” so it seems that the YouTube app is seeing the screen as 4K but choosing to only display FullHD even when I have Quality set to “2160p60 HDR”. It declares the network speed to be over 100mbit most of the time and the lowest it gets is 60mbit while 50mbit is allegedly what’s required for 8K.

I installed a few Android apps to report hardware capabilities and they reported the screen resolution to be 1920*1080.

Have I Been Ripped Off?

It looks like I might have been ripped off by this. I can’t get any app other than Netflix to display 4K content. My PC will only connect to it at 4K. Android apps (including YouTube) regard it as 1920*1080.

The “AI Upscaling” isn’t really that great and in most ways it seems at best equivalent to a 4K TV and less than a 4K TV that runs Android apps with an actual 4K display buffer.

Next Steps

The next things I plan to do are to continue attempts to get the TV to do what it’s claimed to be capable of; either an Android app that can display 8K content or an HDMI input of 8K content will do. Running a VNC client on the TV would be an acceptable way of getting an 8K display from a Linux PC.

I need to get a somewhat portable device that can give 8K signal output. Maybe a mini PC with a powerful GPU or maybe one of those ARM boards that’s designed to drive an 8K sign. Then I can hunt for stores that have 8K TVs on display.

It would be nice if someone made a USB device that does 8K video output – NOT a USB-C DisplayPort alternative mode that uses the video hardware on the laptop. Then I could take a laptop to any place that has an 8K display to show and connect my laptop to it.

The one thing I haven’t done yet is testing 8K MP4 files on a USB stick. That’s mainly due to a lack of content and the fact that none of the phone cameras I have access to can do 8K video. I will try displaying 8K PNG and JPEG files from a USB stick.

Most people would give up about now. But I am determined to solve this and buying another large TV isn’t out of the question.

25 November, 2025 07:09AM by etbe

November 21, 2025

hackergotchi for Daniel Kahn Gillmor

Daniel Kahn Gillmor

Transferring Signal on Android

Transferring a Signal account between two Android devices

I spent far too much time recently trying to get a Signal Private Messenger account to transfer from one device to another.

What I eventually found worked was a very finicky path to enable functioning "Wi-Fi Direct", which I go into below.

I also offer some troubleshooting and recovery-from-failure guidance.

All of this blogpost uses "original device" to refer to the Android pocket supercomputer that already has Signal installed and set up, and "new device" to mean the Android device that doesn't yet have Signal on it.

Why Transfer?

Signal Private Messenger is designed with the expectation that the user has a "primary device", which is either an iPhone or an Android pocket supercomputer.

If you have an existing Signal account, and try to change your primary device by backing up and restoring from backup, it looks to me like Signal will cause your long-term identity keys to be changed. This in turn causes your peers to see a message like "Your safety number with Alice has changed."

These warning messages are the same messages that they would get if an adversary were to take over your account. So it's a good idea to minimize them when there isn't an account takeover — false alarms train people to ignore real alarms.

You can avoid "safety number changed" warnings by using signal's "account transfer" process during setup, at least if you're transferring between two Android devices.

However, my experience was that the transfer between two Android devices was very difficult to get to happen at all. I ran into many errors trying to do this, until I finally found a path that worked.

Dealing with Failure

After each failed attempt at a transfer, my original device's Signal installation would need to be re-registered. Having set a PIN meant that i could re-register the device without needing to receive a text message or phone call.

Set a PIN before you transfer!

Also, after a failure, you need to re-link any "linked device" (i.e. any Signal Desktop or iPad installation). If any message came in during the aborted transfer, the linked device won't get a copy of that message.

Finally, after a failed transfer, i recommend completely uninstalling Signal from the new device, and starting over with a fresh install on the new device.

Permissions

My understanding is that Signal on Android uses Wi-Fi Direct to accomplish the transfer. But to use Wi-Fi Direct, Signal needs to have the right permissions.

On each device:

  • Entirely stop the Signal app
  • Go to Settings » Apps » Signal » Permissions
  • Ensure that the following permissions are all enabled whenever the app is running:
  • Location
  • Nearby Devices
  • Network

Preparing for Wi-Fi Direct

The transfer process depends on "Wi-Fi Direct", which is a bit of a disaster on its own.

I found that if i couldn't get Wi-Fi Direct to work between the two devices, then the Signal transfer was guaranteed to fail.

So, for clearer debugging, i first tried to establish a Wi-Fi Direct link on Android, without Signal being involved at all.

Setting up a Wi-Fi Direct connection directly failed, multiple times, until i found the following combination of steps, to be done on each device:

  • Turn off Bluetooth
  • Ensure Wi-Fi is enabled
  • Disconnect from any Wi-Fi network you are connected to (go to the "Internet" or "Wi-Fi" settings page, long-press on the currently connected network, and choose "Disconnect"). If your device knows how to connect to multiple local Wi-Fi networks, disconnect from each of them in turn until you are in a stable state where Wi-Fi is enabled, but no network is connected.
  • Close to the bottom of the "Internet" or "Wi-Fi" settings page, choose "Network Preferences" and then "Wi-Fi Direct"
  • if there are any entries listed under "Remembered groups", tap them and choose to "Forget this group"
  • If there are Peer devices that say "Invited", tap them and choose to "Cancel invitation"

I found that this configuration is the most likely to enable a successful Wi-Fi Direct connection, where clicking "invite" on one device would pop up an alert on the other asking to accept the connection, and result in a "Connected" state between the two devices.

Actually Transferring

Start with both devices fully powered up and physically close to one another (on the same desk should be fine).

On the new device:

  • Reboot the device, and log into the profile you want to use
  • Enable Internet access via Wi-Fi.
  • Remove any old version of Signal.
  • Install Signal, but DO NOT OPEN IT!
  • Set up the permissions for the Signal app as described above
  • Open Signal, and choose "restore or transfer" -- you still need to be connected to the network at this point.
  • The new device should display a QR code.

On the original device:

  • Reboot the device, and log into the profile that has the Signal account you're looking to transfer
  • Enable Internet access via Wi-Fi, using the same network that the old device is using.
  • Make sure the permissions for Signal are set up as described above
  • Open Signal, and tap the camera button
  • Point the camera at the new device's QR code

Now tap the "continue" choices on both devices until they both display a message that they are searching for each other. You might see the location indicator (a green dot) turn on during this process.

If you see an immediate warning of failure on either device, you probably don't have the permissions set up right.

You might see an alert (a "toast") on one of the devices that the other one is trying to connect. You should click OK on that alert.

In my experience, both devices are likely to get stuck "searching" for each other. Wait for both devices to show Signal's warning that the search has timed out.

At this point, leave Signal open on both devices, and go through all the steps described above to prepare for Wi-Fi Direct. Your Internet access will be disabled.

Now, tap "Try again" in Signal on both devices, pressing the buttons within a few seconds of each other. You should see another alert that one device is trying to connect to the other. Press OK there.

At this point, the transfer should start happening! The old device will indicate what percentage has been transferred, and the new device will indicate how many messages have been transferred.

When this is all done, re-connect to Wi-Fi on the new device.

Temporal gap for Linked Devices

Note that during this process, if new messages are arriving, they will be queuing up for you.

When you reconnect to wi-fi, the queued messages will flow to your new device. But the process of transferring automatically unlinks any linked devices. So if you want to keep your instance of Signal Desktop with as short a gap as possible, you should re-link that installation promptly after the transfer completes.

Clean-up

After all this is done successfully, you probably want to go into the Permissions settings and turn off the Location and Nearby Devices permissions for Signal on both devices.
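
If you granted the extra permissions over adb as sketched earlier, you can revoke them the same way (same caveats about the assumed package id and permission names):

# Revoke the transfer-only permissions now that the migration is complete.
adb shell pm revoke org.thoughtcrime.securesms android.permission.ACCESS_FINE_LOCATION
adb shell pm revoke org.thoughtcrime.securesms android.permission.NEARBY_WIFI_DEVICES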

I recommend also going into Wi-Fi Direct and removing any connected devices and forgetting any existing connections.

Conclusion

This is an abysmally clunky user experience, and I'm glad I don't have to do it often. It would have been much simpler to make a backup and restore from it, but I didn't want to freak out my contacts with a safety number change.

By contrast, when i wanted to extend a DeltaChat account across two devices, the transfer was prompt and entirely painless -- i just had to make sure the devices were on the same network, and then scan a QR code from one to the other. And there was no temporal gap for any other devices. And i could use Delta on both devices simultaneously until i was convinced that it would work on the new device -- Delta doesn't have the concept of a primary account.

I wish Signal made it that easy! Until it's that easy, i hope the processes described here are useful to someone.

21 November, 2025 05:00AM by Daniel Kahn Gillmor

November 19, 2025

hackergotchi for Gunnar Wolf

Gunnar Wolf

While it is cold-ish season in the Northern hemisphere...

Last week, our university held a «Mega Vaccination Center». Things cannot be small or regular with my university, ever! According to the official information, over the course of the week ≈31,000 people were given a total of ≈74,000 vaccine doses against influenza, COVID-19, pneumococcal disease and measles (the specific vaccines for each person selected according to an age profile).

I was a tiny blip in said numbers. One person, three shots. Took me three hours, but am quite happy to have been among the huge crowd.

Long, long line

(↑ photo credit: La Jornada, 2025.11.14)

Really vaccinated!

And why am I bringing this up? Because I have long been involved in organizing DebConf, the best conference ever, naturally devoted to improving Debian GNU/Linux. And last year, our COVID reaction procedures ended up hurting people we care about. We, as organizers, take seriously the task of shaping a humane COVID handling policy that is, at the same time, responsible and respectful towards people who are (reasonably!) afraid of catching the infection. No, COVID did not disappear in 2022, and its effects are not something we can turn a blind eye to.

Next year, DebConf will take place in Santa Fe, Argentina, in July. This means it will be a Winter DebConf. And while you can catch COVID (or influenza, or just a bad cold) at any time of year, the odds are a bit higher in winter.

I know not every country still administers free COVID or influenza vaccines to anybody who requests them. And I know that any protection I might have got now will be much weaker by July. But I feel it necessary to ask everyone who can get a shot to get one. Most Northern Hemisphere countries will have a vaccination campaign (or at least, higher vaccine availability) before Winter.

If you plan to attend DebConf (hell… If you plan to attend any massive gathering of people travelling from all over the world to sit at a crowded auditorium) during the next year, please… Act responsibly. For yourself and for those surrounding you. Get vaccinated. It won’t absolutely save you from catching it, but it will reduce the probability. And if you do catch it, you will probably have a much milder version. And thus, you will spread it less during the first days until (and if!) you start developing symptoms.

19 November, 2025 03:59AM

November 18, 2025

hackergotchi for Daniel Kahn Gillmor

Daniel Kahn Gillmor

App Store Oligopoly

A Call for Public Discussion about App Store Oligopoly

Over on the ACLU's Free Future blog, I just published an article titled Your Smartphone, Their Rules: How App Stores Enable Corporate-Government Censorship.

Free Software users and developers likely already understand the reasons why it matters who controls what tools you have access to. Hopefully this post can help clarify, even to people typically used to common non-free tooling, that there are real world risks to consolidated, proprietary control over computing and communication tools.

Big shout out to the projects out there doing good work in the "pocket supercomputer" space, providing an escape valve for many users and a counter-example to centralized corporate control, including F-Droid, GrapheneOS, and phosh.

The screws are tightening on user freedom, in the very place where most computing is happening today. The smartphone is already far more similar to an ankle monitor than it should be.

Please, publish your own suggestions on creative forms of mutual technical liberation. These are communications tools, so no person can fix the problems alone.

I would love to see a flourishing of non-Android, non-iOS systems in people's pockets, but i also know, with the market the way it is, that this is a long haul. Until that happens, we should also try to keep Android open; check out keepandroidopen.org for more suggestions.

18 November, 2025 05:00AM by Daniel Kahn Gillmor

November 17, 2025

Rodrigo Siqueira

XDC 2025

It has been a long time since I published any update in this space. Since this was a year of colossal changes for me, maybe it is also time to do something different with this blog and publish something just for a change — why not start talking about XDC 2025?

This year, I attended XDC 2025 in Vienna as an Igalia developer. I was thrilled to see familiar faces: people I worked with in the past and people I'm working with now. I had a chance to hang out with some folks I worked with at AMD (Harry, Alex, Leo, Christian, Shashank, and Pierre), many Igalians (Žan, Job, Ricardo, Paulo, Tvrtko, and many others), and finally some developers from Valve. In particular, I met Tímur in person for the first time, even though we have been talking for months about GPU recovery. Speaking of GPU recovery, we held a workshop on this topic together.

The workshop was packed with developers from different companies, which was nice because it brought different angles to the topic. We began our discussion by focusing on job resubmission. Christian shared a brief history of how the AMDGPU driver started handling resubmission and the associated issues. After learning from that earlier experience, amdgpu ended up adopting the following approach:

  1. When a job causes a hang, call the driver-specific handler.
  2. Stop the scheduler.
  3. Copy all jobs from the ring buffer, minus the job that caused the issue, to a temporary ring.
  4. Reset the ring buffer.
  5. Copy back the other jobs to the ring buffer.
  6. Resume the scheduler.

Below, you can see one crucial patch series associated with the amdgpu recovery implementation:

The next topic was a discussion around the replacement of drm_sched_resubmit_jobs(), since this function became deprecated. Only a few drivers still use it, and they need a replacement. Some ideas were floated about extracting parts of the driver-specific implementations into a generic function. The next day, Philipp Stanner continued this topic in his workshop, DRM GPU Scheduler.

Another crucial topic discussed was improving GPU reset debuggability, to narrow down which operations cause the hang (keep in mind that GPU recovery is a medicine, not a cure for the problem). Intel developers shared their strategy for dealing with this by obtaining hints from userspace, which helped them provide a better set of information to append to the devcoredump. AMD could adopt this alongside dumping the IB data into the devcoredump (I am already investigating this).
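
For anyone who has not poked at devcoredump before: the dumps show up under sysfs and can be copied off for analysis. Here is a minimal sketch; the instance number and exact paths are assumptions that may vary by kernel version:

# Copy a pending device coredump somewhere persistent for later analysis.
sudo cp /sys/class/devcoredump/devcd1/data amdgpu-hang.devcoredump
# Writing anything to the data file tells the kernel the dump has been consumed.
echo 1 | sudo tee /sys/class/devcoredump/devcd1/data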

Finally, we discussed strategies to avoid regressions in hang handling. In summary, we have two lines of defense:

  • IGT: At the IGT level, we can have more tests that insert malicious instructions into the ring buffer, forcing the driver into an invalid state and triggering the recovery process.
  • HangTest suite: a tool that simulates potential hangs using Vulkan. Some tests are already available in this suite, but we should explore more creative combinations to try to trigger hangs.
Lightning talk

This year, as always, XDC was super cool, packed with many engaging presentations, which I highly recommend everyone check out. If you are interested, check the schedule and the presentation recordings available on the X.Org Foundation YouTube page. Anyway, I hope this blog post marks the inauguration of a new era for this site, where I will start posting more content, ranging from updates to tutorials. See you soon.

17 November, 2025 12:00AM

November 16, 2025

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Game slowrunning

In 2013, I finished Zelda II: The Adventure of Link (on emulator), which I'd first played the summers of 1992 and 1993 (or thereabouts). At ~20 years between first start and first finish, it's a kind of weird opposite of speedrunning, and a personal best for me.

But this weekend, I trounced that record; in 1990 (I think!), we got a 512 kB RAM expansion for the Amiga 500 for the first time, which allowed us to play our warezed copy of Pool of Radiance without understanding much of the story or really reading that much English. And a couple of weeks ago, I realized that I had bought the game on GOG.com in 2018 and not done much about it… and went to finish it.

Pool of Radiance, fighting Thyranthraxus

Due to poor planning on my part, this ended up being a bit of a challenge run, with no stat modification, only five people in the party, no excessive rerolling (only 2–3 for each), no multiclassing, no glitches, no save-states (after finding out they help very little :-) ), very limited NPCs (only story NPCs plus a couple of hirelings immediately killed for items, as opposed to the Amiga runs where we basically had only one PC and the rest top-grade NPCs!) and no Gold Box Companion.

However: Extensive guide use (the Internet is great!), and savescumming. Oh my, so much savescumming.

So that's 35 years from first start to first finish. We'll see when I get to Police Quest I…

16 November, 2025 11:46AM

Russ Allbery

Cumulative haul

I haven't posted a book haul in forever, so lots of stuff stacked up, including a new translation of Bambi that I really should get around to reading.

Nicholas & Olivia Atwater — A Matter of Execution (sff)
Nicholas & Olivia Atwater — Echoes of the Imperium (sff)
Travis Baldree — Brigands & Breadknives (sff)
Elizabeth Bear — The Folded Sky (sff)
Melissa Caruso — The Last Hour Between Worlds (sff)
Melissa Caruso — The Last Soul Among Wolves (sff)
Haley Cass — Forever and a Day (romance)
C.L. Clark — Ambessa: Chosen of the Wolf (sff)
C.L. Clark — Fate's Bane (sff)
C.L. Clark — The Sovereign (sff)
August Clarke — Metal from Heaven (sff)
Erin Elkin — A Little Vice (sff)
Audrey Faye — Alpha (sff)
Emanuele Galletto, et al. — Fabula Ultima: Core Rulebook (rpg)
Emanuele Galletto, et al. — Fabula Ultima: Atlas High Fantasy (rpg)
Emanuele Galletto, et al. — Fabula Ultima: Atlas Techno Fantasy (rpg)
Alix E. Harrow — The Everlasting (sff)
Alix E. Harrow — Starling House (sff)
Antonia Hodgson — The Raven Scholar (sff)
Bel Kaufman — Up the Down Staircase (mainstream)
Guy Gavriel Kay — All the Seas of the World (sff)
N.K. Jemisin & Jamal Campbell — Far Sector (graphic novel)
Mary Robinette Kowal — The Martian Conspiracy (sff)
Matthew Kressel — Space Trucker Jess (sff)
Mark Lawrence — The Book That Held Her Heart (sff)
Yoon Ha Lee — Moonstorm (sff)
Michael Lewis (ed.) — Who Is Government? (non-fiction)
Aidan Moher — Fight, Magic, Items (non-fiction)
Saleha Mohsin — Paper Soldiers (non-fiction)
Ada Palmer — Inventing the Renaissance (non-fiction)
Suzanne Palmer — Driving the Deep (sff)
Suzanne Palmer — The Scavenger Door (sff)
Suzanne Palmer — Ghostdrift (sff)
Terry Pratchett — Where's My Cow (graphic novel)
Felix Salten & Jack Zipes (trans.) — The Original Bambi (classic)
L.M. Sagas — Cascade Failure (sff)
Jenny Schwartz — The House That Walked Between Worlds (sff)
Jenny Schwartz — House in Hiding (sff)
Jenny Schwartz — The House That Fought (sff)
N.D. Stevenson — Scarlet Morning (sff)
Rory Stewart — Politics on the Edge (non-fiction)
Emily Tesh — The Incandescent (sff)
Brian K. Vaughan & Fiona Staples — Saga #1 (graphic novel)
Scott Warren — The Dragon's Banker (sff)
Sarah Wynn-Williams — Careless People (non-fiction)

As usual, I have already read and reviewed a whole bunch of these. More than I had expected, actually, given that I've not had a great reading year this year so far.

I am, finally, almost caught up with reviews, with just one book read and not yet reviewed. And hopefully I'll have lots of time to read for the last month and a half of the year.

16 November, 2025 06:32AM

November 15, 2025

Andrew Cater

2025-11-15 17:16 UTC Debian media testing for point release 13.2 of Trixie

*Busy* day in Cambridge. A roomful of people, large numbers of laptops and a lot of parallel installations.

Joined here by Emyr, Chris, Helen and Simon, with Isy doing speech installs from her university accommodation. Two Andys always makes it interesting. Steve providing breakfast, as ever.

We're almost there: the last test install is being repeated to flush out a possible bug. Other release processes are being done in the background.

Thanks again to Steve for hosting and all the hard work that goes into this from everybody.


15 November, 2025 08:39PM by Andrew Cater (noreply@blogger.com)

hackergotchi for Jonathan Dowland

Jonathan Dowland

Zoom R8

When I started looking at synths again, I had a feeling I would want to record from them, and ideally not with a computer. To that end, I also bought a second-hand standalone multitrack recorder, the Zoom R8.

Zoom R8

It's a little desktop console with two inputs, a built-in mic, and 8 sliders for adjusting the playback of 8 (ostensibly) independent tracks. It has a USB port to interface with a computer, and features some onboard effects (delay, reverb, that kind-of thing).

Look a bit closer, and the USB port is mini-USB, which gives away its age (and I'll never get rid of mini-USB cables, will I?). The two inputs are mono, so to capture stereo output from the minilogue-xd I need to tie up both inputs. Also, the 8 tracks are mono, so it's more like a stereo 4-track.

The effects (what little I've played with them) are really pretty cool, and it's great to apply them to a live signal. We had some fun running them over a bass guitar. However, you can only use them at a 44.1 kHz sample rate; if you forgo the effects, the device supports 48 kHz.

I've ended up using it as my main USB interface on my computer; it's great for that. The internal mic ended up being too weak to use for video calls. As a USB interface, my computer can receive the signal from the synth (and I've wasted more time than I care to admit trying to wrestle with the Linux audio stack to do something with that).
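
If you want to sanity-check the device without involving the desktop audio stack at all, talking to ALSA directly works as a rough sketch. The card name "R8" below is an assumption; check the output of arecord -l for what your system actually calls it:

# List capture devices; the R8 should show up as a USB audio card.
arecord -l
# Record ten seconds of stereo audio straight from the interface.
arecord -D hw:R8,0 -f S16_LE -r 44100 -c 2 -d 10 test.wav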

It can also run on batteries, which opens up the possibility of recording piano with my daughter, or field recording or suchlike.

Writing this up serves as a reminder to me of why I bought it, and I now intend to spend a little more time using it that way and stop wasting time fighting ALSA/PulseAudio/PipeWire/PortAudio/etc.

15 November, 2025 10:14AM

November 14, 2025

hackergotchi for Gunnar Wolf

Gunnar Wolf

404 not found

Found this graffiti on the wall behind my house today:

404 not found!

14 November, 2025 07:27PM

November 12, 2025

Simon Josefsson

Introducing the Debian Libre Live Images

The Debian Libre Live Images allow you to run and install Debian GNU/Linux without non-free software.

The general goal is to provide a way to use Debian without reliance on non-free software, to the extent possible within the Debian project.

One challenge is the official Debian live and installer images. Since the 2022 decision on non-free firmware, the official images for bookworm and trixie contain non-free software.

The Debian Libre Live Images project provides Live ISO images for Intel/AMD-compatible 64-bit x86 CPUs (amd64), built without any non-free software and suitable for running and installing Debian. The images are similar to the official Debian live images.

One advantage of Debian Libre Live Images is that you do not need to agree to the distribution terms and usage license agreements of the non-free blobs included in the official Debian images. The rights to your own hardware won’t be crippled by the legal restrictions that follow from relying on those non-free blobs, and the usage of your own machine is no longer limited to what the non-free firmware license agreements allow you to do. This also improves your software supply-chain situation, since you no longer need to consider the blobs’ implications for your liberty, privacy or security. Inclusion of non-free firmware is a vehicle for xz-style attacks. For more information about the advantages of free software, see the FSF’s page on What is Free Software?.

Enough talking, show me the code! Err, binaries! Download images:

wget https://gitlab.com/api/v4/projects/74667529/packages/generic/debian-libre-live/main/live-image-amd64.hybrid.iso
wget https://gitlab.com/api/v4/projects/74667529/packages/generic/debian-libre-live/main/live-image-amd64.hybrid.iso.SHA256SUMS
sha256sum -c live-image-amd64.hybrid.iso.SHA256SUMS

Run in a virtual machine:

kvm -cdrom live-image-amd64.hybrid.iso -m 8G

Burn to an USB drive for installation on real hardware:

sudo dd if=live-image-amd64.hybrid.iso of=/dev/sdX bs=4M status=progress conv=fsync # replace sdX with your USB drive

Images are built using live-build from the Debian Live Team. Inspiration has been taken from Reproducible Live Images and Kali Live.

The images are built by GitLab CI/CD shared runners. The pipeline’s .gitlab-ci.yml container job creates a container with live-build installed, defined in container/Containerfile. The build job then invokes run.sh, which runs lb build and then uploads the image to the package registry.
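
To give a flavour of what a libre configuration can look like, here is a minimal live-build sketch restricted to the main archive area with firmware inclusion disabled. This is not the project’s actual configuration (that lives in run.sh and the surrounding repository files), just an illustration:

# Minimal sketch only: build a live image from "main" without firmware packages.
sudo apt install live-build
mkdir libre-live && cd libre-live
lb config --distribution trixie --archive-areas main \
  --firmware-chroot false --firmware-binary false
sudo lb build    # produces live-image-amd64.hybrid.iso on an amd64 host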

This is an initial public release, so calibrate your expectations! The primary audience is people already familiar with Debian. There are known issues. I have performed successful installations on a couple of different machines, including laptops such as the Lenovo X201 and the Framework AMD Laptop 13″.

Are you able to install Debian without any non-free software on some hardware using these images?

Happy Hacking!

12 November, 2025 11:16PM by simon

hackergotchi for Daniel Pocock

Daniel Pocock

Alfa Lions v Bayonne Dockers, AFL France in Parc de Parilly, 8 November 2025

Video: men's match

12 November, 2025 06:00AM

Alfa Lions v Bayonne Dockers, AFL France in Parc de Parilly, 8 November 2025

Video: men's match

12 November, 2025 06:00AM

November 09, 2025

Alfa Lionnes v Bayonne Dockers, AFL France in Parc de Parilly, 8 November 2025

Video: women's match

09 November, 2025 04:30PM