Debian is a trademark of Software in the Public Interest, Inc. This site is operated independently in the spirit of point three of the Debian Social Contract, which tells us: "We will not hide problems."

Feeds

December 08, 2025

Thorsten Alteholz

My Debian Activities in November 2025

Debian LTS/ELTS

This was my one-hundred-thirty-seventh month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian, and my eighty-eighth month of ELTS work. As the LTS and ELTS teams have now been merged, there is only one paragraph covering both activities.

During my allocated time I uploaded or worked on:

  • [DLA 4381-1] net-snmp security update to fix two CVEs related to denial of service.
  • [DLA 4382-1] libsdl2 security update to fix one CVE related to a memory leak and a denial of service.
  • [DLA 4380-1] cups-filters security update to fix three CVEs related to out-of-bounds reads or writes or a heap buffer overflow.
  • [ELA-1586-1] cups-filters security update to fix three CVEs in Buster and Stretch, related to out-of-bounds reads or writes or a heap buffer overflow.
  • [libcupsfilters] upload to unstable to fix two CVEs
  • [cups-filters] upload to unstable to fix three CVEs
  • [cups] upload to unstable to fix two CVEs
  • [rlottie] upload to unstable to finally fix three CVEs
  • [rplay] upload to unstable to finally fix one CVE
  • [#1121342] trixie-pu bug for libcupsfilters to fix two CVEs in Trixie.
  • [#1121391] trixie-pu bug for cups-filters to fix three CVEs in Trixie.
  • [#1121392] bookworm-pu bug for cups-filters to fix three CVEs in Bookworm.
  • [#112433] trixie-pu bug for rlottie to finally fix three CVEs in Trixie.
  • [#112437] bookworm-pu bug for rlottie to finally fix three CVEs in Bookworm.

I also attended the monthly LTS/ELTS meeting and did a week of LTS/ELTS frontdesk duties. I also stumbled upon a bug in python3-paramiko, where the parsing of include statements in ssh_config does not work. Rather annoying, but it is already fixed in the newest version, which only needs to find its way to my old VM.

Debian Printing

This month I uploaded a new upstream version or a bugfix version of:

I also uploaded cups to Trixie, to fix bug #1109471 related to a configuration problem with the admin panel.

This work is generously funded by Freexian!

Debian Astro

This month I uploaded a new upstream version or a bugfix version of:

  • siril to unstable (sponsored upload).
  • supernovas to unstable (sponsored upload).

Debian IoT

This month I uploaded a new upstream version or a bugfix version of:

Debian Mobcom

This month I uploaded a new upstream version or a bugfix version of:

misc

This month I uploaded a new upstream version or a bugfix version of:

In my fight against outdated RFPs, I closed 30 of them in November.

I started with about 3500 open RFP bugs, and after working on this project for six months, I have closed 183 bugs. Of course new bugs appeared, so the overall number of bugs is only down to about 3360.

Though I view this as a successful project, I also have to admit that it is a bit boring to work on it daily. Therefore I am closing this diary again and will now add the closed RFP bugs to my bug logbook. I will also try to close some of these bugs by actually uploading the requested software, probably one package per month.

FTP master

This month I accepted 236 and rejected 16 packages. The overall number of packages that got accepted was 247.

08 December, 2025 03:20PM by alteholz

François Marier

Learning a new programming language with an LLM

I started learning Go this year. First, I picked a Perl project I wanted to rewrite, got a good book and ignored AI tools since I thought they would do nothing but interfere with learning. Eventually though, I decided to experiment a bit and ended up finding a few ways to use AI assistants effectively even when learning something new.

Searching more efficiently

The first use case that worked for me was search. Instead of searching on a traditional search engine and then ending up on Stack Overflow, I could get the answer I was looking for directly in an AI side-window in my editor. Of course, that's bad news for Stack Overflow.

I was however skeptical from the beginning, since LLMs make mistakes, sometimes making up function signatures or APIs that don't exist. Therefore I got into the habit of going to the official standard library documentation to double-check suggestions. For example, if the LLM suggests using strings.SplitN, I verify the function signature and behaviour carefully before using it. Basically, "don't trust and do verify."
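To make that verification concrete, here is a minimal sketch (mine, not from any real project) that pins down the documented behaviour of strings.SplitN before relying on it:

package main

import (
  "fmt"
  "strings"
)

func main() {
  // Per the documented signature SplitN(s, sep string, n int) []string,
  // n == 2 yields at most two substrings; the last one keeps the
  // unsplit remainder.
  fmt.Println(strings.SplitN("a,b,c", ",", 2)) // [a b,c]
}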

I stuck to the standard library in my project, but if an LLM recommends third-party dependencies for you, make sure they exist and that Socket doesn't flag them as malicious. Research has found that 5-20% of packages suggested by LLMs don't actually exist, making this a real attack vector (dubbed "slopsquatting").
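For Go modules specifically, a cheap existence check is possible against the public module proxy. The sketch below is one way to do it; both module paths are chosen purely for illustration:

package main

import (
  "fmt"
  "net/http"
)

// moduleExists asks the public Go module proxy whether a module path is
// known. A 200 from /@latest means at least one version exists; 404 or
// 410 suggests a hallucinated (or private) module.
func moduleExists(path string) (bool, error) {
  resp, err := http.Get("https://proxy.golang.org/" + path + "/@latest")
  if err != nil {
    return false, err
  }
  defer resp.Body.Close()
  return resp.StatusCode == http.StatusOK, nil
}

func main() {
  for _, p := range []string{"github.com/spf13/cobra", "github.com/no/such-module"} {
    ok, err := moduleExists(p)
    fmt.Println(p, ok, err)
  }
}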

Autocomplete is too distracting

A step I took early on was to disable AI autocomplete in my editor. When learning a new language, you need to develop muscle memory for the syntax. Also, Go is no Java. There's not that much boilerplate to write in general.

I found it quite distracting to see some almost correct code replace my thinking about the next step. I can see how one could go faster with these suggestions, but being a developer is not just about cranking out lines of code as fast as possible, it's also about constantly learning new things (and retaining them).

Asking about idiomatic code

One of the most useful prompts when learning a new language is "Is this the most idiomatic way to do this in Go?". Large language models are good at recognizing patterns and can point out when you're writing code that works but doesn't follow the conventions of the language. This is especially valuable early on when you don't yet have a feel for what "good" code looks like in that language.

It's usually pretty easy (at least for an experienced developer) to tell when the LLM suggestion is actually counterproductive or wrong. If it increases complexity or is harder to read or decode, it's probably not a good idea to do it.
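As a hypothetical illustration of the kind of rewrite an "is this idiomatic?" prompt tends to produce: both functions below behave identically, but the second uses strings.Builder, the conventional Go approach that avoids reallocating the string on every iteration.

package main

import (
  "fmt"
  "strings"
)

// joinSlow works, but allocates a new string on every iteration.
func joinSlow(items []string) string {
  s := ""
  for _, it := range items {
    s += it + "\n"
  }
  return s
}

// joinIdiomatic amortizes the allocations with a strings.Builder.
func joinIdiomatic(items []string) string {
  var b strings.Builder
  for _, it := range items {
    b.WriteString(it)
    b.WriteByte('\n')
  }
  return b.String()
}

func main() {
  items := []string{"a", "b", "c"}
  fmt.Println(joinSlow(items) == joinIdiomatic(items)) // true
}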

Reviews

One way a new dev gets better is through code review. If you have access to a friend who's an expert in the language you're learning, then you can definitely gain a lot by asking for feedback on your code.

If you don't have access to such a valuable resource, or as a first step before you consult your friend, I found that AI-assisted code reviews can be useful:

  1. Get the model to write the review prompt for you. Describe what you want reviewed and let it generate a detailed prompt.
  2. Feed that prompt to multiple models. They each have different answers and will detect different problems.
  3. Be prepared to ignore 50% of what they recommend. Some suggestions will be stylistic preferences, others will be wrong, or irrelevant.

The value is in the other 50%: the suggestions that make you think about your code differently or catch genuine problems.

Similarly for security reviews:

  • A lot of what they flag will need to be ignored (false positives, or things that don't apply to your threat model).
  • Some of it may highlight areas for improvement that you hadn't considered.
  • Occasionally, they will point out real vulnerabilities.

But always keep in mind that AI chatbots are trained to be people-pleasers and often feel the need to suggest something when nothing was needed.

An unexpected benefit

One side effect of using AI assistants was that having them write the scaffolding for unit tests motivated me to increase my code coverage. Trimming unnecessary test cases and adding missing ones is pretty quick when the grunt work is already done, and I ended up testing more of my code (this being a personal project written in my own time) than I might have otherwise.
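As a minimal sketch of that scaffolding (the clamp helper is made up purely so the test is self-contained), the table-driven layout is what makes trimming and adding cases quick:

package main

import "testing"

// clamp is a hypothetical helper, included only so the test compiles.
func clamp(v, lo, hi int) int {
  if v < lo {
    return lo
  }
  if v > hi {
    return hi
  }
  return v
}

// TestClamp shows the table-driven layout: add a row, get a subtest.
func TestClamp(t *testing.T) {
  tests := []struct {
    name      string
    v, lo, hi int
    want      int
  }{
    {"below", -5, 0, 10, 0},
    {"inside", 5, 0, 10, 5},
    {"above", 15, 0, 10, 10},
  }
  for _, tt := range tests {
    t.Run(tt.name, func(t *testing.T) {
      if got := clamp(tt.v, tt.lo, tt.hi); got != tt.want {
        t.Errorf("clamp(%d, %d, %d) = %d, want %d", tt.v, tt.lo, tt.hi, got, tt.want)
      }
    })
  }
}

Saved as a _test.go file, go test runs each row as a subtest.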

Learning

In the end, I continue to believe in the value of learning from quality books (I find reading paper-based most effective). In addition, I like to create Anki questions for common mistakes or things I find I have to look up often. Remembering something will always be faster than asking an AI tool.

So my experience this year tells me that LLMs can supplement traditional, time-tested learning techniques, but I don't believe they make them obsolete.

P.S. I experimented with getting an LLM to ghost-write this post for me from an outline (+ a detailed style guide) and I ended up having to rewrite at least 75% of it. It was largely a waste of time.

08 December, 2025 12:32AM

December 07, 2025

Vincent Bernat

Compressing embedded files in Go

Go’s embed feature lets you bundle static assets into an executable, but it stores them uncompressed. This wastes space: a web interface with documentation can bloat your binary by dozens of megabytes. A proposal to optionally enable compression was declined because it is difficult to handle all use cases. One solution? Put all the assets into a ZIP archive! 🗜️

Code

The Go standard library includes a module to read and write ZIP archives. It contains a function that turns a ZIP archive into an io/fs.FS structure that can replace embed.FS in most contexts.1

package embed

import (
  "archive/zip"
  "bytes"
  _ "embed"
  "fmt"
  "io/fs"
  "sync"
)

//go:embed data/embed.zip
var embeddedZip []byte

var dataOnce = sync.OnceValue(func() *zip.Reader {
  r, err := zip.NewReader(bytes.NewReader(embeddedZip), int64(len(embeddedZip)))
  if err != nil {
    panic(fmt.Sprintf("cannot read embedded archive: %s", err))
  }
  return r
})

func Data() fs.FS {
  return dataOnce()
}

We can build the embed.zip archive with a rule in a Makefile. We specify the files to embed as dependencies to ensure changes are detected.

common/embed/data/embed.zip: console/data/frontend console/data/docs
common/embed/data/embed.zip: orchestrator/clickhouse/data/protocols.csv 
common/embed/data/embed.zip: orchestrator/clickhouse/data/icmp.csv
common/embed/data/embed.zip: orchestrator/clickhouse/data/asns.csv
common/embed/data/embed.zip:
    mkdir -p common/embed/data && zip --quiet --recurse-paths --filesync $@ $^

The automatic variable $@ is the rule target, while $^ expands to all the dependencies, modified or not.

Space gain

Akvorado, a flow collector written in Go, embeds several static assets:

  • CSV files to translate port numbers, protocols or AS numbers, and
  • HTML, CSS, JS, and image files for the web interface, and
  • the documentation.
Breakdown of the space used by each component before (left) and after (right) the introduction of embed.zip, displayed as a treemap: many small embedded files are replaced by a single bigger one.

Embedding these assets into a ZIP archive reduced the size of the Akvorado executable by more than 4 MiB:

$ unzip -p common/embed/data/embed.zip | wc -c | numfmt --to=iec
7.3M
$ ll common/embed/data/embed.zip
-rw-r--r-- 1 bernat users 2.9M Dec  7 17:17 common/embed/data/embed.zip

Performance loss

Reading from a compressed archive is not as fast as reading a flat file. A simple benchmark shows it is more than 4× slower. It also allocates some memory.2

goos: linux
goarch: amd64
pkg: akvorado/common/embed
cpu: AMD Ryzen 5 5600X 6-Core Processor
BenchmarkData/compressed-12     2262   526553 ns/op   610 B/op   10 allocs/op
BenchmarkData/uncompressed-12   9482   123175 ns/op     0 B/op    0 allocs/op
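
A minimal sketch of such a benchmark (assuming a hypothetical asset name, docs/index.html) opens and fully reads one file per iteration:

package embed

import (
  "io"
  "testing"
)

// BenchmarkRead opens and fully reads one asset per iteration; for the
// ZIP-backed fs.FS this includes a decompression step.
// "docs/index.html" is a made-up asset name used only for illustration.
func BenchmarkRead(b *testing.B) {
  fsys := Data()
  b.ReportAllocs()
  for i := 0; i < b.N; i++ {
    f, err := fsys.Open("docs/index.html")
    if err != nil {
      b.Fatal(err)
    }
    if _, err := io.Copy(io.Discard, f); err != nil {
      b.Fatal(err)
    }
    f.Close()
  }
}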

Each access to an asset requires a decompression step, as seen in this flame graph:

CPU flame graph comparing the time spent on CPU when reading data from embed.zip (left) versus reading data directly (right). Because the Go testing framework executes the benchmark for uncompressed data 4 times more often, it uses the same horizontal space as the benchmark for compressed data.

While a ZIP archive has an index to quickly find the requested file, seeking inside a compressed file is currently not possible.3 Therefore, the files from a compressed archive do not implement the io.ReaderAt or io.Seeker interfaces, unlike directly embedded files. This prevents some features, like serving partial files or detecting MIME types when serving files over HTTP.
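
A small self-contained sketch (an illustration, not code from Akvorado) makes the missing interface visible: compressed ZIP entries fail a runtime io.Seeker check.

package main

import (
  "archive/zip"
  "bytes"
  "fmt"
  "io"
)

func main() {
  // Build a tiny in-memory ZIP with one entry; Create compresses with
  // Deflate by default.
  var buf bytes.Buffer
  w := zip.NewWriter(&buf)
  f, _ := w.Create("hello.txt")
  f.Write([]byte("hello"))
  w.Close()

  r, _ := zip.NewReader(bytes.NewReader(buf.Bytes()), int64(buf.Len()))
  entry, _ := r.Open("hello.txt")
  defer entry.Close()

  // A compressed entry implements neither io.Seeker nor io.ReaderAt,
  // which is what rules out range requests and MIME sniffing.
  _, seekable := entry.(io.Seeker)
  fmt.Println("seekable:", seekable) // seekable: false
}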


For Akvorado, this is an acceptable compromise to save a few mebibytes from an executable of almost 100 MiB. Next week, I will continue this futile adventure by explaining how I prevented Go from disabling dead code elimination! 🦥


  1. You can safely read multiple files concurrently. However, it does not implement ReadDir() and ReadFile() methods. ↩︎

  2. You could keep frequently accessed assets in memory. This reduces CPU usage and trades cached memory for resident memory. ↩︎

  3. SOZip is a profile that enables fast random access in a compressed file. However, Go’s archive/zip module does not support it. ↩︎

07 December, 2025 11:05PM by Vincent Bernat

Iustin Pop

Yes, still alive!

Yeah, again three months have passed since my last (trivial) post, and I really don’t know where the time has flown.

I suppose the biggest problem was the long summer vacation, which threw me off-track, and then craziness started. Work work work, no time for anything, which kept me fully busy in August, and then “you should travel”.

So mid-September I went on my first business trip since Covid, again to Kirkland, which in itself was awesome. Flew out Sunday, and as I was concerned I was going to lose too much fitness—I had a half-marathon planned for the weekend after the return—I ran every morning of the four days I was there. And of course, on the last day, I woke up even earlier (05:30 AM), went out to run before sunrise, intending to do a very simple “run along the road that borders the lake for 2.5K, then back”. And right at the farthest point, a hundred metres before my goal of turning around, I tripped, started falling, and as I was falling, I hit—sideways—a metal pole. I was at a bus station; it was the pole that has the schedule at the top, and I hit it at relatively full speed, right across my left-side ribs. The crash took the entire air out of my lungs, and I don’t remember ever feeling pain or sensation like that—I was seriously not able to breathe for 20 seconds or so, and I was wondering if I was going to pass out at this rate.

Only 20 seconds, because my Garmin started howling like a police siren, and the screen was saying something along the lines of: “Incident detected; contacting emergency services in 40…35…” and I was fumbling to cancel that, since a) I wasn’t that bad, and b) notifying my wife that I had crashed would not have been a smart idea.

My left leg was scraped in a few places, and my left hand pretty badly, or more than just scraped, so my focus was on limping back and finding a fountain to wash my injuries. I did, and then kept running with blood dripping down my hand. Fun fun, everything was hurting. I took an Uber for the ~1 km to the office, had many meetings, took another Uber and flew back to Zurich. Seattle → San Francisco → Zürich, I think 14 hours, with my ribs hurting pretty badly. But I got home (Friday afternoon) and was wondering whether I could run on Saturday.

Saturday comes, I feel pretty OK, so I said let’s try, and I’ll stop if the pain is too great. I pick up my number, I go to the start, of course in the last block and not my normal block, and I start running. After 50 metres, I knew this wasn’t going to be good enough, but I said, let’s make it to the first kilometre. Then to the first fuelling point, then to the first aid point, at which moment I felt good enough to go on to the second one.

Long story short, I ran the whole half marathon, with pain. Every stop for fuelling was mentally hard, as the pain stopped, and I knew I had to start running again, and the pain would resume. In the end, I managed to finish: two and a half hours, instead of just two hours, but alive and very happy. Of course, I didn’t know what was waiting for me… Sunday I woke up in heavy pain, and despite painkillers, I was not feeling much better. The following night was terrible, so Monday morning I went to the doctor, had X-rays, and a discussion with a radiologist. “Not really broken, but more than just bruised. See this angle here? Bones don’t have angles normally.” Painkillers, chest/abdomen wrapping, no running! So my attempts to “not lose fitness” put me off running for a couple of weeks.

Then October came, and I was getting better, but work was getting even crazier. I don’t know where November went, honestly, and now we’re already in December. I did manage to run, quite well, managed to bike a tiny bit and swim a little, but I’m not in a place where I can keep a regular and consistent schedule.

On the good side, I managed this year, for the first time since Covid, not to get sick. Hey, a sports injury is 100× better than a sickness like I had in previous years, taking me out for two weeks. But life was crazy enough that I didn’t read some of my email accounts for months, and I’m just now starting to catch up to, well, baseline.

Of course, “the” rib—the lowest one on the left side—was long-healed, or so I thought. After some strength training early this week, I was very sore the next day, and I wanted to test whether my rib was still sore. I touched it at “the point”, and it hurt so badly I couldn’t believe it. Two and a half months, and it’s not done-done.

And now it’s just two weeks before Christmas and New Year’s, and that time off will ruin my rhythm again. At least ski vacation is booked, ski service is done, and slowly, work is getting in good enough shape to actually enjoy thinking about vacation.

So, in the end, a very adventurous last third of the year, and that wasn’t even all. As I’m writing this, my right wrist is bandaged and for the past 24 hours it hasn’t hurt too much, but that’s another, and not so interesting, story.

I’ll close with a yay for always being behind/backlogged, but alive and relatively well. My sport injuries are “elective injuries” so to speak, and I’m very thankful for that. See you in the next post!

07 December, 2025 08:37PM

December 06, 2025

Simon Josefsson

Reproducible Guix Container Images

Around a year ago I wrote about Guix Container Images for GitLab CI/CD and these images have served the community well. Besides continuous use in CI/CD, these Guix container images are used to confirm reproducibility of the source tarball artifacts in the releases of Libtasn1 v4.20, InetUtils v2.6, Libidn2 v2.3.8, Libidn v1.43, SASL v2.2.2, Guile-GnuTLS v5.0.1, and OATH Toolkit v2.6.13. See how all those release announcements mention a Guix commit? That’s the essential supply-chain information about the Guix build environment that allows the artifacts to be re-created. To make sure this is repeatable, the release tarball artifacts are re-created from source code every week in the verify-reproducible-artifacts project, which I wrote about earlier. Guix’s time travelling feature makes this sustainable to maintain, and will hopefully continue to make it possible to reproduce the exact same tarball artifacts for years to come.

During the last year, Guix was unfortunately removed from Debian stable. My Guix container images were created from Debian with that Guix package. My setup continued to work since the old stage0 Debian+Guix containers were still available. Such a setup is not sustainable, as there will be bit-rot, and we don’t want to rely forever on old containers which (after the removal of Guix from Debian) could not be re-produced any more. Let this be a reminder of how user-empowering a feature like Guix time travelling is! I have reworked my Guix container image setup, and this post is an update on the current status of this effort.

The first step was to re-engineer the Debian container images with Guix, and I realized these were useful on their own and warrant a separate project. A more narrowly scoped project will hopefully make it easier to keep them working. Now, instead of apt-get install guix, they use the official Guix guix-install.sh approach. Read more about that effort in the announcement of Debian with Guix.

The second step was to reconsider my approach to generating the Guix images. The earlier design had several stages. First, Debian+Guix containers were created. Then, from those containers, a pure Guix container was created. Finally, using the pure Guix container, another pure Guix container was created. The idea behind that GCC-like approach was to get to reproducible images created from an image that had no Debian left on it. However, I never managed to finish this, partially because I hadn’t realized that every time you build a Guix container image from Guix, you effectively go back in time. When using Guix version X to build a container with Guix on it, it will not put Guix version X into the container but whatever version of Guix is available in its package archive, which will be an earlier version, such as version X-N. I had hoped to overcome this somehow (running a guix pull in newly generated images may work), but never finished this before Guix was removed from Debian.

So what could a better design look like?

For efficiency, I had already started experimenting with generating the final images directly from the Debian+Guix images, and after reproducibility bugs were fixed I was able to get to reproducible images. However, I was still concerned that the Debian container could taint the process somehow, and was also concerned about the implied dependency on non-free software in Debian.

I’ve been using comparative rebuilds on “similar” distributions to confirm artifact reproducibility for my software projects, comparing builds on Trisquel 11 with Ubuntu 22.04, and AlmaLinux 9 with RockyLinux 9, for example. This works surprisingly well. Including one freedom-respecting distribution like Trisquel will detect if any non-free software has a bearing on artifacts. Using different architectures, such as amd64 vs arm64, also helps with deeper supply-chain concerns.

My conclusion was that I wanted containers with the same Guix commit for both Trisquel and Ubuntu. Given the similarity with Debian, adapting and launching the Guix on Trisquel/Debian project was straightforward. So we now have Trisquel 11/12 and Ubuntu 22.04/24.04 images with the same Guix on them.

Do you see where the debian-with-guix and guix-on-dpkg projects are leading?

We are now ready to look at the modernized Guix Container Images project. The tags are the same as before:

registry.gitlab.com/debdistutils/guix/container:latest
registry.gitlab.com/debdistutils/guix/container:slim
registry.gitlab.com/debdistutils/guix/container:extra
registry.gitlab.com/debdistutils/guix/container:gash

The method to create them is different. Now there is a “build” job that uses the earlier Guix+Trisquel container (for amd64) or Guix+Debian (for arm64, pending Trisquel arm64 containers). The build job creates the final containers directly. Next, an Ubuntu “reproduce” job is launched that runs the same commands, failing if it cannot generate a bit-by-bit identical container. Then single-arch images are tested (installing/building GNU hello and building libksba) and pushed to the GitLab registry, adding multi-arch images in the process. Finally, the multi-arch containers are tested by building Guile-GnuTLS and, on success, uploaded to Docker Hub.

How would you use them? A simple way to start the container is like this:

jas@kaka:~$ podman run -it --privileged --entrypoint=/bin/sh registry.gitlab.com/debdistutils/guix/container:latest
sh-5.2# env HOME=/ guix describe # https://issues.guix.gnu.org/74949
  guix 21ce6b3
    repository URL: https://git.guix.gnu.org/guix.git
    branch: master
    commit: 21ce6b392ace4c4d22543abc41bd7c22596cd6d2
sh-5.2# 

The need for --entrypoint=/bin/sh arises because Guix’s pack command sets up the entry point differently than most other containers. This could probably be fixed if people want that, and there may be open bug reports about this.

The need for --privileged is more problematic, but it is discussed upstream. The above example works fine without it, but running anything more elaborate with guix-daemon installing packages will trigger a fatal error. Speaking of that, here is a snippet of commands that allows you to install Guix packages in the container.

cp -rL /gnu/store/*profile/etc/* /etc/
echo 'root:x:0:0:root:/:/bin/sh' > /etc/passwd
echo 'root:x:0:' > /etc/group
groupadd --system guixbuild
for i in $(seq -w 1 10); do useradd -g guixbuild -G guixbuild -d /var/empty -s $(command -v nologin) -c "Guix build user $i" --system guixbuilder$i; done
env LANG=C.UTF-8 guix-daemon --build-users-group=guixbuild &
guix archive --authorize < /share/guix/ci.guix.gnu.org.pub
guix archive --authorize < /share/guix/bordeaux.guix.gnu.org.pub
guix install hello
GUIX_PROFILE="/var/guix/profiles/per-user/root/guix-profile"
. "$GUIX_PROFILE/etc/profile"
hello

This could be simplified, but we chose not to hard-code these steps in our containers, because some of them probably shouldn’t be papered over but fixed properly somehow. In some execution environments, you may need to pass --disable-chroot to guix-daemon.

To use the containers to build something in a GitLab pipeline, here is an example snippet:

test-amd64-latest-wget-configure-make-libksba:
  image: registry.gitlab.com/debdistutils/guix/container:latest
  before_script:
  - cp -rL /gnu/store/*profile/etc/* /etc/
  - echo 'root:x:0:0:root:/:/bin/sh' > /etc/passwd
  - echo 'root:x:0:' > /etc/group
  - groupadd --system guixbuild
  - for i in $(seq -w 1 10); do useradd -g guixbuild -G guixbuild -d /var/empty -s $(command -v nologin) -c "Guix build user $i" --system guixbuilder$i; done
  - export HOME=/
  - env LANG=C.UTF-8 guix-daemon --build-users-group=guixbuild &
  - guix archive --authorize < /share/guix/ci.guix.gnu.org.pub
  - guix archive --authorize < /share/guix/bordeaux.guix.gnu.org.pub
  - guix describe
  - guix install libgpg-error
  - GUIX_PROFILE="//.guix-profile"
  - . "$GUIX_PROFILE/etc/profile"
  script:
  - wget https://www.gnupg.org/ftp/gcrypt/libksba/libksba-1.6.7.tar.bz2
  - tar xfa libksba-1.6.7.tar.bz2
  - cd libksba-1.6.7
  - ./configure
  - make V=1
  - make check VERBOSE=t V=1

More help on the project page for the Guix Container Images.

That’s it for tonight folks, and remember, Happy Hacking!

06 December, 2025 10:22PM by simon


Jonathan Dowland

thesis

It's done! It's over! I've graduated, I have the scroll, I'm staring at the eye-watering prices for the official photographer snap, I'm adjusting to post-thesis life.

My PhD thesis revisions have been accepted and my thesis is now available from Newcastle University Library's eThesis repository.

As part of submitting my corrections, I wrote a brief report detailing the changes I made from my thesis at the time of the viva. I also produced a latexdiff marked-up copy of the thesis to visualise the exact changes. In order to shed some light on the post-viva corrections process, at least at my institution, and in the hope that they are of some use to someone, I'm sharing those documents:

06 December, 2025 09:41PM

December 04, 2025


Colin Watson

Free software activity in November 2025

My Debian contributions this month were all sponsored by Freexian. I had a bit less time than usual, because Freexian collaborators gathered in Marseille this month for our yearly sprint, doing some planning for next year.

You can also support my work directly via Liberapay or GitHub Sponsors.

OpenSSH

I began preparing for the second stage of the GSS-API key exchange package split (some details have changed since that message). It seems that we’ll need to wait until Ubuntu 26.04 LTS has been released, but that’s close enough that it’s worth making sure we’re ready. This month I just did some packaging cleanups that would otherwise have been annoying to copy, such as removing support for direct upgrades from pre-bookworm. I’m considering some other package rearrangements to make the split easier to manage, but haven’t made any decisions here yet.

This also led me to start on a long-overdue bug triage pass, mainly consisting of applying usertags to lots of our open bugs to sort them by which program they apply to, and also closing a few that have been fixed, since some bugs will eventually need to be reassigned to GSS-API packages and it would be helpful to make them easier to find. At the time of writing, about 30% of the bug list remains to be categorized this way.

Python packaging

I upgraded these packages to new upstream versions:

I packaged django-pgtransaction and backported it to trixie, since we plan to use it in Debusine; and I adopted python-certifi for the Python team.

I fixed or helped to fix several other build/test failures:

I fixed a couple of other bugs:

Other bits and pieces

Code reviews

04 December, 2025 05:55PM by Colin Watson


Ben Hutchings

FOSS activity in November 2025

04 December, 2025 02:59PM by Ben Hutchings

December 03, 2025

Reproducible Builds

Reproducible Builds in November 2025

Welcome to the report for November 2025 from the Reproducible Builds project!

These monthly reports outline what we’ve been up to over the past month, highlighting items of news from elsewhere in the increasingly-important area of software supply-chain security. As always, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.

In this report:

  1. “10 years of Reproducible Builds” at SeaGL
  2. Distribution work
  3. Tool development
  4. Website updates
  5. Miscellaneous news
  6. Software Supply Chain Security of Web3
  7. Upstream patches

‘10 years of Reproducible Builds’ at SeaGL 2025

On Friday 8th November, Chris Lamb gave a talk called 10 years of Reproducible Builds at SeaGL in Seattle, WA.

Founded in 2013, SeaGL is a free, grassroots technical summit dedicated to spreading awareness and knowledge about free source software, hardware and culture. Chris’ talk:

[…] introduces the concept of reproducible builds, its technical underpinnings and its potentially transformative impact on software security and transparency. It is aimed at developers, security professionals and policy-makers who are concerned with enhancing trust and accountability in our software. It also provides a history of the Reproducible Builds project, which is approximately ten years old. How are we getting on? What have we got left to do? Aren’t all the builds reproducible now?


Distribution work

In Debian this month, Jochen Sprickerhof created a merge request to replace the use of reprotest in Debian’s Salsa Continuous Integration (CI) pipeline with debrebuild. Jochen cites the advantages as being threefold: firstly, that “only one extra build [is] needed”; it “uses the same sbuild and ccache tooling as the normal build”; and it “works for any Debian release”. The merge request was merged by Emmanuel Arias and is now active.

kpcyrd posted to our mailing list announcing the initial release of repro-threshold, which implements an APT transport that “defines a threshold of at least X of my N trusted rebuilders need to confirm they reproduced the binary” before installing Debian packages. “Configuration can be done through a config file, or through a curses-like user interface.”

Holger then merged two commits by Jochen Sprickerhof in order to address a fakeroot-related reproducibility issue in the debian-installer, and Jörg Jaspert deployed a patch by Ivo De Decker for a bug originally filed by Holger in February 2025 related to some Debian packages not being archived on snapshot.debian.org.

Elsewhere, Roland Clobus performed some analysis on the “live” Debian trixie images, which he determined were not reproducible. However, in a follow-up post, Roland happily reports that the issues have been handled. In addition, 145 reviews of Debian packages were added, 12 were updated and 15 were removed this month, adding to our knowledge about identified issues.

Lastly, Jochen Sprickerhof filed a bug announcing their intention to “binary NMU” a very large number of R programming language packages after a reproducibility-related toolchain bug was fixed.


Bernhard M. Wiedemann posted another openSUSE monthly update for their work there.


Julien Malka and Arnout Engelen launched the new hash collection server for NixOS. Aside from improved reporting to help focus reproducible builds efforts within NixOS, it collects build hashes as individually-signed attestations from independent builders, laying the groundwork for further tooling.


Tool development

diffoscope version 307 was uploaded to Debian unstable, as was version 309. These changes included further attempts to automatically deploy to PyPI by liaising with the PyPI developers/maintainers (this remains an experimental feature).

In addition, reprotest versions 0.7.31 and 0.7.32 were uploaded to Debian unstable by Holger Levsen, who also made the following changes:

  • Do not vary the architecture personality if the kernel is not varied. (Thanks to Raúl Cumplido.)
  • Drop the debian/watch file, as Lintian now flags this as an error for ‘native’ Debian packages.
  • Bump Standards-Version to 4.7.2, with no changes needed.
  • Drop the Rules-Requires-Root header, as it is no longer required.

In addition, Vagrant Cascadian fixed a build failure by removing some extra whitespace from an older changelog entry.


Website updates

Once again, there were a number of improvements made to our website this month including:


Miscellaneous news


Software Supply Chain Security of Web3

Via our mailing list, Martin Monperrus let us know about their recently-published page on the Software Supply Chain Security of Web3. The abstract of their paper is as follows:

Web3 applications, built on blockchain technology, manage billions of dollars in digital assets through decentralized applications (dApps) and smart contracts. These systems rely on complex software supply chains that introduce significant security vulnerabilities. This paper examines the software supply chain security challenges unique to the Web3 ecosystem, where traditional Web2 software supply chain problems intersect with the immutable and high-stakes nature of blockchain technology. We analyze the threat landscape and propose mitigation strategies to strengthen the security posture of Web3 systems.

Their paper lists reproducible builds as one of the mitigating strategies. A PDF of the full text is available to download.


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:



Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

03 December, 2025 08:28PM

December 02, 2025

François Marier

Recovering from a broken update on the Turris Omnia

The recent Turris OS update from 7.2.3 to 9.0.0 took down my WiFi entirely. The wired network still works fine, but wireless is completely broken.

Factory reset

It turns out the Omnia has an extensive (and fast) factory reset / recovery mode via the hardware reset button.

Unfortunately, the factory image didn't work for me, possibly because I don't use the stock WiFi radios anymore.

Rolling back with schnapps

Thanks to the fact that the Omnia uses a btrfs root filesystem, and the liberal use of snapshots around updates, I was able to roll back to the pre-9.0.0 state.

First, I connected to the router using ssh:

ssh root@192.168.1.1

Then I listed the available snapshots:

$ schnapps list
# | Type      | Size        | Date                        | Description
------+-----------+-------------+-----------------------------+------------------------------------
  500 | post      |    15.98MiB | 2025-08-09 11:27:48 -0700   | Automatic post-update snapshot (TurrisOS 7.2.2 - hbs)
  506 | pre       |    17.92MiB | 2025-09-12 03:44:32 -0700   | Automatic pre-update snapshot (TurrisOS 7.2.2 - hbs)
  507 | post      |    17.88MiB | 2025-09-12 03:45:14 -0700   | Automatic post-update snapshot (TurrisOS 7.2.3 - hbs)
  515 | time      |    20.03MiB | 2025-11-02 01:05:01 -0700   | Snapshot created by cron
  516 | time      |    20.05MiB | 2025-11-09 01:05:01 -0800   | Snapshot created by cron
  517 | time      |    20.29MiB | 2025-11-16 01:05:00 -0800   | Snapshot created by cron
  518 | time      |    20.64MiB | 2025-11-23 01:05:01 -0800   | Snapshot created by cron
  519 | time      |    20.83MiB | 2025-11-30 01:05:00 -0800   | Snapshot created by cron
  520 | pre       |    87.91MiB | 2025-11-30 07:41:10 -0800   | Automatic pre-update snapshot (TurrisOS 7.2.3 - hbs)
  521 | post      |   196.32MiB | 2025-11-30 07:48:11 -0800   | Automatic post-update snapshot (TurrisOS 9.0.0 - hbs)
  523 | pre       |     4.44MiB | 2025-11-30 20:47:31 -0800   | Automatic pre-update snapshot
  524 | post      |   224.00KiB | 2025-11-30 20:47:43 -0800   | Automatic post-update snapshot
  525 | rollback  |   224.00KiB | 2025-12-01 04:56:32 +0000   | Rollback to snapshot factory
  526 | pre       |     4.44MiB | 2025-11-30 21:04:19 -0800   | Automatic pre-update snapshot
  527 | post      |   272.00KiB | 2025-11-30 21:04:31 -0800   | Automatic post-update snapshot
  528 | rollback  |   272.00KiB | 2025-12-01 05:13:38 +0000   | Rollback to snapshot factory
  529 | pre       |     4.52MiB | 2025-11-30 21:28:44 -0800   | Automatic pre-update snapshot
  530 | single    |   208.00KiB |                             | 
  531 | rollback  |   224.00KiB | 2025-12-01 05:29:47 +0000   | Rollback to snapshot factory

Finally, I rolled back to the exact state I was on before the 9.0.0 update:

$ schnapps rollback 520
Current state saved as snapshot number 532
Rolled back to snapshot 520

Full wipe

As an aside, it turns out that the factory reset functionality is implemented as a btrfs rollback to a special factory snapshot. This is why it is so fast, but it also means that doing a simple factory reset doesn't wipe the data on your router. If you are planning to sell your device or otherwise dispose of it, you also need to delete all btrfs snapshots.

Conclusion

While this update was very disappointing, especially since it's never happened before with major updates on Turris OS, it made me discover just how great the recovery tools are. It would be pretty tricky to fully brick one of these devices.

02 December, 2025 11:23PM

Simon Josefsson

Guix on Trisquel & Ubuntu for Reproducible CI/CD Artifacts

Last week I published Guix on Debian container images that prepared for today’s announcement of Guix on Trisquel/Ubuntu container images.

I have published images with a reasonably modern Guix for Trisquel 11 aramo, Trisquel 12 ecne, Ubuntu 22.04 and Ubuntu 24.04. The Ubuntu images are available for both amd64 and arm64, but unfortunately Trisquel arm64 containers aren’t available yet, so those are amd64-only. Images for ppc64el and riscv64 are work in progress. The currently supported container names are:

registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel11-guix
registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel12-guix
registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu22.04-guix
registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu24.04-guix

Or, if you prefer, guix-on-dpkg on Docker Hub:

docker.io/jas4711/guix-on-dpkg:trisquel11-guix
docker.io/jas4711/guix-on-dpkg:trisquel12-guix
docker.io/jas4711/guix-on-dpkg:ubuntu22.04-guix
docker.io/jas4711/guix-on-dpkg:ubuntu24.04-guix

You may use them as follows. See the guix-on-dpkg README for how to start guix-daemon and installing packages.

jas@kaka:~$ podman run -it --hostname guix --rm registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel11-guix
root@guix:/# head -1 /etc/os-release 
NAME="Trisquel GNU/Linux"
root@guix:/# guix describe
  guix 136fc8b
    repository URL: https://gitlab.com/debdistutils/guix/mirror.git
    branch: master
    commit: 136fc8bfe91a64d28b6c54cf8f5930ffe787c16e
root@guix:/# 

You may now be asking yourself: why? Fear not, gentle reader, because having two container images of roughly similar software is a great tool for attempting to build software artifacts reproducibly, and for comparing the results to spot differences. Obviously.

I have been using this pattern to get reproducible tarball artifacts for several software releases for around a year and a half, since libntlm 1.8.

Let’s walk through how to set up a CI/CD pipeline that will build a piece of software in four different jobs, for Trisquel 11/12 and Ubuntu 22.04/24.04. I am in the process of learning Codeberg/Forgejo CI/CD, so I am still using GitLab CI/CD here, but the concepts should be the same regardless of platform. Let’s start by defining a job skeleton:

.guile-gnutls: &guile-gnutls
  before_script:
  - /root/.config/guix/current/bin/guix-daemon --version
  - env LC_ALL=C.UTF-8 /root/.config/guix/current/bin/guix-daemon --build-users-group=guixbuild $GUIX_DAEMON_ARGS &
  - GUIX_PROFILE=/root/.config/guix/current; . "$GUIX_PROFILE/etc/profile"
  - type guix
  - guix --version
  - guix describe
  - time guix install --verbosity=0 wget gcc-toolchain autoconf automake libtool gnutls guile pkg-config
  - time apt-get update
  - time apt-get install -y make git texinfo
  - GUIX_PROFILE="/root/.guix-profile"; . "$GUIX_PROFILE/etc/profile"
  script:
  - git clone https://codeberg.org/guile-gnutls/guile-gnutls.git
  - cd guile-gnutls
  - git checkout v5.0.1
  - ./bootstrap
  - ./configure
  - make V=1
  - make V=1 check VERBOSE=t
  - make V=1 dist
  after_script:
  - mkdir -pv out/$CI_JOB_NAME_SLUG/src
  - mv -v guile-gnutls/*-src.tar.* out/$CI_JOB_NAME_SLUG/src/
  - mv -v guile-gnutls/*.tar.* out/$CI_JOB_NAME_SLUG/
  artifacts:
    paths:
    - out/**

This installs some packages, clones guile-gnutls (it could be any project, this is just an example), builds it, and returns tarball artifacts. The artifacts are the git-archive and make dist tarballs.

Let’s instantiate the skeleton into four jobs, running the Trisquel 11/12 jobs on amd64 and the Ubuntu 22.04/24.04 jobs on arm64 for fun.

guile-gnutls-trisquel11-amd64:
  tags: [ saas-linux-medium-amd64 ]
  image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel11-guix
  extends: .guile-gnutls

guile-gnutls-ubuntu22.04-arm64:
  tags: [ saas-linux-medium-arm64 ]
  image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu22.04-guix
  extends: .guile-gnutls

guile-gnutls-trisquel12-amd64:
  tags: [ saas-linux-medium-amd64 ]
  image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel12-guix
  extends: .guile-gnutls

guile-gnutls-ubuntu24.04-arm64:
  tags: [ saas-linux-medium-arm64 ]
  image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu24.04-guix
  extends: .guile-gnutls

Running this pipeline will result in artifacts that you want to confirm for reproducibility. Let’s add a pipeline job to do the comparison:

guile-gnutls-compare:
  image: alpine:latest
  needs: [ guile-gnutls-trisquel11-amd64,
           guile-gnutls-trisquel12-amd64,
           guile-gnutls-ubuntu22.04-arm64,
           guile-gnutls-ubuntu24.04-arm64 ]
  script:
  - cd out
  - sha256sum */*.tar.* */*/*.tar.* | sort | grep    -- -src.tar.
  - sha256sum */*.tar.* */*/*.tar.* | sort | grep -v -- -src.tar.
  - sha256sum */*.tar.* */*/*.tar.* | sort | uniq -c -w64 | sort -rn
  - sha256sum */*.tar.* */*/*.tar.* | grep    -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
  - sha256sum */*.tar.* */*/*.tar.* | grep -v -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
# Confirm modern git-archive tarball reproducibility
  - cmp guile-gnutls-trisquel12-amd64/src/*.tar.gz guile-gnutls-ubuntu24-04-arm64/src/*.tar.gz
# Confirm old git-archive (export-subst but long git describe) tarball reproducibility
  - cmp guile-gnutls-trisquel11-amd64/src/*.tar.gz guile-gnutls-ubuntu22-04-arm64/src/*.tar.gz
# Confirm 'make dist' generated tarball reproducibility
  - cmp guile-gnutls-trisquel11-amd64/*.tar.gz guile-gnutls-ubuntu22-04-arm64/*.tar.gz
  - cmp guile-gnutls-trisquel12-amd64/*.tar.gz guile-gnutls-ubuntu24-04-arm64/*.tar.gz
  artifacts:
    when: always
    paths:
    - ./out/**

Look how beautiful, almost like ASCII art! The commands print SHA256 checksums of the artifacts, sorted in a couple of ways, and then proceeds to compare relevant artifacts. What would the output of such a run be, you may wonder? You can look for yourself in the guix-on-dpkg pipeline but here is the gist of it:

$ cd out
$ sha256sum */*.tar.* */*/*.tar.* | sort | grep    -- -src.tar.
79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca  guile-gnutls-trisquel11-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca  guile-gnutls-ubuntu22-04-arm64/src/guile-gnutls-v5.0.1-src.tar.gz
b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f  guile-gnutls-trisquel12-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f  guile-gnutls-ubuntu24-04-arm64/src/guile-gnutls-v5.0.1-src.tar.gz
$ sha256sum */*.tar.* */*/*.tar.* | sort | grep -v -- -src.tar.
1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a  guile-gnutls-trisquel11-amd64/guile-gnutls-5.0.1.tar.gz
1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a  guile-gnutls-ubuntu22-04-arm64/guile-gnutls-5.0.1.tar.gz
bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84  guile-gnutls-trisquel12-amd64/guile-gnutls-5.0.1.tar.gz
bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84  guile-gnutls-ubuntu24-04-arm64/guile-gnutls-5.0.1.tar.gz
$ sha256sum */*.tar.* */*/*.tar.* | sort | uniq -c -w64 | sort -rn
      2 bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84  guile-gnutls-trisquel12-amd64/guile-gnutls-5.0.1.tar.gz
      2 b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f  guile-gnutls-trisquel12-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
      2 79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca  guile-gnutls-trisquel11-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
      2 1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a  guile-gnutls-trisquel11-amd64/guile-gnutls-5.0.1.tar.gz
$ sha256sum */*.tar.* */*/*.tar.* | grep    -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
      2 79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca  guile-gnutls-trisquel11-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
      2 b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f  guile-gnutls-trisquel12-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
$ sha256sum */*.tar.* */*/*.tar.* | grep -v -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
      2 1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a  guile-gnutls-trisquel11-amd64/guile-gnutls-5.0.1.tar.gz
      2 bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84  guile-gnutls-trisquel12-amd64/guile-gnutls-5.0.1.tar.gz
$ cmp guile-gnutls-trisquel12-amd64/src/*.tar.gz guile-gnutls-ubuntu24-04-arm64/src/*.tar.gz
$ cmp guile-gnutls-trisquel11-amd64/src/*.tar.gz guile-gnutls-ubuntu22-04-arm64/src/*.tar.gz
$ cmp guile-gnutls-trisquel11-amd64/*.tar.gz guile-gnutls-ubuntu22-04-arm64/*.tar.gz
$ cmp guile-gnutls-trisquel12-amd64/*.tar.gz guile-gnutls-ubuntu24-04-arm64/*.tar.gz

That’s it for today, but stay tuned for more updates on using Guix in containers, and remember; Happy Hacking!

02 December, 2025 10:01PM by simon


Dirk Eddelbuettel

duckdb-mlpack 0.0.5: Added kmeans, version helpers, documentation

A new release of the still-recent duckdb extension for mlpack, the C++ header-only library for machine learning, was merged into the duckdb community extensions repo today, and has been updated at its duckdb ‘mlpack’ extension page.

This release 0.0.5 adds one new method: kmeans clustering. We also added two version accessors, for mlpack and armadillo. We found during the work on random forests (added in 0.0.4) that the multithreaded random number generation was not quite right in the respective upstream code. This has by now been corrected in armadillo 15.2.2 as well as in the trunk version of mlpack, so if you build with those and set a seed, your forests and classifications will be stable across reruns. We also added a second state variable, mlpack_silent, that can be used to suppress even the minimal prediction-quality summary some methods show, and expanded the documentation.

For more details, see the repo for code, issues and more, and the extension page for more about this duckdb community extension.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

02 December, 2025 05:40PM

Birger Schacht

Status update, November 2025

I started this month with a week of vacation which was followed by a small planned surgery and two weeks of sick leave. Nonetheless, I packaged and uploaded new releases of a couple of packages:

  • swayidle updated to version 1.9.0-1
  • swaylock updated to version 1.8.4-1
  • foot updated to version 1.25.0-1
  • swayimg updated to version 4.6-1
  • scdoc updated to version 1.11.4-1
  • wofi updated to version 1.5.1-1
  • xdg-desktop-portal-wlr updated to version 0.8.0-1

Besides that, I reactivated a project I started in summer 2024: debiverse.org. The idea was to have interfaces to Debian bugs and packages that are usable on mobile devices (I know, ludicrous!). Back then I started with Flask and SQLAlchemy, but that soon got out of hand. I have now switched the whole stack to FastAPI and SQLModel, which makes it a lot easier to manage. An upside is that it comes with an API and OpenAPI docs. For the rendered HTML pages I use Jinja2 with Tailwind as the CSS framework. I am currently using udd-mirror as the database backend, which works pretty well (for this single-user project). It would be nice to have some of the data in a faster index, like Typesense or Meilisearch; that would make faceted search or more performant full-text search possible. But I haven’t found any software packaged in Debian that could provide this.

Screenshot of the debiverse bug report list

Screenshot of the debiverse swagger API

02 December, 2025 05:28AM

December 01, 2025


Guido Günther

Free Software Activities November 2025

Another short status update of what happened on my side last month. Hand-holding the release machinery for Phosh 0.51.0, but there's more:

See below for details on the above and more:

phosh

  • Better auto brightness (MR)
  • Update CI to forky (MR)
  • Test mobile data connection in CI (MR)
  • Add DebugControl interface (MR)
  • Release 0.51~rc1
  • caffeine prefs: Fix resize when adding intervals (MR)
  • Robustify plugin-prefs screenshot tests (MR)
  • Another build systemd dependency fix (MR)
  • Gesture to tune brightness on lock screen (MR)

phoc

  • Update ci to forky (MR)
  • Exit cleanly on SIGTERM (MR)
  • Release 0.51~rc1, 0.51.0
  • Fix segfault triggered in alpine CI (MR)
  • Cancel preedit on submit (avoids resubmitted text in e.g. chatty or flare) (MR)

phosh-mobile-settings

  • Test suite robustness (MR)
  • Update CI (MR)
  • Release 0.51~rc1

stevia

xdg-desktop-portal-phosh

  • Release 0.51~rc1, 0.50.0
  • Unbreak nightly builds (MR)
  • Unbreak 32bit builds (MR)
  • Drop C file chooser impl (MR)

pfs

  • pfs-open: Allow to open arbitrary directories and start fixing clippy warnings (MR)
  • More clippy cleanups (MR)
  • Allow to ship schema (MR)
  • Run a smoke test in ci (MR)
  • Implement org.freedesktop.FileManager1 in the demo (MR, MR, MR)
  • dir-view: Don't thumbnail when disabled (MR)

Phrog

  • Fix osk dependencies (MR)

gmobile

  • run-phosh: Allow to run headless (MR)
  • Release 0.5.0 (MR)
  • display-panel: Allow to take screenshots (MR)
  • Add hwdb and udev rules for torch min brightness (MR)

feedbackd

feedbackd-device-themes

libcall-ui

  • Ignore callaudiod deprecations as we otherwise break compilation of downstreams (MR)
  • Same for 0.1.x branch (MR)
  • Release (0.1.5)

wireplumber

  • doc: Fix make run invocation (MR)

Chatty

mobile-broadband-provider-info

Debian

  • stevia: Upload 0.51~rc1, 0.51.0
  • phrog: Use stevia instead of osk-stub (MR)
  • meta-phosh: Modernize dependencies (MR)
  • phosh: Drop osk-stub (MR)
  • phosh: Upload 0.51~rc1
  • phoc: Upload 0.51~rc1
  • p-m-s: Upload 0.51~rc1
  • feedbackd-device-themes: Upload 0.8.7
  • m-b-p-i: Upload 20251101
  • debcargo-conf: Backport ashpd patch (MR)
  • xdg-desktop-portal-phosh: Get it into unstable (MR, MR)

Mobian

  • librem5: Drop exponential brightness (MR)

wlroots

  • input-method-unstable-v2: Fix two protocol issues (MR)

libqrtr-glib

  • Fix transfer annotation to unbreak usage from Python (MR)
  • Move doc build to gi-docgen (MR)

libadwaits-rs

  • Allow None for parent in adw_dialog_choose (MR)

phosh-site

  • Lint tools (MR)
  • Add slideshow to landing page (MR)
  • Add more videos (MR)
  • Fix typos and links (MR)
  • Update nightly details (MR)

bengalos-debs

  • Fix phrog build (MR, MR)
  • Enable arm64 builds (MR)

gtk

  • Drop unused defines (MR)

Reviews

This is not code by me but reviews on other peoples code. The list is (as usual) slightly incomplete. Thanks for the contributions!

  • pfs: Create folder support (MR)
  • portal: Create thumbnails via thumbnailer service (MR)
  • phosh: caffeine plugin prefs (MR)
  • phosh: lower torch brightness (MR)
  • phosh: wi-fi hotspot QR code (MR)
  • phosh/caffeine: Close status page when selecting an interval (MR)
  • phosh/caffeine: Use empty state (MR)
  • bengalos-recpipes: prep supporting multiple disk layouts (MR)
  • xdg-p-p: Longer test timeout (MR)
  • p-m-s: Volume slider for media roles (MR)

Help Development

If you want to support my work see donations.

Comments?

Join the Fediverse thread

01 December, 2025 06:52PM

Russ Allbery

Review: Forever and a Day

Review: Forever and a Day, by Haley Cass

Series: Those Who Wait #1.5
Publisher: Haley Cass
Copyright: 2020
ISBN: 979-8-5902-5966-3
Format: Kindle
Pages: 101

Forever and a Day is a coda to Haley Cass's self-published sapphic romance novel Those Who Wait. There is no point in reading it unless you have already read and enjoyed the full book and wanted more of a denouement.

Given that Those Who Wait is a romance novel, it is definitionally not a spoiler to reveal that Sutton and Charlotte ended up together. This novella is seven scenes sketching out the next few years of their lives, interspersed with press clippings and social media commentary. These tie up loose ends, give the characters a bit more time together, throw in one more conflict and resolution, add one more sex scene, and stick a few exclamation points after the happily ever after.

I am the sort of person who likes long denouements in stories, so I'm the target audience for this sort of sequel that's essentially additional chapters to the book. (The funniest version of this I've read is Jacqueline Carey's Saints Astray.) They are usually not great literature, since there are good reasons for not including these chapters in the book. That is exactly what this is: a few more chapters of the characters being happy, entirely forgettable, and of interest only to people who want that.

Cass does try to introduce a bit of a plot via some light family conflict, which was sweet and mostly worked, and some conflict over having children, which was very stereotyped and which I did not enjoy as much. I thought the earlier chapters of this novella were the stronger ones, although I do have to give the characters credit in the later chapters for working through conflict in a mature and fairly reasonable way. It does help, though, when the conflict is entirely resolved by one character being right and the other character being happily wrong. That's character conflict on easy mode.

I was happy to see that Sutton got a career, although as in the novel I wish Cass had put some more effort into describing Sutton's efforts in building that career. The details are maddeningly vague, which admittedly matches the maddeningly vague description of Charlotte's politics but which left me unsatisfied.

Charlotte's political career continues to be pure wish fulfillment in the most utterly superficial and trivialized way, and it bothered me even more in the novella than it did in the novel. We still have absolutely no idea what she stands for, what she wants to accomplish, and why anyone would vote for her, and yet we get endless soft-focus paeans to how wonderful she will be for the country. Her opponents are similarly vague to the point that the stereotypes Cass uses to signal their inferiority to Charlotte are a little suspect.

I'm more critical of this in 2025 than I would have been in 2015 because the last ten years have made clear the amount of damage an absolute refusal to stand for anything except hazy bromides causes, and I probably shouldn't be this annoyed that Cass chose to vaguely gesture towards progressive liberalism without muddying her romance denouement with a concrete political debate. But, just, gah. I found the last chapter intensely annoying, in part because the narrative of that chapter was too cliched and trite to sufficiently distract me from the bad taste of the cotton-candy politics.

Other than that, this was minor, sweet, and forgettable. If you want another few chapters of an already long novel, this delivers exactly what you would expect. If the novel was plenty, nothing about this novella is going to change your mind and you can safely skip it. I really liked the scene between Charlotte and Sutton's mom, though, and I'm glad I read the novella just for that.

Rating: 6 out of 10

01 December, 2025 04:20AM

hackergotchi for Junichi Uekawa

Junichi Uekawa

It's already December.

It's already December. I haven't figured out why suspend resume no longer works on my workstation.

01 December, 2025 04:06AM by Junichi Uekawa

November 30, 2025

Russ Allbery

Review: The Last Soul Among Wolves

Review: The Last Soul Among Wolves, by Melissa Caruso

Series: The Echo Archives #2
Publisher: Orbit
Copyright: August 2025
ISBN: 0-316-30404-2
Format: Kindle
Pages: 355

The Last Soul Among Wolves is urban high fantasy with strong mystery vibes. It is a direct sequel to The Last Hour Between Worlds. You need the previous book for some character setup (and this book would spoil it badly), but you don't have to remember the first book in detail. Only the main plot outcomes are directly relevant and the characters will remind you of those.

Kembrel Thorne is a Hound, the equivalent of a police detective in the medieval-inspired city setting of this series, but this book does not open with an official assignment. Instead, she has been dragged by her childhood friend Jaycel Morningrey as company for a reading of the will of old lady Lovegrace, reclusive owner of a gothic mansion on an island connected to the city by an intermittent sandbar. A surprise reunion with her gang of childhood friends ensues, followed by the revelation that they are all in serious trouble.

Shortly after Kem left the group to become a Hound, the remaining four, plus several other apparently random people, got entangled with a powerful Echo artifact. Now that Lovegrace has died, one of them will inherit the artifact and the ability to make a wish, but only one. The rest will be killed at decreasing intervals until only the winner is left alive.

The Last Hour Between Worlds was fae fantasy built around a problem that was more of a puzzle than a mystery. The Last Soul Among Wolves is closer to a classic mystery: A cast of characters are brought together and semi-isolated in a rural house, they start dying, and it's up to the detective to solve the mystery of their death before it's too late. In this case, the initial mechanism of death is supernatural and not in doubt — the challenge instead is how to stop it from happening again — but Kem's problems quickly become more complicated.

As mystery plots go, this is more thriller than classical despite the setting. There are a few scenes of analyzing clues, but Kem is more likely to use the time-honored protagonist technique of throwing herself into danger and learning what's going on via the villain monologues. As readers of the previous book would expect, Rika Nonesuch is here too, hired by another of Kem's old friends, and the two navigate their personal feelings and the rivalry between their guilds in much the way that they did in the Last Hour Between Worlds. As in the first book, there is a sapphic romance subplot, but it's a very slow burn asexual romance.

The best part of this series continues to be the world-building. The previous book introduced the idea of the Echoes and sent the characters exploring into stranger and stranger depths. This book fleshes out the rules in more detail, creating something that feels partly like a fae realm and partly like high fantasy involving gods, but diverges from both into a logic of its own. The ending satisfyingly passes my test of fantasy mysteries: Resolving the mystery requires understanding and applying the rules of the setting, which are sufficiently strange to create interesting outcomes but coherent enough that the reader doesn't feel like the author is cheating.

There are some hissable villains here, but my favorite part of this book was the way Caruso added a lot of nuance and poignancy to the Echoes rather than showing them only as an uncanny threat. That choice made the world feel deeper and richer. It's not yet clear whether that element is setup for a longer-term series plot, but I hope Caruso will develop the story in that direction.

It felt to me like Caruso is aiming for an ongoing series rather than a multi-volume story with a definite ending. She avoids a full episodic reset — Rika, in particular, gets considerable character development and new complications that bode well for future volumes — but it doesn't feel like the series is building towards an imminent climax. This is not a complaint. I enjoy these characters and this world and will happily keep devouring each new series entry.

If you liked The Last Hour Between Worlds, I think you will like this. It doesn't have the same delight of initial discovery of the great world-building, but the plot is satisfying and a bit more complex and the supporting characters are even better than those in the first book. Once again, Caruso kept me turning the pages, and I'm now looking forward to a third volume. Recommended.

The third book in the series has not yet been announced, but there are indications on social media that it is coming.

Rating: 7 out of 10

30 November, 2025 04:03AM

November 28, 2025

hackergotchi for Clint Adams

Clint Adams

monkeying around bitrot

One of the servers to which I SSH ratcheted up its public key requirements and thus the Monkeysphere key I've been using for 15 years stopped working.

Unfortunately, monkeysphere gen-subkey hardcodes RSA keys and if I'm going to be forced to use a new subkey I want mine to be of the 25519 variety. Therefore, to add a subkey by hand:

gpg --expert --edit-key $KEYID

Follow roughly what's in /usr/share/monkeysphere/m/gen_subkey, but change the key type to 11 (ECC (set your own capabilities)), don't bother with Encrypt capability, and pick Curve25519.
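For concreteness, here is a rough sketch of that interactive session. The prompts are abbreviated and vary between GnuPG versions, and it assumes you want an Authenticate-only subkey, which is what SSH needs:

gpg> addkey
Please select what kind of key you want:
Your selection? 11
Possible actions for this ECC key: Sign Authenticate
Your selection? S     (toggle the default Sign capability off)
Your selection? A     (toggle the Authenticate capability on)
Your selection? Q     (finished setting capabilities)
Please select which elliptic curve you want:
Your selection? 1     (Curve 25519)
Key is valid for? (0) 0
gpg> save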

monkeysphere subkey-to-ssh-agent and agent-transfer will be all happy with the "ed25519" subkey without any code modifications, and you won't need to rewrite monkeysphere from scratch to use Sequoia for the next 15 years.
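Loading the new subkey into a running agent then looks as usual; a quick check that it worked (the fingerprint below is a placeholder):

monkeysphere subkey-to-ssh-agent
ssh-add -l
256 SHA256:... (ED25519)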


28 November, 2025 08:56PM

Simon Josefsson

Container Images for Debian with Guix

The debian-with-guix-container project builds and publishes container images of Debian GNU/Linux stable with GNU Guix installed.

The images are like normal Debian stable containers but have the guix tool and a reasonably fresh guix pull.

Supported architectures include amd64 and arm64. The multi-arch container is called:

registry.gitlab.com/debdistutils/guix/debian-with-guix-container:stable

It may also be accessed via debian-with-guix at Docker Hub as:

docker.io/jas4711/debian-with-guix:stable

The container images may be used like this:

$ podman run --privileged -it --hostname guix --rm registry.gitlab.com/debdistutils/guix/debian-with-guix-container:stable
root@guix:/# hello
bash: hello: command not found
root@guix:/# guix describe
  guix c9eb69d
    repository URL: https://gitlab.com/debdistutils/guix/mirror.git
    branch: master
    commit: c9eb69ddbf05e77300b59f49f4bb5aa50cae0892
root@guix:/# LC_ALL=C.UTF-8 /root/.config/guix/current/bin/guix-daemon --build-users-group=guixbuild &
[1] 21
root@guix:/# GUIX_PROFILE=/root/.config/guix/current; . "$GUIX_PROFILE/etc/profile"
root@guix:/# guix describe
Generation 2    Nov 28 2025 10:14:11    (current)
  guix c9eb69d
    repository URL: https://gitlab.com/debdistutils/guix/mirror.git
    branch: master
    commit: c9eb69ddbf05e77300b59f49f4bb5aa50cae0892
root@guix:/# guix install --verbosity=0 hello
accepted connection from pid 55, user root
The following package will be installed:
   hello 2.12.2

hint: Consider setting the necessary environment variables by running:

     GUIX_PROFILE="/root/.guix-profile"
     . "$GUIX_PROFILE/etc/profile"

Alternately, see `guix package --search-paths -p "/root/.guix-profile"'.

root@guix:/# GUIX_PROFILE="/root/.guix-profile"
root@guix:/# . "$GUIX_PROFILE/etc/profile"
root@guix:/# hello
Hello, world!
root@guix:/# 

Below is an example GitLab pipeline job that demonstrates how to run guix install to install additional dependencies, and then download and build a package that picks up the installed package from the system.

test-wget-configure-make-libksba-amd64:
  image: registry.gitlab.com/debdistutils/guix/debian-with-guix-container:stable
  before_script:
  - env LC_ALL=C.UTF-8 /root/.config/guix/current/bin/guix-daemon --build-users-group=guixbuild $GUIX_DAEMON_ARG &
  - GUIX_PROFILE=/root/.config/guix/current; . "$GUIX_PROFILE/etc/profile"
  - guix describe
  - guix install libgpg-error
  - GUIX_PROFILE="/root/.guix-profile"; . "$GUIX_PROFILE/etc/profile"
  - apt-get install --update -y --no-install-recommends build-essential wget ca-certificates bzip2
  script:
  - wget https://www.gnupg.org/ftp/gcrypt/libksba/libksba-1.6.7.tar.bz2
  - tar xfa libksba-1.6.7.tar.bz2
  - cd libksba-1.6.7
  - ./configure
  - make V=1
  - make check VERBOSE=t V=1

The images were initially created for use in GitLab CI/CD Pipelines but should work for any use.

The images are built in a GitLab CI/CD pipeline, see .gitlab-ci.yml.

The containers are derived from official Debian stable images with Guix installed and a successful run of guix pull. They are built using buildah, invoked from build.sh with image/Containerfile, which runs image/setup.sh.

The pipeline also pushes images to the GitLab container registry, and then to Docker Hub.

Guix binaries are downloaded from the Guix binary tarballs project because of upstream download site availability and bandwidth concerns.

Enjoy these images! Hopefully they can help you overcome the loss of Guix in Debian, which previously was a mere apt-get install guix away.

There are several things that may be improved further. An alternative to using podman --privileged is to use --security-opt seccomp=unconfined --cap-add=CAP_SYS_ADMIN,CAP_NET_ADMIN, which may be slightly more fine-grained.
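Concretely, that alternative would make the earlier example look something like this (an untested sketch, otherwise identical to the --privileged invocation above):

$ podman run --security-opt seccomp=unconfined --cap-add=CAP_SYS_ADMIN,CAP_NET_ADMIN \
    -it --hostname guix --rm registry.gitlab.com/debdistutils/guix/debian-with-guix-container:stable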

For ppc64el support I ran into an error message that I wasn’t able to resolve:

guix pull: error: while setting up the build environment: cannot set host name: Operation not permitted

For riscv64, I can’t even find a Guix riscv64 binary tarball for download; is there one anywhere?

For arm64 containers, it seems that you need to start guix-daemon with --disable-chroot to get something to work, at least on GitLab.com’s shared runners; otherwise you will get this error message:

guix install: error: clone: Invalid argument

Building the images themselves also requires disabling some security functionality; I was not able to build images with buildah without providing --cap-add=CAP_SYS_ADMIN,CAP_NET_ADMIN, otherwise there were errors like this:

guix pull: error: cloning builder process: Operation not permitted
guix pull: error: clone: Operation not permitted
guix pull: error: while setting up the build environment: cannot set loopback interface flags: Operation not permitted

Finally, on amd64 it seems --security-opt seccomp=unconfined is necessary; otherwise there is an error message like this, even if you use --disable-chroot:

guix pull: error: while setting up the child process: in phase setPersonality: cannot set personality: Function not implemented

This particular error is discussed upstream, but more generally I think these errors suggest that guix-daemon could make its use of kernel features more optional: if some particular feature is not available, it could gracefully fall back to another mode of operation instead of exiting with an error. Of course, it should never fall back to an insecure mode of operation unless the user requests that.

Happy Hacking!

28 November, 2025 04:32PM by simon

Russell Coker

10gbit and 40gbit Home Networking

Aliexpress has a 4 port 2.5gbit switch with 2*SFP+ sockets for $34.35 delivered [1]. 4 ports isn’t very good for the more common use cases (if daisy chaining them then only 2 remain available for devices) so this is really a device for use with a 10Gbit uplink.

Aliexpress has a pair of SFP+ 10Gbit devices with 1M of copper between them for $15.79 delivered [2]. That page also offers a pair of QSFP+ 40Gbit devices with 1M of copper between them for $27.79 delivered.

They have a dual port SFP+ card for a server with two of the pairs of SFP+ 10gbit devices with copper between them for $32.51 delivered [3].

So you can get a 2.5gbit switch with two 10gbit uplink cables to nearby servers for $66.86 including postage. I don’t need this but it is tempting. I spent $93.78 to get 2.5gbit networking [4] so spending $66.86 to get part of my network to 10gbit isn’t much.

It is $99.81 including postage for a Mellanox 2*40Gbit QSFP+ card and two QSFP+ adaptors with 3M of copper between them [5]. It is $55.81 including postage for the Mellanox card without the cable. So that’s $155.62 for a point to point 40gbit link between systems that are less than 3M apart, which is affordable for a home lab. As an aside, the only NVMe I’ve tested which can deliver such speeds was in a Thinkpad, and that Thinkpad entered a thermal throttling state after a few seconds of sustained transfer.

The best price I could see for a 40Gbit switch is $1280 for a L3 Managed switch with 2*40G QSFP+ slot ports, 4*10G SFP+ ports, and 48*2.5G RJ45 ports [6]. That’s quite affordable for the SME market but a bit expensive for home users (although I’m sure that someone on r/homelab has one).

I’m not going to get 40Gbit; that’s well above what I need, and while a point to point link is quite affordable I don’t have servers in that range. But I am seriously considering 10Gbit; I get paid to do enough networking stuff that having some hands-on experience with 10Gbit could be useful.
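If I do set up a 10gbit link then verifying that it actually delivers the expected throughput is easy with iperf3; a minimal sketch, with a placeholder hostname:

# on the machine acting as server
iperf3 -s
# on the client, with four parallel streams
iperf3 -c server.example.com -P 4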

For a laptop, a 5gbit ethernet USB device is $29.48 including delivery, which isn’t too expensive [7]. The faster ones all seem to be Thunderbolt and well over $100, which is disappointing as USB 3.2 can do up to 20Gbit. If I start doing 10gbit over ethernet I’ll get one of those USB devices for testing.

For a single server it’s cheaper and easier to get a 4 port 2.5Gbit ethernet card for $55.61 [8].

28 November, 2025 08:13AM by etbe

November 27, 2025

Russ Allbery

Review: A Matter of Execution

Review: A Matter of Execution, by Nicholas & Olivia Atwater

Series: Tales of the Iron Rose #0
Publisher: Starwatch Press
Copyright: 2024
ISBN: 1-998257-08-8
Format: Kindle
Pages: 131

A Matter of Execution is the introductory novella that kicked off the Tales of the Iron Rose series. It is steampunk fantasy with airships. I previously read and reviewed the subsequent novel, Echoes of the Imperium.

As noted in that review, I read the novel first. That was a mistake; this is a much better place to start. A Matter of Execution was clearly intended as the introduction of all of these characters. More importantly, I think reading the novella first would have given me enough affinity with the characters to not mind the worst part of Echoes of the Imperium: the extremely slow first half that seemed filled with the protagonist's impostor syndrome.

A Matter of Execution opens, fittingly, with Captain William Blair, a goblin, former Imperial soldier, Oathbreaker, and series first-person protagonist being carted to his execution. He is not alone; in the same prison wagon is an arrogant (and racist) man named Strahl, the killer of one of the rulers of Lyonesse.

Strahl is rather contemptuous of Blair's claim to be a captain, given that he's both a goblin and an Oathbreaker. Strahl quickly revises that opinion when Blair's crew, somewhat predictably given that he is the series protagonist, creates a daring escape for both of them. The heat of action gives both a chance to gain some respect for the other, which explains why Blair is not only willing to invite Strahl to join his crew, but to go back for Strahl's companion.

Breaking out Strahl's companion will be a more difficult, and surprising, problem.

Nicholas Atwater is a role-playing game GM, something that you will learn in the "about the author" section at the end of this novella but probably will have guessed by then. Even more than Echoes of the Imperium, this novella feels like a (good) write-up of an RPG adventure. A wildly varied cast of characters come together and form a party with a well-defined objective that has some surrounding mysteries and surprises. Each of those characters get their individual moments to show off their specific skills. Readers with a certain gaming background will know exactly where to insert the Borderlands-style title card with a slightly demented description of each character.

This is not a complaint. You may be able to see the bones of the setup adventure for a long-running campaign, but I like this style of character introduction and the story moves right along. There are a ton of varied characters, some interesting villains and maybe-villains, a rather satisfying heist setup, and some good chemistry and a bit of banter. This is not a deep story — it's clearly an introductory episode for both the characters and the world background — but it's a fun way to spend a few hours.

I think the best part of this series is the world-building. If you have read my review of Echoes of the Imperium, you have unfortunately been mildly spoiled for the revelation in this novella. I don't think it hurt the story that much; you will be able to predict what obvious gaps in the novel backstory the novella is going to fill in, but it's just as enjoyable to see how that happens. But the Atwaters aren't going to drop any of the big world-building bombs in the introductory novella, of course. Instead, you get a gradual introduction to the nature of magic in this world, some of the political setup of the recent war, and a quick introduction to the capabilities of Strahl's mysterious companion.

If you've not yet read this series, I recommend starting here. It's a quick investment to see if you'll be interested. The novel is heavier and slower, and the pacing of the first half isn't great, but the world-building is even better.

If you've already read the novel, this is still worth reading as long as you enjoyed it. You'll have a few moments of "oh, that's how that happened," and it's a fun and fast-moving way to spend a bit more time with the characters.

Followed by Echoes of the Imperium. The back matter of the novella says that The Winds of Fortune is supposedly forthcoming.

Rating: 7 out of 10

27 November, 2025 05:34AM

Russell Coker

PineTime Band

I’ve had a PineTime for just over 2 years [1]. About a year ago I had a band break and replaced it from a spare PineTime, and now I just had another break. Having the band only last one year isn’t that great, but it’s fortunate that the break only affects the inner layer of plastic so there is no risk of the watch suddenly falling off and being broken or lost. The Pine64 web site has a page about this with bad options, one broken link and a few Amazon items that have ridiculous postage [2].

I started writing this post while using the band from a Colmi P80 [3]. I bought one for a relative who wanted the metal band, and the way the Aliexpress seller does it is to sell the package with the plastic band and include the metal band in the package, so I had a spare band. It fits quite well, and I hit none of the reported problems of the PineTime having insufficient space between the spring bar and the watch. The Colmi band in question is described as “rose gold” but is more like “pinkish beige” and doesn’t match the style of the black PineTime.

I ordered a couple of cheap bands from AliExpress which cost $9.77 and $13.55 including postage while the ones that Pine64 recommend have over $15 postage from Amazon!

The 20mm Silicone Magnetic Buckle Watch Strap Band For Huawei GT2 Smart Watch Connected Bracelet Black Watchband Man [4] cost $13.55 including postage. It has a magnetic unfold mechanism which I find a bit annoying and it doesn’t allow easily changing the length. I don’t think I’ll choose that again. But it basically works and is comfortable.

The 20mm Metal Strap for Huawei Watch GT2 3 Quick Release Stainless Steel Watch Band for Samsung Galaxy Watch Bracelet [5] cost $9.77 including postage. I found this unreasonably difficult to put on and not particularly comfortable. But opinion will vary on that, it is cheap and will appeal to some people’s style.

Conclusion

There are claims that getting a replacement band for a PineTime is difficult. My experience is that every band with a 20mm attachment works as long as it’s designed for a square watch; some bands are designed to partly wrap around a round face and wouldn’t fit. I expect that some bands won’t fit, but I don’t think it’s enough of a problem to be worried about when buying a random band from AliExpress. The incidence of bands not fitting will probably be lower than the incidence of other AliExpress products not doing quite what you want (while meeting the legal criteria of doing what they are claimed to do) and ending up unused.

I’m now wearing the PineTime with the “Magnetic Buckle Watch Strap Band” and plan to wear it for the next year or so.

27 November, 2025 12:37AM by etbe

November 26, 2025

hackergotchi for Bits from Debian

Bits from Debian

New Debian Developers and Maintainers (September and October 2025)

The following contributors got their Debian Developer accounts in the last two months:

  • Evangelos Ribeiro Tzaras (devrts)
  • Andrea Bolognani (abologna)

The following contributors were added as Debian Maintainers in the last two months:

  • Rylie Pavlik
  • Yuchin Tsai
  • Daniel Markstedt
  • Guido Berhörster
  • Renzo Davoli

Congratulations!

26 November, 2025 04:00PM by Jean-Pierre Giraud

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

tidyCpp 0.0.8 on CRAN: Maintenance

Another maintenance release of the tidyCpp package arrived on CRAN this morning, the first in about two years. The package offers a clean C++ layer (as well as one small C++ helper class) on top of the C API for R which aims to make use of this robust (if awkward) C API a little easier and more consistent. See the (now updated, see below) vignette for motivating examples.

This release contains mostly internal upkeep of the usual type: refreshing continuous integration, updating links, switching to Authors@R. But as we wrap the C API of R here too, changes made in R-devel this week affected the two reverse-dependency (i.e. “downstream”) packages (of mine) using this. So we commented out the definitions for the five now-hidden accessors so that these downstream packages can build again under R-devel.

The NEWS entry follows.

Changes in tidyCpp version 0.0.8 (2025-11-25)

  • Updated continuous integration setup several times

  • Updated README.md documentation with link to R API site

  • Updated example snippets to use of Protect

  • Updated documentation in defines.h header

  • Updated internals.h header reflecting changes in the R API

As it happens, hours after the release at CRAN a helpful issue ticket was opened detailing more than a handful of typos in the vignette. This has been corrected, and I am now exporting the vignette via GitHub Pages so the motivating examples vignette contains the corrections.

Thanks to my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo. If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

26 November, 2025 02:57PM

November 25, 2025

Russell Coker

EDID and my 8K TV

I previously blogged about buying a refurbished Hisense 65u80g 8K TV with the aim of making it a large monitor [1] and about searching for a suitable video card for 8k [2]. After writing the second post I bought an Intel Arc B580 which also did a maximum of 4096*2160 resolution.

This post covers many attempts to get the TV to work correctly, and it doesn’t have good answers. The best answer might be to not buy Hisense devices, but I still lack data.

Attempts to Force 8K

I posted on Lemmy again about this [3] and got a single response, which is OK as it was a good response. They didn’t give me the answer on a silver platter but pointed me in the right direction of EDID [4].

I installed the Debian packages read-edid, wxedid, and edid-decode.

The command “get-edid > out.edid” saves the binary form of the EDID to a file. The command “wxedid out.edid” allows graphical analysis of the EDID data. The command “edid-decode out.edid” dumps a plain text representation of the output, and the command “edid-decode out.edid|grep VIC|cut -d: -f2|sort -n” shows an ordered list of video modes; in my case the highest resolution is 4096×2160, which is the highest that Linux had allowed me to set with two different video cards and a selection of different cables (both HDMI and DisplayPort).

xrandr --newmode 7680x4320 1042.63  7680 7984 7760 7824  4320 4353 4323 4328
xrandr --addmode HDMI-3 7680x4320
xrandr --output HDMI-3 --mode 7680x4320

I ran the above commands and got the below error:

xrandr: Configure crtc 0 failed

At this time I don’t know how much of this is due to the video card and how much is due to the TV. The parameters for xrandr came from an LLM because I couldn’t find any Google results on what 8K parameters to use. As an aside, if you have a working 8K TV or monitor connected to a computer, please publish the EDID data, xrandr output, and everything else you can think of.
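For what it’s worth, a more principled way to generate candidate mode parameters than asking an LLM is the cvt utility, which computes a VESA CVT modeline for a given resolution and refresh rate. The below is only a sketch of the workflow; the timing values have to be copied from cvt’s actual output:

cvt 7680 4320 30
# prints: Modeline "7680x4320_30.00" <pixel clock> <horizontal timings> <vertical timings> ...
xrandr --newmode "7680x4320_30.00" <values copied from the cvt output>
xrandr --addmode HDMI-3 7680x4320_30.00
xrandr --output HDMI-3 --mode 7680x4320_30.00

Whether the TV then accepts such a mode is a separate question, but at least the timings will be internally consistent.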

I found a Github repository for EDID data [5] but that didn’t have an entry for my TV and didn’t appear to have any other entry for an 8K device I could use.

Resolution for Web Browsing

I installed a browser on the TV; Chrome and Firefox aren’t available for a TV, and the Play Store program tells you that (but without providing a reason) when you search for them. I tried the site CodeShack What is my Screen Resolution [6] which said that my laptop is 2460*1353 while the laptop display is actually 2560*1440. So apparently I have 100 pixels used for the KDE panel at the left of the screen and 87 pixels used by the Chrome tabs and URL bar – which seems about right. My Note 9 phone reports 384*661 out of its 2960*1440 display, so it seems that Chrome on my phone is running web sites at 4/15 of the native resolution, and about 16% of the height of the screen is used by the system notification bar, the back/home/tasklist buttons (I choose buttons instead of swipe for navigation in system settings), and the URL bar when I have “Screen zoom” in system settings at 1/4. When I changed “Screen zoom” to 0/4 the claimed resolution changed to 411*717 (2/7 of the native resolution). Font size changes didn’t change the claimed resolution. The claimed “Browser Viewport Size” by CodeShack is 1280*720, which is 1/6 of the real resolution in each dimension; it claims a Pixel Density of 2* and a screen resolution of 970*540, which implies that the browser is only working at roughly 1920*1080 resolution!

Netflix

When I view Netflix shows using the Netflix app running on the TV, it reports “4K”, which doesn’t happen on Linux PCs (as Netflix restricts 4K content to platforms with DRM), and in the “Device” setting it reports “Device Model” as “Hisense_SmartTV 8K FFM”, so the Netflix app knows all about 4K content and knows the text string “8K”.

YouTube

When I view a YouTube video that’s described as being 8K I don’t get a request for paying for YouTube Premium, which is apparently what happens nowadays when you try to play actual 8K video. I turned on “Stats for Nerds” and one line has “Viewport / Frames 1920×1080*2.00” and another has “Current / Optimal Res 3840×2160@60 / 3840×2160@60”, so it seems that the YouTube app is seeing the screen as 4K but choosing to only display FullHD even when I have Quality set to “2160p60 HDR”. It declares the network speed to be over 100mbit most of the time and the lowest it gets is 60mbit, while 50mbit is allegedly what’s required for 8K.

I installed a few Android apps to report hardware capabilities and they reported the screen resolution to be 1920*1080.

Have I Been Ripped Off?

It looks like I might have been ripped off by this. I can’t get any app other than Netflix to display 4K content. My PC will only connect to it at 4K. Android apps (including YouTube) regard it as 1920*1080.

The “AI Upscaling” isn’t really that great and in most ways it seems at best equivalent to a 4K TV and less than a 4K TV that runs Android apps with an actual 4K display buffer.

Next Steps

The next things I plan to do are to continue attempts to get the TV to do what it’s claimed to be capable of; either an Android app that can display 8K content or an HDMI input of 8K content will do. Running a VNC client on the TV would be an acceptable way of getting an 8K display from a Linux PC.
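As a sketch of the VNC idea, assuming a usable Android VNC client exists for the TV (which I haven’t verified): run a headless 8K X server on the PC and export it with x11vnc:

Xvfb :1 -screen 0 7680x4320x24 &
x11vnc -display :1 -forever

Whether a VNC client on the TV would render that at native resolution rather than scaling it down is exactly the kind of thing that needs testing.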

I need to get a somewhat portable device that can give 8K signal output. Maybe a mini PC with a powerful GPU or maybe one of those ARM boards that’s designed to drive an 8K sign. Then I can hunt for stores that have 8K TVs on display.

It would be nice if someone made a USB device that does 8K video output – NOT a USB-C DisplayPort alternative mode that uses the video hardware on the laptop. Then I could take a laptop to any place that has an 8K display to show and connect my laptop to it.

The one thing I haven’t done yet is testing 8K MP4 files on a USB stick. That’s mainly due to a lack of content and the fact that none of the phone cameras I have access to can do 8K video. I will try displaying 8K PNG and JPEG files from a USB stick.
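The lack of content is at least partly solvable by generating synthetic 8K test material, for example with ImageMagick and ffmpeg (a sketch; the filenames are arbitrary, and the TV may require HEVC rather than H.264 for 8K playback):

convert -size 7680x4320 gradient:blue-red 8k-test.png
ffmpeg -f lavfi -i testsrc2=size=7680x4320:rate=30 -t 10 -pix_fmt yuv420p -c:v libx265 8k-test.mp4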

Most people would give up about now. But I am determined to solve this and buying another large TV isn’t out of the question.

25 November, 2025 07:09AM by etbe

November 24, 2025

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppQuantuccia 0.1.3 on CRAN: Micro Maintenance

A minor release of RcppQuantuccia arrived on CRAN moments ago. RcppQuantuccia started from the Quantuccia header-only subset / variant of QuantLib, which it brings to R. This project validated the idea of making the calendaring functionality of QuantLib available in a more compact and standalone project – which we now do with qlcal which can be seen as a successor package to this earlier package. qlcal tracks QuantLib (releases) closely and provides approximately quarterly updates. Switching to using qlcal is generally recommended.

This release, the first in almost exactly two years, only updates internals (as detailed below). Notably it switches to ‘Authors@R’ to avoid a nag from CRAN on two platforms. The complete list changes for this release follows.

Changes in version 0.1.3 (2025-11-24)

  • A badge URL and link have been updated in README.md

  • The continuous integration script switched first to r-ci-setup and then to the r-ci action with embedded setup

  • The DESCRIPTION file now uses Authors@R

Courtesy of CRANberries, there is also a diffstat report relative to the previous release. More information is on the RcppQuantuccia page. Issues and bugreports should go to the GitHub issue tracker.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

24 November, 2025 06:22PM

November 22, 2025

RcppArmadillo 15.2.2-1 on CRAN: Upstream Update, OpenMP Updates

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1286 other packages on CRAN, downloaded 42.6 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 659 times according to Google Scholar.

This version updates to the 15.2.2 upstream Armadillo release made two days ago. It brings a few changes over the RcppArmadillo 15.2.0 release made only to GitHub (and described in this post), and of course even more changes relative to the last CRAN release described in this earlier post. As described previously, and due to the upstream transition to C++14 coupled with the CRAN move away from C++11, the package offers a transition by allowing packages to remain with the older, pre-15.0.0 ‘legacy’ Armadillo yet offering the current version as the default. During the transition we did not make any releases to CRAN, allowing both the upload cadence to settle back to the desired ‘about six in six months’ that the CRAN Policy asks for, and packages to adjust to any potential changes. Most affected packages have done so (as can be seen in the GitHub issues #489 and #491), which is good to see. We appreciate all the work done by the respective package maintainers. A number of packages are still under a (now formally expired) deadline at CRAN and may get removed. Our offer to help where we can still stands, so please get in touch if we can be of assistance. As a reminder, the meta-issue #475 regroups all the resources for the transition.

With respect to changes in the package, we once more overhauled the OpenMP detection and setup, following the approach taken by the data.table package but sticking with an autoconf-based configure. The detailed changes since the last CRAN release follow.

Changes in RcppArmadillo version 15.2.2-1 (2025-11-21)

  • Upgraded to Armadillo release 15.2.2 (Medium Roast Deluxe)

    • Improved reproducibility of random number generation when using OpenMP
  • Skip a unit test file under macOS as complex algebra seems to fail under newer macOS LAPACK setting

  • Further OpenMP detection rework for macOS (Dirk in #497, #499)

  • Define ARMA_CRIPPLED_LAPACK on Windows only if 'LEGACY' Armadillo selected

Changes in RcppArmadillo version 15.2.1-0 (2025-10-28) (GitHub Only)

  • Upgraded to Armadillo release 15.2.1 (Medium Roast Deluxe)

    • Faster handling of submatrices with one row
  • Improve OpenMP detection (Dirk in #495 fixing #493)

Changes in RcppArmadillo version 15.2.0-0 (2025-10-20) (GitHub Only)

  • Upgraded to Armadillo release 15.2.0 (Medium Roast Deluxe)

    • Added rande() for generating matrices with elements from exponential distributions

    • shift() has been deprecated in favour of circshift(), for consistency with Matlab/Octave

    • Reworked detection of aliasing, leading to more efficient compiled code

  • OpenMP detection in configure has been simplified

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

22 November, 2025 03:44PM

November 21, 2025

hackergotchi for Daniel Kahn Gillmor

Daniel Kahn Gillmor

Transferring Signal on Android

Transferring a Signal account between two Android devices

I spent far too much time recently trying to get a Signal Private Messenger account to transfer from one device to another.

What I eventually found worked was a very finicky path to enable functioning "Wi-Fi Direct", which I go into below.

I also offer some troubleshooting and recovery-from-failure guidance.

All of this blogpost uses "original device" to refer to the Android pocket supercomputer that already has Signal installed and set up, and "new device" to mean the Android device that doesn't yet have Signal on it.

Why Transfer?

Signal Private Messenger is designed with the expectation that the user has a "primary device", which is either an iPhone or an Android pocket supercomputer.

If you have an existing Signal account, and try to change your primary device by backing up and restoring from backup, it looks to me like Signal will cause your long-term identity keys to be changed. This in turn causes your peers to see a message like "Your safety number with Alice has changed."

These warning messages are the same messages that they would get if an adversary were to take over your account. So it's a good idea to minimize them when there isn't an account takeover — false alarms train people to ignore real alarms.

You can avoid "safety number changed" warnings by using Signal's "account transfer" process during setup, at least if you're transferring between two Android devices.

However, my experience was that the transfer between two Android devices was very difficult to get to happen at all. I ran into many errors trying to do this, until I finally found a path that worked.

Dealing with Failure

After each failed attempt at a transfer, my original device's Signal installation would need to be re-registered. Having set a PIN meant that i could re-register the device without needing to receive a text message or phone call.

Set a PIN before you transfer!

Also, after a failure, you need to re-link any "linked device" (i.e. any Signal Desktop or iPad installation). If any message came in during the aborted transfer, the linked device won't get a copy of that message.

Finally, after a failed transfer, i recommend completely uninstalling Signal from the new device, and starting over with a fresh install on the new device.

Permissions

My understanding is that Signal on Android uses Wi-Fi Direct to accomplish the transfer. But to use Wi-Fi Direct, Signal needs to have the right permissions.

On each device:

  • Entirely stop the Signal app
  • Go to Settings » Apps » Signal » Permissions
  • Ensure that the following permissions are all enabled whenever the app is running:
  • Location
  • Nearby Devices
  • Network

Preparing for Wi-Fi Direct

The transfer process depends on "Wi-Fi Direct", which is a bit of a disaster on its own.

I found that if i couldn't get Wi-Fi Direct to work between the two devices, then the Signal transfer was guaranteed to fail.

So, for clearer debugging, i first tried to establish a Wi-Fi Direct link on Android, without Signal being involved at all.

Setting up a Wi-Fi Direct connection directly failed, multiple times, until i found the following combination of steps, to be done on each device:

  • Turn off Bluetooth
  • Ensure Wi-Fi is enabled
  • Disconnect from any Wi-Fi network you are connected to (go to the "Internet" or "Wi-Fi" settings page, long-press on the currently connected network, and choose "Disconnect"). If your device knows how to connect to multiple local Wi-Fi networks, disconnect from each of them in turn until you are in a stable state where Wi-Fi is enabled, but no network is connected.
  • Close to the bottom of the "Internet" or "Wi-Fi" settings page, choose "Network Preferences" and then "Wi-Fi Direct"
  • if there are any entries listed under "Remembered groups", tap them and choose to "Forget this group"
  • If there are Peer devices that say "Invited", tap them and choose to "Cancel invitation"

I found that this configuration is the most likely to enable a successful Wi-Fi Direct connection, where clicking "invite" on one device would pop up an alert on the other asking to accept the connection, and result in a "Connected" state between the two devices.

Actually Transferring

Start with both devices fully powered up and physically close to one another (on the same desk should be fine).

On the new device:

  • Reboot the device, and log into the profile you want to use
  • Enable Internet access via Wi-Fi.
  • Remove any old version of Signal.
  • Install Signal, but DO NOT OPEN IT!
  • Set up the permissions for the Signal app as described above
  • Open Signal, and choose "restore or transfer" -- you still need to be connected to the network at this point.
  • The new device should display a QR code.

On the original device:

  • Reboot the device, and log into the profile that has the Signal account you're looking to transfer
  • Enable Internet access via Wi-Fi, using the same network that the new device is using.
  • Make sure the permissions for Signal are set up as described above
  • Open Signal, and tap the camera button
  • Point the camera at the new device's QR code

Now tap the "continue" choices on both devices until they both display a message that they are searching for each other. You might see the location indicator (a green dot) turn on during this process.

If you see an immediate warning of failure on either device, you probably don't have the permissions set up right.

You might see an alert (a "toast") on one of the devices that the other one is trying to connect. You should click OK on that alert.

In my experience, both devices are likely to get stuck "searching" for each other. Wait for both devices to show Signal's warning that the search has timed out.

At this point, leave Signal open on both devices, and go through all the steps described above to prepare for Wi-Fi Direct. Your Internet access will be disabled.

Now, tap "Try again" in Signal on both devices, pressing the buttons within a few seconds of each other. You should see another alert that one device is trying to connect to the other. Press OK there.

At this point, the transfer should start happening! The old device will indicate what percentage has been transferred, and the new device will indicate how many messages have been transferred.

When this is all done, re-connect to Wi-Fi on the new device.

Temporal gap for Linked Devices

Note that during this process, if new messages are arriving, they will be queuing up for you.

When you reconnect to wi-fi, the queued messages will flow to your new device. But the process of transferring automatically unlinks any linked devices. So if you want to keep the gap for your Signal Desktop installation as short as possible, you should re-link that installation promptly after the transfer completes.

Clean-up

After all this is done successfully, you probably want to go into the Permissions settings and turn off the Location and Nearby Devices permissions for Signal on both devices.

I recommend also going into Wi-Fi Direct and removing any connected devices and forgetting any existing connections.

Conclusion

This is an abysmally clunky user experience, and I'm glad I don't have to do it often. It would have been much simpler to make a backup and restore from it, but I didn't want to freak out my contacts with a safety number change.

By contrast, when i wanted to extend a DeltaChat account across two devices, the transfer was prompt and entirely painless -- i just had to make sure the devices were on the same network, and then scanned a QR code from one to the other. And there was no temporal gap for any other devices. And i could use Delta on both devices simultaneously until i was convinced that it would work on the new device -- Delta doesn't have the concept of a primary account.

I wish Signal made it that easy! Until it's that easy, i hope the processes described here are useful to someone.

21 November, 2025 05:00AM by Daniel Kahn Gillmor

November 19, 2025

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

digest 0.6.39 on CRAN: Micro Update

Release 0.6.39 of the digest package arrived at CRAN today and has also been uploaded to Debian.

digest creates hash digests of arbitrary R objects. It can use a number of different hashing algorithms (md5, sha-1, sha-256, sha-512, crc32, xxhash32, xxhash64, murmur32, spookyhash, blake3, crc32c, xxh3_64 and xxh3_128), and enables easy comparison of (potentially large and nested) R language objects as it relies on the native serialization in R. It is a mature and widely-used package (with 86.8 million downloads just on the partial cloud mirrors of CRAN which keep logs) as many tasks may involve caching of objects for which it provides convenient general-purpose hash key generation to quickly identify the various objects.

As noted last week in the 0.6.38 release note, hours after it was admitted to CRAN, I heard from the ever-so-tireless Brian Ripley about an SAN issue on arm64 only (and apparently non-reproducible elsewhere). He kindly provided a fix; it needed a cast. Checking this on amd64 against our Rocker-based ASAN and UBSAN containers (where it remains impossible to replicate; the issue is apparently specific to arm64), another micro-issue (a missing final NULL argument in one .Call()) was detected. Both issues were fixed the same day, and constitute the only changes here. I merely waited a week to avoid a mechanical nag triggered when releases happen within a week.

My CRANberries provides a summary of changes to the previous version. For questions or comments use the issue tracker off the GitHub repo. For documentation (including the changelog) see the documentation site.

If you like this or other open-source work I do, you can now sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

19 November, 2025 11:29PM

hackergotchi for Gunnar Wolf

Gunnar Wolf

While it is cold-ish season in the North hemisphere...

Last week, our university held a «Mega Vaccination Center». Things cannot be small or regular with my university, ever! According to the official information, during last week ≈31,000 people were given a total of ≈74,000 vaccine doses against influenza, COVID-19, pneumococcal disease and measles (specific vaccines for each person selected according to an age profile).

I was a tiny blip in said numbers. One person, three shots. Took me three hours, but am quite happy to have been among the huge crowd.

Long, long line

(↑ photo credit: La Jornada, 2025.11.14)

Really vaccinated!

And why am I bringing this up? Because I have long been involved in organizing DebConf, the best conference ever, naturally devoted to improving Debian GNU/Linux. And last year, our COVID reaction procedures ended up hurting people we care about. We, as organizers, take seriously the task of shaping a humane COVID handling policy that is, at the same time, responsible and respectful of people who are (reasonably!) afraid of catching the infection. No, COVID did not disappear in 2022, and its effects are not something we can turn a blind eye to.

Next year, DebConf will take place in Santa Fe, Argentina, in July. This means it will be a Winter DebConf. And while you can catch COVID (or influenza, or just a bad cold) at any time of year, the odds are a bit higher in winter.

I know not every country still administers free COVID or influenza vaccines to anybody who requests them. And I know that any protection I might have got now will be quite a bit weaker by July. But I feel it necessary to ask everyone who can get a shot to do so. Most Northern Hemisphere countries will have a vaccination campaign (or at least, higher vaccine availability) before Winter.

If you plan to attend DebConf (hell… If you plan to attend any massive gathering of people travelling from all over the world to sit in a crowded auditorium) during the next year, please… Act responsibly. For yourself and for those surrounding you. Get vaccinated. It won’t absolutely save you from catching it, but it will reduce the probability. And if you do catch it, you will probably have a much milder version. And thus, you will spread it less during the first days until (and if!) you start developing symptoms.

19 November, 2025 03:59AM

November 18, 2025

hackergotchi for Daniel Kahn Gillmor

Daniel Kahn Gillmor

App Store Oligopoly

A Call for Public Discussion about App Store Oligopoly

Over on the ACLU's Free Future blog, I just published an article titled Your Smartphone, Their Rules: How App Stores Enable Corporate-Government Censorship.

Free Software users and developers likely already understand the reasons why it matters who controls what tools you have access to. Hopefully this post can help clarify, even to people typically used to common non-free tooling, that there are real world risks to consolidated, proprietary control over computing and communication tools.

Big shout out to the projects out there doing good work in the "pocket supercomputer" space, providing an escape valve for many users and a counter-example to centralized corporate control, including F-Droid, GrapheneOS, and phosh.

The screws are tightening on user freedom, in the very place where most computing is happening today. The smartphone is already far more similar to an ankle monitor than it should be.

Please, publish your own suggestions on creative forms of mutual technical liberation. These are communications tools, so no person can fix the problems alone.

I would love to see a flourishing of non-Android, non-iOS systems in people's pockets, but i also know, with the market the way it is, that is a long haul. Until that happens, we should also try to keep Android open; check out keepandroidopen.org for more suggestions.

18 November, 2025 05:00AM by Daniel Kahn Gillmor

November 17, 2025

Rodrigo Siqueira

XDC 2025

It has been a long time since I published any update in this space. Since this was a year of colossal changes for me, maybe it is also time for me to make something different with this blog and publish something just for a change — why not start talking about XDC 2025?

This year, I attended XDC 2025 in Vienna as an Igalia developer. I was thrilled to see familiar faces, both people I worked with in the past and people I’m working with now. I had a chance to hang out with some folks I worked with at AMD (Harry, Alex, Leo, Christian, Shashank, and Pierre), many Igalians (Žan, Job, Ricardo, Paulo, Tvrtko, and many others), and finally some developers from Valve. In particular, I met Tímur in person for the first time, even though we have been talking for months about GPU recovery. Speaking of GPU recovery, we held a workshop on this topic together.

The workshop was packed with developers from different companies, which was nice because it brought different angles to the discussion. We began by focusing on job resubmission. Christian shared a brief history of how the AMDGPU driver started handling resubmission and the associated issues. After learning from that early experience, amdgpu ended up adopting the following approach:

  1. When a job causes a hang, call the driver-specific handler.
  2. Stop the scheduler.
  3. Copy all jobs from the ring buffer, minus the job that caused the issue, to a temporary ring.
  4. Reset the ring buffer.
  5. Copy back the other jobs to the ring buffer.
  6. Resume the scheduler.

Below, you can see one crucial series associated with the amdgpu recovery implementation:

The next topic was a discussion around the replacement of drm_sched_resubmit_jobs(), since this function is deprecated. Just a few drivers still use it, and they need a replacement. Some ideas were floated about extracting parts of the driver-specific implementations into a generic function. The next day, Philipp Stanner continued this topic in his workshop, DRM GPU Scheduler.

Another crucial topic discussed was improving GPU reset debuggability to narrow down which operations cause the hang (keep in mind that GPU recovery is a medicine, not the cure to the problem). Intel developers shared their strategy for dealing with this by obtaining hints from userspace, which helped them provide a better set of information to append to the devcoredump. AMD could adopt this alongside dumping the IB data into the devcoredump (I am already investigating this).

Finally, we discussed strategies to avoid regressions of hang issues. In summary, we have two lines of defense:

  • IGT: At the IGT level, we can have more tests that insert malicious instructions into the ring buffer, forcing the driver into an invalid state and triggering the recovery process.
  • HangTest suite: a tool that simulates some potential hangs using Vulkan. Some tests are already available in this suite, but we should explore more creative combinations to try to trigger hangs.
Lightning talk

This year, as always, XDC was super cool, packed with many engaging presentations which I highly recommend everyone check out. If you are interested, check the schedule and the presentation recordings available on the X.Org Foundation Youtube page. Anyway, I hope this blog post marks the inauguration of a new era for this site, where I will start posting more content ranging from updates to tutorials. See you soon.

17 November, 2025 12:00AM

November 16, 2025

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Game slowrunning

In 2013, I finished Zelda II: The Adventure of Link (on emulator), which I'd first played the summers of 1992 and 1993 (or thereabouts). At ~20 years between first start and first finish, it's a kind of weird opposite of speedrunning, and a personal best for me.

But this weekend, I trounced that record; in 1990 (I think!), we got a 512 kB RAM expansion for the Amiga 500 for the first time, which allowed us to play our warezed copy of Pool of Radiance without understanding much of the story or really reading that much English. And a couple of weeks ago, I realized that I had bought the game on GOG.com in 2018 and not done much about it… and went to finish it.

Pool of Radiance, fighting Thyranthraxus

Due to poor planning on my part, this ended up being a bit of a challenge run, with no stat modification, only five people in the party, no excessive rerolling (only 2–3 for each), no multiclassing, no glitches, no save-states (after finding out they help very little :-) ), very limited NPCs (only story NPCs plus a couple of hireds immediately killed for items, as opposed to the Amiga runs where we basically had only one PC and the rest top-grade NPCs!) and no Gold Box Companion.

However: Extensive guide use (the Internet is great!), and savescumming. Oh my, so much savescumming.

So that's 35 years from first start to first finish. We'll see when I get to Police Quest I…

16 November, 2025 11:46AM

Russ Allbery

Cumulative haul

I haven't posted a book haul in forever, so lots of stuff stacked up, including a new translation of Bambi that I really should get around to reading.

Nicholas & Olivia Atwater — A Matter of Execution (sff)
Nicholas & Olivia Atwater — Echoes of the Imperium (sff)
Travis Baldree — Brigands & Breadknives (sff)
Elizabeth Bear — The Folded Sky (sff)
Melissa Caruso — The Last Hour Between Worlds (sff)
Melissa Caruso — The Last Soul Among Wolves (sff)
Haley Cass — Forever and a Day (romance)
C.L. Clark — Ambessa: Chosen of the Wolf (sff)
C.L. Clark — Fate's Bane (sff)
C.L. Clark — The Sovereign (sff)
August Clarke — Metal from Heaven (sff)
Erin Elkin — A Little Vice (sff)
Audrey Faye — Alpha (sff)
Emanuele Galletto, et al. — Fabula Ultima: Core Rulebook (rpg)
Emanuele Galletto, et al. — Fabula Ultima: Atlas High Fantasy (rpg)
Emanuele Galletto, et al. — Fabula Ultima: Atlas Techno Fantasy (rpg)
Alix E. Harrow — The Everlasting (sff)
Alix E. Harrow — Starling House (sff)
Antonia Hodgson — The Raven Scholar (sff)
Bel Kaufman — Up the Down Staircase (mainstream)
Guy Gavriel Kay — All the Seas of the World (sff)
N.K. Jemisin & Jamal Campbell — Far Sector (graphic novel)
Mary Robinette Kowal — The Martian Conspiracy (sff)
Matthew Kressel — Space Trucker Jess (sff)
Mark Lawrence — The Book That Held Her Heart (sff)
Yoon Ha Lee — Moonstorm (sff)
Michael Lewis (ed.) — Who Is Government? (non-fiction)
Aidan Moher — Fight, Magic, Items (non-fiction)
Saleha Mohsin — Paper Soldiers (non-fiction)
Ada Palmer — Inventing the Renaissance (non-fiction)
Suzanne Palmer — Driving the Deep (sff)
Suzanne Palmer — The Scavenger Door (sff)
Suzanne Palmer — Ghostdrift (sff)
Terry Pratchett — Where's My Cow (graphic novel)
Felix Salten & Jack Zipes (trans.) — The Original Bambi (classic)
L.M. Sagas — Cascade Failure (sff)
Jenny Schwartz — The House That Walked Between Worlds (sff)
Jenny Schwartz — House in Hiding (sff)
Jenny Schwartz — The House That Fought (sff)
N.D. Stevenson — Scarlet Morning (sff)
Rory Stewart — Politics on the Edge (non-fiction)
Emily Tesh — The Incandescent (sff)
Brian K. Vaughan & Fiona Staples — Saga #1 (graphic novel)
Scott Warren — The Dragon's Banker (sff)
Sarah Wynn-Williams — Careless People (non-fiction)

As usual, I have already read and reviewed a whole bunch of these. More than I had expected, actually, given that I've not had a great reading year this year so far.

I am, finally, almost caught up with reviews, with just one book read and not yet reviewed. And hopefully I'll have lots of time to read for the last month and a half of the year.

16 November, 2025 06:32AM

November 15, 2025

Andrew Cater

2025-11-15 17:16 UTC Debian media testing for point release 13.2 of Trixie

*Busy* day in Cambridge. A roomful of people, large numbers of laptops and a lot of parallel installations.

Joined here by Emyr, Chris, Helen and Simon, with Isy doing speech installs from her university accommodation. Two Andys always makes it interesting. Steve providing breakfast, as ever.

We're almost there: the last test install is being repeated to flush out a possible bug. Other release processes are being done in the background.

Thanks again to Steve for hosting and all the hard work that goes into this from everybody.



15 November, 2025 08:39PM by Andrew Cater (noreply@blogger.com)

hackergotchi for Jonathan Dowland

Jonathan Dowland

Zoom R8

When I started looking at synths again, I had a feeling I would want to record from them, and ideally not with a computer. To that end, I also bought a second-hand standalone multitrack recorder, the Zoom R8.

Zoom R8

It's a little desktop console with two inputs, a built-in mic, and 8 sliders for adjusting the playback of 8 (ostensibly) independent tracks. It has a USB port to interface with a computer, and features some onboard effects (delay, reverb, that kind-of thing).

Look a bit closer, and the USB port is mini-USB, which gives away its age (and I'll never get rid of mini-USB cables, will I?). The two inputs are mono, so to capture stereo output from the minilogue-xd I need to tie up both inputs. Also, the 8 tracks are mono, so it's more like a stereo 4-track.

The effects (what little I've played with them) are really pretty cool; and it's great to apply them to a live signal. We had some fun running them over a bass guitar. However, you can only use them at a 44.1 kHz sample rate. If you forgo the effects, the device supports 48 kHz.

I've ended up using it as my main USB interface on my computer; it's great for that. The internal mic ended up being too weak to use for video calls. As a USB interface, my computer can receive the signal from the synth (and I've wasted more time than I care to admit trying to wrestle with the Linux audio stack to do something with that).

It can also run on batteries, which opens up the possibility of recording piano with my daughter, or field recording or suchlike.

Writing this up serves as a reminder to me of why I bought it, and I now intend to spend a little more time using it that way and stop wasting time fighting ALSA/PulseAudio/PipeWire/PortAudio/etc.

15 November, 2025 10:14AM

November 14, 2025

hackergotchi for Gunnar Wolf

Gunnar Wolf

404 not found

Found this graffiti on the wall behind my house today:

404 not found!

14 November, 2025 07:27PM

November 12, 2025

Simon Josefsson

Introducing the Debian Libre Live Images

The Debian Libre Live Images allow you to run and install Debian GNU/Linux without non-free software.

The general goal is to provide a way to use Debian without reliance on non-free software, to the extent possible within the Debian project.

One challenge is posed by the official Debian live and installer images. Since the 2022 decision on non-free firmware, the official images for bookworm and trixie contain non-free software.

The Debian Libre Live Images project provides Live ISO images for Intel/AMD-compatible 64-bit x86 CPUs (amd64) built without any non-free software, suitable for running and installing Debian. The images are similar to the official Debian live images.

One advantage of the Debian Libre Live Images is that you do not need to agree to the distribution terms and usage license agreements of the non-free blobs included in the official Debian images. The rights to your own hardware won’t be crippled by the legal restrictions that follow from relying on those non-free blobs, and the usage of your own machine is no longer limited to what the non-free firmware license agreements allow you to do. This improves your software supply-chain situation, since you no longer need to consider the implications of those blobs for your liberty, privacy or security. Inclusion of non-free firmware is a vehicle for xz-style attacks. For more information about the advantages of free software, see the FSF’s page on What is Free Software?.

Enough talking, show me the code! Err, binaries! Download images:

wget https://gitlab.com/api/v4/projects/74667529/packages/generic/debian-libre-live/main/live-image-amd64.hybrid.iso
wget https://gitlab.com/api/v4/projects/74667529/packages/generic/debian-libre-live/main/live-image-amd64.hybrid.iso.SHA256SUMS
sha256sum -c live-image-amd64.hybrid.iso.SHA256SUMS

Run in a virtual machine:

kvm -cdrom live-image-amd64.hybrid.iso -m 8G

Burn to an USB drive for installation on real hardware:

sudo dd if=live-image-amd64.hybrid.iso of=/dev/sdX # replace sdX with your USB drive

Images are built using live-build from the Debian Live Team. Inspiration has been taken from Reproducible Live Images and Kali Live.

The images are built by GitLab CI/CD shared runners. The pipeline’s .gitlab-ci.yml container job creates a container with live-build installed, defined in container/Containerfile. The build job then invokes run.sh, which runs lb build and then uploads the image to the package registry.
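
For local experimentation, the build boils down to something like the following sketch (illustrative flags only; the project's actual configuration lives in run.sh and the repository rather than in these four lines):

sudo apt install live-build
mkdir libre-live && cd libre-live
lb config --distribution trixie --archive-areas main
sudo lb build   # eventually produces live-image-amd64.hybrid.iso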

This is an initial public release, so calibrate your expectations! The primary audience is people already familiar with Debian. There are known issues. I have performed successful installations on a couple of different machines, including laptops like the Lenovo X201 and the Framework AMD Laptop 13″.

Are you able to install Debian without any non-free software on some hardware using these images?

Happy Hacking!

12 November, 2025 11:16PM by simon

hackergotchi for Daniel Pocock

Daniel Pocock

Alfa Lions v Bayonne Dockers, AFL France in Parc de Parilly, 8 November 2025

Video: men's match

12 November, 2025 05:00AM

November 09, 2025

hackergotchi for Colin Watson

Colin Watson

Free software activity in October 2025

About 95% of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay or GitHub Sponsors.

OpenSSH

OpenSSH upstream released 10.1p1 this month, so I upgraded to that. In the process, I reverted a Debian patch that changed IP quality-of-service defaults, which made sense at the time but has since been reworked upstream anyway, so it makes sense to find out whether we still have similar problems. So far I haven’t heard anything bad in this area.

10.1p1 caused a regression in the ssh-agent-filter package’s tests, which I bisected and chased up with upstream.

10.1p1 also had a few other user-visible regressions (#1117574, #1117594, #1117638, #1117720); I upgraded to 10.2p1 which fixed some of these, and contributed some upstream debugging help to clear up the rest. While I was there, I also fixed ssh-session-cleanup: fails due to wrong $ssh_session_pattern in our packaging.

Finally, I got all this into trixie-backports, which I intend to keep up to date throughout the forky development cycle.

Python packaging

For some time, ansible-core has had occasional autopkgtest failures that usually go away before anyone has a chance to look into them properly. I ran into these via openssh recently and decided to track them down. It turns out that they only happened when the libpython3.13-stdlib package had different versions in testing and unstable, because an integration test setup script made a change that would be reverted if that package was ever upgraded in the testbed, and one of the integration tests accidentally failed to disable system apt sources comprehensively enough while testing the behaviour of the ansible.builtin.apt module. I fixed this in Debian and contributed the relevant part upstream.

We’ve started working on enabling Python 3.14 as a supported version in Debian. I fixed or helped to fix a number of packages for this:

I upgraded these packages to new upstream versions:

I packaged python-blockbuster and python-pytokens, needed as new dependencies of various other packages.

Santiago Vila filed a batch of bugs about packages that fail to build when using the nocheck build profile, and I fixed several of these (generally just a matter of adjusting build-dependencies):

I helped out with the scikit-learn 1.7 transition:

I fixed or helped to fix several other build/test failures:

I fixed some other bugs:

I investigated a python-py build failure, which turned out to have been fixed in Python 3.13.9.

I adopted zope.hookable and zope.location for the Python team.

Following an IRC question, I ported linux-gpib-user to pybuild-plugin-pyproject, and added tests to make sure the resulting binary package layout is correct.

Rust packaging

Another Pydantic upgrade meant I had to upgrade a corresponding stack of Rust packages to new upstream versions:

  • rust-idna
  • rust-jiter
  • rust-pyo3
  • rust-regex
  • rust-regex-automata
  • rust-speedate
  • rust-uuid

I also upgraded rust-archery and rust-rpds.

Other bits and pieces

I fixed a few bugs in other packages I maintain:

I investigated a malware report against tini, which I think we can prove to be a false positive (at least under the reasonable assumption that there isn’t malware hiding in libgcc or glibc). Yay for reproducible builds!

I noticed and fixed a small UI deficiency in debbugs, making the checkboxes under “Misc options” on package pages easier to hit. This is merged but we haven’t yet deployed it.

I noticed and fixed a typo in the Being kind to porters section of the Debian Developer’s Reference.

Code reviews

09 November, 2025 03:33PM by Colin Watson

hackergotchi for Daniel Pocock

Daniel Pocock

Alfa Lionesses v Bayonne Dockers, AFL France in Parc de Parilly, 8 November 2025

Video: women's match

09 November, 2025 03:30PM

Russell Coker

AMD Video Driver Issues

I have had some graphics hangs on my HP z640 workstation, which always seem to occur after about 4 days of uptime. In one instance, running Debian kernel 6.16.12+deb14+1, I got the following kernel error:

kernel: amdgpu 0000:02:00.0: [drm] *ERROR* [CRTC:58:crtc-0] flip_done timed out

Then I got the following errors from kwin_wayland:

kwin_wayland_wrapper[19598]: kwin_wayland_drm: Pageflip timed out! This is a bug in the amdgpu kernel driver
kwin_wayland_wrapper[19598]: kwin_wayland_drm: Please report this at https://gitlab.freedesktop.org/drm/amd/-/issues
kwin_wayland_wrapper[19598]: kwin_wayland_drm: With the output of 'sudo dmesg' and 'journalctl --user-unit plasma-kwin_wayland --boot 0'

In another instance running Debian kernel 6.12.48+deb13 I got the kernel errors at the bottom of the post (not in the RSS feed).

A Google result suggested putting the following on the kernel command line. This has the downside of increasing the idle power, but given that it’s a low-power GPU (which I selected when I was using a system without a PCIe power cable), a bit of extra power use shouldn’t matter much. It didn’t seem to change anything, though.

amdgpu.runpm=0 amdgpu.dcdebugmask=0x10
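
For what it's worth, on a grub-based Debian install such parameters are usually made persistent by editing /etc/default/grub and regenerating the config (a sketch; edit the existing GRUB_CMDLINE_LINUX_DEFAULT line rather than adding a second one):

# /etc/default/grub (anything beyond the amdgpu options is illustrative)
GRUB_CMDLINE_LINUX_DEFAULT="quiet amdgpu.runpm=0 amdgpu.dcdebugmask=0x10"

# then regenerate the grub configuration
sudo update-grub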

I had tried out the Debian/Unstable kernel 6.16.12-2, which didn’t work with my USB speakers and had problems with the HDMI sound through my monitor, but it still had AMD GPU issues.

This all seemed to start with the PCIe errors being reported on this system [1]. So I’m now wondering if the PCIe errors were from the GPU not the socket/motherboard. The GPU in question is a Radeon RX560 4G which cost $246.75 back in about 2021 [2]. I could buy a new one of those on ebay for $149 or one of the faster AMD cards like Radeon RX570 that are around the same price. I probably have a Radeon R7 260X in my collection of spare parts that would do the job too (2G of VRAM is more than sufficient for my desktop computing needs).

Any suggestions on how I should proceed from here?

[419976.222647] amdgpu 0000:02:00.0: amdgpu: GPU fault detected: 146 0x0138482c
[419976.222659] amdgpu 0000:02:00.0: amdgpu:  for process mpv pid 141328 thread vo pid 141346
[419976.222662] amdgpu 0000:02:00.0: amdgpu:   VM_CONTEXT1_PROTECTION_FAULT_ADDR   0x00101427
[419976.222664] amdgpu 0000:02:00.0: amdgpu:   VM_CONTEXT1_PROTECTION_FAULT_STATUS 0x0404802C
[419976.222666] amdgpu 0000:02:00.0: amdgpu: VM fault (0x2c, vmid 2, pasid 32810) at page 1053735, read from 'TC0' (0x54433000) (72)
[419986.245051] amdgpu 0000:02:00.0: amdgpu: Dumping IP State
[419986.245061] amdgpu 0000:02:00.0: amdgpu: Dumping IP State Completed
[419986.255152] amdgpu 0000:02:00.0: amdgpu: ring gfx timeout, signaled seq=11839646, emitted seq=11839648
[419986.255158] amdgpu 0000:02:00.0: amdgpu: Process information: process mpv pid 141328 thread vo pid 141346
[419986.255209] amdgpu 0000:02:00.0: amdgpu: GPU reset begin!
[419986.503030] amdgpu: cp is busy, skip halt cp
[419986.658198] amdgpu: rlc is busy, skip halt rlc
[419986.659270] amdgpu 0000:02:00.0: amdgpu: BACO reset
[419986.884672] amdgpu 0000:02:00.0: amdgpu: GPU reset succeeded, trying to resume
[419986.885398] [drm] PCIE GART of 256M enabled (table at 0x000000F402000000).
[419986.885413] [drm] VRAM is lost due to GPU reset!
[419987.021051] [drm] UVD and UVD ENC initialized successfully.
[419987.120999] [drm] VCE initialized successfully.
[419987.193302] amdgpu 0000:02:00.0: amdgpu: GPU reset(1) succeeded!
[419987.194117] [drm:amdgpu_cs_ioctl [amdgpu]] *ERROR* Failed to initialize parser -125!
[419997.509120] amdgpu 0000:02:00.0: amdgpu: Dumping IP State
[419997.509131] amdgpu 0000:02:00.0: amdgpu: Dumping IP State Completed
[419997.519145] amdgpu 0000:02:00.0: amdgpu: ring gfx timeout, signaled seq=11839650, emitted seq=11839652
[419997.519152] amdgpu 0000:02:00.0: amdgpu: Process information: process kwin_wayland pid 3577 thread kwin_wayla:cs0 pid 3615
[419997.519158] amdgpu 0000:02:00.0: amdgpu: GPU reset begin!
[419997.772966] amdgpu: cp is busy, skip halt cp
[419997.928138] amdgpu: rlc is busy, skip halt rlc
[419997.929165] amdgpu 0000:02:00.0: amdgpu: BACO reset
[419998.164705] amdgpu 0000:02:00.0: amdgpu: GPU reset succeeded, trying to resume
[419998.165412] [drm] PCIE GART of 256M enabled (table at 0x000000F402000000).
[419998.165427] [drm] VRAM is lost due to GPU reset!
[419998.311054] [drm] UVD and UVD ENC initialized successfully.
[419998.411006] [drm] VCE initialized successfully.
[419998.476272] amdgpu 0000:02:00.0: amdgpu: GPU reset(2) succeeded!
[419998.476363] [drm:amdgpu_cs_ioctl [amdgpu]] *ERROR* Failed to initialize parser -125!
[420008.773202] amdgpu 0000:02:00.0: amdgpu: Dumping IP State
[420008.773212] amdgpu 0000:02:00.0: amdgpu: Dumping IP State Completed
[420008.773240] amdgpu 0000:02:00.0: amdgpu: ring gfx timeout, but soft recovered
=== the above sequence of 3 repeated many times (narrator's voice "but it did not recover") ===
[420130.933612] rfkill: input handler disabled
[420135.594195] rfkill: input handler enabled
[420145.734076] amdgpu 0000:02:00.0: amdgpu: Dumping IP State
[420145.734085] amdgpu 0000:02:00.0: amdgpu: Dumping IP State Completed
[420145.744099] amdgpu 0000:02:00.0: amdgpu: ring gfx timeout, signaled seq=11839790, emitted seq=11839792
[420145.744105] amdgpu 0000:02:00.0: amdgpu: Process information: process kwin_wayland pid 3577 thread kwin_wayla:cs0 pid 3615
[420145.744111] amdgpu 0000:02:00.0: amdgpu: GPU reset begin!

There were more kernel messages, but they were just repeats and after a certain stage there probably isn’t any more data worth getting.

09 November, 2025 09:36AM by etbe

November 08, 2025

Thorsten Alteholz

My Debian Activities in October 2025

Debian LTS

This was my hundred-thirty-sixth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:

  • [DLA 4316-1] open-vm-tools security update to fix one CVE related to a local privilege escalation.
  • [DLA 4329-1] libfcgi security update to fix one CVE related to a heap-based buffer overflow via crafted nameLen or valueLen values in data to the IPC socket.
  • [DLA 4337-1] svgpp security update to fix one CVE related to a null pointer dereference.
  • [DLA 4336-1] sysstat security update to fix two CVEs related to a size_t overflow and a multiplication integer overflow.
  • [DLA 4343-1] raptor2 security update to fix two CVEs related to a heap-based buffer over-read and an integer underflow.
  • [DLA 4349-1] request-tracker4 security update to fix one CVE related to CSV injection via ticket values with special characters. The patch was prepared by Andrew Ruthven.
  • [DLA 4353-1] xorg-server security update to fix three CVEs related to privilege escalation.

I also attended the monthly LTS/ELTS meeting.

Debian ELTS

This month was the eighty-seventh ELTS month. During my allocated time I uploaded or worked on:

  • [ELA-1538-1] libfcgi security update to fix one CVE in Buster and Stretch, related to a heap-based buffer overflow via crafted nameLen or valueLen values in data to the IPC socket.
  • [ELA-1551-1] raptor2 security update to fix two CVEs in Buster and Stretch, related to a heap-based buffer over-read and an integer underflow.
  • [ELA-1555-1] request-tracker4 security update to fix one CVE in Buster, related to CSV injection via ticket values with special characters. The patch was prepared by Andrew Ruthven.
  • [ELA-1561-1] xorg-server security update to fix three CVEs in Buster and Stretch, related to privilege escalation.

I also attended the monthly LTS/ELTS meeting.

Debian Printing

This month I uploaded a new upstream version or a bugfix version of:

This work is generously funded by Freexian!

Debian Astro

This month I uploaded a new upstream version or a bugfix version of:

Debian IoT

Unfortunately I didn’t find any time to work on this topic.

Debian Mobcom

This month I uploaded a new upstream version or a bugfix version of:

misc

This month I uploaded a new upstream version or a bugfix version of:

On my fight against outdated RFPs, I closed 31 of them in October. I could even close one RFP by uploading the new package gypsy. Meanwhile only 3373 are still open, so don’t hesitate to help closing one or another.

FTP master

This month I accepted 420 and rejected 45 packages. The overall number of packages that got accepted was 423.

I would like to remind everybody that in case you don’t agree with the removal of a package, please set the moreinfo tag on this bug. This is the only reliable way to prevent processing of that RM-bug. Well, there is a second way, of course you could also achieve this by closing the bug.

08 November, 2025 09:12AM by alteholz

November 07, 2025

hackergotchi for Daniel Pocock

Daniel Pocock

Pro-life Catholic teaching includes the risks of artificial intelligence

Monseigneur Dominique Rey, former bishop of Fréjus-Toulon, is well known in France for his commitment to the sanctity of human life. The Réseau Vie is the French pro-life Catholic association.

In the news, the pro-life debate is usually characterized by a highly emotional conflict around abortion.

In reality, the Catholic position on abortion is part of a broader context that offers a coherent refutation of a whole range of practices, from euthanasia to the death penalty to war.

Moreover, it is a way of thinking that has evolved over nearly two thousand years of Christianity. Mgr Rey's recent seminar in Lyon, like his books, was not limited to the question of abortion.

I asked a question about the risks of artificial intelligence in life-or-death decision making, and Mgr Rey was immediately willing to answer it at length.

Whatever one's opinions on feminism and abortion, it seems important to me to respect the role of the Catholic Church in keeping us off a slippery slope.

In reality, current science does not allow us to determine precisely when life begins. Moreover, we do not even have a perfect scientific definition of what life is. While it is plausible that life begins at conception, we must not take the risk of being wrong about it.

The Catholic Church does not ask us to believe in the Easter Bunny or in Santa Claus. It asks us to believe in the possibility that life begins at conception. Even for those who do not believe it, it seems reasonable to check. Before an aircraft takes off, the crew must count the number of people on board, and do so several times to be safe.

Whether one considers oneself pro-life or pro-choice, it is essential to consider the possibility that one side or the other is manipulating us, as in any other attempt at social engineering. Is the Church misleading us into believing that life begins earlier than it really does? Or is the pro-choice lobby deceiving women into believing that their fetus is not viable, dangling before them a supposed female emancipation?

The vast empires of the social media magnates are built and controlled almost entirely by men. Their main tactic for getting women to submit all their data and communications to male surveillance is to give them a false sense of autonomy. The pro-choice lobby uses the same arguments to win women over.

In the end, there are women who regret having had an abortion, and there are women who regret having given all their data to the social control media.

In my reports on JuristGate, I analyze how clients were duped by our own lawyers. It is easy to imagine some people taking their own lives after being deceived in this way.

In the most flagrant case of deception, I examined the death of Adrian von Bidder-Senn, at age 32, on the very day of our wedding. It was Palm Sunday. Reviewing all the deaths connected to Debianism, one discovers the full set of suicides of members of the Debian community. In light of the writings of Frans Pop, I cannot shake the feeling that he felt duped at the time of his death. Tracing these suicides back to their origins, one finds that the first, that of Joel "Espy" Klecker, occurred in the US state of Oregon, immediately after the highly publicized legalization of euthanasia there. Moreover, Klecker's writings reveal that he too was duped by the promise of a philosophy that our peers used to deceive us.

The Australian doctor John Billings is famous for creating the Billings Ovulation Method. The method allows women to determine their fertile periods from one day to the next, so it can be used both to conceive a child and as a means of contraception. As such, it is the only contraceptive method ever approved by the Catholic Church. By a curious coincidence, Dr Billings, my father and I all attended the same Catholic school, Xavier College in Melbourne, at different times.

Photos from the meeting with Monseigneur Rey, Réseau Vie, Basilique Saint-Bonaventure, Lyon

[Photo gallery: Monseigneur Dominique Rey, Daniel Pocock and attendees of the Réseau Vie meeting, Basilique Saint-Bonaventure, Lyon]

Blogs about the Church

Catholic.Community

Please follow the Catholic.Community website as your main page.

07 November, 2025 09:00PM

November 06, 2025

hackergotchi for Jonathan Dowland

Jonathan Dowland

inert media, or the exploitation of attention

It occurred to me recently that one of the attractions of vinyl, or more generally physical media, could be that it's inert, safe: the music is a groove cut into some plastic. That's it. The record can't do anything unexpected to you1: it just contains music.

Safe.

Safe.

There's so much exploitation of attention, and so much of what we interact with in a computing context (social media, etc.) has been weaponised against us, that having something so matter-of-fact is a relief. I know that sometimes I prefer to put a record on rather than dial up the very same album from any number of (ostensibly more convenient) digital sources, partly because I don't need to spend any of my attention spoons to do so; I can save them for the task at hand.

The same is perhaps not true for audio CDs. That might depend on your own relationship with them, of course. For me they're inexorably tied up with computing and a certain amount of faff (ripping, encoding, metadata). And long dead it might be (hopefully), but I can still remember Sony's CD rootkit scandal: CDs could be a trojan horse. Beware!


  1. I'm sure there are ingenious exceptions.

06 November, 2025 03:11PM

November 05, 2025

Reproducible Builds

Reproducible Builds in October 2025

Welcome to the October 2025 report from the Reproducible Builds project!

Welcome to the very latest report from the Reproducible Builds project. Our monthly reports outline what we’ve been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. As ever, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.

In this report:

  1. Farewell from the Reproducible Builds Summit 2025
  2. Google’s Play Store breaks reproducible builds for Signal
  3. Mailing list updates
  4. The Original Sin of Computing…that no one can fix
  5. Reproducible Builds at the Transparency.dev summit
  6. Supply Chain Security for Go
  7. Three new academic papers published
  8. Distribution work
  9. Upstream patches
  10. Website updates
  11. Tool development

Farewell from the Reproducible Builds Summit 2025…

Thank you to everyone who joined us at the Reproducible Builds Summit in Vienna, Austria!

We were thrilled to host the eighth edition of this exciting event, following the success of previous summits in various iconic locations around the world, including Venice, Marrakesh, Paris, Berlin, Hamburg and Athens. During this event, participants had the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. Our aim was to create an inclusive space that fosters collaboration, innovation and problem-solving.

The agenda of the three main days is available online — however, some working sessions may still lack notes at time of publication.

One tangible outcome of the summit is that Johannes Starosta finished their rebuilderd tutorial, which is now available online and Johannes is actively seeking feedback.


Google’s Play Store breaks reproducible builds for Signal

On the issue tracker for the popular Signal messenger app, developer Greyson Parrelli reports that updates to the Google Play store have, in effect, broken reproducible builds:

The most recent issues have to do with changes to the APKs that are made by the Play Store. Specifically, they add some attributes to some .xml files around languages and resources, which is not unexpected because of how the whole bundle system works. This is trickier to resolve, because unlike current “expected differences” (like signing information), we can’t just exclude a whole file from the comparison. We have to take a more nuanced look at the diff. I’ve been hesitant to do that because it’ll complicate our currently-very-readable comparison script, but I don’t think there’s any other reasonable option here.

The full thread with additional context is available on GitHub.


Mailing list updates

On our mailing list this month:

  • kpcyrd forwarded a fascinating tidbit regarding so-called ninja and samurai build ordering, that uses data structures in which the pointer values returned from malloc are used to determine some order of execution.

  • Arnout Engelen, Justin Cappos, Ludovic Courtès and kpcyrd continued a conversation started in September regarding the “Minimum Elements for a Software Bill of Materials”. (Full thread)

  • Felix Moessbauer of Siemens posted to the list reporting that he had recently “stumbled upon a couple of Debian source packages on the snapshot mirrors that are listed multiple times (same name and version), but each time with a different checksum”. The thread, which Felix titled Debian: what precisely identifies a source package, is about precisely that: what can be axiomatically relied upon by consumers of the Debian archives, as well as indicating an issue where “we can’t exactly say which packages were used during build time (even when having the .buildinfo files)”.

  • Luca DiMaio posted to the list announcing the release of xfsprogs 6.17.0 which specifically includes a commit that “implements the functionality to populate a newly created XFS filesystem directly from an existing directory structure” which “makes it easier to create populated filesystems without having to mount them [and thus is] particularly useful for reproducible builds”. Luca asked the list how they might contribute to the docs of the System images page.


The Original Sin of Computing…that no one can fix

Popular YouTuber @laurewired published a video this month with an engaging take on the Trusting Trust problem. Titled The Original Sin of Computing…that no one can fix, the video touches on David A. Wheeler’s Diverse Double-Compiling dissertation.

GNU developer Janneke Nieuwenhuizen followed-up with an email (additionally sent to our mailing list) as well, underscoring that GNU Mes’s “current solution [to this issue] uses ancient softwares in its bootstrap path, such as gcc-2.95.3 and glibc-2.2.5”. (According to Colby Russell, the GNU Mes bootstrapping sequence is shown at 18m54s in the video.)


Reproducible Builds at the Transparency.dev summit

Holger Levsen gave a talk at this year’s Transparency.dev summit in Gothenburg, Sweden, outlining the achievements of the Reproducible Builds project in the last 12 years, covering both upstream developments as well as some distribution-specific details. As mentioned on the talk’s page, Holger’s presentation concluded “with an outlook into the future and an invitation to collaborate to bring transparency logs into Reproducible Builds projects”.

The slides of the talk are available, although a video has yet to be released. Nevertheless, as a result of the discussions at Transparency.dev there is a new page on the Debian wiki with the aim of describing a potential transparency log setup for Debian.


Supply Chain Security for Go

Andrew Ayer has setup a new service at sourcespotter.com that aims to monitor the supply chain security for Go releases. It consists of four separate trackers:

  1. A tool to verify that the Go Module Mirror and Checksum Database is behaving honestly and has not presented inconsistent information to clients.
  2. A module monitor that records every module version served by the Go Module Mirror and Checksum Database, allowing you to monitor for unexpected versions of your modules.
  3. A tool that verifies that the Go toolchains published in the Go Module Mirror can be reproduced from source code, making it difficult to hide backdoors in the binaries downloaded by the go command.
  4. A telemetry config tracker that tracks the names of telemetry counters uploaded by the Go toolchain, to ensure that Go telemetry is not violating users’ privacy.

As the homepage of the service mentions, the trackers are free software and do not rely on Google infrastructure.


Three new academic papers published

Julien Malka of the Institut Polytechnique de Paris published an exciting paper this month on How NixOS could have detected the XZ supply-chain attack for the benefit of all thanks to reproducible-builds. Julien outlines his paper as follows:

In March 2024, a sophisticated backdoor was discovered in xz, a core compression library in Linux distributions, covertly inserted over three years by a malicious maintainer, Jia Tan. The attack, which enabled remote code execution via ssh, was only uncovered by chance when Andres Freund investigated a minor performance issue. This incident highlights the vulnerability of the open-source supply chain and the effort attackers are willing to invest in gaining trust and access. In this article, I analyze the backdoor’s mechanics and explore how bitwise build reproducibility could have helped detect it.

A PDF of the paper is available online.


Iyán Méndez Veiga and Esther Hänggi (of the Lucerne University of Applied Sciences and Arts and ETH Zurich) published a paper this month on the topic of Reproducible Builds for Quantum Computing. The abstract of their paper mentions the following:

Although quantum computing is a rapidly evolving field of research, it can already benefit from adopting reproducible builds. This paper aims to bridge the gap between the quantum computing and reproducible builds communities. We propose a generalization of the definition of reproducible builds in the quantum setting, motivated by two threat models: one targeting the confidentiality of end users’ data during circuit preparation and submission to a quantum computer, and another compromising the integrity of quantum computation results. This work presents three examples that show how classical information can be hidden in transpiled quantum circuits, and two cases illustrating how even minimal modifications to these circuits can lead to incorrect quantum computation results.

A full PDF of their paper is available.


Congratulations to Georg Kofler who submitted their Master’s thesis for the Johannes Kepler University of Linz, Austria on the topic of Reproducible builds of E2EE-messengers for Android using Nix hermetic builds:

The thesis focuses on providing a reproducible build process for two open-source E2EE messaging applications: Signal and Wire. The motivation to ensure reproducibility—and thereby the integrity—of E2EE messaging applications stems from their central role as essential tools for modern digital privacy. These applications provide confidentiality for private and sensitive communications, and their compromise could undermine encryption mechanisms, potentially leaking sensitive data to third parties.

A full PDF of their thesis is available online.


Shawkot Hossain of Aalto University, Finland has also submitted their Master’s thesis on The Role of SBOM in Modern Development with a focus on the extant tooling:

Currently, there are numerous solutions and techniques available in the market to tackle supply chain security, and all claim to be the best solution. This thesis delves deeper by implementing those solutions and evaluates them for better understanding. Some of the tools that this thesis implemented are Syft, Trivy, Grype, FOSSA, dependency-check, and Gemnasium. Software dependencies are generated in a Software Bill of Materials (SBOM) format by using these open-source tools, and the corresponding results have been analyzed. Among these tools, Syft and Trivy outperform others as they provide relevant and accurate information on software dependencies.

A PDF of the thesis is also available.


Distribution work

Michael Plura published an interesting article on Heise.de on the topic of Trust is good, reproducibility is better:

In the wake of growing supply chain attacks, the FreeBSD developers are relying on a transparent build concept in the form of Zero-Trust Builds. The approach builds on the established Reproducible Builds, where binary files can be rebuilt bit-for-bit from the published source code. While reproducible builds primarily ensure verifiability, the zero-trust model goes a step further and removes trust from the build process itself. No single server, maintainer, or compiler can be considered more than potentially trustworthy.

The article mentions that this “goal has now been achieved with a slight delay and can be used in the current development branch for FreeBSD 15”.


In Debian this month, 7 reviews of Debian packages were added, 5 were updated and 11 were removed, adding to our knowledge about identified issues.

For the Debian CI tests, Holger fixed #786644 and set nocheck in DEB_BUILD_OPTIONS for the second build.


Lastly, Bernhard M. Wiedemann posted another openSUSE monthly update for their work there.


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


Website updates

Once again, there were a number of improvements made to our website this month including:

In addition, a number of contributors added a series of notes from our recent summit to the website, including Alexander Couzens, Robin Candau and kpcyrd.


Tool development

diffoscope version 307 was uploaded to Debian unstable by Chris Lamb, who made a number of changes including fixing compatibility with LLVM version 21 and an attempt to automatically deploy to PyPI by liaising with the PyPI developers/maintainers (an experimental feature). In addition, Vagrant Cascadian updated diffoscope in GNU Guix to version 307.



Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

05 November, 2025 09:01PM

November 03, 2025

Antoine Beaupré

Encrypting a Debian install with UKI

I originally set up a machine without any full disk encryption, then somehow regretted it quickly afterwards. My original reasoning was that this was a "play" machine, so I wanted as few restrictions on accessing the machine as possible, which meant removing passwords, mostly.

I actually ended up having a user password, but disabled the lock screen. Then I started using the device to manage my photo collection, and suddenly there was a lot of "confidential" information on the device that I didn't want to store in clear text anymore.

Pre-requisites

So, how does one convert an existing install from plain text to full disk encryption? One way is to backup to an external drive, re-partition everything and copy things back, but that's slow and boring. Besides, cryptsetup has a cryptsetup-reencrypt command, surely we can do this in place?

Having not set aside enough room for /boot, I briefly considered a "encrypted /boot" configuration and conversion (e.g. with this guide) but remembered grub's support for this is flaky, at best, so I figured I would try something else.

Here, I'm going to guide you through how I first converted from grub to systemd-boot and a UKI kernel, and then re-encrypted my main partition.

Note that secureboot is disabled here, see further discussion below.

systemd-boot and Unified Kernel Image conversion

systemd folks have been developing UKI ("unified kernel image") to ship kernels. The way this works is that the kernel and initrd (and a UEFI boot stub) are bundled into a single portable executable that lives in the EFI partition, as opposed to /boot. This neatly solves my problem, because I already have such a clear-text partition and won't need to re-partition my disk to convert.

Debian has started some preliminary support for this. It's not default, but I found this guide from Vasudeva Kamath which was pretty complete. Since the guide assumes some previous configuration, I had to adapt it to my case.

Here's how I did the conversion to both systemd-boot and UKI, all at once. I could have perhaps done it one at a time, but doing both at once works fine.

Before you start, make sure secureboot is disabled; see the discussion below.

  1. install systemd tools:

    apt install systemd-ukify systemd-boot
    
  2. Configure systemd-ukify, in /etc/kernel/install.conf:

    layout=uki
    initrd_generator=dracut
    uki_generator=ukify
    

    TODO: it doesn't look like this generates an initrd with dracut, do we care?

  3. Configure the kernel boot arguments with the following in /etc/kernel/uki.conf:

    [UKI]
    Cmdline=@/etc/kernel/cmdline
    

    The /etc/kernel/cmdline file doesn't actually exist here, and that's fine. Defaults are okay, as the image gets generated from your current /proc/cmdline. Check your /etc/default/grub and /proc/cmdline if you are unsure. You'll see the generated arguments in bootctl list below.

  4. Build the image:

    dpkg-reconfigure linux-image-$(uname -r)
    
  5. Check the boot options:

    bootctl list
    

    Look for a Type #2 (.efi) entry for the kernel.

  6. Reboot:

    reboot
    

You can tell you have booted with systemd-boot because (a) you won't see grub and (b) the /proc/cmdline will reflect the configuration listed in bootctl list. In my case, a systemd.machine_id variable is set there, and not in grub (compare with /boot/grub/grub.cfg).

By default, the systemd-boot loader just boots, without a menu. You can force the menu to show up by un-commenting the timeout line in /boot/efi/loader/loader.conf, by hitting keys during boot (e.g. hitting "space" repeatedly), or by calling:

systemctl reboot --boot-loader-menu=0

See the systemd-boot(7) manual for details on that.
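
For reference, loader.conf is a simple key-value file; a minimal sketch enabling the menu (the path assumes the ESP is mounted at /boot/efi, and the commented-out default entry is hypothetical):

# /boot/efi/loader/loader.conf
timeout 3            # show the boot menu for 3 seconds
#default debian-*    # optionally pin a default entry (glob over entry ids)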

I did not go through the secureboot process; presumably I had already disabled secureboot. This is trickier: because one needs a "special key" to sign the UKI image, one would need the collaboration of debian.org to get this working out of the box with the keys shipped onboard most computers.

In other words, if you want to make this work with secureboot enabled on your computer, you'll need to figure out how to sign the generated images before rebooting here, because otherwise you will break your computer. In that case, follow these guides:

Re-encrypting root filesystem

Now that we have a way to boot an encrypted filesystem, we can switch to LUKS for our filesystem. Note that you can probably follow this guide if, somehow, you managed to make grub work with your LUKS setup, although as this guide shows, you'd need to downgrade the cryptographic algorithms, which seems like a bad tradeoff.

We're using cryptsetup-reencrypt for this which, amazingly, supports re-encrypting devices on the fly. The trick is that it needs free space at the end of the partition for the LUKS header (which, I guess, makes it a footer), so we need to resize the filesystem to leave room for it, which is the trickiest bit.

This is a potentially destructive procedure. Be sure your backups are up to date, or be ready to lose all data on the device.

We assume 512 byte sectors here. Check your sector size with fdisk -l and adjust accordingly.

  1. Before you perform the procedure, make sure requirements are installed:

    apt install cryptsetup systemd-cryptsetup cryptsetup-initramfs
    

    Note that this requires network access, of course.

  2. Reboot into a live image. I like GRML, but any Debian live image will work, possibly including the installer.

  3. First, calculate how many sectors to free up for the LUKS header:

    qalc> 32Mibyte / ( 512 byte )
    
      (32 mebibytes) / (512 bytes) = 65536
    
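    The same number can be computed with plain shell arithmetic:

    # 32 MiB expressed in 512-byte sectors (matches the qalc result)
    echo $((32 * 1024 * 1024 / 512))   # prints 65536
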
  4. Find the sector sizes of the Linux partitions:

    fdisk -l /dev/nvme0n1 | awk '/filesystem/ { print $1 " " $4 }'
    

    For example, here's a system with a /boot and a / filesystem:

    $ sudo fdisk -l /dev/nvme0n1 | awk '/filesystem/ { print $1 " " $4 }'
    /dev/nvme0n1p2 999424
    /dev/nvme0n1p3 3904979087
    
  5. Subtract the 65536 sectors computed in step 3 from the partition's sector count:

    qalc> set precision 100
    qalc> 3904979087 - 65536
    

    Or, combining the last step and this one into one line:

    fdisk -l /dev/nvme0n1 | awk '/filesystem/ { print $1 " " $4 - 65536 }'
    
  6. Recheck filesystem:

    e2fsck -f /dev/nvme0n1p2
    
  7. Resize filesystem:

    resize2fs /dev/nvme0n1p2 $(fdisk -l /dev/nvme0n1 | awk '/nvme0n1p2/ { print $4 - 65536 }')s
    

    Notice the trailing s here: it makes resize2fs interpret the number as 512-byte sectors, as opposed to the default (4k blocks).

  8. Re-encrypt filesystem:

    cryptsetup reencrypt --encrypt /dev/nvme0n1p2 --reduce-device-size=32M
    

    This is it! This is the most important step! Make sure your laptop is plugged in and try not to interrupt it. This can, apparently, be resumed without problem, but I'd hate to have to show you how; a sketch follows below, just in case.

    This will show progress information like:

    Progress:   2.4% ETA 23m45s,      53GiB written, speed   1.3 GiB/s
    

    Wait until the ETA has passed.
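
    Should the re-encryption be interrupted anyway, re-running it with the resume flag should pick up where it left off (a sketch, assuming a cryptsetup 2.x with the --resume-only option):

    # resume a previously interrupted re-encryption
    cryptsetup reencrypt --resume-only /dev/nvme0n1p2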

  9. Open and mount the encrypted filesystem and mount the EFI system partition (ESP):

    cryptsetup open /dev/nvme0n1p2 crypt
    mount /dev/mapper/crypt /mnt
    mount /dev/nvme0n1p1 /mnt/boot/efi
    

    If this fails, now is the time to consider restoring from backups.

  10. Enter the chroot

    for fs in proc sys dev ; do
      mount --bind /$fs /mnt/$fs
    done
    chroot /mnt
    

    Pro tip: this can be done in one step in GRML with:

    grml-chroot /mnt bash
    
  11. Generate a crypttab:

    echo crypt_dev_nvme0n1p2 UUID=$(blkid -o value -s UUID /dev/nvme0n1p2) none luks,discard >> /etc/crypttab
    
  12. Adjust root filesystem in /etc/fstab, make sure you have a line like this:

    /dev/mapper/crypt_dev_nvme0n1p2 /               ext4    errors=remount-ro 0       1
    

    If you were already using a UUID entry for this, there's nothing to change!

  13. Configure the root filesystem in the initrd:

    echo root=/dev/mapper/crypt_dev_nvme0n1p2 > /etc/kernel/cmdline
    

    You might also want to look at the options: field in bootctl list to set the right thing here.

  14. Regenerate UKI:

    dpkg-reconfigure linux-image-$(uname -r)
    

    Be careful here! systemd-boot inherits the command line from the system where the image is generated, so it may include some unsupported arguments from your boot environment. In my case GRML had a couple of those, which broke the boot. It's still possible to work around this issue by tweaking the arguments at boot time, that said.

    Also, the above will reconfigure the package named after the running kernel; it will only work if that's exactly the same version as the installed kernel.

  15. Exit chroot and reboot

    exit
    reboot
    

Some of the ideas in this section were taken from this guide, but the procedure was mostly rewritten to simplify the work. My guide also avoids the grub hacks and avoids tying itself to a specific initrd system (that guide uses initramfs-tools and grub, while I switched to dracut and systemd-boot above). RHEL also has a similar, perhaps even better, guide.

Somehow I have made this system without LVM at all, which simplifies things a bit (as I don't need to also resize the physical volume/volume groups), but if you have LVM, you need to tweak this to also resize the LVM bits. The RHEL guide has some information about this.

03 November, 2025 04:34PM

Birger Schacht

Status update, October 2025

At the beginning of the month I uploaded a new version of the sway package to Debian. This contains two backported patches, one to fix reported WM capabilities and one to revert the default behavior for drag_lock to disabled.

I also uploaded new releases of cage (a kiosk for Wayland), labwc, the window-stacking Wayland compositor that is inspired by Openbox, and wf-recorder, a tool for creating screen recordings of wlroots-based Wayland compositors.

If I don’t forget I try to update the watch file of the packages I touch to the new version 5 format.

Simon Ser announced vali, a C library for Varlink. The blog post also mentions that this will be a dependency of “the next version of the kanshi Wayland output management daemon” and the PR to do so is now already merged. So I created ITP: vali – A Varlink C implementation and code generator, packaged the library and it is now waiting in NEW. In addition to libscfg this is now the second dependency of kanshi that is in NEW.

On the Rust side of things I fixed a bug in carl. The fix introduces new date properties which can be used to highlight a calendar date. I also updated all the dependencies and plan to create a new release soon.

Later I dug up a Rust project that I started a couple of years ago, where I try to use wasm-bindgen to implement interactive web components. There is a lot I have to refactor in this code base, but I will work on that and try to publish something in the next few months.

Miscellaneous

Two weeks ago I wrote A plea for <dialog>, which made the case for using standardized HTML elements instead of resorting to JavaScript libraries.

I finally managed to update my shell server to Debian 13.

I created an issue for the nextcloud-news Android client because I moved to a new phone and my starred articles did not show up in the news app, which is a bit annoying.

I got my ticket for 39C3.

In my dayjob I continued to work on the refactoring of the import logic of our apis-core-rdf app. I released version 0.56, which also introduced the "#snackbar" as the container for the toast message, as described in the <dialog> blog post. At the end of the month I released version 0.57 of apis-core-rdf, which got rid of the remaining leftovers of the old import logic.

A couple of interesting articles I stumbled upon (or finally had the time to read):

03 November, 2025 05:28AM

Russ Allbery

Review: The Raven Scholar

Review: The Raven Scholar, by Antonia Hodgson

Series: Eternal Path Trilogy #1
Publisher: Orbit
Copyright: April 2025
ISBN: 0-316-57723-5
Format: Kindle
Pages: 651

The Raven Scholar is an epic fantasy and the first book of a projected trilogy. It is Antonia Hodgson's first published fantasy novel; her previous published novels are historical mysteries. I would classify this as adult fantasy — the main character is thirty-four with a stable court position — but it has strong YA vibes because of the generational turnover feel of the main plot.

Eight years before the start of this book, Andren Valit attempted to assassinate the emperor and failed. Since then, his widow and three children — twins Yana and Ruko and infant Nisthala — have been living in disgrace in a cramped apartment, subject to constant inspections and suspicion. As the story opens, they have been summoned to appear before the emperor, escorted by a young and earnest Hound (essentially the state security services) named Shal Worthy. The resulting interrogation is full of dangerous traps. Not all of them will be avoided.

The formalization of the consequences of that imperial summons falls to an unpopular Junior Archivist (Third Class) whose one notable skill is her penmanship. A meeting that was disastrous for the Valits becomes unexpectedly fortunate for the archivist, albeit with a poisonous core.

Eight years later, Neema Kraa is High Scholar, and Emperor Bersun's twenty-four years of permitted reign is coming to an end. The Festival is about to begin. One representative from each of the empire's eight anats (religious schools) will compete in seven days of Trials, save for the Dragons who do not want the throne and will send a proxy. The victor according to the Trials scoring system will become emperor and reign unquestioned for twenty-four years or until resignation. This is the system that put an end to the era of chaos and has been in place for over a thousand years.

On the eve of the Trials, the Raven contender is found murdered. Neema is immediately a suspect; she even has reasons to suspect herself. She volunteers to lead the investigation because she has to know what happened. She is also volunteered to be the replacement Raven contender. There is no chance that she will become emperor; she doesn't even know how to fight. But agnostic Neema has a rather unexpected ally.

As the last chime fades we drop neatly on to the balcony's rusting hand rail, folding our wings with a soft shuffle. Noon, on the ninth day of the eighth month, 1531. Neema Kraa's lodgings. We are here, exactly where we should be, at exactly the right moment, because we are the Raven, and we are magnificent.

The Raven Scholar is a rather good epic fantasy, with some caveats that I'll get to in a moment, but I found it even more fascinating as a genre artifact.

I've read my share of epic fantasy over the years, although most of my familiarity with the current wave of new adult fairy epics comes from reviews rather than personal experience. The Raven Scholar is epic fantasy, through and through. There is court intrigue, a main character who is a court functionary unexpectedly thrown into the middle of some problem, civilization-wide stakes, dramatic political alliances, detailed magic and mythological systems, and gods. There were moments that reminded me of a Guy Gavriel Kay novel, although Hodgson's characters tend more towards disarming moments of humanization instead of Kay's operatic scenes of emotional intensity.

But The Raven Scholar is also a murder mystery, complete with a crime scene, clues, suspects, evidence, an investigation, a possibly compromised detective, and a morass of possible motives and red herrings. I'm not much of a mystery reader, but this didn't feel like sort of ancillary mystery that might crop up in the course of a typical epic fantasy. It felt like a full-fledged investigation with an amateur detective; one can tell that Hodgson's previous four books were historical mysteries.

And then there's the Trials, which are the centerpiece of the book.

This book helped me notice that people (okay, me, I'm the people) have been sleeping on the influence of The Hunger Games, Battle Royale, and reality TV (specifically Survivor) on genre fiction, possibly because the more obvious riffs on the idea (Powerless, The Selection) have been young adult or new adult. Once I started looking, I realized this idea is everywhere now: Throne of Glass, Fourth Wing, even The Night Circus to some extent. Competitions with consequences are having a moment.

I suspect having a competition to decide the next emperor is going to strike some traditional fantasy readers as sufficiently absurd and unbelievable that it will kick them out of the book. I had a moment of "okay, this is weird, why would anyone stick with this system for so long" myself. But I would encourage such readers to interrogate whether that's only a response from unfamiliarity; after all, strange women lying in ponds distributing swords is no basis for a system of government either. This is hardly the most unrealistic epic fantasy trope, and it has the advantage of being a hell of a plot generator when handled well.

Hodgson handles it well. Society in this novel is structured around the anats and the eight Guardians, gods who, according to myth, had returned seven times previously to save the world, but who will destroy the world when they return again. Each Guardian represents a group of characteristics and useful societal functions: the Ox is trustworthy, competent and hard-working; the Fox is a trickster and a rule-bender; the Raven is shrewd and careful and is the Guardian of scholars and lawyers. Each Trial is organized by one of the anats and tests the contenders for the skills most valued by that Guardian, often in subtle and rather ingenious ways. There are flaws here that you could poke at if you wanted to, but I was charmed and thoroughly entertained by how well Hodgson weaves the story around the Trials and uses the conflicting values to create character conflict, unexpected alliances, and engrossing plot.

Most importantly for a book of this sort, I liked Neema. She has a charming combination of competence, quirks (she is almost physically unable to not correct people's factual errors), insecurity, imposter syndrome, and determination. She is way out of her depth and knows it, but she has an ethical core and an insatiable curiosity that won't let her leave the central mysteries of the book alone. And the character dynamics are great; there are a lot of characters, including the competition problem of having to juggle eight contenders and give them all sufficient characterization to be meaningful, but this book uses its length to give each character some room to breathe. This is a long book, well over 600 pages, but it felt packed with events and plot twists. After every chapter I had to fight the urge to read just one more.

The biggest drawback of this book is that it is very much the first book of a trilogy, none of the other volumes are out yet, and the ending is rather nasty. This is the sort of trilogy that opens with a whole lot of bad things happening, and while I am thoroughly hooked and will purchase the next volume as soon as it's available, I wish Hodgson had found a way to end the book on a somewhat more positive or hopeful note. The middle of the book was great; the end was a bit of an emotional slog, alas. The writing is good enough here that I'm fairly sure the depression will be worth it, but if you need your endings to be triumphant (and who could blame you in this moment in history), you may want to wait on this one until more volumes are out.

Apart from that, though, this was a lot of fun. The Guardians felt like they came from a different strand of fantasy than you usually see in epic, more of a traditional folk tale vibe, which adds an intriguing twist to the epic fantasy setting. The characters all work, and Hodgson even pulls off some Game of Thrones–style twists that make you sympathetic to characters you previously hated. The magic system apart from the Guardians felt underbaked, but the politics had more depth than a lot of fantasy novels. If you want the truly complex and twisty politics you would get from one of Guy Gavriel Kay's historical rewrites, you will come away disappointed, but it was good enough for me. And I did enjoy the Raven.

Respect, that's all we demand. Recognition of our magnificence. Offerings. Love. Fear. Trembling awe. Worship. Shiny things. Blood sacrifice, some of us very much enjoy blood sacrifice. Truly, we ask for so little.

Followed by an as-yet untitled sequel that I hope will materialize.

Rating: 7 out of 10

03 November, 2025 03:25AM

November 02, 2025

hackergotchi for Guido Günther

Guido Günther

Free Software Activities October 2025

Quite some things made progress last month: we put out the Phosh 0.50 release, got closer to enabling media roles for audio by default in Phosh (see related post) and reworked our image builds. You should also (hopefully) notice some nice quality-of-life improvements once the changes land in a distro near you, if you're using Phosh. See below for details:

phosh

  • Switch back to default theme when disabling automatic HighContrast (MR)
  • Handle gnome-session 49 changes so OSK can still start up (MR)
  • Release 0.50.0, 0.50.1
  • Don't forget to apply corner-shift to gear icon (MR)
  • Fix startup warning (MR)
  • Update doap (MR)
  • DBus codegen cleanups (MR, MR, MR)
  • Add Autobrightness handling (MR), (MR), (MR)

phoc

  • Dispatch idle loop in prepare (MR)
  • Release 0.50.0

phosh-mobile-settings

  • Allow to hide plugins (MR)
  • Release 0.50~rc1, 0.50.0
  • Hide demo plugins by default (MR)
  • Sink floating refs properly (MR)
  • Simplify includes (MR)
  • Allow to configure alarm sound if clock app likely supports it (MR)
  • Use shared check CI images (MR)
  • Release 0.50.1
  • Fix ringtone role (MR)

stevia (formerly phosh-osk-stub)

  • Ship systemd user unit so things work with gnome-session 49 (MR)
  • Fix fallback to default OSK size with multiple outputs (MR)
  • Improve character styling a bit (MR)
  • Consolidate input surface creation (MR)
  • Release 0.50.0, 0.50.1
  • Don't trigger backspace key repeat in keypad or emoji layouts (MR)
  • Better restore layout after swipe closing (MR)

phosh-tour

meta-phosh

  • Build shared CI image (MR)

xdg-desktop-portal-phosh

  • Use pfs subproject for Rust portal (MR)

libphosh-rs

  • Fix doc build (MR)

Calls

  • Release 49.1
  • Fix plugin loading (MR)
  • Fix debug logging (MR)

Phrog

  • Allow OSKs to run with gnome-session 49 (MR)
  • Release 0.50.0 (MR)

phosh-recipes

  • Fix build (MR)
  • Simplify and cleanup docs (MR)
  • Fix mkosi build (MR)
  • Install Recommends: (MR)
  • Draft: Add support for initial-setup (MR)
  • Add version option (MR)
  • Drop debos build (MR, MR)

feedbackd

  • Fix compatibility with systemd >= 258 (MR)
  • Use meson.options (MR)
  • Add phone role (MR)

feedbackd-device-themes

  • Add support for Google Pixel 3A (MR)

Chatty

  • Fix failing CI build (MR)

Squeekboard

  • Add systemd unit to start with gnome-session 49 (MR)

Debian

  • Upload phosh-tour 0.50.0
  • Upload stevia 0.50.0, 0.50.1
  • Upload phosh-mobile-settings 0.50.0
  • Upload feedbackd 0.8.6
  • Prepare phrog upload (MR)
  • gnome-session Breaks (MR, MR)
  • cellbroadcastd: Backport our upstream deadlock fix (MR)
  • Upload wlroots 0.19.2
  • Upload iio-sensor-proxy 3.8 (MR)

Cellbroadcastd

  • Release 0.0.3
  • Fix deadlock on start (MR)

gnome-settings-daemon

  • Fix brightness values (MR)

gnome-control-center

  • Ignore role based loopbacks (MR)

gnome-initial-setup

  • Use AdwSwitchRow to reduce horizontal allocation (MR)
  • Quit when done (and not under GDM) (MR)

sdm845-mainline

  • shift6mq: Switch to mainline panel driver (MR)

gnome-session

  • RequiredComponents is now gone (MR)

alpine

  • stevia: Add hunspell-en-us dependency (MR)

droid-juicer

  • Install sensor firmware for SHIFT6mq (MR)
  • Install sensor firmware for Oneplus 6T (MR)
  • Fix clippy complaints (MR)

phosh-site

  • Mention mirror (MR)
  • Update for hugo 150 (MR, MR)
  • Embed some of our peertube videos (MR)
  • Add donations item (MR)
  • Update osk post to mention systemd unit (MR)
  • Update list of distributions and users (MR)
  • Switch nightly builds to forky (MR)
  • Lint more markdown (MR)
  • Release 0.50.1 (MR)
  • Notes on audio (MR)

phosh-debs

  • Switch to Debian forky (MR)
  • Add g-i-s (MR)

Linux

  • shift6mq: Add missing panel driver dependency (MR)
  • shift6mq: Fix DTS warning (MR)

Wireplumber

  • Update media-role volume (MR)
  • Allow to set a default target for e.g. alerts and alarms (MR)

Phosh.mobi e.V.

demo-phones

  • Add demo videos (MR)
  • Add demo epubs (MR)

Reviews

This is not code by me but reviews on other people's code. The list is (as usual) slightly incomplete. Thanks for the contributions!

  • phoc: Use correct format specifiers (MR)
  • phoc: seat: Use the getter to access focused layer's output (MR)
  • phosh: Upcoming events empty state (MR)
  • phosh: Caffeine duration (MR)
  • phosh: Location service quick setting (MR)
  • phosh: UI check fix (MR)
  • phosh: Autobrightness toggle (MR)
  • p-m-s: Empty tweaks page test (MR)
  • p-m-s: Symlink backend (MR)
  • p-m-s: Return boolean from value setters (MR)
  • p-m-s: conf-tweaks: Make setting_data a prop in Xresources backend (MR)
  • p-m-s: conf-tweaks: Add gtk3settings backend (MR)
  • gmobile: xiaomi-sweet support (MR)
  • xdg-d-p: Use clippy (MR, MR)
  • libcmatrix: various tweaks (MR: https://source.puri.sm/Librem5/libcmatrix/-/merge_requests/110)
  • libcmatrix: Release 0.0.4 (MR: https://source.puri.sm/Librem5/libcmatrix/-/merge_requests/115)
  • meta-phosh: Add cellbroadcasts (MR)
  • calls: Unload plugin test (MR)
  • calls: Sip proxy support (MR)
  • calls: Build system cleanups (MR)
  • m-b-p-i: JP MVNO (MR)

Comments?

Join the Fediverse thread

02 November, 2025 06:39PM

hackergotchi for Ben Hutchings

Ben Hutchings

FOSS activity in October 2025

02 November, 2025 12:54PM by Ben Hutchings

Russell Coker

PCIe Problems

HP z840 Dead Slot

I just had an issue with the HP z840 system I'm using as a build server [1]. I had to take it to a site that was about 20 minutes' drive away, and after getting there it didn't work: it just gave 6 beeps and the red LED on the power button flashed. The beeps indicate a video issue, pointing at the Intel Arc B580 card (which is annoyingly large) [2]. I swapped the card with another video card I had lying around (which I knew to be reliable) and got the same result.

It turned out that the PCIe*16 slot that I was using for it had broken, maybe bumps during transport with the big heavy GPU had broken it. I plugged it into the next slot along, which is a PCIe*8 slot that's open ended so it takes larger cards. The upside of this is that the system is still working well; the downside is that the issues I already had with the GPU being unreasonably large are exacerbated by losing one of the *16 slots. Having it in a PCIe 3.0*8 slot is not a problem for me as I only plan to use it for 8K display and for ML stuff, and I think that *8 speed (7.8GB/s) is sufficient for both those tasks. In that slot the card could display 8K video at 60Hz with 32bpp and no compression (something that I don't anticipate ever doing). It could also transfer the largest LLM that fits in the card's 12GB of VRAM in under 2 seconds, which isn't an unreasonable delay for starting a LLM.

The question now is, should I remove PCIe cards before transport in future?

HP z640 Intermittent Errors

The next issue I have is with my HP z640 workstation, which is now my main workstation [3]. I started getting the errors below; one time the kwin_wayland session hung, and another time I got video corruption with mpv.

Oct 10 20:46:36 xev kernel: pcieport 0000:00:02.0: AER: Correctable error 
message received from 0000:00:02.0
Oct 10 20:46:36 xev kernel: pcieport 0000:00:02.0: AER: found no error details 
for 0000:00:02.0
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0: AER: Multiple Correctable 
error message received from 0000:00:02.0
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0: PCIe Bus Error: 
severity=Correctable, type=Data Link Layer, (Transmitter ID)
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0:   device [8086:2f04] error 
status/mask=00001040/00002000
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0:    [ 6] BadTLP                
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0:    [12] Timeout               
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0: AER:   Error of this Agent 
is reported first
Oct 10 20:46:37 xev kernel: amdgpu 0000:02:00.0: PCIe Bus Error: 
severity=Correctable, type=Data Link Layer, (Transmitter ID)
Oct 10 20:46:37 xev kernel: amdgpu 0000:02:00.0:   device [1002:6987] error 
status/mask=00001000/00002000
Oct 10 20:46:37 xev kernel: amdgpu 0000:02:00.0:    [12] Timeout               
Oct 10 20:46:37 xev kernel: snd_hda_intel 0000:02:00.1: PCIe Bus Error: 
severity=Correctable, type=Data Link Layer, (Transmitter ID)
Oct 10 20:46:37 xev kernel: snd_hda_intel 0000:02:00.1:   device [1002:aae0] 
error status/mask=00001000/00002000
Oct 10 20:46:37 xev kernel: snd_hda_intel 0000:02:00.1:    [12] Timeout               
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0: AER: Correctable error 
message received from 0000:00:02.0
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0: AER: found no error details 
for 0000:00:02.0
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0: AER: Multiple Correctable 
error message received from 0000:00:02.0
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0: AER: found no error details 
for 0000:00:02.0
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0: AER: Multiple Correctable 
error message received from 0000:00:02.0
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0: PCIe Bus Error: 
severity=Correctable, type=Data Link Layer, (Transmitter ID)
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0:   device [8086:2f04] error 
status/mask=00001040/00002000
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0:    [ 6] BadTLP                
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0:    [12] Timeout               
Oct 10 20:46:37 xev kernel: pcieport 0000:00:02.0: AER:   Error of this Agent 
is reported first
Oct 10 20:46:37 xev kernel: amdgpu 0000:02:00.0: PCIe Bus Error: 
severity=Correctable, type=Data Link Layer, (Transmitter ID)
Oct 10 20:46:37 xev kernel: amdgpu 0000:02:00.0:   device [1002:6987] error 
status/mask=00001100/00002000
Oct 10 20:46:37 xev kernel: amdgpu 0000:02:00.0:    [ 8] Rollover              
Oct 10 20:46:37 xev kernel: amdgpu 0000:02:00.0:    [12] Timeout               
Oct 10 20:46:37 xev kernel: snd_hda_intel 0000:02:00.1: PCIe Bus Error: 
severity=Correctable, type=Data Link Layer, (Transmitter ID)
Oct 10 20:46:37 xev kernel: snd_hda_intel 0000:02:00.1:   device [1002:aae0] 
error status/mask=00001100/00002000
Oct 10 20:46:37 xev kernel: snd_hda_intel 0000:02:00.1:    [ 8] Rollover              
Oct 10 20:46:37 xev kernel: snd_hda_intel 0000:02:00.1:    [12] Timeout        

On that system I took the CPU out and reinstalled it with new heatsink paste on the theory that it might not have made good contact with some of the pins. The system also has one DIMM slot not working which can be a symptom of poor seating of the CPU. Doing that made no difference to the DIMM slot (I had bought the system for $50 in “unknown condition”) but the video has worked correctly since. It has been suggested to me that reseating the CPU didn’t directly affect the issue and that just taking the system apart could have addressed an issue of the GPU not making good contact in the PCIe slot.

It has been suggested that I could try “contact cleaner” which can be obtained from automotive supply stores among other places. I’m hesitant to put that in a PCIe slot but putting it on the connector of the card and then polishing it off seems like something to consider. Another suggestion was to use isopropyl alcohol to wash the contacts. I guess washing a PCIe slot out with isopropyl alcohol and leaving it for hours to dry is an option as a last resort.

For the moment it seems to be fine, but I am not certain that the problem is gone forever. My main aim is to have these systems keep working until after the release of DDR6 workstations, which is when I expect DDR5 workstations to become affordable on all the second hand sites.
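
One way to watch for the errors coming back, without trawling the kernel log, is to sample the per-device AER counters that the kernel exposes in sysfs. A minimal sketch, assuming a kernel built with PCIe AER support and using the device addresses from the log above:

# Correctable error counters, accumulated since boot, for the GPU and
# for its upstream port. If these keep growing, the link is still taking
# errors even though the driver recovers from them.
cat /sys/bus/pci/devices/0000:02:00.0/aer_dev_correctable
cat /sys/bus/pci/devices/0000:00:02.0/aer_dev_correctable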

02 November, 2025 07:51AM by etbe

October 28, 2025

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

KDE PowerDevil Systemd Inhibit

With KDE 6.5.0, PowerDevil has started forcing its own set of suspend/hibernate inhibitors onto logind. And to my knowledge, there's no way to disable them.

KDE PowerDevil Settings Window

As a user, I'd prefer to set the lid action to Do nothing and really expect KDE/PowerDevil to do nothing in that regard. But with KDE 6.5.0, PowerDevil forces those inhibitors regardless.

❯ systemd-inhibit --list
WHO UID USER PID COMM WHAT WHY >
ModemManager 0 root 3541 ModemManager sleep ModemManager needs to reset devices >
NetworkManager 0 root 3453 NetworkManager sleep NetworkManager needs to turn off networks >
UPower 0 root 4342 upowerd sleep Pause device polling >
PowerDevil 1000 rrs 82735 org_kde_powerde handle-power-key:handle-suspend-key:handle-hibernate-key:handle-lid-switch KDE handles power events >
Screen Locker 1000 rrs 4844 kwin_wayland sleep Ensuring that the screen gets locked before going to sleep >

5 inhibitors listed.

This essentially prevents logind from acting on lid events, and instead forces the user to depend on nothing other than PowerDevil. This assumes the wishful thought that PowerDevil is Solid.

I’d love to continue using my suspend workflow via systemd’s suspend-then-hibernate target as it has been working reliably for years. And it also allows me to customize the behavior as I see fit.

Of course, I do have the option to trigger systemd suspend-then-hibernate manually, every time, before closing the lid. But computers and automation have spoilt us.

The quick workaround/fix is to delegate it to ACPI, on platforms that support it. Thankfully that's all of x86, to my knowledge.

So, in the ACPI actions directory I have a new script:

❯ cat /etc/acpi/actions/lm_lid.sh
#! /bin/sh

grep close /proc/acpi/button/lid/LID/state && systemctl suspend-then-hibernate
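
For that script to actually fire, acpid also needs an event rule pointing at it. A minimal sketch (the file name and the exact event pattern are assumptions; check the existing examples under /etc/acpi/events/ on your system):

❯ cat /etc/acpi/events/lid
# Hand all lid button events to the action script; the script itself
# checks that the lid is really closed before suspending.
event=button/lid.*
action=/etc/acpi/actions/lm_lid.sh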

And with that I can be back to reliably (and carelessly) suspend-then-hibernate my laptop.

28 October, 2025 12:00AM by Ritesh Raj Sarraf (rrs@researchut.com)

October 27, 2025

hackergotchi for Paul Tagliamonte

Paul Tagliamonte

It's NOT always DNS.

I’ve written down a new rule (no name, sorry) that I’ll be repeating to myself and those around me. “If you can replace ‘DNS’ with ‘key value store mapping a name to an ip’ and it still makes sense, it was not, in fact, DNS.” Feel free to repeat it along with me.

Sure, the “It’s always DNS” meme is funny the first few hundred times you see it – but what’s less funny is when critical thinking ends because a DNS query is involved. DNS failures are often the first observable problem simply because a DNS lookup is one of the first things that has to happen. DNS is fairly complicated, implementation-dependent, and at times frustrating to debug – but it is not the operational hazard it’s made out to be. Reflexively blaming it is at best a shallow take, and at worst actively holds teams back from understanding their true operational risks.

IP connectivity failures between a host and the rest of the network are not a reason to blame DNS. This would happen no matter how you distribute the updated name to IP mappings. Wiping out all the records during the course of operations due to an automation bug is not a reason to blame DNS. This, too, would happen no matter how you distribute the name to IP mappings. Something made the choice to delete all the mappings, and it did what you asked it to do.

There’s plenty of annoying DNS specific sharp edges to blame when things do go wrong (like 8.8.8.8 and 1.1.1.1 disagreeing about resolving a domain because of DNSSEC, or since we’re on the topic, a DNSSEC rollout bricking prod for hours) for us to be cracking jokes anytime a program makes a DNS request.

We can do better.

27 October, 2025 05:15PM

October 25, 2025

hackergotchi for Mike Gabriel

Mike Gabriel

Debian Lomiri Tablets - We are hiring!

We at Fre{i}e Software GmbH now have a confirmed budget for working on Debian based tablets, with the specific goal of using them for educational purposes (i.e. in schools).

Those Debian Edu tablets shall be powered by the Lomiri Operating Environment (the same operating environment that powers Ubuntu Touch).

That said, we are hiring developers (full time, part time) [*] [**]:

  • Lomiri developers (C/C++, Qt5 and Qt6, QML, CMake)
  • Debian maintainers

Global tasks will be:

  • Transition Lomiri from Qt5 to Qt6
  • Consolidate the Lomiri Shell on various reference devices (mainline Linux only)
  • Integrate Lomiri Shell with cloud services such as Nextcloud and OpenCloud
  • XDG Desktop Portal support for Lomiri, integrate better with non-Lomiri Wayland apps
  • Bring more Lomiri-specific (Ubuntu Touch) apps to Debian
  • ... (more to come) ...

The budget will cover work for the +/- next 1.5-2 yrs. Development achievements shall culminate in the release of Debian 14.

If you are interested in joining our team, please get in touch with me via known communication channels.

light+love,
Mike (aka sunweaver at debian.org)

[fsgmbh] https://freiesoftware.gmbh
[*] We can employ applicants who are located in Germany, Austria or Poland (for other regions within the EU, please ask).
[**] Alternatively, if you are self-employed, we are happy to onboard you as a freelancer.

25 October, 2025 08:58PM by sunweaver

hackergotchi for Jonathan Dowland

Jonathan Dowland

franken keyboard

Since it's spooky season, let me present to you the FrankenKeyboard!

The FrankenKeyboard

8bitdo retro keyboard

For some reason I can't fathom, I was persuaded into buying an 8bitdo retro mechanical keyboard. It was very reasonably priced, and has a few nice fun features: built-in bluetooth and 2.4GHz wireless (with the supplied dongle); a colour scheme inspired by the Nintendo Famicom; fun-to-use knobs for volume control; some basic macro support; and funky oversized mashable macro keys (which work really well as "Copy" and "Paste").

The 8bitdo keyboards come with switch-types I had not previously experienced: Kailh Box White v2. I'm used to Cherry MX Reds, but I loved the feel of the Box White v2s. The 8bitdo keyboards all have hot-swappable key switches.

It's relatively compact (comes without a numpad), but still larger than my TEX Shura, which (at home) is my daily driver. I also miss the trackpoint mouse on the Shura. Finally, the 8bitdo model I bought has American ANSI key layout, which I can tolerate but is not as nice as ISO. I later learned that they have a limited range of ISO-layout keyboards too, but not (yet) in the Famicom colour scheme I'd bought.

DIY Shura

My existing Shura's key switches are soldered on and can't be swapped out. But I really preferred the Kailh white switches.

I decided to buy a second Shura, this time as a "DIY kit" which accepts hot-swappable switches. I then moved the Kailh Box White v2 switches over from the 8bitdo keyboard.

keycaps

Part of justifying buying the DIY kit was the possibility that I could sell on my older Shura with the Cherry MX Red switches. My existing Shura's key caps are for the ISO-GB layout and have their legends printed onto them. After three years the legends have faded in a few places.

The DIY kit comes with a set of ABS "double-shot" key caps (where the key legends are plastic rather than printed). They look a lot nicer, but I don't look at my keys. I'm considering applying the new, pristine key caps to the old Shura board, to make it more attractive to buyers. One problem is I'm not sure the new set of caps includes the ISO-UK specific ones. It might be that potential buyers might prefer to have used caps with the correct legends rather than pristine ones which are mislabelled.

franken keyboard

Given I wasn't going to use the new key cap set, I borrowed most of the caps from the 8bitdo keyboard. I had to retain the G, H and B keys from my older Shura as they are specially molded to leave space for the trackpoint, and a couple of the modifier keys which weren't the right size. Hence the odd look! (It needs some tweaking. That left-ALT looks out of place. It may be that the 8bitdo caps are temporary. Left "cmd" is really Fn, and "Caps lock" is really "Super". The right-hand red dot is a second "Super".)

Since taking the photo I've removed the "stabilisers" under the right-shift and backspace keys, in order to squeeze a couple more keys in their place. The new keycap set includes a regular-sized "BS" key, as the JIS keyboard layout has a regular-sized backspace. (Everyone should have a BS key in my opinion.)

I plan to map my new keys to "Copy" and "Paste" actions following the advice in this article.

25 October, 2025 09:57AM

October 23, 2025

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Modern perfect hashing

Wojciech Muła posted about modern perfect hashing for strings and I wanted to make some comments about my own implementation (that sadly never got productionized because doubling the speed compared to gperf wasn't really that impactful in the end).

First, let's define the problem, just so we're all on the same page; the goal is to create code that maps a known, fixed set of strings to a predefined integer (per string), and rejects everything else. This is essentially the same as a hash table, except that since the set of strings is known ahead of time, we can do better than a normal hash table. (So no “but I heard SwissTables uses SIMD and thus cannot be beat”, please. :-) ) My use case is around a thousand strings or so, and we'll assume that a couple of minutes of build time is OK (shorter would be better, but we can probably cache somehow). If you've got millions of strings, and you don't know them compile-time (for instance because you want to use your hash table in the join phase of a database), see this survey; it's a different problem with largely different solutions.

Like Wojciech, I started splitting by length. This means that we can drop all bounds checking after this, memcmp will be optimized by the compiler to use SIMD if relevant, and so on.

But after that, he recommends using PEXT (bit extraction, from BMI2), which has two problems: First, the resulting table can get quite big if your input set isn't well-behaved. (You can do better than the greedy algorithm he suggests, but not infinitely so, and finding the optimal mask quickly is sort of annoying if you don't want to embed a SAT solver or something.) Second, I needed the code to work on Arm, where you simply don't have this instruction or anything like it available. (Also, not all x86 has it, and on older Zen, it's slow.)
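
For reference, the usual bit-by-bit software fallback for PEXT looks like the sketch below (my own illustration, not code from the parser; fine for offline table generation, far too slow per lookup in a hot parser):

  #include <cstdint>

  // Extract the bits of x selected by mask and pack them contiguously
  // into the low bits of the result (software PEXT).
  uint64_t pext_soft(uint64_t x, uint64_t mask) {
    uint64_t result = 0;
    for (uint64_t bb = 1; mask != 0; bb += bb) {
      if (x & mask & -mask) {  // is the lowest remaining mask bit set in x?
        result |= bb;
      }
      mask &= mask - 1;  // clear the lowest set bit of mask
    }
    return result;
  }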

So, we need some other way, short of software emulation of PEXT (which exists, but we'd like to do better), to convert a sparse set of bits into a table without any collisions. It turns out the computer chess community has needed to grapple with this for a long time (they want to convert from “I have a [piece] on [square] and there are pieces on relevant squares [occupancy], give me an index that points to an array of squares I can move to”), and their solution is to use… well, magic. It turns out that if you do something like ((value & mask) * magic), it is very likely that the upper bits will be collision-free between your different values if you try enough different numbers for magic. We can use this too; for instance, here is code for all length-4 CSS keywords:

   static const uint8_t table[] = {
        6,   0,   0,   3,   2,   5,   9,   0,   0,   1,   0,   8,   7,   4,   0,   0,
   };
   static const uint8_t strings[] = {
       1,   0, 'z', 'o', 'o', 'm',
       2,   0, 'c', 'l', 'i', 'p',
       3,   0, 'f', 'i', 'l', 'l',
       4,   0, 'l', 'e', 'f', 't',
       5,   0, 'p', 'a', 'g', 'e',
       6,   0, 's', 'i', 'z', 'e',
       7,   0, 'f', 'l', 'e', 'x',
       8,   0, 'f', 'o', 'n', 't',
       9,   0, 'g', 'r', 'i', 'd',
      10,   0, 'm', 'a', 's', 'k',
   };

   uint16_t block;
   memcpy(&block, str + 0, sizeof(block));
   uint32_t pos = uint32_t(block * 0x28400000U) >> 28;
   const uint8_t *candidate = strings + 6 * table[pos];
   if (memcmp(candidate + 2, str, 4) == 0) {
     return candidate[0] + (candidate[1] << 8);
   }
   return 0;

There's a bit to unpack here; we read the first 16 bits from our value with memcpy (big-endian users beware!), multiply it with the magic value 0x28400000U found by trial and error, shift the top bits down, and now all of our ten candidate values (“zoom”, “clip”, etc.) have different top four bits. We use that to index into a small table, check that we got the right one instead of a random collision (e.g. “abcd”, 0x6261, would get a value of 12, and table[12] is 7, so we need to disambiguate that from “font”, which is what we are actually looking for in that spot), and then return the 16-bit identifier related to the match (or zero, if we didn't find it).

We don't need to use the first 16 bits; we could have used any other consecutive 16 bits, or any 32 bits, or any 64 bits, or possibly any of those masked off, or even XOR of two different 32-bit sets if need be. My code prefers smaller types because a) they tend to give smaller code size (easier to load into registers, or can even be used as immediates), and b) you can bruteforce them instead of doing random searches (which, not the least, has the advantage that you can give up much quicker).

You also don't really need the intermediate table; if the fit is particularly good, you can just index directly into the final result without wasting any space. Here's the case for length-24 CSS keywords, where we happened to have exactly 16 candidates and we found a magic giving a perfect (4-bit) value, making it a no-brainer:

  static const uint8_t strings[] = {
     95,   0, 'b', 'o', 'r', 'd', 'e', 'r', '-', 'b', 'l', 'o', 'c', 'k', '-', 's', 't', 'a', 'r', 't', '-', 'w', 'i', 'd', 't', 'h',
     40,   0, '-', 'w', 'e', 'b', 'k', 'i', 't', '-', 't', 'e', 'x', 't', '-', 'o', 'r', 'i', 'e', 'n', 't', 'a', 't', 'i', 'o', 'n',
    115,   1, 's', 'c', 'r', 'o', 'l', 'l', '-', 'p', 'a', 'd', 'd', 'i', 'n', 'g', '-', 'b', 'l', 'o', 'c', 'k', '-', 'e', 'n', 'd',
    198,   2, '-', 'w', 'e', 'b', 'k', 'i', 't', '-', 't', 'r', 'a', 'n', 's', 'f', 'o', 'r', 'm', '-', 'o', 'r', 'i', 'g', 'i', 'n',
    225,   0, '-', 'i', 'n', 't', 'e', 'r', 'n', 'a', 'l', '-', 'o', 'v', 'e', 'r', 'f', 'l', 'o', 'w', '-', 'b', 'l', 'o', 'c', 'k',
    101,   2, '-', 'w', 'e', 'b', 'k', 'i', 't', '-', 'b', 'o', 'r', 'd', 'e', 'r', '-', 'e', 'n', 'd', '-', 's', 't', 'y', 'l', 'e',
     93,   0, 'b', 'o', 'r', 'd', 'e', 'r', '-', 'b', 'l', 'o', 'c', 'k', '-', 's', 't', 'a', 'r', 't', '-', 'c', 'o', 'l', 'o', 'r',
    102,   2, '-', 'w', 'e', 'b', 'k', 'i', 't', '-', 'b', 'o', 'r', 'd', 'e', 'r', '-', 'e', 'n', 'd', '-', 'w', 'i', 'd', 't', 'h',
    169,   1, 't', 'e', 'x', 't', '-', 'd', 'e', 'c', 'o', 'r', 'a', 't', 'i', 'o', 'n', '-', 's', 'k', 'i', 'p', '-', 'i', 'n', 'k',
    156,   0, 'c', 'o', 'n', 't', 'a', 'i', 'n', '-', 'i', 'n', 't', 'r', 'i', 'n', 's', 'i', 'c', '-', 'h', 'e', 'i', 'g', 'h', 't',
    201,   2, '-', 'w', 'e', 'b', 'k', 'i', 't', '-', 't', 'r', 'a', 'n', 's', 'i', 't', 'i', 'o', 'n', '-', 'd', 'e', 'l', 'a', 'y',
    109,   1, 's', 'c', 'r', 'o', 'l', 'l', '-', 'm', 'a', 'r', 'g', 'i', 'n', '-', 'i', 'n', 'l', 'i', 'n', 'e', '-', 'e', 'n', 'd',
    240,   0, '-', 'i', 'n', 't', 'e', 'r', 'n', 'a', 'l', '-', 'v', 'i', 's', 'i', 't', 'e', 'd', '-', 's', 't', 'r', 'o', 'k', 'e',
    100,   2, '-', 'w', 'e', 'b', 'k', 'i', 't', '-', 'b', 'o', 'r', 'd', 'e', 'r', '-', 'e', 'n', 'd', '-', 'c', 'o', 'l', 'o', 'r',
     94,   0, 'b', 'o', 'r', 'd', 'e', 'r', '-', 'b', 'l', 'o', 'c', 'k', '-', 's', 't', 'a', 'r', 't', '-', 's', 't', 'y', 'l', 'e',
    196,   2, '-', 'w', 'e', 'b', 'k', 'i', 't', '-', 't', 'e', 'x', 't', '-', 's', 'i', 'z', 'e', '-', 'a', 'd', 'j', 'u', 's', 't',
  };

  uint32_t block;
  memcpy(&block, str + 16, sizeof(block));
  uint32_t pos = uint32_t(block * 0xe330a008U) >> 28;
  const uint8_t *candidate = strings + 26 * pos;
  if (memcmp(candidate + 2, str, 24) == 0) {
    return candidate[0] + (candidate[1] << 8);
  }
  return 0;

You can see that we used a 32-bit value here (bytes 16 through 19 of the input), and a corresponding 32-bit magic (though still not with an AND mask). So we got fairly lucky, but sometimes you do that. Of course, we need to validate the entire 24-byte value even though we only discriminated on four of the bytes! (Unless you know for sure that you never have any out-of-distribution inputs, that is. There are use cases where this is true.)

(If you wonder what the 95, 0 or similar is above: that's just “the answer the user wanted for that input”. It corresponds to a 16-bit enum in the parser.)

If there are only a few values, we don't need any of this; just like Wojciech, we do with a simple compare. Here's the generated code for all length-37 CSS keywords, plain and simple:

  if (memcmp(str, "-internal-inactive-list-box-selection", 37) == 0) {
    return 171;
  }
  return 0;

(Again 171 is “the desired output for that input”, not a value the code generator decides in any way.)

So how do we find these magic values? There's really only one way: Try lots of different ones and see if they work. But there's a trick to accelerate “see if they work”, which I also borrowed from computer chess: The killer heuristic.

See, to test whether a magic is good, you generally try to hash all the different values and see if any two go into the same bucket. (If they do, it's not a perfect hash, and the entire point of the exercise is gone.) But it turns out that most of the time, it's the same two values that collide. So every couple hundred candidates, we check which two values disproved the magic, and put those in a slot. Whenever we check magics, we can now try those first, and more likely than not, discard the candidate right away and move on to the next one (whether it is by exhaustive search or randomness). It's actually a significant speedup.
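
To make that concrete, here is a rough sketch of a 16-bit magic check with a killer slot, in the spirit of the generated code above (my own illustration with assumed names and a fixed 4-bit table, not the actual generator):

  #include <cstdint>
  #include <vector>

  // Bucket a 16-bit key into one of 16 slots, as the generated code does.
  static uint32_t bucket(uint16_t key, uint32_t magic) {
    return uint32_t(key * magic) >> 28;
  }

  // Check whether `magic` maps all keys to distinct buckets. `killers`
  // remembers a recently colliding pair, so most bad magics are rejected
  // after two multiplications instead of a full scan.
  static bool is_perfect(uint32_t magic, const std::vector<uint16_t> &keys,
                         uint16_t killers[2]) {
    if (killers[0] != killers[1] &&
        bucket(killers[0], magic) == bucket(killers[1], magic)) {
      return false;  // the killer pair disproved this candidate immediately
    }
    uint32_t seen = 0;  // bitmask over the 16 buckets
    for (uint16_t key : keys) {
      uint32_t b = bucket(key, magic);
      if (seen & (1u << b)) {
        // Collision: find the earlier key in this bucket and record the
        // pair as the new killers. (The real search only refreshes the
        // slot every couple hundred candidates; doing it on every failure
        // keeps the sketch short.)
        for (uint16_t other : keys) {
          if (other == key) break;
          if (bucket(other, magic) == b) {
            killers[0] = key;
            killers[1] = other;
            break;
          }
        }
        return false;
      }
      seen |= 1u << b;
    }
    return true;
  }

  // Exhaustive driver: 16-bit reads are small enough that all 32-bit
  // magics can be scanned in full.
  static uint32_t find_magic(const std::vector<uint16_t> &keys) {
    uint16_t killers[2] = {0, 0};
    for (uint64_t m = 1; m <= 0xffffffffULL; ++m) {
      if (is_perfect(uint32_t(m), keys, killers)) return uint32_t(m);
    }
    return 0;  // no perfect magic; split the group and try again
  }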

But occasionally, we simply cannot find a magic for a given group; either there is none, or we didn't have enough time to scan through enough of the 64-bit space. At this point, Wojciech suggests we switch on one of the characters (heuristically) to get smaller subgroups and try again. I didn't actually find this to perform all that well; indirect branch predictors are better than 20 years ago, but the pattern is typically not that predictable. What I tried instead was to have more of a yes/no on some character (i.e., a non-indirect branch), which makes for a coarser split.

It's not at all obvious where the best split would be. You'd intuitively think that 50/50 would be a good idea, but if you have e.g. 40 elements, you'd much rather split them 32/8… if you can find perfect hashes for both subgroups (5-bit and 3-bit, respectively). If not, a 20–20 split is most likely better, since you very easily can find magics that put 20 elements into 32 buckets without collisions. I ended up basically trying all the different splits and scoring them, but this makes the searcher rather slow, and it means you basically must have some sort of cache if you want to run it as part of your build system. This is the part I'm by far the least happy about; gperf isn't great by modern standards, but it never feels slow to run.

The end result for me was: Runtime about twice as fast as gperf, compiled code about half as big. That's with everything hard-coded; if you're pushed for space (or are icache-bound), you could make more generic code at the expense of some speed.

So, if anyone wants to make a more modern gperf, I guess this space is up for grabs? It's not exactly technology that will make your stock go to AI levels, though.

23 October, 2025 08:23PM

October 21, 2025

hackergotchi for Gunnar Wolf

Gunnar Wolf

LLM Hallucinations in Practical Code Generation — Phenomena, Mechanism, and Mitigation

This post is a review for Computing Reviews for LLM Hallucinations in Practical Code Generation — Phenomena, Mechanism, and Mitigation, an article published in Proceedings of the ACM on Software Engineering, Volume 2, Issue ISSTA

How good can large language models (LLMs) be at generating code? This may not seem like a very novel question, as several benchmarks (for example, HumanEval and MBPP, published in 2021) existed before LLMs burst into public view and started the current artificial intelligence (AI) “inflation.” However, as the paper’s authors point out, code generation is very seldom done as an isolated function, but instead must be deployed in a coherent fashion together with the rest of the project or repository it is meant to be integrated into. Today, several benchmarks (for example, CoderEval or EvoCodeBench) measure the functional correctness of LLM-generated code via test case pass rates.

This paper brings a new proposal to the table: comparing LLM-generated repository-level evaluated code by examining the hallucinations generated. The authors begin by running the Python code generation tasks proposed in the CoderEval benchmark against six code-generating LLMs. Next, they analyze the results and build a taxonomy to describe code-based LLM hallucinations, with three types of conflicts (task requirement, factual knowledge, and project context) as first-level categories and eight subcategories within them. The authors then compare the results of each of the LLMs per the main hallucination category. Finally, they try to find the root cause for the hallucinations.

The paper is structured very clearly, not only presenting the three research questions (RQ) but also referring to them as needed to explain why and how each partial result is interpreted. RQ1 (establishing a hallucination taxonomy) is the most thoroughly explored. While RQ2 (LLM comparison) is clear, it just presents straightforward results without much analysis. RQ3 (root cause discussion) is undoubtedly interesting, but I feel it to be much more speculative and not directly related to the analysis performed.

After tackling their research questions, Zhang et al. propose a possible mitigation to counter the effect of hallucinations: enhance the LLM with retrieval-augmented generation (RAG) so it better understands task requirements, factual knowledge, and project context. The presented results show that all of the models are clearly (though modestly) improved by the proposed RAG-based mitigation.

The paper is clearly written and easy to read. It should provide its target audience with interesting insights and discussions. I would have liked more details on their RAG implementation, but I suppose that’s for a follow-up work.

21 October, 2025 10:08PM