Debian is a trademark of Software in the Public Interest, Inc. This site is operated independently in the spirit of point three of the Debian Social Contract, which tells us: "We will not hide problems."

January 03, 2026

Louis-Philippe Véronneau

2025 — A Musical Retrospective

2026 already! The winter weather here has really been beautiful and I always enjoy this time of year. Writing this yearly musical retrospective has now become a beloved tradition of mine [1] and I enjoy retracing the year's various events through albums I listened to and concerts I went to.

Albums

In 2025, I added 141 new albums to my collection, around 60% more than last year's haul. I think this might have been too much? I feel like I didn't have time to properly enjoy all of them and as such, I decided to slow down my acquisition spree sometime in early December, around the time I normally do the complete opposite.

This year again, I bought the vast majority of my music on Bandcamp. Most of the other albums I bought as CDs and ripped them.

Concerts

In 2025, I went to the following 25 (!!) concerts:

  • January 17th: Uzu, Young Blades, She came to quit, Fever Visions
  • February 1st: Over the Hill, Jail, Mortier, Ain't Right
  • February 7th: Béton Armé, Mulchulation II, Ooz
  • February 15th: The Prowlers, Ultra Razzia, Sistema de Muerte, Trauma Bond
  • February 28th: Francbâtards
  • March 28th: Conflit Majeur, to Even Exist, Crachat
  • April 12th: Jetsam, Mortier, NIIVI, Canette
  • April 26th-27th (Montreal Oi! Fest 2025): The Buzzers, Bad Terms, Sons of Pride, Liberty and Justice, Flafoot 56, The Beltones, Mortier, Street Code, The Stress, Alternate Action
  • May 1st: Bauxite, Atomic threat, the 351's
  • May 30th: Uzu, Tenaz, Extraña Humana, Sistema de muerte
  • June 7th: Ordures Ioniques, Tulaviok, Fucking Raymonds, Voyou
  • June 18th: Tiken Jah Fakoly
  • June 21st: Saïan Supa Celebration
  • June 26th: Taxi Girls, Death Proof, Laura Krieg
  • July 4th: Frente Cumbiero
  • July 12th: Montreal's Big Fiesta DJ Set
  • August 16th: Guerilla Poubelle
  • September 11th: No Suicide Act, Mortier
  • September 20th: Hors Contrôle, Union Thugs, Barricade Mentale
  • October 20th: Ezra Furman, The Golden Dregs
  • October 24th: Overbass, Hommage à Bérurier Noir, Self Control, Vermin Kaos
  • November 6th: Béton Armé, Faze, Slash Need, Chain Block
  • November 28th (Blood Moon Ritual 2025): Bhatt, Channeler, Pyrocene Death Cult, Masse d'Armes
  • December 13th (Stomp Records' 30th Anniversary Bash): The Planet Smashers, The Flatliners, Wine Lips, The Anti-Queens, Crash ton rock

Although I haven't touched metalfinder's code in a good while, my instance still works very well and I get the occasional match when a big-name artist in my collection comes to town. Most of the venues that advertise on Bandsintown are tied to Ticketmaster though, which means most underground artists (i.e. most of the music I listen to) end up playing elsewhere.

As such, shout out again to the Gancio project and to the folks running the Montreal instance. It continues to be a smash hit and most of the interesting concerts end up being advertised there.

See you all in 2026!


  1. see the 2022, 2023 and 2024 entries 

03 January, 2026 12:32AM by Louis-Philippe Véronneau

January 02, 2026

Joachim Breitner

Seemingly impossible programs in Lean

In 2007, Martin Escardo wrote an often-read blog post about “Seemingly impossible functional programs”. One such seemingly impossible function is find, which takes a predicate on infinite sequences of bits, and returns an infinite sequence for which that predicate holds (unless the predicate is just always false, in which case it returns some arbitrary sequence).

Inspired by conversations with and experiments by Massin Guerdi at the dinner of LeaningIn 2025 in Berlin (yes, this blog post has been in my pipeline for far too long), I wanted to play around with these concepts in Lean.

Let’s represent infinite sequences of bits as functions from Nat to Bit, and give them a nice name, and some basic functionality, including a binary operator for consing an element to the front:

import Mathlib.Data.Nat.Find

abbrev Bit := Bool

def Cantor : Type := Nat → Bit

def Cantor.head (a : Cantor) : Bit := a 0

def Cantor.tail (a : Cantor) : Cantor := fun i => a (i + 1)

@[simp, grind] def Cantor.cons (x : Bit) (a : Cantor) : Cantor
  | 0 => x
  | i+1 => a i

infix:60 " # " => Cantor.cons

With this in place, we can write Escardo’s function in Lean. His blog post discusses a few variants; I’ll focus on just one of them:

mutual
  partial def forsome (p : Cantor → Bool) : Bool :=
    p (find p)

  partial def find (p : Cantor → Bool) : Cantor :=
    have b := forsome (fun a => p (true # a))
    (b # find (fun a => p (b # a)))
end

We define find together with forsome, which checks if the predicate p holds for any sequence. Using that, find sets the first element of the result to true if there exists a sequence starting with true, and to false otherwise, and then tries to find the rest of the sequence.

It is a bit of a brain twister that this code works, but it does:

def fifth_false : Cantor → Bool := fun a => not (a 5)

/-- info: [true, true, true, true, true, false, true, true, true, true] -/
#guard_msgs in
#eval List.ofFn (fun (i : Fin 10) => find fifth_false i)
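
As a second sanity check (this example is my own, not from Escardo's post), here is a predicate that constrains three positions at once; find again produces a witness, picking true greedily wherever the predicate allows it:

-- My own example: bits 1 and 3 must be true, bit 2 must be false
def three_bits : Cantor → Bool := fun a => a 1 && !(a 2) && a 3

/-- info: [true, true, false, true, true, true, true, true, true, true] -/
#guard_msgs in
#eval List.ofFn (fun (i : Fin 10) => find three_bits i)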

Of course, in Lean we don’t just want to define these functions, but we want to prove that they do what we expect them to do.

Above we defined them as partial functions, even though we hope that they are not actually partial: The partial keyword means that we don’t have to do a termination proof, but also that we cannot prove anything about these functions.

So can we convince Lean that these functions are total after all? We can, but it’s a bit of a puzzle, and we have to adjust the definitions.

First of all, these “seemingly impossible functions” are only possible because we assume that the predicate p that we pass to them is computable and total. This is where the whole magic comes from, and I recommend reading Escardo’s blog posts and papers for more on this fascinating topic. In particular, you will learn that a predicate on Cantor that is computable and total necessarily only looks at some initial fragment of the sequence. The length of that prefix is called the “modulus”. So if we hope to prove termination of find and forsome, we have to restrict their argument p to only such computable predicates.

To that end I introduce HasModulus and the subtype of predicates on Cantor that have such a modulus:

-- Extensional (!) modulus of uniform continuity
def HasModulus (p : Cantor → α) := ∃ n, ∀ a b : Cantor, (∀ i < n, a i = b i) → p a = p b

@[ext] structure CantorPred where
  pred : Cantor → Bool
  hasModulus : HasModulus pred

The modulus of such a predicate is now the least prefix length that determines the predicate. In particular, if the modulus is zero, the predicate is constant:

namespace CantorPred

variable (p : CantorPred)

noncomputable def modulus : Nat :=
  open Classical in Nat.find p.hasModulus

theorem eq_of_modulus : ∀a b : Cantor, (∀ i < p.modulus, a i = b i) → p a = p b := by
  open Classical in
  unfold modulus
  exact Nat.find_spec p.hasModulus

theorem eq_of_modulus_eq_0 (hm : p.modulus = 0) : ∀ a b, p a = p b := by
  intro a b
  apply p.eq_of_modulus
  simp [hm]

Because we want to work with CantorPred and not Cantor → Bool, I have to define some operations on that new type; in particular the “cons element before predicate” operation that we saw above in find:

def comp_cons (b : Bit) : CantorPred where
  pred := fun a => p (b # a)
  hasModulus := by
    obtain ⟨n, h_n⟩ := p.hasModulus
    cases n with
    | zero => exists 0; grind
    | succ m =>
      exists m
      intro a b heq
      simp
      apply h_n
      intro i hi
      cases i
      · rfl
      · grind

@[simp, grind =] theorem comp_cons_pred (x : Bit) (a : Cantor) :
  (p.comp_cons x) a = p (x # a) := rfl

For this operation we know that the modulus decreases (if it wasn’t already zero):

theorem comp_cons_modulus (x : Bit) :
    (p.comp_cons x).modulus ≤ p.modulus - 1 := by
  open Classical in
  apply Nat.find_le
  intro a b hab
  apply p.eq_of_modulus
  cases hh : p.modulus
  · simp
  · intro i hi
    cases i
    · grind
    · grind
grind_pattern comp_cons_modulus => (p.comp_cons x).modulus

We can rewrite the find function above to use these operations:

mutual
  partial def forsome (p : CantorPred) : Bool := p (find p)

  partial def find (p : CantorPred) : Cantor := fun i =>
    have b := forsome (p.comp_cons true)
    (b # find (p.comp_cons b)) i
end

I have also eta-expanded the Cantor function returned by find; there is now a fun i => … i around the body. We’ll shortly see why that is needed.

Now we have everything in place to attempt a termination proof. Before we do that proof, we could step back and try to come up with an informal termination argument.

  • The recursive call from forsome to find doesn’t decrease any argument at all. This is ok if all calls from find to forsome are decreasing.

  • The recursive call from find to find decreases the index i as the recursive call is behind the Cantor.cons operation that shifts the index. Good.

  • The recursive call from find to forsome decreases the modulus of the argument p, if it wasn’t already zero.

    But if it was zero, it does not decrease it! But if it is zero, then the call from forsome to find doesn’t actually need to call find, because then p doesn’t look at its argument.

We can express all this reasoning as a termination measure in the form of a lexicographic triple. The 0 and 1 in the middle component mean that for zero modulus, we can call forsome from find “for free”.

mutual
  def forsome (p : CantorPred) : Bool := p (find p)
  termination_by (p.modulus, if p.modulus = 0 then 0 else 1, 0)
  decreasing_by grind

  def find (p : CantorPred) : Cantor := fun i =>
    have b := forsome (p.comp_cons true)
    (b # find (p.comp_cons b)) i
  termination_by i => (p.modulus, if p.modulus = 0 then 1 else 0, i)
  decreasing_by all_goals grind
end

The termination proof doesn’t go through just yet: Lean is not able to see that (_ # a) i will only call a with i - 1, and it does not see that p (find p) only uses find p if the modulus of p is non-zero. We can use the wf_preprocess feature to tell it about that:

The following theorem replaces a call to p f, where p is a function parameter, with the slightly more complex but provably equivalent expression on the right, where the call to f is now in the else branch of an if-then-else and thus has ¬p.modulus = 0 in scope:

@[wf_preprocess]
theorem coe_wf (p : CantorPred) :
    (wfParam p) f = p (if _ : p.modulus = 0 then fun _ => false else f) := by
  split
  next h => apply p.eq_of_modulus_eq_0 h
  next => rfl

And similarly we replace (_ # a) i with a variant that extends the context with information on how a is called:

def cantor_cons' (x : Bit) (i : Nat) (a : ∀ j, j + 1 = i → Bit) : Bit :=
  match i with
  | 0 => x
  | j + 1 => a j (by grind)

@[wf_preprocess] theorem cantor_cons_congr (b : Bit) (a : Cantor) (i : Nat) :
  (b # a) i = cantor_cons' b i (fun j _ => a j) := by cases i <;> rfl

After these declarations, the above definition of forsome and find goes through!
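
For concreteness (my own addition, not from the post), here is how a predicate like the earlier fifth_false can be packaged as a CantorPred so that it can be passed to the total find and forsome. The modulus bound 6 and its little proof are my own guesses at one way to discharge HasModulus:

-- My own example: fifth_false as a CantorPred, with 6 as a modulus witness
def fifth_false' : CantorPred where
  pred := fun a => !(a 5)
  hasModulus := ⟨6, fun a b h => by simp [h 5 (by omega)]⟩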

It remains to now prove that they do what they should, by a simple induction on the modulus of p:

@[simp, grind =] theorem tail_cons_eq (a : Cantor) : (x # a).tail = a := by
  funext i; simp [Cantor.tail, Cantor.cons]

@[simp, grind =] theorem head_cons_tail_eq (a : Cantor) : a.head # a.tail = a := by
  funext i; cases i <;> rfl

theorem find_correct (p : CantorPred) (h_exists : ∃ a, p a) : p (find p) := by
  by_cases h0 : p.modulus = 0
  · obtain ⟨a, h_a⟩ := h_exists
    rw [← h_a]
    apply p.eq_of_modulus_eq_0 h0
  · rw [find.eq_unfold, forsome.eq_unfold]
    dsimp -zeta
    extract_lets b
    change p (_ # _)
    by_cases htrue : ∃ a, p (true # a)
    next =>
      have := find_correct (p.comp_cons true) htrue
      grind
    next =>
      have : b = false := by grind
      clear_value b; subst b
      have hfalse : ∃ a, p (false # a) := by
        obtain ⟨a, h_a⟩ := h_exists
        cases h : a.head
        · exists Cantor.tail a
          grind
        · exfalso
          apply htrue
          exists Cantor.tail a
          grind
      clear h_exists
      exact find_correct (p.comp_cons false) hfalse
termination_by p.modulus
decreasing_by all_goals grind

theorem forsome_correct (p : CantorPred) :
    forsome p ↔ (∃ a, p a) where
  mp hfind := by unfold forsome at hfind; exists find p
  mpr hex := by unfold forsome; exact find_correct p hex

This is pretty nice! However, there is more to do. For example, Escardo has a “massively faster” variant of find that we can implement as a partial function in Lean:

def findBit (p : Bit → Bool) : Bit :=
  if p false then false else true

def branch (x : Bit) (l r : Cantor) : Cantor :=
  fun n =>
    if n = 0      then x
    else if 2 ∣ n then r ((n - 2) / 2)
                  else l ((n - 1) / 2)

mutual
  partial def forsome (p : Cantor -> Bool) : Bool :=
    p (find p)

  partial def find (p : Cantor -> Bool) : Cantor :=
    let x := findBit (fun x => forsome (fun l => forsome (fun r => p (branch x l r))))
    let l := find (fun l => forsome (fun r => p (branch x l r)))
    let r := find (fun r => p (branch x l r))
    branch x l r
end

But can we get this past Lean’s termination checker? In order to prove that the modulus of p is decreasing, we’d have to know that, for example, find (fun r => p (branch x l r)) is behaving nicely. Unfortunately, it is rather hard to do a termination proof for a function that relies on the behaviour of the function itself.

So I’ll leave this open as a future exercise.

I have dumped the code for this post at https://github.com/nomeata/lean-cantor.

02 January, 2026 02:30PM by Joachim Breitner (mail@joachim-breitner.de)

Steinar H. Gunderson

Rewriting Git merge history, part 1

I remember that when Git was new and hip (around 2005), one of the supposed advantages was that “merging is so great!”. Well, to be honest, the competition at the time (mostly CVS and Subversion) wasn't fantastic, so I guess it was a huge improvement, but it's still… problematic. And this is even more visible when trying to rewrite history.

The case in question was that I needed to move Stockfish's cluster (MPI) branch up-to-date with master, which nobody had done for a year and a half because there had been a number of sort-of tricky internal refactorings that caused merge conflicts. I fairly quickly realized that just doing “git merge master” would create a huge mess of unrelated conflicts that would be impossible to review and bisect, so I settled on a different strategy: Take one conflict at a time.

So I basically merged up as far as I could without any conflicts (essentially by bisecting), noted that as a merge commit, then merged one conflicting commit, noted that as another merge (with commit notes if the merge was nontrivial, e.g., if it required new code or a new approach), and then repeat. Notably, Git doesn't seem to have any kind of native support for this flow; I did it manually at first, and then only later realized that there were so many segments (20+) that I should write a script to get everything consistent. Notably, this approach means that a merge commit can have significant new code that was not in either parent. (Git does support this kind of flow, because a commit is just a list of zero or more parent commits and then the contents of the entire tree; git show does a diff on-the-fly, and object deduplication and compression makes this work without ballooning the size. But it is still surprising to those that don't do a lot of merges.)

That's where the nice parts ended, and the problems began. (Even ignoring that a conflict-free merge could break the compile, of course.) Because I realized that while I had merged everything, it wasn't actually done; the MPI support didn't even compile, for one, and once I had fixed that, I realized that I wanted to fix typos in commit messages, fix bugs pointed out to me by reviewers, and so on. In short, I wanted to rewrite history. And that's not where Git shines.

Everyone who works with a patch-based review flow (as opposed to having a throwaway branch per feature with lots of commits like “answer review comments #13” and then squash-merging it or similar) will know that git's basic answer to this is git rebase. rebase essentially sets up a script of the commits you've done, then executes that script (potentially at a different starting point, so you could get conflicts). Interactive rebase simply lets you edit that script in various ways, so that you can e.g. modify a commit message on the way, or take out a commit, or (more interestingly) make changes to a commit before continuing.

However, when merges are involved, regular interactive rebase just breaks down completely. It assumes that you don't really want merges; you just want a nice linear series of commits. And that's nice, except that in this case, I wanted the merges because the entire point was to upmerge. So then I needed to invoke git rebase --rebase-merges, which changes the script language into one that's subtly different and vastly more complicated (it basically sets up a list of ephemeral branches as “labels” to specify the trees that are merged into the various merge commits). And this is fine—until you want to edit that script.

In particular, let's take a fairly trivial change: Modifying a commit message. The merge command in the rebase script takes in a commit hash that's only used for the commit message and nothing else (the contents of the tree are ignored), and you can choose to either use a different hash or modify the message in an editor after-the-fact. And you can try to do this, but… then you get a merge conflict later in the rebase. What?

It turns out that git has a native machinery for remembering conflict resolutions. It basically remembers that you tried to merge commit A and B and ended up committing C (possibly after manual conflict resolution); so any merge of A and B will cause git to look that up and just use C. But that's not what really happened; since you modified the commit message of A (or even just its commit date), it changed its hash and became A', and now you're trying to merge A' and B, for which git has no conflict resolution remembered, and you're back to square one and have to do the resolution yourself. I had assumed that the merge remembered how to merge trees, but evidently it's on entire commits.

But wait, I hear you say; the solution for this is git-rerere! rerere exists precisely for this purpose; it remembers conflict resolutions you've done before and tries to reapply them. It only remembers merge conflicts you did when rerere was actually active, but there's a contrib script to “learn” from before that time, which works OK. So I tried to run the learn script and run the rebase… and it stopped with a merge conflict. You see, git rerere doesn't stop the conflicts, it just resolves them and then you have to continue the rebase yourself from the shell as usual. So I did that 20+ times (I can tell you, this gets tedious real quick)… and ended up with a different result. The tree simply wasn't the same as before the merge, even though I had only changed a commit message.

See, the problem is that rerere remembers conflicts, not merges. It has to, in order to reach its goal of being able to reapply conflict resolutions even if other parts of the file have changed. (Otherwise, it would be only marginally more useful than git's existing native support, which we discussed earlier.) But in this case, two or more conflicts in the rebase looked too similar to each other, yet needed different resolutions. So it picked the wrong resolution and ended up with a silent mismerge. And there's no way to guide it towards which one should apply when, so rerere was also out of the question.

This post is already long enough as it is; next time, we'll discuss the (horrible) workaround I used to actually (mostly) solve the problem.

02 January, 2026 09:50AM

Dima Kogan

Using libpython3 without linking it in; and old Python, g++ compatibility patches

I just released mrcal 2.5; much more about that in a future post. Here, I'd like to talk about some implementation details.

libpython3 and linking

mrcal is a C library and a Python library. Much of mrcal itself interfaces between the C and Python libraries. And it is common for external libraries to want to pass Python mrcal.cameramodel objects to their C code. The obvious way to do this is with a converter function in an O& argument to PyArg_ParseTupleAndKeywords(). I wrote this mrcal_cameramodel_converter() function, which opened a whole can of worms when thinking about the compiling and linking and distribution of this thing.

mrcal_cameramodel_converter() is meant to be called by code that implements Python-wrapping of C code. This function will be called by the PyArg_ParseTupleAndKeywords() Python library function, and it uses the Python C API itself. Since it uses the Python C API, it would normally link against libpython. However:

  • The natural place to distribute this is in libmrcal.so, but this library doesn't touch Python, and I'd rather not pull in all of libpython for this utility function, even in the 99% case when that function won't even be called
  • In some cases linking to libpython actually breaks things, so I never do that anymore anyway. This is fine: since this code will only ever be called by libpython itself, we're guaranteed that libpython will already be loaded, and we don't need to ask for it.

OK, let's not link to libpython then. But if we do that, we're going to have unresolved references to our libpython calls, and the loader will complain when loading libmrcal.so, even if we're not actually calling those functions. This has an obvious solution: the references to the libpython calls should be marked weak. That won't generate unresolved-reference errors, and everything will be great.

OK, how do we mark things weak? There are two usual methods:

  1. We mark the declaration (or definition?) of the relevant functions with __attribute__((weak))
  2. We weaken the symbols after the compile with objcopy --weaken.

Method 1 is more work: I don't want to keep track of what Python API calls I'm actually making. This is non-trivial, because some of the Py_...() invocations in my code are actually macros that call functions internally that I must weaken. Furthermore, all the functions are declared in Python.h, which I don't control. I can re-declare stuff with __attribute__((weak)), but then I have to match the prototypes. And I have to hope that re-declaring these will make __attribute__((weak)) actually work.

So clearly I want method 2. I implemented it:

python-cameramodel-converter.o: %.o:%.c
        $(c_build_rule); mv $@ _$@
        $(OBJCOPY) --wildcard --weaken-symbol='Py*' --weaken-symbol='_Py*' _$@ $@

Works great on my machine! But doesn't work on other people's machines. Because only the most recent objcopy tool actually works to weaken references. Apparently the older tools only weaken definitions, which isn't useful to me, and the tool only started handling references very recently.

Well that sucks. I guess I will need to mark the symbols with __attribute__((weak)) after all. I use the nm tool to find the symbols that should be weakened, and I apply the attribute with this macro:

#define WEAKEN(f) extern __typeof__(f) f __attribute__((weak));

The prototypes are handled by __typeof__. So are we done? With gcc, we are done. With clang we are not done. Apparently this macro does not weaken symbols generated by inline function calls when using clang; I have no idea if this is a bug. The Python internal machinery has some of these, so this doesn't weaken all the symbols. I give up on the people that both have a too-old objcopy and are using clang, and declare victory. So the logic ends up being:

  1. Compile
  2. objcopy --weaken
  3. nm to find the non-weak Python references
  4. If there aren't any, our objcopy call worked and we're done!
  5. Otherwise, compile again, but explicitly asking to weaken those symbols
  6. nm again to see if the compiler didn't do it
  7. If any non-weak references still remain, complain and give up.
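
To make the WEAKEN step concrete, here is a hedged sketch of roughly what it amounts to. This is not the actual mrcal source; the two CPython symbols and the converter are illustrative stand-ins, and in the real build the list of symbols to weaken comes from the nm output:

#include <Python.h>

/* Re-declare a libpython symbol as weak; __typeof__ copies the prototype so
   we don't have to restate it */
#define WEAKEN(f) extern __typeof__(f) f __attribute__((weak));

/* Illustrative list; the real one is generated from what nm reports */
WEAKEN(PyErr_Occurred)
WEAKEN(PyFloat_AsDouble)

/* A converter in the style of an O& argument handler. It may only ever be
   called when libpython is already loaded (we are being called by the
   interpreter), so the weak references are guaranteed to be resolved */
static int example_double_converter(PyObject* obj, void* out)
{
    double x = PyFloat_AsDouble(obj);
    if (x == -1.0 && PyErr_Occurred())
        return 0;               /* exception already set by libpython */
    *(double*)out = x;
    return 1;
}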

Whew. This logic appears here and here. There were even more things to deal with here: calling nm and objcopy needed special attention and build-system support in case we were cross-building. I took care of it in mrbuild.

This worked for a while. Until the converter code started to fail. Because ….

Supporting old Python

…. I was using PyTuple_GET_ITEM(). This is a macro to access PyTupleObject data. So the layout of PyTupleObject ended up encoded in libmrcal.so. But apparently this wasn't stable, and changed between Python3.13 and Python3.14. As described above, I'm not linking to libpython, so there's no NEEDED tag to make sure we pull in the right version. The solution was to call the PyTuple_GetItem() function instead. This is unsatisfying, and means that in theory other stuff here might stop working in some Python 3.future, but I'm ready to move on for now.
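
In code, the change amounts to something like this hedged sketch (not the actual mrcal function; the helper name is made up):

#include <Python.h>

/* Hypothetical helper illustrating the change described above */
static PyObject* first_element(PyObject* tuple)
{
    /* Before: the macro expands to a direct read of PyTupleObject's ob_item
       array, baking that struct layout into libmrcal.so:
         return PyTuple_GET_ITEM(tuple, 0);
       After: the function call goes through libpython at runtime, so the
       internal layout is free to change between Python versions: */
    return PyTuple_GetItem(tuple, 0);  /* borrowed reference */
}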

There were other annoying gymnastics that had to be performed to make this work with old-but-not-super old tooling.

The Python people deprecated PyModule_AddObject(), and added PyModule_Add() as a replacement. I want to support Pythons before and after this happened, so I needed some if statements. Today the old function still works, but eventually it will stop, and I would have needed to do this typing sooner or later.
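
The compatibility shim can look roughly like the following. This is a hedged sketch with my own version check and helper name, not necessarily what mrcal does:

#include <Python.h>

/* Add an object to a module on both old and new CPython. PyModule_Add()
   appeared in 3.13 and steals the reference to 'value' even on failure;
   PyModule_AddObject() steals it only on success, so on error we must
   DECREF ourselves */
static int module_add_compat(PyObject* module, const char* name, PyObject* value)
{
#if PY_VERSION_HEX >= 0x030d0000
    return PyModule_Add(module, name, value);
#else
    if (PyModule_AddObject(module, name, value) < 0)
    {
        Py_DECREF(value);
        return -1;
    }
    return 0;
#endif
}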

Supporting old C++ compilers

mrcal is a C project, but it is common for people to want to #include the headers from C++. I widely use C99 designated initializers (27 years old in C!), which causes issues with not-very-old C++ compilers. I worked around this initialization in one spot, and disabled a feature for a too-old compiler in another spot. Fortunately, semi-recent tooling supports my usages, so this is becoming a non-issue as time goes on.

02 January, 2026 05:52AM by Dima Kogan

Birger Schacht

Status update, December 2025

December 2025 started off with a nice event, namely a small gathering of Vienna based DDs. Some of us were at DebConf25 in Brest and we thought it might be nice to have a get-together of DDs in Vienna. A couple of months after DebConf25 I picked up the idea, let someone else ping the DDs, booked a table at a local cafe and in the end we were a group of 6 DDs. It was nice to put faces to names, names to nicknames and to hear what people are up to. We are definitely planning to repeat that!

December also ended with a meeting of nerds: the 39th Chaos Communication Congress in Hamburg. As usual, I did not really have that much time to watch many talks. I tend to bookmark a lot of them in the scheduling app in advance, but once I’m at the congress the social aspect is much more important and I try to only attend workshops or talks that are not recorded. Watching the recordings afterward is possible anyway (and I actually try to do that!).

There was also a Debian Developers meetup at day 3, combined with the usual time confusion regarding UTC and CET. We talked about having a Debian table at 40c3, so maybe the timezone won’t be that much of a problem next time.

Two talks I recommend are CSS Clicker Training: Making games in a “styling” language and To sign or not to sign: Practical vulnerabilities in GPG & friends.

Regarding package uploads, not that much happened this month; I only uploaded the new version (0.9.3) of labwc.

I created two new releases for carl. First a 0.5 release that adds Today and SpecifiedDate as properties. I forwarded an issue about dates not being parsed correctly to the icalendar issue tracker and this was fixed a couple of days later (thanks!). I then created a 0.5.1 release containing that fix. I also started planning to move the carl repository back to codeberg, because Github feels more and more like an AI Slop platform.

The work on debiverse also continued. I removed the tailwind CSS framework, and it was actually not that hard to reproduce all the needed CSS classes with custom CSS. I think that CSS frameworks make sense up to a point, but once you start implementing stuff that the framework does not provide, it is easier if everything comes out of one set of rules. There was also the article Vanilla CSS is all you need which goes in the same direction and which gave me some ideas how to organize the CSS directives.

I also refactored the filter generation for the listing filters and the HTML filter form is now generated from the FastAPI Query Parameter Model.

Screenshot of the filter form

For navigation I implemented a sidebar that is hidden on small screens but can be toggled using a burger menu.

Screenshot of the navigation bar

I also stumbled upon An uncomfortable but necessary discussion about the Debian bug tracker, which raises some valid points. I think debiverse could be a solution to the first point of “What could be a way forward?”, namely: “Create a new web service that parses the existing bug data and displays it in a “rich” format”.

But if there is ever another way than email to interact with bugs.debian.org, then this approach should not rely on passing on the commands via mail. If I click a button in a web interface to raise the severity, the severity should be raised right away - not 10 minutes later when the email is received. I think the individual parts (web, database, mail interface) should be decoupled and talk to each other via APIs.

02 January, 2026 05:28AM

January 01, 2026

Russ Allbery

2025 Book Reading in Review

In 2025, I finished and reviewed 32 books, not counting another five books I've finished but not yet reviewed and which will therefore roll over to 2026.

This was not a great reading year, although not my worst reading year since I started keeping track. I'm not entirely sure why, although part of the explanation was that I hit a bad stretch of books in spring of 2025 and got into a bit of a reading slump. Mostly, though, I shifted a lot of reading this year to short non-fiction (newsletters and doom-scrolling) and spent rather more time than I intended watching YouTube videos, and sadly each hour in the day can only be allocated one way.

This year felt a bit like a holding pattern. I have some hopes of being more proactive and intentional in 2026. I'm still working on finding a good balance between all of my hobbies and the enjoyment of mindless entertainment.

The best book I read this year was also the last book I reviewed (and yes, I snuck the review under the wire for that reason): Bethany Jacobs's This Brutal Moon, the conclusion of the Kindom Trilogy that started with These Burning Stars. I thought the first two books of the series were interesting but flawed, but the conclusion blew me away and improved the entire trilogy in retrospect. Like all books I rate 10 out of 10, I'm sure a large part of my reaction is idiosyncratic, but two friends of mine also loved the conclusion so it's not just me.

The stand-out non-fiction book of the year was Rory Stewart's Politics on the Edge. I have a lot of disagreements with Stewart's political positions (the more I listen to him, the more disagreements I find), but he is an excellent memoirist who skewers the banality, superficiality, and contempt for competence that has become so prevalent in centrist and right-wing politics. It's hard not to read this book and despair of electoralism and the current structures of governments, but it's bracing to know that even some people I disagree with believe in the value of expertise.

I also finished Suzanne Palmer's excellent Finder Chronicles series, reading The Scavenger Door and Ghostdrift. This series is some of the best science fiction I've read in a long time and I'm sad it is over (at least for now). Palmer has a new, unrelated book coming in 2026 (Ode to the Half-Broken), and I'm looking forward to reading that.

This year, I experimented with re-reading books I had already reviewed for the first time since I started writing reviews. After my reading slump, I felt like revisiting something I knew I liked, and therefore re-read C.J. Cherryh's Cyteen and Regenesis. Cyteen mostly held up, but Regenesis was worse than I had remembered. I experimented with a way to add on to my previous reviews, but I didn't like the results and the whole process of re-reading and re-reviewing annoyed me. I'm counting this as a failed experiment, which means I've still not solved the problem of how to revisit series that I read long enough ago that I want to re-read them before picking up the new book. (You may have noticed that I've not read the new Jacqueline Carey Kushiel novel, for example.)

You may have also noticed that I didn't start a new series re-read, or continue my semi-in-progress re-reads of Mercedes Lackey or David Eddings. I have tentative plans to kick off a new series re-read in 2026, but I'm not ready to commit to that yet.

As always, I have no firm numeric goals for the next year, but I hope to avoid another reading slump and drag my reading attention back from lower-quality and mostly-depressing material in 2026.

The full analysis includes some additional personal reading statistics, probably only of interest to me.

01 January, 2026 09:12PM

Swiss JuristGate

Stardust to Crans-Montana, Le Constellation: cover-up already under way

The author of these reports is Daniel Pocock, a member of Engineers Australia, Engineers Ireland and an elected representative of the Free Software Fellowship.

Mr Pocock is a citizen of Australia, Ireland and Switzerland.

Mr Pocock was granted Swiss citizenship in the Canton of Vaud, which is adjacent to the bilingual Canton of Valais where the tragedy occurred.

Valais can be thought of as the New Hampshire of Switzerland, a region where businessmen enjoy minimal regulation.

Valais is the third largest canton but it only has a population of 350,000 people. The population is relatively wealthy by European standards but on the other hand, they don't have the depth and breadth of skills that can be found in more populated cantons like Zurich and Geneva.

In Switzerland, each Canton operates with significant autonomy and the federal government has a rather insignificant role in comparison to other countries.

While standards in construction are agreed at a national level, it is the responsibility of each canton to enforce the standards in local premises.

Pocock's first blog after the fire found evidence the night club's owner had done the internal fit-out himself back in 2016:

(Translated to English) It was love at first sight! This Corsican businessman decided to open a business. Conveniently, Le Constellation, in the center of Crans, was for sale. He had to wait until June 2015 to sign the agreement. For six months, Jacques Moretti rolled up his sleeves and completely renovated the establishment. "I did almost everything myself. Look at these walls, there are 14 tons of dry-stone masonry, and the stones come from Saint-Léonard!" Since December 2015, Le Constellation has been showcasing Corsican products: cured meats, wines, beers, myrtle liqueur, and even chestnut-flavored whisky. "But mind you, I'm also keen to feature local Valais products. You have some excellent wines here; it's a pleasure to serve them to my customers." The Corsican admits he feels very much at home here. "You know, we're alike. We're both mountain people at heart. Stubborn, perhaps, but above all, very kind."

French TV channel BFMTV subsequently published statements from witnesses. The statements were repeated by Le Temps in Geneva:

The Hypothesis of a flame on a bottle

The testimony of two French women, gathered by BFMTV, is striking. The two witnesses, who were in the bar that night, recount the panic that seized the customers of Le Constellation and the stampede triggered by the fire.

They mention the use of "candles" placed in champagne bottles. One of them was reportedly held close to the ceiling. "In a few dozen seconds, the entire ceiling was on fire. Everything was made of wood," the two customers recount. They describe how a loud order had been given and a female member of the staff climbed onto a male colleague's shoulders. The fire then spread up to the ground floor of the establishment, they say. Panicked, the customers tried to escape through the exit door. "It was quite small compared to the number of people present. Someone broke a window so that people could get out," one of them adds.

The Canton of Valais shares borders with France and Italy. Shortly before the fire, the new French ambassador was presented to Switzerland in a ceremony in Valais, at the Opal Foundation, the very same location where the local authorities conducted their media briefing after the fire.

Marion Paradas, Clément Leclerc, Mathias Reynard, Franz Ruppen

 

On the morning after the fire, four local Swiss officials attended the media briefing. News reports emphasized their roles: the president of the canton, the minister for security, the attorney general and the chief of police. All except the last, the police chief, are political appointments.

By coincidence, three of these four people live in immediate proximity to Crans, where the fire occurred and Lens, where the business owner has another restaurant, Le Vieux Chalet 1978. People need to ask how many times they visited these businesses personally. How well did they know the proprietor and staff?

The attorney general's husband, Francois Pilloud, operates a local winery. In a small canton like Valais, these families have a common interest in promoting the canton as a destination for tourists. The night club owner told the media that he likes to serve wines from local producers in Valais.

During the course of the rescue operation, authorities made their own videos of the police and firemen at work. In the morning, they published a two minute montage of videos emphasizing all the equipment and people mobilized to respond to the crisis.

However, the video is uncomfortable to watch. It feels like a public relations exercise was under way even before the bodies started to be removed and counted. The video showcases all the expensive equipment possessed by canton Valais but the video doesn't tell us anything about their competence for building inspections.

In July 2024, a scaffolding collapsed outside a shopping center in Lausanne, canton Vaud. People died. There were international news reports about the tragedy. Searching online today, I couldn't find any evidence that anybody ever conducted a public inquiry or published a report.

In October 2022, a massive construction crane collapsed beside the University of Lausanne (UNIL). The crane collapsed onto the concrete foundations sending a shockwave through the region. Many people who felt the shockwave feared an accident in the EPFL nuclear reactor. The authorities eventually told us that only one person died. Once again, I can find no further reports about inquiries or public reports about the accident. The accident occurred in Chavannes-près-Renens, the same commune where I was naturalized as a Swiss citizen. I was in the region on the very day of the accident.

Remember the referendum on corporate accountability (Responsible business initiative)? Canton Valais was one of the Cantons that voted No to accountability.

Look at how three of the four senior officials managing the crisis are personally domiciled in the immediate neighbourhood of the nightclub. In a canton with such a small population, it is very likely that some or all of these officials have frequented the businesses owned by Jacques Moretti.

Sion, Crans-Montana, Valais

 

Mathias Reynard is the president of the canton Valais. His official profile tells us that he is domiciled in Savièse, which is adjacent to Lens and Crans. He is a member of the Parti socialiste.

Stéphane Ganzer is the canton's minister for security. His official profile tells us he is domiciled in Veyras which is half way up the hill going to the Crans-Montana resort. Ganzer has twenty years experience as a fireman and this is more valuable than his job as a politician in the circumstances. He is a member of the PLR.Les Libéraux-Radicaux political party.

Frédéric Gisler is the police chief for canton Valais. A public announcement about his appointment gives us details about his career history. The report notes that he has worked as a police inspector, as a prosecutor and as a greffier. In a previous report about corruption, we demonstrated how a greffier is able to exercise the powers of a judge. A prosecutor exercising judge-like powers is a huge conflict of interest. The previous report made the remarkable revelation that Mathieu Parreaux, founder of the illegal legal insurance scheme, had worked as the greffier, similar to a judge, in the tribunal of Monthey. Coincidentally, Monthey is also in the Canton of Valais and Gisler is from Vernayaz, which is much closer to Monthey than the other individuals mentioned here.

Béatrice Pilloud is the attorney general or chief prosecutor for the canton. Her declaration of interests tells us she is a member of the PLR.Les Libéraux-Radicaux political party, that is, the same party as the canton's security minister. Is it appropriate for both of these roles to be linked politically? Her husband is Francois Pilloud. He is a co-owner of the PaP Vins winery and tourism business.

Béatrice Pilloud spent more than twenty years working as a criminal defence lawyer before she decided to work as the attorney general. Therefore, she has switched sides. Is this fair to all the clients she represented and defended over the years?

When people have all these powers and conflicts of interest, it is easy to see how they could use their powers to either cover something up, to punish whistleblowers or to designate somebody as a scapegoat and deceive the public about who is really at fault.

Over 100 survivors have suffered severe and critical injuries from burns and the inhalation of smoke. The crisis teams in three of Switzerland's biggest hospitals are fully occupied treating these unexpected casualties on a public holiday. Yet the Canton of Valais has told visitors in other ski resorts they can continue to ski as long as they don't have accidents.

The earlier blog about the subject identifies Jacques Moretti and his spouse Jessica Maric as co-proprietors/founders of the business where the tragedy occurred.

Ireland suffered a similar tragedy, the Stardust nightclub fire in 1981. Forty-eight people died. Forty-five years later, the families are still waiting for answers. As a citizen of both Ireland and Switzerland, it bothers me that the same lightning has struck two countries.

The JuristGate investigation found much the same phenomena. FINMA never gave any information to people who purchased the illegal legal insurance. They only published a redacted and anonymized version of their judgment six months later. The JuristGate web site has unredacted it so people can see how Switzerland covers up crime and incompetence.

It is entirely possible that people have committed suicide due to the financial and insurance crisis in Switzerland. These deaths are every bit as bad as the deaths in Le Constellation.

When people like Mr Pocock try to offer professional advice, for example, after the Debian suicide cluster, they are subject to public humiliation and threats of violence (recorded). The lawyers, politicians and small business owners are a group of kissing-cousins. Protecting the reputations and business interests of their families and friends is inextricably intertwined with covering up all the people who failed to prevent this disaster.

Look at the mayor of Basel, Diana von Bidder-Senn. They tell us that she has a PhD in cybersecurity from ETH Zurich but she didn't realize that her own husband was under the influence of social engineering. Can there be any more extreme example of social engineering than a victim who dies by suicide? IBM's annual Cost of a Data Breach report regularly concludes that social engineering is the number one risk for their clients.

JuristGate reports have been published in the hope that Swiss authorities can raise their standards in relation to both cybersecurity and fire safety.

Please see the rest of the JuristGate reports.

01 January, 2026 04:00PM

Daniel Pocock

Crans-Montana: Le Constellation ownership, Jacques Moretti and Jessica Maric, Lens (CH)

News reports have appeared about an explosion at the bar Le Constellation in Crans-Montana, Switzerland.

A 2016 news report from Le Nouvelliste quotes the owner of the bar about his acquisition of the establishment:

Coup de foudre! Ce commerçant corse décide d’ouvrir une affaire. Ça tombe bien, le Constellation, au centre de Crans, est à remettre. Il faudra attendre juin 2015 pour signer un accord. Durant six mois, Jacques Moretti retrousse ses manches et relooke l’établissement. «J’ai quasiment tout fait moi-même. Regardez ces murs, il y a 14 tonnes de pierres sèches, elles viennent de Saint-Léonard!» Depuis décembre 2015, le Constellation sert d’écrin aux produits corses. Charcuteries, vins, bières, liqueur de myrte et même whisky au parfum de châtaigne. «Mais attention, j’ai à cœur de présenter aussi le terroir valaisan. Vous avez de très bons vins, c’est un plaisir de les servir à mes clients.» Le Corse avoue se sentir très bien chez nous. «Vous savez, on est pareil. On est d’abord des montagnards. Avec la tête dure, mais surtout avec beaucoup de gentillesse.»

Translated to English:

(Translated to English) It was love at first sight! This Corsican businessman decided to open a business. Conveniently, Le Constellation, in the center of Crans, was for sale. He had to wait until June 2015 to sign the agreement. For six months, Jacques Moretti rolled up his sleeves and completely renovated the establishment. "I did almost everything myself. Look at these walls, there are 14 tons of dry-stone masonry, and the stones come from Saint-Léonard!" Since December 2015, Le Constellation has been showcasing Corsican products: cured meats, wines, beers, myrtle liqueur, and even chestnut-flavored whisky. "But mind you, I'm also keen to feature local Valais products. You have some excellent wines here; it's a pleasure to serve them to my customers." The Corsican admits he feels very much at home here. "You know, we're alike. We're both mountain people at heart. Stubborn, perhaps, but above all, very kind."

Jacques Moretti on LinkedIn.

The news report notes he did everything himself but doesn't pose questions about whether controlled works, such as the electrical, gas and fire safety systems, were also a DIY job.

The Facebook page for the bar has been taken down. The bar published a profile, with pictures and contact details, on the site of the local tourist office.

These are the details the owners chose to make public:

Rue Centrale 35
3963 Crans-Montana
constellationcransmontana@gmail.com
+41 78 717 14 86
www.facebook.com/leconstellation

Switzerland has 26 cantons and each canton maintains its own business register.

I previously had to research the scandal involving an illegal legal insurance scheme being operated across the border between Switzerland and France. The presence of records about multiple nominee owners and business entities in different cantons made it hard to find the truth. Nonetheless, the truth came out on the JuristGate web site.

Le Constellation is in the Canton of Valais and the business records can be searched in this public database.

The search reveals the owners are Jacques Moretti, a French citizen domiciled in Lens, and Jessica Macif, his spouse, who is also a French citizen.

The records mention they are domiciled in the Swiss village Lens, not to be confused with the French city of the same name.

Jessica Maric on LinkedIn

Searching for their names finds other businesses, including Le Vieux Chalet 1978, and Le Senso.

More links

Article about the couple in Altitude Immobilier magazine

Blog about their other restaurant from food critic Gilles Pudlowski.

To read more about researching businesses in Switzerland, please see the JuristGate web site.

01 January, 2026 07:30AM

Russ Allbery

Review: This Brutal Moon

Review: This Brutal Moon, by Bethany Jacobs

Series: Kindom Trilogy #3
Publisher: Orbit
Copyright: December 2025
ISBN: 0-316-46373-6
Format: Kindle
Pages: 497

This Brutal Moon is a science fiction thriller with bits of cyberpunk and space opera. It concludes the trilogy begun with These Burning Stars. The three books tell one story in three volumes, and ideally you would read all three in close succession.

There is a massive twist in the first book that I am still not trying to spoil, so please forgive some vague description.

At the conclusion of These Burning Stars, Jacobs had moved a lot of pieces into position, but it was not yet clear to me where the plot was going, or even if it would come to a solid ending in three volumes as promised by the series title. It does. This Brutal Moon opens with some of the political maneuvering that characterized These Burning Stars, but once things start happening, the reader gets all of the action they could wish for and then some.

I am pleased to report that, at least as far as I'm concerned, Jacobs nails the ending. Not only is it deeply satisfying, the characterization in this book is so good, and adds so smoothly to the characterization of the previous books, that I saw the whole series in a new light. I thought this was one of the best science fiction series finales I've ever read. Take that with a grain of salt, since some of those reasons are specific to me and the mood I was in when I read it, but this is fantastic stuff.

There is a lot of action at the climax of this book, split across at least four vantage points and linked in a grand strategy with chaotic surprises. I kept all of the pieces straight and understood how they were linked thanks to Jacobs's clear narration, which is impressive given the number of pieces in motion. That's not the heart of this book, though. The action climax is payoff for the readers who want to see some ass-kicking, and it does contain some moving and memorable moments, but it relies on some questionable villain behavior and a convenient plot device introduced only in this volume. The action-thriller payoff is competent but not, I think, outstanding.

What put this book into a category of its own were the characters, and specifically how Jacobs assembles sweeping political consequences from characters who, each alone, would never have brought about such a thing, and in some cases had little desire for it.

Looking back on the trilogy, I think Jacobs has captured, among all of the violence and action-movie combat and space-opera politics, the understanding that political upheaval is a relay race. The people who have the personalities to start it don't have the personality required to nurture it or supply it, and those who can end it are yet again different. This series is a fascinating catalog of political actors — the instigator, the idealist, the pragmatist, the soldier, the one who supports her friends, and several varieties and intensities of leaders — and it respects all of them without anointing any of them as the One True Revolutionary. The characters are larger than life, yes, and this series isn't going to win awards for gritty realism, but it's saying something satisfyingly complex about where we find courage and how a cause is pushed forward by different people with different skills and emotions at different points in time. Sometimes accidentally, and often in entirely unexpected ways.

As before, the main story is interwoven with flashbacks. This time, we finally see the full story of the destruction of the moon of Jeve. The reader has known about this since the first volume, but Jacobs has a few more secrets to show (including, I will admit, setting up a plot device) and some pointed commentary on resource extraction economies. I think this part of the book was a bit obviously constructed, although the characterization was great and the visible junction points of the plot didn't stop me from enjoying the thrill when the pieces came together.

But the best part of this book was the fact there was 10% of it left after the climax. Jacobs wrote an actual denouement, and it was everything I wanted and then some. We get proper story conclusions for each of the characters, several powerful emotional gut punches, some remarkably subtle and thoughtful discussion of political construction for a series that tended more towards space-opera action, and a conclusion for the primary series relationship that may not be to every reader's taste but was utterly, perfectly, beautifully correct for mine. I spent a whole lot of the last fifty pages of this book trying not to cry, in the best way.

The character evolution over the course of this series is simply superb. Each character ages like fine wine, developing more depth, more nuance, but without merging. They become more themselves, which is an impressive feat across at least four very different major characters. You can see the vulnerabilities and know what put them there, you can see the strengths they developed to compensate, and you can see why they need the support the other characters provide. And each of them is so delightfully different.

This was so good. This was so precisely the type of story that I was in the mood for, with just the type of tenderness for its characters that I wanted, that I am certain I am not objective about it. It will be one of those books where other people will complain about flaws that I didn't see or didn't care about because it was doing the things I wanted from it so perfectly. It's so good that it elevated the entire trilogy; the journey was so worth the ending.

I'm afraid this review will be less than helpful because it's mostly nonspecific raving. This series is such a spoiler minefield that I'd need a full spoiler review to be specific, but my reaction is so driven by emotion that I'm not sure that would help if the characters didn't strike you the way that they struck me. I think the best advice I can offer is to say that if you liked the emotional tone of the end of These Burning Stars (not the big plot twist, the character reaction to the political goal that you learn drove the plot), stick with the series, because that's a sign of the questions Jacobs is asking. If you didn't like the characters at the end (not the middle) of the first novel, bail out, because you're going to get a lot more of that.

Highly, highly recommended, and the best thing I've read all year, with the caveats that you should read the content notes, and that some people are going to bounce off this series because it's too intense and melodramatic. That intensity will not let up, so if that's not what you're in the mood for, wait on this trilogy until you are.

Content notes: Graphic violence, torture, mentions of off-screen child sexual assault, a graphic corpse, and a whole lot of trauma.

One somewhat grumbly postscript: This is the sort of book where I need to not read other people's reviews because I'll get too defensive of it (it's just a book I liked!). But there is one bit of review commentary I've seen about the trilogy that annoys me enough I have to mention it. Other reviewers seem to be latching on to the Jeveni (an ethnic group in the trilogy) as Space Jews and then having various feelings about that.

I can see some parallels, I'm not going to say that it's completely wrong, but I also beg people to read about a fictional oppressed ethnic and religious minority and not immediately think "oh, they must be stand-ins for Jews." That's kind of weird? And people from the US, in particular, perhaps should not read a story about an ethnic group enslaved due to their productive skill and economic value and think "they must be analogous to Jews, there are no other possible parallels here." There are a lot of other comparisons that can be made, including to the commonalities between the methods many different oppressed minorities have used to survive and preserve their culture.

Rating: 10 out of 10

01 January, 2026 05:27AM

December 31, 2025

Junichi Uekawa

Happy new year.

Happy new year.

31 December, 2025 10:42PM by Junichi Uekawa

hackergotchi for Bits from Debian

Bits from Debian

DebConf26 dates announced

DebConf26 artwork by Romina Molina

As announced in Brest, France, in July, the Debian Conference is heading to Santa Fe, Argentina.

The DebConf26 team and the local organizers team in Argentina are excited to announce the dates of DebConf26, the 27th edition of the Debian Developers and Contributors Conference:

DebCamp, the annual hacking session, will run from Monday July 13th to Sunday July 19th 2026, followed by DebConf from Monday July 20th to Saturday July 25th 2026.

For all those who wish to meet us in Santa Fe, the next step will be the opening of registration on January 26, 2026. The call for proposals period, for anyone wishing to submit a talk or event, will open on the same day.

DebConf26 is looking for sponsors; if you are interested or think you know of others who would be willing to help, please have a look at our sponsorship page and get in touch with sponsors@debconf.org.

About Debian

The Debian Project was founded in 1993 by Ian Murdock to be a truly free community project. Since then the project has grown to be one of the largest and most influential Open Source projects. Thousands of volunteers from all over the world work together to create and maintain Debian software. Available in 70 languages, and supporting a huge range of computer types, Debian calls itself the universal operating system.

About DebConf

DebConf is the Debian Project's developer conference. In addition to a full schedule of technical, social and policy talks, DebConf provides an opportunity for developers, contributors and other interested people to meet in person and work together more closely. It has taken place annually since 2000 in locations as varied as Scotland, Bosnia and Herzegovina, India, and Korea. More information about DebConf is available from https://debconf.org/.

For further information, please visit the DebConf26 web page at https://debconf26.debconf.org/ or send mail to press@debian.org.

DebConf26 is made possible by Proxmox and others.

31 December, 2025 05:00PM by Publicity team

December 30, 2025

hackergotchi for Daniel Pocock

Daniel Pocock

Invitation to live next door to George and Amal Clooney

George and Amal Clooney have been in the news today after receiving French citizenship.

An interesting observation is that their villa is almost right next door to the Benedictine monastery established by Dom Alcuin Reid.

(Directions)

Dom Alcuin Reid is from Melbourne. He left the seminary for reasons that have not been well explained. He was invited to operate the English-speaking Benedictine monastery in the south of France. In 2022, a bishop outside France secretly ordained Dom Alcuin Reid as a priest. One of the other monks was secretly ordained as a deacon. The public and the Catholic faithful remain in the dark.

The story of secret ordinations taking place in the church reminded me of the secret demotions used to hide the Debian suicide cluster. What an uncanny coincidence. One of the victims died on our wedding day and it was Palm Sunday.

Nonetheless, the monastery invites male visitors over 18 to come and live with them and discover the monastic lifestyle. The advertisement doesn't mention that the next-door neighbors are the Clooneys:

Our classical Benedictine monastic observance is centred upon the solemn celebration of the Sacred Liturgy according to the older, classical forms of the Roman and Monastic Rites and is supported through our manual and intellectual work.

The Monastère Saint-Benoît welcomes all to their celebrations of the Monastic Office and Holy Mass (celebrated in Latin according to the usus antiquior) in our beautiful 10th century Romanesque church. The weekly schedule is posted here.

Men of 18 years of age or over are welcome to ask to stay in the monastery guest accommodation for a time of retreat and should contact the Guest Master. Our guest accommodation is limited. Ladies and families are welcome to arrange their own accommodation locally and are able to participate in the monastic offices, all of which are open to the public and are celebrated in the monastery church.

Monks are available for Confession or spiritual advice. The Monks welcome requests for prayer and accept intentions for Mass to be offered. Please contact us.

Men discerning the possibility of a Benedictine vocation are welcome to visit the monastery and to share in its daily life and work for an extended period. In the first instance they should write to the Prior.

The Monastery welcomes those who wish to associate themselves more formally with the prayer and the work of the community as Oblates. For further details, contact the Master of Oblates.

George Clooney, Le Canadel, Monastère Saint-Benoît, Dom Alcuin Reid, Brignoles, Var, France

 

By coincidence, I had the fortunate opportunity to meet Mgr Rey, the former bishop of this region, at a recent event in Lyon.

Daniel Pocock, Monseigneur Dominique Rey, Réseau Vie, Basilique Saint-Bonaventure, Lyon

 

Related blogs about the church

Catholic.Community

Please follow the Catholic.Community web site and make it your home page.

30 December, 2025 10:30PM

From the ABC to Techrights: recognizing fake news about economics, finance and investment

On Sunday, I published some brief comments about the Eurozone, Bulgaria, fake news, inflation and bullion. Shortly afterwards, various reports appeared which contradict my own comments.

Techrights has asked: Debt as the new currency?

In fact, most banks are expected to hold some form of reserve. The reserve is actually a loan to the government. In countries with central banks, the central bank lends money to the government and it is authorized to print banknotes and make other loans. The founding of the Bank of England famously involved a loan of GBP 1.2 million to the British Government. The loan is a liability for the British government and an asset for the Bank of England. Off the back of this asset, the Bank can print banknotes.

Therefore, the Techrights headline was wrong: there is nothing new about debts backing up currency. Interested readers can discover more by reading about money supply or the history of central banking, in which the British have a special status.

Australia's ABC has gone on to comment that silver prices declined and palladium crashed on Monday, 29 December. They justify the comment by explaining that professional investors regard any downward price move of over ten percent as a crash.

Most traders view such a price plunge as a market crash.

Fact check: traders assign a volatility rating to every stock, every commodity and every corporate bond.

People can look at the volatility rating before they decide to purchase a stock, a metal or a bond. If you choose to put your money in an investment with a very high volatility rating, then you cannot use the word "crash" every time it swings ten percent in a single day.

Despite the high volatility of precious metals, people have been buying these things as long-term investments to protect against inflation. Here is one of those Irish photos demonstrating how many silver coins you can buy with EUR 1,000. There is a stack of coins for every two-year interval since 2007. There are ten stacks, implying the purchaser spent 10 x 1,000 = EUR 10,000 in total to buy the 607 coins in the photo.

At the silver price peak on 26 December, the coins were worth EUR 70 each, that is a total of EUR 42,490.

After the ABC's "crash" on Monday, the coins were worth EUR 64 each, that is a total of EUR 38,848. That is less than at the peak, but still a lot more than the long-term cost of buying the coins.

Silver coins, inflation, Eurozone

 

When I saw the article about a crash, I wondered whether these over-generalized comments were created by a real journalist or by artificial intelligence.

Last week, the same ABC web site published a report about an "up-crash" in the markets as people speculate on the artificial intelligence stocks.

Every stock and every metal has a bubble from time to time. Good investment may be nothing more than picking the bubble that is less wrong than the other bubbles.

Do we say that the Euro has crashed now that the number of silver coins we can purchase has fallen by more than fifty percent in twelve months?

If you took the four grams of free silver mentioned in my previous blog, as it was free, did you lose anything at all when the price changed? That depends on which metric you use to measure the price. The free grams of bullion are still available today but it feels like it won't be long before they reduce the size of the promotion.

More reports about economic subjects.

30 December, 2025 10:30AM

Russ Allbery

Review: Dark Ambitions

Review: Dark Ambitions, by Michelle Diener

Series: Class 5 #4.5
Publisher: Eclipse
Copyright: 2020
ISBN: 1-7637844-2-8
Format: Kindle
Pages: 81

Dark Ambitions is a science fiction romance novella set in Michelle Diener's Class 5 series, following the events of Dark Matters. It returns to Rose as the protagonist and in that sense is a sequel to Dark Horse, but you don't have to remember that book in detail to read this novella.

Rose and Dav (and the Class 5 ship Sazo) are escorting an exploration team to a planet that is being evaluated for settlement. Rose has her heart set on going down to the planet, feeling the breeze, and enjoying the plant life. Dav and his ship are called away to deal with a hostage situation. He tries to talk her out of going down without him, but Rose is having none of it. Predictably, hijinks ensue.

This is a very slight novella dropped into the middle of the series but not (at least so far as I can tell) important in any way to the overall plot. It provides a bit of a coda to Rose's story from Dark Horse, but given that Rose has made cameos in all of the other books, readers aren't going to learn much new here. According to the Amazon blurb, it was originally published in the Pets in Space 5 anthology. The pet in question is a tiny creature a bit like a flying squirrel that Rose rescues and that then helps Rose in exactly the way that you would predict in this sort of story.

This is so slight and predictable that it's hard to find enough to say about it to write a review. Dav is protective in a way that I found annoying and kind of sexist. Rose doesn't let that restrict her decisions, but seems to find this behavior more charming than I did. There is a tiny bit of Rose being awesome but a bit more damsel in distress than the series usually goes for. The cute animal is cute. There's the obligatory armory scene with another round of technomagical weapons that I think has appeared in every book in this series. It all runs on rather obvious rails.

There is a subplot involving Rose feeling some mysterious illness while on the planet that annoyed me entirely out of proportion to how annoying it is objectively, mostly because mysterious illnesses tend to ramp up my anxiety, which is not a pleasant reading emotion. This objection is probably specific to me.

This is completely skippable. I was told that in advance and thus only have myself to blame, but despite my completionist streak, I wish I'd skipped it. We learn one piece of series information that will probably come up in the future, but it's not the sort of information that would lead me to seek out a story about it. Otherwise, there's nothing wrong with it, really, but it would be a minor and entirely forgettable chapter in a longer novel, padded out with a cute animal and Dav trying to be smothering.

Not recommended just because you probably have something better to do with that reading time (reading the next full book of the series, for example), but there's nothing wrong with this if you want to read it anyway.

Followed by Dark Class.

Rating: 5 out of 10

30 December, 2025 06:19AM

December 28, 2025

hackergotchi for Daniel Pocock

Daniel Pocock

Eurozone: Bulgaria, Russian dirty tricks, Gold & Silver bullion

Bulgaria joined the European Union in 2007 and they anticipate adopting the Euro as their currency on 1 January 2026.

The decision to use the Euro has been divisive. Polls suggest that a majority of citizens would prefer to defer or completely cancel the decision to adopt the Euro. Everybody from the political parties to the Russians are getting in on the conflict. At the beginning of December, the Bulgarian parliament was asked to consider putting the Euro to a referendum.

The ECB is prepared for anything

The European authorities understand that if Bulgaria changes their mind or if the Euro transition flops badly, it could have major ramifications for every other country who is already part of the Euro.

In particular, ever since banks began using currencies that have no link to gold and silver, the currencies have been entirely dependent on public perception. A Euro rejection or flop in Bulgaria would undermine perception in unpredictable ways. Other countries would think twice before joining in the future.

With that in mind, the banks have well and truly prepared for every imaginable disaster, whether it is the accidental death of a prime minister or Russian cyberattacks.

Gold and silver: the silent referendum

While the Bulgarian public did not get to vote on the Euro per se, they are voting with their wallets. Reports claim that Bulgaria, the poorest country in Europe, now ranks third in the table of private bullion ownership.

Eurozone inflation and the price police

As in previous Euro changeovers, the authorities have promised that price police will check the prices of essential goods and services in a range of businesses before and after 1 January. Businesses that obviously increase their prices and blame the Euro have been threatened with punishment.

In practice, we've seen that businesses in other countries have found indirect ways to work around inflation. For example, in Ireland, a lot of restaurants periodically make a complete overhaul of their menu. They change the ingredients and serving sizes and there is no easy way to make a like-for-like comparison to the menu before Ireland got the Euro many years ago.

On top of that, businesses that don't increase their prices will probably fail completely. New businesses will appear and replace the old businesses. The new businesses will charge new prices and the price police will not be able to punish them because they didn't exist before the Euro changeover.

Inflation in a picture: gold and silver prices

The gold and silver prices in the media typically show a chart that is always going up.

A far better way to look at the prices of these metals is to ask: if you took one thousand Euro from your salary and invested it in silver every December, how many coins would you get?

Somebody made a "chart" by stacking the silver coins they acquired over more than twenty years. It has got people talking in Ireland. In 2007, when Bulgaria joined the European Union, one thousand Euro could buy 92 silver coins. Today, one thousand Euro only buys fifteen coins.

Silver coins, inflation, Eurozone

 

Silver coins, inflation, Eurozone

 

Silver coins, inflation, Eurozone

 

BullionVault may stop giving away free silver

The well known BullionVault web site currently puts 4 grams of free silver into each new account. The new normal for silver prices may make them downgrade this policy and future customers may only get 2 or 3 grams free.

The terms and conditions discourage people from creating multiple accounts in their own name. There appears to be no reason why multiple people in the same family (for example, the husband, wife and each child) can't each open an account and claim 4 grams of silver in their own name.

Now that I've speculated about that, look out for new terms and conditions that limit the free 4 grams of silver to one person at the same address.

More reports about economic subjects.

28 December, 2025 09:00PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

Our study, 2025

We’re currently thinking of renovating our study/home office. I’ll likely write more about that project. Embarking on it reminded me that I’d taken a photo of the state of it nearly a year ago and forgot to post it, so here it is.

Home workspace, January 2025


When I took that pic last January, it had been three years since the last one, and the major difference was a reduction in clutter. I've added a lava lamp (charity shop find) and a Rob Sheridan print. We got rid of the POÄNG chair (originally bought for breastfeeding) so we currently have no alternate seating besides the desk chair.

As much as I love my vintage mahogany writing desk, our current thinking is it’s likely to go. I’m exploring whether we could fit in two smaller desks: one main one for the computer, and another “workbench” for play: the synthesiser, Amiga, crafting and 3d printing projects, etc.

28 December, 2025 08:25AM

Balasankar 'Balu' C

Granting Namespace-Specific Access in GKE Clusters

Heyo,

In production Kubernetes environments, access control becomes critical when multiple services share the same cluster. I recently faced this exact scenario: a GKE cluster hosting multiple services across different namespaces, where a new team needed access to maintain and debug their service, but only their service.

The requirement was straightforward yet specific: grant external users the ability to exec into pods, view logs, and forward ports, but restrict this access to a single namespace within a single GKE cluster. No access to other clusters in the Google Cloud project, and no access to other namespaces.

The Solution

Achieving this granular access control requires combining Google Cloud IAM with Kubernetes RBAC (Role-Based Access Control). Here’s how to implement it:

Step 1: Tag Your GKE Cluster

First, apply a unique tag to your GKE cluster. This tag will serve as the identifier for IAM policies.
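
A minimal sketch of what this can look like with the gcloud CLI, assuming an organization-level tag key team with value platform and a regional cluster named my-cluster; the names, the organization ID and the exact tag-binding syntax for GKE resources are all assumptions to check against the current gcloud documentation:

# Create a tag key and a value under it (organization ID is a placeholder)
gcloud resource-manager tags keys create team --parent=organizations/123456789012
gcloud resource-manager tags values create platform --parent=123456789012/team

# Attach the tag value to the cluster's full resource name
gcloud resource-manager tags bindings create \
  --tag-value=123456789012/team/platform \
  --parent=//container.googleapis.com/projects/my-project/locations/us-central1/clusters/my-cluster \
  --location=us-central1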

Step 2: Grant IAM Access via Tags

Add an IAM policy binding that grants users access to resources with your specific tag. The Kubernetes Engine Viewer role (roles/container.viewer) provides sufficient base permissions without granting excessive access.
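
As a sketch, again with placeholder project and tag names, and assuming the resource.matchTag condition attribute is supported for the resources involved, the conditional binding could look like this (the condition goes in a file because its expression contains a comma):

# Describe the tag condition in a small YAML file
cat > condition.yaml <<'EOF'
title: platform-cluster-only
expression: resource.matchTag("123456789012/team", "platform")
EOF

# Grant Kubernetes Engine Viewer only on resources carrying the tag
gcloud projects add-iam-policy-binding my-project \
  --member="user:myuser@gmail.com" \
  --role="roles/container.viewer" \
  --condition-from-file=condition.yaml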

Step 3: Create a Kubernetes ClusterRole

Define a ClusterRole that specifies the exact permissions needed:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: custom-access-role
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/exec", "pods/attach", "pods/portforward", "pods/log"]
    verbs: ["get", "list", "watch", "create"]

Note: While you could use a namespace-scoped Role, a ClusterRole offers better reusability if you need similar permissions for other namespaces later.

Step 4: Bind the Role to Users

Create a RoleBinding to connect the role to specific users and namespaces:

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: custom-rolebinding
  namespace: my-namespace
subjects:
  - kind: User
    name: myuser@gmail.com
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: custom-access-role
  apiGroup: rbac.authorization.k8s.io

Apply both configurations using kubectl apply -f <filename>.
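
One way to sanity-check the RBAC half is to let kubectl impersonate the user and ask whether a given action would be allowed. This assumes your own credentials are permitted to impersonate users; the user and namespace names match the examples above:

# Expect "yes" in the permitted namespace...
kubectl auth can-i create pods --subresource=exec --namespace my-namespace --as myuser@gmail.com
# ...and "no" in any other namespace
kubectl auth can-i create pods --subresource=exec --namespace some-other-namespace --as myuser@gmail.com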

How It Works

This approach creates a two-layer security model:

  • GCP IAM controls which clusters users can access using resource tags
  • Kubernetes RBAC controls what users can do within the cluster and limits their scope to specific namespaces

The result is a secure, maintainable solution that grants teams the access they need without compromising the security of other services in your cluster.

28 December, 2025 06:00AM

December 25, 2025

Russ Allbery

Review: Machine

Review: Machine, by Elizabeth Bear

Series: White Space #2
Publisher: Saga Press
Copyright: October 2020
ISBN: 1-5344-0303-5
Format: Kindle
Pages: 485

Machine is a far-future space opera. It is a loose sequel to Ancestral Night, but you do not have to remember the first book to enjoy this book and they have only a couple of secondary characters in common. There are passing spoilers for Ancestral Night in the story, though, if you care.

Dr. Brookllyn Jens is a rescue paramedic on Synarche Medical Vessel I Race To Seek the Living. That means she goes into dangerous situations to get you out of them, patches you up enough to not die, and brings you to doctors who can do the slower and more time-consuming work. She was previously a cop (well, Judiciary, which in this universe is mostly the same thing) and then found that medicine, and specifically the flagship Synarche hospital Core General, was the institution in all the universe that she believed in the most.

As Machine opens, Jens is boarding the Big Rock Candy Mountain, a generation ship launched from Earth during the bad era before right-minding and joining the Synarche, back when it looked like humanity on Earth wouldn't survive. Big Rock Candy Mountain was discovered by accident in the wrong place, going faster than it was supposed to be going and not responding to hails. The Synarche ship that first discovered and docked with it is also mysteriously silent. It's the job of Jens and her colleagues to get on board, see if anyone is still alive, and rescue them if possible.

What they find is a corpse and a disturbingly servile early AI guarding a whole lot of people frozen in primitive cryobeds, along with odd artificial machinery that seems to be controlled by the AI. Or possibly controlling the AI.

Jens assumes her job will be complete once she gets the cryobeds and the AI back to Core General where both the humans and the AI can be treated by appropriate doctors. Jens is very wrong.

Machine is Elizabeth Bear's version of a James White Sector General novel. If one reads this book without any prior knowledge, the way that I did, you may not realize this until the characters make it to Core General, but then it becomes obvious to anyone who has read White's series. Most of the standard Sector General elements are here: A vast space station with rings at different gravity levels and atmospheres, a baffling array of species, and the ability to load other people's personalities into your head to treat other species at the cost of discomfort and body dysmorphia. There's a gruff supervisor, a fragile alien doctor, and a whole lot of idealistic and well-meaning people working around complex interspecies differences. Sadly, Bear does drop White's entertainingly oversimplified species classification codes; this is the correct call for suspension of disbelief, but I kind of missed them.

I thoroughly enjoy the idea of the Sector General series, so I was delighted by an updated version that drops the sexism and the doctor/nurse hierarchy and adds AIs, doctors for AIs, and a more complicated political structure. The hospital is even run by a sentient tree, which is an inspired choice.

Bear, of course, doesn't settle for a relatively simple James White problem-solving plot. There are interlocking, layered problems here, medical and political, immediate and structural, that unwind in ways that I found satisfyingly twisty. As with Ancestral Night, Bear has some complex points to make about morality. I think that aspect of the story was a bit less convincing than Ancestral Night, in part because some of the characters use rather bizarre tactics (although I will grant they are the sort of bizarre tactics that I could imagine would be used by well-meaning people using who didn't think through all of the possible consequences). I enjoyed the ethical dilemmas here, but they didn't grab me the way that Ancestral Night did. The setting, though, is even better: An interspecies hospital was a brilliant setting when James White used it, and it continues to be a brilliant setting in Bear's hands.

It's also worth mentioning that Jens has a chronic inflammatory disease and uses an exoskeleton for mobility, and (as much as I can judge while not being disabled myself) everything about this aspect of the character was excellent. It's rare to see characters with meaningful disabilities in far-future science fiction. When present at all, they're usually treated like Geordi's sight: something little different than the differential abilities of the various aliens, or even a backdoor advantage. Jens has a true, meaningful disability that she has to manage and that causes a constant cognitive drain, and the treatment of her assistive device is complex and nuanced in a way that I found thoughtful and satisfying.

The one structural complaint that I will make is that Jens is an astonishingly talkative first-person protagonist, particularly for an Elizabeth Bear novel. This is still better than being inscrutable, but she is prone to such extended philosophical digressions or infodumps in the middle of a scene that I found myself wishing she'd get on with it already in a few places. This provides good characterization, in the sense that the reader certainly gets inside Jens's head, but I think Bear didn't get the balance quite right.

That complaint aside, this was very fun, and I am certainly going to keep reading this series. Recommended, particularly if you like James White, or want to see why other people do.

The most important thing in the universe is not, it turns out, a single, objective truth. It's not a hospital whose ideals you love, that treats all comers. It's not a lover; it's not a job. It's not friends and teammates.

It's not even a child that rarely writes me back, and to be honest I probably earned that. I could have been there for her. I didn't know how to be there for anybody, though. Not even for me.

The most important thing in the universe, it turns out, is a complex of subjective and individual approximations. Of tries and fails. Of ideals, and things we do to try to get close to those ideals.

It's who we are when nobody is looking.

Followed by The Folded Sky.

Rating: 8 out of 10

25 December, 2025 03:05AM

December 23, 2025

hackergotchi for Jonathan Dowland

Jonathan Dowland

Remarkable

Remarkable tablet displaying my 2025 planner PDF.

My Remarkable tablet, displaying my 2025 planner.

During my PhD, on a sunny summer's day, I copied some papers to read onto an iPad and cycled down to an outdoor cafe next to the beach. Armed with a coffee and an ice cream, I sat and enjoyed the warmth. The only problem was that due to the bright sunlight, I couldn't see a damn thing.

In 2021 I decided to take the plunge and buy the Remarkable 2 that was being heavily advertised at the time. Over the next four or so years, I made good use of it to read papers; read drafts of my own papers and chapters; read a small number of technical books; use it as a daily planner; and take meeting notes for work, my PhD and, later, personal matters.

I didn't buy the Remarkable stylus or folio cover, instead opting for a (at the time, slightly cheaper) LAMY AL-star EMR and a fantastic fabric sleeve cover from Emmerson Gray.

I installed a hack which let me use the Lamy's button to activate an eraser and also added a bunch of other tweaks. I wouldn't recommend that specific hack anymore as there are safer alternatives (personally untested, but e.g. https://github.com/isaacwisdom/RemarkableLamyEraser).

Pros: the writing experience is unparalleled. Excellent. I enjoy writing with fountain pens on good paper, but that experience comes with inky fingers, dried-up nibs, and a growing pile of paper notebooks. The Remarkable is very nearly as good without those drawbacks.

Cons: lower contrast than black on white paper and no built-in illumination. It needs good light to read. Almost the opposite problem to the iPad! I've tried a limited number of external clip-on lights but nothing is frictionless to use.

The traditional two-column, wide-margin formatting for academic papers is a bad fit for the Remarkable's size (just as it is for computer display sizes. Really, is it good for anything people use anymore?). You can pinch to zoom, which is OK, or pre-process papers (with e.g. Briss) to reframe them to be more suitable, but that's laborious.

The newer model, the Remarkable Paper Pro, might address both those issues: it's bigger, has illumination, and adds colour, which would be nice to have. It's also a lot more expensive.

I had considered selling on the tablet after I finished my PhD. My current plan, inspired to some extent by my former colleague Aleksey Shipilëv, who makes great use of his, is to have a go at using it more often, to see if it continues to provide value for me: more noodling out thoughts for work tasks, more drawings (e.g. plans for 3D models) and more reading of tech books.

23 December, 2025 10:58AM

hackergotchi for Daniel Kahn Gillmor

Daniel Kahn Gillmor

AI and Secure Messaging Don't Mix


Over on the ACLU's Free Future blog, I just published an article titled AI and Secure Messaging Don't Mix.

The blogpost assumes for the sake of the argument that people might actually want to have an AI involved in their personal conversations, and explores why Meta's Private Processing doesn't offer the level of assurance that they want it to offer.

In short, the promises of "confidential cloud computing" are built on shaky foundations, especially against adversaries as powerful as Meta themselves.

If you really want AI in your chat, the baseline step for privacy preservation is to include it in your local compute base, not to use a network service! But these operators clearly don't value private communication as much as they value binding you to their services.

But let's imagine some secure messenger that actually does put message confidentiality first -- and imagine they had integrated some sort of AI capability into the messenger. That at least bypasses the privacy questions around AI use.

Would you really want to talk with your friends, as augmented by their local AI, though? Would you want an AI, even one running locally with perfect privacy, intervening in your social connections?

What if it summarized your friend's messages to you in a way that led you to misunderstand (or ignore) an important point your friend had made? What if it encouraged you to make an edgy joke that comes across wrong? Or to say something that seriously upsets a friend? How would you respond? How would you even know that it had happened?

My handle is dkg. More times than i can count, i've had someone address me in a chat as "dog" and then cringe and apologize and blame their spellchecker/autocorrect. I can laugh these off because the failure mode is so obvious and transparent -- and repeatable. (also, dogs are awesome, so i don't really mind!)

But when our attention (and our responses!) are being shaped and molded by these plausibility engines, how will we even know that mistakes are being made? What if the plausibility engine you've hooked into your messenger embeds subtle (or unsubtle!) bias?

Don't we owe it to each other to engage with actual human attention?

23 December, 2025 05:00AM by Daniel Kahn Gillmor

December 22, 2025

hackergotchi for Jonathan McDowell

Jonathan McDowell

NanoKVM: I like it

I bought a NanoKVM. I’d heard some of the stories about how terrible it was beforehand, and some I didn’t learn about until afterwards, but at £52, including VAT + P&P, that seemed like an excellent bargain for something I was planning to use in my home network environment.

Let’s cover the bad press first. apalrd did a video, entitled NanoKVM: The S stands for Security (Armen Barsegyan has a write up recommending a PiKVM instead that lists the objections raised in the video). Matej Kovačič wrote an article about the hidden microphone on a Chinese NanoKVM. Various other places have picked up both of these and still seem to be running with them, 10 months later.

Next, let me explain where I'm coming from here. I have over 2 decades of experience with terrible out-of-band access devices. I still wince when I think of the Sun Opteron servers that shipped with an iLOM that needed a 32-bit Windows browser in order to access it (IIRC some 32-bit binary JNI blob). It was a 64-bit x86 server from a company that, at the time, still had a major non-Windows OS. Sheesh. I do not assume these devices are fit for exposure to the public internet, even if they come from "reputable" vendors. Add into that the fact the NanoKVM is very much based on a development board (the LicheeRV Nano), and I felt I knew what I was getting into here.

And, as a TL;DR, I am perfectly happy with my purchase. Sipeed have actually dealt with a bunch of apalrd’s concerns (GitHub ticket), which I consider to be an impressive level of support for this price point. Equally the microphone is explained by the fact this is a £52 device based on a development board. You’re giving it USB + HDMI access to a host on your network, if you’re worried about the microphone then you’re concentrating on the wrong bit here.

I started out by hooking the NanoKVM up to my Raspberry Pi classic, which I use as a serial console / network boot tool for working on random bits of hardware. That meant the NanoKVM had no access to the outside world (the Pi is not configured to route, or NAT, for the test network interface), and I could observe what went on. As it happens you can do an SSH port forward of port 80 with this sort of setup and it all works fine - no need for the NanoKVM to have any external access, and it copes happily with being accessed as http://localhost:8000/ (though you do need to choose MJPEG as the video mode, more forwarding or enabling HTTPS is needed for an H.264 WebRTC session).
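
For anyone wanting to reproduce that, the forward is just a standard ssh -L through the Pi; the NanoKVM address and the hostname below are placeholders for my test network:

# Forward local port 8000 to the NanoKVM's web interface via the Pi
ssh -L 8000:192.168.7.2:80 pi@raspberrypi
# then browse to http://localhost:8000/ and pick MJPEG as the video mode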

IPv6 is enabled in the kernel. My test setup doesn't have router advertisements configured, but I could connect to the web application over the v6 link-local address that came up automatically.
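
From the Pi that looks something like the following; the link-local address and interface name are made up, the zone identifier needs to be percent-encoded in the URL, and --globoff stops curl treating the brackets as a glob pattern:

curl --globoff 'http://[fe80::aaaa:bbbb:cccc:dddd%25eth0]/'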

My device reports:

Image version:              v1.4.1
Application version:        2.2.9

That’s recent, but the GitHub releases page has 2.3.0 listed as more recent.

Out of the box it’s listening on TCP port 80. SSH is not running, but there’s a toggle to turn it on and the web interface offers a web based shell (with no extra authentication over the normal login). On first use I was asked to set a username + password. Default access, as you’d expect from port 80, is HTTP, but there’s a toggle to enable HTTPS. It generates a self signed certificate - for me it had the CN localhost but that might have been due to my use of port forwarding. Enabling HTTPS does not disable HTTP, but HTTP just redirects to the HTTPS URL.

As others have discussed it does a bunch of DNS lookups, primarily for NTP servers but also for cdn.sipeed.com. The DNS servers are hard coded:

~ # cat /etc/resolv.conf
nameserver 192.168.0.1
nameserver 8.8.4.4
nameserver 8.8.8.8
nameserver 114.114.114.114
nameserver 119.29.29.29
nameserver 223.5.5.5

This is actually restored on boot from /boot/resolv.conf, so if you want changes to persist you can just edit that file. NTP is configured with a standard set of pool.ntp.org services in /etc/ntp.conf (this does not get restored on reboot, so can just be edited in place). I had dnsmasq set up on the Pi to hand out DNS + NTP servers, but both were ignored (though actually udhcpc does write the DNS details to /etc/resolv.conf.dhcp).

My assumption is the lookup to cdn.sipeed.com is for firmware updates (as I bought the NanoKVM cube it came fully installed, so no need for a .so download to make things work); when working DNS was provided I witnessed attempts to connect over HTTPS. I've not bothered digging further into this. I did go grab the latest.zip being served from the URL, which turned out to be v2.2.9, matching what I have installed, not the latest on GitHub.

I note there’s an iptables setup (with nftables underneath) that’s not fully realised - it seems to be trying to allow inbound HTTP + WebRTC, as well as outbound SSH, but everything is default accept so none of it gets hit. Setting up a default deny outbound and tweaking a little should provide a bit more reassurance it’s not going to try and connect out somewhere it shouldn’t.
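
For what it's worth, a minimal default-deny-outbound sketch on the device itself might look like this; the DNS and NTP allowances reflect the lookups described above (anything else, such as the cdn.sipeed.com connection, would then be blocked), and these are plain iptables commands that get translated to nftables underneath:

# Keep loopback and replies to established sessions (web UI, WebRTC) working
iptables -A OUTPUT -o lo -j ACCEPT
iptables -A OUTPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
# Allow the DNS and NTP lookups the image wants to make
iptables -A OUTPUT -p udp --dport 53 -j ACCEPT
iptables -A OUTPUT -p udp --dport 123 -j ACCEPT
# Drop everything else leaving the device
iptables -P OUTPUT DROP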

It looks like updates focus solely on the KVM application, so I wanted to take a look at the underlying OS. This is buildroot based:

~ # cat /etc/os-release
NAME=Buildroot
VERSION=-g98d17d2c0-dirty
ID=buildroot
VERSION_ID=2023.11.2
PRETTY_NAME="Buildroot 2023.11.2"

The kernel reports itself as 5.10.4-tag-. Somewhat ancient, but actually an LTS kernel. Except we’re now up to 5.10.247, so it obviously hasn’t been updated in some time.

TBH, this is what I expect (and fear) from embedded devices. They end up with some ancient base OS revision and a kernel with a bunch of hacks that mean it’s not easily updated. I get that the margins on this stuff are tiny, but I do wish folk would spend more time upstreaming. Or at least updating to the latest LTS point release for their kernel.

The SSH client/daemon is full-fat OpenSSH:

~ # sshd -V
OpenSSH_9.6p1, OpenSSL 3.1.4 24 Oct 2023

There are a number of CVEs fixed in later OpenSSL 3.1 versions, though at present nothing that looks too concerning from the server side. Yes, the image has tcpdump + aircrack installed. I’m a little surprised at aircrack (the device has no WiFi and even though I know there’s a variant that does, it’s not a standard debug tool the way tcpdump is), but there’s a copy of GNU Chess in there too, so it’s obvious this is just a kitchen-sink image. FWIW it looks like the buildroot config is here.

Sadly the UART that I believe the bootloader/kernel are talking to is not exposed externally - the UART pin headers are for UART1 + 2, and I’d have to open up the device to get to UART0. I’ve not yet done this (but doing so would also allow access to the SD card, which would make trying to compile + test my own kernel easier).

In terms of actual functionality it did what I’d expect. 1080p HDMI capture was fine. I’d have gone for a lower resolution, but I think that would have required tweaking on the client side. It looks like the 2.3.0 release allows EDID tweaking, so I might have to investigate that. The keyboard defaults to a US layout, which caused some problems with the | symbol until I reconfigured the target machine not to expect a GB layout.

There’s also the potential to share out images via USB. I copied a Debian trixie netinst image to /data on the NanoKVM and was able to select it in the web interface and have it appear on the target machine easily. There’s also the option to fetch direct from a URL in the web interface, but I was still testing without routable network access, so didn’t try that. There’s plenty of room for images:

~ # df -h
Filesystem                Size      Used Available Use% Mounted on
/dev/mmcblk0p2            7.6G    823.3M      6.4G  11% /
devtmpfs                 77.7M         0     77.7M   0% /dev
tmpfs                    79.0M         0     79.0M   0% /dev/shm
tmpfs                    79.0M     30.2M     48.8M  38% /tmp
tmpfs                    79.0M    124.0K     78.9M   0% /run
/dev/mmcblk0p1           16.0M     11.5M      4.5M  72% /boot
/dev/mmcblk0p3           22.2G    160.0K     22.2G   0% /data

The NanoKVM also appears as an RNDIS USB network device, with udhcpd running on the interface. IP forwarding is not enabled, and there are no masquerading rules set up, so this doesn't give the target host access to the "management" LAN by default. I guess it could be useful for copying things over to the target host, as a more flexible approach than a virtual disk image.

One thing to note is this makes for a bunch of devices over the composite USB interface. There are 3 HID devices (keyboard, absolute mouse, relative mouse), the RNDIS interface, and the USB mass storage. I had a few occasions where the keyboard input got stuck after I’d been playing about with big data copies over the network and using the USB mass storage emulation. There is a HID-only mode (no network/mass storage) to try and help with this, and a restart of the NanoKVM generally brought things back, but something to watch out for. Again I see that the 2.3.0 application update mentions resetting the USB hardware on a HID reset, which might well help.

As I stated at the start, I’m happy with this purchase. Would I leave it exposed to the internet without suitable firewalling? No, but then I wouldn’t do so for any KVM. I wanted a lightweight KVM suitable for use in my home network, something unlikely to see heavy use but that would save me hooking up an actual monitor + keyboard when things were misbehaving. So far everything I’ve seen says I’ve got my money’s worth from it.

22 December, 2025 05:38PM

Russell Coker

Samsung 65″ QN900C 8K TV

As a follow-up to my last post about my 8K TV [1] I tested out a Samsung 65″ QN900C Neo QLED 8K that's on sale at JB Hifi. According to the JB employee I spoke to, they are running out the last 8K TVs and have no plans to get more.

In my testing of that 8K TV, YouTube had a 3840*2160 viewport, which is better than the 1920*1080 of my Hisense TV. When running a web browser, the codeshack page reported it as 1920*1080 with a 1.25* pixel density (presumably a configuration option) that gave a usable resolution of 1536*749.

The JB Hifi employee wouldn’t let me connect my own device via HDMI but said that it would work at 8K. I said “so if I buy it I can return it if it doesn’t do 8K HDMI?” and then he looked up the specs and found that it would only do 4K input on HDMI. It seems that actual 8K resolution might work on a Samsung streaming device but that’s not very useful particularly as there probably isn’t much 8K content on any streaming service.

Basically, that allegedly-8K Samsung TV only works at 4K at best.

It seems to be impossible to buy an 8K TV or monitor in Australia that will actually display 8K content. ASUS has a 6K 32″ monitor with 6016*3384 resolution for $2016 [2]. Accounting for inflation, $2016 wouldn't be the most expensive monitor I've ever bought, and hopefully prices will continue to drop.

Rumour has it that there are 8K TVs available in China that actually take 8K input. Getting one to Australia might not be easy but it’s something that I will investigate.

Also I’m trying to sell my allegedly 8K TV.

22 December, 2025 07:52AM by etbe

François Marier

LXC setup on Debian forky

Similar to what I wrote for Ubuntu 18.04, here is how to set up an LXC container on Debian forky.

Installing the required packages

Start by installing the necessary packages on the host:

apt install lxc libvirt-clients debootstrap

Network setup

Ensure the veth kernel module is loaded by adding the following to /etc/modules-load.d/lxc-local.conf:

veth

and then loading it manually for now:

modprobe veth

Enable IPv4 forwarding by putting this in /etc/sysctl.d/lxc-local.conf:

net.ipv4.ip_forward=1

and applying it:

sysctl -p /etc/sysctl.d/lxc-local.conf

Restart the LXC network bridge:

systemctl restart lxc-net.service

Ensure that container traffic is not blocked by the host firewall, for example by adding the following to /etc/network/iptables.up.rules:

-A FORWARD -d 10.0.3.0/24 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -s 10.0.3.0/24 -j ACCEPT
-A INPUT -d 224.0.0.251 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 239.255.255.250 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 10.0.3.255 -s 10.0.3.1 -j ACCEPT
-A INPUT -d 10.0.3.1 -s 10.0.3.0/24 -j ACCEPT

and applying the rules:

iptables-apply

Creating a container

To see all available images, run:

lxc-create -n foo --template=download -- --list

and then create a Debian forky container using:

lxc-create -n forky -t download -- -d debian -r forky -a amd64

Start and stop the container like this:

lxc-start -n forky
lxc-stop -n forky

Connecting to the container

Attach to the running container's console:

lxc-attach -n forky

Inside the container, you can change the root password by typing:

passwd

and install some essential packages:

apt install openssh-server vim

To find the container's IP address (for example, so that you can ssh to it from the host):

lxc-ls --fancy

22 December, 2025 02:47AM

hackergotchi for C.J. Adams-Collier

C.J. Adams-Collier

I’m learning about perlguts today.


[Screenshot: im-learning-about-perlguts-today.png]


## 0.23	2025-12-20

commit be15aa25dea40aea66a8534143fb81b29d2e6c08
Author: C.J. Collier 
Date:   Sat Dec 20 22:40:44 2025 +0000

    Fixes C-level test infrastructure and adds more test cases for upb_to_sv conversions.
    
    - **Makefile.PL:**
        - Allow `extra_src` in `c_test_config.json` to be an array.
        - Add ASan flags to CCFLAGS and LDDLFLAGS for better debugging.
        - Corrected echo newlines in `test_c` target.
    - **c_test_config.json:**
        - Added missing type test files to `deps` and `extra_src` for `convert/sv_to_upb` and `convert/upb_to_sv` test runners.
    - **t/c/convert/upb_to_sv.c:**
        - Fixed a double free of `test_pool`.
        - Added missing includes for type test headers.
        - Updated test plan counts.
    - **t/c/convert/sv_to_upb.c:**
        - Added missing includes for type test headers.
        - Updated test plan counts.
        - Corrected Perl interpreter initialization.
    - **t/c/convert/types/**:
        - Added missing `test_util.h` include in new type test headers.
        - Completed the set of `upb_to_sv` test cases for all scalar types by adding optional and repeated tests for `sfixed32`, `sfixed64`, `sint32`, and `sint64`, and adding repeated tests to the remaining scalar type files.
    - **Documentation:**
        - Updated `01-xs-testing.md` with more debugging tips, including ASan usage and checking for double frees and typos.
        - Updated `xs_learnings.md` with details from the recent segfault.
        - Updated `llm-plan-execution-instructions.md` to emphasize debugging steps.


## 0.22	2025-12-19

commit 2c171d9a5027e0150eae629729c9104e7f6b9d2b
Author: C.J. Collier 
Date:   Fri Dec 19 23:41:02 2025 +0000

    feat(perl,testing): Initialize C test framework and build system
    
    This commit sets up the foundation for the C-level tests and the build system for the Perl Protobuf module:
    
    1.  **Makefile.PL Enhancements:**
        *   Integrates `Devel::PPPort` to generate `ppport.h` for better portability.
        *   Object files now retain their path structure (e.g., `xs/convert/sv_to_upb.o`) instead of being flattened, improving build clarity.
        *   The `MY::postamble` is significantly revamped to dynamically generate build rules for all C tests located in `t/c/` based on the `t/c/c_test_config.json` file.
        *   C tests are linked against `libprotobuf_common.a` and use `ExtUtils::Embed` flags.
        *   Added `JSON::MaybeXS` to `PREREQ_PM`.
        *   The `test` target now also depends on the `test_c` target.
    
    2.  **C Test Infrastructure (`t/c/`):
        *   Introduced `t/c/c_test_config.json` to configure individual C test builds, specifying dependencies and extra source files.
        *   Created `t/c/convert/test_util.c` and `.h` for shared test functions like loading descriptors.
        *   Initial `t/c/convert/upb_to_sv.c` and `t/c/convert/sv_to_upb.c` test runners.
        *   Basic `t/c/integration/030_protobuf_coro.c` for Coro safety testing on core utils using `libcoro`.
        *   Basic `t/c/integration/035_croak_test.c` for testing exception handling.
        *   Basic `t/c/integration/050_convert.c` for integration testing conversions.
    
    3.  **Test Proto:** Updated `t/data/test.proto` with more field types for conversion testing and regenerated `test_descriptor.bin`.
    
    4.  **XS Test Harness (`t/c/upb-perl-test.h`):** Added `like_n` macro for length-aware regex matching.
    
    5.  **Documentation:** Updated architecture and plan documents to reflect the C test structure.
    6.  **ERRSV Testing:** Note that the C tests (`t/c/`) will primarily check *if* a `croak` occurs (i.e., that the exception path is taken), but will not assert on the string content of `ERRSV`. Reliably testing `$@` content requires the full Perl test environment with `Test::More`, which will be done in the `.t` files when testing the Perl API.
    
    This provides a solid base for developing and testing the XS and C components of the module.


## 0.21	2025-12-18

commit a8b6b6100b2cf29c6df1358adddb291537d979bc
Author: C.J. Collier 
Date:   Thu Dec 18 04:20:47 2025 +0000

    test(C): Add integration tests for Milestone 2 components
    
    - Created t/c/integration/030_protobuf.c to test interactions
      between obj_cache, arena, and utils.
    - Added this test to t/c/c_test_config.json.
    - Verified that all C tests for Milestones 2 and 3 pass,
      including the libcoro-based stress test.


## 0.20	2025-12-18

commit 0fcad68680b1f700a83972a7c1c48bf3a6958695
Author: C.J. Collier 
Date:   Thu Dec 18 04:14:04 2025 +0000

    docs(plan): Add guideline review reminders to milestones
    
    - Added a "[ ] REFRESH: Review all documents in @perl/doc/guidelines/**"
      checklist item to the start of each component implementation
      milestone (C and Perl layers).
    - This excludes Integration Test milestones.


## 0.19	2025-12-18

commit 987126c4b09fcdf06967a98fa3adb63d7de59a34
Author: C.J. Collier 
Date:   Thu Dec 18 04:05:53 2025 +0000

    docs(plan): Add C-level and Perl-level Coro tests to milestones
    
    - Added checklist items for `libcoro`-based C tests
      (e.g., `t/c/integration/050_convert_coro.c`) to all C layer
      integration milestones (050 through 220).
    - Updated `030_Integration_Protobuf.md` to standardise checklist
      items for the existing `030_protobuf_coro.c` test.
    - Removed the single `xt/author/coro-safe.t` item from
      `010_Build.md`.
    - Added checklist items for Perl-level `Coro` tests
      (e.g., `xt/coro/240_arena.t`) to each Perl layer
      integration milestone (240 through 400).
    - Created `perl/t/c/c_test_config.json` to manage C test
      configurations externally.
    - Updated `perl/doc/architecture/testing/01-xs-testing.md` to describe
      both C-level `libcoro` and Perl-level `Coro` testing strategies.


## 0.18	2025-12-18

commit 6095a5a610401a6035a81429d0ccb9884d53687b
Author: C.J. Collier 
Date:   Thu Dec 18 02:34:31 2025 +0000

    added coro testing to c layer milestones


## 0.17	2025-12-18

commit cc0aae78b1f7f675fc8a1e99aa876c0764ea1cce
Author: C.J. Collier 
Date:   Thu Dec 18 02:26:59 2025 +0000

    docs(plan): Refine test coverage checklist items for SMARTness
    
    - Updated the "Tests provide full coverage" checklist items in
      C layer plan files (020, 040, 060, 080, 100, 120, 140, 160, 180, 200)
      to explicitly mention testing all public functions in the
      corresponding header files.
    - Expanded placeholder checklists in 140, 160, 180, 200.
    - Updated the "Tests provide full coverage" and "Add coverage checks"
      checklist items in Perl layer plan files (230, 250, 270, 290, 310, 330,
      350, 370, 390) to be more specific about the scope of testing
      and the use of `Test::TestCoverage`.
    - Expanded Well-Known Types milestone (350) to detail each type.


## 0.16	2025-12-18

commit e4b601f14e3817a17b0f4a38698d981dd4cb2818
Author: C.J. Collier 
Date:   Thu Dec 18 02:07:35 2025 +0000

    docs(plan): Full refactoring of C and Perl plan files
    
    - Split both ProtobufPlan-C.md and ProtobufPlan-Perl.md into
      per-milestone files under the `perl/doc/plan/` directory.
    - Introduced Integration Test milestones after each component
      milestone in both C and Perl plans.
    - Numbered milestone files sequentially (e.g., 010_Build.md,
      230_Perl_Arena.md).
    - Updated main ProtobufPlan-C.md and ProtobufPlan-Perl.md to
      act as Tables of Contents.
    - Ensured consistent naming for integration test files
      (e.g., `t/c/integration/030_protobuf.c`, `t/integration/260_descriptor_pool.t`).
    - Added architecture review steps to the end of all milestones.
    - Moved Coro safety test to C layer Milestone 1.
    - Updated Makefile.PL to support new test structure and added Coro.
    - Moved and split t/c/convert.c into t/c/convert/*.c.
    - Moved other t/c/*.c tests into t/c/protobuf/*.c.
    - Deleted old t/c/convert.c.


## 0.15	2025-12-17

commit 649cbacf03abb5e7293e3038bb451c0406e9d0ce
Author: C.J. Collier 
Date:   Wed Dec 17 23:51:22 2025 +0000

    docs(plan): Refactor and reset ProtobufPlan.md
    
    - Split the plan into ProtobufPlan-C.md and ProtobufPlan-Perl.md.
    - Reorganized milestones to clearly separate C layer and Perl layer development.
    - Added more granular checkboxes for each component:
      - C Layer: Create test, Test coverage, Implement, Tests pass.
      - Perl Layer: Create test, Test coverage, Implement Module/XS, Tests pass, C-Layer adjustments.
    - Reset all checkboxes to `[ ]` to prepare for a full audit.
    - Updated status in architecture/api and architecture/core documents to "Not Started".
    
    feat(obj_cache): Add unregister function and enhance tests
    
    - Added `protobuf_unregister_object` to `xs/protobuf/obj_cache.c`.
    - Updated `xs/protobuf/obj_cache.h` with the new function declaration.
    - Expanded tests in `t/c/protobuf_obj_cache.c` to cover unregistering,
      overwriting keys, and unregistering non-existent keys.
    - Corrected the test plan count in `t/c/protobuf_obj_cache.c` to 17.


## 0.14	2025-12-17

commit 40b6ad14ca32cf16958d490bb575962f88d868a1
Author: C.J. Collier 
Date:   Wed Dec 17 23:18:27 2025 +0000

    feat(arena): Complete C layer for Arena wrapper
    
    This commit finalizes the C-level implementation for the Protobuf::Arena wrapper.
    
    - Adds `PerlUpb_Arena_Destroy` for proper cleanup from Perl's DEMOLISH.
    - Enhances error checking in `PerlUpb_Arena_Get`.
    - Expands C-level tests in `t/c/protobuf_arena.c` to cover memory allocation
      on the arena and lifecycle through `PerlUpb_Arena_Destroy`.
    - Corrects embedded Perl initialization in the C test.
    
    docs(plan): Refactor ProtobufPlan.md
    
    - Restructures the development plan to clearly separate "C Layer" and
      "Perl Layer" tasks within each milestone.
    - This aligns the plan with the "C-First Implementation Strategy" and improves progress tracking.


## 0.13	2025-12-17

commit c1e566c25f62d0ae9f195a6df43b895682652c71
Author: C.J. Collier 
Date:   Wed Dec 17 22:00:40 2025 +0000

    refactor(perl): Rename C tests and enhance Makefile.PL
    
    - Renamed test files in `t/c/` to better match the `xs` module structure:
        - `01-cache.c` -> `protobuf_obj_cache.c`
        - `02-arena.c` -> `protobuf_arena.c`
        - `03-utils.c` -> `protobuf_utils.c`
        - `04-convert.c` -> `convert.c`
        - `load_test.c` -> `upb_descriptor_load.c`
    - Updated `perl/Makefile.PL` to reflect the new test names in `MY::postamble`'s `$c_test_config`.
    - Refactored the `$c_test_config` generation in `Makefile.PL` to reduce repetition by using a default flags hash and common dependencies array.
    - Added a `fail()` macro to `perl/t/c/upb-perl-test.h` for consistency.
    - Modified `t/c/upb_descriptor_load.c` to use the `t/c/upb-perl-test.h` macros, making its output consistent with other C tests.
    - Added a skeleton for `t/c/convert.c` to test the conversion functions.
    - Updated documentation in `ProtobufPlan.md` and `architecture/testing/01-xs-testing.md` to reflect new test names.


## 0.12	2025-12-17

commit d8cb5dd415c6c129e71cd452f78e29de398a82c9
Author: C.J. Collier 
Date:   Wed Dec 17 20:47:38 2025 +0000

    feat(perl): Refactor XS code into subdirectories
    
    This commit reorganizes the C code in the `perl/xs/` directory into subdirectories, mirroring the structure of the Python UPB extension. This enhances modularity and maintainability.
    
    - Created subdirectories for each major component: `convert`, `descriptor`, `descriptor_containers`, `descriptor_pool`, `extension_dict`, `map`, `message`, `protobuf`, `repeated`, and `unknown_fields`.
    - Created skeleton `.h` and `.c` files within each subdirectory to house the component-specific logic.
    - Updated top-level component headers (e.g., `perl/xs/descriptor.h`) to include the new sub-headers.
    - Updated top-level component source files (e.g., `perl/xs/descriptor.c`) to include their main header and added stub initialization functions (e.g., `PerlUpb_InitDescriptor`).
    - Moved code from the original `perl/xs/protobuf.c` to new files in `perl/xs/protobuf/` (arena, obj_cache, utils).
    - Moved code from the original `perl/xs/convert.c` to new files in `perl/xs/convert/` (upb_to_sv, sv_to_upb).
    - Updated `perl/Makefile.PL` to use a glob (`xs/*/*.c`) to find the new C source files in the subdirectories.
    - Added `perl/doc/architecture/core/07-xs-file-organization.md` to document the new structure.
    - Updated `perl/doc/ProtobufPlan.md` and other architecture documents to reference the new organization.
    - Corrected self-referential includes in the newly created .c files.
    
    This restructuring provides a solid foundation for further development and makes it easier to port logic from the Python implementation.


## 0.11	2025-12-17

commit cdedcd13ded4511b0464f5d3bdd72ce6d34e73fc
Author: C.J. Collier 
Date:   Wed Dec 17 19:57:52 2025 +0000

    feat(perl): Implement C-first testing and core XS infrastructure
    
    This commit introduces a significant refactoring of the Perl XS extension, adopting a C-first development approach to ensure a robust foundation.
    
    Key changes include:
    
    -   **C-Level Testing Framework:** Established a C-level testing system in `t/c/` with a dedicated Makefile, using an embedded Perl interpreter. Initial tests cover the object cache (`01-cache.c`), arena wrapper (`02-arena.c`), and utility functions (`03-utils.c`).
    -   **Core XS Infrastructure:**
        -   Implemented a global object cache (`xs/protobuf.c`) to manage Perl wrappers for UPB objects, using weak references.
        -   Created an `upb_Arena` wrapper (`xs/protobuf.c`).
        -   Consolidated common XS helper functions into `xs/protobuf.h` and `xs/protobuf.c`.
    -   **Makefile.PL Enhancements:** Updated to support building and linking C tests, incorporating flags from `ExtUtils::Embed`, and handling both `.c` and `.cc` source files.
    -   **XS File Reorganization:** Restructured XS files to mirror the Python UPB extension's layout (e.g., `message.c`, `descriptor.c`). Removed older, monolithic `.xs` files.
    -   **Typemap Expansion:** Added extensive typemap entries in `perl/typemap` to handle conversions between Perl objects and various `const upb_*Def*` pointers.
    -   **Descriptor Tests:** Added a new test suite `t/02-descriptor.t` to validate descriptor loading and accessor methods.
    -   **Documentation:** Updated development plans and guidelines (`ProtobufPlan.md`, `xs_learnings.md`, etc.) to reflect the C-first strategy, new testing methods, and lessons learned.
    -   **Build Cleanup:** Removed `ppport.h` from `.gitignore` as it's no longer used, due to `-DPERL_NO_PPPORT` being set in `Makefile.PL`.
    
    This C-first approach allows for more isolated and reliable testing of the core logic interacting with the UPB library before higher-level Perl APIs are built upon it.


## 0.10	2025-12-17

commit 1ef20ade24603573905cb0376670945f1ab5d829
Author: C.J. Collier 
Date:   Wed Dec 17 07:08:29 2025 +0000

    feat(perl): Implement C-level tests and core XS utils
    
    This commit introduces a C-level testing framework for the XS layer and implements key components:
    
    1.  **C-Level Tests (`t/c/`)**:
        *   Added `t/c/Makefile` to build standalone C tests.
        *   Created `t/c/upb-perl-test.h` with macros for TAP-compliant C tests (`plan`, `ok`, `is`, `is_string`, `diag`).
        *   Implemented `t/c/01-cache.c` to test the object cache.
        *   Implemented `t/c/02-arena.c` to test `Protobuf::Arena` wrappers.
        *   Implemented `t/c/03-utils.c` to test string utility functions.
        *   Corrected include paths and diagnostic messages in C tests.
    
    2.  **XS Object Cache (`xs/protobuf.c`)**:
        *   Switched to using stringified pointers (`%p`) as hash keys for stability.
        *   Fixed a critical double-free bug in `PerlUpb_ObjCache_Delete` by removing an extra `SvREFCNT_dec` on the lookup key.
    
    3.  **XS Arena Wrapper (`xs/protobuf.c`)**:
        *   Corrected `PerlUpb_Arena_New` to use `newSVrv` and `PTR2IV` for opaque object wrapping.
        *   Corrected `PerlUpb_Arena_Get` to safely unwrap the arena pointer.
    
    4.  **Makefile.PL (`perl/Makefile.PL`)**:
        *   Added `-Ixs` to `INC` to allow C tests to find `t/c/upb-perl-test.h` and `xs/protobuf.h`.
        *   Added `LIBS` to link `libprotobuf_common.a` into the main `Protobuf.so`.
        *   Added C test targets `01-cache`, `02-arena`, `03-utils` to the test config in `MY::postamble`.
    
    5.  **Protobuf.pm (`perl/lib/Protobuf.pm`)**:
        *   Added `use XSLoader;` to load the compiled XS code.
    
    6.  **New files `xs/util.h`**:
        *   Added initial type conversion function.
    
    These changes establish a foundation for testing the C-level interface with UPB and fix crucial bugs in the object cache implementation.


## 0.09	2025-12-17

commit 07d61652b032b32790ca2d3848243f9d75ea98f4
Author: C.J. Collier 
Date:   Wed Dec 17 04:53:34 2025 +0000

    feat(perl): Build system and C cache test for Perl XS
    
    This commit introduces the foundational pieces for the Perl XS implementation, focusing on the build system and a C-level test for the object cache.
    
    -   **Makefile.PL:**
        -   Refactored C test compilation rules in `MY::postamble` to use a hash (`$c_test_config`) for better organization and test-specific flags.
        -   Integrated `ExtUtils::Embed` to provide necessary compiler and linker flags for embedding the Perl interpreter, specifically for the `t/c/01-cache.c` test.
        -   Correctly constructs the path to the versioned Perl library (`libperl.so.X.Y.Z`) using `$Config{archlib}` and `$Config{libperl}` to ensure portability.
        -   Removed `VERSION_FROM` and `ABSTRACT_FROM` to avoid dependency on `.pm` files for now.
    
    -   **C Cache Test (t/c/01-cache.c):**
        -   Added a C test to exercise the object cache functions implemented in `xs/protobuf.c`.
        -   Includes tests for adding, getting, deleting, and weak reference behavior.
    
    -   **XS Cache Implementation (xs/protobuf.c, xs/protobuf.h):**
        -   Implemented `PerlUpb_ObjCache_Init`, `PerlUpb_ObjCache_Add`, `PerlUpb_ObjCache_Get`, `PerlUpb_ObjCache_Delete`, and `PerlUpb_ObjCache_Destroy`.
        -   Uses a Perl hash (`HV*`) for the cache.
        -   Keys are string representations of the C pointers, created using `snprintf` with `"%llx"`.
        -   Values are weak references (`sv_rvweaken`) to the Perl objects (`SV*`).
        -   `PerlUpb_ObjCache_Get` now correctly returns an incremented reference to the original SV, not a copy.
        -   `PerlUpb_ObjCache_Destroy` now clears the hash before decrementing its refcount.
    
    -   **t/c/upb-perl-test.h:**
        -   Updated `is_sv` to perform direct pointer comparison (`got == expected`).
    
    -   **Minor:** Added `util.h` (currently empty), updated `typemap`.
    
    These changes establish a working C-level test environment for the XS components.


## 0.08	2025-12-17

commit d131fd22ea3ed8158acb9b0b1fe6efd856dc380e
Author: C.J. Collier 
Date:   Wed Dec 17 02:57:48 2025 +0000

    feat(perl): Update docs and core XS files
    
    - Explicitly add TDD cycle to ProtobufPlan.md.
    - Clarify mirroring of Python implementation in upb-interfacing.md for both C and Perl layers.
    - Branch and adapt python/protobuf.h and python/protobuf.c to perl/xs/protobuf.h and perl/xs/protobuf.c, including the object cache implementation. Removed old cache.* files.
    - Create initial C test for the object cache in t/c/01-cache.c.


## 0.07	2025-12-17

commit 56fd6862732c423736a2f9a9fb1a2816fc59e9b0
Author: C.J. Collier 
Date:   Wed Dec 17 01:09:18 2025 +0000

    feat(perl): Align Perl UPB architecture docs with Python
    
    Updates the Perl Protobuf architecture documents to more closely align with the design and implementation strategies used in the Python UPB extension.
    
    Key changes:
    
    -   **Object Caching:** Mandates a global, per-interpreter cache using weak references for all UPB-derived objects, mirroring Python's `PyUpb_ObjCache`.
    -   **Descriptor Containers:** Introduces a new document outlining the plan to use generic XS container types (Sequence, ByNameMap, ByNumberMap) with vtables to handle collections of descriptors, similar to Python's `descriptor_containers.c`.
    -   **Testing:** Adds a note to the testing strategy to port relevant test cases from the Python implementation to ensure feature parity.


## 0.06	2025-12-17

commit 6009ce6ab64eccce5c48729128e5adf3ef98e9ae
Author: C.J. Collier 
Date:   Wed Dec 17 00:28:20 2025 +0000

    feat(perl): Implement object caching and fix build
    
    This commit introduces several key improvements to the Perl XS build system and core functionality:
    
    1.  **Object Caching:**
        *   Introduces `xs/protobuf.c` and `xs/protobuf.h` to implement a caching mechanism (`protobuf_c_to_perl_obj`) for wrapping UPB C pointers into Perl objects. This uses a hash and weak references to ensure object identity and prevent memory leaks.
        *   Updates the `typemap` to use `protobuf_c_to_perl_obj` for `upb_MessageDef *` output, ensuring descriptor objects are cached.
        *   Corrected `sv_weaken` to the correct `sv_rvweaken` function.
    
    2.  **Makefile.PL Enhancements:**
        *   Switched to using the Bazel-generated UPB descriptor sources from `bazel-bin/src/google/protobuf/_virtual_imports/descriptor_proto/google/protobuf/`.
        *   Updated `INC` paths to correctly locate the generated headers.
        *   Refactored `MY::dynamic_lib` to ensure the static library `libprotobuf_common.a` is correctly linked into each generated `.so` module, resolving undefined symbol errors.
        *   Overrode `MY::test` to use `prove -b -j$(nproc) t/*.t xt/*.t` for running tests.
        *   Cleaned up `LIBS` and `LDDLFLAGS` usage.
    
    3.  **Documentation:**
        *   Updated `ProtobufPlan.md` to reflect the current status and design decisions.
        *   Reorganized architecture documents into subdirectories.
        *   Added `object-caching.md` and `c-perl-interface.md`.
        *   Updated `llm-guidance.md` with notes on `upb/upb.h` and `sv_rvweaken`.
    
    4.  **Testing:**
        *   Fixed `xt/03-moo_immutable.t` to skip tests if no Moo modules are found.
    
    This resolves the build issues and makes the core test suite pass.


## 0.05	2025-12-16

commit 177d2f3b2608b9d9c415994e076a77d8560423b8
Author: C.J. Collier 
Date:   Tue Dec 16 19:51:36 2025 +0000

    Refactor: Rename namespace to Protobuf, build system and doc updates
    
    This commit refactors the primary namespace from `ProtoBuf` to `Protobuf`
    to align with the style guide. This involves renaming files, directories,
    and updating package names within all Perl and XS files.
    
    **Namespace Changes:**
    
    *   Renamed `perl/lib/ProtoBuf` to `perl/lib/Protobuf`.
    *   Moved and updated `ProtoBuf.pm` to `Protobuf.pm`.
    *   Moved and updated `ProtoBuf::Descriptor` to `Protobuf::Descriptor` (.pm & .xs).
    *   Removed other `ProtoBuf::*` stubs (Arena, DescriptorPool, Message).
    *   Updated `MODULE` and `PACKAGE` in `Descriptor.xs`.
    *   Updated `NAME`, `*_FROM` in `perl/Makefile.PL`.
    *   Replaced `ProtoBuf` with `Protobuf` throughout `perl/typemap`.
    *   Updated namespaces in test files `t/01-load-protobuf-descriptor.t` and `t/02-descriptor.t`.
    *   Updated namespaces in all documentation files under `perl/doc/`.
    *   Updated paths in `perl/.gitignore`.
    
    **Build System Enhancements (Makefile.PL):**
    
    *   Included `xs/*.c` files in the common object files list.
    *   Added `-I.` to the `INC` paths.
    *   Switched from `MYEXTLIB` to `LIBS => ['-L$(CURDIR) -lprotobuf_common']` for linking.
    *   Removed custom keys passed to `WriteMakefile` for postamble.
    *   `MY::postamble` now sources variables directly from the main script scope.
    *   Added `all :: ${common_lib}` dependency in `MY::postamble`.
    *   Added `t/c/load_test.c` compilation rule in `MY::postamble`.
    *   Updated `clean` target to include `blib`.
    *   Added more modules to `TEST_REQUIRES`.
    *   Removed the explicit `PM` and `XS` keys from `WriteMakefile`, relying on `XSMULTI => 1`.
    
    **New Files:**
    
    *   `perl/lib/Protobuf.pm`
    *   `perl/lib/Protobuf/Descriptor.pm`
    *   `perl/lib/Protobuf/Descriptor.xs`
    *   `perl/t/01-load-protobuf-descriptor.t`
    *   `perl/t/02-descriptor.t`
    *   `perl/t/c/load_test.c`: Standalone C test for UPB.
    *   `perl/xs/types.c` & `perl/xs/types.h`: For Perl/C type conversions.
    *   `perl/doc/architecture/upb-interfacing.md`
    *   `perl/xt/03-moo_immutable.t`: Test for Moo immutability.
    
    **Deletions:**
    
    *   Old test files: `t/00_load.t`, `t/01_basic.t`, `t/02_serialize.t`, `t/03_message.t`, `t/04_descriptor_pool.t`, `t/05_arena.t`, `t/05_message.t`.
    *   Removed `lib/ProtoBuf.xs` as it's not needed with `XSMULTI`.
    
    **Other:**
    
    *   Updated `test_descriptor.bin` (binary change).
    *   Significant content updates to markdown documentation files in `perl/doc/architecture` and `perl/doc/internal` reflecting the new architecture and learnings.


## 0.04	2025-12-14

commit 92de5d482c8deb9af228f4b5ce31715d3664d6ee
Author: C.J. Collier 
Date:   Sun Dec 14 21:28:19 2025 +0000

    feat(perl): Implement Message object creation and fix lifecycles
    
    This commit introduces the basic structure for `ProtoBuf::Message` object
    creation, linking it with `ProtoBuf::Descriptor` and `ProtoBuf::DescriptorPool`,
    and crucially resolves a SEGV by fixing object lifecycle management.
    
    Key Changes:
    
    1.  **`ProtoBuf::Descriptor`:** Added `_pool` attribute to hold a strong
        reference to the parent `ProtoBuf::DescriptorPool`. This is essential to
        prevent the pool and its C `upb_DefPool` from being garbage collected
        while a descriptor is still in use.
    
    2.  **`ProtoBuf::DescriptorPool`:**
        *   `find_message_by_name`: Now passes the `$self` (the pool object) to the
            `ProtoBuf::Descriptor` constructor to establish the lifecycle link.
        *   XSUB `pb_dp_find_message_by_name`: Updated to accept the pool `SV*` and
            store it in the descriptor's `_pool` attribute.
        *   XSUB `_load_serialized_descriptor_set`: Renamed to avoid clashing with the
            Perl method name. The Perl wrapper now correctly calls this internal XSUB.
        *   `DEMOLISH`: Made safer by checking for attribute existence.
    
    3.  **`ProtoBuf::Message`:**
        *   Implemented using Moo with lazy builders for `_upb_arena` and
            `_upb_message`.
        *   `_descriptor` is a required argument to `new()`.
        *   XS functions added for creating the arena (`pb_msg_create_arena`) and
            the `upb_Message` (`pb_msg_create_upb_message`).
        *   `pb_msg_create_upb_message` now extracts the `upb_MessageDef*` from the
            descriptor and uses `upb_MessageDef_MiniTable()` to get the minitable
            for `upb_Message_New()`.
        *   `DEMOLISH`: Added to free the message's arena.
    
    4.  **`Makefile.PL`:**
        *   Added `-g` to `CCFLAGS` for debugging symbols.
        *   Added Perl CORE include path to `MY::postamble`'s `base_flags`.
    
    5.  **Tests:**
        *   `t/04_descriptor_pool.t`: Updated to check the structure of the
            returned `ProtoBuf::Descriptor`.
        *   `t/05_message.t`: Now uses a descriptor obtained from a real pool to
            test `ProtoBuf::Message->new()`.
    
    6.  **Documentation:**
        *   Updated `ProtobufPlan.md` to reflect progress.
        *   Updated several files in `doc/architecture/` to match the current
            implementation details, especially regarding arena management and object
            lifecycles.
        *   Added `doc/internal/development_cycle.md` and `doc/internal/xs_learnings.md`.
    
    With these changes, the SEGV is resolved, and message objects can be successfully
    created from descriptors.


## 0.03	2025-12-14

commit 6537ad23e93680c2385e1b571d84ed8dbe2f68e8
Author: C.J. Collier 
Date:   Sun Dec 14 20:23:41 2025 +0000

    Refactor(perl): Object-Oriented DescriptorPool with Moo
    
    This commit refactors the `ProtoBuf::DescriptorPool` to be fully object-oriented using Moo, and resolves several issues related to XS, typemaps, and test data.
    
    Key Changes:
    
    1.  **Moo Object:** `ProtoBuf::DescriptorPool.pm` now uses `Moo` to define the class. The `upb_DefPool` pointer is stored as a lazy attribute `_upb_defpool`.
    2.  **XS Lifecycle:** `DescriptorPool.xs` now has `pb_dp_create_pool` called by the Moo builder and `pb_dp_free_pool` called from `DEMOLISH` to manage the `upb_DefPool` lifecycle per object.
    3.  **Typemap:** The `perl/typemap` file has been significantly updated to handle the conversion between the `ProtoBuf::DescriptorPool` Perl object and the `upb_DefPool *` C pointer. This includes:
        *   Mapping `upb_DefPool *` to `T_PTR`.
        *   An `INPUT` section for `ProtoBuf::DescriptorPool` to extract the pointer from the object's hash, triggering the lazy builder if needed via `call_method`.
        *   An `OUTPUT` section for `upb_DefPool *` to convert the pointer back to a Perl integer, used by the builder.
    4.  **Method Renaming:** `add_file_descriptor_set_binary` is now `load_serialized_descriptor_set`.
    5.  **Test Data:**
        *   Added `perl/t/data/test.proto` with a sample message and enum.
        *   Generated `perl/t/data/test_descriptor.bin` using `protoc`.
        *   Removed `t/data/` from `.gitignore` to ensure test data is versioned.
    6.  **Test Update:** `t/04_descriptor_pool.t` is updated to use the new OO interface, load the generated descriptor set, and check for message definitions.
    7.  **Build Fixes:**
        *   Corrected `#include` paths in `DescriptorPool.xs` to be relative to the `upb/` directory (e.g., `upb/wire/decode.h`).
        *   Added `-I../upb` to `CCFLAGS` in `Makefile.PL`.
        *   Reordered `INC` paths in `Makefile.PL` to prioritize local headers.
    
    **Note:** While tests now pass in some environments, a SEGV issue persists in `make test` runs, indicating a potential memory or lifecycle issue within the XS layer that needs further investigation.


## 0.02	2025-12-14

commit 6c9a6f1a5f774dae176beff02219f504ea3a6e07
Author: C.J. Collier 
Date:   Sun Dec 14 20:13:09 2025 +0000

    Fix(perl): Correct UPB build integration and generated file handling
    
    This commit resolves several issues to achieve a successful build of the Perl extension:
    
    1.  **Use Bazel Generated Files:** Switched from compiling UPB's stage0 descriptor.upb.c to using the Bazel-generated `descriptor.upb.c` and `descriptor.upb_minitable.c` located in `bazel-bin/src/google/protobuf/_virtual_imports/descriptor_proto/google/protobuf/`.
    2.  **Updated Include Paths:** Added the `bazel-bin` path to `INC` in `WriteMakefile` and to `base_flags` in `MY::postamble` to ensure the generated headers are found during both XS and static library compilation.
    3.  **Removed Stage0:** Removed references to `UPB_STAGE0_DIR` and no longer include headers or source files from `upb/reflection/stage0/`.
    4.  **-fPIC:** Explicitly added `-fPIC` to `CCFLAGS` in `WriteMakefile` and ensured `$(CCFLAGS)` is used in the custom compilation rules in `MY::postamble`. This guarantees all object files in the static library are compiled with position-independent code, resolving linker errors when creating the shared objects for the XS modules.
    5.  **Refined UPB Sources:** Used `File::Find` to recursively find UPB C sources, excluding `/conformance/` and `/reflection/stage0/` to avoid conflicts and unnecessary compilations.
    6.  **Arena Constructor:** Modified `ProtoBuf::Arena::pb_arena_new` XSUB to accept the class name argument passed from Perl, making it a proper constructor.
    7.  **.gitignore:** Added patterns to `perl/.gitignore` to ignore generated C files from XS (`lib/*.c`, `lib/ProtoBuf/*.c`), the copied `src_google_protobuf_descriptor.pb.cc`, and the `t/data` directory.
    8.  **Build Documentation:** Updated `perl/doc/architecture/upb-build-integration.md` to reflect the new build process, including the Bazel prerequisite, include paths, `-fPIC` usage, and `File::Find`.
    
    Build Steps:
    1.  `bazel build //src/google/protobuf:descriptor_upb_proto` (from repo root)
    2.  `cd perl`
    3.  `perl Makefile.PL`
    4.  `make`
    5.  `make test` (Currently has expected failures due to missing test data implementation).


## 0.01	2025-12-14

commit 3e237e8a26442558c94075766e0d4456daaeb71d
Author: C.J. Collier 
Date:   Sun Dec 14 19:34:28 2025 +0000

    feat(perl): Initialize Perl extension scaffold and build system
    
    This commit introduces the `perl/` directory, laying the groundwork for the Perl Protocol Buffers extension. It includes the essential build files, linters, formatter configurations, and a vendored Devel::PPPort for XS portability.
    
    Key components added:
    
    *   **`Makefile.PL`**: The core `ExtUtils::MakeMaker` build script. It's configured to:
        *   Build a static library (`libprotobuf_common.a`) from UPB, UTF8_Range, and generated protobuf C/C++ sources.
        *   Utilize `XSMULTI => 1` to create separate shared objects for `ProtoBuf`, `ProtoBuf::Arena`, and `ProtoBuf::DescriptorPool`.
        *   Link each XS module against the common static library.
        *   Define custom compilation rules in `MY::postamble` to handle C vs. C++ flags and build the static library.
        *   Set up include paths for the project root, UPB, and other dependencies.
    
    *   **XS Stubs (`.xs` files)**:
        *   `lib/ProtoBuf.xs`: Placeholder for the main module's XS functions.
        *   `lib/ProtoBuf/Arena.xs`: XS interface for `upb_Arena` management.
        *   `lib/ProtoBuf/DescriptorPool.xs`: XS interface for `upb_DefPool` management.
    
    *   **Perl Module Stubs (`.pm` files)**:
        *   `lib/ProtoBuf.pm`: Main module, loads XS.
        *   `lib/ProtoBuf/Arena.pm`: Perl class for Arenas.
        *   `lib/ProtoBuf/DescriptorPool.pm`: Perl class for Descriptor Pools.
        *   `lib/ProtoBuf/Message.pm`: Base class for messages (TBD).
    
    *   **Configuration Files**:
        *   `.gitignore`: Ignores build artifacts, editor files, etc.
        *   `.perlcriticrc`: Configures Perl::Critic for static analysis.
        *   `.perltidyrc`: Configures perltidy for code formatting.
    
    *   **`Devel::PPPort`**: Vendored version 3.72 to generate `ppport.h` for XS compatibility across different Perl versions.
    
    *   **`typemap`**: Custom typemap for XS argument/result conversion.
    
    *   **Documentation (`doc/`)**: Initial architecture and plan documents.
    
    This provides a solid foundation for developing the UPB-based Perl extension.


22 December, 2025 01:32AM by C.J. Collier

December 21, 2025

Ian Jackson

Debian’s git transition

tl;dr:

There is a Debian git transition plan. It’s going OK so far but we need help, especially with outreach and updating Debian’s documentation.

Goals of the Debian git transition project

  1. Everyone who interacts with Debian source code should be able to do so entirely in git.

That means, more specifically:

  1. All examination and edits to the source should be performed via normal git operations.

  2. Source code should be transferred and exchanged as git data, not tarballs. git should be the canonical form everywhere.

  3. Upstream git histories should be re-published, traceably, as part of formal git releases published by Debian.

  4. No-one should have to learn about Debian Source Packages, which are bizarre, and have been obsoleted by modern version control.

This is very ambitious, but we have come a long way!

Achievements so far, and current status

We have come a very long way. But, there is still much to do - especially, the git transition team needs your help with adoption, developer outreach, and developer documentation overhaul.

We’ve made big strides towards goals 1 and 4. Goal 2 is partially achieved: we currently have dual running. Goal 3 is within our reach but depends on widespread adoption of tag2upload (and/or dgit push).

Downstreams and users can obtain the source code of any Debian package in git form. (dgit clone, 2013). They can then work with this source code completely in git, including building binaries, merging new versions, even automatically (eg Raspbian, 2016), and all without having to deal with source packages at all (eg Wikimedia 2025).
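
As a minimal sketch of that downstream flow (the package name is illustrative, and this assumes dgit and the usual build tools are installed):

# fetch the canonical git view of a package from unstable
dgit clone hello sid
cd hello
git log --oneline -5           # inspect history with normal git operations
dpkg-buildpackage -b -uc -us   # build binaries straight from the git tree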

A Debian maintainer can maintain their own package entirely in git. They can obtain upstream source code from git, and do their packaging work in git (git-buildpackage, 2006).

Every Debian maintainer can (and should!) release their package from git reliably and in a standard form (dgit push, 2013; tag2upload, 2025). This is not only more principled, but also more convenient, and with better UX, than pre-dgit tooling like dput.

Indeed a Debian maintainer can now often release their changes to Debian, from git, using only git branches (so no tarballs). Releasing to Debian can be simply pushing a signed tag (tag2upload, 2025).
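
A rough sketch of such a release (the version is illustrative; depending on your branch layout you may need extra options, see git-debpush(1)):

dch -r                                     # finalise debian/changelog for the upload
git commit -a -m 'Finalise 1.2-3 for unstable'
git debpush                                # create and push the signed tag; the tag2upload service does the rest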

A Debian maintainer can maintain a stack of changes to upstream source code in git (gbp pq 2009). They can even maintain such a delta series as a rebasing git branch, directly buildable, and use normal git rebase style operations to edit their changes, (git-dpm, 2010; git-debrebase, 2018)

An authorised Debian developer can do a modest update to any package in Debian, even one maintained by someone else, working entirely in git in a standard and convenient way (dgit, 2013).
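
A minimal sketch of such an update, working only in git (package name and change are hypothetical; dgit-nmu-simple(7) describes the workflow properly):

dgit clone wibble sid            # hypothetical package
cd wibble
$EDITOR src/whatever.c           # fix the bug with ordinary edits
git commit -a -m 'Fix crash on startup'
dch --nmu                        # add an NMU changelog entry
dch -r                           # finalise it for upload
git commit -a -m 'Changelog for NMU'
dgit push-source                 # upload, entirely from git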

Debian contributors can share their work-in-progress on git forges and collaborate using merge requests, git based code review, and so on. (Alioth, 2003; Salsa, 2018.)

Core engineering principle

The Debian git transition project is based on one core engineering principle:

Every Debian Source Package can be losslessly converted to and from git.

In order to transition away from Debian Source Packages, we need to gateway between the old dsc approach, and the new git approach.

This gateway obviously needs to be bidirectional: source packages uploaded with legacy tooling like dput need to be imported into a canonical git representation; and of course git branches prepared by developers need to be converted to source packages for the benefit of legacy downstream systems (such as the Debian Archive and apt source).

This bidirectional gateway is implemented in src:dgit, and is allowing us to gradually replace dsc-based parts of the Debian system with git-based ones.

Correspondence between dsc and git

A faithful bidirectional gateway must define an invariant:

The canonical git tree, corresponding to a .dsc, is the tree resulting from dpkg-source -x.

This canonical form is sometimes called the “dgit view”. It’s sometimes not the same as the maintainer’s git branch, because many maintainers are still working with “patches-unapplied” git branches. More on this below.

(For 3.0 (quilt) .dscs, the canonical git tree doesn’t include the quilt .pc directory.)
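
Concretely, for a hypothetical package and version, the canonical tree is the one you get from unpacking the source package:

dpkg-source -x hello_2.10-3.dsc
# hello-2.10/ now holds the upstream source with Debian's changes applied, plus debian/;
# the canonical git tree is this unpacked tree (for 3.0 (quilt), excluding the .pc
# directory dpkg-source creates, as noted above)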

Patches-applied vs patches-unapplied

The canonical git format is “patches applied”. That is:

If Debian has modified the upstream source code, a normal git clone of the canonical branch gives the modified source tree, ready for reading and building.

Many Debian maintainers keep their packages in a different git branch format, where the changes made by Debian, to the upstream source code, are in actual patch files in a debian/patches/ subdirectory.

Patches-applied has a number of important advantages over patches-unapplied:

  • It is familiar to, and doesn’t trick, outsiders to Debian. Debian insiders radically underestimate how weird “patches-unapplied” is. Even expert software developers can get very confused or even accidentally build binaries without security patches!

  • Making changes can be done with just normal git commands, eg git commit. Many Debian insiders working with patches-unapplied are still using quilt(1), a footgun-rich contraption for working with patch files!

  • When developing, one can make changes to upstream code, and to Debian packaging, together, without ceremony. There is no need to switch back and forth between patch queue and packaging branches (as with gbp pq), no need to “commit” patch files, etc. One can always edit every file and commit it with git commit.

The downside is that, with the (bizarre) 3.0 (quilt) source format, the patch files in debian/patches/ must somehow be kept up to date. Nowadays though, tools like git-debrebase and git-dpm (and dgit for NMUs) make it very easy to work with patches-applied git branches. git-debrebase can deal very ergonomically even with big patch stacks.
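
For instance, a routine patches-applied change with git-debrebase can be as simple as this sketch (file and commit message are illustrative):

$EDITOR src/parser.c                        # edit upstream code directly
git commit -a -m 'Fix off-by-one in parser'
git debrebase                               # launder the branch into upstream + a tidy delta queue (see git-debrebase(1))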

(For smaller packages which usually have no patches, plain git merge with an upstream git branch, and a much simpler dsc format, sidesteps the problem entirely.)

Prioritising Debian’s users (and other outsiders)

We want everyone to be able to share and modify the software that they interact with. That means we should make source code truly accessible, on the user’s terms.

Many of Debian’s processes assume everyone is an insider. It’s okay that there are Debian insiders and that people feel part of something that they worked hard to become involved with. But lack of perspective can lead to software which fails to uphold our values.

Our source code practices — in particular, our determination to share properly (and systematically) — are a key part of what makes Debian worthwhile at all. Like Debian’s installer, we want our source code to be useable by Debian outsiders.

This is why we have chosen to privilege a git branch format which is more familiar to the world at large, even if it’s less popular in Debian.

Consequences, some of which are annoying

The requirement that the conversion be bidirectional, lossless, and context-free can be inconvenient.

For example, we cannot support .gitattributes which modify files during git checkin and checkout. .gitattributes cause the meaning of a git tree to depend on the context, in possibly arbitrary ways, so the conversion from git to source package wouldn’t be stable. And, worse, some source packages might not be representable in git at all.

Another example: Maintainers often have existing git branches for their packages, generated with pre-dgit tooling which is less careful and less principled than ours. That can result in discrepancies between git and dsc, which need to be resolved before a proper git-based upload can succeed.

That some maintainers use patches-applied, and some patches-unapplied, means that there has to be some kind of conversion to a standard git representation. Choosing the less-popular patches-applied format as the canonical form means that many packages need their git representation converted. It also means that user- and outsider-facing branches from {browse,git}.dgit.d.o and dgit clone are not always compatible with maintainer branches on Salsa. User-contributed changes need cherry-picking rather than merging, or conversion back to the maintainer format. The good news is that dgit can automate much of this, and the manual parts are usually easy git operations.

Distributing the source code as git

Our source code management should be normal, modern, and based on git. That means the Debian Archive is obsolete and needs to be replaced with a set of git repositories.

The replacement repository for source code formally released to Debian is *.dgit.debian.org. This contains all the git objects for every git-based upload since 2013, including the signed tag for each released package version.

The plan is that it will contain a git view of every uploaded Debian package, by centrally importing all legacy uploads into git.

Tracking the relevant git data, when changes are made in the legacy Archive

Currently, many critical source code management tasks are done by changes to the legacy Debian Archive, which works entirely with dsc files (and the associated tarballs etc). The contents of the Archive are therefore still an important source of truth. But, the Archive’s architecture means it cannot sensibly directly contain git data.

To track changes made in the Archive, we added the Dgit: field to the .dsc of a git-based upload (2013). This declares which git commit this package was converted from, and where those git objects can be obtained.

Thus, given a Debian Source Package from a git-based upload, it is possible for the new git tooling to obtain the equivalent git objects. If the user is going to work in git, there is no need for any tarballs to be downloaded: the git data could be obtained from the depository using the git protocol.

The signed tags, available from the git depository, have standardised metadata which gives traceability back to the uploading Debian contributor.

Why *.dgit.debian.org is not Salsa

We need a git depository - a formal, reliable and permanent git repository of source code actually released to Debian.

Git forges like Gitlab can be very convenient. But Gitlab is not sufficiently secure, and too full of bugs, to be the principal and only archive of all our source code. (The “open core” business model of the Gitlab corporation, and the constant-churn development approach, are critical underlying problems.)

Our git depository lacks forge features like Merge Requests. But:

  • It is dependable, both in terms of reliability and security.
  • It is append-only: once something is pushed, it is permanently recorded.
  • Its access control is precisely that of the Debian Archive.
  • Its ref namespace is standardised and corresponds to Debian releases.
  • Pushes are authorised by PGP signatures, not ssh keys, so they are traceable.

The dgit git depository outlasted Alioth and it may well outlast Salsa.

We need both a good forge, and the *.dgit.debian.org formal git depository.

Roadmap

In progress

Right now we are quite focused on tag2upload.

We are working hard on eliminating the remaining issues that we feel need to be addressed before declaring the service out of beta.

Future Technology

Whole-archive dsc importer

Currently, the git depository only has git data for git-based package updates (tag2upload and dgit push). Legacy dput-based uploads are not currently present there. This means that the git-based and legacy uploads must be resolved client-side, by dgit clone.

We will want to start importing legacy uploads to git.

Then downstreams and users will be able to get the source code for any package simply with git clone, even if the maintainer is using legacy upload tools like dput.

Support for git-based uploads to security.debian.org

Security patching is a task which would particularly benefit from better and more formal use of git. git-based approaches to applying and backporting security patches are much more convenient than messing about with actual patch files.

Currently, one can use git to help prepare a security upload, but it often involves starting with a dsc import (which lacks the proper git history) or figuring out a package maintainer’s unstandardised git usage conventions on Salsa.

And it is not possible to properly perform the security release in git.

Internal Debian consumers switch to getting source from git

Buildds, QA work such as lintian checks, and so on, could be simpler if they don’t need to deal with source packages.

And since git is actually the canonical form, we want them to use it directly.

Problems for the distant future

For decades, Debian has been built around source packages. Replacing them is a long and complex process. Certainly source packages are going to continue to be supported for the foreseeable future.

There are no doubt going to be unanticipated problems. There are also foreseeable issues: for example, perhaps there are packages that work very badly when represented in git. We think we can rise to these challenges as they come up.

Mindshare and adoption - please help!

We and our users are very pleased with our technology. It is convenient and highly dependable.

dgit in particular is superb, even if we say so ourselves. As technologists, we have been very focused on building good software, but it seems we have fallen short in the marketing department.

A rant about publishing the source code

git is the preferred form for modification.

Our upstreams are overwhelmingly using git. We are overwhelmingly using git. It is a scandal that for many packages, Debian does not properly, formally and officially publish the git history.

Properly publishing the source code as git means publishing it in a way that means that anyone can automatically and reliably obtain and build the exact source code corresponding to the binaries. The test is: could you use that to build a derivative?

Putting a package in git on Salsa is often a good idea, but it is not sufficient. No standard branch structure is enforced for git on Salsa, nor should it be (so the source can’t be automatically and reliably obtained); the tree is not in a standard form (so it can’t be automatically built); and it is not necessarily identical to the source package. So Vcs-Git fields, and git from Salsa, will never be sufficient to make a derivative.

Debian is not publishing the source code!

The time has come for proper publication of source code by Debian to no longer be a minority sport. Every maintainer of a package whose upstream is using git (which is nearly all packages nowadays) should be basing their work on upstream git, and properly publishing that via tag2upload or dgit.

And it’s not even difficult! The modern git-based tooling provides a far superior upload experience.

A common misunderstanding

dgit push is not an alternative to gbp pq or quilt. Nor is tag2upload. These upload tools complement your existing git workflow. They replace and improve source package building/signing and the subsequent dput. If you are using one of the usual git layouts on salsa, and your package is in good shape, you can adopt tag2upload and/or dgit push right away.

git-debrebase is distinct and does provide an alternative way to manage your git packaging, do your upstream rebases, etc.

Documentation

Debian’s documentation all needs to be updated, including particularly instructions for packaging, to recommend use of git-first workflows. Debian should not be importing git-using upstreams’ “release tarballs” into git. (Debian outsiders who discover this practice are typically horrified.) We should use only upstream git, work only in git, and properly release (and publish) in git form.

We, the git transition team, are experts in the technology, and can provide good suggestions. But we do not have the bandwidth to also engage in the massive campaigns of education and documentation updates that are necessary — especially given that (as with any programme for change) many people will be sceptical or even hostile.

So we would greatly appreciate help with writing and outreach.

Personnel

We consider ourselves the Debian git transition team.

Currently we are:

  • Ian Jackson. Author and maintainer of dgit and git-debrebase. Co-creator of tag2upload. Original author of dpkg-source, and inventor in 1996 of Debian Source Packages. Alumnus of the Debian Technical Committee.

  • Sean Whitton. Co-creator of the tag2upload system; author and maintainer of git-debpush. Co-maintainer of dgit. Debian Policy co-Editor. Former Chair of the Debian Technical Committee.

We wear the following hats related to the git transition:

You can contact us:

We do most of our heavy-duty development on Salsa.

Thanks

Particular thanks are due to Joey Hess, who, in the now-famous design session in Vaumarcus in 2013, helped invent dgit. Since then we have had a lot of support: most recently political support to help get tag2upload deployed, but also, over the years, helpful bug reports and kind words from our users, as well as translations and code contributions.

Many other people have contributed more generally to support for working with Debian source code in git. We particularly want to mention Guido Günther (git-buildpackage); and of course Alexander Wirt, Joerg Jaspert, Thomas Goirand and Antonio Terceiro (Salsa administrators); and before them the Alioth administrators.




21 December, 2025 11:24PM

Russell Coker

December 20, 2025

hackergotchi for Ritesh Raj Sarraf

Ritesh Raj Sarraf

Immutable Debian

Immutable Atomic Linux Distributions

Of late, I’ve been hearing a lot of (good) things about Immutable Linux Distributions, from friends, colleagues and mentors. Exploring them has been on my plate for some time. But given the nature of the subject, it kept getting delayed. The reasons are simple: I can only really judge such a product if I use it for some time, and it has to be on my primary daily driver machine.

Personal life, this year, has been quite challenging as well. Thus it got pushed until now.

Chrome OS

I’ve realized that I’ve been quite late to a lot of Linux parties. Containers, Docker, Kubernetes, Golang, Rust, Immutable Linux and many many more.

Late to the extent that I’ve had a Chrome Book lying at home for many months but never got to tinker with it at all.

Having used it for just around 2 weeks now, I can see what a great product Google built with it. In short, this is exactly how Linux desktop integration should be. The GUI integration is just top notch, and there’s consistency across all applications rendered on Chrome OS.

The integration of [X]Wayland and friends is equally good. Maybe Google should consider opensourcing all those components. IIRC, exo, sommelier, xwayland, ash and many more.

I was equally happy to see their Linux Development Environment offering on supported hardware. While tightly integrated, it still allows power users to tinker things around. I was quite impressed to see nested containers in crostini. Job well done.

All of this explains why there’s much buzz about Immutable Atomic Linux Distributions these days.

Then, there’s the Android integration, which is just awesome in case you care of it. Both libndk and libhoudini are well integrated and nicely usable.

Immutable Linux Distributions

This holiday season I wanted to find and spend some time catching up on stuff I had been prolonging.

I chose to explore this subject while trying to remain in familiar Debian land. So my first look was to see if there was any product derived out of the Debian base.

That brought me to Vanilla OS Orchid. This is a fresh out of oven project, recently switched to being based on Debian Sid. Previous iteration used Ubuntu as the base.

Vanilla OS turned out to be quite a good experience. The stock offering is built well enough to serve a general audience. And the framework is so wonderfully structured that seasoned users can tinker around with it without much fuss.

Vanilla OS uses an A/B partition model for how system updates are rolled out. When a new OTA update is pushed, it gets applied to the inactive A/B partition and activated at the next boot. If things break, the user has the option to switch back to the previous state. Just the usual set of expectations one would have of an immutable distribution.

What they’ve done beautifully is:

  • Integration of Device Mapper LVM for the A/B partitions
  • Linux Container OCI images to provision/flash the A/B partitions
  • The abroot utility, developed for A/B partition management
  • APX (Distrobox) integration for container workflows, with multiple Linux flavors
  • No sudo. Everything done via pkexec

But the most awesome thing I liked in Vanilla OS is custom images. This allows power users to easily tinker with the developer workflow and generate new images, tailored for their specific use cases. All of this is done leveraging the GitHub/GitLab CI/CD workflows, which I think is just plain awesome. Given that the payload is in OCI format, the CI/CD workflow simply generates new OCI images and publishes them to a registry. The same images are then pulled to the client as an OTA.

Hats off to this small team/community for doing such nice integration work, ultimately producing a superb Immutable Atomic Linux Distribution based on the Debian base.

Immutable Linux

My primary work machine has grown over the years, being on the rolling Debian Testing/Unstable channel. And I don’t feel much of an itch to ever reformat my (primary) machine in a hurry, no matter how great the counter offer is.

So that got me wondering how to get some of the bling of the immutable world that I’ve tasted (thanks, Chrome OS and Vanilla OS). With a fair idea of what they offer in features, I drew a line around what I’d want on my primary machine.

  • read-only rootfs
  • read-only /etc/

This also kinda hardens my system to the extent that I can’t accidentally cause catastrophic damage to it.

The feature I’m letting go of is the A/B Partition (rpm-ostree for Fedora land). While a good feature, having to integrate it into my current machine is going to be very very challenging.

I actually feel that the core assumption the Immutable Distros make, that all hardware is going to Just Work, is flawed. While Linux has substantially improved over the past years, it’s still hit or miss when introducing very recent hardware.

Immutable Linux is targeted at the novice user, who then won’t accidentally mess with the system. But what would the novice user do if they have issues with the recently purchased hardware that they are attempting to run (Immutable) Linux on?

Ritesh’s Immutable Debian

With the premise set, on to sailing in immutable land.

There’s another ground-breaking innovation that has been happening, which I think everyone is aware of, and may be using as well, directly or indirectly.

Artificial Intelligence

While I’ve only been a user for a couple of months as I draft this post, I’m now very much impressed with all this innovation. Being at the consumer end has me appreciating it for what it has offered thus far. And I haven’t even scratched the surface. I’m making attempts at developing an understanding of Machine Learning and Artificial Intelligence, but there’s a looonnngg way to go still.

What I’m appreciating the most is the availability of the AI Technology. It has helped me be more efficient. And thus I get to use the gain (time) with family.

To wrap up: what I tailored my primary OS into wouldn’t have been possible without assistance from AI.

With that, a disclaimer: the rest of this article was primarily drafted by my AI companion. It is going to serve me as a reference for the future, when I forget how all of this was structured.

System Architecture: Immutable Debian (Btrfs + MergerFS)

This system is a custom-hardened Immutable Workstation based on Debian Testing/Unstable. It utilizes native Btrfs properties and surgical VFS mounting to isolate the Operating System from persistent data.

1. Storage Strategy: Subvolume Isolation

The system resides on a LUKS-encrypted NVMe partition, using a flattened subvolume layout to separate the “Gold Master” OS from volatile and persistent data.

Mount Point    Subvolume Path       State   Purpose
/              /ROOTVOL             RO      The core OS image.
/etc           /ROOTVOL/etc         RO      System configuration (snapshot-capable).
/home/rrs      /ROOTVOL/home/rrs    RW      User data and Kitty terminal configs.
/var/lib       /ROOTVOL/var/lib     RW      Docker, Apt state, and system DBs.
/var/spool     /ROOTVOL/var/spool   RW      Mail queues and service state.
/swap          /ROOTVOL/swap        RW      Isolated path for the No_COW swapfile.
/disk-tmp      /ROOTVOL/disk-tmp    RW      MergerFS overflow tier.

1.1 /etc/fstab

$ cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# --- ROOT & BOOT ---
/dev/mapper/nvme0n1p3_crypt / btrfs autodefrag,compress=zstd,discard=async,noatime,defaults,ro 0 0
/dev/nvme0n1p2 /boot ext4 defaults 0 2
/dev/nvme0n1p1 /boot/efi vfat umask=0077 0 1
# --- SWAP ---
# Mount the "Portal" to the swap subvolume using UUID (Robust)
UUID=4473b40b-bb46-43d6-b69c-ef17bfcac41c /swap btrfs subvol=/ROOTVOL/swap,defaults,noatime 0 0
# Activate the swap file by path (Correct for files)
/swap/swapfile none swap defaults 0 0
# --- DATA / MEDIA ---
UUID=439e297a-96a5-4f81-8b3a-24559839539d /media/rrs/TOSHIBA btrfs noauto,compress=zstd,space_cache=v2,subvolid=5,subvol=/,user
# --- MERGERFS ---
# --- DISK-TMP (MergerFS Overflow Tier) ---
# Ensure this ID matches your actual disk-tmp subvolume
UUID=4473b40b-bb46-43d6-b69c-ef17bfcac41c /disk-tmp btrfs subvolid=417,discard=async,defaults,noatime,compress=zstd 0 0
tmpfs /ram-tmp tmpfs defaults 0 0
/ram-tmp:/disk-tmp /tmp fuse.mergerfs x-systemd.requires=/ram-tmp,x-systemd.requires=/disk-tmp,defaults,allow_other,use_ino,nonempty,minfreespace=1G,category.create=all,moveonenospc=true 0 0
# --- IMMUTABILITY PERSISTENCE LAYERS ---
# We explicitly mount these subvolumes so they remain Writable later.
# UUID is the same as your /var/lib entry (your main Btrfs volume).
# 1. /var/lib (Docker, Apt state) - ID 50659
UUID=4473b40b-bb46-43d6-b69c-ef17bfcac41c /var/lib btrfs subvolid=50659,discard=async,defaults,noatime,compress=zstd 0 0
# 2. /home/rrs (User Data) - ID 13032
UUID=4473b40b-bb46-43d6-b69c-ef17bfcac41c /home/rrs btrfs subvolid=13032,discard=async,defaults,noatime,compress=zstd 0 0
# 3. /etc (System Config) - ID 13030
UUID=4473b40b-bb46-43d6-b69c-ef17bfcac41c /etc btrfs subvolid=13030,discard=async,defaults,noatime,compress=zstd,ro 0 0
# 4. /var/log (Logs) - ID 406
UUID=4473b40b-bb46-43d6-b69c-ef17bfcac41c /var/log btrfs subvolid=406,discard=async,defaults,noatime,compress=zstd 0 0
# 5. /var/cache (Apt Cache) - ID 409
UUID=4473b40b-bb46-43d6-b69c-ef17bfcac41c /var/cache btrfs subvolid=409,discard=async,defaults,noatime,compress=zstd 0 0
# 6. /var/tmp (Temp files) - ID 401
UUID=4473b40b-bb46-43d6-b69c-ef17bfcac41c /var/tmp btrfs subvolid=401,discard=async,defaults,noatime,compress=zstd 0 0
# /var/spool
UUID=4473b40b-bb46-43d6-b69c-ef17bfcac41c /var/spool btrfs subvolid=50689,discard=async,defaults,noatime,compress=zstd 0 0

2. Tiered Memory Model (/tmp)

To balance performance and capacity, /tmp is managed via MergerFS:

  • Tier 1 (RAM): tmpfs mounted at /ram-tmp.
  • Tier 2 (Disk): Btrfs subvolume mounted at /disk-tmp.
  • Logic: Files are written to RAM first. If RAM falls below 1GB available, files spill over to the Btrfs disk tier.
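
A quick way to see the tiers in action (standard util-linux/coreutils commands; output will obviously differ per machine):

findmnt /tmp                     # shows the fuse.mergerfs union of /ram-tmp and /disk-tmp
df -h /ram-tmp /disk-tmp /tmp    # watch which tier is actually absorbing the data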

3. Hibernation & Swap Logic

  • Size: 33 GiB (Configured for Suspend-to-Disk with 24GB RAM).
  • Attribute: The /swap subvolume is marked No_COW (+C).
  • Kernel Integration:
    • resume=UUID=... (Points to the unlocked LUKS container).
    • resume_offset=... (Physical extent mapping for Btrfs).
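
For reference, a minimal sketch of how such a No_COW swapfile and its resume_offset can be produced on Btrfs (size and path mirror the layout above; the map-swapfile subcommand needs a reasonably recent btrfs-progs):

truncate -s 0 /swap/swapfile
chattr +C /swap/swapfile                                 # disable COW before any data is written
fallocate -l 33G /swap/swapfile
chmod 600 /swap/swapfile
mkswap /swap/swapfile
swapon /swap/swapfile
btrfs inspect-internal map-swapfile -r /swap/swapfile    # prints the resume_offset value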

3.1 systemd sleep/Hibernation

$ cat /etc/systemd/sleep.conf.d/sleep.conf
[Sleep]
HibernateDelaySec=12min

and

$ cat /etc/systemd/logind.conf.d/logind.conf
[Login]
HandleLidSwitch=suspend-then-hibernate
HandlePowerKey=suspend-then-hibernate
HandleSuspendKey=suspend-then-hibernate
SleepOperation=suspend-then-hibernate

4. Immutability & Safety Mechanisms

The system state is governed by two key components:

A. The Control Script (immutectl)

Handles the state transition by flipping Btrfs properties and VFS mount flags in the correct order.

  • sudo immutectl unlock: Sets ro=false and remounts rw.
  • sudo immutectl lock: Sets ro=true and remounts ro.
$ cat /usr/local/bin/immutectl
#!/bin/bash
# Ensure script is run as root
if [[ $EUID -ne 0 ]]; then
    echo "This script must be run as root (sudo)."
    exit 1
fi
ACTION=$1
case $ACTION in
    unlock)
        echo "🔓 Unlocking / and /etc for maintenance..."
        # 1. First, tell the Kernel to allow writes to the mount point
        mount -o remount,rw /
        mount -o remount,rw /etc
        # 2. Now that the VFS is RW, Btrfs will allow you to change the property
        btrfs property set / ro false
        btrfs property set /etc ro false
        echo "Status: System is now READ-WRITE."
        ;;
    lock)
        echo "🔒 Locking / and /etc (Immutable Mode)..."
        sync
        btrfs property set / ro true
        btrfs property set /etc ro true
        # We still attempt remount, but we ignore failure since Property is the Hard Lock
        mount -o remount,ro / 2>/dev/null
        mount -o remount,ro /etc 2>/dev/null
        echo "Status: System is now READ-ONLY (Btrfs Property Set)."
        ;;
    status)
        echo "--- System Immutability Status ---"
        for dir in "/" "/etc"; do
            # Get VFS state
            VFS_STATE=$(grep " $dir " /proc/mounts | awk '{print $4}' | cut -d, -f1)
            # Get Btrfs Property state
            BTRFS_PROP=$(btrfs property get "$dir" ro | cut -d= -f2)
            # Determine overall health
            if [[ "$BTRFS_PROP" == "true" ]]; then
                FINAL_STATUS="LOCKED (RO)"
            else
                FINAL_STATUS="UNLOCKED (RW)"
            fi
            echo "Path: $dir"
            echo " - VFS Layer (Mount): $VFS_STATE"
            echo " - Btrfs Property: ro=$BTRFS_PROP"
            echo " - Effective State: $FINAL_STATUS"
            # Check for mismatch (The "Busy" scenario)
            if [[ "$VFS_STATE" == "rw" && "$BTRFS_PROP" == "true" ]]; then
                echo " ⚠️ NOTICE: VFS is RW but Btrfs is RO. System is effectively Immutable."
            fi
            echo ""
        done
        ;;
    *)
        echo "Usage: $0 {lock|unlock|status}"
        exit 1
        ;;
esac

B. The Smart Seal (immutability-seal.service)

A systemd one-shot service that ensures the system is locked on boot.

  • Fail-safe: The service checks /proc/cmdline for the standalone word rw. If found (via GRUB manual override), the seal is aborted to allow emergency maintenance.
$ cat /etc/systemd/system/immutability-seal.service
[Unit]
Description=Ensure Btrfs Immutable Properties are set on Boot (unless rw requested)
DefaultDependencies=no
After=systemd-remount-fs.service
Before=local-fs.target
# Don't run in emergency/rescue modes
#ConditionPathExists=!/run/systemd/seats/seat0
[Service]
Type=oneshot
# The robust check: exit if 'rw' exists as a standalone word
ExecStartPre=/bin/sh -c '! grep -qE "\brw\b" /proc/cmdline'
ExecStartPre=mount -o remount,rw /
ExecStart=/usr/bin/btrfs property set / ro true
ExecStart=/usr/bin/btrfs property set /etc ro true
ExecStartPost=mount -o remount,ro /
RemainAfterExit=yes
[Install]
WantedBy=local-fs.target

5. Monitoring & Maintenance

  • Nagging: A systemd user-timer runs immutability-nag every 15 minutes to notify the desktop session if the system is currently in an “Unlocked” state.
  • Verification: Use sudo immutectl status to verify that both the VFS Layer and Btrfs Properties are in sync.

5.1 Nagging

$ cat ~/bin/immutability-nag
#!/bin/bash
# Check Btrfs property
BTRFS_STATUS=$(btrfs property get / ro | cut -d= -f2)
if [[ "$BTRFS_STATUS" == "false" ]]; then
    # Use notify-send (Standard, fast, non-intrusive)
    notify-send -u critical -i security-low \
        "🔓 System Unlocked" \
        "Root is currently WRITABLE. Run 'immutectl lock' when finished."
fi

and

$ usystemctl cat immutability-nag.service
# /home/rrs/.config/systemd/user/immutability-nag.service
[Unit]
Description=Check Btrfs immutability and notify user
# Ensure it doesn't run before the graphical session is ready
After=graphical-session.target
[Service]
Type=oneshot
ExecStart=%h/bin/immutability-nag
# Standard environment for notify-send to find the DBus session
Environment=DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/%U/bus
[Install]
WantedBy=default.target
   ~   20:35:15
$ usystemctl cat immutability-nag.timer
# /home/rrs/.config/systemd/user/immutability-nag.timer
[Unit]
Description=Check immutability every 15 mins
[Timer]
OnStartupSec=5min
OnUnitActiveSec=15min
[Install]
WantedBy=timers.target

And the resultant nag in action.

Immutable Debian Nag

5.2 Verification

$ sudo immutectl status
[sudo] password for rrs:
--- System Immutability Status ---
Path: /
 - VFS Layer (Mount): rw
 - Btrfs Property: ro=false
 - Effective State: UNLOCKED (RW)
Path: /etc
 - VFS Layer (Mount): rw
 - Btrfs Property: ro=false
 - Effective State: UNLOCKED (RW)
   ~   21:14:08
$ sudo immutectl lock
🔒 Locking / and /etc (Immutable Mode)...
Status: System is now READ-ONLY (Btrfs Property Set).
   ~   21:14:15
$ sudo immutectl status
--- System Immutability Status ---
Path: /
 - VFS Layer (Mount): rw
 - Btrfs Property: ro=true
 - Effective State: LOCKED (RO)
 ⚠️ NOTICE: VFS is RW but Btrfs is RO. System is effectively Immutable.
Path: /etc
 - VFS Layer (Mount): rw
 - Btrfs Property: ro=true
 - Effective State: LOCKED (RO)
 ⚠️ NOTICE: VFS is RW but Btrfs is RO. System is effectively Immutable.

Date Configured: December 2025
Philosophy: The OS is a diagnostic tool. If an application fails to write to a locked path, the application is the variable, not the system.

Wrap

Overall, I’m very, very happy with the result of a day of working together with AI. I wouldn’t have gotten things done so quickly if it weren’t around. Such is the greatness of this age of AI.

20 December, 2025 12:00AM by Ritesh Raj Sarraf (rrs@researchut.com)

December 19, 2025

hackergotchi for Kartik Mistry

Kartik Mistry

KDE Needs You!

* Support the KDE Randa Meetings and make a donation!

I know that my contributions to KDE are minimal at this stage, but hey, I’m doing my part this time for sure!

19 December, 2025 01:44PM by કાર્તિક

December 18, 2025

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

dang 0.0.17: New Features, Plus Maintenance

dang image

A new release of my mixed-collection-of-things package dang arrived at CRAN earlier today. The dang package regroups a few functions of mine that had no other home, such as lsos() from a StackOverflow question from 2009 (!!), the overbought/oversold price band plotter from an older blog post, the market monitor blogged about, as well as the checkCRANStatus() function tweeted about by Tim Taylor. And more, so take a look.

This release retires two functions: the social media site nobody ever visits anymore shut down its API too, so there is no longer a way to mute posts by a given handle. Similarly, the (never official) ability by Google to supply financial data is no more, so the function to access data this way is gone too. But we also have two new ones: one that helps with CRAN entries for ORCiD ids, and another little helper to re-order microbenchmark results by summary column (defaulting to the median). Other changes include the usual updates to continuous integration, a switch to Authors@R (which will result in CRAN nagging me less about this), and another small argument update.

The detailed NEWS entry follows.

Changes in version 0.0.17 (2025-12-18)

  • Added new function reorderMicrobenchmarkResults with alias rmr

  • Use tolower on email argument to checkCRANStatus

  • Added new function cranORCIDs bootstrapped from two emails by Kurt Hornik

  • Switched to using Authors@R in DESCRIPTION and added ORCIDs where available

  • Switched to r-ci action with included bootstrap step; updated the checkout action (twice); added (commented-out) log accessor

  • Removed googleFinanceData as the (unofficial) API access point no longer works

  • Removed muteTweeters because the API was turned off

Via my CRANberries, there is a comparison to the previous release. For questions or comments use the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

18 December, 2025 09:14PM

hackergotchi for Colin Watson

Colin Watson

Preparing a transition in Debusine

We announced a public beta of Debusine repositories recently (Freexian blog, debian-devel-announce). One thing I’m very keen on is being able to use these to prepare “transitions”: changes to multiple packages that need to be prepared together in order to land in testing. As I said in my DebConf25 talk:

We have distribution-wide CI in unstable, but there’s only one of it and it’s shared between all of us. As a result it’s very possible to get into tangles when multiple people are working on related things at the same time, and we only avoid that as much as we do by careful coordination such as transition bugs. Experimental helps, but again, there’s only one of it and setting up another one is far from trivial.

So, what we want is a system where you can run experiments on possible Debian changes at a large scale without a high setup cost and without fear of breaking things for other people. And then, if it all works, push the whole lot into Debian.

Time to practice what I preach.

Setup

The setup process is documented on the Debian wiki. You need to decide whether you’re working on a short-lived experiment, in which case you’ll run the create-experiment workflow and your workspace will expire after 60 days of inactivity, or something that you expect to keep around for longer, in which case you’ll run the create-repository workflow. Either one of those will create a new workspace for you. Then, in that workspace, you run debusine archive suite create for whichever suites you want to use. For the case of a transition that you plan to land in unstable, you’ll most likely use create-experiment and then create a single suite with the pattern sid-<name>.

The situation I was dealing with here was moving to Pylint 4. Tests showed that we needed this as part of adding Python 3.14 as a supported Python version, and I knew that I was going to need newer upstream versions of the astroid and pylint packages. However, I wasn’t quite sure what the fallout of a new major version of pylint was going to be. Fortunately, the Debian Python ecosystem has pretty good autopkgtest coverage, so I thought I’d see what Debusine said about it. I created an experiment called cjwatson-pylint (resulting in https://debusine.debian.net/debian/developers-cjwatson-pylint/ - I’m not making that a proper link since it will expire in a couple of months) and a sid-pylint suite in it.

Iteration

From this starting point, the basic cycle involved uploading each package like this for each package I’d prepared:

$ dput -O debusine_workspace=developers-cjwatson-pylint \
       -O debusine_workflow=publish-to-sid-pylint \
       debusine.debian.net foo.changes

I could have made a new dput-ng profile to cut down on typing, but it wasn’t worth it here.

Then I looked at the workflow results, figured out which other packages I needed to fix based on those, and repeated until the whole set looked coherent. Debusine automatically built each upload against whatever else was currently in the repository, as you’d expect.

I should probably have used version numbers with tilde suffixes (e.g. 4.0.2-1~test1) in case I needed to correct anything, but fortunately that was mostly unnecessary. I did at least run initial test-builds locally of just the individual packages I was directly changing to make sure that they weren’t too egregiously broken, just because I usually find it quicker to iterate that way.

I didn’t take screenshots as I was going along, but here’s what the list of top-level workflows in my workspace looked like by the end:

Workflows

You can see that not all of the workflows are successful. This is because we currently just show everything in every workflow; we don’t consider whether a task was retried and succeeded on the second try, or whether there’s now a newer version of a reverse-dependency so tests of the older version should be disregarded, and so on. More fundamentally, you have to look through each individual workflow, which is a bit of a pain: we plan to add a dashboard that shows you the current state of a suite as a whole rather than the current workflow-oriented view, but we haven’t started on that yet.

Drilling down into one of these workflows, it looks something like this:

astroid workflow

This was the first package I uploaded. The first pass of failures told me about pylint (expected), pylint-flask (an obvious consequence), and python-sphinx-autodoc2 and sphinx-autoapi (surprises). The slightly odd pattern of failures and errors is because I retried a few things, and we sometimes report retries in a slightly strange way, especially when there are workflows involved that might not be able to resolve their input parameters any more.

The next level was:

pylint workflow

Again, there were some retries involved here, and also some cases where packages were already failing in unstable so the failures weren’t the fault of my change; for now I had to go through and analyze these by hand, but we’ll soon have regression tracking to compare with reference runs and show you where things have got better or worse.

After excluding those, that left pytest-pylint (not caused by my changes, but I fixed it anyway in unstable to clear out some noise) and spyder. I’d seen people talking about spyder on #debian-python recently, so after a bit of conversation there I sponsored a rope upload by Aeliton Silva, upgraded python-lsp-server, and patched spyder. All those went into my repository too, exposing a couple more tests I’d forgotten in spyder.

Once I was satisfied with the results, I uploaded everything to unstable. The next day, I looked through the tracker as usual starting from astroid, and while there are some test failures showing up right now it looks as though they should all clear out as pieces migrate to testing. Success!

Conclusions

We still have some way to go before this is a completely smooth experience that I’d be prepared to say that every developer can and should be using; there are all sorts of fit-and-finish issues that I can easily see here. Still, I do think we’re at the point where a tolerant developer can use this to deal with the common case of a mid-sized transition, and get more out of it than they put in.

Without Debusine, either I’d have had to put much more effort into searching for and testing reverse-dependencies myself, or (more likely, let’s face it) I’d have just dumped things into unstable and sorted them out afterwards, resulting in potentially delaying other people’s work. This way, everything was done with as little disruption as possible.

This works best when the packages likely to be involved have reasonably good autopkgtest coverage (even if the tests themselves are relatively basic). This is an increasingly good bet in Debian, but we have plans to add installability comparisons (similar to how Debian’s testing suite works) as well as optional rebuild testing.

If this has got you interested, please try it out for yourself and let us know how it goes!

18 December, 2025 01:21PM by Colin Watson

December 17, 2025

hackergotchi for Jonathan McDowell

Jonathan McDowell

21 years of blogging

21 years ago today I wrote my first blog post. Did I think I’d still be writing all this time later? I’ve no idea to be honest. I’ve always had the impression my readership is small, mostly people who know me in some manner, and I post to let them know what I’m up to in more detail than snippets of IRC conversation can capture. Or I write to make notes for myself (I frequently refer back to things I’ve documented here). I write less about my personal life than I used to, but I still occasionally feel the need to mark some event.

From a software PoV I started out with Blosxom, migrated to MovableType in 2008, ditched that, when the Open Source variant disappeared, for Jekyll in 2015 (when I also started putting it all in git). And have stuck there since. The static generator format works well for me, and I outsource comments to Disqus - I don’t get a lot, I can’t be bothered with the effort of trying to protect against spammers, and folk who don’t want to use it can easily email or poke me on the Fediverse. If I ever feel the need to move from Jekyll I’ll probably take a look at Hugo, but thankfully at present there’s no push factor to switch.

It’s interesting to look at my writing patterns over time. I obviously started keen, and peaked with 81 posts in 2006 (I’ve no idea how on earth that happened), while 2013 had only 2. Generally I write less when I’m busy, or stressed, or unhappy, so it’s kinda interesting to see how that lines up with various life events.

Blog posts over time

During that period I’ve lived in 10 different places (well, 10 different houses/flats, I think it’s only 6 different towns/cities), on 2 different continents, working at 6 different employers, as well as a period where I was doing my Masters in law. I’ve travelled around the world, made new friends, lost contact with folk, started a family. In short, I have lived, even if lots of it hasn’t made it to these pages.

At this point, do I see myself stopping? No, not really. I plan to still be around, like Flameeyes, to the end. Even if my posts are unlikely to hit the frequency from back when I started out.

17 December, 2025 05:06PM

Sven Hoexter

exfatprogs: Do not try defrag.exfat / mkfs.exfat Windows compatibility in Trixie

exfatprogs 1.3.0 added a new defrag.exfat utility which turned out to be unreliable and can cause data loss. exfatprogs 1.3.1 disabled the utility, and I followed that decision with the upload to Debian/unstable yesterday. But as usual it will take some time until it migrates to testing. Thus if you use testing do not try defrag.exfat! At least not without a vetted and current backup.

Besides that, there is a compatibility issue with the way mkfs.exfat, as shipped in trixie (exfatprogs 1.2.9), handles drives which have a physical sector size of 4096 bytes but emulate a logical size of 512 bytes. With exfatprogs 1.2.6 a change was implemented to prefer the physical sector size on those devices. That turned out to be not compatible with Windows, and was reverted in exfatprogs 1.3.0. Sadly John Ogness ran into the issue and spent some time debugging it. I have to admit that I missed the relevance of that change. Huge kudos to John for the bug report. Based on that I prepared an update for the next trixie point release.

If you hit that issue on trixie with exfatprogs 1.2.9-1 you can work around it by formatting with mkfs.exfat -s 512 /dev/sdX to get Windows compatibility. If you use exfatprogs 1.2.9-1+deb13u1 or later, and want the performance gain back, and do not need Windows compatibility, you can format with mkfs.exfat -s 4096 /dev/sdX.

17 December, 2025 02:38PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 15.2.3-1 on CRAN: Upstream Update

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1272 other packages on CRAN, downloaded 43.2 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 661 times according to Google Scholar.

This version updates to the 15.2.3 upstream Armadillo release from yesterday. It brings minor changes over the RcppArmadillo 15.2.2 release made last month (and described in this post). As noted previously, and due to the upstream transition to C++14 coupled with the CRAN move away from C++11, the package offers a transition by allowing packages to remain with the older, pre-15.0.0 ‘legacy’ Armadillo yet offering the current version as the default. If and when CRAN has nudged (nearly) all maintainers away from C++11 (and now also C++14 !!) we can remove the fallback. Our offer to help with the C++ modernization still stands, so please get in touch if we can be of assistance. As a reminder, the meta-issue #475 regroups all the resources for the C++11 transition.

There were no R-side changes in this release. The detailed changes since the last release follow.

Changes in RcppArmadillo version 15.2.3-1 (2025-12-16)

  • Upgraded to Armadillo release 15.2.3 (Medium Roast Deluxe)

    • Faster .resize() for vectors

    • Faster repcube()

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

17 December, 2025 02:11PM

hackergotchi for Matthew Garrett

Matthew Garrett

How did IRC ping timeouts end up in a lawsuit?

I recently won a lawsuit against Roy and Rianne Schestowitz, the authors and publishers of the Techrights and Tuxmachines websites. The short version of events is that they were subject to an online harassment campaign, which they incorrectly blamed me for. They responded with a large number of defamatory online posts about me, which the judge described as unsubstantiated character assassination and consequently awarded me significant damages. That's not what this post is about, as such. It's about the sole meaningful claim made that tied me to the abuse.

In the defendants' defence and counterclaim[1], 15.27 asserts in part The facts linking the Claimant to the sock puppet accounts include, on the IRC network: simultaneous dropped connections to the mjg59_ and elusive_woman accounts. This is so unlikely to be coincidental that the natural inference is that the same person posted under both names. "elusive_woman" here is an account linked to the harassment, and "mjg59_" is me. This is actually a surprisingly interesting claim to make, and it's worth going into in some more detail.

The event in question occurred on the 28th of April, 2023. You can see a line reading *elusive_woman has quit (Ping timeout: 2m30s), followed by one reading *mjg59_ has quit (Ping timeout: 2m30s). The timestamp listed for the first is 09:52, and for the second 09:53. Is that actually simultaneous? We can actually gain some more information - if you hover over the timestamp links on the right hand side you can see that the link is actually accurate to the second even if that's not displayed. The first event took place at 09:52:52, and the second at 09:53:03. That's 11 seconds apart, which is clearly not simultaneous, but maybe it's close enough. Figuring out more requires knowing what a "ping timeout" actually means here.

The IRC server in question is running Ergo (link to source code), and the relevant function is handleIdleTimeout(). The logic here is fairly simple - track the time since activity was last seen from the client. If that time is longer than DefaultIdleTimeout (which defaults to 90 seconds) and a ping hasn't been sent yet, send a ping to the client. If a ping has been sent and the timeout is greater than DefaultTotalTimeout (which defaults to 150 seconds), disconnect the client with a "Ping timeout" message. There's no special logic for handling the ping reply - a pong simply counts as any other client activity and resets the "last activity" value and timeout.
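In code terms, the behaviour described above amounts to something like the following minimal sketch (illustrative names and structure only, not Ergo's actual implementation):

package irc

import "time"

// Defaults mirroring the values described above.
const (
	defaultIdleTimeout  = 90 * time.Second  // DefaultIdleTimeout
	defaultTotalTimeout = 150 * time.Second // DefaultTotalTimeout
)

type client struct {
	lastActivity time.Time
	pingSent     bool
}

// onActivity is called for any traffic from the client, including a pong;
// there is no special handling for ping replies.
func (c *client) onActivity(now time.Time) {
	c.lastActivity = now
	c.pingSent = false
}

// checkIdle is called periodically; it returns true when the client should
// be disconnected with a "Ping timeout" message.
func (c *client) checkIdle(now time.Time, sendPing func()) bool {
	idle := now.Sub(c.lastActivity)
	switch {
	case !c.pingSent && idle > defaultIdleTimeout:
		sendPing()
		c.pingSent = true
	case c.pingSent && idle > defaultTotalTimeout:
		return true
	}
	return false
}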

What does this mean? Well, for a start, two clients running on the same system will only have simultaneous ping timeouts if their last activity was simultaneous. Let's imagine a machine with two clients, A and B. A sends a message at 02:22:59. B sends a message 2 seconds later, at 02:23:01. The idle timeout for A will fire at 02:24:29, and for B at 02:24:31. A ping is sent for A at 02:24:29 and is responded to immediately - the idle timeout for A is now reset to 02:25:59, 90 seconds later. The machine hosting A and B has its network cable pulled out at 02:24:30. The ping to B is sent at 02:24:31, but receives no reply. A minute later, at 02:25:31, B quits with a "Ping timeout" message. A ping is sent to A at 02:25:59, but receives no reply. A minute later, at 02:26:59, A quits with a "Ping timeout" message. Despite both clients having their network interrupted simultaneously, the ping timeouts occur 88 seconds apart.

So, two clients disconnecting with ping timeouts 11 seconds apart is not incompatible with the network connection being interrupted simultaneously - depending on activity, simultaneous network interruption may result in disconnections up to 90 seconds apart. But another way of looking at this is that network interruptions may occur up to 90 seconds apart and generate simultaneous disconnections[2]. Without additional information it's impossible to determine which is the case.

This already casts doubt over the assertion that the disconnection was simultaneous, but if this is unusual enough it's still potentially significant. Unfortunately for the Schestowitzes, even looking just at the elusive_woman account, there were several cases where elusive_woman and another user had a ping timeout within 90 seconds of each other - including one case where elusive_woman and schestowitz[TR] disconnect 40 seconds apart. By the Schestowitzes argument, it's also a natural inference that elusive_woman and schestowitz[TR] (one of Roy Schestowitz's accounts) are the same person.

We didn't actually need to make this argument, though. In England it's necessary to file a witness statement describing the evidence that you're going to present in advance of the actual court hearing. Despite being warned of the consequences on multiple occasions the Schestowitzes never provided any witness statements, and as a result weren't allowed to provide any evidence in court, which made for a fairly foregone conclusion.

[1] As well as defending themselves against my claim, the Schestowitzes made a counterclaim on the basis that I had engaged in a campaign of harassment against them. This counterclaim failed.

[2] Client A and client B both send messages at 02:22:59. A falls off the network at 02:23:00, has a ping sent at 02:24:29, and has a ping timeout at 02:25:29. B falls off the network at 02:24:28, has a ping sent at 02:24:29, and has a ping timeout at 02:25:29. Simultaneous disconnects despite over a minute of difference in the network interruption.

comment count unavailable comments

17 December, 2025 01:17PM

December 16, 2025

hackergotchi for Daniel Pocock

Daniel Pocock

Tom Silvagni sentencing: not Xavier College but DPP and social media to blame

After the recent abuse judgment, today we will find out about Tom Silvagni's sentence. Dad died shortly after his employer, the late Cardinal Pell, was sentenced to prison. The cardinal was subsequently acquitted in his appeal to the High Court. Dad was a Carlton supporter. He would be turning in his grave at the thought that Stephen Silvagni's son could be a rapist.

Suppression orders were lifted last week and the media have finally identified Tom Silvagni as the latest Australian football personality to be convicted of abuse.

News reports were quick to comment on the fact the Silvagni brothers all attended Xavier College, the elite Catholic boys school attended by my father and I. As explained in a previous blog, after moving from Bendigo to Melbourne for my final year of school, I used to cycle from Pentridge Prison to Xavier College each day while Tom really is in prison for at least one Christmas and maybe more.

The alleged incident took place in January 2024. That appears to be four years after Tom completed year 12. References to the school are therefore not helpful in any specific way. In a general sense, all we can say is there is a correlation between wealth and abuse, just as there is a correlation between wealth and attendance at elite schools. But correlation is not causation. The Chanel Contos "petition" about consent demonstrated that incidents of this nature were alleged to happen in every elite school of every denomination. The Federal Court published the Katharine Thornton Dossier about their former boss, the attorney general Christian Porter. In his case, it is alleged that abuse took place while he was representing another elite school at the national debating contest. An allegation against a student on an official school trip is far more severe than an allegation against a former student.

Silvagni background

Tom had started a job as a player agent at Kapital Sports Management shortly before the incident. The Wayback machine has captured images of Tom with his colleagues as well as his profile:

Tom is a recently accredited AFL Player Agent and works closely with our team of experienced agents at the ground level. Tom has “lived” the industry through his family ties and is a great resource to Kapital given he has recently experienced playing in AFL pathways. Tom offers great perspective to the young draftees as they navigate the system and is a great relationship piece with our draft talent.

Polarizing and adversarial procedures are not solving abuse

After the conviction was announced, the victim was invited to make a unilateral victim impact statement. She used much of that opportunity to direct her comments against Tom. She made little reference to anybody else at the party and no reference to the cultural and legal problems around abuse in Australia.

Shiannon Corcoron writes a strongly man-hating piece about the trial:

was about how the rights of the wealthy and powerful can override the rights of the small, the weak, the vulnerable and the truth. This man-child ...

As the accuser is anonymous, we do not know if she was small or weak. The facts do confirm she was vulnerable at that particular moment in time: she had gone to sleep in a bed with another man. She believed he would stay the night. The other man left at 2am, leaving the complainant alone and vulnerable.

The polarizing nature of these comments can be further exposed with reference to a parallel case in the United Kingdom. On the same day as the judgment in Melbourne, a British police officer failed in their appeal to overturn dismissal for gross misconduct. In the British case, the attacker was not a male police officer, it was a female police officer, PC Pamela Pritchard. While the police sacked her, there is no mention of any criminal prosecution for her actions.

Look at the women running around the world of open source software encouraging people to gang up on each other:

 

Comments like that are incredibly dangerous. In the world of football, Tom may have seen the way the Director of Public Prosecutions (DPP) handled the case against Stephen Milne and he may have felt that a yes to one man is almost as good as a yes to both men.

Abuse is not about the offender's gender. It is about power, having a gang on your side or just being incredibly drunk and stupid.

There are at least two sides to every story. Looking at the falsified harassment claims in the world of open source software, I was able to find internal emails manipulating journalists to repeat false accusations against Dr Jacob Appelbaum. If somebody was really abused, why did they try to fight their battle through B-grade journalists rather than going directly to the police?

One of the more notable examples in Australia was the wrongful persecution of Alex Matters from the ALP. Sky News conducted an excellent interview with Mr Matters about what it is like to be wrongly accused of abuse.

Based on these cases, I feel it is wise to be very cautious when somebody raises an accusation. It is important to listen and write down evidence but it is foolish to repeat an accusation willy-nilly on social control media.

The mental health defence

Silvagni's lawyers argued that due to the high profile of his family and his young age, he would be at unreasonable risk of self-harm or suicide if the story of the trial was published by the media. On this basis, the entire trial was conducted in secret and his identity only revealed after he was convicted.

There have been vast discussions about privacy, censorship and the credibility of mental health concerns.

Research into the mental health issue suggests that everybody in proximity to bullying and persecution, including family members, team mates, Carlton fans, friends of Tom's mum and Xavier alumni are going to collectively suffer some stress due to the public denunciation of the Silvagni family.

Take a moment to think about Tom's brothers and their families.

Ben was dating Eve Markoski-Wood. Eve's biological father, inconveniently named Rapoff, was convicted and jailed on drug offences. Eve's mother is a reality TV star and Eve uses the name of her step-father. It looks like Eve broke off the relationship with Ben shortly after the charges were officially declared. Britain's Daily Mail tabloid speculated that the "tyranny of distance" had forced them apart but now we know the real reason.

Jack had a very successful few years playing for Carlton. He arrived in the club at the same time as Grace Phillips took up a role as a social media intern. Grace was fortunate to strike up a relationship with her new colleague, the star recruit and son of one of the club legends. They married in 2023 and not long after, in 2024 they had a baby son, Charlie. How is the child going to feel when it arrives for its first day at school and some other five year old asks about uncle Tom?

Tom's girlfriend, Alannah Iocanis, who was a friend of the accuser, is also one of these influencer/model personalities in the world of social control media. With her boyfriend in jail, will other celebrities be willing to date her? Will she be able to maintain the influencer/model lifestyle or will she have to get a job in a supermarket or coffee shop?

Alannah was chosen as a finalist in Miss Universe Australia 2025 even while her boyfriend was on trial for rape. Many pages about her participation have vanished as news got around.

Alannah's model agency, KHOO, has removed her profile.

Media is self-censoring even after suppression order lifted

Many of the media reports do not mention the names of the other people attending the party. It is vital to understand that Anthony Lo Giudice, the other man who had been in the room with the girl was a close relative of the Carlton football club president, Mark Lo Giudice. At the same time, it is important to understand that Tom's father, one of the legends of Carlton, had been refusing to speak to Mark Lo Giudice for a number of years.

Channel 7 report about Anthony Lo Giudice and the sequence of events and Anthony's LinkedIn.

When the reader is aware of all these challenging relationships, they can begin to contemplate the possibility that people have had a role in manipulating the girl or manipulating Tom or manipulating both of them to create a crisis.

Tom's girlfriend, Alannah Iocanis, had invited the victim to the four-way party and she arrived after midnight. Tom's best friend was having an open relationship with the victim. Think of the film Cruel Intentions from 1999. It remains a masterpiece of this genre.

The role of technology

Within minutes of the alleged abuse, the victim had used her mobile phone to alert multiple people that she was an abuse victim. Being only nineteen years old, she may not have realized the extent to which these text messages would change her life. The identities of abuse victims can't be published by the press in Australia, nonetheless, her name has been shared widely between people in her circle of friends, people she thought she could trust and the football clubs concerned.

Without a mobile phone, she may have had time to think about her response to the situation. Once she had gone down the path of telling multiple people, she was unable to turn back.

Deception and rape go together, from Chateau Silvagni to the FSFE & Debian lies

News reports were quick to emphasize that Tom is accused of using deception to gain access to the sleeping nineteen year old. He has admitted using deception, a falsified Uber receipt, to obfuscate the identities of those really in the house at the time of the alleged abuse.

I suspect many people would feel a sense of shock if accused of abuse and some may be tempted to put up barriers to protect themselves. The trial of Tom Silvagni found that his response was not merely a spontaneous lie made up on the spur of the moment, it was a co-ordinated deception involving at least one other person and a forged document.

During the nearly 10-day trial, Crown prosecutor Jeremy McWilliams told jurors the rapes were committed 'not through threats, not through force… but through deception,' with Silvagni impersonating his friend to trick the woman.

In Debianism, the former leader sent this email in December 2018:

You are well-aware that I have been nothing but scrupulous and gentlemanly with regards to your personal privacy and thus I would refuse to cite any outside or otherwise offer any objective rebuttals to your claims on a public forum.

Yet records show he had spent much of 2018 sending defamatory emails behind my back at a time when I lost two family members. Nothing could be a more hideous violation of privacy.

We've seen similar extremes as Matthias Kirschner uses the name FSFE to impersonate the real FSF in Boston. In a previous blog, I compared the FSFE to a Nigerian 911 scam.

Tom Silvagni is accused of using deception/impersonation to procure sex with one of his best friend's girlfriends. Chris Lamb and Matthias Kirschner used deception on a similar scale to procure victims' work while pretending to be independent voluntary organizations. In the latter case, we saw victims killed themselves in the Debian suicide cluster. One victim died on our wedding day.

16 December, 2025 09:30PM

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Lichess

I wish more pages on the Internet were like Lichess. It's fast. It feels like it only does one thing (even though it's really more like seven or eight)—well, perhaps except for the weird blogs. It does not feel like it's trying to sell me anything; in fact, it feels like it hardly even wants my money. (I've bought two T-shirts from their Spreadshirt, to support them.) It's super-efficient; I've seen their (public) balance sheets, and it feels like it runs off of a shoestring budget. (Take note, Wikimedia Foundation!) And, perhaps most relieving in this day and age, it does not try to grift any AI.

Yes, I know, chess.com is the juggernaut, and has probably done more for chess' popularity than FIDE ever did. But I still go to Lichess every now and then and just click that 2+1 button. (Generally without even logging in, so that I don't feel angry about it when I lose.) Be more like Lichess.

16 December, 2025 06:45PM

December 15, 2025

hackergotchi for Gunnar Wolf

Gunnar Wolf

Unique security and privacy threats of large language models — a comprehensive survey

This post is an unpublished review for Unique security and privacy threats of large language models — a comprehensive survey

Much has been written about large language models (LLMs) being a risk to user security and privacy, including the issue that, being trained with datasets whose provenance and licensing are not always clear, they can be tricked into producing bits of data that should not be divulgated. I took on reading this article as means to gain a better understanding of this area. The article completely fulfilled my expectations.

This is a review article, which is not a common format for me to follow: instead of digging deep into a given topic, including an experiment or some way of proving the authors’ claims, a review article will contain a brief explanation and taxonomy of the issues at hand, and a large number of references covering the field. And, at 36 pages and 151 references, that’s exactly what we get.

The article is roughly split in two parts: The first three sections present the issue of security and privacy threats as seen by the authors, as well as the taxonomy within which the review will be performed, and sections 4 through 7 cover the different moments in the life cycle of a LLM model (at pre-training, during fine-tuning, when deploying systems that will interact with end-users, and when deploying LLM-based agents), detailing their relevant publications. For each of said moments, the authors first explore the nature of the relevant risks, then present relevant attacks, and finally close outlining countermeasures to said attacks.

The text is accompanied all throughout its development with tables, pipeline diagrams and attack examples that visually guide the reader. While the examples presented are sometimes a bit simplistic, they are a welcome guide and aid to follow the explanations; the explanations for each of the attack models are necessarily not very deep, and I was often left wondering whether I had correctly understood a given topic, or wanting to dig deeper – but this being a review article, it is absolutely understandable.

The authors present an easy to read prose, and this article covers an important spot in understanding this large, important, and emerging area of LLM-related study.

15 December, 2025 07:30PM

Russ Allbery

Review: Brigands & Breadknives

Review: Brigands & Breadknives, by Travis Baldree

Series: Legends & Lattes #3
Publisher: Tor
Copyright: 2025
ISBN: 1-250-33489-6
Format: Kindle
Pages: 325

Brigands & Breadknives is a secondary-world sword-and-sorcery fantasy and a sequel to both Legends & Lattes and Bookshops & Bonedust. It takes place shortly after Legends & Lattes chronologically, but Fern, the protagonist, was introduced in the Bookshops & Bonedust prequel.

You may have noticed I didn't describe this as cozy fantasy. That is intentional.

When we left Fern at the end of Bookshops & Bonedust, the rattkin was running a bookshop in the town of Murk. As Brigands & Breadknives opens, Fern is moving, for complicated and hard-to-describe personal reasons, to Thune where Viv has her coffee shop. Her plan is to open a new bookstore next door to Legends and Lattes. This is exactly the sort of plot one might expect from this series, and the first few chapters feel like yet another version of the first two novels. Then Fern makes an impulsive and rather inexplicable (even to herself) decision and the plot goes delightfully sideways.

Brigands & Breadknives is not, as Baldree puts it in the afterword, a book about fantasy small-business ownership as the answer to all of life's woes. It is, instead, a sword and sorcery story about a possibly immortal elven bounty hunter, her utterly baffling goblin prisoner, and a rattkin bookseller who becomes their unexpected travel companion for reasons she can't explain. It's a story about a mid-life crisis in a world and with supporting characters that I can only describe as inspired by a T. Kingfisher novel.

Baldree is not Ursula Vernon, of course. This book does not contain paladins or a romance, possibly to the relief of some readers. It's slower, a bit more introspective, and doesn't have as sharp of edges or the casual eerie unsettlingness. But there is a religious order that worships a tentacled space horror for entirely unexpected reasons, pompous and oleaginous talking swords with verbose opinions about everything, a mischievously chaotic orange-haired goblin who quickly became one of my favorite fantasy characters and then kept getting better, and a whole lot of heart. You may see why Kingfisher was my first thought for a comparison point.

Unlike Baldree's previous novels, there is a lot of combat and injury. I think some people will still describe this book as cozy, and I'm not going to argue too strongly because the conflicts are a bit lighter than the sort of rape and murder one would see in a Mercedes Lackey novel. But to me this felt like sword and sorcery in a Dungeons and Dragons universe made more interesting by letting the world-building go feral and a little bit sarcastic. Most of the book is spent traveling, there are a lot of random encounters that build into a connected plot, and some scenes (particularly the defense of the forest village) felt like they could have sold to the Swords and Sorceress anthology series.

Also, this was really good! I liked both Legends & Lattes and Bookshops & Bonedust, maybe a bit more than the prevailing opinion among reviewers since the anachronisms never bothered me, but I wasn't sure whether to dive directly into this book because I was expecting more of the same. This is not more of the same. I think it's clearly better writing and world-building than either of the previous books. It helps that Fern is the protagonist; as much as I like Viv, I think Fern is a more interesting character, and I am glad she got a book of her own.

Baldree takes a big risk on the emotional arc of this book. Fern starts the story in a bad state and makes some decisions to kick off the plot that are difficult to defend. She beats herself up for those decisions for most of the book, deservedly, and parts of that emotional turmoil are difficult to read. Baldree resists the urge to smooth everything over and instead provides a rather raw sense of depression, avoidance, and social anxiety that some readers are going to have to brace themselves for.

I respect the decision to not write the easy series book people probably expected, but I'm not sure Fern's emotional arc quite worked. Baldree is hinting at something that's hard to describe logically, and I'm not sure he was able to draw a clear enough map of Fern's thought process for the reader to understand her catharsis. The "follow your passion" self-help mindset has formed a gravitational singularity in the vicinity of this book's theme, it takes some skillful piloting to avoid being sucked into its event horizon, and I don't think Baldree quite managed to escape it. He made a valiant attempt, though, and it created a far more interesting book than one about safer emotions.

I wanted more of an emotional payoff than I got, but the journey, even with the moments of guilt and anxiety, was so worth it. The world-building is funnier and more interesting than the previous books of the series, and the supporting cast is fantastic. If you bailed on the series but you like sword and sorcery and T. Kingfisher novels, consider returning. You do probably need to read Bookshops & Bonedust first, if you haven't already, since it helps to know the start of Fern's story.

Recommended, and shortcomings aside, much better than I had expected.

Content notes: Bloody sword fights, major injury, some very raw emotions about letting down friends and destroying friendships.

Rating: 8 out of 10

15 December, 2025 03:25AM

December 14, 2025

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

BH 1.90.0-1 on CRAN: New Upstream

Boost

Boost is a very large and comprehensive set of (peer-reviewed) libraries for the C++ programming language, containing well over one hundred individual libraries. The BH package provides a sizeable subset of header-only libraries for (easier, no linking required) use by R. It is fairly widely used: the (partial) CRAN mirror logs (aggregated from the cloud mirrors) show over 41.5 million package downloads.

Version 1.90.0 of Boost was released a few days ago following the regular Boost release schedule of April, August and December releases. As before, we packaged it almost immediately and started testing following our annual update cycle which strives to balance being close enough to upstream and not stressing CRAN and the user base too much. The reverse depends check revealed only one really minor issue among the over three hundred direct reverse dependencies. And that issue was addressed yesterday within hours by a truly responsive maintainer (and it helped that a related issue had been addressed months earlier with version 1.89.0). So big thanks to Jean-Romain Roussel for the prompt fix, and to Andrew Johnson for the earlier test with 1.89.0.

As last year with 1.87.0, no new Boost libraries were added to BH so the (considerable) size is more or less unchanged. It led to CRAN doing a manual inspection, but as there were no other issues it sailed through and is now in the CRAN repository.

The short NEWS entry follows.

Changes in version 1.90.0-1 (2025-12-13)

  • Upgrade to Boost 1.90.0, patched as usual to comment-out diagnostic suppression messages per the request of CRAN

  • Minor upgrades to continuous integration

Via my CRANberries, there is a diffstat report relative to the previous release. Comments and suggestions about BH are welcome via the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

14 December, 2025 03:03PM

December 11, 2025

#056: Running r-ci with R-devel

Welcome to post 56 in the R4 series.

The recent post #54 reviewed a number of earlier posts on r-ci, our small (but very versatile) runner for continuous integration (CI) with R. The post also introduced the notion of using a container in the ‘matrix’ of jobs defined and running in parallel. The initial motivation was the (still ongoing, and still puzzling) variation in run-times of GitHub Actions. So when running CI and relying on r2u for the ‘fast, easy, reliable: pick all three!’ provision of CRAN packages as Ubuntu binaries, a small amount of time is spent prepping a basic Ubuntu instance with the necessary setup. This can be as fast as maybe 20 to 30 seconds, but it can also stretch to almost two minutes when GitHub is busier or out of sorts for other reasons. When the CI job itself is short, that is a nuisance. We presented relying on a pre-made r2u4ci container that adds just a few commands to the standard r2u container to be complete for CI. And with that setup CI runs tend to be reliably faster.

This situation is still evolving. I have not converted any of my existing CI scripts (apart from a test instance or two), but I keep monitoring the situation. However, this also offered another perspective: why not rely on a different container for a different CI aspect? When discussing the CI approach with Jeff the other day (and helping add CI to his mmap repo), it occurred to me we could also use one of the Rocker containers for R-devel. A minimal change to the underlying run.sh script later, this was accomplished. An example is provided as both a test and an illustration in the repo for package RcppInt64 in its script ci.yaml:

    strategy:
      matrix:
        include:
          - { name: container, os: ubuntu-latest, container: rocker/r2u4ci }
          - { name: r-devel,   os: ubuntu-latest, container: rocker/drd }
          - { name: macos,     os: macos-latest }
          - { name: ubuntu,    os: ubuntu-latest }

    runs-on: ${{ matrix.os }}
    container: ${{ matrix.container }}

This runs both a standard Ubuntu setup (fourth entry) and the alternate just described relying on the container (first entry) along with the (usually commented-out) optional macOS setup (third entry). And line two brings the drd container from Rocker. The CI runner script now checks for a possible Rdevel binary as provided inside drd (along with alias RD) and uses it when present. And that is all that there is: no other change on the user side; tests now run under R-devel. You can see some of the initial runs at the rcppint64 repo actions log. Another example is now also at Jeff’s mmap repo.

It should be noted that this relies on R-devel running packages made with R-release. Every few years this breaks when R needs to break its binary API. If and when that happens this option will be costlier as the R-devel instance will then have to (re-)install its R package dependencies. This can be accommodated easily as a step in the yaml file. And under ‘normal’ circumstances it is not needed.

Having easy access to recent builds of R-devel (the container refreshes weekly on a schedule) with the convenience of r2u gives another option for package testing. I may continue to test locally with R-devel as my primary option, and most likely keep my CI small and lean (usually just one R-release run on Ubuntu) but having another option at GitHub Actions is also a good thing.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

11 December, 2025 06:29PM

December 08, 2025

Thorsten Alteholz

My Debian Activities in November 2025

Debian LTS/ELTS

This was my hundred-thirty-seventh month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian and my eighty-eighth ELTS month. As the LTS- and ELTS-teams have been merged now, there is only one paragraph left for both activities.

During my allocated time I uploaded or worked on:

  • [DLA 4381-1] net-snmp security update to fix two CVEs related to denial of service.
  • [DLA 4382-1] libsdl2 security update to fix one CVE related to a memory leak and a denial of service.
  • [DLA 4380-1] cups-filters security update to fix three CVEs related to out of bounds read or writes or a heap buffer overflow.
  • [ELA-1586-1] cups-filters security update to fix three CVEs in Buster and Stretch, related to out of bounds read or writes or a heap buffer overflow.
  • [libcupsfilters] upload to unstable to fix two CVEs
  • [cups-filters] upload to unstable to fix three CVEs
  • [cups] upload to unstable to fix two CVEs
  • [rlottie] upload to unstable to finally fix three CVEs
  • [rplay] upload to unstable to finally fix one CVE
  • [#1121342] trixie-pu bug for libcupsfilters to fix two CVEs in Trixie.
  • [#1121391] trixie-pu bug for cups-filter to fix three CVEs in Trixie.
  • [#1121392] bookworm-pu bug for cups-filter to fix three CVEs in Bookworm.
  • [#112433] trixie-pu bug for rlottie to finally fix three CVEs in Trixie.
  • [#112437] bookworm-pu bug for rlottie to finally fix three CVEs in Bookworm.

I also attended the monthly LTS/ELTS meeting and did a week of LTS/ELTS frontdesk duties. I also stumbled upon a bug in python3-paramiko, where the parsing of include statements in the ssh_config does not work. Rather annoying but already fixed in the newest version, that only needs to find its way to my old VM.

Debian Printing

This month I uploaded a new upstream version or a bugfix version of:

I also uploaded cups to Trixie, to fix bug #1109471 related to a configuration problem with the admin panel.

This work is generously funded by Freexian!

Debian Astro

This month I uploaded a new upstream version or a bugfix version of:

  • siril to unstable (sponsored upload).
  • supernovas to unstable (sponsored upload).

Debian IoT

This month I uploaded a new upstream version or a bugfix version of:

Debian Mobcom

This month I uploaded a new upstream version or a bugfix version of:

misc

This month I uploaded a new upstream version or a bugfix version of:

On my fight against outdated RFPs, I closed 30 of them in November.

I started with about 3500 open RFP bugs, and after working six months on this project, I have closed 183 bugs. Of course new bugs appeared, so the overall number of bugs is only down to about 3360.

Though I view this as a successful project, I also have to admit that it is a bit boring to work on this daily. Therefore I close this diary again and will add the closed RFP bugs to my bug logbook now. I also try to close some of these bugs by really uploading some software, probably one package per month.

FTP master

This month I accepted 236 and rejected 16 packages. The overall number of packages that got accepted was 247.

08 December, 2025 03:20PM by alteholz

François Marier

Learning a new programming language with an LLM

I started learning Go this year. First, I picked a Perl project I wanted to rewrite, got a good book and ignored AI tools since I thought they would do nothing but interfere with learning. Eventually though, I decided to experiment a bit and ended up finding a few ways to use AI assistants effectively even when learning something new.

Searching more efficiently

The first use case that worked for me was search. Instead of searching on a traditional search engine and then ending up on Stack Overflow, I could get the answer I was looking for directly in an AI side-window in my editor. Of course, that's bad news for Stack Overflow.

I was however skeptical from the beginning since LLMs make mistakes, sometimes making up function signatures or APIs that don't exist. Therefore I got into the habit of going to the official standard library documentation to double-check suggestions. For example, if the LLM suggests using strings.SplitN, I verify the function signature and behaviour carefully before using it. Basically, "don't trust and do verify."
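For instance, when strings.SplitN comes up, a thirty-second check against the documented behaviour is enough to confirm (or refute) what the assistant claimed (a trivial illustration, not code from my project):

package main

import (
	"fmt"
	"strings"
)

func main() {
	// strings.SplitN: the third argument caps the number of substrings.
	fmt.Println(strings.SplitN("a,b,c,d", ",", 2))  // [a b,c,d]
	fmt.Println(strings.SplitN("a,b,c,d", ",", -1)) // [a b c d]
}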

I stuck to the standard library in my project, but if an LLM recommends third-party dependencies for you, make sure they exist and that Socket doesn't flag them as malicious. Research has found that 5-20% of packages suggested by LLMs don't actually exist, making this a real attack vector (dubbed "slopsquatting").

Autocomplete is too distracting

A step I took early on was to disable AI autocomplete in my editor. When learning a new language, you need to develop muscle memory for the syntax. Also, Go is no Java. There's not that much boilerplate to write in general.

I found it quite distracting to see some almost correct code replace my thinking about the next step. I can see how one could go faster with these suggestions, but being a developer is not just about cranking out lines of code as fast as possible, it's also about constantly learning new things (and retaining them).

Asking about idiomatic code

One of the most useful prompts when learning a new language is "Is this the most idiomatic way to do this in Go?". Large language models are good at recognizing patterns and can point out when you're writing code that works but doesn't follow the conventions of the language. This is especially valuable early on when you don't yet have a feel for what "good" code looks like in that language.
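For example, a question like that might turn a hand-rolled loop into a standard library call (an illustrative before/after, not code from my actual project):

package example

import "slices"

// containsLoop works, but hand-rolls something the standard library
// already provides.
func containsLoop(xs []string, s string) bool {
	for i := 0; i < len(xs); i++ {
		if xs[i] == s {
			return true
		}
	}
	return false
}

// containsIdiomatic is the kind of rewrite an "is this idiomatic?" prompt
// tends to suggest (slices.Contains is in the standard library since Go 1.21).
func containsIdiomatic(xs []string, s string) bool {
	return slices.Contains(xs, s)
}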

It's usually pretty easy (at least for an experienced developer) to tell when the LLM suggestion is actually counterproductive or wrong. If it increases complexity or is harder to read/decode, it's probably not a good idea to do it.

Reviews

One way a new dev gets better is through code review. If you have access to a friend who's an expert in the language you're learning, then you can definitely gain a lot by asking for feedback on your code.

If you don't have access to such a valuable resource, or as a first step before you consult your friend, I found that AI-assisted code reviews can be useful:

  1. Get the model to write the review prompt for you. Describe what you want reviewed and let it generate a detailed prompt.
  2. Feed that prompt to multiple models. They each have different answers and will detect different problems.
  3. Be prepared to ignore 50% of what they recommend. Some suggestions will be stylistic preferences, others will be wrong, or irrelevant.

The value is in the other 50%: the suggestions that make you think about your code differently or catch genuine problems.

Similarly for security reviews:

  • A lot of what they flag will need to be ignored (false positives, or things that don't apply to your threat model).
  • Some of it may highlight areas for improvement that you hadn't considered.
  • Occasionally, they will point out real vulnerabilities.

But always keep in mind that AI chatbots are trained to be people-pleasers and often feel the need to suggest something when nothing was needed.

An unexpected benefit

One side effect of using AI assistants was that having them write the scaffolding for unit tests motivated me to increase my code coverage. Trimming unnecessary test cases and adding missing ones is pretty quick when the grunt work is already done, and I ended up testing more of my code (being a personal project written in my own time) than I might have otherwise.

Learning

In the end, I continue to believe in the value of learning from quality books (I find reading paper-based most effective). In addition, I like to create Anki questions for common mistakes or things I find I have to look up often. Remembering something will always be faster than asking an AI tool.

So my experience this year tells me that LLMs can supplement traditional time-tested learning techniques, but I don't believe they obsolete them.

P.S. I experimented with getting an LLM to ghost-write this post for me from an outline (+ a detailed style guide) and I ended up having to rewrite at least 75% of it. It was largely a waste of time.

08 December, 2025 12:32AM

December 07, 2025

Vincent Bernat

Compressing embedded files in Go

Go’s embed feature lets you bundle static assets into an executable, but it stores them uncompressed. This wastes space: a web interface with documentation can bloat your binary by dozens of megabytes. A proposal to optionally enable compression was declined because it is difficult to handle all use cases. One solution? Put all the assets into a ZIP archive! 🗜️

Code

The Go standard library includes a module to read and write ZIP archives. It contains a function that turns a ZIP archive into an io/fs.FS structure that can replace embed.FS in most contexts.1

package embed

import (
  "archive/zip"
  "bytes"
  _ "embed"
  "fmt"
  "io/fs"
  "sync"
)

//go:embed data/embed.zip
var embeddedZip []byte

var dataOnce = sync.OnceValue(func() *zip.Reader {
  r, err := zip.NewReader(bytes.NewReader(embeddedZip), int64(len(embeddedZip)))
  if err != nil {
    panic(fmt.Sprintf("cannot read embedded archive: %s", err))
  }
  return r
})

func Data() fs.FS {
  return dataOnce()
}

We can build the embed.zip archive with a rule in a Makefile. We specify the files to embed as dependencies to ensure changes are detected.

common/embed/data/embed.zip: console/data/frontend console/data/docs
common/embed/data/embed.zip: orchestrator/clickhouse/data/protocols.csv 
common/embed/data/embed.zip: orchestrator/clickhouse/data/icmp.csv
common/embed/data/embed.zip: orchestrator/clickhouse/data/asns.csv
common/embed/data/embed.zip:
    mkdir -p common/embed/data && zip --quiet --recurse-paths --filesync $@ $^

The automatic variable $@ is the rule target, while $^ expands to all the dependencies, modified or not.
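To illustrate how the resulting filesystem can be consumed, here is a minimal sketch that serves part of the archive over HTTP. The import path and the directory name inside the archive are assumptions based on the Makefile above, not actual Akvorado code.

package main

import (
  "io/fs"
  "log"
  "net/http"

  embedded "example.com/app/common/embed" // hypothetical import path for the package above
)

func main() {
  // Select the documentation subtree of the ZIP-backed filesystem.
  docs, err := fs.Sub(embedded.Data(), "console/data/docs")
  if err != nil {
    log.Fatal(err)
  }
  // Serve it like any other fs.FS.
  http.Handle("/docs/", http.StripPrefix("/docs/", http.FileServer(http.FS(docs))))
  log.Fatal(http.ListenAndServe(":8080", nil))
}

As discussed below, range requests and content-type sniffing for extension-less files will not work with this setup, because files read from a compressed archive are not seekable.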

Space gain

Akvorado, a flow collector written in Go, embeds several static assets:

  • CSV files to translate port numbers, protocols or AS numbers, and
  • HTML, CSS, JS, and image files for the web interface, and
  • the documentation.
Breakdown of the space used by each component before (left) and after (right) the introduction of embed.zip.

Embedding these assets into a ZIP archive reduced the size of the Akvorado executable by more than 4 MiB:

$ unzip -p common/embed/data/embed.zip | wc -c | numfmt --to=iec
7.3M
$ ll common/embed/data/embed.zip
-rw-r--r-- 1 bernat users 2.9M Dec  7 17:17 common/embed/data/embed.zip

Performance loss

Reading from a compressed archive is not as fast as reading a flat file. A simple benchmark shows it is more than 4× slower. It also allocates some memory.2

goos: linux
goarch: amd64
pkg: akvorado/common/embed
cpu: AMD Ryzen 5 5600X 6-Core Processor
BenchmarkData/compressed-12     2262   526553 ns/op   610 B/op   10 allocs/op
BenchmarkData/uncompressed-12   9482   123175 ns/op     0 B/op    0 allocs/op

Each access to an asset requires a decompression step, as seen in this flame graph:

CPU flame graph comparing the time spent on CPU when reading data from embed.zip (left) versus reading data directly (right). Because the Go testing framework executes the benchmark for uncompressed data 4 times more often, it uses the same horizontal space as the benchmark for compressed data.

While a ZIP archive has an index to quickly find the requested file, seeking inside a compressed file is currently not possible.3 Therefore, the files from a compressed archive do not implement the io.ReaderAt or io.Seeker interfaces, unlike directly embedded files. This prevents some features, like serving partial files or detecting MIME types when serving files over HTTP.
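If you need those features for a specific asset, one possible workaround, sketched below and not taken from Akvorado, is to buffer the file in memory: a bytes.Reader implements io.ReadSeeker, so http.ServeContent gets range support back and the MIME type can be set explicitly.

package assets

import (
  "bytes"
  "io/fs"
  "mime"
  "net/http"
  "path"
  "time"
)

// serveAsset reads one asset fully into memory before serving it, trading
// some memory for seekability and an explicit Content-Type header.
func serveAsset(w http.ResponseWriter, r *http.Request, fsys fs.FS, name string) {
  data, err := fs.ReadFile(fsys, name)
  if err != nil {
    http.NotFound(w, r)
    return
  }
  if ctype := mime.TypeByExtension(path.Ext(name)); ctype != "" {
    w.Header().Set("Content-Type", ctype)
  }
  // bytes.NewReader implements io.ReadSeeker, so range requests work again.
  http.ServeContent(w, r, name, time.Time{}, bytes.NewReader(data))
}

This obviously only makes sense for assets that are small and requested often; for everything else the plain fs.FS access shown earlier is enough.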


For Akvorado, this is an acceptable compromise to save a few mebibytes from an executable of almost 100 MiB. Next week, I will continue this futile adventure by explaining how I prevented Go from disabling dead code elimination! 🦥


  1. You can safely read multiple files concurrently. However, it does not implement ReadDir() and ReadFile() methods. ↩︎

  2. You could keep frequently accessed assets in memory. This reduces CPU usage and trades cached memory for resident memory. ↩︎

  3. SOZip is a profile that enables fast random access in a compressed file. However, Go’s archive/zip module does not support it. ↩︎

07 December, 2025 11:05PM by Vincent Bernat

Iustin Pop

Yes, still alive!

Yeah, again three months have passed since my last (trivial) post, and I really don’t know where the time has flown.

I suppose the biggest problem was the long summer vacation, which threw me off-track, and then craziness started. Work work work, no time for anything, which kept me fully busy in August, and then “you should travel”.

So mid-September I went on my first business trip since Covid, again to Kirkland, which in itself was awesome. Flew out Sunday, and as I was concerned I was going to lose too much fitness—had a half-marathon planned on the weekend after the return—I ran every morning of the four days I was there. And of course, on the last day, I woke up even earlier (05:30 AM), went out to run before sunrise, intending to do a very simple “run along the road that borders the lake for 2.5K, then back”. And right at the farthest point, a hundred metres before my goal of turning around, I tripped, started falling, and as I was falling, I hit—sideways—a metal pole. I was in a bus station, it was the pole that has the schedule at the top, and I hit it at relatively full speed, right across my left-side ribs. The crash took the entire air out of my lungs, and I don’t remember if I ever felt pain/sensation like that—I was seriously not able to breathe for 20 seconds or so, and I was wondering if I’m going to pass out at this rate.

Only 20 seconds, because my Garmin started howling like a police siren, and the screen was saying something along the lines of: “Incident detected; contacting emergency services in 40…35…” and I was fumbling to cancel that, since a) I wasn’t that bad, b) notifying my wife that I had a crash would have not been a smart idea.

My left leg was scraped in a few places, my left hand pretty badly, or more than just scraped, so my focus was on limping back, and finding a fountain to wash my injuries, which I did, so I kept running with blood dripping down my hand. Fun fun, everything was hurting, I took an Uber for the ~1Km to the office, had many meetings, took another Uber and flew back to Zurich. Seattle → San Francisco → Zürich, I think 14 hours, with my ribs hurting pretty badly. But I got home (Friday afternoon), and was wondering if I can run or not on Saturday.

Saturday comes, I feel pretty OK, so I said let’s try, will stop if the pain is too great. I pick up my number, I go to the start, of course in the last block and not my normal block, and I start running. After 50 metres, I knew this won’t be good enough, but I said, let’s make it to the first kilometre. Then to the first fuelling point, then to the first aid point, at which moment I felt good enough to go to the second one.

Long story short, I ran the whole half marathon, with pain. Every stop for fuelling was mentally hard, as the pain stopped, and I knew I had to start running again, and the pain would resume. In the end, managed to finish: two and a half hours, instead of just two hours, but alive and very happy. Of course, I didn’t know what was waiting for me… Sunday I wake up in heavy pain, and despite painkillers, I was not feeling much better. The following night was terrible, Monday morning I went to the doctor, had X-rays, discussion with a radiologist. “Not really broken, but more than just bruised. See this angle here? Bones don’t have angles normally”. Painkillers, chest/abdomen wrapping, no running! So my attempts to “not lose fitness” put me off running for a couple of weeks.

Then October came, and I was getting better, but work was getting even more crazy. I don’t know where November passed, honestly, and now we’re already in December. I did manage to run, quite well, managed to bike a tiny bit and swim a little, but I’m not in a place where I can keep a regular and consistent schedule.

On the good side, I managed this year, for the first time since Covid, to not get sick. Hey, a sport injury is 100× better than a sickness, like I had in previous years, taking me out for two weeks. But life was crazy enough that I didn’t read some of my email accounts for months, and I’m just now starting to catch up to, well, baseline.

Of course, “the” rib—the lowest one on the left side—is long-healed, or so I thought. After some strength training early this week, I was very sore the next day, and I wanted to test whether my rib is still sore. I touched it at “the point”, and it hurt so badly I couldn’t believe. Two and a half months, and it’s not done-done.

And now it’s just two weeks before Christmas and New Year’s, and that time off will ruin my rhythm again. At least ski vacation is booked, ski service is done, and slowly, work is getting in good enough shape to actually enjoy thinking about vacation.

So, in the end, a very adventurous last third of the year, and that wasn’t even all. As I’m writing this, my right wrist is bandaged and for the past 24 hours it hasn’t hurt too much, but that’s another, and not so interesting, story.

I’ll close with a yay for always being behind/backlogged, but alive and relatively well. My sport injuries are “elective injuries” so to speak, and I’m very thankful for that. See you in the next post!

07 December, 2025 08:37PM

December 06, 2025

Simon Josefsson

Reproducible Guix Container Images

Around a year ago I wrote about Guix Container Images for GitLab CI/CD and these images have served the community well. Besides continuous use in CI/CD, these Guix container images are used to confirm reproducibility of the source tarball artifacts in the releases of Libtasn1 v4.20, InetUtils v2.6, Libidn2 v2.3.8, Libidn v1.43, SASL v2.2.2, Guile-GnuTLS v5.0.1, and OATH Toolkit v2.6.13. See how all those release announcements mention a Guix commit? That’s the essential supply-chain information about the Guix build environment that allows the artifacts to be re-created. To make sure this is repeatable, the release tarball artifacts are re-created from source code every week in the verify-reproducible-artifacts project, which I wrote about earlier. Guix’s time-travelling feature makes this sustainable to maintain, and will hopefully continue to allow reproducing the exact same tarball artifacts for years to come.

During the last year, Guix was unfortunately removed from Debian stable. My Guix container images were created from Debian with that Guix package. My setup continued to work since the old stage0 Debian+Guix containers were still available. Such a setup is not sustainable, as there will be bit-rot and we don’t want to rely on old containers forever, which (after the removal of Guix from Debian) could not be re-produced any more. Let this be a reminder of how user-empowering features such as Guix time-travelling are! I have reworked my Guix container image setup, and this post is an update on the current status of this effort.

The first step was to re-engineer Debian container images with Guix, and I realized these were useful on their own, and warrant a separate project. A more narrowly scoped project will hopefully make it easier to keep them working. Now instead of apt-get install guix they use the official Guix guix-install.sh approach. Read more about that effort in the announcement of Debian with Guix.

The second step was to reconsider my approach to generating the Guix images. The earlier design had several stages. First, Debian+Guix containers were created. Then from those containers, a pure Guix container was created. Finally, using the pure Guix container, another pure Guix container was created. The idea behind that GCC-like approach was to get to reproducible images that were created from an image that had no Debian left on it. However, I never managed to finish this, partially because I hadn’t realized that every time you build a Guix container image from Guix, you effectively go back in time. When using Guix version X to build a container with Guix on it, it will not put Guix version X into the container but will put whatever version of Guix is available in its package archive, which will be an earlier version, such as version X-N. I had hoped to overcome this somehow (running a guix pull in newly generated images may work), but never finished this before Guix was removed from Debian.

So what could a better design look like?

For efficiency, I had already started experimenting with generating the final images directly from the Debian+Guix images, and after reproducibility bugs were fixed I was able to get to reproducible images. However, I was still concerned that the Debian container could taint the process somehow, and was also concerned about the implied dependency on non-free software in Debian.

I’ve been using comparative rebuilds on “similar” distributions to confirm artifact reproducibility for my software projects, comparing builds on Trisquel 11 with Ubuntu 22.04, and AlmaLinux 9 with RockyLinux 9, for example. This works surprisingly well. Including one freedom-respecting distribution like Trisquel will detect if any non-free software has a bearing on the artifacts. Using different architectures, such as amd64 vs arm64, also helps with deeper supply-chain concerns.

My conclusion was that I wanted containers with the same Guix commit for both Trisquel and Ubuntu. Given the similarity with Debian, adapting and launching the Guix on Trisquel/Debian project was straightforward. So we now have Trisquel 11/12 and Ubuntu 22.04/24.04 images with the same Guix on them.

Do you see where the debian-with-guix and guix-on-dpkg projects are leading?

We are now ready to look at the modernized Guix Container Images project. The tags are the same as before:

registry.gitlab.com/debdistutils/guix/container:latest
registry.gitlab.com/debdistutils/guix/container:slim
registry.gitlab.com/debdistutils/guix/container:extra
registry.gitlab.com/debdistutils/guix/container:gash

The method to create them is different. Now there is a “build” job that uses the earlier Guix+Trisquel container (for amd64) or Guix+Debian (for arm64, pending Trisquel arm64 containers). The build job creates the final containers directly. Next, an Ubuntu “reproduce” job is launched that runs the same commands, failing if it cannot generate a bit-by-bit identical container. Then single-arch images are tested (installing/building GNU hello and building libksba), and then pushed to the GitLab registry, adding multi-arch images in the process. Then the final multi-arch containers are tested by building Guile-GnuTLS and, on success, uploaded to the Docker Hub.

How would you use them? A small way to start the container is like this:

jas@kaka:~$ podman run -it --privileged --entrypoint=/bin/sh registry.gitlab.com/debdistutils/guix/container:latest
sh-5.2# env HOME=/ guix describe # https://issues.guix.gnu.org/74949
  guix 21ce6b3
    repository URL: https://git.guix.gnu.org/guix.git
    branch: master
    commit: 21ce6b392ace4c4d22543abc41bd7c22596cd6d2
sh-5.2# 

The need for --entrypoint=/bin/sh is because Guix’s pack command sets up the entry point differently than most other containers. This could probably be fixed if people want that, and there may be open bug reports about this.

The need for --privileged is more problematic, but is discussed upstream. The above example works fine without it, but running anything more elaborate with guix-daemon installing packages will trigger a fatal error. Speaking of that, here is a snippet of commands that allow you to install Guix packages in the container.

cp -rL /gnu/store/*profile/etc/* /etc/
echo 'root:x:0:0:root:/:/bin/sh' > /etc/passwd
echo 'root:x:0:' > /etc/group
groupadd --system guixbuild
for i in $(seq -w 1 10); do useradd -g guixbuild -G guixbuild -d /var/empty -s $(command -v nologin) -c "Guix build user $i" --system guixbuilder$i; done
env LANG=C.UTF-8 guix-daemon --build-users-group=guixbuild &
guix archive --authorize < /share/guix/ci.guix.gnu.org.pub
guix archive --authorize < /share/guix/bordeaux.guix.gnu.org.pub
guix install hello
GUIX_PROFILE="/var/guix/profiles/per-user/root/guix-profile"
. "$GUIX_PROFILE/etc/profile"
hello

This could be simplified, but we chose not to hard-code it into our containers because some of these are things that probably shouldn’t be papered over but fixed properly somehow. In some execution environments, you may need to pass --disable-chroot to guix-daemon.

To use the containers to build something in a GitLab pipeline, here is an example snippet:

test-amd64-latest-wget-configure-make-libksba:
  image: registry.gitlab.com/debdistutils/guix/container:latest
  before_script:
  - cp -rL /gnu/store/*profile/etc/* /etc/
  - echo 'root:x:0:0:root:/:/bin/sh' > /etc/passwd
  - echo 'root:x:0:' > /etc/group
  - groupadd --system guixbuild
  - for i in $(seq -w 1 10); do useradd -g guixbuild -G guixbuild -d /var/empty -s $(command -v nologin) -c "Guix build user $i" --system guixbuilder$i; done
  - export HOME=/
  - env LANG=C.UTF-8 guix-daemon --build-users-group=guixbuild &
  - guix archive --authorize < /share/guix/ci.guix.gnu.org.pub
  - guix archive --authorize < /share/guix/bordeaux.guix.gnu.org.pub
  - guix describe
  - guix install libgpg-error
  - GUIX_PROFILE="//.guix-profile"
  - . "$GUIX_PROFILE/etc/profile"
  script:
  - wget https://www.gnupg.org/ftp/gcrypt/libksba/libksba-1.6.7.tar.bz2
  - tar xfa libksba-1.6.7.tar.bz2
  - cd libksba-1.6.7
  - ./configure
  - make V=1
  - make check VERBOSE=t V=1

More help on the project page for the Guix Container Images.

That’s it for tonight folks, and remember, Happy Hacking!

06 December, 2025 10:22PM by simon

hackergotchi for Jonathan Dowland

Jonathan Dowland

thesis

It's done! It's over! I've graduated, I have the scroll, I'm staring at the eye-watering prices for the official photographer snap, I'm adjusting to post-thesis life.

My PhD thesis revisions have been accepted and my thesis is now available from Newcastle University Library's eThesis repository.

As part of submitting my corrections, I wrote a brief report detailing the changes I made from my thesis at the time of the viva. I also produced a latexdiff marked-up copy of the thesis to visualise the exact changes. In order to shed some light on the post-viva corrections process, at least at my institution, and in the hope that they are some use to someone, I'm sharing those documents:

06 December, 2025 09:41PM

December 04, 2025

hackergotchi for Colin Watson

Colin Watson

Free software activity in November 2025

My Debian contributions this month were all sponsored by Freexian. I had a bit less time than usual, because Freexian collaborators gathered in Marseille this month for our yearly sprint, doing some planning for next year.

You can also support my work directly via Liberapay or GitHub Sponsors.

OpenSSH

I began preparing for the second stage of the GSS-API key exchange package split (some details have changed since that message). It seems that we’ll need to wait until Ubuntu 26.04 LTS has been released, but that’s close enough that it’s worth making sure we’re ready. This month I just did some packaging cleanups that would otherwise have been annoying to copy, such as removing support for direct upgrades from pre-bookworm. I’m considering some other package rearrangements to make the split easier to manage, but haven’t made any decisions here yet.

This also led me to start on a long-overdue bug triage pass, mainly consisting of applying usertags to lots of our open bugs to sort them by which program they apply to, and also closing a few that have been fixed, since some bugs will eventually need to be reassigned to GSS-API packages and it would be helpful to make them easier to find. At the time of writing, about 30% of the bug list remains to be categorized this way.

Python packaging

I upgraded these packages to new upstream versions:

I packaged django-pgtransaction and backported it to trixie, since we plan to use it in Debusine; and I adopted python-certifi for the Python team.

I fixed or helped to fix several other build/test failures:

I fixed a couple of other bugs:

Other bits and pieces

Code reviews

04 December, 2025 05:55PM by Colin Watson

hackergotchi for Ben Hutchings

Ben Hutchings

FOSS activity in November 2025

04 December, 2025 02:59PM by Ben Hutchings

December 03, 2025

Reproducible Builds

Reproducible Builds in November 2025

Welcome to the report for November 2025 from the Reproducible Builds project!

These monthly reports outline what we’ve been up to over the past month, highlighting items of news from elsewhere in the increasingly-important area of software supply-chain security. As always, if you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.

In this report:

  1. “10 years of Reproducible Builds” at SeaGL
  2. Distribution work
  3. Tool development
  4. Website updates
  5. Miscellaneous news
  6. Software Supply Chain Security of Web3
  7. Upstream patches

‘10 years of Reproducible Builds’ at SeaGL 2025

On Friday 8th November, Chris Lamb gave a talk called 10 years of Reproducible Builds at SeaGL in Seattle, WA.

Founded in 2013, SeaGL is a free, grassroots technical summit dedicated to spreading awareness and knowledge about free source software, hardware and culture. Chris’ talk:

[…] introduces the concept of reproducible builds, its technical underpinnings and its potentially transformative impact on software security and transparency. It is aimed at developers, security professionals and policy-makers who are concerned with enhancing trust and accountability in our software. It also provides a history of the Reproducible Builds project, which is approximately ten years old. How are we getting on? What have we got left to do? Aren’t all the builds reproducible now?


Distribution work

In Debian this month, Jochen Sprickerhof created a merge request to replace the use of reprotest in Debian’s Salsa Continuous Integration (CI) pipeline with debrebuild. Jochen cites the advantages as being threefold: firstly, that “only one extra build needed”; it “uses the same sbuild and ccache tooling as the normal build”; and “works for any Debian release”. The merge request was merged by Emmanuel Arias and is now active.

kpcyrd posted to our mailing list announcing the initial release of repro-threshold, which implements an APT transport that “defines a threshold of at least X of my N trusted rebuilders need to confirm they reproduced the binary” before installing Debian packages. “Configuration can be done through a config file, or through a curses-like user interface.”

Holger then merged two commits by Jochen Sprickerhof in order to address a fakeroot-related reproducibility issue in the debian-installer, and Jörg Jaspert deployed a patch by Ivo De Decker for a bug originally filed by Holger in February 2025 related to some Debian packages not being archived on snapshot.debian.org.

Elsewhere, Roland Clobus performed some analysis on the “live” Debian trixie images, which he determined were not reproducible. However, in a follow-up post, Roland happily reports that the issues have been handled. In addition, 145 reviews of Debian packages were added, 12 were updated and 15 were removed this month adding to our knowledge about identified issues.

Lastly, Jochen Sprickerhof filed a bug announcing their intention to “binary NMU” a very large number of R programming language packages after a reproducibility-related toolchain bug was fixed.


Bernhard M. Wiedemann posted another openSUSE monthly update for their work there.


Julien Malka and Arnout Engelen launched the new hash collection server for NixOS. Aside from improved reporting to help focus reproducible builds efforts within NixOS, it collects build hashes as individually-signed attestations from independent builders, laying the groundwork for further tooling.


Tool development

diffoscope version 307 was uploaded to Debian unstable (as well as version 309). These changes included further attempts to automatically deploy to PyPI by liaising with the PyPI developers/maintainers (with this experimental feature). [][][]

In addition, reprotest versions 0.7.31 and 0.7.32 were uploaded to Debian unstable by Holger Levsen, who also made the following changes:

  • Do not vary the architecture personality if the kernel is not varied. (Thanks to Raúl Cumplido). []
  • Drop the debian/watch file, as Lintian now flags this as an error for ‘native’ Debian packages. [][]
  • Bump Standards-Version to 4.7.2, with no changes needed. []
  • Drop the Rules-Requires-Root header as it is no longer required. []

In addition, however, Vagrant Cascadian fixed a build failure by removing some extra whitespace from an older changelog entry. []


Website updates

Once again, there were a number of improvements made to our website this month including:


Miscellaneous news


Software Supply Chain Security of Web3

Via our mailing list, Martin Monperrus let us know about their recently-published page on the Software Supply Chain Security of Web3. The abstract of their paper is as follows:

Web3 applications, built on blockchain technology, manage billions of dollars in digital assets through decentralized applications (dApps) and smart contracts. These systems rely on complex, software supply chains that introduce significant security vulnerabilities. This paper examines the software supply chain security challenges unique to the Web3 ecosystem, where traditional Web2 software supply chain problems intersect with the immutable and high-stakes nature of blockchain technology. We analyze the threat landscape and propose mitigation strategies to strengthen the security posture of Web3 systems.

Their paper lists reproducible builds as one of the mitigating strategies. A PDF of the full text is available to download.


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:



Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

03 December, 2025 08:28PM

December 02, 2025

François Marier

Recovering from a broken update on the Turris Omnia

The recent Turris OS update from 7.2.3 to 9.0.0 took down my WiFi entirely. The wired network still works fine, but wireless is completely broken.

Factory reset

It turns out the Omnia has an extensive (and fast) factory reset / recovery mode via the hardware reset button.

Unfortunately, the factory image didn't work for me, possibly because I don't use the stock WiFi radios anymore.

Rolling back with schnapps

Thanks to the fact that the Omnia uses a btrfs root filesystem, and the liberal use of snapshots around updates, I was able to roll back to the pre-9.0.0 state.

First, I connected to the router using ssh:

ssh root@192.168.1.1

Then I listed the available snapshots:

$ schnapps list
# | Type      | Size        | Date                        | Description
------+-----------+-------------+-----------------------------+------------------------------------
  500 | post      |    15.98MiB | 2025-08-09 11:27:48 -0700   | Automatic post-update snapshot (TurrisOS 7.2.2 - hbs)
  506 | pre       |    17.92MiB | 2025-09-12 03:44:32 -0700   | Automatic pre-update snapshot (TurrisOS 7.2.2 - hbs)
  507 | post      |    17.88MiB | 2025-09-12 03:45:14 -0700   | Automatic post-update snapshot (TurrisOS 7.2.3 - hbs)
  515 | time      |    20.03MiB | 2025-11-02 01:05:01 -0700   | Snapshot created by cron
  516 | time      |    20.05MiB | 2025-11-09 01:05:01 -0800   | Snapshot created by cron
  517 | time      |    20.29MiB | 2025-11-16 01:05:00 -0800   | Snapshot created by cron
  518 | time      |    20.64MiB | 2025-11-23 01:05:01 -0800   | Snapshot created by cron
  519 | time      |    20.83MiB | 2025-11-30 01:05:00 -0800   | Snapshot created by cron
  520 | pre       |    87.91MiB | 2025-11-30 07:41:10 -0800   | Automatic pre-update snapshot (TurrisOS 7.2.3 - hbs)
  521 | post      |   196.32MiB | 2025-11-30 07:48:11 -0800   | Automatic post-update snapshot (TurrisOS 9.0.0 - hbs)
  523 | pre       |     4.44MiB | 2025-11-30 20:47:31 -0800   | Automatic pre-update snapshot
  524 | post      |   224.00KiB | 2025-11-30 20:47:43 -0800   | Automatic post-update snapshot
  525 | rollback  |   224.00KiB | 2025-12-01 04:56:32 +0000   | Rollback to snapshot factory
  526 | pre       |     4.44MiB | 2025-11-30 21:04:19 -0800   | Automatic pre-update snapshot
  527 | post      |   272.00KiB | 2025-11-30 21:04:31 -0800   | Automatic post-update snapshot
  528 | rollback  |   272.00KiB | 2025-12-01 05:13:38 +0000   | Rollback to snapshot factory
  529 | pre       |     4.52MiB | 2025-11-30 21:28:44 -0800   | Automatic pre-update snapshot
  530 | single    |   208.00KiB |                             | 
  531 | rollback  |   224.00KiB | 2025-12-01 05:29:47 +0000   | Rollback to snapshot factory

Finally, I rolled back to the exact state I was on before the 9.0.0 update:

$ schnapps rollback 520
Current state saved as snapshot number 532
Rolled back to snapshot 520

Full wipe

As an aside, it turns out that the factory reset functionality is implemented as a btrfs rollback to a special factory snapshot. This is why it is so fast, but it also means that doing a simple factory reset doesn't wipe the data on your router. If you are planning to sell your device or otherwise dispose of it, you also need to delete all btrfs snapshots.
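A rough sketch of that cleanup, assuming your Turris OS version provides a schnapps delete subcommand (the snapshot number below is taken from the listing above and purely illustrative; check schnapps help for the exact syntax on your firmware):

$ schnapps list
$ schnapps delete 500   # repeat for every snapshot number still shown by "schnapps list"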

Conclusion

While this update was very disappointing, especially since it's never happened before with major updates on Turris OS, it made me discover just how great the recovery tools are. It would be pretty tricky to fully brick one of these devices.

02 December, 2025 11:23PM

Simon Josefsson

Guix on Trisquel & Ubuntu for Reproducible CI/CD Artifacts

Last week I published Guix on Debian container images that prepared for today’s announcement of Guix on Trisquel/Ubuntu container images.

I have published images with reasonably modern Guix for Trisquel 11 aramo, Trisquel 12 ecne, Ubuntu 22.04 and Ubuntu 24.04. The Ubuntu images are available for both amd64 and arm64, but unfortunately Trisquel arm64 containers aren’t available yet, so they are only for amd64. Images for ppc64el and riscv64 are work in progress. The currently supported container names are:

registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel11-guix
registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel12-guix
registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu22.04-guix
registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu24.04-guix

Or, if you prefer Docker Hub, guix-on-dpkg:

docker.io/jas4711/guix-on-dpkg:trisquel11-guix
docker.io/jas4711/guix-on-dpkg:trisquel12-guix
docker.io/jas4711/guix-on-dpkg:ubuntu22.04-guix
docker.io/jas4711/guix-on-dpkg:ubuntu24.04-guix

You may use them as follows. See the guix-on-dpkg README for how to start guix-daemon and install packages.

jas@kaka:~$ podman run -it --hostname guix --rm registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel11-guix
root@guix:/# head -1 /etc/os-release 
NAME="Trisquel GNU/Linux"
root@guix:/# guix describe
  guix 136fc8b
    repository URL: https://gitlab.com/debdistutils/guix/mirror.git
    branch: master
    commit: 136fc8bfe91a64d28b6c54cf8f5930ffe787c16e
root@guix:/# 

You may now be asking yourself: why? Fear not, gentle reader, because having two container images of roughly similar software is a great tool for attempting to build software artifacts reproducibly, and comparing the results to spot differences. Obviously.

I have been using this pattern to get reproducible tarball artifacts of several software releases for around a year and a half, since libntlm 1.8.

Let’s walk through how to set up a CI/CD pipeline that will build a piece of software, in four different jobs for Trisquel 11/12 and Ubuntu 22.04/24.04. I am in the process of learning Codeberg/Forgejo CI/CD, so I am still using GitLab CI/CD here, but the concepts should be the same regardless of platform. Let’s start by defining a job skeleton:

.guile-gnutls: &guile-gnutls
  before_script:
  - /root/.config/guix/current/bin/guix-daemon --version
  - env LC_ALL=C.UTF-8 /root/.config/guix/current/bin/guix-daemon --build-users-group=guixbuild $GUIX_DAEMON_ARGS &
  - GUIX_PROFILE=/root/.config/guix/current; . "$GUIX_PROFILE/etc/profile"
  - type guix
  - guix --version
  - guix describe
  - time guix install --verbosity=0 wget gcc-toolchain autoconf automake libtool gnutls guile pkg-config
  - time apt-get update
  - time apt-get install -y make git texinfo
  - GUIX_PROFILE="/root/.guix-profile"; . "$GUIX_PROFILE/etc/profile"
  script:
  - git clone https://codeberg.org/guile-gnutls/guile-gnutls.git
  - cd guile-gnutls
  - git checkout v5.0.1
  - ./bootstrap
  - ./configure
  - make V=1
  - make V=1 check VERBOSE=t
  - make V=1 dist
  after_script:
  - mkdir -pv out/$CI_JOB_NAME_SLUG/src
  - mv -v guile-gnutls/*-src.tar.* out/$CI_JOB_NAME_SLUG/src/
  - mv -v guile-gnutls/*.tar.* out/$CI_JOB_NAME_SLUG/
  artifacts:
    paths:
    - out/**

This installs some packages, clones guile-gnutls (it could be any project, that’s just an example), builds it and returns tarball artifacts. The artifacts are the git-archive and make dist tarballs.

Let’s instantiate the skeleton into four jobs, running the Trisquel 11/12 jobs on amd64 and the Ubuntu 22.04/24.04 jobs on arm64 for fun.

guile-gnutls-trisquel11-amd64:
  tags: [ saas-linux-medium-amd64 ]
  image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel11-guix
  extends: .guile-gnutls

guile-gnutls-ubuntu22.04-arm64:
  tags: [ saas-linux-medium-arm64 ]
  image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu22.04-guix
  extends: .guile-gnutls

guile-gnutls-trisquel12-amd64:
  tags: [ saas-linux-medium-amd64 ]
  image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:trisquel12-guix
  extends: .guile-gnutls

guile-gnutls-ubuntu24.04-arm64:
  tags: [ saas-linux-medium-arm64 ]
  image: registry.gitlab.com/debdistutils/guix/guix-on-dpkg:ubuntu24.04-guix
  extends: .guile-gnutls

Running this pipeline will result in artifacts that you want to confirm for reproducibility. Let’s add a pipeline job to do the comparison:

guile-gnutls-compare:
  image: alpine:latest
  needs: [ guile-gnutls-trisquel11-amd64,
           guile-gnutls-trisquel12-amd64,
           guile-gnutls-ubuntu22.04-arm64,
           guile-gnutls-ubuntu24.04-arm64 ]
  script:
  - cd out
  - sha256sum */*.tar.* */*/*.tar.* | sort | grep    -- -src.tar.
  - sha256sum */*.tar.* */*/*.tar.* | sort | grep -v -- -src.tar.
  - sha256sum */*.tar.* */*/*.tar.* | sort | uniq -c -w64 | sort -rn
  - sha256sum */*.tar.* */*/*.tar.* | grep    -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
  - sha256sum */*.tar.* */*/*.tar.* | grep -v -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
# Confirm modern git-archive tarball reproducibility
  - cmp guile-gnutls-trisquel12-amd64/src/*.tar.gz guile-gnutls-ubuntu24-04-arm64/src/*.tar.gz
# Confirm old git-archive (export-subst but long git describe) tarball reproducibility
  - cmp guile-gnutls-trisquel11-amd64/src/*.tar.gz guile-gnutls-ubuntu22-04-arm64/src/*.tar.gz
# Confirm 'make dist' generated tarball reproducibility
  - cmp guile-gnutls-trisquel11-amd64/*.tar.gz guile-gnutls-ubuntu22-04-arm64/*.tar.gz
  - cmp guile-gnutls-trisquel12-amd64/*.tar.gz guile-gnutls-ubuntu24-04-arm64/*.tar.gz
  artifacts:
    when: always
    paths:
    - ./out/**

Look how beautiful, almost like ASCII art! The commands print SHA256 checksums of the artifacts, sorted in a couple of ways, and then proceed to compare relevant artifacts. What would the output of such a run be, you may wonder? You can look for yourself in the guix-on-dpkg pipeline but here is the gist of it:

$ cd out
$ sha256sum */*.tar.* */*/*.tar.* | sort | grep    -- -src.tar.
79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca  guile-gnutls-trisquel11-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca  guile-gnutls-ubuntu22-04-arm64/src/guile-gnutls-v5.0.1-src.tar.gz
b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f  guile-gnutls-trisquel12-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f  guile-gnutls-ubuntu24-04-arm64/src/guile-gnutls-v5.0.1-src.tar.gz
$ sha256sum */*.tar.* */*/*.tar.* | sort | grep -v -- -src.tar.
1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a  guile-gnutls-trisquel11-amd64/guile-gnutls-5.0.1.tar.gz
1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a  guile-gnutls-ubuntu22-04-arm64/guile-gnutls-5.0.1.tar.gz
bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84  guile-gnutls-trisquel12-amd64/guile-gnutls-5.0.1.tar.gz
bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84  guile-gnutls-ubuntu24-04-arm64/guile-gnutls-5.0.1.tar.gz
$ sha256sum */*.tar.* */*/*.tar.* | sort | uniq -c -w64 | sort -rn
      2 bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84  guile-gnutls-trisquel12-amd64/guile-gnutls-5.0.1.tar.gz
      2 b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f  guile-gnutls-trisquel12-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
      2 79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca  guile-gnutls-trisquel11-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
      2 1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a  guile-gnutls-trisquel11-amd64/guile-gnutls-5.0.1.tar.gz
$ sha256sum */*.tar.* */*/*.tar.* | grep    -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
      2 79bc24143ba083819b36822eacb8f9e15a15a543e1257c53d30204e9ffec7aca  guile-gnutls-trisquel11-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
      2 b190047cee068f6b22a5e8d49ca49a2425ad4593901b9ac8940f8842ba7f164f  guile-gnutls-trisquel12-amd64/src/guile-gnutls-v5.0.1-src.tar.gz
$ sha256sum */*.tar.* */*/*.tar.* | grep -v -- -src.tar. | sort | uniq -c -w64 | grep -v '^      1 '
      2 1e8d107ad534b85f30e432d5c98bf599aab5d8db5f996c2530aabe91f203018a  guile-gnutls-trisquel11-amd64/guile-gnutls-5.0.1.tar.gz
      2 bc2df2d868f141bca5f3625aa146aa0f24871f6dcf0b48ff497eba3bb5219b84  guile-gnutls-trisquel12-amd64/guile-gnutls-5.0.1.tar.gz
$ cmp guile-gnutls-trisquel12-amd64/src/*.tar.gz guile-gnutls-ubuntu24-04-arm64/src/*.tar.gz
$ cmp guile-gnutls-trisquel11-amd64/src/*.tar.gz guile-gnutls-ubuntu22-04-arm64/src/*.tar.gz
$ cmp guile-gnutls-trisquel11-amd64/*.tar.gz guile-gnutls-ubuntu22-04-arm64/*.tar.gz
$ cmp guile-gnutls-trisquel12-amd64/*.tar.gz guile-gnutls-ubuntu24-04-arm64/*.tar.gz

That’s it for today, but stay tuned for more updates on using Guix in containers, and remember: Happy Hacking!

02 December, 2025 10:01PM by simon

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

duckdb-mlpack 0.0.5: Added kmeans, version helpers, documentation

A new release of the still-recent duckdb extension for mlpack, the C++ header-only library for machine learning, was merged into the duckdb community extensions repo today, and has been updated at its duckdb ‘mlpack’ extension page.

This release 0.0.5 adds one new method: kmeans clustering. We also added two version accessors for both mlpack and armadillo. We found during the work on random forests (added in 0.0.4) that the multithreaded random number generation was not quite right in the respective upstream codes. This has by now been corrected in armadillo 15.2.2 as well as the trunk version of mlpack, so if you build with those and set a seed, your forests and classifications will be stable across reruns. We added a second state variable mlpack_silent that can be used to suppress even the minimal prediction quality summary some methods show, and expanded the documentation.

For more details, see the repo for code, issues and more, and the extension page for more about this duckdb community extension.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

02 December, 2025 05:40PM

Birger Schacht

Status update, November 2025

I started this month with a week of vacation which was followed by a small planned surgery and two weeks of sick leave. Nonetheless, I packaged and uploaded new releases of a couple of packages:

  • swayidle updated to version 1.9.0-1
  • swaylock updated to version 1.8.4-1
  • foot updated to version 1.25.0-1
  • swayimg updated to version 4.6-1
  • scdoc updated to version 1.11.4-1
  • wofi updated to version 1.5.1-1
  • xdg-desktop-portal-wlr updated to version 0.8.0-1

Besides that, I reactivated a project I started in summer 2024: debiverse.org. The idea of that was to have interfaces to Debian bugs and packages that are usable on mobile devices (I know, ludicrous!). Back then I started with Flask and SQLAlchemy, but that soon got out of hand. I now switched the whole stack to FastAPI and SQLModel, which makes it a lot easier to manage. And the upside is that it comes with an API and OpenAPI docs. For the rendered HTML pages I use Jinja2 with Tailwind as CSS framework. I am currently using udd-mirror as database backend, which works pretty well (for this single-user project). It would be nice to have some of the data in a faster index, like Typesense or Meilisearch. This way it would be possible to have faceted search or more performant full text search. But I haven’t found any software packaged in Debian that could provide this.
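To give a flavour of that stack, here is a minimal sketch of a FastAPI + SQLModel endpoint of the kind described above; the model, the table and the connection string are purely illustrative and not taken from the actual debiverse code or the UDD schema:

from fastapi import FastAPI
from sqlmodel import Field, SQLModel, Session, create_engine, select

# Illustrative model only; the real data comes from a udd-mirror database.
class Bug(SQLModel, table=True):
    id: int = Field(primary_key=True)
    package: str
    title: str

# Placeholder connection string; point it at your own udd-mirror instance.
engine = create_engine("postgresql://user:password@udd-mirror.example/udd")

app = FastAPI()  # OpenAPI docs are generated automatically under /docs

@app.get("/bugs/{package}", response_model=list[Bug])
def list_bugs(package: str):
    """Return the bugs filed against a given package."""
    with Session(engine) as session:
        return session.exec(select(Bug).where(Bug.package == package)).all()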

Screenshot of the debiverse bug report list

Screenshot of the debiverse swagger API

02 December, 2025 05:28AM

December 01, 2025

hackergotchi for Guido Günther

Guido Günther

Free Software Activities November 2025

Another short status update of what happened on my side last month. Hand-holding the release machinery for Phosh 0.51.0, but there's more:

See below for details on the above and more:

phosh

  • Better auto brightness (MR)
  • Update CI to forky (MR)
  • Test mobile data connection in CI (MR)
  • Add DebugControl interface (MR)
  • Release 0.51~rc1
  • caffeine prefs: Fix resize when adding intervals (MR)
  • Robustify plugin-prefs screenshot tests (MR)
  • Another build systemd dependency fix (MR)
  • Gesture to tune brightness on lock screen (MR)

phoc

  • Update ci to forky (MR)
  • Exit cleanly on SIGTERM (MR)
  • Release 0.51~rc1, 0.51.0
  • Fix segfault triggered in alpine CI (MR)
  • Cancel preedit on submit (avoids resubmitted text in e.g. chatty or flare) (MR)

phosh-mobile-settings

  • Test suite robustness (MR)
  • Update CI (MR)
  • Release 0.51~rc1

stevia

xdg-desktop-portal-phosh

  • Release 0.51~rc1, 0.50.0
  • Unbreak nightly builds (MR)
  • Unbreak 32bit builds (MR)
  • Drop C file chooser impl (MR)

pfs

  • pfs-open: Allow to open arbitrary directories and start fixing clippy warnings (MR)
  • More clippy cleanups (MR)
  • Allow to ship schema (MR)
  • Run a smoke test in ci (MR)
  • Implement org.freedesktop.FileManager1 in the demo (MR, MR, MR)
  • dir-view: Don't thumbnail when disabled (MR)

Phrog

  • Fix osk dependencies (MR)

gmobile

  • run-phosh: Allow to run headless (MR)
  • Release 0.5.0 (MR)
  • display-panel: Allow to take screenshots (MR)
  • Add hwdb and udev rules for torch min brightness (MR)

feedbackd

feedbackd-device-themes

libcall-ui

  • Ignore callaudiod deprecations as we otherwise break compilation of downstreams (MR)
  • Same for 0.1.x branch (MR)
  • Release (0.1.5)

wireplumber

  • doc: Fix make run invocation (MR)

Chatty

mobile-broadband-provider-info

Debian

  • stevia: Upload 0.51~rc1, 0.51.0
  • phrog: Use stevia instead of osk-stub (MR)
  • meta-phosh: Modernize dependencies (MR)
  • phosh: Drop osk-stub (MR)
  • phosh: Upload 0.51~rc1
  • phoc: Upload 0.41~rc1
  • p-m-s: Upload 0.51~rc1
  • feedbackd-device-themes: Upload 0.8.7
  • m-b-p-i: Upload 20251101
  • debcargo-conf: Backport ashpd patch (MR)
  • xdg-desktop-portal-phosh: Get it into unstable (MR, MR)

Mobian

  • librem5: Drop exponential brightness (MR)

wlroots

  • input-method-unstable-v2: Fix two protocol issues (MR)

libqrtr-glib

  • Fix transfer annotation to unbreak usage from Python (MR)
  • Move doc build to gi-docgen (MR)

libadwaita-rs

  • Allow None for parent in adw_dialog_choose (MR)

phosh-site

  • Lint tools (MR)
  • Add slideshow to landing page (MR)
  • Add more videos (MR)
  • Fix typos and links (MR)
  • Update nightly details (MR)

bengalos-debs

  • Fix phrog build (MR, MR)
  • Enable arm64 builds (MR)

gtk

  • Drop unused defines (MR)

Reviews

This is not code by me but reviews of other people's code. The list is (as usual) slightly incomplete. Thanks for the contributions!

  • pfs: Create folder support (MR)
  • portal: Create thumbnails via thumbnailer service (MR)
  • phosh: caffeine plugin prefs (MR)
  • phosh: lower torch brightness (MR)
  • phosh: wi-fi hotspot QR code (MR)
  • phosh/caffeine: Close status page when selecting an interval (MR)
  • phosh/caffeine: Use empty state (MR)
  • bengalos-recpipes: prep supporting multiple disk layouts (MR)
  • xdg-p-p: Longer test timeout (MR)
  • p-m-s: Volume slider for media roles (MR)

Help Development

If you want to support my work see donations.

Comments?

Join the Fediverse thread

01 December, 2025 06:52PM

November 28, 2025

hackergotchi for Clint Adams

Clint Adams

monkeying around bitrot

One of the servers to which I SSH ratcheted up its public key requirements and thus the Monkeysphere key I've been using for 15 years stopped working.

Unfortunately, monkeysphere gen-subkey hardcodes RSA keys and if I'm going to be forced to use a new subkey I want mine to be of the 25519 variety. Therefore, to add a subkey by hand:

gpg --expert --edit-key $KEYID

Follow roughly what's in /usr/share/monkeysphere/m/gen_subkey, but change the key type to 11 (ECC (set your own capabilities)), don't bother with Encrypt capability, and pick Curve25519.
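For reference, the interactive session looks roughly like this. This is an abbreviated sketch and the exact prompts vary between GnuPG versions; the goal is a subkey with only the Authenticate capability, which is what Monkeysphere uses for SSH:

gpg> addkey
Please select what kind of key you want:
   [...]
  (11) ECC (set your own capabilities)
Your selection? 11
Possible actions for this ECC key: Sign Authenticate
Current allowed actions: Sign
Your selection? S    (toggle Sign off)
Your selection? A    (toggle Authenticate on)
Your selection? Q    (finished; only Authenticate remains)
Please select which elliptic curve you want:
   (1) Curve 25519
   [...]
Your selection? 1
Please specify how long the key should be valid.
   [...]
gpg> save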

monkeysphere subkey-to-ssh-agent and agent-transfer will be all happy with the "ed25519" subkey without any code modifications, and you won't need to rewrite monkeysphere from scratch to use Sequoia for the next 15 years.

Posted on 2025-11-28
Tags:

28 November, 2025 08:56PM

Simon Josefsson

Container Images for Debian with Guix

The debian-with-guix-container project builds and publishes container images of Debian GNU/Linux stable with GNU Guix installed.

The images are like normal Debian stable containers but have the guix tool and a reasonably fresh guix pull.

Supported architectures include amd64 and arm64. The multi-arch container is called:

registry.gitlab.com/debdistutils/guix/debian-with-guix-container:stable

It may also be accessed via debian-with-guix at Docker Hub as:

docker.io/jas4711/debian-with-guix:stable

The container images may be used like this:

$ podman run --privileged -it --hostname guix --rm registry.gitlab.com/debdistutils/guix/debian-with-guix-container:stable
root@guix:/# hello
bash: hello: command not found
root@guix:/# guix describe
  guix c9eb69d
    repository URL: https://gitlab.com/debdistutils/guix/mirror.git
    branch: master
    commit: c9eb69ddbf05e77300b59f49f4bb5aa50cae0892
root@guix:/# LC_ALL=C.UTF-8 /root/.config/guix/current/bin/guix-daemon --build-users-group=guixbuild &
[1] 21
root@guix:/# GUIX_PROFILE=/root/.config/guix/current; . "$GUIX_PROFILE/etc/profile"
root@guix:/# guix describe
Generation 2    Nov 28 2025 10:14:11    (current)
  guix c9eb69d
    repository URL: https://gitlab.com/debdistutils/guix/mirror.git
    branch: master
    commit: c9eb69ddbf05e77300b59f49f4bb5aa50cae0892
root@guix:/# guix install --verbosity=0 hello
accepted connection from pid 55, user root
The following package will be installed:
   hello 2.12.2

hint: Consider setting the necessary environment variables by running:

     GUIX_PROFILE="/root/.guix-profile"
     . "$GUIX_PROFILE/etc/profile"

Alternately, see `guix package --search-paths -p "/root/.guix-profile"'.

root@guix:/# GUIX_PROFILE="/root/.guix-profile"
root@guix:/# . "$GUIX_PROFILE/etc/profile"
root@guix:/# hello
Hello, world!
root@guix:/# 

Below is an example GitLab pipeline job that demonstrates how to run guix install to install additional dependencies, and then downloads and builds a package that picks up the installed package from the system.

test-wget-configure-make-libksba-amd64:
  image: registry.gitlab.com/debdistutils/guix/debian-with-guix-container:stable
  before_script:
  - env LC_ALL=C.UTF-8 /root/.config/guix/current/bin/guix-daemon --build-users-group=guixbuild $GUIX_DAEMON_ARG &
  - GUIX_PROFILE=/root/.config/guix/current; . "$GUIX_PROFILE/etc/profile"
  - guix describe
  - guix install libgpg-error
  - GUIX_PROFILE="/root/.guix-profile"; . "$GUIX_PROFILE/etc/profile"
  - apt-get install --update -y --no-install-recommends build-essential wget ca-certificates bzip2
  script:
  - wget https://www.gnupg.org/ftp/gcrypt/libksba/libksba-1.6.7.tar.bz2
  - tar xfa libksba-1.6.7.tar.bz2
  - cd libksba-1.6.7
  - ./configure
  - make V=1
  - make check VERBOSE=t V=1

The images were initially created for use in GitLab CI/CD Pipelines but should work for any use.

The images are built in a GitLab CI/CD pipeline, see .gitlab-ci.yml.

The containers are derived from official Debian stable images with Guix installed and a successful run of guix pull, built using buildah invoked from build.sh using image/Containerfile that runs image/setup.sh.

The pipeline also push images to the GitLab container registry, and then also to Docker Hub.

Guix binaries are downloaded from the Guix binary tarballs project because of upstream download site availability and bandwidth concerns.

Enjoy these images! Hopefully they can help you overcome the loss of Guix in Debian, which used to make it a mere apt-get install guix away.

There are several things that may be improved further. An alternative to using podman --privileged is to use --security-opt seccomp=unconfined --cap-add=CAP_SYS_ADMIN,CAP_NET_ADMIN which may be slightly more fine-grained.

For ppc64el support I ran into an error message that I wasn’t able to resolve:

guix pull: error: while setting up the build environment: cannot set host name: Operation not permitted

For riscv64, I can’t even find a Guix riscv64 binary tarball for download, is there one anywhere?

For arm64 containers, it seems that you need to start guix-daemon with --disable-chroot to get something to work, at least on GitLab.com’s shared runners, otherwise you will get this error message:

guix install: error: clone: Invalid argument

Building the images themselves also requires disabling some security functionality, and I was not able to build images with buildah without providing --cap-add=CAP_SYS_ADMIN,CAP_NET_ADMIN; otherwise there were errors like this:

guix pull: error: cloning builder process: Operation not permitted
guix pull: error: clone: Operation not permitted
guix pull: error: while setting up the build environment: cannot set loopback interface flags: Operation not permitted

Finally, on amd64 it seems --security-opt seccomp=unconfined is necessary; otherwise there is an error message like this, even if you use --disable-chroot:

guix pull: error: while setting up the child process: in phase setPersonality: cannot set personality: Function not implemented

This particular error is discussed upstream, but generally I think these errors suggest that guix-daemon could make its use of such features more optional: if some particular feature is not available, it should gracefully fall back to another mode of operation, instead of exiting with an error. Of course, it should never fall back to an insecure mode of operation, unless the user requests that.

Happy Hacking!

28 November, 2025 04:32PM by simon

Russell Coker

10gbit and 40gbit Home Networking

Aliexpress has a 4 port 2.5gbit switch with 2*SFP+ sockets for $34.35 delivered [1]. 4 ports isn’t very good for the more common use cases (if daisy chaining them then only 2 are available for devices) so this is really a device for use with a 10Gbit uplink.

Aliexpress has a pair of SFP+ 10Gbit devices with 1M of copper between them for $15.79 delivered [2]. That page also offers a pair of QSFP+ 40Gbit devices with 1M of copper between them for $27.79 delivered.

They have a dual port SFP+ card for a server with two of the pairs of SFP+ 10gbit devices with copper between them for $32.51 delivered [3].

So you can get a 2.5gbit switch with two 10gbit uplink cables to nearby servers for $66.86 including postage. I don’t need this but it is tempting. I spent $93.78 to get 2.5gbit networking [4] so spending $66.86 to get part of my network to 10gbit isn’t much.

It is $99.81 including postage for a Mellanox 2*40Gbit QSFP+ card and two QSFP+ adaptors with 3M of copper between them [5]. It is $55.81 including postage for the Mellanox card without the cable. So that’s $155.62 for a point to point 40gbit link between systems that are less than 3M apart, that’s affordable for a home lab. As an aside the only NVMe I’ve tested which can deliver such speeds was in a Thinkpad and the Thinkpad entered a thermal throttling state after a few seconds of doing that.

The best price I could see for a 40Gbit switch is $1280 for a L3 Managed switch with 2*40G QSFP+ slot ports, 4*10G SFP+ ports, and 48*2.5G RJ45 ports [6]. That’s quite affordable for the SME market but a bit expensive for home users (although I’m sure that someone on r/homelab has one).

I’m not going to get 40Gbit, that’s well above what I need and while a point to point link is quite affordable I don’t have servers in that range. But I am seriously considering 10Gbit, I get paid to do enough networking stuff that having some hands on experience with 10Gbit could be useful.

For a laptop a 5gbit ethernet USB device is $29.48 including delivery which isn’t too expensive [7]. The faster ones seem to be all Thunderbolt and well over $100, which is disappointing as USB 3.2 can do up to 20Gbit. If I start doing 10gbit over ethernet I’ll get one of those USB devices for testing.

For a single server it’s cheaper and easier to get a 4 port 2.5Gbit ethernet card for $55.61 [8].

28 November, 2025 08:13AM by etbe

November 27, 2025

PineTime Band

I’ve had a Pine Time for just over 2 years [1]. About a year ago I had a band break and replaced it from a spare PineTime, and now I just had another break. Having the band only last one year isn’t that great, but it’s fortunate that the break only affects the inner layer of plastic so there is no risk of the watch suddenly falling off and being broken or lost. The Pine64 web site has a page about this with bad options, one broken link and a few Amazon items that have ridiculous postage [2].

I started writing this post while using the band from a Colmi P80 [3]. I bought one for a relative who wanted the metal band and the way the Aliexpress seller does it is to sell the package with the plastic band and include the metal band in the package so I had a spare band. It fits quite well and has none of the reported problems of the PineTime having insufficient space between the spring bar and the watch. The Colmi band in question is described as “rose gold” but is more like “pinkish beige” and doesn’t match the style of the black PineTime.

I ordered a couple of cheap bands from AliExpress which cost $9.77 and $13.55 including postage while the ones that Pine64 recommend have over $15 postage from Amazon!

The 20mm Silicone Magnetic Buckle Watch Strap Band For Huawei GT2 Smart Watch Connected Bracelet Black Watchband Man [4] cost $13.55 including postage. It has a magnetic unfold mechanism which I find a bit annoying and it doesn’t allow easily changing the length. I don’t think I’ll choose that again. But it basically works and is comfortable.

The 20mm Metal Strap for Huawei Watch GT2 3 Quick Release Stainless Steel Watch Band for Samsung Galaxy Watch Bracelet [5] cost $9.77 including postage. I found this unreasonably difficult to put on and not particularly comfortable. But opinion will vary on that, it is cheap and will appeal to some people’s style.

Conclusion

There are claims that getting a replacement band for a PineTime is difficult. My experience is that every band with a 20mm attachment works as long as it’s designed for a square watch; some of the bands are designed to partly go around a round face and wouldn’t fit. I expect that some bands won’t fit, but I don’t think that it’s enough of a problem to be worried about buying a random band from AliExpress. The incidence of bands not fitting will probably be lower than the incidence of other AliExpress products not doing quite what you want (while meeting the legal criteria of doing what they are claimed to do) and not being used.

I’m now wearing the PineTime with the “Magnetic Buckle Watch Strap Band” and plan to wear it for the next year or so.

27 November, 2025 12:37AM by etbe

November 26, 2025

hackergotchi for Bits from Debian

Bits from Debian

New Debian Developers and Maintainers (September and October 2025)

The following contributors got their Debian Developer accounts in the last two months:

  • Evangelos Ribeiro Tzaras (devrts)
  • Andrea Bolognani (abologna)

The following contributors were added as Debian Maintainers in the last two months:

  • Rylie Pavlik
  • Yuchin Tsai
  • Daniel Markstedt
  • Guido Berhörster
  • Renzo Davoli

Congratulations!

26 November, 2025 04:00PM by Jean-Pierre Giraud