Debian is a trademark of Software in the Public Interest, Inc. This site is operated independently in the spirit of point three of the Debian Social Contract, which tells us: “We will not hide problems.”

October 03, 2025

Dirk Eddelbuettel

#053: Adding llvm Snapshots for R Package Testing

Welcome to post 53 in the R4 series.

Continuing with posts #51 from Tuesday and #52 from Wednesday and their stated intent of posting some more … here is another quick one. Earlier today I helped another package developer who came to the r-package-devel list asking for help with a build error on the Fedora machine at CRAN running a recent / development clang. In such situations, the best first step is often to replicate the issue. As I pointed out on the list, the LLVM team behind clang maintains an apt repo at apt.llvm.org, making it a good resource to add to Debian-based containers such as Rocker r-base or the official r-base (the two are in fact interchangeable, and I take care of both).

A small pothole, however, is that the documentation at the top of the apt.llvm.org site is a bit stale and lags behind two aspects that changed on current Debian systems (i.e. unstable/testing as used for r-base). First, apt now prefers files ending in .sources (in a nicer format) and second, it now really requires a key (which is good practice). As it took me a few minutes to work out again how to meet both requirements, I reckoned I might as well script this.

Et voilà the following script does that:

  • it can update and upgrade the container (currently commented-out)
  • it fetches the repository key in ASCII form from the llvm.org site
  • it creates the sources entry, here tagged for llvm ‘current’ (22 at the time of writing)
  • it sets up the required ~/.R/Makevars to use that compiler
  • it installs clang-22 (and clang++-22) (still using the g++ C++ library)
#!/bin/sh

## Update does not hurt but is not strictly needed
#apt update --quiet --quiet
#apt upgrade --yes

## wget -qO- https://apt.llvm.org/llvm-snapshot.gpg.key | sudo tee /etc/apt/trusted.gpg.d/apt.llvm.org.asc
## or as we are root in container
wget -qO- https://apt.llvm.org/llvm-snapshot.gpg.key > /etc/apt/trusted.gpg.d/apt.llvm.org.asc

cat <<EOF >/etc/apt/sources.list.d/llvm-dev.sources
Types: deb
URIs: http://apt.llvm.org/unstable/
# for clang-21 
#   Suites: llvm-toolchain-21
# for current clang
Suites: llvm-toolchain
Components: main
Signed-By: /etc/apt/trusted.gpg.d/apt.llvm.org.asc
EOF

test -d ~/.R || mkdir ~/.R
cat <<EOF >~/.R/Makevars
CLANGVER=-22
# CLANGLIB=-stdlib=libc++
CXX=clang++\$(CLANGVER) \$(CLANGLIB)
CXX11=clang++\$(CLANGVER) \$(CLANGLIB)
CXX14=clang++\$(CLANGVER) \$(CLANGLIB)
CXX17=clang++\$(CLANGVER) \$(CLANGLIB)
CXX20=clang++\$(CLANGVER) \$(CLANGLIB)
CC=clang\$(CLANGVER)
SHLIB_CXXLD=clang++\$(CLANGVER) \$(CLANGLIB)
EOF

apt update
apt install --yes clang-22

Once the script is run, one can test a package (or set of packages) against clang-22 and clang++-22. This may help R package developers. The script is also generic enough for other development communities who can ignore (or comment-out / delete) the bit about ~/.R/Makevars and deploy the compiler differently. Updating the softlink, as apt-preferences does, is one way, and is done in many GitHub Actions recipes. As the script only needs wget, a basic Debian container should work, possibly after adding wget first. For R users r-base hits a decent sweet spot.
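As a minimal usage sketch (the package choice here is purely illustrative), installing any package containing C++ code from source will route compilation through the clang-22 toolchain selected via ~/.R/Makevars:

## illustrative smoke test: build a source package with the new toolchain
Rscript -e 'install.packages("RcppArmadillo", repos = "https://cloud.r-project.org", type = "source")'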

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

03 October, 2025 10:09PM

Jonathan Dowland

Tron: Ares (soundtrack)

photo of Tron: Ares vinyl record on my turntable, next to packaging

There's a new Nine Inch Nails album! That doesn't happen very often. There's a new Trent Reznor & Atticus Ross soundtrack! That happens all the time! For the first time, they're the same thing.

The new one, Tron: Ares, is very deliberately presented as a Nine Inch Nails album, and not a TR&AR soundtrack. But is it neither fish nor fowl? 24 tracks, four with lyrics. Singing is not unheard of on TR&AR soundtracks, but it's rare (A Minute to Breathe from the excellent Before the Flood is another). Instrumentals are not rare on NIN albums, either, but this ratio is very unusual, and has disappointed some fans who were hoping for a more traditional NIN album.

What does it mean to label something a NIN album anyway? For me, the lines are now further blurred. One thing for sure is it means a lot of media attention, and this release, as well as the film it's promoting, are all over the media at the moment. Posters, trailers, promotional tie-in items, Disney logos everywhere. The album is hitched to the Disney marketing and promotion machine. It's a bit weird seeing the NIN logo all over the place advertising the movie.

On to the music. I love TR&AR soundtracks, and some of my favourite NIN tracks are instrumentals. Despite that, three highlights for me are songs: As Alive As You Need Me To Be, I Know You Can Feel It and closer Shadow Over Me. The other stand-out is Building Better Worlds, a short instrumental and clear nod to Wendy Carlos.

My main complaint here applies to some of the more recent soundtracks as well: the tracks are too short. They're scored to scenes in the movie, which makes a lot of sense in that presentation, but less so for independent listening. It's not a problem that their earlier, lauded soundtracks suffered (The Social Network, Before the Flood, Bird Box Extended). Perhaps a future remix album will address that.

03 October, 2025 10:01AM

Guido Günther

Free Software Activities September 2025

Another short status update of what happened on my side last month. Nothing stands out too much; I enjoyed doing the OSK changes the most as that helped to improve the typing experience further. Also doing a small bit of kernel work again was fun (still need to figure out the 6mq's touch controller responsiveness though).

See below for details on the above and more:

phosh

  • Add backlight brightness handling (MR)
  • Handle brightness keybinding (MR)
  • Use stevia (MR)
  • Test suite improvements (MR)
  • Simplify keybinding generation (MR)
  • Allow g-c-c to work against nested phosh (MR)
  • Hide demo plugins (MR)

phoc

  • Unbreak type to search (MR)
  • Update to wlroots 0.19.1 (MR)
  • Release 0.50~rc1
  • Catch up with wlroots git (MR)
  • Damage tracking and render simplifications (MR)

phosh-mobile-settings

  • Allow to hide plugins (MR)
  • Release 0.50~rc1
  • Hide demo plugins by default (MR)
  • Sink floating refs properly (MR)
  • Simplify includes (MR)

stevia (formerly phosh-osk-stub)

  • Fix meson warning (MR)
  • Update URLs (MR)
  • Make backspace more clever (MR)
  • presage: Better handle predictions vs completions: (MR)

xdg-desktop-portal-phosh

  • Update to pfs 0.0.5 (MR)
  • Release 0.50~rc1
  • Allow to disable Rust portal (MR)
  • Use release ashpd (MR)

pfs

  • Release 0.0.5 (MR)

libphosh-rs

  • Modernize and release 0.0.7 (MR)

Phrog

  • Bump libphosh dependency to 0.0.7 (MR)

feedbackd

  • Release 0.8.5 (MR)
  • Publish API docs (MR)

feedbackd-device-themes

  • Release 0.8.6 (MR)

Debian

  • 0.46 backports for trixie: (MR) - testers needed!
  • cellbroadcastd: Upload to sid (MR)
  • meta-phosh: Update deps (MR)
  • meta-phosh: Adjust deps for 0.49 (MR)
  • phosh-tour: Upload to unstable (MR)
  • xdg-desktop-portal-phosh: Upload 0.50~rc1
  • xdg-desktop-portal-phosh: Enable Rust based portal (MR)
  • wlroots: Upload 0.19.1
  • rust-libphosh: Update to 0.0.7
  • Release Phosh 0.50~rc1
  • Release phosh-mobile-settings 0.50~rc1
  • Release feedbackd 0.8.5
  • Release feedbackd-device-themes 0.8.6
  • Release phoc 0.50~rc1

gnome-settings-daemon

  • Fix brightness values (MR)

git-buildpackage

  • Make gbp import-orig --uscan useful again when passing in a version (MR)
  • Make dsc component tests fetch from salsa (MR)

govarnam

  • Fix gcc-15 build (MR)

Sessions

  • Fix missing application icon (MR)

twenty-twenty-hugo

  • Avoid 404 on each page load (MR)
  • Fingerprint custom CSS (MR)

tuwunnel

  • Fix alias in systemd unit (MR)
  • Document support items (MR)

Linux

  • Add backlight support for Shift6MQ (v1, v2, v3)

mutter

  • udev: Don't leak parent device (MR)

Phosh debs

  • Don't require gsd-49 yet (MR)

phosh-site

  • Fix links (MR)
  • Update several entries (MR)
  • Mention nonprofit (MR)
  • Automatic deploy (MR)

Reviews

This is not code by me but reviews I did on other peoples code. The list is (as usual) slightly incomplete. Thanks for the contributions!

  • p-m-s: Tweaks parsing (MR)
  • p-m-s: Prefer char over gchar (MR)
  • p-m-s/tweaks: Add .XResources backend (MR)
  • p-m-s/tweaks: Add Symlink backend (MR)
  • p-m-s/tweaks: Cleanup includes (MR)
  • p-m-s/tweaks: Cleanup self ref (MR)
  • p-m-s/tweaks: Menu toggle (MR)
  • p-m-s/tweaks: i18n support (MR)
  • p-m-s/tweaks: Use toasts for errors (MR)
  • p-m-s/run: Add gdb invocation (MR)
  • p-m-s: Appinfo tweaks (MR)
  • p-m-s: Hide Config tweaks menu entry when not needed (MR)
  • m-b-p-i provider updates: (MR)
  • m-b-p-i emergency number updates: (MR, MR, MR)
  • pfs: Switch to gtk-rs 0.10 (MR)
  • x-d-p-p: Switch to gtk-rs 0.10 (MR)
  • x-d-p-p: Port file chooser portal to Rust (MR)
  • phosh: custom lockscreen message (MR)
  • libcmatrix: Bump endpoint versions (MR)
  • phosh-recipes: Add gnome-software-plugin-flatpak (MR)

Help Development

If you want to support my work see donations.

Comments?

Join the Fediverse thread

03 October, 2025 08:08AM

October 02, 2025

Dirk Eddelbuettel

tinythemes 0.0.4 at CRAN: Micro Maintenance

tinythemes demo

A ‘tiniest of tiny violins’ micro maintenance release 0.0.4 of our tinythemes arrived on CRAN today. tinythemes provides the theme_ipsum_rc() function from hrbrthemes by Bob Rudis in a zero (added) dependency way. A simple example (also available as a demo inside the package) contrasts the default style (on the left) with the one added by this package (on the right):
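A minimal sketch of that contrast (the data and mapping here are illustrative, not the package demo itself):

library(ggplot2)
library(tinythemes)
p <- ggplot(mtcars, aes(wt, mpg)) + geom_point()
p                       ## default ggplot2 style (left)
p + theme_ipsum_rc()    ## style provided by tinythemes (right)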

This version adjusts to the fact that hrbrthemes is no longer on CRAN so the help page cannot link to its documentation. No other changes were made.

The set of changes since the last release follows.

Changes in tinythemes version 0.0.4 (2025-10-02)

  • No longer \link{} to now-archived hrbrthemes

Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the repo where comments and suggestions are welcome.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

02 October, 2025 04:17PM

John Goerzen

A Twisty Maze of Ill-Behaved Bots

Like many others, I have recently been seeing bot traffic cause significant issues for my hosted server. I've been noticing a dramatic increase in bots that do not respect robots.txt, especially the crawl-delay I have set there. Not only that, but many of them are sending user-agent strings that quite precisely match what desktop browsers send. That is, they don't identify themselves.

They posed a particular problem on two sites: my blog, and the lists.complete.org archives.

The list archives are a completely static site, but one with many pages, so the ill-behaved bots absolutely hammer it by following links.

My blog runs WordPress. It has fewer pages, but because it uses PHP, it doesn't need as many hits to start to bog down. Also, there is a Mastodon thundering herd problem, and since I participate on Mastodon, this hits my server.

The solution was one of layers.

I had already added a crawl-delay line to robots.txt. It helped a bit, but many bots these days aren’t well-behaved. Next, I added WP Super Cache to my WordPress installation. I also enabled APCu in PHP and installed APCu Manager. Again, each step helped. Again, not quite enough.

Finally, I added Anubis. Installing it (especially if using the Docker container) was under-documented, but I figured it out. By default, it is designed to block AI bots and try to challenge everything with “Mozilla” in its user-agent (which is most things) with a Javascript challenge.

That’s not quite what I want. If a bot is well-behaved, AI or otherwise, it will respect my robots.txt and I can more precisely control it there. Also, I intentionally support non-Javascript browsers on many of the sites I host, so I wanted to be judicious. Eventually I configured Anubis to only challenge things that present a user-agent that looks fully like a real browser. In other words, real browsers should pass right through, and bad bots pretending to be real browsers will fail.

That was quite effective. It reduced load further to the point where things are ordinarily fairly snappy.

I had previously been using mod_security to block some bots, but it seemed to be getting in the way of the Fediverse at times. When I disabled it, I observed another increase in speed. Anubis was likely going to get rid of those annoying bots itself anyhow.

As a final step, I migrated to a faster hosting option. This post will show me how well it survives the Mastodon thundering herd!

Update: Yes, it handled it quite nicely now.

02 October, 2025 03:01AM by John Goerzen

October 01, 2025

Dirk Eddelbuettel

#052: Running r-ci with Coverage

Welcome to post 52 in the R4 series.

Following up on post #51 yesterday and its stated intent of posting some more here… The r-ci setup (which was introduced in post #32 and updated in post #45) offers portable continuous integration which can take advantage of different backends: GitHub Actions, Azure Pipelines, GitLab, Bitbucket, … and possibly more, as it only requires a basic Ubuntu shell, after which it customizes itself and runs via shell script. Portably. Now, most of us, I suspect, still use it with GitHub Actions, but it is reassuring to know that one can take it elsewhere should the need or desire arise.

One thing many repos did, and which stopped working reliably, is coverage analysis. This is made easy by the covr package, and made ‘fast, easy, reliable’ (as we quip) thanks to r2u. But transmission to codecov stopped working a while back, so I had mostly commented it out in my repos, rendering the reports more and more stale. Which is not ideal.

A few weeks ago I gave this another look, and it turns out that codecov now requires an API token to upload. One can generate it in the user -> settings menu on their website under the ‘access’ tab. It in turn needs to be stored in each repo wanting to upload. For GitHub, this is under settings -> secrets and variables -> actions as a ‘repository secret’. I suggest using CODECOV_TOKEN for its name.

After that, the formerly three-line block in the yaml file can reference it as a secret, as in the following snippet (now five lines) taken from one of my ci.yaml files:

      - name: Coverage
        if: ${{ matrix.os == 'ubuntu-latest' }}
        run: ./run.sh coverage
        env:
          CODECOV_TOKEN: ${{ secrets.CODECOV_TOKEN }}

It takes the secret we stored in the repository settings, references it via the secrets. prefix, and assigns it to the environment variable CODECOV_TOKEN. After this, reports flow again, as one can see on repositories where I re-enabled this, for example here for RcppArmadillo.
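For reference, the coverage step amounts to something like the following covr call (a sketch; the exact wiring inside run.sh may differ), with codecov() picking up CODECOV_TOKEN from the environment:

## run from the package's top-level directory in the CI job
covr::codecov(quiet = FALSE)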

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

01 October, 2025 08:49PM

Ben Hutchings

FOSS activity in September 2025

Last month I attended and spoke at Kangrejos, for which I will post a separate report later. Besides that, here’s the usual categorised list of work:

01 October, 2025 03:24PM by Ben Hutchings

Sean Whitton

Southern Biscuits with British ingredients

I miss the US more and more, and have recently been trying to perfect Southern Biscuits using British ingredients. It took me eight or nine tries before I was consistently getting good results. Here is my recipe.

Ingredients

  • 190g plain flour
  • 60g strong white bread flour
  • 4 tsp baking powder
  • ¼ tsp bicarbonate of soda
  • 1 tsp cream of tartar (optional)
  • 1 tsp salt
  • 100g unsalted butter
  • 180ml buttermilk, chilled
  • extra buttermilk for brushing

Method

  1. Slice and then chill the butter in the freezer for at least fifteen minutes.
  2. Preheat oven to 220°C with the fan turned off.
  3. Twice sieve together the flours, leaveners and salt. Some salt may not go through the sieve; just tip it back into the bowl.
  4. Cut cold butter slices into the flour with a pastry blender until the mixture resembles coarse crumbs: some small lumps of fat remaining is desirable. In particular, the fine crumbs you are looking for when making British scones are not wanted here. Rubbing in with fingertips just won’t do; biscuits demand keeping things cold even more than shortcrust pastry does.
  5. Make a well in the centre, pour in the buttermilk, and stir with a metal spoon until the dough comes together and pulls away from the sides of the bowl. I’ve found that so long as the ingredients are cold you don’t have to be too gentle at this stage and can make sure all the crumbs are mixed in.
  6. Flour your hands, turn dough onto a floured work surface, and pat together into a rectangle.
  7. Fold the dough in half, then gather any crumbs and pat it back into the same shape. Turn ninety degrees and do the same again, until you have completed a total of eight folds, two in each cardinal direction. The dough should now be a little springy.
  8. Roll to about ½ inch thick.
  9. Cut out biscuits, or slice the dough into equal pieces. If using a round cutter, do not twist it, as that seals the edges of the biscuits and so spoils the layering. An advantage of just slicing up the whole piece of dough is that then you don’t have to re-roll, which latter also spoils the layering.
  10. Transfer to a baking sheet, press an indent into the top of each biscuit with your thumb (helps them rise straight), brush with buttermilk.
  11. Bake until flaky and golden brown: about fifteen minutes.

Gravy

It turns out that the “pepper gravy” that one commonly has with biscuits is just a white/béchamel sauce made with lots of black pepper. I haven’t got a recipe I really like for this yet. Better is a “sausage gravy”; again this has a white sauce as its base, I believe. I have a vegetarian recipe for this to try at some point.

Notes

  • I’ve had more success with Dale Farm’s buttermilk than Sainsbury’s own.
  • These biscuits do come out fluffy but not so flaky. For that you can try using lard instead of butter, if you’re not vegetarian (vegetable shortening is hard to find here).
  • If you don’t have a pastry blender and don’t want to buy one you can try not slicing the butter and instead coarsely grating it into the flour out of the freezer.
  • An alternative to folding is cutting and piling the layers; I haven’t tried this yet.
  • Southern culture calls for biscuits to be made the size of cat’s heads.
  • Bleached flour is apparently usual in the South, but is illegal(!) here. This shouldn’t affect texture or taste but may make them look different.
  • American all-purpose flour has more gluten than our plain flour, hence the mix of plain and strong white, in a ratio of 3:1.
  • Baking powder in the US is usually double-acting but ours is always single-acting, so we need double quantities of that.

01 October, 2025 10:55AM

Birger Schacht

Status update, September 2025

Regarding Debian packaging this was a rather quiet month. I uploaded version 1.24.0-1 of foot and version 2.8.0-1 of git-quick-stats. I took the opportunity and started migrating my packages to the new version 5 watch file format, which I think is much more readable than the previous format.

I also uploaded version 0.1.1-1 of libscfg to NEW. libscfg is a C implementation of the scfg configuration file format and it is a dependency of recent versions of kanshi. kanshi is a tool similar to autorandr which allows you to define output profiles; kanshi then switches to the correct output profile on hotplug events. Once libscfg is in unstable I can finally update kanshi to the latest version.

A lot of time this month went into finalizing a redesign of the output rendering of carl. carl is a small Rust program I wrote that provides a calendar view similar to cal, but it comes with colors and ical file integration. That means that you can not only display a simple calendar, but also colorize/highlight dates based on various attributes or based on events on that day. In the initial versions of carl the output rendering was simply hardcoded into the app.

Screenshot of carl

This was a bit cumbersome to maintain and not configurable for users. I am using templating languages on a daily basis, so I decided I would reimplement the output generation of carl to use templates. I chose the minijinja Rust library which is “based on the syntax and behavior of the Jinja2 template engine for Python”. There are others out there, like tera, but minijinja seems to be more active in development currently. I worked on this implementation on and off for the last year and finally had the time to finish it up and write some additional tests for the outputs. It is easier to maintain templates than Rust code that uses write!() to format the output. I also implemented a configuration option for users to override the templates.

In addition to the output refactoring I also fixed a couple of bugs and finally released v0.4.0 of carl.

In my dayjob I released version 0.53 of apis-core-rdf which contains the place lookup field which I implemented in August. A couple of weeks later we released version 0.54, which comes with a middleware that passes on messages from the Django messages framework via a response header to HTMX, to trigger message popups. This implementation is based on the blog post Using the Django messages framework with HTMX. Version 0.55 was the last release in September. It contained preparations for refactoring the import logic as well as a couple of UX improvements.

01 October, 2025 05:28AM

September 30, 2025

Junichi Uekawa

Start of fourth quarter of the year.

Start of fourth quarter of the year. The end of year is feeling close!

30 September, 2025 11:25PM by Junichi Uekawa

Jonathan McDowell

Local Voice Assistant Step 5: Remote Satellite

The last (software) piece of sorting out a local voice assistant is tying the openWakeWord piece to a local microphone + speaker, and thus back to Home Assistant. For that we use wyoming-satellite.

I’ve packaged that up - https://salsa.debian.org/noodles/wyoming-satellite - and then to run I do something like:

$ wyoming-satellite --name 'Living Room Satellite' \
    --uri 'tcp://[::]:10700' \
    --mic-command 'arecord -r 16000 -c 1 -f S16_LE -t raw -D plughw:CARD=CameraB409241,DEV=0' \
    --snd-command 'aplay -D plughw:CARD=UACDemoV10,DEV=0 -r 22050 -c 1 -f S16_LE -t raw' \
    --wake-uri tcp://[::1]:10400/ \
    --debug

That starts us listening for connections from Home Assistant on port 10700, uses the openWakeWord instance on localhost port 10400, uses aplay/arecord to talk to the local microphone and speaker, and gives us some debug output so we can see what’s going on.

And it turns out we need the debug. This setup is a bit too flaky for it to have ended up in regular use in our household. I’ve had some problems with reliable audio setup; you’ll note the Python is calling out to other tooling to grab audio, which feels a bit clunky to me and I don’t think is the actual problem, but the main audio for this host is hooked up to the TV (it’s a media box), so the setup for the voice assistant needs to be entirely separate. That means not plugging into Pipewire or similar, and instead giving direct access to wyoming-satellite. And sometimes having to deal with how to make the mixer happy + non-muted manually.

I’ve also had some issues with the USB microphone + speaker; I suspect a powered USB hub would help, and that’s on my list to try out.

When it does work I have sometimes found it necessary to speak more slowly, or enunciate my words more clearly. That’s probably something I could improve by switching from the base.en to small.en whisper.cpp model, but I’m waiting until I sort out the audio hardware issue before poking more.

Finally, the wake word detection is a little bit sensitive sometimes, as I mentioned in the previous post. To be honest I think it’s possible to deal with that, if I got the rest of the pieces working smoothly.

This has ended up sounding like a more negative post than I meant it to. Part of the issue in reaching a resolution is finding enough free time to poke at things (especially as it involves taking over the living room and saying "Hey Jarvis" a lot); part of it is no doubt my desire to actually hook up the pieces myself and understand what's going on. Stay tuned and see if I ever manage to resolve it all!

30 September, 2025 06:23PM

Dirk Eddelbuettel

#051: A Neat Little Rcpp Trick

Welcome to post 51 in the R4 series.

A while back I realized I should really just post a little more, as not all posts have to be as deep and introspective as, for example, the recent-ish ‘two cultures’ post #49.

So this post is a neat little trick I realized somewhat belatedly. The context is the ongoing transition from (Rcpp)Armadillo 14.6.3 and earlier to (Rcpp)Armadillo 15.0.2 or later. (I need to write a bit more about that, but that may require a bit more time.) (And there are a total of seven (!!) issue tickets managing the transition, with issue #475 being the main ‘parent’ issue; please see there for more details.)

In brief, the newer and current Armadillo no longer allows C++11 (which also means it no longer allows suppression of deprecation warnings …). It so happens that around a decade ago packages were actively encouraged to move towards C++11, so many either set an explicit SystemRequirements: for it, or set CXX_STD=CXX11 in src/Makevars{.win}. CRAN has for some time now issued NOTEs asking for this to be removed, and more recently enforced this with actual deadlines. In RcppArmadillo I opted to accommodate old(er) packages (using this by-now anti-pattern) and flip to Armadillo 14.6.3 during a transition period. That is what the package does now: it gives you either Armadillo 14.6.3 in case C++11 was detected (or this legacy version was actively selected via a compile-time #define), or it uses Armadillo 15.0.2 or later.

So this means we can have either one of two versions, and may want to know which one we have. Armadillo carries its own version macros, as many libraries or projects do (R of course included). Many, many years ago (git blame points to sixteen and twelve years for the addition and a revision) we added the following helper function to the package (full source here; shown without the full roxygen2 comment header):

// [[Rcpp::export]]
Rcpp::IntegerVector armadillo_version(bool single) {

    // These are declared as constexpr in Armadillo which actually does not define them
    // They are also defined as macros in arma_version.hpp so we just use that
    const unsigned int major = ARMA_VERSION_MAJOR;
    const unsigned int minor = ARMA_VERSION_MINOR;
    const unsigned int patch = ARMA_VERSION_PATCH;

    if (single) {
        return Rcpp::wrap(10000 * major + 100 * minor + patch) ;
    } else {
        return Rcpp::IntegerVector::create(Rcpp::Named("major") = major,
                                           Rcpp::Named("minor") = minor,
                                           Rcpp::Named("patch") = patch);
    }
}

It either returns a (named) vector in the standard ‘major’, ‘minor’, ‘patch’ form of the common package versioning pattern, or a single integer which can be used more easily in C(++) via preprocessor macros. And this being an Rcpp-using package, we can of course access either easily from R:

> library(RcppArmadillo)
> armadillo_version(FALSE)
major minor patch 
   15     0     2 
> armadillo_version(TRUE)
[1] 150002
>

Perfectly valid and truthful. But … cumbersome at the R level. So when preparing for these (Rcpp)Armadillo changes in one of my packages, I realized I could alter such a function and set the S3 class to package_version. (Full version of one such variant here)

// [[Rcpp::export]]
Rcpp::List armadilloVersion() {
    // create a vector of major, minor, patch
    auto ver = Rcpp::IntegerVector::create(ARMA_VERSION_MAJOR, ARMA_VERSION_MINOR, ARMA_VERSION_PATCH);
    // and place it in a list (as e.g. packageVersion() in R returns)
    auto lst = Rcpp::List::create(ver);
    // and class it as 'package_version' accessing print() etc methods
    lst.attr("class") = Rcpp::CharacterVector::create("package_version", "numeric_version");
    return lst;
}

Three statements, one each to

  • create the integer vector of known dimensions and compile-time known value
  • embed it in a list (as that is what the R type expects)
  • set the S3 class, which is easy because Rcpp accesses attributes and creates character vectors

and return the value. And now in R we can operate more easily on this (using three colons as I didn't export it from this package):

> RcppDE:::armadilloVersion()
[1] ‘15.0.2’
> RcppDE:::armadilloVersion() >= "15.0.0"
[1] TRUE
>

An object of class package_version inheriting from numeric_version can directly compare against a (human- but not normally machine-readable) string like “15.0.0” because the simple S3 class defines appropriate operators, as well as print() / format() methods as the first expression shows. It is these little things that make working with R so smooth, and we can easily (three statements !!) do so from Rcpp-based packages too.

The underlying object really is merely a list containing a vector:

> unclass(RcppDE:::armadilloVersion())
[[1]]
[1] 15  0  2

> 

but the S3 “glue” around it makes it behave nicely.

So next time you are working with an object you plan to return to R, consider classing it to take advantage of existing infrastructure (if it exists, of course). It's easy enough to do, and may smooth the experience on the R side.
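For comparison, the same idea can be sketched purely in R (the values here are illustrative); the C++ function above builds the equivalent structure directly:

v <- structure(list(c(15L, 0L, 2L)),
               class = c("package_version", "numeric_version"))
v                  ## prints as ‘15.0.2’
v >= "15.0.0"      ## TRUE, via the S3 comparison operators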

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

30 September, 2025 03:14PM

Antoine Beaupré

Proper services

During 2025-03-21-another-home-outage, I reflected upon what a properly run service looks like and blurted out what turned out to be something important I want to outline more. So here it is, again, on its own, for my own future reference.

Typically, I tend to think of a properly functioning service as having four things:

  1. backups
  2. documentation
  3. monitoring
  4. automation
  5. high availability (HA)

Yes, I miscounted. This is why you need high availability.

A service doesn't properly exist if it doesn't at least have the first 3 of those. It will be harder to maintain without automation, and inevitably suffer prolonged outages without HA.

The five components of a proper service

Backups

Duh. If data is maliciously or accidentally destroyed, you need a copy somewhere. Preferably in a way that malicious Joe can't get to.

This is harder than you think.

Documentation

I have an entire template for this. Essentially, it boils down to using https://diataxis.fr/ and this "audit" guide. For me, the most important parts are:

  • disaster recovery (includes backups, probably)
  • playbook
  • install/upgrade procedures (see automation)

You probably know this is hard, and this is why you're not doing it. Do it anyways, you'll think it sucks, it will grow out of sync with reality, but you'll be really grateful for whatever scraps you wrote when you're in trouble.

Any docs, in other words, is better than no docs, but are no excuse for doing the work correctly.

Monitoring

If you don't have monitoring, you'll find out it failed too late, and you won't know when it recovers. Consider high availability, work hard to reduce noise, and don't have machines wake people up; that's literally torture and is against the Geneva convention.

Consider predictive algorithms to prevent failures, like "add storage within 2 weeks, before this disk fills up".

This is also harder than you think.

Automation

Make it easy to redeploy the service elsewhere.

Yes, I know you have backups. That is not enough: that typically restores data and while it can also include configuration, you're going to need to change things when you restore, which is what automation (or call it "configuration management" if you will) will do for you anyways.

This also means you can do unit tests on your configuration, otherwise you're building legacy.

This is probably as hard as you think.

High availability

Make it not fail when one part goes down.

Eliminate single points of failures.

This is easier than you think, except for storage and DNS ("naming things" not "HA DNS", that is easy), which, I guess, means it's harder than you think too.

Assessment

In the above 5 items, I currently check two in my lab:

  1. backups
  2. documentation

And barely: I'm not happy about the offsite backups, and my documentation is much better at work than at home (and even there, I have a 15-year backlog to catch up on).

I barely have monitoring: Prometheus is scraping parts of the infra, but I don't have any sort of alerting -- by which I don't mean "electrocute myself when something goes wrong", I mean "there's a set of thresholds and conditions that define an outage and I can look at it".

Automation is wildly incomplete. My home server is a random collection of old experiments and technologies, ranging from Apache with Perl and CGI scripts to Docker containers running Golang applications. Most of it is not Puppetized (but the ratio is growing). Puppet itself introduces a huge attack vector with kind of catastrophic lateral movement if the Puppet server gets compromised.

And, fundamentally, I am not sure I can provide high availability in the lab. I'm just this one guy running my home network, and I'm growing older. I'm thinking more about winding things down than building things now, and that's just really sad, because I feel we're losing (well that escalated quickly).

Side note about Tor

The above applies to my personal home lab, not work!

At work, of course, it's another (much better) story:

  1. all services have backups
  2. lots of services are well documented, but not all
  3. most services have at least basic monitoring
  4. most services are Puppetized, but not crucial parts (DNS, LDAP, Puppet itself), and there are important chunks of legacy coupling between various services that make the whole system brittle
  5. most websites, DNS and large parts of email are highly available, but key services like the Forum, GitLab and similar applications are not HA, although most services run under replicated VMs that can trivially survive a total, single-node hardware failure (through Ganeti and DRBD)

30 September, 2025 03:00PM

Minor outage at Teksavvy business

This morning, the internet was down at home. The last time I had such an issue was in February 2023, when my provider was Oricom. Now I'm with a business service at Teksavvy Internet (TSI), for which I pay $100 per month for a 250/50 Mbps business package with a static IP address, on which I run, well, everything: email services, this website, etc.

Mitigation

Email

The main problem when the service goes down like this for prolonged outages is email. Mail is pretty resilient to failures like this but after some delay (which varies according to the other end), mail starts to drop. I am actually not sure what the various settings are among different providers, but I would assume mail is typically kept for about 24h, so that's our mark.

Last time, I setup VMs at Linode and Digital Ocean to deal better with this. I have actually kept those VMs running as DNS servers until now, so that part is already done.

I had fantasized about Puppetizing the mail server configuration so that I could quickly spin up mail exchangers on those machines. But now I am realizing that my Puppet server is one of the service that's down, so this would not work, at least not unless the manifests can be applied without a Puppet server (say with puppet apply).

Thankfully, my colleague groente did amazing work to refactor our Postfix configuration in Puppet at Tor, and that gave me the motivation to reproduce the setup in the lab. So I have finally Puppetized part of my mail setup at home. That used to be hand-crafted experimental stuff documented in a couple of pages in this wiki, but is now being deployed by Puppet.

It's not complete yet: spam filtering (including DKIM checks and graylisting) is not implemented yet, but that's the next step, presumably to do during the next outage. The setup should be deployable with puppet apply, however, and I have refined that mechanism a little bit, with the run script.

Heck, it's not even deployed yet. But the hard part / grunt work is done.

Other

The outage was "short" enough (5 hours) that I didn't take time to deploy the other mitigations I had deployed in the previous incident.

But I'm starting to seriously consider deploying a web (and caching) reverse proxy so that I endure such problems more gracefully.

Side note on proper services

Well that was dumb. I wrote this clever piece on what's a properly ran service and originally shoved it deep inside this service note instead of making a blog article.

That is now fixed, see 2025-09-30-proper-services instead.

Resolution

In the end, I didn't need any mitigation and the problem fixed itself. I did do quite a bit of cleanup so that feels somewhat good, although I despaired quite a bit at the amount of technical debt I've accumulated in the lab.

Timeline

Times are in UTC-4.

  • 6:52: IRC bouncer goes offline
  • 9:20: called TSI support, waited on the line 15 minutes then was told I'd get a call back
  • 9:54: outage apparently detected by TSI
  • 11:00: no response, tried calling back support again
  • 11:10: confirmed bonding router outage, no official ETA but "today", source of the 9:54 timestamp above
  • 12:08: TPA monitoring notices service restored
  • 12:34: call back from TSI; service restored, problem was with the "bonder" configuration on their end, which was "fighting between Montréal and Toronto"

30 September, 2025 02:59PM

Russ Allbery

Review: Deep Black

Review: Deep Black, by Miles Cameron

Series: Arcana Imperii #2
Publisher: Gollancz
Copyright: 2024
ISBN: 1-3996-1506-8
Format: Kindle
Pages: 509

Deep Black is a far-future science fiction novel and the direct sequel to Artifact Space. You do not want to start here. I regretted not reading the novels closer together and had to refresh my memory of what happened in the first book.

The shorter fiction in Beyond the Fringe takes place between the two series novels and leads into some of the events in this book, although reading it is optional.

Artifact Space left Marca Nbaro at the farthest point of the voyage of the Greatship Athens, an unexpected heroine and now well-integrated into the crew. On a merchant ship, however, there's always more work to be done after a heroic performance. Deep Black opens with that work: repairs from the events of the first book, the never-ending litany of tasks required to keep the ship running smoothly, and of course the trade with aliens that drew them so far out into the Deep Black.

We knew early in the first book that this wouldn't be the simple, if long, trading voyage that most of the crew of the Athens was expecting, but now they have to worry about an unsettling second group of aliens on top of a potential major war between human factions. They don't yet have the cargo they came for, they have to reconstruct their trading post, and they're a very long way from home. Marca also knows, at this point in the story, that this voyage had additional goals from the start. She will slowly gain a more complete picture of those goals during this novel.

Artifact Space was built around one of the most satisfying plots in military science fiction (at least to me): a protagonist who benefits immensely from the leveling effect and institutional inclusiveness of the military slowly discovering that, when working at its best, the military can be a true meritocracy. (The merchant marine of the Athens is not military, precisely, since it's modeled on the trading ships of Venice, but it's close enough for the purposes of this plot.) That's not a plot that lasts into a sequel, though, so Cameron had to find a new spine for the second half of the story. He chose first contact (of a sort) and space battle.

The space battle parts are fine. I read a ton of children's World War II military fiction when I was a boy, and I always preferred the naval battles to the land battles. This part of Deep Black reminded me of those naval battles, particularly a book whose title escapes me about the Arctic convoys to the Soviet Union. I'm more interested in character than military adventure these days, but every once in a while I enjoy reading about a good space battle. This was not an exemplary specimen of the genre, but it delivered on all the required elements.

The first contact part was more original, in part because Cameron chose an interesting medium ground between total incomprehensibility and universal translators. He stuck with the frustrations of communication for considerably longer than most SF authors are willing to write, and it worked for me. This is the first book I've read in a while where superficial alien fluency with the mere words of a human language masks continuing profound mutual incomprehension. The communication difficulties are neither malicious nor a setup for catastrophic misunderstanding, but an intrinsic part of learning about a truly alien species. I liked this, even though it makes for slower and more frustrating progress. It felt more believable than a lot of first contact, and it forced the characters to take risks and act on hunches and then live with the consequences.

One of the other things that Cameron does well is maintain the steady rhythm of life on a working ship as a background anchor to the story. I've read a lot of science fiction that shows the day-to-day routine only until something more interesting and plot-focused starts happening and then seems to forget about it entirely. Not here. Marca goes through intense and adrenaline-filled moments requiring risk and fast reactions, and then has to handle promotion write-ups, routine watches, and studying for advancement. Cameron knows that real battles involve long periods of stressful waiting and incorporates them into the book without making them too boring, which requires a lot of writing skill.

I prefer the emotional magic of finding a place where one belongs, so I was not as taken with Deep Black as I was with Artifact Space, but that's the inevitable result of plot progression and not really a problem with this book. Marca is absurdly central to the story in ways that have a whiff of "chosen one" dynamics, but if one can suspend one's disbelief about that, the rest of the book is solid. This is, fundamentally, a book about large space battles, so save it when you're in the mood for that sort of story, but it was a satisfying continuation of the series. I will definitely keep reading.

Recommended if you enjoyed Artifact Space. If you didn't, Deep Black isn't going to change your mind.

Followed by Whalesong, which is not yet released (and is currently in some sort of limbo for pre-orders in the US, which I hope will clear up).

Rating: 7 out of 10

30 September, 2025 04:12AM

September 29, 2025

Thomas Lange

Updates on FAIme service: Linux Mint 22.2 and trixie backports available

The FAIme service [1] now offers to build customized installation images for the Xfce edition of Linux Mint 22.2 'Zara'.

For Debian 13 installations, you can select the kernel from backports for the trixie release, which is currently version 6.16. This will support newer hardware.

29 September, 2025 11:15AM

Russ Allbery

Review: The Incandescent

Review: The Incandescent, by Emily Tesh

Publisher: Tor
Copyright: 2025
ISBN: 1-250-83502-X
Format: Kindle
Pages: 417

The Incandescent is a stand-alone magical boarding school fantasy.

Your students forgot you. It was natural for them to forget you. You were a brief cameo in their lives, a walk-on character from the prologue. For every sentimental my teacher changed my life story you heard, there were dozens of my teacher made me moderately bored a few times a week and then I got through the year and moved on with my life and never thought about them again.

They forgot you. But you did not forget them.

Doctor Saffy Walden is Director of Magic at Chetwood, an elite boarding school for prospective British magicians. She has a collection of impressive degrees in academic magic, a specialization in demonic invocation, and a history of vague but lucrative government job offers that go with that specialty. She turned them down to be a teacher, and although she's now in a mostly administrative position, she's a good teacher, with the usual crop of promising, lazy, irritating, and nervous students.

As the story opens, Walden's primary problem is Nikki Conway. Or, rather, Walden's primary problem is protecting Nikki Conway from the Marshals, and the infuriating Laura Kenning in particular.

When Nikki was seven, she summoned a demon who killed her entire family and left her a ward of the school. To Laura Kenning, that makes her a risk who should ideally be kept far away from invocation. To Walden, that makes Nikki a prodigious natural talent who is developing into a brilliant student and who needs careful, professional training before she's tempted into trying to learn on her own.

Most novels with this setup would become Nikki's story. This one does not. The Incandescent is Walden's story.

There have been a lot of young-adult magical boarding school novels since Harry Potter became a mass phenomenon, but most of them focus on the students and the inevitable coming-of-age story. This is a story about the teachers: the paperwork, the faculty meetings, the funding challenges, the students who repeat in endless variations, and the frustrations and joys of attempting to grab the interest of a young mind. It's also about the temptation of higher-paying, higher-status, and less ethical work, which however firmly dismissed still nibbles around the edges.

Even if you didn't know Emily Tesh is herself a teacher, you would guess that before you get far into this novel. There is a vividness and a depth of characterization that comes from being deeply immersed in the nuance and tedium of the life that your characters are living. Walden's exasperated fondness for her students was the emotional backbone of this book for me. She likes teenagers without idealizing the process of being a teenager, which I think is harder to pull off in a novel than it sounds.

It was hard to quantify the difference between a merely very intelligent student and a brilliant one. It didn't show up in a list of exam results. Sometimes, in fact, brilliance could be a disadvantage — when all you needed to do was neatly jump the hoop of an examiner's grading rubric without ever asking why. It was the teachers who knew, the teachers who could feel the difference. A few times in your career, you would have the privilege of teaching someone truly remarkable; someone who was hard work to teach because they made you work harder, who asked you questions that had never occurred to you before, who stretched you to the very edge of your own abilities. If you were lucky — as Walden, this time, had been lucky — your remarkable student's chief interest was in your discipline: and then you could have the extraordinary, humbling experience of teaching a child whom you knew would one day totally surpass you.

I also loved the world-building, and I say this as someone who is generally not a fan of demons. The demons themselves are a bit of a disappointment and mostly hew to one of the stock demon conventions, but the rest of the magic system is deep enough to have practitioners who approach it from different angles and meaty enough to have some satisfying layered complexity. This is magic, not magical science, so don't expect a fully fleshed-out set of laws, but the magical system felt substantial and satisfying to me.

Tesh's first novel, Some Desperate Glory, was by far my favorite science fiction novel of 2023. This is a much different book, which says good things about Tesh's range and the potential of her work yet to come: adult rather than YA, fantasy rather than science fiction, restrained and subtle in places where Some Desperate Glory was forceful and pointed. One thing the books do have in common, though, is some structure, particularly the false climax near the midpoint of the book. I like the feeling of uncertainty and possibility that gives both books, but in the case of The Incandescent, I was not quite in the mood for the second half of the story.

My problem with this book is more of a reader preference than an objective critique: I was in the mood for a story about a confident, capable protagonist who was being underestimated, and Tesh was writing a novel with a more complicated and fraught emotional arc. (I'm being intentionally vague to avoid spoilers.) There's nothing wrong with the story that Tesh wanted to tell, and I admire the skill with which she did it, but I got a tight feeling in my stomach when I realized where she was going. There is a satisfying ending, and I'm still very happy I read this book, but be warned that this might not be the novel to read if you're in the mood for a purer competence porn experience.

Recommended, and I am once again eagerly awaiting the next thing Emily Tesh writes (and reminding myself to go back and read her novellas).

Content warnings: Grievous physical harm, mind control, and some body horror.

Rating: 8 out of 10

29 September, 2025 04:45AM

September 28, 2025

Review: Echoes of the Imperium

Review: Echoes of the Imperium, by Nicholas & Olivia Atwater

Series: Tales of the Iron Rose #1
Publisher: Starwatch Press
Copyright: 2024
ISBN: 1-998257-04-5
Format: Kindle
Pages: 547

Echoes of the Imperium is a steampunk fantasy adventure novel, the first of a projected series. There is another novella in the series, A Matter of Execution, that takes place chronologically before this novel, but which I am told that you should read afterwards. (I have not yet read it.) If Olivia Atwater's name sounds familiar, it's probably for the romantic fantasy Half a Soul. Nicholas Atwater is her husband.

William Blair, a goblin, was a child sailor on the airship HMS Caliban during the final battle that ended the Imperium, and an eyewitness to the destruction of the capital. Like every imperial solider, that loss made him an Oathbreaker; the fae Oath that he swore to defend the Imperium did not care that nothing a twelve-year-old boy could have done would have changed the result of the battle. He failed to kill himself with most of the rest of the crew, and thus was taken captive by the Coalition.

Twenty years later, William Blair is the goblin captain of the airship Iron Rose. It's an independent transport ship that takes various somewhat-dodgy contracts and has to avoid or fight through pirates. The crew comes from both sides of the war and has built their own working truce. Blair himself is a somewhat manic but earnest captain who doesn't entirely believe he deserves that role, one who tends more towards wildly risky plans and improvisation than considered and sober decisions. The rest of the crew are the sort of wild mix of larger-than-life personality quirks that populate swashbuckling adventure books but leave me dubious that stuffing that many high-maintenance people into one ship would go as well as it does.

I did appreciate the gunnery knitting circle, though.

Echoes of the Imperium is told in the first person from Blair's perspective in two timelines. One follows Blair in the immediate aftermath of the war, tracing his path to becoming an airship captain and meeting some of the people who will later be part of his crew. The other is the current timeline, in which Blair gets deeper and deeper into danger by accepting a risky contract with unexpected complications.

Neither of these timelines are in any great hurry to arrive at some destination, and that's the largest problem with this book. Echoes of the Imperium is long, sprawling, and unwilling to get anywhere near any sort of a point until the reader is deeply familiar with the horrific aftermath of the war, the mountains guilt and trauma many of the characters carry around, and Blair's impostor syndrome and feelings of inadequacy. For the first half of this book, I was so bored. I almost bailed out; only a few flashes of interesting character interactions and hints of world-building helped me drag myself through all of the tedious setup.

What saves this book is that the world-building is a delight. Once the characters finally started engaging with it in earnest, I could not put it down. Present-time Blair is no longer an Oathbreaker because he was forgiven by a fairy; this will become important later. The sites of great battles are haunted by ghostly echoes of the last moments of the lives of those who died (hence the title); this will become very important later. Blair has a policy of asking no questions about people's pasts if they're willing to commit to working with the rest of the crew; this, also, will become important later. All of these tidbits the authors drop into the story and then ignore for hundreds of pages do have a payoff if you're willing to wait for it.

As the reader (too) slowly discovers, the Atwaters' world is set in a war of containment by light fae against dark fae. Instead of being inscrutable and separate, the fae use humans and human empires as tools in that war. The fallen Imperium was a bastion of fae defense, and the war that led to the fall of that Imperium was triggered by the price its citizens paid for that defense, one that the fae could not possibly care less about. The creatures may be out of epic fantasy and the technology from the imagined future of Victorian steampunk, but the politics are those of the Cold War and containment strategies. This book has a lot to say about colonialism and empire, but it says those things subtly and from a fantasy slant, in a world with magical Oaths and direct contact with powers that are both far beyond the capabilities of the main characters and woefully deficient in humanity and empathy. It has a bit of the feel of Greek mythology if the gods believed in an icy realpolitik rather than embodying the excesses of human emotion.

The second half of this book was fantastic. The found-family vibe among a crew of high-maintenance misfits that completely failed to cohere for me in the first half of the book, while Blair was wallowing in his feelings and none of the events seemed to matter, came together brilliantly as soon as the crew had a real problem and some meaty world-building and plot to sink their teeth into. There is a delightfully competent teenager, some satisfying competence porn that Blair finally stops undermining, and a sharp political conflict that felt emotionally satisfying, if perhaps not that intellectually profound. In short, it turns into the fun, adventurous romp of larger-than-life characters that the setting promises. Even the somewhat predictable mid-book reveal worked for me, in part because the emotions of the characters around that reveal sold its impact.

If you're going to write a book with a bad half and a good half, it's always better to put the good half second. I came away with very positive feelings about Echoes of the Imperium and a tentative willingness to watch for the sequel. (It reaches a fairly satisfying conclusion, but there are a lot of unresolved plot hooks.) I'm a bit hesitant to recommend it, though, because the first half was not very fun. I want to say that about 75% of the first half of the book could have been cut and the book would have been stronger for it. I'm not completely sure I'm right, since the Atwaters were laying the groundwork for a lot of payoff, but I wish that groundwork hadn't been as much of a slog.

Tentatively recommended, particularly if you're in the mood for steampunk fae mythology, but know that this book requires some investment.

Technically, A Matter of Execution comes first, but I plan to read it as a sequel.

Rating: 8 out of 10

28 September, 2025 04:32AM

September 27, 2025

hackergotchi for Bits from Debian

Bits from Debian

New Debian Developers and Maintainers (July and August 2025)

The following contributors got their Debian Developer accounts in the last two months:

  • Francesco Ballarin (ballarin)
  • Roland Clobus (rclobus)
  • Antoine Le Gonidec (vv221)
  • Guilherme Puida Moreira (puida)
  • NoisyCoil (noisycoil)
  • Akash Santhosh (akash)
  • Lena Voytek (lena)

The following contributors were added as Debian Maintainers in the last two months:

  • Andrew James Bower
  • Kirill Rekhov
  • Alexandre Viard
  • Manuel Traut
  • Harald Dunkel

Congratulations!

27 September, 2025 04:00PM by Jean-Pierre Giraud

Julian Andres Klode

Dependency Tries

As I was shopping groceries I had a shocking realization: The active dependencies of packages in a solver actually form a trie (a dependency A|B - “A or B” - of a package X is considered active if we marked X for install).

Consider the dependencies A|B|C, A|B, B|X. In most package managers these just express alternatives, that is, the “or” relationship, but in Debian packages, it also expresses a preference relationship between its operands, so in A|B|C, A is preferred over B and B over C (and A transitively over C).

This means that we can convert the three dependencies into a trie as follows:

Dependency trie of the three dependencies

Solving the dependency here becomes a matter of trying to install the package referenced by the first edge of the root, and seeing if that sticks. In this case, that would be ‘a’. Let’s assume that ‘a’ failed to install; the next step is to remove the now-empty node of a and merge its children into the root.

Reduced dependency trie with “not A” containing b, b|c, b|x

For ease of visualisation, we remove “a” from the dependency nodes as well, leading us to a trie of the dependencies “b”, “b|c”, and “b|x”.
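
To make the structure concrete, here is a minimal C++ sketch of such a trie. It is purely illustrative and not taken from APT: TrieNode, insert and reject_root_edge are made-up names, and std::map keeps edges sorted by package name, which happens to match the A/B/C example but would not preserve an arbitrary preference order.

#include <map>
#include <memory>
#include <string>
#include <vector>

struct TrieNode {
    bool end_of_dependency = false;                             // some dependency's alternative list ends here
    std::map<std::string, std::unique_ptr<TrieNode>> children;  // edge label (package name) -> subtrie
};

// Insert one dependency, e.g. {"A", "B", "C"} for A|B|C, one edge per alternative.
void insert(TrieNode &root, const std::vector<std::string> &alternatives) {
    TrieNode *node = &root;
    for (const auto &pkg : alternatives) {
        auto &child = node->children[pkg];
        if (!child) child = std::make_unique<TrieNode>();
        node = child.get();
    }
    node->end_of_dependency = true;
}

// Fold src into dst, unifying subtries that share an edge label.
void merge(TrieNode &dst, TrieNode &src) {
    dst.end_of_dependency = dst.end_of_dependency || src.end_of_dependency;
    for (auto &edge : src.children) {
        auto &target = dst.children[edge.first];
        if (!target) target = std::move(edge.second);
        else merge(*target, *edge.second);
    }
}

// Reject the package on one of the root's edges: drop the edge and merge its
// children back into the root, as described above for 'a'.
void reject_root_edge(TrieNode &root, const std::string &pkg) {
    auto it = root.children.find(pkg);
    if (it == root.children.end()) return;
    auto removed = std::move(it->second);
    root.children.erase(it);
    merge(root, *removed);
}

Inserting A|B|C, A|B and B|X and then calling reject_root_edge(root, "A") reproduces the reduced trie from the second figure: the remaining dependencies are b, b|c and b|x.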

Presenting the Debian dependency problem, or at least the positive part of it, as a trie gives us a great visualization of the problem, but it may not prove to be an effective implementation choice.

In the real world we may actually store this as a priority queue that we can delete from. Since we don’t actually want to delete from the queue for real, our queue items are pairs of a pointer to a dependency and an activity level, say A|B@1. Whenever a variable is assigned false, we look at its reverse dependencies, bump their activity, and reinsert them (the priority of an item is determined by the leftmost solution still possible, which has now changed). When we iterate the queue, we drop items with a stale, lower activity level (a small sketch follows the walkthrough below):

  1. Our queue is A|B@1, A|B|C@1, B|X@1
  2. Rejecting A bumps the activity of its reverse dependencies and reinserts them: Our queue is A|B@1, A|B|C@1, (A|)B@2, (A|)B|C@2, B|X@1
  3. We visit A|B@1 but see that the activity of the underlying dependency is now 2 and remove it. Our queue is now A|B|C@1, (A|)B@2, (A|)B|C@2, B|X@1
  4. We visit A|B|C@1 but see that the activity of the underlying dependency is now 2 and remove it. Our queue is now (A|)B@2, (A|)B|C@2, B|X@1
  5. We visit (A|)B@2, see that the activity matches, and find that B is the solution.
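
Below is a small, self-contained C++ sketch of such a lazily-deleted queue. The types and names (Dependency, Solver, next_candidate) are made up for illustration and not taken from APT, and the ordering in operator< is just a placeholder for whatever heuristic picks the next dependency by its leftmost still-possible solution.

#include <cstdint>
#include <map>
#include <queue>
#include <set>
#include <string>
#include <vector>

struct Dependency {
    std::vector<std::string> alternatives;  // in preference order, e.g. {"A", "B", "C"} for A|B|C
    uint64_t activity = 1;                  // bumped whenever one of its operands is rejected
};

struct QueueItem {
    Dependency *dep;
    uint64_t activity;    // activity level captured when this item was inserted
    size_t first_alive;   // index of the leftmost alternative that is still possible
    bool operator<(const QueueItem &other) const { return first_alive > other.first_alive; }
};

struct Solver {
    std::priority_queue<QueueItem> queue;
    std::map<std::string, std::vector<Dependency *>> reverse_deps;  // package -> dependencies naming it
    std::set<std::string> rejected;

    void push(Dependency *dep) {
        size_t i = 0;
        while (i < dep->alternatives.size() && rejected.count(dep->alternatives[i])) ++i;
        queue.push({dep, dep->activity, i});
    }

    // Step 2 of the walkthrough: rejecting a package bumps the activity of its
    // reverse dependencies and reinserts them; the stale entries stay behind.
    void reject(const std::string &pkg) {
        rejected.insert(pkg);
        for (Dependency *dep : reverse_deps[pkg]) {
            dep->activity++;
            push(dep);
        }
    }

    // Steps 3-5: pop entries, discard those whose activity is out of date, and
    // return the preferred still-possible alternative of the first live one.
    std::string next_candidate() {
        while (!queue.empty()) {
            QueueItem item = queue.top();
            queue.pop();
            if (item.activity != item.dep->activity) continue;                // lazily "deleted" entry
            if (item.first_alive >= item.dep->alternatives.size()) continue;  // no alternative left at all
            return item.dep->alternatives[item.first_alive];
        }
        return {};
    }
};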

27 September, 2025 02:32PM

September 25, 2025

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

tint 0.1.6 on CRAN: Maintenance

A new version 0.1.6 of the tint package arrived at CRAN today. tint provides a style ‘not unlike Tufte’ for use in html and pdf documents created from markdown. The github repo shows several examples in its README, more as usual in the package documentation.

This release addresses a small issue where, in pdf mode, pandoc (3.2.1 or newer) needs a particular macro defined when static (premade) images or figure files are included. No other changes.

Changes in tint version 0.1.6 (2025-09-25)

  • An additional LaTeX command needed by pandoc (>= 3.2.1) has been defined for the two pdf variants.

Courtesy of my CRANberries, there is a diffstat report relative to the previous release. More information is on the tint page. For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

25 September, 2025 10:37PM

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Negative result: Branch-free sparse bitset iteration

Sometimes, it's nice to write up something that was a solution to an interesting problem but that didn't work; perhaps someone else can figure out a crucial missing piece, or perhaps their needs are subtly different. Or perhaps they'll just find it interesting. This is such a post.

The problem in question is that I have a medium-sized sparse bitset (say, 1024 bits) and some of those bits (say, 20–50, though it may be more or fewer) are set. I want to iterate over those bits, spending as little time as possible on the rest.

The standard formulation (as far as I know, anyway?), given modern CPUs, is to treat them as a series of 64-bit unsigned integers, and then use a double for loop like this (C++20, but should be easily adaptable to any low-level enough language):

// Assume we have: uint64_t arr[1024 / 64];

for (unsigned block = 0; block < 1024 / 64; ++block) {
    for (uint64_t bits = arr[block]; bits != 0; bits &= bits - 1) {
        unsigned idx = 64 * block + std::countr_zero(bits);
        // do something with idx here
    }
}

The building blocks are simple enough if you're familiar with bit manipulation; std::countr_zero() invokes a bit-scan instruction, and bits &= bits - 1 clears the lowest set bit.

This is roughly proportional to the number of set bits in the bit set, except that if you have lots of zeros, you'll spend time skipping over empty blocks. That's fine. What's not fine is that this is a disaster for the branch predictor, and my code was (is!) spending something like 20% of its time in the CPU handling mispredicted branches. The structure of the two loops is just so irregular; what we'd like is a branch-free way of iterating.

Now, we can of course never be fully branch-free; in particular, we need to end the loop at some point, and that branch needs to be predicted. So call it branch…less? Less branchy. Perhaps.

(As an aside; of course you could just test the bits one by one, but that means you always get work proportional to the number of total bits, and you still get really difficult branch prediction, so I'm not going to discuss that option.)

Now, here are a bunch of things I tried to make this work that didn't.

First, there's a way to splat the bit set into uint8_t indexes using AVX512 (after which you can iterate over them using a normal for loop); it's based on setting up a full adder-like structure and then using compressed writes. I tried it, and it was just way too slow. Geoff Langdale has the code (in a bunch of different formulations) if you'd like to look at it yourself.

So, the next natural attempt is to try to make larger blocks. If we had a uint128_t and could use it just like we did with uint64_t, we'd make life easier for the branch predictor since there would be, simply put, fewer times the inner loop would end. You can do it branch-free by means of conditional moves and such (e.g., do two bit scans and switch between them based on whether the lowest word is zero or not—similarly for the other operations), and there is some support from the compiler (__uint128_t on GCC-like platforms), but in the end, going to 128 was just not enough to end up net positive.
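
To illustrate the idea (as a rough sketch of the approach, not the exact code I used), the two bit scans plus conditional selects look something like this:

#include <bit>
#include <cstdint>

// Lowest-bit scan over a 128-bit block made of two 64-bit words.
// std::countr_zero(uint64_t{0}) returns 64, so the select below works out.
inline unsigned countr_zero_128(uint64_t lo, uint64_t hi) {
    unsigned from_lo = std::countr_zero(lo);       // 0..63, or 64 if lo is empty
    unsigned from_hi = 64 + std::countr_zero(hi);  // 64..127, or 128 if hi is empty
    return lo != 0 ? from_lo : from_hi;            // ideally a cmov, not a branch
}

// Clear the lowest set bit of the pair, again with a select instead of a branch.
inline void clear_lowest_128(uint64_t &lo, uint64_t &hi) {
    bool lo_nonzero = (lo != 0);
    hi = lo_nonzero ? hi : (hi & (hi - 1));
    lo = lo & (lo - 1);  // already a no-op when lo is zero
}

The loop condition then becomes (lo | hi) != 0, and whether the ternaries actually compile to conditional moves is up to the compiler.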

Going to 256 or 512 wasn't easily workable; you don't have bit-scan instructions over the entire word, nor really anything like whole word subtraction. And moving data between the SIMD and integer pipes typically has a cost in itself.

So I started thinking: isn't this much like what a decompressor does? We don't really care about the higher bits of the word; as long as we can get the position of the lowest one, we don't care whether we have few or many left. So perhaps we can look at the input more like a bit stream (or byte stream) than a series of blocks; have a loop where we find the lowest bit, shift everything we just skipped or processed out, and then refill bits from the top. As always, Fabian Giesen has a thorough treatise on the subject. I wasn't concerned with squeezing every last drop out, and my data order was largely fixed anyway, so I only tried two different ways, really:

The first option is what a typical decompressor would do, except byte-based; once I've got a sufficient number of zero bits at the bottom, shift them out and reload bytes at the top. This can be done largely branch-free, so in a sense, you only have a single loop, you just keep reloading and reloading until the end. (There are at least two ways to do this reloading; you can reload only at the top, or you can reload the entire 64-bit word and mask out the bits you just read. They seemed fairly equivalent in my case.) There is a problem with the ending, though; you can read past the end. This may or may not be a problem; it was for me, but it wasn't the biggest problem (see below), so I let it be.

The other variant is somewhat more radical; I always read exactly the next 64 bits (after the previously found bit). This is done by going back to the block idea; a 64-bit word will overlap exactly one or two blocks, so we read 128 bits (two consecutive blocks) and shift the right number of bits to the right. x86 has 128-bit shifts (although they're not that fast), so this makes it fairly natural, and you can use conditional moves to make sure the second read never goes past the end of the buffer, so this feels overall like a slightly saner option.
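
As a sketch of that variant (made-up helper name, assuming the 1024-bit layout from the start of the post and a GCC-like compiler for __uint128_t), reading the next 64 bits starting at an arbitrary bit position looks roughly like this:

#include <cstdint>

// Read the 64 bits starting at bit position 'pos' from a 1024-bit set stored as
// sixteen 64-bit words. The window overlaps at most two words; the second load
// is clamped so it never goes past the end of the buffer.
inline uint64_t window_at(const uint64_t arr[1024 / 64], unsigned pos) {
    unsigned block = pos / 64;
    unsigned shift = pos % 64;                                   // 0..63, so the shift below is well defined
    uint64_t lo = arr[block];
    uint64_t hi = (block + 1 < 1024 / 64) ? arr[block + 1] : 0;  // conditional move instead of a branch
    __uint128_t both = (static_cast<__uint128_t>(hi) << 64) | lo;
    return static_cast<uint64_t>(both >> shift);
}

The caller then scans that window with std::countr_zero as before and advances pos just past the bit it found.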

However: None of them were faster than the normal double-loop. And I think (but never found the energy to try to positively prove) that it comes down to an edge case: If there's not a single bit set in the 64-bit window, we need to handle that specially. So there we get back a fairly unpredictable branch after all—or at least, in my data set, this seems to happen fairly often. If you've got a fairly dense bit set, this won't be an issue, but then you probably have more friendly branch behavior in the loop, too. (For reference, I have something like 3% branch misprediction overall, which is really bad when most of the stuff that I do involves ANDing bit vectors with each other!)

So, that's where I ended up. It's back to the double-loop. But perhaps someone will be able to find a magic trick that I missed. Email is welcome if you ever got this to work. :-)

25 September, 2025 09:52PM

September 24, 2025

hackergotchi for Matthew Garrett

Matthew Garrett

Investigating a forged PDF

I had to rent a house for a couple of months recently, which is long enough in California that it pushes you into proper tenant protection law. As landlords tend to do, they failed to return my security deposit within the 21 days required by law, having already failed to provide the required notification that I was entitled to an inspection before moving out. Cue some tedious argumentation with the letting agency, and eventually me threatening to take them to small claims court.

This post is not about that.

Now, under Californian law, the onus is on the landlord to hold and return the security deposit - the agency has no role in this. The only reason I was talking to them is that my lease didn't mention the name or address of the landlord (another legal violation, but the outcome is just that you get to serve the landlord via the agency). So it was a bit surprising when I received an email from the owner of the agency informing me that they did not hold the deposit and so were not liable - I already knew this.

The odd bit about this, though, is that they sent me another copy of the contract, asserting that it made it clear that the landlord held the deposit. I read it, and instead found a clause reading SECURITY: The security deposit will secure the performance of Tenant’s obligations. IER may, but will not be obligated to, apply all portions of said deposit on account of Tenant’s obligations. Any balance remaining upon termination will be returned to Tenant. Tenant will not have the right to apply the security deposit in payment of the last month’s rent. Security deposit held at IER Trust Account., where IER is International Executive Rentals, the agency in question. Why send me a contract that says you hold the money while you're telling me you don't? And then I read further down and found this:
Text reading ENTIRE AGREEMENT: The foregoing constitutes the entire agreement between the parties and may be modified only in writing signed by all parties. This agreement and any modifications, including any photocopy or facsimile, may be signed in one or more counterparts, each of which will be deemed an original and all of which taken together will constitute one and the same instrument. The following exhibits, if checked, have been made a part of this Agreement before the parties’ execution: ۞ Exhibit 1: Lead-Based Paint Disclosure (Required by Law for Rental Property Built Prior to 1978) ۞ Addendum 1 The security deposit will be held by (name removed) and applied, refunded, or forfeited in accordance with the terms of this lease agreement.
Ok, fair enough, there's an addendum that says the landlord has it (I've removed the landlord's name, it's present in the original).

Except. I had no recollection of that addendum. I went back to the copy of the contract I had and discovered:
The same text as the previous picture, but addendum 1 is empty
Huh! But obviously I could just have edited that to remove it (there's no obvious reason for me to, but whatever), and then it'd be my word against theirs. However, I'd been sent the document via RightSignature, an online document signing platform, and they'd added a certification page that looked like this:
A Signature Certificate, containing a bunch of data about the document including a checksum of the original
Interestingly, the certificate page was identical in both documents, including the checksums, despite the content being different. So, how do I show which one is legitimate? You'd think given this certificate page this would be trivial, but RightSignature provides no documented mechanism whatsoever for anyone to verify any of the fields in the certificate, which is annoying but let's see what we can do anyway.

First up, let's look at the PDF metadata. pdftk has a dump_data command that dumps the metadata in the document, including the creation date and the modification date. My file had both set to identical timestamps in June, both listed in UTC, corresponding to the time I'd signed the document. The file containing the addendum? The same creation time, but a modification time of this Monday, shortly before it was sent to me. This time, the modification timestamp was in Pacific Daylight Time, the timezone currently observed in California. In addition, the data included two ID fields, ID0 and ID1. In my document both were identical, in the one with the addendum ID0 matched mine but ID1 was different.

These ID tags are intended to be some form of representation (such as a hash) of the document. ID0 is set when the document is created and should not be modified afterwards; ID1 is initially identical to ID0, but changes when the document is modified. This is intended to allow tooling to identify whether two documents are modified versions of the same document. The identical ID0 indicated that the document with the addendum was originally identical to mine, and the different ID1 that it had been modified.

Well, ok, that seems like a pretty strong demonstration. I had the "I have a very particular set of skills" conversation with the agency and pointed these facts out, that they were an extremely strong indication that my copy was authentic and their one wasn't, and they responded that the document was "re-sealed" every time it was downloaded from RightSignature and that would explain the modifications. This doesn't seem plausible, but it's an argument. Let's go further.

My next move was pdfalyzer, which allows you to pull a PDF apart into its component pieces. This revealed that the documents were identical, other than page 3, the one with the addendum. This page included tags entitled "touchUp_TextEdit", evidence that the page had been modified using Acrobat. But in itself, that doesn't prove anything - obviously it had been edited at some point to insert the landlord's name, it doesn't prove whether it happened before or after the signing.

But in the process of editing, Acrobat appeared to have renamed all the font references on that page into a different format. Every other page had a consistent naming scheme for the fonts, and they matched the scheme in the page 3 I had. Again, that doesn't tell us whether the renaming happened before or after the signing. Or does it?

You see, when I completed my signing, RightSignature inserted my name into the document, and did so using a font that wasn't otherwise present in the document (Courier, in this case). That font was named identically throughout the document, except on page 3, where it was named in the same manner as every other font that Acrobat had renamed. Given the font wasn't present in the document until after I'd signed it, this is proof that the page was edited after signing.

But eh this is all very convoluted. Surely there's an easier way? Thankfully yes, although I hate it. RightSignature had sent me a link to view my signed copy of the document. When I went there it presented it to me as the original PDF with my signature overlaid on top. Hitting F12 gave me the network tab, and I could see a reference to a base.pdf. Downloading that gave me the original PDF, pre-signature. Running sha256sum on it gave me an identical hash to the "Original checksum" field. Needless to say, it did not contain the addendum.

Why do this? The only explanation I can come up with (and I am obviously guessing here, I may be incorrect!) is that International Executive Rentals realised that they'd sent me a contract which could mean that they were liable for the return of my deposit, even though they'd already given it to my landlord, and after realising this added the addendum, sent it to me, and assumed that I just wouldn't notice (or that, if I did, I wouldn't be able to prove anything). In the process they went from an extremely unlikely possibility of having civil liability for a few thousand dollars (even if they were holding the deposit it's still the landlord's legal duty to return it, as far as I can tell) to doing something that looks extremely like forgery.

There's a hilarious followup. After this happened, the agency offered to do a screenshare with me showing them logging into RightSignature and showing the signed file with the addendum, and then proceeded to do so. One minor problem - the "Send for signature" button was still there, just below a field saying "Uploaded: 09/22/25". I asked them to search for my name, and it popped up two hits - one marked draft, one marked completed. The one marked completed? Didn't contain the addendum.


24 September, 2025 10:22PM

hackergotchi for Philipp Kern

Philipp Kern

PSA: APT::Default-Release might be holding back updates from you

If you are like me and install machines with testing, then flip them over to the current stable for a while using APT::Default-Release, you might not be receiving all relevant updates. In fact, this setting is kind of discouraged in favor of a more extensive pinning configuration.

However, the field does support regexps, so instead of just specifying, say, "trixie", you can put this in place:

APT::Default-Release "/^trixie(|-security|-proposed-updates|-updates)$/";

That should bring the security and stable updates back in.

It feels like we are recently learning a lot about the drawbacks of these overlays and how they need to be configured properly...

24 September, 2025 09:07AM by Philipp Kern (noreply@blogger.com)

September 23, 2025

hackergotchi for David Bremner

David Bremner

Hibernate on the pocket reform 12/n

Context

Update to latest rockchip-devel

For some reason I decided to try re-applying the PCI series. Good news: the PCI series finally applies cleanly.

$ git fetch collabora && git switch -c tmp collabora  # [1]
$ b4 am 20250715-pci-port-reset-v6-0-6f9cce94e7bb@oss.qualcomm.com
$ git switch reform-patches  # [2]
$ git rebase -i tmp
  1. https://gitlab.collabora.com/hardware-enablement/rockchip-3588/linux.git#rockchip-devel
  2. https://salsa.debian.org/bremner/collabora-rockchip-3588#reform-patches

Rebuild the kernel

$ cp /boot/config-6.17.0-rc7+ .config
$ make olddefconfig
$ yes '' | make localmodconfig
$ make KBUILD_IMAGE=arch/arm64/boot/Image bindeb-pkg -j$(nproc)

try the hibernation test, again

Running the following test script

set -x
echo platform >  /sys/power/pm_test
echo reboot > /sys/power/disk
sleep 2
rmmod mt76x2u
sleep 2
echo disk >  /sys/power/state
sleep 2
modprobe mt76x2u

Initially there is some output like this

[  151.752683] rockchip-dw-pcie a40c00000.pcie: Failed to receive PME_TO_Ack
[  151.754035] PM: hibernation: hibernation debug: Waiting for 5 second(s).
[  157.821584] rockchip-dw-pcie a40c00000.pcie: Phy link never came up
[  157.822139] rockchip-dw-pcie a40c00000.pcie: fail to resume
[  157.822636] rockchip-dw-pcie a40c00000.pcie: PM: dpm_run_callback(): genpd_restore_noirq returns -110
[  157.823442] rockchip-dw-pcie a40c00000.pcie: PM: failed to restore noirq: error -110

A small amount of detective work suggests that a40c00000.pcie corresponds to the first PCI bridge on the rk3588 SOC.

$ ls -l /sys/bus/pci/devices
total 0
lrwxrwxrwx 1 root root 0 Sep 23 10:32 0003:30:00.0 -> ../../../devices/platform/a40c00000.pcie/pci0003:30/0003:30:00.0
lrwxrwxrwx 1 root root 0 Sep 23 10:32 0004:40:00.0 -> ../../../devices/platform/a41000000.pcie/pci0004:40/0004:40:00.0
lrwxrwxrwx 1 root root 0 Sep 23 10:32 0004:41:00.0 -> ../../../devices/platform/a41000000.pcie/pci0004:40/0004:40:00.0/0004:41:00.0

Then after a pause,

[ 1032.039237] watchdog: CPU5: Watchdog detected hard LOCKUP on cpu 6
[ 1032.039778] Modules linked in: xt_CHECKSUM xt_tcpudp nft_chain_nat xt_MASQUERADE nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 nft_compat x_tables bridge stp llc nf_tables aes_neon_bs aes_neon_blk ccm dwmac_rk binfmt_misc mt76x2_common mt76x02_usb mt76_usb mt76x02_lib mt76 rk805_pwrkey snd_soc_tlv320aic31xx snd_soc_simple_card mac80211 rockchip_saradc reform2_lpc(OE) industrialio_triggered_buffer libarc4 kfifo_buf cfg80211 industrialio rockchip_thermal rockchip_rng cdc_acm rfkill snd_soc_rockchip_i2s_tdm hantro_vpu rockchip_rga panthor v4l2_vp9 v4l2_jpeg snd_soc_audio_graph_card videobuf2_dma_sg v4l2_h264 drm_gpuvm snd_soc_simple_card_utils drm_exec evdev joydev dm_mod nvme_fabrics efi_pstore configfs nfnetlink autofs4 ext4 crc16 mbcache jbd2 btrfs blake2b_generic xor xor_neon raid6_pq mali_dp snd_soc_meson_axg_toddr snd_soc_meson_axg_fifo snd_soc_meson_codec_glue panfrost drm_shmem_helper gpu_sched ao_cec_g12a meson_vdec(C) videobuf2_dma_contig videobuf2_memops v4l2_mem2mem videobuf2_v4l2 videodev
[ 1032.039834]  videobuf2_common mc dw_hdmi_i2s_audio meson_drm meson_canvas meson_dw_mipi_dsi meson_dw_hdmi mxsfb mux_mmio panel_edp imx_dcss ti_sn65dsi86 nwl_dsi mux_core pwm_imx27 hid_generic usbhid hid onboard_usb_dev nvme nvme_core nvme_keyring nvme_auth snd_soc_hdmi_codec snd_soc_core xhci_plat_hcd xhci_hcd snd_pcm_dmaengine snd_pcm snd_timer snd soundcore rtc_pcf8523 fan53555 micrel phy_package stmmac_platform stmmac pcs_xpcs phylink mdio_devres rk808_regulator of_mdio sdhci_of_dwcmshc fixed_phy sdhci_pltfm fwnode_mdio libphy sdhci phy_rockchip_usbdp dw_mmc_rockchip dw_mmc_pltfm typec phy_rockchip_naneng_combphy pwm_rockchip dw_wdt phy_rockchip_samsung_hdptx dwc3 cqhci dw_mmc mdio_bus rockchip_dfi ehci_platform rockchipdrm ulpi ehci_hcd dw_hdmi_qp ohci_platform udc_core ohci_hcd analogix_dp dw_mipi_dsi i2c_rk3x cpufreq_dt usbcore phy_rockchip_inno_usb2 dw_mipi_dsi2 drm_dp_aux_bus usb_common [last unloaded: mt76x2u]
[ 1032.039886] Sending NMI from CPU 5 to CPUs 6:

previous episode

23 September, 2025 02:34PM

September 22, 2025

hackergotchi for Daniel Pocock

Daniel Pocock

Mayday: Optus emergency calling crisis

Customers of Optus were unable to make emergency calls for nine hours on Thursday, 18 September 2025.

It is not clear if the fault lies in the routing system or in the call center where the calls are handled.

It is not clear if they are talking about mobiles or landlines or both.

If mobile customers were in scope for the outage then it raises serious concerns about the ability of mobile handsets to route emergency calls to alternative networks.

Context and history

Australia deregulated telecoms in the early 1990s. Optus was the first company given the opportunity to compete with the incumbent state monopoly in 1991. They were backed by the UK's Cable & Wireless and the US company BellSouth. Therefore, they have a lot of experience in the Australian market.

As a teenager, I was the youngest person to pass the amateur radio exam in my city. Amateur radio licensing is administered by ACMA, the same authority who regulates phone companies. The amateur radio exam requires all licensed amateurs to be capable of transmitting emergency messages. As part of WICEN, I completed additional training and practice at community events such as the Southern 80 ski race and the Red Cross Murray River Canoe Marathon. Given the remote nature of these events, radio amateurs become very aware of the level of responsibility we have to keep the community safe. The Southern 80, in particular, has had numerous deaths over the years.

When I first started consulting on IP-based telephony networks in the UK, the market was highly de-regulated. Officials from the Home Office were very concerned about the possibility that people using IP telephones may not be able to make emergency calls. Given the UK Government is keen on deregulation and competition, they couldn't ban IP telephony so the authorities dedicated resources to engaging with our industry and running regular meetings to discuss the risks and solutions.

The simple fact is many of us in the industry have had meaningful discussions and meaningful experience over many years. Situations like this do not come as a total surprise to the telecoms companies. With the right people and protocols, it could have been handled better.

In 2013, news stories simultaneously appeared about emergency call centers failing in Melbourne, Australia and Detroit, USA. I made a note of that on my blog. Were both call centers running the same software or did they have a common dependency that nobody else noticed at the time?

In 2023, the entire Optus network, including Internet, mobile and landline services failed between 04:05 and 13:00 AEDT. This total failure impacted many other services, including payment systems in shops, security alarms and monitoring for people with high risk medical conditions. Emergency calls were not possible during the outage.

After the 2023 incident, there were high level discussions between state officials, emergency services and the managers of the telecoms companies. Protocols were agreed for handling a similar problem if it ever happened again.

On 18 September 2025, in the early hours of the morning, Optus apparently made a firewall change and failed to realize the change had impacted their ability to handle emergency calls.

At approximately 9am some customers were able to contact the regular customer support number and report an outage in emergency calling.

Optus alleges they have records of over 600 failed emergency calls. They began making welfare checks on the callers by trying to return the calls some time in the afternoon.

The fact that Optus was making the welfare checks suggests that the emergency call center is managed by Optus too. It is not clear if the same call center is also subcontracted to other networks.

The problem was fixed late in the afternoon.

The police and state authorities were only informed after the problem was fixed and after attempts were already under way to make welfare checks.

The communications regulator, ACMA, was only notified after the outage.

The authorities were initially told that only 10 calls were missed.

On 19 September, Friday, authorities were told 100 calls had been missed.

Another twenty minutes after that, Optus revealed a total of 600 calls had been missed. Nobody is sure if this is the real number.

On 19 September, the boss of Optus revealed three people had died.

Subsequent reports have speculated that one of those deaths would have happened anyway even if an emergency call was made. The coroner will have to decide if the other two deaths were due to the failure of the emergency call center connectivity.

Resilience of emergency call centers

Multiple call centers are usually established in different parts of the city or state so that if one call center has a catastrophic failure, the other centers can handle the calls.

Each call center ideally has independent connections to each telephone company.

When the British Telecom (BT) telephone exchange caught fire in Manchester, people discovered that the call centers didn't have direct radio contact with ambulances. The call center was relying on a line from BT to join the radio transmitters to the call center.

The call centers should be able to detect if a particular phone company stops sending calls to them. Many call centers already have real-time statistical analysis of calling patterns to help them identify problems that might cost them lost sales. It would be nice to see them using the same statistical analysis tools to detect an absence of emergency calls.

However, call center management is often outsourced. It is not clear if the call centers themselves are also managed by Optus or some other company.

Bubble-wrapped people avoiding critical commentary

Why did it take so long for Optus to inform state authorities and police that there was an outage?

To understand that, we need to look at the problems in Debian and other free software organizations.

Debian spent over $120,000 on legal fees to avoid critical commentary. This shows the extent to which people will go out of their way to avoid hearing an inconvenient truth about their organization.

I have no knowledge of the internal problems at Optus. But it would be reasonable to ask if people were afraid to tell their managers about the problem because they were afraid of criticism. Were the managers afraid to inform police commanders out of a similar fear of criticism?

After Debian spent all their money on legal fees to avoid critical feedback, volunteers were asked to use their own money to pay for the day trip at DebConf23. Abraham Raji didn't pay anything, he was left alone to swim unsupervised and he drowned.

Personally, I completed the marine radio certificate (SRC / VHF) exam in France. So I'm qualified to relay mayday calls in more than one language. The Optus brand is starting to look like Titanic.

Daniel Pocock, SRC

 

One of the questions on my mind after reviewing the public reports about the Optus crisis: is there anybody in senior management who has prior experience in law enforcement or first aid management? Do the regulations in Australia require the telecoms companies to hire people with suitable field experience before operating an emergency call center? These people have life experience that can be very helpful when the culture of the organization is unable to escalate an issue of this nature in the correct manner.

In the financial services industry, state regulators take a keen interest in the structure of the firms and ensuring essential skills exist in the senior management team. Likewise, in the management of telecoms, the state can insist on periodic audits of these firms to ensure they have the right facilities and the right skills to deliver vital services. If the state isn't doing the necessary audits then they have to share the blame with the CEO.

Please see the chronological history of how the Debian harassment and abuse culture evolved.

22 September, 2025 03:30PM

hackergotchi for Evgeni Golov

Evgeni Golov

Booting Vagrant boxes with UEFI on Fedora: Permission denied

If you're still using Vagrant (I am) and try to boot a box that uses UEFI (like boxen/debian-13), a simple vagrant init boxen/debian-13 and vagrant up will entertain you with a nice traceback:

% vagrant up
Bringing machine 'default' up with 'libvirt' provider...
==> default: Checking if box 'boxen/debian-13' version '2025.08.20.12' is up to date...
==> default: Creating image (snapshot of base box volume).
==> default: Creating domain with the following settings...
==> default:  -- Name:              tmp.JV8X48n30U_default
==> default:  -- Description:       Source: /tmp/tmp.JV8X48n30U/Vagrantfile
==> default:  -- Domain type:       kvm
==> default:  -- Cpus:              1
==> default:  -- Feature:           acpi
==> default:  -- Feature:           apic
==> default:  -- Feature:           pae
==> default:  -- Clock offset:      utc
==> default:  -- Memory:            2048M
==> default:  -- Loader:            /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd
==> default:  -- Nvram:             /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/efivars.fd
==> default:  -- Base box:          boxen/debian-13
==> default:  -- Storage pool:      default
==> default:  -- Image(vda):        /home/evgeni/.local/share/libvirt/images/tmp.JV8X48n30U_default.img, virtio, 20G
==> default:  -- Disk driver opts:  cache='default'
==> default:  -- Graphics Type:     vnc
==> default:  -- Video Type:        cirrus
==> default:  -- Video VRAM:        16384
==> default:  -- Video 3D accel:    false
==> default:  -- Keymap:            en-us
==> default:  -- TPM Backend:       passthrough
==> default:  -- INPUT:             type=mouse, bus=ps2
==> default:  -- CHANNEL:             type=unix, mode=
==> default:  -- CHANNEL:             target_type=virtio, target_name=org.qemu.guest_agent.0
==> default: Creating shared folders metadata...
==> default: Starting domain.
==> default: Removing domain...
==> default: Deleting the machine folder
/usr/share/gems/gems/fog-libvirt-0.13.1/lib/fog/libvirt/requests/compute/vm_action.rb:7:in 'Libvirt::Domain#create': Call to virDomainCreate failed: internal error: process exited while connecting to monitor: 2025-09-22T10:07:55.081081Z qemu-system-x86_64: -blockdev {"driver":"file","filename":"/home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}: Could not open '/home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd': Permission denied (Libvirt::Error)
    from /usr/share/gems/gems/fog-libvirt-0.13.1/lib/fog/libvirt/requests/compute/vm_action.rb:7:in 'Fog::Libvirt::Compute::Shared#vm_action'
    from /usr/share/gems/gems/fog-libvirt-0.13.1/lib/fog/libvirt/models/compute/server.rb:81:in 'Fog::Libvirt::Compute::Server#start'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/start_domain.rb:546:in 'VagrantPlugins::ProviderLibvirt::Action::StartDomain#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/set_boot_order.rb:22:in 'VagrantPlugins::ProviderLibvirt::Action::SetBootOrder#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/share_folders.rb:22:in 'VagrantPlugins::ProviderLibvirt::Action::ShareFolders#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/prepare_nfs_settings.rb:21:in 'VagrantPlugins::ProviderLibvirt::Action::PrepareNFSSettings#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/synced_folders.rb:87:in 'Vagrant::Action::Builtin::SyncedFolders#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/delayed.rb:19:in 'Vagrant::Action::Builtin::Delayed#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/synced_folder_cleanup.rb:28:in 'Vagrant::Action::Builtin::SyncedFolderCleanup#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/plugins/synced_folders/nfs/action_cleanup.rb:25:in 'VagrantPlugins::SyncedFolderNFS::ActionCleanup#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/prepare_nfs_valid_ids.rb:14:in 'VagrantPlugins::ProviderLibvirt::Action::PrepareNFSValidIds#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:127:in 'block in Vagrant::Action::Warden#finalize_action'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builder.rb:180:in 'Vagrant::Action::Builder#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'block in Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/util/busy.rb:19:in 'Vagrant::Util::Busy.busy'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/call.rb:53:in 'Vagrant::Action::Builtin::Call#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:127:in 'block in Vagrant::Action::Warden#finalize_action'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builder.rb:180:in 'Vagrant::Action::Builder#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'block in Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/util/busy.rb:19:in 'Vagrant::Util::Busy.busy'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/call.rb:53:in 'Vagrant::Action::Builtin::Call#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/create_network_interfaces.rb:197:in 'VagrantPlugins::ProviderLibvirt::Action::CreateNetworkInterfaces#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/create_networks.rb:40:in 'VagrantPlugins::ProviderLibvirt::Action::CreateNetworks#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/create_domain.rb:452:in 'VagrantPlugins::ProviderLibvirt::Action::CreateDomain#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/resolve_disk_settings.rb:143:in 'VagrantPlugins::ProviderLibvirt::Action::ResolveDiskSettings#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/create_domain_volume.rb:97:in 'VagrantPlugins::ProviderLibvirt::Action::CreateDomainVolume#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/handle_box_image.rb:127:in 'VagrantPlugins::ProviderLibvirt::Action::HandleBoxImage#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/handle_box.rb:56:in 'Vagrant::Action::Builtin::HandleBox#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/handle_storage_pool.rb:63:in 'VagrantPlugins::ProviderLibvirt::Action::HandleStoragePool#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/set_name_of_domain.rb:34:in 'VagrantPlugins::ProviderLibvirt::Action::SetNameOfDomain#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/provision.rb:80:in 'Vagrant::Action::Builtin::Provision#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-libvirt-0.11.2/lib/vagrant-libvirt/action/cleanup_on_failure.rb:21:in 'VagrantPlugins::ProviderLibvirt::Action::CleanupOnFailure#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:127:in 'block in Vagrant::Action::Warden#finalize_action'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builder.rb:180:in 'Vagrant::Action::Builder#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'block in Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/util/busy.rb:19:in 'Vagrant::Util::Busy.busy'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/call.rb:53:in 'Vagrant::Action::Builtin::Call#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/box_check_outdated.rb:93:in 'Vagrant::Action::Builtin::BoxCheckOutdated#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builtin/config_validate.rb:25:in 'Vagrant::Action::Builtin::ConfigValidate#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/warden.rb:48:in 'Vagrant::Action::Warden#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/builder.rb:180:in 'Vagrant::Action::Builder#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'block in Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/util/busy.rb:19:in 'Vagrant::Util::Busy.busy'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/action/runner.rb:101:in 'Vagrant::Action::Runner#run'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/machine.rb:248:in 'Vagrant::Machine#action_raw'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/machine.rb:217:in 'block in Vagrant::Machine#action'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/environment.rb:631:in 'Vagrant::Environment#lock'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/machine.rb:203:in 'Method#call'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/machine.rb:203:in 'Vagrant::Machine#action'
    from /usr/share/vagrant/gems/gems/vagrant-2.3.4/lib/vagrant/batch_action.rb:86:in 'block (2 levels) in Vagrant::BatchAction#run'

The important part here is

Call to virDomainCreate failed: internal error: process exited while connecting to monitor:
2025-09-22T10:07:55.081081Z qemu-system-x86_64: -blockdev {"driver":"file","filename":"/home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}:
Could not open '/home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd': Permission denied (Libvirt::Error)

Of course we checked that the file permissions on this file are correct (I'll save you the ls output), so what's next? Yes, of course, SELinux!

# ausearch -m AVC
time->Mon Sep 22 12:07:55 2025
type=AVC msg=audit(1758535675.080:1613): avc:  denied  { read } for  pid=257204 comm="qemu-system-x86" name="OVMF_CODE.fd" dev="dm-2" ino=1883946 scontext=unconfined_u:unconfined_r:svirt_t:s0:c352,c717 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=file permissive=0

A process in the svirt_t domain tries to access something in the user_home_t domain and is denied by the kernel. So far, SELinux is both working as designed and preventing us from doing our work, nice.

For "normal" (non-UEFI) boxes, Vagrant uploads the image to libvirt, which stores it in ~/.local/share/libvirt/images/ and boots fine from there. For UEFI boxen, one also needs loader and nvram files, which Vagrant keeps in ~/.vagrant.d/boxes/<box_name> and that's what explodes in our face here.

As ~/.local/share/libvirt/images/ works well and is labeled svirt_home_t, let's see what other folders use that label:

# semanage fcontext -l |grep svirt_home_t
/home/[^/]+/\.cache/libvirt/qemu(/.*)?             all files          unconfined_u:object_r:svirt_home_t:s0
/home/[^/]+/\.config/libvirt/qemu(/.*)?            all files          unconfined_u:object_r:svirt_home_t:s0
/home/[^/]+/\.libvirt/qemu(/.*)?                   all files          unconfined_u:object_r:svirt_home_t:s0
/home/[^/]+/\.local/share/gnome-boxes/images(/.*)? all files          unconfined_u:object_r:svirt_home_t:s0
/home/[^/]+/\.local/share/libvirt/boot(/.*)?       all files          unconfined_u:object_r:svirt_home_t:s0
/home/[^/]+/\.local/share/libvirt/images(/.*)?     all files          unconfined_u:object_r:svirt_home_t:s0

Okay, that all makes sense, and it's just missing the Vagrant-specific folders!

# semanage fcontext -a -t svirt_home_t '/home/[^/]+/\.vagrant.d/boxes(/.*)?'

Now relabel the Vagrant boxes:

% restorecon -rv ~/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13 from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/metadata_url from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12 from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/box_0.img from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/metadata.json from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/Vagrantfile from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_VARS.fd from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/box_update_check from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0
Relabeled /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/efivars.fd from unconfined_u:object_r:user_home_t:s0 to unconfined_u:object_r:svirt_home_t:s0

And it works!

% vagrant up
Bringing machine 'default' up with 'libvirt' provider...
==> default: Checking if box 'boxen/debian-13' version '2025.08.20.12' is up to date...
==> default: Creating image (snapshot of base box volume).
==> default: Creating domain with the following settings...
==> default:  -- Name:              tmp.JV8X48n30U_default
==> default:  -- Description:       Source: /tmp/tmp.JV8X48n30U/Vagrantfile
==> default:  -- Domain type:       kvm
==> default:  -- Cpus:              1
==> default:  -- Feature:           acpi
==> default:  -- Feature:           apic
==> default:  -- Feature:           pae
==> default:  -- Clock offset:      utc
==> default:  -- Memory:            2048M
==> default:  -- Loader:            /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/OVMF_CODE.fd
==> default:  -- Nvram:             /home/evgeni/.vagrant.d/boxes/boxen-VAGRANTSLASH-debian-13/2025.08.20.12/libvirt/efivars.fd
==> default:  -- Base box:          boxen/debian-13
==> default:  -- Storage pool:      default
==> default:  -- Image(vda):        /home/evgeni/.local/share/libvirt/images/tmp.JV8X48n30U_default.img, virtio, 20G
==> default:  -- Disk driver opts:  cache='default'
==> default:  -- Graphics Type:     vnc
==> default:  -- Video Type:        cirrus
==> default:  -- Video VRAM:        16384
==> default:  -- Video 3D accel:    false
==> default:  -- Keymap:            en-us
==> default:  -- TPM Backend:       passthrough
==> default:  -- INPUT:             type=mouse, bus=ps2
==> default:  -- CHANNEL:             type=unix, mode=
==> default:  -- CHANNEL:             target_type=virtio, target_name=org.qemu.guest_agent.0
==> default: Creating shared folders metadata...
==> default: Starting domain.
==> default: Domain launching with graphics connection settings...
==> default:  -- Graphics Port:      5900
==> default:  -- Graphics IP:        127.0.0.1
==> default:  -- Graphics Password:  Not defined
==> default:  -- Graphics Websocket: 5700
==> default: Waiting for domain to get an IP address...
==> default: Waiting for machine to boot. This may take a few minutes...
    default: SSH address: 192.168.124.157:22
    default: SSH username: vagrant
    default: SSH auth method: private key
    default:
    default: Vagrant insecure key detected. Vagrant will automatically replace
    default: this with a newly generated keypair for better security.
    default:
    default: Inserting generated public key within guest...
    default: Removing insecure key from the guest if it's present...
    default: Key inserted! Disconnecting and reconnecting using new SSH key...
==> default: Machine booted and ready!

22 September, 2025 10:37AM by evgeni

Russell Coker

More About the Colmi P80

The FOSS Android program for communicating with smart watches is Gadget Bridge which now has support for the Colmi P80 [1].

I first blogged about the Colmi P80 just over a month ago [2]. Now I have a couple of relatives using it happily on Android with the proprietary app. I couldn’t use it myself because I require more control over which apps have their notifications go to the watch than the Colmi app offers. Also I’m trying to move away from non-free software.

Yesterday the f-droid repository informed me that there was a new version of Gadget Bridge and the changelog indicated support for the Colmi P80 so I connected the P80 and disconnected the PineTime.

The first problem I noticed is that the vibrator on the P80, even on its maximum setting, is much weaker than the one on the PineTime, so weak that I often didn’t notice it. Maybe if I wore it for a few weeks I would teach myself to notice it, but it should just be able to work with me on this. If it could be set to give multiple bursts of vibration then that would work.

The next problem is that the P80 by default does not turn the screen on when there’s a notification, and there seems to be no way to configure it to do so. I configured it to turn on when I raise my arm, which mostly works, but that still relies on me noticing the vibration. Vibration plus the screen turning on would be harder to miss than vibration on its own.

I don’t recall seeing any review of smart watches ever that stated whether the screen would turn on when there’s a notification or whether the vibration was easy to notice.

One problem with both the PineTime (running InfiniTime) and the P80 is that when the screen is turned on (through a gesture, pushing the button, or a notification in the case of the PineTime) it is active for swiping to change the settings. I would like some other action to be required before settings can be changed, so that if the screen turns on while I’m asleep my watch won’t brush against something and change its settings (which has happened).

It’s neat how Gadget Bridge supports talking to multiple smart watches at the same time. One useful feature for that would be to have different notification settings for each watch. I can imagine someone changing between a watch for jogging and a watch for work and wanting different settings.

Colmi P80 Problems

  • No authentication for Bluetooth connections.
  • Runs non-free software so no chance to fix things.
  • Battery life worse than PineTime (but not really bad).
  • Vibration weak.
  • Screen doesn’t turn on when a notification is sent.

Conclusion

I’m using the PineTime as my daily driver again. While the P80 works well enough for some people (even with the Colmi proprietary app), it doesn’t do what I want. It is, however, a good test device for FOSS work on the phone side; it has a decent feature set and is cheap.

Apart from lack of authentication and running non-free software the problems are mostly a matter of taste. Some people might think it’s great the way it works.

22 September, 2025 08:39AM by etbe

Vincent Bernat

Akvorado release 2.0

Akvorado 2.0 was released today! Akvorado collects network flows with IPFIX and sFlow. It enriches flows and stores them in a ClickHouse database. Users can browse the data through a web console. This release introduces an important architectural change and other smaller improvements. Let’s dive in! 🤿

$ git diff --shortstat v1.11.5
 493 files changed, 25015 insertions(+), 21135 deletions(-)

New “outlet” service

The major change in Akvorado 2.0 is splitting the inlet service into two parts: the inlet and the outlet. Previously, the inlet handled all flow processing: receiving, decoding, and enrichment. Flows were then sent to Kafka for storage in ClickHouse:

Akvorado flow processing before the change: flows are received and processed by the inlet, sent to Kafka and stored in ClickHouse
Akvorado flow processing before the introduction of the outlet service

Network flows reach the inlet service using UDP, an unreliable protocol. The inlet must process them fast enough to avoid losing packets. To handle a high number of flows, the inlet spawns several sets of workers to receive flows, fetch metadata, and assemble enriched flows for Kafka. Many configuration options existed for scaling, which increased complexity for users. The code needed to avoid blocking at any cost, making the processing pipeline complex and sometimes unreliable, particularly the BMP receiver.1 Adding new features became difficult without making the problem worse.2

In Akvorado 2.0, the inlet receives flows and pushes them to Kafka without decoding them. The new outlet service handles the remaining tasks:

Akvorado flow processing after the change: flows are received by the inlet, sent to Kafka, processed by the outlet and inserted in ClickHouse
Akvorado flow processing after the introduction of the outlet service

This change goes beyond a simple split:3 the outlet now reads flows from Kafka and pushes them to ClickHouse, two tasks that Akvorado did not handle before. Flows are heavily batched to increase efficiency and reduce the load on ClickHouse using ch-go, a low-level Go client for ClickHouse. When batches are too small, asynchronous inserts are used (e20645). The number of outlet workers scales dynamically (e5a625) based on the target batch size and latency (50,000 flows and 5 seconds by default).

This new architecture also allows us to simplify and optimize the code. The outlet fetches metadata synchronously (e20645). The BMP component becomes simpler by removing cooperative multitasking (3b9486). Reusing the same RawFlow object to decode protobuf-encoded flows from Kafka reduces pressure on the garbage collector (8b580f).

The effect on Akvorado’s overall performance was somewhat uncertain, but a user reported 35% lower CPU usage after migrating from the previous version, plus resolution of the long-standing BMP component issue. 🥳

Other changes

This new version includes many miscellaneous changes, such as completion for source and destination ports (f92d2e), and automatic restart of the orchestrator service (0f72ff) when configuration changes to avoid a common pitfall for newcomers.

Let’s focus on some key areas for this release: observability, documentation, CI, Docker, Go, and JavaScript.

Observability

Akvorado exposes metrics to provide visibility into the processing pipeline and help troubleshoot issues. These are available through Prometheus HTTP metrics endpoints, such as /api/v0/inlet/metrics. With the introduction of the outlet, many metrics moved. Some were also renamed (4c0b15) to match Prometheus best practices. Kafka consumer lag was added as a new metric (e3a778).
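
If you just want to eyeball the raw numbers without a full observability stack, you can query these endpoints directly. A minimal sketch, assuming the setup listens on 127.0.0.1:8080 as in the shipped Docker Compose files; the outlet path is assumed by analogy with the inlet one, and exact metric names may differ in your version:

# Dump the inlet counters, e.g. received packets per listener
curl -s http://127.0.0.1:8080/api/v0/inlet/metrics | grep _total

# Same idea for the new outlet service, filtered to Kafka-related metrics
curl -s http://127.0.0.1:8080/api/v0/outlet/metrics | grep -i kafka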

If you do not have your own observability stack, the Docker Compose setup shipped with Akvorado provides one. You can enable it by activating the profiles introduced for this purpose (529a8f).
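
As a rough sketch (the profile names are those described below; adapt them to your compose file), enabling the observability profiles could look like this:

# Start Akvorado together with the metrics and log stacks
docker compose --profile prometheus --profile loki up -d

# Or set the profiles once for the whole session
export COMPOSE_PROFILES=prometheus,loki,grafana
docker compose up -d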

The prometheus profile ships Prometheus to store metrics and Alloy to collect them (2b3c46, f81299, and 8eb7cd). Redis and Kafka metrics are collected through the exporter bundled with Alloy (560113). Other metrics are exposed using Prometheus metrics endpoints and are automatically fetched by Alloy with the help of some Docker labels, similar to what is done to configure Traefik. cAdvisor was also added (83d855) to provide some container-related metrics.

The loki profile ships Loki to store logs (45c684). While Alloy can collect and ship logs to Loki, its parsing abilities are limited: I could not find a way to preserve all metadata associated with structured logs produced by many applications, including Akvorado. Vector replaces Alloy (95e201) and features a domain-specific language, VRL, to transform logs. Annoyingly, Vector currently cannot retrieve Docker logs from before it was started.

Finally, the grafana profile ships Grafana, but the shipped dashboards are broken. This is planned for a future version.

Documentation

The Docker Compose setup provided by Akvorado makes it easy to get the web interface up and running quickly. However, Akvorado requires a few mandatory steps to be functional. It ships with comprehensive documentation, including a chapter about troubleshooting problems. I hoped this documentation would reduce the support burden. It is difficult to know if it works. Happy users rarely report their success, while some users open discussions asking for help without reading much of the documentation.

In this release, the documentation was significantly improved.

$ git diff --shortstat v1.11.5 -- console/data/docs
 10 files changed, 1873 insertions(+), 1203 deletions(-)

The documentation was updated (fc1028) to match Akvorado’s new architecture. The troubleshooting section was rewritten (17a272). Instructions on how to improve ClickHouse performance when upgrading from versions earlier than 1.10.0 were added (5f1e9a). An LLM proofread the entire content (06e3f3). Developer-focused documentation was also improved (548bbb, e41bae, and 871fc5).

From a usability perspective, table of contents sections are now collapsible (c142e5). Admonitions help draw user attention to important points (8ac894).

Admonition in Akvorado documentation to ask a user not to open an issue or start a discussion before reading the documentation
Example of use of admonitions in Akvorado's documentation

Continuous integration

This release includes efforts to speed up continuous integration on GitHub. Coverage and race tests run in parallel (6af216 and fa9e48). The Docker image builds during the tests but gets tagged only after they succeed (8b0dce).

GitHub workflow for CI with many jobs, some of them running in parallel, some not
GitHub workflow to test and build Akvorado

End-to-end tests (883e19) ensure the shipped Docker Compose setup works as expected. Hurl runs tests on various HTTP endpoints, particularly to verify metrics (42679b and 169fa9). For example:

## Test inlet has received NetFlow flows
GET http://127.0.0.1:8080/prometheus/api/v1/query
[Query]
query: sum(akvorado_inlet_flow_input_udp_packets_total{job="akvorado-inlet",listener=":2055"})
HTTP 200
[Captures]
inlet_receivedflows: jsonpath "$.data.result[0].value[1]" toInt
[Asserts]
variable "inlet_receivedflows" > 10

## Test inlet has sent them to Kafka
GET http://127.0.0.1:8080/prometheus/api/v1/query
[Query]
query: sum(akvorado_inlet_kafka_sent_messages_total{job="akvorado-inlet"})
HTTP 200
[Captures]
inlet_sentflows: jsonpath "$.data.result[0].value[1]" toInt
[Asserts]
variable "inlet_sentflows" >= {{ inlet_receivedflows }}

Docker

Akvorado ships with a comprehensive Docker Compose setup to help users get started quickly. It ensures a consistent deployment, eliminating many configuration-related issues. It also serves as a living documentation of the complete architecture.

This release brings some small enhancements around Docker:

Previously, many Docker images were pulled from the Bitnami Containers library. However, VMWare acquired Bitnami in 2019 and Broadcom acquired VMWare in 2023. As a result, Bitnami images were deprecated in less than a month. This was not really a surprise4. Previous versions of Akvorado had already started moving away from them. In this release, the Apache project’s Kafka image replaces the Bitnami one (1eb382). Thanks to the switch to KRaft mode, Zookeeper is no longer needed (0a2ea1, 8a49ca, and f65d20).

Akvorado’s Docker images were previously compiled with Nix. However, building AArch64 images on x86-64 is slow because it relies on QEMU userland emulation. The updated Dockerfile uses multi-stage and multi-platform builds: one stage builds the JavaScript part on the host platform, one stage builds the Go part cross-compiled on the host platform, and the final stage assembles the image on top of a slim distroless image (268e95 and d526ca).

# This is a simplified version
FROM --platform=$BUILDPLATFORM node:20-alpine AS build-js
RUN apk add --no-cache make
WORKDIR /build
COPY console/frontend console/frontend
COPY Makefile .
RUN make console/data/frontend

FROM --platform=$BUILDPLATFORM golang:alpine AS build-go
RUN apk add --no-cache make curl zip
WORKDIR /build
COPY . .
COPY --from=build-js /build/console/data/frontend console/data/frontend
RUN go mod download
RUN make all-indep
ARG TARGETOS TARGETARCH TARGETVARIANT VERSION
RUN make

FROM gcr.io/distroless/static:latest
COPY --from=build-go /build/bin/akvorado /usr/local/bin/akvorado
ENTRYPOINT [ "/usr/local/bin/akvorado" ]

When building for multiple platforms with --platform linux/amd64,linux/arm64,linux/arm/v7, the build steps until the highlighted line execute only once for all platforms. This significantly speeds up the build. 🚅
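
For reference, a cross-platform build of such a Dockerfile is kicked off with something like the following; this is a sketch, not the project’s actual release pipeline, and the builder name and image tag are placeholders:

# Create a builder capable of multi-platform images, then build
docker buildx create --use --name multiarch
docker buildx build \
  --platform linux/amd64,linux/arm64,linux/arm/v7 \
  --tag ghcr.io/akvorado/akvorado:dev \
  .    # add --push to publish the resulting manifest list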

Akvorado now ships Docker images for these platforms: linux/amd64, linux/amd64/v3, linux/arm64, and linux/arm/v7. When requesting ghcr.io/akvorado/akvorado, Docker selects the best image for the current CPU. On x86-64, there are two choices. If your CPU is recent enough, Docker downloads linux/amd64/v3. This version contains additional optimizations and should run faster than the linux/amd64 version. It would be interesting to ship an image for linux/arm64/v8.2, but Docker does not support the same mechanism for AArch64 yet (792808).

Go

This release includes many changes related to Go but not visible to the users.

Toolchain

In the past, Akvorado supported the two latest Go versions, preventing immediate use of the latest enhancements. The goal was to allow users of stable distributions to use Go versions shipped with their distribution to compile Akvorado. However, this became frustrating when interesting features, like go tool, were released. Akvorado 2.0 requires Go 1.25 (77306d) but can be compiled with older toolchains by automatically downloading a newer one (94fb1c).5 Users can still override GOTOOLCHAIN to revert this decision. The recommended toolchain updates weekly through CI to ensure we get the latest minor release (5b11ec). This change also simplifies updates to newer versions: only go.mod needs updating.
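
Concretely, with the default GOTOOLCHAIN=auto, the go command fetches the toolchain requested in go.mod when the installed one is too old; a quick sketch of both behaviours (plain go commands, nothing Akvorado-specific):

# Default: let go download the toolchain requested in go.mod if needed
GOTOOLCHAIN=auto go build ./...

# Force the locally installed toolchain instead; fails if it is older than go.mod requires
GOTOOLCHAIN=local go build ./...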

Thanks to this change, Akvorado now uses wg.Go() (77306d) and I have started converting some unit tests to the new testing/synctest package (bd787e, 7016d8, and 159085).

Testing

When testing equality, I use a helper function Diff() to display the differences when it fails:

got := input.Keys()
expected := []int{1, 2, 3}
if diff := helpers.Diff(got, expected); diff != "" {
    t.Fatalf("Keys() (-got, +want):\n%s", diff)
}

This function uses kylelemons/godebug. This package is no longer maintained and has some shortcomings: for example, by default, it does not compare struct private fields, which may cause unexpectedly successful tests. I replaced it with google/go-cmp, which is stricter and has better output (e2f1df).

Another package for Kafka

Another change is the switch from Sarama to franz-go to interact with Kafka (756e4a and 2d26c5). The main motivation for this change is to get a better concurrency model. Sarama heavily relies on channels and it is difficult to understand the lifecycle of an object handed to this package. franz-go uses a more modern approach with callbacks6 that is both more performant and easier to understand. It also ships with a package to spawn fake Kafka broker clusters, which is more convenient than the mocking functions provided by Sarama.

Improved routing table for BMP

To store its routing table, the BMP component used kentik/patricia, an implementation of a patricia tree focused on reducing garbage collection pressure. gaissmai/bart is a more recent alternative using an adaptation of Donald Knuth’s ART algorithm that promises better performance and delivers it: 90% faster lookups and 27% faster insertions (92ee2e and fdb65c).

Unlike kentik/patricia, gaissmai/bart does not help efficiently store values attached to each prefix. I adapted the same approach as kentik/patricia to store route lists for each prefix: store a 32-bit index for each prefix, and use it to build a 64-bit index for looking up routes in a map. This leverages Go’s efficient map structure.

gaissmai/bart also supports a lockless routing table version, but this is not simple because we would need to extend this to the map storing the routes and to the interning mechanism. I also attempted to use Go’s new unique package to replace the intern package included in Akvorado, but performance was worse.7

Miscellaneous

Previous versions of Akvorado were using a custom Protobuf encoder for performance and flexibility. With the introduction of the outlet service, Akvorado only needs a simple static schema, so this code was removed. However, it is possible to enhance performance with planetscale/vtprotobuf (e49a74, and 8b580f). Moreover, the dependency on protoc, a C++ program, was somewhat annoying. Therefore, Akvorado now uses buf, written in Go, to convert a Protobuf schema into Go code (f4c879).

Another small optimization to reduce the size of the Akvorado binary by 10 MB was to compress the static assets embedded in Akvorado in a ZIP file. It includes the ASN database, as well as the SVG images for the documentation. A small layer of code makes this change transparent (b1d638 and e69b91).

JavaScript

Recently, two large supply-chain attacks hit the JavaScript ecosystem: one affecting the popular packages chalk and debug and another impacting the popular package @ctrl/tinycolor. These attacks also exist in other ecosystems, but JavaScript is a prime target due to heavy use of small third-party dependencies. The previous version of Akvorado relied on 653 dependencies.

npm-run-all was removed (3424e8, 132 dependencies). patch-package was removed (625805 and e85ff0, 69 dependencies) by moving missing TypeScript definitions to env.d.ts. eslint was replaced with oxlint, a linter written in Rust (97fd8c, 125 dependencies, including the plugins).

I switched from npm to Pnpm, an alternative package manager (fce383). Pnpm does not run install scripts by default8 and prevents installing packages that are too recent. It is also significantly faster.9 Node.js does not ship Pnpm but it ships Corepack, which allows us to use Pnpm without installing it. Pnpm can also list licenses used by each dependency, removing the need for license-compliance (a35ca8, 42 dependencies).
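
For the curious, a sketch of the Corepack-based workflow; the exact flags and subcommands may vary between pnpm versions:

# Activate the pnpm version pinned by the project, without a global install
corepack enable pnpm

# Install dependencies exactly as locked
pnpm install --frozen-lockfile

# List the licenses of all dependencies (what used to require license-compliance)
pnpm licenses list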

For additional speed improvements, beyond switching to Pnpm and Oxlint, Vite was replaced with its faster Rolldown version (463827).

After these changes, Akvorado “only” pulls 225 dependencies. 😱

Next steps

I would like to land three features in the next version of Akvorado:

  • Add Grafana dashboards to complete the observability stack. See issue #1906 for details.

  • Integrate OVH’s Grafana plugin by providing a stable API for such integrations. Akvorado’s web console would still be useful for browsing results, but if you want to build and share dashboards, you should switch to Grafana. See issue #1895.

  • Move some work currently done in ClickHouse (custom dictionaries, GeoIP and IP enrichment) back into the outlet service. This should give more flexibility for adding features like the one requested in issue #1030. See issue #2006.


I started working on splitting the inlet into two parts more than one year ago. I found more motivation in recent months, partly thanks to Claude Code, which I used as a rubber duck. Almost none of the produced code was kept:10 it is like an intern who does not learn. 🦆


  1. Many attempts were made to make the BMP component both performant and not blocking. See for example PR #254, PR #255, and PR #278. Despite these efforts, this component remained problematic for most users. See issue #1461 as an example.

  2. Some features have been pushed to ClickHouse to avoid the processing cost in the inlet. See for example PR #1059.

  3. This is the biggest commit:

    $ git show --shortstat ac68c5970e2c | tail -1
    231 files changed, 6474 insertions(+), 3877 deletions(-)
    


  4. Broadcom is known for its user-hostile moves. Look at what happened with VMWare.

  5. As a Debian developer, I dislike these mechanisms that circumvent the distribution package manager. The final straw came when Go 1.25 spent one month in the Debian NEW queue, an arbitrary mechanism I don’t like at all.

  6. In the early years of Go, channels were heavily promoted. Sarama was designed during this period. A few years later, a more nuanced approach emerged. See notably “Go channels are bad and you should feel bad.”

  7. This should be investigated further, but my theory is that the intern package uses 32-bit integers, while unique uses 64-bit pointers. See commit 74e5ac.

  8. This is also possible with npm. See commit dab2f7.

  9. An even faster alternative is Bun, but it is less available.

  10. The exceptions are part of the code for the admonition blocks, the code for collapsing the table of contents, and part of the documentation.

22 September, 2025 08:12AM by Vincent Bernat

September 21, 2025

hackergotchi for Gunnar Wolf

Gunnar Wolf

We, Programmers: A Chronicle of Coders from Ada to AI

This post is an unpublished review of We, Programmers: A Chronicle of Coders from Ada to AI.

When this book was presented as available to review, I jumped at it: who does not love reading a nice bit of computing history, as told by a well-known author (affectionately known as “Uncle Bob”), one who has been immersed in computing since forever… What is not to like there?

Reading on, the book does not disappoint. Much to the contrary, it digs into details absent from most computer history books which, being an Operating Systems and Computer Architecture geek, I absolutely enjoyed. But let me first address the book’s organization.

The book is split in four parts. Part 1, “Setting the stage”, is a short introduction, answering the question “Who are we?” (addressing “we” as the programmers, of course), describing the fascination most of us have felt when realizing the computer was there to obey us, to do our bidding, and that we could absolutely control it.

Part 2, “The Giants”, talks about the Giants our computing world owes to, and on whose shoulders we stand. It digs into their personal lives and technical contributions (as well as the hoops they had to jump through to get their work done) with a level of detail I had never seen before. Nine chapters cover “Giants” ranging chronologically from Charles Babbage and Ada Lovelace to Ken Thompson, Dennis Ritchie and Brian Kernighan (of course, several giants who did their contribution together are grouped in the same chapter). This is the part with the most historic technical details, often overlooked elsewhere: What was the word size in the first computers, before even the concept of a “byte” had been brought into regular use? What was the register structure of early CPUs, and why did it lead to requiring self-modifying code to be able to execute loops?

Then, just as Unix and C get invented, Part 3 skips to computer history as seen through the eyes of “Uncle Bob”. I must admit the change of rhythm initially startled me, but it went through quite well. The focus is no longer on the Giants of the field, but on one particular person who… casts a very long shadow. The narrative follows the author’s career, from being a boy given access to electronics by his father’s line of work, until he becomes a computing industry leader in the early 2000s with Extreme Programming and one of the first producers of training material in video format, something that today might be recognized as an “influencer”. This first-person narrative reaches the year 2023.

But the book is not just a historical overview of the computing world, of course. “Uncle Bob” has a final section with his thoughts on the future of computing. This being a book for programmers, it is fitting to start by talking about changes in programming languages we should expect to see towards the future and where such changes are likely to take place. Second, the unavoidable topic of Artificial Intelligence is presented: what is it and what does it spell for computing, and in particular, for programming? Third, what does the future of hardware development look like? Fourth, much to my surprise, how is the World Wide Web likely to evolve, and finally, what is the future of programming, and of programmers?

At 480 pages, the book is a volume to be taken seriously. But the space is very well used. The material is easy to read, often funny, but always informative. If you enjoy computer history and understanding the little details of the implementations, it might very well be the book you want.

21 September, 2025 08:07PM

hackergotchi for Joey Hess

Joey Hess

cheap DIY solar fence design

A year ago I installed a 4 kilowatt solar fence. I'm revisiting it this Sun Day to share the design, now that I have proved it out.

The solar fence and some other ground and pole mount solar panels, seen through leaves.

Solar fencing manufacturers have some good simple designs, but it's hard to buy for a small installation. They are selling to utility scale solar mostly. And those are installed by driving metal beams into the ground, which requires heavy machinery.

Since I have experience with Ironridge rails for roof mount solar, I decided to adapt that system for a vertical mount. Which is something it was not designed for. I combined the Ironridge hardware with regular parts from the hardware store.

The cost of mounting solar panels nowadays is often higher than the cost of the panels. I hoped to match the cost, and I nearly did. The solar panels cost $100 each, and the fence cost $110 per solar panel. This fence was significantly cheaper than conventional ground mount arrays that I considered as alternatives, and made a better use of a difficult hillside location.

I used 7 foot long Ironridge XR-10 rails, which fit 2 solar panels per rail. (Longer rails would need a center post anyway, and the 7 foot long rails have cheaper shipping, since they do not need to be shipped freight.)

For the fence posts, I used regular 4x4" treated posts. 12 foot long, set in 3 foot deep post holes, with 3x 50 lb bags of concrete per hole and 6 inches of gravel on the bottom.

detail of how the rails are mounted to the posts, and the panels to the rails

To connect the Ironridge rails to the fence posts, I used the Ironridge LFT-03-M1 slotted L-foot bracket. Screwed into the post with a 5/8” x 3 inch hot-dipped galvanized lag screw. Since a treated post can react badly with an aluminum bracket, there needs to be some flashing between the post and bracket. I used Shurtape PW-100 tape for that. I see no sign of corrosion after 1 year.

The rest of the Ironridge system is a T-bolt that connects the rail to the L-foot (part BHW-SQ-02-A1), and Ironridge solar panel fasteners (UFO-CL-01-A1 and UFO-STP-40MM-M1). Also XR-10 end caps and wire clips.

Since the Ironridge hardware is not designed to hold a solar panel at a 90 degree angle, I was concerned that the panels might slide downward over time. To help prevent that, I added some additional support brackets under the bottom of the panels. So far, that does not seem to have been a problem though.

I installed Aptos 370 watt solar panels on the fence. They are bifacial, and while the posts block the back partially, there is still bifacial gain on cloudy days. I left enough space under the solar panels to be able to run a push mower under them.

Me standing in front of the solar fence at end of construction

I put pairs of posts next to one another, so each 7 foot segment of fence had its own 2 posts. This is the least elegant part of this design, but fitting 2 brackets next to one another on a single post isn't feasible. I bolted the pairs of posts together with some spacers. A side benefit of doing it this way is that treated lumber can warp as it dries, and this prevented much twisting of the posts.

Using separate posts for each segment also means that the fence can traverse a hill easily. And it does not need to be perfectly straight. In fact, my fence has a 30 degree bend in the middle. This means it has both south facing and south-west facing panels, so can catch the light for longer during the day.

After building the fence, I noticed there was a slight bit of sway at the top, since 9 feet of wooden post is not entirely rigid. My worry was that a gusty wind could rattle the solar panels. While I did not actually observe that happening, I added some diagonal back bracing for peace of mind.

view of rear upper corner of solar fence, showing back bracing connection

Inspecting the fence today, I find no problems after the first year. I hope it will last 30 years, with the lifespan of the treated lumber being the likely determining factor.

As part of my larger (and still ongoing) ground mount solar install, the solar fence has consistently provided great power. The vertical orientation works well at latitude 36. It also turned out that the back of the fence was useful to hang conduit and wiring and solar equipment, and so it turned into the electrical backbone of my whole solar field. But that's another story..

solar fence parts list

quantity cost per unit description
10 $27.89 7 foot Ironridge XR-10 rail
12 $20.18 12 foot treated 4x4
30 $4.86 Ironridge UFO-CL-01-A1
20 $0.87 Ironridge UFO-STP-40MM-M1
1 $12.62 Ironridge XR-10 end caps (20 pack)
20 $2.63 Ironridge LFT-03-M1
20 $1.69 Ironridge BHW-SQ-02-A1
22 $2.65 5/8” x 3 inch hot-dipped galvanized lag screw
10 $0.50 6” gravel per post
30 $6.91 50 lb bags of quickcrete
1 $15.00 Shurtape PW-100 Corrosion Protection Pipe Wrap Tape
N/A $30 other bolts and hardware (approximate)

$1100 total

(Does not include cost of panels, wiring, or electrical hardware.)

21 September, 2025 04:15PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

Lavalamps (things that spark joy)

photograph of a Mathmos Telstar rocket lava lamp with red wax and purple water

Life can sometimes be tricky, and it's useful to know that there are some simple things to take pleasure from. Amongst them for me are lava lamps.

At some point in the late 90s, my brother and I somehow had 6 lavalamps between us. I'm not sure what happened to them (and the gallery of photos I had of them has long disappeared from my site.)

More recently, I stumbled across a Mathmos "Telstar" rocket-shaped lava lamp in a charity shop: silver metal; purple water; red wax.

It now adorns my study desk.

21 September, 2025 12:21PM

hackergotchi for Bits from Debian

Bits from Debian

Bits From Argentina - August 2025

DebConf26 is already in the air in Argentina. Organizing DebConf26 gives us the opportunity to talk about Debian in our country again. This is not the first time that Debian has come here: Argentina previously hosted DebConf 8 in Mar del Plata.

In August, Nattie Mayer-Hutchings and Stefano Rivera from DebConf Committee visited the venue where the next DebConf will take place. They came to Argentina in order to see what it is like to travel from Buenos Aires to Santa Fe (the venue of the next DebConf). In addition, they were able to observe the layout and size of the classrooms and halls, as well as the infrastructure available at the venue, which will be useful for the Video Team.

But before going to Santa Fe, on the August 27th, we organized a meetup in Buenos Aires at GCoop, where we hosted some talks:

GCoop Talks

On August 28th, we had the opportunity to get to know the Venue. We walked around the city and, obviously, sampled some of the beers from Santa Fe.

On August 29th we met with representatives of the University and local government who were all very supportive. We are very grateful to them for opening their doors to DebConf.

UNL Meeting

In the afternoon we met some of the local free software community at an event we held in ATE Santa Fe. The event included several talks:

  • ¿Qué es Debian? - Pablo (sultanovich) / Emmanuel Arias
  • Ciberrestauradores: Gestores de basura electrónica - Programa RAEES Acutis
  • Debian and DebConf (Stefano Rivera/Nattie Mayer-Hutchings)

ATE Talks

Thanks to Debian Argentina, and all the people who will make DebConf26 possible.

Thanks to Nattie Mayer-Hutchings and Stefano Rivera for reviewing an earlier version of this article.

21 September, 2025 11:05AM by Emmanuel Arias

September 20, 2025

hackergotchi for Thomas Goirand

Thomas Goirand

Real-Time OpenStack Packaging Status with Event-Driven Automation

tl;dr: https://osbpo.debian.net/deb-status is now updated in real time and much better than it used to be, helping the OpenStack packaging team be way more efficient.

How it used to be

For years, the Debian OpenStack team has relied on automated tools to track the status of OpenStack packages across Debian releases. Our goal has always been simple: transparency, efficiency, accuracy.

We used to use a tool called os-version-checker, written by Michal Arbet, which generated a static status page at https://osbpo.debian.net/deb-status. It was functional and served us well — but it had limitations:

  • It ran on a cron job, not on demand
  • It processed all OpenStack releases at once, making it slow
  • The rsync from Jenkins hosts to osbpo.debian.net was also cron-driven
  • No immediate feedback after a package build

This meant that when a developer pushed a new package to salsa (the Debian GitLab instance) in the team’s repository, the following would happen:

  • Jenkins would build the backport
  • Store it in a local repository
  • Wait up to 30 minutes (or more) for the cron job to run rsync + status update
  • Only then would the status page reflect the new version

For maintainers actively working on a new release, this delay was frustrating. You’d fix a bug, push, build — and still see your package marked as “missing” or “out of date” for minutes. You had no real-time feedback. This was also an annoyance for testing, because when fixing a bug, I often had to trigger the rsync manually in order to not wait for it, so I could do my tests. Now, osbpo is always up-to-date a few seconds after the build of the package.

The New Way: Event-Driven, Real-Time Updates

We’ve rebuilt the system from the ground up to be fast, responsive, and event-driven. Now, the workflow is:

  • Developer git push → triggers Jenkins
  • Jenkins builds the package → publishes to local repo
  • Jenkins immediately triggers a webhook on osbpo.debian.net

The webhook on osbpo does:

  • rsyncs the new package to the central Debian repo
  • Pulls the latest OpenStack releases from git and uses its YAML data (instead of parsing the release HTML pages)
  • Regenerates the status page, comparing what upstream released and what’s in Debian

No more cron. No more waiting…

How it works

The central osbpo.debian.net server runs:

  • webhook — to receive secure, HMAC-verified triggers that it processes in an async way
  • Apache — to serve the status pages and the Debian OpenStack repositories
  • Custom scripts — to rsync packages, validate, and generate reports

Jenkins instances are configured to curl the webhook on successful build. The status page is generated by openstack-debian-release-manager, a new tool I’ve packaged and deployed. The dashboard uses AJAX to load content dynamically (like when browsing from one release to another), with sorting, metadata, and real-time auto-refresh every 10 seconds.
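
To illustrate the idea, here is a minimal sketch of what such a post-build trigger could look like. The endpoint path, header name, payload fields, and secret handling are illustrative assumptions, not the actual osbpo configuration:

# Hypothetical post-build step on a Jenkins host
PAYLOAD='{"release": "epoxy", "job": "openstack-package-build"}'
SIG=$(printf '%s' "$PAYLOAD" | openssl dgst -sha256 -hmac "$WEBHOOK_SECRET" | awk '{print $2}')
curl -fsS -X POST \
     -H "Content-Type: application/json" \
     -H "X-Signature: sha256=$SIG" \
     -d "$PAYLOAD" \
     https://osbpo.debian.net/hooks/package-built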

openstack-debian-release-manager is easy to deploy and configure, and will do most (if not all) of the needed configuration. Uploading it to Debian is probably not needed, and a bit overkill, so I believe I’ll just keep it in Salsa for the moment, unless there’s a way to make it more generic so it can help someone else (another team?) in Debian.

Room for improvement

There’s still things I want to add. Namely:

  • Add status for Debian stable (i.e. without the osbpo.debian.net add-on repository), which we used to have with os-version-checker.
  • Add a per-release config file option to mask projects that are not packaged, with per-OpenStack-release granularity

Special thanks to Michal Arbet for the original os-version-checker that served me for years, helping me to never forget a missing OpenStack package release.

20 September, 2025 07:59PM by Goirand Thomas

September 18, 2025

hackergotchi for Gunnar Wolf

Gunnar Wolf

Still use Twitter/X? Consider dropping it...

Many people who were once enthusiastic Twitter users have dropped it as a direct or indirect effect of its ownership change and the policy changes that followed. As Twitter/X grows ever more irrelevant, it becomes less interesting and enticing for more and more people… But also, its current core users (mostly hate-apologists of the right-wing mindset that finds conspiracy theories everywhere) are becoming more commonplace, and by sheer probability (if not by algorithmic bias), it becomes ever more likely that a given piece of content will be linked to what its authors would classify as crap.

So, there has been in effect an X exodus. This has been reported by media outlets as important as Reuters or The Guardian, by research institutes such as Berkeley, and even by media that, no matter how hard you push, cannot be identified as the radical left Mr. Trump is so happy to blame for everything, such as Forbes…

Today I read a short note in a magazine I very much enjoy, Communications of the ACM, where SIGDOC (the ACM’s Special Interest Group on Design of Communication) is officially closing their X account. The reasoning is crystal clear. They have the mission to create and study User Experience (UX) implementations and report on it, «focused on making communication clearer and more human centered». That is no longer, for many reasons, a goal that can be furthered by the means of an X account.

(BTW… how many people are actually angry that Mr. Musk took the old X11 logo and made it his? I am sure it is now protected under too many layers of legalese, even though I have been aware of it for at least 30 years…)

18 September, 2025 07:57PM

John Goerzen

Running an Accurate 80×25 DOS-Style Console on Modern Linux Is Possible After All

Here, in classic Goerzen deep dive fashion, is more information than you knew you wanted about a topic you’ve probably never thought of. I found it pretty interesting, because it took me down a rabbit hole of subsystems I’ve never worked with much and a mishmash of 1980s and 2020s tech.

I had previously tried and failed to get an actual 80x25 Linux console, but I’ve since figured it out!

This post is about the Linux text console – not X or Wayland. We’re going to get the console right without using those systems. These instructions are for Debian trixie, but should be broadly applicable elsewhere also. The end result can look like this:

Photo of a color VGA monitor displaying a BBS login screen

(That’s a Wifi Retromodem that I got at VCFMW last year in the Hayes modem case)

What’s a pixel?

How would you define a “pixel” these days? Probably something like “a uniquely-addressable square dot in a two-dimensional grid”.

In the world of VGA and CRTs, that was just a logical abstraction. We got an API centered around that because it was convenient. But, down the VGA cable and on the device, that’s not what a pixel was.

A pixel, back then, was a time interval. On a multisync monitor, which were common except in the very early days of VGA, the timings could be adjusted which produced logical pixels of different sizes. Those screens often had a maximum resolution but not necessarily a “native resolution” in the sense that an LCD panel does. Different timings produced different-sized pixels with equal clarity (or, on cheaper monitors, equal fuzziness).

A side effect of this was that pixels need not be square. And, in fact, in the standard DOS VGA 80x25 text mode, they weren’t.

You might be seeing why DVI, DisplayPort, and HDMI replaced VGA for LCD monitors: with a VGA cable, you did a pixel-to-analog-timings conversion, then the display did a timings-to-pixels conversion, and this process could be a bit lossy. (Hence why you sometimes needed to fill the screen with an image and push the “center” button on those older LCD screens)

(Note to the pedantically-inclined: yes I am aware that I have simplified several things here; for instance, a color LCD pixel is made up of approximately 3 sub-dots of varying colors, and that things like color eInk displays have two pixel grids with different sizes of pixels layered atop each other, and printers are another confusing thing altogether, and and and…. MOST PEOPLE THINK OF A PIXEL AS A DOT THESE DAYS, OK?)

What was DOS text mode?

We think of this as the “standard” display: 80 columns wide and 25 rows tall. 80x25. By the time Linux came along, the standard Linux console was VGA text mode – something like the 4th incarnation of text modes on PCs (after CGA, MDA, and EGA). VGA also supported certain other sizes of characters giving certain other text dimensions, but if I cover all of those, this will explode into a ridiculously more massive page than it already is.

So to display text on an 80x25 DOS VGA system, ultimately characters and attributes were written into the text buffer in memory. The VGA system then rendered it to the display as a 720x400 image (at 70Hz) with non-square pixels such that the result was approximately a 4:3 aspect ratio.

The font used for this rendering was a bitmapped one using 8x16 cells. You might do some math here and point out that 8 * 80 is only 640, and you’d be correct. The fonts were 8x16 but the rendered cells were 9x16. The extra pixel was normally used for spacing between characters. However, in line graphics mode, characters 0xC0 through 0xDF repeated the 8th column in the position of the 9th, allowing the continuous line-drawing characters we’re used to from TUIs.

Problems rendering DOS fonts on modern systems

By now, you’re probably seeing some of the issues we have rendering DOS screens on more modern systems. These aren’t new at all; I remember some of these from back in the days when I ran OS/2, and I think also saw them on various terminals and consoles in OS/2 and Windows.

Some issues you’d encounter would be:

  • Incorrect aspect ratio caused by using the original font and rendering it using 1:1 square pixels (resulting in a squashed appearance)
  • Incorrect aspect ratio for ANOTHER reason, caused by failing to render column 9, resulting in text that is overall too narrow
  • Characters appearing to be touching each other when they shouldn’t (failing to render column 9; looking at you, dosbox)
  • Gaps between line drawing characters that should be continuous, caused by rendering column 9 as empty space in all cases

Character set issues

DOS was around long before Unicode was. In the DOS world, there were codepages that selected the glyphs for roughly the high half of the 256 possible characters. CP437 was the standard for the USA; others existed for other locations that needed different characters. On Unix, the USA pre-Unicode standard was Latin-1. Same concept, but with different character mappings.

Nowadays, just about everything is based on UTF-8. So, we need some way to map our CP437 glyphs into Unicode space. If we are displaying DOS-based content, we’ll also need a way to map CP437 characters to Unicode for display later, and we need these maps to match so that everything comes out right. Whew.

So, let’s get on with setting this up!

Selecting the proper video mode

As explained in my previous post, proper hardware support for DOS text mode is limited to x86 machines that do not use UEFI. Non-x86 machines, or x86 machines with UEFI, simply do not contain the necessary support for it. As these are now standard, most of the time, the text console you see on Linux is actually the kernel driving the video hardware in graphics mode, and doing the text rendering in software.

That’s all well and good, but it makes it quite difficult to actually get an 80x25 console.

First, we need to be running at 720x400. This is where I ran into difficulty last time. I realized that my laptop’s LCD didn’t advertise any video modes other than its own native resolution. However, almost all external monitors will, and 720x400@70 is a standard VGA mode from way back, so it should be well-supported.

You need to find the Linux device name for your device. You can look at the possible devices with ls -l /sys/class/drm. If you also have a GUI, xrandr may help too. But in any case, each directory under /sys/class/drm has a file named modes, and if you cat them all, you will eventually come across one with a bunch of modes defined. Drop the leading “card0” or whatever from the directory name, and that’s your device. (Verify that 720x400 is in modes while you’re at it.)
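
A quick way to do both at once is to dump every connector's modes file; this is just a sketch, and the card numbering and connector names will differ on your machine:

# Print each connector and the modes it advertises
for f in /sys/class/drm/card*/modes; do
    echo "== $f"
    cat "$f"
done

# List only the connectors that advertise the mode we want
grep -l '720x400' /sys/class/drm/card*/modes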

Now, you’re going to edit /etc/default/grub and add something like this to GRUB_CMDLINE_LINUX_DEFAULT:

video=DP-1:720x400@70

Of course, replace DP-1 with whatever your device is.

Now you can run update-grub and reboot. You should have a 720x400 display.

At first, I thought I had succeeded by using Linux’s built-in VGA font with that mode. But it looked too tall. After noticing that repeated 0s were touching, I got suspicious about the missing 9th column in the cells. stty -a showed that my screen was 90x25, which is exactly what it would show if I was using 8x16 instead of 9x16 cells. Sooo…. I need to prepare a 9x16 font.

Preparing a font

Here’s where it gets complicated.

I’ll give you the simple version and the hard mode.

The simple mode is this: Download https://www.complete.org/downloads/CP437-VGA.psf.gz and stick it in /usr/local/etc, then skip to the “Activating the font” section below.

The font assembled here is based on the Ultimate Oldschool PC Font Pack v2.2, which is (c) 2016-2020 VileR and licensed under Creative Commons Attribution-ShareAlike 4.0 International License. My psf file is derived from this using the instructions below.

Building it yourself

First, install some necessary software: apt-get install fontforge bdf2psf

Start by going to the Oldschool PC Font Pack Download page. Download oldschool_pc_font_pack_v2.2_FULL.zip and unpack it.

The file we’re interested in is otb - Bm (linux bitmap)/Bm437_IBM_VGA_9x16.otb. Open it in fontforge by running fontforge Bm437_IBM_VGA_9x16.otb. When it asks if you will load the bitmap fonts, hit select all, then yes. Go to File -> generate fonts. Save in a BDF, no need for outlines, and use “guess” for resolution.

Now you have a file such as Bm437_IBM_VGA_9x16-16.bdf. Excellent.

Now we need to generate a Unicode map file. We will make sure this matches the system’s by enumerating every character from 0x00 to 0xFF, converting it from CP437 to Unicode, and writing the appropriate map.

Here’s a Python script to do that:

for i in range(0, 256):
    cp437b = b'%c' % i
    uni = ord(cp437b.decode('cp437'))
    print(f"U+{uni:04x}")

Save that file as genmap.py and run python3 genmap.py > cp437-uni.

Now, we’re ready to build the psf file:

bdf2psf --fb Bm437_IBM_VGA_9x16-16.bdf \
  /dev/null cp437-uni 256 CP437-VGA.psf

By convention, we normally store these files gzipped, so gzip CP437-VGA.psf.

You can test it on the console with setfont CP437-VGA.psf.gz.

Now copy this file into /usr/local/etc.

Activating the font

Now, edit /etc/default/console-setup. It should look like this:

# CONFIGURATION FILE FOR SETUPCON

# Consult the console-setup(5) manual page.

ACTIVE_CONSOLES="/dev/tty[1-6]"

CHARMAP="UTF-8"

CODESET="Lat15"
FONTFACE="VGA"
FONTSIZE="8x16"
FONT=/usr/local/etc/CP437-VGA.psf.gz

VIDEOMODE=

# The following is an example how to use a braille font
# FONT='lat9w-08.psf.gz brl-8x8.psf'

At this point, you should be able to reboot. You should have a proper 80x25 display! Log in and run stty -a to verify it is indeed 80x25.

Using and testing CP437

Part of the point of CP437 is to be able to access BBSs, ANSI art, and similar.

Now, remember, the Linux console is still in UTF-8 mode, so we have to translate CP437 to UTF-8, then let our font map translate it back to CP437. A weird trip, but it works.

Let’s test it using the Textfiles ANSI art collection. In the artworks section, I randomly grabbed a file near the top: borgman.ans. Download that, and display with:

clear; iconv -f CP437 -t UTF-8 < borgman.ans

You should see something similar to – but actually more accurate than – the textfiles PNG rendering of it, which you’ll note has an incorrect aspect ratio and some rendering issues. I spot-checked with a few others and they seemed to look good. belinda.ans in particular tries quite a few characters and should give you a good sense if it is working.

Use with interactive programs

That’s all well and good, but you’re probably going to want to actually use this with some interactive program that expects CP437. Maybe Minicom, Kermit, or even just telnet?

For this, you’ll want to apt-get install luit. luit maps CP437 (or any other encoding) to UTF-8 for display, and then of course the Linux console maps UTF-8 back to the CP437 font.

Here’s a way you can repeat the earlier experiment using luit to run the cat program:

clear; luit -encoding CP437 cat borgman.ans

You can run any command under luit. You can even run luit -encoding CP437 bash if you like. If you do this, it is probably a good idea to follow my instructions on generating locales on my post on serial terminals, and then within luit, set LANG=en_us.IBM437. But note especially that you can run programs like minicom and others for accessing BBSs under luit.

Final words

This gave you a nice DOS-type console. Although it doesn’t have glyphs for many codepoints, it does run in UTF-8 mode and therefore is compatible with modern software.

You can achieve greater compatibility with more UTF-8 codepoints with the DOS font, at the expense of accuracy of character rendering (especially for the double-line drawing characters) by using /usr/share/bdf2psf/standard.equivalents instead of /dev/null in the bdf2psf command.

Or you could go for another challenge, such as using the DEC vt-series fonts for coverage of ISO-8859-1. But just using fonts extracted from DEC ROM won’t work properly, because DEC terminals had even more strangeness going on than DOS fonts.

18 September, 2025 12:58PM by John Goerzen

September 17, 2025

Installing and Using Debian With My Decades-Old Genuine DEC vt510 Serial Terminal

Six years ago, I was inspired to buy a DEC serial terminal. Since then, my collection has grown to include several DEC models, an IBM 3151, a Wyse WY-55, a Televideo 990, and a few others.

When you are running a terminal program on Linux or MacOS, what you are really running is a terminal emulator. In almost all cases, the terminal emulator is emulating one of the DEC terminals in the vt100 through vt520 line, which themselves use a command set based on an ANSI standard.

In short, you spend all day using a program designed to pretend to be the exact kind of physical machine I’m using for this experiment!

I have long used my terminals connected to a Raspberry Pi 4, but due to the difficulty of entering a root filesystem encryption password using a serial console on a Raspberry Pi, I am switching to an x86 Mini PC (with a N100 CPU).

While I have used a terminal with the Pi, I’ve never before used it as a serial console all the way from early boot, and I have never installed Debian using the terminal to run the installer. A serial terminal gives you a login prompt. A serial console gives you access to kernel messages, the initrd environment, and sometimes even the bootloader.

This might be fun, I thought.

I selected one of my vt510 terminals for this. It is one of my newer ones, having been built in 1993. But it has a key feature: I can remap Ctrl to be at the caps lock position, something I do on every other system I use anyhow. I could have easily selected an older one from the 1980s.

A DEC vt510 terminal showing the Debian installer

Kernel configuration

To enable a serial console for Linux, you need to pass a parameter on the kernel command line. See the kernel documentation for more. I very frequently see instructions that are incomplete; they particularly omit flow control, which is most definitely needed for these real serial terminals.

I run my terminal at 57600 bps, so the parameter I need is console=ttyS0,57600n8r. The “r” means to use hardware flow control (ttyS0 corresponds to the first serial port on the system; use ttyS1 or something else as appropriate for your situation). While booting the Debian installer, according to Debian’s instructions, it may be useful to also add TERM=vt102 (the installer doesn’t support the vt510 terminal type directly). The TERM parameter should not be specified on a running system after installation.

Booting the Debian installer

When you start the Debian installer, to get it into serial mode, you have a couple of options:

  1. You can use a traditional display and keyboard just long enough to input the kernel parameters described above
  2. You can edit the bootloader configuration on the installer’s filesystem prior to booting from it

Option 1 is pretty easy. Option 2 is hard mode, but not that bad.

On x86, the Debian installer boots in at least two different ways: it uses GRUB if you’re booting under UEFI (which is most systems these days), or ISOLINUX if you are booting from the BIOS.

If using GRUB, the file to edit on the installer image is boot/grub/grub.cfg.

Near the top, add these lines:

serial --unit=0 --speed=57600 --word=8 --parity=no --stop=1
terminal_input console serial
terminal_output console serial

Unit 0 corresponds to ttyS0 as above.

GRUB’s serial command does not support flow control. If your terminal gets corrupted during the GRUB stage, you may need to configure it to a slower speed.

Then, find the “linux” line under the “Install” menuentry. Edit it to insert console=ttyS0,57600n8r TERM=vt102 right after the vga=788.

Save, unmount, and boot. You should see the GRUB screen displayed on your serial terminal. Select the Install option and the installer begins.

If you are using BIOS boot, I’m sure you can do something similar with the files in the isolinux directory, but haven’t researched it.

Now, you can install Debian like usual!

Configuring the System

I was pleasantly surprised to find that Debian’s installer took care of many, but not all, of the things I want to do in order to make the system work nicely with a serial terminal. You can perform these steps from a chroot under the installer environment before a reboot, or later in the running system.

First, while Debian does set up a getty (the program that displays the login prompt) on the serial console by default, it doesn’t enable hardware flow control. So let’s do that.

Configuring the System: agetty with systemd

Run systemctl edit serial-getty@ttyS0.service. This opens an editor that lets you customize the systemd configuration for a given service without having to edit the file directly. All you really need to do is modify the agetty command, so we just override it. At the top, in the designated area, write:

[Service]
ExecStart=
ExecStart=-/sbin/agetty --wait-cr -8 -h -L=always %I 57600 vt510

The empty ExecStart= line is necessary to tell systemd to remove the existing ExecStart command (otherwise, it will logically contain two ExecStart lines, which is an error).

These arguments say:

  • --wait-cr means to wait for the user to press Return at the terminal before attempting to display the login prompt
  • -8 tells it to assume 8-bit mode on the serial line
  • -h enables hardware flow control
  • -L=always enables local line mode, disabling monitoring of modem control lines
  • %I substitutes the name of the port from systemd
  • 57600 gives the desired speed, and vt510 gives the desired setting for the TERM environment variable

The systemd documentation refers to this page about serial consoles, which gives more background. However, I think it is better to use the systemctl edit method described here, rather than just copying the config file, since this lets things like new configurations with new Debian versions take effect.

Configuring the System: Kernel and GRUB

Your next stop is the /etc/default/grub file. Debian’s installer automatically makes some changes here. There are three lines you want to change. First, near the top, edit GRUB_CMDLINE_LINUX_DEFAULT and add console=tty0 console=ttyS0,57600n8r. By specifying console twice, you allow output to go both to the standard display and to the serial console. By specifying the serial console last, you make it be the preferred one for things like entering the root filesystem password.
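
The resulting line might look something like this (a sketch; keep whatever other options, such as quiet, your installation already had):

GRUB_CMDLINE_LINUX_DEFAULT="quiet console=tty0 console=ttyS0,57600n8r"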

Next, towards the bottom, make sure these two lines look like this:

GRUB_TERMINAL="console serial"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=57600 --word=8 --parity=no --stop=1"

Finally, near the top, you may want to raise the GRUB_TIMEOUT to somewhere around 10 to 20 seconds since things may be a bit slower than you’re used to.

Save the file and run update-grub.

Now, GRUB will display on both your standard display and the serial console. You can edit the boot command from either. If you have a VGA or HDMI monitor attached, for instance, and need to not use the serial console, you can just edit the Linux command line in GRUB and remove the reference to ttyS0 for one boot. Easy!

That’s it. You now have a system that is fully operational from a serial terminal.

My original article from 2019 has some additional hints, including on how to convert from UTF-8 for these terminals.

Update 2025-09-17: It is also useful to set up proper locales. To do this, first edit /etc/locale.gen. Make sure to add, or uncomment:

en_US ISO-8859-1
en_US.IBM437 IBM437
en_US.UTF-8 UTF-8 

Then run locale-gen. Normally, your LANG will be set to en_us.UTF-8, which will select the appropriate encoding. Plain en_US will select ISO-8859-1, which you need for the vt510. Then, add something like this to your ~/.bashrc:

if [ `tty` = "/dev/ttyS0" -o "$TERM" = "vt510" ]; then
        stty -iutf8
        # might add ixon ixoff
        export LANG=en_US
        export MANOPT="-E ascii"
        stty rows 25
fi

if [ "$TERM" = "screen" -o "$TERM" = "vt100" ]; then
    export LANG=en_US.utf8
fi

Finally, in my ~/.screenrc, I have this. It lets screen convert between UTF-8 and ISO-8859-1:

defencoding UTF-8
startup_message off
vbell off
termcapinfo * XC=B%,�-,
maptimeout 5
bindkey -k ku stuff ^[OA
bindkey -k kd stuff ^[OB
bindkey -k kr stuff ^[OC
bindkey -k kl stuff ^[OD

17 September, 2025 12:49PM by John Goerzen

Sven Hoexter

HaProxy: Configuring SNI for a TLS Proxy

If you use HaProxy to e.g. terminate TLS on the frontend and connect via TLS to a backend, one has to take care of sending the SNI (server name indication) extension in the TLS handshake sort of manually.

Even if you use host names to address the backend server, e.g.

server foobar foobar.example:2342 ssl verify required ca-file /etc/haproxy/ca/foo.crt

HaProxy will try to establish the connection without SNI. You manually have to enforce SNI here, e.g.

server foobar foobar.example:2342 ssl verify required ca-file /etc/haproxy/ca/foo.crt sni str(foobar.example)

The surprising thing here was that it requires an expression, so you can not just write sni foobar.example, you've to wrap it in an expression. The simplest one is making sure it's a string.
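
The upside of it being an expression is that the SNI need not be static. For example, if HaProxy terminates TLS on the frontend and you want to pass the client's original SNI through to the backend, something along these lines should work (a sketch; check the behaviour against your HaProxy version's documentation):

server foobar foobar.example:2342 ssl verify required ca-file /etc/haproxy/ca/foo.crt sni ssl_fc_sni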

Update: Might be noteworthy that you've to configure SNI for the health check separately, and in that case it's a string not an expression. E.g.

server foobar foobar.example:2342 check check-ssl check-sni foobar.example ssl verify required ca-file /etc/haproxy/ca/foo.crt sni str(foobar.example)

The ca-file is shared between the ssl context and the check-ssl.

17 September, 2025 11:35AM

September 16, 2025

hackergotchi for David Bremner

David Bremner

Using git-annex for email and notmuch metadata

Introducing git-remote-notmuch

Based on an idea and ruby implementation by Felipe Contreras, I have been developing a git remote helper for notmuch. I will soon post an updated version of the patchset to the notmuch mailing list (I wanted to refer to this post in my email). In this blog post I'll outline my experiments with using that tool, along with git-annex to store (and sync) a moderate sized email store along with its notmuch metadata.

WARNING

The rest of this post describes some relatively complex operations using (at best) alpha level software (namely git-remote-notmuch). git-annex is good at not losing your files, but git-remote-notmuch can (and did several times during debugging) wipe out your notmuch database. If you have a backup (e.g. made with notmuch-dump), this is much less annoying, and in particular you can decide to walk away from this whole experiment and restore your database.
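
For reference, such a backup and restore round trip is a one-liner each; a sketch (the file name is arbitrary):

$ notmuch dump --gzip --output=notmuch-tags.gz
$ notmuch restore --input=notmuch-tags.gz   # only if you need to roll back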

Why git-annex?

I currently have about 31GiB of email, spread across more than 830,000 files. I want to maintain the ability to search and read my email offline, so I need to maintain a copy on several workstations and at least one server (which is backed up explicitly). I am somewhat committed to maintaining synchronization of tags to git since that is how the notmuch bug tracker works. Committing the email files to git seems a bit wasteful: by design notmuch does not modify email files, and even with compression, the extra copy adds a fair amount of overhead (in my case, 17G of git objects, about 57% overhead). It is also notoriously difficult to completely delete files from a git repository. git-annex offers potential mitigation for these two issues, at the cost of a somewhat more complex mental model. The main idea is that instead of committing every version of a file to the git repository, git-annex tracks the filename and metadata, with the file content being stored in a key-value store outside git. Conceptually this is similar to git-lfs. For our current purposes, the important point is that instead of a second (compressed) copy of the file, we store one copy, along with a symlink and a couple of directory entries.

What to annex

For sufficiently small files, the overhead of a symlink and a couple of directory entries is greater than the cost of a compressed second copy. Where the crossover happens depends on several variables, and will probably depend on the file content in a particular collection of email. I did a few trials of different settings for annex.largefiles to come to a threshold of largerthan=32k 1. For the curious, my experimental results are below. One potentially surprising aspect is that annexing even a small fraction of the (largest) files yields a big drop in storage overhead.

Threshold   Fraction annexed   Overhead
0           100%               30%
8k          29%                13%
16k         12%                9.4%
32k         7%                 8.9%
48k         6%                 8.9%
100k        3%                 9.1%
256k        2%                 11%
∞ (git)     0%                 57%

In the end I chose to err on the side of annexing more files (for the flexibility of deletion) rather than potentially faster operations with fewer annexed files at the same level of overhead.

Summarizing the configuration settings for git-annex (some of these are actually defaults, but not in my environment).

$ git config annex.largefiles largerthan=32k
$ git config annex.dotfiles true
$ git config annex.synccontent true

Delivering mail

To get new mail, I do something like

# compute a date based folder under $HOME/Maildir
$ dest=$(folder)
# deliver mail to ${dest} (somehow).
$ notmuch new
$ git -C $HOME/Maildir add ${dest}
$ git -C $HOME/Maildir diff-index --quiet HEAD ${dest} || git -C $HOME/Maildir commit -m 'mail delivery'

The call to diff-index is just an optimization for the case when nothing was delivered. The default configuration of git-annex will automagically annex any files larger than my threshold. At this point the git-annex repo knows nothing about tags.

There is some git configuration that can speed up the "git add" above, namely

$ git config core.untrackedCache true
$ git config core.fsmonitor true

See git-status(1) under "UNTRACKED FILES AND PERFORMANCE"

Defining notmuch as a git remote

Assuming git-remote-notmuch is somewhere in your path, you can define a remote to connect to the default notmuch database.

$ git remote add database notmuch::
$ git fetch database
$ git merge --allow-unrelated-histories database

The --allow-unrelated-histories option should be needed only the first time.

In my case the many small files used to represent the tags (one per message), use a noticeable amount of disk space (in my case about the same amount of space as the xapian database).

Once you start merging from the database to the git repo, you will likely have some conflicts, and most conflict resolution tools leave junk lying around. I added the following .gitignore file to the top level of the repo

*.orig
*~

This prevents our cavalier use of git add from adding these files to our git history (and prevents pushing random junk to the notmuch database).

To push the tags from git to notmuch, you can run

$ git push database master

You might need to run notmuch new first, so that the database knows about all of the messages (currently git-remote-notmuch can't index files, only update metadata).

git annex sync should work with the new remote, but pushing back will be very slow 2. I disable automatic pushing as follows

$ git config remote.database.annex-push false

Unsticking the database remote

If you are debugging git-remote-notmuch, or just unlucky, you may end up in a situation where git thinks the database is ahead of your git remote. You can delete the database remote (and associated stuff) and re-create it. Although I cannot promise this will never cause problems (because, computers), it will not modify your local copy of the tags in the git repo, nor modify your notmuch database.

$ git remote rm database
$ git update-ref -d notmuch/master
$ rm -r .git/notmuch

Fine tuning notmuch config

  • In order to avoid dealing with file renames, I have

      notmuch config set maildir.synchronize_flags false
    
  • I have added the following to new.ignore:

       .git;_notmuch_metadata;.gitignore
    

  1. I also had to set annex.dotfiles to true, as many of my maildirs follow the qmail style convention of starting with a .
  2. I'm not totally clear on why it is so slow, but certainly git-annex tries to push several more branches, and these are ignored by git-remote-notmuch.

16 September, 2025 09:37PM

John Goerzen

I just want an 80×25 console, but that’s no longer possible

Update 2025-09-18: I figured out how to do this, at least for many non-laptop screens. This post still contains a lot of good background detail, however.

Somehow along the way, a feature that I’ve had across DOS, OS/2, FreeBSD, and Linux — and has been present on PCs for more than 40 years — is gone.

That feature, of course, is the 80×25 text console.

Linux has, for a while now, rendered its text console using graphic modes. You can read all about it here. This has been necessary because only PCs really had the 80×25 text mode (Raspberry Pis, for instance, never did), and even they don’t have it when booted with UEFI.

I’ve lately been annoyed that:

  • The console is a different size on every screen — both in terms of size of letters and the dimensions of it
  • If a given machine has more than one display, one or both of them will have parts of the console chopped off
  • My system seems to run with three different resolutions or fonts at different points of the boot process. One during the initrd, and two different ones during the remaining boot.

And, I wanted to run some software on the console that was designed with 80×25 in mind. And I’d like to be able to plug in an old VGA monitor and have it just work if I want to do that.

That shouldn’t be so hard, right? Well, the old vga= option that you are used to doesn’t work when you booted from UEFI or on non-x86 platforms. Most of the tricks you see online for changing resolutions, etc., are no longer relevant. And things like setting a resolution with GRUB are useless for systems that don’t use GRUB (including ARM).

VGA text mode uses 8×16 glyphs in 9×16 cells, where the pixels are non-square, giving a native resolution of 720×400 (which historically ran at 70Hz) that should be displayed with stretched pixels to make a 4:3 image.

While it is possible to select a console font, and 8×16 fonts are present and supported in Linux, it appears to be impossible to have a standard way to set 720×400 so that they present in a reasonable size, at the correct aspect ratio, with 80×25.

Tricks like nomodeset no longer work on UEFI or ARM systems. It’s possible that kmscon or something like it may help, but I’m not even certain of that (video=eDP1:720x400 produced an error saying that 720×400 wasn’t a supported mode, so I’m unsure kmscon would be any better.) Not that it matters; all the kmscon options to select a font or zoom are broken, and it doesn’t offer mode selection anyhow.

I think I’m going to have to track down an old machine.

Sigh.

16 September, 2025 01:53AM by John Goerzen

September 15, 2025

Sven Hoexter

Google Cloud: When the Load Balancer Frontend Hands you an F

If someone hands you an IP:Port of a Google Cloud load balancer, and tells you to connect there with TLS, but all you receive in return is an F (and a few other bytes with non-printable characters) on running openssl s_client -connect ..., you might be missing SNI (server name indication). Sadly the other side was not transparent enough to explain in detail which exact type of Google Cloud load balancer they used, but the conversation got more detailed and eventually led to a working TLS connection once the missing -servername foobar.host.name was added. I could not find any sort of official documentation on the responses of the GFE (the frontend part) when TLS parameters do not match the expectations. Also you won't have anything in the logs, because logging at Google Cloud is a backend function, and as long as your requests do not reach the backend, there are no logs. That makes it rather unpleasant to debug such cases, when one end says "I do not see anything in the logs", and the other one says "you reject my connection and just reply F".
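
In other words, the difference between the failing and the working handshake is roughly this (the IP here is just a placeholder):

openssl s_client -connect 192.0.2.10:443                               # GFE answers with a terse, unhelpful reply
openssl s_client -connect 192.0.2.10:443 -servername foobar.host.name  # TLS handshake completes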

15 September, 2025 12:32PM

September 14, 2025

Ian Jackson

tag2upload in the first month of forky

tl;dr: tag2upload (beta) is going well so far, and is already handling around one in 13 uploads to Debian.

Introduction and some stats

We announced tag2upload’s open beta in mid-July. That was in the middle of the freeze for trixie, so usage was fairly light until the forky floodgates opened.

Since then the service has successfully performed 637 uploads, of which 420 were in the last 32 days. That’s an average of about 13 per day. For comparison, during the first half of September up to today there have been 2475 uploads to unstable. That’s about 176/day.

So, tag2upload is already handling around 7.5% of uploads. This is very gratifying for a service which is advertised as still being in beta!

Sean and I are very pleased both with the uptake, and with the way the system has been performing.

Recent UI/UX improvements

During this open beta period we have been hard at work. We have made many improvements to the user experience.

Current git-debpush in forky, or trixie-backports, is much better at detecting various problems ahead of time.

When uploads do fail on the service the emailed error reports are now more informative. For example, anomalies involving orig tarballs, which by definition can’t be detected locally (since one point of tag2upload is not to have tarballs locally) now generally result in failure reports containing a diffstat, and instructions for a local repro.

Why we are still in beta

There are a few outstanding work items that we currently want to complete before we declare the end of the beta.

Retrying on Salsa-side failures

The biggest of these is that the service should be able to retry when Salsa fails. Sadly, Salsa isn’t wholly reliable, and right now if it breaks when the service is trying to handle your tag, your upload can fail.

We think most of these failures could be avoided. Implementing retries is a fairly substantial task, but doesn’t pose any fundamental difficulties. We’re working on this right now.

Other notable ongoing work

We want to support pristine-tar, so that pristine-tar users can do a new upstream release. Andrea Pappacoda is working on that with us. See #1106071. (Note that we would generally recommend against use of pristine-tar within Debian. But we want to support it.)

We have been having conversations with Debusine folks about what integration between tag2upload and Debusine would look like. We’re making some progress there, but a lot is still up in the air.

We are considering how best to provide tag2upload pre-checks as part of Salsa CI. There are several problems detected by the tag2upload service that could be detected by Salsa CI too, but which can’t be detected by git-debpush.

Common problems

We’ve been monitoring the service and until very recently we have investigated every service-side failure, to understand the root causes. This has given us insight into the kinds of things our users want, and the kinds of packaging and git practices that are common. We’ve been able to improve the system’s handling of various anomalies and also improved the documentation.

Right now our failure rate is still rather high, at around 7%. Partly this is because people are trying out the system on packages that haven’t ever seen git tooling with such a level of rigour.

There are two classes of problem that are responsible for the vast majority of the failures that we’re still seeing:

Reuse of version numbers, and attempts to re-tag

tag2upload, like git (and like dgit), hates it when you reuse a version number, or try to pretend that a (perhaps busted) release never happened.

git tags aren’t namespaced, and tend to spread about promiscuously. So replacing a signed git tag, with a different tag of the same name, is a bad idea. More generally, reusing the same version number for a different (signed!) package is poor practice. Likewise, it’s usually a bad idea to remove changelog entries for versions which were actually released, just because they were later deemed improper.

We understand that many Debian contributors have gotten used to this kind of thing. Indeed, tools like dcut encourage it. It does allow you to make things neat-looking, even if you’ve made mistakes - but really it does so by covering up those mistakes!

The bottom line is that tag2upload can’t support such history-rewriting. If you discover a mistake after you’ve signed the tag, please just burn the version number and add a new changelog stanza.
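
In practice that just means bumping the version and recording what happened, e.g. with dch from devscripts (a sketch; the version numbers are made up):

$ dch -v 1.2-3 "Reupload; 1.2-2 was tagged but never reached the archive"
$ git commit -m 'changelog for 1.2-3' debian/changelog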

One bonus of tag2upload’s approach is that it will discover if you are accidentally overwriting an NMU, and report that as an error.

Discrepancies between git and orig tarballs

tag2upload promises that the source package that it generates corresponds precisely to the git tree you tag and sign.

Orig tarballs make this complicated. They aren’t present on your laptop when you git-debpush. When you’re not uploading a new upstream version, the tag2upload service reuses existing orig tarballs from the archive. If your git and the archive’s orig don’t agree, the tag2upload service will report an error, rather than upload a package with contents that differ from your git tag.

With the most common Debian workflows, everything is fine:

If you base everything on upstream git, and make your orig tarballs with git archive (or git deborig), your orig tarballs are the same as the git, by construction. We recommend usually ignoring upstream tarballs: most upstreams work in git, and their tarballs can contain weirdness that we don’t want. (At worst, the tarball can contain an attack that isn’t visible in git, as with xz!)

Alternatively, if you use gbp import-orig, the differences (including an attack like Jia Tan’s) are imported into git for you. Then, once again, your git and the orig tarball will correspond.
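
As a rough sketch of the two workflows just described (package name, version and tag are made up):

# upstream-git based: generate the orig tarball from the upstream git tag
$ git deborig                      # or: git archive --prefix=foo-1.2.3/ -o ../foo_1.2.3.orig.tar.gz upstream/1.2.3

# tarball based: import the upstream tarball into git, then work from there
$ gbp import-orig --uscan          # or: gbp import-orig ../foo_1.2.3.orig.tar.gz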

But there are other workflows where this correspondence may not hold. Those workflows are hazardous, because the thing you’re probably working with locally for your routine development is the git view. Then, when you upload, your work is transplanted onto the orig tarball, which might be quite different - so what you upload isn’t what you’ve been working on!

This situation is detected by tag2upload, precisely because tag2upload checks that it’s keeping its promise: the source package is identical to the git view. (dgit push makes the same promise.)

Get involved

Of course the easiest way to get involved is to start using tag2upload.

We would love to have more contributors. There are some easy tasks to get started with, in bugs we’ve tagged “newcomer” — mostly UX improvements such as detecting certain problems earlier, in git-debpush.

More substantially, we are looking for help with sbuild: we’d like it to be able to work directly from git, rather than needing to build source packages: #868527.



comment count unavailable comments

14 September, 2025 03:36PM

September 11, 2025

Jonathan Wiltshire

Debian stable updates explained: security, updates, and point releases

Please consider supporting my work in Debian and elsewhere through Liberapay.

Debian stable updates work through three main channels: point releases, security repositories, and the updates repository. Understanding these ensures your system stays secure and current.

A note about suite names

Every Debian release, or suite, has a codename — the most recent major release was trixie, or Debian 13. The codename uniquely identifies that suite.

We also use changeable aliases to add meaning to the suite’s lifecycle. For example, trixie currently has the alias stable, but when forky becomes stable instead, trixie will become known as oldstable.

This post uses either codenames or aliases depending on context. In source lists, codenames are generally preferred since that avoids surprise major upgrades right after a release is made.

The stable suites (point releases)

stable and oldstable (currently trixie and bookworm) are only updated during a “point release.” This is a minor update released to a major version. For example, 13.1 is the first minor update to trixie. It’s not possible to install older minor versions of a suite except via the snapshots mechanism (not covered here). It’s possible to view past versions via snapshot.debian.org, which preserves historical Debian archives.

There are also the testing and unstable aliases for the development suites. However, these are not relevant for users who want to run officially released versions.

Almost every stable installation of Debian will be opted into a stable or oldstable base suite. An example APT source might look like:

Type: deb
URIs: http://deb.debian.org/debian
Suites: trixie
Components: main
Signed-By: /usr/share/keyrings/debian-archive-keyring.pgp

Or, in legacy sources.list style:

deb https://deb.debian.org/debian trixie main

The security suites (DSAs explained)

For urgent security-related updates, the Security Team maintains a counterpart suite for each stable suite. These are called stable-security and oldstable-security when maintained by Debian’s security team, and oldstable-security, oldoldstable-security, etc when maintained by the LTS team.

Example APT source:

Type: deb
URIs: https://deb.debian.org/debian-security
Suites: trixie-security
Components: main
Signed-By: /usr/share/keyrings/debian-archive-keyring.pgp

Or, in legacy sources.list style:

deb https://deb.debian.org/debian-security trixie-security main

The Debian installer enables the security suites by default. Debian Security Announcements (DSAs) are published to debian-security-announce@lists.debian.org.

The updates suites (SUAs and maintenance)

For urgent non-security updates, the final recommended suites are stable-updates and oldstable-updates. This is where updates staged for a point release, but needed sooner, are published. Examples include virus database updates, timezone changes, urgent bug fixes for specific problems and corrections to errors in the release process itself.

Example APT source:

Type: deb
URIs: https://deb.debian.org/debian
Suites: trixie-updates
Components: main
Signed-By: /usr/share/keyrings/debian-archive-keyring.pgp

Or, in legacy sources.list style:

deb https://deb.debian.org/debian trixie-updates main

Debian enables the updates suites by default. Stable Update Announcements (SUAs) are published to debian-stable-announce@lists.debian.org. This is also where announcements of forthcoming point releases are published.

Summary

These are the recommended suites for all production Debian systems:

  • stable (example codename: trixie): base suite containing all the available software for a release. Point releases every 2–4 months including lower-severity security fixes that do not require immediate release. Announcements: Debian Release Announcements on debian-announce.
  • stable-security (example codename: trixie-security): urgent security updates. Announcements: Debian Security Announcements on debian-security-announce.
  • stable-updates (example codename: trixie-updates): urgent non-security updates, data updates and release maintenance. Announcements: Stable Update Announcements on debian-stable-announce.

After a release moves from oldstable to unsupported status, Long Term Support (LTS) takes over for several more years. LTS provides urgent security updates for selected architectures. For details, see wiki.debian.org/LTS.

If you’d like to stay informed, the official Debian announcement lists and release.debian.org share the latest schedules and updates.


Photo by Brian Wangenheim on Unsplash

11 September, 2025 08:29PM by Jonathan

John Goerzen

Performant Full-Disk Encryption on a Raspberry Pi, but Foiled by Twisty UARTs

In my post yesterday, ARM is great, ARM is terrible (and so is RISC-V), I described my desire to find ARM hardware with AES instructions to support full-disk encryption, and the poor state of the OS ecosystem around the newer ARM boards.

I was anticipating buying either a newer ARM SBC or an x86 mini PC of some sort.

More-efficient AES alternatives

Always one to think, “what if I didn’t have to actually buy something”, I decided to research whether it was possible to use encryption algorithms that are more performant on the Raspberry Pi 4 I already have.

The answer was yes. From cryptsetup benchmark:

root@mccoy:~# cryptsetup benchmark --cipher=xchacha12,aes-adiantum-plain64 
# Tests are approximate using memory only (no storage IO).
#            Algorithm |       Key |      Encryption |      Decryption
xchacha12,aes-adiantum        256b       159.7 MiB/s       160.0 MiB/s
xchacha20,aes-adiantum        256b       116.7 MiB/s       169.1 MiB/s
    aes-xts                   256b        52.5 MiB/s        52.6 MiB/s

With best-case reads from my SD card at 45MB/s (with dd if=/dev/mmcblk0 of=/dev/null bs=1048576 status=progress), either of the ChaCha-based algorithms will be fast enough. “Great,” I thought. “Now I can just solve this problem without spending a dollar.”
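
(Formatting a volume with one of those ciphers would have been a one-liner; a hedged sketch, where the device name is only an example and the command of course destroys existing data:

cryptsetup luksFormat --type luks2 --cipher xchacha12,aes-adiantum-plain64 --key-size 256 /dev/mmcblk0p2

)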

But not so fast.

Serial terminals vs. serial consoles

My primary use case for this device is to drive my actual old DEC vt510 terminal. I have long been able to do that by running a getty for my FTDI-based USB-to-serial converter on /dev/ttyUSB0. This gets me a login prompt, and I can do whatever I need from there.
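
With systemd, such a getty is just an instance of the serial-getty@ template; a sketch of how it is typically enabled (the baud rate can be adjusted with a drop-in override if needed):

systemctl enable --now serial-getty@ttyUSB0.service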

This does not get me a serial console, however. The serial console would show kernel messages and could be used to interact with the pre-multiuser stages of the system — that is, everything before the login prompt. You can use it to access an emergency shell for repair, etc.

I have long booted that kernel with console=tty0 console=ttyUSB0,57600, but the serial console has never worked; I'd never bothered investigating because the text terminal was sufficient.

You might be seeing where this is going: to have root on an encrypted LUKS volume, you have to enter the decryption password in the pre-multiuser environment (which happens to be on the initramfs).

So I started looking. First, I extracted the initrd with cpio and noticed that the ftdi_sio and usbserial modules weren’t present. Added them to /etc/initramfs-tools/modules and rebooted; no better.

So I found the kernel’s serial console guide, which explicitly notes “To use a serial port as console you need to compile the support into your kernel”. Well, I have no desire to custom-build a kernel on a Raspberry Pi with MicroSD storage every time a new kernel comes out.

I thought — well I don't strictly need the kernel to know about the console on /dev/ttyUSB0 for this; I just need the password prompt — which comes from userspace — to know about it.

So I looked at the initramfs code, and wouldn’t you know it, it uses /dev/console. Looking at /proc/consoles on that system, indeed it doesn’t show ttyUSB0. So even though it is possible to load the USB serial driver in the initramfs, there is no way to make the initramfs use it, because it only uses whatever the kernel recognizes as a console, and the kernel won’t recognize this. So there is no way to use a USB-to-serial adapter to enter a password for an encrypted root filesystem.

Drat.

The on-board UARTs?

I can hear you now: “The Pi already has on-board serial support! Why not use that?”

Ah yes, the reason I don’t want to use that is because it is difficult to use that, particularly if you want to have RTS/CTS hardware flow control (or DTR/DSR on these old terminals, but that’s another story, and I built a custom cable to map it to RTS/CTS anyhow).

Since you asked, I’ll take you down this unpleasant path.

The GPIO typically has only 2 pins for serial communication: 8 and 10, for TX and RX, respectively.

But dive in and you get into a confusing maze of UARTs. The “mini UART”, the one we are mostly familiar with on the Pi, does not support hardware flow control. The PL011 does. So the natural question is: how do we switch to the PL011, and what pins does it use? Great questions, and the answer is undocumented, at least for the Pi 4.

According to that page, for the Pi 4, the primary UART is UART1, UART1 is the mini UART, “the secondary UART is not normally present on the GPIO connector” and might be used by Bluetooth anyway, and there is no documented pin for RTS/CTS anyhow. (Let alone some of the other lines modems use) There are supposed to be /dev/ttyAMA* devices, but I don’t have those. There’s an enable_uart kernel parameter, which does things like stop the mini UART from changing baud rates every time the VPU changes clock frequency (I am not making this up!), but doesn’t seem to control the PL011 UART selection. This page has a program to do it, and map some GPIO pins to RTS/CTS, in theory.

Even if you get all that working, you still have the problem that the Pi UARTs (all of them of every type) are 3.3V and RS-232 is 5V, so unless you get a converter, you will fry your Pi the moment you connect it to something useful. So, you're probably looking at some soldering and such just to build a cable that will work with an iffy stack.

So, I could probably make it work given enough time, but I don’t have that time to spare working with weird Pi serial problems, so I have always used USB converters when I need serial from a Pi.

Conclusion

I bought a fanless x86 micro PC with a N100 chip and all the ports I might want: a couple of DB-9 serial ports, some Ethernet ports, HDMI and VGA ports, and built-in wifi. Done.

11 September, 2025 01:41PM by John Goerzen

hackergotchi for Gunnar Wolf

Gunnar Wolf

Saying hi to my good Reproducible Builds friends while reading a magazine article

Just wanted to share… I enjoy reading George V. Neville-Neil's Kode Vicious column, which regularly appears in some of ACM's publications I follow, such as ACM Queue or Communications.

Today I was very pleasantly surprised while reading the column titled «Can't we have nice things». Kode Vicious answers a question on why computing has nothing comparable to the beauty of ancient physics laboratories turned into museums (i.e. Faraday's laboratory) by giving a great hat tip to a project that stemmed off Debian, and where many of my good Debian friends spend a lot of their energies: Reproducible Builds. KV says:

Once the proper measurement points are known, we want to constrain the system such that what it does is simple enough to understand and easy to repeat. It is quite telling that the push for software that enables reproducible builds only really took off after an embarrassing widespread security issue ended up affecting the entire Internet. That there had already been 50 years of software development before anyone thought that introducing a few constraints might be a good idea is, well, let’s just say it generates many emotions, none of them happy, fuzzy ones.

Yes, KV is a seasoned free software author. But I found it heartwarming that the Reproducible Builds project is mentioned without needing to introduce it (assuming familiarity across the computing industry and academia), recognized as the game-changer we understood it would be over ten years ago when it was first announced, and seen as enabling beauty in computing.

Congratulations to all of you who have made this possible!

RB+ACM

11 September, 2025 12:02AM

September 10, 2025

hackergotchi for Daniel Pocock

Daniel Pocock

Culture of silence: Ubisoft harassment convictions, Mozilla, Sylvestre Ledru & Debian make no comment

In an earlier blog, I provided an English translation of the accusations against the Ubisoft executives. This is serious stuff.

In the summer of 2017, people discovered the head of the Albanian group, who was also a Mozilla Tech Speaker, had a 16-year-old girlfriend. He was 24 at the time.

In October 2017, I sent a protected whistleblower complaint to Mozilla.

Here is the internal complaint about the harassment. The date is 12 October 2017 so the misfits publishing alternative statements about harassment are lying. I have redacted the section that identifies underage victims.

The next internal email from Larissa Shapiro at Mozilla admits that kids are at risk.

Emma Irwin from Mozilla admits this is a serious matter and asks me to speak to Marta, Mozilla's HR investigator.

It was around this time that I confided in some of the women that I had a family connection with the choir of Cardinal George Pell and that I was watching these matters very carefully.

Elio Qoshi

 

On 18 January 2018, at the peak of the scandals involving the Albanian female whistleblowers, Sylvestre Ledru, who is employed by Mozilla, sent a private email to Chris Lamb, Nicolas Dandrimont and me telling us that he was resigning as an administrator in Google Summer of Code.

On my side, I would appreciate being removed from this list and delegation. While I love Debian, my responsibilities at Mozilla are growing and I don't have the bandwidth for outreach/gsoc

On 27 August 2018 I sent a public message to the mailing list where I also resigned from future mentoring in the program.

Who wants to be part of the admin team in 2019? Personally, I am going to step back from that role with GSoC and I want to make that clear now so other people can step forward well before registration opens in January.

People have now spent seven years complaining. They never paid us anything for the work we did recruiting and mentoring the interns for Google. They squeal like stuck pigs after we resign.

In December 2018, the late Cardinal Pell was convicted of abuse and the rogue Debianists immediately began spreading rumors about my family and I.

A few weeks later, on 12 February 2019, Sylvestre Ledru used his blog at Mozilla to announce a collaboration between Ubisoft and Mozilla using the Ubisoft Clever-Commit artificial intelligence. An excerpt:

Making the Building of Firefox Faster for You with Clever-Commit from Ubisoft

Firefox fights for people online: for control and choice, for privacy, for safety. ... No other tech company has people’s back like we do.

... Mozilla just partnered with Ubisoft to start using Clever-Commit, an Artificial Intelligence coding assistant developed by Ubisoft La Forge ...

...

... Ultimately, the integration of Clever-Commit into the full Firefox developer workflow could help catch up to 3 to 4 out of 5 bugs ...

...

Mozilla will contribute to the development of Clever-Commit by providing programming language expertise in Rust, C++ and Javascript, as well as expertise in C++ code analysis and analysis of bug tracking systems.

It looks like a very close relationship between the developers at Mozilla and Ubisoft. Sylvestre is one of the Mozilla developers based in Paris, which is also the headquarters of Ubisoft.

Around the same time, Mozilla censored my blog from the Planet Mozilla blog syndication service. Is this because they don't want the Mozilla community to know what women told me in 2017 and 2018?

Mozilla, censorship, mhoye, Mike Hoye, Daniel Pocock

 

Look at the Mozilla Manifesto. It says that Mozilla is against censorship. Why did they censor my blog immediately before the evidence about harassment was published?

2. The internet is a global public resource that must remain open and accessible.

...

8. Transparent community-based processes promote participation, accountability and trust.

The franceinfo report about harassment describes the culture in our industry with quotes from the audit of Ubisoft culture:

"humiliations were commonplace"
"any resistance was immediately broken"

The fellowship had elected me as a representative in 2017 and somebody at Mozilla didn't want me to share evidence about harassment (real harassment).

Ubisoft employees began to speak up about harassment (real harassment) in summer 2020 ( Wikipedia report). French newspaper Libération started an investigation, eventually running a series of reports.

In 2023, Mozilla created and promoted a petition against the French government's browser censorship (SREN) legislation. Five years after censoring the blog of the person elected by the Fellowship, Mozilla declared:

Sign our petition to stop France from forcing browsers like Mozilla's Firefox to censor websites

In the summer of 2025, three senior Ubisoft executives were convicted and punished for harassment (real harassment).

Nobody from Mozilla or Debian has ever made any public Statement on Ubisoft.

Whether we are talking about censorship or talking about harassment, the message is clear: do as we say, not as we do

Why did they attack my family when my father died but they maintain the culture of silence for the Ubisoft scandals?

Here is a photo of Sylvestre Ledru, Paul Tagliamonte and I in the offices of Google during the Google Summer of Code mentor summit 2013.

Sylvestre Ledru, Paul Tagliamonte, Daniel Pocock, Google Summer of Code

 

Please see the chronological history of how the Debian harassment and abuse culture evolved.

10 September, 2025 07:00PM

Harassment evidence: franceinfo’s Clara Lainé report on Ubisoft prosecution

In the summer of 2025, executives of the French game development company Ubisoft were tried and convicted for harassment in the workplace. It is interesting to contrast the stories from Ubisoft with the stories about the Debian harassment culture. Clara Lainé at public broadcaster franceinfo published an in-depth report about the accusations. A partial English translation is provided here.

Remember, Debian's Sylvestre Ledru, who also works for Mozilla, published blogs promoting collaboration between Ubisoft and the open source software communities. Sylvestre was formerly a Debian administrator for the Google Summer of Code internships.

From the original report in French:

Serge Hascoët, Tommy François, and Guillaume Patrux are on trial for moral and sexual harassment starting Monday. But alongside them, the victims were hoping to see Ubisoft appear in court.

"Stop talking about this immediately. There's no problem at Ubisoft." This is the order a former employee of the French video game giant says he received when he tried, in 2017, to alert his superiors about moral harassment. "At the time of #MeToo, all companies tried, or pretended, to clean up. Not Ubisoft ," he laments in the investigation file consulted by franceinfo. The silence broke in July 2020, with the publication of a vast investigation by Libération. On the 6th floor of a building in Montreuil, home to Ubisoft's prestigious editorial department, the walls could no longer contain the hearty laughter, inappropriate remarks, and serial humiliations.

Five years later, the case reaches the Bobigny Criminal Court . Starting Monday, June 2, three former executives will have to answer for their actions: Serge Hascoët, the group's former number 2; Thomas François (nicknamed Tommy), former vice-president of editorial services; and Guillaume Patrux, former game director. All three deny the facts. For five days, they will be tried for moral and sexual harassment and attempted sexual assault, following accusations brought by six women, three men, and two unions.

The facts they are accused of are part of a system where "humiliations were commonplace" and where "any resistance was immediately broken , " concluded an internal investigation carried out in July 2020. How could such an omerta have taken root and lasted for almost a decade? Why did so many employees end up believing that they could "do nothing" other than "close their eyes or grit their teeth" ?

"We can no longer differentiate whether it's inappropriate or normal."

Behind the window of a creative and relaxed company, dozens of employee testimonies paint a different picture. "The victims' words (...) describe a department [Ubisoft's editorial department] in the hands of a group of immature men, who consider it their personal fiefdom and engage in all kinds of abuse there ," concluded the internal investigation conducted by the Altaïr firm in July 2020.

In this environment, the boundary between professional and personal life dissolves. Clarisse*, who held the position for six years, was unable to file a complaint, as the facts were statute-barred. However, she retains vivid memories of this stifling atmosphere. "I felt like I was always in a bar with constant flirting ," she told investigators. Crossing the open-plan office is a daily ordeal: pointed stares, ambiguous messages, barely veiled invitations...

"I felt like I was walking through a neighborhood alone at night."

Clarisse, former Ubisoft employee facing the investigators

"It was so hard to get up in the morning, I was putting on makeup and crying at the same time ," the young woman confides. "We can no longer differentiate if it's inappropriate or if these things are normal," she says. In this microcosm, everything seems permitted under the guise of humor. Some meetings end with drawings of penises on flipcharts, walls, or post-its, several employees say. Games of "cat-bite" or "olive" are freely played in the corridors, according to the accounts of a dozen witnesses to the case.

Tommy François, one of the three defendants in the case, admitted during a hearing to having evolved in an "institutionalized (...) schoolboy environment" to which one had to adhere "if one did not want to be excluded" . When he arrived at Ubisoft in 2006, he remembers being given the nickname "the TV whore" because he came from the Game One channel or "the fat one" because of his "weight" . As for the "cat-bite", he says he "suffered" it as much as he practiced it, "as a joke" . "It happened between colleagues who knew each other well, always in a humorous tone" , assures the former vice-president of the editorial department.

Misogynistic and racist remarks become commonplace

On the 6th floor of the Montreuil building, the victims of these acts are often the same: women, interns, people of color... Juliette*, hired in 2010 as an intern for Serge Hascoët's assistant, is forced to work at an absurd pace: "Alain* [her manager, for whom the statute of limitations has expired] timed my responses to his emails and sometimes he would say to me: 'you took six minutes to reply to my email, but you're slow.'" She stayed with the company for five years: "I had just come out of a very precarious situation (...), I had to arm myself to hold out."

Nathalie*, who succeeded her in 2015, very quickly became the target of her managers.

"Alain told me that I needed to get a makeover and lose weight."

Nathalie, former Ubisoft employee facing the investigators

She specifies that he forbade her from taking the elevator. Between Juliette and Nathalie, the name of an informal network circulates: "the bracelet community ," a group of employees who know "the truth" about the open space and help each other in hushed tones. These whispers ended up finding an echo in the Bobigny criminal court: the two young women will be on the benches of the civil parties.

The testimonies also report Islamophobic and racist remarks. Nathalie remembers a black colleague nicknamed "Bamboula" and comments about the size of his penis. The young woman , who is Muslim, also recounts that a manager asked her after the Bataclan attacks "if [she] planned to join Daesh ." She adds that her colleagues "amused themselves by changing [her] screen with images of McBacon" or putting sandwiches on her desk during Ramadan. Confronted with these comments, Serge Hascoët told investigators that he had no memory of such scenes.

An all-powerful "king" who "knights"

At the top of the pyramid, power was concentrated in the hands of a handful of "unavoidable and omnipotent" men . "Serge Hascoët had the power of life or death over projects ," summarizes an employee interviewed by the Altaïr firm. The creative director has the complete confidence of CEO Yves Guillemot. "To have the budget to complete the game created, you have to please Serge ," we can read in an internal investigation.

"When the king knights, he delegates his power ," emphasizes a veteran of the editorial department. This power is transmitted from one person to another, by affinity more than by competence, according to him. "There was no HR policy until 2020. Serge placed his knowledge ," notes the author of the internal audit on the department. Alain, his right-hand man for years, testified to this himself during his hearing: "They called me the king's son."

This privileged circle occupies the heart of a system where some can get away with anything. "The editorial department was the star department, and Tommy François was an untouchable personality. We couldn't say anything to him," recalls a former employee. On the HR side, the powerlessness is obvious. "It made him laugh a lot, me not at all," laments a manager, recounting a scene where Serge Hascoët and Tommy François simulate spanking each other while shouting "harassment!" in front of the human resources office. One of her colleagues admits that Ubisoft's number 2 "didn't like to follow the rules" : "I would even say that it amused him (...) to annoy us." During his hearing, Tommy François argued that "during [his] almost 14 years at Ubisoft, HR or [his] managers never notified [him] anything about [his] behavior . "

"Loyalty is a cardinal virtue"

Added to this omnipotence of managers is loyalty to the company and the fear of losing one's job. Ubisoft embodies the elite of global video games. In this "big family ," there is no criticism of the elders, much less those who hold the reins. "Loyalty is a cardinal virtue, even outweighing values," a Ubisoft employee said during an interview with Altaïr.

“My partner told me a while ago: 'Your company is a cult.' That's not wrong!"

A Ubisoft employee as part of Altaïr's internal audit

For many, reporting wrongdoing is not an option. "In the department's environment, it's difficult to speak out without risking reprisals or being ostracized," a former employee confided in 2020. More than 30 witnesses were heard during the investigation, but many of them decided not to file a complaint "for fear of reactions from the video game industry ," investigators point out.

Reputation weighs heavily. "Looking back, I think that at the time, it was a prestigious thing to join Ubisoft and Serge's team... Perhaps someone was willing to put up with certain things instead of speaking out," observed a human resources director during an interview. Because beyond the Montreuil studio, Ubisoft remains a key name in the industry.

Those who tried to resist found themselves alone. Bérénice*, who says she was regularly "humiliated" by Tommy François and "overloaded with work ," suffered a burnout in 2015. The reaction of Alain, one of her superiors, according to her? "Anyway, you have more than that, so we're taking advantage of it, you'll never say anything." Facing investigators, Alain declared that he had "nothing to say" about Bérenice and that he had "always found her very nice . "

She will be on the bench of the civil parties, just like Benoît*, for whom the years spent at Ubisoft as a 3D artist constituted "a downward spiral" which led him "towards anxiety, sleep and eating disorders" . He explained during a hearing: "This complaint is almost an act of desperation... In any case, I have nothing to lose since I will never return to a video game company."

“HR knew everything, saw everything”

At Ubisoft, many people point to the human resources department as a weak link in the system. "HR wasn't the allies of employees, but the protectors of abusers ," accuses a former employee. When Nathalie raises the alarm about a manager's sexist comments, the response is brutal, she says: "The assistant you're replacing lasted three years, it's up to you to adapt." Three human resources directors were indicted and interviewed by investigators, who ultimately dropped the charges. But their findings don't erase the image of a passive, even complicit, department. "I think everyone had pieces of the puzzle, but no overall vision ," admits one manager to investigators. "There was no disciplinary power in 2015 ," acknowledges another.

Bérénice describes a locked-down system and a broken chain of responsibility at every level. "HR knew everything, saw everything, and sometimes even joked with the harassers. Some played the role of stepmother, taking on the role of executioner to protect Serge Hascoët and Tommy François." Even medical appeals seem futile. Bérénice claims that "the occupational physician was 85 years old, she was no longer in her right mind. She asked people to undress and get dressed several times."

“We didn’t know who to turn to.”

Bérénice, former employee and victim facing the investigators

For his part, Yves Guillemot speaks of "a few toxic people" but denies any systemic abuse. This interpretation is contested by unions, civil parties, and defense lawyers. Jean-Guillaume Le Mintier, who represents Serge Hascoët, denounces the trial's scope as too narrow. "If we want to be consistent with the idea that harassment is systemic, everyone must be present in court," he argues.

"All management departments have encouraged this company policy."

Jean-Guillaume Le Mintier, lawyer of Serge Hascoët to franceinfo

"The alert tools still don't work ," laments the lawyer for the Video Game Workers' Union, which has filed a civil suit. She castigates a company "habitually doing this ," where "omerta has become a management method ." And concludes: "This trial should also have been Ubisoft's."

* First names have been changed.

Please see the chronological history of how the Debian harassment and abuse culture evolved.

10 September, 2025 03:00PM

September 09, 2025

Sven Hoexter

debian/watch version 5 is a beauty

Kudos to yadd@ (and whoever else was involved in making that happen) for the new watch file v5 format. Especially the templates for the big git hosters make it much nicer. I prepared two of my packages to switch on the next upload: exfatprogs tracking GitHub releases and pflogsumm scraping a web page. The new format is much easier to read and less error prone.

09 September, 2025 03:02PM

September 08, 2025

Thorsten Alteholz

My Debian Activities in August 2025

Debian LTS

This was my hundred-thirty-fourth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:

  • [DLA 4272-1] aide security update to fix two CVEs where a local attacker can take advantage of these flaws to hide the addition or removal of a file from the report, tamper with the log output, or cause aide to crash during report printing or database listing.
  • [DLA 4284-1] udisks2 security update to fix one CVE related to a possible local privilege escalation.
  • [DLA 4285-1] golang-github-gin-contrib-cors security update to fix one CVE related to circumvention of restrictions.
  • [#1112054] trixie-pu of golang-github-gin-contrib-cors, prepared and uploaded
  • [#1112335] trixie-pu of libcoap3, prepared and uploaded
  • [#1112053] bookworm-pu of golang-github-gin-contrib-cors, prepared and uploaded

I also continued my work on suricata and could backport all patches. Now I have to do some tests with the package. I also started to work on an openafs regression and attended the monthly LTS/ELTS meeting.

Debian ELTS

This month was the eighty-fifth ELTS month. During my allocated time I uploaded or worked on:

  • [ELA-1499-1] aide security update to fix one embargoed CVE in Stretch, related to a crash. The other CVE mentioned above was not affecting Stretch.
  • [ELA-1508-1] udisks2 security update to fix one embargoed CVE in Stretch and Buster.

I could also mark the CVEs of libcoap as not-affected, and I attended the monthly LTS/ELTS meeting. As with LTS, suricata has now been requested for Stretch as well, so I did not finish my work on it here either.

Debian Printing

This month I uploaded a new upstream version or a bugfix version of:

This work is generously funded by Freexian!

Debian Astro

This month I uploaded a new upstream version or a bugfix version of:

Debian IoT

This month I uploaded a new upstream version or a bugfix version of:

Debian Mobcom

This month I uploaded a new upstream version or a bugfix version of:

misc

This month I uploaded a new upstream version or a bugfix version of:

In my fight against outdated RFPs, I closed 31 of them in August.

FTP master

Yeah, Trixie has been released, so the tired bones need to be awakened again :-). This month I accepted 203 and rejected 18 packages. The overall number of packages that got accepted was 243.

08 September, 2025 03:35PM by alteholz

September 07, 2025

Antoine Beaupré

Encrypting a Debian install with UKI

I originally setup a machine without any full disk encryption, then somehow regretted it quickly after. My original reasoning was that this was a "play" machine so I wanted as few restrictions on accessing the machine as possible, which meant removing passwords, mostly.

I actually ended up having a user password, but disabled the lock screen. Then I started using the device to manage my photo collection, and suddenly there was a lot of "confidential" information on the device that I didn't want to store in clear text anymore.

Pre-requisites

So, how does one convert an existing install from plain text to full disk encryption? One way is to backup to an external drive, re-partition everything and copy things back, but that's slow and boring. Besides, cryptsetup has a cryptsetup-reencrypt command, surely we can do this in place?

Having not set aside enough room for /boot, I briefly considered a "encrypted /boot" configuration and conversion (e.g. with this guide) but remembered grub's support for this is flaky, at best, so I figured I would try something else.

Here, I'm going to guide you through how I first converted from grub to systemd-boot then to UKI kernel, then re-encrypt my main partition.

Note that secureboot is disabled here, see further discussion below.

systemd-boot and Unified Kernel Image conversion

systemd folks have been developing UKI ("unified kernel image") to ship kernels. The way this works is that the kernel and initrd (and a UEFI boot stub) are combined into a single portable executable that lives in the EFI partition, as opposed to /boot. This neatly solves my problem, because I already have such a clear-text partition and won't need to re-partition my disk to convert.

Debian has started some preliminary support for this. It's not default, but I found this guide from Vasudeva Kamath which was pretty complete. Since the guide assumes some previous configuration, I had to adapt it to my case.

Here's how I did the conversion to both systemd-boot and UKI, all at once. I could have perhaps done it one at a time, but doing both at once works fine.

Before your start, make sure secureboot is disabled, see the discussion below.

  1. install systemd tools:

    apt install systemd-ukify systemd-boot
    
  2. Configure systemd-ukify, in /etc/kernel/install.conf:

    layout=uki
    initrd_generator=dracut
    uki_generator=ukify
    

    TODO: it doesn't look like this generates a initrd with dracut, do we care?

  3. Configure the kernel boot arguments with the following in /etc/kernel/uki.conf:

    [UKI]
    Cmdline=@/etc/kernel/cmdline
    

    The /etc/kernel/cmdline file doesn't actually exist here, and that's fine. Defaults are okay, as the image gets generated from your current /proc/cmdline. Check your /etc/default/grub and /proc/cmdline if you are unsure. You'll see the generated arguments in bootctl list below.

  4. Build the image:

    dpkg-reconfigure linux-image-$(uname -r)
    
  5. Check the boot options:

    bootctl list
    

    Look for a Type #2 (.efi) entry for the kernel.

  6. Reboot:

    reboot
    

You can tell you have booted with systemd-boot because (a) you won't see grub and (b) the /proc/cmdline will reflect the configuration listed in bootctl list. In my case, a systemd.machine_id variable is set there, and not in grub (compare with /boot/grub/grub.cfg).
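
A quick way to double-check both points after the reboot (nothing here modifies anything):

bootctl status      # shows the active boot loader and the booted entry
cat /proc/cmdline   # should match the options shown by bootctl list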

By default, the systemd-boot loader just boots, without a menu. You can force the menu to show up by un-commenting the timeout line in /boot/efi/loader/loader.conf, by hitting keys during boot (e.g. hitting "space" repeatedly), or by calling:

systemctl reboot --boot-loader-menu=0

See the systemd-boot(7) manual for details on that.

I did not go through the secureboot process, presumably I had already disabled secureboot. This is trickier: because one needs a "special key" to sign the UKI image, one would need the collaboration of debian.org to get this working out of the box with the keys shipped onboard most computers.

In other words, if you want to make this work with secureboot enabled on your computer, you'll need to figure out how to sign the generated images before rebooting here, because otherwise you will break your computer. Otherwise, follow the following guides:

Re-encrypting root filesystem

Now that we have a way to boot an encrypted filesystem, we can switch to LUKS for our filesystem. Note that you can probably follow this guide if, somehow, you managed to make grub work with your LUKS setup, although as this guide shows, you'd need to downgrade the cryptographic algorithms, which seems like a bad tradeoff.

We're using cryptsetup-reencrypt for this which, amazingly, supports re-encrypting devices on the fly. The trick is it needs free space at the end of the partition for the LUKS header (which, I guess, makes it a footer), so we need to resize the filesystem to leave room for that, which is the trickiest bit.

This is a possibly destructive behavior. Be sure your backups are up to date, or be ready to lose all data on the device.

We assume 512 byte sectors here. Check your sector size with fdisk -l and adjust accordingly.

  1. Before you perform the procedure, make sure requirements are installed:

    apt install cryptsetup systemd-cryptsetup cryptsetup-initramfs
    

    Note that this requires network access, of course.

  2. Reboot in a live image, I like GRML but any Debian live image will work, possibly including the installer

  3. First, calculate how many sectors to free up for the LUKS header

    qalc> 32Mibyte / ( 512 byte )
    
      (32 mebibytes) / (512 bytes) = 65536
    
  4. Find the sector sizes of the Linux partitions:

    fdisk -l /dev/nvme0n1 | awk '/filesystem/ { print $1 " " $4 }'
    

    For example, here's an example with a /boot and / filesystem:

    $ sudo fdisk -l /dev/nvme0n1 | awk '/filesystem/ { print $1 " " $4 }'
    /dev/nvme0n1p2 999424
    /dev/nvme0n1p3 3904979087
    
  5. Subtract the LUKS header sectors (computed in step 3) from the partition's sector count:

    qalc> set precision 100
    qalc> 3904979087 - 65536
    

    Or, last step and this one, in one line:

    fdisk -l /dev/nvme0n1 | awk '/filesystem/ { print $1 " " $4 - 65536 }'
    
  6. Recheck filesystem:

    e2fsck -f /dev/nvme0n1p2
    
  7. Resize filesystem:

    resize2fs /dev/nvme0n1p2 $(fdisk -l /dev/nvme0n1 | awk '/nvme0n1p2/ { print $4 - 65536 }')s
    

    Notice the trailing s here: it makes resize2fs interpret the number as a 512 byte sector size, as opposed to the default (4k blocks).

  8. Re-encrypt filesystem:

    cryptsetup reencrypt --encrypt /dev/nvme0n1p2 --reduce-device-size=32M
    

    This is it! This is the most important step! Make sure your laptop is plugged in and try not to interrupt it. This can, apparently, be resumed without problem, but I'd hate to show you how.

    This will show progress information like:

    Progress:   2.4% ETA 23m45s,      53GiB written, speed   1.3 GiB/s
    

    Wait until the ETA has passed.
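
    As an optional sanity check (not part of the original procedure), you can confirm that the device now carries a LUKS header:

    cryptsetup luksDump /dev/nvme0n1p2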

  9. Open and mount the encrypted filesystem and mount the EFI system partition (ESP):

    cryptsetup open /dev/nvme0n1p2 crypt
    mount /dev/mapper/crypt /mnt
    mount /dev/nvme0n1p1 /mnt/boot/efi
    

    If this fails, now is the time to consider restoring from backups.

  10. Enter the chroot

    for fs in proc sys dev ; do
      mount --bind /$fs /mnt/$fs
    done
    chroot /mnt
    

    Pro tip: this can be done in one step in GRML with:

    grml-chroot /mnt bash
    
  11. Generate a crypttab:

    echo crypt_dev_nvme0n1p2 UUID=$(blkid -o value -s UUID /dev/nvme0n1p2) none luks,discard >> /etc/crypttab
    
  12. Adjust the root filesystem in /etc/fstab; make sure you have a line like this:

    /dev/mapper/crypt_dev_nvme0n1p2 /               ext4    errors=remount-ro 0       1
    

    If you were already using a UUID entry for this, there's nothing to change!

  13. Configure the root filesystem in the initrd:

    echo root=/dev/mapper/crypt_dev_nvme0n1p2 > /etc/kernel/cmdline
    
  14. Regenerate UKI:

    dpkg-reconfigure linux-image-$(uname -r)
    

    Be careful here! systemd-boot inherits the command line from the system where it is generated, so this will possibly feature some unsupported commands from your boot environment. In my case GRML had a couple of those, which broke the boot. That said, it's still possible to work around this issue by tweaking the arguments at boot time.
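
    If in doubt, you can inspect the command line embedded in the generated UKI before rebooting. This is only a sketch: the exact file name under /boot/efi/EFI/Linux/ depends on your kernel version and machine id, so adjust it accordingly:

    objcopy -O binary --only-section=.cmdline \
        /boot/efi/EFI/Linux/<your-uki>.efi /dev/stdout && echo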

  15. Exit chroot and reboot

    exit
    reboot
    

Some of the ideas in this section were taken from this guide, but it was mostly rewritten to simplify the work. My guide also avoids the grub hacks and doesn't tie you to a specific initrd system (that guide uses initramfs-tools and grub, while I, above, switched to dracut and systemd-boot). RHEL also has a similar guide, perhaps even better.

Somehow I had set up this system without LVM at all, which simplifies things a bit (as I don't need to also resize the physical volumes/volume groups), but if you have LVM, you need to tweak this to also resize the LVM bits. The RHEL guide has some information about this.

07 September, 2025 02:39AM

September 06, 2025

Reproducible Builds

Reproducible Builds in August 2025

Welcome to the August 2025 report from the Reproducible Builds project!

Welcome to the latest report from the Reproducible Builds project for August 2025. These monthly reports outline what we’ve been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. If you are interested in contributing to the Reproducible Builds project, please see the Contribute page on our website.

In this report:

  1. Reproducible Builds Summit 2025
  2. Reproducible Builds and live-bootstrap at WHY2025
  3. DALEQ Explainable Equivalence for Java Bytecode
  4. Reproducibility regression identifies issue with AppArmor security policies
  5. Rust toolchain fixes
  6. Distribution work
  7. diffoscope
  8. Website updates
  9. Reproducibility testing framework
  10. Upstream patches

Reproducible Builds Summit 2025

Please join us at the upcoming Reproducible Builds Summit, set to take place from October 28th to 30th, 2025 in Vienna, Austria!

We are thrilled to host the eighth edition of this exciting event, following the success of previous summits in various iconic locations around the world, including Venice, Marrakesh, Paris, Berlin, Hamburg and Athens. Our summits are a unique gathering that brings together attendees from diverse projects, united by a shared vision of advancing the Reproducible Builds effort.

During this enriching event, participants will have the opportunity to engage in discussions, establish connections and exchange ideas to drive progress in this vital field. Our aim is to create an inclusive space that fosters collaboration, innovation and problem-solving.

If you’re interesting in joining us this year, please make sure to read the event page which has more details about the event and location. Registration is open until 20th September 2025, and we are very much looking forward to seeing many readers of these reports there!


Reproducible Builds and live-bootstrap at WHY2025

WHY2025 (What Hackers Yearn) is a nonprofit outdoors hacker camp that takes place in Geestmerambacht in the Netherlands (approximately 40km north of Amsterdam). The event is “organised for and by volunteers from the worldwide hacker community, and knowledge sharing, technological advancement, experimentation, connecting with your hacker peers, forging friendships and hacking are at the core of this event”.

At this year’s event, Frans Faase gave a talk on live-bootstrap, an attempt to “provide a reproducible, automatic, complete end-to-end bootstrap from a minimal number of binary seeds to a supported fully functioning operating system”.

Frans’ talk is available to watch on video and his slides are available as well.


DALEQ Explainable Equivalence for Java Bytecode

Jens Dietrich of the Victoria University of Wellington, New Zealand and Behnaz Hassanshahi of Oracle Labs, Australia published an article this month entitled DALEQ — Explainable Equivalence for Java Bytecode which explores the options and difficulties when Java binaries are not identical despite being from the same sources, and what avenues are available for proving equivalence despite the lack of bitwise correlation:

[Java] binaries are often not bitwise identical; however, in most cases, the differences can be attributed to variations in the build environment, and the binaries can still be considered equivalent. Establishing such equivalence, however, is a labor-intensive and error-prone process.

Jens and Behnaz therefore propose a tool called DALEQ, which:

disassembles Java byte code into a relational database, and can normalise this database by applying Datalog rules. Those databases can then be used to infer equivalence between two classes. Notably, equivalence statements are accompanied with Datalog proofs recording the normalisation process. We demonstrate the impact of DALEQ in an industrial context through a large-scale evaluation involving 2,714 pairs of jars, comprising 265,690 class pairs. In this evaluation, DALEQ is compared to two existing bytecode transformation tools. Our findings reveal a significant reduction in the manual effort required to assess non-bitwise equivalent artifacts, which would otherwise demand intensive human inspection. Furthermore, the results show that DALEQ outperforms existing tools by identifying more artifacts rebuilt from the same code as equivalent, even when no behavioral differences are present.

Jens also posted this news to our mailing list.


Reproducibility regression identifies issue with AppArmor security policies

Tails developer intrigeri has tracked down a reproducibility regression in the generation of AppArmor policy caches, and has identified an issue with the 4.1.0 version of AppArmor.

Although initially tracked on the Tails issue tracker, intrigeri filed an issue on the upstream bug tracker. AppArmor developer John Johansen replied, confirming that they can reproduce the issue and went to work on a draft patch. Through this, John revealed that it was caused by an actual underlying security bug in AppArmor — that is to say, it resulted in permissions not (always) matching what the policy intends and, crucially, not merely a cache reproducibility issue.

Work on the fix is ongoing at time of writing.


Rust toolchain fixes

Rust Clippy is a linting tool for the Rust programming language. It provides a collection of lints (rules) designed to identify common mistakes, stylistic issues, potential performance problems and unidiomatic code patterns in Rust projects. This month, however, Sosthène Guédon filed a new issue on the Clippy GitHub issue tracker requesting a new check that “would lint against non deterministic operations in proc-macros, such as iterating over a HashMap”.


Distribution work

In Debian this month:

Lastly, Bernhard M. Wiedemann posted another openSUSE monthly update for their work there.


diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 303, 304 and 305 to Debian:

  • Improvements:

    • Use sed(1) backreferences when generating debian/tests/control to avoid duplicating ourselves. []
    • Move from a mono-utils dependency to versioned mono-devel | mono-utils dependency, taking care to maintain the [!riscv64] architecture restriction. []
    • Use sed over awk to avoid mangling dependency lines containing = (equals) symbols such as version restrictions. []
  • Bug fixes:

    • Fix a test after the upload of systemd-ukify version 258~rc3. []
    • Ensure that Java class files are named .class on the filesystem before passing them to javap(1). []
    • Do not run jsondiff on files over 100KiB as the algorithm runs in O(n^2) time. []
    • Don’t check for PyPDF version 3 specifically; check for >= 3. []
  • Misc:

    • Update copyright years. [][]

In addition, Martin Joerg fixed an issue with the HTML presenter to avoid a crash when the page limit is None [] and Zbigniew Jędrzejewski-Szmek fixed compatibility with RPM 6 []. Lastly, John Sirois fixed a missing requests dependency in the trydiffoscope tool. []


Website updates

Once again, there were a number of improvements made to our website this month including:


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In August, however, a number of changes were made by Holger Levsen, including:

  • reproduce.debian.net-related:

    • Run 4 workers on the o4 node again in order to speed up testing. [][][][]
    • Also test trixie-proposed-updates and trixie-updates etc. [][]
    • Gather separate statistics for each tested release. []
    • Support sources from all Debian suites. []
    • Run new code from the prototype database rework branch for the amd64-pull184 pseudo-architecture. [][]
    • Add a number of helpful links. [][][][][][][][][]
    • Temporarily call debrebuild without the --cache argument to experiment with a new version of devscripts. [][][]
    • Update public TODO. []
  • Installation tests:

    • Add comments to explain structure. []
    • Mark more old jobs as old or “dead”. [][][]
    • Turn the maintenance job into a no-op. []
  • Jenkins node maintenance:

    • Increase penalties if the osuosl5 or ionos7 nodes are down. []
    • Stop trying to fix network automatically. []
    • Correctly mark ppc64el architecture nodes when down. []
    • Upgrade the remaining arm64 nodes to Debian trixie in anticipation of the release. [][]
    • Allow higher SSD temperatures on the riscv64 architecture. []
  • Debian-related:

    • Drop the armhf architecture; many thanks to Vagrant for physically hosting the nodes for ten years. [][]
    • Add Debian forky, and archive bullseye. [][][][][][][]
    • Document the filesystem space savings from dropping the armhf architecture. []
    • Exclude i386 and armhf from JSON results. []
    • Update TODOs for when Debian trixie and forky have been released. [][]
  • tests.reproducible-builds.org-related:

    • Disable all OpenWrt reproducible CI jobs, in coordination with the OpenWrt community. [][]
    • Make reproduce.debian.net accessible via IPv6. []
    • Ignore that the megacli RAID controller requires packages from Debian bookworm. []
  • Misc:

    • Detect errors with openQA erroring out. []
    • Drop the long-disabled openwrt_rebuilder jobs. []
    • Use qa-jenkins-dev@alioth-lists.debian.net as the contact for jenkins.debian.net. []
    • Redirect reproducible-builds.org/vienna25 to reproducible-builds.org/vienna2025. []

In addition,

  • James Addison migrated away from the deprecated toplevel deb822 Python module in favour of debian.deb822 in the bin/reproducible_scheduler.py script [] and removed a note on reproduce.debian.net after the release of Debian trixie [].

  • Jochen Sprickerhof made a huge number of improvements to the reproduce.debian.net statistics calculation [][][][][][] as well as to the reproduce.debian.net service more generally [][][][][][][][].

  • Mattia Rizzolo performed a lot of work migrating scripts to SQLAlchemy version 2.0 [][][][][][] in addition to making some changes to the way openSUSE reproducibility tests are handled internally. []

  • Lastly, Roland Clobus updated the Debian Live packages after the release of Debian trixie. [][]


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:



Finally, if you are interested in contributing to the Reproducible Builds project, please visit the Contribute page on our website. You can also get in touch with us via:

06 September, 2025 09:58PM

September 04, 2025

Noah Meyerhans

False Positives

There are times when an email-based workflow gets really difficult. One of those times is when discussing projects related to spam and malware detection.

 noahm@debian.org
host stravinsky.debian.org [2001:41b8:202:deb::311:108]
SMTP error from remote mail server after end of data:
550-malware detected: Sanesecurity.Phishing.Fake.30934.1.UNOFFICIAL:
550 message rejected
submit@bugs.debian.org
host stravinsky.debian.org [2001:41b8:202:deb::311:108]
SMTP error from remote mail server after end of data:
550-malware detected: Sanesecurity.Phishing.Fake.30934.1.UNOFFICIAL:
550 message rejected

This was, in fact, a false positive. And now, because reportbug doesn’t record outgoing messages locally, I need to retype the whole thing.

(NB. this is not a complaint about the policies deployed on the Debian mail servers; they’d be negligent if they didn’t implement such policies on today’s internet.)

04 September, 2025 02:53PM by Noah Meyerhans (frodo+blog@morgul.net)

September 03, 2025

Joachim Breitner

F91 in Lean

Back in March, with version 4.17.0, Lean introduced partial_fixpoint, a new way to define recursive functions. I had drafted a blog post for the official Lean FRO blog back then, but forgot about it, and with the Lean FRO blog discontinued, I’ll just publish it here, better late than never.

With the partial_fixpoint mechanism we can model possibly partial functions (so those returning an Option) without an explicit termination proof, and still prove facts about them. See the corresponding section in the reference manual for more details.

On the Lean Zulip, I was asked if we can use this feature to define the McCarthy 91 function and prove it to be total. This function is a well-known tricky case for termination proofs.

First let us have a brief look at why this function is tricky to define in a system like Lean. A naive definition like

def f91 (n : Nat) : Nat :=
  if n > 100
  then n - 10
  else f91 (f91 (n + 11))

does not work; Lean is not able to prove termination of this function by itself.

Even using well-founded recursion with an explicit measure (e.g. termination_by 101 - n) is doomed, because we would have to prove facts about the function’s behaviour (namely that f91 n = f91 101 = 91 for 90 ≤ n ≤ 100) and at the same time use that fact in the termination proof that we have to provide while defining the function. (The Wikipedia page spells out the proof.)

We can make well-founded recursion work if we change the signature and use a subtype on the result to prove the necessary properties while we are defining the function. Lean by Example shows how to do it, but for larger examples this approach can be hard or tedious.

With partial_fixpoint, we can define the function as a partial function without worrying about termination. This requires a change to the function’s signature, returning an Option Nat:

def f91 (n : Nat) : Option Nat :=
  if n > 100
    then pure (n - 10)
    else f91 (n + 11) >>= f91
partial_fixpoint

From the point of view of the logic, Option.none is then used for those inputs for which the function does not terminate.

This function definition is accepted and the function runs fine as compiled code:

#eval f91 42

prints some 91.

The crucial question is now: Can we prove anything about f91? In particular, can we prove that this function is actually total?

Since we now have the f91 function defined, we can start proving auxiliary theorems, using whatever induction schemes we need. In particular we can prove that f91 is total and always returns 91 for n ≤ 100:

theorem f91_spec_high (n : Nat) (h : 100 < n) : f91 n = some (n - 10) := by
  unfold f91; simp [*]

theorem f91_spec_low (n : Nat) (h₂ : n ≤ 100) : f91 n = some 91 := by
  unfold f91
  rw [if_neg (by omega)]
  by_cases n < 90
  · rw [f91_spec_low (n + 11) (by omega)]
    simp only [Option.bind_eq_bind, Option.some_bind]
    rw [f91_spec_low 91 (by omega)]
  · rw [f91_spec_high (n + 11) (by omega)]
    simp only [Nat.reduceSubDiff, Option.some_bind]
    by_cases h : n = 100
    · simp [f91, *]
    · exact f91_spec_low (n + 1) (by omega)

theorem f91_spec (n : Nat) : f91 n = some (if n ≤ 100 then 91 else n - 10) := by
  by_cases h100 : n ≤ 100
  · simp [f91_spec_low, *]
  · simp [f91_spec_high, Nat.lt_of_not_le ‹_›, *]

-- Generic totality theorem
theorem f91_total (n : Nat) : (f91 n).isSome := by simp [f91_spec]

(Note that theorem f91_spec_low is itself recursive in a somewhat non-trivial way, but Lean can figure that out all by itself. Use termination_by? if you are curious.)

This is already a solid start! But what if we want a function of type f91! (n : Nat) : Nat, without the Option? We can derive that from the partial variant, as we have just proved it to be actually total:

def f91! (n : Nat) : Nat  := (f91 n).get (f91_total n)

theorem f91!_spec (n : Nat) : f91! n = if n ≤ 100 then 91 else n - 10 := by
  simp [f91!, f91_spec]
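
As a quick sanity check, reusing the definitions above, the Option-free variant evaluates as expected:

#eval f91! 42   -- 91
#eval f91! 150  -- 140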

Using partial_fixpoint one can decouple the definition of a function from a termination proof, or even model functions that are not terminating on all inputs. This can be very useful in particular when using Lean for program verification, such as with the aeneas package, where such partial definitions are used to model Rust programs.

03 September, 2025 08:18PM by Joachim Breitner (mail@joachim-breitner.de)

Enrico Zini

CAdES signatures on Debian

CAdES is a digital signature standard that is used, and sometimes mandated, by the Italian Public Administration.

To be able to do my job, I own a Carta Nazionale dei Servizi (CNS) with which I can generate legally binding signatures. Now comes the problem of finding a software to do it.

Infocamere Firma4NG

InfoCamere distribute a piece of software called Firma4NG, with a Linux option, which, I'm pleased to say, seems to work just fine.

Autofirma

AutoFirma is a Java application for digital signatures distributed by the Spanish government, and it has a Linux version.

It is licensed as GPL-2+ | EUPL-1.1, and the source seems to be here.

While my Spanish is decent I lack jargon for this specific field, and I didn't manage to make it work with my CNS.

Autogram

Andrej Shadura pointed me to Autogram, a Slovak application for digital signatures, licensed under the EUPL-1.2.

The interface is still only in Slovak, so I tried it but didn't get very far in making it work.

OpenSSL

In trixie, openssl is almost, but not quite, able to do it. Here's how far I've got.

Install opensc

apt install opensc

Test if you can access the smart card with:

pkcs11-tool --list-objects [-l]

You can find other pkcs11-tool examples here

Set up a pkcs11 provider for openssl

apt install pkcs11-provider

Edit /etc/ssl/openssl.cnf:

  • In [provider_sect] add pkcs11 = pkcs11_sect
  • In [default_sect], uncomment activate = 1
  • Add this new section:
[pkcs11_sect]
module = /usr/lib/x86_64-linux-gnu/ossl-modules/pkcs11.so
pkcs11-module-path = /usr/lib/x86_64-linux-gnu/pkcs11/opensc-pkcs11.so
default_algorithms = ALL
activate = 1

Test with openssl list -providers

You can check if openssl can see keys on the card:

openssl pkey -in 'pkcs11:id=%01' -pubin -pubout -text

See PKCS11 URI documentation here.

Install the PKCS11 engine for openssl

apt install libengine-pkcs11-openssl

It looks like providers have replaced engines, so this should not be needed, but I couldn't find a way to convince openssl to work without it.

Sign a document

openssl cms -nodetach -binary -cades -outform DER -in filename -out filename.p7m -sign -signer 'pkcs11:id=%01' -keyform engine -engine pkcs11
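
As a rough local check (this only verifies the CMS signature itself, not whether the CA is qualified or revocation data is available), openssl can also verify the file and extract the signed payload; a sketch, reusing the file names from above:

openssl cms -verify -noverify -inform DER -in filename.p7m -out filename.out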

It verifies correctly using the Austrian verification system.

All the Italian verification systems I tried, however, complain that, although the signature is valid, the certificate is emitted by an unqualified CA and the certificate revocation information cannot be found.

PAdES

When signing PDF files, the PAdES standard is sometimes accepted.

LibreOffice is able to generate PAdES signatures using the "File / Digital signatures…" menu, and provided the smart card is in the reader it is able to use it. Both LibreOffice and Okular can verify that the signature is indeed there.

However, when trying to validate the signature using Italian validators, I get the same complaints about unqualified CAs and missing revocation information.

Wall of shame

Dike GoSign

Infocert (now Tinexta) used to distribute a piece of software called "Dike GoSign" that worked on Ubuntu, which I used on a completely isolated VM, and it was awful but it worked.

I had to regenerate the VM for it, and discovered that the version they distribute now will refuse to work unless one signs in online with a Tinexta account. From the same company that asks you to install their own root certificates to use their digital signature system.

Gross.

Dropped.

Aruba Sign

Aruba used to distribute a piece of software called Aruba Sign, which also worked on Ubuntu.

Ubuntu support has been discontinued, and they now only offer support for Windows or Mac.

Yuck. Dropped.

03 September, 2025 03:38PM

Colin Watson

Free software activity in August 2025

About 95% of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay or GitHub Sponsors.

Python team

forky is open! As a result I’m starting to think about the upcoming Python 3.14. At some point we’ll doubtless do a full test rebuild, but in advance of that I concluded that one of the most useful things I could do would be to work on our very long list of packages with new upstream versions. Of course there’s no real chance of this ever becoming empty since upstream maintainers aren’t going to stop work for that long, but there are a lot of packages there where we’re quite a long way out of date, and many of those include fixes that we’ll need for 3.14, either directly or by fixing interactions with new versions of other packages that in turn will need to be fixed. We can backport changes when we need to, but more often than not the most efficient way to do things is just to keep up to date.

So, I upgraded these packages to new upstream versions (deep breath):

  • aioftp
  • aiosignal (building on work by IanLucca)
  • audioop-lts
  • celery
  • djangorestframework
  • djoser
  • fpylll
  • frozenlist
  • git-repo-updater
  • ipykernel
  • klepto
  • kombu
  • multipart
  • netmiko (sponsoring work by Eduardo Silva; contributed supporting fix upstream)
  • pathos
  • ppft
  • pydantic
  • pydantic-core
  • pydantic-settings
  • pylsqpack
  • pymssql
  • pytest-mock
  • pytest-pretty
  • pytest-repeat
  • pytest-rerunfailures
  • python-a2wsgi
  • python-apptools (sponsoring work by Kathlyn Lara Murussi)
  • python-asgiref
  • python-asyncssh
  • python-bitarray
  • python-bitstring
  • python-bytecode
  • python-channels-redis
  • python-charset-normalizer
  • python-daphne
  • python-django-analytical
  • python-django-guid
  • python-django-health-check
  • python-django-pgbulk
  • python-django-pgtrigger
  • python-django-postgres-extra
  • python-django-storages
  • python-holidays
  • python-httpx-sse
  • python-icalendar
  • python-lazy-model
  • python-line-profiler
  • python-lz4
  • python-marshmallow-dataclass
  • python-mastodon
  • python-model-bakery
  • python-oauthlib
  • python-parse-type
  • python-pathvalidate
  • python-pgspecial
  • python-processview
  • python-pytest-subtests
  • python-roman
  • python-semantic-release
  • python-testfixtures
  • python-time-machine
  • python-tokenize-rt
  • python-typeguard
  • python-typing-extensions
  • python-urllib3
  • pyupgrade
  • requests (fixing CVE-2024-47081)
  • responses
  • zope.deferredimport
  • zope.schema
  • zope.testrunner

That’s only about 10% of the backlog, but of course others are working on this too. If we can keep this up for a while then it should help.

I packaged pytest-run-parallel, pytest-unmagic (still in NEW), and python-forbiddenfruit (still in NEW), all needed as new dependencies of various other packages.

setuptools upstream will be removing the setup.py install command on 31 October. While this may not trickle down immediately into Debian, it does mean that in the near future nearly all Python packages will have to use pybuild-plugin-pyproject (note that this does not mean that they necessarily have to use pyproject.toml; this is just a question of how the packaging runs the build system). We talked about this a bit at DebConf, and I said that I’d noticed a number of packages where this isn’t straightforward and promised to write up some notes. I wrote the Python/PybuildPluginPyproject wiki page for this; I expect to add more bits and pieces to it as I find them.
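
For a package that still uses the plain setup.py flow, the switch is usually just a couple of lines; a minimal sketch (generic boilerplate, not taken from any particular package):

# debian/control (excerpt): pull in the plugin
Build-Depends: debhelper-compat (= 13),
               dh-python,
               pybuild-plugin-pyproject,
               python3-all,
               python3-setuptools

# debian/rules: the usual pybuild invocation stays the same
%:
	dh $@ --buildsystem=pybuild

With pybuild-plugin-pyproject in Build-Depends, pybuild builds and installs the package through the PEP 517 interface rather than invoking setup.py install.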

On that note, I converted several packages to pybuild-plugin-pyproject:

  • billiard
  • lazr.config
  • python-timeline
  • zope.sqlalchemy
  • zope.testing

I fixed several build/test failures:

I fixed some other bugs:

I reviewed Debian defaults: nftables as banaction and systemd as backend, but it looked as though nothing actually needed to be changed so we closed this with no action.

Rust team

Upgrading Pydantic was complicated, and required a rust-pyo3 transition (which Jelmer Vernooij started and Peter Michael Green has mostly been driving, thankfully), packaging rust-malloc-size-of (including an upstream portability fix), and upgrading several packages to new upstream versions:

  • rust-serde
  • rust-serde-derive
  • rust-serde-json
  • rust-smallvec
  • rust-speedate
  • rust-time
  • rust-time-core
  • rust-time-macros

bugs.debian.org

I fixed bugs.debian.org: misspelled checkbox id “uselessmesages”, as well as a bug that caused incoming emails with certain header contents to go missing.

OpenSSH

I fixed openssh-server: refuses further connections after having handled PerSourceMaxStartups connections with a cherry-pick from upstream.

Other bits and pieces

I upgraded libfido2 to a new upstream version.

I fixed mimalloc: FTBFS on armhf: cc1: error: ‘-mfloat-abi=hard’: selected architecture lacks an FPU, which was blocking changes to pendulum in the Python team. I also spent some time helping to investigate libmimalloc3: Illegal instruction Running mtxrun —generate, though that bug is still open.

I fixed various autopkgtest bugs in gssproxy, prompted by #1007 in Debusine.

Since my old team is decommissioning Bazaar/Breezy code hosting in Launchpad (the end of an era, which I have distinctly mixed feelings about), I converted Storm to git.

03 September, 2025 10:56AM by Colin Watson

Paul Wise

FLOSS Activities August 2025

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

  • Obsolete conffile in zmap

All work was done on a volunteer basis.

03 September, 2025 05:16AM

Ben Hutchings

FOSS activity in August 2025

03 September, 2025 01:14AM by Ben Hutchings

September 02, 2025

Jonathan Dowland

Luminal and Lateral

For my birthday I was gifted copies of Eno's last two albums, Luminal and Lateral, both of which are collaborations with Beatie Wolfe.

Luminal and Lateral records in the sunshine

Let's start with the art. I love this semi-minimalist, bold style, and how the LP itself (in their coloured, bio-vinyl variants) feels like it's part of the artwork. I like the way the artist credits mirror each other: Wolfe, Eno for Luminal; Eno, Wolfe for Lateral.

My first "bio vinyl" LP was the Cure's last one, last year. Ahead of it arriving I planned to blog about it, but when it arrived it turned out I had nothing interesting to say. In terms of how it feels, or sounds, it's basically the same as the traditional vinyl formulation.

The attraction of bio-vinyl to well-known environmentalists like Eno (and I guess, the Cure) is the reduced environmental impact due to changing out the petroleum and other ingredients with recycled used cooking oil. You can read more about bio-vinyl if you wish. I try not to be too cynical about things like this; my immediate response is to assume some kind of green-washing PR campaign (I'm currently reading Consumed by Saabira Chaudhuri, an excellent book that, sadly, is only fuelling my cynicism), but I know Eno in particular takes this stuff seriously and has likely done more than a surface-level evaluation. So perhaps every little helps.

On to the music. The first few cuts I heard from the albums earlier in the year didn't inspire me much. Possibly I heard something from Luminal, the vocal album, and I'm generally more drawn to Eno's ambient work (Lateral is the ambient instrumental one). I was not otherwise familiar with Beatie Wolfe. On returning to the albums months later, I found them more compelling. Luminal reminds me a little of Apollo: Atmospheres and Soundtracks. Lateral worked well as space music for PhD-correction sessions.

The pair recently announced a third album, Liminal, to arrive in October, totally throwing off the symmetry of the first two. Two of its tracks are available to stream now in the usual places.

02 September, 2025 11:23AM

September 01, 2025

Guido Günther

Free Software Activities August 2025

Another short status update of what happened on my side last month. Released Phosh 0.49.0 and added some more QoL improvements to the Phosh Mobile stack (e.g. around Cell broadcasts). Also pulled my SHIFT6mq out of the drawer (where it had been sitting far too long) and got it to show a picture after a small driver fix. Thanks to the work the sdm845-mainlining folks are doing, that was all that was needed. If I can get touch to work better, that would be another nice device for demoing Phosh.

See below for details on the above and more:

phosh

  • Allow to auto-start pomodoro timer (MR)
  • Improve mpris player thumbnails (MR)
  • Cellbroadcast fixes (MR)
  • Release (MR)
  • searchd related build system fixes (MR)
  • gchar vs char cleanup (MR)
  • upcoming-events: Add filter icons (MR)
  • Fix missing header dependency (MR)
  • Release 0.49~rc1, 0.49.0
  • Fix some incorrect callback signatures (MR)

phoc

  • Workspace indicators (MR)
  • Don't overwrite picked output (MR)
  • Release 0.49~rc1, 0.49.0
  • Raise nofile rlimit (MR)
  • Fix getting started page title (MR)
  • Update cursor when layer surface moves away from under the cursor (MR)
  • Support cursor-shape-v1 protocol (MR)
  • pointer: Use libinput's LIBINPUT_CONFIG_DRAG_LOCK_ENABLED_STICKY (MR)

phosh-mobile-settings

  • Cellbroadcast fixes (MR)
  • build: Link statically against libcellbroadcast subproject (MR)
  • Release 0.49~rc1, 0.49.0

stevia (formerly phosh-osk-stub)

  • Release 0.49~rc1, 0.49.0
  • Fix emoji matching on big endian (MR)
  • Fix emoji matching again after switching to GTK's embedded emoji data (MR)
  • Fix scaling when adding new layouts (MR)
  • Improve character popover and other fixes (MR)

xdg-desktop-portal-phosh

pfs

  • Let pressing <enter> save the file (MR)

feedbackd

  • Release 0.8.4
  • Fix important override. (MR)

feedbackd-device-themes

  • Release 0.8.5
  • Lower status LED brightness on sargo (MR)

libcmatrix

  • Track room version (MR)

Chatty

  • Warning fixes (MR)
  • matrix: Show room version (MR)

Debian

Cellbroadcastd

  • Fix daemon systemd target (MR)
  • Ignore case when matching country when looking up channels (MR)
  • Meson dependency fix (MR)

ModemManager

  • Fix two country codes (MR)

gnome-clocks

  • Fix sporadic wakeups due to timers that were not disposed (MR), which also resulted in a vala issue

git-buildpackage

  • Move test data to salsa and fetch it from there: deb, rpm, MR
  • clone: Be less strict on vcs-git URLs (MR)

mobian-recipies

  • Fail early without an ssh key (MR)

Linux

  • Shift6mq: Fix clock frequency of panel driver (MR)
  • Shift6mq: Set chassis type (MR)
  • Shift6mq: Tried to improve the touch driver to increase the sensitivity / sample rate, no success yet.

Reviews

This is not code by me but reviews on other peoples code. The list is (as usual) slightly incomplete. Thanks for the contributions!

  • phosh/upcoming-events: Allow filtering out empty days (MR)
  • phosh: keypad and search bar CSS improvements (MR)
  • p-m-s: Tweaks definition parsing code (MR)
  • p-m-s: osk-shortcuts: UI tweaks (MR)
  • p-m-s: Add gchar check (MR)
  • p-m-s: Make it a search provider (MR)
  • phoc: toplevel-addons (MR)
  • debian: MM stable update (MR)
  • stevia: Use default font (MR)
  • upcoming-events: Use filtered list model (MR)
  • pms: Tweaks rename (MR)
  • pms: Clang build fix (MR)
  • feedbackd: Udev rule for AW86927 (FP5) (MR)
  • xdpm: Allow pure Rust build for use in xdg-d-p-phosh (MR)

Help Development

If you want to support my work see donations.

Comments?

Join the Fediverse thread

01 September, 2025 06:05AM