Debian is a trademark of Software in the Public Interest, Inc. This site is operated independently in the spirit of point three of the Debian Social Contract, which tells us: "We will not hide problems."


April 25, 2025


Daniel Pocock

In defence of JD Vance, death of Pope Francis

When the sad news appeared about the death of Pope Francis on Easter Monday, people were quick to politicize the tragedy with references to his last official duty, a meeting with US Vice President JD Vance.

What about that dossier on abuse that changed hands on Good Friday? It fits an awkward pattern of dossiers and deaths.

Obituaries and commentaries have appeared far and wide. Many of them begin by praising the late Pope's work on climate and the poor and then they go on to politely but firmly express concern that he could have done more for victims of abuse. We need to look at the other Francis.

Walter Francis Pocock

Walter Francis Pocock was born on 28 February 1938. He went to the former St Patrick's college in East Melbourne and then became a Catholic school teacher. He rose through the ranks of schoolteachers to become a headmaster and then he went on to work in administration at the Catholic Education Office (CEO), part of the Archdiocese of Melbourne.

On 28 February 1998, the birthday of Walter Francis Pocock, the future Pope Francis became Archbishop of Buenos Aires. On 28 February 2013, Pope Benedict XVI resigned, creating the opportunity for him to become Pope.

The brother of Walter Francis Pocock is my father, who attended Xavier College, one of Australia's most prominent Jesuit schools. Pope Francis was the first Jesuit pope. Dad and his brother both worked at the Catholic Education Office.

My uncle died in 2011. One of the most visible parts of his legacy is the work of his daughters. Bernice became a nurse back in the 1990s. She has worked her way up to the top of her profession, becoming the Health Complaints Commissioner for the State of Victoria.

In that role, she personally signs each of the prohibition orders.

Australia's Royal Commission into Institutional Abuse published a huge archive of internal documents from the Catholic Church. Looking through those documents, we can see that the church removed fewer abusers in fifty years than Bernice has removed in just two years.

The connections don't stop there of course. Pope Francis had a special relationship with the State of Victoria, having invited Cardinal George Pell, who started his career in Ballarat, to be the treasurer of the Vatican.

Let's have a fresh look at the history and how it intersects with deaths of both Cardinal Pell and Pope Francis.

17 April 2011, the day that Adrian von Bidder died, was both the day that Carla and I married and it was Palm Sunday, the beginning of Holy Week.

In December 2018, Cardinal Pell was convicted of abuse, although he was subsequently acquitted on appeal. On one of the most important Christian holidays, the night before Christmas, the Debianists went nuts spreading rumors about my family and abuse. Some of them are still nuts today. Sadly, more of these people have died.

In my last call with Dad, he was clearly disturbed about the Pell situation and the proximity of our family to these matters.

On 17 April 2019, which was Holy Wednesday, the current Archbishop of Melbourne put out a public statement about the grief many people felt throughout the Archdiocese. This was particularly profound for those who worked for the church. People like my father and his brother would see Cardinal Pell in the office from time to time when he was Archbishop of Melbourne. It would have been even more inconvenient for my father as this statement came out on our wedding anniversary.

Dad died on 20 April 2019, which was Easter Saturday.

Pope Benedict died on 31 December 2022, once again, in the Christmas season. Cardinal Pell appeared in news reports and that prompted me to have a fresh look at the evidence and see if I had missed anything.

By pure coincidence, I found myself in the office of the Carabinieri on 10 January 2023, as a witness and survivor talking about the blackmail in Debianism and the similarities to what I observed in the wider context of institutional abuse. I handed over a dossier of approximately 90 pages. Cardinal Pell's name was mentioned somewhere on the first page. I didn't know that the Cardinal was having surgery in the same hour. He died later that evening.

I created a further dossier, a similar size, containing emails from debian-private suggesting that some of the Debian suicides and accidental deaths may have been avoidable. That dossier was sent to the Cambridgeshire coroner on 9 September 2023, the first day of DebConf23. In the middle of the conference, Abraham Raji died in an avoidable accident. I had mentioned Cardinal Pell's name in the email to the coroner and it appears again inside the dossier.

This year, in March 2025, I published a blog pointing out the connection between Adrian von Bidder's death and Palm Sunday. The anniversary of the death was Holy Thursday, the day that Judas betrayed Jesus at the last supper. I previously wrote a blog about that phenomenon too.

At the same time as publishing the blog about Palm Sunday, I had been preparing a new dossier about the intersection of the abuse crisis with the health system. It dealt with the cases I'm familiar with, mental health patients, the military, a priest using my name and the remarkable similarities that have emerged in modern-day cults like Debianism.

Pope Francis is featured on the final page, along with the Swiss Guard. The name of Cardinal Pell was repeated forty-seven times in this dossier. I included a couple of pictures of the late Cardinal Pell: in one of them he is holding a galero and in another he is wearing his zucchetto and staring at Shadow Man.

Cardinal George Pell, Shadow Man, zucchetto, Fedora

 

The draft was handed over to an expert in the French health system on Good Friday, the day that Jesus was crucified.

Three dossiers, three deaths.

Like my father, I also graduated from the Jesuit school Xavier College.

Some of the dossier is confidential so I'm only going to share the pages with the conclusion. Even that had to be redacted. Nonetheless, it is telling that I used a quote from Ian Lawther about his Fair dinkum letter to the Pope. The letter is published on the web site of the Parliament of Victoria. The letter was dated 24 April 2008 and it appears to have been sent two days later, 26 April 2008. 26 April 2025 is the funeral of Pope Francis.

Ian Lawther's letter is a chilling indictment of the culture of many institutions; this is not only about the crisis in the Catholic Church. Reading this paragraph:

You were not the one who had to take your son to hospital at 2.30 in the morning, because he had broken three bones in his hand, in a fit of anger and guilt, because his mind had been so poisoned, that the priest was able to convince him everything was his fault.

I couldn't help thinking about the way people are being brainwashed in groups like Debian and GNOME. For example, look at how Dr Norbert Preining was brainwashed into writing a self-deprecating public confession. Look at the discussions around GNOME this week: for example, Tobias Bernard writes that "nobody involved is against Codes of Conduct", in other words, victims can't see that amateur-hour Codes of Conduct are nothing more than a tool for denouncing, dividing and silencing people. When people put these amateur-hour Codes of Conduct on a pedestal, the victims end up tying themselves in knots. There have been a number of unexplained suicides in the open source software ecosystem and I can't help wondering if some of those people had been shamed with Code of Conduct gaslighting. Look at the horrible message they sent to my last Outreachy intern; it took three months before she told me what they were doing to her.

Read the final pages of the dossier that changed hands on Good Friday.

Can we say JD Vance is off the hook?

Please see the chronological history of how the Debian harassment and abuse culture evolved.

Cardinal George Pell, Abraham Raji, Pope Francis

Fact check

Did the Debianists graffiti Dr Jacob Appelbaum's home when they were falsifying abuse accusations against him?

Compare the graffiti to the signature on real abuse cases: the prohibition orders.

When Debianists and social media addicts make stuff up, it makes it harder to believe real victims.

Please see the chronological history of how the Debian harassment and abuse culture evolved.

 

prohibition order, abuse

 

Jacob Appelbaum, graffiti, rapist

25 April, 2025 09:00PM

Simon Josefsson

GitLab Runner with Rootless Privilege-less Podman on riscv64

I host my own GitLab CI/CD runners, and find that having coverage on the riscv64 CPU architecture is useful for testing things. The HiFive Premier P550 seems to be a common hardware choice. The P550 can be purchased online. You also need a (mini-)ATX chassis, power supply (~500W is more than sufficient), PCIe-to-M.2 adapter and an NVMe storage device. Total cost per machine was around $8k/€8k for me. Assembly was simple: bolt everything together, connect ATX power, and connect the cables for the front panel, USB and audio. Be sure to toggle the physical power switch on the P550 before you close the box. The front-panel power button will start your machine. There is a P550 user manual available.

Below I will guide you through installing the GitLab Runner on the pre-installed Ubuntu 24.04 that ships with the P550, and configuring it to use Podman in rootless mode. Presumably you want to migrate to some other OS instead; hey Trisquel 13 riscv64, I'm waiting for you! I wouldn't recommend using this machine for anything sensitive: there is an awful lot of non-free and/or vendor-specific software installed, and the hardware itself is young. I am not aware of any riscv64 hardware that has been proven to run a libre OS; all of it appears to require special patches and/or non-mainline kernels.

  • Login on console using username ‘ubuntu‘ and password ‘ubuntu‘. You will be asked to change the password, so do that.
  • Start a terminal, gain root with sudo -i and change the hostname:
    echo jas-p550-01 > /etc/hostname
  • Connect ethernet and run: apt-get update && apt-get dist-upgrade -u.
  • If your system doesn't have a valid MAC address (it shows as '8c:00:00:00:00:00' if you run 'ip a'), you can fix this to avoid collisions when you install multiple P550's on the same network. Connect the Debug USB-C connector on the back to one of the host's USB-A ports. Use minicom (Ctrl-A X to exit) to talk to it.
apt-get install minicom
minicom -o -D /dev/ttyUSB3
#cmd: ifconfig
inet 192.168.0.2 netmask: 255.255.240.0
gatway 192.168.0.1
SOM_Mac0: 8c:00:00:00:00:00
SOM_Mac1: 8c:00:00:00:00:00
MCU_Mac: 8c:00:00:00:00:00
#cmd: setmac 0 CA:FE:42:17:23:00
The MAC setting will be valid after rebooting the carrier board!!!
MAC[0] addr set to CA:FE:42:17:23:00(ca:fe:42:17:23:0)
#cmd: setmac 1 CA:FE:42:17:23:01
The MAC setting will be valid after rebooting the carrier board!!!
MAC[1] addr set to CA:FE:42:17:23:01(ca:fe:42:17:23:1)
#cmd: setmac 2 CA:FE:42:17:23:02
The MAC setting will be valid after rebooting the carrier board!!!
MAC[2] addr set to CA:FE:42:17:23:02(ca:fe:42:17:23:2)
#cmd:
  • For reference, if you wish to interact with the MCU you may do that via OpenOCD and telnet, like the following (as root on the P550). You need to have the Debug USB-C connected to a USB-A host port.
apt-get install openocd
wget https://raw.githubusercontent.com/sifiveinc/hifive-premier-p550-tools/refs/heads/master/mcu-firmware/stm32_openocd.cfg
echo 'acc115d283ff8533d6ae5226565478d0128923c8a479a768d806487378c5f6c3 stm32_openocd.cfg' | sha256sum -c
openocd -f stm32_openocd.cfg &
telnet localhost 4444
...
  • Reboot the machine and log in remotely from your laptop. Gain root, set up SSH public-key authentication, and disable SSH password logins.
echo 'ssh-ed25519 AAA...' > ~/.ssh/authorized_keys
sed -i 's;^#PasswordAuthentication.*;PasswordAuthentication no;' /etc/ssh/sshd_config
service ssh restart
  • With an NVMe device in the PCIe slot, create an LVM partition where the GitLab runner will live:
parted /dev/nvme0n1 print
blkdiscard /dev/nvme0n1
parted /dev/nvme0n1 mklabel gpt
parted /dev/nvme0n1 mkpart jas-p550-nvm-02 ext2 1MiB 100% align-check optimal 1
parted /dev/nvme0n1 set 1 lvm on
partprobe /dev/nvme0n1
pvcreate /dev/nvme0n1p1
vgcreate vg0 /dev/nvme0n1p1
lvcreate -L 400G -n glr vg0
mkfs.ext4 -L glr /dev/mapper/vg0-glr

Now with a reasonable setup ready, let's install the GitLab Runner. The following is adapted from gitlab-runner's official installation documentation. The normal installation flow doesn't work because they don't publish riscv64 apt repositories, so you will have to perform upgrades manually.

# wget https://s3.dualstack.us-east-1.amazonaws.com/gitlab-runner-downloads/latest/deb/gitlab-runner_riscv64.deb
# wget https://s3.dualstack.us-east-1.amazonaws.com/gitlab-runner-downloads/latest/deb/gitlab-runner-helper-images.deb
wget https://gitlab-runner-downloads.s3.amazonaws.com/v17.11.0/deb/gitlab-runner_riscv64.deb
wget https://gitlab-runner-downloads.s3.amazonaws.com/v17.11.0/deb/gitlab-runner-helper-images.deb
echo '68a4c2a4b5988a5a5bae019c8b82b6e340376c1b2190228df657164c534bc3c3 gitlab-runner-helper-images.deb' | sha256sum -c
echo 'ee37dc76d3c5b52e4ba35cf8703813f54f536f75cfc208387f5aa1686add7a8c gitlab-runner_riscv64.deb' | sha256sum -c
dpkg -i gitlab-runner-helper-images.deb gitlab-runner_riscv64.deb
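Since there is no apt repository, every future upgrade repeats this download-verify-install dance. The verify step is worth keeping honest; here is a small sketch of my own (the helper function and demo file are mine, not from gitlab-runner) showing the checksum-before-install pattern end to end:

```shell
#!/bin/sh
# Hypothetical helper: refuse to proceed with a .deb whose sha256 doesn't
# match the pinned value, mirroring the manual sha256sum -c step above.
set -eu

verify_deb() {
    deb="$1"; expected="$2"
    # sha256sum -c expects "<hash>  <file>" (two spaces) on stdin
    echo "${expected}  ${deb}" | sha256sum -c -
}

# Demo on a throwaway file so the pattern is visible without a download
tmp=$(mktemp)
printf 'not a real deb\n' > "$tmp"
sum=$(sha256sum "$tmp" | awk '{print $1}')
verify_deb "$tmp" "$sum" && echo "checksum ok"
rm -f "$tmp"
```

A mismatching checksum makes `sha256sum -c` exit non-zero, so under `set -e` the script aborts before any `dpkg -i`.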

Remember the NVMe device? Let's not forget to use it, to avoid wear and tear on the internal MMC root disk. Do this now, before any files appear in /home/gitlab-runner, or you will have to move them manually.

gitlab-runner stop
echo 'LABEL=glr /home/gitlab-runner ext4 defaults,noatime 0 1' >> /etc/fstab
systemctl daemon-reload
mount /home/gitlab-runner

Next, register the runner with your GitLab instance. Replace the token glrt-REPLACEME below with the registration token you get from your GitLab project's Settings -> CI/CD -> Runners -> New project runner. I used the tag 'riscv64' and the hostname as the runner description.

gitlab-runner register --non-interactive --url https://gitlab.com --token glrt-REPLACEME --name $(hostname) --executor docker --docker-image debian:stable

Next we install Podman and configure gitlab-runner to use it as a non-root user.

apt-get install podman
gitlab-runner stop
usermod --add-subuids 100000-165535 --add-subgids 100000-165535 gitlab-runner

You need to run some commands as the gitlab-runner user, but unfortunately some interaction between sudo/su and pam_systemd makes this harder than it should be. So you have to set up SSH for the user and log in via SSH to run the commands. Does anyone know of a better way to do this?

# on the p550:
cp -a /root/.ssh/ /home/gitlab-runner/
chown -R gitlab-runner:gitlab-runner /home/gitlab-runner/.ssh/
# on your laptop:
ssh gitlab-runner@jas-p550-01
systemctl --user --now enable podman.socket
systemctl --user --now start podman.socket
loginctl enable-linger gitlab-runner
systemctl status --user podman.socket

We modify /etc/gitlab-runner/config.toml as follows; replace 997 with the user id shown by the systemctl status output above. See the feature flags documentation for more information.

[[runners]]
environment = ["FF_NETWORK_PER_BUILD=1", "FF_USE_FASTZIP=1"]
...
[runners.docker]
host = "unix:///run/user/997/podman/podman.sock"

Note that unlike the documentation I do not add the ‘privileged = true‘ parameter here. I will come back to this later.
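Hard-coding uid 997 invites drift if the machine is ever reinstalled. A small sketch of my own (not from the gitlab-runner docs) derives the socket path from whatever uid the gitlab-runner user actually has, falling back to the current user so the snippet runs anywhere:

```shell
# Hypothetical: compute the rootless podman socket path for the config.toml
# host= line. "gitlab-runner" is the user from this guide; if it doesn't
# exist (e.g. on your laptop), fall back to the current user for the demo.
uid=$(id -u gitlab-runner 2>/dev/null || id -u)
sock="unix:///run/user/${uid}/podman/podman.sock"
printf 'host = "%s"\n' "$sock"
```

You could paste the printed line into `[runners.docker]` instead of editing the uid by hand.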

Restart the system, then confirm that pushing a .gitlab-ci.yml with a job that uses the riscv64 tag, like the following, works properly.

dump-env-details-riscv64:
  stage: build
  image: riscv64/debian:testing
  tags: [ riscv64 ]
  script:
    - set

Your gitlab-runner should now be receiving jobs and running them in rootless podman. You may view the log using journalctl as follows:

journalctl --follow _SYSTEMD_UNIT=gitlab-runner.service

To stop the graphical environment and disable some unnecessary services, you can use:

systemctl set-default multi-user.target
systemctl disable openvpn cups cups-browsed sssd colord

At this point, things were working fine and I was running many successful builds. Now starts the fun part with operational aspects!

I had a problem when running buildah to build a new container from within a job, and noticed that aardvark-dns was crashing. You can use the Debian ‘aardvark-dns‘ binary instead.

wget http://ftp.de.debian.org/debian/pool/main/a/aardvark-dns/aardvark-dns_1.14.0-3_riscv64.deb
echo 'df33117b6069ac84d3e97dba2c59ba53775207dbaa1b123c3f87b3f312d2f87a aardvark-dns_1.14.0-3_riscv64.deb' | sha256sum -c
mkdir t
cd t
dpkg -x ../aardvark-dns_1.14.0-3_riscv64.deb .
mv /usr/lib/podman/aardvark-dns /usr/lib/podman/aardvark-dns.ubuntu
mv usr/lib/podman/aardvark-dns /usr/lib/podman/aardvark-dns.debian
ln -s aardvark-dns.debian /usr/lib/podman/aardvark-dns  # make the Debian binary the active one

My setup uses podman in rootless mode without passing the --privileged parameter or any --cap-add parameters to add non-default capabilities. This is sufficient for most builds. However if you try to create a container using buildah from within a job, you may see errors like this:

Writing manifest to image destination
Error: mounting new container: mounting build container "8bf1ec03d967eae87095906d8544f51309363ddf28c60462d16d73a0a7279ce1": creating overlay mount to /var/lib/containers/storage/overlay/23785e20a8bac468dbf028bf524274c91fbd70dae195a6cdb10241c345346e6f/merged, mount_data="lowerdir=/var/lib/containers/storage/overlay/l/I3TWYVYTRZ4KVYCT6FJKHR3WHW,upperdir=/var/lib/containers/storage/overlay/23785e20a8bac468dbf028bf524274c91fbd70dae195a6cdb10241c345346e6f/diff,workdir=/var/lib/containers/storage/overlay/23785e20a8bac468dbf028bf524274c91fbd70dae195a6cdb10241c345346e6f/work,volatile": using mount program /usr/bin/fuse-overlayfs: unknown argument ignored: lazytime
fuse: device not found, try 'modprobe fuse' first
fuse-overlayfs: cannot mount: No such file or directory
: exit status 1

According to the GitLab runner security considerations, you should not enable the 'privileged = true' parameter, and the alternative appears to be to run Podman as root with privileged=false. Indeed, setting privileged=true as in the following example solves the problem, as I suppose running as root would too.

[[runners]]
environment = ["FF_NETWORK_PER_BUILD=1", "FF_USE_FASTZIP=1"]
[runners.docker]
privileged = true

Can we do better? After some experimentation, and reading open issues with suggested capabilities and configuration snippets, I ended up with the following configuration. It runs podman in rootless mode (as the gitlab-runner user) without --privileged, but adds the CAP_SYS_ADMIN capability and exposes the /dev/fuse device. Still, this is running as a non-root user on the machine, so I think it is an improvement over using --privileged and over running podman as root.

[[runners]]
environment = ["FF_NETWORK_PER_BUILD=1", "FF_USE_FASTZIP=1"]
[runners.docker]
host = "unix:///run/user/997/podman/podman.sock"
privileged = false
cap_add = ["SYS_ADMIN"]
devices = ["/dev/fuse"]
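For intuition, the extra keys in that config roughly map onto flags you could pass to podman run by hand. This is only an illustration of the mapping; the image and command are examples of mine, not taken from the post:

```shell
# cap_add = ["SYS_ADMIN"]  ->  --cap-add SYS_ADMIN
# devices = ["/dev/fuse"]  ->  --device /dev/fuse
# Print (rather than run) the roughly equivalent manual invocation:
cmd='podman run --rm --cap-add SYS_ADMIN --device /dev/fuse riscv64/debian:testing buildah --version'
echo "$cmd"
```

Seeing the flags spelled out makes the security trade-off explicit: the job container gets CAP_SYS_ADMIN and the fuse device, and nothing else beyond the rootless defaults.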

Still, I worry about the security properties of such a setup, so I only enable these settings for a separately configured runner instance that I use when I need this docker-in-docker (oh, I meant buildah-in-podman) functionality. I found one article discussing rootless Podman without the privileged flag that suggests --isolation=chroot, but I have yet to make this work. Suggestions for improvement are welcome.

Happy Riscv64 Building!

25 April, 2025 06:30PM by simon


Bits from Debian

Debian Project Leader election 2025 is over, Andreas Tille re-elected!

The voting period and tally of votes for the Debian Project Leader election has just concluded and the winner is Andreas Tille, who has been elected for the second time. Congratulations!

Out of a total of 1,030 developers, 362 voted. As usual in Debian, the voting method used was the Condorcet method.

More information about the result is available in the Debian Project Leader Elections 2025 page.

Many thanks to Andreas Tille, Gianfranco Costamagna, Julian Andres Klode, and Sruthi Chandran for their campaigns, and to our Developers for voting.

The new term for the project leader started on April 21st and will expire on April 20th 2026.

25 April, 2025 10:05AM by Jean-Pierre Giraud

April 24, 2025


Dirk Eddelbuettel

RQuantLib 0.4.26 on CRAN: Small Updates

A new minor release 0.4.26 of RQuantLib arrived on CRAN this morning, and has just now been uploaded to Debian too.

QuantLib is a rather comprehensive free/open-source library for quantitative finance. RQuantLib connects (some parts of) it to the R environment and language, and has been part of CRAN for nearly twenty-two years (!!) as it was one of the first packages I uploaded to CRAN.

This release of RQuantLib brings updated Windows build support taking advantage of updated Rtools, thanks to a PR by Tomas Kalibera. We also updated expected results for three of the ‘schedule’ tests (in a way that is dependent on the upstream library version) as the just-released QuantLib 1.38 differs slightly.

Changes in RQuantLib version 0.4.26 (2025-04-24)

  • Use system QuantLib (if found by pkg-config) on Windows too (Tomas Kalibera in #192)

  • Accommodate same test changes for schedules in QuantLib 1.38

Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

24 April, 2025 10:27PM


Jonathan McDowell

Local Voice Assistant Step 1: An ATOM Echo voice satellite

Back when I set up my home automation I ended up with one piece that used an external service: Amazon Alexa. I'd rather not have done this, but voice control is extremely convenient, both for us and for guests. Since then Home Assistant has done a lot of work developing the capability of a local voice assistant - 2023 was their Year of Voice. I've had brief looks at this in the past, but never quite had the time to dig into setting it up, and was put off by the fact a lot of the setup instructions were just "Download our prebuilt components". While I admire the efforts to get Home Assistant fully packaged for Debian, I accept that's a tricky proposition and settle for running it in a venv on a Debian stable container. Voice requires a lot more binary components, and I want to have "voice satellites" in more than one location, so I set about trying to understand a bit better what I was deploying, and actually building the binary bits myself.

This is the start of a write-up of that. I’ll break it into a bunch of posts, trying to cover one bit in each, because otherwise this will get massive. Let’s start with some requirements:

  • All local processing; no call-outs to external services
  • Ability to have multiple voice satellites in the house
  • A desire to do wake word detection on the satellites, to avoid lots of network audio traffic all the time
  • As clean an install on a Debian stable based system as possible
  • Binaries built locally
  • No need for a GPU

My house server is an AMD Ryzen 7 5700G, so my expectation was that I’d have enough local processing power to be able to do this. That turned out to be a valid assumption - speech to text really has come a long way in recent years. I’m still running Home Assistant 2024.3.3 - the last one that supports (but complains about) Python 3.11. Trixie has started the freeze process, so once it releases I’ll look at updating the HA install. For now what I have has turned out to be Good Enough, but I know there have been improvements upstream I’m missing.

Finally, before I get into the details, I should point out that if you just want to get started with a voice assistant on Home Assistant and don’t care about what’s under the hood, there are a bunch of more user friendly details on Home Assistant’s site itself, and they have pre-built images you can just deploy.

My first step was sorting out a “voice satellite”. This is the device that actually has a microphone and speaker and communicates with the main Home Assistant setup. I’d seen the post about a $13 voice assistant, and as a result had an ATOM Echo sitting on my desk I hadn’t got around to setting up.

Here I'm glossing over exactly what's going on under the hood, even though we're compiling locally. This is a constrained embedded device, and while I'm familiar with the ESP32 IDF build system I just accepted that using ESPHome and letting it do its thing was the quickest way to get up and running. It is possible to do this all via the web with a pre-built image, but I wanted to change the wake word to "Hey Jarvis" rather than the default "Okay Nabu", and that was a good reason to bother doing a local build. We'll get into actually building a voice satellite on Debian in later posts.

I started with the default upstream assistant config and tweaked it a little for my setup:

diff of my configuration tweaks
$ diff -u m5stack-atom-echo.yaml assistant.yaml
--- m5stack-atom-echo.yaml    2025-04-18 13:41:21.812766112 +0100
+++ assistant.yaml  2025-01-20 17:33:24.918585244 +0000
@@ -1,7 +1,7 @@
 substitutions:
-  name: m5stack-atom-echo
+  name: study-atom-echo
   friendly_name: M5Stack Atom Echo
-  micro_wake_word_model: okay_nabu  # alexa, hey_jarvis, hey_mycroft are also supported
+  micro_wake_word_model: hey_jarvis  # alexa, hey_jarvis, hey_mycroft are also supported
 
 esphome:
   name: ${name}
@@ -16,15 +16,26 @@
     version: 4.4.8
     platform_version: 5.4.0
 
+# Enable logging
 logger:
+
+# Enable Home Assistant API
 api:
+  encryption:
+    key: "TGlrZVRoaXNJc1JlYWxseUl0Rm9vbGlzaFBlb3BsZSE="
 
 ota:
   - platform: esphome
-    id: ota_esphome
+    password: "itsnotarealthing"
 
 wifi:
+  ssid: "My Wifi Goes Here"
+  password: "AndThePasswordGoesHere"
+
+  # Enable fallback hotspot (captive portal) in case wifi connection fails
   ap:
+    ssid: "Study-Atom-Echo Fallback Hotspot"
+    password: "ThisIsRandom"
 
 captive_portal:


(I note that the current upstream config has moved on a bit since I first did this, but I double checked the above instructions still work at the time of writing. I end up pinning ESPHome to the right version below due to that.)

It turns out to be fairly easy to setup ESPHome in a venv and get it to build + flash the image for you:

Instructions for building + flashing ESPHome to ATOM Echo
noodles@sevai:~$ python3 -m venv esphome-atom-echo
noodles@sevai:~$ . esphome-atom-echo/bin/activate
(esphome-atom-echo) noodles@sevai:~$ cd esphome-atom-echo/
(esphome-atom-echo) noodles@sevai:~/esphome-atom-echo$  pip install esphome==2024.12.4
Collecting esphome==2024.12.4
  Using cached esphome-2024.12.4-py3-none-any.whl (4.1 MB)
…
Successfully installed FontTools-4.57.0 PyYAML-6.0.2 appdirs-1.4.4 attrs-25.3.0 bottle-0.13.2 defcon-0.12.1 esphome-2024.12.4 esphome-dashboard-20241217.1 freetype-py-2.5.1 fs-2.4.16 gflanguages-0.7.3 glyphsLib-6.10.1 glyphsets-1.0.0 openstep-plist-0.5.0 pillow-10.4.0 platformio-6.1.16 protobuf-3.20.3 puremagic-1.27 ufoLib2-0.17.1 unicodedata2-16.0.0
(esphome-atom-echo) noodles@sevai:~/esphome-atom-echo$ esphome compile assistant.yaml 
INFO ESPHome 2024.12.4
INFO Reading configuration assistant.yaml...
INFO Updating https://github.com/esphome/esphome.git@pull/5230/head
INFO Updating https://github.com/jesserockz/esphome-components.git@None
…
Linking .pioenvs/study-atom-echo/firmware.elf
/home/noodles/.platformio/packages/toolchain-xtensa-esp32@8.4.0+2021r2-patch5/bin/../lib/gcc/xtensa-esp32-elf/8.4.0/../../../../xtensa-esp32-elf/bin/ld: missing --end-group; added as last command line option
RAM:   [=         ]  10.6% (used 34632 bytes from 327680 bytes)
Flash: [========  ]  79.8% (used 1463813 bytes from 1835008 bytes)
Building .pioenvs/study-atom-echo/firmware.bin
Creating esp32 image...
Successfully created esp32 image.
esp32_create_combined_bin([".pioenvs/study-atom-echo/firmware.bin"], [".pioenvs/study-atom-echo/firmware.elf"])
Wrote 0x176fb0 bytes to file /home/noodles/esphome-atom-echo/.esphome/build/study-atom-echo/.pioenvs/study-atom-echo/firmware.factory.bin, ready to flash to offset 0x0
esp32_copy_ota_bin([".pioenvs/study-atom-echo/firmware.bin"], [".pioenvs/study-atom-echo/firmware.elf"])
==================================================================================== [SUCCESS] Took 130.57 seconds ====================================================================================
INFO Successfully compiled program.
(esphome-atom-echo) noodles@sevai:~/esphome-atom-echo$ esphome upload --device /dev/serial/by-id/usb-Hades2001_M5stack_9552AF8367-if00-port0 assistant.yaml 
INFO ESPHome 2024.12.4
INFO Reading configuration assistant.yaml...
INFO Updating https://github.com/esphome/esphome.git@pull/5230/head
INFO Updating https://github.com/jesserockz/esphome-components.git@None
…
INFO Upload with baud rate 460800 failed. Trying again with baud rate 115200.
esptool.py v4.7.0
Serial port /dev/serial/by-id/usb-Hades2001_M5stack_9552AF8367-if00-port0
Connecting....
Chip is ESP32-PICO-D4 (revision v1.1)
Features: WiFi, BT, Dual Core, 240MHz, Embedded Flash, VRef calibration in efuse, Coding Scheme None
Crystal is 40MHz
MAC: 64:b7:08:8a:1b:c0
Uploading stub...
Running stub...
Stub running...
Configuring flash size...
Auto-detected Flash size: 4MB
Flash will be erased from 0x00010000 to 0x00176fff...
Flash will be erased from 0x00001000 to 0x00007fff...
Flash will be erased from 0x00008000 to 0x00008fff...
Flash will be erased from 0x00009000 to 0x0000afff...
Compressed 1470384 bytes to 914252...
Wrote 1470384 bytes (914252 compressed) at 0x00010000 in 82.0 seconds (effective 143.5 kbit/s)...
Hash of data verified.
Compressed 25632 bytes to 16088...
Wrote 25632 bytes (16088 compressed) at 0x00001000 in 1.8 seconds (effective 113.1 kbit/s)...
Hash of data verified.
Compressed 3072 bytes to 134...
Wrote 3072 bytes (134 compressed) at 0x00008000 in 0.1 seconds (effective 383.7 kbit/s)...
Hash of data verified.
Compressed 8192 bytes to 31...
Wrote 8192 bytes (31 compressed) at 0x00009000 in 0.1 seconds (effective 813.5 kbit/s)...
Hash of data verified.

Leaving...
Hard resetting via RTS pin...
INFO Successfully uploaded program.


And then you can watch it boot (this is mine already configured up in Home Assistant):

Watching the ATOM Echo boot
$ picocom --quiet --imap lfcrlf --baud 115200 /dev/serial/by-id/usb-Hades2001_M5stack_9552AF8367-if00-port0
I (29) boot: ESP-IDF 4.4.8 2nd stage bootloader
I (29) boot: compile time 17:31:08
I (29) boot: Multicore bootloader
I (32) boot: chip revision: v1.1
I (36) boot.esp32: SPI Speed      : 40MHz
I (40) boot.esp32: SPI Mode       : DIO
I (45) boot.esp32: SPI Flash Size : 4MB
I (49) boot: Enabling RNG early entropy source...
I (55) boot: Partition Table:
I (58) boot: ## Label            Usage          Type ST Offset   Length
I (66) boot:  0 otadata          OTA data         01 00 00009000 00002000
I (73) boot:  1 phy_init         RF data          01 01 0000b000 00001000
I (81) boot:  2 app0             OTA app          00 10 00010000 001c0000
I (88) boot:  3 app1             OTA app          00 11 001d0000 001c0000
I (96) boot:  4 nvs              WiFi data        01 02 00390000 0006d000
I (103) boot: End of partition table
I (107) esp_image: segment 0: paddr=00010020 vaddr=3f400020 size=58974h (362868) map
I (247) esp_image: segment 1: paddr=0006899c vaddr=3ffb0000 size=03400h ( 13312) load
I (253) esp_image: segment 2: paddr=0006bda4 vaddr=40080000 size=04274h ( 17012) load
I (260) esp_image: segment 3: paddr=00070020 vaddr=400d0020 size=f5cb8h (1006776) map
I (626) esp_image: segment 4: paddr=00165ce0 vaddr=40084274 size=112ach ( 70316) load
I (665) boot: Loaded app from partition at offset 0x10000
I (665) boot: Disabling RNG early entropy source...
I (677) cpu_start: Multicore app
I (677) cpu_start: Pro cpu up.
I (677) cpu_start: Starting app cpu, entry point is 0x400825c8
I (0) cpu_start: App cpu up.
I (695) cpu_start: Pro cpu start user code
I (695) cpu_start: cpu freq: 160000000
I (695) cpu_start: Application information:
I (700) cpu_start: Project name:     study-atom-echo
I (705) cpu_start: App version:      2024.12.4
I (710) cpu_start: Compile time:     Apr 18 2025 17:29:39
I (716) cpu_start: ELF file SHA256:  1db4989a56c6c930...
I (722) cpu_start: ESP-IDF:          4.4.8
I (727) cpu_start: Min chip rev:     v0.0
I (732) cpu_start: Max chip rev:     v3.99 
I (737) cpu_start: Chip rev:         v1.1
I (742) heap_init: Initializing. RAM available for dynamic allocation:
I (749) heap_init: At 3FFAE6E0 len 00001920 (6 KiB): DRAM
I (755) heap_init: At 3FFB8748 len 000278B8 (158 KiB): DRAM
I (761) heap_init: At 3FFE0440 len 00003AE0 (14 KiB): D/IRAM
I (767) heap_init: At 3FFE4350 len 0001BCB0 (111 KiB): D/IRAM
I (774) heap_init: At 40095520 len 0000AAE0 (42 KiB): IRAM
I (781) spi_flash: detected chip: gd
I (784) spi_flash: flash io: dio
I (790) cpu_start: Starting scheduler on PRO CPU.
I (0) cpu_start: Starting scheduler on APP CPU.
[I][logger:171]: Log initialized
[C][safe_mode:079]: There have been 0 suspected unsuccessful boot attempts
[D][esp32.preferences:114]: Saving 1 preferences to flash...
[D][esp32.preferences:143]: Saving 1 preferences to flash: 0 cached, 1 written, 0 failed
[I][app:029]: Running through setup()...
[C][esp32_rmt_led_strip:021]: Setting up ESP32 LED Strip...
[D][template.select:014]: Setting up Template Select
[D][template.select:023]: State from initial (could not load stored index): On device
[D][select:015]: 'Wake word engine location': Sending state On device (index 1)
[D][esp-idf:000]: I (100) gpio: GPIO[39]| InputEn: 1| OutputEn: 0| OpenDrain: 0| Pullup: 0| Pulldown: 0| Intr:0 

[D][binary_sensor:034]: 'Button': Sending initial state OFF
[C][light:021]: Setting up light 'M5Stack Atom Echo 8a1bc0'...
[D][light:036]: 'M5Stack Atom Echo 8a1bc0' Setting:
[D][light:041]:   Color mode: RGB
[D][template.switch:046]:   Restored state ON
[D][switch:012]: 'Use listen light' Turning ON.
[D][switch:055]: 'Use listen light': Sending state ON
[D][light:036]: 'M5Stack Atom Echo 8a1bc0' Setting:
[D][light:047]:   State: ON
[D][light:051]:   Brightness: 60%
[D][light:059]:   Red: 100%, Green: 89%, Blue: 71%
[D][template.switch:046]:   Restored state OFF
[D][switch:016]: 'timer_ringing' Turning OFF.
[D][switch:055]: 'timer_ringing': Sending state OFF
[C][i2s_audio:028]: Setting up I2S Audio...
[C][i2s_audio.microphone:018]: Setting up I2S Audio Microphone...
[C][i2s_audio.speaker:096]: Setting up I2S Audio Speaker...
[C][wifi:048]: Setting up WiFi...
[D][esp-idf:000]: I (206) wifi:
[D][esp-idf:000]: wifi driver task: 3ffc8544, prio:23, stack:6656, core=0
[D][esp-idf:000]: 

[D][esp-idf:000][wifi]: I (1238) system_api: Base MAC address is not set

[D][esp-idf:000][wifi]: I (1239) system_api: read default base MAC address from EFUSE

[D][esp-idf:000][wifi]: I (1274) wifi:
[D][esp-idf:000][wifi]: wifi firmware version: ff661c3
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1274) wifi:
[D][esp-idf:000][wifi]: wifi certification version: v7.0
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1286) wifi:
[D][esp-idf:000][wifi]: config NVS flash: enabled
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1297) wifi:
[D][esp-idf:000][wifi]: config nano formating: disabled
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1317) wifi:
[D][esp-idf:000][wifi]: Init data frame dynamic rx buffer num: 32
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1338) wifi:
[D][esp-idf:000][wifi]: Init static rx mgmt buffer num: 5
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1348) wifi:
[D][esp-idf:000][wifi]: Init management short buffer num: 32
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1368) wifi:
[D][esp-idf:000][wifi]: Init dynamic tx buffer num: 32
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1389) wifi:
[D][esp-idf:000][wifi]: Init static rx buffer size: 1600
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1399) wifi:
[D][esp-idf:000][wifi]: Init static rx buffer num: 10
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1419) wifi:
[D][esp-idf:000][wifi]: Init dynamic rx buffer num: 32
[D][esp-idf:000][wifi]: 

[D][esp-idf:000]: I (1441) wifi_init: rx ba win: 6

[D][esp-idf:000]: I (1441) wifi_init: tcpip mbox: 32

[D][esp-idf:000]: I (1450) wifi_init: udp mbox: 6

[D][esp-idf:000]: I (1450) wifi_init: tcp mbox: 6

[D][esp-idf:000]: I (1460) wifi_init: tcp tx win: 5760

[D][esp-idf:000]: I (1471) wifi_init: tcp rx win: 5760

[D][esp-idf:000]: I (1481) wifi_init: tcp mss: 1440

[D][esp-idf:000]: I (1481) wifi_init: WiFi IRAM OP enabled

[D][esp-idf:000]: I (1491) wifi_init: WiFi RX IRAM OP enabled

[C][wifi:061]: Starting WiFi...
[C][wifi:062]:   Local MAC: 64:B7:08:8A:1B:C0
[D][esp-idf:000][wifi]: I (1513) phy_init: phy_version 4791,2c4672b,Dec 20 2023,16:06:06

[D][esp-idf:000][wifi]: I (1599) wifi:
[D][esp-idf:000][wifi]: mode : sta (64:b7:08:8a:1b:c0)
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1600) wifi:
[D][esp-idf:000][wifi]: enable tsf
[D][esp-idf:000][wifi]: 

[D][esp-idf:000][wifi]: I (1605) wifi:
[D][esp-idf:000][wifi]: Set ps type: 1

[D][esp-idf:000][wifi]: 

[D][wifi:482]: Starting scan...
[D][esp32.preferences:114]: Saving 1 preferences to flash...
[D][esp32.preferences:143]: Saving 1 preferences to flash: 1 cached, 0 written, 0 failed
[W][micro_wake_word:151]: Wake word detection can't start as the component hasn't been setup yet
[D][esp-idf:000][wifi]: I (1646) wifi:
[D][esp-idf:000][wifi]: Set ps type: 1

[D][esp-idf:000][wifi]: 

[W][component:157]: Component wifi set Warning flag: scanning for networks
…
[I][wifi:617]: WiFi Connected!
…
[D][wifi:626]: Disabling AP...
[C][api:026]: Setting up Home Assistant API server...
[C][micro_wake_word:062]: Setting up microWakeWord...
[C][micro_wake_word:069]: Micro Wake Word initialized
[I][app:062]: setup() finished successfully!
[W][component:170]: Component wifi cleared Warning flag
[W][component:157]: Component api set Warning flag: unspecified
[I][app:100]: ESPHome version 2024.12.4 compiled on Apr 18 2025, 17:29:39
…
[C][logger:185]: Logger:
[C][logger:186]:   Level: DEBUG
[C][logger:188]:   Log Baud Rate: 115200
[C][logger:189]:   Hardware UART: UART0
[C][esp32_rmt_led_strip:187]: ESP32 RMT LED Strip:
[C][esp32_rmt_led_strip:188]:   Pin: 27
[C][esp32_rmt_led_strip:189]:   Channel: 0
[C][esp32_rmt_led_strip:214]:   RGB Order: GRB
[C][esp32_rmt_led_strip:215]:   Max refresh rate: 0
[C][esp32_rmt_led_strip:216]:   Number of LEDs: 1
[C][template.select:065]: Template Select 'Wake word engine location'
[C][template.select:066]:   Update Interval: 60.0s
[C][template.select:069]:   Optimistic: YES
[C][template.select:070]:   Initial Option: On device
[C][template.select:071]:   Restore Value: YES
[C][gpio.binary_sensor:015]: GPIO Binary Sensor 'Button'
[C][gpio.binary_sensor:016]:   Pin: GPIO39
[C][light:092]: Light 'M5Stack Atom Echo 8a1bc0'
[C][light:094]:   Default Transition Length: 0.0s
[C][light:095]:   Gamma Correct: 2.80
[C][template.switch:068]: Template Switch 'Use listen light'
[C][template.switch:091]:   Restore Mode: restore defaults to ON
[C][template.switch:057]:   Optimistic: YES
[C][template.switch:068]: Template Switch 'timer_ringing'
[C][template.switch:091]:   Restore Mode: always OFF
[C][template.switch:057]:   Optimistic: YES
[C][factory_reset.button:011]: Factory Reset Button 'Factory reset'
[C][factory_reset.button:011]:   Icon: 'mdi:restart-alert'
[C][captive_portal:089]: Captive Portal:
[C][mdns:116]: mDNS:
[C][mdns:117]:   Hostname: study-atom-echo-8a1bc0
[C][esphome.ota:073]: Over-The-Air updates:
[C][esphome.ota:074]:   Address: study-atom-echo.local:3232
[C][esphome.ota:075]:   Version: 2
[C][esphome.ota:078]:   Password configured
[C][safe_mode:018]: Safe Mode:
[C][safe_mode:020]:   Boot considered successful after 60 seconds
[C][safe_mode:021]:   Invoke after 10 boot attempts
[C][safe_mode:023]:   Remain in safe mode for 300 seconds
[C][api:140]: API Server:
[C][api:141]:   Address: study-atom-echo.local:6053
[C][api:143]:   Using noise encryption: YES
[C][micro_wake_word:051]: microWakeWord:
[C][micro_wake_word:052]:   models:
[C][micro_wake_word:015]:     - Wake Word: Hey Jarvis
[C][micro_wake_word:016]:       Probability cutoff: 0.970
[C][micro_wake_word:017]:       Sliding window size: 5
[C][micro_wake_word:021]:     - VAD Model
[C][micro_wake_word:022]:       Probability cutoff: 0.500
[C][micro_wake_word:023]:       Sliding window size: 5

[D][api:103]: Accepted 192.168.39.6
[W][component:170]: Component api cleared Warning flag
[W][component:237]: Component api took a long time for an operation (58 ms).
[W][component:238]: Components should block for at most 30 ms.
[D][api.connection:1446]: Home Assistant 2024.3.3 (192.168.39.6): Connected successfully
[D][ring_buffer:034]: Created ring buffer with size 2048
[D][micro_wake_word:399]: Resetting buffers and probabilities
[D][micro_wake_word:195]: State changed from IDLE to START_MICROPHONE
[D][micro_wake_word:107]: Starting Microphone
[D][micro_wake_word:195]: State changed from START_MICROPHONE to STARTING_MICROPHONE
[D][esp-idf:000]: I (11279) I2S: DMA Malloc info, datalen=blocksize=1024, dma_buf_count=4

[D][micro_wake_word:195]: State changed from STARTING_MICROPHONE to DETECTING_WAKE_WORD


That’s enough to get a voice satellite that can be configured up in Home Assistant. You’ll need the ESPHome integration added; then for the noise_psk key you use the same string as I have under api/encryption/key in my diff above (obviously generate your own, I used dd if=/dev/urandom bs=32 count=1 | base64 to create mine).
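For reference, the relevant part of the ESPHome device configuration looks roughly like this (a sketch only; the key shown is a placeholder, substitute your own base64 string generated as above):

```yaml
# Sketch of the ESPHome API encryption section; the key is a placeholder.
# The same base64 string becomes noise_psk on the Home Assistant side.
api:
  encryption:
    key: "REPLACE_WITH_YOUR_OWN_BASE64_KEY="
```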

If you’re like me and a compulsive VLANer and firewaller even within your own network then you need to allow Home Assistant to connect on TCP port 6053 to the ATOM Echo, and also allow access to/from UDP port 6055 on the Echo (it’ll send audio from that port to Home Assistant, then receive back audio to the same port).
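As a sketch, on an nftables-based router that might look like the following (the addresses and the table/chain layout are assumptions, adapt them to your own network; 192.168.39.6 is the Home Assistant host from the log above, and .99 stands in for the Echo):

```
# /etc/nftables.conf fragment (hypothetical: .6 = Home Assistant, .99 = Echo)
table inet filter {
  chain forward {
    # Home Assistant -> Echo API connection
    ip saddr 192.168.39.6 ip daddr 192.168.39.99 tcp dport 6053 accept
    # Voice audio: the Echo sends from UDP 6055 and receives replies on it
    ip saddr 192.168.39.99 udp sport 6055 ip daddr 192.168.39.6 accept
    ip saddr 192.168.39.6 ip daddr 192.168.39.99 udp dport 6055 accept
  }
}
```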

At this point you can now shout “Hey Jarvis, what time is it?” at the Echo, and the white light will turn flashing blue (indicating it’s heard the wake word). Which means we’re ready to teach Home Assistant how to do something with the incoming audio.

24 April, 2025 06:34PM

April 23, 2025

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

qlcal 0.0.15 on CRAN: Calendar Updates

The fifteenth release of the qlcal package arrived at CRAN today, following the QuantLib 1.38 release this morning.

qlcal delivers the calendaring parts of QuantLib. It is provided (for the R package) as a set of included files, so the package is self-contained and does not depend on an external QuantLib library (which can be demanding to build). qlcal covers over sixty country / market calendars and can compute holiday lists, their complement (i.e. business day lists) and much more. Examples are in the README at the repository, the package page, and of course at the CRAN package page.

This release synchronizes qlcal with the QuantLib release 1.38.

Changes in version 0.0.15 (2025-04-23)

  • Synchronized with QuantLib 1.38 released today

  • Calendar updates for China, Hong Kong, Thailand

  • Minor continuous integration update

Courtesy of my CRANberries, there is a diffstat report for this release. See the project page and package documentation for more details, and more examples.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

23 April, 2025 06:12PM

hackergotchi for Thomas Lange

Thomas Lange

FAI 6.4 and new ISO images available

The new FAI release 6.4 comes with some nice new features.

It now supports installing the Xfce edition of Linux Mint 22.1 'Xia'. There's now an additional Linux Mint ISO [1] which does an unattended Linux Mint installation via FAI and does not need a network connection because all packages are available on the ISO.

The package_config configurations now support arbitrary boolean expressions with FAI classes like this:

PACKAGES install UBUNTU && XORG && ! MINT

If you use the command ifclass in customization scripts you can now also use these expressions.
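A hypothetical package_config file using such an expression could look like this (class and package names are made up for illustration):

```
# package_config/DESKTOP (illustrative only)
PACKAGES install UBUNTU && XORG && ! MINT
firefox
gimp

PACKAGES install MINT
mint-meta-xfce
```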

The tool fai-kvm for starting a KVM virtual machine now uses UEFI variables if the VM is started in a UEFI environment, so boot settings are preserved across reboots.

Some configuration files were added for the installation of Rocky Linux and AlmaLinux in a UEFI environment.

New ISO images [2] are available, but it may take some time until the FAIme service [3] supports customized Linux Mint images.

23 April, 2025 01:36PM

hackergotchi for Michael Prokop

Michael Prokop

Lessons learned from running an open source project for 20 years @ GLT25

Time flies by so quickly, it’s >20 years since I started the Grml project.

I’m giving a (german) talk about the lessons learned from 20 years of running the Grml project this Saturday, 2025-04-26 at the Grazer Linuxtage (Graz/Austria). Would be great to see you there!

23 April, 2025 06:11AM by mika

Russell Coker

Last Post About the Yoga Gen3

Just over a year ago I bought myself a Thinkpad Yoga Gen 3 [1]. That is a nice machine and I really enjoyed using it. But a few months ago it started crashing and would often play some music on boot. The music is a diagnostic code that can be interpreted by the Lenovo Android app. Often the music translated to “code 0284 TCG-compliant functionality-related error” which suggests a motherboard problem. So I bought a new motherboard.

The system still crashes with the new motherboard. It seems to only crash when on battery so that indicates that it might be a power issue causing the crashes. I configured the BIOS to disable the TPM and that avoided the TCG messages and tunes on boot but it still crashes.

An additional problem is that the design of the Yoga series is that the keys retract when the system is opened past 180 degrees and when the lid is closed. After the motherboard replacement about half the keys don’t retract which means that they will damage the screen more when the lid is closed (the screen was already damaged from the keys when I bought it).

I think that spending more money on trying to fix this would be a waste. So I’ll use it as a test machine and I might give it to a relative who needs a portable computer to be used when on power only.

For the moment I’m back to the Thinkpad X1 Carbon Gen 5 [2]. Hopefully the latest kernel changes to zswap and the changes to Chrome to suspend unused tabs will make up for more RAM use in other areas. Currently it seems to be giving decent performance with 8G of RAM and I usually don’t notice any difference from the Yoga Gen 3.

Now I’m considering getting a Thinkpad X1 Carbon Extreme with a 4K display. But they seem a bit expensive at the moment. Currently there’s only one on ebay Australia for $1200 ono.

23 April, 2025 05:11AM by etbe

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RInside 0.2.19 on CRAN: Mostly Maintenance

A new release 0.2.19 of RInside arrived on CRAN and in Debian today. RInside provides a set of convenience classes which facilitate embedding of R inside of C++ applications and programs, using the classes and functions provided by Rcpp.

This release fixes a minor bug that got tickled (after a decade and a half of RInside) by environment variables (which we parse at compile time and encode in a C/C++ header file as constants) built using double quotes. CRAN currently needs that on one or two platforms, and RInside was erroring. This has been addressed. In the two years since the last release we also received two kind PRs updating the Qt examples to Qt6. And as always we also updated a few other things around the package.

The list of changes since the last release:

Changes in RInside version 0.2.19 (2025-04-22)

  • The qt example now supports Qt6 (Joris Goosen in #54 closing #53)

  • CMake support was refined for more recent versions (Joris Goosen in #55)

  • The sandboxed-server example now states more clearly that RINSIDE_CALLBACKS needs to be defined

  • More routine update to package and continuous integration.

  • Some now-obsolete checks for C++11 have been removed

  • When parsing environment variables, use of double quotes is now supported

My CRANberries also provide a short report with changes from the previous release. More information is on the RInside page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page, or to issues tickets at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

23 April, 2025 12:40AM

April 22, 2025

hackergotchi for Joey Hess

Joey Hess

offgrid electric car

Eight months ago I came up my rocky driveway in an electric car, with the back full of solar panel mounting rails. I didn't know how I'd manage to keep it charged. I got the car earlier than planned, with my offgrid solar upgrade only beginning. There's no nearby EV charger, and winter was coming, less solar power every day. Still, it was the right time to take a leap to offgrid EV life.

My existing 1 kilowatt solar array could charge the car only 5 miles on a good day. Here's my first try at charging the car offgrid:

first feeble charging offgrid

It was not worth charging the car that way, the house battery tended to get drained while doing that, and adding cycles to that battery is not desirable. So that was only a proof of concept, I knew I'd need to upgrade.

My goal with the upgrade was to charge the car directly from the sun, even when it was cloudy, using the house battery only to skate over brief darker periods (like a thunderstorm). By mid October, I had enough solar installed to do that (5 kilowatts).

me standing in front of solar fence

first charging from solar fence

Using this, in 2 days I charged the car up from 57% to 82%, and took off on a celebratory road trip to Niagara Falls, where I charged the car from hydro power from a dam my grandfather had engineered.

When I got home, it was November. Days were getting ever shorter. My solar upgrade was only 1/3rd complete and could charge the car 30-some miles per day, but only on a good day, and weather was getting worse. I came back with a low state of charge (both car and me), and needed to get back to full in time for my Thanksgiving trip at the end of the month. I decided to limit my trips to town.

charging up gradually through the month of November

This kind of medium term planning about car travel was new to me. But not too unusual for offgrid living. You look at the weather forecast and make some rough plans, and get to feel connected to the natural world a bit more.

December is the real test for offgrid solar, and honestly this was a bit rough, with a road trip planned for the end of the month. I did the usual holiday stuff but otherwise holed up at home a bit more than I usually would. Charging was limited and the cold made it charge less efficiently.

bleak December charging

Still, I was busy installing more solar panels, and by winter solstice, was back to charging 30 miles on a good day.

Of course, from there out things improved. In January and February I was able to charge up easily enough for my usual trips despite the cold. By March the car was often getting full before I needed to go anywhere, and I was doing long round trips without bothering to fast charge along the way, coming home low, knowing even cloudy days would let it charge up enough.

That brings me up to today. The car is 80% full and heading up toward 100% for a long trip on Friday. Despite the sky being milky white today with no visible sun, there's plenty of power to absorb, and the car charger turned on at 11 am with the house battery already full.

My solar upgrade is only 2/3rds complete, and also I have not yet installed my inverter upgrade, so the car can only currently charge at 9 amps despite much more solar power often being available. So I'm looking forward to how next December goes with my full planned solar array and faster charging.

But first, a summer where I expect the car will mostly be charged up and ready to go at all times, and the only car expense will be fast charging on road trips!


By the way, the code I've written to automate offgrid charging that runs only when there's enough solar power is here.
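The core idea of such automation can be sketched as a simple hysteresis rule (the thresholds and function here are invented for illustration; Joey's actual linked code is more sophisticated):

```python
# Toy sketch of solar-aware charger control; all thresholds are invented.
ON_WATTS = 1500   # spare solar power needed before turning the charger on
OFF_WATTS = 800   # keep charging until spare power drops below this

def charger_should_run(spare_watts: int, charging: bool) -> bool:
    """Hysteresis keeps the charger from rapidly toggling as clouds pass."""
    if charging:
        return spare_watts >= OFF_WATTS
    return spare_watts >= ON_WATTS
```

The two thresholds mean a brief dip in solar output (a passing cloud) will not immediately stop a charge already in progress.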

And here are the charging graphs for the other months. All told, it's charged 475 kWh offgrid, enough to drive more than 1500 miles.

January
February
March
April

22 April, 2025 04:50PM

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

One last Bookworm for the road — report from the Montreal 2025 BSP

Hello, hello, hello!

This report for the Bug Squashing Party we held in Montreal on March 28-29th is very late ... but better late than never? We're now at our fifth BSP in a row [1], which is both nice and somewhat terrifying.

Have I really been around for five Debian releases already? Geez...

This year, around 13 different people showed up, including some brand new folks! All in all, we ended up working on 77 bugs, 61 of which have since been closed.

This is somewhat skewed by the large number of Lintian bugs I closed by merging and releasing the very many patches submitted by Maytham Alsudany (hello Maytham!), but that was still work :D

For our past few events, we have been renting a space at Ateliers de la transition socio-écologique. This building used to be a nunnery (thus the huge cross on the top floor), but has since been transformed into a multi-faceted project.

A drawing of the building where the BSP was hosted

BSPs are great and this one was no exception. You should try to join an upcoming event or to organise one if you can. It is loads of fun and you will be helping the Debian project release its next stable version sooner!

As always, thanks to Debian for granting us a budget for the food and to rent the venue.

Pictures

Here are a bunch of pictures of the BSP, mixed in with some other pictures I took at this venue during a previous event.

Some of the people present on Friday, in the smaller room we had that day

A picture of a previous event, which includes many of the folks present at the BSP and the larger room we used on Saturday

A sticker on the door of the bathroom with text saying 'All Employees Must Wash Away Sin Before Returning To Work', a tongue-in-cheek reference to the building's previous purpose

A wall with posters for upcoming events

A drawing on one of the single-occupancy rooms in the building, warning people the door can't be opened from the inside (yikes!)

A table at the entrance with many flyers for social and political events


  1. See our previous BSPs in 2017, 2019, 2021 and 2023

22 April, 2025 04:00AM by Louis-Philippe Véronneau

April 21, 2025

hackergotchi for Gunnar Wolf

Gunnar Wolf

Want your title? Here, have some XML!

As it seems ChatGPT would phrase it… Sweet Mother of God!

I received a mail from my University's Scholar Administrative division informing me my Doctor degree has been granted and emitted (yayyyyyy! 👨‍🎓), and that before printing the corresponding documents, I should review that all of the information is correct.

Attached to the mail, I found they sent me a very friendly and welcoming XML file, which stated it followed the schema at https://www.siged.sep.gob.mx/titulos/schema.xsd… Wait! There is nothing to be found at that address! Well, never mind, I can make sense out of an XML document, right?

XML sample

Of course, who needs an XSD schema? Everybody can parse through the data in an XML document, right? Of course, it took me close to five seconds to spot a minor mistake (in the finish and start dates of my previous degree), for which I mailed the relevant address…
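For the record, re-indenting an arbitrary XML file so a human can at least read its fields takes only a couple of lines of Python (a generic sketch; the real element names in the SEP/UNAM file are unknown to me, since the schema is unpublished):

```python
import xml.dom.minidom

def pretty_print_xml(path):
    """Re-indent an XML file so a human can at least read its fields."""
    with open(path, encoding="utf-8") as fh:
        return xml.dom.minidom.parseString(fh.read()).toprettyxml(indent="  ")
```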

But… What happens if I try to understand the world as seen by 9.8 out of 10 people getting a title from UNAM, in all of its different disciplines (scientific, engineering, humanities…)? Some people will have no clue about what to do with an XML file. Fortunately, the mail has a link to a very useful tutorial (roughly translated by myself):

The attached file has an XML extension, so in order to visualize it, you must open it with a text editor such as Notepad or Sublime Text. In case you have any questions on how to open the file, please refer to the following guide: https://www.dgae.unam.mx/guia_abrir_xml.html

Seriously! Asking people getting a title in just about any area of knowledge to… install Sublime Text to validate the content of an XML file (that includes the oh-so-very-readable signature of some university bureaucrat).

Of course, for many years Mexican people have been getting XML files by mail (for any declared monetary exchange, i.e. buying goods or offering services), but they are always sent together with a rendering of that XML into a personalized PDF. And yes, the PDF is there only to give the human receiving the file an easier time understanding it. Who thought a bare XML was a good idea? 😠

21 April, 2025 06:33PM

April 20, 2025

Nazi.Compare

Deja vu: Hitler's Birthday, Andreas Tille elected Debian Project Leader again

Adolf Hitler was born on 20 April 1889 in Austria. Today would be the Fuhrer's 136th birthday.

In 2025, just as in 2024, the Debian Project Leader (DPL) election finished on Hitler's birthday.

In 2025, just as in 2024, the Debian Project Leader (DPL) election winner is the German, Andreas Tille.

History repeating itself? It is a common theme in the world today. Let's hope not.

We reprint the original comments about this anniversary from 2024.

In 1939, shortly after Hitler annexed Austria, the Nazi command in Berlin had a big celebration for the 50th birthday of Adolf Hitler. It was such a big occasion that it has its own Wikipedia entry.

One of the quotes in Wikipedia comes from British historian Ian Kershaw:

an astonishing extravaganza of the Führer cult. The lavish outpourings of adulation and sycophancy surpassed those of any previous Führer Birthdays

For the first time ever, the Debian Project Leader election has finished just after 2am (Germany, Central European Summer Time) on the birthday of Hitler and the winning candidate is Andreas Tille from Germany.

Hitler's time of birth was 18:30, much later in the day.

Tille appears to be the first German to win this position in Debian.

We don't want to jinx Tille's first day on the job so we went to look at how each of the candidates voted in the 2021 lynching of Dr Richard Stallman.

This blog previously explored the question of whether Dr Stallman, who is an atheist, would be subject to anti-semitism during the Holocaust years because of his Jewish ancestry. We concluded that RMS would have definitely been on Hitler's list of targets.

Here we trim the voting tally sheet to show how Andreas Tille and Sruthi Chandran voted on the question of lynching Dr Stallman:

       Tally Sheet for the votes cast. 
 
   The format is:
       "V: vote 	Login	Name"
 The vote block represents the ranking given to each of the 
 candidates by the voter. 
 -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
 -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

     Option 1--------->: Call for the FSF board removal, as in rms-open-letter.github.io
   /  Option 2-------->: Call for Stallman's resignation from all FSF bodies
   |/  Option 3------->: Discourage collaboration with the FSF while Stallman is in a leading position
   ||/  Option 4------>: Call on the FSF to further its governance processes
   |||/  Option 5----->: Support Stallman's reinstatement, as in rms-support-letter.github.io
   ||||/  Option 6---->: Denounce the witch-hunt against RMS and the FSF
   |||||/  Option 7--->: Debian will not issue a public statement on this issue
   ||||||/  Option 8-->: Further Discussion
   |||||||/
V: 88888817	          tille	Andreas Tille
V: 21338885	           srud	Sruthi Chandran

We can see that Tille voted for option 7: he did not want Debian's name used in the attacks on Dr Stallman. However, he did not want Debian to denounce the witch hunt either. This is scary. A lot of Germans were willing to stand back and do nothing while Dr Stallman's Jewish ancestors were being dragged off to concentration camps.

The only thing necessary for the triumph of evil is that good men do nothing.

On the other hand, Sruthi Chandran appears to be far closer to the anti-semitic spirit. She put her first and second vote preferences next to the options that involved defaming and banishing Dr Stallman.

Will the new DPL be willing to stop the current vendettas against a volunteer and his family? Or will Tille continue using resources for stalking a volunteer in the same way that Nazis stalked the Jews?

Adolf Hitler famously died by suicide, a lot like the founder of Debian, Ian Murdock, who was born in Konstanz, Germany.

Will Tille address the questions of the Debian suicide cluster or will he waste more money on legal fees to try and cover it up?

20 April, 2025 04:00AM

Russ Allbery

Review: Up the Down Staircase

Review: Up the Down Staircase, by Bel Kaufman

Publisher: Vintage Books
Copyright: 1964, 1991, 2019
Printing: 2019
ISBN: 0-525-56566-3
Format: Kindle
Pages: 360

Up the Down Staircase is a novel (in an unconventional format, which I'll describe in a moment) about the experiences of a new teacher in a fictional New York City high school. It was a massive best-seller in the 1960s, including a 1967 movie, but seems to have dropped out of the public discussion. I read it from the library sometime in the late 1980s or early 1990s and have thought about it periodically ever since. It was Bel Kaufman's first novel.

Sylvia Barrett is a new graduate with a master's degree in English, where she specialized in Chaucer. As Up the Down Staircase opens, it is her first day as an English teacher in Calvin Coolidge High School. As she says in a letter to a college friend:

What I really had in mind was to do a little teaching. "And gladly wolde he lerne, and gladly teche" — like Chaucer's Clerke of Oxenford. I had come eager to share all I know and feel; to imbue the young with a love for their language and literature; to instruct and to inspire. What happened in real life (when I had asked why they were taking English, a boy said: "To help us in real life") was something else again, and even if I could describe it, you would think I am exaggerating.

She instead encounters chaos and bureaucracy, broken windows and mindless regulations, a librarian who is so protective of her books that she doesn't let any students touch them, a school guidance counselor who thinks she's Freud, and a principal whose sole interaction with the school is to occasionally float through on a cushion of cliches, dispensing utterly useless wisdom only to vanish again.

I want to take this opportunity to extend a warm welcome to all faculty and staff, and the sincere hope that you have returned from a healthful and fruitful summer vacation with renewed vim and vigor, ready to gird your loins and tackle the many important and vital tasks that lie ahead undaunted. Thank you for your help and cooperation in the past and future.

Maxwell E. Clarke
Principal

In practice, the school is run by James J. McHare, Clarke's administrative assistant, who signs his messages JJ McH, Adm. Asst. and who Sylvia immediately starts calling Admiral Ass. McHare is a micro-managing control freak who spends the book desperately attempting to impose order over school procedures, the teachers, and the students, with very little success. The title of the book comes from one of his detention slips:

Please admit bearer to class—

Detained by me for going Up the Down staircase and subsequent insolence.

JJ McH

The conceit of this book is that, except for the first and last chapters, it consists only of memos, letters, notes, circulars, and other paper detritus, often said to come from Sylvia's wastepaper basket. Sylvia serves as the first-person narrator through her long letters to her college friend, and through shorter but more frequent exchanges via intraschool memo with Beatrice Schachter, another English teacher at the same school, but much of the book lies outside her narration. The reader has to piece together what's happening from the discarded paper of a dysfunctional institution.

Amid the bureaucratic and personal communications, there are frequent chapters with notes from the students, usually from the suggestion box that Sylvia establishes early in the book. These start as chaotic glimpses of often-misspelled wariness or open hostility, but over the course of Up the Down Staircase, some of the students become characters with fragmentary but still visible story arcs. This remains confusing throughout the novel — there are too many students to keep them entirely straight, and several of them use pseudonyms for the suggestion box — but it's the sort of confusion that feels like an intentional authorial choice. It mirrors the difficulty a teacher has in piecing together and remembering the stories of individual students in overstuffed classrooms, even if (like Sylvia and unlike several of her colleagues) the teacher is trying to pay attention.

At the start, Up the Down Staircase reads as mostly-disconnected humor. There is a strong "kids say the darnedest things" vibe, which didn't entirely work for me, but the send-up of chaotic bureaucracy is both more sophisticated and more entertaining. It has the "laugh so that you don't cry" absurdity of a system with insufficient resources, entirely absent management, and colleagues who have let their quirks take over their personalities. Sylvia alternates between incredulity and stubbornness, and I think this book is at its best when it shows the small acts of practical defiance that one uses to carve out space and coherence from mismanaged bureaucracy.

But this book is not just a collection of humorous anecdotes about teaching high school. Sylvia is sincere in her desire to teach, which crystallizes around, but is not limited to, a quixotic attempt to reach one delinquent that everyone else in the school has written off. She slowly finds her footing, she has a few breakthroughs in reaching her students, and the book slowly turns into an earnest portrayal of an attempt to make the system work despite its obvious unfitness for purpose. This part of the book is hard to review. Parts of it worked brilliantly; I could feel myself both adjusting my expectations alongside Sylvia to something less idealistic and also celebrating the rare breakthrough with her. Parts of it were weirdly uncomfortable in ways that I'm not sure I enjoyed. That includes Sylvia's climactic conversation with the boy she's been trying to reach, which was weirdly charged and ambiguous in a way that felt like the author's reach exceeding their grasp.

One thing that didn't help my enjoyment is Sylvia's relationship with Paul Barringer, another of the English teachers and a frustrated novelist and poet. Everyone who works at the school has found their own way to cope with the stress and chaos, and many of the ways that seem humorous turn out to have a deeper logic and even heroism. Paul's, however, is to retreat into indifference and alcohol. He is a believable character who works with Kaufman's themes, but he's also entirely unlikable. I never understood why Sylvia tolerated that creepy asshole, let alone kept having lunch with him. It is clear from the plot of the book that Kaufman at least partially understands Paul's deficiencies, but that did not help me enjoy reading about him.

This is a great example of a book that tried to do something unusual and risky and didn't entirely pull it off. I like books that take a risk, and sometimes Up the Down Staircase is very funny or suddenly insightful in a way that I'm not sure Kaufman could have reached with a more traditional novel. It takes a hard look at what it means to try to make a system work when it's clearly broken and you can't change it, and the way all of the characters arrive at different answers that are much deeper than their initial impressions was subtle and effective. It's the sort of book that sticks in your head, as shown by the fact I bought it on a whim to re-read some 35 years after I first read it. But it's not consistently great. Some parts of it drag, the characters are frustratingly hard to keep track of, and the emotional climax points are odd and unsatisfying, at least to me.

I'm not sure whether to recommend it or not, but it's certainly unusual. I'm glad I read it again, but I probably won't re-read it for another 35 years, at least.

If you are considering getting this book, be aware that it has a lot of drawings and several hand-written letters. The publisher of the edition I read did a reasonably good job formatting this for an ebook, but some of the pages, particularly the hand-written letters, were extremely hard to read on a Kindle. Consider paper, or at least reading on a tablet or computer screen, if you don't want to have to puzzle over low-resolution images.

The 1991 trade paperback had a new introduction by the author, reproduced in the edition I read as an afterword (which is a better choice than an introduction). It is a long and fascinating essay from Kaufman about her experience with the reaction to this book, culminating in a passionate plea for supporting public schools and public school teachers. Kaufman's personal account adds a lot of depth to the story; I highly recommend it.

Content note: Self-harm, plus several scenes that are closely adjacent to student-teacher relationships. Kaufman deals frankly with the problems of mostly-poor high school kids, including sexuality, so be warned that this is not the humorous romp that it might appear on first glance. A couple of the scenes made me uncomfortable; there isn't anything explicit, but the emotional overtones can be pretty disturbing.

Rating: 7 out of 10

20 April, 2025 03:43AM

April 18, 2025

hackergotchi for Keith Packard

Keith Packard

sanitizer-fun

Fun with -fsanitize=undefined and Picolibc

Both GCC and Clang support the -fsanitize=undefined flag which instruments the generated code to detect places where the program wanders into parts of the C language specification which are either undefined or implementation defined. Many of these are also common programming errors. It would be great if there were sanitizers for other easily detected bugs, but for now, at least the undefined sanitizer does catch several useful problems.

Supporting the sanitizer

The sanitizer can be built to either trap on any error or call handlers. In both modes, the same problems are identified, but when trap mode is enabled, the compiler inserts a trap instruction and doesn't expect the program to continue running. When handlers are in use, each identified issue is tagged with a bunch of useful data and then a specific sanitizer handling function is called.

The specific functions are not all that well documented, nor are the parameters they receive. Maybe this is because both compilers provide an implementation of all of the functions they use and don't really expect external implementations to exist? However, to make these useful in an embedded environment, picolibc needs to provide a complete set of handlers that support all versions of both gcc and clang, as the compiler-provided versions depend upon specific C (and C++) libraries.

Of course, programs can be built in trap-on-error mode, but that makes it much more difficult to figure out what went wrong.

Fixing Sanitizer Issues

Once the sanitizer handlers were implemented, picolibc could be built with them enabled and all of the picolibc tests run to uncover issues within the library.

As with the static analyzer adventure from last year, the vast bulk of sanitizer complaints came from invoking undefined or implementation-defined behavior in harmless ways:

  • Computing pointers past &array[size+1]. I found no cases where the resulting pointers were actually used, but the mere computation is still undefined behavior. These were fixed by adjusting the code to avoid computing pointers like this. The result was clearer code, which is good.

  • Signed arithmetic overflow in PRNG code. There are several linear congruential PRNGs in the library which used signed integer arithmetic. The rand48 generator carefully used unsigned short values, but in C the arithmetic performed on them is done with signed ints if int is wider than short. C specifies signed overflow as undefined, but both gcc and clang generate the expected code anyway. The fixes here were simple; just switch the computations to unsigned arithmetic, adjusting types and inserting casts as required.

  • Passing pointers to the middle of a data structure. For example, free takes a pointer to the start of an allocation, while the management structure appears just before that in memory; computing its address appears to the compiler to be undefined behavior. The only fix I could make here was to disable the sanitizer in the functions doing these computations -- the sanitizer was mis-detecting correct code, and it doesn't provide a way to skip checks on a per-operator basis.

  • Null pointer plus or minus zero. C says that any arithmetic with the NULL pointer is undefined, even when the value being added or subtracted is zero. The fix here was to create a macro, enabled only when the sanitizer is enabled, which checks for this case and skips the arithmetic.

  • Discarded computations which overflow. A couple of places computed a value, then checked whether it would have overflowed and discarded the result. Even though the program doesn't depend upon the computation, its mere presence is undefined behavior. These were fixed by moving the computation into an else clause of the overflow check. This inserts an extra branch instruction, which is annoying.

  • Signed integer overflow in math code. There's a common pattern in various functions that want to compare against 1.0. Instead of using the floating point equality operator, they do the computation using the two 32-bit halves with ((hi - 0x3ff00000) | lo) == 0. It's efficient, but because most of these functions store the 'hi' piece in a signed integer (to make checking the sign bit fast), the result is undefined when hi is a large negative value. These were fixed by inserting casts to unsigned types as the results were always tested for equality.
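Two of the fixes above share the same pattern: push the computation into unsigned arithmetic, where wraparound is well defined, then convert back. Here is a standalone sketch of both (illustrative code, not the picolibc source; the LCG constants are the C standard's sample rand values):

```c
#include <stdint.h>
#include <string.h>

/* PRNG fix: one LCG step with the multiply-add done on unsigned int,
 * where wraparound is defined behavior; on plain int an overflowing
 * multiply would be undefined. */
static int lcg_next(int state)
{
    unsigned u = (unsigned)state * 1103515245u + 12345u;
    return (int)(u & 0x7fffffffu);      /* mask keeps the result non-negative */
}

/* Math-code fix: compare a double against 1.0 via its two 32-bit halves.
 * Subtracting on uint32_t stays well defined even when the high half has
 * the sign bit set, as it does for negative inputs. */
static int is_one(double x)
{
    uint64_t bits;
    uint32_t hi, lo;

    memcpy(&bits, &x, sizeof bits);     /* well-defined way to get the bits */
    hi = (uint32_t)(bits >> 32);
    lo = (uint32_t)bits;
    return ((hi - 0x3ff00000u) | lo) == 0;
}
```

With hi held in a signed int, an input like -0.5 (hi = 0xbfe00000, a large negative value when interpreted as signed) would make the subtraction overflow; the unsigned casts make the wraparound defined without changing the result of the equality test.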

Signed integer shifts

This is one area where the C language spec is just wrong.

For left shift, before C99, it worked on signed integers as a bit-wise operator, equivalent to the operator on unsigned integers. After that, left shift of negative integers became undefined. Fortunately, it's straightforward (if tedious) to work around this issue by just casting the operand to unsigned, performing the shift and casting it back to the original type. Picolibc now has an internal macro, lsl, which does this:

    #define lsl(__x,__s) ((sizeof(__x) == sizeof(char)) ?                   \
                          (__typeof(__x)) ((unsigned char) (__x) << (__s)) :  \
                          (sizeof(__x) == sizeof(short)) ?                  \
                          (__typeof(__x)) ((unsigned short) (__x) << (__s)) : \
                          (sizeof(__x) == sizeof(int)) ?                    \
                          (__typeof(__x)) ((unsigned int) (__x) << (__s)) :   \
                          (sizeof(__x) == sizeof(long)) ?                   \
                          (__typeof(__x)) ((unsigned long) (__x) << (__s)) :  \
                          (sizeof(__x) == sizeof(long long)) ?              \
                          (__typeof(__x)) ((unsigned long long) (__x) << (__s)) : \
                          __undefined_shift_size(__x, __s))

Right shift is significantly more complicated to implement. What we want is an arithmetic shift with the sign bit being replicated as the value is shifted rightwards. C defines no such operator. Instead, right shift of negative integers is implementation defined. Fortunately, both gcc and clang define the >> operator on signed integers as arithmetic shift. Also fortunately, C hasn't made this undefined, so the program itself doesn't end up undefined.

The trouble with arithmetic right shift is that it is not equivalent to right shift of unsigned values. Here's what Per Vognsen came up with using standard C operators:

    int
    __asr_int(int x, int s) {
        return x < 0 ? ~(~x >> s) : x >> s;
    }

When the value is negative, we invert all of the bits (making it positive), shift right, then flip all of the bits back. Both GCC and Clang seem to compile this to a single asr instruction. This function is replicated for each of the five standard integer types and then the set of them wrapped in another sizeof-selecting macro:

    #define asr(__x,__s) ((sizeof(__x) == sizeof(char)) ?           \
                          (__typeof(__x))__asr_char(__x, __s) :       \
                          (sizeof(__x) == sizeof(short)) ?          \
                          (__typeof(__x))__asr_short(__x, __s) :      \
                          (sizeof(__x) == sizeof(int)) ?            \
                          (__typeof(__x))__asr_int(__x, __s) :        \
                          (sizeof(__x) == sizeof(long)) ?           \
                          (__typeof(__x))__asr_long(__x, __s) :       \
                          (sizeof(__x) == sizeof(long long)) ?      \
                          (__typeof(__x))__asr_long_long(__x, __s):   \
                          __undefined_shift_size(__x, __s))

The lsl and asr macros use sizeof instead of the type-generic mechanism to remain compatible with compilers that lack type-generic support.
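Spelled out as a standalone function (mirroring __asr_int above, renamed here to avoid the reserved double-underscore prefix), the inversion trick is easy to sanity-check:

```c
/* Arithmetic right shift of a signed int built only from well-defined
 * operations: negative values are bit-flipped to non-negative, shifted,
 * and flipped back, which replicates the sign bit exactly as a native
 * asr instruction would. */
static int asr_int(int x, int s)
{
    return x < 0 ? ~(~x >> s) : x >> s;
}
```

Note that ~x is always non-negative when x is negative (including INT_MIN, whose complement is INT_MAX), so the inner shift never touches implementation-defined territory.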

Once these macros were written, they needed to be applied. To preserve the benefit of detecting real programming errors, they were applied only where necessary, not blindly across the whole codebase.

There are a couple of common patterns in the math code using shift operators. One is when computing the exponent value for subnormal numbers.

    for (ix = -1022, i = hx << 11; i > 0; i <<= 1)
        ix -= 1;

This code computes the exponent by shifting the significand left by 11 bits (the width of the exponent field) and then incrementally shifting it one bit at a time until the sign flips, which indicates that the most-significant bit is set. Use of the pre-C99 definition of the left shift operator is intentional here, so both shifts are replaced with our lsl operator.

In the implementation of pow, the final exponent is computed as the sum of the two exponents, both of which are in the allowed range. The resulting sum is then tested to see if it is zero or negative to see if the final value is sub-normal:

    hx += n << 20;
    if (hx >> 20 <= 0)
        /* do sub-normal things */

In this case, the exponent adjustment, n, is a signed value, so that shift is replaced with the lsl macro. The test needs the sign bit to be computed correctly, so the right shift is replaced with the asr macro.

Because the right shift operation is not undefined, we only use our fancy macro above when the undefined behavior sanitizer is enabled. On the other hand, the lsl macro should have zero cost and avoids genuine undefined behavior, so it is always used.

Actual Bugs Found!

The goal of this little adventure was both to make using the undefined behavior sanitizer with picolibc possible as well as to use the sanitizer to identify bugs in the library code. I fully expected that most of the effort would be spent masking harmless undefined behavior instances, but was hopeful that the effort would also uncover real bugs in the code. I was not disappointed. Through this work, I found (and fixed) eight bugs in the code:

  1. setlocale/newlocale didn't check for NULL locale names.

  2. qsort was using uintptr_t to swap data around. On MSP430 in 'large' mode, that's a 20-bit type inside a 32-bit representation.

  3. random() was returning values in int range rather than long.

  4. m68k assembly for memcpy was broken for sizes > 64kB.

  5. freopen returned NULL, even on success.

  6. The optimized version of memrchr was always performing unaligned accesses.

  7. String to float conversion had a table missing four values. This caused an array access overflow which resulted in imprecise values in some cases.

  8. vfwscanf mis-parsed floating point values by assuming that wchar_t was unsigned.

Sanitizer Wishes

While it's great to have a way to detect places in your C code which evoke undefined and implementation defined behaviors, it seems like this tooling could easily be extended to detect other common programming mistakes, even where the code is well defined according to the language spec. An obvious example is in unsigned arithmetic. How many bugs come from this seemingly innocuous line of code?

    p = malloc(sizeof(*p) * c);

Because sizeof returns an unsigned value, the resulting computation never results in undefined behavior, even when the multiplication wraps around, so even with the undefined behavior sanitizer enabled, this bug will not be caught. Clang seems to have an unsigned integer overflow sanitizer which should do this, but I couldn't find anything like this in gcc.
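Clang's unsigned-integer-overflow sanitizer would flag this multiplication at run time; alternatively, a source-level guard using the gcc/clang overflow builtins looks like this (a sketch of the idea, not a picolibc API):

```c
#include <stdlib.h>

/* malloc with an overflow-checked element-count multiply.  The
 * __builtin_mul_overflow builtin (gcc 5+, clang) returns true when the
 * product wraps, letting us fail cleanly instead of allocating a buffer
 * far smaller than the caller expects. */
static void *malloc_array(size_t n, size_t size)
{
    size_t total;

    if (__builtin_mul_overflow(n, size, &total))
        return NULL;
    return malloc(total);
}
```

Callers then write `p = malloc_array(c, sizeof(*p))` and get NULL on a wrapping count, the same contract calloc provides for zero-initialized memory.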

Summary

The undefined behavior sanitizers present in clang and gcc both provide useful diagnostics which uncover some common programming errors. In most cases, replacing undefined behavior with defined behavior is straightforward, although the lack of an arithmetic right shift operator in standard C is irksome. I recommend anyone using C to give it a try.

18 April, 2025 10:37PM

Sven Hoexter

Trixie Upgrade and X11 Clipboard Manager Madness

Due to my own laziness and a few functionality issues my "for work laptop" is still using a 15+ year old setup with X11 and awesome. Since trixie is now starting its freeze, it's time to update that odd machine as well and look at the fallout. Good news: It's mostly my own resistance to change which required some kick in the back to move on.

Clipboard Manager Madness

For the past decade or so I used parcellite which served me well. Now that is no longer available in trixie and I started to look into one of the dead end streets of X11 related tooling, searching for an alternative.

Parcellite

Upstream seems to do sporadic fixes, but holds tight to GTK2. The Debian package was patched to be GTK3 compatible, but has unfixed FTBFS issues with GCC 14.

clipit

Next I checked for a parcellite fork named clipit, and that's when it started to get funky. It's packaged in Debian, QA maintained, and recently received at least two uploads to keep it working. I installed it and found it greets me with a nag screen saying I should migrate to diodon. The real clipit tool is still shipped as a binary named clipit.real, so if you know about it you can still use it. To implement the nag screen it depends on zenity, and to ease the migration it depends on diodon: two things I do not really need. Also, the package description prominently mentions that you should not use the package.

diodon

The nag screen of clipit made me look at diodon. It claims it was written for the Ubuntu Unity desktop, and I have no idea how alive and relevant that still is. While there is still something on Launchpad, it seems to receive sporadic commits on GitHub. Not sure if it's dead or just feature complete.

Interim Solution: clipit

Settled with clipit for now, but decided to fork the Debian package to remove the nag screen and the dependency on diodon and zenity (package build). My hope is to convert this last X11 setup to wayland within the lifetime of trixie.

I also contacted the last uploader regarding removal of the nag screen, who then brought in the last maintainer, the one who added the nag screen. While I first thought clipit was somewhat maintained upstream, Andrej quickly pointed out that this is not really the case. Still, that leaves us in trixie with a rather odd situation: for the second stable release in a row we ship a package that recommends moving to a different tool while still shipping the original tool, and it's getting patched by some of its users who refuse to migrate to the alternative envisioned by the former maintainer.

VirtualBox and moving to libvirt

I always liked the GUI of VirtualBox, and it really made desktop virtualization easy. But with Linux 6.12, which enables KVM by default, it seems to get even more painful to get it up and running. In the past I just took the latest release from unstable and rebuilt that one on the current stable. Currently the last release in unstable is 7.0.20, while the Linux 6.12 fixes only started to appear in VirtualBox 7.1.4 and later. The good thing is that with virt-manager and the whole libvirt ecosystem there is a good enough replacement available, and it works fine with related tooling like vagrant. There are instructions available on how to set it up. I can only add that it makes sense to export VAGRANT_DEFAULT_PROVIDER=libvirt in your .bashrc to make that provider change permanent.

18 April, 2025 06:16PM

April 17, 2025

Simon Josefsson

Verified Reproducible Tarballs

Remember the XZ Utils backdoor? One factor that enabled the attack was poor auditing of the release tarballs for differences compared to the Git version controlled source code. This proved to be a useful place to distribute malicious data.

The differences between release tarballs and upstream Git sources is typically vendored and generated files. Lots of them. Auditing all source tarballs in a distribution for similar issues is hard and boring work for humans. Wouldn’t it be better if that human auditing time could be spent auditing the actual source code stored in upstream version control instead? That’s where auditing time would help the most.

Are there better ways to address the concern about differences between version control sources and tarball artifacts? Let’s consider some approaches:

  • Stop publishing (or at least stop building from) source tarballs that differ from version control sources.
  • Create recipes for how to derive the published source tarballs from version control sources. Verify that independently from upstream.

While I like the properties of the first solution, and have made effort to support that approach, I don’t think normal source tarballs are going away any time soon. I am concerned that it may not even be a desirable complete solution to this problem. We may need tarballs with pre-generated content in them for various reasons that aren’t entirely clear to us today.

So let’s consider the second approach. It could help while waiting for more experience with the first approach, to see if there are any fundamental problems with it.

How do you know that the XZ release tarballs were actually derived from their version control sources? The same for Gzip? Coreutils? Tar? Sed? Bash? GCC? We don’t know this! I am not aware of any automated or collaborative effort to perform this independent confirmation. Nor am I aware of anyone attempting to do this on a regular basis. We would want to be able to do this in the year 2042 too. I think the best way to reach that is to do the verification continuously in a pipeline, fixing bugs as time passes. The current state of the art seems to be that people audit the differences manually and hope to find something. I suspect many package maintainers ignore the problem and take the release source tarballs and trust upstream about this.

We can do better.

I have launched a project to set up a GitLab pipeline that invokes per-release scripts to rebuild that release artifact from git sources. Currently it only contains recipes for projects that I released myself, releases which were done in a controlled way with considerable care to make reproducing the tarballs possible. The project homepage is here:

https://gitlab.com/debdistutils/verify-reproducible-releases

The project is able to reproduce the release tarballs for Libtasn1 v4.20.0, InetUtils v2.6, Libidn2 v2.3.8, Libidn v1.43, and GNU SASL v2.2.2. You can see this in a recent successful pipeline. All of those releases were prepared using Guix, and I’m hoping the Guix time-machine will make it possible to keep re-generating these tarballs for many years to come.

I spent some time trying to reproduce the current XZ release tarball for version 5.8.1. That would have been a nice example, wouldn’t it? First I had to somehow mimic upstream’s build environment. The XZ release tarball contains GNU Libtool files that are identified with version 2.5.4.1-baa1-dirty. I initially assumed this was due to the maintainer having installed libtool from git locally (after making some modifications) and made the XZ release using it. Later I learned that it may actually be coming from ArchLinux, which ships with this particular libtool version. It seems weird for a distribution to use libtool built from a non-release tag, and furthermore to apply patches to it, but things are what they are. I made some effort to set up an ArchLinux build environment, however the now-current Gettext version in ArchLinux seems to be more recent than the one that was used to prepare the XZ release. I don’t know enough ArchLinux to set up an environment corresponding to an earlier version of ArchLinux, which would be required to finish this. I gave up, maybe the XZ release wasn’t prepared on ArchLinux after all. Actually XZ became a good example for this writeup anyway: while you would think this should be trivial, the fact is that it isn’t! (There is another aspect here: fingerprinting the versions used to prepare release tarballs allows you to infer what kind of OS maintainers are using to make releases on, which is interesting on its own.)

I made some small attempts to reproduce the tarball for GNU Shepherd version 1.0.4 too, but I still haven’t managed to complete it.

Do you want a supply-chain challenge for the Easter weekend? Pick some well-known software and try to re-create the official release tarballs from the corresponding Git checkout. Is anyone able to reproduce anything these days? Bonus points for wrapping it up as a merge request to my project.

Happy Supply-Chain Security Hacking!

17 April, 2025 07:24PM by simon

hackergotchi for Jonathan Dowland

Jonathan Dowland

Hledger UI themes

Last year I intended to write an update on my use of hledger, but that was waylaid for various reasons and I need to revisit how (if) I'm using it, so that's put off for longer. I do want to mention one contribution I made upstream: a dark theme for the UI, and some unfinished work on consistent colours.

Consistent terminal colours are an interesting issue: the most common terminal colour modes (8 and 256) use indexing into a palette, but the definition of the colours is ambiguous: the 8-colour palette is formally specified by ANSI as names (red, green, etc.); the 256-colour palette is effectively defined by xterm (a useful chart) but I'm not sure all terminal emulators that support it have chosen the same colour values.

A consequence of indexed-colour is that the end-user may redefine what the colour values are. Whether this is a good thing or a bad thing depends on your point of view. As an end-user, it's attractive to be able to tune the colour scheme; but as a software author, it means you have no real idea what your users are going to see, and matters like ensuring contrast are impossible.

Some terminals support 24-bit "true" colour, in which the colours are specified as an RGB triplet. Using these means the software author can be reasonably sure all users will see the same thing (for a fungible definition of "same"), at the cost of user configurability. However, since it's less well supported, we start having to worry about fallback behaviour.
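For reference, the 24-bit mode is selected with an SGR `38;2;R;G;B` escape sequence (a widespread xterm extension with roots in ISO 8613-6); a small helper to build one might look like this (illustrative only, not hledger-ui code):

```c
#include <stdio.h>

/* Format the SGR escape that sets a 24-bit foreground colour.
 * Returns the number of characters written (snprintf semantics);
 * indexed modes use ESC[3Xm (8 colours) or ESC[38;5;Nm (256) instead. */
static int sgr_truecolor(char *buf, size_t len, int r, int g, int b)
{
    return snprintf(buf, len, "\033[38;2;%d;%d;%dm", r, g, b);
}
```

Printing such a sequence, some text, and the reset escape `\033[0m` is a quick way to see whether a given emulator honours true colour; unsupported terminals typically ignore or mangle the sequence, which is exactly the fallback problem mentioned above.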

In the case of hledger-ui, which provides several colour schemes, that's probably OK, because user configurability is achieved by choosing one of the schemes (or writing your own, in extremis). However, the dark theme I contributed uses the 8-colour palette, in common with the other themes, and my explorations into using predictable colours are unfinished.

17 April, 2025 09:35AM

Arturo Borrero González

My experience in the Debian LTS and ELTS projects

Debian

Last year, I decided to start participating in the Debian LTS and ELTS projects. It was a great opportunity to engage in something new within the Debian community. I had been following these projects for many years, observing their evolution and how they gained traction both within the ecosystem and across the industry.

I was curious to explore how contributors were working internally — especially how they managed security patching and remediation for older software. I’ve always felt this was a particularly challenging area, and I was fortunate to experience it firsthand.

As of April 2025, the Debian LTS project was primarily focused on providing security maintenance for Debian 11 Bullseye. Meanwhile, the Debian ELTS project was targeting Debian 8 Jessie, Debian 9 Stretch, and Debian 10 Buster.

During my time with the projects, I worked on a variety of packages and CVEs. Some of the most notable ones include:

There are several technical highlights I’d like to share — things I learned or had to apply while participating:

  • CI/CD pipelines: We used CI/CD pipelines on salsa.debian.org all the time to automate tasks such as building, linting, and testing packages. For any package I worked on that lacked CI/CD integration, setting it up became my first step.

  • autopkgtest: There’s a strong emphasis on autopkgtest as the mechanism for running functional tests and ensuring that security patches don’t introduce regressions. I contributed by both extending existing test suites and writing new ones from scratch.

  • Toolchain complexity for older releases: Working with older Debian versions like Jessie brought some unique challenges. Getting a development environment up and running often meant troubleshooting issues with sbuild chroots, qemu images, and other tools that don’t “just work” like they tend to on Debian stable.

  • Community collaboration: The people involved in LTS and ELTS are extremely helpful and collaborative. Requests for help, code reviews, and general feedback were usually answered quickly.

  • Shared ownership: This collaborative culture also meant that contributors would regularly pick up work left by others or hand off their own tasks when needed. That mutual support made a big difference.

  • Backporting security fixes: This is probably the most intense —and most rewarding— activity. It involves manually adapting patches to work on older codebases when upstream patches don’t apply cleanly. This requires deep code understanding and thorough testing.

  • Upstream collaboration: Reaching out to upstream developers was a key part of my workflow. I often asked if they could provide patches for older versions or at least review my backports. Sometimes they were available, but most of the time, the responsibility remained on us.

  • Diverse tech stack: The work exposed me to a wide range of programming languages and frameworks—Python, Java, C, Perl, and more. Unsurprisingly, some modern languages (like Go) are less prevalent in older releases like Jessie.

  • Security team interaction: I had frequent contact with the core Debian Security Team—the folks responsible for security in Debian stable. This gave me a broader perspective on how Debian handles security holistically.

In March 2025, I decided to scale back my involvement in the projects due to some changes in my personal life. Still, this experience has been one of the highlights of my career, and I would definitely recommend it to others.

I’m very grateful for the warm welcome I received from the LTS/ELTS community, and I don’t rule out the possibility of rejoining the LTS/ELTS efforts in the future.

The Debian LTS/ELTS projects are currently coordinated by folks at Freexian. Many thanks to Freexian and sponsors for providing this opportunity!

17 April, 2025 09:00AM

April 15, 2025

hackergotchi for Jonathan Dowland

Jonathan Dowland

submitted

Today I submitted my PhD thesis, 8 years since I started (give or take). Next step, Viva.

Normal service may resume shortly…

15 April, 2025 03:43PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel


AsioHeaders 1.30.2-1 on CRAN: New Upstream

Another new (stable) release of the AsioHeaders package arrived at CRAN just now. Asio provides a cross-platform C++ library for network and low-level I/O programming. It is also included in Boost – but requires linking when used as part of Boost. This standalone version of Asio is a header-only C++ library which can be used without linking (just like our BH package with parts of Boost).

The update last week, kindly prepared by Charlie Gao, had not covered one other nag discovered by CRAN. This new release, based on the current stable upstream release, addresses that.

The short NEWS entry for AsioHeaders follows.

Changes in version 1.30.2-1 (2025-04-15)

  • Upgraded to Asio 1.30.2 (Dirk in #13 fixing #12)

  • Added two new badges to README.md

Thanks to my CRANberries, there is a diffstat report for this release. Comments and suggestions about AsioHeaders are welcome via the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

15 April, 2025 11:05AM

Russell Coker

What Desktop PCs Need

It seems to me that we haven’t had much change in the overall design of desktop PCs since floppy drives were removed, and modern PCs still have bays the size of 5.25″ floppy drives despite having nothing modern that can fit in such spaces other than DVD drives (which aren’t really modern) and carriers for 4*2.5″ drives, both of which most people don’t use. We had the PC System Design Guide [1], last updated in 2001, which should have been updated more recently to address some of these issues; the thing that most people will find familiar in that standard is the colours for audio ports. Microsoft developed the Legacy Free PC [2] concept, which was a good one. There’s a lot of things that could be added to the list of legacy stuff to avoid: TPM 1.2, 5.25″ drive bays, inefficient PSUs, hardware that doesn’t sleep when idle or which prevents the CPU from sleeping, VGA and DVI ports, ethernet slower than 2.5Gbit, and video that doesn’t include HDMI 2.1 or DisplayPort 2.1 for 8K support. There are recently released high-end PCs on sale right now with 1Gbit ethernet as standard, and hardly any PCs support resolutions above 4K properly.

Here are some of the things that I think should be in a modern PC System Design Guide.

Power Supply

The power supply is a core part of the computer and its central location dictates the layout of the rest of the PC. GaN PSUs are more power efficient and therefore require less cooling. A 400W USB power supply is about 1/4 the size of a standard PC PSU and doesn’t have a cooling fan. A new PC standard should include less space for the PSU, except for systems with multiple CPUs or that are designed for multiple GPUs.

A Dell T630 server has an option of a 1600W PSU that is 20*8.5*4cm = 680cc. The typical dimensions of an ATX PSU are 15*8.6*14cm = 1806cc. The SFX (small form factor variant of ATX) PSU is 12.5*6.3*10cm = 787cc. There is a reason for the ATX and SFX PSUs having a much worse ratio of power to size and that is the airflow. Server class systems are designed for good airflow and can efficiently cool the PSU with less space and they are also designed for uses where people are less concerned about fan noise. But the 680cc used for a 1600W Dell server PSU that predates GaN technology could be used for a modern GaN PSU that supplies the ~600W needed for a modern PC while being quiet. There are several different smaller size PSUs for name-brand PCs (where compatibility with other systems isn’t needed) that have been around for ~20 years but there hasn’t been a standard so all white-box PC systems have had really large PSUs.
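Those volume figures are easy to sanity-check; the sketch below just redoes the arithmetic in Python (the ~600W figure for the ATX/SFX rows is the article’s estimate of what a modern PC needs, not a nameplate rating):

```python
# Power density (watts per cubic cm) for the PSU form factors above.
# All dimensions are in cm, as quoted in the text.
psus = {
    "Dell T630 1600W": (1600, 20 * 8.5 * 4),     # 680 cc
    "ATX (~600W)":     (600,  15 * 8.6 * 14),    # 1806 cc
    "SFX (~600W)":     (600,  12.5 * 6.3 * 10),  # 787.5 cc
}
for name, (watts, volume) in psus.items():
    print(f"{name}: {volume:.0f} cc, {watts / volume:.2f} W/cc")
```

The server PSU works out to roughly seven times the power density of the ATX unit, which is the airflow argument expressed in numbers.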

PCs need USB-C PD ports that can charge a laptop etc. There are phones that can draw 80W for fast charging and it’s not unreasonable to expect a PC to be able to charge a phone at its maximum speed.

GPUs should have USB-C alternate mode output and support full USB functionality over the cable as well as PD that can power the monitor. Having a monitor with a separate PSU, a HDMI or DP cable to the PC, and a USB cable between PC and monitor is an annoyance. There should be one cable between PC and monitor, and then keyboard, mouse, etc should connect to the monitor.

All devices that are connected to a PC should use USB-C for power connection. That includes monitors that are using HDMI or DisplayPort for video, desktop switches, home Wifi APs, printers, and speakers (even when using line-in for the audio signal). The European Commission Common Charger Directive is really good but it only covers portable devices, keyboards, and mice.

Motherboard Features

Latest versions of Wifi and Bluetooth on the motherboard (this is becoming a standard feature).

On motherboard video that supports 8K resolution. An option of a PCIe GPU is a good thing to have but it would be nice if the motherboard had enough video capabilities to satisfy most users. There are several options for video that have a higher resolution than 4K and making things just work at 8K means that there will be less e-waste in future.

ECC RAM should be a standard feature on all motherboards, having a single bit error cause a system crash is a MS-DOS thing, we need to move past that.

There should be built-in hardware for monitoring the system status that is better than BIOS beeps on boot. Lenovo laptops have a feature where the BIOS plays a tune on a serious error, with an Android app to decode the meaning of the tune; we could have a standard for this. For desktop PCs there should be a standard for LCD status displays similar to the ones on servers; this would be cheap if everyone did it.

Case Features

The way the Framework Laptop can be expanded with modules is really good [3]. There should be something similar for PC cases. While you can buy USB devices for these things, they are messy and risk getting knocked out of their sockets when moving cables around. The Framework laptop expansion cards are much more expensive than other devices with similar functions that are aimed at a mass market, but if there were a standard for PCs then the devices to fit them would become cheap.

The PC System Design Guide specifies colors for ports (which is good) but not the feel of them. While some ports like Ethernet ports allow someone to feel which way the connector should go it isn’t possible to easily feel which way a HDMI or DisplayPort connector should go. It would be good if there was a standard that required plastic spikes on one side or some other way of feeling which way a connector should go.

GPU Placement

In modern systems it’s fairly common to have a high heatsink on the CPU with a fan to blow air in at the front and out the back of the PC. The GPU (which often dissipates twice as much heat as the CPU) has fans blowing air in sideways and not out the back. This gives some sort of compromise between poor cooling and excessive noise. What we need is to have air blown directly through a GPU heatsink and out of the case. One option for a tower case that needs minimal changes is to have the PCIe slot nearest the bottom of the case used for the GPU and have a grille in the bottom to allow air to go out, the case could have feet to keep it a few cm above the floor or desk. Another possibility is to have a PCIe slot parallel to the rear surface of the case (right angles to the other PCIe slots).

A common case with desktop PCs is to have the GPU use more than half the total power of the PC. The placement of the GPU shouldn’t be an afterthought, it should be central to the design.

Is a PCIe card even a good way of installing a GPU? Could we have a standard GPU socket on the motherboard next to the CPU socket and use the same type of heatsink and fan for GPU and CPU?

External Cooling

There are a range of aftermarket cooling devices for laptops that push cool air in the bottom or suck it out the side. We need to have similar options for desktop PCs. I think it would be ideal to have standard attachments for airflow on the front and back of tower PCs. The larger a fan is, the slower it can spin to give the same airflow and therefore the less noise it will produce. Instead of just relying on 10cm fans at the front and back of a PC to push air in and suck it out, you could have a conical rubber duct connected to a 30cm diameter fan. That would allow quieter fans to do most of the work in pushing air through the PC and also allow the hot air to be directed somewhere suitable. When doing computer work in summer it’s not great to have a PC sending 300+W of waste heat into the room you are in. If it could be directed out a window that would be good.

Noise

For restricting noise of PCs we have industrial relations legislation that seems to basically require that workers not be exposed to noise louder than a blender, so if a PC is quieter than that then it’s OK. For name brand PCs there are specs about how much noise is produced, but there are usually caveats like “under typical load” or “with a typical feature set” that excuse them from liability if the noise is louder than expected. It doesn’t seem possible for someone to own a PC, determine that its noise level is acceptable, and then buy another that is close to the same.

We need regulations about this, and the EU seems the best jurisdiction for it as they cover the purchase of a lot of computer equipment that is also sold without change in other countries. The regulations need to also cover updates, for example I have a Dell T630 which is unreasonably loud and Dell support doesn’t have much incentive to be particularly helpful about it. BIOS updates routinely tweak things like fan speeds without the developers having an incentive to keep it as quiet as it was when it was sold.

What Else?

Please comment about other things you think should be standard PC features.

15 April, 2025 10:19AM by etbe

April 13, 2025

hackergotchi for Daniel Pocock

Daniel Pocock

Influencers: Red Hat, Inc’s IPO, 1999, post-mortem on the directed share offer to open source developer community

Red Hat, Inc decided to list shares on the stock market for the first time in 1999.

The plan was announced in June 1999. Around 21 July 1999, Red Hat sent a private email to external developers and volunteers offering them the opportunity to buy the shares at the opening price.

Subject: A personal invitation from Red Hat

Dear open source community member: In appreciation of your contribution
to the open source community, Red Hat is pleased to offer you this personal,
non-transferable, opportunity.

Red Hat couldn't have grown this far without the ongoing help and
support of the open source community,

...

Therefore, we have reserved a portion of the stock in our offering for
distribution online to certain members of the open source community.
We invite you to participate.

...

It is important to emphasize that Red Hat was not giving the shares away for free. People had to pay for them. They were not stock options either, they were full shares, which meant people had to submit payment for the full price of every share they wanted.

There were limits on the offer. According to the prospectus filed with the SEC, a total of 800,000 shares were reserved out of the 6 million shares in the IPO.

At the request of Red Hat, the underwriters have reserved up to 800,000 shares of common stock for sale at the initial public offering price through a directed share program, to directors, officers and employees of Red Hat and to open source software developers and other persons that Red Hat believes have contributed to the success of the open source software community and the growth of Red Hat.

The IPO took place on 11 August 1999.

On 17 August 1999, ZDNet published a report, "Linux hackers miss IPO boat". In their report, they tell us that the offer was sent to 3,500 open source developers and approximately 2,000 developers responded. Out of that, approximately 200 developers were rejected and didn't get any stock due to SEC regulations that protect inexperienced investors from this type of offer. The report notes that each developer was entitled to buy a minimum of 100 shares and a maximum of 400 shares.

If all 3,500 recipients had asked for 400 shares each that would have been a demand for 1.4 million shares, well in excess of the 800,000 shares reserved for the program. It appears that the reserved shares were not fully subscribed by volunteers and E*Trade started offering some of them to the rest of their customers.

In practice, it appears that 1,800 people successfully asked for the shares but we don't know how many each person received. It is in the range from 180,000 to 720,000 shares. The developers were given shares at a price of $14 per share. The share price went up to $50 per share and a few days later it was $85.25 per share at the time of the ZDNet article. 720,000 shares at $85.25 per share means that volunteers had acquired $61.4 million of equity in Red Hat.
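The range quoted above can be reproduced with a few lines of Python, using only the article’s own figures:

```python
# 1,800 successful participants, each buying between 100 and 400 shares,
# valued at the $85.25 price quoted by ZDNet a few days after the IPO.
participants = 1800
min_shares = participants * 100
max_shares = participants * 400
print(min_shares, max_shares)       # 180000 720000
value_at_peak = max_shares * 85.25
print(f"${value_at_peak:,.0f}")     # $61,380,000, i.e. ~$61.4 million
```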

Under SEC rules, management can not publicly promote their company in the three months before an IPO and in the month after the IPO.

By sending this offer to 3,500 developers Red Hat was able to create an army of influencers who discussed the IPO far and wide.

Looking at the Slashdot archives, we find approximately a dozen different Slashdot reports about the IPO. Each of those reports has hundreds of comments.

Some related news reports:

Wired emphasizes many of the offers were sent to Debian Developers.

CNet published a report about the negative responses in the community.

C. Scott Ananian writes about jumping through hoops to get some shares and the pain of being excluded from the offer.

Sonar published an article looking back at the scheme and they include some fresh quotes from Bob Young.

Linux World published an article back in the day and then decided to hide it. The article is quoted in this 2018 blog post from Harish Pillay.

On the debian-private gossip network, of which thousands of messages have already been leaked for the period before the IPO, there was intense discussion about the proposition.

Subject: Re: RedHat Surprise
Date: Wed, 21 Jul 1999 09:34:29 -0500 (EST)
From: chris mckillop <cdmckill@warg.uwaterloo.ca>
Reply-To: chris mckillop <cdm@debian.org>
To: Ivan E. Moore II <rkrusty@tdyc.com>
CC: debian-private@lists.debian.org

On Wed, 21 Jul 1999, Ivan E. Moore II wrote:

> > > Would it be possible for uninterested Debian developers to pool their
> > > offers towards a common "Debian" buyout?  Perhaps we could end up with
> > > some kind of shareholder voting power?  (of course, then we'd need
> > > a new mailing list, debian-redhat-takeover)
> > > 
> > 
> > 	This would be amazing!  Is it legal??
> 
> <side note> Now, if Debian wanted to as a whole do this for the sole purpose
> of an investment and not for some monopolistic control standpoint than that
> would be a different story. (not saying that's what the intent was, but that's
> just what I fear).  </sn>
> 

	I am looking for a way to put say, $300 up into the IPO as
a Canadian.  With the offer RedHat is making, is it legal for
someone to do this?

	chris


^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
chris mckillop - cdm@debian.org    "The faster I go, the behinder I get."
Debian GNU/Linux                     -- Lewis Carroll http://www.debian.org/
Waterloo Aerial Robotics Group - http://ece.uwaterloo.ca/~warg/

Buying out a company is not so easy. A few months later, in November 1999, Red Hat simply created more shares, diluting the value of existing shares. The new shares were used to acquire a competitor, Cygnus. The cash raised from the IPO gave Red Hat the balance sheet to justify the takeover on terms that were good for Red Hat management.

The proposition sent to open source volunteers helped Red Hat identify potential recruits without having to go through recruitment agencies.

In modern times, we see a lot more news reports around the ethical issues facing influencers on social media. In 1999, there were no regulations for influencers. In fact, the term influencer didn't even exist in the sense that we know it today.

Another way to view this directed share offer: people owning the shares would be more favourable to Red Hat's interests and less likely to be one hundred percent objective in discussions about controversial topics like systemd. We could think of it as a form of social engineering attack.

We can see the impact in this email where people are asked to be polite to Red Hat:

Subject: RedHat Surprise
Date: Wed, 21 Jul 1999 00:09:50 -0500 (EST)
From: Matthew R. Pavlovich <mpav@purdue.edu>
To: debian-devel@lists.debian.org, debian-private@lists.debian.org

In light of RedHat's recent offer to many debian developers, I want to
make a suggestion to those who do not accept RedHat's method of making the
offer.  
Do not send back a nasty e-mail message cursing them for spamming.  If you
feel obligated to tell them your opinion, please consider using your
non-debian e-mail address.  It is very easy for people to mistaken the
opinion of one for the opinion of the whole.  EVERY e-mail message you
send with a debian.org e-mail address reflects Debian as an organization.  
Redhat did not have to include anyone in their offer.  This is a very
considerate and thoughtful gesture on their part.  Redhat is good for
Linux as a whole.  We may have some differences, but IMO they should not
be viewed as our enemy.  They are going to be investing a lot of money
into Linux development, which means good paying jobs for open source
developers.  
IF YOU READ ONE SECTION, READ THIS:
-------
I am not saying anyone is wrong for having an opinion or that they should not express it, the focus of my point is to stress taking into account
the fact that individual opinions are taken as opinions of the entire
organization.

 Matthew R. Pavlovich

In other words, the organization doesn't have a consistent definition of spam that we are willing to defend.

FOSDEM recently fell into the same trap, inviting Jack Dorsey for "bring your boss day". FOSDEM used to be an event for developers.

From time to time, discussions appear on Debian mailing lists about whether everybody should declare their conflicts of interest, that is, their employer, their shareholdings and their romantic partners who participate in the same community.

By creating these hidden financial relationships that span multiple free software projects, Red Hat was creating the foundation for cult-like behavior. As more and more serious transgressions have occurred over the years, people have routinely covered them up. People focus on protecting reputations over protecting the truth.

Please see the chronological history of how the Debian harassment and abuse culture evolved.

13 April, 2025 06:00PM

hackergotchi for Michael Prokop

Michael Prokop

OpenSSH penalty behavior in Debian/trixie #newintrixie

This topic came up at a customer of mine in September 2024, when working on Debian/trixie support. Since then I wanted to blog about it to make people aware of this new OpenSSH feature and behavior. I finally found some spare minutes at Debian’s BSP in Vienna, so here we are. :)

Some of our Q/A jobs failed to run against Debian/trixie, in the debug logs we found:

debug1: kex_exchange_identification: banner line 0: Not allowed at this time

This Not allowed at this time pointed to a new OpenSSH feature. OpenSSH introduced options to penalize undesirable behavior with version 9.8p1, see OpenSSH Release Notes, and also sshd source code.

FTR, on the SSH server side, you’ll see messages like that:

Apr 13 08:57:11 grml sshd-session[2135]: error: maximum authentication attempts exceeded for root from 10.100.15.42 port 55792 ssh2 [preauth]
Apr 13 08:57:11 grml sshd-session[2135]: Disconnecting authenticating user root 10.100.15.42 port 55792: Too many authentication failures [preauth]
Apr 13 08:57:12 grml sshd-session[2137]: error: maximum authentication attempts exceeded for root from 10.100.15.42 port 55800 ssh2 [preauth]
Apr 13 08:57:12 grml sshd-session[2137]: Disconnecting authenticating user root 10.100.15.42 port 55800: Too many authentication failures [preauth]
Apr 13 08:57:13 grml sshd-session[2139]: error: maximum authentication attempts exceeded for root from 10.100.15.42 port 55804 ssh2 [preauth]
Apr 13 08:57:13 grml sshd-session[2139]: Disconnecting authenticating user root 10.100.15.42 port 55804: Too many authentication failures [preauth]
Apr 13 08:57:13 grml sshd-session[2141]: error: maximum authentication attempts exceeded for root from 10.100.15.42 port 55810 ssh2 [preauth]
Apr 13 08:57:13 grml sshd-session[2141]: Disconnecting authenticating user root 10.100.15.42 port 55810: Too many authentication failures [preauth]
Apr 13 08:57:13 grml sshd[1417]: drop connection #0 from [10.100.15.42]:55818 on [10.100.15.230]:22 penalty: failed authentication
Apr 13 08:57:14 grml sshd[1417]: drop connection #0 from [10.100.15.42]:55824 on [10.100.15.230]:22 penalty: failed authentication
Apr 13 08:57:14 grml sshd[1417]: drop connection #0 from [10.100.15.42]:55838 on [10.100.15.230]:22 penalty: failed authentication
Apr 13 08:57:14 grml sshd[1417]: drop connection #0 from [10.100.15.42]:55854 on [10.100.15.230]:22 penalty: failed authentication

This feature certainly is useful and has its use cases. But if you, for example, run automated checks to ensure that specific logins aren’t working, be careful: you might hit the penalty feature, lock yourself out, and then find that consecutive checks don’t behave as expected. Your login checks might fail, but only because the penalty behavior kicks in; the login you’re verifying might still work underneath, you just aren’t actually checking for it. Furthermore, legitimate traffic from systems which accept connections from many users, or which sit behind shared IP addresses like NAT and proxies, could be denied.

To disable this new behavior, you can set PerSourcePenalties no in your sshd_config, but there are also further configuration options available, see PerSourcePenalties and PerSourcePenaltyExemptList settings in sshd_config(5) for further details.
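As a sketch, the relevant sshd_config lines might look like the following (the exempt subnet is a placeholder; see sshd_config(5) on your system for the authoritative syntax and value keywords):

```
# Turn the penalty mechanism off entirely:
PerSourcePenalties no

# Or keep penalties but exempt a trusted subnet, e.g. a CI network:
#PerSourcePenalties yes
#PerSourcePenaltyExemptList 10.100.15.0/24
```

Remember to reload sshd after changing the configuration.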

13 April, 2025 02:05PM by mika

hackergotchi for Ben Hutchings

Ben Hutchings

FOSS activity in March 2025

13 April, 2025 04:38AM by Ben Hutchings

FOSS activity in February 2025

13 April, 2025 04:30AM by Ben Hutchings

FOSS activity in November 2024

13 April, 2025 04:23AM by Ben Hutchings

April 11, 2025

Reproducible Builds

Reproducible Builds in March 2025

Welcome to the third report in 2025 from the Reproducible Builds project. Our monthly reports outline what we’ve been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. As usual, however, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website.

Table of contents:

  1. Debian bookworm live images now fully reproducible from their binary packages
  2. “How NixOS and reproducible builds could have detected the xz backdoor”
  3. LWN: Fedora change aims for 99% package reproducibility
  4. Python adopts PEP standard for specifying package dependencies
  5. OSS Rebuild real-time validation and tooling improvements
  6. SimpleX Chat server components now reproducible
  7. Three new scholarly papers
  8. Distribution roundup
  9. An overview of “Supply Chain Attacks on Linux distributions”
  10. diffoscope & strip-nondeterminism
  11. Website updates
  12. Reproducibility testing framework
  13. Upstream patches

Debian bookworm live images now fully reproducible from their binary packages

Roland Clobus announced on our mailing list this month that all the major desktop variants (ie. Gnome, KDE, etc.) can be reproducibly created for Debian bullseye, bookworm and trixie from their (pre-compiled) binary packages.

Building reproducible Debian live images does not require building from reproducible source code, but this is still a remarkable achievement. Some large proportion of the binary packages that comprise these live images can be (and were) built reproducibly, but live image generation works at a higher level. (By contrast, “full” or end-to-end reproducibility of a bootable OS image will, in time, require both the compile-the-packages and the build-the-bootable-image stages to be reproducible.)

Nevertheless, in response, Roland’s announcement generated significant congratulations as well as some discussion regarding the finer points of the terms employed: a full outline of the replies can be found here.

The news was also picked up by Linux Weekly News (LWN) as well as Hacker News.


How NixOS and reproducible builds could have detected the xz backdoor

Julien Malka aka luj published an in-depth blog post this month with the highly-stimulating title “How NixOS and reproducible builds could have detected the xz backdoor for the benefit of all”.

Starting with a dive into the relevant technical details of the XZ Utils backdoor, Julien’s article goes on to describe how we might avoid the xz “catastrophe” in the future by building software from trusted sources and building trust into untrusted release tarballs by way of comparing sources and leveraging bitwise reproducibility, i.e. applying the practices of Reproducible Builds.

The article generated significant discussion on Hacker News as well as on Linux Weekly News (LWN).


LWN: Fedora change aims for 99% package reproducibility

Linux Weekly News (LWN) contributor Joe Brockmeier has published a detailed round-up on how Fedora change aims for 99% package reproducibility. The article opens by mentioning that although Debian has “been working toward reproducible builds for more than a decade”, the Fedora project has now:

…progressed far enough that the project is now considering a change proposal for the Fedora 43 development cycle, expected to be released in October, with a goal of making 99% of Fedora’s package builds reproducible. So far, reaction to the proposal seems favorable and focused primarily on how to achieve the goal—with minimal pain for packagers—rather than whether to attempt it.

The Change Proposal itself is worth reading:

Over the last few releases, we [Fedora] changed our build infrastructure to make package builds reproducible. This is enough to reach 90%. The remaining issues need to be fixed in individual packages. After this Change, package builds are expected to be reproducible. Bugs will be filed against packages when an irreproducibility is detected. The goal is to have no fewer than 99% of package builds reproducible.

Further discussion can be found on the Fedora mailing list as well as on Fedora’s Discourse instance.


Python adopts PEP standard for specifying package dependencies

Python developer Brett Cannon reported on Fosstodon that PEP 751 was recently accepted. This design document has the purpose of describing “a file format to record Python dependencies for installation reproducibility”. As the abstract of the proposal writes:

This PEP proposes a new file format for specifying dependencies to enable reproducible installation in a Python environment. The format is designed to be human-readable and machine-generated. Installers consuming the file should be able to calculate what to install without the need for dependency resolution at install-time.

The PEP, which itself supersedes PEP 665, mentions that “there are at least five well-known solutions to this problem in the community”.
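For illustration, a minimal lock file in the pylock.toml format defined by PEP 751 might look like this (abridged and hypothetical; the package entry is an example, and the PEP itself is the authoritative reference for the schema):

```
lock-version = "1.0"
created-by = "pip"

[[packages]]
name = "requests"
version = "2.31.0"
```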


OSS Rebuild real-time validation and tooling improvements

OSS Rebuild aims to automate rebuilding upstream language packages (e.g. from PyPI, crates.io, npm registries) and publish signed attestations and build definitions for public use.

OSS Rebuild is now attempting rebuilds as packages are published, shortening the time to validating rebuilds and publishing attestations.

Aman Sharma contributed classifiers and fixes for common sources of non-determinism in JAR packages.

Improvements were also made to some of the core tools in the project:

  • timewarp for simulating the registry responses from sometime in the past.
  • proxy for transparent interception and logging of network activity.
  • and stabilize, yet another nondeterminism fixer.


SimpleX Chat server components now reproducible

SimpleX Chat is a privacy-oriented decentralised messaging platform that eliminates user identifiers and metadata, offers end-to-end encryption and has a unique approach to decentralised identity. Starting from version 6.3, however, Simplex has implemented reproducible builds for its server components. This advancement allows anyone to verify that the binaries distributed by SimpleX match the source code, improving transparency and trustworthiness.


Three new scholarly papers

Aman Sharma of the KTH Royal Institute of Technology of Stockholm, Sweden published a paper on Build and Runtime Integrity for Java (PDF). The paper’s abstract notes that “Software Supply Chain attacks are increasingly threatening the security of software systems” and goes on to compare build- and run-time integrity:

Build-time integrity ensures that the software artifact creation process, from source code to compiled binaries, remains untampered. Runtime integrity, on the other hand, guarantees that the executing application loads and runs only trusted code, preventing dynamic injection of malicious components.

Aman’s paper explores solutions to safeguard Java applications and proposes some novel techniques to detect malicious code injection. A full PDF of the paper is available.


In addition, Hamed Okhravi and Nathan Burow of Massachusetts Institute of Technology (MIT) Lincoln Laboratory along with Fred B. Schneider of Cornell University published a paper in the most recent edition of IEEE Security & Privacy on Software Bill of Materials as a Proactive Defense:

The recently mandated software bill of materials (SBOM) is intended to help mitigate software supply-chain risk. We discuss extensions that would enable an SBOM to serve as a basis for making trust assessments thus also serving as a proactive defense.

A full PDF of the paper is available.


Lastly, congratulations to Giacomo Benedetti of the University of Genoa for publishing their PhD thesis. Titled Improving Transparency, Trust, and Automation in the Software Supply Chain, Giacomo’s thesis:

addresses three critical aspects of the software supply chain to enhance security: transparency, trust, and automation. First, it investigates transparency as a mechanism to empower developers with accurate and complete insights into the software components integrated into their applications. To this end, the thesis introduces SUNSET and PIP-SBOM, leveraging modeling and SBOMs (Software Bill of Materials) as foundational tools for transparency and security. Second, it examines software trust, focusing on the effectiveness of reproducible builds in major ecosystems and proposing solutions to bolster their adoption. Finally, it emphasizes the role of automation in modern software management, particularly in ensuring user safety and application reliability. This includes developing a tool for automated security testing of GitHub Actions and analyzing the permission models of prominent platforms like GitHub, GitLab, and BitBucket.


Distribution roundup

In Debian this month:


The IzzyOnDroid Android APK repository reached another milestone in March, crossing the 40% coverage mark — specifically, more than 42% of the apps in the repository are now reproducible

Thanks to funding by NLnet/Mobifree, the project was also able to put more time into their tooling. For instance, developers can now easily run their own verification builder in “less than 5 minutes”. This currently supports Debian-based systems, but support for RPM-based systems is incoming. Further work is in the pipeline, including documentation, guidelines and helpers for debugging.


Fedora developer Zbigniew Jędrzejewski-Szmek announced a work-in-progress script called fedora-repro-build which attempts to reproduce an existing package within a Koji build environment. Although the project’s README file lists a number of fields that “will always or almost always vary” (and there are a non-zero number of other known issues), this is an excellent first step towards full Fedora reproducibility (see above for more information).


Lastly, in openSUSE news, Bernhard M. Wiedemann posted another monthly update for his work there.


An overview of Supply Chain Attacks on Linux distributions

Fenrisk, a cybersecurity risk-management company, has published a lengthy overview of Supply Chain Attacks on Linux distributions. Authored by Maxime Rinaudo, the article asks:

[What] would it take to compromise an entire Linux distribution directly through their public infrastructure? Is it possible to perform such a compromise as simple security researchers with no available resources but time?


diffoscope & strip-nondeterminism

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 290, 291, 292 and 293 to Debian:

  • Bug fixes:

    • file(1) version 5.46 now returns “XHTML document” for .xhtml files such as those found nested within our .epub tests. []
    • Also consider .aar files as APK files, at least for the sake of diffoscope. []
    • Require the new, upcoming, version of file(1) and update our quine-related testcase. []
  • Codebase improvements:

    • Ensure all calls to our_check_output in the ELF comparator have the potential CalledProcessError exception caught. [][]
    • Correct an import masking issue. []
    • Add a missing subprocess import. []
    • Reformat openssl.py. []
    • Update copyright years. [][][]

In addition, Ivan Trubach contributed a change to ignore the st_size metadata entry for directories as it is essentially arbitrary and introduces unnecessary or even spurious changes. []


Website updates

Once again, there were a number of improvements made to our website this month, including:


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In March, a number of changes were made by Holger Levsen, including:

  • reproduce.debian.net-related:

    • Add links to two related bugs about buildinfos.debian.net. []
    • Add an extra sync to the database backup. []
    • Overhaul description of what the service is about. [][][][][][]
    • Improve the documentation to indicate that we need to fix the synchronisation pipes. [][]
    • Improve the statistics page by breaking down output by architecture. []
    • Add a copyright statement. []
    • Add a space after the package name so one can search for specific packages more easily. []
    • Add a script to work around/implement a missing feature of debrebuild. []
  • Misc:

    • Run debian-repro-status at the end of the chroot-install tests. [][]
    • Document that we have unused diskspace at Ionos. []

In addition:

And finally, node maintenance was performed by Holger Levsen [][][] and Mattia Rizzolo [][].


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

11 April, 2025 10:00PM

hackergotchi for Bits from Debian

Bits from Debian

Bits from the DPL

Dear Debian community,

This is bits from the DPL for March (sorry for the delay, I was waiting for some additional input).

Conferences

In March, I attended two conferences, each with a distinct motivation.

I joined FOSSASIA to address the imbalance in geographical developer representation. Encouraging more developers from Asia to contribute to Free Software is an important goal for me, and FOSSASIA provided a valuable opportunity to work towards this.

I also attended Chemnitzer Linux-Tage, a conference I have been part of for over 20 years. To me, it remains a key gathering for the German Free Software community – a place where contributors meet, collaborate, and exchange ideas.

I have a remark about submitting an event proposal to both FOSDEM and FOSSASIA:

    Cross distribution experience exchange

As Debian Project Leader, I have often reflected on how other Free Software distributions address challenges we all face. I am interested in discussing how we can learn from each other to improve our work and better serve our users. Recognizing my limited understanding of other distributions, I aim to bridge this gap through open knowledge exchange. My hope is to foster a constructive dialogue that benefits the broader Free Software ecosystem. Representatives of other distributions are encouraged to participate in this BoF – whether as contributors or official co-speakers. My intention is not to drive the discussion from a Debian-centric perspective but to ensure that all distributions have an equal voice in the conversation.

This event proposal was part of my commitment from my 2024 DPL platform, specifically under the section "Reaching Out to Learn". Had it been accepted, I would have also attended FOSDEM. However, both FOSDEM and FOSSASIA rejected the proposal.

In hindsight, reaching out to other distribution contributors beforehand might have improved its chances. I may take this approach in the future if a similar opportunity arises. That said, rejecting an interdistribution discussion without any feedback is, in my view, a missed opportunity for collaboration.

FOSSASIA Summit

The 14th FOSSASIA Summit took place in Bangkok. As a leading open-source technology conference in Asia, it brings together developers, startups, and tech enthusiasts to collaborate on projects in AI, cloud computing, IoT, and more.

With a strong focus on open innovation, the event features hands-on workshops, keynote speeches, and community-driven discussions, emphasizing open-source software, hardware, and digital freedom. It fosters a diverse, inclusive environment and highlights Asia's growing role in the global FOSS ecosystem.

I presented a talk on Debian as a Global Project and led a packaging workshop. Additionally, to further support attendees interested in packaging, I hosted an extra self-organized workshop at a hacker café, initiated by participants eager to deepen their skills.

There was another Debian-related talk, given by Ananthu, titled "The Herculean Task of OS Maintenance - The Debian Way!"

To further my goal of increasing diversity within Debian – particularly by encouraging more non-male contributors – I actively engaged with attendees, seeking opportunities to involve new people in the project. Whether through discussions, mentoring, or hands-on sessions, I aimed to make Debian more approachable for those who might not yet see themselves as contributors. I was fortunate to have the support of Debian enthusiasts from India and China, who ran the Debian booth and helped create a welcoming environment for these conversations. Strengthening diversity in Free Software is a collective effort, and I hope these interactions will inspire more people to get involved.

Chemnitzer Linux-Tage

The Chemnitzer Linux-Tage (CLT) is one of Germany's largest and longest-running community-driven Linux and open-source conferences, held annually in Chemnitz since 2000. It has been my favorite conference in Germany, and I have tried to attend every year.

Focusing on Free Software, Linux, and digital sovereignty, CLT offers a mix of expert talks, workshops, and exhibitions, attracting hobbyists, professionals, and businesses alike. With a strong grassroots ethos, it emphasizes hands-on learning, privacy, and open-source advocacy while fostering a welcoming environment for both newcomers and experienced Linux users.

Despite my appreciation for the diverse and high-quality talks at CLT, my main focus was on connecting with people who share the goal of attracting more newcomers to Debian. Engaging with both longtime contributors and potential new participants remains one of the most valuable aspects of the event for me.

I was fortunate to be joined by Debian enthusiasts staffing the Debian booth, where I found myself among both experienced booth volunteers – who have attended many previous CLT events – and young newcomers. This was particularly reassuring, as I certainly can't answer every detailed question at the booth. I greatly appreciate the knowledgeable people who represent Debian at this event and help make it more accessible to visitors.

As a small point of comparison – while FOSSASIA and CLT are fundamentally different events – the gender ratio stood out. FOSSASIA had a noticeably higher proportion of women compared to Chemnitz. This contrast highlighted the ongoing need to foster more diversity within Free Software communities in Europe.

At CLT, I gave a talk titled "Tausend Freiwillige, ein Ziel" (Thousand Volunteers, One Goal), which was video recorded. It took place in the grand auditorium and attracted a mix of long-term contributors and newcomers, making for an engaging and rewarding experience.

Kind regards Andreas.

11 April, 2025 10:00PM by Andreas Tille

hackergotchi for Gunnar Wolf

Gunnar Wolf

Culture as a positive freedom

This post is an unpublished review for La cultura libre como libertad positiva
Please note: This review is not meant to be part of my usual contributions to ACM's «Computing Reviews». I do want, though, to share it with people that follow my general interests and such stuff.

This article was published almost a year ago, and I read it just after relocating from Argentina back to Mexico. I came from a country starting to realize the shock it meant to be ruled by an autocratic, extreme right-wing president willing to overrun its Legislature and bent on destroying the State itself — not too different from what we are now witnessing on a global level.

I have been a strong proponent and defender of Free Software and of Free Culture throughout my adult life. And I have been a Socialist since my early teenage years. I cannot say there is a strict correlation between them, but there is a big intersection of people and organizations who align with both sides — and Ártica (and Mariana Fossatti) are clearly among them.

Freedom is a word that has brought us many misunderstandings throughout the past many decades. We will say that Freedom can only be brought hand-in-hand with Equality, Fairness and Tolerance. But the extreme right wing (is it still bordering Fascism, or has it finally embraced it as its true self?) that has grown so much in many countries over the last years also seems to have appropriated the term, even taking it as their definition. In English (particularly, in USA English), liberty is a more patriotic term, and freedom is more personal (although the term used for the market is free market); in Spanish, we conflate them both under libre.

Mariana refers to a third blog, by Rolando Astarita, where the author introduces the concepts of positive and negative freedom/liberties. Astarita characterizes negative freedom as an individual’s possibility to act without interference or coercion, limited only by other people’s freedom, while positive freedom is the real capacity to exercise one’s autonomy and achieve self-realization; this does not depend on a person alone, but on various social conditions. Astarita understands the Marxist tradition to emphasize positive freedom.

Mariana brings this definition to our usual discussion on licensing: If we follow negative freedom, we will understand free licenses as the idea of access without interference to cultural or information goods, as long as it’s legal (in order not to infringe others’ property rights). Licensing is seen as a private matter, and each individual can grant access to and use of their works at will.

The previous definition might be enough for many but, she says, it is missing something important. The practical effect of many individuals renouncing a bit of control over their property rights produces, collectively, the common goods. These constitute a pool of knowledge or culture that is no longer an individual, contractual issue, but grows and becomes social, collective. Negative freedom goes no further, but positive liberty broadens the horizon, and takes us to a notion of free culture that, by strengthening the commons, widens social rights.

She closes the article by stating (and I’ll happily sign as if they were my own words) that we are Free Culture militants «not only because it affirms the individual sovereignty to deliver and receive cultural resources, in an intellectual property framework guaranteed by the state. Our militancy is of widening the cultural enjoying and participation to the collective through the defense of common cultural goods» (…) «We want to build Free Culture for a Free Society. But a Free Society is not a society of free owners, but a society emancipated from the structures of economic power and social privilege that block this potential collective».

11 April, 2025 02:41PM

hackergotchi for Bits from Debian

Bits from Debian

DebConf25 Registration and Call for Proposals are open

The 26th edition of the Debian annual conference will be held in Brest, France, from July 14th to July 20th, 2025. The main conference will be preceded by DebCamp, from July 7th to July 13th. We invite everyone interested to register for the event to attend DebConf25 in person. You can also submit a talk or event proposal if you're interested in presenting your work in Debian at DebConf25.

Registration can be done by creating an account on the DebConf25 website and clicking on "Register" in the profile section.

As always, basic registration is free of charge. If you are attending the conference in a professional capacity or as a representative of your company, we kindly ask that you consider registering in one of our paid categories. This helps cover the costs of organizing the event while also subsidizing the attendance of other community members. The last day to register with guaranteed swag is June 9th.

We encourage eligible individuals to apply for a diversity bursary. Travel, food, and accommodation bursaries are available. More details can be found on the bursary information page. The last day to apply for a bursary is April 14th. Applicants should receive feedback on their bursary application by April 25th.

The call for proposals for talks, discussions and other activities is also open. To submit a proposal, you need to create an account on the website and click the "Submit Talk Proposal" button in the profile section. The last day to submit and have your proposal considered for the main conference schedule, with video coverage guaranteed, is May 25th.

DebConf25 is also looking for sponsors; if you are interested or think you know of others who would be willing to help, please get in touch with sponsors@debconf.org.

All important dates can be found here.

See you in Brest!

11 April, 2025 10:00AM by Anupa Ann Joseph, Sahil Dhiman

April 10, 2025

Thorsten Alteholz

My Debian Activities in March 2025

Debian LTS

This was my hundred-twenty-ninth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:

  • [DLA 4096-1] librabbitmq security update to one CVE related to credential visibility when using tools on the command line.
  • [DLA 4103-1] suricata security update to fix several CVEs related to bypass of HTTP-based signatures, mishandling of multiple fragmented packets, logic errors, infinite loops, buffer overflows, unintended file access and use of large amounts of memory.

Last but not least I started to work on the second batch of fixes for suricata CVEs and attended the monthly LTS/ELTS meeting.

Debian ELTS

This month was the eightieth ELTS month. During my allocated time I uploaded or worked on:

  • [ELA-1360-1] ffmpeg security update to fix three CVEs in Stretch related to out-of-bounds read, assert errors and NULL pointer dereferences.
  • [ELA-1361-1] ffmpeg security update to fix four CVEs in Buster related to out-of-bounds read, assert errors and NULL pointer dereferences.
  • [ELA-1362-1] librabbitmq security update to fix two CVEs in Stretch and Buster related to heap memory corruption due to integer overflow and credential visibility when using the tools on the command line.
  • [ELA-1363-1] librabbitmq security update to fix one CVE in Jessie related to credential visibility when using the tools on the command line.
  • [ELA-1367-1] suricata security update to fix five CVEs in Buster related to bypass of HTTP-based signature, mishandling of multiple fragmented packets, logic errors, infinite loops and buffer overflows.

Last but not least I started to work on the second batch of fixes for suricata CVEs and attended the monthly LTS/ELTS meeting.

Debian Printing

This month I uploaded new packages or new upstream or bugfix versions of:

  • cups-filters to make it work with a new upstream version of qpdf again.

This work is generously funded by Freexian!

Debian Matomo

This month I uploaded new packages or new upstream or bugfix versions of:

This work is generously funded by Freexian!

Debian Astro

This month I uploaded new packages or new upstream or bugfix versions of:

Unfortunately I had a rather bad experience with package hijacking this month. Of course errors can always happen, but when I am forced into a discussion about the advantages of hijacking, I am speechless about such self-centered behavior. Oh fellow Debian Developers, is it really that hard to acknowledge a fault and tidy up afterwards? What a sad trend.

Debian IoT

Unfortunately I didn’t find any time to work on this topic.

Debian Mobcom

This month I uploaded new upstream or bugfix versions of almost all packages. First I uploaded them to experimental and afterwards to unstable to get the latest upstream versions into Trixie.

misc

This month I uploaded new packages or new upstream or bugfix versions of:

meep and meep-mpi-default are no longer supported on 32bit architectures.

FTP master

This month I accepted 343 and rejected 38 packages. The overall number of packages that got accepted was 347.

10 April, 2025 10:42PM by alteholz

John Goerzen

Announcing the NNCPNET Email Network

From 1995 to 2019, I ran my own mail server. It began with a UUCP link, an expensive long-distance call for me then. Later, I ran a mail server in my apartment, then ran it as a VPS at various places.

But running an email server got difficult. You can’t just run it on a residential IP. Now there’s SPF, DKIM, DMARC, and TLS to worry about. I recently reviewed mail hosting services, and don’t get me wrong: I still use one, and probably will, because things like email from my bank are critical.

But we’ve lost the ability to tinker, to experiment, to have fun with email.

Not anymore. NNCPNET is an email system that runs atop NNCP. I’ve written a lot about NNCP, including a less-ambitious article about point-to-point email over NNCP 5 years ago. NNCP is to UUCP what ssh is to telnet: a modernization, with modern security and features. NNCP is an asynchronous, onion-routed, store-and-forward network. It can use as a transport anything from the Internet to a USB stick.

NNCPNET is a set of standards, scripts, and tools to facilitate a broader email network using NNCP as the transport. You can read more about NNCPNET on its wiki!

The “easy mode” is to use the Docker container (multi-arch, so you can use it on your Raspberry Pi) I provide, which bundles:

  • Exim mail server
  • NNCP
  • Verification and routing tools I wrote. Because NNCP packets are encrypted and signed, we get sender verification “for free”; my tools ensure the From: header corresponds with the sending node.
  • Automated nodelist tools; it will request daily nodelist updates and update its configurations accordingly, so new members can be communicated with
  • Integration with the optional, opt-in Internet email bridge
It is open to all. The homepage has a more extensive list of features.

I even have mailing lists running on NNCPNET; see the interesting addresses page for more details.

There is extensive documentation, and of course the source to the whole thing is available.

The gateway to Internet SMTP mail is off by default, but can easily be enabled for any node. It is a full participant, in both directions, with SPF, DKIM, DMARC, and TLS.

You don’t need any inbound ports for any of this. You don’t need an always-on Internet connection. You don’t even need an Internet connection at all. You can run it from your laptop and still use Thunderbird to talk to it via its optional built-in IMAP server.

10 April, 2025 12:52AM by John Goerzen

April 09, 2025

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

AsioHeaders 1.28.2-1 on CRAN: New Upstream

A new release of the AsioHeaders package arrived at CRAN earlier today. Asio provides a cross-platform C++ library for network and low-level I/O programming. It is also included in Boost – but requires linking when used as part of Boost. This standalone version of Asio is a header-only C++ library which can be used without linking (just like our BH package with parts of Boost).

This update brings a new upstream version which helps the three dependent packages using AsioHeaders to remain compliant at CRAN, and has been prepared by Charlie Gao. Otherwise I made some routine updates to the packaging since the last release in late 2022.

The short NEWS entry for AsioHeaders follows.

Changes in version 1.28.2-1 (2025-04-08)

  • Standard maintenance to CI and other packaging aspects

  • Upgraded to Asio 1.28.2 (Charlie Gao in #11 fixing #10)

Thanks to my CRANberries, there is a diffstat report for this release. Comments and suggestions about AsioHeaders are welcome via the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.

09 April, 2025 01:50AM

April 05, 2025

Russell Coker

HP z840

Many PCs with DDR4 RAM have started going cheap on ebay recently. I don’t know how much of that is due to Windows 11 hardware requirements and how much is people replacing DDR4 systems with DDR5 systems.

I recently bought a z840 system on eBay; it’s much like the z640 that I recently made my workstation [1] but is designed strictly as a 2-CPU system. The z640 can run with 2 CPUs if you have a special expansion board for a second CPU, which is very expensive on eBay and which doesn’t appear to have good airflow potential for cooling. The z840 also has a slightly larger case which supports more DIMM sockets and allows better cooling.

The z640 and z840 take the same CPUs if you use the E5-2xxx series of CPU that is designed for running in 2-CPU mode. The z840 runs DDR4 RAM at 2400 as opposed to 2133 for the z640 for reasons that are not explained. The z840 has more PCIe slots which includes 4*16x slots that support bifurcation.

The z840 that I have has the HP Z-Cooler [2] installed. The coolers are mounted on a 45 degree angle (the model depicted at the right top of the first page of that PDF) and the system has a CPU shroud with fans that mount exactly on top of the CPU heatsinks and duct the hot air out without going over other parts. The technology of the z840 cooling is very impressive. When running two E5-2699A CPUs which are listed as “145W typical TDP” with all 44 cores in use the system is very quiet. It’s noticeably louder than the z640 but is definitely fine to have at your desk. In a typical office you probably wouldn’t hear it when it’s running full bore. If I was to have one desktop PC or server in my home the z840 would definitely be the machine I choose for that.

I decided to make the z840 a build server to share the resource with friends and to use for group coding projects. I often have friends visit with laptops to work on FOSS stuff and a 44 core build server is very useful for that.

The system is by far the fastest system I’ve ever owned even though I don’t have fast storage for it yet. But 256G of RAM allows enough caching that storage speed doesn’t matter too much.

Here is building the SE Linux “refpolicy” package on the z640 with E5-2696 v3 CPU and the z840 with two E5-2699A v4 CPUs:

257.10user 47.18system 1:40.21elapsed 303%CPU (0avgtext+0avgdata 416408maxresident)k
66904inputs+1519912outputs (74major+8154395minor)pagefaults 0swaps

222.15user 24.17system 1:13.80elapsed 333%CPU (0avgtext+0avgdata 416192maxresident)k
5416inputs+0outputs (64major+8030451minor)pagefaults 0swaps

Here is building Warzone2100 on the z640 and the z840:

6887.71user 178.72system 16:15.09elapsed 724%CPU (0avgtext+0avgdata 1682160maxresident)k
1555480inputs+8918768outputs (114major+27133734minor)pagefaults 0swaps

6055.96user 77.05system 8:00.20elapsed 1277%CPU (0avgtext+0avgdata 1682100maxresident)k
117640inputs+0outputs (46major+11460968minor)pagefaults 0swaps

It seems that the refpolicy package can’t use many more than 18 cores as it is only 37% faster when building with 44 cores available. Building Warzone is slightly more than twice as fast so it can really use all the available cores. According to Passmark the E5-2699A v4 is 22% faster than the E5-2696 v3.
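As a quick sanity check on those percentages, the wall-clock speedups can be computed directly from the time(1) `elapsed` figures quoted above. This is just a back-of-the-envelope sketch (not part of the original benchmarks) with a hypothetical `speedup` helper:

```python
def speedup(old_elapsed: str, new_elapsed: str) -> float:
    """Ratio of old to new wall-clock time, parsed from time(1) 'M:SS.ss' output."""
    def secs(t: str) -> float:
        minutes, seconds = t.split(":")
        return int(minutes) * 60 + float(seconds)
    return secs(old_elapsed) / secs(new_elapsed)

# refpolicy: z640 (18 cores) vs z840 (44 cores) -- scales poorly
print(f"refpolicy: {speedup('1:40.21', '1:13.80'):.2f}x")   # prints 1.36x
# Warzone 2100: slightly better than 2x -- scales with the extra cores
print(f"warzone:   {speedup('16:15.09', '8:00.20'):.2f}x")  # prints 2.03x
```

The ~1.36x figure is where the "only 37% faster" observation comes from, while the Warzone build gets the full benefit of doubling the core count.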

I highly recommend buying a z640 if you see one at a good price.

05 April, 2025 10:52AM by etbe

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Cisco 2504 password extraction

I needed this recently, so I took a trip into Ghidra and learned enough to pass it on:

If you have an AireOS-based wireless controller (Cisco 2504, vWLC, etc.; basically any of the now-obsolete Cisco WLC series), and you need to pick out the password, you can go look in the XML files in /mnt/application/xml/aaaapiFileDbCfgData.xml (if you have a 2504, you can just take out the CompactFlash card and mount the fourth partition or run strings on it; if it's a vWLC you can use the disk image similarly). You will find something like (hashes have been changed to not leak my own passwords :-) ):

    <userDatabase index="0" arraySize="2048">
      <userName>61646d696e000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000</userName>
      <serviceType>6</serviceType>
      <WLAN-id>0</WLAN-id>
      <accountCreationTimestamp>946686833</accountCreationTimestamp>
      <passwordStore>
        <ps_type>PS_STATIC_AES128CBC_SHA1</ps_type>
        <iv>3f7b4fcfcd3b944751a8614ebf80a0a0</iv>
        <mac>874d482bbc56b24ee776e80bbf1f5162</mac>
        <max_passwd_len>50</max_passwd_len>
        <passwd_len>16</passwd_len>
        <passwd>8614c0d0337989017e9576b82662bc120000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000</passwd>
      </passwordStore>
      <telnetEnable>1</telnetEnable>
    </userDatabase>

“userName” is obviously just “admin” in plain hex. Ignore the HMAC; it's seemingly only used for integrity checking. The password is encrypted with a static key embedded in /sbin/switchdrvr, namely 834156f9940f09c0a8d00f019f850005. So you can just ask OpenSSL to decrypt it:

> printf $( echo '8614c0d0337989017e9576b82662bc12' | sed 's/\(..\)/\\x&/g' ) | openssl aes-128-cbc -d -K 834156f9940f09c0a8d00f019f850005 -iv 3f7b4fcfcd3b944751a8614ebf80a0a0 | xxd -g 1
00000000: 70 61 73 73 77 6f 72 64                          password

And voila. (There are some other passwords floating around there in the XML files, where I believe that this master key is used to encrypt other keys, and occasionally things seem to be double-hex-encoded, but I haven't really bothered looking at it.)

When you have the actual key, it’s easy to just search for it, and it turns out others have found the same thing, although for “show run” output, which is why searching for e.g. “PS_STATIC_AES128CBC_SHA1” found nothing. But now at least you know.

Update: Just to close the loop: The contents of <mac> is a HMAC-SHA1 of a concatenation of 00 00 00 01 <iv> <passwd> (supposedly maybe 01 00 00 00 instead, depending on the endianness of the underlying system; both MIPS and x86 controllers exist), where <passwd> is the encrypted password (without the extra tacked-on zeros), and the HMAC key is 44C60835E800EC06FFFF89444CE6F789. So it’s doubly useless for password cracking; just decrypt the password directly instead. :-)
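Based on the description above, the MAC computation can be sketched with Python’s standard library alone. Note that `aireos_mac` is a hypothetical helper name, and since the hashes in the post were deliberately altered by the author, the sample values here will not verify against the quoted `<mac>`:

```python
import hmac
import hashlib

# Static HMAC key embedded in /sbin/switchdrvr, as described in the update above
HMAC_KEY = bytes.fromhex("44C60835E800EC06FFFF89444CE6F789")

def aireos_mac(iv_hex: str, enc_passwd_hex: str, big_endian: bool = True) -> str:
    """Compute HMAC-SHA1 over 00 00 00 01 || <iv> || <encrypted passwd>.

    enc_passwd_hex is the encrypted password WITHOUT the tacked-on zero padding.
    On some platforms the 4-byte prefix is reportedly 01 00 00 00 instead.
    """
    prefix = b"\x00\x00\x00\x01" if big_endian else b"\x01\x00\x00\x00"
    msg = prefix + bytes.fromhex(iv_hex) + bytes.fromhex(enc_passwd_hex)
    return hmac.new(HMAC_KEY, msg, hashlib.sha1).hexdigest()

# Values from the (altered) XML snippet in the post:
iv = "3f7b4fcfcd3b944751a8614ebf80a0a0"
enc_pw = "8614c0d0337989017e9576b82662bc12"
print(aireos_mac(iv, enc_pw))
```

To check a real controller, you would compare the computed digest against the `<mac>` field; as noted, though, decrypting the password directly is the more useful operation.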

05 April, 2025 09:57AM

Russell Coker

More About the HP ML110 Gen9 and z640

In May 2021 I bought a ML110 Gen9 to use as a deskside workstation [1]. I started writing this post in April 2022 when it had been my main workstation for almost a year. While this post was in a draft state in Feb 2023 I upgraded it to an 18 core E5-2696 v3 CPU [2]. It’s now March 2025 and I have replaced it.

Hardware Issues

My previous state with this was not having adequate cooling to allow it to boot and not having a PCIe power cable for a video card. As an experiment I connected the CPU fan to the PCIe fan power and discovered that all power and monitoring wires for the CPU and PCIe fans are identical. This allowed me to buy a CPU fan which was cheaper ($26.09 including postage) and easier to obtain than a PCIe fan (presumably due to CPU fans being more commonly used and manufactured in larger quantities). I had to be creative in attaching the CPU fan as its cable wasn’t long enough to reach the usual location for a PCIe fan. The PCIe fan also required a baffle to direct the air to the right place which annoyingly HP apparently doesn’t ship with the low end servers, so I made one from a Corn Flakes packet and duct tape.

The Wikipedia page listing AMD GPUs lists many newer ones that draw less than 80W and don’t need a PCIe power cable. I ordered a Radeon RX560 4G video card which cost $246.75. It only uses 8 lanes of PCIe but that’s enough for me, the only 3D game I play is Warzone 2100 which works well at 4K resolution on that card. It would be really annoying if I had to just spend $246.75 to get the system working, but I had another system in need of a better video card which had a PCIe power cable so the effective cost was small. I think of it as upgrading 2 systems for $123 each.

The operation of the PCIe video card was a little different than non-server systems. The built in VGA card displayed the hardware status at the start and then kept displaying that after the system had transitioned to PCIe video. This could be handy in some situations if you know what it’s doing but was confusing initially.

Booting

One insidious problem is that when booting in “legacy” mode the boot process takes an unreasonably long time and often hangs; the UEFI implementation on this system seems much more reliable and also supports booting from NVMe.

Even with UEFI the boot process on this system was slow. Also the early stage of the power-on process has the fans off and the power light flickering, which leads you to think that it's not booting and needs the power button pressed again – which turns it off. The Dell power-on sequence of turning most LEDs on and instantly running the fans at high speed leaves no room for misunderstanding. This is also something that companies making electric cars could address: when turning on a machine you should never be left wondering whether it is actually on.

Noise

This was always a noisy system. When I upgraded the CPU from an 8 core with 85W “typical TDP” to an 18 core with 145W “typical TDP” it became even louder. Then over time as dust accumulated inside the machine it became louder still until it was annoyingly loud outside the room when all 18 cores were busy.

Replacement

I recently blogged about options for getting 8K video to work on Linux [3]. This requires a PCIe power cable, which the z640s have (all the ones I have seen have it, though I don't know whether every one HP made does) and which the cheaper models in the ML110 line don't have. Since then I have ordered an Intel Arc card which apparently has a 190W TDP. There are adaptors to provide PCIe power from SATA or SAS power which I could have used, but having an E5-2696 v3 CPU that draws 145W [4] and a GPU that draws 190W [4] in a system with a 350W PSU doesn't seem viable.
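The power budget reasoning can be sketched as a quick arithmetic check, using only the nominal TDP figures quoted above (real peak draw and PSU derating would make the picture even worse):

```python
# Nominal power figures quoted in the post (watts)
cpu_tdp = 145      # E5-2696 v3 "typical TDP"
gpu_tdp = 190      # Intel Arc card
psu_rating = 350   # ML110 Gen9 power supply

combined = cpu_tdp + gpu_tdp          # 335 W for CPU + GPU alone
headroom = psu_rating - combined      # 15 W left for everything else
print(f"CPU+GPU: {combined} W, headroom: {headroom} W")
```

Only 15W would remain for the motherboard, RAM, storage, and fans, before even considering transient load spikes, so the combination clearly isn't viable on that PSU.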

I replaced it with one of the HP z640 workstations I got in 2023 [5].

The current configuration of the z640 has 3*32G RDIMMs compared to the ML110 having 8*32G; going from 256G to 96G is a significant decrease, but most tasks run well enough like that. A limitation of the z640 is that when run with a single CPU it only has 4 DIMM slots, which gives a maximum of 512G if you get 128G LRDIMMs; but as all DDR4 DIMMs larger than 32G are unreasonably expensive at this time, the practical limit is 128G (which costs about $120AU). In this case I have 96G because the system I'm using has a motherboard problem which makes the fourth DIMM slot unusable. Currently my desire to get more than 96G of RAM is less than my desire to avoid swapping CPUs.

At this time I’m not certain that I will make my main workstation the one that talks to an 8K display. But I really want to keep my options open and there are other benefits.

The z640 boots faster. It supports PCIe bifurcation (with a recent BIOS), so I now have 4 NVMe devices in a single PCIe slot. It is very quiet; the difference is shocking. I initially found it disconcertingly quiet.

The biggest problem with the z640 is having only 4 DIMM sockets, and the particular one I'm using has a problem limiting it to 3. Another problem with the z640 compared to the ML110 Gen9 is that it runs the RAM at 2133 while the ML110 runs it at 2400, which is a significant performance reduction. But the benefits outweigh the disadvantages.
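The 2133 vs 2400 difference can be quantified as theoretical per-channel bandwidth. This is a simplified sketch: one 64-bit DDR4 channel, ignoring channel count, ECC, and real-world efficiency:

```python
def ddr4_channel_bandwidth_gb_s(transfer_rate_mt_s):
    """Theoretical bandwidth of one 64-bit (8-byte wide) DDR4 channel."""
    return transfer_rate_mt_s * 8 / 1000  # MT/s * 8 bytes -> GB/s

bw_2400 = ddr4_channel_bandwidth_gb_s(2400)  # 19.2 GB/s (ML110 Gen9)
bw_2133 = ddr4_channel_bandwidth_gb_s(2133)  # ~17.1 GB/s (this z640)
loss_pct = 100 * (1 - bw_2133 / bw_2400)     # ~11% less peak bandwidth
print(f"2400 MT/s: {bw_2400:.1f} GB/s, 2133 MT/s: {bw_2133:.1f} GB/s")
```

Roughly an 11% cut in peak memory bandwidth per channel, which is why the slower RAM clock counts as a significant reduction.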

Conclusion

I have no regrets about buying the ML-110. It was the only DDR4 ECC system in the price range I wanted at the time. If I had known that the z640 systems would run so quietly I might have replaced it earlier. But it was only late last year that 32G DIMMs became affordable; before then I had 8*16G DIMMs to give 128G, because I had some issues with programs running out of memory when I had less.

05 April, 2025 09:13AM by etbe

April 04, 2025

hackergotchi for Gunnar Wolf

Gunnar Wolf

Naming things revisited

How long has it been since you last saw a conversation over different blogs syndicated at the same planet? Well, it’s one of the good memories of the early 2010s. And there is an opportunity to re-engage! 😃

I came across Evgeni's post “naming things is hard” in Planet Debian. So, what names have I given my computers?

I have had many since the mid-1990s. I also had several during the decade before that, but before Linux my computers didn't have formal names. Naming my computers something nice is a habit Linux gave me.

I have forgotten many. Some of the names I have used:

  • My years in Iztacala: I worked as a sysadmin between 1999 and 2003. When I arrived, we already had two servers, campus and tlali, and one computer pending installation, ollin. The credit for their names is not mine.
    • campus: A mighty SPARCstation 5! Because it was the main (and for some time, the only!) server in our campus.
    • tlali: A regular PC used as a Linux server. “Tlali” means something like lands in náhuatl, the prehispanic language spoken in central Mexico. My workplace was Iztacala, which translates as “the place where there are white houses”; “tlali” and “cali” are related words.
    • ollin: was a big IBM RS/6000 system running AIX. It came to us, probably already obsolete, as a (useless) donation from Fundación UNAM; I don’t recall the exact model, but it looked very much like this one. We had no software for it, and frankly… never really got it to be productive. Funnily, its name “Ollin” means “movement” in Náhuatl.
    I added some servers to the lineup during the two years I was in Iztacala:
    • tlamantli: An Alpha 21164 server that doubled as my desktop. Given the tradition in Iztacala of naming things in Náhuatl, but trying to be somewhat funny, tlamantli just means a thing; I understand the word is usually bound to a quantifier.
    • tepancuate: A regular PC system we set up with OpenBSD as a firewall. It means “wallâ€� in Náhuatl.
  • Following the first CONSOL (National Free Software Conference), I was invited to work as a programmer at UPN, Universidad Pedagógica Nacional in 2003–2004. There I was not directly in charge of any of the servers (I mostly used ajusco, managed by Víctor, named after the mountain on whose slopes our campus was). But my only computer there was:
    • shmate: meaning old rag in Yiddish. The word shmate is used like thingy, although it would usually mean an old and slightly worn-out thingy. It was quite a nice machine, though: a Pentium 4 with 512MB RAM, not bad for 2003!
  • I started my present work at Instituto de Investigaciones Económicas, UNAM 20 years ago(!), in 2005. Here I am a systems administrator, so naturally I am in charge of the servers. And over the years, we have had a fair share of machines:
    • mosca: is my desktop. It has changed hardware several times (of course) over the years, but it’s still the same Debian Sid install I did in January 2005 (I must have reinstalled once, when I got it replaced by an AMD64). Its name is the Spanish word for the common fly. I have often used it to describe my work, since in the early 1990s I got an automated bilingual translator called TRANSLATE; it came on seven 5.25” floppies. As a teenager, I somehow got my hands on a copy, and installed it on my 80386SX. Fed it its own README to see how it fared. And the first sentence made me burst out laughing: «TRANSLATE performs on the fly translation» ⇒ «TRADUCE realiza traducción sobre la mosca». Starting then, I always think of «on the fly» as «sobre la mosca». As Groucho said, I guess… Time flies like an arrow, but fruit flies like a banana.
    • lafa: When I got there, we didn’t have any servers; for some time, I took one of the computer lab’s systems to serve our web page and receive mail. But when we got some budget approved, we bought a fsckin-big server. Big as in four rack units. Double CPUs (not multicore, but two independent early Xeon CPUs, if I’m not mistaken; still, it was a 32-bit system). לאפה (lafa) is a big, more flexible kind of Arab bread than pita; I loved it when I lived in Israel. And there is an album (and song) by Teapacks, an Israeli group I am very fond of, «hajaim shelja belafa» (your life in a lafa), saying, «hey, brother! Your life is in a lafa. You throw everything in a big pita. You didn’t have time to chew, you already swallowed it».
    • joma: Our firewall. חומה means wall in Hebrew.
    • baktun: lafa was great, but over the years, it got old. After many years, I finally got the Institute to buy a second server. We got it in December 2012. There was a lot of noise around then because the world was supposed to end on 2012.12.21, as the Mayan calendar reached a full long cycle. This long cycle is called /baktun/. So, it was fitting as the name of the new server.
    • teom: As lafa was almost immediately decommissioned and turned into a virtual machine in the much bigger baktun, I wanted to split services, make off-hardware backups, and such. Almost two years later, my request was approved and we bought a second server. But instead of buying it from a “regular” provider, we got it off a stash of machines bought by our university’s central IT entity. To my surprise, it had the exact same hardware configuration as baktun, bought two years earlier. Even the serial number was absurdly close. So, I had it as baktun’s long-lost twin. Hence, תְאוֹם (transliterated as teom), the Hebrew word for twin. About a year after teom arrived in my life, my twin children were also born, but their naming followed a completely different logic process than my computers 😉
  • At home or on the road: I am sure I am missing several systems over the years.
    • pato: The earliest system I had that I remember giving a name to. I built a 80386SX in 1991, buying each component separately. The box had a 1-inch square for integrators to put their branding — And after some time, I carefully printed and applied a label that said Catarmáquina PATO (the first word, very small). Pato (duck) is how we’d call a no-brand system. Catarmáquina because it was the system where I ran my BBS, CatarSYS (1992-1994).
    • malenkaya: In 2008 I got a 9” Acer Aspire One netbook (Atom N270 i386, 1GB RAM). I really loved that machine! Although it was quite limited, it was my main computer while on the road for almost five years. malenkaya means small (feminine) in Russian.
    • matlalli: After malenkaya started being too limited for my regular use, I bought its successor Acer Aspire One model. This one was way larger (10.1-inch screen) and I wasn’t too happy about it at the beginning, but I ended up loving it. So much, in fact, that we bought at least four very similar computers for us and our family members. This computer was dirt cheap, and endured five further years of lugging everywhere. matlalli is due to its turquoise color: it is the Náhuatl word for blue or green.
    • cajita: In 2014 I got a beautiful Cubox i4 Pro computer. It took me some time to get it to boot and be generally useful, but it ended up being my home server for many years, until I had a power supply malfunction which bricked it. cajita means little box in Spanish.
    • pitentzin: Another 10.1” Acer Aspire One (the last in the lineup; the CPU is a Celeron 877, so it does run AMD64, and it supports up to 16GB RAM, I think I have it with 12). We originally bought it for my family in Argentina, but they didn’t really use it much, and after a couple of years we got it back. We decided it would be the computer for the kids, at least for the time being. And although it is a 2013 laptop, it’s still our everyday media station driver. Oh, and the name pitentzin? Náhuatl for /children/.
    • tliltik: In 2018, I bought a second-hand Thinkpad X230. It was my daily driver for about three years. I reflashed its firmware with CoreBoot, and repeated the experience for seven people (IIRC) at DebConf18. With it, I learned to love the Thinkpad keyboard. Naturally for a Thinkpad, tliltik means black in Náhuatl.
    • uesebe: When COVID struck, we were all sent home, and my university lent me a nice recently bought Intel i7 HP laptop. At first, I didn’t want to mess up its Windows install (so I set up a USB-drive-based installation, hence the name uesebe); when it was clear the lockdown was going to be long (and that tliltik had too many aches to be used for my daily work), I transferred the install to its HDD and used it throughout the pandemic, until mid 2022.
    • bolex: I bought this computer for my father in 2020. After he passed away in May 2022, I took his computer, and named it bolex because that’s the brand of the 8mm cinema camera he loved and had since 1955, and with which he created most of his films. It is really an entry-level machine, though (a single-core, dual-threaded Celeron), and it was too limited when I started distance-teaching again, so I had to store it as an emergency system.
    • yogurtu: During the pandemic, I spent quite a bit of time fiddling with the Raspberry Pi family. But all in all, while they are nice machines for many uses, they are too limited to be daily drivers, or even to take, say, to DebConf as my conference computer. I bought an almost-new-but-used (≈2 year old) Yoga C630 ARM laptop. I often brag about my happy experience with it, and how it brings a reasonably powerful ARM Linux system to my everyday life. In our last DebConf, I didn’t even pick up my USB-C power connector every day; the battery just lasts over ten hours of active work. But I’m not here doing ads, right? yogurtu naturally is derived from the Yoga brand it has, but is taken from Yogurtu Nghé, a fictional character by the Argentinian comical-musical group Les Luthiers, that has marked my life.
    • misnenet: Towards mid 2023, when it was clear that bolex would not be a good daily driver, and considering we would be spending six months in Argentina, I bought a new desktop system. It seems I have something for small computers: I decided on a refurbished HP EliteDesk 800 G5 Mini i7 system. I picked it because, at close to 18×18×3.5cm, it perfectly fits in my DebConf18 bag. A laptop, it is clearly not, but it can easily travel with me when needed. Oh, and the name? Because for this model, HP uses different enclosures based on the kind of processor: The i3 model has a flat, black aluminum top… But mine has lots of tiny holes, covering two areas of roughly 15×7cm, with a tiny hole every ~2mm, and with a solid strip between them. Of course, מִסנֶנֶת (misnenet, in Hebrew) means strainer.

04 April, 2025 07:17PM

hackergotchi for Guido Günther

Guido Günther

Booting an Android custom kernel on a Pixel 3a for QMI debugging

As you might know I'm not much of an Android user (let alone developer) but in order to figure out how something low level works you sometimes need to peek at how vendor kernels handle this. For that it is often useful to add additional debugging.

One such case is QMI communication going on in Qualcomm SOCs. Joel Selvaraj wrote some nice tooling for this.

To make use of this a rooted device and a small kernel patch are needed, and what would be a no-brainer with Linux Mobile took me a moment to get working on Android. Here are the steps I took on a Pixel 3a to first root the device via Magisk, then build the patched kernel and put that into a boot.img to boot it.

Flashing the factory image

If you still have Android on the device you can skip this step.

You can get Android 12 from developers.google.com. I've downloaded sargo-sp2a.220505.008-factory-071e368a.zip. Then put the device into Fastboot mode (Power + Vol-Down), connect it to your PC via USB, unzip/unpack the archive and reflash the phone:

unpack sargo-sp2a.220505.008-factory-071e368a.zip
./flash-all.sh

This wipes your device! I had to run it twice since it would time out on the first run. Note that this unpacked zip contains another zip (image-sargo-sp2a.220505.008.zip) which will become useful below.

Enabling USB debugging

Now boot Android and enable Developer mode by going to Settings → About, then touching Build Number (at the very bottom) 7 times.

Go back one level, then go to System → Developer Options and enable "USB Debugging".

Obtaining boot.img

There are several ways to get boot.img. If you just flashed Android above then you can fetch boot.img from the already mentioned image-sargo-sp2a.220505.008.zip:

unzip image-sargo-sp2a.220505.008.zip boot.img

If you want to fetch the exact boot.img from your device you can use TWRP (see the very end of this post).

Becoming root with Magisk

Being able to su via adb will later be useful to fetch kernel logs. For that we first download Magisk as an APK. At the time of writing v28.1 is current.

Once downloaded we upload the APK and the boot.img from the previous step onto the phone (which needs to have Android booted):

adb push Magisk-v28.1.apk /sdcard/Download
adb push boot.img /sdcard/Download

In Android open the Files app, navigate to /sdcard/Download and install the Magisk APK by opening the APK.

We now want to patch boot.img to get su via adb to work (so we can run dmesg). This happens by hitting Install in the Magisk app, then "Select a file to patch". You then select the boot.img we just uploaded.

The installation process will create a magisk_patched-<random>.img in /sdcard/Download. We can pull that file via adb back to our PC:

adb pull /sdcard/Download/magisk_patched-28100_3ucVs.img

Then reboot the phone into fastboot (adb reboot bootloader) and flash it (this is optional see below):

fastboot flash boot magisk_patched-28100_3ucVs.img

Now boot the phone again, open the Magisk app, go to SuperUser at the bottom and enable Shell.

If you now connect to your phone via adb again, su should work:

adb shell
su

As noted above if you want to keep your Android installation pristine you don't even need to flash this Magisk enabled boot.img. I've flashed it so I have su access for other operations too. If you don't want to flash it you can still test boot it via:

fastboot boot magisk_patched-28100_3ucVs.img

and then perform the same adb shell su check as above.

Building the custom kernel

For our QMI debugging to work we need to patch the kernel a bit and place that in boot.img too. So let's build the kernel first. For that we install the necessary tools (which are thankfully packaged in Debian) and fetch the Android kernel sources:

sudo apt install repo android-platform-tools-base kmod ccache build-essential mkbootimg
mkdir aosp-kernel && cd aosp-kernel
repo init -u https://android.googlesource.com/kernel/manifest -b android-msm-bonito-4.9-android12L
repo sync

With that we can apply Joel's kernel patches and also compile in the touch controller driver so we don't need to worry if the modules in the initramfs match the kernel. The kernel sources are in private/msm-google. I've just applied the diffs on top with patch and modified the defconfig and committed the changes. The resulting tree is here.

We then build the kernel:

PATH=/usr/sbin:$PATH ./build_bonito.sh

The resulting kernel is at ./out/android-msm-pixel-4.9/private/msm-google/arch/arm64/boot/Image.lz4-dtb.

In order to boot that kernel I found it to be the simplest to just replace the kernel in the Magisk patched boot.img as we have that already. In case you have already deleted that for any reason we can always fetch the current boot.img from the phone via TWRP (see below).

Preparing a new boot.img

To replace the kernel in our Magisk enabled magisk_patched-28100_3ucVs.img from above with the just built kernel, we can use mkbootimg. I basically copied the steps we're using when building the boot.img on the Linux Mobile side:

ARGS=$(unpack_bootimg --format mkbootimg --out tmp --boot_img magisk_patched-28100_3ucVs.img)
CLEAN_PARAMS="$(echo "${ARGS}" | sed -e "s/ --cmdline '.*'//" -e "s/ --board '.*'//")"
cp android-kernel/out/android-msm-pixel-4.9/private/msm-google/arch/arm64/boot/Image.lz4-dtb tmp/kernel
mkbootimg -o "boot.patched.img" ${CLEAN_PARAMS} --cmdline "${ARGS}"

This will give you a boot.patched.img with the just built kernel.

Boot the new kernel via fastboot

We can now boot the new boot.patched.img. No need to flash that onto the device for that:

fastboot boot boot.patched.img

Fetching the kernel logs

With that we can fetch the kernel logs with the debug output via adb:

adb shell su -c 'dmesg -t' > dmesg_dump.xml

or already filtering out the QMI commands:

adb shell su -c 'dmesg -t'  | grep "@QMI@" | sed -e "s/@QMI@//g" &> sargo_qmi_dump.xml

That's it. You can apply this method for testing out other kernel patches as well. If you want to apply the above to other devices you basically need to make sure you patch the right kernel sources, the other steps should be very similar.

In case you just need a rooted boot.img for sargo you can find a patched one here.

If this procedure can be improved / streamlined somehow please let me know.

Appendix: Fetching boot.img from the phone

If, for some reason you lost boot.img somewhere on the way you can always use TWRP to fetch the boot.img currently in use on your phone.

First get TWRP for the Pixel 3a. You can boot that directly by putting your device into fastboot mode, then running:

fastboot boot twrp-3.7.1_12-1-sargo.img

Within TWRP select Backup → Boot and back up the file. You can then use adb shell to locate the backup in /sdcard/TWRP/BACKUPS/ and pull it:

adb pull /sdcard/TWRP/BACKUPS/97GAY10PWS/2025-04-02--09-24-24_SP2A220505008/boot.emmc.win

You now have the device's boot.img on your PC and can e.g. replace the kernel or make modifications to the initramfs.

04 April, 2025 04:46PM

hackergotchi for Evgeni Golov

Evgeni Golov

naming things is hard

I got a new laptop (a Lenovo Thinkpad X1 Carbon Gen 12, more on that later) and as always with new pets, it needed a name.

My naming scheme is roughly "short japanese words that somehow relate to the machine".

The current (other) machines at home are (not all really in use):

  • Thinkpad X1 Carbon G9 - tanso (炭素), means carbon
  • Thinkpad T480s - yatsu (八), means 8, as it's a T480s
  • Thinkpad X201s - nana (七), means 7, as it was my first i7 CPU
  • Thinkpad X61t - obon (御盆), means tray, which in German is "Tablett" and is close to "tablet"
  • Thinkpad X300 - atae (与え) means gift, as it was given to me at a very low price, almost a gift
  • Thinkstation P410 - kangae (考え), means thinking, and well, it's a Thinkstation
  • self-built homeserver - sai (さい), means dice, which in German is "Würfel", which is the same as cube, and the machine used to have an almost cubic case
  • Raspberry Pi 4 - aita (開いた), means open, it's running OpenWRT
  • Sun Netra T1 - nisshoku (日食), means solar eclipse
  • Apple iBook G4 13 - ringo (林檎), means apple

Then, I happen to rent a few servers:

  • ippai (一杯), means "a cup full", the VM is hosted at "netcup.de"
  • genshi (原子), means "atom", the machine has an Atom CPU
  • shokki (織機), means loom, which in German is Webstuhl or Webmaschine, and it's the webserver

I also had machines in the past that are no longer with me:

  • Thinkpad X220 - rodo (労働) means work, my first work laptop
  • Thinkpad X31 - chiisai (小さい) means small, my first X series
  • Thinkpad Z61m - shinkupaddo (シンクパッド) means Thinkpad, my first Thinkpad

And also servers from the past:

  • chikara (力) means power, as it was a rather powerful (for that time) Xeon server
  • hozen (保全), means preservation, it was a backup host

So, what shall I call the new one? It will be "juuni" (十二), which means 12. Creative, huh?

04 April, 2025 07:59AM by evgeni

April 03, 2025

hackergotchi for Gregor Herrmann

Gregor Herrmann

Debian MountainCamp, Innsbruck, 16–18 May 2025

the days are getting warmer (in the northern hemisphere), debian is getting colder, & quite a few debian events are taking place.

in innsbruck, we are organizing MountainCamp, an event in the tradition of SunCamp & SnowCamp: no schedule, no talks, meet other debian people, fix bugs, come up with crazy ideas, have fun, develop things.

interested? head over to the information & signup page on the debian wiki.

03 April, 2025 09:42PM

hackergotchi for Junichi Uekawa

Junichi Uekawa

I was hoping to go to debconf but the frequent travel is painful for me right now that I probably won't make it.

I was hoping to go to debconf but the frequent travel is painful for me right now that I probably won't make it.

03 April, 2025 01:29AM by Junichi Uekawa

April 02, 2025

Paul Wise

FLOSS Activities March 2025

Changes

Issues

Sponsors

The SWH work was sponsored. All other work was done on a volunteer basis.

02 April, 2025 01:05AM

April 01, 2025

hackergotchi for Colin Watson

Colin Watson

Free software activity in March 2025

Most of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay.

OpenSSH

Changes in dropbear 2025.87 broke OpenSSH’s regression tests. I cherry-picked the fix.

I reviewed and merged patches from Luca Boccassi to send and accept the COLORTERM and NO_COLOR environment variables.

Python team

Following up on last month, I fixed some more uscan errors:

  • python-ewokscore
  • python-ewoksdask
  • python-ewoksdata
  • python-ewoksorange
  • python-ewoksutils
  • python-processview
  • python-rsyncmanager

I upgraded these packages to new upstream versions:

  • bitstruct
  • django-modeltranslation (maintained by Freexian)
  • django-yarnpkg
  • flit
  • isort
  • jinja2 (fixing CVE-2025-27516)
  • mkdocstrings-python-legacy
  • mysql-connector-python (fixing CVE-2025-21548)
  • psycopg3
  • pydantic-extra-types
  • pydantic-settings
  • pytest-httpx (fixing a build failure with httpx 0.28)
  • python-argcomplete
  • python-cymem
  • python-djvulibre
  • python-ecdsa
  • python-expandvars
  • python-holidays
  • python-json-log-formatter
  • python-keycloak (fixing a build failure with httpx 0.28)
  • python-limits
  • python-mastodon (in the course of which I found #1101140 in blurhash-python and proposed a small cleanup to slidge)
  • python-model-bakery
  • python-multidict
  • python-pip
  • python-rsyncmanager
  • python-service-identity
  • python-setproctitle
  • python-telethon
  • python-trio
  • python-typing-extensions
  • responses
  • setuptools-scm
  • trove-classifiers
  • zope.testrunner

In bookworm-backports, I updated python-django to 3:4.2.19-1.

Although Debian’s upgrade to python-click 8.2.0 was reverted for the time being, I fixed a number of related problems anyway since we’re going to have to deal with it eventually:

dh-python dropped its dependency on python3-setuptools in 6.20250306, which was long overdue, but it had quite a bit of fallout; in most cases this was simply a question of adding build-dependencies on python3-setuptools, but in a few cases there was a missing build-dependency on python3-typing-extensions which had previously been pulled in as a dependency of python3-setuptools. I fixed these bugs resulting from this:

We agreed to remove python-pytest-flake8. In support of this, I removed unnecessary build-dependencies from pytest-pylint, python-proton-core, python-pyzipper, python-tatsu, python-tatsu-lts, and python-tinycss, and filed #1101178 on eccodes-python and #1101179 on rpmlint.

There was a dnspython autopkgtest regression on s390x. I independently tracked that down to a pylsqpack bug and came up with a reduced test case before realizing that Pranav P had already been working on it; we then worked together on it and I uploaded their patch to Debian.

I fixed various other build/test failures:

I enabled more tests in python-moto and contributed a supporting fix upstream.

I sponsored Maximilian Engelhardt to reintroduce zope.sqlalchemy.

I fixed various odds and ends of bugs:

I contributed a small documentation improvement to pybuild-autopkgtest(1).

Rust team

I upgraded rust-asn1 to 0.20.0.

Science team

I finally gave in and joined the Debian Science Team this month, since it often has a lot of overlap with the Python team, and Freexian maintains several packages under it.

I fixed a uscan error in hdf5-blosc (maintained by Freexian), and upgraded it to a new upstream version.

I fixed python-vispy: missing dependency on numpy abi.

Other bits and pieces

I fixed debconf should automatically be noninteractive if input is /dev/null.

I fixed a build failure with GCC 15 in yubihsm-shell (maintained by Freexian).

Prompted by a CI failure in debusine, I submitted a large batch of spelling fixes and some improved static analysis to incus (#1777, #1778) and distrobuilder.

After regaining access to the repository, I fixed telegnome: missing app icon in ‘About’ dialogue and made a new 0.3.7 release.

01 April, 2025 12:17PM by Colin Watson

hackergotchi for Guido Günther

Guido Günther

Free Software Activities March 2025

Another short status update of what happened on my side last month. Some more ModemManager bits landed, Phosh 0.46 is out, haptic feedback is now better tunable plus some more. See below for details (no April 1st joke in there, I promise):

phosh

  • Fix swapped arguments in ABI check (MR)
  • Sync packaging with Debian so testing packages becomes easier (MR)
  • Fix crash when primary output goes away (MR)
  • More consistent button press feedback (MR)
  • Undraft the lockscreen wallpaper branch (MR) - another ~2y old MR out of the way.
  • Indicate ongoing WiFi scans (MR)
  • Limit ABI compliance check to public headers (MR)
  • Document most gsettings in a manpage (MR)
  • (Hopefully) make integration test more robust (MR)
  • Drop superfluous build invocation in CI by fixing the missing dep (MR)
  • Fix top-panel icon size (MR)
  • Release 0.46~rc1, 0.46.0
  • Simplify adding new symbols (MR)
  • Fix crash when taking screenshot on I/O starved system (MR)
  • Split media-player and mpris-manager (MR)
  • Handle Cell Broadcast notification categories (MR)

phoc

  • xwayland: Allow views to use opacity: (MR)
  • Track wlroots 0.19.x (MR)
  • Initial support for workspaces (MR)
  • Don't crash when gtk-layer-shell wants to reposition popups (MR)
  • Some cleanups split out of other MRs (MR)
  • Release 0.46~rc1, 0.46.0
  • Add meson dist job and work around meson not applying patches in meson dist (MR, MR)
  • Small render change to allow the Vulkan renderer to work (MR)
  • Fix possible crash when closing applications (MR)
  • Rename XdgSurface to XdgToplevel to prevent errors like the above (MR)

phosh-osk-stub

  • Make switching into (and out of) symbol2 level more pleasant (MR)
  • Simplify UI files as prep for the GTK4 switch (MR)
  • Release 0.46~rc1, 0.46.0

phosh-mobile-settings

  • Format meson files (MR)
  • Allow to set lockscreen wallpaper (MR)
  • Allow to set maximum haptic feedback (MR)
  • Release 0.46~rc1, 0.46.0
  • Avoid warnings when running CI/autopkgtest (MR)

phosh-tour

pfs

  • Add search when opening files (MR)
  • Show loading state when opening folders (MR)
  • Move demo to its own folder (MR)
  • Release 0.0.2

xdg-desktop-portal-gtk

  • Add some support for v2 of the notification portal (MR)
  • Make two functions static (MR)

xdg-desktop-portal-phosh

  • Add preview for lockscreen wallpapers (MR)
  • Update to newer pfs to support search (MR)
  • Release 0.46~rc1, 0.46.0
  • Add initial support for notification portal v2 (MR) thus finally allowing flatpaks to submit proper feedback.
  • Style consistency (MR, MR)
  • Add Cell Broadcast categories (MR)

meta-phosh

  • Small release helper tweaks (MR)

feedbackd

  • Allow for vibra patterns with different magnitudes (MR)
  • Allow to tweak maximum haptic feedback strength (MR)
  • Split out libfeedback.h and check more things in CI (MR)
  • Tweak haptic in default profile a bit (MR)
  • dev-vibra: Allow to use full magnitude range (MR)
  • vibra-periodic: Use [0.0, 1.0] as ranges for magnitude (MR)
  • Release 0.8.0, 0.8.1
  • Only cancel feedback if ever inited (MR)

feedbackd-device-themes

  • Increase button feedback for sarge (MR)

gmobile

  • Release 0.2.2
  • Format and validate meson files (MR)

livi

  • Don't emit properties changed on position changes (MR)

Debian

  • libmbim: Update to 1.31.95 (MR)
  • libmbim: Upload to unstable and add autopkgtest (MR)
  • libqmi: Update to 1.35.95 (MR)
  • libqmi: Upload to unstable and add autopkgtest (MR)
  • modemmanager: Update to 1.23.95 to experimental and add autopkgtest (MR)
  • modemmanager: Upload to unstable (MR)
  • modemmanager: Add missing nodoc build deps (MR)
  • Package osmo-cbc (Repo)
  • feedbackd: Depend on adduser (MR)
  • feedbackd: Release 0.8.0, 0.8.1
  • feedbackd-device-themes: Release 0.8.0, 0.8.1
  • phosh: Release 0.46~rc1, 0.46.0
  • phoc: Release 0.46~rc1, 0.46.0
  • phosh-osk-stub: Release 0.46~rc1, 0.46.0
  • xdg-desktop-portal-phosh: Release 0.46~rc1, 0.46.0
  • phosh-mobile-settings: Release 0.46~rc1, 0.46.0, fix autopkgtest
  • phosh-tour: Release 0.46.0
  • gmobile: Release 0.2.2-1
  • gmobile: Ensure udev rules are applied on updates (MR)

git-buildpackage

  • Ease creating packages from scratch and document that better (MR, Testcase MR)

feedbackd-device-themes

  • Tweak some haptic for oneplus,fajita (MR)
  • Drop superfluous periodic feedbacks and cleanup CI (MR)

wlroots

  • xwm: Allow to set opacity (MR)

ModemManager

  • Fix typos (MR)
  • Add support for setting channels via libmm-glib and mmcli (MR)

Tuba

  • Set input-hint for OSK word completion (MR)

xdg-spec

  • Propose _NET_WM_WINDOW_OPACITY (which has been around for ages) (MR)

gnome-calls

  • Help startup ordering (MR)

Reviews

This is not code by me but reviews of other people's code. The list is (as usual) slightly incomplete. Thanks for the contributions!

  • phosh: Remove usage of phosh_{app_grid, overview}_handle_search (MR)
  • phosh: app-grid-button: Prepare for GTK 4 by using gestures and other migrations (MR) - merged
  • phosh: valign search results (MR) - merged
  • phosh: top-panel: Hide setting's details on fold (MR) - merged
  • phosh: Show frame with an animation (MR) - merged
  • phosh: Use gtk_widget_set_visible (MR) - merged
  • phosh: Thumbnail aspect ratio tweak (MR) - merged
  • phosh: Add clang/llvm ci step (MR)
  • mobile-broadband-provider-info: Bild APN (MR) - merged
  • iio-sensor-proxy: Buffer driver probing fix (MR) - merged
  • iio-sensor-proxy: Double free (MR) - merged
  • debian: Autopkgtests for ModemManager (MR)
  • debian: gitignore: phosh-pim debian build directory (MR)
  • debian: Better autopkgtests for MM (MR) - merged
  • feedbackd: tests: Depend on daemon for integration test (MR) - merged
  • libcmatrix: Various improvements (MR)
  • gmobile/hwdb: Add Sargo (MR) - merged
  • gmobile/hwdb: Add xiaomi-daisy (MR) - merged
  • gmobile/hwdb: Add SHIFT6mq (MR) - merged
  • meta-phosh: Add reproducibility check (MR) - merged
  • git-buildpackage: Dependency fixes (MR) - merged
  • git-buildpackage: Rename tracking (MR)

Help Development

If you want to support my work see donations.

Comments?

Join the Fediverse thread

01 April, 2025 08:05AM

March 31, 2025

Simon Josefsson

On Binary Distribution Rebuilds

I rebuilt (the top-50 popcon) Debian and Ubuntu packages, on amd64 and arm64, and compared the results a couple of months ago. Since then the Reproduce.Debian.net effort has been launched. Unlike my small experiment, that effort is a full-scale rebuild with more architectures. Their goal is to reproduce what is published in the Debian archive.

One difference between these two approaches is the build inputs: the Reproduce Debian effort uses the same build inputs that were used to build the published packages. I’m using the latest versions of the published packages for the rebuild.

What does that difference imply? I believe reproduce.debian.net will be able to reproduce more of the packages in the archive. If you build a C program using one version of GCC you will get some binary output; and if you use a later GCC version you are likely to end up with a different binary output. This is a good thing: we want GCC to evolve and produce better output over time. However it means in order to reproduce the binaries we publish and use, we need to rebuild them using whatever build dependencies were used to prepare those binaries. The conclusion is that we need to use the old GCC to rebuild the program, and this appears to be the Reproduce.Debian.Net approach.

It would be a huge success if the Reproduce.Debian.net effort were to reach 100% reproducibility, and this seems to be within reach.

However I argue that we need to go further than that. Being able to rebuild the packages reproducibly using older binary packages only raises the question: can we rebuild those older packages? I fear attempting to do so ultimately leads to a need to rebuild 20+ year old packages, with a non-negligible number of them being illegal to distribute or unable to build anymore due to bit-rot. We won’t solve the Trusting Trust concern if our rebuild effort assumes some initial binary blob that we can no longer build from source code.

I’ve made an illustration of the effort I’m thinking of, to reach something that is stronger than reproducible rebuilds. I am calling this concept an Idempotent Rebuild, an old concept that I believe John Gilmore described many years ago.

The illustration shows how the Debian main archive is used as input to rebuild another “stage #0” archive. This stage #0 archive can be compared against the main archive with diffoscope, and all differences are things that would be nice to resolve. The packages in the stage #0 archive are used to prepare a new container image with build tools, and the stage #0 archive is used as input to rebuild another version of itself, called the “stage #1” archive. The differences between stage #0 and stage #1 are also useful to analyse and resolve. This process can be repeated many times. I believe it would be a useful property if this process terminated at some point, where the stage #N archive was identical to the stage #N-1 archive. If this happened, I would label the output archive as an Idempotent Rebuild of the distribution.
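
The staged process can be read as a fixed-point iteration: keep feeding an archive back into the rebuild until the output equals the input. A toy Python model of the idea (the package names, the `behavior` abstraction and the seed blob are all invented for illustration; a real rebuild would of course run actual builds and compare real archives):

```python
import hashlib

# Toy source tree; names and versions are invented for illustration.
SOURCES = {"gcc": "gcc-source", "coreutils": "coreutils-source"}

def behavior(toolchain_binary: str) -> str:
    # Idealisation: any correctly built compiler from the same source
    # behaves the same, no matter which binary produced its bits.
    return "gcc-semantics"

def rebuild(archive: dict, deterministic: bool, stamp: int) -> dict:
    """One rebuild stage: every 'binary' is a hash of its source plus
    the behaviour of the toolchain taken from the previous archive."""
    sem = behavior(archive["gcc"])
    out = {}
    for name, src in SOURCES.items():
        data = src + sem
        if not deterministic:
            data += str(stamp)  # an embedded build timestamp
        out[name] = hashlib.sha256(data.encode()).hexdigest()
    return out

def iterate(deterministic: bool, max_stages: int = 10) -> int:
    """Return the stage N at which archive #N == archive #N-1, or -1."""
    archive = {"gcc": "vendor-binary-seed"}  # foreign bootstrap blob
    for n in range(1, max_stages + 1):
        nxt = rebuild(archive, deterministic, stamp=n)
        if nxt == archive:
            return n
        archive = nxt
    return -1

print(iterate(deterministic=True))   # 2: the seed differs, then a fixed point
print(iterate(deterministic=False))  # -1: timestamps prevent termination
```

In this toy the deterministic case converges at stage 2, mirroring the GCC bootstrap analogy below: the first stage differs because it was built with a foreign seed, and every stage after that is identical. Any per-stage nondeterminism makes N infinite.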

How big is N today? The simplest assumption is that it is infinity. Any build timestamp embedded into binary packages will change on every iteration, causing the process to never terminate. Embedded timestamps are a problem that the Reproduce.Debian.Net effort will also run into and have to resolve.
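
A minimal illustration of how an embedded timestamp leaks into the output bytes, using gzip's mtime header field (the payload and the epoch value are arbitrary; pinning timestamps to a fixed SOURCE_DATE_EPOCH is the convention reproducible-builds tooling uses):

```python
import gzip
import io

def build_artifact(payload: bytes, mtime: int) -> bytes:
    """Toy 'package build': gzip stores mtime in its header, so the
    output bytes change with the build time unless it is pinned."""
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode="wb", mtime=mtime) as gz:
        gz.write(payload)
    return buf.getvalue()

SOURCE_DATE_EPOCH = 1704067200  # some fixed, agreed-upon date

# Two wall-clock builds of identical sources produce different bytes:
print(build_artifact(b"pkg", 1000) == build_artifact(b"pkg", 2000))  # False
# Pinning the timestamp makes the rebuild bit-for-bit identical:
print(build_artifact(b"pkg", SOURCE_DATE_EPOCH)
      == build_artifact(b"pkg", SOURCE_DATE_EPOCH))                  # True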

What other causes for differences could there be? It is easy to see that generally if some output is not deterministic, such as the sort order of assembler object code in binaries, then the output will be different. Trivial instances of this problem will be caught by the reproduce.debian.net effort as well.

Could there be higher order chains that lead to infinite N? It is easy to imagine the existence of these, but I don’t know how they would look like in practice.

An ideal would be if we could get down to N=1. Is that technically possible? Compare building GCC, it performs an initial stage 0 build using the system compiler to produce a stage 1 intermediate, which is used to build itself again to stage 2. Stage 1 and 2 is compared, and on success (identical binaries), the compilation succeeds. Here N=2. But this is performed using some unknown system compiler that is normally different from the GCC version being built. When rebuilding a binary distribution, you start with the same source versions. So it seems N=1 could be possible.

I’m unhappy to not be able to report any further technical progress now. The next step in this effort is to publish the stage #0 build artifacts in a repository, so they can be used to build stage #1. I already showed that stage #0 was around ~30% reproducible compared to the official binaries, but I didn’t save the artifacts in a reusable repository. Since the official binaries were not built using the latest versions, it is to be expected that the reproducibility number is low. But what happens at stage #1? The percentage should go up: we are now compare the rebuilds with an earlier rebuild, using the same build inputs. I’m eager to see this materialize, and hope to eventually make progress on this. However to build stage #1 I believe I need to rebuild a much larger number packages in stage #0, it could be roughly similar to the “build-essentials-depends” package set.

I believe the ultimate end goal of Idempotent Rebuilds is to be able to re-bootstrap a binary distribution like Debian from some other bootstrappable environment like Guix. In parallel to working on a achieving the 100% Idempotent Rebuild of Debian, we can setup a Guix environment that build Debian packages using Guix binaries. These builds ought to eventually converge to the same Debian binary packages, or there is something deeply problematic happening. This approach to re-bootstrap a binary distribution like Debian seems simpler than rebuilding all binaries going back to the beginning of time for that distribution.

What do you think?

PS. I fear that Debian main may have already went into a state where it is not able to rebuild itself at all anymore: the presence and assumption of non-free firmware and non-Debian signed binaries may have already corrupted the ability for Debian main to rebuild itself. To be able to complete the idempotent and bootstrapped rebuild of Debian, this needs to be worked out.

31 March, 2025 08:21AM by simon

Russ Allbery

Review: Ghostdrift

Review: Ghostdrift, by Suzanne Palmer

Series: Finder Chronicles #4
Publisher: DAW
Copyright: May 2024
ISBN: 0-7564-1888-7
Format: Kindle
Pages: 378

Ghostdrift is a science fiction adventure and the fourth (and possibly final) book of the Finder Chronicles. You should definitely read this series in order and not start here, even though the plot of this book would stand alone.

Following The Scavenger Door, in which he made enemies even more dramatically than he had in the previous books, Fergus Ferguson has retired to the beach on Coralla to become a tea master and take care of his cat. It's a relaxing, idyllic life and a much-needed total reset. Also, he's bored. The arrival of his alien friend Qai, in some kind of trouble and searching for him, is a complex balance between relief and disappointment.

Bas Belos is one of the most notorious pirates of the Barrens. He has someone he wants Fergus to find: his twin sister, who disappeared ten years ago. Fergus has an unmatched reputation for finding things, so Belos kidnapped Qai's partner to coerce her into finding Fergus. It's not an auspicious beginning to a relationship, and Qai was ready to fight once they got her partner back, but Belos makes Fergus an offer of payment that, startlingly, is enough for him to take the job mostly voluntarily.

Ghostdrift feels a bit like a return to Finder. Fergus is once again alone among strangers, on an assignment that he's mostly not discussing with others, piecing together clues and navigating tricky social dynamics. I missed his friends, particularly Ignatio, and while there are a few moments with AI ships, they play less of a role.

But Fergus is so very good at what he does, and Palmer is so very good at writing it. This continues to be competence porn at its best. Belos's crew thinks Fergus is a pirate recruited from a prison colony, and he quietly sets out to win their trust with a careful balance of self-deprecation and unflappable skill, helped considerably by the hidden gift he acquired in Finder. The character development is subtle, but this feels like a Fergus who understands friendship and other people at a deeper and more satisfying level than the Fergus we first met three books ago.

Palmer has a real talent for supporting characters and Ghostdrift is no exception. Belos's crew are criminals and murderers, and Palmer does remind the reader of that occasionally, but they're also humans with complex goals and relationships. Belos has earned their loyalty by being loyal and competent in a rough world where those attributes are rare. The morality of this story reminds me of infiltrating a gang: the existence of the gang is not a good thing, and the things they do are often indefensible, but they are an understandable reaction to a corrupt social system. The cops (in this case, the Alliance) are nearly as bad, as we've learned over the past couple of books, and considerably more insufferable. Fergus balances the ethical complexity in a way that I found satisfyingly nuanced, while quietly insisting on his own moral lines.

There is a deep science fiction plot here, possibly the most complex of the series so far. The disappearance of Belos's sister is the tip of an iceberg that leads to novel astrophysics, dangerous aliens, mysterious ruins, and an extended period on a remote and wreck-strewn planet. I groaned a bit when the characters ended up on the planet, since treks across primitive alien terrain with jury-rigged technology are one of my least favorite science fiction tropes, but I need not have worried. Palmer knows what she's doing; the pace of the plot does slow a bit at first, but it quickly picks up again, adding enough new setting and plot complications that I never had a chance to be bored by alien plants. It helps that we get another batch of excellent supporting characters for Fergus to observe and win over.

This series is such great science fiction. Each book becomes my new favorite, and Ghostdrift is no exception. The skeleton of its plot is a satisfying science fiction mystery with multiple competing factions, hints of fascinating galactic politics, complicated technological puzzles, and a sense of wonder that reminds me of reading Larry Niven's Known Space series. But the characters are so much better and more memorable than classic SF; compared to Fergus, Niven's Louis Wu barely exists and is readily forgotten as soon as the story is over. Fergus starts as a quiet problem-solver, but so much character depth unfolds over the course of this series. The ending of this book was delightfully consistent with everything we've learned about Fergus, but also the sort of ending that it's hard to imagine the Fergus from Finder knowing how to want.

Ghostdrift, like each of the books in this series, reaches a satisfying stand-alone conclusion, but there is no reason within the story for this to be the last of the series. The author's acknowledgments, however, says that this the end. I admit to being disappointed, since I want to read more about Fergus and there are numerous loose ends that could be explored. More importantly, though, I hope Palmer will write more novels in any universe of her choosing so that I can buy and read them.

This is fantastic stuff. This review comes too late for the Hugo nominating deadline, but I hope Palmer gets a Best Series nomination for the Finder Chronicles as a whole. She deserves it.

Rating: 9 out of 10

31 March, 2025 04:21AM

March 30, 2025

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

It's always the best ones that die first

Berge Schwebs Bjørlo, aged 40, died on March 4th in an avalanche together with his friend Ulf, while on winter holiday.

When writing about someone who recently died, it is common to make lists. Lists of education, of where they worked, on projects they did.

But Berge wasn't common. Berge was an outlier. A paradox, even.

Berge was one of my closest friends; someone who always listened, someone you could always argue with (“I'm a pacifist, but I'm aware that this is an extreme position”) but could rarely be angry at. But if you ask around, you'll see many who say similar things; how could someone be so close to so many at the same time?

Berge had running jokes going on 20 years or more. Many of them would be related to his background from Bergen; he'd often talk about “the un-central east” (aka Oslo), yet had to admit at some point that actually started liking the city. Or about his innate positivity (“I'm in on everything but suicide and marriage!”). I know a lot of people have described his humor as dry, but I found him anything but. Just a free flow of living.

He lived his life in free software, but rarely in actually writing code; I don't think I've seen a patch from him, and only the occasional bug report. Instead, he would spend his time guiding others; he spent a lot of time in PostgreSQL circles, helping people with installation or writing queries or chiding them for using an ORM (“I don't understand why people love to make life so hard for themselves”) or just discussing life, love and everything. Somehow, some people's legacy is just the number of others they touched, and Berge touched everyone he met. Kindness is not something we do well in the free software community, but somehow, it came natural to him. I didn't understand until after he died why he was so chronically bad at reading backlog and hard to get hold of; he was interacting with so many people, always in the present and never caring much about the past.

I remember that Berge once visited my parents' house, and was greeted by our dog, who after a pat promptly went back to relaxing lazily on the floor. “Awh! If I were a dog, that's the kind of dog I'd be.” In retrospect, for someone who lived a lot of his life in 300 km/h (at times quite literally), it was an odd thing to say, but it was just one of those paradoxes.

Berge loved music. He'd argue for intensely political punk, but would really consume everything with great enthuisasm and interest. One of the last albums I know he listened to was Thomas Dybdahl's “… that great October sound”:

Tear us in different ways but leave a thread throughout the maze
In case I need to find my way back home
All these decisions make for people living without faith
Fumbling in the dark nowhere to roam

Dreamweaver
I'll be needing you tomorrow and for days to come
Cause I'm no daydreamer
But I'll need a place to go if memory fails me & let you slip away

Berge wasn't found by a lazy dog. He was found by Shane, a very good dog.

Somehow, I think he would have approved of that, too.

Picture of Berge

30 March, 2025 10:45PM

Swiss JuristGate

Link between institutional abuse, Swiss jurists, Debianism and FSFE

Friday, an expert in the subject of persecution asked me a question: is there a link between the paedophiles and the Swiss jurists?

I reflected on the subject and I found several links between the cultural problems.

At the same time, the BBC published a report on Justin Welby, former head of the Anglican church. He resigned because of the John Smyth QC scandal. John Smyth was a powerful lawyer who had also been a judge for six years. Smyth simultaneously had the role of Reader in the church.

I wrote several blogs about the links between the Code of Conduct gaslighting and the document Crimen Sollicitationis.

The pope sent the Crimen Sollicitationis to each diocese in 1962. It was a secret document. Article 70 insists on total secrecy about abuse:

70. All these official communications shall always be made under the secret of the Holy Office; and, since they are of the utmost importance for the common good of the Church, the precept to make them is binding under pain of grave [sin].

Moreover, we are not even supposed to discuss the existance of the document:

TO BE KEPT CAREFULLY IN THE SECRET ARCHIVE OF THE CURIA FOR INTERNAL USE.

Gerhard Ulrich was a human rights activist. He published a list of judges, their mistakes and their conflicts of interest. In Switzerland, the conflicts of interest are a private subject under article 173(3) of the Swiss criminal code:

The accused is not permitted to lead evidence in support of and is criminally liable for statements that are made or disseminated with the primary intention of accusing someone of disreputable conduct without there being any public interest or any other justified cause, and particularly where such statements refer to a person’s private or family life.

In 2001, the secret document became widely known.

When FINMA published a jugement against the Swiss jurists in 2023, they redacted the names, they redacted the dates and even worse, they redacted most of the paragraphs.

FINMA jugement, décision, Parreaux Thiebaud & Partners, Justicia SA, Justiva SA, Mathieu Parreaux, Gaelle Jeanmonod

 

A little bit like Crimen Sollicitationis, certainly.

We find the same problems in the case of John Smyth, Anglican church and Justin Welby:

From 2013, the Church of England knew “at the highest level” about the abuse, the report says, but failed to refer it either to the police or to the relevant authorities in South Africa, where Smyth died while under investigation by the police.

The Swiss authorities, the bar association of Geneva and FINMA had knowledge of Parreaux, Thiébaud & Partners since 2021 or earlier. Why did they redact the majority of paragraphs from the judgment? They wanted to hide their pre-existing knowledge of the scandal.

Why did the church authorities or Swiss authorities protect people like John Smyth and Mathieu Parreaux? Men like that have knowledge of all the scandals and institutional failings throughout their career. We discussed the same question in the (leaked) debian-private mailing list. Each time somebody is bullied and punished by the overlords, there is a risk that they will publish a full copy of debian-private.

We have filled in the secrets in the JuristGate web site.

When I acquired Swiss citizenship, I had to take an oath. ( la promesse solennelle vaudois, Loi sur le droit de cité vaudois 2018).

You promise to be true to the federal constitution and the constitution of the Canton of Vaud. You promise to maintain and defend on every occasion and with all your powers the rights, freedoms and independence of your new country, to develop and advance her reputation and wealth and equally to avoid all that could cause her loss or damage.

There is a problem: after the death of my father, racist Swiss women started writing gossip. The rudeness of selfish and arrogant people who impose upon my family at a time of grief outweighs the seriousness of the oath.

Swiss authorities closed the legal protection insurance. The director of FINMA resigned with a payout of 581,000 Swiss francs, but he did not provide replacement lawyers for the clients.

At the same time, the racist Swiss women demanded a publication.

FSFE uses the trademark of the FSF without authorisation. They received a bequest of EUR 150,000. They declared the bequest a secret subject.

The Swiss citizenship oath implies that we have to find a private solution to the Debian crisis, but the racist Swiss women wanted a publication of lies after the death of my father.

cut your face off

 

The truth is clear, my father died.

In 2018, the intern Renata D'Avila published a blog about the risks of location tracking services. She chose a quote from the French philosopher Pierre-Joseph Proudhon:

To be GOVERNED is to be watched, inspected, spied upon, directed, ... then, at the slightest resistance, the first word of complaint, to be repressed, fined, vilified, harassed, hunted down, abused, clubbed, disarmed, bound, choked, imprisoned, judged, condemned, shot, deported, sacrificed, sold, betrayed; and to crown all, mocked, ridiculed, derided, outraged, dishonored.

In Australia, Lilie James rejected her boyfriend's demand for location services on her phone and she was clubbed to death. My intern had predicted the crime with her use of the quote alongside her description of problems with Google.

According to the Debian Social Contract, point 3:

We won't hide problems.

but the Debianists threatened my intern by sending her secret emails:

Debian "Community Team" (political police) to Renata, private email of 13 June 2018: Reinforcing positive attitudes and steps that you see in Debian towards women inclusion can also motivate yourself, the other Debian contributors, and possible newcomers, to go on working in that strategy and foster diversity in Debian. This does not mean to avoid criticism or hiding problems, but providing a more accurate vision of how the Brazilian Debian community works towards our common goals.

The threat:

Finally, we would like to say a word about the participation in Debian events that is financed (at least in part) by Debian. We believe that a matching fund, (Mini)DebConf bursary or any other financial help to attend a Debian event is a big endorsement from Debian to the person who receives it, and we believe that your behaviour in MiniDebConf Curitiba 2018 did not match the excellence that we expect for a bursary applicant. Thus, we are considering requesting a rejection of your application to the bursaries team.

Renata did not come back to any Debian events.

In Australia's Royal Commission into Institutional Abuse, we find the same thing:

Culture of secrecy

We are satisfied that there was a prevailing culture within the Archdiocese, led by Archbishop Little, of dealing with complaints internally and confidentially to avoid scandal to the Church.

and again between the priests and their victims:

He said that, on both occasions, Father Daniel encouraged BTH to remain silent by reference to the seal of confession.

The seal of confession is like the Code of Conduct gaslighting in the free software projects. If the victim doesn't maintain the silence, they will be blocked from becoming a priest or a developer.

Frans Pop published his suicide note the night before the Debian Day anniversary. When colleagues discussed his death outside the secret debian-private mailing list, they suffered immediate reprisals.

Unfortunately we also had the misfortune to read information on twitter that, to our current knowledge, must have been gathered from this -private list. We think this a very unfortunate event that is missing every kind of common sense and decency, and therefore saw forced to suspend the accounts of the people leaked from membership on this list.

Pascal/Lia Daobing: You both are no longer subscribed to debian-private, for the next 4 weeks. The one and only reason this list exists is to have a place where we can share information that is not immediately leaked into the public. Twitter is not -private. Please, in the future, respect the rules of the environment you are in, especially in such a special case like this.

The next death was Adrian von Bidder. It was Palm Sunday, the same day as the marriage between Carla and I. Adrian von Bidder died in Switzerland the official report has not been published. Yet.

Switzerland and the Catholic Church both openly promote their culture of secrecy. Debianists have claimed to be committed to transparency so the imposition of secrecy in Debian demonstrates an even greater lack of integrity.

Joerg Jaspert wrote about Frans Pop:

He has done a lot of work for the project, invested a lot of his time and appearently (read the statement of his parents) the Debian project was a very important part of his life.

The last words in the last email of Frans Pop, sent the night before Debian Day:

All mails I ever sent to d-private (and mails quoting them) shall remain private.

Chris Lamb, Debian, Reproducible Builds, Google

30 March, 2025 09:30PM

Russ Allbery

Review: Cascade Failure

Review: Cascade Failure, by L.M. Sagas

Series: Ambit's Run #1
Publisher: Tor
Copyright: 2024
ISBN: 1-250-87126-3
Format: Kindle
Pages: 407

Cascade Failure is a far-future science fiction adventure with a small helping of cyberpunk vibes. It is the first of a (so far) two-book series, and was the author's first novel.

The Ambit is an old and small Guild ship, not much to look at, but it holds a couple of surprises. One is its captain, Eoan, who is an AI with a deep and insatiable curiosity that has driven them and their ship farther and farther out into the Spiral. The other is its surprisingly competent crew: a battle-scarred veteran named Saint who handles the fighting, and a talented engineer named Nash who does literally everything else. The novel opens with them taking on supplies at Aron Outpost. A supposed Guild deserter named Jalsen wanders into the ship looking for work.

An AI ship with a found-family crew is normally my catnip, so I wanted to love this book. Alas, I did not.

There were parts I liked. Nash is great: snarky, competent, and direct. Eoan is a bit distant and slightly more simplistic of a character than I was expecting, but I appreciated the way Sagas put them firmly in charge of the ship and departed from the conventional AI character presentation. Once the plot starts in earnest (more on that in a moment), we meet Anke, the computer hacker, whose charming anxiety reaction is a complete inability to stop talking and who adds some needed depth to the character interactions. There's plenty of action, a plot that makes at least some sense, and a few moments that almost achieved the emotional payoff the author was attempting.

Unfortunately, most of the story focuses on Saint and Jal, and both of them are irritatingly dense cliches.

The moment Jal wanders onto the Ambit in the first chapter, the reader is informed that Jal, Saint, and Eoan have a history. The crew of the Ambit spent a year looking for Jal and aren't letting go of him now that they've found him. Jal, on the other hand, clearly blames Saint for something and is not inclined to trust him. Okay, fine, a bit generic of a setup but the writing moved right along and I was curious enough.

It then takes a full 180 pages before the reader finds out what the hell is going on with Saint and Jal. Predictably, it's a stupid misunderstanding that could have been cleared up with one conversation in the second chapter.

Cascade Failure does not contain a romance (and to the extent that it hints at one, it's a sapphic romance), but I swear Saint and Jal are both the male protagonist from a certain type of stereotypical heterosexual romance novel. They're both the brooding man with the past, who is too hurt to trust anyone and assumes the worst because he's unable to use his words or ask an open question and then listen to the answer. The first half of this book is them being sullen at each other at great length while both of them feel miserable. Jal keeps doing weird and suspicious things to resolve a problem that would have been far more easily resolved by the rest of the crew if he would offer any explanation at all. It's not even suspenseful; we've read about this character enough times to know that he'll turn out to have a heart of gold and everything will be a misunderstanding. I found it tedious. Maybe people who like slow burn romances with this character type will have a less negative reaction.

The real plot starts at about the time Saint and Jal finally get their shit sorted out. It turns out to have almost nothing to do with either of them. The environmental control systems of worlds are suddenly failing (hence the book title), and Anke, the late-arriving computer programmer and terraforming specialist, has a rather wild theory about what's happening. This leads to a lot of action, some decent twists, and a plot that felt very cyberpunk to me, although unfortunately it culminates in an absurdly-cliched action climax.

This book is an action movie that desperately wants to make you feel all the feels, and it worked about as well as that typically works in action movies for me. Jaded cynicism and an inability to communicate are not the ways to get me to have an emotional reaction to a book, and Jal (once he finally starts talking) is so ridiculously earnest that it's like reading the adventures of a Labrador puppy. There was enough going on that it kept me reading, but not enough for the story to feel satisfying. I needed a twist, some depth, way more Nash and Anke and way less of the men, something.

Everyone is going to compare this book to Firefly, but Firefly had better banter, created more complex character interactions due to the larger and more varied crew, and played the cynical mercenary for laughs instead of straight, all of which suited me better. This is not a bad book, particularly once it gets past the halfway point, but it's not that memorable either, at least for me. If you're looking for a space adventure with heavy action hero and military SF vibes that wants to be about Big Feelings but gets there in mostly obvious ways, you could do worse. If you're looking for a found-family starship crew story more like Becky Chambers, I think you'll find this one a bit too shallow and obvious.

Not really recommended, although there's nothing that wrong with it and I'm sure other people's experience will differ.

Followed by Gravity Lost, which I'm unlikely to read.

Rating: 6 out of 10

30 March, 2025 04:42AM

March 28, 2025

hackergotchi for Daniel Pocock

Daniel Pocock

Banned evidence: Ars Technica forums censored email predicting DebConf23 death, Abraham Raji & Debian cover-up

Various blogs have referred to an email predicting the death of Abraham Raji. The email was published for the first time in the Ars Technica forum and then the whole thread was locked and deleted.

The email is real. I made some inquiries with various coroners who handled previous Debian-related deaths. I wrote back to the Cambridgeshire Coroner's office on 9 September 2023, the first day of DebConf23. I predicted there was a higher risk in this group and three days later Abraham Raji died on the DebConf day trip.

Why doesn't Ars Technica want anybody to see this email? Quite simply, the email proves once again that I was right about the toxic culture.

Subject: Re: Inquest Christopher Rutter - Information Request
Date: Sat, 9 Sep 2023 18:59:26 +0200
From: Daniel Pocock <daniel@pocock.pro>
To: Coroners <Coroners@cambridgeshire.gov.uk>


Hi [redacted],

I've updated the document with some extra email evidence and two more
deaths, both of those being under management from a doctoral candidate
at Cambridge.

Based on my own experience of both Debian culture, the Pell situation
and the evidence in these emails, I feel that there is an ongoing risk
to the health of people who engage with this culture.

Please kindly confirm if the coroner can escalate this to the relevant
people or whether you need somebody to present the document in person.

Regards,

Daniel

The key emails from various web sites, including the suicide discussions, have been placed in a single document that can be forwarded to the relevant police or coroner each time a new victim dies in similar circumstances.

Employers and families are totally unaware of what some people are doing in the debian-private (leaked) conflict zone. The brother of Frans Pop told me that Debianists came to the funeral but they kept him in the dark. Not any more.

Ars Technica moderators suggested the conversation with the coroner could be spam.

It is a creepy coincidence that earlier in the same year, I had been talking to the Carabinieri about the tactics used to silence victims of blackmail and abuse. We were having the conversation in the very same hour that Cardinal George Pell was having his surgery. He died the same day. Pell's name was mentioned again to the Cambridgeshire coroner and Abraham Raji died.

Please see the chronological history of how the Debian harassment and abuse culture evolved.

Ars Technica, banned, censored, evidence

Don't forget that this latest discussion only came up after we realized that my former Outreachy intern had predicted the circumstances involved in the death of Lilie James.

Lilie James, graduation


28 March, 2025 07:30PM

Ian Jackson

Rust is indeed woke

Rust, and resistance to it in some parts of the Linux community, has been in my feed recently. One undercurrent seems to be the notion that Rust is woke (and should therefore be rejected as part of culture wars).

I’m going to argue that Rust, the language, is woke. So the opponents are right, in that sense. Of course, as ever, dissing something for being woke is nasty and fascist-adjacent.

Community

The obvious way that Rust may seem woke is that it has the trappings, and many of the attitudes and outcomes, of a modern, nice, FLOSS community. Rust certainly does better than toxic environments like the Linux kernel, or Debian. This is reflected in a higher proportion of contributors from various kinds of minoritised groups. But Rust is not outstanding in this respect. It certainly has its problems. Many other projects do as well or better.

And this is well-trodden ground. I have something more interesting to say:

Technological values - particularly, compared to C/C++

Rust is woke technology that embodies a woke understanding of what it means to be a programming language.

Ostensible values

Let’s start with Rust’s strapline:

A language empowering everyone to build reliable and efficient software.

Surprisingly, this motto is not mere marketing puff. For Rustaceans, it is a key goal which strongly influences day-to-day decisions (big and small).

Empowering everyone is a key aspect of this, which aligns with my own personal values. In the Rust community, we care about empowerment. We are trying to help liberate our users. And we want to empower everyone because everyone is entitled to technological autonomy. (For a programming language, empowering individuals means empowering their communities, of course.)

This is all very airy-fairy, but it has concrete consequences:

Attitude to the programmer’s mistakes

In Rust we consider it a key part of our job to help the programmer avoid mistakes; to limit the consequences of mistakes; and to guide programmers in useful directions.

If you write a bug in your Rust program, Rust doesn’t blame you. Rust asks “how could the compiler have spotted that bug”.

This is in sharp contrast to C (and C++). C nowadays is an insanely hostile programming environment. A C compiler relentlessly scours your program for any place where you may have violated C’s almost incomprehensible rules, so that it can compile your apparently-correct program into a buggy executable. And then the bug is considered your fault.
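To make that contrast concrete, here is a small Rust sketch (my illustration, not from the original post): where C leaves signed integer overflow undefined and lets the compiler assume it can never happen, Rust defines the failure mode and hands it back to the programmer explicitly.

```rust
fn main() {
    // In C, signed overflow is undefined behaviour: the compiler may assume
    // it cannot happen and "optimise" accordingly. Rust defines the outcome:
    // checked_add returns None on overflow, surfacing the mistake instead of
    // blaming the programmer for it.
    let x: i32 = i32::MAX;
    assert_eq!(x.checked_add(1), None); // overflow is detected, not UB
    assert_eq!(100i32.checked_add(1), Some(101)); // the normal case still works
    println!("overflow handled explicitly");
}
```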

These aren’t just attitudes implicitly embodied in the software. They are concrete opinions expressed by compiler authors, and also by language proponents. In other words:

Rust sees programmers writing bugs as a systemic problem, which must be addressed by improvements to the environment and the system. The toxic parts of the C and C++ community see bugs as moral failings by individual programmers.

Sound familiar?

The ideology of the hardcore programmer

Programming has long suffered from the myth of the “rockstar”. Silicon Valley techbro culture loves this notion.

In reality, though, modern information systems are far too complicated for a single person. Developing systems is a team sport. Nontechnical, and technical-adjacent, skills are vital: clear but friendly communication; obtaining and incorporating the insights of every member of your team; willingness to be challenged. Community building. Collaboration. Governance.

The hardcore C community embraces the rockstar myth: they imagine that a few super-programmers (or super-reviewers) are able to spot bugs, just by being so brilliant. Of course this doesn’t actually work at all, as we can see from the atrocious bugfest that is the Linux kernel.

These “rockstars” want us to believe that there is a steep hierarchy in programming; that they are at the top of this hierarchy; and that being nice isn’t important.

Sound familiar?

Memory safety as a power struggle

Much of the modern crisis of software reliability arises from memory-unsafe programming languages, mostly C and C++.

Addressing this is a big job, requiring many changes. This threatens powerful interests; notably, corporations who want to keep shipping junk. (See also, conniptions over the EU Product Liability Directive.)

The harms of this serious problem mostly fall on society at large, but the convenience of carrying on as before benefits existing powerful interests.

Sound familiar?

Memory safety via Rust as a power struggle

Addressing this problem via Rust is a direct threat to the power of established C programmers such as gatekeepers in the Linux kernel. Supplanting C means they will have to learn new things, and jostle for status against better Rustaceans, or be replaced. More broadly, Rust shows that it is practical to write fast, reliable, software, and that this does not need (mythical) “rockstars”.

So established C programmer “experts” are existing vested interests, whose power is undermined by (this approach to) tackling this serious problem.

Sound familiar?

Notes

This is not a RIIR manifesto

I’m not saying we should rewrite all the world’s C in Rust. We should not try to do that.

Rust is often a good choice for new code, or when a rewrite or substantial overhaul is needed anyway. But we’re going to need other techniques to deal with all of our existing C. CHERI is a very promising approach. Sandboxing, emulation and automatic translation are other possibilities. The problem is a big one and we need a toolkit, not a magic bullet.

But as for Linux: it is a scandal that substantial new drivers and subsystems are still being written in C. We could have been using Rust for new code throughout Linux years ago, and avoided very many bugs. Those bugs are doing real harm. This is not OK.

Disclosure

I first learned C from K&R I in 1989. I spent the first three decades of my life as a working programmer writing lots and lots of C. I’ve written C++ too. I used to consider myself an expert C programmer, but nowadays my C is a bit rusty and out of date. Why is my C rusty? Because I found Rust, and immediately liked and adopted it (despite its many faults).

I like Rust because I care that the software I write actually works: I care that my code doesn’t do harm in the world.

On the meaning of “woke”

The original meaning of “woke” is something much more specific, to do with racism. For the avoidance of doubt, I don’t think Rust is particularly antiracist.

I’m using “woke” (like Rust’s opponents are) in the much broader, and now much more prevalent, culture wars sense.

Pithy conclusion

If you’re a senior developer who knows only C/C++, doesn’t want their authority challenged, and doesn’t want to have to learn how to write better software, you should hate Rust.

Also you should be fired.


Edited 2025-03-28 17:10 UTC to fix minor problems and add a new note about the meaning of the word "woke".




28 March, 2025 05:09PM

John Goerzen

Why You Should (Still) Use Signal As Much As Possible

As I write this in March 2025, there is a lot of confusion about Signal messenger due to the recent news of people using Signal in government, and subsequent leaks.

The short version is: there was no problem with Signal here. People were using it because they understood it to be secure, not the other way around.

Both the government and the Electronic Frontier Foundation recommend people use Signal. This is an unusual alliance, and in the case of the government, was prompted because it understood other countries had a persistent attack against American telephone companies and SMS traffic.

So let’s dive in. I’ll cover some basics of what security is, what happened in this situation, and why Signal is a good idea.

This post isn’t for programmers that work with cryptography every day. Rather, I hope it can make some of these concepts accessible to everyone else.

What makes communications secure?

When most people are talking about secure communications, they mean some combination of these properties:

  1. Privacy - nobody except the intended recipient can decode a message.
  2. Authentication - guarantees that the person you are chatting with really is the intended recipient.
  3. Ephemerality - preventing a record of the communication from being stored. That is, making it more like a conversation around the table than a written email.
  4. Anonymity - keeping your set of contacts to yourself and even obfuscating the fact that communications are occurring.

If you think about it, most people care the most about the first two. In fact, authentication is a key part of privacy. There is an attack known as man in the middle in which somebody pretends to be the intended recipient. The interceptor reads the messages, and then passes them on to the real intended recipient. So we can’t really have privacy without authentication.
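The standard defence against a man-in-the-middle is comparing key fingerprints over a separate, trusted channel, which is roughly what Signal's "safety numbers" screen lets you do. A toy sketch (my illustration only; `DefaultHasher` is not a cryptographic hash, and real protocols use one):

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy fingerprint of a public key. Real protocols use a cryptographic hash;
// DefaultHasher is used here purely so the sketch needs no external crates.
fn fingerprint(public_key: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    public_key.hash(&mut h);
    h.finish()
}

fn main() {
    let alices_real_key = b"alice-public-key";
    let key_received_over_network = b"alice-public-key";
    let attackers_key = b"mallory-public-key";

    // Comparing fingerprints out of band (e.g. reading them aloud in person)
    // detects an interceptor who substituted their own key.
    assert_eq!(
        fingerprint(alices_real_key),
        fingerprint(key_received_over_network)
    );
    assert_ne!(fingerprint(alices_real_key), fingerprint(attackers_key));
    println!("fingerprints match: no man in the middle detected");
}
```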

I’ll have more to say about these later. For now, let’s discuss attack scenarios.

What compromises security?

There are a number of ways that security can be compromised. Let’s think through some of them:

Communications infrastructure snooping

Let’s say you used no encryption at all, and connected to public WiFi in a coffee shop to send your message. Who all could potentially see it?

  • The owner of the coffee shop’s WiFi
  • The coffee shop’s Internet provider
  • The recipient’s Internet provider
  • Any Internet providers along the network between the sender and the recipient
  • Any government or institution that can compel any of the above to hand over copies of the traffic
  • Any hackers that compromise any of the above systems

Back in the early days of the Internet, most traffic had no encryption. People were careful about putting their credit cards into webpages and emails because they knew it was easy to intercept them. We have been on a decades-long evolution towards more pervasive encryption, which is a good thing.

Text messages (SMS) follow a similar path to the above scenario, and are unencrypted. We know that all of the above are ways people’s texts can be compromised; for instance, governments can issue search warrants to obtain copies of texts, and China is believed to have a persistent hack into western telcos. SMS fails all four of our attributes of secure communication above (privacy, authentication, ephemerality, and anonymity).

Also, think about what information is collected from SMS and by who. Texts you send could be retained in your phone, the recipient’s phone, your phone company, their phone company, and so forth. They might also live in cloud backups of your devices. You only have control over your own phone’s retention.

So defenses against this involve things like:

  • Strong end-to-end encryption, so no intermediate party – even the people that make the app – can snoop on it.
  • Using strong authentication of your peers
  • Taking steps to prevent even app developers from being able to see your contact list or communication history

You may see some other apps saying they use strong encryption or use the Signal protocol. But while they may do that for some or all of your message content, they may still upload your contact list, history, location, etc. to a central location where it is still vulnerable to these kinds of attacks.

When you think about anonymity, think about it like this: if you send a letter to a friend every week, every postal carrier that transports it – even if they never open it or attempt to peek inside – will be able to read the envelope and know that you communicate on a certain schedule with that friend. The same can be said of SMS, email, or most encrypted chat operators. Signal’s design prevents it from retaining even this information, though nation-states or ISPs might still be able to notice patterns (every time you send something via Signal, your contact receives something from Signal a few milliseconds later). It is very difficult to provide perfect anonymity from well-funded adversaries, even if you can provide very good privacy.

Device compromise

Let’s say you use an app with strong end-to-end encryption. This takes away some of the easiest ways someone could get to your messages. But it doesn’t take away all of them.

What if somebody stole your phone? Perhaps the phone has a password, but if an attacker pulled out the storage unit, could they access your messages without a password? Or maybe they somehow trick or compel you into revealing your password. Now what? What if a malicious keyboard app sent every keypress to an adversary, or a hidden camera captured what's on your screen?

An even simpler attack doesn’t require them to steal your device at all. All they need is a few minutes with it to steal your SIM card. Now they can receive any texts sent to your number - whether from your bank or your friend. Yikes, right?

Signal stores your data in an encrypted form on your device. It can protect it in various ways. One of the most important protections is ephemerality - it can automatically delete your old texts. A text that is securely erased can never fall into the wrong hands if the device is compromised later.
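The disappearing-messages idea amounts to a simple retention policy, sketched below (a rough illustration with invented names, not Signal's actual implementation): anything older than the configured lifetime is purged, so a later compromise finds nothing to recover.

```rust
use std::time::{Duration, SystemTime};

// A message with the time it was received. Purely illustrative.
struct Message {
    body: String,
    received: SystemTime,
}

// Drop every message older than `lifetime`. Once securely erased, an old
// message cannot fall into the wrong hands if the device is compromised later.
fn purge_expired(inbox: &mut Vec<Message>, lifetime: Duration) {
    let now = SystemTime::now();
    inbox.retain(|m| match now.duration_since(m.received) {
        Ok(age) => age < lifetime,
        Err(_) => true, // clock skew: keep the message rather than guess
    });
}

fn main() {
    let now = SystemTime::now();
    let week = Duration::from_secs(7 * 24 * 3600);
    let mut inbox = vec![
        Message { body: "old secret".into(), received: now - (week + Duration::from_secs(1)) },
        Message { body: "fresh note".into(), received: now },
    ];
    purge_expired(&mut inbox, week);
    assert_eq!(inbox.len(), 1);
    assert_eq!(inbox[0].body, "fresh note");
    println!("{} message(s) retained", inbox.len());
}
```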

An actively-compromised phone, though, could still give up secrets. For instance, what if a malicious keyboard app sent every keypress to an adversary? Signal is only as secure as the phone it runs on – but still, it protects against a wide variety of attacks.

Untrustworthy communication partner

Perhaps you are sending sensitive information to a contact, but that person doesn’t want to keep it in confidence. There is very little you can do about that technologically; with pretty much any tool out there, nothing stops them from taking a picture of your messages and handing the picture off.

Environmental compromise

Perhaps your device is secure, but a hidden camera still captures what’s on your screen. You can take some steps against things like this, of course.

Human error

Sometimes humans make mistakes. For instance, the reason a reporter got copies of messages recently was because a participant in a group chat accidentally added him (presumably that participant meant to add someone else and just selected the wrong name). Phishing attacks can trick people into revealing passwords or other sensitive data. Humans are, quite often, the weakest link in the chain.

Protecting yourself

So how can you protect yourself against these attacks? Let’s consider:

  • Use a secure app like Signal that uses strong end-to-end encryption where even the provider can’t access your messages
  • Keep your software and phone up-to-date
  • Be careful about phishing attacks and who you add to chat rooms
  • Be aware of your surroundings; don’t send sensitive messages where people might be looking over your shoulder with their eyes or cameras

There are other methods besides Signal. For instance, you could install GnuPG (GPG) on a laptop that has no WiFi card or any other way to connect it to the Internet. You could always type your messages on that laptop, encrypt them, copy the encrypted text to a floppy disk (or USB device), take that USB drive to your Internet computer, and send the encrypted message by email or something. It would be exceptionally difficult to break the privacy of messages in that case (though anonymity would be mostly lost). Even if someone got the password to your “secure” laptop, it wouldn’t do them any good unless they physically broke into your house or something. In some ways, it is probably safer than Signal. (For more on this, see my article How gapped is your air?)

But, that approach is hard to use. Many people aren’t familiar with GnuPG. You don’t have the convenience of sending a quick text message from anywhere. Security that is hard to use most often simply isn’t used. That is, you and your friends will probably just revert back to using insecure SMS instead of this GnuPG approach because SMS is so much easier.

Signal strikes a unique balance of providing very good security while also being practical, easy, and useful. For most people, it is the most secure option available.

Signal is also open source; you don’t have to trust that it is as secure as it says, because you can inspect it for yourself. Also, while it’s not federated, I previously addressed that.

Government use

If you are a government, particularly one that is highly consequential to the world, you can imagine that you are a huge target. Other nations are likely spending billions of dollars to compromise your communications. Signal itself might be secure, but if some other government can add spyware to your phones, or conduct a successful phishing attack, you can still have your communications compromised.

I have no direct knowledge, but I think it is generally understood that the US government maintains communications networks that are entirely separate from the Internet and can only be accessed from secure physical locations and secure rooms. These can be even more secure than the average person using Signal because they can protect against things like environmental compromise, human error, and so forth. The scandal in March of 2025 happened because government employees were using Signal rather than official government tools for sensitive information, had taken advantage of Signal’s ephemerality (laws require records to be kept), and through apparent human error had directly shared this information with a reporter. Presumably a reporter would have lacked access to the restricted communications networks in the first place, so that wouldn’t have been possible.

This doesn’t mean that Signal is bad. It just means that somebody that can spend billions of dollars on security can be more secure than you. Signal is still a great tool for people, and in many cases defeats even those that can spend lots of dollars trying to defeat it.

And remember - to use those restricted networks, you have to go to specific rooms in specific buildings. They are still not as convenient as what you carry around in your pocket.

Conclusion

Signal is practical security. Do you want phone companies reading your messages? How about Facebook or X? Have those companies demonstrated that they are completely trustworthy throughout their entire history?

I say no. So, go install Signal. It’s the best, most practical tool we have.


This post is also available on my website, where it may be periodically updated.

28 March, 2025 02:51AM by John Goerzen

March 27, 2025

hackergotchi for Bits from Debian

Bits from Debian

Viridien Platinum Sponsor of DebConf25

viridien-logo

We are pleased to announce that Viridien has committed to sponsor DebConf25 as a Platinum Sponsor.

Viridien is an advanced technology, digital and Earth data company that pushes the boundaries of science for a more prosperous and sustainable future.

Viridien has been using Debian-based systems to power most of its HPC infrastructure and its cloud platform since 2009 and currently employs two active Debian Project Members.

As a Platinum Sponsor, Viridien is contributing to the Debian annual Developers' conference, directly supporting the progress of Debian and Free Software. Viridien helps strengthen the community that collaborates on the Debian project from around the world throughout the year.

Thank you very much, Viridien, for your support of DebConf25!

Become a sponsor too!

DebConf25 will take place from 14 to 20 July 2025 in Brest, France, and will be preceded by DebCamp, from 7 to 13 July 2025.

DebConf25 is accepting sponsors! Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, and visit the DebConf25 website at https://debconf25.debconf.org/sponsors/become-a-sponsor/.

27 March, 2025 10:50AM by Sahil Dhiman

March 24, 2025

hackergotchi for Daniel Pocock

Daniel Pocock

Anticipated in 2018: Lilie James & Location tracking, Googlists complained

The ABC published a summary of the Lilie James inquest with a focus on the Location Tracking issues. The coroner hasn't completed their official report yet and these news reports are only summaries of the evidence. (See my previous blog about what people hide from coroners).

In the winter 2017/2018 round of Outreachy, I selected Renata D'Avila from Brazil to be an intern for Debian (Debian official announcement and Outreachy project list).

During the application process, we ask each applicant to do a small programming task and submit the results. I was startled to see Renata giving help to the women she was competing against. It turns out that while tech industry diversity programs try to attract interns who are fresh out of university, Renata had already worked as a schoolteacher for a number of years and helping the other women was just part of her nature.

In the middle of the program, Renata published a blog post with the title The right to be included in the conversation. Renata's blog post features a screenshot of Google Maps with lines marked on it showing how Googlists have tracked her movements around Porto Alegre, here it is again, along with some analysis:

Google, Stalking, Harassment, women, interns, Outreachy

Renata's blog opens with a quote from Pierre-Joseph Proudhon that anticipates the prospect of being clubbed to death:

To be GOVERNED is to be watched, inspected, spied upon, directed, ... then, at the slightest resistance, the first word of complaint, to be repressed, fined, vilified, harassed, hunted down, abused, clubbed, disarmed, bound, choked, imprisoned, judged, condemned, shot, deported, sacrificed, sold, betrayed; and to crown all, mocked, ridiculed, derided, outraged, dishonored.

The ABC report mimics the quote chosen by Renata:

The inquest examining Ms James's death at St Andrew's Cathedral School in 2023, heard she had tried to set boundaries with Thijssen the weekend before.

Creepy. But it gets worse.

The Googlists couldn't stand this. Google is one of the companies that contributes money to these diversity internships. When women are selected for the program, their blog posts are syndicated into various web sites where they are seen by many Google employees and their followers. It was really shocking for them when this blog about how creepy they are suddenly appeared all over the open source eco-system.

Various rumors appeared. They created rumors about "behavior", rumors about "harassment" and rumors about "abuse". They created rumors that I was dating an intern.

They sent threats to Renata, which she didn't tell me about until a few months later. I published some of those communications.

Debian "Community Team" (political police) to Renata, private email of 13 June 2018: Reinforcing positive attitudes and steps that you see in Debian towards women inclusion can also motivate yourself, the other Debian contributors, and possible newcomers, to go on working in that strategy and foster diversity in Debian. This does not mean to avoid criticism or hiding problems, but providing a more accurate vision of how the Brazilian Debian community works towards our common goals.

In other words, we can tell fairy tales, but nobody is allowed to speculate about the negative risks associated with location tracking or anything else that comes from Googlists. Yet now that it has actually happened to a location-tracking victim, we can all say I chose the right woman for the internship.

Please watch Renata speaking in this video. They continue spending vast sums of money on "diversity" internships but diversity of thought is not welcome.

Related: the Code of Conduct gaslighting in open-source software hobbyist groups may violate the new coercive control laws too.

Even more scary are the predictions I made when Donald Trump was elected for the first time.

Googlists have spent over US$130,000 to try and discredit me, to stop women telling me stuff and to stop us making predictions that are uncomfortably close to the truth.

RIP Lilie James

Lilie James, graduation

24 March, 2025 09:30PM