
September 28, 2024


Jonathan Dowland

Whisper (pipewire tool)

It's time to mint a new blog tag…

I want to pour praise on some software I recently discovered.

I'm not up to speed on Pipewire—the latest piece of Linux plumbing related to audio—nor how it relates to the other bits (Pulseaudio, ALSA, JACK, what else?). I recently tried to plug something into the line-in port on my external audio interface, and wished to hear it on the machine. A simple task, you'd think.

I'll refrain from writing about the stuff that didn't work well and focus on the thing that did: A little tool called Whisper, which is designed to let you listen to a microphone through your speakers.

Whisper's UI. Screenshot from upstream.

Whisper does a great job of hiding the complexity of what lies beneath and asking two questions: which microphone, and which speakers? In my case this alone was not quite enough, as I was presented with two identically-named "SB Live Extigy" "microphone" devices, but that's easily resolved with trial and error.
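
For comparison, roughly the same loopback can be set up by hand through the PulseAudio compatibility layer. A minimal sketch, assuming pipewire-pulse is running; the device names below are placeholders to be replaced with ones from the listings:

# list capture and playback devices to find their names
pactl list short sources
pactl list short sinks
# wire the chosen source to the chosen sink
pactl load-module module-loopback source=alsa_input.usb-Extigy.analog-stereo sink=alsa_output.pci-0000_00_1f.3.analog-stereo latency_msec=50
# undo it again
pactl unload-module module-loopback

Whisper's value is precisely that you never have to type any of that.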

More stuff like this please!

28 September, 2024 03:22PM

September 26, 2024

Antoine Beaupré

Why I should be running Debian unstable right now

So, a common theme on the Internet is that Debian is so old. And it's true: I am getting close to the stage where I feel a little laggy: I am using a bunch of backports for packages I need, and I'm missing a bunch of other packages that just landed in unstable and didn't make it to backports for various reasons.

I disagree that "old" is a bad thing: we definitely run Debian stable on a fleet of about 100 servers and can barely keep up; if anything, I would make it older. And "old" is a good thing: (port) wine and (any) beer need time to age properly, and so do humans, although some humans never seem to grow old enough to find wisdom.

But at this point, on my laptop, I am feeling like I'm missing out. This page, therefore, is an evolving document that is a twist on the classic NewIn game. Last time I played seems to be #newinwheezy (2013!), so really, I'm due for an update. (To be fair to myself, I do keep tabs on upgrades quite well at home and work, which do have their share of "new in", just after the fact.)

New packages to explore

These are shiny new tools, already available in unstable (or perhaps in testing/Trixie), that I am not using yet but find interesting enough to list here.

  • backdown: clever file deduplicator
  • codesearch: search all of Debian's source code (tens of thousands of packages) from the commandline! (see also dcs-cli, not in Debian)
  • dasel: JSON/YAML/XML/CSV parser, similar to jq but with a different syntax; not sure I'd grow into it, but I often need to parse YAML the way I parse JSON, and fail
  • fyi: notify-send replacement
  • git-subrepo: git-submodule replacement I am considering
  • gtklock: swaylock replacement with bells and whistles, particularly interested in showing time, battery and so on
  • hyprland: possible Sway replacement, but there are rumors of a toxic community (rebuttal, I haven't reviewed either in detail), so approach carefully
  • kooha: simple screen recorder with audio support; currently using wf-recorder, which is a more minimalist option
  • linescroll: rate graphs on live logs, mostly useful on servers though
  • ruff: faster Python formatter and linter, a flake8/black/isort replacement, though alas not mypy/LSP; it is designed to be run alongside such a tool, which is not possible in Emacs eglot right now but is possible in lsp-mode (see the sketch after this list)
  • sfwbar: pretty status bar, may replace waybar, which I am somewhat unhappy with (my UTC clock disappears randomly)
  • spytrap-adb: cool spy gear
  • trippy: trippy network analysis tool, kind of an improved MTR
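
As a taste of the flake8/black/isort replacement mentioned in the ruff entry above, this is roughly how it covers both jobs (a sketch; check ruff's own help for the current subcommands):

# lint the current tree, applying safe autofixes (flake8/isort territory)
ruff check --fix .
# reformat in place (black territory)
ruff format .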

New packages I won't use

These are packages I tested because I found them interesting, but ended up not using; others could still find them interesting anyways.

  • kew: surprisingly fast music player, parsed my entire library (which is huge) instantaneously and just started playing (I still use Supersonic, for which I maintain a flatpak on my Navidrome server)
  • mdformat: good markdown formatter (think black or gofmt, but for markdown), but it didn't actually do what I needed, and it's not quite as opinionated as it should (or could) be

Backports already in use

These are packages I already use regularly, which have backports or can just be installed from unstable (see the installation sketch after the list):

  • asn: IP address forensics
  • markdownlint: markdown linter, I use that a lot
  • poweralertd: pops up "your battery is almost empty" messages
  • sway-notification-center: used as part of my status bar, yet another status bar basically, a little noisy, stuck in a libc dep update
  • tailspin: used to color logs
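
For reference, pulling one of those from backports looks something like this (a sketch, assuming a bookworm-backports build of the package exists):

# enable the backports suite, then install a specific package from it
echo 'deb http://deb.debian.org/debian bookworm-backports main' | sudo tee /etc/apt/sources.list.d/backports.list
sudo apt update
sudo apt install -t bookworm-backports tailspin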

Out of date packages

These are packages that are already in Debian stable (Bookworm) but are somewhat lacking and could benefit from an upgrade.

Last words

If you know of cool things I'm missing out on, then by all means let me know!

That said, overall, this is a pretty short list! I have most of what I need in stable right now, and if I wasn't a Debian developer, I don't think I'd be making the jump now. But considering how much easier it is to develop Debian (and how important it is to test the next release!), I'll probably upgrade soon.

Previously, I was running Debian testing (which is why the slug on that article is why-trixie), but now I'm actually considering just running unstable on my laptop directly anyways. It's been a long time since we had any significant instability there, and I can typically deal with whatever happens, except maybe when I'm traveling, and then it's easy to prepare for that (just pin testing).
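
For what it's worth, the "pin testing" escape hatch is only a few lines of apt configuration. A minimal sketch, with illustrative priorities:

# /etc/apt/preferences.d/pin-testing
Package: *
Pin: release a=testing
Pin-Priority: 990

Package: *
Pin: release a=unstable
Pin-Priority: 500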

26 September, 2024 01:40PM

September 25, 2024

Russell Coker

The PiKVM

Hardware

I have just set up a PiKVM; here’s the Amazon link for the KVM hardware (case and Pi hat etc) and here’s an Amazon link for a Pi4 to match.

The PiKVM web site has good documentation [1] and they have a YouTube channel with videos showing how to assemble the devices [2]. It’s really convenient being able to change the playback speed from low speeds (like 1/4 of the original speed) to double speed when watching such a video. One thing to note is that there are some revisions to the hardware that aren’t covered in the videos; the device I received had some improvements that made it easier to assemble which weren’t in the video.

When you buy the device and Pi you also need to get an SD card of at least 4G in size, a CR1220 battery for the real-time clock, and a USB-2/3 to USB-C cable for keyboard/mouse (it MUST NOT be USB-C to USB-C!). When I first tried using it I used a USB-C to USB-C cable for keyboard and mouse and it didn’t work for reasons I don’t understand (I welcome comments with theories about this). You also need a micro-HDMI to HDMI cable to get video output if you want to set it up without having to find the IP address and ssh to it.

The system has a bright OLED display to show the IP address and some other information which is very handy.

The hardware is easy enough for a 12yo to set up. The construction of the parts is solid and well engineered, with everything fitting together nicely. It has a PCI/PCIe slot adaptor for controlling power and sending LED status over the connection, which I didn’t test. I definitely recommend this.

Software

This is the download link for the RaspberryPi images for the PiKVM [3]. The “v3” image matches the hardware from the Amazon link I provided.

The default username/password is root/root. Connect it to a HDMI monitor and USB keyboard to change the password etc. If you control the DHCP server you can find the IP address it’s using and ssh to it to change the password (it is configured to allow ssh as root with password authentication).
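
A sketch of that route, with a placeholder address standing in for whatever your DHCP server assigned (the lease file path assumes ISC dhcpd):

# find the lease on the DHCP server
grep -i -A6 pikvm /var/lib/dhcp/dhcpd.leases
# log in with the default credentials and change them
ssh root@192.168.0.50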

If you get the kit to assemble it (as opposed to buying a completed unit already assembled) then you need to run the following commands as root to enable the OLED display. This means that after assembling it you can’t get the IP address without plugging in a monitor with a micro-HDMI to HDMI cable or having access to the DHCP server logs.

rw    # remount the root filesystem read-write
systemctl enable --now kvmd-oled kvmd-oled-reboot kvmd-oled-shutdown
systemctl enable --now kvmd-fan
ro    # remount the root filesystem read-only again

The default webadmin username/password is admin/admin.

To change the passwords run the following commands:

rw                       # remount the root filesystem read-write
kvmd-htpasswd set admin  # change the web UI password
passwd root              # change the system root password
ro                       # remount the root filesystem read-only again

It is configured to have the root filesystem mounted read-only, which is something I thought had gone out of fashion decades ago. I don’t think that modern versions of the Ext3/4 drivers are going to corrupt your filesystem if you have it mounted read-write when you reboot.

By default it uses a self-signed SSL certificate, so with a Chrome based browser you get an error when you connect where you have to select “advanced” and then tell it to proceed regardless. I presume you could use the DNS method of Certbot authentication to get an SSL certificate to use on an internal view of your DNS to make it work normally with SSL.
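
A sketch of that Certbot route, assuming a hostname like pikvm.example.org for which you can create DNS TXT records:

# DNS-01 challenge: certbot prompts for a TXT record to add, so the
# PiKVM needs no inbound HTTP access
certbot certonly --manual --preferred-challenges dns -d pikvm.example.org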

The web based software has all the features you expect from a KVM. It shows the screen in any resolution up to 1920×1080 and proxies keyboard and mouse. Strangely “lsusb” on the machine being managed only reports a single USB device entry for it, which covers both keyboard and mouse (presumably because the Pi presents itself as a single composite USB gadget).

Managing Computers

For a tower PC disconnect any regular monitor(s) and connect a HDMI port to the HDMI input on the KVM. Connect a regular USB port (not USB-C) to the “OTG” port on the KVM, then it should all just work.

For a laptop connect the HDMI port to the HDMI input on the KVM. Connect a regular USB port (not USB-C) to the “OTG” port on the KVM. Then boot it up and press Fn-F8 for Dell, Fn-F7 for Lenovo, or whatever the vendor code is to switch display output to HDMI during the BIOS initialisation; Linux will then follow the BIOS and send all output to the HDMI port for the early stages of booting. Apparently Lenovo systems have the Fn key mapped in the BIOS so an external keyboard could be used to switch between display outputs, but the PiKVM software doesn’t appear to support that. For other systems (probably including the Dell laptops that interest me) the Fn key apparently can’t be simulated externally. So to use this to work on laptops in another city I need to have someone local press Fn-F8 at the right time to allow me to change BIOS settings.

It is possible to configure the Linux kernel to mirror the display to an external HDMI port and an internal laptop screen. But this doesn’t seem useful to me as the use cases for this device don’t require that. If you are using it for a server that doesn’t have iDRAC/ILO or other management hardware there will be no other “monitor” and all the output will go through the only connected HDMI device. My main use for it in the near future will be supporting remote laptops when Linux has a problem on boot, as an easier option than talking someone through Linux commands; for such use it will be a temporary thing and not something that is desired all the time.

For the gdm3 login program you can copy the .config/monitors.xml file from a GNOME user session to the gdm home directory to keep the monitor settings. This configuration option is decent for the case where a fixed set of monitors are used but not so great if your requirement is “display a login screen on anything that’s available”. Is there an xdm type program in Debian/Ubuntu that supports this by default or with easy reconfiguration?
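
On Debian that copy looks roughly like this (a sketch; gdm3’s home directory and user name vary between distributions):

# copy the monitor layout from your session to the gdm3 home directory
sudo mkdir -p /var/lib/gdm3/.config
sudo cp ~/.config/monitors.xml /var/lib/gdm3/.config/
sudo chown -R Debian-gdm:Debian-gdm /var/lib/gdm3/.config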

Conclusion

The PiKVM is a well engineered and designed product that does what’s expected at a low price. There are lots of minor issues with using it which aren’t the fault of the developers but are due to historical decisions in the design of BIOS and Linux software. We need to change the Linux software in question and lobby hardware vendors for BIOS improvements.

The feature for connecting to an ATX PSU was unexpected and could be really handy for some people; it’s not something I have an immediate use for, but it is something I could possibly use in future. I like the way they shipped the hardware for it as part of the package, giving the user choices about how they use it; many vendors would make it an optional extra that costs another $100. This gives the PiKVM more functionality than many devices that are much more expensive.

The web UI wasn’t as user friendly as it might have been, but it’s a lot better than iDRAC so I don’t have a serious complaint about it. It would be nice if there was an option for creating macros for keyboard scancodes so I could try and emulate the Fn options and keys for volume control on systems that support it.

25 September, 2024 11:01PM by etbe

September 24, 2024


Dirk Eddelbuettel

RcppFastAD 0.0.4 on CRAN: Updated Again

A new release 0.0.4 of the RcppFastAD package by James Yang and myself is now on CRAN.

RcppFastAD wraps the FastAD header-only C++ library by James which provides a C++ implementation of both forward and reverse mode of automatic differentiation. It offers an easy-to-use header library (which we wrapped here) that is both lightweight and performant. With a little bit of Rcpp glue, it is also easy to use from R in simple C++ applications. This release updates the quick fix in release 0.0.3 from a good week ago. James took a good look and properly disambiguated the statement that led clang to complain, so we are back to compiling as C++17 under all compilers, which makes for a slightly wider reach.

The NEWS file for this release follows.

Changes in version 0.0.4 (2024-09-24)

  • The package now properly addresses a clang warning on empty variadic macro arguments and is back to C++17 (James in #10)

Courtesy of my CRANberries, there is also a diffstat report for the most recent release. More information is available at the repository or the package page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

24 September, 2024 05:15PM

September 23, 2024


Jonathan McDowell

The (lack of a) return-to-office conspiracy

During COVID companies suddenly found themselves able to offer remote working where it hadn’t previously been on offer. That’s changed over the past 2 or so years, with most places I’m aware of moving back from a fully remote situation to either some sort of hybrid, or even full time office attendance. For example last week Amazon announced a full return to office, having already pulled remote-hired workers in for 3 days a week.

I’ve seen a lot of folk stating they’ll never work in an office again, and that RTO is insanity. Despite being lucky enough to work fully remotely (for a role I’d been approached about before, but was never prepared to relocate for), I feel the objections from those who are pro-remote often fail to consider the nuances involved. So let’s talk about some of the reasons why companies might want to enforce some sort of RTO.

Real estate value

Let’s clear this one up first. It’s not about real estate value, for most companies. City planners and real estate investors might care, but even if your average company owned their building they’d close it in an instant all other things being equal. An unoccupied building costs a lot less to maintain. And plenty of companies rent and would save money even if there’s a substantial exit fee.

Occupancy levels

That said, once you have anyone in the building the equation changes. If you’re having to provide power, heating, internet, security/front desk staff etc, you want to make sure you’re getting your money’s worth. There’s no point heating a building that can seat 100 when only 10 people are present. One option is to downsize the building, but that leads to not being able to assign everyone a desk, for example. No one I know likes hot desking. There are also scheduling problems around ensuring there are enough desks for everyone who might turn up on a certain day, and you’ve ruled out the option of company/office wide events.

Coexistence builds relationships

As a remote worker I wish it wasn’t true that most people find it easier to form relationships in person, but it is. Some of this can be worked on with specific “teambuilding” style events, rather than in office working, but I know plenty of folk who hate those as much as they hate the idea of being in the office. I am lucky in that I work with a bunch of folk who are terminally online, so it’s much easier to have those casual conversations even being remote, but I also accept I miss out on some things because I’m just not in the office regularly enough. You might not care about this (“I just need to put my head down and code, not talk to people”), but don’t discount it as a valid reason why companies might want their workers to be in the office. This often matters even more for folk at the start of their career, where having a bunch of experienced folk around to help them learn and figure things out ends up working much better in person (my first job offered to let me go mostly remote when I moved to Norwich, but I said no as I knew I wasn’t ready for it yet).

Coexistence allows for unexpected interactions

People hate the phrase “water cooler chat”, and I get that, but it covers the idea of casual conversations that just won’t happen the same way when people are remote. I experienced this while running Black Cat; every time Simon and I met up in person we had a bunch of useful conversations even though we were on IRC together normally, and had a VoIP setup that meant we regularly talked too. Equally when I was at Nebulon there were conversations I overheard in the office where I was able to correct a misconception or provide extra context. Some of this can be replicated with the right online chat culture, but I’ve found many places end up with folk taking conversations to DMs, or they happen in “private” channels. It happens more naturally in an office environment.

It’s easier for bad managers to manage bad performers

Again, this falls into the category of things that shouldn’t be true, but are. Remote working has increased the ability of people who want to slack off to do so without being easily detected. Ideally these folk, if they fail to perform, would then be performance managed out of the organisation. That’s hard though; there are (rightly) a bunch of rights workers have (I’m writing from a UK perspective) around the procedure that needs to be followed. Managers need organisational support in this to make sure they get it right (and folk are given a chance to improve), which is often lacking.

Summary

Look, I get there are strong reasons why offering remote is a great thing from the company perspective, but what I’ve tried to outline here is that a return-to-office mandate can have some compelling reasons behind it too. Some of those might be things that wouldn’t exist in an ideal world, but unfortunately fixing them is a bigger issue than just changing where folk work from. Not acknowledging that just makes any reaction against office work seem ill-informed, to me.

23 September, 2024 05:31PM

September 22, 2024


Adnan Hodzic

Effortless Linux backups: Power of OpenZFS Snapshots on Ubuntu 24.04

Linux snapshots? Back in the day (mid 2000s) ReiserFS was my go-to Linux filesystem; it was fast & reliable. But then after its creator...

22 September, 2024 04:00PM by Adnan Hodzic

September 21, 2024

Jamie McClelland

How do I warm up an IP Address?

After years on the waiting list, May First was just given a /24 block of IP addresses. Excellent.

Now we want to start using them for, among other things, sending email.

I haven’t added a new IP address to our mail relays in a while and things seem to change regularly in the world of email, so I’m curious: what’s the best 2024 way to warm up IP addresses, particularly using postfix?

SendGrid has a nice page on the topic. It establishes the number of messages to send per day. But I’m not entirely sure how to fit messages per day into our setup.

We use round robin DNS to direct email to one of several dozen email relay servers using postfix. And unfortunately our DNS software (knot) doesn’t have a way to add weights to ensure some IPs show up more often than others (much less limit the specific number of messages a given relay should get).

Postfix has some nice knobs for rate limiting, particularly: default_destination_recipient_limit and default_destination_rate_delay

If default_destination_recipient_limit is over 1, then default_destination_rate_delay is equal to the minimum delay between sending email to the same domain.

So, I’m starting our IP addresses out at 30m, which prevents any single domain from receiving more than 2 messages per hour. Sadly, there are a lot of different domain names that deliver to the same set of popular corporate MX servers, so I am not sure I can accurately control how many messages a given provider sees coming from a given IP address. But it’s a start.
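
Expressed as main.cf settings, that starting point is something like the following sketch:

# with default_destination_recipient_limit above 1 (the default is 50),
# a "destination" is the recipient domain, so this enforces a minimum
# 30-minute gap between deliveries to the same domain
default_destination_rate_delay = 30m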

A bigger problem is that messages that exceed the limit hang out in the active queue until they can be sent without violating the rate limit. Since I can’t fully control the number of messages a given queue receives (due to my inability to control the DNS round robin weights), a lot of messages are going to be severely delayed, especially ones with an @gmail.com domain name.

I know I can temporarily set relayhost to a different queue and flush deferred messages; however, as far as I can tell, that doesn’t work with active messages.

To help mitigate the problem I’m only using our bulk mail queue to warm up IPs, but really, this is not ideal.

Suggestions welcome!

Update #1

If you are running postfix in a multi-instance setup and you have instances that are already warmed up, you can move active messages between queues with these steps:

# Put the message on hold in the warming up instance
postsuper -c /etc/postfix-warmingup -h $queueid
# Copy to a warmed up instance
cp --preserve=mode,ownership,timestamp /var/spool/postfix-warmingup/hold/$queueid /var/spool/postfix-warmedup/incoming/
# Queue the message
postqueue -c /etc/postfix-warmedup -i $queueid
# Delete from the original queue.
postsuper -c /etc/postfix-warmingup -d $queueid

After just 12 hours we had thousands of messages piling up. This warm up method was never going to work without the ability to move them to a faster queue.

[Additional update: be sure to reload the postfix instance after flushing the queue so messages are drained from the active queue on the correct schedule. See update #4.]

Update #2

After 24 hours, most email is being accepted as far as I can tell. I am still getting a small percentage of email deferred by Yahoo with:

421 4.7.0 [TSS04] Messages from 204.19.241.9 temporarily deferred due to unexpected volume or user complaints - 4.16.55.1; see https://postmaster.yahooinc.com/error-codes (in reply

So I will keep it at 30m for another 24 hours or so and then move to 15m. Now that I can flush the backlog of active messages I am in less of a hurry.

Update #3

Well, this doesn’t seem to be working the way I want it to.

When a message arrives faster than the designated rate limit, it remains in the active queue.

I’m not entirely sure how the timing is supposed to work, but at this point I’m down to a 5m rate delay, and the active messages are just hanging out for a lot longer than 5m. I tried flushing the queue, but that only seems to affect the deferred messages. I finally got them re-tried with systemctl reload. I wonder if there is a setting to control this retry? Or better yet, why can’t these messages that exceed the rate delay be deferred instead?

Update #4

I think I see why I was confused in Update #3 about the timing. I suspect that when I move messages out of the active queue it screws up the timer. Reloading the instance resets the timer. Every time you muck with active messages, you should reload.
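
In multi-instance terms that reload is per instance, e.g.:

# reload the warming-up instance so its rate-delay timers are reset
postfix -c /etc/postfix-warmingup reload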

21 September, 2024 12:27PM


Gunnar Wolf

50 years of queries

This post is a review for Computing Reviews of 50 years of queries, an article published in Communications of the ACM.

The relational model is probably the one innovation that brought computers to the mainstream for business users. This article by Donald Chamberlin, creator of one of the first query languages (that evolved into the ubiquitous SQL), presents its history as a commemoration of the 50th anniversary of his publication of said query language.

The article begins by giving background on information processing before the advent of today’s database management systems: with systems storing and processing information based on sequential-only magnetic tapes in the 1950s, adopting a record-based, fixed-format filing system was far from natural. The late 1960s and early 1970s saw many fundamental advances, among which one of the best known is E. F. Codd’s relational model. The first five pages (out of 12) present the evolution of the data management community up to the 1974 SIGFIDET conference. This conference was so important in the eyes of the author that, in his words, it is the event that “starts the clock” on 50 years of relational databases.

The second part of the article tells about the growth of the structured English query language (SEQUEL), eventually renamed SQL, including the importance of its standardization and its presence in commercial products as the dominant database language since the late 1970s. Chamberlin presents short histories of the various implementations, many of which remain dominant names today: Oracle, Informix, and DB2. Entering the 1990s, open-source communities introduced MySQL, PostgreSQL, and SQLite.

The final part of the article presents controversies and criticisms related to SQL and the relational database model as a whole. Chamberlin presents the main points of controversy throughout the years: 1) the SQL language lacks orthogonality; 2) SQL tables, unlike formal relations, might contain null values; and 3) SQL tables, unlike formal relations, may contain duplicate rows. He explains the issues and tradeoffs that guided the language design as it unfolded. Finally, a section presents several points that explain how SQL and the relational model have remained, for 50 years, a “winning concept,” as well as some thoughts regarding the NoSQL movement that gained traction in the 2010s.

This article is written with clear language and structure, making it easy and pleasant to read. It does not drive a technical point, but instead is a recap on half a century of developments in one of the fields most important to the commercial development of computing, written by one of the greatest authorities on the topic.

21 September, 2024 05:03AM

September 18, 2024


Dirk Eddelbuettel

Rblpapi 0.3.15: Updated and New BLP Library


Version 0.3.15 of the Rblpapi package arrived on CRAN today. Rblpapi provides a direct interface between R and the Bloomberg Terminal via the C++ API provided by Bloomberg (but note that a valid Bloomberg license and installation is required).

This is the fifteenth release since the package first appeared on CRAN in 2016. This release updates to the current version 3.24.6 of the Bloomberg API, and rounds out a few corners in the packaging from continuous integration to the vignette.

The detailed list of changes follows below.

Changes in Rblpapi version 0.3.15 (2024-09-18)

  • A warning is now issued if more than 1000 results are returned (John in #377 addressing #375)

  • A few typos in the rblpapi-intro vignette were corrected (Michael Streatfield in #378)

  • The continuous integration setup was updated (Dirk in #388)

  • Deprecation warnings over char* where C++ class Name is now preferred have been addressed (Dirk in #391)

  • Several package files have been updated (Dirk in #392)

  • The request formation has been corrected, and an example was added (Dirk and John in #394 and #396)

  • The Bloomberg API has been upgraded to release 3.24.6.1 (Dirk in #397)

Courtesy of my CRANberries, there is also a diffstat report for this release. As always, more detailed information is at the Rblpapi repo or the Rblpapi page. Questions, comments etc should go to the issue tickets system at the GitHub repo.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

18 September, 2024 02:52PM

Jamie McClelland

Gmail vs Tor vs Privacy

A legit email went to spam. Here are the redacted, relevant headers:

[redacted]
X-Spam-Flag: YES
X-Spam-Level: ******
X-Spam-Status: Yes, score=6.3 required=5.0 tests=DKIM_SIGNED,DKIM_VALID,
[redacted]
	*  1.0 RCVD_IN_XBL RBL: Received via a relay in Spamhaus XBL
	*      [185.220.101.64 listed in xxxxxxxxxxxxx.zen.dq.spamhaus.net]
	*  3.0 RCVD_IN_SBL_CSS Received via a relay in Spamhaus SBL-CSS
	*  2.5 RCVD_IN_AUTHBL Received via a relay in Spamhaus AuthBL
	*  0.0 RCVD_IN_PBL Received via a relay in Spamhaus PBL
[redacted]
[very first received line follows...]
Received: from [10.137.0.13] ([185.220.101.64])
        by smtp.gmail.com with ESMTPSA id ffacd0b85a97d-378956d2ee6sm12487760f8f.83.2024.09.11.15.05.52
        for <xxxxx@mayfirst.org>
        (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
        Wed, 11 Sep 2024 15:05:53 -0700 (PDT)

At first I thought a Gmail IP address was listed in Spamhaus - I even opened a ticket. But then I realized it wasn’t the last hop that Spamhaus is complaining about, it’s the first hop, specifically the IP 185.220.101.64, which appears to be a Tor exit node.
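
As an aside, this kind of listing is easy to check by hand: DNSBLs are queried by reversing the IP’s octets and appending the list’s zone (a sketch; Spamhaus may refuse queries arriving via large public resolvers):

# check 185.220.101.64 against the zen list
host 64.101.220.185.zen.spamhaus.org
# an answer in 127.0.0.0/8 means listed; NXDOMAIN means not listed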

The sender is using their own client to relay email directly to Gmail. Like any sane person, they don’t trust Gmail to protect their privacy, so they are sending via Tor. But WTF, Gmail is not stripping the sending IP address from the header.

I’m a big fan of harm reduction and have always considered using your own client to relay email with Gmail as a nice way to avoid some of the surveillance tax Google imposes.

However, it seems that if you pursue this option you have two unpleasant choices:

  • Embed your IP address in every email message or
  • Use Tor and have your email messages go to spam

I suppose you could also use a VPN, but I doubt the IP reputation of most VPN exit nodes is going to be more reliable than Tor’s.

18 September, 2024 12:27PM

September 17, 2024


Benjamin Mako Hill

My Chair

I realize that because I have several chairs, the phrase “my chair” is ambiguous. To reduce confusion, I will refer to the head of my academic department as “my office chair” going forward.

17 September, 2024 10:11PM by Benjamin Mako Hill


Jonathan Dowland

ouch, part 2

Things have developed since my last post. Some lesions opened up on my ankle, which was initially good news: the pain substantially reduced. But they didn’t heal fast enough and so the medics decided on surgical debridement. That was last night. It seemed to be successful and I’m in recovery from surgery as I write. It’s hard to predict the near-future; a lot depends on how well and fast I heal.

I’ve got a negative-pressure dressing on it, which is incredible: a constantly maintained suction to aid in debridement and healing. Modern medicine feels like a sci fi novel.

17 September, 2024 12:53PM

ouch, part 3

The debridement operation was a success: nothing bad grew afterwards. I was discharged after a couple of nights with crutches, instructions not to weight-bear, a remarkable, portable negative-pressure "Vac" pump that lived by my side, and some strong painkillers.

About two weeks later, I had a skin graft. The surgeon took some skin from my thigh and stitched it over the debridement wound. I was discharged same-day, again with the Vac pump, and again with instructions not to weight-bear, at least for a few days.

This time I only kept the Vac pump for a week, and after a dressing change (the first time I saw the graft), I was allowed to walk again. Doing so is strangely awkward, and sometimes a little painful. I have physio exercises to help me regain strength and understanding about what I can do.

The donor site remained bandaged for another week before I saw it. I was expecting a stitched cut, but the surgeons have removed the top few layers only, leaving what looks more like a graze or sun-burn. There are four smaller, tentative-looking marks adjacent, suggesting they got it right on the fifth attempt. I'm not sure but I think these will all fade away to near-invisibility with time, and they don't hurt at all.

I've now been off work for roughly 12 weeks, but I think I am returning very soon. I am looking forward to returning to some sense of normality. It's been an interesting experience. I thought about writing more about what I've gone through, in particular my experiences in Hospital, dealing with the bureaucracy and things falling "between the gaps". Hanif Kureishi has done a better job than I could. It's clear that the NHS is staffed by incredibly passionate people, but there are a lot of structural problems that interfere with care.

17 September, 2024 12:53PM

Russ Allbery

Review: The Book That Broke the World

Review: The Book That Broke the World, by Mark Lawrence

Series: Library Trilogy #2
Publisher: Ace
Copyright: 2024
ISBN: 0-593-43796-9
Format: Kindle
Pages: 366

The Book That Broke the World is high fantasy and a direct sequel to The Book That Wouldn't Burn. You should not start here. In a delightful break from normal practice, the author provides a useful summary of the previous volume at the start of this book to jog your memory.

At the end of The Book That Wouldn't Burn, the characters were scattered and in various states of corporeality after some major revelations about the nature of the Library and the first appearance of the insectile Skeer. The Book That Broke the World picks up where it left off, and there is a lot more contact with the Skeer, but my guess that they would be the next viewpoint characters does not pan out. Instead, we get a new group and a new protagonist: Celcha, who sees angels who come to visit her brother.

I have complaints, but before I launch into those, I should say that I liked this book apart from the totally unnecessary cannibalism. (I'll get to that.) Livira is a bit sidelined, which is regrettable, but Celcha and her brother are interesting new characters, and both Arpix and Clovis, supporting characters in the first book, get some excellent character development. Similar to the first book, this is a puzzle box story full of world-building tidbits with intellectually-satisfying interactions. Lawrence elaborates and complicates his setting in ways that don't contradict earlier parts of the story but create more room and depth for the characters to be creative. I came away still invested in this world and eager to find out how Lawrence pulls the world-building and narrative threads together.

The biggest drawback of this book is that it's not new. My thought after finishing the first book of the series was that if Lawrence had enough world-building ideas to fill three books to that same level of density, this had the potential of being one of my favorite fantasy series of all time. By the end of the second book, I concluded that this is not the case. Instead of showing us new twists and complications the way the first book did throughout, The Book That Broke the World mostly covers the same thematic ground from some new angles. It felt like Lawrence was worried the reader of the first book may not have understood the theme or the world-building, so he spent most of the second book nailing down anything that moved.

I found that frustrating. One of the best parts of The Book That Wouldn't Burn was that Lawrence trusted the reader to keep up, which for me hit the glorious but rare sweet spot of pacing where I was figuring out the world at roughly the same pace as the characters. It surprised me in some very enjoyable ways. The Book That Broke the World did not surprise me. There are a few new things, which I enjoyed, and a few elaborations and developments of ideas, which I mostly enjoyed, but I saw the big plot twist coming at least fifty pages before it happened and found the aftermath more annoying than revelatory. It doesn't help that the plot rests on character misunderstandings, one of my least favorite tropes.

One of the other disappointments of this book is that the characters stop using the Library as a library. The Library at the center of this series is a truly marvelous piece of world-building with numerous fascinating features that are unrelated to its contents, but Livira used it first and foremost as a repository of books. The first book was full of characters solving problems by finding a relevant book and reading it.

In The Book That Broke the World, sadly, this is mostly gone. The Library is mostly reduced to a complicated Big Dumb Object setting. It's still a delightful bit of world-building, and we learn about a few new features, but I only remember two places where the actual books are important to the story. Even the book referenced in the title is mostly important as an artifact with properties unrelated to the words that it contains or to the act of reading it. I think this is a huge lost opportunity and something I hope Lawrence fixes in the last book of the trilogy.

This book instead focuses on the politics around the existence of the Library itself. Here I'm cautiously optimistic, although a lot is going to depend on the third book. Lawrence has set up a three-sided argument between groups that I will uncharitably describe as the libertarian techbros, the "burn it all down" reactionaries, and the neoliberal centrist technocrats. All three of those positions suck, and Lawrence had better be setting the stage for Livira to find a different path. Her unwillingness to commit to any of those sides gives me hope, but bringing this plot to a satisfying conclusion is going to be tricky. I hope I like what Lawrence comes up with, but it feels far from certain.

It doesn't help that he's started delivering some points with a sledgehammer, and that's where we get to the unnecessary cannibalism. Thankfully this is a fairly small part of the tail end of the book, but it was an unpleasant surprise that I did not want in this novel and that I don't think made the story any better.

It's tempting to call the cannibalism gratuitous, but it does fit one of the main themes of this story, namely that humans are depressingly good at using any rule-based object in unexpected and nasty ways that are contrary to the best intentions of the designer. This is the fundamental challenge of the Library as a whole and the question that I suspect the third book will be devoted to addressing, so I understand why Lawrence wanted to emphasize his point. The reason why there is cannibalism here is directly related to a profound misunderstanding of the properties of the library, and I detected an echo of one of C.S. Lewis's arguments in The Last Battle about the nature of Hell.

The problem, though, is that this is Satanic baby-killerism, to borrow a term from Fred Clark. There are numerous ways to show this type of perversion of well-intended systems, which I know because Lawrence used other ones in the first book that were more subtle but equally effective. One of the best parts of The Book That Wouldn't Burn is that there were few real villains. The conflict was structural, all sides had valid perspectives, and the ethical points of that story were made with some care and nuance.

The problem with cannibalism as it's used here is not merely that it's gross and disgusting and off-putting to the reader, although it is all of those things. If I wanted to read horror, I would read horror novels. I don't appreciate surprise horror used for shock value in regular fantasy. But worse, it's an abandonment of moral nuance. The function of cannibalism in this story is like the function of Satanic baby-killers: it's to signal that these people are wholly and irredeemably evil. They are the Villains, they are Wrong, and they cease to be characters and become symbols of what the protagonists are fighting. This is destructive to the story because it's designed to provoke a visceral short-circuit in the reader and let the author get away with sloppy story-telling. If the author needs to use tactics like this to point out who is the villain, they have failed to set up their moral quandary properly.

The worst part is that this was entirely unnecessary because Lawrence's story-telling wasn't sloppy and he set up his moral quandary just fine. No one was confused about the ethical point here. I as the reader was following without difficulty, and had appreciated the subtlety with which Lawrence posed the question. But apparently he thought he was too subtle and decided to come back to the point with a pile-driver. I think that seriously injured the story. The ethical argument here is much more engaging and thought-provoking when it's more finely balanced.

That's a lot of complaints, mostly because this is a good book that I badly wanted to be a great book but which kept tripping over its own feet. A lot of trilogies have weak second books. Hopefully this is another example of the mid-story sag, and the finale will be worthy of the start of the story. But I have to admit the moral short-circuiting and the de-emphasis of the actual books in the library has me a bit nervous. I want a lot out of the third book, and I hope I'm not asking this author for too much.

If you liked the first book, I think you'll like this one too, with the caveat that it's quite a bit darker and more violent in places, even apart from the surprise cannibalism. But if you've not started this series, you may want to wait for the third book to see if Lawrence can pull off the ending.

Followed by The Book That Held Her Heart, currently scheduled for publication in April of 2025.

Rating: 7 out of 10

17 September, 2024 02:57AM


Dirk Eddelbuettel

nanotime 0.3.10 on CRAN: Update

A minor update 0.3.10 for our nanotime package is now on CRAN. nanotime relies on the RcppCCTZ package (as well as the RcppDate package for additional C++ operations) and offers efficient high(er) resolution time parsing and formatting up to nanosecond resolution, using the bit64 package for the actual integer64 arithmetic. Initially implemented using the S3 system, it has benefitted greatly from a rigorous refactoring by Leonardo who not only rejigged nanotime internals in S4 but also added new S4 types for periods, intervals and durations.

This release updates one S4 method to accommodate very recent changes in r-devel, about which CRAN had reached out. This concerns the setdiff() method when applied to two nanotime objects. As it only affected R 4.5.0, due next April, it will not have been visible to many users, if any, unless they rebuilt r-devel in the last two or so weeks. In any event, it now works again for that setup too, and should be going forward.

We also retired one demo function from the very early days, apparently it relied on ggplot2 features that have since moved on. If someone would like to help out and resurrect the demo, please get in touch. We also cleaned out some no longer used tests, and updated DESCRIPTION to what is required now. The NEWS snippet below has the full details.

Changes in version 0.3.10 (2024-09-16)

  • Retire several checks for Solaris in test suite (Dirk in #130)

  • Switch to Authors@R in DESCRIPTION as now required by CRAN

  • Accommodate R-devel change for setdiff (Dirk in #133 fixing #132)

  • No longer ship defunct ggplot2 demo (Dirk fixing #131)

Thanks to my CRANberries, there is a diffstat report for this release. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository – and all documentation is provided at the nanotime documentation site.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

17 September, 2024 12:58AM

September 16, 2024

Russ Allbery

Review: The Wings Upon Her Back

Review: The Wings Upon Her Back, by Samantha Mills

Publisher: Tachyon
Copyright: 2024
ISBN: 1-61696-415-4
Format: Kindle
Pages: 394

The Wings Upon Her Back is a political steampunk science fantasy novel. If the author's name sounds familiar, it may be because Samantha Mills's short story "Rabbit Test" won Nebula, Locus, Hugo, and Sturgeon awards. This is her first novel.

Winged Zemolai is a soldier of the mecha god and the protege of Mecha Vodaya, the Voice. She has served the city-state of Radezhda by defending it against all enemies, foreign and domestic, for twenty-six years. Despite that, it takes only a moment of errant mercy for her entire life to come crashing down. On a whim, she spares a kitchen worker who was concealing a statue of the scholar god, meaning that he was only pretending to worship the worker god like all workers should. Vodaya is unforgiving and uncompromising, as is the sleeping mecha god. Zemolai's wings are ripped from her back and crushed in the hand of the god, and she's left on the ground to die of mechalin withdrawal.

The Wings Upon Her Back is told in two alternating timelines. The main one follows Zemolai after her exile as she is rescued by a young group of revolutionaries who think she may be useful in their plans. The other thread starts with Zemolai's childhood and shows the reader how she became Winged Zemolai: her scholar family, her obsession with flying, her true devotion to the mecha god, and the critical early years when she became Vodaya's protege. Mills maintains the separate timelines through the book and wraps them up in a rather neat piece of symbolic parallelism in the epilogue.

I picked up this book on a recommendation from C.L. Clark, and yes, indeed, I can see why she liked this book. It's a story about a political awakening, in which Zemolai slowly realizes that she has been manipulated and lied to and that she may, in fact, be one of the baddies. The Wings Upon Her Back is more personal than some other books with that theme, since Zemolai was specifically (and abusively) groomed for her role by Vodaya. Much of the book is Zemolai trying to pull out the hooks that Vodaya put in her or, in the flashback timeline, the reader watching Vodaya install those hooks.

The flashback timeline is difficult reading. I don't think Mills could have left it out, but she says in the afterword that it was the hardest part of the book to write and it was also the hardest part of the book to read. It fills in some interesting bits of world-building and backstory, and Mills does a great job pacing the story revelations so that both threads contribute equally, but mostly it's a story of manipulative abuse. We know from the main storyline that Vodaya's tactics work, which gives those scenes the feel of a slow-motion train wreck. You know what's going to happen, you know it will be bad, and yet you can't look away.

It occurred to me while reading this that Emily Tesh's Some Desperate Glory told a similar type of story without the flashback structure, which eliminates the stifling feeling of inevitability. I don't think that would have worked for this story. If you simply rearranged the chapters of The Wings Upon Her Back into a linear narrative, I would have bailed on the book. Watching Zemolai being manipulated would have been too depressing and awful for me to make it to the payoff without the forward-looking hope of the main timeline. It gave me new appreciation for the difficulty of what Tesh pulled off.

Mills uses this interwoven structure well, though. At about 90% through this book I had no idea how it could end in the space remaining, but it reaches a surprising and satisfying conclusion. Mills uses a type of ending that normally bothers me, but she does it by handling the psychological impact so well that I couldn't help but admire it. I'm avoiding specifics because I think it worked better when I wasn't expecting it, but it ties beautifully into the thematic point of the book.

I do have one structural objection, though. It's one of those problems I didn't notice while reading, but that started bothering me when I thought back through the story from a political lens. The Wings Upon Her Back is Zemolai's story, her redemption arc, and that means she drives the plot. The band of revolutionaries are great characters (particularly Galiana), but they're supporting characters. Zemolai is older, more experienced, and knows critical information they don't have, and she uses it to effectively take over. As setup for her character arc, I see why Mills did this. As political praxis, I have issues.

There is a tendency in politics to believe that political skill is portable and repurposable. Converting opposing operatives to the cause is welcomed not only because they indicate added support, but also because they can use their political skill to help you win instead. To an extent this is not wrong, and is probably the most true of combat skills (which Zemolai has in abundance). But there's an underlying assumption that politics is symmetric, and a critical reason why I hold many of the political positions that I do hold is that I don't think politics is symmetric.

If someone has been successfully stoking resentment and xenophobia in support of authoritarians, converts to an anti-authoritarian cause, and then produces propaganda stoking resentment and xenophobia against authoritarians, this is in some sense an improvement. But if one believes that resentment and xenophobia are inherently wrong, if one's politics are aimed at reducing the resentment and xenophobia in the world, then in a way this person has not truly converted. Worse, because this is an effective manipulation tactic, there is a strong tendency to put this type of political convert into a leadership position, where they will, intentionally or not, start turning the anti-authoritarian movement into a copy of the authoritarian movement they left. They haven't actually changed their politics because they haven't understood (or simply don't believe in) the fundamental asymmetry in the positions. It's the same criticism that I have of realpolitik: the ends do not justify the means because the means corrupt the ends.

Nothing that happens in this book is as egregious as my example, but the more I thought about the plot structure, the more it bothered me that Zemolai never listens to the revolutionaries she joins long enough to wrestle with why she became an agent of an authoritarian state and they didn't. They got something fundamentally right that she got wrong, and perhaps that should have been reflected in who got to make future decisions. Zemolai made very poor choices and yet continues to be the sole main character of the story, the one whose decisions and actions truly matter. Maybe being wrong about everything should be disqualifying for being the main character, at least for a while, even if you think you've understood why you were wrong.

That problem aside, I enjoyed this. Both timelines were compelling and quite difficult to put down, even when they got rather dark. I could have done with less body horror and a few fewer fight scenes, but I'm glad I read it.

Science fiction readers should be warned that the world-building, despite having an intricate and fascinating surface, is mostly vibes. I started the book wondering how people with giant metal wings on their back can literally fly, and thought the mentions of neural ports, high-tech materials, and immune-suppressing drugs might mean that we'd get some sort of explanation. We do not: heavier-than-air flight works because it looks really cool and serves some thematic purposes. There are enough hints of technology indistinguishable from magic that you could make up your own explanations if you wanted to, but that's not something this book is interested in. There's not a thing wrong with that, but don't get caught by surprise if you were in the mood for a neat scientific explanation of apparent magic.

Recommended if you like somewhat-harrowing character development with a heavy political lens and steampunk vibes, although it's not the sort of book that I'd press into the hands of everyone I know. The Wings Upon Her Back is a complete story in a single novel.

Content warning: the main character is a victim of physical and emotional abuse, so some of that is a lot. Also surgical gore, some torture, and genocide.

Rating: 7 out of 10

16 September, 2024 02:03AM

September 15, 2024


Dirk Eddelbuettel

RcppFastAD 0.0.3 on CRAN: Updated

A new release 0.0.3 of the RcppFastAD package by James Yang and myself is now on CRAN.

RcppFastAD wraps the FastAD header-only C++ library by James which provides a C++ implementation of both forward and reverse mode of automatic differentiation. It offers an easy-to-use header library (which we wrapped here) that is both lightweight and performant. With a little bit of Rcpp glue, it is also easy to use from R in simple C++ applications. This release turns compilation to the C++20 standard, as newer clang++ versions complained about a particular statement (which they took to be C++20) when compiled under C++17. So we obliged.

The NEWS file for this release follows.

Changes in version 0.0.3 (2024-09-15)

  • The package now compiles under the C++20 standard to avoid a warning under clang++-18 (Dirk addressing #9)

  • Minor updates to continuous integration and badges have been made as well

Courtesy of my CRANberries, there is also a diffstat report for the most recent release. More information is available at the repository or the package page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

15 September, 2024 11:19PM

Russell Coker

Kogan AX1800 Wifi6 Mesh

I previously blogged about the difficulties in getting a good Wifi mesh network setup [1].

I bought the Kogan AX1800 Wifi6 Mesh with 3 nodes for $140; the price has now dropped to $130. It’s only Wifi 6 (not 6E, which has the extra 6GHz frequency) because all the 6E ones were more expensive than I felt like paying.

I’ve got it running and it’s working really well. One of my laptops has a damaged wire connecting to its Wifi device, which decreased the signal to the degree that I could usually only connect to wifi when in the computer room (and then walk with it to another room once connected). Now I can connect that laptop to wifi in any part of my home. I can now get decent wifi access in my car in front of my home, which covers the important corner case of walking to my car and then immediately asking Google Maps for directions. Previously my phone would be deciding whether to switch away from wifi due to poor signal and that would delay getting directions; now I get directions quickly on Google Maps.

I’ve done tests with the Speedtest.net Android app and now get speeds of about 52Mbit/17Mbit in all parts of my home which is limited only by the speed of my NBN connection (one of the many reasons for hating conservatives is giving us expensive slow Internet). As my main reason for buying the devices is for Internet access they have clearly met my reason for purchase and probably meet the requirements for most people as well. Getting that speed is not trivial, my neighbours have lots of Wifi APs and bandwidth is congested. My Kogan 4K Android TV now plays 4K Netflix without pausing even though it only supports 2.4GHz wifi, so having a wifi mesh node next to the TV seems to help it.

I did some tests with the Olive Tree FTP server on a Galaxy Note 9 phone running the stock Samsung Android and got over 10MByte (80Mbit) upload and 8Mbyte (64Mbit) download speeds. This might be limited by the Android app or might be limited by the older version of Android. But it still gives higher speeds than my home Internet connection and much higher speeds than I need from an Android device.

Running iperf on Linux laptops talking to a Linux workstation that’s wired to the main mesh node I get speeds of 27.5Mbit from an old laptop on 2.4GHz wifi, 398Mbit from a new Wifi5 laptop when near the main mesh node, and 91Mbit from the same laptop when at the far end of my home. So not as fast as I’d like but still acceptable speeds.

The claims about Wifi 6 vs Wifi 5 speeds are that 6 will be about 3* faster. Three times the 398Mbit I measured over Wifi 5 would be about 1.2Gbit, roughly 20% faster than the Gigabit ethernet ports on the wifi nodes. So while 2.5Gbit ethernet on Wifi 6 APs would be a good feature to have, it seems that it might provide a 20% benefit at some future time when I have laptops with Wifi 6. At this time all the devices with 2.5Gbit ethernet cost more than I wanted to pay so I’m happy with this. It will probably be quite a while before laptops with Wifi 6 are in the price range I feel like paying.

For Wifi 6E it seems that anything less than 2.5Gbit ethernet will be a significant bottleneck. But I expect that by the time I buy a Wifi 6E mesh they will all have 2.5Gbit ethernet as standard.

The configuration of this device was quite easy via the built in web pages, everything worked pretty much as I expected and I hardly had to look at the manual. The mesh nodes are supposed to connect to each other when you press hardware buttons but that didn’t work for me so I used the web admin page to tell them to connect which worked perfectly. The admin of this seemed to be about as good as it gets.

Conclusion

The performance of this mesh hardware is quite decent. I can’t know for sure if it’s good or bad because performance really depends on what interference there is. But using this means that for me the Internet connection is now the main bottleneck for all parts of my home and I think it’s quite likely that most people in Australia who buy it will find the same result.

So for everyone in Australia who doesn’t have fiber to their home this seems like an ideal set of mesh hardware. It’s cheap, easy to setup, has no cloud stuff to break your configuration, gives quite adequate speed, and generally just does the job.

15 September, 2024 12:15PM by etbe

September 14, 2024

hackergotchi for Evgeni Golov

Evgeni Golov

Fixing the volume control in an Alesis M1Active 330 USB Speaker System

I've a set of Alesis M1Active 330 USB on my desk to listen to music. They were relatively inexpensive (~100€), have USB and sound pretty good for their size/price.

They were also sitting on my desk unused for a while, because the left speaker didn't produce any sound. Well, almost any. If you moved the volume knob around long enough you might find a position where the left speaker would work a bit, but it'd be quieter than the right one and would stop working again after some time. Pretty unacceptable when you want to listen to music.

Given the right speaker was working just fine and the left would work a bit when the volume knob is moved, I was quite certain which part was to blame: the potentiometer.

So just open the right speaker (it contains all the logic boards, power supply, etc), take out the broken potentiometer, buy a new one, replace, done. Sounds easy?

Well, to open the speaker you gotta loosen 8 (!) screws on the back. At least it's not glued, right? Once the screws are removed you can pull out the back plate, which will bring the power supply, USB controller, sound amplifier and cables, lots of cables: two pairs of thick cables, one to each driver, one thin pair for the power switch and two sets of "WTF is this, I am not going to trace pinouts today", one with a 6 pin plug, one with a 5 pin one.

Unplug all of these! Yes, they are plugged, nice. Nope, still no friggin' idea how to get to the potentiometer. If you trace the "thin pair" and "WTF1" cables, you see they go inside a small wooden box structure. So we have to pull the thing from the front?

Okay, let's remove the plastic part of the knob. Right, this looks like a potentiometer. Unscrew it. No, no need for a Makita wrench, I just didn't have anything else in the right size (10mm).

right Alesis M1Active 330 USB speaker with a Makita wrench where the volume knob is

Still, no movement. Let's look again from the inside! Oh ffs, there are six more screws inside, holding the front. Away with them! Just need a very long PH1 screwdriver.

Now you can slowly remove the part of the front where the potentiometer is. Be careful, the top tweeter is mounted to the front, not the main case and so is the headphone jack, without an obvious way to detach it. But you can move away the front far enough to remove the small PCB with the potentiometer and the LED.

right Alesis M1Active 330 USB speaker open

Great, this was the easy part!

The only thing printed on the potentiometer is "A10K". 10K is easy -- 10kOhm. A?! Wikipedia says "A" means "logarithmic", but only if made in the US or Asia. In Europe that'd be "linear". "B" in US/Asia means "linear", in Europe "logarithmic". Do I need to tap the sign again? (The sign is a print of XKCD#927.) My multimeter says in this case it's something like logarithmic. On the right channel anyway, the left one is more like a chopping board. And what's this green box at the end? Oh right, this thing also turns the power on and off. So it's a power switch.

Where the fuck do I get a logarithmic 10kOhm stereo potentiometer with a power switch? And then in the exact right size too?!

Of course not at any of the big German electronics pharmacies. But AliExpress saves the day, again. It's even the same color!

Soldering without pulling the cable out of the case was a bit challenging, but I managed it and now have stereo sound again. Yay!

PS: Don't operate this thing open to try it out. 230V are dangerous!

14 September, 2024 06:38PM by evgeni

September 13, 2024

hackergotchi for Daniel Pocock

Daniel Pocock

Houston Energy & Climate Week 2024

The inaugural Houston Energy & Climate Week was held from 9 to 13 September 2024.

The schedule included a wide range of events about energy and climate issues on both a local and global scale.

Australian American Chamber of Commerce (AACC) Texas

On the afternoon of Tuesday, 10 September, the Australian American Chamber of Commerce (AACC) Texas ran a panel discussion involving businesses and entrepreneurs with Australian connections.

Many people commented that it was well organized and fun and one of the highlights of the week.

Lucas Marks, Green Li-ion, Gabrielle Hall, Geeta Thakorlal, Houston Climate Week 2024

 

Megan Lund, Woodside, Dr. Suman Khatiwada, Syzygy Plasmonics, Lucas Marks, Green Li-ion, Gabrielle Hall, Houston Climate Week 2024

 

Megan Lund, Woodside, Dr. Suman Khatiwada, Syzygy Plasmonics, Lucas Marks, Green Li-ion, Houston Climate Week 2024

Energy Tech Nexus Founder After Party

For those who survived the Australians and a full day of activities for the opening of Energy Tech Nexus in Texas, there was an after party in the penthouse of the Esperson building.

Juliana Garaizar, Payal Patel, Jason Ethier, Houston Climate Week 2024

Foxy Moxy cocktails

The fun continued into the following evening with a cocktail reception at the Houston Moxy hotel. Sustainable beverages were available for people to try.

Smiling Oak sustainable whiskey is produced locally in Texas and seeks to reuse barrels and other materials in the production process.

Houston Climate Week 2024

The Moxy team provided a wide range of food and drinks.

Christina Dunn Scott, Houston Climate Week 2024

Telmont wines provided champagne from France.

Ana Colmenares, Telmont wines, Houston Climate Week 2024

Christina Dunn Scott, Ingrid Velasco, Houston Climate Week 2024

Pankaj Tanwar, CarbonBetter, Houston Climate Week 2024

Katie Mehnert (ALLY Energy) gave a speech thanking the volunteers, sponsors and partners who participated in the cocktail event.

Katie Mehnert, Houston Climate Week 2024

Katie Mehnert, Ana Colmenares, Houston Climate Week 2024

Katie Mehnert, Joe Pelati, Houston Climate Week 2024

Katie Mehnert, Christina Dunn Scott, Houston Climate Week 2024

Thalia Kruger, Katie Mehnert, Naomie Jabbari, Houston Climate Week 2024

Emma Vafi, Katie Mehnert, Naomie Jabbari, Houston Climate Week 2024

Ana Colmenares, Telmont wines, Houston Climate Week 2024

13 September, 2024 07:30PM

September 12, 2024

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 14.0.2-1 on CRAN: Updates


Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 1164 other packages on CRAN, downloaded 36.1 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 595 times according to Google Scholar.

Conrad released two small incremental updates following version 14.0.0. We did not immediately bring these to CRAN as we have to be mindful of the desired upload cadence of ‘once every one or two months’. But as 14.0.2 has been stable for a few weeks, we have now decided to bring it to CRAN. Changes since the last CRAN release are summarised below and are overall fairly minimal. On the package side, we reorder what citation() returns, and now follow CRAN requirements via Authors@R.

Changes in RcppArmadillo version 14.0.2-1 (2024-09-11)

  • Upgraded to Armadillo release 14.0.2 (Stochastic Parrot)

    • Optionally use C++20 memory alignment

    • Minor corrections for several corner-cases

  • The order of items displayed by citation() is reversed (Conrad in #449)

  • The DESCRIPTION file now uses an Authors@R field with ORCID IDs

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

12 September, 2024 10:29PM

September 11, 2024

Jamie McClelland

MariaDB mystery

I keep getting an error in our backup logs:

Sep 11 05:08:03 Warning: mysqldump: Error 2013: Lost connection to server during query when dumping table `1C4Uonkwhe_options` at row: 1402
Sep 11 05:08:03 Warning: Failed to dump mysql databases ic_wp

It’s a WordPress database having trouble dumping the options table.

The error log has a corresponding message:

Sep 11 13:50:11 mysql007 mariadbd[580]: 2024-09-11 13:50:11 69577 [Warning] Aborted connection 69577 to db: 'ic_wp' user: 'root' host: 'localhost' (Got an error writing communication packets)

The Internet is full of suggestions, almost all of which focus on either the network connection between the client and the server or the FEDERATED plugin. We aren’t using the federated plugin, and this error happens when connecting via the socket.

Check it out - what is better than a consistently reproducible problem!

It happens if I try to select all the values in the table:

root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_options' ic_wp > /dev/null
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~#

It happens when I specify one specific offset:

root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_options limit 1 offset 1402' ic_wp
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~#

It happens if I specify the field name explicitly:

root@mysql007:~# mysql --protocol=socket -e 'select option_id,option_name,option_value,autoload from 1C4Uonkwhe_options limit 1 offset 1402' ic_wp
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~#

It doesn’t happen if I specify the key field:

root@mysql007:~# mysql --protocol=socket -e 'select option_id from 1C4Uonkwhe_options limit 1 offset 1402' ic_wp
+-----------+
| option_id |
+-----------+
|  16296351 |
+-----------+
root@mysql007:~#

It does happen if I specify the value field:

root@mysql007:~# mysql --protocol=socket -e 'select option_value from 1C4Uonkwhe_options limit 1 offset 1402' ic_wp
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~#

It doesn’t happen if I query the specific row by key field:

root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_options where option_id = 16296351' ic_wp
+-----------+----------------------+--------------+----------+
| option_id | option_name          | option_value | autoload |
+-----------+----------------------+--------------+----------+
|  16296351 | z_taxonomy_image8905 |              | yes      |
+-----------+----------------------+--------------+----------+
root@mysql007:~#

Hm. Surely there is some funky non-printing character in that option_value right?

root@mysql007:~# mysql --protocol=socket -e 'select CHAR_LENGTH(option_value) from 1C4Uonkwhe_options where option_id = 16296351' ic_wp
+---------------------------+
| CHAR_LENGTH(option_value) |
+---------------------------+
|                         0 |
+---------------------------+
root@mysql007:~# mysql --protocol=socket -e 'select HEX(option_value) from 1C4Uonkwhe_options where option_id = 16296351' ic_wp
+-------------------+
| HEX(option_value) |
+-------------------+
|                   |
+-------------------+
root@mysql007:~#

Resetting the value to an empty value doesn’t make a difference:

root@mysql007:~# mysql --protocol=socket -e 'update 1C4Uonkwhe_options set option_value = "" where option_id = 16296351' ic_wp
root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_options' ic_wp > /dev/null
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~#

Deleting the row in question causes the error to specify a new offset:

root@mysql007:~# mysql --protocol=socket -e 'delete from 1C4Uonkwhe_options where option_id = 16296351' ic_wp
root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_options' ic_wp > /dev/null
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~# mysqldump ic_wp > /dev/null
mysqldump: Error 2013: Lost connection to server during query when dumping table `1C4Uonkwhe_options` at row: 1401
root@mysql007:~#

If I put the record I deleted back in, we return to the old offset:

root@mysql007:~# mysql --protocol=socket -e 'insert into 1C4Uonkwhe_options VALUES(16296351,"z_taxonomy_image8905","","yes");' ic_wp 
root@mysql007:~# mysqldump ic_wp > /dev/null
mysqldump: Error 2013: Lost connection to server during query when dumping table `1C4Uonkwhe_options` at row: 1402
root@mysql007:~#

I’m losing my little mind. Let’s get drastic and create a whole new table, copy over the data delicately working around the deadly offset:

root@mysql007:~# mysql --protocol=socket -e 'create table 1C4Uonkwhe_new_options like 1C4Uonkwhe_options;' ic_wp 
root@mysql007:~# mysql --protocol=socket -e 'insert into 1C4Uonkwhe_new_options select * from 1C4Uonkwhe_options limit 1402 offset 0;' ic_wp 
--- There are only 33 more records; not sure how to specify an unlimited limit but 100 does the trick.
root@mysql007:~# mysql --protocol=socket -e 'insert into 1C4Uonkwhe_new_options select * from 1C4Uonkwhe_options limit 100 offset 1403;' ic_wp 

Now let’s make sure all is working properly:

root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_new_options' ic_wp >/dev/null;

Now let’s examine which row we are missing:

root@mysql007:~# mysql --protocol=socket -e 'select option_id from 1C4Uonkwhe_options where option_id not in (select option_id from 1C4Uonkwhe_new_options) ;' ic_wp 
+-----------+
| option_id |
+-----------+
|  18405297 |
+-----------+
root@mysql007:~#

Wait, what? I was expecting option_id 16296351.

Oh, now we are getting somewhere. And I see my mistake: when using offsets, you need to use ORDER BY or you won’t get consistent results.

root@mysql007:~# mysql --protocol=socket -e 'select option_id from 1C4Uonkwhe_options order by option_id limit 1 offset 1402' ic_wp ;
+-----------+
| option_id |
+-----------+
|  18405297 |
+-----------+
root@mysql007:~#

Now that I have the correct row… what is in it:

root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_options where option_id = 18405297' ic_wp ;
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~#

Well, that makes a lot more sense. Let’s start over with examining the value:

root@mysql007:~# mysql --protocol=socket -e 'select CHAR_LENGTH(option_value) from 1C4Uonkwhe_options where option_id = 18405297' ic_wp ;
+---------------------------+
| CHAR_LENGTH(option_value) |
+---------------------------+
|                  50814767 |
+---------------------------+
root@mysql007:~#

Wow, that’s a lot of characters. If it were a book, it would be 35,000 pages long (I just discovered this site). It’s a LONGTEXT field so it should be able to handle it. But now I have a better idea of what could be going wrong. The name of the option is “rewrite_rules” so it seems like something is going wrong with the generation of that option.

I imagine there is some tweak I can make to allow MariaDB to cough up the value (read_buffer_size? tmp_table_size?). But I’ll start with checking in with the database owner because I don’t think 35,000 pages of rewrite rules is appropriate for any site.

11 September, 2024 12:27PM

September 10, 2024

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

GS1900-10HP web session hijack

While fiddling around, I found a (fairly serious) vulnerability in Zyxel's GS1900-10HP and related switches; today Zyxel released an advisory with updated firmware, so I can publish my side of it as well. (Unfortunately there's no Zyxel bounty program, but Zyxel PSIRT has been forthcoming all along, which I guess is all you can hope for.)

The CVE (CVE-2024-38270) is sparse on details, so I'll simply paste my original message to Zyxel below:

Hi,

GS1900-10HP (probably also many other switches in the same series),
firmware V2.80(AAZI.0) (also older ones) generate web authentication
tokens in an unsafe way. This makes it possible for an attacker
to guess them and hijack the session.

web_util_randStr_generate() contains code that is functionally
the same as this:

        char token[17];
        struct timeval now;
        gettimeofday(&now, NULL);
        srandom(now.tv_sec + now.tv_usec);
        for (int i = 0; i < 16; ++i) {
                long r = random() % 62;
                char c;
                if (r < 10) {
                        c = r + '0';  // 0..9
                } else if (r < 36) {
                        c = r + ('A' - 10);  // A..Z
                } else {
                        c = r + ('a' - 36);  // a..z
                }
                token[i] = c;
        }
        token[16] = 0;

(random() comes from uclibc, but it has the same generator as glibc,
so the code runs just as well on desktop Linux)

This token is generated on initial login, and stored in a cookie
on the client. This has multiple problems:

First, the clock is a known quantity; even if the switch is not on SNTP,
it is trivial to get its idea of time-of-day by just doing a HTTP
request and looking at the Date header. This means that if an attacker
knows precisely when the administrator logged in (for instance, by observing
a HTTPS login on the network), they will have a very limited range of
possible tokens to check.

Second, tv_sec and tv_usec are combined in an improper way, canceling
out much of the intended entropy. As long as one assumes that the
administrator logged in less than a day ago, the entire range of possible
seeds is contained within the range [now - 86400, now + 999999], i.e.
only about 1.1M possible cookies, which can simply be tried serially
even if one did not observe the original login. There is no brute-force
protection on the web interface.

I have verified that this attack is practical, by simply generating all the
tokens and asking for the status page repeatedly (it is trivial to see
whether it returns an authentication success or failure). The switch can
sustain about one try every 96 ms on average against an attacker on a local
LAN (there is no keepalive or multithreading, so the most trivial code is
seemingly also the best one), which means that an attack will succeed on
average after about 15 hours; my test run succeeded after a bit under three
hours. If there are multiple administrator sessions active, the expected time
to success is of course lower, although the tries are also somewhat slower
because the switch has to deal with the keepalive traffic from the admins.

This is a straightforward case of CWE-330 (Use of Insufficiently Random
Values), with subcategories CWE-331, CWE-334, CWE-335, CWE-337, CWE-339,
CWE-340, CWE-341 and probably others. The suggested fix is simple: Read
entropy from /dev/urandom or another good source, instead of using random().
(Make sure that you don't get bias issues due to the use of modulo; you can
use e.g. rejection sampling.)

Session timeout does help against this attack (by default, it is 3 minutes),
but only as long as the administrator has not kept a tab open. If the tab is
left open, that keeps on making background requests that refreshes the token
every five seconds, guaranteeing a 100% success rate if given a day or two.

There is also _tons_ of outdated software on the switch (kernel from 2008,
OpenSSH from 2013, netkit-telnetd which is no longer maintained, a fork of
a very old NET-SNMP, etc.), but I did not check whether there are any
relevant security holes or whether you have actually backported patches.

I haven't verified what their fix looks like, but it's probably somewhere there in the GPL dump. :-)
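
For illustration, here is a minimal userspace sketch of the kind of fix suggested above -- drawing bytes from /dev/urandom and using rejection sampling to avoid modulo bias. This is my own sketch, not Zyxel's actual fix:

#include <stdio.h>
#include <stdlib.h>

/* accept only bytes < 248 (= 4 * 62) so that "byte % 62" is unbiased */
static unsigned char random_byte(FILE *urandom)
{
        int c;
        do {
                c = fgetc(urandom);
                if (c == EOF)
                        exit(1);
        } while (c >= 248);
        return (unsigned char)c;
}

int main(void)
{
        FILE *urandom = fopen("/dev/urandom", "rb");
        char token[17];

        if (!urandom)
                return 1;
        for (int i = 0; i < 16; ++i) {
                int r = random_byte(urandom) % 62;
                if (r < 10)
                        token[i] = '0' + r;          /* 0..9 */
                else if (r < 36)
                        token[i] = 'A' + (r - 10);   /* A..Z */
                else
                        token[i] = 'a' + (r - 36);   /* a..z */
        }
        token[16] = '\0';
        fclose(urandom);
        printf("%s\n", token);
        return 0;
}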

10 September, 2024 07:55AM

hackergotchi for Ben Hutchings

Ben Hutchings

FOSS activity in August 2024

10 September, 2024 12:51AM by Ben Hutchings

September 09, 2024

FOSS activity in July 2024

09 September, 2024 11:57PM by Ben Hutchings

hackergotchi for Wouter Verhelst

Wouter Verhelst

NBD: Write Zeroes and Rotational

The NBD protocol has grown a number of new features over the years. Unfortunately, some of those features are not (yet?) supported by the Linux kernel.

I suggested a few times over the years that the maintainer of the NBD driver in the kernel, Josef Bacik, take a look at these features, but he hasn't done so; presumably he has other priorities. As with anything in the open source world, if you want it done you must do it yourself.

I'd been off and on considering to work on the kernel driver so that I could implement these new features, but I never really got anywhere.

A few months ago, however, Christoph Hellwig posted a patch set that reworked a number of block device drivers in the Linux kernel to a new type of API. Since the NBD mailinglist is listed in the kernel's MAINTAINERS file, this patch series was crossposted to the NBD mailinglist, too, and when I noticed that it explicitly disabled the "rotational" flag on the NBD device, I suggested to Christoph that perhaps "we" (meaning, "he") might want to vary the decision on whether a device is rotational depending on whether the NBD server signals, through the flag that exists for that very purpose, whether the device is rotational.

To which he replied "Can you send a patch".

That got me down the rabbit hole, and now, for the first time in the 20+ years of being a C programmer who uses Linux exclusively, I got a patch merged into the Linux kernel... twice.

So, what do these things do?

The first patch adds support for the ROTATIONAL flag. If the NBD server mentions that the device is rotational, it will be treated as such, and the elevator algorithm will be used to optimize accesses to the device. For the reference implementation, you can do this by adding a line "rotational = true" to the relevant section (relating to the export where you want it to be used) of the config file.
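
As a sketch, assuming an export section named [myexport] and a made-up image path, the relevant part of the nbd-server config would look something like this:

[myexport]
    exportname = /srv/nbd/disk.img
    rotational = true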

It's unlikely that this will be of much benefit in most cases (most nbd-server installations will be exporting a file on a filesystem, with the elevator algorithm implemented server side, so it doesn't matter whether the device has the rotational flag set), but it's there in case you wish to use it.

The second set of patches adds support for the WRITE_ZEROES command. Most devices these days allow you to tell them "please write N zeroes starting at this offset", which is a lot more efficient than sending over a buffer of N zeroes and asking the device to do DMA to copy buffers etc etc for just zeroes.

The NBD protocol has supported its own WRITE_ZEROES command for a while now, and hooking it up was reasonably simple in the end. The only problem is that it expects length values in bytes, whereas the kernel uses it in blocks. It took me a few tries to get that right -- and then I also fixed up handling of discard messages, which required the same conversion.
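
As an illustration (my sketch of the idea, not the actual patch), the conversion boils down to shifting by the sector size:

/* The block layer hands us a length in 512-byte sectors; the NBD
 * protocol wants bytes. SECTOR_SHIFT is 9, i.e. 512 = 1 << 9. */
static u64 nbd_len_bytes(struct request *req)
{
        return (u64)blk_rq_sectors(req) << SECTOR_SHIFT;
}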

09 September, 2024 03:00PM

Antoine Beaupré

Playing with fonts again

I am getting increasingly frustrated by Fira Mono's lack of italic support so I am looking at alternative fonts again.

Commit Mono

This time I seem to be settling on either Commit Mono or Space Mono. For now I'm using Commit Mono because it's a little more compressed than Fira and does have an italic version. I don't like how Space Mono's parentheses (()) are "squarish": they feel visually ambiguous with the square brackets ([]), a big no-no for my primary use case (code).

So here I am using a new font, again. It required changing a bunch of configuration files in my home directory (which is in a private repository, sorry) and Emacs configuration (thankfully that's public!).

One gotcha is I realized I didn't actually have a global font configuration in Emacs, as some Faces define their own font family, which overrides the frame defaults.

This is what it looks like, before:

A dark terminal showing the test sheet in Fira Mono
Fira Mono

After:

A dark terminal showing the test sheet in Commit Mono
Commit Mono

(Notice how those screenshots are not sharp? I'm surprised too. The originals look sharp on my display, I suspect this is something to do with the Wayland transition. I've tried with both grim and flameshot, for what it's worth.)

They are pretty similar! Commit Mono feels a bit more vertically compressed -- maybe too much so, actually: the line height feels too low. But it's heavily customizable so that's something that's relatively easy to fix, if it's really a problem. Its weight is also a little heavier and wider than Fira, which I find a little distracting right now, but maybe I'll get used to it.

All characters seem properly distinguishable, although, if I'd really want to nitpick I'd say the © and ® are too different, with the latter (REGISTERED SIGN) being way too small, basically unreadable here. Since I see this sign approximately never, it probably doesn't matter at all.

I like how the ampersand (&) is more traditional, although I'll miss the exotic one Fira produced... I like how the back quotes (`, GRAVE ACCENT) drop down low, nicely aligned with the apostrophe. As I mentioned before, I like how the bar on the "f" aligns with the tops of the other letters, something that really annoys me in Fira Mono now that I've noticed it (it's not aligned!).

A UTF-8 test file

Here's the test sheet I've made up to test various characters. I could have sworn I had a good one like this lying around somewhere but couldn't find it so here it is, I guess.

US keyboard coverage:

abcdefghijklmnopqrstuvwxyz`1234567890-=[]\;',./
ABCDEFGHIJKLMNOPQRSTUVWXYZ~!@#$%^&*()_+{}|:"<>?

latin1 coverage: ¡¢£¤¥¦§¨©ª«¬­®¯°±²³´µ¶·¸¹º»¼½¾¿
EURO SIGN, TRADE MARK SIGN: €™

ambiguity test:

e¢coC0ODQ iI71lL!|¦
b6G&0B83  [](){}/\.…·•
zs$S52Z%  ´`'"‘’“”«»

all characters in a sentence, uppercase:

the quick fox jumps over the lazy dog
THE QUICK FOX JUMPS OVER THE LAZY DOG

same, in french:

Portez ce vieux whisky au juge blond qui fume.

dès noël, où un zéphyr haï me vêt de glaçons würmiens, je dîne
d’exquis rôtis de bœuf au kir, à l’aÿ d’âge mûr, &cætera.

DÈS NOËL, OÙ UN ZÉPHYR HAÏ ME VÊT DE GLAÇONS WÜRMIENS, JE DÎNE
D’EXQUIS RÔTIS DE BŒUF AU KIR, À L’AŸ D’ÂGE MÛR, &CÆTERA.

Ligatures test:

-<< -< -<- <-- <--- <<- <- -> ->> --> ---> ->- >- >>-
=<< =< =<= <== <=== <<= <= => =>> ==> ===> =>= >= >>=
<-> <--> <---> <----> <=> <==> <===> <====> :: ::: __
<~~ </ </> /> ~~> == != /= ~= <> === !== !=== =/= =!=
<: := *= *+ <* <*> *> <| <|> |> <. <.> .> +* =* =: :>
(* *) /* */ [| |] {| |} ++ +++ \/ /\ |- -| <!-- <!---

Box drawing alignment tests:
                                                                   █
╔══╦══╗  ┌──┬──┐  ╭──┬──╮  ╭──┬──╮  ┏━━┳━━┓ ┎┒┏┑   ╷  ╻ ┏┯┓ ┌┰┐    ▉ ╱╲╱╲╳╳╳
║┌─╨─┐║  │╔═╧═╗│  │╒═╪═╕│  │╓─╁─╖│  ┃┌─╂─┐┃ ┗╃╄┙  ╶┼╴╺╋╸┠┼┨ ┝╋┥    ▊ ╲╱╲╱╳╳╳
║│╲ ╱│║  │║   ║│  ││ │ ││  │║ ┃ ║│  ┃│ ╿ │┃ ┍╅╆┓   ╵  ╹ ┗┷┛ └┸┘    ▋ ╱╲╱╲╳╳╳
╠╡ ╳ ╞╣  ├╢   ╟┤  ├┼─┼─┼┤  ├╫─╂─╫┤  ┣┿╾┼╼┿┫ ┕┛┖┚     ┌┄┄┐ ╎ ┏┅┅┓ ┋ ▌ ╲╱╲╱╳╳╳
║│╱ ╲│║  │║   ║│  ││ │ ││  │║ ┃ ║│  ┃│ ╽ │┃ ░░▒▒▓▓██ ┊  ┆ ╎ ╏  ┇ ┋ ▍
║└─╥─┘║  │╚═╤═╝│  │╘═╪═╛│  │╙─╀─╜│  ┃└─╂─┘┃ ░░▒▒▓▓██ ┊  ┆ ╎ ╏  ┇ ┋ ▎
╚══╩══╝  └──┴──┘  ╰──┴──╯  ╰──┴──╯  ┗━━┻━━┛          └╌╌┘ ╎ ┗╍╍┛ ┋ ▏▁▂▃▄▅▆▇█

Dashes alignment test:

HYPHEN-MINUS, MINUS SIGN, EN, EM DASH, HORIZONTAL BAR, LOW LINE
--------------------------------------------------
−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−
––––––––––––––––––––––––––––––––––––––––––––––––––
——————————————————————————————————————————————————
――――――――――――――――――――――――――――――――――――――――――――――――――
__________________________________________________

Update: here is another such sample sheet, it's pretty good and has support for more languages while being still relatively small.

So there you have it, got completely nerd swiped by typography again. Now I can go back to writing a too-long proposal again.

Sources and inspiration for the above:

  • the unicode(1) command, to lookup individual characters to disambiguate, for example, - (U+002D HYPHEN-MINUS, the minus sign next to zero on US keyboards) and − (U+2212 MINUS SIGN, a math symbol)

  • searchable list of characters and their names - roughly equivalent to the unicode(1) command, but in one page, amazingly the /usr/share/unicode database doesn't have any one file like this

  • bits/UTF-8-Unicode-Test-Documents - full list of UTF-8 characters

  • UTF-8 encoded plain text file - nice examples of edge cases, curly quotes example and box drawing alignment test which, incidentally, showed me I needed specific faces customisation in Emacs to get the Markdown code areas to display properly, also the idea of comparing various dashes

  • sample sentences in many languages - unused, "Sentences that contain all letters commonly used in a language"

  • UTF-8 sampler - unused, similar

Other fonts

In my previous blog post about fonts, I had a list of alternative fonts, but it seems people are not digging through this, so I figured I would redo the list here to preempt "but have you tried Jetbrains mono" kind of comments.

My requirements are:

  • no ligatures: yes, in the previous post, I wanted ligatures but I have changed my mind. after testing this, I find them distracting, confusing, and they often break the monospace nature of the display (note that some folks wrote emacs code to selectively enable ligatures, which is an interesting compromise)
  • monospace: this is to display code
  • italics: often used when writing Markdown, where I do make use of italics... Emacs falls back to underlining text when lacking italics which is hard to read
  • free-ish, ultimately should be packaged in Debian

Here is the list of alternatives I have considered in the past and why I'm not using them:

  • agave: recommended by tarzeau, not sure I like the lowercase a, a bit too exotic, packaged as fonts-agave

  • Cascadia code: optional ligatures, multilingual, not liking the alignment, ambiguous parenthesis (look too much like square brackets), new default for Windows Terminal and Visual Studio, packaged as fonts-cascadia-code

  • Fira Code: ligatures, was using Fira Mono from which it is derived, lacking italics except for forks, interestingly, Fira Code succeeds the alignment test but Fira Mono fails to show the X signs properly! packaged as fonts-firacode

  • Hack: no ligatures, very similar to Fira, italics, good alternative, fails the X test in box alignment, packaged as fonts-hack

  • Hermit: no ligatures, smaller, alignment issues in box drawing and dashes, packaged as fonts-hermit, somehow part of cool-retro-term

  • IBM Plex: irritating website, replaces Helvetica as the IBM corporate font, no ligatures by default, italics, proportional alternatives, serifs and sans, multiple languages, partial failure in box alignment test (X signs), fancy curly braces contrast perhaps too much with the rest of the font, packaged in Debian as fonts-ibm-plex

  • Inconsolata: no ligatures, maybe italics? more compressed than others, feels a little out of balance because of that, packaged in Debian as fonts-inconsolata

  • Intel One Mono: nice legibility, no ligatures, alignment issues in box drawing, not packaged in Debian

  • Iosevka: optional ligatures, italics, multilingual, good legibility, has a proportional option, serifs and sans, line height issue in box drawing, fails dash test, not in Debian

  • Jetbrains Mono: (mandatory?) ligatures, good coverage, originally rumored to be not DFSG-free (Debian Free Software Guidelines) but ultimately packaged in Debian as fonts-jetbrains-mono

  • Monoid: optional ligatures, feels much "thinner" than Jetbrains, not liking alignment or spacing on that one, ambiguous 2Z, problems rendering box drawing, packaged as fonts-monoid

  • Mononoki: no ligatures, looks good, good alternative, suggested by the Debian fonts team as part of fonts-recommended, problems rendering box drawing, em dash bigger than en dash, packaged as fonts-mononoki

  • Source Code Pro: italics, looks good, but dash metrics look whacky, not in Debian

  • spleen: bitmap font, old school, spacing issue in box drawing test, packaged as fonts-spleen

  • sudo: personal project, no ligatures, zero originally not dotted, relied on metrics for legibility, spacing issue in box drawing, not in Debian

  • victor mono: italics are cursive by default (distracting), ligatures by default, looks good, more compressed than commit mono, good candidate otherwise, has a nice and compact proof sheet

So, if I get tired of Commit Mono, I might probably try, in order:

  1. Hack
  2. Jetbrains Mono
  3. IBM Plex Mono

Iosevka, Mononoki and Intel One Mono are also good options, but have alignment problems. Iosevka is particularly disappointing as the EM DASH metrics are just completely wrong (much too wide).

This was tested using the Programming fonts site which has all the above fonts, something that cannot be said of Font Squirrel or Google Fonts, amazingly. There are other such tools as well.

Also note that there is now a package in Debian called fnt to manage fonts like this locally, including in-line previews (that don't work in bookworm but should be improved in trixie and later).

09 September, 2024 01:50PM

September 08, 2024

Thorsten Alteholz

My Debian Activities in August 2024

FTP master

This month I accepted 441 and rejected 15 packages. The overall number of packages that got accepted was 442.

I am ashamed of some occurrences that happened this month and I apologize for this. Unfortunately I have no idea how to prevent this in the future without becoming a solo entertainer.

Debian LTS

This was my hundred-twenty-second month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

  • [#1073518] bookworm-pu: cups 2.4.2-3+deb12u6 has been closed
  • [#1074439] bookworm-pu: cups 2.4.2-3+deb12u7 has been closed
  • [#1073519] bullseye-pu: cups 2.3.3op2-3+deb11u7 has been closed
  • [#1074438] bullseye-pu: cups 2.3.3op2-3+deb11u8 has been closed

Unfortunately Bullseye was not handed over to LTS in August. So I only prepared new packages of asterisk, libvirt and tinyproxy and will upload them next month.

Last but not least I did a week of FD this month.

Debian ELTS

This month was the seventy-third ELTS month. During my allocated time I uploaded or worked on:

  • [ELA-1160-1]tiff security update for two CVEs in Jessie and Stretch. The Buster upload was already done before. This upload fixed a segmentation fault and a memory leak
  • [ELA-1161-1]libvirt security update for six CVEs to fix issues related to use-after-free, an off-by-one, a null pointer dereference, a badly handled mutex, a privilege escalation and breaking out of the sVirt confinement. In this case only Jessie and Stretch needed an update.
  • [ELA-1166-1]frr security update for one CVE in Buster to fix a missing length check.

I also did a week of FD.

Debian Printing

This month I uploaded …

This work is generously funded by Freexian!

Debian Astro

This month I uploaded a new upstream or bugfix version of:

Debian Mobcom

The following packages have been prepared by the GSoC student Nathan:

It was so much fun working with Nathan. Unfortunately GSoC is over now, but Nathan will continue working in Debian and become a Debian Maintainer.

misc

This month I uploaded new upstream or bugfix versions of:

I also filed an RM bug against meep-openmpi. As Adrian made me aware, this package is no longer needed.

08 September, 2024 11:37PM by alteholz

Dima Kogan

GNU Make: details regarding intermediate files

Suppose I have this Makefile:

a: b
      touch $@
b:
      touch $@

# A common chain of build steps
%-GENERATED.c: %-generate
      touch $@
%.o: %.c
      touch $@
%.so: %-GENERATED.o
      touch $@
xxx-GENERATED.o: CFLAGS += adsf

# Imitates .d files created with "gcc -MMD". Does not exist on the initial build
ifneq ($(wildcard xxx.so),)
xxx-GENERATED.o: xxx-GENERATED.c
endif

This is all very simple build-system stuff. Let's see how it works:

$ rm -rf a b xxx-GENERATED.c xxx-GENERATED.o xxx.so
  [start from a clean slate]

$ touch xxx-generate xxx.h
  [Files that would be available in a project exist; xxx-generate is some tool]
  [that would generate xxx-GENERATED.c                                        ]

$ touch a
  ["a" exists but the file "b" it depends on does not]

$ make a xxx.so

  touch b
  touch a
  touch xxx-GENERATED.c
  touch xxx-GENERATED.o
  touch xxx.so
  rm xxx-GENERATED.c

  [It built everything, but then deleted xxx-GENERATED.c]

$ make a xxx.so

  remake: 'a' is up to date.
  touch xxx-GENERATED.c
  touch xxx-GENERATED.o
  touch xxx.so

  [It knew to not rebuild "a", but the missing xxx-GENERATED.c caused it to]
  [re-build stuff                                                          ]

Well that's not good. What if we add .SECONDARY: to the end of the Makefile to mark everything as a secondary file?

$ rm -rf a b xxx-GENERATED.c xxx-GENERATED.o xxx.so
$ touch xxx-generate xxx.h
$ touch a

$ make a xxx.so

  remake: 'a' is up to date.
  touch xxx-GENERATED.c
  touch xxx-GENERATED.o
  touch xxx.so

  [It didn't bother rebuilding "a" even though its prerequisite "b" doesn't]
  [exist. But it didn't delete the xxx-GENERATED.c at least                 ]

$ make a xxx.so

  remake: 'a' is up to date.
  remake: 'xxx.so' is up to date.

  [It knew to not rebuild anything. Great.]

So it doesn't work right with or without .SECONDARY:, but it's much closer with it. The solution is to mark everything as not an intermediate file. mrbuild cannot do this without a bleeding-edge version of GNU Make, but users of mrbuild can do this by explicitly mentioning specific files in rules. This would suffice:

___dummy___: file1 file2

Detailed notes are in a commit in mrbuild (mrbuild 1.13) and in a post to LKML by Masahiro Yamada.

08 September, 2024 07:31PM by Dima Kogan

Jacob Adams

Linux's Bedtime Routine

How does Linux move from an awake machine to a hibernating one? How does it then manage to restore all state? These questions led me to read way too much C in trying to figure out how this particular hardware/software boundary is navigated.

This investigation will be split into a few parts, with the first one going from invocation of hibernation to synchronizing all filesystems to disk.

This article has been written using Linux version 6.9.9, the source of which can be found in many places, but can be navigated easily through the Bootlin Elixir Cross-Referencer:

https://elixir.bootlin.com/linux/v6.9.9/source

Each code snippet will begin with a link to the above giving the file path and the line number of the beginning of the snippet.

A Starting Point for Investigation: /sys/power/state and /sys/power/disk

These two system files exist to allow debugging of hibernation, and thus control the exact state used directly. Writing specific values to the state file controls the exact sleep mode used and disk controls the specific hibernation mode1.

This is extremely handy as an entry point to understand how these systems work, since we can just follow what happens when they are written to.

Show and Store Functions

These two files are defined using the power_attr macro:

kernel/power/power.h:80

#define power_attr(_name) \
static struct kobj_attribute _name##_attr = {   \
    .attr   = {             \
        .name = __stringify(_name), \
        .mode = 0644,           \
    },                  \
    .show   = _name##_show,         \
    .store  = _name##_store,        \
}

show is called on reads and store on writes.
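
For example, power_attr(state) expands to roughly the following:

static struct kobj_attribute state_attr = {
	.attr	= {
		.name = "state",
		.mode = 0644,
	},
	.show	= state_show,
	.store	= state_store,
};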

state_show is a little boring for our purposes, as it just prints all the available sleep states.

kernel/power/main.c:657

/*
 * state - control system sleep states.
 *
 * show() returns available sleep state labels, which may be "mem", "standby",
 * "freeze" and "disk" (hibernation).
 * See Documentation/admin-guide/pm/sleep-states.rst for a description of
 * what they mean.
 *
 * store() accepts one of those strings, translates it into the proper
 * enumerated value, and initiates a suspend transition.
 */
static ssize_t state_show(struct kobject *kobj, struct kobj_attribute *attr,
			  char *buf)
{
	char *s = buf;
#ifdef CONFIG_SUSPEND
	suspend_state_t i;

	for (i = PM_SUSPEND_MIN; i < PM_SUSPEND_MAX; i++)
		if (pm_states[i])
			s += sprintf(s,"%s ", pm_states[i]);

#endif
	if (hibernation_available())
		s += sprintf(s, "disk ");
	if (s != buf)
		/* convert the last space to a newline */
		*(s-1) = '\n';
	return (s - buf);
}

state_store, however, provides our entry point. If the string “disk” is written to the state file, it calls hibernate(). This is our entry point.

kernel/power/main.c:715

static ssize_t state_store(struct kobject *kobj, struct kobj_attribute *attr,
			   const char *buf, size_t n)
{
	suspend_state_t state;
	int error;

	error = pm_autosleep_lock();
	if (error)
		return error;

	if (pm_autosleep_state() > PM_SUSPEND_ON) {
		error = -EBUSY;
		goto out;
	}

	state = decode_state(buf, n);
	if (state < PM_SUSPEND_MAX) {
		if (state == PM_SUSPEND_MEM)
			state = mem_sleep_current;

		error = pm_suspend(state);
	} else if (state == PM_SUSPEND_MAX) {
		error = hibernate();
	} else {
		error = -EINVAL;
	}

 out:
	pm_autosleep_unlock();
	return error ? error : n;
}

kernel/power/main.c:688

static suspend_state_t decode_state(const char *buf, size_t n)
{
#ifdef CONFIG_SUSPEND
	suspend_state_t state;
#endif
	char *p;
	int len;

	p = memchr(buf, '\n', n);
	len = p ? p - buf : n;

	/* Check hibernation first. */
	if (len == 4 && str_has_prefix(buf, "disk"))
		return PM_SUSPEND_MAX;

#ifdef CONFIG_SUSPEND
	for (state = PM_SUSPEND_MIN; state < PM_SUSPEND_MAX; state++) {
		const char *label = pm_states[state];

		if (label && len == strlen(label) && !strncmp(buf, label, len))
			return state;
	}
#endif

	return PM_SUSPEND_ON;
}

Could we have figured this out just via function names? Sure, but this way we know for sure that nothing else is happening before this function is called.

Autosleep

Our first detour is into the autosleep system. When checking the state above, you may notice that the kernel grabs the pm_autosleep_lock before checking the current state.

autosleep is a mechanism originally from Android that sends the entire system to either suspend or hibernate whenever it is not actively working on anything.

This is not enabled for most desktop configurations, since it’s primarily for mobile systems and inverts the standard suspend and hibernate interactions.

This system is implemented as a workqueue2 that checks the current number of wakeup events, processes and drivers that need to run3, and if there aren’t any, then the system is put into the autosleep state, typically suspend. However, it could be hibernate if configured that way via /sys/power/autosleep in a similar manner to using /sys/power/state to manually enable hibernation.

kernel/power/main.c:841

static ssize_t autosleep_store(struct kobject *kobj,
			       struct kobj_attribute *attr,
			       const char *buf, size_t n)
{
	suspend_state_t state = decode_state(buf, n);
	int error;

	if (state == PM_SUSPEND_ON
	    && strcmp(buf, "off") && strcmp(buf, "off\n"))
		return -EINVAL;

	if (state == PM_SUSPEND_MEM)
		state = mem_sleep_current;

	error = pm_autosleep_set_state(state);
	return error ? error : n;
}

power_attr(autosleep);
#endif /* CONFIG_PM_AUTOSLEEP */

kernel/power/autosleep.c:24

static DEFINE_MUTEX(autosleep_lock);
static struct wakeup_source *autosleep_ws;

static void try_to_suspend(struct work_struct *work)
{
	unsigned int initial_count, final_count;

	if (!pm_get_wakeup_count(&initial_count, true))
		goto out;

	mutex_lock(&autosleep_lock);

	if (!pm_save_wakeup_count(initial_count) ||
		system_state != SYSTEM_RUNNING) {
		mutex_unlock(&autosleep_lock);
		goto out;
	}

	if (autosleep_state == PM_SUSPEND_ON) {
		mutex_unlock(&autosleep_lock);
		return;
	}
	if (autosleep_state >= PM_SUSPEND_MAX)
		hibernate();
	else
		pm_suspend(autosleep_state);

	mutex_unlock(&autosleep_lock);

	if (!pm_get_wakeup_count(&final_count, false))
		goto out;

	/*
	 * If the wakeup occurred for an unknown reason, wait to prevent the
	 * system from trying to suspend and waking up in a tight loop.
	 */
	if (final_count == initial_count)
		schedule_timeout_uninterruptible(HZ / 2);

 out:
	queue_up_suspend_work();
}

static DECLARE_WORK(suspend_work, try_to_suspend);

void queue_up_suspend_work(void)
{
	if (autosleep_state > PM_SUSPEND_ON)
		queue_work(autosleep_wq, &suspend_work);
}

The Steps of Hibernation

Hibernation Kernel Config

It’s important to note that most of the hibernate-specific functions below do nothing unless you’ve defined CONFIG_HIBERNATION in your Kconfig4. As an example, hibernate itself is defined as the following if CONFIG_HIBERNATION is not set.

include/linux/suspend.h:407

static inline int hibernate(void) { return -ENOSYS; }

Check if Hibernation is Available

We begin by confirming that we actually can perform hibernation, via the hibernation_available function.

kernel/power/hibernate.c:742

if (!hibernation_available()) {
	pm_pr_dbg("Hibernation not available.\n");
	return -EPERM;
}

kernel/power/hibernate.c:92

bool hibernation_available(void)
{
	return nohibernate == 0 &&
		!security_locked_down(LOCKDOWN_HIBERNATION) &&
		!secretmem_active() && !cxl_mem_active();
}

nohibernate is controlled by the kernel command line, it’s set via either nohibernate or hibernate=no.

security_locked_down is a hook for Linux Security Modules to prevent hibernation. This is used to prevent hibernating to an unencrypted storage device, as specified in the manual page kernel_lockdown(7). Interestingly, either level of lockdown, integrity or confidentiality, locks down hibernation because with the ability to hibernate you can extract basically anything from memory and even reboot into a modified kernel image.

secretmem_active checks whether there is any active use of memfd_secret, and if so it prevents hibernation. memfd_secret returns a file descriptor that can be mapped into a process but is specifically unmapped from the kernel’s memory space. Hibernating with memory that not even the kernel is supposed to access would expose that memory to whoever could access the hibernation image. This particular feature of secret memory was apparently controversial, though not as controversial as performance concerns around fragmentation when unmapping kernel memory (which did not end up being a real problem).

cxl_mem_active just checks whether any CXL memory is active. A full explanation is provided in the commit introducing this check but there’s also a shortened explanation from cxl_mem_probe that sets the relevant flag when initializing a CXL memory device.

drivers/cxl/mem.c:186

* The kernel may be operating out of CXL memory on this device,
* there is no spec defined way to determine whether this device
* preserves contents over suspend, and there is no simple way
* to arrange for the suspend image to avoid CXL memory which
* would setup a circular dependency between PCI resume and save
* state restoration.

Check Compression

The next check is for whether compression support is enabled, and if so whether the requested algorithm is enabled.

kernel/power/hibernate.c:747

/*
 * Query for the compression algorithm support if compression is enabled.
 */
if (!nocompress) {
	strscpy(hib_comp_algo, hibernate_compressor, sizeof(hib_comp_algo));
	if (crypto_has_comp(hib_comp_algo, 0, 0) != 1) {
		pr_err("%s compression is not available\n", hib_comp_algo);
		return -EOPNOTSUPP;
	}
}

The nocompress flag is set via the hibernate command line parameter, setting hibernate=nocompress.

If compression is enabled, then hibernate_compressor is copied to hib_comp_algo. This synchronizes the current requested compression setting (hibernate_compressor) with the current compression setting (hib_comp_algo).

Both values are character arrays of size CRYPTO_MAX_ALG_NAME (128 in this kernel).

kernel/power/hibernate.c:50

static char hibernate_compressor[CRYPTO_MAX_ALG_NAME] = CONFIG_HIBERNATION_DEF_COMP;

/*
 * Compression/decompression algorithm to be used while saving/loading
 * image to/from disk. This would later be used in 'kernel/power/swap.c'
 * to allocate comp streams.
 */
char hib_comp_algo[CRYPTO_MAX_ALG_NAME];

hibernate_compressor defaults to lzo if that algorithm is enabled, otherwise to lz4 if enabled5. It can be overwritten using the hibernate.compressor setting to either lzo or lz4.

kernel/power/Kconfig:95

choice
	prompt "Default compressor"
	default HIBERNATION_COMP_LZO
	depends on HIBERNATION

config HIBERNATION_COMP_LZO
	bool "lzo"
	depends on CRYPTO_LZO

config HIBERNATION_COMP_LZ4
	bool "lz4"
	depends on CRYPTO_LZ4

endchoice

config HIBERNATION_DEF_COMP
	string
	default "lzo" if HIBERNATION_COMP_LZO
	default "lz4" if HIBERNATION_COMP_LZ4
	help
	  Default compressor to be used for hibernation.

kernel/power/hibernate.c:1425

static const char * const comp_alg_enabled[] = {
#if IS_ENABLED(CONFIG_CRYPTO_LZO)
	COMPRESSION_ALGO_LZO,
#endif
#if IS_ENABLED(CONFIG_CRYPTO_LZ4)
	COMPRESSION_ALGO_LZ4,
#endif
};

static int hibernate_compressor_param_set(const char *compressor,
		const struct kernel_param *kp)
{
	unsigned int sleep_flags;
	int index, ret;

	sleep_flags = lock_system_sleep();

	index = sysfs_match_string(comp_alg_enabled, compressor);
	if (index >= 0) {
		ret = param_set_copystring(comp_alg_enabled[index], kp);
		if (!ret)
			strscpy(hib_comp_algo, comp_alg_enabled[index],
				sizeof(hib_comp_algo));
	} else {
		ret = index;
	}

	unlock_system_sleep(sleep_flags);

	if (ret)
		pr_debug("Cannot set specified compressor %s\n",
			 compressor);

	return ret;
}
static const struct kernel_param_ops hibernate_compressor_param_ops = {
	.set    = hibernate_compressor_param_set,
	.get    = param_get_string,
};

static struct kparam_string hibernate_compressor_param_string = {
	.maxlen = sizeof(hibernate_compressor),
	.string = hibernate_compressor,
};

We then check whether the requested algorithm is supported via crypto_has_comp. If not, we bail out of the whole operation with EOPNOTSUPP.

As part of crypto_has_comp we perform any needed initialization of the algorithm, loading kernel modules and running initialization code as needed6.

Grab Locks

The next step is to grab the sleep and hibernation locks via lock_system_sleep and hibernate_acquire.

kernel/power/hibernate.c:758

sleep_flags = lock_system_sleep();
/* The snapshot device should not be opened while we're running */
if (!hibernate_acquire()) {
	error = -EBUSY;
	goto Unlock;
}

First, lock_system_sleep marks the current thread as not freezable, which will be important later7. It then grabs the system_transition_mutex, which locks taking snapshots or modifying how they are taken, resuming from a hibernation image, entering any suspend state, or rebooting.

The GFP Mask

The kernel also issues a warning if the gfp mask is changed via either pm_restore_gfp_mask or pm_restrict_gfp_mask without holding the system_transition_mutex.

GFP flags tell the kernel how it is permitted to handle a request for memory.

include/linux/gfp_types.h:12

 * GFP flags are commonly used throughout Linux to indicate how memory
 * should be allocated.  The GFP acronym stands for get_free_pages(),
 * the underlying memory allocation function.  Not every GFP flag is
 * supported by every function which may allocate memory.

In the case of hibernation specifically we care about the IO and FS flags, which are reclaim operators, ways the system is permitted to attempt to free up memory in order to satisfy a specific request for memory.

include/linux/gfp_types.h:176

 * Reclaim modifiers
 * -----------------
 * Please note that all the following flags are only applicable to sleepable
 * allocations (e.g. %GFP_NOWAIT and %GFP_ATOMIC will ignore them).
 *
 * %__GFP_IO can start physical IO.
 *
 * %__GFP_FS can call down to the low-level FS. Clearing the flag avoids the
 * allocator recursing into the filesystem which might already be holding
 * locks.

gfp_allowed_mask sets which flags are permitted to be set at the current time.

As the comment below outlines, preventing these flags from being set avoids situations where the kernel needs to do I/O to allocate memory (e.g. reading/writing swap8) but the devices it needs to read/write to/from are not currently available.

kernel/power/main.c:24

/*
 * The following functions are used by the suspend/hibernate code to temporarily
 * change gfp_allowed_mask in order to avoid using I/O during memory allocations
 * while devices are suspended.  To avoid races with the suspend/hibernate code,
 * they should always be called with system_transition_mutex held
 * (gfp_allowed_mask also should only be modified with system_transition_mutex
 * held, unless the suspend/hibernate code is guaranteed not to run in parallel
 * with that modification).
 */
static gfp_t saved_gfp_mask;

void pm_restore_gfp_mask(void)
{
	WARN_ON(!mutex_is_locked(&system_transition_mutex));
	if (saved_gfp_mask) {
		gfp_allowed_mask = saved_gfp_mask;
		saved_gfp_mask = 0;
	}
}

void pm_restrict_gfp_mask(void)
{
	WARN_ON(!mutex_is_locked(&system_transition_mutex));
	WARN_ON(saved_gfp_mask);
	saved_gfp_mask = gfp_allowed_mask;
	gfp_allowed_mask &= ~(__GFP_IO | __GFP_FS);
}
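
Putting this together, the intended usage pattern is something like the following illustrative sketch (not a verbatim kernel excerpt):

/* the mask may only be changed while holding system_transition_mutex */
mutex_lock(&system_transition_mutex);
pm_restrict_gfp_mask();   /* drop __GFP_IO and __GFP_FS */
/* ... suspend devices and write out the hibernation image ... */
pm_restore_gfp_mask();    /* put the saved mask back */
mutex_unlock(&system_transition_mutex);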

Sleep Flags

After grabbing the system_transition_mutex the kernel then returns and captures the previous state of the thread's flags in sleep_flags. This is used later to remove PF_NOFREEZE if it wasn't previously set on the current thread.

kernel/power/main.c:52

unsigned int lock_system_sleep(void)
{
	unsigned int flags = current->flags;
	current->flags |= PF_NOFREEZE;
	mutex_lock(&system_transition_mutex);
	return flags;
}
EXPORT_SYMBOL_GPL(lock_system_sleep);

include/linux/sched.h:1633

#define PF_NOFREEZE		0x00008000	/* This thread should not be frozen */
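For completeness, the counterpart unlock_system_sleep (reproduced here as a sketch, so treat the details as approximate) restores the flag only if it wasn't already set before locking:

void unlock_system_sleep(unsigned int flags)
{
	/* Only clear PF_NOFREEZE if it wasn't set before we locked. */
	if (!(flags & PF_NOFREEZE))
		current->flags &= ~PF_NOFREEZE;
	mutex_unlock(&system_transition_mutex);
}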

Then we grab the hibernate-specific semaphore to ensure no one can open a snapshot or resume from it while we perform hibernation. Additionally this lock is used to prevent hibernate_quiet_exec, which is used by the nvdimm driver to activate its firmware with all processes and devices frozen, ensuring it is the only thing running at that time9.

kernel/power/hibernate.c:82

bool hibernate_acquire(void)
{
	return atomic_add_unless(&hibernate_atomic, -1, 0);
}
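atomic_add_unless(&hibernate_atomic, -1, 0) decrements the counter unless it is already zero, returning false in that case; since the counter starts at one, this behaves like a single-holder lock. The matching release (a one-line sketch of hibernate_release) just increments the counter back:

void hibernate_release(void)
{
	atomic_inc(&hibernate_atomic);
}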

Prepare Console

The kernel next calls pm_prepare_console. This function only does anything if CONFIG_VT_CONSOLE_SLEEP has been set.

This prepares the virtual terminal for a suspend state, switching away to a console used only for the suspend state if needed.

kernel/power/console.c:130

void pm_prepare_console(void)
{
	if (!pm_vt_switch())
		return;

	orig_fgconsole = vt_move_to_console(SUSPEND_CONSOLE, 1);
	if (orig_fgconsole < 0)
		return;

	orig_kmsg = vt_kmsg_redirect(SUSPEND_CONSOLE);
	return;
}

The first thing is to check whether we actually need to switch the VT.

kernel/power/console.c:94

/*
 * There are three cases when a VT switch on suspend/resume are required:
 *   1) no driver has indicated a requirement one way or another, so preserve
 *      the old behavior
 *   2) console suspend is disabled, we want to see debug messages across
 *      suspend/resume
 *   3) any registered driver indicates it needs a VT switch
 *
 * If none of these conditions is present, meaning we have at least one driver
 * that doesn't need the switch, and none that do, we can avoid it to make
 * resume look a little prettier (and suspend too, but that's usually hidden,
 * e.g. when closing the lid on a laptop).
 */
static bool pm_vt_switch(void)
{
	struct pm_vt_switch *entry;
	bool ret = true;

	mutex_lock(&vt_switch_mutex);
	if (list_empty(&pm_vt_switch_list))
		goto out;

	if (!console_suspend_enabled)
		goto out;

	list_for_each_entry(entry, &pm_vt_switch_list, head) {
		if (entry->required)
			goto out;
	}

	ret = false;
out:
	mutex_unlock(&vt_switch_mutex);
	return ret;
}

There is an explanation of the conditions under which a switch is performed in the comment above the function, but we’ll also walk through the steps here.

Firstly we grab the vt_switch_mutex to ensure nothing will modify the list while we’re looking at it.

We then examine the pm_vt_switch_list. This list is used to indicate the drivers that require a switch during suspend. They register this requirement, or the lack thereof, via pm_vt_switch_required.

kernel/power/console.c:31

/**
 * pm_vt_switch_required - indicate VT switch at suspend requirements
 * @dev: device
 * @required: if true, caller needs VT switch at suspend/resume time
 *
 * The different console drivers may or may not require VT switches across
 * suspend/resume, depending on how they handle restoring video state and
 * what may be running.
 *
 * Drivers can indicate support for switchless suspend/resume, which can
 * save time and flicker, by using this routine and passing 'false' as
 * the argument.  If any loaded driver needs VT switching, or the
 * no_console_suspend argument has been passed on the command line, VT
 * switches will occur.
 */
void pm_vt_switch_required(struct device *dev, bool required)
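As a sketch of how a driver might use this (the driver and probe function here are hypothetical), a video driver that can restore its own state on resume would opt out of the switch:

/* Hypothetical driver opting out of VT switching at suspend. */
static int example_video_probe(struct platform_device *pdev)
{
	pm_vt_switch_required(&pdev->dev, false);
	return 0;
}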

Next, we check console_suspend_enabled. This is set to false by the kernel parameter no_console_suspend, but defaults to true.

Finally, if there are any entries in the pm_vt_switch_list, we check whether any of them require a VT switch.

Only if none of these conditions applies do we return false.

If a VT switch is in fact required, we first move the currently active virtual terminal/console10 (vt_move_to_console) and then the destination of kernel messages (vt_kmsg_redirect) to the SUSPEND_CONSOLE. The SUSPEND_CONSOLE is the last entry in the list of possible consoles, and appears to just be a black hole to throw away messages.

kernel/power/console.c:16

#define SUSPEND_CONSOLE	(MAX_NR_CONSOLES-1)

Interestingly, these are separate functions because you can use TIOCL_SETKMSGREDIRECT (an ioctl11) to send kernel messages to a specific virtual terminal, but by default it's the same as the currently active console.

The locations of the previously active console and the previous kernel message destination are stored in orig_fgconsole and orig_kmsg, so that the state of the console and kernel messages can be restored after the machine wakes up again. Interestingly, this means orig_fgconsole also ends up storing any errors, so it has to be checked to ensure it's not less than zero before we try to do anything with the kernel messages on both suspend and resume.
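The resume-side counterpart, pm_restore_console (shown as a sketch from memory, so details may differ slightly from kernel/power/console.c), performs exactly this check before moving the console and kernel messages back:

void pm_restore_console(void)
{
	if (!pm_vt_switch())
		return;

	/* orig_fgconsole may hold an error from suspend time. */
	if (orig_fgconsole >= 0) {
		vt_move_to_console(orig_fgconsole, 0);
		vt_kmsg_redirect(orig_kmsg);
	}
}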

drivers/tty/vt/vt_ioctl.c:1268

/* Perform a kernel triggered VT switch for suspend/resume */

static int disable_vt_switch;

int vt_move_to_console(unsigned int vt, int alloc)
{
	int prev;

	console_lock();
	/* Graphics mode - up to X */
	if (disable_vt_switch) {
		console_unlock();
		return 0;
	}
	prev = fg_console;

	if (alloc && vc_allocate(vt)) {
		/* we can't have a free VC for now. Too bad,
		 * we don't want to mess the screen for now. */
		console_unlock();
		return -ENOSPC;
	}

	if (set_console(vt)) {
		/*
		 * We're unable to switch to the SUSPEND_CONSOLE.
		 * Let the calling function know so it can decide
		 * what to do.
		 */
		console_unlock();
		return -EIO;
	}
	console_unlock();
	if (vt_waitactive(vt + 1)) {
		pr_debug("Suspend: Can't switch VCs.");
		return -EINTR;
	}
	return prev;
}

Unlike most other locking functions we've seen so far, console_lock needs to be careful to ensure nothing else is panicking and needs to dump to the console before grabbing the semaphore for the console and setting a couple of flags.

Panics

Panics are tracked via an atomic integer set to the id of the processor currently panicking.

kernel/printk/printk.c:2649

/**
 * console_lock - block the console subsystem from printing
 *
 * Acquires a lock which guarantees that no consoles will
 * be in or enter their write() callback.
 *
 * Can sleep, returns nothing.
 */
void console_lock(void)
{
	might_sleep();

	/* On panic, the console_lock must be left to the panic cpu. */
	while (other_cpu_in_panic())
		msleep(1000);

	down_console_sem();
	console_locked = 1;
	console_may_schedule = 1;
}
EXPORT_SYMBOL(console_lock);

kernel/printk/printk.c:362

/*
 * Return true if a panic is in progress on a remote CPU.
 *
 * On true, the local CPU should immediately release any printing resources
 * that may be needed by the panic CPU.
 */
bool other_cpu_in_panic(void)
{
	return (panic_in_progress() && !this_cpu_in_panic());
}

kernel/printk/printk.c:345

static bool panic_in_progress(void)
{
	return unlikely(atomic_read(&panic_cpu) != PANIC_CPU_INVALID);
}

kernel/printk/printk.c:350

/* Return true if a panic is in progress on the current CPU. */
bool this_cpu_in_panic(void)
{
	/*
	 * We can use raw_smp_processor_id() here because it is impossible for
	 * the task to be migrated to the panic_cpu, or away from it. If
	 * panic_cpu has already been set, and we're not currently executing on
	 * that CPU, then we never will be.
	 */
	return unlikely(atomic_read(&panic_cpu) == raw_smp_processor_id());
}

console_locked is a debug value, used to indicate that the lock should be held, and our first indication that this whole virtual terminal system is more complex than might initially be expected.

kernel/printk/printk.c:373

/*
 * This is used for debugging the mess that is the VT code by
 * keeping track if we have the console semaphore held. It's
 * definitely not the perfect debug tool (we don't know if _WE_
 * hold it and are racing, but it helps tracking those weird code
 * paths in the console code where we end up in places I want
 * locked without the console semaphore held).
 */
static int console_locked;

console_may_schedule is used to see if we are permitted to sleep and schedule other work while we hold this lock. As we'll see later, the virtual terminal subsystem is not re-entrant, so there are all sorts of hacks in here to ensure we don't leave important code sections that can't be safely resumed.

Disable VT Switch

As the comment below lays out, when another program is handling graphical display anyway, there’s no need to do any of this, so the kernel provides a switch to turn the whole thing off. Interestingly, this appears to only be used by three drivers, so the specific hardware support required must not be particularly common.

drivers/gpu/drm/omapdrm/dss
drivers/video/fbdev/geode
drivers/video/fbdev/omap2

drivers/tty/vt/vt_ioctl.c:1308

/*
 * Normally during a suspend, we allocate a new console and switch to it.
 * When we resume, we switch back to the original console.  This switch
 * can be slow, so on systems where the framebuffer can handle restoration
 * of video registers anyways, there's little point in doing the console
 * switch.  This function allows you to disable it by passing it '0'.
 */
void pm_set_vt_switch(int do_switch)
{
	console_lock();
	disable_vt_switch = !do_switch;
	console_unlock();
}
EXPORT_SYMBOL(pm_set_vt_switch);

The rest of the vt_move_to_console function is pretty normal, however, simply allocating space if needed to create the requested virtual terminal and then setting the current virtual terminal via set_console.

Virtual Terminal Set Console

With set_console, we begin (as if we haven’t been already) to enter the madness that is the virtual terminal subsystem. As mentioned previously, modifications to its state must be made very carefully, as other stuff happening at the same time could create complete messes.

All this to say, calling set_console does not actually perform any work to change the state of the current console. Instead it indicates what changes it wants and then schedules that work.

drivers/tty/vt/vt.c:3153

int set_console(int nr)
{
	struct vc_data *vc = vc_cons[fg_console].d;

	if (!vc_cons_allocated(nr) || vt_dont_switch ||
		(vc->vt_mode.mode == VT_AUTO && vc->vc_mode == KD_GRAPHICS)) {

		/*
		 * Console switch will fail in console_callback() or
		 * change_console() so there is no point scheduling
		 * the callback
		 *
		 * Existing set_console() users don't check the return
		 * value so this shouldn't break anything
		 */
		return -EINVAL;
	}

	want_console = nr;
	schedule_console_callback();

	return 0;
}

The check for vc->vc_mode == KD_GRAPHICS is where most end-user graphical desktops will bail out of this change, as they’re in graphics mode and don’t need to switch away to the suspend console.

vt_dont_switch is a flag used by the ioctls11 VT_LOCKSWITCH and VT_UNLOCKSWITCH to prevent the system from switching virtual terminal devices when the user has explicitly locked it.
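As an aside, this lock can be toggled from userspace with a sketch like the following (it requires CAP_SYS_TTY_CONFIG, and as far as I can tell the third ioctl argument is ignored for these requests):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/vt.h>

int main(void)
{
	int fd = open("/dev/tty0", O_RDWR);

	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* Prevent VT switching until VT_UNLOCKSWITCH is issued. */
	if (ioctl(fd, VT_LOCKSWITCH, 0) != 0)
		perror("VT_LOCKSWITCH");
	return 0;
}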

VT_AUTO is a flag indicating that automatic virtual terminal switching is enabled12, and thus deliberate switching to a suspend terminal is not required.

However, if you do run your machine from a virtual terminal, the kernel indicates that it wants to change to the requested virtual terminal via the want_console variable and schedules a callback via schedule_console_callback.

drivers/tty/vt/vt.c:315

void schedule_console_callback(void)
{
	schedule_work(&console_work);
}

console_work is a workqueue2 that will execute the given task asynchronously.
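The pairing of console_work with console_callback follows the usual static work-item pattern, sketched below (the real declaration lives alongside the callback in vt.c):

#include <linux/workqueue.h>

/* Sketch: bind the work item to its handler at compile time. */
static void console_callback(struct work_struct *ignored);
static DECLARE_WORK(console_work, console_callback);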

Console Callback

drivers/tty/vt/vt.c:3109

/*
 * This is the console switching callback.
 *
 * Doing console switching in a process context allows
 * us to do the switches asynchronously (needed when we want
 * to switch due to a keyboard interrupt).  Synchronization
 * with other console code and prevention of re-entrancy is
 * ensured with console_lock.
 */
static void console_callback(struct work_struct *ignored)
{
	console_lock();

	if (want_console >= 0) {
		if (want_console != fg_console &&
		    vc_cons_allocated(want_console)) {
			hide_cursor(vc_cons[fg_console].d);
			change_console(vc_cons[want_console].d);
			/* we only changed when the console had already
			   been allocated - a new console is not created
			   in an interrupt routine */
		}
		want_console = -1;
	}
...

console_callback first looks to see if a console change is wanted via want_console, and then changes to it if it's not the current console and has already been allocated. Before switching, any cursor state is removed with hide_cursor.

drivers/tty/vt/vt.c:841

static void hide_cursor(struct vc_data *vc)
{
	if (vc_is_sel(vc))
		clear_selection();

	vc->vc_sw->con_cursor(vc, false);
	hide_softcursor(vc);
}

A full dive into the tty driver is a task for another time, but this should give a general sense of how this system interacts with hibernation.

Notify Power Management Call Chain

kernel/power/hibernate.c:767

pm_notifier_call_chain_robust(PM_HIBERNATION_PREPARE, PM_POST_HIBERNATION)

This will call a chain of power management callbacks with PM_HIBERNATION_PREPARE; PM_POST_HIBERNATION is passed later when the system starts back up, or immediately to the already-notified callbacks if one of them returns an error.

kernel/power/main.c:98

int pm_notifier_call_chain_robust(unsigned long val_up, unsigned long val_down)
{
	int ret;

	ret = blocking_notifier_call_chain_robust(&pm_chain_head, val_up, val_down, NULL);

	return notifier_to_errno(ret);
}

The power management notifier is a blocking notifier chain, which means it has the following properties.

include/linux/notifier.h:23

 *	Blocking notifier chains: Chain callbacks run in process context.
 *		Callouts are allowed to block.

The callback chain is a linked list with each entry containing a priority and a function to call. The function technically takes in a data value, but it is always NULL for the power management chain.

include/linux/notifier.h:49

struct notifier_block;

typedef	int (*notifier_fn_t)(struct notifier_block *nb,
			unsigned long action, void *data);

struct notifier_block {
	notifier_fn_t notifier_call;
	struct notifier_block __rcu *next;
	int priority;
};

The head of the linked list is protected by a read-write semaphore.

include/linux/notifier.h:65

struct blocking_notifier_head {
	struct rw_semaphore rwsem;
	struct notifier_block __rcu *head;
};

Because the list is ordered by priority, registering a new callback requires walking the list until an item with lower13 priority is found, and inserting the new item before it.

kernel/notifier.c:252

/*
 *	Blocking notifier chain routines.  All access to the chain is
 *	synchronized by an rwsem.
 */

static int __blocking_notifier_chain_register(struct blocking_notifier_head *nh,
					      struct notifier_block *n,
					      bool unique_priority)
{
	int ret;

	/*
	 * This code gets used during boot-up, when task switching is
	 * not yet working and interrupts must remain disabled.  At
	 * such times we must not call down_write().
	 */
	if (unlikely(system_state == SYSTEM_BOOTING))
		return notifier_chain_register(&nh->head, n, unique_priority);

	down_write(&nh->rwsem);
	ret = notifier_chain_register(&nh->head, n, unique_priority);
	up_write(&nh->rwsem);
	return ret;
}

kernel/notifier.c:20

/*
 *	Notifier chain core routines.  The exported routines below
 *	are layered on top of these, with appropriate locking added.
 */

static int notifier_chain_register(struct notifier_block **nl,
				   struct notifier_block *n,
				   bool unique_priority)
{
	while ((*nl) != NULL) {
		if (unlikely((*nl) == n)) {
			WARN(1, "notifier callback %ps already registered",
			     n->notifier_call);
			return -EEXIST;
		}
		if (n->priority > (*nl)->priority)
			break;
		if (n->priority == (*nl)->priority && unique_priority)
			return -EBUSY;
		nl = &((*nl)->next);
	}
	n->next = *nl;
	rcu_assign_pointer(*nl, n);
	trace_notifier_register((void *)n->notifier_call);
	return 0;
}

Each callback can return one of a series of options.

include/linux/notifier.h:18

#define NOTIFY_DONE		0x0000		/* Don't care */
#define NOTIFY_OK		0x0001		/* Suits me */
#define NOTIFY_STOP_MASK	0x8000		/* Don't call further */
#define NOTIFY_BAD		(NOTIFY_STOP_MASK|0x0002)
						/* Bad/Veto action */
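To tie these pieces together, here is a minimal sketch of a hypothetical driver registering on the power management chain via the register_pm_notifier helper (which wraps blocking_notifier_chain_register for pm_chain_head); everything named example_* is made up for illustration:

#include <linux/notifier.h>
#include <linux/suspend.h>

static int example_pm_notify(struct notifier_block *nb,
			     unsigned long action, void *data)
{
	switch (action) {
	case PM_HIBERNATION_PREPARE:
		/* Quiesce device state before processes are frozen. */
		return NOTIFY_OK;
	case PM_POST_HIBERNATION:
		/* Undo the above; also called on error rollback. */
		return NOTIFY_OK;
	}
	return NOTIFY_DONE;
}

static struct notifier_block example_pm_nb = {
	.notifier_call	= example_pm_notify,
	.priority	= 0,
};

/* In module init: register_pm_notifier(&example_pm_nb); */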

When notifying the chain, if a function returns STOP or BAD then the previous parts of the chain are called again with PM_POST_HIBERNATION14 and an error is returned.

kernel/notifier.c:107

/**
 * notifier_call_chain_robust - Inform the registered notifiers about an event
 *                              and rollback on error.
 * @nl:		Pointer to head of the blocking notifier chain
 * @val_up:	Value passed unmodified to the notifier function
 * @val_down:	Value passed unmodified to the notifier function when recovering
 *              from an error on @val_up
 * @v:		Pointer passed unmodified to the notifier function
 *
 * NOTE:	It is important the @nl chain doesn't change between the two
 *		invocations of notifier_call_chain() such that we visit the
 *		exact same notifier callbacks; this rules out any RCU usage.
 *
 * Return:	the return value of the @val_up call.
 */
static int notifier_call_chain_robust(struct notifier_block **nl,
				     unsigned long val_up, unsigned long val_down,
				     void *v)
{
	int ret, nr = 0;

	ret = notifier_call_chain(nl, val_up, v, -1, &nr);
	if (ret & NOTIFY_STOP_MASK)
		notifier_call_chain(nl, val_down, v, nr-1, NULL);

	return ret;
}

Each of these callbacks tends to be quite driver-specific, so we’ll cease discussion of this here.

Sync Filesystems

The next step is to ensure all filesystems have been synchronized to disk.

This is performed via a simple helper function that times how long the full synchronization operation, ksys_sync, takes.

kernel/power/main.c:69

void ksys_sync_helper(void)
{
	ktime_t start;
	long elapsed_msecs;

	start = ktime_get();
	ksys_sync();
	elapsed_msecs = ktime_to_ms(ktime_sub(ktime_get(), start));
	pr_info("Filesystems sync: %ld.%03ld seconds\n",
		elapsed_msecs / MSEC_PER_SEC, elapsed_msecs % MSEC_PER_SEC);
}
EXPORT_SYMBOL_GPL(ksys_sync_helper);

ksys_sync wakes and instructs a set of flusher threads to write out every filesystem, first their inodes15, then the full filesystem, and then finally all block devices, to ensure all pages are written out to disk.

fs/sync.c:87

/*
 * Sync everything. We start by waking flusher threads so that most of
 * writeback runs on all devices in parallel. Then we sync all inodes reliably
 * which effectively also waits for all flusher threads to finish doing
 * writeback. At this point all data is on disk so metadata should be stable
 * and we tell filesystems to sync their metadata via ->sync_fs() calls.
 * Finally, we writeout all block devices because some filesystems (e.g. ext2)
 * just write metadata (such as inodes or bitmaps) to block device page cache
 * and do not sync it on their own in ->sync_fs().
 */
void ksys_sync(void)
{
	int nowait = 0, wait = 1;

	wakeup_flusher_threads(WB_REASON_SYNC);
	iterate_supers(sync_inodes_one_sb, NULL);
	iterate_supers(sync_fs_one_sb, &nowait);
	iterate_supers(sync_fs_one_sb, &wait);
	sync_bdevs(false);
	sync_bdevs(true);
	if (unlikely(laptop_mode))
		laptop_sync_completion();
}

It follows an interesting pattern of using iterate_supers to run both sync_inodes_one_sb and then sync_fs_one_sb on each known filesystem16. It also calls both sync_fs_one_sb and sync_bdevs twice, first without waiting for any operations to complete and then again waiting for completion17.

When laptop_mode is enabled, the system triggers a full filesystem synchronization after the configured delay elapses without any disk activity.

mm/page-writeback.c:111

/*
 * Flag that puts the machine in "laptop mode". Doubles as a timeout in jiffies:
 * a full sync is triggered after this time elapses without any disk activity.
 */
int laptop_mode;

EXPORT_SYMBOL(laptop_mode);

However, when running a filesystem synchronization operation, the system will add an additional timer to schedule more writes after the laptop_mode delay. We don’t want the state of the system to change at all while performing hibernation, so we cancel those timers.

mm/page-writeback.c:2198

/*
 * We're in laptop mode and we've just synced. The sync's writes will have
 * caused another writeback to be scheduled by laptop_io_completion.
 * Nothing needs to be written back anymore, so we unschedule the writeback.
 */
void laptop_sync_completion(void)
{
	struct backing_dev_info *bdi;

	rcu_read_lock();

	list_for_each_entry_rcu(bdi, &bdi_list, bdi_list)
		del_timer(&bdi->laptop_mode_wb_timer);

	rcu_read_unlock();
}

As a side note, the ksys_sync function is simply called when the system call sync is used.

fs/sync.c:111

SYSCALL_DEFINE0(sync)
{
	ksys_sync();
	return 0;
}
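From userspace, all of this is reachable through the familiar libc wrapper, so the entire sync step above can be exercised with a two-line program:

#include <unistd.h>

int main(void)
{
	sync();	/* enters the kernel and runs ksys_sync() */
	return 0;
}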

The End of Preparation

With that the system has finished preparations for hibernation. This is a somewhat arbitrary cutoff, but next the system will begin a full freeze of userspace to then dump memory out to an image and finally to perform hibernation. All this will be covered in future articles!

  1. Hibernation modes are outside of scope for this article, see the previous article for a high-level description of the different types of hibernation. 

  2. Workqueues are a mechanism for running asynchronous tasks. A full description of them is a task for another time, but the kernel documentation on them is available here: https://www.kernel.org/doc/html/v6.9/core-api/workqueue.html

  3. This is a bit of an oversimplification, but since this isn’t the main focus of this article this description has been kept to a higher level. 

  4. Kconfig is Linux’s build configuration system that sets many different macros to enable/disable various features. 

  5. Kconfig defaults to the first default found 

  6. Including checking whether the algorithm is "larval", which appears to indicate that it requires additional setup; an interesting choice of name for such a state. 

  7. Specifically when we get to process freezing, which we’ll get to in the next article in this series. 

  8. Swap space is outside the scope of this article, but in short it is a buffer on disk that the kernel uses to store memory not currently in use, to free up space for other things. See Swap Management for more details. 

  9. The code for this is lengthy and tangential, thus it has not been included here. If you’re curious about the details of this, see kernel/power/hibernate.c:858 for the details of hibernate_quiet_exec, and drivers/nvdimm/core.c:451 for how it is used in nvdimm

  10. Annoyingly this code appears to use the terms “console” and “virtual terminal” interchangeably. 

  11. ioctls are special device-specific I/O operations that permit performing actions outside of the standard file interactions of read/write/seek/etc.

  12. I’m not entirely clear on how this flag works, this subsystem is particularly complex. 

  13. In this case a higher number is higher priority. 

  14. Or whatever the caller passes as val_down, but in this case we’re specifically looking at how this is used in hibernation. 

  15. An inode refers to a particular file or directory within the filesystem. See Wikipedia for more details. 

  16. Each active filesystem is registered with the kernel through a structure known as a superblock, which contains references to all the inodes contained within the filesystem, as well as function pointers to perform the various required operations, like sync. 

  17. I’m including minimal code in this section, as I’m not looking to deep dive into the filesystem code at this time. 

08 September, 2024 12:00AM

September 07, 2024

Sergio Durigan Junior

Chatting in the 21st century

Several people have been asking me to explain and/or write about my solution for chatting nowadays. I realize that the current scenario is much more complex than, say, 10 or 20 years ago. Back then, this post would probably be more about the IRC client I used than about different chatting technologies.

I have also spent a non-trivial amount of time setting things up the way I want, so I understand that it's about time to write about my setup, not only because I think it can be helpful to others, but also because I would like to document things for myself.

The backbone: Matrix

I chose to use Matrix as the place where I integrate everything. Despite there being some heavy (and justified) criticism on the protocol itself, it serves me well for what I need right now. Obviously, I don’t like the fact that I have to provide Matrix and all of its accompanying bridges a VPS with 4GB of RAM and 3 vCPUs, but I think that that ship has sailed, unfortunately.

In an ideal world, I would be using XMPP and dedicating only a fraction of the resources I’m using today to have a full chat system. And since I have been running my personal XMPP server for more than a decade now, I did try to find a solution that would allow me to keep using it, but unfortunately the protocol became almost a hobbyist thing, so there’s that.

A few disclaimers

I self-host everything, including my Matrix server. Much of what I did won’t work if you don’t self-host Matrix, so keep that in mind.

This won’t be a post teaching you how to deploy the services. My intention is to describe what I use and for what purpose.

Also, as much as I try to use Debian packages for everything I do, I opted to deploy all services using a community-maintained Ansible playbook which is very well written and organized: matrix-docker-ansible-deploy.

Last but not least, as I said above, you will likely need a machine with a good amount of RAM, CPU and storage, especially if you deploy Synapse as your Matrix homeserver (which is what I recommend if you plan to use the bridges I’ll mention). My current VPS has 4GB of RAM, 3 vCPUs and 80GB of storage (of which I’m currently using approximately 55GB).

Problem #1: my Matrix client(s)

There are a lot of clients that can talk the Matrix protocol, but most of them are either web clients or GUI programs. I live on the terminal, more specifically inside Emacs, so I settled on the amazing ement.el Emacs mode. It works surprisingly well, but unfortunately doesn't support end-to-end encryption out of the box; for that, you have to hook it up with pantalaimon. Unfortunately, that project seems abandoned and therefore I don't recommend using it. I don't use it myself.

When I have to reply to an E2E-encrypted message from another user, I go to my web browser and use my self-hosted Element client. It's a nuisance, but one that I'm willing to accept because of security concerns.

If you’re into web clients and don’t want to use Element (because it is heavy), you can try Cinny. It’s lightweight and supports a decent set of features.

If you’re a terminal lover but don’t use Emacs, you may want to try gomuks or iamb.

Problem #2: IRC bridging

There are basically two types of IRC bridges for Matrix:

  • The regular and most used matrix-appservice-irc. This bridge takes Matrix to IRC (think of IRC users with the [m] suffix appended to their nicknames), and is what the matrix.org and other big homeservers (including matrix.debian.social) use. It’s a complex service which allows thousands of Matrix users to connect to IRC networks, but that unfortunately has complex problems and is only worth using if you intend to host a community server.

  • A bouncer-like bridge called Heisenbridge. This is what I use personally. It takes IRC to Matrix, which means that people on IRC will not know that you’re using Matrix. This bridge is much simpler, and because it acts like a bouncer it’s pretty much impossible for it to cause problems with the IRC network.

Due to the fact that I sometimes like to use other IRC clients, I still run a regular ZNC bouncer, and I use Heisenbridge to connect to my ZNC. This means that I can use, e.g., ERC inside Emacs and my Matrix bridge at the same time. But you don’t necessarily need to run another bouncer; you can simply use Heisenbridge and connect directly to the IRC network(s) you want.

A word of caution, though: unlike ZNC, Heisenbridge doesn’t support per-user configuration when you use it in bouncer mode. This is the reason why you need to self-host it, and why it’s not possible to offer the service to other users (they would have access to your IRC network configuration otherwise).

It’s also worth talking about logs. I find that keeping logs of everything that goes on IRC has saved me a bunch of times, and so I find it really important to continue doing that. Unfortunately, neither ement.el nor Element support logging things out of the box (at least not that I know). This is also one of the reasons why I still keep my ZNC around: I configure it to log everything.

Problem #3: Telegram

I don’t use Telegram myself, but unfortunately several people from the Debian community do, especially in Brazil. There is a whole Debian community on Telegram, and I wanted to be able to bridge our Debian Matrix channels to their Telegram counterparts.

I am currently using mautrix-telegram for that, and it’s working great. You need someone with a Telegram account to configure their credentials so that the bridge can connect to it, but afterwards it’s really easy to bridge channels together.

Problem #4: GitLab webhooks

Something else I wanted to be able to do was to receive notifications regarding new issues, merge requests and other activities from Salsa. For this, I’m using maubot, which is awesome and has a huge list of plugins. I’m using the gitlab one.

Final thoughts

Overall, I’m satisfied with the setup I have now. It has certainly taken some time and effort to find the right tool for each problem I needed to solve, and I still feel like there are some rough edges to soften (like the fact that my Emacs client doesn’t support E2E encryption out of the box, or the whole logging situation), but otherwise things are working fine and I haven’t had any big problems with the deployment. You do have to be much more careful about stuff (for example, when I installed an unrelated service that “hijacked” my Apache configuration and made Matrix’s federation silently stop working), though.

If you have more specific questions about any part of my setup, shoot me an email and I’ll do my best to help.

Happy chatting!

07 September, 2024 09:25PM

Paul Wise

FLOSS Activities August 2024

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

  • Regressions in adequate (1 2 3 4)

Review

  • Debian publicity: reviewed Debian birthday post

Administration

  • Debian servers: contributed to a review of current Debian Partners

Communication

  • Respond to queries from Debian users and contributors on IRC

Sponsors

All work was done on a volunteer basis.

07 September, 2024 08:46AM

September 04, 2024

hackergotchi for Junichi Uekawa

Junichi Uekawa

Google docs has some tab feature.

Google Docs has a new tabs feature. I received a note that my script addTodayDate may need to be modified to handle it via the tabs API. But I don't think I need it yet, so I will just use the default tab.

04 September, 2024 11:08PM by Junichi Uekawa

Reproducible Builds

Reproducible Builds in August 2024

Welcome to the August 2024 report from the Reproducible Builds project!

Our reports attempt to outline what we’ve been up to over the past month, highlighting news items from elsewhere in tech where they are related. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website.

Table of contents:

  1. LWN: The history, status, and plans for reproducible builds
  2. Intermediate Autotools build artifacts removed from PostgreSQL distribution tarballs
  3. Distribution news
  4. Mailing list news
  5. diffoscope
  6. Website updates
  7. Upstream patches
  8. Reproducibility testing framework

LWN: The history, status, and plans for reproducible builds

The free software newspaper of record, Linux Weekly News, published an in-depth article based on Holger Levsen’s talk, Reproducible Builds: The First Eleven Years which was presented at the recent DebConf24 conference in Busan, South Korea.

Titled The history, status, and plans for reproducible builds and written by Jake Edge, LWN's article not only summarises Holger's talk and clarifies its message but also links to external information. Holger's original talk can be watched on the DebConf24 webpage (a direct .webm link and his HTML slides are available too), and there are a significant number of comments on LWN's page as well.

Holger Levsen also headed a scheduled discussion session at DebConf24 on Preserving *other* build artifacts, addressing a topic where a number of Debian packages produce (or would like to produce) results that are neither the .deb files, the build logs nor the logs of CI tests. This is an issue for reproducible builds as this "4th type" of build artifact is typically shipped within the binary .deb packages, and is invariably non-deterministic, thus making the .deb files unreproducible. (A direct .webm link and HTML slides are available.)


Intermediate Autotools build artifacts removed from PostgreSQL distribution tarballs

Peter Eisentraut wrote a detailed blog post on the subject of “The new PostgreSQL 17 make dist”. Like many projects, the PostgreSQL database has previously pre-built parts of its GNU Autotools build system: “the reason for this is a mix of convenience and traditional practice”. Peter astutely notes that this arrangement in the build system is “quite tricky” as:

You need to carefully maintain the different states of “clean source code”, “partially built source code”, and “fully built source code”, and the commands to transition between them.

However, Peter goes on to mention that:

… a lot more attention is nowadays paid to the software supply chain. There are security and legal reasons for this. When users install software, they want to know where it came from, and they want to be sure that they got the right thing, not some fake version or some version of dubious legal provenance.

And cites the XZ Utils backdoor as a reason to care about transparent and reproducible ways of distributing and communicating a source tarball and provenance. Because of this, intermediate build artifacts are now henceforth essentially disallowed from PostgreSQL distribution tarballs.

Distribution news

In Debian this month, 30 reviews of Debian packages were added, 17 were updated and 10 were removed, adding to our knowledge about identified issues. One issue type was also added by Chris Lamb.

In addition, an issue was filed to update the Salsa CI pipeline (used by 1,000s of Debian packages) to no longer test for reproducibility with reprotest’s build_path variation. Holger Levsen provided a rationale for this change in the issue, which has already been made to the tests being performed by tests.reproducible-builds.org.


In Arch Linux this month, Jelle van der Waa published a short blog post on the topic of Investigating creating reproducible images with mkosi, motivated by the desire to make it possible for anyone to “re-recreate the official Arch cloud image bit-by-bit identical on their own machine as per [the] reproducible builds definition.” In addition, Jelle filed a patch for pacman, the Arch Linux package manager, to respect the SOURCE_DATE_EPOCH environment variable when installing a package.


In openSUSE news, Bernhard M. Wiedemann published another report for that distribution.


In Android news, the IzzyOnDroid project added 49 new rebuilder recipes and now features 256 total reproducible applications representing 21% of the total offerings in the repository. IzzyOnDroid is “an F-Droid style repository for Android apps[:] applications in this repository are official binaries built by the original application developers, taken from their resp. repositories (mostly GitHub).”


Mailing list news

From our mailing list this month:

  • Bernhard M. Wiedemann posted a brief message to the list with some helpful information regarding nondeterminism within Rust binaries caused by the codegen-units = 16 default, resulting in a bug being filed in the Rust issue tracker.

  • Bernhard also wrote to the list, following up to a thread in November 2023, on attempts to make the LibreOffice suite of office applications build reproducibly. In the thread from this month, Bernhard could announce that the four patches previously mentioned have landed in LibreOffice upstream.

  • Fay Stegerman linked the mailing list to a thread she made on the Signal issue tracker regarding whether “device-specific binaries [can] ever be considered meaningfully reproducible”. In particular: “the whole part about ‘allow[ing] multiple third parties to come to a consensus on a “correct” result’ breaks down completely when ‘correct’ is device-specific and not something everyone can agree on.”

  • Developer kpcyrd posted an update on the source code indexing project whatsrc.org, announcing that it is now importing packages from live-bootstrap (“a usable Linux system [that is] created with only human-auditable, and wherever possible, human-written, source code”) into its database of provenance data.

  • Lastly, Mechtilde Stehmann posted an update to an earlier thread about how Java builds are not reproducible on the armhf architecture, enquiring how they might gain temporary access to such a machine in order to perform some deeper testing.


diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb released versions 274, 275, 276 and 277, uploaded these to Debian, and made the following changes as well:

  • New features:

    • Strip ANSI escapes—usually colour codes—from the output of the Procyon Java decompiler.
    • Factor out a method for stripping ANSI escapes.
    • Append output from dumppdf(1) in more cases, avoiding situations where we fall back to a binary diff.
    • Add support for Perl’s IO::Compress::Zip version 2.212.
  • Bug fixes:

    • Also catch RuntimeError exceptions when importing the PyPDF library so that it, or, crucially, its transitive dependencies, cannot cause diffoscope to traceback at runtime and build time.
    • Do not call marshal.load(…) on precompiled Python bytecode as it is, alas, inherently unsafe. Replace for now with a brief summary of the code section of the .pyc.
    • Don’t include excessive debug output when calling dumppdf(1).
  • Testsuite-related changes:

    • Don’t bother to check the version number in test_python.py: the fixture for this test is fixed.
    • Update test_zip text fixtures and definitions to support new changes to the Perl IO::Compress library.

In addition, Mattia Rizzolo updated the available architectures for a number of test dependencies, Sergei Trofimovich fixed an issue to avoid diffoscope crashing when hashing directory symlinks, and Vagrant Cascadian proposed GNU Guix updates for diffoscope versions 275, 276 and 277.


Website updates

There were a rather substantial number of improvements made to our website this month, including:


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In August, a number of changes were made by Holger Levsen, including:

  • Temporarily install the openssl-provider-legacy package for the Debian unstable environments for running diffoscope due to Debian bug #1078944.
  • Mark Debian armhf architecture nodes as being down due to their proxy being down.
  • Detect proxy failures.
  • Run the index-buildinfo for the builtin-pho script with the -q switch.
  • Disable all Arch Linux reproducible jobs.

In addition, Mattia Rizzolo updated the website configuration to install the ruby-jekyll-sitemap package as it is now used in the website, Roland Clobus updated the script to build Debian ‘live’ images to treat openQA issues as warnings, and Vagrant Cascadian marked the cbxi4b node as down.



If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

04 September, 2024 01:27PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

loading (unintended consequences?)

For their 30th anniversary (ish; the Covid pandemic pushed the date out a bit) British electronic music duo Orbital released the compilation 30 something. The track list mostly looks like a greatest-hits list, which — given their prior compilation celebrating 20 years looks much the same — would appear superfluous. However, they’ve rearranged and re-recorded all their songs for 30, to reflect their live arrangements. The reworkings are sufficiently distinct from the original versions (in some cases I prefer them) and elevate the release. The couple of new tracks are also fun, and many of the remixes on the second disc are worth a listen too.

cover art from Orbital - 30 Something

But what I actually sat down to write about was the cover artwork. They often have designs which riff on the notion of a circle (given their name) and the 30-something art (both for the album and single takes from it) adapts a “loading” spinner-like device from computing (I suppose it most closely resembles the spinner from macOS).

A possibly unintended effect of the pattern occurs when you view it on a display which is adjusting its brightness, such as if you’re listening to it on a phone, the screen is off, and you pick it up. The brightest part of the spinner is visible first, and the rest fade into visibility in sequence. The first time you see this is unexpected and very cool. (I've tried to recreate it in the picture below, but I don't think it's worked.)

Although I’ve suffixed the title of this post with unintended consequences?, it's quite possible this was deliberate.

screenshot of the artwork displayed on my phone

I’ve got the pattern on a t-shirt and my kids love to call out “Daddy’s loading!” In my convalescence it’s taken on a special sort of resonance because at times I’ve felt I’m in a holding state: waiting for an appointment to be made; waiting a polite interval before chasing an appointment; waiting for treatment to start after attending an appointment. Thankfully I’m at the end of that now, I hope.

04 September, 2024 09:06AM

September 02, 2024

hackergotchi for Gunnar Wolf

Gunnar Wolf

Free and open source software and other market failures

This post is a review for Computing Reviews of Free and open source software and other market failures, an article published in Communications of the ACM.

Understanding the free and open-source software (FOSS) movement has, since its beginning, implied crossing many disciplinary boundaries. This article describes FOSS’s history, explaining its undeniable success throughout the 1990s, and why the movement today feels in a way as if it were on autopilot, lacking the “steam” it once had.

The author presents several examples from different industries where, as happened with FOSS in computing, fundamental innovations came not because the leading companies in each field were attentive to customers’ needs but, to a certain degree, despite them: those needs were often not even considered, typically due to the hubris that comes from being a market leader.

Kamp exemplifies his hypothesis by presenting the messy landscape of the commercial, mutually incompatible Unix systems of the 1980s. Different companies had set out to implement their particular flavor of “open Unix computers,” but with clear examples of vendor lock-in techniques. He speculates that, “if we had been able to buy a reasonably priced and solid Unix for our 32-bit PCs … nobody would be running FreeBSD or Linux today, except possibly as an obscure hobby.” He states that the FOSS movement was born out of the utter market failure of the different Unix vendors.

The focus of the article shifts then to the FOSS movement itself: 25 years ago, as FOSS systems slowly gained acceptance and then adoption in the “serious market” and at the center of the dot-com boom of the early 2000s, Linux user groups (LUGs) with tens of thousands of members bloomed throughout the world; knowing this history, why have all but a few of them vanished into oblivion?

Kamp suggests that the strength and vitality that LUGs had ultimately reflects the anger that prompted technical users to take the situation into their own hands and fix it; once the software industry was forced to change, the strongly cohesive FOSS movement diluted. “The frustrations and anger of [information technology, IT] in 2024,” Kamp writes, “are entirely different from those of 1991.” As an example, the author closes by citing the difficulty of maintaining–despite having the resources to do so–an aging legacy codebase that needs to continue working year after year.

02 September, 2024 07:08PM

hackergotchi for Jonathan Carter

Jonathan Carter

Debian Day South Africa 2024

Beer, cake and ISO testing amidst rugby and jazz band chaos

On Saturday, the Debian South Africa team got together in Cape Town to celebrate Debian’s 31st birthday and to perform ISO testing for the Debian 11.11 and 12.7 point releases.

We ran out of time to organise a fancy printed cake like we had last year, but our improvisation worked out just fine!

We thought that we had allotted plenty of time for all of our activities for the day, and that there would be plenty of time for everything including training, but the day zipped by really fast. We hired a venue at a brewery, which is usually really nice because they have an isolated area with lots of space and a big TV – nice for presentations, demos, etc. But on this day, there was a big rugby match between South Africa and New Zealand, and as it got closer to the game, the place just got louder and louder (especially as a band started practicing and doing sound tests for their performance for that evening) and it turned out our space was also double-booked later in the afternoon, so we had to relocate.

Even amidst all the chaos, we ended up having a very productive day and we even managed to have some fun!

Four people from our local team performed ISO testing for the very first time, and in total we covered 44 test cases locally. Most of the other testers were the usual crowd in the UK; we also did a brief video call with them, but it was dinner time for them so we had to keep it short. Next time we’ll probably have some party line open that any tester can also join.

Logo

We went through some more iterations of our local team logo that Tammy has been working on. They’re turning out very nice and have been in progress for more than a year; I guess, like most things Debian, it will be ready when it’s ready!

Debian 11.11 and Debian 12.7 released, and looking ahead towards Debian 13

Both point releases tested just fine and were released later in the evening. I’m very glad that we managed to be useful and reduce total testing time and that we managed to cover all the test cases in the end.

A bunch of things we really wanted to fix by the time Debian 12 launched are now finally fixed in 12.7. There are still a few minor annoyances, but overall, Debian 13 (trixie) is looking even better than Debian 12 was around this time in the release cycle.

Freeze dates for trixie have not yet been announced; I hope that the release team announces them sooner rather than later. Also, KDE Plasma 6 hasn’t yet made its way into unstable; I’ve seen quite a number of people ask about this online, so hopefully that works out.

And by the way, the desktop artwork submissions for trixie end in two weeks! More information about that is available on the Debian wiki if you’re interested in making a contribution. There are already 4 great proposals.

Debian Local Groups

Organising local events for Debian is probably easier than you think, and Debian does make funding available for events. So, if you want to grow Debian in your area, feel free to join us at -localgroups on the OFTC IRC network, also plumbed on Matrix at -localgroups:matrix.debian.social – where we’ll try to answer any questions you might have and guide you through the process!

Oh and btw… South Africa won the Rugby!

02 September, 2024 01:01PM by jonathan

hackergotchi for Daniel Pocock

Daniel Pocock

Peter Eckersley is back (online)

When Peter Eckersley passed away in 2022, donors contributed a vast sum of money to preserve his brain.

Sadly, nobody remembered to preserve Peter's blog.

Peter had chosen to use a domain name from Iceland, pde.is. Iceland has become well known for hosting web sites that come under attack for their accurate coverage of the toxic culture in some free software organizations.

The sum raised to freeze Peter's brain was $29,007 according to GoFundMe. The cost of registering an .is domain was just EUR 68 with Orange Website, an Icelandic hosting company.

Reading Peter's blog posts again? Priceless.

Back in our student days, I was the sysadmin at the Virtual Moreland Community Network. The service hosted various organizations that we both participated in and I was also hosting one of Peter's email accounts.

Running certbot for pde.is

After securing Peter's domain, I immediately wanted to run certbot from Peter's Let's Encrypt project and obtain a certificate. Should it really be this easy to obtain a certificate for a domain previously owned by somebody else? Make of that what you will.

Here is the certificate. Notice the timestamp, the domain was registered and the certificate was issued on 22 June 2024, the same time that another well known University of Melbourne alumnus, Julian Assange, was packing up his prison cell with help from Kristinn Hrafnsson. Another uncanny coincidence.

Remembering the Edward Brocklesby affair in Debian

The Debian community is still waiting for answers about Edward Brocklesby, the Debian Developer who was secretly expelled just a few weeks after uploading a new version of the SSH2 package.

If Peter hadn't frozen his brain, he would turn in his grave at the manner in which Debian has been hijacked today.

Other blogs about Peter Eckersley

02 September, 2024 12:00PM

September 01, 2024

hackergotchi for Bits from Debian

Bits from Debian

Bits from the DPL

Dear Debian community,

these are my bits from DPL for August.

Happy Birthday Debian

On the 16th of August, Debian celebrated its 31st birthday. Since I'm unable to write a better text than our great publicity team, I'm simply linking to their article for those who might have missed it:

https://bits.debian.org/2024/08/debian-turns-31.html

Removing more packages from unstable

Helmut Grohne argued for more aggressive package removal and sought consensus on a way forward. He provided six examples of processes where packages that are candidates for removal are consuming valuable person-power. I’d like to add that the Bug of the Day initiative (see below) also frequently encounters long-unmaintained packages with popcon votes sometimes as low as zero, and often fewer than ten.

Helmut's email included a list of packages that would meet the suggested removal criteria. There was some discussion about whether a popcon vote should be included in these criteria, with arguments both for and against it. Although I support including popcon, I acknowledge that Helmut has a valid point in suggesting it be left out.

While I’ve read several emails in agreement, Scott Kitterman made a valid point "I don't think we need more process. We just need someone to do the work of finding the packages and filing the bugs." I agree that this is crucial to ensure an automated process doesn’t lead to unwanted removals. However, I don’t see "someone" stepping up to file RM bugs against other maintainers' packages. As long as we have strict ownership of packages, many people are hesitant to touch a package, even for fixing it. Asking for its removal might be even less well-received. Therefore, if an automated procedure were to create RM bugs based on defined criteria, it could help reduce some of the social pressure.

In this aspect the opinion of Niels Thykier is interesting: "As much as I want automation, I do not mind the prototype starting as a semi-automatic process if that is what it takes to get started."

The urgency of the problem of removing packages was put into words by Charles Plessy: "So as of today, it is much less work to keep a package rotting than removing it." My observation when trying to fix the Bug of the Day exactly fits this statement.

I would love for this discussion to lead to more aggressive removals that we can agree upon, whether they are automated, semi-automated, or managed by a person processing an automatically generated list (supported by an objective procedure). To use an analogy: I’ve found that every image collection improves with aggressive pruning. Similarly, I’m convinced that Debian will improve if we remove packages that no longer serve our users well.

DEP14 / DEP18

There are two DEPs that affect our workflow for maintaining packages—particularly for those who agree on using Git for Debian packages. DEP-14 recommends a standardized layout for Git packaging repositories, which benefits maintainers working across teams and makes it easier for newcomers to learn a consistent repository structure.

DEP-14 stalled for various reasons. Sam Hartman suspected it might be because 'it doesn't bring sufficient value.' However, the assumption that git-buildpackage is incompatible with DEP-14 is incorrect, as confirmed by its author, Guido Günther. Thus one of the two key tools for Debian Git repositories (besides dgit) fully supports DEP-14, though the migration from the previous default is somewhat complex.

Some investigation into mass-converting older formats to DEP-14 was conducted by the Perl team, as Gregor Herrmann pointed out.

The discussion about DEP-14 resurfaced with the suggestion of DEP-18. Guido Günther proposed the title ‘Encourage Continuous Integration and Merge Request-Based Collaboration for Debian Packages’, which more accurately reflects the DEP's technical intent.

Otto Kekäläinen, who initiated DEP-18 (thank you, Otto), provided a good summary of the current status. He also assembled a very helpful overview of Git and GitLab usage in other Linux distros.

More Salsa CI

As a result of the DEP-18 discussion, Otto Kekäläinen suggested implementing Salsa CI for our top popcon packages.

I believe it would be a good idea to enable CI by default across Salsa whenever a new repository is created.

Progress in Salsa migration

In my campaign, I stated that I aim to reduce the number of packages maintained outside Salsa to below 2,000. As of March 28, 2024, the count was 2,368. Today, it stands at 2,187 (UDD query: SELECT DISTINCT count(*) FROM sources WHERE release = 'sid' and vcs_url not like '%salsa%' ;).

After a third of my DPL term (OMG), we've made significant progress, reducing the amount in question (369 packages) by nearly half. I'm pleased with the support from the DDs who moved their packages to Salsa. Some packages were transferred as part of the Bug of the Day initiative (see below).

Bug of the Day

As announced in my 'Bits from the DPL' talk at DebConf, I started an initiative called Bug of the Day. The goal is to train newcomers in bug triaging by enabling them to tackle small, self-contained QA tasks. We have consistently identified target packages and resolved at least one bug per day, often addressing multiple bugs in a single package.

In several cases, we followed the Package Salvaging procedure outlined in the Developers Reference. Most instances were either welcomed by the maintainer or did not elicit a response. Unfortunately, there was one exception where the recipient of the Package Salvage bug expressed significant dissatisfaction. The takeaway is to balance formal procedures with consideration for the recipient’s perspective.

I'm pleased to confirm that the Matrix channel has seen an increase in active contributors. This aligns with my hope that our efforts would attract individuals interested in QA work. I’m particularly pleased that, within just one month, we have had help with both fixing bugs and improving the code that aids in bug selection.

As I aim to introduce newcomers to various teams within Debian, I also take the opportunity to learn about each team's specific policies myself. I rely on team members' assistance to adapt to these policies. I find that gaining this practical insight into team dynamics is an effective way to understand the different teams within Debian as DPL.

Another finding from this initiative, which aligns with my goal as DPL, is that many of the packages we addressed are already on Salsa but have not been uploaded, meaning their VCS fields are not published. This suggests that maintainers are generally open to managing their packages on Salsa. For packages that were not yet on Salsa, the move was generally welcomed.

Publicity team wants you

The publicity team has decided to resume regular meetings to coordinate their efforts. Given my high regard for their work, I plan to attend their meetings as frequently as possible, which I began doing with the first IRC meeting.

During discussions with some team members, I learned that the team could use additional help. If anyone interested in supporting Debian with non-packaging tasks reads this, please consider introducing yourself to debian-publicity@lists.debian.org. Note that this is a publicly archived mailing list, so it's not the best place for sharing private information.

Kind regards Andreas.

01 September, 2024 10:00PM by Andreas Tille

hackergotchi for Colin Watson

Colin Watson

Free software activity in August 2024

All but about four hours of my Debian contributions this month were sponsored by Freexian. (I ended up going a bit over my 20% billing limit this month.)

You can also support my work directly via Liberapay.

man-db and friends

I released libpipeline 1.5.8 and man-db 2.13.0.

Since autopkgtests are great for making sure we spot regressions caused by changes in dependencies, I added one to man-db that runs the upstream tests against the installed package. This required some preparatory work upstream, but otherwise was surprisingly easy to do.

OpenSSH

I fixed the various 9.8 regressions I mentioned last month: socket activation, libssh2, and Twisted. There were a few other regressions reported too: TCP wrappers support, openssh-server-udeb, and xinetd were all broken by changes related to the listener/per-session binary split, and I fixed all of those.

Once all that had made it through to testing, I finally uploaded the first stage of my plan to split out GSS-API support: there are now openssh-client-gssapi and openssh-server-gssapi packages in unstable, and if you use either GSS-API authentication or key exchange then you should install the corresponding package in order for upgrades to trixie+1 to work correctly. I’ll write a release note once this has reached testing.

Multiple identical results from getaddrinfo

I expect this is really a bug in a chroot creation script somewhere, but I haven’t been able to track down what’s causing it yet. My sbuild chroots, and apparently Lucas Nussbaum’s as well, have an /etc/hosts that looks like this:

$ cat /var/lib/schroot/chroots/sid-amd64/etc/hosts
127.0.0.1       localhost
127.0.1.1       [...]
127.0.0.1       localhost ip6-localhost ip6-loopback

The last line clearly ought to be ::1 rather than 127.0.0.1; but things mostly work anyway, since most code doesn’t really care which protocol it uses to talk to localhost. However, a few things try to set up test listeners by calling getaddrinfo("localhost", ...) and binding a socket for each result. This goes wrong if there are duplicates in the resulting list, and the test output is typically very confusing: it looks just like what you’d see if a test isn’t tearing down its resources correctly, which is a much more common thing for a test suite to get wrong, so it took me a while to spot the problem.
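
As a quick illustration (my own sketch, not code from any of the affected test suites), a bind-per-result loop in Python fails on the duplicate entry:

import socket

# With the broken /etc/hosts above, getaddrinfo("localhost", ...) returns
# the same 127.0.0.1 result twice; binding one listener per result then
# fails with EADDRINUSE on the duplicate.  The port number is arbitrary.
PORT = 54321

listeners = []
for family, socktype, proto, _, sockaddr in socket.getaddrinfo(
        "localhost", PORT, type=socket.SOCK_STREAM):
    sock = socket.socket(family, socktype, proto)
    try:
        sock.bind(sockaddr)
        sock.listen(1)
        listeners.append(sock)
    except OSError as exc:
        print(f"bind failed for duplicate result {sockaddr}: {exc}")
        sock.close()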

I ran into this in both python-asyncssh (#1052788, upstream PR) and Ruby (ruby3.1/#1069399, ruby3.2/#1064685, ruby3.3/#1077462, upstream PR). The latter took a while since Ruby isn’t one of my languages, but hey, I’ve tackled much harder side quests. I NMUed ruby3.1 for this since it was showing up as a blocker for openssl testing migration, but haven’t done the other active versions (yet, anyway).

OpenSSL vs. cryptography

I tend to care about openssl migrating to testing promptly, since openssh uploads have a habit of getting stuck on it otherwise.

Debian’s OpenSSL packaging recently split out some legacy code (cryptography that’s no longer considered a good idea to use, but that’s sometimes needed for compatibility) to an openssl-legacy-provider package, and added a Recommends on it. Most users install Recommends, but package build processes don’t; and the Python cryptography package requires this code unless you set the CRYPTOGRAPHY_OPENSSL_NO_LEGACY=1 environment variable, which caused a bunch of packages that build-depend on it to fail to build.
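
As an illustration (my own sketch, not from any particular package), the workaround amounts to setting the variable before cryptography initializes its OpenSSL bindings; in Debian package builds it is typically exported in the build environment instead:

import os

# Must happen before the first import of cryptography, since the legacy
# provider is loaded when the OpenSSL bindings are initialized.
os.environ["CRYPTOGRAPHY_OPENSSL_NO_LEGACY"] = "1"

from cryptography.hazmat.primitives import hashes

digest = hashes.Hash(hashes.SHA256())
digest.update(b"example input")
print(digest.finalize().hex())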

After playing whack-a-mole setting that environment variable in a few packages’ build process, I decided I didn’t want to be caught in the middle here and filed an upstream issue to see if I could get Debian’s OpenSSL team and cryptography’s upstream talking to each other directly. There was some moderately spirited discussion and the issue remains open, but for the time being the OpenSSL team has effectively reverted the change so it’s no longer a pressing problem.

GCC 14 regressions

Continuing from last month, I fixed build failures in pccts (NMU) and trn4.

Python team

I upgraded alembic, automat, gunicorn, incremental, referencing, pympler (fixing compatibility with Python >= 3.10), python-aiohttp, python-asyncssh (fixing CVE-2023-46445, CVE-2023-46446, and CVE-2023-48795), python-avro, python-multidict (fixing a build failure with GCC 14), python-tokenize-rt, python-zipp, pyupgrade, twisted (fixing CVE-2024-41671 and CVE-2024-41810), zope.exceptions, zope.interface, zope.proxy, zope.security, zope.testrunner. In the process, I added myself to Uploaders for zope.interface; I’m reasonably comfortable with the Zope Toolkit and I seem to be gradually picking up much of its maintenance in Debian.

A few of these required their own bits of yak-shaving:

I improved some Multi-Arch: foreign tagging (python-importlib-metadata, python-typing-extensions, python-zipp).

I fixed build failures in pipenv, python-stdlib-list, psycopg3, and sen, and fixed autopkgtest failures in autoimport (upstream PR), python-semantic-release and rstcheck.

Upstream for zope.file (not in Debian) filed an issue about a test failure with Python 3.12, which I tracked down to a Python 3.12 compatibility PR in zope.security.

I made python-nacl build reproducibly (upstream PR).

I moved aliased files from / to /usr in timekpr-next (#1073722).

Installer team

I applied a patch from Ubuntu to make os-prober support building with the noudeb profile (#983325).

01 September, 2024 01:29PM by Colin Watson

hackergotchi for Guido Günther

Guido Günther

Free Software Activities August 2024

Another short status update of what happened on my side last month.

Quite a bit of time went into helping organize the FrOSCon FOSS on Mobile dev room (day 1, day 2, summary) but that was all worth it and fun - so was releasing Phosh 0.41.0 (which incidentally happened right before FrOSCon). A three-year-old MR to xdg-spec to add call categories landed (thanks Matthias), allowing us to finally provide proper feedback for e.g. IM calls too. The rest was some OSK improvements (around Indic language support via varnam and layout configuration), some Cell Broadcast advancements (thanks to NGI0 for supporting this) but also some fixes. Here are the details:

Phosh

  • Debug crash when swiping away keyboard on lockscreen (MR).
  • Fix outdated clock when swiping back from lockscreen plugins (MR)
  • Avoid deprecation warning (MR)
  • Better handle mobile network generation bit masks (MR)
  • Improve docs that end up in the libphosh-rs docs (MR)
  • Modernize ModemManager backend in preparation for Cellbroadcast support (MR)
  • Remove hacks from Cell Broadcast support MR (MR). Still draft but not much left to do once the ModemManager side has landed
  • Remove deprecated UI props and add a check so they don't creep back in (MR)
  • Allow to use ASAN when feedbackd is a subproject (MR)
  • Fix crash when Wi-Fi hot spot quick setting gets disabled (MR)
  • Don't allow to change hotspot state on the lock screen (MR)
  • Prepare and release Phosh 0.41.0~rc1 and Phosh 0.41.0
  • Prepare 0.41.1 (MR)

Phoc

  • Don't reject gesture when we cross another surface (MR)

phosh-mobile-settings

  • Drop redundant enums (MR)
  • Remember last used panel (MR)
  • Fix initial state of move up/down popovers (MR)
  • Allow to select OSK layouts (MR). This ensures only actually available layouts can be selected. Currently used by phosh-osk-stub but can easily be extended to squeekboard once it provides the information.

libphosh-rs

phosh-osk-stub

  • Allow to open OSK Settings panel when screen is not locked (MR)
  • Unswap Enter and Backspace (MR)
  • Bug fix release 0.41.1
  • Use varnam_learn() for better completions in the varnam completer (MR)
  • Export layout information (MR)
  • Reduce flicker when launching settings (MR)

phosh-wallpapers

  • Avoid new event sounds not being picked up due to stale caches (MR)
  • Improve phone-hangup sound (MR)

meta-phosh

  • Add release helpers (MR)

phosh-recipes

Debian

  • Upload Phosh 0.41.0~rc1 and 0.41.0 releases
  • Robustify release script a bit (MR)
  • Enable binding lib in phosh (MR)
  • Move govarnam and varnam schemes packages into the input method team
  • Upload varnam schemes to sid (MR)
  • Make varnam-schemes reproducible, add autopkgtests and run upstream test during build (MR)
  • Build wlroots with xcb-errors support (MR)

Mobian

  • Help mobian-recipes with newer debos: (MR)

ModemManager

  • Rework most bits of Cell Broadcast to move it closer to undraft status (MR). (Remaining bits affect enabling of unsolicited messages and setting channels).

Calls

  • Use official notification category (MR)
  • Use AdwAboutDialog (MR)

gnome-bluetooth

  • Fix some deprecations (MR)
  • Make pairing dialog adaptive (MR)
  • Allow to use with Phosh without imposing more API/ABI guarantees (MR)

gnome-settings-daemon

  • Fix crash when hitting an error condition (which could then bring down the whole session): (MR)

feedbackd

  • Install the udev rule via meson (MR) to make it easier for distros to pick up rule changes
  • Sync packaging with Debian (MR)
  • Document used gsettings (MR)

Chatty

  • Update information at matrix.org (MR)
  • Implement more unified push bits: (MR)
  • Document things a bit (MR)
  • Chase libcmatrix API changes (MR)

Libcmatrix

Eigenvalue

  • Catch up with libcmatrix API changes (MR)

kunifiedpush

  • Avoid broken URLs when using ntfy (MR)

gir-rustdoc

  • Improve error message when not running in CI (MR)

python-dbusmock

  • Drop outdated comments (MR)

matrix spec

  • propose some hints for Mobile clients (MR)

sound-theme spec

  • propose new sound name for cell broadcasts (MR)

varnam-schemes

  • Make reproducible (MR)
  • Don't ignore errors in build scripts (MR)
  • Allow to run test against installed schemes (MR)
  • Fix build with recent ruby (MR)

FrOSCon

Help Development

If you want to support my work see donations. This includes a list of hardware we want to improve support for. Thanks a lot to all current and past donors.

01 September, 2024 12:20PM

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Zyxel GS1900 firmware source dump

I asked Zyxel for a source dump for GPLed firmware on their GS1900-8HP switches, and after months, they finally obliged (they seemingly had no idea that it should just be, well, available). So I'm dumping it here in case anyone else wants it.

I haven't tried actually building it, but notably, it seems to contain the entire CLI, since they base it on Quagga's vtysh (which is GPL).

01 September, 2024 09:58AM

Russ Allbery

Review: Reasons Not to Worry

Review: Reasons Not to Worry, by Brigid Delaney

Publisher: Harper
Copyright: 2022
Printing: October 2023
ISBN: 0-06-331484-3
Format: Kindle
Pages: 295

Reasons Not to Worry is a self-help non-fiction book about stoicism, focusing specifically on quotes from Seneca, Epictetus, and Marcus Aurelius. Brigid Delaney is a long-time Guardian columnist who has written on a huge variety of topics, including (somewhat relevantly to this book) her personal experiences trying weird fads.

Stoicism is having a moment among the sort of men who give people life advice in podcast form. Ryan Holiday, a former marketing executive, has made a career out of being the face of stoicism in everyone's podcast feed (and, of course, hosting his own). He is far from alone. If you pay attention to anyone in the male self-help space right now (Cal Newport, in my case), you have probably heard something vague about the "wisdom of the stoics."

Given that the core of stoicism is easily interpreted as a strategy for overcoming your emotions with logic, this isn't surprising. Philosophies that lean heavily on college dorm room logic, discount emotion, and argue that society is full of obvious flaws that can be analyzed and debunked by one dude with some blog software and a free afternoon have been very popular in tech circles for the past ten to fifteen years, and have spread to some extent into popular culture. Intriguingly, though, stoicism is a system of virtue ethics, which means it is historically in opposition to consequentialist philosophies like utilitarianism, the ethical philosophy behind effective altruism and other related Silicon Valley fads.

I am pretty exhausted with the whole genre of men talking to each other about how to live a better life — Cal Newport by himself more than satisfies the amount of that I want to absorb — but I was still mildly curious about stoicism. My education didn't provide me with a satisfying grounding in major historical philosophical movements, so I occasionally look around for good introductions. Stoicism also has some reputation as an anxiety-reduction technique, and I could use more of those. When I saw a Discord recommendation for Reasons Not to Worry that specifically mentioned its lack of bro perspective, I figured I'd give it a shot.

Reasons Not to Worry is indeed not a bro book, although I would have preferred fewer appearances of the author's friend Andrew, whose opinions on stoicism I could not possibly care less about. What it is, though, is a shallow and credulous book that falls squarely in the middle of the lightweight self-help genre. Delaney is here to explain why stoicism is awesome and to convince you that a school of Greek and Roman philosophers knew exactly how you should think about your life today. If this sounds quasi-religious, well, I'll get to that.

Delaney does provide a solid introduction to stoicism that I think is a bit more approachable than reading the relevant Wikipedia article. In her presentation, the core of stoicism is the practice of four virtues: wisdom, courage, moderation, and justice. The modern definition of "stoic" as someone who is impassive in the presence of pleasure or pain is somewhat misleading, but Delaney does emphasize a goal of ataraxia, or tranquility of mind. By making that the goal rather than joy or pleasure, stoicism tries to avoid the trap of the hedonic treadmill in favor of a more achievable persistent contentment.

As an aside, some quick Internet research makes me doubt Delaney's summary here. Other material about stoicism I found focuses on apatheia and associates ataraxia with Epicureanism instead. But I won't start quibbling with Delaney's definitions; I'm not qualified and this review is already too long.

The key to ataraxia, in Delaney's summary of stoicism, is to focus only on those parts of life we can control. She summarizes those as our character, how we treat others, and our actions and reactions. Everything else — wealth, the esteem of our colleagues, good health, good fortune — is at least partly outside of our control, and therefore we should enjoy it when we have it but try to be indifferent to whether it will last. Attempting to control things that are outside of our control is doomed to failure and will disturb our tranquility. Essentially all of this book is elaborations and variations on this theme, specialized to some specific area of life like social media, anxiety, or grief and written in the style of a breezy memoir.

If you're familiar with modern psychological treatment frameworks like cognitive behavioral therapy or acceptance and commitment therapy, this summary of stoicism may sound familiar. (Apparently this is not an accident; the predecessor to CBT used stoicism as a philosophical basis.) Stoicism, like those treatment approaches, tries to refocus your attention on the things that you can improve and de-emphasizes the things outside of your control. This is a lot of the appeal, at least to me (and I think to Delaney as well).

Hearing that definition, you may have some questions. Why those virtues specifically? They sound good, but all virtues sound good almost by definition. Is there any measure of your success in following those virtues outside your subjective feeling of ataraxia? Does the focus on only things you can control lead to ignoring problems only mostly outside of your control, where your actions would matter but only to a small degree? Doesn't this whole philosophy sound a little self-centered? What do non-stoic virtue ethics look like, and why do they differ from stoicism? What is the consequentialist critique of stoicism?

This is where the shortcomings of this book become clear: Delaney is not very interested in questions like this. There are sections on some of those topics, particularly the relationship between stoicism and social justice, but her treatment is highly unsatisfying. She raises the question, talks about her doubts about stoicism's applicability, and then says that, after further thought, she decided stoicism is entirely consistent with social justice and the stoics were right after all. There is a little bit more explanation than that, but not much. Stoicism can apparently never be wrong; it can only be incompletely understood.

Self-help books often fall short here, and I suspect this may be what the audience wants. Part of the appeal of the self-help genre is artificial certainty. Becoming a better manager, starting a business, becoming more productive, or working out an entire life philosophy are not problems amenable to a highly approachable and undemanding book. We all know that at some level, but the seductive allure of the self-help genre is the promise of simplifying complex problems down to a few approachable bullet points. Here is a life philosophy in a neatly packaged form, and if you just think deeply about its core principles, you will find they can be applied to any situation and any doubts you were harboring will turn out to be incorrect.

I am all too familiar with this pattern because it's also how fundamentalist Christianity works. The second time Delaney talked about her doubts about the applicability of stoicism and then claimed a few pages later that those doubts disappeared with additional thought and discussion, my radar went off. This book was sounding less like a thoughtful examination of one specific philosophy out of many and more like the soothing adoption of religious certainty by a convert. I was therefore entirely unsurprised when Delaney all but says outright in the epilogue that she's adopted stoicism as her religion and approaches it with the same dedicated practice that she used to bring to Catholicism. I think this is where a lot of self-help books end up, although most of them don't admit it.

There's nothing wrong with this, to be clear. It sounds like she was looking for a non-theistic religion, found one that she liked, and is excited to tell other people about it. But it's a profound mismatch with what I was looking for in an introduction to stoicism. I wanted context, history, and a frank discussion of the problems with adopting philosophy to everyday issues. I also wanted some acknowledgment that it is highly unlikely that a few men who lived 2000 years ago in a wildly different social context, and with drastically limited information about cultures other than their own, figured out a foolproof recipe for how to approach life. The subsequent two millennia of philosophical debates prove that stoicism didn't end the argument, and that a lot of other philosophers thought that stoicism got a few things wrong. You would never know that from this book.

What I wanted is outside the scope of this sort of undemanding self-help book, though, and this is the problem that I keep having with philosophy. The books I happen across are either nigh-incomprehensibly dense and academic, or they're simplified into catechism. This was the latter. That's probably more the fault of my reading selection than it is the fault of the book, but it was still annoying.

What I will say for this book, and what I suspect may be the most useful property of self-help books in general, is that it prompts you to think about basic stoic principles without getting in the way of your thoughts. It's like background music for the brain: nothing Delaney wrote was very thorny or engaging, but she kept quietly and persistently repeating the basic stoic formula and turning my thoughts back to it. Some of those thoughts may have been useful? As a source of prompts for me to ponder, Reasons Not to Worry was therefore somewhat successful. The concept of not trying to control things outside of my control is simple but valid, and it probably didn't hurt me to spend a week thinking about it.

"It kind of works as an undemanding meditation aid" is not a good enough reason for me to recommend this book, but maybe that's what someone else is looking for.

Rating: 5 out of 10

01 September, 2024 03:36AM

August 31, 2024

Andrew Cater

Debian release weekend - media team update 202408311900 UTC

We're doing fairly well: the Debian release team have been working really hard on a double point release today, including the final release for Bullseye, 11.11, as it moves to LTS.

12.7 Bookworm install media finishing tests - it's been quite a long day so far.

For 11.11 we're part way through media tests.

We've been joined by a lot of enthusiastic folk from Cape Town who've been a great help. Always nice to see old friends and new people join us on IRC - and they've just joined us for a short video call.

This has gone well: having two release-day media checking and bug-squashing groups on two continents is excellent.

Dear Cape Town - feel free to join us for the next time and we'll hold the video call open for longer. If we don't see any of you here in Cambridge for mini-Debconf, we'll meet up in Brest for Debconf 25.



31 August, 2024 06:42PM by Andrew Cater (noreply@blogger.com)

Vincent Bernat

Fixing layout shifts caused by web fonts

In 2020, Google introduced Core Web Vitals metrics to measure some aspects of real-world user experience on the web. This blog has consistently achieved good scores for two of these metrics: Largest Contentful Paint and Interaction to Next Paint. However, optimizing the third metric, Cumulative Layout Shift, which measures unexpected layout changes, has been more challenging. Let’s face it: optimizing for this metric is not really useful for a site like this one. But getting a better score is always a good distraction. 💯

To prevent the “flash of invisible text” when using web fonts, developers should set the font-display property to swap in @font-face rules. This method allows browsers to initially render text using a fallback font, then replace it with the web font after loading. While this improves the LCP score, it causes content reflow and layout shifts if the fallback and web fonts are not metrically compatible. These shifts negatively affect the CLS score. CSS provides properties to address this issue by overriding font metrics when using fallback fonts: size-adjust, ascent-override, descent-override, and line-gap-override.

Two comprehensive articles explain each property and their computation methods in detail: Creating Perfect Font Fallbacks in CSS and Improved font fallbacks.

Interactive tuning tool

Instead of computing each property from font average metrics, I put together a tool for interactively tuning fallback fonts.1
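
As a rough sketch of that metrics-based computation (using the fontTools library; the file names are placeholders, and the interactive tuning described below refines these starting values):

from fontTools.ttLib import TTFont

custom = TTFont("merriweather.ttf")   # placeholder paths
fallback = TTFont("georgia.ttf")

def avg_advance(font):
    # Average glyph advance width, normalized to the em size.
    hmtx, upm = font["hmtx"], font["head"].unitsPerEm
    return sum(w for w, _ in hmtx.metrics.values()) / len(hmtx.metrics) / upm

# size-adjust scales the fallback so average widths match; the overrides
# then map the custom font's vertical metrics onto the adjusted fallback.
size_adjust = avg_advance(custom) / avg_advance(fallback)
upm = custom["head"].unitsPerEm
print(f"size-adjust: {size_adjust:.1%}")
print(f"ascent-override: {custom['hhea'].ascent / upm / size_adjust:.1%}")
print(f"descent-override: {-custom['hhea'].descent / upm / size_adjust:.1%}")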

Instructions

  1. Load your custom font.

  2. Select a fallback font to tune.

  3. Adjust the size-adjust property to match the width of your custom font with the fallback font. With a proportional font, it is not possible to achieve a perfect match.

  4. Fine-tune the ascent-override property. Aim to align the final dot of the last paragraph while monitoring the font’s baseline. For more precise adjustment, disable the “…” option.

  5. Modify the descent-override property. The goal is to make the two boxes match. You may need to alternate between this and the previous property for optimal results.

  6. If necessary, adjust the line-gap-override property. This step is typically not required.

The process needs to be repeated for each fallback font. Some platforms may not include certain fonts. Notably, Android lacks most fonts found in other operating systems. It replaces Georgia with Noto Serif, which is not metrically compatible.

Tool

This tool is not available from the Atom feed.

Results

For the body text of this blog, I get the following CSS definition:

@font-face {
  font-family: Merriweather;
  font-style: normal;
  font-weight: 400;
  src: url("../fonts/merriweather.woff2") format("woff2");
  font-display: swap;
}
@font-face {
  font-family: "Fallback for Merriweather";
  src: local("Noto Serif"), local("Droid Serif");
  size-adjust: 98.3%;
  ascent-override: 99%;
  descent-override: 27%;
}
@font-face {
  font-family: "Fallback for Merriweather";
  src: local("Georgia");
  size-adjust: 106%;
  ascent-override: 90.4%;
  descent-override: 27.3%;
}

font-family: Merriweather, "Fallback for Merriweather", serif;

After a month, the CLS metric improved to 0:

Core Web Vitals scores for vincent.bernat.ch showing all 6 metrics as green. Notably the Cumulative Layout Shift is 0.
Recent Core Web Vitals scores for vincent.bernat.ch

About custom fonts

Using safe web fonts or a modern font stack is often simpler. However, I prefer custom web fonts. Merriweather and Iosevka, which are used in this blog, enhance the reading experience. An alternative approach could be to use Georgia as a serif option. Unfortunately, most default monospace fonts are ugly.

Furthermore, paragraphs that combine proportional and monospace fonts can create visual disruption. This occurs due to mismatched vertical metrics or weights. To address this issue, I adjust Iosevka’s metrics and weight to align with Merriweather’s characteristics.


  1. Similar tools already exist, like the Fallback Font Generator, but they were missing a few features, such as the ability to load the fallback font or to have decimals for the CSS properties. And no source code.

31 August, 2024 01:07PM by Vincent Bernat

Andrew Cater

Debian release weekend - Bullseye and Bookworm 20240831

A double length Debian release
means the Release Team don't get much peace
What with last minute breaks
And the time that it takes
Treat them with respect today, please

The media team's on the hook
As we follow our normal play book
With laptops all primed
The images are timed
Once we're told we'll start taking our look

This is the last time for 11
And for Bookworm, it's just 12.7
Give us time for each test
As we all do our best
With our ThinkPads - I see at least seven :)

31 August, 2024 11:40AM by Andrew Cater (noreply@blogger.com)

Russ Allbery

Review: The Shepherd's Crown

Review: The Shepherd's Crown, by Terry Pratchett

Series: Discworld #41
Publisher: Harper
Copyright: 2015
Printing: 2016
ISBN: 0-06-242998-1
Format: Trade paperback
Pages: 276

The Shepherd's Crown is the 41st and final Discworld novel and the 5th and final Tiffany Aching novel. You should not start here.

There is a pretty major character event in the second chapter of this book. I'm not going to say directly what it is, but you will likely be able to guess from the rest of the review. If you're particularly averse to spoilers, you may want to skip reading this until you've read the book.

Tiffany Aching is extremely busy. Witches are responsible for all the little tasks that fall between the cracks, and there are a lot of cracks. The better she gets at her job, the more of the job there seems to be.

"Well," said Tiffany, "there's too much to be done and not enough people to do it."

The smile that the kelda gave her was a strange one. The little woman said, "Do ye let them try? Ye mustn't be afraid to ask for help. Pride is a good thing, my girl, but it will kill you in time."

And that's before an earth-shattering change in the world of witches, one that leaves Tiffany shuttling between Lancre and the Chalk trying to be too many things to too many people. Plus the kelda is worried some deeper trouble is brewing. And then Tiffany gets an exiled elven queen who has never understood the worth of other people dumped on her, and has to figure out what to do with her.

The starting idea is great. I continue to be impressed with how well Pratchett handles Tiffany's coming-of-age story. Finding one's place in the world isn't one lesson or event; it's layers of them, with each new growth in responsibility uncovering new things to learn that are often quite different from the previous problems. Tiffany has worked through child problems, adolescent problems, and new adulthood problems. Now she's on a course towards burnout, which is exactly the kind of problem Tiffany would have given her personality.

Even better, the writing at the start of The Shepherd's Crown is tight and controlled and sounds like Pratchett, which was a relief after the mess of Raising Steam. The contrast is so sharp that I found myself wondering if parts of this book had been written earlier, or if Pratchett found a new writing or editing method. The characters all sound like themselves, and although some of the turns of phrase are not quite as sharp as in earlier books, they're at least at the level of Snuff.

Unfortunately, it doesn't last. There are some great moments and some good quotes, but the writing starts to slip at about the two-thirds point, the sentences begin to meander, the characters start repeating the name of the person they're talking to, and the narration becomes increasingly strained. It felt like Pratchett knew the emotional tone he wanted to evoke but couldn't find a subtle way to express it, so the story and the characters start to bludgeon the reader with Grand Statements. It's never as bad as Raising Steam, but it doesn't slip smoothly off the page to rewrite your brain the way that Pratchett could at his best.

What makes this worse is that the plot is not very interesting. I wanted to read a book about Tiffany understanding burnout, asking for help, and possibly also about mental load and how difficult delegation is. There is some movement in that direction: she takes on some apprentices, although we don't see as much of her interactions with them as I'd like, and there's an intriguing new male character who wants to be a witch. I wish Pratchett had been able to give Geoffrey his own book. He and his goat were the best part of the story, but it felt rushed and I think he would have had more impact if the reader got to see him develop his skills over time the way that we did with Tiffany.

But, alas, all of that is side story to the main plot, which is about elves.

As you may know from previous reviews, I do not get along with Pratchett's conception of elves. I find them boring and too obviously evil, and have since Lords and Ladies. Villains have never been one of Pratchett's strengths, and I think his elves are my least favorite. One of the goals of this book is to try to make them less one-note by having Tiffany try to teach one of them empathy, but I didn't find any of the queen's story arc convincing. If Pratchett had pulled those threads together with something more subtle, emotional, and subversive, I think it could have worked, but instead we got another battle royale, and Lords and Ladies did that better.

"Granny never said as she was better than others. She just got on with it and showed 'em and people worked it out for themselves."

And so we come to the end. I wish I could say that the quality held up through the whole series, and it nearly did, but alas it fell apart a bit at the end. Raising Steam I would skip entirely. The Shepherd's Crown is not that bad, but it's minor Pratchett that's worth reading mainly because it's the send-off (and there are a lot of reasons within the story to think Pratchett knew that when writing it). There are a few great lines, some catharsis, and a pretty solid ending for Tiffany, but it's probably not a book that I'll re-read.

Content warning: major character death.

Special thanks to Emmet Asher-Perrin, whose Tor.com/Reactor re-read of all of Discworld got me to pick the series up again and finally commit to reading all of it. I'm very glad I did.

Rating: 6 out of 10

31 August, 2024 04:47AM

August 30, 2024

hackergotchi for Steve McIntyre

Steve McIntyre

Party like it's 2024

It (was) that time of year again - last weekend we hosted a bunch of nice people at our place in Cambridge for the annual Debian UK OMGWTFBBQ!

can you BBQ gin??

Lots of friends, lots of good food and drink. Of course lots of geeky discussions about Debian, networking, random computer languages and... screws? And of course some card games to keep us laughing into each night!

beer anyone?

Many thanks to a number of awesome friendly people for again sponsoring the important refreshments for the weekend. It's hungry/thirsty work celebrating like this!

30 August, 2024 05:24PM

Russ Allbery

Review: Thornhedge

Review: Thornhedge, by T. Kingfisher

Publisher: Tor
Copyright: 2023
ISBN: 1-250-24410-2
Format: Kindle
Pages: 116

Thornhedge is a fantasy novella by T. Kingfisher, the pen name that Ursula Vernon uses for her adult writing. It won the 2024 Hugo Award for best novella. No matter how much my brain wants to misspell the title, it is a story about a hedge, not a Neolithic earthwork.

The fairy was the greenish-tan color of mushroom stems and her skin bruised blue-black, like mushroom flesh. She had a broad, frog-like face and waterweed hair. She was neither beautiful nor made of malice, as many of the Fair Folk are said to be.

There is a princess asleep in a tower, surrounded by a wall of thorns. Toadling's job is to keep anyone from foolishly breaking in. At first, it was a constant struggle and all that she could manage, but with time, the flood of princes slowed to a trickle. A road was built and abandoned. People fled. There was a plague. With any luck, the tower was finally forgotten.

Then a knight shows up. Not a very rich knight, nor a very successful knight. Just a polite and very persistent knight who wants to get into the tower that Toadling does not want him to get into.

As you might have guessed, this is a Sleeping Beauty retelling. As you may have also guessed from the author, or from the cover text that says "not all curses should be broken," this version is a bit different. How and why it departs from the original is a surprise that slowly unfolds over the course of the story, in parallel to a delicate, cautious, and delightfully kind-hearted conversation between the knight and the fairy.

If you have read a T. Kingfisher story before, particularly one of her fractured fairy tales, you know what to expect. Toadling is one of her typical well-meaning, earnest, slightly awkward protagonists who is just trying to do the right thing in a confusing world full of problems and dangers. She's constantly overwhelmed and yet she keeps going, because what else is there to do. Like a lot of Kingfisher's writing, it's a story about quiet courage from someone who doesn't consider herself courageous. One of the twists this time is that the knight is a character from a similar vein: doggedly unwilling to leave any problem alone, but equally determined to try to be kind. The two of them together make for a story with a gentle and rather melancholy tone.

We do, eventually, learn the whole backstory of the tower, the wall of thorns, and Toadling. There is a god, a rather memorable one, who is frustratingly cryptic in the way that gods are. There are monsters who are more loving than most humans. There are humans who turn out to be surprisingly decent when it matters. And, like most of Kingfisher's writing, there is a constant awareness of how complicated the world is, how full it is of people who are just trying to get through each day, and how heavy of burdens people can shoulder when they don't see another way.

This story pulled me right in. It is not horror, although there are a few odd bits like there always are in Kingfisher stories. Your largest risk as a reader is that it might make you cry if stories about earnest people doing their best in overwhelming situations hit you that way. My primary complaint is that there was nowhere near enough ending for me. After everything I learned about the characters, I wanted to spend some time with them outside of the bounds of the story. Kingfisher points the reader in a direction and then leaves the rest to your imagination, and I can see why she chose that story construction, but I wanted more catharsis than I got.

That complaint aside, this is quintessential T. Kingfisher, and I am unsurprised that it won a Hugo. If you've read any of her other fractured fairy tales, or the 2023 Hugo winner for best novel, you know the sort of stories she tells, and you probably know whether you will like this. I am one of the people who like this.

Rating: 8 out of 10

30 August, 2024 03:28AM

hackergotchi for Steve McIntyre

Steve McIntyre

A birthday gift to remember!

Warning: If you're not into meat, you might want to skip the rest of this...

This year, I turned 50. Wow. Lots of friends and family turned up to help me celebrate, with a BBQ (of course!). I was very grateful for a lovely set of gifts from those awesome people, and I have a number of driving experiences to book in the next year or so. I'm going to have so much fun driving silly cars on and off road!

However, the most surprising gift was something totally different - a full-day course of hands-on pork butchery. I was utterly bemused - I've never considered doing anything like this at all, and I'd certainly never talked to friends about anything like it either. I was shocked, but in a good way!

So, two weekends back Jo and I went over to Empire Farm in Somerset. We stayed nearby so we could be ready on-site early on Sunday morning, and then we joined three other people doing the course. Jo was there to observe, i.e. to watch and take (lots of!) pictures.

I can genuinely say that this was the most fun surprise gift I've ever received! David Coldman, the master butcher working with us, has been in the industry for many years. He was an excellent teacher, showing us everything we needed to know and being very patient with us when we needed it. It was great to hear his philosophy too - he only uses the very best locally-sourced meat and focuses on quality over quantity. He showed us all the different cuts of pork that a butcher will make, and we were encouraged to take everything home - no waste here!

half a pig

At the beginning of the day, we each started with half a pig. Over the next several hours, we steadily worked our way through a series of cuts with knife and saw, making the remaining pig smaller and smaller as we went.

saw

knife

We finished the day with three sets of meat. First, a stack of vacuum-packed joints, chops and steaks ready for cooking and eating at home. Second: a box of off-cuts that we minced and made into sausages at the end of the day. Finally: a bag of skin and bones. Our friend's dog got some of the bones, and Jo turned a lot of the skin into crackling that we shared with friends at the OMGWTFBBQ the next weekend.

sausages

This was an amazing day. Massive thanks to my good friend Chris Walker for suggesting this gift. As I told David on the day: this was the most fun surprise gift I've ever received. Good hands-on teaching in a new craft is an incredible thing to experience, and I can't recommend this course highly enough.

30 August, 2024 12:46AM

August 29, 2024

hackergotchi for Jonathan Carter

Jonathan Carter

Orphaning bcachefs-tools in Debian

Around a decade ago, I was happy to learn about bcache – a Linux block cache system that implements tiered storage (like a pool of hard disks with SSDs for cache) on Linux. At that stage, ZFS on Linux was nowhere close to where it is today, so any progress on gaining more ZFS features in general Linux systems was very welcome. These days we care a bit less about tiered storage, since any cost benefit of using anything other than NVMe tends to evaporate quickly compared to the time you eventually lose on it.

In 2015, it was announced that bcache would grow into its own filesystem. This was particularly exciting and it caused quite a buzz in the Linux community, because it brought along with it more features that compare with ZFS (and also btrfs), including built-in compression, built-in encryption, check-summing and RAID implementations.

Unlike ZFS, it didn’t have a dkms module, so if you wanted to test bcachefs back then, you’d have to pull the entire upstream bcachefs kernel source tree and compile it. Not ideal, but for a promise of a new, shiny, full-featured filesystem, it was worth it.

In 2019, it seemed that the time had come for bcachefs to be merged into Linux, so I thought it was about time we had the userspace tools (bcachefs-tools) packaged in Debian. Even if the Debian kernel wouldn’t have it yet by the time the bullseye (Debian 11) release happened, it might still have been useful for a future backported kernel or users who roll their own.

By total coincidence, the first git snapshot that I got into Debian (version 0.1+git20190829.aa2a42b) was committed exactly 5 years ago today.

It was quite easy to package it, since it was written in C and shipped with a makefile that just worked, and it made it past NEW into unstable on 19 January 2020, just as I was about to head off to FOSDEM as the pandemic started, but that’s of course a whole other story.

Fast-forwarding towards the end of 2023, version 1.2 shipped with some utilities written in Rust, this caused a little delay, since I wasn’t at all familiar with Rust packaging yet, so I shipped an update that didn’t yet include those utilities, and saw this as an opportunity to learn more about how the Rust eco-system worked and Rust in Debian.

So, back in April the Rust dependencies for bcachefs-tools in Debian didn’t at all match the build requirements. I got some help from the Rust team, who said that the common practice is to relax the dependencies of Rust software so that it builds in Debian. So errno, which needed the exact version 0.2, was relaxed so that it could build with version 0.4 in Debian, udev 0.7 was relaxed for 0.8 in Debian, memoffset from 0.8.5 to 0.6.5, paste from 1.0.11 to 1.0.8 and bindgen from 0.69.9 to 0.66.

I found this a bit disturbing, but it seems that some Rust people have lots of confidence that if something builds, it will run fine. And at least it did build, and the resulting binaries did work, although I’m personally still not very comfortable or confident about this approach (perhaps that might change as I learn more about Rust).

With that in mind, at this point you may wonder how any distribution could sanely package this. The problem is that they can’t. Fedora and other distributions with stable releases take a similar approach to what we’ve done in Debian, while distributions with much more relaxed policies (like Arch) include all the dependencies as they are vendored upstream.

As it stands now, bcachefs-tools is impossible to maintain in Debian stable. While my primary concerns when packaging are for Debian unstable and the next stable release, I also keep in mind people who have to support these packages long after I stopped caring about them (like Freexian who does LTS support for Debian, or Canonical who has long-term Ubuntu support, and probably other organisations that I’ve never even heard of yet). And of course, if bcachefs-tools doesn’t have any usable stable releases, it doesn’t have any LTS releases either, so anyone who needs to support bcachefs-tools long-term has to carry the support burden on their own, and if they bundle its dependencies, then those as well.

I’ll admit that I don’t have any solution for fixing this. I suppose if I were upstream I might look into the possibility of at least supporting a larger range of recent dependencies (usually easy enough if you don’t hop onto the newest features right away) so that distributions with stable releases only need to concern themselves with providing some minimum recent versions, but even if that could work, the upstream author is 100% against any solution other than vendoring all its dependencies with the utility and insisting that it must only be built using these bundled dependencies. I’ve made 6 uploads for this package so far this year, but still I constantly get complaints that it’s out of date and that it’s ancient. If a piece of software is considered so old that it’s useless by the time it’s been published for two or three months, then there’s no way it can survive even a usual stable release cycle, nevermind any kind of long-term support.

With this in mind (not even considering some hostile emails that I recently received from the upstream developer or his public rants on lkml and reddit), I decided to remove bcachefs-tools from Debian completely. Although after discussing this with another DD, I was convinced to orphan it instead, which I have now done. I made an upload to experimental so that it’s still available if someone wants to work on it (without having to go through NEW again), it’s been removed from unstable so that it doesn’t migrate to testing, and the ancient (especially by bcachefs-tools standards) versions that are in stable and oldstable will be removed too, since they are very likely to cause damage with any recent kernel versions that support bcachefs.

And so, my adventure with bcachefs-tools comes to an end. I’d advise that if you consider using bcachefs for any kind of production use in the near future, you first consider how supportable it is long-term, and whether there’s really anyone at all that is succeeding in providing stable support for it.

29 August, 2024 01:04PM by jonathan

August 25, 2024

hackergotchi for Thomas Lange

Thomas Lange

Custom Live Media, also for Newer Hardware

At this year's Debian conference in South Korea I presented1 the new feature of the FAIme web service. You can now build your own Debian live media/ISO.

The web interface provides various settings, e.g. adding a user name and password, selecting the Debian release (stable or testing), the desktop environment and the language. Additionally you can add your own list of packages that will be installed into the live environment. It's possible to define a custom script that gets executed during the boot process. For remote access to the live system, you can easily specify a GitHub, GitLab or Salsa account, whose public ssh key will be used for passwordless root access. If your hardware needs special grub settings, you may also add those. I'm thinking about adding an autologin checkbox, so the live media could be used for a kiosk system.

And finally newer hardware is supported with the help of the backports kernel for the Debian stable release (aka bookworm). This combination is not available from the official Debian live images or the netinst media, because the latter has some complicated dependencies which are not that easy to resolve2. At DebConf24 I talked to Alper who has some ideas3 how to improve the Debian installer environment which then may support a backports kernel.

The FAI web service for live ISO is available at

      https://fai-project.org/FAIme/live

25 August, 2024 01:52PM

August 24, 2024

hackergotchi for Jonathan Dowland

Jonathan Dowland

Fediverse and feeds

It's clear that Twitter has been circling the drain for years, but things have been especially bad in recent times. I haven't quit (I have some sympathy with the viewpoint don't cede territory to fascists) but I try to read it much less, and I certainly post much less.

Especially at the moment, I really appreciate distractions.

Last time I wrote about Mastodon (by which I meant the Fediverse1), I was looking for a new instance to try. I settled on Debian's social instance2. I'm now trying to put any energy I might spend engaging on Twitter, into engaging in the Fediverse instead. (You can follow me via the handle @jon@dow.land, I think, which should repoint to my actual handle, @jmtd@pleroma.debian.social.)

There are other potential successors to Twitter: two big ones are Bluesky and Facebook-owned Threads. They are effectively cookie-cutter copies of the Twitter model, and so, we will repeat the same mistakes there. Sadly I see the majority of communities and sub-cultures I follow are migrating to one or the other of these.

The Fediverse (or the Mastodon-ish bits of it) should avoid the fate of Twitter. JWZ puts it better and more succinctly than I can.

The Fedi experience is, sadly, pretty clunky. So I want to try and write a bit from time to time with tips and tricks that might improve people's experiences.

First up, something I discovered only today about Mastodon instances. As JWZ noted, "If you are worried about picking the 'right' Mastodon instance, don't. Just spin the wheel." You can spend too much time trying to guess a good answer to this. Better to just get started.

At the same time, individual instances are supposed to cater to specific niches. So it could be useful to sample the public posts from an entire instance. For example, to find people to follow, or decide to hop over to that instance yourself. You can't (I think) follow an entire instance from within yours, but they usually have a public page which shows you the latest traffic.

For example, the infosec-themed instance infosec.exchange has one here: https://infosec.exchange/public/local

These pages don't provide RSS or Atom feeds3, sadly. I hope that's on the software's roadmap, and hasn't been spurned for ideological reasons. For now at least, OpenRSS provide RSS/Atom feeds for many Mastodon instances. For example, an RSS/Atom feed of the above: https://openrss.org/infosec.exchange/public/local

One can add these feeds to your Feed reader and over time get a flavour for the kind of discourse that takes place on given instances.
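
For example, a minimal sketch using the third-party Python feedparser library to sample such a feed:

import feedparser

# Fetch an instance's local timeline via OpenRSS and show recent posts.
feed = feedparser.parse("https://openrss.org/infosec.exchange/public/local")
for entry in feed.entries[:5]:
    print(entry.get("title", "(untitled)"), "-", entry.get("link", ""))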

I think OpenRSS have to manually add Mastodon instances to their service. I tried three instances and only one (infosec.exchange) worked. I'm not sure, but I think trying an instance that doesn't work automatically puts it on OpenRSS's backlog.


  1. the Fediverse-versus-Mastodon nomenclature problem is just the tip of the iceberg, in terms of adoption problems. Mastodon provides a twitter-like service that participates in the Fediverse. But it isn't correct to call the twitter-like service "Mastodon" because other softwares also participate in/provide that service. And it's not correct to call it "Fediverse" because that describes a bigger thing, with e.g. youtube clones also taking part. I'm not sure what the right term should be for "the twitter-like thing". Also, everything I wrote here is probably subtly wrong.
  2. Debian's instance actually runs Pleroma, an alternative to Mastodon. Why should it matter? I think it's healthy for there to be more than one implementation in an open ecosystem. However the experience can be janky, as the features don't perfectly align, some Mastodon features/APIs are not documented/standardised/etc.
  3. I have to remind myself that the concept of RSS/Atom feeds and Feedreaders might need explaining to a modern audience too. Perhaps in another blog post.

24 August, 2024 05:27PM

Russell Coker

Wifi 6E Mesh

I am looking into getting a Wifi mesh network. The aim is to use it to provide access to devices throughout my home, especially for devices on the congested 2.4GHz frequency. Ideally I want 6GHz Wifi 6E for the communication between mesh nodes as well as for talking to the few devices that are new enough to support it (I like buying cheap second hand devices). 2.5Gbit ethernet connections on all mesh nodes would be good too.

Wifi 7 is semi-released: you can buy devices even though the specs aren’t entirely finalised. I expect that next year, when Wifi 7 devices are more common, the second hand prices of Wifi 6E will drop. Currently Wifi 6E devices are somewhat expensive.

One major problem at the moment is “cloud configuration”. Here is a 41-page forum thread of TP-Link customers asking in vain for non-cloud configuration [1]. The problems with cloud configuration are that it doesn’t allow configuration without Internet access (so no fixing things when the Internet breaks and no use for a private network without Internet), it relies on a proprietary phone app (so a problem with your phone breaks everything), and it adds a dependency on an unpaid service that TP-Link might decide to turn off at some future time. The TP-Link Deco X55 AX3000 looks like a good set of devices: $328 for a set of three Wifi 6 (not 6E) devices is a good deal, but it’s a pity that the poor software options let it down.

TP-Link also seems to be scanning web traffic and sending the analysis to an external site [2]; the TP-Link software seems to be most accurately described as malware.

There is the OpenWrt project for open firmware on Wifi APs, which is a great project [3], but it doesn’t seem to support any Wifi 6 mesh systems yet. If most Wifi hardware requires malware for operation, it seems that running a VPN over Wifi is the way to go. A hostile party being able to sniff your home network is much worse than a hostile party sniffing public Internet traffic.

The Google Nest mesh devices have good specs and price: $359 for a three-node Wifi 6E mesh that has 2.5Gbit ethernet. But they can only be configured with a Google app for Android or iOS and require a Gmail account. Giving Google the ability to shut down all my stuff by deleting my gmail account is not acceptable. Also Google is well known for cancelling services [4]. A mitigating factor is that there should be enough of those devices sold to make them a good target for an OpenWRT port.

As an aside it looks like the TailScale mesh VPN system could be a solution to the security issues related to malware on Wifi APs problem [5]. There is also HeadScale which is the fully open source variant of that [6]. Even when the vendor isn’t overtly hostile they can make mistakes so encryption is good.

Kogan is selling an own-brand Wifi 6 mesh network package that comes with 1/2/3 devices for $70/$120/$140. It doesn’t do Wifi 6E but supports the better encoding methods of Wifi 6 over Wifi 5 and will be good for bridging a LAN in one part of a house to a Wifi 2.4GHz or Ethernet connected device in another part. They also support up to 7 nodes so you could buy two of the 3 device packages and run one network with 2 and another with 4. The pricing is very competitive and they support web based administration!

I’ve just ordered the $140 Wifi 6 pack from Kogan. If it doesn’t do what I want then I can find someone else who will be happy with whatever functionality it gives and $140 is an amount I can risk without concern. If it works well then I might upgrade to Wifi 6E or Wifi 7 next year and deploy the Wifi 6 one for a relative. It seems that for my needs a cheap and OK Wifi 6 device is better than an expensive Wifi 6E device.

24 August, 2024 07:56AM by etbe

Is Secure Boot Worth Using?

With news like this one cited by Bruce Schneier [1] people are asking whether it’s worth using Secure Boot.

Regarding the specific news article, this is always a risk with distributed public key encryption systems. Lose control of one private key and attackers can do bad things. That doesn’t make it bad, it just makes it less valuable. If you want to set up a system for a government agency, bank, or other high value target then it’s quite reasonable to expect an adversary to purchase systems of the same make and model to verify that their attacks will work. If you want to make your home PC a little harder to attack then you can expect that the likely adversaries won’t bother with such things. You don’t need security to be perfect; making a particular attack slightly more difficult than other potential attacks gives a large part of the benefit.

The purpose of Secure Boot is to verify the boot loader with a public key signature and then have the boot loader verify the kernel. Microsoft signs the “shim” that is used by each Linux distribution to load GRUB (or another boot loader). So when I configure a Debian system with Secure Boot enabled that doesn’t stop anyone from booting Ubuntu. Going by the signatures on the boot loader etc, there is no difference between my Debian installation and a rescue image from Debian, Ubuntu, or another distribution booted by a hostile party to do things against my interests. The difference between the legitimate OS image and malware is a matter of who boots it and the reason for booting it.

It is possible to deconfigure Microsoft keys from UEFI to only boot from your own key, this document describes what is necessary to do that [2]. Basically if you boot without using any “option ROMs” (which among other things means the ROM from your video card) then you can disable the MS keys.

If it’s impossible to disable the MS keys that doesn’t make it impossible to gain a benefit from the Secure Boot process. You can use a block device decryption process that incorporates a hash of the kernel and the BIOS into the decryption of the device. So if a system is booted with the wrong kernel and the user doesn’t recognise it then they will find that they can’t unlock the device with the password. I think it’s possible on some systems to run the Secure Boot functionality in a non-enforcing mode such that it will use a bootloader without a valid signature but still use the hash for TPM calculations. That appears impossible on my Thinkpad Yoga Gen3, which only has enabled and disabled as options, but should work on Dell laptops, which have an option to run Secure Boot in permissive mode.
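
To illustrate the idea, here is a conceptual sketch of my own in Rust (not a real TPM interface, and the standard library hasher is not cryptographic): if the unlock key is derived from both the password and a measurement of the boot chain, then booting a different kernel changes the measurement and the same password no longer unlocks the device.

    use std::collections::hash_map::DefaultHasher;
    use std::hash::{Hash, Hasher};

    // Conceptual only: a real system seals the key in the TPM against PCR
    // values; here we just mix the measurement into the key derivation.
    fn derive_unlock_key(password: &str, boot_measurement: &str) -> u64 {
        let mut h = DefaultHasher::new();
        password.hash(&mut h);
        boot_measurement.hash(&mut h);
        h.finish()
    }

    fn main() {
        // Made-up measurement strings for illustration.
        let sealed = derive_unlock_key("hunter2", "measurement-of-known-good-kernel");
        // Same password, different kernel: the derived key no longer matches.
        let attempt = derive_unlock_key("hunter2", "measurement-of-attacker-kernel");
        assert_ne!(sealed, attempt);
        println!("wrong kernel => wrong key => device stays locked");
    }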

I believe that the way of the future is to use something like EFIStub [3] to create unified kernel images with a signed kernel, initrd, and command-line parameters in a single bundle which can be loaded directly by the UEFI BIOS. From the perspective of a distribution developer it’s good to have many people using the current standard functionality of shim and GRUB for EFI as a step towards that goal.

CloudFlare has a good blog post about Linux kernel hardening [4]. In that post they cover the benefits of a full secure boot setup (which is difficult at the current time) and the way that secure boot enables the lockdown module for kernel integrity. When Secure Boot is detected by the kernel it automatically enables lockdown=integrity functionality (see this blog post for an explanation of lockdown [5]). It is possible to enable this by putting “lockdown=integrity” on the kernel command line or “lockdown=confidentiality” if you want even more protection, but it happens by default with Secure Boot. Secure Boot is something you can set to get a selection of security features enabled and a known minimum level of integrity even if the signatures aren’t used for anything useful; restricting a system to only boot kernels from MS, Debian, Ubuntu, Red Hat, etc is not useful by itself.

For most users I think that Secure Boot is a small increase in security, but testing it on a large number of systems allows increasing the overall security of operating systems, which benefits the world. Also I think that having features like EFIStub usable by a large portion of users (possibly the majority) is something that can be expected to happen within the lifetime of hardware being purchased now. So ensuring that Secure Boot works with GRUB now will facilitate using EFIStub etc in future years.

The Secure Boot page on the Debian wiki is worth reading, and also worth updating for people who want to contribute [6].

24 August, 2024 03:45AM by etbe

August 22, 2024

hackergotchi for Thomas Goirand

Thomas Goirand

Packaging Home Assistant

During Debconf, Edward Betts and I started packaging Home Assistant for Debian. It consists of hundreds of Python packages. So far, we have counted at least 675 packages. That’s a lot, though most packages are just libraries to talk to some IoT devices and some APIs. It’s fairly easy to create a new package: it takes me about 15 to 20 minutes, probably half that time for Edward. And it’s a lot of fun. So far, in one month, we have managed to package about a third of the list (probably 200+ Python packages already). Once we’ve done all the dependencies, we may start to have fun with the core of the application! At the current speed, hopefully we’ll be done before the end of the year. Edward and I have sworn to make at least one package a day, which I’ve been doing so far, and Edward has done way more… We also received contributions from Silton0506, Tianyu, piotr, EiPi Fun, sourabhtk37, and Count-Dracula, as per the very bottom of the TODO list in the wiki (see link below).

If you have a bit of free time, we’d love to have more contributors. Here’s where to get the needed information:

We created a team in Salsa: https://salsa.debian.org/homeassistant-team/

Our TODO list: https://wiki.debian.org/Python/HomeAssistant

Our DDPO Q/A page: https://qa.debian.org/developer.php?login=team%2Bhomeassistant%40tracker.debian.org

Feel free to join us on IRC: #debian-homeassistant

Discussing it with a lot of people, I realized that A LOT of DDs are actually using Home Assistant. Wouldn’t you like it better if it was just an “apt install” away? Any DD can simply take a package from the wiki, open an ITP, upload its debianized source to Salsa, and upload to the Debian archive. Most are very simple packages to make.

22 August, 2024 10:20PM by Goirand Thomas

hackergotchi for Jonathan McDowell

Jonathan McDowell

Thoughts on Advent of Code + Rust

Diego wrote about his dislike for Advent of Code and that reminded me I hadn’t written up my experience from 2023. Mostly because, spoiler, I never actually completed it and always intended to do so and then write it up. I think it’s time to accept I’m not going to do that, and write down some thoughts before I forget all of them. These are somewhat vague, given the time that’s elapsed, but I think still relevant. You might also find Roger’s problem write up interesting.

I’ve tried AoC a couple of times before; I think I had a very brief attempt back in 2021, and I got 4 days in for 2022. For Advent of Code 2023 I tried much harder to actually complete the challenges, and got most of the way there. I didn’t allow myself to move on to the next day until fully completing the previous day, and didn’t end up doing the second half of December 24th, or any of December 25th.

Rust

First I want to talk about Rust, which is the language I chose to use for the problems. I’ve dabbled a little in it, but I’d like more familiarity with the basic language, and some programming problems seemed like a good way to get that. It’s a language I want to like; I’ve spent a lot of my career writing C, do more in Go these days, and generally think Rust promises a low-level, runtime-light environment like C but with the rough edges taken off.

I set myself the challenge of using just bare Rust; no external crates, no use of cargo. I was accused of playing on hard mode by doing this, but it really wasn’t the intention - I figured that I should be able to do what I needed without recourse to anything outside the core language, and didn’t want what seemed like the extra complexity of dealing with cargo.

That caused problems, however. I’m used to by-default generic error handling in Go through the error type, but Rust seems to have much more tightly typed errors. I was pointed at anyhow as the right way to do this in Rust. I still find this surprising; I ended up using unwrap() a lot when I think with more generic error handling I could have used ?.
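
To show what I mean, here is a minimal sketch of my own (not code from the AoC solutions, and input.txt is just a placeholder): with Box<dyn Error> as the error type, ? can propagate errors using only the core language and standard library, no anyhow crate required.

    use std::error::Error;
    use std::fs;

    fn first_line(path: &str) -> Result<String, Box<dyn Error>> {
        // io::Error converts into Box<dyn Error> automatically via ?.
        let contents = fs::read_to_string(path)?;
        // Option -> Result; the &str error message also converts via ?.
        let line = contents.lines().next().ok_or("empty input")?;
        Ok(line.to_string())
    }

    fn main() -> Result<(), Box<dyn Error>> {
        println!("{}", first_line("input.txt")?);
        Ok(())
    }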

The other thing I discovered is that by default rustc produces unoptimised builds that are heavy on debug overhead. I got significantly better results on some of the solutions with rustc -O -C target-cpu=native source.rs. I probably shouldn’t be surprised by this, but it’s worth noting.

Rust, to me, has a syntax only a C++ programmer could love. I am not a C++ programmer. Coming from C I found Go to be a nice, simple syntax to learn. Rust has not been the same. There’s a lot more punctuation, and it’s not always clear to me what it’s doing. This applies more when reading other people’s code than when writing it myself, obviously, but I see a lot of Rust code that could give Perl a run for its money in terms of looking like line noise.

The borrow checker didn’t bug me too much, but did add overhead to my thinking. The Rust compiler is generally very good at outputting helpful error messages when the programmer is an idiot. I ended up having to use a RefCell for one solution, and using .iter() for loops rather than explicit iterators (why, why is this different?). I also kept forgetting to explicitly mark variables as mutable when declaring them.
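
As an illustration of those points (my own sketch, not code from the solutions): RefCell moves borrow checking from compile time to runtime, .iter() borrows the elements in a for loop, and variables need an explicit mut to be mutated.

    use std::cell::RefCell;

    fn main() {
        let scores = RefCell::new(vec![1, 2, 3]);

        // Mutate through a shared reference; the borrow rules are
        // checked at runtime (an overlapping borrow would panic).
        scores.borrow_mut().push(4);

        // Variables are immutable unless declared `mut`.
        let mut total = 0;
        for s in scores.borrow().iter() {
            total += s;
        }
        println!("total: {total}");
    }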

Things I liked? There’s a rich set of first class data types. Look, I’m a C programmer, I’m easily pleased. You give me some sort of hash array and I’ll be happy. Rust manages that, tuples, strings, all the standard bits any modern language can provide. The whole impl thing for adding methods to structures I like as a way of providing some abstraction, though I think Go has a nicer syntax for it. The compiler, as mentioned, is great at spitting out useful errors for the most part. Also although I wasn’t using external crates for AoC I do appreciate there’s a decent ecosystem there now (though that brings up another gripe: Rust seems to still be a fairly fast-moving target, to the extent I can no longer rely on the compiler in Debian stable to be able to compile random projects I find).
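
As a quick illustration of the impl point (my example, not from the post — Manhattan distance being a perennial AoC ingredient):

    // An impl block attaches methods to a plain struct, giving a
    // little abstraction over the raw fields.
    struct Point {
        x: i64,
        y: i64,
    }

    impl Point {
        fn manhattan_distance(&self, other: &Point) -> i64 {
            (self.x - other.x).abs() + (self.y - other.y).abs()
        }
    }

    fn main() {
        let a = Point { x: 0, y: 0 };
        let b = Point { x: 3, y: 4 };
        println!("{}", a.manhattan_distance(&b)); // prints 7
    }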

Advent of Code

Let’s talk about the advent of code bit now. Hopefully it’s long enough since it came out that this won’t be spoilers for anyone, but if you haven’t attempted the 2023 AoC and might, you might want to stop reading here.

First, a refresher on the format for those who might not be aware of it. Problems are posted daily from December 1st until the 25th. Each is in 2 parts; the second part is not viewable until you have provided the correct answer for the first part. There’s a whole leaderboard thing going on, but the puzzle opens at midnight UTC-5 so generally by the time I wake up and have time to look the problem has been solved many times over; no chance of getting listed.

Credit to AoC creator, Eric Wastl, for writing up the set of problems in an entertaining fashion. I quite enjoyed seeing how the puzzle would be phrased each day, and the whole thing obviously brings a lot of joy to folk I know.

I always start AoC thinking it’ll be a fun set of puzzles to solve. Then something happens and I miss a day or two, and all of a sudden I’ve a bunch of catching up to do and it’s all a bit more of a chore. I hit that at some points this time, but made a concerted effort to try and power through it.

That perseverance was required up front, because I found the second part of Day 1 to be ill specified, and had to iterate a few times to actually calculate the desired solution (IIRC, issues about whether sevenone at the end of a line ended up as 7 or 1 really tripped me up). I don’t recall any other problems that bit me as hard on the specification as this one, but it happening up front was unfortunate.

The short example input doesn’t always help with this either; either it’s not enough to be able to extrapolate patterns, or it doesn’t show all the variations you need to account for (that aren’t fully specified in the text), or in a few cases it turned out I needed to understand the shape of the actual data to produce a solution that could actually complete in a reasonable time.

Which brings me to another matter: sometimes brute force doesn’t actually work. This is fine, but the second part of the day’s problem can change the approach you’d take. So sometimes I got lucky in the way I handled the first half, and doing the second half was a simple 5 minute tweak, and sometimes I had to entirely change the way I was storing data.

You might claim that if I was a better programmer I’d have always produced a first half solution that was amenable to extension for the second half. First, I dispute that; I think there are always situations where the problem domain can change in enough directions that you can’t handle all of them without a lot of effort. Secondly, I didn’t find AoC an environment that encouraged me to optimise for generic solutions. Maybe some of the puzzles in isolation would allow for that, but a month of daily problems to solve while still engaging in regular life meant I hacked things up, took short cuts based on the knowledge I had of the input data, etc, etc.

Overall I can see the appeal, but the sheer quantity and the fact I write code as part of my day job just made it feel too much like a chore, rather than a fun mental exercise. I did wonder how they’d look as a set of interview puzzles (obviously a subset, rather than all of them), but I’m not sure how you’d actually use them for that - I wouldn’t want anyone to have to solve them in a live interview.

So, in case it’s not obvious, I’m not planning to engage in AoC again this year. But I’m continuing to persevere with Rust (though most of my work stuff is thankfully still Go).

22 August, 2024 05:48PM

hackergotchi for Matthew Garrett

Matthew Garrett

What the fuck is an SBAT and why does everyone suddenly care

Short version: Secure Boot Advanced Targeting and if that's enough for you you can skip the rest you're welcome.

Long version: When UEFI Secure Boot was specified, everyone involved was, well, a touch naive. The basic security model of Secure Boot is that all the code that ends up running in a kernel-level privileged environment should be validated before execution - the firmware verifies the bootloader, the bootloader verifies the kernel, the kernel verifies any additional runtime loaded kernel code, and now we have a trusted environment to impose any other security policy we want. Obviously people might screw up, but the spec included a way to revoke any signed components that turned out not to be trustworthy: simply add the hash of the untrustworthy code to a variable, and then refuse to load anything with that hash even if it's signed with a trusted key.

Unfortunately, as it turns out, scale. Every Linux distribution that works in the Secure Boot ecosystem generates their own bootloader binaries, and each of them has a different hash. If there's a vulnerability identified in the source code for said bootloader, there's a large number of different binaries that need to be revoked. And, well, the storage available to store the variable containing all these hashes is limited. There's simply not enough space to add a new set of hashes every time it turns out that grub (a bootloader initially written for a simpler time when there was no boot security and which has several separate image parsers and also a font parser and look you know where this is going) has another mechanism for a hostile actor to cause it to execute arbitrary code, so another solution was needed.

And that solution is SBAT. The general concept behind SBAT is pretty straightforward. Every important component in the boot chain declares a security generation that's incorporated into the signed binary. When a vulnerability is identified and fixed, that generation is incremented. An update can then be pushed that defines a minimum generation - boot components will look at the next item in the chain, compare its name and generation number to the ones stored in a firmware variable, and decide whether or not to execute it based on that. Instead of having to revoke a large number of individual hashes, it becomes possible to push one update that simply says "Any version of grub with a security generation below this number is considered untrustworthy".
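
As a sketch of the comparison logic (my own illustration with made-up component names and numbers; the real mechanism carries CSV-style metadata in a .sbat section of the signed binary and the revocation levels in a firmware variable):

    use std::collections::HashMap;

    // Each boot component carries a (name, generation) pair; the policy
    // stores the minimum generation still considered trustworthy.
    fn allowed(policy: &HashMap<&str, u32>, component: &str, generation: u32) -> bool {
        match policy.get(component) {
            // Component listed: trust it only if its generation is new enough.
            Some(&min_gen) => generation >= min_gen,
            // Component not listed: no SBAT objection to booting it.
            None => true,
        }
    }

    fn main() {
        let mut policy = HashMap::new();
        policy.insert("grub", 3); // "any grub below generation 3 is untrusted"

        assert!(!allowed(&policy, "grub", 2)); // old, vulnerable build: refused
        assert!(allowed(&policy, "grub", 3)); // fixed build: boots
        assert!(allowed(&policy, "systemd-boot", 1)); // not revoked: boots
        println!("SBAT policy checks passed");
    }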

So why is this suddenly relevant? SBAT was developed collaboratively between the Linux community and Microsoft, and Microsoft chose to push a Windows update that told systems not to trust versions of grub with a security generation below a certain level. This was because those versions of grub had genuine security vulnerabilities that would allow an attacker to compromise the Windows secure boot chain, and we've seen real world examples of malware wanting to do that (Black Lotus did so using a vulnerability in the Windows bootloader, but a vulnerability in grub would be just as viable for this). Viewed purely from a security perspective, this was a legitimate thing to want to do.

(An aside: the "Something has gone seriously wrong" message that's associated with people having a bad time as a result of this update? That's a message from shim, not any Microsoft code. Shim pays attention to SBAT updates in order to avoid violating the security assumptions made by other bootloaders on the system, so even though it was Microsoft that pushed the SBAT update, it's the Linux bootloader that refuses to run old versions of grub as a result. This is absolutely working as intended)

The problem we've ended up in is that several Linux distributions had not shipped versions of grub with a newer security generation, and so those versions of grub are assumed to be insecure (it's worth noting that grub is signed by individual distributions, not Microsoft, so there's no externally introduced lag here). Microsoft's stated intention was that Windows Update would only apply the SBAT update to systems that were Windows-only, and any dual-boot setups would instead be left vulnerable to attack until the installed distro updated its grub and shipped an SBAT update itself. Unfortunately, as is now obvious, that didn't work as intended and at least some dual-boot setups applied the update and that distribution's Shim refused to boot that distribution's grub.

What's the summary? Microsoft (understandably) didn't want it to be possible to attack Windows by using a vulnerable version of grub that could be tricked into executing arbitrary code and then introduce a bootkit into the Windows kernel during boot. Microsoft did this by pushing a Windows Update that updated the SBAT variable to indicate that known-vulnerable versions of grub shouldn't be allowed to boot on those systems. The distribution-provided Shim first-stage bootloader read this variable, read the SBAT section from the installed copy of grub, realised these conflicted, and refused to boot grub with the "Something has gone seriously wrong" message. This update was not supposed to apply to dual-boot systems, but did anyway. Basically:

1) Microsoft applied an update to systems where that update shouldn't have been applied
2) Some Linux distros failed to update their grub code and SBAT security generation when exploitable security vulnerabilities were identified in grub

The outcome is that some people can't boot their systems. I think there's plenty of blame here. Microsoft should have done more testing to ensure that dual-boot setups could be identified accurately. But also distributions shipping signed bootloaders should make sure that they're updating those and updating the security generation to match, because otherwise they're shipping a vector that can be used to attack other operating systems and that's kind of a violation of the social contract around all of this.

It's unfortunate that the victims here are largely end users faced with a system that suddenly refuses to boot the OS they want to boot. That should never happen. I don't think asking arbitrary end users whether they want secure boot updates is likely to result in good outcomes, and while I vaguely tend towards UEFI Secure Boot not being something that benefits most end users it's also a thing you really don't want to discover you want after the fact so I have sympathy for it being default on, so I do sympathise with Microsoft's choices here, other than the failed attempt to avoid the update on dual boot systems.

Anyway. I was extremely involved in the implementation of this for Linux back in 2012 and wrote the first prototype of Shim (which is now a massively better bootloader maintained by a wider set of people and that I haven't touched in years), so if you want to blame an individual please do feel free to blame me. This is something that shouldn't have happened, and unless you're either Microsoft or a Linux distribution it's not your fault. I'm sorry.


22 August, 2024 08:52AM

August 19, 2024

Client-side filtering of private data is a bad idea

(The issues described in this post have been fixed; I have not exhaustively researched whether any other issues exist)

Feeld is a dating app aimed largely at alternative relationship communities (think "classier Fetlife" for the most part), so unsurprisingly it's fairly popular in San Francisco. Their website makes the claim:

Can people see what or who I'm looking for?
No. You're the only person who can see which genders or sexualities you're looking for. Your curiosity and privacy are always protected.


which is based on you being able to restrict searches to people of specific genders, sexualities, or relationship situations. This sort of claim is one of those things that just sits in the back of my head worrying me, so I checked it out.

First step was to grab a copy of the Android APK (there are multiple sites that scrape them from the Play Store) and run it through apk-mitm - Android apps by default don't trust any additional certificates in the device certificate store, and also frequently implement certificate pinning. apk-mitm pulls apart the apk, looks for known http libraries, disables pinning, and sets the appropriate manifest options for the app to trust additional certificates. Then I set up mitmproxy, installed the cert on a test phone, and installed the app. Now I was ready to start.

What became immediately clear was that the app was using graphql to query. What was a little more surprising is that it appears to have been implemented such that there's no server state - when browsing profiles, the client requests a batch of profiles along with a list of profiles that the client has already seen. This has the advantage that the server doesn't need to keep track of a session, but also means that queries just keep getting larger and larger the more you swipe. I'm not a web developer; I have absolutely no idea what the tradeoffs are here, so I point this out as a point of interest rather than anything else.

Anyway. For people unfamiliar with graphql, it's basically a way to query a database and define the set of fields you want returned. Let's take the example of requesting a user's profile. You'd provide the profile ID in question, and request their bio, age, rough distance, status, photos, and other bits of data that the client should show. So far so good. But what happens if we request other data?
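
As a rough illustration (my own sketch with hypothetical field names and profile IDs, not Feeld's actual schema or API): a graphql query is just text naming the fields the client wants returned, so the client - not the server - decides what it asks for, and the server must enforce what it is allowed to get back.

    // Build a graphql query string requesting the given fields for a profile.
    fn profile_query(profile_id: &str, fields: &[&str]) -> String {
        format!(
            "query {{ profile(id: \"{}\") {{ {} }} }}",
            profile_id,
            fields.join(" ")
        )
    }

    fn main() {
        // What a client might normally request...
        let normal = profile_query("12345", &["bio", "age", "distance", "photos"]);
        // ...and the same query with one extra field appended by a curious client.
        let curious = profile_query("12345", &["bio", "age", "distance", "lookingFor"]);
        println!("{normal}\n{curious}");
    }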

graphql supports introspection to request a copy of the database schema, but this feature is optional and was disabled in this case. Could I find this data anywhere else? Pulling apart the apk revealed that it's a React Native app, so effectively a framework for allowing writing of native apps in Javascript. Sometimes you'll be lucky and find the actual Javascript source there, but these days it's more common to find Hermes blobs. Fortunately hermes-dec exists and does a decent job of recovering something that approximates the original input, and from this I was able to find various lists of database fields.

So, remember that original FAQ statement, that your desires would never be shown to anyone else? One of the fields mentioned in the app was "lookingFor", a field that wasn't present in the default profile query. What happens if we perform the incredibly complicated hack of exporting a profile query as a curl statement, adding "lookingFor" into the set of requested fields, and running it?

Oops.

So, point 1 is that you can't simply protect data by having your client not ask for it - private data must never be released. But there was a whole separate class of issue that was an even more obvious issue.

Looking more closely at the profile data returned, I noticed that there were fields there that weren't being displayed in the UI. Those included things like "ageRange", the range of ages that the profile owner was interested in, and also whether the profile owner had already "liked" or "disliked" your profile (which means a bunch of the profiles you see may already have turned you down, but the app simply didn't show that). This isn't ideal, but what was more concerning was that profiles that were flagged as hidden were still being sent to the app and then just not displayed to the user. Another example of this is that the app supports associating your profile with profiles belonging to partners - if one of those profiles was then hidden, the app would stop showing the partnership, but was still providing the profile ID in the query response and querying that ID would still show the hidden profile contents.

Reporting this was inconvenient. There was no security contact listed on the website or in the app. I ended up finding Feeld's head of trust and safety on Linkedin, paying for a month of Linkedin Pro, and messaging them that way. I was then directed towards a HackerOne program with a link to terms and conditions that 404ed, and it took a while to convince them I was uninterested in signing up to a program without explicit terms and conditions. Finally I was just asked to email security@, and successfully got in touch. I heard nothing back, but after prompting was told that the issues were fixed - I then looked some more, found another example of the same sort of issue, and eventually that was fixed as well. I've now been informed that work has been done to ensure that this entire class of issue has been dealt with, but I haven't done any significant amount of work to ensure that that's the case.

You can't trust clients. You can't give them information and assume they'll never show it to anyone. You can't put private data in a database with no additional acls and just rely on nobody ever asking for it. You also can't find a single instance of this sort of issue and fix it without verifying that there aren't other examples of the same class. I'm glad that Feeld engaged with me earnestly and fixed these issues, and I really do hope that this has altered their development model such that it's not something that comes up again in future.

(Edit to add: as far as I can tell, pictures tagged as "private" which are only supposed to be visible if there's a match were appropriately protected, and while there is a "location" field that contains latitude and longitude this appears to only return 0 rather than leaking precise location. I also saw no evidence that email addresses, real names, or any billing data was leaked in any way)


19 August, 2024 07:03PM