Just a "warn your brothers" for people foolish enough to
use GKE and run on the Rapid release channel.
The update from version 1.31.1-gke.1146000 to 1.31.1-gke.1678000 causes
trouble whenever NetworkPolicy resources and a readinessProbe (or health check)
are configured. As a workaround we started to remove the NetworkPolicy
resources, e.g. when kustomize is involved, with a patch like this:
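A minimal sketch of such a kustomize patch (the NetworkPolicy name "my-app" is a placeholder for whatever your overlay defines):

```yaml
# kustomization.yaml -- illustrative sketch; the resource name is an assumption
patches:
  - patch: |-
      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      metadata:
        name: my-app
      $patch: delete
```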
We tried to update to the latest version - right now 1.31.1-gke.2008000 - which
did not change anything.
The behaviour is pretty erratic: sometimes it still works, and sometimes the traffic
is denied. It also seems that some relevant change landed in 1.31.1-gke.1678000,
because that is now the oldest release of 1.31.1 which I can find in the regular and
rapid release channels. The last known good version, 1.31.1-gke.1146000, is no
longer available to try a downgrade.
The number of FAIme jobs has reached 30,000. Yeah!
At the end of this November the FAIme web service for building customized ISOs turns 7 years old.
It had reached 10,000 jobs in March 2021 and 20,000 jobs in
June 2023. A nice increase in usage.
Here are some statistics for the jobs processed in 2024:
Type of jobs
   3% cloud image
  11% live ISO
  86% install ISO
Distribution
   2% bullseye
   8% trixie
  12% ubuntu 24.04
  78% bookworm
Misc
18% used a custom postinst script
11% provided their ssh pub key for passwordless root login
50% of the jobs didn't include a desktop environment at
all; among the others GNOME, Xfce, KDE and the Ubuntu desktop were the most popular.
The biggest ISO was a FAIme job which created a live ISO with a desktop and some additional packages.
This job took 30 min to finish and the resulting ISO was 18 GB in size.
Execution Times
The cloud and live ISOs need more time for their creation because the
FAIme server needs to unpack and install all packages. For the install
ISO the packages are only downloaded. The number of software
packages also affects the build time.
Every ISO is built in a VM on an old 6-core Xeon E5-1650 v2.
Times given are calculated from the jobs of the past two weeks.
Job type             Avg     Max
install no desktop   1 min   2 min
install GNOME        2 min   5 min
The times for Ubuntu without and with desktop are one minute higher than those mentioned above.
Job type             Avg     Max
live no desktop      4 min   6 min
live GNOME           8 min  11 min
The times for cloud images are similar to live images.
A New Feature
For a few weeks now, the system has been showing the number of jobs
ahead of you in the queue when you submit a job that cannot be
processed immediately.
The Next Milestone
At the end of this year the FAI project will be 25 years old.
If you have a success story of your FAI usage to share please post it
to the linux-fai mailing list or send it to me.
Do you know the FAI questionnaire? A lot of
reports are already available.
Here's an overview of what happened in the past 20 years in the FAI
project.
About FAIme
FAIme is the service for building your own customized ISO via a web
interface. You can create an installation or live ISO or a cloud
image. Several Debian releases can be selected and also Ubuntu
server or Ubuntu desktop installation ISOs can be customized.
Multiple options are available like selecting a desktop and the language, adding your own package
list, choosing a partition layout, adding a user, choosing a backports
kernel, adding a postinst script and some more.
This looks straightforward and is far from it. I expect tool support will
improve in the future. Meanwhile, this blog post serves as a step by step
explanation for what is going on in code that I'm about to push to my team.
Let's take this relatively straightforward Python code. It has a function
printing an int, and a decorator that makes its argument optional, taking it
from a global default if missing:
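Reconstructed from the later listings, the starting point looks like this: a plain decorator, no functools.wraps, no typing yet, plus a small mock-based check at the end:

```python
from unittest import mock

default = 42


def with_default(f):
    # Substitute the module-level default when value is omitted
    def wrapped(self, value=None):
        if value is None:
            value = default
        return f(self, value)

    return wrapped


class Fiddle:
    @with_default
    def print(self, value):
        assert value is not None
        print("Answer:", value)


fiddle = Fiddle()
fiddle.print(12)
fiddle.print()


def mocked(self, value=None):
    print("Mocked answer:", value)


# autospec inspects wrapped's signature, where value is optional, so both
# calls below work
with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    fiddle.print()
```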
It lacks functools.wraps and typing, though. Let's add them.
Adding functools.wraps
After adding a simple @functools.wraps, mock unexpectedly stops working:
# python3 test1.py
Answer: 12
Answer: 42
Mocked answer: 12
Traceback (most recent call last):
File "/home/enrico/lavori/freexian/tt/test1.py", line 42, in <module>
fiddle.print()
File "<string>", line 2, in print
File "/usr/lib/python3.11/unittest/mock.py", line 186, in checksig
sig.bind(*args, **kwargs)
File "/usr/lib/python3.11/inspect.py", line 3211, in bind
return self._bind(args, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/inspect.py", line 3126, in _bind
raise TypeError(msg) from None
TypeError: missing a required argument: 'value'
This is the new code, with explanations and a fix:
# Introduce functools
import functools
from unittest import mock

default = 42


def with_default(f):
    @functools.wraps(f)
    def wrapped(self, value=None):
        if value is None:
            value = default
        return f(self, value)

    # Fix:
    # del wrapped.__wrapped__
    return wrapped


class Fiddle:
    @with_default
    def print(self, value):
        assert value is not None
        print("Answer:", value)


fiddle = Fiddle()
fiddle.print(12)
fiddle.print()


def mocked(self, value=None):
    print("Mocked answer:", value)


with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    # mock's autospec uses inspect.getsignature, which follows __wrapped__ set
    # by functools.wraps, which points to a wrong signature: the idea that
    # value is optional is now lost
    fiddle.print()
Adding typing
For simplicity, from now on let's change Fiddle.print to match its wrapped signature:
# Give up with making value not optional, to simplify things :(
def print(self, value: int | None = None) -> None:
    assert value is not None
    print("Answer:", value)
Typing with ParamSpec
# Introduce typing, try with ParamSpec
import functools
from typing import TYPE_CHECKING, ParamSpec, Callable
from unittest import mock

default = 42

P = ParamSpec("P")


def with_default(f: Callable[P, None]) -> Callable[P, None]:
    # Using ParamSpec we forward arguments, but we cannot use them!
    @functools.wraps(f)
    def wrapped(self, value: int | None = None) -> None:
        if value is None:
            value = default
        return f(self, value)

    return wrapped


class Fiddle:
    @with_default
    def print(self, value: int | None = None) -> None:
        assert value is not None
        print("Answer:", value)
mypy complains inside the wrapper, because while we forward arguments we don't
constrain them, so we can't be sure there is a value in there:
test2.py:17: error: Argument 2 has incompatible type "int"; expected "P.args" [arg-type]
test2.py:19: error: Incompatible return value type (got "_Wrapped[P, None, [Any, int | None], None]", expected "Callable[P, None]") [return-value]
test2.py:19: note: "_Wrapped[P, None, [Any, int | None], None].__call__" has type "Callable[[Arg(Any, 'self'), DefaultArg(int | None, 'value')], None]"
Typing with Callable
We can use explicit Callable argument lists:
# Introduce typing, try with Callable
import functools
from typing import TYPE_CHECKING, Callable, TypeVar
from unittest import mock

default = 42

A = TypeVar("A")


# Callable cannot represent the fact that the argument is optional, so now mypy
# complains if we try to omit it
def with_default(f: Callable[[A, int | None], None]) -> Callable[[A, int | None], None]:
    @functools.wraps(f)
    def wrapped(self: A, value: int | None = None) -> None:
        if value is None:
            value = default
        return f(self, value)

    return wrapped


class Fiddle:
    @with_default
    def print(self, value: int | None = None) -> None:
        assert value is not None
        print("Answer:", value)


if TYPE_CHECKING:
    reveal_type(Fiddle.print)

fiddle = Fiddle()
fiddle.print(12)
# !! Too few arguments for "print" of "Fiddle"  [call-arg]
fiddle.print()


def mocked(self, value=None):
    print("Mocked answer:", value)


with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    fiddle.print()
Now mypy complains when we try to omit the optional argument, because Callable
cannot represent optional arguments:
test3.py:32: note: Revealed type is "def (test3.Fiddle, Union[builtins.int, None])"
test3.py:37: error: Too few arguments for "print" of "Fiddle" [call-arg]
test3.py:46: error: Too few arguments for "print" of "Fiddle" [call-arg]
Callable cannot express complex signatures such as functions that take a
variadic number of arguments, overloaded functions, or functions that have
keyword-only parameters. However, these signatures can be expressed by
defining a Protocol class with a __call__() method:
Let's do that!
Typing with Protocol, take 1
# Introduce typing, try with Protocol
import functools
from typing import TYPE_CHECKING, Protocol, TypeVar, Generic, cast
from unittest import mock

default = 42

A = TypeVar("A", contravariant=True)


class Printer(Protocol, Generic[A]):
    def __call__(_, self: A, value: int | None = None) -> None:
        ...


def with_default(f: Printer[A]) -> Printer[A]:
    @functools.wraps(f)
    def wrapped(self: A, value: int | None = None) -> None:
        if value is None:
            value = default
        return f(self, value)

    return cast(Printer, wrapped)


class Fiddle:
    # function has a __get__ method to generate bound versions of itself.
    # The Printer protocol does not define it, so mypy is now unable to type
    # the bound method correctly
    @with_default
    def print(self, value: int | None = None) -> None:
        assert value is not None
        print("Answer:", value)


if TYPE_CHECKING:
    reveal_type(Fiddle.print)

fiddle = Fiddle()
# !! Argument 1 to "__call__" of "Printer" has incompatible type "int"; expected "Fiddle"
fiddle.print(12)
fiddle.print()


def mocked(self, value=None):
    print("Mocked answer:", value)


with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    fiddle.print()
New mypy complaints:
test4.py:41: error: Argument 1 to "__call__" of "Printer" has incompatible type "int"; expected "Fiddle" [arg-type]
test4.py:42: error: Missing positional argument "self" in call to "__call__" of "Printer" [call-arg]
test4.py:50: error: Argument 1 to "__call__" of "Printer" has incompatible type "int"; expected "Fiddle" [arg-type]
test4.py:51: error: Missing positional argument "self" in call to "__call__" of "Printer" [call-arg]
What happens with methods is that the function object has a __get__
method that generates a bound version of itself. Our Printer protocol does not
define it, so mypy is now unable to type the bound method correctly.
Typing with Protocol, take 2
So... we add the function descriptor methods to our Protocol!
# Introduce typing, try with Protocol, harder!
import functools
from typing import TYPE_CHECKING, Protocol, TypeVar, Generic, cast, overload, Union
from unittest import mock

default = 42

A = TypeVar("A", contravariant=True)


# We now produce typing for the whole function descriptor protocol
#
# See https://github.com/python/typing/discussions/1040


class BoundPrinter(Protocol):
    """Protocol typing for bound printer methods."""

    def __call__(_, value: int | None = None) -> None:
        """Bound signature."""


class Printer(Protocol, Generic[A]):
    """Protocol typing for printer methods."""

    # noqa annotations are overrides for flake8 being confused, giving either D418:
    #   Function/ Method decorated with @overload shouldn't contain a docstring
    # or D105:
    #   Missing docstring in magic method
    #
    # F841 is for vulture being confused:
    #   unused variable 'objtype' (100% confidence)

    @overload
    def __get__(  # noqa: D105
        self, obj: A, objtype: type[A] | None = None  # noqa: F841
    ) -> BoundPrinter:
        ...

    @overload
    def __get__(  # noqa: D105
        self, obj: None, objtype: type[A] | None = None  # noqa: F841
    ) -> "Printer[A]":
        ...

    def __get__(
        self, obj: A | None, objtype: type[A] | None = None  # noqa: F841
    ) -> Union[BoundPrinter, "Printer[A]"]:
        """Implement function descriptor protocol for class methods."""

    def __call__(_, self: A, value: int | None = None) -> None:
        """Unbound signature."""


def with_default(f: Printer[A]) -> Printer[A]:
    @functools.wraps(f)
    def wrapped(self: A, value: int | None = None) -> None:
        if value is None:
            value = default
        return f(self, value)

    return cast(Printer, wrapped)


class Fiddle:
    @with_default
    def print(self, value: int | None = None) -> None:
        assert value is not None
        print("Answer:", value)


fiddle = Fiddle()
fiddle.print(12)
fiddle.print()


def mocked(self, value=None):
    print("Mocked answer:", value)


with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    fiddle.print()
Oddly enough, as I parked the car, Allie Sherlock's first single was playing
on the radio. I photographed
Allie Sherlock and Zoe Clark four months ago. We can look forward to
the day when Fergus's hit Roadkill comes on the radio while driving.
Again this year, Arm offered to
host us for
a mini-debconf
in Cambridge. Roughly 60 people turned up on 10-13 October to the Arm
campus, where they made us really welcome. They even had some
Debian-themed treats made to spoil us!
Hacking together
For the first two days, we had a "mini-debcamp" with a disparate
group of people working on all sorts of things: Arm support, live
images, browser stuff, package uploads, etc. And (as is traditional)
lots of people doing last-minute work to prepare slides for their
talks.
Sessions and talks
Saturday and Sunday were two days devoted to more traditional
conference sessions. Our talks covered a typical range of Debian
subjects: a DPL "Bits" talk, an update from the Release Team, live
images. We also had some wider topics: handling your own data, what to
look for in the upcoming Post-Quantum Crypto world, and even me
talking about the ups and downs of Secure Boot. Plus a random set of
lightning talks too! :-)
Video team awesomeness
Lots of volunteers from the DebConf video team were on hand too
(both on-site and remotely!), so our talks were both streamed live and
recorded for posterity - see the links from the individual talk pages
in the wiki,
or http://meetings-archive.debian.net/pub/debian-meetings/2024/MiniDebConf-Cambridge/
for the full set if you'd like to see more.
A great time for all
Again, the mini-conf went well and feedback from attendees was very
positive. Thanks to all our helpers, and of course to our
sponsor: Arm for providing the venue
and infrastructure for the event, and all the food and drink too!
Photo credits: Andy Simpkins, Mark Brown, Jonathan Wiltshire. Thanks!
Late last month there was an announcement of a “severity 9.9 vulnerability” allowing remote code execution that affects “all GNU/Linux systems (plus others)” [1]. For something to affect all Linux systems that would have to be either a kernel issue or a sshd issue. The announcement included complaints about the lack of response of vendors and “And YES: I LOVE hyping the sh1t out of this stuff because apparently sensationalism is the only language that forces these people to fix”.
He seems to have a different experience of reporting bugs to me; I have had plenty of success getting bugs fixed without hyping them. I just report the bug, wait a while, and it gets fixed. I have reported potential security bugs without even bothering to try to prove that they were exploitable (any situation where you can make a program crash is potentially exploitable); I just report it and it gets fixed. I was very dubious about his ability to determine how serious a bug is and to accurately report it, so this wasn’t a situation where I was waiting for the disclosure to discover whether it affected me. I was quite confident that my systems wouldn’t be at any risk.
Analysis
Not All Linux Systems Run CUPS
When it was published my opinion was proven to be correct, it turned out to be a series of CUPS bugs [2]. To describe that as “all GNU/Linux systems (plus others)” seems like a vast overstatement, maybe a good thing to say if you want to be a TikTok influencer but not if you want to be known for computer security work.
For the Debian distribution the cups-browsed package (which seems to be the main exploitable one) is recommended by cups-daemon; as I have my Debian systems configured to not install recommended packages by default, it wasn’t installed on any of my systems. Also the vast majority of my systems don’t do printing and therefore don’t have any part of CUPS installed.
CUPS vs NAT
The next issue is that in Australia most home ISPs don’t have IPv6 enabled, and CUPS doesn’t do the things needed to allow receiving connections from the outside world via NAT with IPv4. If inbound port 631 is blocked on both TCP and UDP, as is the default on Australian home Internet, or if there is a correctly configured firewall in place, then the network is safe from attack. There is a feature called uPnP port forwarding [3] to allow server programs to ask a router to send inbound connections to them; this is apparently usually turned off by default in router configuration. If it is enabled then there are Debian packages of software to manage this; the miniupnpc package has the client (which can request NAT changes on the router) [4]. That package is not installed on any of my systems and for my home network I don’t use a router that runs uPnP.
The only program I knowingly run that uses uPnP is Warzone2100 and as I don’t play network games that doesn’t happen. Also as an aside in version 4.4.2-1 of warzone2100 in Debian and Ubuntu I made it use Bubblewrap to run the game in a container. So a Remote Code Execution bug in Warzone 2100 won’t be an immediate win for an attacker (exploits via X11 or Wayland are another issue).
To check SE Linux access I first use the “semanage fcontext” command to check the context of the binary, cupsd_exec_t means that the daemon runs as cupsd_t. Then I checked what file access is granted with the sesearch program, mostly just access to temporary files, cupsd config files, the faillog, the Kerberos cache files (not used on the Kerberos client systems I run), Samba run files (might be a possibility of exploiting something there), and the security_t used for interfacing with kernel security infrastructure. I then checked the access to the security class and found that it is permitted to check contexts and access-vectors – not access that can be harmful.
The next test was to use sesearch to discover what capabilities are granted, which unfortunately includes the sys_admin capability, that is a capability that allows many sysadmin tasks that could be harmful (I just checked the Fedora source and Fedora 42 has the same access). Whether the sys_admin capability can be used to do bad things with the limited access cupsd_t has to device nodes etc is not clear. But this access is undesirable.
So the SE Linux policy in Debian and Fedora will stop cupsd_t from writing SETUID programs that can be used by random users for root access and stop it from writing to /etc/shadow etc. But the sys_admin capability might allow it to do hostile things and I have already uploaded a changed policy to Debian/Unstable to remove that. The sys_rawio capability also looked concerning but it’s apparently needed to probe for USB printers and as the domain has no access to block devices it is otherwise harmless. Below are the commands I used to discover what the policy allows and the output from them.
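A sketch of the checks described above (output omitted; exact rules vary by policy version, and the commands require the policycoreutils and setools packages on an SE Linux system):

```shell
# Find the file contexts for the cups binaries (shows cupsd_exec_t,
# meaning the daemon runs as cupsd_t)
semanage fcontext -l | grep -i cups

# What file access is granted to cupsd_t?
sesearch -A -s cupsd_t -c file

# What access to the security class?
sesearch -A -s cupsd_t -c security

# Which capabilities are granted? (this is where sys_admin shows up)
sesearch -A -s cupsd_t -c capability
```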
This is an example of how not to handle security issues. Some degree of promotion is acceptable but this is very excessive and will result in people not taking security announcements seriously in future. I wonder if this is even a good career move by the researcher in question, will enough people believe that they actually did something good in this that it outweighs the number of people who think it’s misleading at best?
Whilst researching what synth to buy, I learned of the Behringer1
Model-D2: a 2018 clone of the 1970 Moog Minimoog, in a desktop form
factor.
Behringer Model-D
In common with the original Minimoog, it's a monophonic analogue synth,
featuring three audible oscillators3, Moog's famous ladder filter and
a basic envelope generator. The Model-D has lost the keyboard from the
original and added some patch points for the different stages, enabling
some slight re-routing of the audio components.
1970 Moog Minimoog
Since I was focussing on more fundamental, back-to-basics
instruments,
this was very appealing to me. I'm very curious to find out what's so compelling
about the famous Moog sound. The relative lack of features feels like an
advantage: less to master. The additional patch points make it a little
more flexible and offer a potential gateway into the world of modular synthesis.
The Model-D is also very affordable: about £200. I'll never
own a real Moog.
For this to work, I would need to supplement it with some other equipment.
I'd need a keyboard (or press the Micron into service as a controller); I
would want some way of recording and overdubbing (same as with any synth).
There are no post-mix effects on the Model-D, such as delay, reverb or
chorus, so I may also want something to add those.
What stopped me was partly the realisation that there was little chance that a
perennial beginner, such as I, could eke anything novel out of a synthesiser
design that's 54 years old. Perhaps that shouldn't matter, but it gave me
pause. Whilst the Model-D has patch points, I don't have anything to connect
to them, and I'm firmly wanting to avoid the Modular Synthesis money pit.
The lack of effects, and polyphony could make it hard to live-sculpt a tone.
I started characterizing the Model-D as the "heart" choice, but it seemed
wise to instead go for a "head" choice.
Maybe another day!
There's a whole other blog post of material I could write about
Behringer and their clones of classic synths, some long out of production,
and others, not so much. But, I decided to skip on that for now.↩
taken from the fact that the Minimoog was a productised version
of Moog's fourth internal prototype, the model D.↩
What benefits do these things offer when a general purpose computer can do so
many things nowadays? Is there a USB keyboard that you can connect to a
laptop or phone to do these things? I presume that all recent phones have the
compute power to do all the synthesis you need if you have the right
software. Is it just a lack of software and infrastructure for doing it on
laptops/phones that makes synthesisers still viable?
I've decided to turn my response into a post of its own.
The issue is definitely not compute power. You can indeed attach a USB keyboard
to a computer and use a plethora of software synthesisers, including very
faithful emulations of all the popular classics. The raw compute power of
modern hardware synths is comparatively small: I’ve been told the modern Korg
digital synths are on a par with a raspberry pi. I’ve seen some DSPs which are
32 bit ARMs, and other tools which are roughly equivalent to arduinos.
I can think of four reasons hardware synths remain popular with some despite
the above:
As I touched on in my original synth post, computing dominates my
life outside of music already. I really wanted something separate from
that to keep mental distance from work.
Synths have hard real-time requirements. They don't have raw power in
compute terms, but they absolutely have to do their job within microseconds
of being instructed to, with no exceptions. Linux still has a long way to go
for hard real-time.
The Linux audio ecosystem is… complex. Dealing with pipewire, pulseaudio,
jack, alsa, oss, and anything else I've forgotten, as well as their failure
modes, is too time consuming.
The last point is to do with creativity and inspiration. A good synth is
more than the sum of its parts: it's an instrument, carefully designed and
its components integrated by musically-minded people who have set out to
create something to inspire. There are plenty of synths which aren't good
instruments, but have loads of features: they’re boxes of "stuff". Good
synths can't do it all: they often have limitations which you have to
respond to, work around or with, creatively. This was expressed better than
I could by Trent Reznor in the video archetype of a synthesiser:
I nearly did, but ultimately I didn't buy an Arturia Microfreak.
The Microfreak is a small form factor hybrid synth with a distinctive style.
It's priced at the low end of the market and it is overflowing with features.
It has a weird 2-octave keyboard which is a stylophone-style capacitive strip
rather than weighted keys. It seems to have plenty of controls, but given the
amount of features it has, much of that functionality is inevitably buried in
menus. The important stuff is front and centre, though. The digital
oscillators are routed through an analog filter. The Microfreak gained sampler
functionality in a firmware update that surprised and delighted its owners.
I watched a load of videos about the Microfreak, but the above review from
musician Stimming stuck
in my mind because it made a comparison between the Microfreak and Teenage
Engineering's OP-1.
The Teenage Engineering OP-1.
I'd been lusting after the OP-1 since it appeared in 2011: a
pocket-sized1 music making machine with eleven synthesis engines, a
sampler, and less conventional features such as an FM radio, a large colour
OLED display, and a four track recorder. That last feature in particular was
really appealing to me: I loved the idea of having an all-in-one machine to try
and compose music. Even then, I was not keen on involving conventional
computers in music making.
Of course in many ways it is a very compromised machine. I never did buy an
OP-1, and by now they've replaced it with a new model (the OP-1 field)
that costs 50% more (but doesn't seem to do 50% more); I'm still not buying one.
Framing the Microfreak in terms of the OP-1 made the penny drop for me.
The Microfreak doesn't have the four-track functionality, but almost no synth
has: I'm going to have to look at something external to provide that. But it
might capture a similar sense of fun; it's something I could use on the sofa,
in the spare room, on the train, during lunchbreaks at work, etc.
So I didn't buy the Microfreak. Maybe one day in the future once I'm further
down the road. Instead, I started to concentrate my search on more fundamental,
back-to-basics instruments…
A new minor release of the drat package
arrived on CRAN today, which is
just over a year since the previous release. drat stands for
drat R Archive Template, and helps with easy-to-create and
easy-to-use repositories for R packages. Since its inception in
early 2015 it has found reasonably widespread adoption among R users
because repositories with marked releases are the better way to
distribute code.
Because for once it really is as your mother told you: Friends
don’t let friends install random git commit snapshots. Properly
rolled-up releases it is. Just how CRAN shows us: a model that has
demonstrated for over two-and-a-half decades how to do this.
And you can too: drat is easy to use, documented by six
vignettes and just works. Detailed information about
drat is at its documentation site. That
said, and ‘these days’, if you mainly care about github code then r-universe is there too, also
offering binaries its makes and all that jazz. But sometimes you just
want to, or need to, roll a local repository and drat can help
you there.
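The basic workflow can be sketched in a couple of calls (the package file name and repository path below are placeholders):

```r
## insert a built package into a local drat repository;
## repodir defaults to getOption("dratRepo")
drat::insertPackage("myPkg_0.1.0.tar.gz", repodir = "~/git/drat")

## consumers add the repository (here a GitHub account hosting a
## drat repo) and install as usual
drat::addRepo("myaccount")
install.packages("myPkg")
```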
This release contains a small PR (made by Arne Holmin just after the
previous release) adding support for an ‘OSflavour’ variable (helpful
for macOS). We also corrected an issue with one test file being
insufficiently careful of using git2r only when installed,
and as usual did a round of maintenance for the package concerning both
continuous integration and documentation.
The NEWS file summarises the release as follows:
Changes in drat
version 0.2.5 (2024-10-21)
Function insertPackage has a new optional argument
OSflavour (Arne Holmin in #142)
A test file conditions correctly about git2r being present (Dirk)
Several smaller packaging updates and enhancements to continuous
integration and documentation have been added (Dirk)
I'm in the unlucky position to have to deal with GitHub. Thus
I've a terraform module in a project which deals with
populating organization secrets in our GitHub organization, and
assigning repositories access to those secrets.
Since the GitHub terraform provider internally works mostly
with repository IDs, not slugs (this human readable
organization/repo format), we've to do some mapping in between.
In my case it looks like this:
#tfvars Input for Module
org_secrets = {
"SECRET_A" = {
repos = [
"infra-foo",
"infra-baz",
"deployment-foobar",
]
}
"SECRET_B" = {
repos = [
"job-abc",
"job-xyz",
]
}
}
# Module Code
/*
Limitation: The GH search API which is queried returns at most 1000
results. Thus whenever we reach that limit this approach will no longer work.
The query is also intentionally limited to internal repositories right now.
*/
data "github_repositories" "repos" {
query = "org:myorg archived:false -is:public -is:private"
include_repo_id = true
}
/*
The properties of the github_repositories.repos data source queried
above contains only lists. Thus we've to manually establish a mapping
between the repository names we need as a lookup key later on, and the
repository id we got in another list from the search query above.
*/
locals {
# Assemble the set of repository names we need repo_ids for
repos = toset(flatten([for v in var.org_secrets : v.repos]))
# Walk through all names in the query result list and check
# if they're also in our repo set. If yes add the repo name -> id
# mapping to our resulting map
repos_and_ids = {
for i, v in data.github_repositories.repos.names : v => data.github_repositories.repos.repo_ids[i]
if contains(local.repos, v)
}
}
resource "github_actions_organization_secret" "org_secrets" {
for_each = var.org_secrets
secret_name = each.key
visibility = "selected"
# the logic how the secret value is sourced is omitted here
plaintext_value = data.xxx
selected_repository_ids = [
for r in each.value.repos : local.repos_and_ids[r]
if can(local.repos_and_ids[r])
]
}
Now if we do something bad, delete a repository and forget to remove it
from the configuration for the module, we receive an error message that a (numeric)
repository ID could not be found. That is pretty much useless for the average user, because
you have to figure out which repository is still in the configuration list but got deleted
recently.
Luckily, since version 1.2, terraform supports precondition checks, which we can
use in an output block to report which repository is missing. What we
need is the set of missing repositories and the validation condition:
locals {
# Debug facility in combination with an output and precondition check
# There we can report which repository we still have in our configuration
# but no longer get as a result from the data provider query
missing_repos = setsubtract(local.repos, data.github_repositories.repos.names)
}
# Debug facility - If we can not find every repository in our
# search query result, report those repos as an error
output "missing_repos" {
value = local.missing_repos
precondition {
condition = length(local.missing_repos) == 0
error_message = format("Repos in config missing from resultset: %v", local.missing_repos)
}
}
Now you only have to be aware that GitHub is GitHub: the TF provider has open bugs
but is not supported by GitHub, and you will encounter
inconsistent results. But
it works, even if your terraform apply fails that way.
As usual with these every-two-year posts, probably of direct interest only
to California residents. Maybe the more obscure things we're voting on
will be a minor curiosity to people elsewhere. I'm a bit late this year,
although not as late as last year, so a lot of people may have already
voted, but I've been doing this for a while and wanted to keep it up.
This post will only be about the ballot propositions. I don't have
anything useful to say about the candidates that isn't hyper-local. I
doubt anyone who has read my posts will be surprised by which candidates
I'm voting for.
As always with California ballot propositions, it's worth paying close
attention to which propositions were put on the ballot by the legislature,
usually because there's some state law requirement (often that I disagree
with) that they be voted on by the public, and propositions that were put
on the ballot by voter petition. The latter are often poorly written and
have hidden problems. As a general rule of thumb, I tend to default to
voting against propositions added by petition. This year, one can
conveniently distinguish by number: the single-digit propositions were
added by the legislature, and the two-digit ones were added by petition.
Proposition 2: YES. Issue $10 billion in bonds for public school
infrastructure improvements. I generally vote in favor of spending
measures like this unless they have some obvious problem. The opposition
argument is a deranged rant against immigrants and government debt and
fails to point out actual problems. The opposition argument also claims
this will result in higher property taxes and, seriously, if only that
were true. That would make me even more strongly in favor of it.
Proposition 3: YES. Enshrines the right to marriage without
regard to sex or race into the California state constitution. This is
already the law given US Supreme Court decisions, but fixing California
state law is a long-overdue and obvious cleanup step. One of the quixotic
things I would do if I were ever in government, which I will never be,
would be to try to clean up the laws to make them match reality, repealing
all of the dead clauses that were overturned by court decisions or are
never enforced. I am in favor of all measures in this direction even when
I don't agree with the direction of the change; here, as a bonus, I also
strongly agree with the change.
Proposition 4: YES. Issue $10 billion in bonds for
infrastructure improvements to mitigate climate risk. This is basically
the same argument as Proposition 2. The one drawback of this measure is
that it's kind of a mixed grab bag of stuff and probably some of it should
be supported out of the general budget rather than bonds, but I consider
this a minor problem. We definitely need to ramp up climate risk
mitigation efforts.
Proposition 5: YES. Reduces the required super-majority to pass
local bond measures for affordable housing from 67% to 55%. The fact that
this requires a supermajority at all is absurd, California desperately
needs to build more housing of any kind however we can, and publicly
funded housing is an excellent idea.
Proposition 6: YES. Eliminates "involuntary servitude" (in other
words, "temporary" slavery) as a legally permissible punishment for crimes
in the state of California. I'm one of the people who think the 13th
Amendment to the US Constitution shouldn't have an exception for
punishment for crimes, so obviously I'm in favor of this. This is one
very, very tiny step towards improving the absolutely atrocious prison
conditions in the state.
Proposition 32: YES. Raises the minimum wage to $18 per hour
from the current $16 per hour, over two years, and ties it to inflation.
This is one of the rare petition-based propositions that I will vote in
favor of because it's very straightforward, we clearly should be raising
the minimum wage, and living in California is absurdly expensive because
we refuse to build more housing (see Propositions 5 and 33). The
opposition argument is the standard lie that a higher minimum wage will
increase unemployment, which we know from numerous other natural
experiments is simply not true.
Proposition 33: NO. Repeals Costa-Hawkins, which prohibits local
municipalities from enacting rent control on properties built after 1995.
This one is going to split the progressive vote rather badly, I suspect.
California has a housing crisis caused by not enough housing supply. It
is not due to vacant housing, as much as some people would like you to
believe that; the numbers just don't add up. There are way more people
living here and wanting to live here than there is housing, so we need to
build more housing.
Rent control serves a valuable social function of providing stability to
people who already have housing, but it doesn't help, and can hurt, the
project of meeting actual housing demand. Rent control alone
creates a two-tier system where people who have housing are protected but
people who don't have housing have an even harder time getting housing
than they do today. It's therefore quite consistent with the general
NIMBY playbook of trying to protect the people who already have housing by
making life harder for the people who do not, while keeping the housing
supply essentially static.
I am in favor of rent control in conjunction with real measures to
increase the housing supply. I am therefore opposed to this proposition,
which allows rent control without any effort to increase housing supply.
I am quite certain that, if this passes, some municipalities will use it
to make constructing new high-density housing incredibly difficult by
requiring it all be rent-controlled low-income housing, thus cutting off
the supply of multi-tenant market-rate housing entirely. This is already
a common political goal in the part of California where I live. Local
neighborhood groups advocate for exactly this routinely in local political
fights.
Give me a mandate for new construction that breaks local zoning
obstructionism, including new market-rate housing to maintain a healthy
lifecycle of housing aging into affordable housing as wealthy people move
into new market-rate housing, and I will gladly support rent control
measures as part of that package. But rent control on its own just
allocates winners and losers without addressing the underlying problem.
Proposition 34: NO. This is an excellent example of why I vote
against petition propositions by default. This is a law designed to
affect exactly one organization in the state of California: the AIDS
Healthcare Foundation. The reason for this targeting is disputed; one
side claims it's because of the AHF support for Proposition 33, and
another side claims it's because AHF is a slumlord abusing California
state funding. I have no idea which side of this is true. I also don't
care, because I am fundamentally opposed to writing laws this way. Laws
should establish general, fair principles that are broadly applicable, not
be written with bizarrely specific conditions (health care providers that
operate multifamily housing) that will only be met by a single
organization. This kind of nonsense creates bad legal codes and the legal
equivalent of technical debt. Just don't do this.
Proposition 35: YES. I am, reluctantly, voting in favor of this
even though it is a petition proposition because it looks like a useful
simplification and cleanup of state health care funding, makes an expiring
tax permanent, and is supported by a very wide range of organizations that
I generally trust to know what they're talking about. No opposition
argument was filed, which I think is telling.
Proposition 36: NO. I am resigned to voting down attempts to
start new "war on drugs" nonsense for the rest of my life because the
people who believe in this crap will never, ever, ever stop. This one has
bonus shoplifting fear-mongering attached, something that touches on nasty
local politics that have included large retail chains manipulating crime
report statistics to give the impression that shoplifting is up
dramatically. It's yet another round of the truly horrific California
"three strikes" criminal penalty obsession, which completely
misunderstands both the causes of crime and the (almost nonexistent)
effectiveness of harsh punishment as deterrence.
Ada Lovelace Day was
celebrated on October 8 in 2024, and on this occasion, to recognize and
raise awareness of the contributions of women to the STEM fields, we
interviewed some of the women in Debian.
Here we share their thoughts, comments, and concerns with the hope of inspiring
more women to become part of the Sciences, and of course, to work inside of
Debian.
This article was simulcast to the debian-women mailing list.
Beatrice Torracca
1. Who are you?
I am Beatrice, I am Italian. Internet technology and everything computer-related
is just a hobby for me, not my line of work or the subject of my academic
studies. I have too many interests and too little time. I would like to do lots
of things and at the same time I am too Oblomovian to do any.
2. How did you get introduced to Debian?
As a user I started using newsgroups when I had my first dialup connection and
there was always talk about this strange thing called
Linux. Since moving from DR DOS to Windows was a shock
for me, feeling like I had lost control of my machine, I tried Linux with
Debian Potato and I have never strayed
from Debian since then for my personal equipment.
3. How long have you been into Debian?
Define "into". As a user... since Potato, too many years to count. As a
contributor, a similar amount of time, since early 2000 I think. My first
archived email about contributing to the translation of the description of
Debian packages dates from 2001.
4. Are you using Debian in your daily life? If yes, how?
Yes!! I use testing. I have it on my desktop PC at home and I have it on my
laptop. The desktop is where I have a local IMAP server that fetches all the
mails of my email accounts, and where I sync and back up all my data. On both I
do day-to-day stuff (from email to online banking, from shopping to taxes), all
forms of entertainment, a bit of work if I have to work from home
(GNU R for statistics,
LibreOffice... the usual suspects). At work I am
required to have another OS, sadly, but I am working on setting up a
Debian Live system to use there too.
Plus if at work we start doing bioinformatics there might be a Linux machine in
our future... I will of course suggest and hope for a Debian system.
5. Do you have any suggestions to improve women's participation in Debian?
This is a tough one. I am not sure. Maybe, more visibility for the women already
in the Debian Project, and make the newcomers feel seen, valued and welcomed. A
respectful and safe environment is key too, of course, but I think Debian made
huge progress in that aspect with the
Code of Conduct. I am a big fan of
promoting diversity and inclusion; there is always room for improvement.
Ileana Dumitrescu (ildumi)
1. Who are you?
I am just a girl in the world who likes cats and packaging
Free Software.
2. How did you get introduced to Debian?
I was tinkering with a computer running Debian a few years ago, and I decided to
learn more about Free Software. After a search or two, I found
Debian Women.
3. How long have you been into Debian?
I started looking into contributing to Debian in 2021. After contacting Debian
Women, I received a lot of information and helpful advice on different ways I
could contribute, and I decided package maintenance was the best fit for me. I
eventually became a Debian Maintainer in 2023, and I continue to maintain a few
packages in my spare time.
4. Are you using Debian in your daily life? If yes, how?
Yes, it is my favourite GNU/Linux operating system! I use it for email,
chatting, browsing, packaging, etc.
5. Do you have any suggestions to improve women's participation in Debian?
The mailing list for Debian Women may
attract more participation if it is utilized more. It is where I started, and I
imagine participation would increase if it were more engaging.
Kathara Sasikumar (kathara)
1. Who are you?
I'm Kathara Sasikumar, 22 years old and a recent Debian user turned Maintainer
from India. I try to become a creative person through sketching or playing
guitar chords, but it doesn't work! xD
2. How did you get introduced to Debian?
When I first started college, I was that overly enthusiastic student who signed
up for every club and volunteered for anything that crossed my path just like
every other fresher.
But then, the pandemic hit, and like many, I hit a low point. COVID depression
was real, and I was feeling pretty down. Around this time, the
FOSS Club at my college suddenly became more active.
My friends, knowing I had a love for free software, pushed me to join the club.
They thought it might help me lift my spirits and get out of the slump I was in.
At first, I joined only out of peer pressure, but once I got involved, the club
really took off. FOSS Club became more and more active during the pandemic, and
I found myself spending more and more time with it.
A year later, we had the opportunity to host a
MiniDebConf at our college, where I got to
meet a lot of Debian developers and maintainers. Attending their talks
and talking with them gave me a wider perspective on Debian, and I loved the
Debian philosophy.
At that time, I had been distro hopping but never quite settled down. I
occasionally used Debian but never stuck around. However, after the MiniDebConf,
I found myself using Debian more consistently, and it truly connected with me.
The community was incredibly warm and welcoming, which made all the difference.
3. How long have you been into Debian?
Now, I've been using Debian as my daily driver for about a year.
4. Are you using Debian in your daily life? If yes, how?
It has become my primary distro, and I use it every day for continuous learning
and working on various software projects with free and open-source tools. Plus,
I've recently become a Debian Maintainer (DM) and have taken on the
responsibility of maintaining a few packages. I'm looking forward to
contributing more to the Debian community 🙂
Rhonda D'Vine (rhonda)
1. Who are you?
My name is Rhonda, my pronouns are she/her, or per/pers. I'm 51 years old,
working in IT.
2. How did you get introduced to Debian?
I was already looking into Linux because of university; first it was
SuSE, and people played around with gtk. But when they
packaged GNOME and it just didn't even install, I
looked for alternatives. A colleague from back then gave me a CD of
Debian, though I couldn't install from it because
Slink didn't recognize the pcmcia
drive. I had to install it via floppy disks, but apart from that it was
quite well done. And the early GNOME was working, so I never looked back. 🙂
3. How long have you been into Debian?
Even before I was more involved, a colleague asked me whether I could help with
translating the release documentation. That was my first contribution to Debian,
for the slink release in early 1999. And I was using some other software before
on my SuSE systems, and I wanted to continue to use them on Debian obviously. So
that's how I got involved with packaging in Debian. But I continued to help with
translation work, for a long period of time I was almost the only person active
for the German part of the website.
4. Are you using Debian in your daily life? If yes, how?
Being involved with Debian has been a big part of the reason I got my jobs
for a long time now. I have always worked with maintaining Debian (or
Ubuntu) systems.
Privately I run Debian on my laptop, occasionally switching to Windows via
dual boot when (rarely) needed.
5. Do you have any suggestions to improve women's participation in Debian?
There are factors that we can't influence, like that a lot of women are pushed
into care work because patriarchal structures work that way, and don't have the
time nor energy to invest a lot into other things. But we could learn to
appreciate smaller contributions better, and not focus so much on the quantity
of contributions. When we look at longer discussions on mailing lists, those
that write more mails actually don't contribute more to the discussion, they
often repeat themselves without adding more substance. By working on our
own discussion patterns we could create a more welcoming environment for a lot
of people.
Sophie Brun (sophieb)
1. Who are you?
I'm a 44-year-old French woman. I'm married and I have 2 sons.
2. How did you get introduced to Debian?
In 2004 my boyfriend (now my husband) installed Debian on my personal computer
to introduce me to Debian. I knew almost nothing about Open Source. During my
engineering studies, a professor mentioned the existence of Linux,
Red Hat in particular, but without giving any details.
I've been a user since 2004. But I only started contributing to Debian in 2015:
I had quit my job and I wanted to work on something more meaningful. That's why
I joined my husband in Freexian, his company.
Unlike most people I think, I started contributing to Debian for my work. I only
became a DD in 2021 under gentle social pressure and when I felt confident
enough.
4. Are you using Debian in your daily life? If yes, how?
Of course I use Debian in my professional life for almost all the tasks: from
administrative tasks to Debian packaging.
I also use Debian in my personal life. I have very basic needs:
Firefox,
LibreOffice, GnuCash
and Rhythmbox are the main
applications I need.
Sruthi Chandran (srud)
1. Who are you?
A feminist, a librarian turned Free Software advocate, and a Debian Developer.
Part of the Debian Outreach team and the
DebConf Committee.
2. How did you get introduced to Debian?
I got introduced to the free software world and Debian through my husband. I
attended many Debian events with him. During one such event, out of curiosity, I
participated in a Debian packaging workshop. Just after that I visited a Tibetan
community in India and they mentioned that there was no proper Tibetan font in
GNU/Linux. A Tibetan font became my first package in Debian.
3. How long have you been into Debian?
I have been contributing to Debian since 2016 and Debian Developer since 2019.
4. Are you using Debian in your daily life? If yes, how?
I haven't used any other distro on my laptop since I got introduced to Debian.
5. Do you have any suggestions to improve women's participation in Debian?
I have been actively mentoring newcomers to Debian since I started
contributing myself. I especially work towards reducing the gender gap inside the
Debian and Free Software communities in general. In my experience, I believe that
the visibility of already existing women in the community will encourage more women
to participate. Also, I think we should reintroduce mentoring through
debian-women.
Tássia Camões Araújo (tassia)
1. Who are you?
Tássia Camões Araújo, a Brazilian living in Canada. I'm a passionate learner who
tries to push myself out of my comfort zone and always find something new to
learn. I also love to mentor people on their learning journey. But I don't
consider myself a typical geek. My challenge has always been to not get
distracted by the next project before I finish the one I have in my hands. That
said, I love being part of a community of geeks and feel empowered by it. I love
Debian for its technical excellence, and it's always reassuring to know that
someone is taking care of the things I don't like or can't do. When I'm not
around computers, one of my favorite things is to feel the wind on my cheeks,
usually while skating or riding a bike; I also love music, and I'm always
singing a melody in my head.
2. How did you get introduced to Debian?
As a student, I was privileged to be introduced to FLOSS at the same time I was
introduced to computer programming. My university could not afford to have labs
in the usual proprietary software model, and what seemed like a limitation at
the time turned out to be a great learning opportunity for me and my colleagues.
I joined this student-led initiative to "liberate" our servers and build
LTSP-based labs - where a single powerful computer could power a few dozen
diskless thin clients. How revolutionary it was at the time! And what an
achievement! From students to students, all using Debian. Most of that group
became close friends; I've married one of them, and a few of them also found
their way to Debian.
3. How long have you been into Debian?
I first used Debian in 2001, but my first real connection with the community was
attending DebConf 2004. Since then, going to DebConfs has become a habit. It is
that moment in the year when I reconnect with the global community and my
motivation to contribute is boosted. And you know, in 20 years I've seen people
become parents, grandparents, children grow up; we've had our own child and had
the pleasure of introducing him to the community; we've mourned the loss of
friends and healed together. I'd say Debian is like family, but not the kind you
get at random once you're born, Debian is my family by choice.
4. Are you using Debian in your daily life? If yes, how?
5. Do you have any suggestions to improve women's participation in Debian?
I think the most effective way to inspire other women is to give visibility to
active women in our community. Speaking at conferences, publishing content,
being vocal about what we do so that other women can see us and see themselves
in those positions in the future. It's not easy, and I don't like being in the
spotlight. It took me a long time to get comfortable with public speaking, so I
can understand the struggle of those who don't want to expose themselves. But I
believe that this space of vulnerability can open the way to new connections. It
can inspire trust and ultimately motivate our next generation. It's with this in
mind that I publish these lines.
Another point we can't neglect is that in Debian we work on a volunteer basis,
and this in itself puts us at a great disadvantage. In our societies, women
usually take a heavier load than their partners in terms of caretaking and other
invisible tasks, so it is hard to afford the free time needed to volunteer. This
is one of the reasons why I bring my son to the conferences I attend, and so far
I have received all the support I need to attend DebConfs with him. It is a way
to share the caregiving burden with our community - it takes a village to raise
a child. Besides allowing us to participate, it also serves to show other women
(and men) that you can have a family life and still contribute to Debian.
My feeling is that we are not doing super well in terms of diversity in Debian
at the moment, but that should not discourage us at all. That's the way it is
now, but that doesn't mean it will always be that way. I feel like we go through
cycles. I remember times when we had many more active female contributors, and
I'm confident that we can improve our ratio again in the future. In the
meantime, I just try to keep going, do my part, attract those I can, reassure
those who are too scared to come closer. Debian is a wonderful community, it is
a family, and of course a family cannot do without us, the women.
These interviews were conducted via email exchanges in October 2024. Thanks to
all the wonderful women who participated in this interview. We really appreciate
your contributions to Debian and to Free/Libre software.
In the past I haven’t had a high opinion of MG cars; decades ago they were small and expensive and didn’t seem to offer anything I wanted. As there’s a conveniently located MG dealer, I decided to try out an MG electric car and see if they are any good. I brought along two friends who are also interested in new technology.
I went to the MG dealer without any preconceptions or much prior knowledge of the MG electric cars, apart from having vaguely noticed that they were significantly cheaper than Teslas. I told the salesperson that I didn’t have a model in mind and just wanted to see what MG offers, so they offered me a test drive of an “MG4 64 EXCITE”. The MG web site isn’t very good and doesn’t give an indication of what this model costs; my recollection is that it’s something like $40,000, while the base model is advertised at $30,990. I’m not particularly interested in paying for extras above the base model, and the only really desirable feature that the “Excite 64” offers over the “Excite 51” is the extra range (the numbers 51 and 64 represent the battery capacity in kWh). The base model has a claimed range of 350 km, which is more than I drive in a typical week. Generally there are only about 4 days a year when I need to drive more than 300 km in a day, and on those rare days I can spend a bit of time at a charging station without much inconvenience.
The experience of driving an MG4 is not much different from other EVs I’ve driven; the difference between it and the Genesis GV60 (which was advertised at $117,000) [1] isn’t significant. The Genesis has some nice camera features, giving views from all directions and showing a view of the side on the dash when you put your turn indicator on. Also, some models of Genesis (not the one I test drove) have cameras instead of side mirrors. The MG4 lacks most of those cameras but has a very effective reversing camera which estimates the distance in cm to an “obstacle” behind you. Some of the MG electric cars have a sunroof or moonroof (a sunroof that opens only to transparent glass, not to the air); the one I tested didn’t have one and I didn’t feel I was missing much. While a moonroof is a nice feature, I probably won’t want to pay as much extra as they will demand for it.
The dash of the MG4 doesn’t have any simulation of an old-fashioned dash, unlike the Genesis GV60, which has a display in the traditional location showing analogue instruments (except when the turn indicators are on). The MG4 has two tablets: a big one in the middle of the front for controlling heating/cooling and probably other things like the radio, and a small one visible through the steering wheel which has the instruments. I didn’t have to think about the instruments; they just did the job, which is great.
For second-hand cars I looked at AutoTrader, which seems to be the only Australian site for second-hand cars that allows specifying electric as a search criterion [2]. Of the EVs advertised on that site, the cheapest are around $13,000 for cars about 10 years old and $21,000 for a 5yo LEAF. If you could only afford to spend $21,000 on a car then a 5yo LEAF would definitely be better than nothing, but when comparing a 5yo car for $21,000 and a new car for $31,000, the new car is the obvious choice if you can afford it. There was an Australian company importing used LEAFs and other EVs and selling them over the web for low prices; if they were still around and still selling LEAFs for $15,000 then that would make LEAF vs MG4 a difficult decision for me. But with the current prices for second-hand LEAFs the decision is easy.
When I enrolled for the test drive the dealer took my email address and sent me an automated message with details about the test drive and an email address to ask for more information. The email address they used bounced all mail, even from my Gmail account. They had a contact form on their web site, but that also doesn’t get a response. MG really should periodically test their dealers’ email addresses; they are probably losing sales because of this.
On the same day I visited a Hyundai dealer to see what they had to offer. A salesman there said that the cheapest Hyundai was $60,000 and suggested that I go elsewhere if I was prepared to buy a lesser car to save money. I don’t need to get negged by a car dealer, and I really don’t think there’s much scope for a car to be significantly better than the MG4 while also not competing with the Genesis cars. Genesis is a Hyundai brand and their cars are very nice, but the prices are well outside the range I’m prepared to pay.
Next I have to try BYD. From what I’ve heard, they are mostly selling somewhat expensive cars in Australia (a colleague recently got one for about $60,000, which he is extremely happy with), but hopefully they have some of the cheaper ones available too. I don’t want to flex on my neighbors; I just want a reliable and moderately comfortable car that doesn’t cost too much.
Cybersecurity is a vital topic for Switzerland and
social engineering attacks are a significant issue in the realm
of cybersecurity.
Organizations like Google, Facebook and LinkedIn could be seen as a
very effective social engineering attack against Swiss culture and
privacy.
Frans Pop, the
Debian Day Volunteer Suicide Victim, had sent at least one of
his suicide notes on debian-private gossip network the
night before Debian Day. If an organization can get into somebody's
head like that, such that decisions about life and death revolve
around this software, we could contemplate the possibility that
Frans Pop died under the influence of a social engineering culture.
Adrian von Bidder died on the same day that Carla and I got married.
Why can't we ask questions about that?
Switzerland reportedly has
a higher per-capita ratio of Debian Developers than any other country
except perhaps Ireland. Yet according to Shuttleworth's
email, many of these people have a loyalty to Debian culture that is above
their loyalty to Swiss employers and Swiss law. This dual
allegiance appears to be a sign that they are under the sway of
social engineering or at risk of external influence.
By way of background, in 2006, Adrian and Diana got
married. In 2007,
the suicide petition to Basel Stadt authorities
was signed by A. von Bidder.
In August 2010, we had the confirmed suicide of Frans Pop, the
warning from Mark Shuttleworth and a sustained period of stress
among volunteers in the Debian Developer world.
In April 2011, Adrian von Bidder died. It was discussed like a suicide
but they told us casually that it could be a heart attack.
There was no comment about whether the couple had any children
during the five years of their marriage.
On 28 April 2011, very soon after von Bidder died, Diana modified his blog,
adding a new post:
Sadly, I have to make an end to this blog. Adrian - my husband - died on april 17th of a heart attack.
Adrian von Bidder had made various blog posts with critical commentary
about the risks of social media and other devious enterprises.
Many of his concerns have been proven correct by the passage of time.
Yet I feel the manner in which Diana writes "I have to make an end to this
blog" has an air of disapproval for Adrian's work. Then again,
this must have been a very disturbing time for Diana and on top
of that, English may not be her native language so the tone
of her comments may not reflect her real thoughts and feelings about
the subject.
Some time later, Diana completely erased the blog, removed the DNS
entry for blog. and placed a picture on the main page
fortytwo.ch.
The picture's metadata tells us
it was taken on 20 January 2011 with a Canon EOS 40D, possibly
the camera Adrian discussed in some of his blog posts.
We know that other Debian Developers in Switzerland were subject
to social engineering attacks involving blackmail and public
humiliation. One of those cases was the blackmail of
Daniel Baumann. Did Adrian von Bidder receive similar messages
in the days before his heart attack?
Did Adrian von Bidder communicate with anybody before his
heart attack, for example, leaving a note? In English-speaking countries,
all these things are published by the coroner's office. In
Switzerland, it is the opposite, evidence is only given to those
in close proximity to the deceased. At the time, Diana may not
have known about the earlier suicide of Frans Pop. She may not
have realized there was the risk of a connection between deaths
in a single community. Now
the suicide cluster is public knowledge, is it time for a fresh
discussion about that?
Most cybersecurity experts around the world believe that
transparency is important for education and mitigating risks.
Here is a photo of Diana and Adrian on their wedding day:
Hitler and the Nazis were obsessed with the idea that Jews
could be identified by a distinctive smell. While America was
building the A-bomb, Hitler
diverted science funding to research the Jewish smell.
The smell was rumored to resemble sulfur.
It makes the case that there was a shift in the way that smell, beginning in the late nineteenth century, was used to not simply demarcate groups but, in addition, to supposedly detect ‘race’ and ethnicity.
Prominent Debian Developer Daniel Pocock has recently released
details of the
Swiss harassment judgment. His former landlady, an organizer of
the SVP senioren (far right Swiss seniors group) had started rumors
about a smell coming from Pocock's cats. Even the judge asked
if it could be acceptable to pose questions about this imaginary smell.
Obviously the judge was not familiar with this awkward similarity
to the persecution of Jewish and African people throughout
history.
For about six years now, people have been creating gossip about
harassment and abuse against various Debian co-authors. Nobody ever
provided any evidence.
Earlier this year, when I nominated in the European elections, the
misfits were desperate to attack me but they didn't have any grounds to do so.
They waited until the last minute before voting began and on
6 June 2024, the day before voting, they
published a document that appears to be invalid, full of forgeries, racism
and nonsense.
But wait, there really was a harassment case and a judgment.
With the Irish General Election approaching, I am considering whether
to nominate again and it is really important that people can see the
truth about who really harassed who.
Swiss racism, cats of colour, women harassing women and a 10,000 Swiss franc settlement
The only mistake I made was taking black cats to Zurich.
The real Debian harassment story is about women harassing women
and occasionally, a woman harassing our cats and women harassing men.
In Switzerland, both in the law and in the culture, when you have
a harassment problem like this the matter is usually settled privately
and everybody moves on with their life as quickly as possible.
Carla and our black cats, who are also female victims, were subject
to racism from a white Swiss woman. We received a payment of CHF 10,000.
Surely, if I wanted publicity, I would have rushed to publish that on my blog the same day.
But I didn't publish it before. When the WeMakeFedora case was resolved,
I immediately put it on my blog. But in the case of the harassment
in Zurich, I wanted to respect all the parties involved, I wanted to respect
the Swiss cultural approach to such disputes in Switzerland and just
put it out of my mind and get on with serious problems.
Nonetheless, Debianists, including people like Axel Beckert at
ETH Zurich and people at the Google office in Zurich, have been stirring
up rumors about the harassment and paw behavior for six years.
Ironically, the Google engineering headquarters for Europe
is located in Zurich and Google's role in spreading rumors about
the harassment case had actually undermined the privacy that
people used to take for granted in Switzerland.
Women harassing women: a common problem
In the case of serious violent crime against women, the majority
of perpetrators appear to be male.
In the case of less tangible crimes, like harassment, stalking,
racism and even sexism, we can find many cases where women are
either protagonists or associates of an offender.
The recent Netflix series
Baby Reindeer
cast a spotlight on the story of a woman harassing a male
employee at a bar.
In 2021, we saw a female volunteer, Molly de Blanc, start an online
petition
harassing her former boss, Dr Richard Stallman at FSF. Approximately
three thousand people joined the petition but a petition about a person
is not a real petition at all, it is harassment. de Blanc made the petition
more than two years after leaving her job at FSF.
In a previous blog, I looked at the case of another non-developing
Debian volunteer, Laura Arjona,
harassing one of my female interns in the Outreachy program.
After learning that this goes on behind mentors' backs, I didn't
volunteer to be a mentor again.
Then there was Amaya Rodrigo Sastre, who helped spread rumors
alleging that Ted Walther's partner at the DebConf6 dinner was
a prostitute. In fact, the woman was a dentist and
these rumors were disastrous for her reputation.
Ariadne Conill from the Alpine Linux project, which has no
relationship to Switzerland as far as I can tell, was spreading
the rumor that my intern in Google Summer of Code was my girlfriend.
The rumor was offensive to me but even more offensive to the intern
because
that was the year she got married.
Shortly before DebConf15, we received
nasty messages from Margarita (Marga) Manterola of Google telling us
that Carla is not welcome to eat the food at DebConf, despite the
fact that other women like Marga go there with their husbands every year.
While waiting for the train to go down the
Uetliberg one day, Carla and I were talking to a British woman
in the playground beside the railway station. The woman told
us about her Swiss landlady, a little old lady, who had been
whinging and whining about the behavior of her small children.
The Swiss landlady had become quite obsessed and had even been
caught at the window taking pictures of the way the children played
inside their home.
Looking at the
invalid and falsified legal documents distributed
by rogue members of Debian, we can find various references to
my Irish heritage. Everybody seems to know that I was born and
raised in Australia. I acquired Irish citizenship because my mother
is from Ireland. We find that the racist women in Switzerland, and
we'll see more of them in this blog, are not classifying people based
on our skills and talents, they are obsessed about little things
like my mother's Irish heritage. In fact, some of these
documents were prepared by two women in Zurich,
Pascale Koster and Albane die Ziegler. The documents don't mention
that I am a citizen of three countries, they emphasize my Irish
heritage as some kind of a hint to their racist colleagues
that my mother and I should be treated badly in Zurich.
What we see here is another example of women being offensive to
other women.
One of the most well known examples of women exhibiting poor
behavior to other women in Zurich was the infamous Oprah Winfrey
handbag incident. A woman in the handbag shop refused to let
Oprah look at a particular handbag. Oprah gives a testimony about
her experience with the Swiss saleswoman (Kauffrau) in this
video:
This brings us to the point where we will consider the paw behavior
of a Swiss landlady towards Carla and our black cats, who are both female
cats, so there was a female offender and three female victims.
I don't wish to make the generalization that all women are like
this. I've worked with many professional women who act with
integrity in everything they do. But when we see gossipmongers
making up stories about harassment in groups like Debian, we need
to remember the risk of listening to attention seekers
and their paid lawyers/liars. Gossip and
social engineering attacks go hand in hand and if we care
about cybersecurity, we need to call out gossip behavior.
Harassment and racism are not only Swiss problems
Before rushing to any conclusion about racism in Switzerland,
we need to remember that there is racism in every country.
When we look at the concerns about Brexit in the United Kingdom,
there was a lot of racism during the campaign period before the
referendum. Some of the practical changes in the UK, like
canceling the driving licenses of foreigners, actually happened
before the Brexit referendum. Likewise, whenever there is a Swiss
referendum about the relationship with the EU, some people
may voice racist opinions about the subject but there may be
some valid political or economic discussions that take place
at the same time.
We can also ask the question: are there times when Swiss citizens
are subject to extreme acts of bullying or extreme injustice by
employers, landladies or the public authorities? In fact,
some examples do exist.
Looking at the JuristGate
affair, we can see that the rogue legal protection scheme, which
smells like a Ponzi scheme, had both Swiss customers and foreign
customers. All the customers lost their money at the same time.
When FINMA shut down the rogue insurance, they hid the details
from everybody, both Swiss and foreign clients were kept in the dark
to an equal extent. Therefore, there was extraordinary injustice,
there were some foreign clients but racism wasn't the main theme
in JuristGate.
When I look at
the case of Adrian von Bidder (avbb / cmot),
the Debian Developer who died on our wedding day,
I wonder if he had one of the same bad experiences that
foreigners often complain about in Switzerland. For example, did one
of the health insurance companies bungle a treatment for his wife
or did an employer fail to make contributions to his pension scheme
and then go into liquidation?
Here is a photo of Diana and Adrian on their wedding day:
In Swiss culture, sensitivity about the cause of death is
an important cultural consideration. After blogging the
initial evidence about how the death was discussed in
the debian-private gossip channel, I came to realize that Adrian's
widow, Diana, was listed as a member of the Basel City
parliament. In such cases, there is obviously even more opportunity
to ask questions about the interaction between the death and any
environmental or cultural factors, whether in Debian or in his community.
At the same time, the cultural aversion to asking those questions
is a very steep obstacle.
Real harassment, real evidence ordered chronologically
Some time in 2017 or 2018, Chris Lamb, former leader of the
Debian project, started making mischievous references to harassment.
He didn't provide any facts, dates, victims or evidence.
Most of the larger property management companies in Zurich and
Switzerland are somewhat consistent in their application of tenancy
regulations.
When people find a nice apartment with a responsible landlord, they
usually keep the apartment for a very long time.
Some smaller buildings, usually sized between five and ten apartments,
are owned by a resident landlord or landlady. This gives rise to the
phenomenon whereby the landlady and tenant may cross paths almost every day.
It goes without saying that the turnover of tenants in some of these
owner-occupier buildings is much higher than in the buildings owned by
a silent investor.
Web sites advertising the apartments sometimes have a checkbox and
filter option for potential tenants to exclude apartments with a resident
landlady (Vermieter wohnt im Mehrfamilienhaus, i.e. the landlord
lives in the building). Most people
who have had a bad experience with one of these will go out of
their way to avoid them in future.
Due to the very high turnover in buildings with a resident landlady,
advertisements for such apartments appear far more often than the
actual share of such buildings would suggest.
Laundry duties & the status of women
Very new buildings in Switzerland have a washing machine and
clothes dryer in every apartment. Most traditional buildings and
some new buildings have a laundry room or drying room shared by all the
tenants. Most buildings have a handwritten roster where the tenants
can reserve the machines for a particular day.
You may only have one reservation to use the laundry every two weeks.
If that reservation falls on a work day and you have multiple loads
of washing to do then it can be very inconvenient. Nonetheless,
nobody sees any urgency to change this system. There is a prevailing
attitude that the wife or girlfriend will stay home on the laundry day
and ensure that all the clothes are nicely washed, dried and folded
and the laundry room is left in a proper state for the tenant
who will use it on the following day.
Switzerland is notable for its neutral status and hosting diplomats
from around the world at the United Nations in Geneva. But if
the washing machine breaks down and one tenant's drying time
runs over into the next day, there is anything but diplomacy
and tenants regress to communicating with each other through handwritten
notes written in
one of the four official Swiss languages.
The application process, religious harassment and cats
When tenants arrive to visit a prospective apartment, they are
given an application form that must be completed for the landlord
or letting manager.
They tend to ask more questions than necessary. It is not unusual
to find questions about your religious affiliations on the form.
We can quickly find examples of these forms in a search engine
by searching for words like Anmeldeformular and
Konfession (
Example 1,
Example 2,
Example 3).
In effect, if your religion has been
persecuted in Switzerland,
you may well feel that filling out the application form
is an experience of harassment.
News articles appear from time to time about whether or not
you should declare your religion. (
Example 1,
Example 2,
Example 3).
Not every Anmeldeformular asks about religion but
it is almost certain they will ask about your pets and musical
instruments. It is a good idea to answer those questions honestly
in any country. While some landlords and letting agents will decline
certain requests, others will be quite
happy to direct you to the most suitable apartments for your
lifestyle.
Whenever we applied for any apartment in Switzerland, we did
so with total honesty and integrity. We declared our cats
(Katzen):
Specifically, we have written Hauskatzen, which literally
translates to house cats. In other words, we are not
talking about something exotic like a tiger or panther.
No room for undocumented aliens
The confession of cat ownership led to a flurry of paperwork
mediated by the letting agent. Everybody who rents an apartment
in Switzerland is expected to purchase civil liability insurance
and pay three months of rent as a security deposit.
In our case, that simply wasn't enough. The landlady insisted
that we sign a guarantee against any paw behavior by our cats:
Costs anticipated by this document were already anticipated
by the security deposit and our civil liability insurance. Therefore,
I feel this additional cat contract was superfluous. Can we call
it harassment or bullying?
Fair wear and tear
Switzerland has high standards for construction and due to
the level of wealth, even the most mundane apartments typically
have very high quality components in their bathrooms and kitchens.
It is typical to have mixer taps on the showers and sinks, good water
pressure and wall mounted toilets.
When tenancies are concluded in Switzerland, the apartment or
house is subject to a forensic examination that may last several
hours.
It is expected that the tenant leaving an apartment will arrange
to have it cleaned back to the original state before the inspection
day.
Even if the bathroom is 30 or 40 years old, the high quality
components still look like new after each cleaning.
Nonetheless, internal components like washers and gaskets don't
last forever, no matter how beautiful the sinks and toilet bowls
appear on the outside.
In this particular apartment we experienced the failure of both
the shower mixer and the gasket joining the cistern to the toilet
bowl. Both of these things failed within a short span of time.
The plumber came promptly to make the necessary repairs.
Nonetheless, after the drama about whether our cats were a national
security risk, we were never on a good footing with this
particular landlady. She was 76 years old and the far right party,
of which she was a member, was constantly warning her to be
on the lookout for mischievous foreigners.
If you look at the far right propaganda circulated in advance of
referendums and elections in Switzerland, the foreigners are
typically depicted in black, like our cats.
At Kaltbad on the Rigi, we found a white cat in the snow:
A large professional landlord company with thousands of
apartments probably wouldn't worry about the cost of repairing
these washers and gaskets. On the other hand, for these owner-occupier
landladies who like to micro-manage their tenancies, some of them
stay up all night worrying about whether
tenants (or cats) do something like this as a prank.
Here is the report about the shower defect about two weeks
after we moved in. There is no way that tenants or cats could
have put rust into the pipes. These are simply the problems of
an old building.
Subject: bath / shower water problems
Date: Thu, 1 Dec 2016 09:39:29 +0100
From: Daniel Pocock <daniel@pocock.pro>
To: Letting agent
Hi [redacted],
The plumber visited today, he replaced the dishwasher door and the
shower hose.
He also looked at the flow from the hot water tap in the shower. He
found a lot of rust inside the tap.
He removed the hot and cold taps, cleaned out the taps and ran the water
directly from the pipes in the wall. A lot of rust came out of both hot
and cold pipes.
- the hot water pipe is now flowing better, but it is still less than normal
- water from both hot and cold pipes still has a slight red colour
He said he will contact you to explain and discuss how it can be fixed.
Regards,
Daniel
Cat smell letter
While I was on a trip to the UK, Carla received this ugly letter:
It says there is an unknown smell in the common areas and
it asks if the smell could come from our cats or deficiencies
in cleanliness.
Carla and the cats were really sad.
We contacted our legal insurance and had a lawyer draft a
response. We hoped that would be the end of the matter.
The window nazi
Then came the windows. There are 11 apartments in the
building and somebody would sometimes open one of the windows
in the stairwell and leave it open.
The landlady became obsessed with closing the windows and
leaving handwritten notes on the windows.
Mediation requested
After some months of receiving insults in the post and in the
common areas, it reached a point
where we had to take legal action. We demanded a mediation
session at the tribunal of Zurich.
Our cats were members of our family. Everybody loved our
cats. My Italian cousin came to stay with them on several occasions:
Remarkably, the landlady sent an expensive lawyer to repeat
the accusations about a cat smell, the window in the stairs and
a dirty towel that another tenant found in the washing room.
There were no fingerprints, no paw-prints, no video evidence,
no DNA evidence, not even a whisker to link any
of these problems to us. It was just a witch hunt: we had
black cats, we were the most recent arrivals in the building
and we were foreigners, so we felt we had been victimized.
Here is the accusation about a disobedient tenant who opens the window
in the stairs:
Swiss lawyer tried to deceive Swiss judge about far right membership
Early in the mediation session, the lawyer for the landlady claimed
that it wasn't clear whether or not she was really a member of the
far right political party.
We were able to show the judge that the landlady had a web site
promoting the party. Here is one of the photos, she is chairing a meeting
and the poster attached to the table has her name and face on it.
The filename tells us it is a meeting of the far right seniors committee
(SVP senioren):
In this photo, she is standing beside the then president of the
Kanton parliament, Dr. Christian Huber:
Shortly after the photo
was taken, Dr Huber resigned from the parliament and resigned
from the SVP in mysterious circumstances.
Dr Huber and his spouse spent the next ten years traveling around the
European Union by houseboat. This is ironic, of course: a leader
from an anti-immigration/anti-EU party living like a refugee in a boat
in the EU. In Australia the far right uses the term
boat people as a derogatory term for immigrants who travel by boat.
Mystery smell: who is defaming who?
Here is the accusation about a mysterious smell. The lawyer
is saying it is not clear where it comes from because he doesn't
want to be caught defaming foreigners directly. He doesn't provide any
expert evidence or witnesses, he basically says the landlady has a
hunch about this smell and the judge should trust the landlady.
The letting agent is also in the room and if the rumor was credible
he would have surely commented on it. I don't think he wanted
to comment about the smell at all so it came down to the expensive
lawyer to talk this imaginary smell into existence.
When I hear references to these mysterious smells, I feel it is
a way for the jurists to give each other a wink and a nod and ask
for the foreigners to be punished.
Every time Carla went down to the laundry in the basement,
the little old lady would appear. We don't know if she had
video surveillance cameras or if she spent all her day going
up and down the steps to check on the laundry.
Nonetheless, Carla had become quite upset about the cat letter
and the intrusions in the laundry and at some point I had to
start doing the laundry because it was impossible for Carla to
go down there alone.
The landlady was taken aback by the sight of a man in the laundry.
She started calling Carla's employer. We don't know what she was
hoping to achieve. Was she trying to determine if Carla had absconded?
Or was she trying to find out why the employer expected Carla to work
on laundry day?
The lawyer sent a stern letter demanding that these phone calls to
Carla's employer must cease immediately.
Frau [----] hat letzte Woche beim Arbeitsort meiner Mandantin
angerufen und unter Vorwand, sie wolle mit ihr sprechen,
gegenüber der Chefin meiner Mandnatin während ca. 30 Minuten
meine Mandanten im Zusammenhang mit dem vorliegenden Verfahren
angeschwärzt, resp. diese in ihrer Ehre verletzt.
Ich fordere Sie auf, Ihre Klientin über die Tragweite der
Bestimmungen über strafbare Handlungen gegen üble Nachrede
und Verleumdung zu informieren.
Es gab und gibt keinen Grund der direkten Kontaktaufnahme und
insbesondere keinen Grund für Ihr Klientin, beim Arbeitsort
meiner Mandantin anzurufen.
Sollte es noch einmal vorkommen, dass Ihre Klientin gegenüber
meinen Mandanten oder Dritten ausfällig wird und sich sonst
rassistisch äussert, so wird dies entsprechende Konsequenzen haben.
Ich denke auch nicht, dass das Verhalten Ihrer Klientin die
Verhandlungsbereitschaft meiner Mandanten bezüglich des
vorliegenden Verfahrens erhöht.
and translated into English:
Last week, [----] called my client's place of work and,
under the pretext that she wanted to speak to her,
spent around 30 minutes denigrating my client in connection
with the current proceedings to my client's boss, and insulted her honor.
I request that you inform your client of the scope of the
provisions on criminal offenses against slander and defamation.
There was and is no reason to make direct contact and in particular
no reason for your client to call my client's place of work.
Should your client become abusive towards my client or third parties
or otherwise make racist comments, this will have the appropriate
consequences.
I also do not think that your client's behavior increases my
client's willingness to negotiate with regard to the current proceedings.
Would a female judge in Zurich be any more sympathetic than a
female landlady? Maybe not. Here, the landlady's lawyer is explaining that
if the man (me) is busy with my job, the woman (Carla) can look for
another flat. The judge and the translator are both female.
Nobody calls out the sexism.
The search for a flat in Zurich is not a trivial task. In
German, the press refer to it as the Wohnungslotterie. When
a new building is about to be completed, hundreds of prospective
tenants line up outside to submit copies of their Anmeldeformular
in person.
What we see here is Swiss feminism, that is feminism for Swiss women.
I don't think it's up to a man to give the definition of feminism.
But I feel it is safe to say that Swiss feminism or Australian feminism
are contradictions because it is basically privileged women from
rich countries who go to university and become jurists and meddle
in the lives of women from other countries.
One of the reasons we are in court in the first place is because
Carla didn't feel comfortable being that woman from Latin America
who does laundry with the Swiss landlady looking over her shoulder.
When Swiss families want to apply for apartments,
they send their foreign nannies to stand in those queues and submit
the forms.
"In the early days ... every client meeting I would be asked to get the coffee. The other male graduates
were never asked to do such things,"
She's right: in more than twenty years since I graduated, nobody ever
asked me to make coffee in the workplace. And when I tried to share
responsibility for doing the laundry in Zurich, the landlady was
opposed to the idea. She seemed to feel that women like Carla were
easier to control.
In Renens, Canton Vaud, a white cat can sleep on the steps at
the railway station and nobody complains about the risk that
somebody might trip over the cat. Every ten minutes, the metro arrives
at the top of the steps and hundreds of people come down the steps to
search for their trains. There is a serious risk that somebody could
trip over the cat and suffer an injury. If it was a black cat, would
the police come with dogs to remove it?
Everybody in west Lausanne seems to know this cat but nobody
knows who it belongs to.
Here is the part of the trial where they talk about the landlady
calling Carla's workplace about the laundry:
Who owns that towel?
Given the lack of evidence about the imaginary cat smell, the
landlady had tried to diversify her legal strategy by introducing
a dirty towel that somebody found in the washing room.
Most landlords would simply provide a basket for
lost property. Even at Swiss prices, the cost of a basket for
these elusive towels and socks would be far less than the cost
of the lawyers.
The cat smell trial consisted of four jurists, an interpreter, the
letting manager and an engineer, myself. The combined cost of our
time was over CHF 2,000 per hour for three hours in court
debating the anxieties of a landlady who didn't show up.
In comparison, many Swiss residents drive over to Germany or
France each weekend for shopping. At
Action in France, you can buy another towel and a lost property
basket for a combined cost of less than ten Swiss francs.
The fact they tried to bring this towelgate affair into the courtroom
only proves that they had no serious case in the first place.
They were clutching at straws.
Speaking English in a Zurich courtroom
I think the judge realized that the landlady had a very weak case
and on top of that, the landlady's lawyer had been somewhat deceptive
about the political connection. The judge decided to continue the
mediation session using the English language.
The far right Swiss landlady was unable to sleep due to the
imaginary smell, the sight of a man doing laundry and our stubborn refusal
to take phone calls during our working hours about every little drama
in the missing towels department. Yet my family had far
more serious concerns due to my father's health. I tried to explain
that in the court but were they listening?
Switzerland is a very small country and many people live in the same
valley where they grew up with their parents. Even if they move from
their valley to a city like Zurich, they can always reach most of
their extended family with a short journey by train.
In the most hostile company where I worked in Switzerland, a line
manager's mother had developed a terminal illness and had less than
six months to live. The manager went back to his country for a number of
months and the company's strategy, organization and culture were totally unable
to cope with this situation.
Nonetheless, in our case, Carla's aunt was getting very old and
my father was very ill. The financial cost of the mediation session
where we spoke about missing towels and the imaginary cat smell was
greater than the financial cost of a trip to Australia to see my father.
The judge and I seem to agree there are cultural differences but
the extent to which some people react to small differences is
extraordinary:
Defending the honor of black cats before a Swiss judge
Vous promettez d’être fidèle à la Constitution fédérale et à la Constitution du Canton de Vaud.
Vous promettez de maintenir et de défendre en toute occasion et de tout votre pouvoir les droits, les libertés et
l’indépendance de votre nouvelle patrie, de procurer et d’avancer son honneur et profit, comme aussi d’éviter tout ce qui
pourrait lui porter perte ou dommage
and translated into English:
You promise to be true to the federal constitution and the constitution
of the Canton of Vaud.
You promise to maintain and defend on every occasion and with all your
powers the rights, freedoms and independence of your new country,
to develop and advance her reputation and wealth and equally to
avoid all that could cause her loss or damage.
What does an oath like this mean in practice? In the Zurich courthouse,
I defended the honor and reputation of our black cats before a
Swiss tribunal:
Remarkably, the judge repeats the question about whether there
could be a smell. This was so offensive to us as a family.
In fact, these rumors about smells have Holocaust origins.
Hitler commissioned significant scientific research to determine
whether Jews had a distinctive smell. When the judge tried
to legitimize these black-cat-smell comments in Zurich, I couldn't believe
what I was hearing.
When the
Albanian whistleblowers came to Zurich, they slept with the
cats. Here is Anisa Kuci from OpenStreetMap, Wikimedia and
GNOME Foundation on our sofa bed with Buffy the black kitten
sleeping beside her:
If people want to confirm the cat smell was a lie, just ask Anisa.
Switzerland vs Australia: which country is more beautiful?
I feel that honesty is always important in any relationship.
When we see courtrooms on television, the witnesses promise to
tell the truth, the whole truth and nothing but the truth.
I guess that mantra stuck in my head. I simply told the tribunal
that I didn't really want that apartment anyway because Australia
is more beautiful. At that very moment, the jurists stopped speaking English
and reverted to German.
In fact, both Switzerland and Australia have some amazing geographic
and cultural features and I think we were just unlucky with this
particular landlady from the SVP senioren (far right seniors) cabal.
Far right dictator or eccentric old lady?
While this landlady was definitely a member of the far right party,
her behavior was rather foolish and I don't think every member of the
far right party behaves like this. Many of the people in the far right
party own small businesses and they don't want to start silly disputes
with their customers and tourists over things like a missing towel.
In this case, I suspect the propaganda of the far right party
has become mixed up with the aging process and contributed to
behavior that is erratic.
Most political parties and religions try to exploit the insecurities
of little old ladies like this in the hope that they
will leave bequests to the party or religion in question.
With that in mind, I don't blame the landlady alone for the pain
my family experienced in Zurich.
Google and Debian forcing the harassment verdict into the spotlight
While we had to collect a lot of evidence at the time of the dispute,
I never imagined publishing this case on my blog.
The only reason I am publishing this is because of vague rumors
about a harassment case being distributed on the web sites of Debian,
the World Intellectual Property Organization (WIPO) in Geneva and
some other web sites.
I don't want to encourage cat enthusiasts to seek revenge against
this little old lady. If she is still alive today, and I haven't
even bothered to check, she would be well into her eighties and there
would be no benefit whatsoever from harassing her.
The case was resolved with a cash settlement of CHF 10,000, equivalent
to EUR 10,500 or USD 10,000.
The cats were transported in a box to a new home:
Here is the judgment in German. We've redacted parts of it
to avoid identifying anybody. Ultimately, this was another case
of a woman instigating harassment, a lot like Baby Reindeer:
Chris Lamb and Molly de Blanc violated Swiss privacy
Soon after the harassment case was finished, it was Chris Lamb
and Molly de Blanc who started a gossip campaign.
Some of these women spreading rumors in the free software
community are particularly vicious.
One of the cats, Floe, died shortly after the relocation.
de Blanc then showed up at FOSDEM in Brussels with her
infamous speech about
putting cats behind bars:
de Blanc's behavior was a horrible act of trolling after the
death of our beloved cat.
Carla and I did not choose to make the harassment verdict
public. We didn't have any vendetta with that little old lady. We just
wanted to get on with our lives.
The far right landlady paid the compensation money on time. She
has a right to get on with her life too. She is well into her
eighties now and Google is violating her privacy
with the ongoing gossip about harassment.
The ten thousand Swiss francs we received is less than half
the cost of the handbag that Oprah Winfrey wanted to see in
Bahnhofstrasse, Zurich.
What we see is a range of women, both the landlady and Molly de Blanc,
meddling in peoples' lives. Women and female cats
are victims of these stalkers but the stalkers are women too.
I haven't blogged until now, though I should have been doing so from Thursday onwards.
It's
a joy to be here in Cambridge at ARM HQ. Lots of people I recognise
from last year here: lots *not* here because this mini-conference is a
month before the next one in Toulouse and many people can't attend both.
Two
days' worth of chatting, working on bits and pieces, and
informal meetings was a very good and useful way to build relationships
and let teams find some space for themselves.
Lots of quiet hacking going on - a few loud conversations. A new ARM machine in mini-ITX format - see Steve McIntyre's blog on planet.debian.org about Rock 5 ITX.
Two
days' worth of talks for Saturday and Sunday. For some people, this is a
first time. Lightning talks are particularly good to break down
barriers - three slides and five minutes (and the chance for a bit of
gamesmanship to break the rules creatively).
Longer talks: a
couple from Steve Capper of ARM were particularly helpful to those
interested in upcoming development. A couple of the talks in the
schedule are traditional: if the release team are here, they tell us
what they are doing, for example.
ARM are main sponsors and have
been very generous in giving us conference and facilities space. Fast
network, coffee and interested people - what's not to like :)
[EDIT/UPDATE - And my talk is finished and went fairly well: slides have now been uploaded and the talk is linked from the Mini-DebConf pages]
The thirteenth release of the qlcal package
arrived at CRAN today.
qlcal
delivers the calendaring parts of QuantLib. It is provided (for the R
package) as a set of included files, so the package is self-contained
and does not depend on an external QuantLib library (which can be
demanding to build). qlcal covers
over sixty country / market calendars and can compute holiday lists, their
complement (i.e. business day lists) and much more. Examples
are in the README at the repository, the package page,
and of course at the CRAN package
page.
This release synchronizes qlcal with
the QuantLib release 1.36 (made
this week) and contains some minor updates to two calendars.
Changes in version 0.0.13
(2024-10-15)
Synchronized with QuantLib 1.36 released yesterday
Calendar updates for South Korea and Poland
Courtesy of my CRANberries, there
is a diffstat report for this
release. See the project page
and package documentation for more details, and more examples. If you
like this or other open-source work I do, you can sponsor me at
GitHub.
Way back (more than 10 years ago) when I was doing DVD-based backups,
I knew that normal DVDs/Blu-Rays are not a long-term archival solution,
and that if I was serious about doing optical media backups, I needed to
switch to M-Disc. I actually
bought a (small) stack of M-Disc Blu-Rays, but never used them.
I then switched to other backup solutions, and forgot about the whole
topic. Until, this week, while sorting stuff, I happened upon a set of
DVD backups from a range of years, and was very curious whether they
are still readable after many years.
And, to my surprise, there were no surprises! I went backward in time, and:
I also found a stack of dual-layer DVD+R from 2012-2014, some for sure
Verbatim, and some unmarked (they were intended to be printed on), but
likely Verbatim as well. All worked just fine. Just that, even at
~8GiB per disk, backing up raw photo files took way too many disks,
even in 2014 😅.
At this point I was happy that all 12+ DVDs I found, ranging from 10
to 14 years, are all good. Then I found a batch of 3 CDs! Here the
results were mixed:
2003: two TDK “CD-R80”, “Metallic”, 700MB: fully readable, after
21 years!
unknown year, likely around 1999-2003, but no later, “Creation”
CD-R, 700MB: read errors to the extent I can’t even read the disk
signature (isoinfo -d).
I think the takeaway is that for all explicitly selected media - TDK,
JVC and Verbatim - they hold for 10-20 years. Valid reads from summer
2003 is mind boggling for me, for (IIRC) organic media - not sure
about the “TDK metallic” substrate. And when you just pick whatever
(“Creation”), well, the results are mixed.
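Checks like these are easy to automate: read the disc end to end and compare against a checksum stored alongside the backup. Here is a minimal sketch; the device path and the idea of a stored manifest are my assumptions, not part of any particular backup tool.

```python
import hashlib

def checksum_stream(fileobj, chunk_size=1 << 20):
    """Read a file-like object (e.g. an optical drive's block device)
    end to end and return its SHA-256 hex digest. A full sequential
    read is the simplest test that every sector is still recoverable."""
    digest = hashlib.sha256()
    while True:
        chunk = fileobj.read(chunk_size)
        if not chunk:
            break
        digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    # /dev/sr0 is an assumption; adjust for your drive. Compare the
    # printed digest against the one recorded when the disc was burned.
    with open("/dev/sr0", "rb") as disc:
        print(checksum_stream(disc))
```

If the read completes without I/O errors and the digest matches, the disc is still fully intact, which is a stronger statement than "the filesystem still mounts".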
Note that in all this, it was about CDs and DVDs. I have no idea how
Blu-Rays behave, since I don’t think I ever wrote a Blu-Ray. In any
case, surprising to me, and makes me rethink a bit my backup
options. Sizes from 25 to 100GB Blu-Rays are reasonable for most
critical data. And they’re WORM, as opposed to most LTO media, which
is re-writable (and to some small extent, prone to accidental wiping).
Now, I should check those M-Disks to see if they can still be written
to, after 10 years 😀
I want to pour praise on some software I recently discovered.
I'm not up to speed on Pipewire—the latest piece of Linux plumbing related
to audio—nor how it relates to the other bits (Pulseaudio, ALSA, JACK, what
else?). I recently tried to plug something into the line-in port on my external
audio interface, and wished to hear it on the machine. A simple task, you'd
think.
I'll refrain from writing about the stuff that didn't work well and
focus on the thing that did: A little tool called Whisper, which
is designed to let you listen to a microphone through your speakers.
Whisper's UI. Screenshot from upstream.
Whisper does a great job of hiding the complexity of what lies beneath and
asking two questions: which microphone, and which speakers? In my case this
alone was not quite enough, as I was presented with two identically-named "SB
Live Extigy" "microphone" devices, but that's easily resolved with trial and
error.
RcppDate wraps
the featureful date
library written by Howard
Hinnant for use with R. This header-only modern C++ library has been
in pretty wide-spread use for a while now, and adds to C++11/C++14/C++17
what will be (with minor modifications) the ‘date’ library in C++20.
This release, the first in 3 1/2 years, syncs the code with the
recent date 3.0.2
release from a few days ago. It also updates a few packaging details
such as URLs, badges or continuous integration.
Changes in version 0.0.4
(2024-10-14)
Updated to upstream version 3.0.2 (and adjusting one
pragma)
Several small updates to overall packaging and testing
When setting up your YubiKey you have the option to require the user to touch the device to authorize an operation (be it signing, decrypting, or authenticating). While web browsers often provide clear prompts for this, other applications like SSH or GPG will not. Instead the operation will just hang without any visual indication that user input is required. The YubiKey itself will blink, but depending on where it is plugged in that is not very visible.
yubikey-touch-detector (fresh in unstable) solves this issue by providing a way for your desktop environment to signal the user that the device is waiting for a touch. It provides an event feed on a socket that other components can consume. It comes with libnotify support and there are some custom integrations for other environments.
For GNOME and KDE, libnotify support should be sufficient; however, you still need to turn it on.
I would still have preferred a more visible, more modal prompt. I guess that would be an exercise for another time, listening to the socket and presenting a window. But for now, desktop notifications will do for me.
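As a starting point for that exercise, here is a sketch of a socket consumer. The socket path and the 5-byte event format (e.g. GPG_1 when a touch is awaited, GPG_0 when it's over) are assumptions based on my reading of the project's documentation; verify them against the README before relying on this.

```python
import os
import socket

def parse_event(raw: bytes):
    """Split a 5-byte event like b'GPG_1' into (source, waiting).
    The wire format here is an assumption, not a verified spec."""
    name, state = raw.decode("ascii").rsplit("_", 1)
    return name, state == "1"

if __name__ == "__main__":
    # Socket location assumed to live in XDG_RUNTIME_DIR.
    path = os.path.join(os.environ["XDG_RUNTIME_DIR"],
                        "yubikey-touch-detector.socket")
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(path)
        while True:
            event = sock.recv(5)
            if not event:
                break
            name, waiting = parse_event(event)
            print(f"{name}: {'touch needed' if waiting else 'done'}")
```

From there, popping up a modal window instead of printing is a small step for whatever toolkit you prefer.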
PS: I have not managed to get SSH's no-touch-required to work with YubiKey 4, while it works just fine with a YubiKey 5.
A long time ago a computer was a woman (I think almost exclusively a woman, not a man) who was employed to do a lot of repetitive mathematics – typically for accounting and stock / order processing.
Then along came Lyons, who deployed an artificial computer to perform
the same task, only with fewer errors in less time. Modern day
computing was born – we had entered the age of the Digital Computer.
These computers were large, consumed huge amounts of power but were precise, and gave repeatable, verifiable results.
Over time the huge mainframe digital computers have shrunk in size,
increased in performance, and consume far less power – so much so that
they often don't need the specialist CFC-based, refrigerated liquid
cooling systems of their bigger mainframe counterparts, only requiring
forced air flow, and occasionally just convection cooling. They shrank
so far and became cheap enough that the Personal Computer came to be,
replacing the mainframe with its time-shared resources with a machine
per user. Desktop or even portable “laptop” computers were everywhere.
We networked them together, so now we could share information around
the office. A few computers were given the specialist task of being
available all the time so we could share documents or host databases;
these servers were basically PCs designed to operate 24×7, usually more
powerful than their desktop counterparts (or at least with faster
storage and networking).
Next we joined these networks together and the internet was born. The dream of a paperless office might actually become realised – we can now send email (and documents) from one organisation (or individual) to another via email. We can make our specialist computers applications available outside just the office and web servers / web apps come of age.
Fast forward a few years and all of a sudden we need huge data-halls
filled with “rack scale” machines augmented with exotic GPUs and NPUs,
again with refrigerated liquid cooling, all to do the same task that we
were doing previously without the magical buzzword that has been named
AI; because we all need another dot-com bubble or blockchain bandwagon
to jump aboard. Our AI-enabled searches take slightly longer,
consume magnitudes more power, and best of all the results we are given
may or may not be correct….
Progress, less precise answers, taking longer, consuming more power,
without any verification and often giving a different result if you
repeat your question AND we still need a personal computing device to
access this wondrous thing.
Remind me again why we are here?
(time lines and huge swathes of history simply ignored to make an
attempted comic point – this is intended to make a point and not be
scholarly work)
I've been exploring typesetting and formatting code within
text documents such as papers, or my thesis. Up until now,
I've been using the listings package without thinking
much about it. By default, some sample Haskell code
processed by listings looks like this (click any of the
images to see larger, non-blurry versions):
It's formatted with a monospaced font, with some keywords highlighted,
but not syntactic symbols.
There are several other options for typesetting and formatting code in LaTeX
documents. For Haskell in particular, there is the preprocessor lhs2tex,
the default output of which looks like this:
A proportional font, but it's taken pains to preserve vertical alignment, which
is syntactically significant for Haskell. It looks a little cluttered to me,
and I'm not a fan of nearly everything being italic. Again, symbols aren't
differentiated, but it has substituted them for more typographically
pleasing alternatives: -> has become →, and \ is now λ.
Another option is perhaps the newest, the LaTeX package minted, which
leverages the Python Pygments program. Here's the same code again. It
defaults to monospace (the choice of font seems a lot clearer to me than the
default for listings), no symbolic substitution, and liberal use of colour:
An informal survey of the samples so far showed that the minted output was
the most popular.
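For reference, a minimal sketch of how a minted sample like the one above might be produced; the fontsize and style options shown are illustrative choices of mine, not the package defaults:

```latex
\documentclass{article}
\usepackage{minted} % requires Pygments; compile with -shell-escape
\begin{document}
\begin{minted}[fontsize=\small, style=friendly]{haskell}
primes = sieve [2..]
  where sieve (p:xs) = p : sieve [x | x <- xs, x `mod` p /= 0]
\end{minted}
\end{document}
```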
All of these packages can be configured to varying degrees. Here are some
examples of what I've achieved with a bit of tweaking:
listings adjusted with colour and some symbols substituted (but sadly not the two together)
lhs2tex adjusted to be less italic, sans-serif and use some colour
All of this has got me wondering whether there are straightforward empirical
answers to some of these questions of style.
Firstly, I'm pretty convinced that symbolic substitution is valuable. When
writing Haskell, we write ->, \, /= etc. not because it's most legible,
but because it's most practical to type those symbols on the most widely
available keyboards and popular keyboard layouts.1 Of the three
options listed here, symbolic substitution is possible with listings and
lhs2tex, but I haven't figured out if minted can do it (which is really
the question: can pygments do it?)
I'm unsure about proportional versus monospaced fonts. We typically use
monospaced fonts for editing computer code, but that's at least partly for
historical reasons. Vertical alignment is often very important in source code,
and it can be easily achieved with monospaced text; it's also sometimes
important to have individual characters (., etc.) not be de-emphasised by being
smaller than any other character.
lhs2tex, at least, addresses vertical alignment whilst using proportional
fonts. I guess the importance of identifying individual significant characters
is just as true in a code sample within a larger document as it is within
plain source code.
From a (brief) scan of research on this topic, it seems that proportional
fonts result in marginally quicker reading times for regular prose. It's
not clear whether those results carry over into reading computer code in
particular, and the margin is slim in any case. The drawbacks of monospaced
text mostly apply when the volume of text is large, which is not the case
for the short code snippets I am working with.
I still have a few open questions:
Is colour useful for formatting code in a PDF document?
Does this open up a can of accessibility worms?
What should be emphasised (or de-emphasised)?
Why is the minted output the most popular? Could the choice of font
be key? Aspects of the font other than proportionality (serifs? size
of serifs? etc.)
The Haskell package Data.List.Unicode lets the programmer
use a range of unicode symbols in place of ASCII approximations, such
as ∈ instead of elem, ≠ instead of /=. Sadly, it's not possible
to replace the denotation for an anonymous function, \, with λ this
way.↩
It's been a while since I've posted about arm64 hardware. The last
machine I spent my own money on was
a SolidRun
Macchiatobin, about 7 years ago. It's a small (mini-ITX) board
with a 4-core arm64 SoC (4 * Cortex-A72) on it, along with things like
a DIMM socket for memory, lots of networking, 3 SATA disk interfaces.
The Macchiatobin was a nice machine compared to many earlier
systems, but it took quite a bit of effort to get it working to my
liking. I replaced the on-board U-Boot firmware binary with an EDK2
build, and that helped. After a few iterations we got a new build
including graphical output on a PCIe graphics card. Now it worked much
more like a "normal" x86 computer.
I still have that machine running at home, and it's been a
reasonably reliable little build machine for arm development and
testing. It's starting to show its age, though - the onboard USB ports
no longer work, and so it's no longer useful for doing things like
installation testing. :-/
So...
I was involved in a conversation in the #debian-arm IRC channel a
few weeks ago, and diederik suggested
the Radxa Rock 5
ITX. It's another mini-ITX board, this time using a Rockchip
RK3588 CPU. Things have moved on - the CPU is now an 8-core big.LITTLE
config: 4*Cortex A76 and 4*Cortex A55. The board has NVMe on-board,
4*SATA, built-in Mali graphics from the CPU, soldered-on memory. Just
about everything you need on an SBC for a small low-power desktop, a
NAS or whatever. And for about half the price I paid for the
Macchiatobin. I hit "buy" on one of the listed websites. :-)
A few days ago, the new board landed. I picked the version with
24GB of RAM and bought the matching heatsink and fan. I set it up in
an existing case borrowed from another old machine and tried the Radxa
"Debian" build. All looked OK, but I clearly wasn't going to stay with
that. Onwards to running a native Debian setup!
I installed an EDK2 build
from https://github.com/edk2-porting/edk2-rk3588
onto the onboard SPI flash, then rebooted with a Debian 12.7
(Bookworm) arm64 installer image on a USB stick. How much trouble
could this be?
I was shocked! It Just Worked (TM)
I'm running a standard Debian arm64 system. The graphical installer
ran just fine. I installed onto the NVMe, adding an Xfce desktop for
some simple tests. Everything Just Worked. After many
years of fighting with a range of different arm machines (from simple
SBCs to desktops and servers), this was without doubt the most
straightforward setup I've ever done. Wow!
It's possible to go and spend a lot of money on
an Ampere machine, and
I've seen them work well too. But for a hobbyist user (or even a
smaller business), the Rock 5 ITX is a lovely option. Total cost to me
for the board with shipping fees, import duty, etc. was just over
£240. That's great value, and I can wholeheartedly recommend this
board!
The two things that are missing compared to the Macchiatobin? The
memory is soldered on (but hey, 24G is plenty for me!), and it doesn't
have a PCIe slot. But it has sufficient onboard network, video and
storage interfaces that I think it will cover most people's needs.
Where's the catch? It seems these are very popular
right now, so it can be difficult to find these machines in stock
online.
FTAOD, I should also point out: I bought this machine entirely with
my own money, for my own use for development and testing. I've had no
contact with the Radxa or Rockchip folks at all here, I'm
just so happy with this machine that I've felt the
need to shout about it! :-)
So a common theme on the Internet about Debian is that it's so old. And
right, I am getting close to the stage where I feel a little laggy: I
am using a bunch of backports for packages I need, and I'm missing a
bunch of other packages that just landed in unstable and didn't make
it to backports for various reasons.
I disagree that "old" is a bad thing: we definitely run Debian stable
on a fleet of about 100 servers and can barely keep up, I would make
it older. And "old" is a good thing: (port) wine and (any) beer
need time to age properly, and so do humans, although some humans
never seem to grow old enough to find wisdom.
But at this point, on my laptop, I am feeling like I'm missing
out. This page, therefore, is an evolving document that is a twist
on the classic NewIn game. Last time I played seems to be
#newinwheezy
(2013!), so really, I'm due for an update. (To be fair to myself, I do
keep tabs on upgrades quite well at home and
work, which do have their share of "new in", just after the fact.)
New packages to explore
Those tools are shiny new things available in unstable or perhaps
Trixie (testing) already that I am not using yet, but I find
interesting enough to list here.
trippy: network analysis tool, kind of an improved MTR
New packages I won't use
Those are packages that I have tested because I found them
interesting, but ended up not using, but I think people could find
interesting anyways.
kew: surprisingly fast music player, parsed my entire library
(which is huge) instantaneously and just started playing (I still
use Supersonic, for which I maintain a flatpak on my
Navidrome server)
mdformat: good markdown formatter (think black or gofmt, but
for markdown), but it didn't actually do what I needed, and
it's not quite as opinionated as it should (or could) be
Backports already in use
Those are packages I already use regularly, which have backports or
that can just be installed from unstable:
If you know of cool things I'm missing out of, then by all means let
me know!
That said, overall, this is a pretty short list! I have most of what I
need in stable right now, and if I wasn't a Debian developer, I don't
think I'd be doing the jump now. But considering how much easier it is to
develop Debian (and how important it is to test the next release!),
I'll probably upgrade soon.
Previously, I was running Debian testing (which is why the slug on that
article is why-trixie), but now I'm actually considering just
running unstable on my laptop directly anyways. It's been a long time
since we had any significant instability there, and I can typically
deal with whatever happens, except maybe when I'm traveling, and then
it's easy to prepare for that (just pin testing).
I finally figured out how to have an application launcher with my usual Emacs
completion keybindings:
This is with Icomplete. If you use another completion framework it will look
different. Crucially, it’s what you are already used to using inside Emacs,
with the same completion style (flex vs. orderless vs. …), bindings etc..
The dmenu_emacsclient script is
here.
It relies on the function spw/sway-completing-read from my
init.el.
As usual, this code is available for your reuse under the terms of the GNU
GPL. Please see the license and copyright information in the linked files.
You also probably want a for_window directive in your Sway config to enable
floating the window, and perhaps to resize it. Enjoy having your Emacs
completion bindings for application launching, too!
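For illustration, such a directive might look like the following; the title criterion and sizes are guesses of mine, so match whatever title or app_id your launcher frame actually gets (swaymsg -t get_tree will tell you):

```
for_window [title="^dmenu_emacsclient$"] floating enable, resize set 600 300
```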
As DebConf22 was coming to an end in Kosovo, talking with Eeveelweezel, they
invited me to prepare a talk to give for the Chicago Python User
Group. I replied that I’m not really that much of a Python
guy… but said I would think about a topic. Two years passed. I met Eeveelweezel
again at DebConf24 in Busan, South Korea. And the topic came up again. I had
thought of some ideas, but none really pleased me. Again, I do write some Python
when needed, and I teach using Python, as it’s the language I find my students
can best cope with. But delivering a talk to ChiPy?
As I give this filesystem as a project to my students (and not as a mere
homework), I always ask them to try and provide a good, polished, professional
interface, not just the simplistic menu I often get. And I tell them the best
possible interface would be if they provide support for FIUnamFS transparently,
usable by the user without thinking too much about it. With high probability,
that would mean: Use FUSE.
But, in the six semesters I’ve used this project (with 30-40 students per
semester group), only one student has bitten the bullet and presented a FUSE
implementation.
And of course, there isn’t a single interface to work from. In Python only, we
can find
python-fuse,
Pyfuse,
Fusepy… Where to start from?
…So I set out to try and help.
Over the past couple of weeks, I have been slowly working on my own version, and
presenting it as a progressive set of tasks, adding filesystem calls, and
being careful to thoroughly document what I write (but… maybe my documentation
ends up obfuscating the intent? I hope not — and, read on, I’ve provided some
remediation).
I registered a GitLab project for a hand-holding guide to writing FUSE-based
filesystems in Python. This
is a project where I present several working FUSE filesystem implementations,
some of them RAM-based, some passthrough-based, and I intend to add to this also
filesystems backed on pseudo-block-devices (for implementations such as my
FIUnamFS).
They all provide something that could be seen as useful, in a way that’s easy to
teach, in just some tens of lines. And, in case my comments/documentation are
too long to read, uncommentfs will happily strip all comments and whitespace
automatically! 😉
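To give a flavour of how little code such a filesystem needs, here is a sketch in the spirit of those RAM-based examples, written against the fusepy API (one of the bindings mentioned above). The file name and contents are mine, not from the guide, and the import guard is only there so the class can be exercised without fusepy installed.

```python
import errno
import stat
import sys

try:
    from fuse import FUSE, FuseOSError, Operations
except ImportError:  # fusepy not installed: still allow testing the class
    Operations = object
    FuseOSError = OSError
    FUSE = None

class HelloFS(Operations):
    """A read-only filesystem with one file, /hello."""
    DATA = b"Hello from FUSE!\n"

    def getattr(self, path, fh=None):
        if path == "/":
            return {"st_mode": stat.S_IFDIR | 0o755, "st_nlink": 2}
        if path == "/hello":
            return {"st_mode": stat.S_IFREG | 0o444, "st_nlink": 1,
                    "st_size": len(self.DATA)}
        raise FuseOSError(errno.ENOENT)

    def readdir(self, path, fh):
        return [".", "..", "hello"]

    def read(self, path, size, offset, fh):
        return self.DATA[offset:offset + size]

if __name__ == "__main__":
    # usage: python hellofs.py /some/mount/point
    FUSE(HelloFS(), sys.argv[1], foreground=True)
```

Three callbacks (getattr, readdir, read) are enough for a mountable, ls-able, cat-able filesystem, which is exactly the kind of small win that can lure a student in.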
Of course, I will also share this project with my students in the next couple of
weeks… And hope it manages to lure them into implementing FUSE in Python. At
some point, I shall report!
This month I accepted 441 and rejected 29 packages. The overall number of packages that got accepted was 448.
I couldn’t believe my eyes, but this month I really accepted the same number of packages as last month.
Debian LTS
This was my one-hundred-and-twenty-third month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:
[unstable] libcupsfilters security update to fix one CVE related to validation of IPP attributes obtained from remote printers
[unstable] cups-filters security update to fix two CVEs related to validation of IPP attributes obtained from remote printers
[unstable] cups security update to fix one CVE related to validation of IPP attributes obtained from remote printers
[DSA 5778-1] prepared package for cups-filters security update to fix two CVEs related to validation of IPP attributes obtained from remote printers
[DSA 5779-1] prepared package for cups security update to fix one CVE related to validation of IPP attributes obtained from remote printers
[DLA 3905-1] cups-filters security update to fix two CVEs related to validation of IPP attributes obtained from remote printers
[DLA 3904-1] cups security update to fix one CVE related to validation of IPP attributes obtained from remote printers
Despite the announcement, the package libppd in Debian is not affected by the CVEs related to CUPS. By pure chance there is an unrelated package with the same name in Debian. I also answered some questions about the CUPS related uploads. Due to the CUPS issues, I postponed my work on other packages to October.
Last but not least I did a week of FD this month and attended the monthly LTS/ELTS meeting.
Debian ELTS
This month was the seventy-fourth ELTS month. During my allocated time I uploaded or worked on:
[ELA-1186-1] cups-filters security update for two CVEs in Stretch and Buster to fix the IPP attribute related CVEs.
[ELA-1187-1] cups-filters security update for one CVE in Jessie to fix the IPP attribute related CVEs (the version in Jessie was not affected by the other CVE).
I also started to work on updates for cups in Buster, Stretch and Jessie, but their uploads will happen only in October.
I also did a week of FD and attended the monthly LTS/ELTS meeting.
Debian Printing
This month I uploaded …
… libcupsfilters to also fix a dependency and autopkgtest issue besides the security fix mentioned above.
… splix for a new upstream version. This package is managed now by OpenPrinting.
Last but not least I tried to prepare an update for hplip. Unfortunately this is a nerve-stretching task and I need some more time.
Most of the uploads were related to package migration to testing. As some of them are in non-free or contrib, one has to build all binary versions. From my point of view handling packages in non-free or contrib could be very much improved, but well, they are not part of Debian …
Anyway, starting in December there is an Outreachy project that takes care of automatic updates of these packages. So hopefully it will be much easier to keep those packages up to date. I will keep you informed.
Debian IoT
This month I uploaded new upstream or bugfix versions of:
This month I did source uploads of all the packages that were prepared last month by Nathan and started the transition. It went rather smoothly except for a few packages where the new version did not propagate to the tracker and they got stuck in old failing autopkgtests. Anyway, in the end all packages migrated to testing.
I also uploaded new upstream releases or fixed bugs in:
The Sovol SV08
is a 3D printer which is a semi-assembled clone of Voron 2.4,
an open-source design. It's not the cheapest of printers, but for
what you get, it's extremely good value for money—as long as you can
deal with certain, err, quality issues.
Anyway, I have one, and one of the fun things about an open design
is that you can switch out things to your liking. (If you just want a tool,
buy something else. Bambu P1S, for instance, if you can live with
a rather closed ecosystem. It's a bit like an iPhone in that aspect,
really.) So I've put together a spreadsheet with some of the more common
choices:
It doesn't contain any of the really difficult mods, and it also
doesn't cover pure printables. And none of the dreaded macro stuff
that people seem to be obsessing over (it's really like being
in the 90s with people's mIRC scripts all over again sometimes :-/),
except where needed to make hardware work.
This time I seem to be settling on either Commit Mono or Space
Mono. For now I'm using Commit Mono because it's a little more
compressed than Fira and does have an italic version. I don't like how
Space Mono's parentheses (()) are "squarish"; they feel visually
ambiguous with the square brackets ([]), a big no-no for my primary
use case (code).
So here I am using a new font, again. It required changing a bunch of
configuration files in my home directory (which is in a private
repository, sorry) and Emacs configuration (thankfully that's
public!).
One gotcha is I realized I didn't actually have a global font
configuration in Emacs, as some Faces define their own font
family, which overrides the frame defaults.
This is what it looks like, before:
After:
(Notice how those screenshots are not sharp? I'm surprised too. The
originals look sharp on my display; I suspect this is something to
do with the Wayland transition. I've tried with both grim and
flameshot, for what it's worth.)
They are pretty similar! Commit Mono feels a bit more vertically
compressed, maybe too much so, actually: the line height feels too
low. But it's heavily customizable, so that's something that's
relatively easy to fix, if it's really a problem. Its weight is also a
little heavier and wider than Fira, which I find a little distracting
right now, but maybe I'll get used to it.
I like how the ampersand (&) is more traditional, although I'll miss
the exotic one Fira produced... I like how the back quotes (`,
GRAVE ACCENT) drop down low, nicely aligned with the apostrophe. As
I mentioned before, I like how the bar on the "f" aligns with the
tops of the other letters, something in Fira Mono that really annoys me now
that I've noticed it (it's not aligned!).
A UTF-8 test file
Here's the test sheet I've made up to test various characters. I could
have sworn I had a good one like this lying around somewhere but
couldn't find it so here it is, I guess.
So there you have it, got completely nerd swiped by typography
again. Now I can go back to writing a too-long proposal again.
Sources and inspiration for the above:
the unicode(1) command, to lookup individual characters to
disambiguate, for example, - (U+002D HYPHEN-MINUS, the minus
sign next to zero on US keyboards) and − (U+2212 MINUS SIGN, a
math symbol)
searchable list of characters and their names - roughly
equivalent to the unicode(1) command, but in one page, amazingly
the /usr/share/unicode database doesn't have any one file like
this
UTF-8 encoded plain text file - nice examples of edge cases,
curly quotes example and box drawing alignment test which,
incidentally, showed me I needed specific faces customisation in
Emacs to get the Markdown code areas to display properly, also the
idea of comparing various dashes
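Python's standard unicodedata module can do the same kind of disambiguation as the unicode(1) command, which is handy when checking what a font is actually rendering:

```python
import unicodedata

# Look up official Unicode names to tell visually similar
# characters apart, e.g. the various dashes.
for ch in ["-", "\u2212", "\u2013", "\u2014"]:
    print(f"U+{ord(ch):04X} {unicodedata.name(ch)}")
# prints HYPHEN-MINUS, MINUS SIGN, EN DASH, EM DASH in turn
```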
In my previous blog post about fonts, I
had a list of alternative fonts, but it seems people are not digging
through this, so I figured I would redo the list here to preempt "but
have you tried Jetbrains mono" kind of comments.
My requirements are:
no ligatures: yes, in the previous post, I wanted ligatures, but
I have changed my mind. After testing this, I find them distracting,
confusing, and they often break the monospace nature of the display
(note that some folks wrote Emacs code to selectively enable
ligatures, which is an interesting compromise)
monospace: this is to display code
italics: often used when writing Markdown, where I do make use of
italics... Emacs falls back to underlining text when lacking italics
which is hard to read
free-ish, ultimately should be packaged in Debian
Here is the list of alternatives I have considered in the past and why
I'm not using them:
agave: recommended by tarzeau, not sure I like the lowercase
a, a bit too exotic, packaged as fonts-agave
Cascadia code: optional ligatures, multilingual, not liking the
alignment, ambiguous parenthesis (look too much like square
brackets), new default for Windows Terminal and Visual Studio,
packaged as fonts-cascadia-code
Fira Code: ligatures, was using Fira Mono from which it is derived,
lacking italics except for forks, interestingly, Fira Code succeeds
the alignment test but Fira Mono fails to show the X signs properly!
packaged as fonts-firacode
Hack: no ligatures, very similar to Fira, italics, good
alternative, fails the X test in box alignment, packaged as
fonts-hack
IBM Plex: irritating website, replaces Helvetica as the IBM
corporate font, no ligatures by default, italics, proportional alternatives,
serifs and sans, multiple languages, partial failure in box alignment test (X signs),
fancy curly braces contrast perhaps too much with the rest of the
font, packaged in Debian as fonts-ibm-plex
Inconsolata: no ligatures, maybe italics? more compressed than
others, feels a little out of balance because of that, packaged in
Debian as fonts-inconsolata
Intel One Mono: nice legibility, no ligatures, alignment issues
in box drawing, not packaged in Debian
Iosevka: optional ligatures, italics, multilingual, good
legibility, has a proportional option, serifs and sans, line height
issue in box drawing, fails dash test, not in Debian
Monoid: optional ligatures, feels much "thinner" than
Jetbrains, not liking alignment or spacing on that one, ambiguous
2Z, problems rendering box drawing, packaged as fonts-monoid
Mononoki: no ligatures, looks good, good alternative, suggested
by the Debian fonts team as part of fonts-recommended, problems
rendering box drawing, em dash bigger than en dash, packaged as
fonts-mononoki
spleen: bitmap font, old school, spacing issue in box drawing
test, packaged as fonts-spleen
sudo: personal project, no ligatures, zero originally not
dotted, relied on metrics for legibility, spacing issue in box
drawing, not in Debian
victor mono: italics are cursive by default (distracting),
ligatures by default, looks good, more compressed than commit mono,
good candidate otherwise, has a nice and compact proof sheet
So, if I get tired of Commit Mono, I would probably try, in order:
Hack
JetBrains Mono
IBM Plex Mono
Iosevka, Mononoki and Intel One Mono are also good options, but have
alignment problems. Iosevka is particularly disappointing as the EM
DASH metrics are just completely wrong (much too wide).
Also note that there is now a package in Debian called fnt to
manage fonts like this locally, including in-line previews (that don't
work in bookworm but should be improved in trixie and later).
Our reports attempt to outline what we’ve been up to over the past month, highlighting news items from elsewhere in tech where they are related. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website.
Binsider can perform static and dynamic analysis, inspect strings, examine linked libraries, and perform hexdumps, all within a user-friendly terminal user interface!
95% fixed by [merge request] !12680 when -fobject-determinism is enabled. […]
The linked merge request has since been merged, and Rodrigo goes on to say that:
After that patch is merged, there are some rarer bugs in both interface file determinism (eg. #25170) and in object determinism (eg. #25269) that need to be taken care of, but the great majority of the work needed to get there should have been merged already. When merged, I think we should close this one in favour of the more specific determinism issues like the two linked above.
Fay Stegerman let everyone know that she started a thread on the Fediverse about the problems caused by unreproducible zlib/deflate compression in .zip and .apk files and later followed up with the results of her subsequent investigation.
Long-time developer kpcyrd wrote that “there has been a recent public discussion on the Arch Linux GitLab [instance] about the challenges and possible opportunities for making the Linux kernel package reproducible”, all relating to the CONFIG_MODULE_SIG flag. […]
Bernhard M. Wiedemann followed-up to an in-person conversation at our recent Hamburg 2024 summit on the potential presence for Reproducible Builds in recognised standards. […]
Fay Stegerman also wrote about her worry about the “possible repercussions for RB tooling of Debian migrating from zlib to zlib-ng” as reproducibility requires identical compressed data streams. […]
Martin Monperrus wrote to the list announcing the latest release of maven-lockfile, which is designed to aid "building Maven projects with integrity". […]
Lastly, Bernhard M. Wiedemann wrote about the potential role of reproducible builds in combatting silent data corruption, as detailed in a recent Tweet and scholarly paper on faulty CPU cores. […]
This is a report of Part 1 of my journey: building 100% bit-reproducible packages for every package that makes up [openSUSE’s] minimalVM image. This target was chosen as the smallest useful result/artifact. The larger package-sets get, the more disk-space and build-power is required to build/verify all of them.
A hermetic build system manages its own build dependencies, isolated from the host file system, thereby securing the build process. Although, in recent years, new artifact-based build technologies like Bazel offer build hermeticity as a core functionality, no empirical study has evaluated how effectively these new build technologies achieve build hermeticity. This paper studies 2,439 non-hermetic build dependency packages of 70 Bazel-using open-source projects by analyzing 150 million Linux system file calls collected in their build processes. We found that none of the studied projects has a completely hermetic build process, largely due to the use of non-hermetic top-level toolchains. […]
Distribution work
In Debian this month, 14 reviews of Debian packages were added, 12 were updated and 20 were removed, all adding to our knowledge about identified issues. A number of issue types were updated as well. […][…]
In addition, Holger opened 4 bugs against the debrebuild component of the devscripts suite of tools. In particular:
#1081839: Fails with E: mmdebstrap failed to run error.
Last month, an issue was filed to update the Salsa CI pipeline (used by 1,000s of Debian packages) to no longer test for reproducibility with reprotest’s build_path variation. Holger Levsen provided a rationale for this change in the issue, which has already been made to the tests being performed by tests.reproducible-builds.org. This month, this issue was closed by Santiago R. R., nicely explaining that build path variation is no longer the default, and, if desired, how developers may enable it again.
diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading version 278 to Debian:
New features:
Add a helpful contextual message to the output if comparing Debian .orig tarballs within .dsc files without the ability to “fuzzy-match” away the leading directory. […]
Bug fixes:
Drop removal of calculated os.path.basename from GNU readelf output. […]
Correctly invert “X% similar” value and do not emit “100% similar”. […]
Misc:
Temporarily remove procyon-decompiler from Build-Depends as it was removed from testing (via #1057532). (#1082636)
disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into system calls to reliably flush out reproducibility issues. This month, version 0.5.11-4 was uploaded to Debian unstable by Holger Levsen making the following changes:
Replace build-dependency on the obsolete pkg-config package with one on pkgconf, following a Lintian check. […]
Bump Standards-Version field to 4.7.0, with no related changes needed. […]
In addition, reprotest is our tool for building the same source code twice in different environments and then checking the binaries produced by each build for any differences. This month, version 0.7.28 was uploaded to Debian unstable by Holger Levsen including a change by Jelle van der Waa to move away from the pipes Python module to shlex, as the former will be removed in Python version 3.13 […].
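The pipes-to-shlex move is largely mechanical, since shlex.quote() is the documented replacement for the old undocumented pipes.quote(); a minimal sketch of the pattern involved (not reprotest's actual code):

```python
import shlex

# shlex.quote() escapes a string for safe use in a POSIX shell command;
# it replaces pipes.quote(), which went away with the pipes module's
# removal in Python 3.13 (PEP 594).
def shell_join(args):
    """Join command-line arguments into a safely quoted shell string."""
    return " ".join(shlex.quote(a) for a in args)

print(shell_join(["diff", "-r", "build 1", "build 2"]))
# → diff -r 'build 1' 'build 2'
```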
Android toolchain core count issue reported
Fay Stegerman reported an issue with the Android toolchain where a part of the build system generates a different classes.dex file (and thus a different .apk) depending on the number of cores available during the build, thereby breaking Reproducible Builds:
We’ve rebuilt [tag v3.6.1] multiple times (each time in a fresh container): with 2, 4, 6, 8, and 16 cores available, respectively:
With 2 and 4 cores we always get an unsigned APK with SHA-256 14763d682c9286ef….
With 6, 8, and 16 cores we get an unsigned APK with SHA-256 35324ba4c492760… instead.
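Detecting this kind of regression boils down to hashing the artifact from each rebuild and comparing digests; a minimal sketch in Python (file names and contents below are placeholders, not the actual APKs):

```python
import hashlib

def sha256_file(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

# Placeholder artifacts standing in for APKs built with different
# core counts (hypothetical names and contents, for illustration).
for name, data in [("app-2cores.apk", b"A"), ("app-16cores.apk", b"B")]:
    with open(name, "wb") as f:
        f.write(data)

digests = {sha256_file(p) for p in ("app-2cores.apk", "app-16cores.apk")}
print("reproducible" if len(digests) == 1 else "NOT reproducible")
# → NOT reproducible
```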
reproducibility settings [being] applied to some of Gradle’s built-in tasks that should really be the default. Compatible with Java 8 and Gradle 8.3 or later.
Website updates
There were a rather substantial number of improvements made to our website this month, including:
Chris Lamb:
Attempt to use GitLab CI to ‘artifact’ the website; hopefully useful for testing branches. […]
Correct the linting rule whilst building the website. […]
Make a number of small changes to Kees’ post written by Vagrant. […][…][…]
Jelle van der Waa completely modernised the System Images documentation, noting that “a lot has changed since 2017(!); ext4, erofs and FAT filesystems can now be made reproducible”. […]
Developer RyanSquared replaced the continuous integration test link for Arch Linux on our Projects page with an external instance […][…] as well as updated the documentation to reflect the dependencies required to build the website […].
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:
sphobjinv (duplicates fix from Debian bug #1082706)
Reproducibility testing framework
The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In September, a number of changes were made by Holger Levsen, including:
Add support for powercycling OpenStack instances. […]
Update the fail2ban to ban hosts for 4 weeks in total […][…] and take care to never ban our own Jenkins instance. […]
In addition, Vagrant Cascadian recorded a disk failure for the virt32b and virt64b nodes […], performed some maintenance of the cbxi4a node […][…] and marked most armhf architecture systems as being back online.
Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:
I'm pleased to welcome Louis-Philippe Véronneau as a new Lintian
maintainer. He humorously acknowledged his new role, stating,
"Apparently I'm a Lintian maintainer now". I remain confident that
we can, and should, continue modernizing our policy checker, and I see
this as one important step toward that goal.
SPDX name / license tools
There was a discussion about deprecating the unique names for DEP-5 and
migrating to fully compliant SPDX names.
Simon McVittie wrote: "Perhaps our Debian-specific names are
better, but the relevant question is whether they are sufficiently
better to outweigh the benefit of sharing effort and specifications with
the rest of the world (and I don't think they are)." Charles
Plessy also sees the value of deprecating the Debian ones and aligning
on SPDX.
The thread on debian-devel list contains several practical hints for
writing debian/copyright files.
I continued reaching out to teams in September. One common pattern I've
noticed is that most teams lack a clear strategy for attracting new
contributors. Here's an example snippet from one of my outreach emails,
which is representative of the typical approach:
Q: Do you have some strategy to gather new contributors for your team?
A: No.
Q: Can I do anything for you?
A: Everything that can help to have more than 3 guys :-D
Well, only the first answer, "No," is typical. To help the JavaScript
team, I'd like to invite anyone with JavaScript experience to join the
team's mailing list and offer to learn and contribute. While I've only
built a JavaScript package once, I know this team has developed
excellent tools that are widely adopted by others. It's an active and
efficient team, making it a great starting point for those looking to
get involved in Debian. You might also want to check out the
"Little tutorial for JS-Team beginners".
Given the lack of a strategy to actively recruit new contributors--a
common theme in the responses I've received--I recommend reviewing my
talk from DebConf23 about teams. The Debian Med team would have
struggled significantly in my absence (I've paused almost all work with
the team since becoming DPL) if I hadn't consistently focused on
bringing in new members. I'm genuinely proud of how the team has managed
to keep up with the workload (thank you, Debian Med team!). Of course,
onboarding newcomers takes time, and there's no guarantee of long-term
success, but if you don't make the effort, you'll never find out.
Open source maintainers underpaid
The Register, in its article titled
"Open Source Maintainers Underpaid, Swamped by Security, Going Gray",
summarizes the 2024 State of the
Open Source Maintainer Report. I find this to be an interesting read,
both in general and in connection with the challenges mentioned in the
previous paragraph about finding new team members.
Freexian specializes in Free Software with a particular focus on Debian
GNU/Linux. Freexian can assist with consulting, training, technical support,
packaging, or software development on projects involving use or development of Free
software.
All of Freexian's employees and partners are well-known contributors in the Free
Software community, a choice that is integral to Freexian's business model.
About the Debian Partners Program
The Debian Partners Program
was created to recognize companies and organizations that help and provide
continuous support to the project with services, finances, equipment, vendor
support, and a slew of other technical and non-technical services.
Partners provide critical assistance, help, and support which has advanced and
continues to further our work in providing the 'Universal Operating System' to
the world.
After (quite) a summer break, here comes the 4th article of the 5-episode blog post series on Polis, written by Guido Berhörster, member of staff at my company Fre(i)e Software GmbH.
Have fun with the read on Guido's work on Polis,
Mike
Creating (a) new frontend(s) for Polis (this article)
Current status and roadmap
4. Creating (a) new frontend(s) for Polis
Why a new frontend was needed...
Our initial experiences of working with Polis, the effort required to implement
more invasive changes, and the desire to iterate on changes more rapidly
ultimately led to the decision to create a new foundation for frontend
development that would be independent of, but compatible with, the upstream
project.
Our primary objective was thus not to develop another frontend but rather to
make frontend development more flexible and to facilitate experimentation and
rapid prototyping of different frontends by providing abstraction layers and
building blocks.
This also implied developing a corresponding backend, since the Polis backend
is tightly coupled to the frontend: it is neither intended to be used by
third-party projects nor does it support cross-domain requests, as it expects
to be embedded as an iframe on third-party websites.
The long-term plan for achieving our objectives is to provide three abstraction
layers for building frontends:
a stable cross-domain HTTP API
a low-level JavaScript library for interacting with the HTTP API
a high-level library of WebComponents as a framework-neutral way of rapidly
building frontends
The Particiapp Project
Under the umbrella of the Particiapp project we have so far developed two
new components:
the example frontend project which currently contains both the client
library and an experimental example frontend built with it
Both the participation frontend and backend are fully compatible with upstream
Polis; they require an existing Polis installation and can be run alongside
the upstream frontend.
More specifically, the administration frontend and common backend are required to
administrate conversations and send out notifications and the statistics
processing server is required for processing the voting results.
Particiapi server
For the backend the Python language and the Flask framework were
chosen as a technological basis mainly due to developer mindshare, a large
community and ecosystem and the smaller dependency chain and maintenance
overhead compared to Node.js/npm. Instead of integrating specific identity
providers we adopted the OpenID Connect standard as an abstraction
layer for authentication which allows delegating authentication either to a
self-hosted identity provider or a large number of existing external
identity providers.
Particiapp Example Frontend
The experimental example frontend serves both as a test bed for the client
library and as a tool for better understanding the needs of frontend designers.
It also features a completely redesigned user interface and results
visualization in line with our goals. Branded variants are currently used for
evaluation and testing by the stakeholders.
In order to simplify evaluation, development, testing and deployment a Docker
Compose configuration is made available which contains all necessary
components for running Polis with our experimental example frontend. In
addition, a development environment is provided which includes a preconfigured
OpenID Connect identity provider (KeyCloak), SMTP-Server with web
interface (MailDev), and a database frontend (PgAdmin). The new
frontend can also be tested using our public demo server.
Almost all of my Debian contributions this month were
sponsored by Freexian.
You can also support my work directly via
Liberapay.
Pydantic
My main Debian project for the month turned out to be getting
Pydantic back into a good state in
Debian testing. I’ve used Pydantic quite a bit in various projects, most
recently in Debusine, so
I have an interest in making sure it works well in Debian. However, it had
been stalled on 1.10.17 for quite a while due to the complexities of getting
2.x packaged. This was partly making sure everything else could cope with
the transition, but in practice mostly sorting out packaging of its new
Rust dependencies. Several
other people (notably Alexandre Detiste, Andreas Tille, Drew Parsons, and
Timo Röhling) had made some good progress on this, but nobody had quite got
it over the line and it seemed a bit stuck.
Learning Rust is on my to-do list, but merely not knowing a language hasn’t
stopped me before.
So I learned how the Debian Rust team’s packaging
works, upgraded a few
packages to new upstream versions (including
rust-half
and upstream rust-idna test
fixes), and packaged rust-jiter. After a
lot of waiting around for various things and chasing some failures in other
packages I was eventually able to get current versions of both pydantic-core
and pydantic into testing.
I’m looking forward to being able to drop our clunky v1 compatibility code
once debusine can rely on running on trixie!
I upgraded python-yubihsm, yubihsm-connector, and yubihsm-shell to new
upstream versions.
I noticed that I could enable some tests in python-yubihsm and
yubihsm-shell; I’d previously thought the whole test suite required a real
YubiHSM device, but when I looked closer it turned out that this was only
true for some tests.
I fixed yubihsm-shell build failures on some 32-bit architectures (upstream
PRs #431,
#432), and also made it
build
reproducibly.
buildbot was in a bit of a mess due to
being incompatible with SQLAlchemy 2.0. Fortunately by the time I got to it
upstream had committed a workable set of patches, and the main difficulty
was figuring out what to cherry-pick since they haven’t made a new upstream
release with all of that yet. I figured this out and got us up to 4.0.3.
Adrian Bunk asked whether python-zipp
should be removed from trixie. I spent some time investigating this and
concluded that the answer was no, but looking into it was an interesting
exercise anyway.
On the other hand, I looked into flask-appbuilder, concluded that it should
be removed, and filed a removal request.
I upgraded importlib-resources, ipywidgets, jsonpickle, pydantic-settings,
pylint (fixing a test failure),
python-aiohttp-session, python-apptools, python-asyncssh,
python-django-celery-beat, python-django-rules, python-limits,
python-multidict, python-persistent, python-pkginfo, python-rt, python-spur,
python-zipp, stravalib, transmissionrpc, vulture, zodbpickle,
zope.exceptions (adopting it),
zope.i18nmessageid, zope.proxy, and zope.security to new upstream versions.
debmirror
The experimental and *-proposed-updates suites used to not have
Contents-* files, and a long time ago debmirror was changed to just skip
those files in those suites. They were added to the Debian archive some
time ago, but debmirror carried on skipping them anyway. Once I realized
what was going on, I removed these unnecessary special cases
(#819925,
#1080168).
Another short status update of what happened on my side last
month. Besides the usual amount of housekeeping, last month was a lot
about getting old issues resolved by finishing some stale merge
requests and work-in-progress MRs. I also pushed out the Phosh 0.42.0
Release
Sanitize versions as this otherwise breaks the libphosh-rs build (MR)
lockscreen: Swap deck and carousel to avoid triggering the plugins page when entering pin and let the lockscreen shrink to smaller sizes (MR) (two more year old usability issues out of the way)
Ensure we send enough feedback when phone is blanked/locked (MR). This should be way easier now for apps as they
don't need to do anything and we can avoid duplicate feedback sent from e.g. Chatty.
Fix possible use after free when activating notifications on the lock screen (MR)
Don't lose preedit when switching applications, opening menus, etc (MR). This fixes the case (e.g. with word completion in phosh-osk-stub enabled) where it looks to the user as if the last typed word would get lost when switching from a text editor to another app or when opening a menu
Fix word salad with presage completer when entering cursor navigation mode (and in some other cases)
(MR 1). Presage
has the best completion but was marked experimental due to that.
Submit preedit on changes to terminal and emoji layout (MR)
Unbreak and modernize CI a bit (MR). A passing CI is so much more motivating for contributors and reviewers.
Fotema
Fix app-id and hence the icon shown in Phosh's overview (MR)
Help Development
If you want to support my work see donations. This includes
a list of hardware we want to improve support for. Thanks a lot to all current and past donors.
A new minor release 0.1.4 of RApiSerialize
arrived on CRAN today. The RApiSerialize
package is used by both my RcppRedis
as well as by Travers Ching's excellent qs package. This release adds
an optional C++ namespace, available when the API header file is
included in a C++ source file. And as one often does, the release also
brings a few small updates to different aspects of the packaging.
Changes in version 0.1.4
(2024-09-28)
Add C++ namespace in API header (Dirk in #9
closing #8)
Several packaging updates: switched to Authors@R, README.md badge
updates, added .editorconfig and cleanup
The Reproducible Builds project relies on several projects, supporters and sponsors for financial support, but they are also valued as ambassadors who spread the word about our project and the work that we do.
Vagrant Cascadian: Could you tell me a bit about yourself? What sort
of things do you work on?
Kees Cook: I’m a Free Software junkie living in Portland, Oregon, USA.
I have been focusing on the upstream Linux kernel’s protection
of itself. There is a lot of support that the kernel provides
userspace to defend itself, but when I first started focusing on this
there was not as much attention given to the kernel protecting
itself. As userspace got more hardened the kernel itself became a
bigger target. Almost 9 years ago I formally announced the Kernel Self-Protection Project
because the work necessary was way more than my time and expertise could do
alone. So I just try to get people to help as much as possible; people who
understand the ARM architecture, people who understand the memory management
subsystem to help, people who understand how to make the kernel less buggy.
Vagrant: Could you describe the path that led you to working on this
sort of thing?
Kees: I have always been interested in security through the aspect of
exploitable flaws. I always thought it was like a magic trick to make a
computer do something that it was very much not designed to do and seeing how
easy it is to subvert bugs. I wanted to improve that fragility. In 2006, I
started working at Canonical on Ubuntu and was mainly focusing on bringing
Debian and Ubuntu up to what was the state of the art for Fedora and Gentoo’s
security hardening efforts. Both had really pioneered a lot of userspace
hardening with compiler flags and ELF
stuff and many other things for hardened
binaries. On the whole, Debian had not really paid attention to it. Debian’s
packaging building process at the time was sort of a chaotic free-for-all as
there wasn’t centralized build methodology for defining things. Luckily that
did slowly change over the years. In Ubuntu we had the opportunity to apply top
down build rules for hardening all the packages. In 2011 Chrome OS was
following along and took advantage of a bunch of the security hardening work as
they were based on ebuild out of Gentoo
and when they looked for someone to
help out they reached out to me. We recognized the Linux kernel was pretty much
the weakest link in the Chrome OS security posture and I joined them to help
solve that. Their userspace was pretty well handled but the kernel had a lot
of weaknesses, so focusing on hardening was the next place to go. When I
compared notes with other users of the Linux kernel within Google there were a
number of common concerns and desires. Chrome OS already had an “upstream
first” requirement, so I tried to consolidate the concerns and solve them
upstream. It was challenging to land anything in other kernel team repos at
Google, as they (correctly) wanted to minimize their delta from upstream, so I
needed to work on any major improvements entirely in upstream and had a lot of
support from Google to do that. As such, my focus shifted further from working
directly on Chrome OS into being entirely upstream and being more of a
consultant to internal teams, helping with integration or sometimes
backporting. Since the volume of needed work was so gigantic I needed to find
ways to inspire other developers (both inside and outside of Google) to help.
Once I had a budget I tried to get folks paid (or hired) to work on these areas
when it wasn’t already their job.
Vagrant: So my understanding of some of your recent work is basically
defining undefined behavior in the language or compiler?
Kees: I’ve found the term “undefined behavior” to have a really strict
meaning within the compiler community, so I have tried to redefine my goal as
eliminating “unexpected behavior” or “ambiguous language constructs”. At the
end of the day ambiguity leads to bugs, and bugs lead to exploitable security
flaws. I’ve been taking a four-pronged approach: supporting the work people are
doing to get rid of ambiguity, identify new areas where ambiguity needs to be
removed, actually removing that ambiguity from the C language, and then dealing
with any needed refactoring in the Linux kernel source to adapt to the new
constraints.
None of this is particularly novel; people have recognized how dangerous some
of these language constructs are for decades and decades but I think it is a
combination of hard problems and a lot of refactoring that nobody has the
interest/resources to do. So, we have been incrementally going after the lowest
hanging fruit. One clear example in recent years was the elimination of C’s
“implicit fall-through” in switch statements. The language would just fall
through between adjacent cases if a break (or other code flow directive)
wasn’t present. But this is ambiguous: is the code meant to fall-through, or
did the author just forget a break statement? By defining the “[[fallthrough]]” statement,
and requiring its use in
Linux,
all switch statements now have explicit code flow, and the entire class of
bugs disappeared. During our refactoring we actually found that 1 in 10 added
“[[fallthrough]]” statements were actually missing break statements. This
was an extraordinarily common bug!
So getting rid of that ambiguity is where we have been. Another area I’ve been
spending a bit of time on lately is looking at how defensive security work has
challenges associated with metrics. How do you measure your defensive security
impact? You can’t say “because we installed locks on the doors, 20% fewer
break-ins have happened.” Much of our signal is always secondary or
retrospective, which is frustrating: “This class of flaw was used X much over
the last decade or so, and if we have eliminated that class of flaw and will never
see it again, what is the impact?” Is the impact infinity? Attackers will just
move to the next easiest thing. But it means that exploitation gets
incrementally more difficult. As attack surfaces are reduced, the expense of
exploitation goes up.
Vagrant: So it is hard to identify how effective this is… how bad would it be
if people just gave up?
Kees: I think it would be pretty bad, because as we have seen, using
secondary factors, the work we have done in the industry at large, not just the
Linux kernel, has had an impact. What we, Microsoft, Apple, and everyone else
is doing for their respective software ecosystems, has shown that the price of
functional exploits in the black market has gone up. Especially for really
egregious stuff like a zero-click remote code execution.
If those were cheap then obviously we are not doing something right, and it
becomes clear that it’s trivial for anyone to attack the infrastructure that
our lives depend on. But thankfully we have seen over the last two decades that
prices for exploits keep going up and up into millions of dollars. I think it
is important to keep working on that because, as a central piece of modern
computer infrastructure, the Linux kernel has a giant target painted on it. If
we give up, we have to accept that our computers are not doing what they were
designed to do, which I can’t accept. The safety of my grandparents shouldn’t
be any different from the safety of journalists, and political activists, and
anyone else who might be the target of attacks. We need to be able to trust our
devices otherwise why use them at all?
Vagrant: What has been your biggest success in recent years?
Kees: I think with all these things I am not the only actor. Almost
everything that we have been successful at has been because of a lot
of people’s work, and one of the big ones that has been coordinated
across the ecosystem and across compilers was initializing stack variables to 0 by default.
This feature was added in Clang, GCC, and MSVC across the board, even though there were a lot of fears about forking the C language.
The worry was that developers would come to depend on zero-initialized stack
variables, but this hasn’t been the case because we still warn about
uninitialized variables when the compiler can figure that out. So you still
get the warnings at compile time but now you can count on the contents of
your stack at run-time and we drop an entire class of uninitialized variable flaws.
While the exploitation of this class has mostly been around memory content
exposure, it has also been used for control flow attacks.
So that was politically and technically a large challenge: convincing people it
was necessary, showing its utility, and implementing it in a way that everyone
would be happy with, resulting in the elimination of a large and persistent
class of flaws in C.
Vagrant: In a world where things are generally Reproducible do you see ways
in which that might affect your work?
Kees: One of the questions I frequently get is, “What version of the Linux
kernel has feature $foo?” If I know how things are built, I can answer with
just a version number. In a Reproducible Builds scenario I can count on the
compiler version, compiler flags, kernel configuration, etc. all those things
are known, so I can actually answer definitively that a certain feature exists.
So that is an area where Reproducible Builds affects me most directly.
Indirectly, being able to trust that the binaries you are running will
behave the same for the same build environment is critical for sane
testing.
Kees: I have! One subset of tree-wide refactoring that we do when getting
rid of ambiguous language usage in the kernel is when we have to make source
level changes to satisfy some new compiler requirement but where the binary
output is not expected to change at all. It is mostly about getting the
compiler to understand what is happening, what is intended in the cases where
the old ambiguity does actually match the new unambiguous description of what
is intended. The binary shouldn’t change. We have
used diffoscope to compare
the before and after binaries to confirm that “yep, there is no change in
binary”.
Vagrant: You cannot just use checksums for that?
Kees: For the most part, we need to only compare the text segments. We try
to keep as much stable as we can, following the
Reproducible Builds documentation for the kernel,
but there are macros in the kernel that are sensitive to source line numbers
and as a result those will change the layout of the data segment (and sometimes
the text segment too). With diffoscope there’s flexibility where I can exclude
or include different comparisons. Sometimes I just go look at what diffoscope
is doing and do that manually, because I can tweak that a little harder, but
diffoscope is definitely the default. Diffoscope is awesome!
Vagrant: Where have reproducible builds affected you?
Kees: One of the notable wins of reproducible builds lately was
dealing with the fallout of the XZ backdoor and just being able to ask
the question “is my build environment running the expected
code?” and to be able to compare the output generated from one
install that never had a vulnerable XZ and one that did have a
vulnerable XZ and compare the results of what you get. That was
important for kernel builds because the XZ threat actor was working to
expand their influence and capabilities to include Linux kernel
builds, but they didn’t finish their work before they were noticed. I
think what happened with Debian proving the build infrastructure was not affected is an
important example of how people would have needed to verify the kernel
builds too.
Vagrant: What do you want to see for the near or distant future in security work?
Kees: For reproducible builds in the kernel, in the work that has been
going on in the ClangBuiltLinux project, one of the driving forces of
code and usability quality has been the
continuous integration work. As soon as something breaks, on the
kernel side, the Clang side, or something in between the two, we get a
fast signal and can chase it and fix the bugs quickly. I would like to
see someone with funding to maintain a reproducible kernel build
CI. There have been places where there are certain
architecture configurations or certain build configurations where we lose
reproducibility and right now we have sort of a standard open source
development feedback loop where those things get fixed but the time
in between introduction and fix can be large. Getting a CI for
reproducible kernels would give us the opportunity to shorten that
time.
Vagrant: Well, thanks for that! Any last closing thoughts?
Kees: I am a big fan of reproducible builds, thank you for all your work.
The world is a safer place because of it.
Vagrant: Likewise for your work!
For more information about the Reproducible Builds project, please see our website at
reproducible-builds.org. If you are interested in
ensuring the ongoing security of the software that underpins our civilisation
and wish to sponsor the Reproducible Builds project, please reach out to the
project by emailing
contact@reproducible-builds.org.
The PiKVM web site has good documentation [1] and they have a YouTube channel with videos showing how to assemble the devices [2]. It’s really convenient being able to change the playback speed from low speeds (like 1/4 original speed) to double speed when watching such a video. One thing to note is that there are some revisions to the hardware that aren’t covered in the videos; the device I received had some improvements that made it easier to assemble which weren’t in the video.
When you buy the device and Pi you need to also get an SD card of at least 4G in size, a CR1220 battery for the real-time clock, and a USB-2/3 to USB-C cable for keyboard/mouse (it MUST NOT be USB-C to USB-C!). When I first tried using it I used a USB-C to USB-C cable for keyboard and mouse and it didn’t work for reasons I don’t understand (I welcome comments with theories about this). You also need a micro-HDMI to HDMI cable to get video output if you want to set it up without having to find the IP address and ssh to it.
The system has a bright OLED display to show the IP address and some other information which is very handy.
The hardware is easy enough for a 12yo to set up. The construction of the parts is solid and well engineered, with everything fitting together nicely. It has a PCI/PCIe slot adaptor for controlling power and sending LED status over the connection, which I didn’t test. I definitely recommend this.
The default username/password is root/root. Connect it to a HDMI monitor and USB keyboard to change the password etc. If you control the DHCP server you can find the IP address it’s using and ssh to it to change the password (it is configured to allow ssh as root with password authentication).
If you get the kit to assemble it (as opposed to buying a completed unit already assembled) then you need to run a few commands as root to enable the OLED display (the PiKVM documentation covers this). This means that after assembling it you can’t get the IP address without plugging in a monitor with a micro-HDMI to HDMI cable or having access to the DHCP server logs.
The default webadmin username/password is admin/admin.
To change the passwords run the following commands:
rw
kvmd-htpasswd set admin
passwd root
ro
It is configured to have the root filesystem mounted read-only which is something I thought had gone out of fashion decades ago. I don’t think that modern versions of the Ext3/4 drivers are going to corrupt your filesystem if you have it mounted read-write when you reboot.
By default it uses a self-signed SSL certificate so with a Chrome based browser you get an error when you connect where you have to select “advanced” and then tell it to proceed regardless. I presume you could use the DNS method of Certbot authentication to get a SSL certificate to use on an internal view of your DNS to make it work normally with SSL.
The web based software has all the features you expect from a KVM. It shows the screen in any resolution up to 1920*1080 and proxies keyboard and mouse. Strangely “lsusb” on the machine being managed only reports a single USB device entry for it which covers both keyboard and mouse.
Managing Computers
For a tower PC disconnect any regular monitor(s) and connect a HDMI port to the HDMI input on the KVM. Connect a regular USB port (not USB-C) to the “OTG” port on the KVM, then it should all just work.
For a laptop connect the HDMI port to the HDMI input on the KVM. Connect a regular USB port (not USB-C) to the “OTG” port on the KVM. Then boot it up and press Fn-F8 for Dell, Fn-F7 for Lenovo or whatever the vendor code is to switch display output to HDMI during the BIOS initialisation, then Linux will follow the BIOS and send all output to the HDMI port for the early stages of booting. Apparently Lenovo systems have the Fn key mapped in the BIOS so an external keyboard could be used to switch between display outputs, but the PiKVM software doesn’t appear to support that. For other systems (probably including the Dell laptops that interest me) the Fn key apparently can’t be simulated externally. So for using this to work on laptops in another city I need to have someone local press Fn-F8 at the right time to allow me to change BIOS settings.
It is possible to configure the Linux kernel to mirror display to external HDMI and an internal laptop screen. But this doesn’t seem useful to me as the use cases for this device don’t require that. If you are using it for a server that doesn’t have iDRAC/ILO or other management hardware there will be no other “monitor” and all the output will go through the only connected HDMI device. My main use for it in the near future will be for supporting remote laptops, when Linux has a problem on boot as an easier option than talking someone through Linux commands and for such use it will be a temporary thing and not something that is desired all the time.
For the gdm3 login program you can copy the .config/monitors.xml file from a GNOME user session to the gdm home directory to keep the monitor settings. This configuration option is decent for the case where a fixed set of monitors are used but not so great if your requirement is “display a login screen on anything that’s available”. Is there an xdm type program in Debian/Ubuntu that supports this by default or with easy reconfiguration?
Conclusion
The PiKVM is a well engineered and designed product that does what’s expected at a low price. There are lots of minor issues with using it which aren’t the fault of the developers but are due to historical decisions in the design of BIOS and Linux software. We need to change the Linux software in question and lobby hardware vendors for BIOS improvements.
The feature for connecting to an ATX PSU was unexpected and could be really handy for some people, it’s not something I have an immediate use for but is something I could possibly use in future. I like the way they shipped the hardware for it as part of the package giving the user choices about how they use it, many vendors would make it an optional extra that costs another $100. This gives the PiKVM more functionality than many devices that are much more expensive.
The web UI wasn’t as user friendly as it might have been, but it’s a lot better than iDRAC so I don’t have a serious complaint about it. It would be nice if there was an option for creating macros for keyboard scancodes so I could try and emulate the Fn options and keys for volume control on systems that support it.
RcppFastAD
wraps the FastAD
header-only C++ library by James which provides a C++
implementation of both forward and reverse mode of automatic
differentiation. It offers an easy-to-use header library (which we
wrapped here) that is both lightweight and performant. With a little
bit of Rcpp glue, it is also easy to
use from R in simple C++ applications. This release updates the quick
fix in release
0.0.3 from a good week ago. James took a good look and
properly disambiguated the statement that led clang to complain, so we
are back to compiling as C++17 under all compilers which makes for a
slightly wider reach.
The NEWS file for this release follows.
Changes in version 0.0.4
(2024-09-24)
The package now properly addresses a clang warning on empty variadic
macro arguments and is back to C++17 (James in #10)
During COVID companies suddenly found themselves able to offer remote working where it hadn’t previously been on offer. That’s changed over the past 2 or so years, with most places I’m aware of moving back from a fully remote situation to either some sort of hybrid, or even full time office attendance. For example last week Amazon announced a full return to office, having already pulled remote-hired workers in for 3 days a week.
I’ve seen a lot of folk stating they’ll never work in an office again, and that RTO is insanity. Despite being lucky enough to work fully remotely (for a role I’d been approached about before, but was never prepared to relocate for), I feel the objections from those who are pro-remote often fail to consider the nuances involved. So let’s talk about some of the reasons why companies might want to enforce some sort of RTO.
Real estate value
Let’s clear this one up first. It’s not about real estate value, for most companies. City planners and real estate investors might care, but even if your average company owned their building they’d close it in an instant all other things being equal. An unoccupied building costs a lot less to maintain. And plenty of companies rent and would save money even if there’s a substantial exit fee.
Occupancy levels
That said, once you have anyone in the building the equation changes. If you’re having to provide power, heating, internet, security/front desk staff etc, you want to make sure you’re getting your money’s worth. There’s no point heating a building that can seat 100 when only 10 people are present. One option is to downsize the building, but that leads to not being able to assign everyone a desk, for example. No one I know likes hot desking. There are also scheduling problems about ensuring there are enough desks for everyone who might turn up on a certain day, and you’ve ruled out the option of company/office wide events.
Coexistence builds relationships
As a remote worker I wish it wasn’t true that most people find it easier to form relationships in person, but it is. Some of this can be worked on with specific “teambuilding” style events, rather than in office working, but I know plenty of folk who hate those as much as they hate the idea of being in the office. I am lucky in that I work with a bunch of folk who are terminally online, so it’s much easier to have those casual conversations even being remote, but I also accept I miss out on some things because I’m just not in the office regularly enough. You might not care about this (“I just need to put my head down and code, not talk to people”), but don’t discount it as a valid reason why companies might want their workers to be in the office. This often matters even more for folk at the start of their career, where having a bunch of experienced folk around to help them learn and figure things out ends up working much better in person (my first job offered to let me go mostly remote when I moved to Norwich, but I said no as I knew I wasn’t ready for it yet).
Coexistence allows for unexpected interactions
People hate the phrase “water cooler chat”, and I get that, but it covers the idea of casual conversations that just won’t happen the same way when people are remote. I experienced this while running Black Cat; every time Simon and I met up in person we had a bunch of useful conversations even though we were on IRC together normally, and had a VoIP setup that meant we regularly talked too. Equally when I was at Nebulon there were conversations I overheard in the office where I was able to correct a misconception or provide extra context. Some of this can be replicated with the right online chat culture, but I’ve found many places end up with folk taking conversations to DMs, or they happen in “private” channels. It happens more naturally in an office environment.
It’s easier for bad managers to manage bad performers
Again, this falls into the category of things that shouldn’t be true, but are. Remote working has increased the ability for people who want to slack off to do so without being easily detected. Ideally what you want is that these folk, if they fail to perform, are then performance managed out of the organisation. That’s hard though, there are (rightly) a bunch of rights workers have (I’m writing from a UK perspective) around the procedure that needs to be followed. Managers need organisational support in this to make sure they get it right (and folk are given a chance to improve), which is often lacking.
Summary
Look, I get there are strong reasons why offering remote is a great thing from the company perspective, but what I’ve tried to outline here is that a return-to-office mandate can have some compelling reasons behind it too. Some of those might be things that wouldn’t exist in an ideal world, but unfortunately fixing them is a bigger issue than just changing where folk work from. Not acknowledging that just makes any reaction against office work seem ill-informed, to me.
After years on the waiting list, May First was just
given a /24 block of IP addresses. Excellent.
Now we want to start using them for, among other things, sending email.
I haven’t added a new IP address to our mail relays in a while and things seems
to change regularly in the world of email so I’m curious: what’s the best 2024
way to warm up IP addresses, particularly using postfix?
SendGrid has a nice page on the
topic. It
establishes the number of messages to send per day. But I’m not entirely sure
how to fit messages per day into our setup.
We use round robin DNS to direct email to one of several dozen email relay
servers using postfix. And unfortunately our DNS software
(knot) doesn’t
have a way to add weights to ensure some IPs show up more often than others
(much less limit the specific number of messages a given relay should get).
If default_destination_recipient_limit is over 1, then
default_destination_rate_delay specifies the minimum delay between sending
email to the same domain.
So, I’m starting our IP addresses out at 30m - which prevents any single domain
from receiving more than 2 messages per hour. Sadly, there are a lot of
different domain names that deliver to the same set of popular corporate MX
servers, so I am not sure I can accurately control how many messages a given
provider sees coming from a given IP address. But it’s a start.
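As a sketch, the relevant main.cf fragment for a warming-up relay might look like this (the 30m value is the one used in this post, not a recommendation):

```
# /etc/postfix/main.cf on a warming-up relay
# default_destination_recipient_limit defaults to 50 (i.e. over 1),
# so the rate delay below applies per destination domain.
default_destination_rate_delay = 30m
```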
A bigger problem is that messages that exceed the limit hang out in the
active queue until they can be sent without violating the rate limit. Since I
can’t fully control the number of messages a given queue receives (due to my
inability to control the DNS round robin weights), a lot of messages are going
to be severely delayed, especially ones with an @gmail.com domain name.
I know I can temporarily set relayhost to a different queue and flush
deferred messages, however, as far as I can tell, it doesn’t work with
active messages.
To help mitigate the problem I’m only using our bulk mail queue to warm up IPs,
but really, this is not ideal.
Suggestions welcome!
Update #1
If you are running postfix in a multi-instance setup and you have instances
that are already warmed up, you can move active messages between queues with
these steps:
# Put the message on hold in the warming up instance
postsuper -c /etc/postfix-warmingup -h $queueid
# Copy to a warmed up instance
cp --preserve=mode,ownership,timestamp /var/spool/postfix-warmingup/hold/$queueid /var/spool/postfix-warmedup/incoming/
# Queue the message
postqueue -c /etc/postfix-warmedup -i $queueid
# Delete from the original queue.
postsuper -c /etc/postfix-warmingup -d $queueid
After just 12 hours we had thousands of messages piling up. This warm up method
was never going to work without the ability to move them to a faster queue.
[Additional update: be sure to reload the postfix instance after flushing the queue so
messages are drained from the active queue on the correct schedule. See update #4.]
Update #2
After 24 hours, most email is being accepted as far as I can tell. I am still
getting a small percentage of email deferred by Yahoo with:
421 4.7.0 [TSS04] Messages from 204.19.241.9 temporarily deferred due to unexpected volume or user complaints - 4.16.55.1; see https://postmaster.yahooinc.com/error-codes (in reply
So I will keep it at 30m for another 24 hours or so and then move to 15m. Now
that I can flush the backlog of active messages I am in less of a hurry.
Update #3
Well, this doesn’t seem to be working the way I want it to.
When a message arrives faster than the designated rate limit, it remains in the active queue.
I’m not entirely sure how the timing is supposed to work, but at this point I’m
down to a 5m rate delay, and the active messages are just hanging out for a lot
longer than 5m. I tried flushing the queue, but that only seems to affect the
deferred messages. I finally got them re-tried with systemctl reload. I
wonder if there is a setting to control this retry? Or better yet, why can’t
these messages that exceed the rate delay be deferred instead?
Update #4
I think I see why I was confused in Update #3 about the timing. I suspect that
when I move messages out of the active queue it screws up the timer. Reloading
the instance resets the timer. Every time you muck with active messages, you
should reload.
The relational model is probably the one innovation that brought computers to the mainstream for business users. This article by Donald Chamberlin, creator of one of the first query languages (that evolved into the ubiquitous SQL), presents its history as a commemoration of the 50th anniversary of his publication of said query language.
The article begins by giving background on information processing before the advent of today’s database management systems: with systems storing and processing information based on sequential-only magnetic tapes in the 1950s, adopting a record-based, fixed-format filing system was far from natural. The late 1960s and early 1970s saw many fundamental advances, among which one of the best known is E. F. Codd’s relational model. The first five pages (out of 12) present the evolution of the data management community up to the 1974 SIGFIDET conference. This conference was so important in the eyes of the author that, in his words, it is the event that “starts the clock” on 50 years of relational databases.
The second part of the article tells about the growth of the structured English query language (SEQUEL), eventually renamed SQL, including the importance of its standardization and its presence in commercial products as the dominant database language since the late 1970s. Chamberlin presents short histories of the various implementations, many of which remain dominant names today, namely Oracle, Informix, and DB2. Entering the 1990s, open-source communities introduced MySQL, PostgreSQL, and SQLite.
The final part of the article presents controversies and criticisms related to SQL and the relational database model as a whole. Chamberlin presents the main points of controversy throughout the years: 1) the SQL language lacks orthogonality; 2) SQL tables, unlike formal relations, might contain null values; and 3) SQL tables, unlike formal relations, may contain duplicate rows. He explains the issues and tradeoffs that guided the language design as it unfolded. Finally, a section presents several points that explain how SQL and the relational model have remained, for 50 years, a “winning concept,” as well as some thoughts regarding the NoSQL movement that gained traction in the 2010s.
This article is written with clear language and structure, making it easy and pleasant to read. It does not drive a technical point, but instead is a recap on half a century of developments in one of the fields most important to the commercial development of computing, written by one of the greatest authorities on the topic.
A legit email went to spam. Here are the redacted, relevant headers:
[redacted]
X-Spam-Flag: YES
X-Spam-Level: ******
X-Spam-Status: Yes, score=6.3 required=5.0 tests=DKIM_SIGNED,DKIM_VALID,
[redacted]
* 1.0 RCVD_IN_XBL RBL: Received via a relay in Spamhaus XBL
* [185.220.101.64 listed in xxxxxxxxxxxxx.zen.dq.spamhaus.net]
* 3.0 RCVD_IN_SBL_CSS Received via a relay in Spamhaus SBL-CSS
* 2.5 RCVD_IN_AUTHBL Received via a relay in Spamhaus AuthBL
* 0.0 RCVD_IN_PBL Received via a relay in Spamhaus PBL
[redacted]
[very first received line follows...]
Received: from [10.137.0.13] ([185.220.101.64])
by smtp.gmail.com with ESMTPSA id ffacd0b85a97d-378956d2ee6sm12487760f8f.83.2024.09.11.15.05.52
for <xxxxx@mayfirst.org>
(version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
Wed, 11 Sep 2024 15:05:53 -0700 (PDT)
At first I thought a Gmail IP address was listed in Spamhaus - I even opened a
ticket. But then I realized it wasn’t the last hop that Spamhaus is complaining
about, it’s the first hop, specifically the IP 185.220.101.64, which appears
to be a Tor exit node.
The sender is using their own client to relay email directly to Gmail. Like any
sane person, they don’t trust Gmail to protect their privacy, so they are
sending via Tor. But WTF, Gmail is not stripping the sending IP address from
the header.
I’m a big fan of harm reduction and have always considered using your own
client to relay email with Gmail as a nice way to avoid some of the
surveillance tax Google imposes.
However, it seems that if you pursue this option you have two unpleasant
choices:
Embed your IP address in every email message or
Use Tor and have your email messages go to spam
I suppose you could also use a VPN, but I doubt the IP reputation of most VPN
exit nodes is going to be more reliable than Tor’s.
I realize that because I have several chairs, the phrase “my chair” is ambiguous. To reduce confusion, I will refer to the head of my academic department as “my office chair” going forward.
Review: The Book That Broke the World, by Mark Lawrence
Series: Library Trilogy #2
Publisher: Ace
Copyright: 2024
ISBN: 0-593-43796-9
Format: Kindle
Pages: 366
The Book That Broke the World is high fantasy and a direct sequel
to The Book That Wouldn't Burn. You
should not start here. In a delightful break from normal practice, the
author provides a useful summary of the previous volume at the start of
this book to jog your memory.
At the end of The Book That Wouldn't Burn, the characters were
scattered and in various states of corporeality after some major
revelations about the nature of the Library and the first appearance of
the insectile Skeer. The Book That Broke the World picks up where it
left off, and there is a lot more contact with the Skeer, but my guess
that they would be the next viewpoint characters does not pan out.
Instead, we get a new group and a new protagonist: Celcha, who sees
angels who come to visit her brother.
I have complaints, but before I launch into those, I should say that I
liked this book apart from the totally unnecessary cannibalism. (I'll get
to that.) Livira is a bit sidelined, which is regrettable, but Celcha and
her brother are interesting new characters, and both Arpix and Clovis,
supporting characters in the first book, get some excellent character
development. Similar to the first book, this is a puzzle box story full
of world-building tidbits with intellectually-satisfying interactions.
Lawrence elaborates and complicates his setting in ways that don't
contradict earlier parts of the story but create more room and depth for
the characters to be creative. I came away still invested in this world
and eager to find out how Lawrence pulls the world-building and narrative
threads together.
The biggest drawback of this book is that it's not new. My thought after
finishing the first book of the series was that if Lawrence had enough
world-building ideas to fill three books to that same level of density,
this had the potential of being one of my favorite fantasy series of all
time. By the end of the second book, I concluded that this is not the
case. Instead of showing us new twists and complications the way the
first book did throughout, The Book That Broke the World mostly
covers the same thematic ground from some new angles. It felt like
Lawrence was worried the reader of the first book may not have understood
the theme or the world-building, so he spent most of the second book
nailing down anything that moved.
I found that frustrating. One of the best parts of The Book That
Wouldn't Burn was that Lawrence trusted the reader to keep up, which for
me hit the glorious but rare sweet spot of pacing where I was figuring out
the world at roughly the same pace as the characters. It surprised me in
some very enjoyable ways. The Book That Broke the World did not
surprise me. There are a few new things, which I enjoyed, and a few
elaborations and developments of ideas, which I mostly enjoyed, but I saw
the big plot twist coming at least fifty pages before it happened and
found the aftermath more annoying than revelatory. It doesn't help that
the plot rests on character misunderstandings, one of my least favorite
tropes.
One of the other disappointments of this book is that the characters stop
using the Library as a library. The Library at the center of this series
is a truly marvelous piece of world-building with numerous fascinating
features that are unrelated to its contents, but Livira used it first and
foremost as a repository of books. The first book was full of characters
solving problems by finding a relevant book and reading it.
In The Book That Broke the World, sadly, this is mostly gone. The
Library is mostly reduced to a complicated Big Dumb Object setting. It's
still a delightful bit of world-building, and we learn about a few new
features, but I only remember two places where the actual books are
important to the story. Even the book referenced in the title is mostly
important as an artifact with properties unrelated to the words that it
contains or to the act of reading it. I think this is a huge lost
opportunity and something I hope Lawrence fixes in the last book of the
trilogy.
This book instead focuses on the politics around the existence of the
Library itself. Here I'm cautiously optimistic, although a lot is going
to depend on the third book. Lawrence has set up a three-sided argument
between groups that I will uncharitably describe as the libertarian
techbros, the "burn it all down" reactionaries, and the neoliberal
centrist technocrats. All three of those positions suck, and Lawrence had
better be setting the stage for Livira to find a different path. Her
unwillingness to commit to any of those sides gives me hope, but bringing
this plot to a satisfying conclusion is going to be tricky. I hope I like
what Lawrence comes up with, but it feels far from certain.
It doesn't help that he's started delivering some points with a
sledgehammer, and that's where we get to the unnecessary cannibalism.
Thankfully this is a fairly small part of the tail end of the book, but it
was an unpleasant surprise that I did not want in this novel and that I
don't think made the story any better.
It's tempting to call the cannibalism gratuitous, but it does fit one of
the main themes of this story, namely that humans are depressingly good at
using any rule-based object in unexpected and nasty ways that are contrary
to the best intentions of the designer. This is the fundamental challenge
of the Library as a whole and the question that I suspect the third book
will be devoted to addressing, so I understand why Lawrence wanted to
emphasize his point. The reason why there is cannibalism here is directly
related to a profound misunderstanding of the properties of the library,
and I detected an echo of one of C.S. Lewis's arguments in
The Last Battle about the nature of
Hell.
The problem, though, is that this is Satanic baby-killerism, to borrow a
term from
Fred Clark. There are numerous ways to show this type of perversion of
well-intended systems, which I know because Lawrence used other ones in
the first book that were more subtle but equally effective. One of the
best parts of The Book That Wouldn't Burn is that there were few
real villains. The conflict was structural, all sides had valid
perspectives, and the ethical points of that story were made with some
care and nuance.
The problem with cannibalism as it's used here is not merely that it's
gross and disgusting and off-putting to the reader, although it is all of
those things. If I wanted to read horror, I would read horror novels. I
don't appreciate surprise horror used for shock value in regular fantasy.
But worse, it's an abandonment of moral nuance. The function of
cannibalism in this story is like the function of Satanic baby-killers:
it's to signal that these people are wholly and irredeemably evil. They
are the Villains, they are Wrong, and they cease to be characters and
become symbols of what the protagonists are fighting. This is destructive
to the story because it's designed to provoke a visceral short-circuit in
the reader and let the author get away with sloppy story-telling. If the
author needs to use tactics like this to point out who is the villain,
they have failed to set up their moral quandary properly.
The worst part is that this was entirely unnecessary because Lawrence's
story-telling wasn't sloppy and he set up his moral quandary just fine.
No one was confused about the ethical point here. I as the reader was
following without difficulty, and had appreciated the subtlety with which
Lawrence posed the question. But apparently he thought he was too subtle
and decided to come back to the point with a pile-driver. I think that
seriously injured the story. The ethical argument here is much more
engaging and thought-provoking when it's more finely balanced.
That's a lot of complaints, mostly because this is a good book that I
badly wanted to be a great book but which kept tripping over its own feet.
A lot of trilogies have weak second books. Hopefully this is another
example of the mid-story sag, and the finale will be worthy of the start
of the story. But I have to admit the moral short-circuiting and the
de-emphasis of the actual books in the library has me a bit nervous. I
want a lot out of the third book, and I hope I'm not asking this author
for too much.
If you liked the first book, I think you'll like this one too, with the
caveat that it's quite a bit darker and more violent in places, even apart
from the surprise cannibalism. But if you've not started this series, you
may want to wait for the third book to see if Lawrence can pull off the
ending.
Followed by The Book That Held Her Heart, currently scheduled for
publication in April of 2025.
Review: The Wings Upon Her Back, by Samantha Mills
Publisher: Tachyon
Copyright: 2024
ISBN: 1-61696-415-4
Format: Kindle
Pages: 394
The Wings Upon Her Back is a political steampunk science fantasy
novel. If the author's name sounds familiar, it may be because Samantha
Mills's short story "Rabbit Test" won Nebula, Locus, Hugo, and Sturgeon
awards. This is her first novel.
Winged Zemolai is a soldier of the mecha god and the protege of Mecha
Vodaya, the Voice. She has served the city-state of Radezhda by defending
it against all enemies, foreign and domestic, for twenty-six years.
Despite that, it takes only a moment of errant mercy for her entire life
to come crashing down. On a whim, she spares a kitchen worker who was
concealing a statue of the scholar god, meaning that he was only
pretending to worship the worker god like all workers should. Vodaya is
unforgiving and uncompromising, as is the sleeping mecha god. Zemolai's
wings are ripped from her back and crushed in the hand of the god, and
she's left on the ground to die of mechalin withdrawal.
The Wings Upon Her Back is told in two alternating timelines. The
main one follows Zemolai after her exile as she is rescued by a young
group of revolutionaries who think she may be useful in their plans. The
other thread starts with Zemolai's childhood and shows the reader how she
became Winged Zemolai: her scholar family, her obsession with flying, her
true devotion to the mecha god, and the critical early years when she
became Vodaya's protege. Mills maintains the separate timelines through
the book and wraps them up in a rather neat piece of symbolic parallelism
in the epilogue.
I picked up this book on a recommendation from C.L. Clark, and yes, indeed, I can see why she liked this book. It's a
story about a political awakening, in which Zemolai slowly realizes that
she has been manipulated and lied to and that she may, in fact, be one of
the baddies. The Wings Upon Her Back is more personal than some
other books with that theme, since Zemolai was specifically (and
abusively) groomed for her role by Vodaya. Much of the book is Zemolai
trying to pull out the hooks that Vodaya put in her or, in the flashback
timeline, the reader watching Vodaya install those hooks.
The flashback timeline is difficult reading. I don't think Mills could
have left it out, but she says in the afterword that it was the hardest
part of the book to write and it was also the hardest part of the book to
read. It fills in some interesting bits of world-building and backstory,
and Mills does a great job pacing the story revelations so that both
threads contribute equally, but mostly it's a story of manipulative abuse.
We know from the main storyline that Vodaya's tactics work, which gives
those scenes the feel of a slow-motion train wreck. You know what's going
to happen, you know it will be bad, and yet you can't look away.
It occurred to me while reading this that Emily Tesh's
Some Desperate Glory told a similar type
of story without the flashback structure, which eliminates the stifling
feeling of inevitability. I don't think that would have worked for
this story. If you simply rearranged the chapters of The Wings Upon
Her Back into a linear narrative, I would have bailed on the book.
Watching Zemolai being manipulated would have been too depressing and
awful for me to make it to the payoff without the forward-looking hope of
the main timeline. It gave me new appreciation for the difficulty of what
Tesh pulled off.
Mills uses this interwoven structure well, though. At about 90% through
this book I had no idea how it could end in the space remaining, but it
reaches a surprising and satisfying conclusion. Mills uses a type of
ending that normally bothers me, but she does it by handling the
psychological impact so well that I couldn't help but admire it. I'm
avoiding specifics because I think it worked better when I wasn't
expecting it, but it ties beautifully into the thematic point of the book.
I do have one structural objection, though. It's one of those problems I
didn't notice while reading, but that started bothering me when I thought
back through the story from a political lens. The Wings Upon Her
Back is Zemolai's story, her redemption arc, and that means she drives
the plot. The band of revolutionaries are great characters (particularly
Galiana), but they're supporting characters. Zemolai is older, more
experienced, and knows critical information they don't have, and she uses
it to effectively take over. As setup for her character arc, I see why
Mills did this. As political praxis, I have issues.
There is a tendency in politics to believe that political skill is
portable and repurposable. Converts from the opposition are welcomed not
only because they indicate added support, but also because they can use
their political skill to help you win instead. To an extent
this is not wrong, and is probably the most true of combat skills (which
Zemolai has in abundance). But there's an underlying assumption that
politics is symmetric, and a critical reason why I hold many of the
political positions that I do hold is that I don't think politics is
symmetric.
If someone has been successfully stoking resentment and xenophobia in
support of authoritarians, converts to an anti-authoritarian cause, and
then produces propaganda stoking resentment and xenophobia against
authoritarians, this is in some sense an improvement. But if one believes
that resentment and xenophobia are inherently wrong, if one's politics are
aimed at reducing the resentment and xenophobia in the world, then in a
way this person has not truly converted. Worse, because this is an
effective manipulation tactic, there is a strong tendency to put this type
of political convert into a leadership position, where they will,
intentionally or not, start turning the anti-authoritarian movement into a
copy of the authoritarian movement they left. They haven't actually
changed their politics because they haven't understood (or simply don't
believe in) the fundamental asymmetry in the positions. It's the same
criticism that I have of realpolitik: the ends do not justify the
means because the means corrupt the ends.
Nothing that happens in this book is as egregious as my example, but the
more I thought about the plot structure, the more it bothered me that
Zemolai never listens to the revolutionaries she joins long enough to
wrestle with why she became an agent of an authoritarian state and they
didn't. They got something fundamentally right that she got wrong, and
perhaps that should have been reflected in who got to make future
decisions. Zemolai made very poor choices and yet continues to be the
sole main character of the story, the one whose decisions and actions
truly matter. Maybe being wrong about everything should be disqualifying
for being the main character, at least for a while, even if you think
you've understood why you were wrong.
That problem aside, I enjoyed this. Both timelines were compelling and
quite difficult to put down, even when they got rather dark. I could have
done with less body horror and a few fewer fight scenes, but I'm glad I
read it.
Science fiction readers should be warned that the world-building, despite
having an intricate and fascinating surface, is mostly vibes. I started
the book wondering how people with giant metal wings on their back can
literally fly, and thought the mentions of neural ports, high-tech
materials, and immune-suppressing drugs might mean that we'd get some sort
of explanation. We do not: heavier-than-air flight works because it looks
really cool and serves some thematic purposes. There are enough hints of
technology indistinguishable from magic that you could make up your own
explanations if you wanted to, but that's not something this book is
interested in. There's not a thing wrong with that, but don't get caught
by surprise if you were in the mood for a neat scientific explanation of
apparent magic.
Recommended if you like somewhat-harrowing character development with a
heavy political lens and steampunk vibes, although it's not the sort of
book that I'd press into the hands of everyone I know. The Wings
Upon Her Back is a complete story in a single novel.
Content warning: the main character is a victim of physical and emotional
abuse, so some of that is a lot. Also surgical gore, some torture, and
genocide.
I bought the Kogan AX1800 Wifi6 Mesh with 3 nodes for $140, the price has now dropped to $130. It’s only Wifi 6 (not 6E which has the extra 6GHz frequency) because all the 6E ones were more expensive than I felt like paying.
I’ve got it running and it’s working really well. One of my laptops has a damaged wire connecting to its Wifi device, which degraded the signal to the point that I could usually only connect to wifi in the computer room (and then walk with it to another room once connected). Now I can connect that laptop to wifi in any part of my home. I can also get decent wifi access in my car in front of my home, which covers the important corner case of walking to my car and immediately asking Google Maps for directions. Previously my phone would be deciding whether to switch away from wifi due to poor signal, which delayed getting directions; now the directions come up quickly.
I’ve done tests with the Speedtest.net Android app and now get speeds of about 52Mbit/17Mbit in all parts of my home, limited only by the speed of my NBN connection (one of the many reasons for hating conservatives is giving us expensive slow Internet). As my main reason for buying the devices was Internet access, they have clearly met my requirements, and probably meet the requirements of most people as well. Getting that speed is not trivial: my neighbours have lots of Wifi APs and bandwidth is congested. My Kogan 4K Android TV now plays 4K Netflix without pausing even though it only supports 2.4GHz wifi, so having a wifi mesh node next to the TV seems to help it.
I did some tests with the Olive Tree FTP server on a Galaxy Note 9 phone running the stock Samsung Android and got over 10MByte (80Mbit) upload and 8Mbyte (64Mbit) download speeds. This might be limited by the Android app or might be limited by the older version of Android. But it still gives higher speeds than my home Internet connection and much higher speeds than I need from an Android device.
Running iperf on Linux laptops talking to a Linux workstation that’s wired to the main mesh node I get speeds of 27.5Mbit from an old laptop on 2.4GHz wifi, 398Mbit from a new Wifi5 laptop when near the main mesh node, and 91Mbit from the same laptop when at the far end of my home. So not as fast as I’d like but still acceptable speeds.
The usual claim is that Wifi 6 will be about 3× faster than Wifi 5. My Wifi 5 laptop peaks at about 400Mbit near the main node, so 3× that would be roughly 1.2Gbit, about 20% more than the Gigabit ethernet ports on the wifi nodes can carry. So while 2.5Gbit ethernet on Wifi 6 APs would be a good feature to have, it would only provide about a 20% benefit at some future time when I have laptops with Wifi 6. At this time all the devices with 2.5Gbit ethernet cost more than I wanted to pay, so I’m happy with this. It will probably be quite a while before laptops with Wifi 6 are in the price range I feel like paying.
For Wifi 6E it seems that anything less than 2.5Gbit ethernet will be a significant bottleneck. But I expect that by the time I buy a Wifi 6E mesh they will all have 2.5Gbit ethernet as standard.
The configuration of this device was quite easy via the built in web pages, everything worked pretty much as I expected and I hardly had to look at the manual. The mesh nodes are supposed to connect to each other when you press hardware buttons but that didn’t work for me so I used the web admin page to tell them to connect which worked perfectly. The admin of this seemed to be about as good as it gets.
Conclusion
The performance of this mesh hardware is quite decent. I can’t know for sure if it’s good or bad because performance really depends on what interference there is. But using this means that for me the Internet connection is now the main bottleneck for all parts of my home and I think it’s quite likely that most people in Australia who buy it will find the same result.
So for everyone in Australia who doesn’t have fiber to their home this seems like an ideal set of mesh hardware. It’s cheap, easy to setup, has no cloud stuff to break your configuration, gives quite adequate speed, and generally just does the job.
I've a set of Alesis M1Active 330 USB on my desk to listen to music.
They were relatively inexpensive (~100€), have USB and sound pretty good for their size/price.
They were also sitting on my desk unused for a while, because the left speaker didn't produce any sound.
Well, almost any.
If you moved the volume knob around long enough you might find a position where the left speaker would work a bit,
but it'd be quieter than the right one and would stop working again after some time.
Pretty unacceptable when you want to listen to music.
Given the right speaker was working just fine and the left would work a bit when the volume knob is moved,
I was quite certain which part was to blame: the potentiometer.
So just open the right speaker (it contains all the logic boards, power supply, etc),
take out the broken potentiometer, buy a new one, replace, done.
Sounds easy?
Well, to open the speaker you gotta loosen 8 (!) screws on the back.
At least it's not glued, right?
Once the screws are removed you can pull out the back plate, which will bring the power supply,
USB controller, sound amplifier and cables, lots of cables: two pairs of thick cables, one to each driver,
one thin pair for the power switch and two sets of "WTF is this, I am not going to trace pinouts today",
one with a 6 pin plug, one with a 5 pin one.
Unplug all of these!
Yes, they are plugged, nice.
Nope, still no friggin' idea how to get to the potentiometer.
If you trace the "thin pair" and "WTF1" cables, you see they go inside a small wooden box structure.
So we have to pull the thing from the front?
Okay, let's remove the plastic part of the knob.
Right, this looks like a potentiometer.
Unscrew it.
No, no need for a Makita wrench, I just didn't have anything else in the right size (10mm).
Still, no movement.
Let's look again from the inside!
Oh ffs, there are six more screws inside, holding the front.
Away with them!
Just need a very long PH1 screwdriver.
Now you can slowly remove the part of the front where the potentiometer is.
Be careful: the top tweeter is mounted to the front, not the main case, and so is the headphone jack, without an obvious way to detach it.
But you can move away the front far enough to remove the small PCB with the potentiometer and the LED.
Great, this was the easy part!
The only thing printed on the potentiometer is "A10K".
10K is easy -- 10kOhm.
A?!
Wikipedia says "A" means "logarithmic", but only if made in the US or Asia.
In Europe that'd be "linear".
"B" in US/Asia means "linear", in Europe "logarithmic".
Do I need to tap the sign again?
(The sign is a print of XKCD#927.)
My multimeter says in this case it's something like logarithmic.
On the right channel anyway, the left one is more like a chopping board.
And what's this green box at the end?
Oh right, this thing also turns the power on and off.
So it's a power switch.
Where the fuck do I get a logarithmic 10kOhm stereo potentiometer with a power switch?
And then in the exact right size too?!
The fun continued into the following evening with a cocktail
reception at the
Houston Moxy hotel. Sustainable beverages were available for people
to try.
Smiling Oak sustainable whiskey is produced locally in Texas
and seeks to reuse barrels and other materials in the production process.
The Moxy team provided a wide range of food and drinks.
Sep 11 05:08:03 Warning: mysqldump: Error 2013: Lost connection to server during query when dumping table `1C4Uonkwhe_options` at row: 1402
Sep 11 05:08:03 Warning: Failed to dump mysql databases ic_wp
It’s a WordPress database having trouble dumping the options table.
The error log has a corresponding message:
Sep 11 13:50:11 mysql007 mariadbd[580]: 2024-09-11 13:50:11 69577 [Warning] Aborted connection 69577 to db: 'ic_wp' user: 'root' host: 'localhost' (Got an error writing communication packets)
The Internet is full of suggestions, almost all of which either focus on the
network connection between the client and the server or the FEDERATED plugin.
We aren’t using the federated plugin and this error happens when connecting via
the socket.
Check it out - what is better than a consistently reproducible problem!
It happens if I try to select all the values in the table:
root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_options' ic_wp > /dev/null
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~#
It happens when I specify one specific offset:
root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_options limit 1 offset 1402' ic_wp
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~#
It happens if I specify the field name explicitly:
root@mysql007:~# mysql --protocol=socket -e 'select option_id,option_name,option_value,autoload from 1C4Uonkwhe_options limit 1 offset 1402' ic_wp
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~#
It happens if I select only the option_value field:
root@mysql007:~# mysql --protocol=socket -e 'select option_value from 1C4Uonkwhe_options limit 1 offset 1402' ic_wp
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~#
It doesn’t happen if I query the specific row by key field:
root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_options where option_id = 16296351' ic_wp > /dev/null
root@mysql007:~#
Hm. Surely there is some funky non-printing character in that option_value right?
root@mysql007:~# mysql --protocol=socket -e 'select CHAR_LENGTH(option_value) from 1C4Uonkwhe_options where option_id = 16296351' ic_wp
+---------------------------+
| CHAR_LENGTH(option_value) |
+---------------------------+
| 0 |
+---------------------------+
root@mysql007:~# mysql --protocol=socket -e 'select HEX(option_value) from 1C4Uonkwhe_options where option_id = 16296351' ic_wp
+-------------------+
| HEX(option_value) |
+-------------------+
| |
+-------------------+
root@mysql007:~#
Resetting the value to an empty value doesn’t make a difference:
root@mysql007:~# mysql --protocol=socket -e 'update 1C4Uonkwhe_options set option_value = "" where option_id = 16296351' ic_wp
root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_options' ic_wp > /dev/null
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~#
Deleting the row in question causes the error to specify a new offset:
root@mysql007:~# mysql --protocol=socket -e 'delete from 1C4Uonkwhe_options where option_id = 16296351' ic_wp
root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_options' ic_wp > /dev/null
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~# mysqldump ic_wp > /dev/null
mysqldump: Error 2013: Lost connection to server during query when dumping table `1C4Uonkwhe_options` at row: 1401
root@mysql007:~#
If I put the record I deleted back in, we return to the old offset:
root@mysql007:~# mysql --protocol=socket -e 'insert into 1C4Uonkwhe_options VALUES(16296351,"z_taxonomy_image8905","","yes");' ic_wp
root@mysql007:~# mysqldump ic_wp > /dev/null
mysqldump: Error 2013: Lost connection to server during query when dumping table `1C4Uonkwhe_options` at row: 1402
root@mysql007:~#
I’m losing my little mind. Let’s get drastic and create a whole new table, copy over the data delicately working around
the deadly offset:
root@mysql007:~# mysql --protocol=socket -e 'create table 1C4Uonkwhe_new_options like 1C4Uonkwhe_options;' ic_wp
root@mysql007:~# mysql --protocol=socket -e 'insert into 1C4Uonkwhe_new_options select * from 1C4Uonkwhe_options limit 1402 offset 0;' ic_wp
--- There are only 33 more records; not sure how to specify an unlimited limit, but 100 does the trick.
root@mysql007:~# mysql --protocol=socket -e 'insert into 1C4Uonkwhe_new_options select * from 1C4Uonkwhe_options limit 100 offset 1403;' ic_wp
Now let’s make sure all is working properly:
root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_new_options' ic_wp >/dev/null;
Now let’s examine which row we are missing:
root@mysql007:~# mysql --protocol=socket -e 'select option_id from 1C4Uonkwhe_options where option_id not in (select option_id from 1C4Uonkwhe_new_options) ;' ic_wp
+-----------+
| option_id |
+-----------+
| 18405297 |
+-----------+
root@mysql007:~#
Wait, what? I was expecting option_id 16296351.
Oh, now we are getting somewhere. And I see my mistake: when using offsets, you need to use ORDER BY or you won’t get consistent results.
root@mysql007:~# mysql --protocol=socket -e 'select option_id from 1C4Uonkwhe_options order by option_id limit 1 offset 1402' ic_wp ;
+-----------+
| option_id |
+-----------+
| 18405297 |
+-----------+
root@mysql007:~#
Now that I have the correct row… what is in it:
root@mysql007:~# mysql --protocol=socket -e 'select * from 1C4Uonkwhe_options where option_id = 18405297' ic_wp ;
ERROR 2013 (HY000) at line 1: Lost connection to server during query
root@mysql007:~#
Well, that makes a lot more sense. Let’s start over with examining the value:
root@mysql007:~# mysql --protocol=socket -e 'select CHAR_LENGTH(option_value) from 1C4Uonkwhe_options where option_id = 18405297' ic_wp ;
+---------------------------+
| CHAR_LENGTH(option_value) |
+---------------------------+
| 50814767 |
+---------------------------+
root@mysql007:~#
Wow, that’s a lot of characters. If it were a book, it would be 35,000 pages
long (I just discovered this
site). It’s a LONGTEXT
field so it should be able to handle it. But now I have a better idea of what
could be going wrong. The name of the option is “rewrite_rules” so it seems
like something is going wrong with the generation of that option.
I imagine there is some tweak I can make to allow MariaDB to cough up the value
(max_allowed_packet seems the most likely candidate, since the value is far
larger than the usual 16M default; read_buffer_size and tmp_table_size are
other knobs to try). But I’ll start with checking in with the database owner,
because I don’t think 35,000 pages of rewrite rules is appropriate for any
site.
While fiddling around, I found a (fairly serious) vulnerability in Zyxel's
GS1900-10HP and related switches; today Zyxel released an advisory
with updated firmware, so I can publish my side of it as well. (Unfortunately
there's no Zyxel bounty program, but Zyxel PSIRT has been forthcoming all
along, which I guess is all you can hope for.)
The CVE (CVE-2024-38270) is sparse on details, so I'll simply paste my
original message to Zyxel below:
Hi,
GS1900-10HP (probably also many other switches in the same series),
firmware V2.80(AAZI.0) (also older ones) generate web authentication
tokens in an unsafe way. This makes it possible for an attacker
to guess them and hijack the session.
web_util_randStr_generate() contains code that is functionally
the same as this:
char token[17];
struct timeval now;

gettimeofday(&now, NULL);
srandom(now.tv_sec + now.tv_usec);
for (int i = 0; i < 16; ++i) {
    long r = random() % 62;
    char c;
    if (r < 10) {
        c = r + '0';         // 0..9
    } else if (r < 36) {
        c = r + ('A' - 10);  // A..Z
    } else {
        c = r + ('a' - 36);  // a..z
    }
    token[i] = c;
}
token[16] = 0;
(random() comes from uclibc, but it has the same generator as glibc,
so the code runs just as well on desktop Linux)
This token is generated on initial login, and stored in a cookie
on the client. This has multiple problems:
First, the clock is a known quantity; even if the switch is not on SNTP,
it is trivial to get its idea of time-of-day by just doing an HTTP
request and looking at the Date header. This means that if an attacker
knows precisely when the administrator logged in (for instance, by observing
an HTTPS login on the network), they will have a very limited range of
possible tokens to check.
Second, tv_sec and tv_usec are combined in an improper way, canceling
out much of the intended entropy. As long as one assumes that the
administrator logged in less than a day ago, the entire range of possible
seeds is contained within the range [now - 86400, now + 999999], i.e.
only about 1.1M possible cookies, which can simply be tried serially
even if one did not observe the original login. There is no brute-force
protection on the web interface.
I have verified that this attack is practical, by simply generating all the
tokens and asking for the status page repeatedly (it is trivial to see
whether it returns an authentication success or failure). The switch can
sustain about one try every 96 ms on average against an attacker on a local
LAN (there is no keepalive or multithreading, so the most trivial code is
seemingly also the best one), which means that an attack will succeed on
average after about 15 hours; my test run succeeded after a bit under three
hours. If there are multiple administrator sessions active, the expected time
to success is of course lower, although the tries are also somewhat slower
because the switch has to deal with the keepalive traffic from the admins.
This is a straightforward case of CWE-330 (Use of Insufficiently Random
Values), with subcategories CWE-331, CWE-334, CWE-335, CWE-337, CWE-339,
CWE-340, CWE-341 and probably others. The suggested fix is simple: Read
entropy from /dev/urandom or another good source, instead of using random().
(Make sure that you don't get bias issues due to the use of modulo; you can
use e.g. rejection sampling.)
Session timeout does help against this attack (by default, it is 3 minutes),
but only as long as the administrator has not kept a tab open. If the tab is
left open, it keeps making background requests that refresh the token
every five seconds, guaranteeing a 100% success rate if given a day or two.
There is also _tons_ of outdated software on the switch (kernel from 2008,
OpenSSH from 2013, netkit-telnetd which is no longer maintained, a fork of
a very old NET-SNMP, etc.), but I did not check whether there are any
relevant security holes or whether you have actually backported patches.
I haven't verified what their fix looks like, but it's probably somewhere
there in the GPL dump. :-)