Debian is a trademark of Software in the Public Interest, Inc. This site is operated independently in the spirit of point three of the Debian Social Contract, which tells us: "We will not hide problems."

February 10, 2025

Antoine Beaupré

Qalculate hacks

This is going to be a controversial statement, because some people are absolute nerds about this, but I need to say it.

Qalculate is the best calculator that has ever been made.

I am not going to try to convince you of this, I just wanted to put my bias out there before writing down these notes. I am a total fan.

This page will collect my notes on cool hacks I do with Qalculate. Most examples are copy-pasted from the command-line interface (qalc(1)), but I typically use the graphical interface as it's slightly better at displaying complex formulas. Discoverability is obviously also better for the cornucopia of features this fantastic application ships.

Qalc commandline primer

On Debian, Qalculate's CLI interface can be installed with:

apt install qalc

Then you start it with the qalc command, and end up on a prompt:

anarcat@angela:~$ qalc
> 

Then it's a normal calculator:

anarcat@angela:~$ qalc
> 1+1

  1 + 1 = 2

> 1/7

  1 / 7 ≈ 0.1429

> pi

  pi ≈ 3.142

> 

There's a bunch of variables to control display, approximation, and so on:

> set precision 6
> 1/7

  1 / 7 ≈ 0.142857
> set precision 20
> pi

  pi ≈ 3.1415926535897932385

When I need more, I typically browse around the menus. One big issue I have with Qalculate is that there are a lot of menus and features. I had to fiddle quite a bit to figure out that set precision command above. I might add more examples here as I find them.

Bandwidth estimates

I often use the data units to estimate bandwidths. For example, here's what 1 megabit per second is over a month ("about 300 GiB"):

> 1 megabit/s * 30 day to gibibyte 

  (1 megabit/second) × (30 days) ≈ 301.7 GiB

Or, "how long will it take to download X"; in this case, 1 GiB over a 100 Mbps link:

> 1GiB/(100 megabit/s)

  (1 gibibyte) / (100 megabits/second) ≈ 1 min + 25.90 s

Password entropy

To calculate how much entropy (in bits) a given password structure has, count the number of possibilities for each entry (say, [a-z] is 26 possibilities, "one word in an 8k dictionary" is 8000), take the base-2 logarithm, and multiply by the number of entries.

For example, a mixed-case alphabetic 14-character password is:

> log2(26*2)*14

  log₂(26 × 2) × 14 ≈ 79.81

... 80 bits of entropy. To get the equivalent in a Diceware password with an 8000-word dictionary, you would need:

> log2(8k)*x = 80

  (log₂(8 × 1000) × x) = 80 ≈

  x ≈ 6.170

... about 6 words, which gives you:

> log2(8k)*6

  log₂(8 × 1000) × 6 ≈ 77.79

78 bits of entropy.

Exchange rates

You can convert between currencies!

> 1 EUR to USD

  1 EUR ≈ 1.038 USD

Even fake ones!

> 1 BTC to USD

  1 BTC ≈ 96712 USD

This relies on a database pulled from the internet (typically the European Central Bank rates, see the source). It will prompt you if it's too old:

It has been 256 days since the exchange rates last were updated.
Do you wish to update the exchange rates now? y

As a reader pointed out, you can set the refresh rate for currencies, as some countries will require far more frequent exchange rate updates.

The graphical version has a little graphical indicator that, when you mouse over, tells you where the rate comes from.

Other conversions

Here are other neat conversions extracted from my history:

> teaspoon to ml

  teaspoon = 5 mL

> tablespoon to ml

  tablespoon = 15 mL

> 1 cup to ml 

  1 cup ≈ 236.6 mL

> 6 L/100km to mpg

  (6 liters) / (100 kilometers) ≈ 39.20 mpg

> 100 kph to mph

  100 kph ≈ 62.14 mph

> (108km - 72km) / 110km/h

  ((108 kilometers) − (72 kilometers)) / (110 kilometers/hour) ≈
  19 min + 38.18 s

Completion time estimates

This is a more involved example I often do.

Background

Say you have started a long-running copy job and you don't have the luxury of having a pipe you can insert pv(1) into to get a nice progress bar. For example, rsync or cp -R can have that problem (but not tar!).

(Yes, you can use --info=progress2 in rsync, but that estimate is incremental and therefore inaccurate unless you disable incremental mode with --no-inc-recursive, at which point you pay a huge up-front wait cost while the entire directory tree gets crawled.)
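For reference, such an invocation would look something like this; a hedged sketch where the source and destination paths are placeholders rather than the actual job discussed below:

# rsync -a --info=progress2 --no-inc-recursive /srv/ /srv-backup/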

Extracting a process start time

The first step is to gather data: find the process start time. If you were unfortunate enough to forget to run date --iso-8601=seconds before starting, you can get a similar timestamp with stat(1) on the process tree in /proc with:

$ stat /proc/11232
  File: /proc/11232
  Size: 0               Blocks: 0          IO Block: 1024   directory
Device: 0,21    Inode: 57021       Links: 9
Access: (0555/dr-xr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2025-02-07 15:50:25.287220819 -0500
Modify: 2025-02-07 15:50:25.287220819 -0500
Change: 2025-02-07 15:50:25.287220819 -0500
 Birth: -

So our start time is 2025-02-07 15:50:25; we shave off the nanoseconds there, as they're below our precision noise floor.

If you're not dealing with an actual UNIX process, you need to figure out a start time some other way: it could be when a SQL query or a network request was launched, whatever applies; that part is left as an exercise for the reader.
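Whatever the job, the cheapest insurance is to record a timestamp right before kicking it off. A minimal sketch, where the pg_dump job is a hypothetical placeholder for your own long-running task:

$ date --iso-8601=seconds | tee /tmp/job-start
2025-02-07T15:50:25-05:00
$ pg_dump mydb > mydb.sql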

Saving a variable

This is optional, but for the sake of demonstration, let's save this as a variable:

> start="2025-02-07 15:50:25"

  save("2025-02-07T15:50:25"; start; Temporary; ; 1) =
  "2025-02-07T15:50:25"

Estimating data size

Next, estimate your data size. That will vary wildly with the job you're running: it can be a number of files, documents being processed, rows to be destroyed in a database, whatever. In this case, rsync tells me how many bytes it has transferred so far:

# rsync -ASHaXx --info=progress2 /srv/ /srv-zfs/
2.968.252.503.968  94%    7,63MB/s    6:04:58  xfr#464440, ir-chk=1000/982266) 

Strip off the weird dots in there, because they will confuse Qalculate, which would otherwise count this as:

  2.968252503968 bytes ≈ 2.968 B

Or, essentially, three bytes. We actually transferred almost 3TB here:

  2968252503968 bytes ≈ 2.968 TB

So let's use that. If you had the misfortune of making rsync silent, but were lucky enough to transfer entire partitions, you can use df (without -h! we want to be more precise here), in my case:

Filesystem              1K-blocks       Used  Available Use% Mounted on
/dev/mapper/vg_hdd-srv 7512681384 7258298036  179205040  98% /srv
tank/srv               7667173248 2870444032 4796729216  38% /srv-zfs

(Otherwise, of course, you use du -sh $DIRECTORY.)

Digression over bytes

Those are 1K-blocks, which are actually (and rather unfortunately) KiB, or "kibibytes" (1024 bytes), not "kilobytes" (1000 bytes). Ugh.

> 2870444032 KiB

  2870444032 kibibytes ≈ 2.939 TB
> 2870444032 kB

  2870444032 kilobytes ≈ 2.870 TB

At this scale, those details matter quite a bit; we're talking about a 69GB (64GiB) difference here:

> 2870444032 KiB - 2870444032 kB

  (2870444032 kibibytes) − (2870444032 kilobytes) ≈ 68.89 GB

Anyways. Let's take 2968252503968 bytes as our current progress.

Our entire dataset is 7258298064 KiB, as seen above.

Solving a cross-multiplication

We have three out of four variables for our equation here, so we can already solve it:

> (now-start)/x = (2996538438607 bytes)/(7258298064 KiB) to h

  ((actual − start) / x) = ((2996538438607 bytes) / (7258298064
  kibibytes))

  x ≈ 59.24 h

The entire transfer will take about 60 hours to complete! Note that's not the time left, that is the total time.

To break this down step by step, we could calculate how long it has taken so far:

> now-start

  now − start ≈ 23 h + 53 min + 6.762 s

> now-start to s

  now − start ≈ 85987 s

... and do the cross-multiplication manually; it's basically:

x/(now-start) = (total/current)

so:

x = (total/current) * (now-start)

or, in Qalc:

> ((7258298064  kibibytes) / ( 2996538438607 bytes) ) *  85987 s

  ((7258298064 kibibytes) / (2996538438607 bytes)) × (85987 seconds) ≈
  2 d + 11 h + 14 min + 38.81 s

It's interesting it gives us different units here! Not sure why.

Now and built-in variables

The now here is actually a built-in variable:

> now

  now ≈ "2025-02-08T22:25:25"

There is a bewildering list of such variables, for example:

> uptime

  uptime = 5 d + 6 h + 34 min + 12.11 s

> golden

  golden ≈ 1.618

> exact

  golden = (√(5) + 1) / 2

Computing dates

In any case, yay! We know the transfer is going to take roughly 60 hours total, and we've already spent around 24h of that, so we have 36h left.

But I did that all in my head; we can ask more of Qalc yet!

Let's make another variable, for that total estimated time:

> total=(now-start)/x = (2996538438607 bytes)/(7258298064 KiB)

  save(((now − start) / x) = ((2996538438607 bytes) / (7258298064
  kibibytes)); total; Temporary; ; 1) ≈
  2 d + 11 h + 14 min + 38.22 s

And we can plug that into another formula with our start time to figure out when we'll be done!

> start+total

  start + total ≈ "2025-02-10T03:28:52"

> start+total-now

  start + total − now ≈ 1 d + 11 h + 34 min + 48.52 s

> start+total-now to h

  start + total − now ≈ 35 h + 34 min + 32.01 s

That transfer has ~1d left, or 35h34m32s, and should complete around 3:30 in the morning on February 10th.

But that's icing on top. I typically only do the cross-multiplication and calculate the remaining time in my head.

I mostly did the last bit to show Qalculate could compute dates and time differences, as long as you use ISO timestamps. Although it can also convert to and from UNIX timestamps, it cannot parse arbitrary date strings (yet?).

Other functionality

Qalculate can:

  • Plot graphs;
  • Use RPN input;
  • Do all sorts of algebraic, calculus, matrix, statistics, trigonometry functions (and more!);
  • ... and so much more!

I have a hard time finding things it cannot do. When I do, I typically resort to writing Python code or using a spreadsheet; others will turn to more complete engines like Maple, Mathematica or R.

But for daily use, Qalculate is just fantastic.

And it's pink! Use it!

Further reading and installation

This is just scratching the surface: the fine manual has more information, including more examples. There is also, of course, a qalc(1) manual page, which ships an excellent EXAMPLES section.

Qalculate is packaged for over 30 Linux distributions, but also ships packages for Windows and macOS. There are third-party derivatives as well, including a web version and an Android app.

10 February, 2025 04:22PM

Philipp Kern

20 years

20 years ago, I got my Debian Developer account. I was 18 at the time; it was Shrove Tuesday and - as is customary - I was drunk when I got the email. There was so much that I did not know - which is also why the process took 1.5 years from the time I applied. I mostly only maintained a package or two. I'm still amazed that Christian Perrier and Joerg Jaspert put sufficient trust in me at that time. Nevertheless, now feels like a good time for a personal reflection on my involvement in Debian.

During my studies I took on more things. In January 2008 I joined the Release Team as an assistant, which taught me a lot about code review. I have been an Application Manager on the side.

Going to my first Debconf was really a turning point. My first one was Mar del Plata in Argentina in August 2008, when I was 21. That was quite exciting, traveling that far from Germany for the first time. The personal connections I made there made quite the difference. It was also a big boost for motivation. I attended 8 (Argentina), 9 (Spain), 10 (New York), 11 (Bosnia and Herzegovina), 12 (Nicaragua), 13 (Switzerland), 14 (Portland), 15 (Germany), 16 (South Africa), and hopefully I'll make it to this year's in Brest. At all of them I did not see much of the host countries, as I spent nearly all of my time focused on Debian, even skipping some of the day trips in favor of team meetings. Yet I am very grateful to the project (and to my employer) for shipping me there.

I ended up as Stable Release Manager for a while, from August 2008 - when Martin Zobel-Helas moved into DSA - until I got dropped in March 2020. I think my biggest achievements were pushing for the creation of -updates in place of a separate volatile archive, and a change of the update policy to allow for more common-sense updates in the main archive vs. the very strict "breakage or security" policy we had previously. I definitely need to call out Adam D. Barratt for being the partner in crime, holding up the fort for even longer.

In 2009 I got too annoyed at the existing wanna-build team not being responsive anymore and pushed for the system to be given to a new team. I did not build it and significant contributions were done by other people (like Andreas Barth and Joachim Breitner, and later Aurelien Jarno). I mostly reworked the way the system was triggered, investigated when it broke and was around when people wanted things merged.

In the meantime I worked sys/netadmin jobs while at university, both paid and as a volunteer with the students' council. For a year or two I was the administrator of a System z mainframe IBM donated to my university. We had a mainframe course and I attended two related conferences. That's where my s390(x) interest came from, although credit for the port needs to go to Aurelien Jarno.

Since completing university in 2013 I have been working for a company for almost 12 years. Debian experience was very relevant to the job and I went on maintaining a Linux distro or two at work - before venturing off into security hardening. People in megacorps - in my humble opinion - disappear from the volunteer projects because a) they might previously have been studying and thus had a lot more time on their hands and b) the job is too similar to the volunteer work and thus the same brain cells used for work are exhausted and can't be easily reused for volunteer work. I kept maintaining a couple of things (buildds, some packages) - mostly because of a sense of commitment and responsibility, but otherwise kind of scaled down my involvement. I also felt less connected as I dropped off IRC.

Last year I finally made it to Debian events again: MiniDebconf in Berlin, where we discussed the aftermath of the xz incident, and the Debian BSP in Salzburg. I rejoined IRC using the Matrix bridge. That also rekindled my involvement, with me guiding a new DD through NM and ending up in DSA. To be honest, only in the last two or three years have I felt like a (more) mature old-timer.

I have a new gig at work lined up to start soon, and next to that I have sysadmin work for Debian. It is pretty motivating to me that I can just get things done - something that is much harder to achieve at work due to organizational complexities. It balances out some frustration I'd otherwise have. The work is different enough to be enjoyable and the people I work with are great.

The future

I still think the work we do in Debian is important, as much as I see a lack of appreciation in a world full of containers. We are reaping most of the benefits of standing on the shoulders of giants and of great decisions made in the past (e.g. the excellent Debian policy, but also the organizational model) that made Debian what it is today.

Given the increase in size and complexity of what Debian ships - and the somewhat dwindling resource of developer time - it would benefit us to have better processes for large-scale changes across all packages. I greatly respect the horizontal efforts that are currently being driven and that suck up a lot of energy.

A lot of our infrastructure is also aging and not super well maintained. Many take it for granted that the services we have keep existing, but most are only maintained by a person or two, if even. Software stacks are aging and it is even a struggle to have all necessary packages in the next release.

Hopefully I can contribute a bit or two to these efforts in the future.

10 February, 2025 11:12AM by Philipp Kern (noreply@blogger.com)

Swiss JuristGate

Unredacted FINMA judgment into Parreaux, Thiébaud & Partners / Justicia SA [TOP SECRET]

In September 2023, Gaelle Jeanmonod at FINMA published a summary of the judgment against Parreaux, Thiébaud & Partners and their successor Justicia SA.

Madame Jeanmonod redacted the name of the company, the dates and other key details. We have recreated the unredacted judgment.

Many paragraphs are missing. The document released by Madame Jeanmonod only includes paragraphs 55 to 65 and paragraph 69.

Some entire sentences appear to be missing and replaced with the symbol (...).

Details about the original publication on the FINMA site.

FINMA Judgment, Parreaux Thiebaud & Partners, Justicia SA, Justiva SA, Mathieu Parreaux, Gaelle Jeanmonod

 

Key to symbols:

Symbol  Meaning
PTP     Parreaux, Thiébaud & Partners
A       Mathieu Parreaux
X       Parreaux, Thiébaud & Partners
Y       Justicia SA

Important: we recommend reading this together with the full chronological history published in the original blog post by Daniel Pocock.

Provision of insurance services without authorisation

Judgment of the financial markets regulator FINMA, 2023

Summary

Following numerous reports that Parreaux, Thiébaud & Partners was operating an insurance business without authorisation, FINMA conducted investigations that led to the opening of enforcement proceedings. In fact, Parreaux, Thiébaud & Partners offered legal subscriptions for companies and individuals, which provided unlimited access to various legal services for an annual fee. In addition, Parreaux, Thiébaud & Partners also financed, in certain situations, advances on costs to pay lawyers' and court fees in the form of a loan at a 0% interest rate. According to its general terms and conditions, Parreaux, Thiébaud & Partners then obtained reimbursement of this loan from the legal costs to be received at the end of the proceedings in the event of victory. In the event of loss, the balance constituted a non-repayable loan. With regard to the areas of law that were partially covered and to disputes prior to the signing of the contract, the claim was partially covered by 50%.

During the procedure, FINMA appointed an investigation officer within Parreaux, Thiébaud & Partners. While the investigation officer's work had already begun, the activities of Parreaux, Thiébaud & Partners were taken over by Justicia SA in [late 2021 or early 2022]. From that point on, Parreaux, Thiébaud & Partners ceased its activities for new clients. Clients who had taken out a subscription with Parreaux, Thiébaud & Partners prior to the month of (…) were informed when renewing their subscription that their subscription had been transferred to Justicia SA. FINMA then extended the procedure and the mandate of the investigation officer to the latter. The business model of Justicia SA is almost identical to that of Parreaux, Thiébaud & Partners. The main difference concerns the terms of repayment of the loan which, according to the general terms and conditions of Justicia SA, was also repayable in the event of defeat according to the "terms agreed between the parties".

The report of the investigating officer contains in particular a detailed analysis of the activity of the two companies as well as a sample examination of client files.

By decision of [April?] 2023, FINMA held that the conditions set by case law to qualify an insurance activity were met and therefore found that Parreaux, Thiébaud & Partners, Justicia SA as well as Mathieu Parreaux, managing partner of Parreaux, Thiébaud & Partners and director of Justicia SA, carried out an insurance activity without having the required authorisation.

FINMA then found that Parreaux, Thiébaud & Partners, Justicia SA and Mathieu Parreaux had carried out insurance activities without the necessary authorisation, appointed a liquidator and ordered the immediate liquidation of the two companies. FINMA also ordered the confiscation of the liquidation proceeds in favour of the Confederation, ordered Mathieu Parreaux to refrain from carrying out, without the necessary authorisation, any activity subject to authorisation under the financial market laws and published the order to refrain for a period of 2 years on its website.

Key points from the judgment

(…)

1. Engaging in insurance transactions without the right to do so

(55) The LSA is intended in particular to protect policyholders against the risks of insolvency of insurance companies and against abuse [2]. Insurance companies established in Switzerland that carry out direct insurance or reinsurance activities must first obtain authorisation from FINMA and are subject to its supervision [3]. Where special circumstances justify it, FINMA may release from supervision an insurance company for which the insurance activity is of little economic importance or only affects a limited circle of policyholders [4].

(56) In accordance with Art. 2 para. 4 LSA, it is up to the Federal Council to define the activity in Switzerland in the field of insurance. In an ordinance dated 9 November 2005, the Federal Council clarified that, regardless of the method and place of conclusion of the contract, there is an insurance activity in Switzerland when a natural or legal person domiciled in Switzerland is the policyholder or insured [5]. Furthermore, the LSA applies to all insurance activities of Swiss insurance companies, both for insurance activities in Switzerland and abroad. Thus, even insurance contracts concluded from Switzerland but which relate exclusively to risks located abroad with policyholders domiciled abroad are subject to the LSA. In such cases, there may also be concurrent foreign supervisory jurisdiction at the policyholder's domicile [6].

(57) Since the legislature did not define the concept of insurance, the Federal Court developed five cumulative criteria to define it [7]: the existence of a risk, the service provided by the policyholder consisting of the payment of a premium, the insurance service, the autonomous nature of the transaction and the compensation of risks on the basis of statistical data. It is appropriate to examine below whether the services provided by Parreaux, Thiébaud & Partners and Justicia SA respectively meet the criteria of the given definition of the insurance activity.

(58) The existence of a risk: this is the central element for the qualification of insurance. The object of an insurance is always a risk or a danger, i.e. an event whose occurrence is possible but uncertain. The risk or its financial consequences are transferred from the insured to the insurer [8]. The uncertainty assumed by the insurer typically consists of determining whether and when the event that triggers the obligation to pay benefits occurs. The uncertainty can also result from the consequences of an event (already certain) [9]. In a judgment of 21 January 2011, the Federal Court, for example, acknowledged that the rental guarantee insurer who undertakes to pay the lessor the amount of the rental guarantee in place of the tenant while reserving the right to take action against the latter to obtain reimbursement of the amount paid, bears the risk of the tenant's insolvency. Thus, the risk of non-payment by the tenant is sufficient in itself to qualify this risk as an insurance risk [10].

(59) In this case, the purpose of the legal subscriptions offered by Parreaux, Thiébaud & Partners / Justicia SA is the transfer of a risk from the clients to Parreaux, Thiébaud & Partners / Justicia SA. Indeed, when the client concludes a legal subscription, Parreaux, Thiébaud & Partners / Justicia SA assumes the risk of having to provide legal services and bear administrative costs, respectively lawyers' fees, court fees or expert fees incurred by legal matters. When a client reports a claim, Parreaux, Thiébaud & Partners / Justicia SA bears the risk and therefore the financial consequences arising from the need for legal assistance in question. In cases where there is a claim prior to the conclusion of the subscription, Parreaux, Thiébaud & Partners / Justicia SA will cover 50% of the costs for this claim, but will continue to bear the risk for any future disputes that may arise during the term of the subscription. In this sense, Parreaux, Thiébaud & Partners / Justicia SA provide services that go beyond those offered by traditional legal protection insurance, which, however, has no influence on the existence of an uncertain risk transferred to Parreaux, Thiébaud & Partners / Justicia SA upon conclusion of the subscription. Furthermore, it was found during the investigation that, in at least one case, Parreaux, Thiébaud & Partners covered the fees without entering into a loan agreement with the client; it was therefore not provided for these advances to be repaid, contrary to what was provided for in the general terms and conditions of Parreaux, Thiébaud & Partners. Furthermore, it could not be established that the new wording of the general terms and conditions of Justicia SA providing for the repayment of the loan regardless of the outcome of the proceedings had been implemented. To date, no loan has been repaid. These elements allow us to conclude that the risk of having to pay for legal services and advances on fees are borne by Parreaux, Thiébaud & Partners and Justicia SA in place of the clients. Finally, in accordance with the case law of the Federal Court, even if the loan granted by Justicia SA is accompanied by an obligation to repay, the simple fact of bearing the risk of insolvency of its clients is sufficient to justify the classification of insurance risk.

(60) The insured's benefit (the premium) and the insurance benefit: In order to qualify a contract as an insurance contract, it is essential that the policyholder's obligation to pay the premiums is offset by an obligation on the part of the insurer to provide benefits. The insured must therefore be entitled to the insurer's benefit at the time of the occurrence of the insured event [11]. To date, the Federal Court has not ruled on the question of whether the promise to provide a service (assistance, advice, etc.) constitutes an insurance benefit. However, recent doctrine shows that the provision of services can also be considered as insurance benefits. Furthermore, this position is confirmed and defended by the Federal Council with regard to legal protection insurance, which it defined in Art. 161 OS as follows: "By the legal protection insurance contract, the insurance company undertakes, against payment of a premium, to reimburse the costs incurred by legal matters or to provide services in such matters" [12].

(61) In this case, when a client enters into a legal subscription contract with Parreaux, Thiébaud & Partners/Justicia SA, he agrees to pay an annual premium which then allows him to have access to a catalogue of services depending on the subscription chosen. Parreaux, Thiébaud & Partners/Justicia SA undertakes for their part to provide legal assistance to the client if necessary, provided that the conditions for taking charge of the case are met. Parreaux, Thiébaud & Partners/Justicia SA leaves itself a wide margin of discretion in deciding whether it is a case of prior art or whether the case has little chance of success. In these cases, the services remain partially covered, up to 50%. This approach is more generous than the practice of legal insurance companies on the market. In fact, cases of prior art are not in principle covered by legal protection insurance and certain areas are also often excluded from the range of services included in the contract.

(62) The autonomous nature of the transaction: The autonomy of the transaction is essential to the insurance business, even though the nature of an insurance transaction does not disappear simply because it is linked in the same agreement to services of another type. In order to determine whether the insurance service is presented simply as an ancillary agreement or a modality of the entire transaction, the respective importance of the two elements of the contract in the specific case must be taken into account and this must be assessed in the light of the circumstances [13].

(63) In this case, the obligation for Parreaux, Thiébaud & Partners/Justicia SA to provide legal services to clients who have subscribed to the subscriptions and to bear administrative costs, respectively lawyers' fees, court fees or expert fees does not represent a commitment that would be incidental or complementary to another existing contract or to another predominant service between Parreaux, Thiébaud & Partners/Justicia SA and the clients. On the contrary, the investigation showed that the legal subscriptions offered are autonomous contracts.

(64) Risk compensation based on statistical data: Finally, the case law requires, as another characteristic of the insurance business, that the company compensates the risks assumed in accordance with the laws of statistics. The requirements set by the Federal Court for this criterion are not always formulated uniformly in judicial practice. The Federal Court does not require a correct actuarial calculation but rather risk compensation based on statistical data [14]. Furthermore, it has specified that it is sufficient for the risk compensation to be carried out according to the law of large numbers and according to planning based on the nature of the business [15]. In another judgment [16], the Federal Court adopted a different approach and considered that the criterion of risk compensation based on statistical data is met when the income from the insurance business allows expenses to be covered while leaving a safety margin. Finally, in another judgment [17], the High Court deduced from the fact that the products were offered to an indeterminate circle of people that the risks would be logically distributed among all customers according to the laws of statistics and large numbers [18].

(65) In this case, the risks assumed by Parreaux, Thiébaud & Partners/Justicia SA are offset by the laws of statistics, at the very least by the compensation of risks according to the law of large numbers. Knowing that only a very small part of their clientele will use the services provided by Parreaux, Thiébaud & Partners/Justicia SA, the latter are counting on the fact that the income from the contributions from legal subscriptions will be used to cover the expenses incurred for clients whose cases must be handled by Parreaux, Thiébaud & Partners/Justicia SA while leaving a safety margin. Indeed, the analysis of the files revealed that when a client reports a case to Parreaux, Thiébaud & Partners/Justicia SA, the costs incurred to handle the case are at least three times higher than the contribution paid. Support in this proportion is only possible by assuming that only a few clients will need legal assistance and by ensuring that all contributions are used to cover these costs. (…).

(66) (…) The investigation, however, revealed that there is indeed an economic adequacy between the services provided to clients by Parreaux, Thiébaud & Partners / Justicia SA and the subscription fees it collects. In this way, Parreaux, Thiébaud & Partners / Justicia SA offsets its own risks, namely the costs related to the legal services it provides as well as the risk of not obtaining repayment of the loan granted to the client, by the diversification of risks that occurs when a large number of corresponding transactions are concluded, i.e. according to the law of large numbers. In view of the above, there is no doubt that the risk compensation criterion is met within the framework of the business model of Parreaux, Thiébaud & Partners / Justicia SA.

(69) (…) In view of the above, it is established that Parreaux, Thiébaud & Partners and Justicia SA have exercised, respectively exercise, an insurance activity within the meaning of Art. 2 para. 1 let. a in relation to Art. 3 para. 1 LSA and Art. 161 OS without having the required authorisation from FINMA. Indeed, upon conclusion of a subscription, clients can request legal services from Parreaux, Thiébaud & Partners/Justicia SA against payment of an annual premium. In addition to these services, the latter grant a loan to clients to cover legal costs and lawyers' fees. Although these loans are repayable "according to the agreed terms", none of these terms appear to exist in practice and no loan repayments have been recorded. Finally, the mere fact of bearing the risk of insolvency of clients is sufficient for the insurance risk criterion to be met. Furthermore, in view of the current number of legal subscription contracts held by Justicia SA, the turnover generated by its legal subscriptions and the fact that Justicia SA, and before it Parreaux, Thiébaud & Partners, offers its services to an unlimited number of persons, there are no special circumstances within the meaning of Art. 2 para. 3 LSA allowing Parreaux, Thiébaud & Partners and Justicia SA to be released from supervision under Art. 2 para. 1 LSA.

(…)

Dispositif (operative part)


  1. Loi fédérale sur la surveillance des entreprises d'assurance (LSA; RS 961.01).
  2. Art. 1 al. 2 LSA.
  3. Art. 2 al. 1 let. a en relation avec l’art. 3 al. 1 LSA.
  4. Art. 2 al. 3 LSA.
  5. Art. 1 al. 1 let. a OS.
  6. HEISS/MÖNNICH, in: Hsu/Stupp (éd.), Basler Kommentar, Versicherungsaufsichtsgesetz, Bâle 2013, nos 5 s ad art. 2 LSA et les références citées.
  7. ATF 114 Ib 244 consid. 4.a et les références citées.
  8. HEISS/MÖNNICH, op. cit., nos 15 ss ad art. 2 LSA et les références citées.
  9. HEISS/MÖNNICH, op. cit., nos 5 s. ad art. 2 LSA et les références citées.
  10. TF 2C_410/2010 du 21 janvier 2011 consid. 3.2 et 4.2.
  11. HEISS/MÖNNICH, op. cit., nos 23 ss ad art. 2 LSA et les références citées.
  12. HEISS/MÖNNICH, op. cit., nos 26 ss ad art. 2 LSA et les références citées.
  13. HEISS/MÖNNICH, op. cit., nos 30ss ad art. 2 LSA et les références citées.
  14. ATF 107 Ib 54 consid. 5.
  15. Ibid.
  16. ATF 92 I 126, consid. 3.
  17. TF 2C_410/2010 du 21 janvier 2010 consid. 3.4.
  18. HEISS/MÖNNICH, op. cit., nos 34 ss ad art. 2 LSA et les références citées.

10 February, 2025 10:30AM

Russ Allbery

Review: The Scavenger Door

Review: The Scavenger Door, by Suzanne Palmer

Series: Finder Chronicles #3
Publisher: DAW
Copyright: 2021
ISBN: 0-7564-1516-0
Format: Kindle
Pages: 458

The Scavenger Door is a science fiction adventure and the third book of the Finder Chronicles. While each of the books of this series stands alone reasonably well, I would still read the series in order. Each book has some spoilers for the previous book.

Fergus is back on Earth following the events of Driving the Deep, at loose ends and annoying his relatives. To get him out of their hair, his cousin sends him into the Scottish hills to find a friend's missing flock of sheep. Fergus finds things professionally, but usually not livestock. It's an easy enough job, though; the lead sheep was wearing a tracker and he just has to get close enough to pick it up. The unexpected twist is also finding a metal fragment buried in a hillside that has some strange resonance with the unwanted gift that Fergus got in Finder.

Fergus's alien friend Ignatio is so alarmed by the metal fragment that he turns up in person in Fergus's cousin's bar in Scotland. Before he arrives, Fergus gets a mysteriously infuriating warning visit from alien acquaintances he does not consider friends. He has, as usual, stepped into something dangerous and complicated, and now somehow it's become his problem.

So, first, we get lots of Ignatio, who is an enthusiastic large ball of green fuzz with five limbs who mostly speaks English but does so from an odd angle. This makes me happy because I love Ignatio and his tendency to take things just a bit too literally.

SANTO'S, the sign read. Under it, in smaller letters, was CURIOSITIES AND INCONVENIENCES FOR COMMENDABLE SUMS.

"Inconveniences sound just like my thing," Fergus said. "You two want to wait in the car while I check it out?"

"Oh, no, I am not missing this," Isla said, and got out of the podcar.

"I am uncertain," Ignatio said. "I would like some curiouses, but not any inconveniences. Please proceed while I decide, and if there is also murdering or calamity or raisins, you will yell right away, yes?"

Also, if your story setup requires a partly-understood alien artifact that the protagonist can get some explanations for but not have the mystery neatly solved for them, Ignatio's explanations are perfect.

"It is a door. A doorbell. A... peephole? A key. A control light. A signal. A stop-and-go sign. A road. A bridge. A beacon. A call. A map. A channel. A way," Ignatio said. "It is a problem to explain. To say a doorkey is best, and also wrong. If put together, a path may be opened."

"And then?"

"And then the bad things on the other side, who we were trying to lock away, will be free to travel through."

Second, the thing about Palmer's writing that continues to impress me is her ability to take a standard science fiction plot, one whose variations I've read probably dozens of times before, and still make it utterly engrossing. This book is literally a fetch quest. There are a bunch of scattered fragments, Fergus has to find them and keep them from being assembled, various other people are after the same fragments, and Fergus either has to get there first or get the fragments back from them. If you haven't read this book before, you've played the video game or watched the movie. The threat is basically a Stargate SG-1 plot. And yet, this was so much fun.

The characters are great. This book leans less on found family than the last one and a bit more on actual family. When I started reading this series, Fergus felt a bit bland in the way that adventure protagonists sometimes can, but he's fleshed out nicely as the series goes along. He's not someone who tends to indulge in big emotions, but now the reader can tell that's because he's the kind of person who finds things to do in order to keep from dwelling on things he doesn't want to think about. He's unflappable in a quietly competent way while still having a backstory and emotional baggage and a rich inner life that the reader sees in glancing fragments.

We get more of Fergus's backstory, particularly around Mars, but I like that it's told in anecdotes and small pieces. The last thing Fergus wants to do is wallow in his past trauma, so he doesn't and finds something to do instead. There's just enough detail around the edges to deepen his character without turning the book into a story about Fergus's emotions and childhood. It's a tricky balancing act that Palmer handles well.

There are also more sentient ships, and I am so in favor of more sentient ships.

"When I am adding a new skill, I import diagnostic and environmental information specific to my platform and topology, segregate the skill subroutines to a dedicated, protected logical space, run incremental testing on integration under all projected scenarios and variables, and then when I am persuaded the code is benevolent, an asset, and provides the functionality I was seeking, I roll it into my primary processing units," Whiro said. "You cannot do any of that, because if I may speak in purely objective terms you may incorrectly interpret as personal, you are made of squishy, unreliable goo."

We get the normal pieces of a well-done fetch quest: wildly varying locations, some great local characters (the US-based trauma surgeons on vacation in Australia were my favorites), and believable antagonists. There are two other groups looking for the fragments, and while one of them is the standard villain in this sort of story, the other is an apocalyptic cult whose members Fergus mostly feels sorry for and who add just the right amount of surreality to the story. The more we find out about them, the more believable they are, and the more they make this world feel like realistic messy chaos instead of the obvious (and boring) good versus evil patterns that a lot of adventure plots collapse into.

There are things about this book that I feel like I should be criticizing, but I just can't. Fetch quests are usually synonymous with lazy plotting, and yet it worked for me. The way Fergus gets dumped into the middle of this problem starts out feeling as arbitrary and unmotivated as some video game fetch quest stories, but by the end of the book it starts to make sense. The story could arguably be described as episodic and cliched, and yet I was thoroughly invested. There are a few pacing problems at the very end, but I was too invested to care that much. This feels like a book that's better than the sum of its parts.

Most of the story is future-Earth adventure with some heist elements. The ending goes in a rather different direction but stays at the center of the classic science fiction genre. The Scavenger Door reaches a satisfying conclusion, but there are a ton of unanswered questions that will send me on to the fourth (and reportedly final) novel in the series shortly.

This is great stuff. It's not going to win literary awards, but if you're in the mood for some classic science fiction with fun aliens and neat ideas, but also benefiting from the massive improvements in characterization the genre has seen in the past forty years, this series is perfect. Highly recommended.

Followed by Ghostdrift.

Rating: 9 out of 10

10 February, 2025 04:03AM

February 09, 2025

Antoine Beaupré

A slow blogging year

Well, 2024 will be remembered, won't it? I guess 2025 already wants to make its mark too, but let's not worry about that right now, and instead let's talk about me.

A little over a year ago, I was gloating over how I had such a great blogging year in 2022, and was considering 2023 to be average, then went on to gather more stats and traffic analysis... Then I said, and I quote:

I hope to write more next year. I've been thinking about a few posts I could write for work, about how things work behind the scenes at Tor, that could be informative for many people. We run a rather old setup, but things hold up pretty well for what we throw at it, and it's worth sharing that with the world...

What a load of bollocks.

A bad year for this blog

2024 was the second worst year ever in my blogging history, tied with 2009 at a measly 6 posts for the year:

anarcat@angela:anarc.at$ curl -sSL https://anarc.at/blog/ | grep 'href="\./' | grep -o 20[0-9][0-9] | sort | uniq -c | sort -nr | grep -v 2025 | tail -3
      6 2024
      6 2009
      3 2014

I did write about my work though, detailing the migration from Gitolite to GitLab we completed that year. But after August, total radio silence until now.

Loads of drafts

It's not that I have nothing to say: I have no fewer than five drafts in my working tree, not counting three actual drafts recorded in the Git repository here:

anarcat@angela:anarc.at$ git s blog
## main...origin/main
?? blog/bell-bot.md
?? blog/fish.md
?? blog/kensington.md
?? blog/nixos.md
?? blog/tmux.md
anarcat@angela:anarc.at$ git grep -l '\!tag draft'
blog/mobile-massive-gallery.md
blog/on-dying.mdwn
blog/secrets-recovery.md

I just don't have time to wrap those things up. I think part of me is disgusted by seeing my work stolen by large corporations to build proprietary large language models while my idols have been pushed to suicide for trying to share science with the world.

Another part of me wants to make those things just right. The "tagged drafts" above are nothing more than a huge pile of chaotic links, far from being useful for anyone other than me, and even then.

The on-dying article, in particular, is becoming my nemesis. I've been wanting to write that article for over 6 years now, I think. It's just too hard.

Writing elsewhere

There's also the fact that I write for work already. A lot. Here are the top-10 contributors to our team's wiki:

anarcat@angela:help.torproject.org$ git shortlog --numbered --summary --group="format:%al" | head -10
  4272  anarcat
   423  jerome
   117  zen
   116  lelutin
   104  peter
    58  kez
    45  irl
    43  hiro
    18  gaba
    17  groente

... but that's a bit unfair, since I've been there half a decade. Here's the last year:

anarcat@angela:help.torproject.org$ git shortlog --since=2024-01-01 --numbered --summary --group="format:%al" | head -10
   827  anarcat
   117  zen
   116  lelutin
    91  jerome
    17  groente
    10  gaba
     8  micah
     7  kez
     5  jnewsome
     4  stephen.swift

So I still write the most commits! But to truly get a sense of the amount I wrote in there, we should count actual changes. Here it is by number of lines (from commandlinefu.com):

anarcat@angela:help.torproject.org$ git ls-files | xargs -n1 git blame --line-porcelain | sed -n 's/^author //p' | sort -f | uniq -ic | sort -nr | head -10
  99046 Antoine Beaupré
   6900 Zen Fu
   4784 Jérôme Charaoui
   1446 Gabriel Filion
   1146 Jerome Charaoui
    837 groente
    705 kez
    569 Gaba
    381 Matt Traudt
    237 Stephen Swift

That, of course, is the entire history of the git repo, again. We should take only the last year into account, and probably ignore the tails directory, as sneaky Zen Fu imported the entire docs from another wiki there...

anarcat@angela:help.torproject.org$ find [d-s]* -type f -mtime -365 | xargs -n1 git blame --line-porcelain 2>/dev/null | sed -n 's/^author //p' | sort -f | uniq -ic | sort -nr | head -10
  75037 Antoine Beaupré
   2932 Jérôme Charaoui
   1442 Gabriel Filion
   1400 Zen Fu
    929 Jerome Charaoui
    837 groente
    702 kez
    569 Gaba
    381 Matt Traudt
    237 Stephen Swift

Pretty good! 75k lines. But those are the files that were modified in the last year. If we go a little more nuts, we find that:

anarcat@angela:help.torproject.org$ git-count-words-range.py | sort -k6 -nr | head -10
parsing commits for words changes from command: git log '--since=1 year ago' '--format=%H %al'
anarcat 126116 - 36932 = 89184
zen 31774 - 5749 = 26025
groente 9732 - 607 = 9125
lelutin 10768 - 2578 = 8190
jerome 6236 - 2586 = 3650
gaba 3164 - 491 = 2673
stephen.swift 2443 - 673 = 1770
kez 1034 - 74 = 960
micah 772 - 250 = 522
weasel 410 - 0 = 410

I wrote 126,116 words in that wiki, only in the last year. I also deleted 37k words, so the final total is more like 89k words, but still: that's about forty (40!) articles of the average size (~2k) I wrote in 2022.

(And yes, I did go nuts and write a new log parser, essentially from scratch, to figure out those word diffs. I did get the courage only after asking GPT-4o for an example first, I must admit.)
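For the curious, the rough idea can be sketched as a pipeline over git's porcelain word-diff output. This is not the actual git-count-words-range.py script, just a hedged approximation that I would expect to give similar, though not identical, per-author numbers:

git log --since='1 year ago' -p --word-diff=porcelain --format='@@@ %al' |
  awk '/^@@@ /              { author = $2 }              # remember the author of the current commit
       /^\+/ && !/^\+\+\+ / { added[author] += NF }      # words on added runs
       /^-/  && !/^--- /    { removed[author] += NF }    # words on removed runs
       END { for (a in added)
               printf "%s %d - %d = %d\n", a, added[a], removed[a], added[a] - removed[a] }' |
  sort -k6 -nr | head -10

The %al placeholder and --word-diff=porcelain are standard git features; the crude counting (one whitespace-separated token per word on added or removed lines) is where the approximation lies.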

Let's celebrate that again: I wrote 90 thousand words in that wiki in 2024. According to Wikipedia, a "novella" is 17,500 to 40,000 words, which would mean I wrote about a novella and a novel, in the past year.

But interestingly, if I look at the repository analytics, I certainly didn't write that much more in the past year. So that alone cannot explain the lull in my production here.

Arguments

Another part of me is just tired of the bickering and arguing on the internet. I have at least two articles in there that I suspect are going to get me a lot of push-back (NixOS and Fish). I know how to deal with this: you need to write well, consider the controversy, spell it out, and defuse things before they happen. But that's hard work and, frankly, I don't really care that much about what people think anymore.

I'm not writing here to convince people. I stopped evangelizing a long time ago. Now, I'm more into documenting, and teaching. And, while teaching, there's a two-way interaction: when you give a speech or workshop, people can ask questions, or respond, and you all learn something. When you document, you quickly get told "where is this? I couldn't find it" or "I don't understand this" or "I tried that and it didn't work" or "wait, really? shouldn't we do X instead", and you learn.

Here, it's static. It's my little soapbox where I scream in the void. The only thing people can do is scream back.

Collaboration

So.

Let's see if we can work together here.

If you don't like something I say, disagree, or find something wrong or to be improved, instead of screaming on social media or ignoring me, try contributing back. This site here is backed by a git repository and I promise to read everything you send there, whether it is an issue or a merge request.

I will, of course, still read comments sent by email or IRC or social media, but please, be kind.

You can also, of course, follow the latest changes on the TPA wiki. If you want to catch up with the last year, some of the "novellas" I wrote include:

(Well, no, you can't actually follow changes on a GitLab wiki. But we have a wiki-replica git repository where you can see the latest commits, and subscribe to the RSS feed.)

See you there!

09 February, 2025 06:47PM

Daniel Pocock

Debian’s Human Rights violations & Swiss women Nazi symbolism

The Universal Declaration of Human Rights was created as a response to the evils of fascism in Europe.

In recent blogs, we've looked at the way the Nazis plagiarised the work of Jewish authors. There are similar patterns today in Debian and other projects that obfuscate the names of developers.

The Universal Declaration of Human Rights sought to prevent this type of dishonest behavior with Article 27. Article 27 concerns intellectual property and it is interrelated with many national laws about copyright and authorship. It contains two paragraphs.

The Swiss Intellectual Property Office has already told us that the forged Debian legal documents are invalid. Even more disturbing, we can show that these documents directly contradict Article 27 UDHR:

UDHR Article 27(1): Everyone has the right freely to participate in the cultural life of the community, to enjoy the arts and to share in scientific advancement and its benefits.

Debianists and racist Swiss women: En 2018, il a été exclu ... (In 2018, he was excluded ...)

UDHR Article 27(2): Everyone has the right to the protection of the moral and material interests resulting from any scientific, literary or artistic production of which he is the author.

Debianists and racist Swiss women: Il est un ancien développeur (he is a "former" developer, which is nonsense, as copyright endures even after death)

Suggesting that developers and copyright can be extinguished by force is a direct violation of Article 27. A developer can sell their copyright but it can't be taken from us by a racist Swiss woman.

Let's not forget, one of the dates on that invalid document was 10 November, the day of the Kristallnacht, the giant pogrom that started the Holocaust.

Debian, Kristallnacht

 

Swiss law on Nazi symbolism

Switzerland is currently contemplating new laws to restrict Nazi salutes, Swastikas and other symbols such as Debian's Kristallnacht celebration.

In 2023, I briefly visited the Polymanga festival at Montreux. People make a tremendous effort to dress up for the festival.

Polymanga, Montreux, 2023

 

Some of the participants are photographed with the panorama of lake Geneva and the French mountains in the background:

Polymanga, Montreux, 2023

 

A young Swiss woman in costume went down to the lake and lifted her arm. Is it a Nazi salute? There is a clue on her arm: the Swastika. This is the full-size photo, so we can zoom in to check.

The north side of the lake is Swiss territory and the south side of the lake is French territory. Standing at the edge of the lake, the woman is projecting fascist symbolism across the border.

Polymanga, Montreux, 2023

 

Financial audits showed that over $120,000 from Debian bank accounts went to lawyerists. We can see that several women were involved in creating the racist attack, including Caroline Kuhnlein-Hofmann and Melanie Bron in Vaud and Pascale Köster and Albane die Ziegler at Walder Wyss in Zurich. In return, they are trashing the Universal Declaration of Human Rights (Article 27) and they are conspiring on the anniversary of the Kristallnacht. It makes the Swastika tattoo look tame in comparison.

For more about Nazi phenomena in the world around us, please see the Nazi.Compare web site.

Please see the chronological history of how the Debian harassment and abuse culture evolved.

09 February, 2025 01:30PM

Antoine Beaupré

Last year on this blog

So this blog is now celebrating its 21st birthday (or 20 if you count from zero, or 18 if you want to be pedantic), and I figured I would do this yearly thing of reviewing how that went.

Number of posts

2022 was the official 20th anniversary in any case, and that was one of my best years on record, with 46 posts, surpassed only by the noisy 2005 (62) and matching 2006 (46). 2023, in comparison, was underwhelming: a feeble 11 posts! What happened!

Well, I was busy with other things, mostly away from keyboard, that I will not bore you with here...

The other thing that happened is that the one-liner I used to collect stats was broken (it counted folders and other unrelated files) and wildly overestimated 2022! Turns out I didn't write that much then:

anarc.at$ ls blog | grep '^[0-9][0-9][0-9][0-9].*.md' | sed s/-.*// | sort | uniq -c  | sort -n -k2
     57 2005
     43 2006
     20 2007
     20 2008
      7 2009
     13 2010
     16 2011
     11 2012
     13 2013
      5 2014
     13 2015
     18 2016
     29 2017
     27 2018
     17 2019
     18 2020
     14 2021
     28 2022
     10 2023
      1 2024

But even that is inaccurate because, in ikiwiki, I can tag any page as being featured on the blog. So we actually need to process the HTML itself because we don't have much better on hand without going through ikiwiki's internals:

anarcat@angela:anarc.at$ curl -sSL https://anarc.at/blog/ | grep 'href="\./' | grep -o 20[0-9][0-9] | sort | uniq -c 
     56 2005
     42 2006
     19 2007
     18 2008
      6 2009
     12 2010
     15 2011
     10 2012
     11 2013
      3 2014
     15 2015
     32 2016
     50 2017
     37 2018
     19 2019
     19 2020
     15 2021
     28 2022
     13 2023

Which puts the top 10 years at:

$ curl -sSL https://anarc.at/blog/ | grep 'href="\./' | grep -o 20[0-9][0-9] | sort | uniq -c  | sort -nr | head -10
     56 2005
     50 2017
     42 2006
     37 2018
     32 2016
     28 2022
     19 2020
     19 2019
     19 2007
     18 2008

Anyway, 2023 is certainly not a glorious year in that regard.

Visitors

In terms of visits, however, we had quite a few hits. According to Goatcounter, I had 122 300 visits in 2023! 2022, in comparison, had 89 363, so that's quite a rise.

What you read

I seem to have hit the Hacker News front page at least twice. I say "seem" because it's actually pretty hard to tell what the HN front page was on any given day. I had 22k visits on 2023-03-13, in any case, and you can't see me on the front page that day. We do see a post of mine on 2023-09-02, all the way down there, which seems to have generated another 10k visits.

In any case, here were the most popular stories for you fine visitors:

  • Framework 12th gen laptop review: 24k visits, which is surprising for a 13k-word article "without images", as some critics have complained. 15k referred by Hacker News. Good reference and time-consuming benchmarks, slowly bit-rotting.

    That is, by far, my most popular article ever. A popular article in 2021 or 2022 was around 6k to 9k, so that's a big one. I suspect it will keep getting traffic for a long while.

  • Calibre replacement considerations: 15k visits, most of which without a referrer. Was actually an old article, but I suspect HN brought it back to light. I keep updating that wiki page regularly when I find new things, but I'm still using Calibre to import ebooks.

  • Hacking my Kobo Clara HD: not new, but always gathering more and more hits; it had 1800 hits in its first year, 4600 hits last year, and has now brought 6400 visitors to the blog! Not directly related, but this iFixit battery replacement guide I wrote also seems to be quite popular.

Everything else was published before 2023. Replacing Smokeping with Prometheus is still around and Looking at Wayland terminal emulators makes an entry in the top five.

Where you've been

People send less and less private information when they browse the web. The number of visitors without referrers was 41% in 2021 and rose to 44% in 2023. Most of the remaining traffic comes from Google, but Hacker News is now a significant chunk, almost as big as Google.

In 2021, Google represented 23% of my traffic; in 2022, it was down to 15%, so 18% is actually a rise from last year, even if it seems much smaller than what I usually think of.

Ratio Referrer Visits
18% Google 22 098
13% Hacker News 16 003
2% duckduckgo.com 2 640
1% community.frame.work 1 090
1% missing.csail.mit.edu 918

Note that Facebook and Twitter do not appear at all in my referrers.

Where you are

Unsurprisingly, most visits still come from the US:

Ratio Country Visits
26% United States 32 010
14% France 17 046
10% Germany 11 650
6% Canada 7 425
5% United Kingdom 6 473
3% Netherlands 3 436

Those ratios are nearly identical to last year, but quite different from 2021, where Germany and France were more or less reversed.

Back in 2021, I mentioned there was a long tail of countries with at least one visit, with 160 countries listed. I expanded that and there are now 182 countries in that list, almost all of the 193 member states in the UN.

What you were

Chrome's dominance continues to expand, even on readers of this blog, gaining two percentage points from Firefox compared to 2021.

Ratio Browser Visits
49% Firefox 60 126
36% Chrome 44 052
14% Safari 17 463
1% Others N/A

It seems, unfortunately, that my Lynx and Haiku users have not visited in the past year. Trying to read those metrics is like reading tea leaves...

In terms of operating systems:

Ratio OS Visits
28% Linux 34 010
23% macOS 28 728
21% Windows 26 303
17% Android 20 614
10% iOS 11 741

Again, Linux and Mac are over-represented, and Android and iOS are under-represented.

What is next

I hope to write more next year. I've been thinking about a few posts I could write for work, about how things work behind the scenes at Tor, that could be informative for many people. We run a rather old setup, but things hold up pretty well for what we throw at it, and it's worth sharing that with the world...

So anyway, thanks for coming, faithful reader, and see you in the coming 2024 year...

09 February, 2025 05:49AM

February 08, 2025

Thorsten Alteholz

My Debian Activities in January 2025

Debian LTS

This was my hundred-twenty-seventh month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:

  • [DLA 4014-1] gnuchess security update to fix one CVE related to arbitrary code execution via crafted PGN (Portable Game Notation) data.
  • [DLA 4015-1] rsync update to fix five CVEs related to leaking information from the server or writing files outside of the client’s intended destination.
  • [DLA 4015-2] rsync update to fix an upstream regression.
  • [DLA 4039-1] ffmpeg update to fix three CVEs related to possible integer overflows, double-free on errors and out-of-bounds access.

As new CVEs for ffmpeg appeared, I started to work again on an update of this package.

Last but not least I did a week of FD this month and attended the monthly LTS/ELTS meeting.

Debian ELTS

This month was the seventy-eighth ELTS month. During my allocated time I uploaded or worked on:

  • [ELA-1290-1] rsync update to fix five CVEs in Buster, Stretch and Jessie related to leaking information from the server or writing files outside of the client’s intended destination.
  • [ELA-1290-2] rsync update to fix an upstream regression.
  • [ELA-1313-1] ffmpeg update to fix six CVEs in Buster related to possible integer overflows, double-free on errors and out-of-bounds access.
  • [ELA-1314-1] ffmpeg update to fix six CVEs in Stretch related to possible integer overflows, double-free on errors and out-of-bounds access.

As new CVEs for ffmpeg appeared, I started to work again on an update of this package.

Last but not least I did a week of FD this month and attended the monthly LTS/ELTS meeting.

Debian Printing

This month I uploaded new packages or new upstream or bugfix versions of:

  • brlaser new upstream release (in new upstream repository)

This work is generously funded by Freexian!

Debian Matomo

This month I uploaded new packages or new upstream or bugfix versions of:

This work is generously funded by Freexian!

Debian Astro

This month I uploaded new packages or new upstream or bugfix versions of:

  • calceph sponsored upload of new upstream version
  • libxisf sponsored upload of new upstream version

Patrick, our Outreachy intern for the Debian Astro project, is doing very well and deals with task after task. He is working on automatic updates of the indi 3rd-party drivers and maybe the results of his work will already be part of Trixie.

Debian IoT

Unfortunately I didn’t find any time to work on this topic.

Debian Mobcom

This month I uploaded new packages or new upstream or bugfix versions of:

misc

This month I uploaded new upstream or bugfix versions of:

FTP master

This month I accepted 385 and rejected 37 packages. The overall number of packages that got accepted was 402.

08 February, 2025 06:41PM by alteholz

February 07, 2025

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

zigg 0.0.2 on CRAN: Micromaintenance

benchmark chart

The still very new package zigg which arrived on CRAN a week ago just received a micro-update at CRAN. zigg provides the Ziggurat pseudo-random number generator (PRNG) for Normal, Exponential and Uniform draws proposed by Marsaglia and Tsang (JSS, 2000), and extended by Leong et al. (JSS, 2005). This PRNG is lightweight and very fast: on my machine speedups for the Normal, Exponential, and Uniform are on the order of 7.4, 5.2 and 4.7 times faster than the default generators in R as illustrated in the benchmark chart borrowed from the git repo.

As I wrote last week in the initial announcement, I had picked up their work in package RcppZiggurat and updated its code for the 64-bit world we now live in. That package already provided the Normal generator along with several competing implementations which it compared rigorously and timed. As one of the generators was based on the GNU GSL via the implementation of Voss, we always ended up with a run-time dependency on the GSL too. No more: this new package is zero-dependency, zero-suggests and hence very easy to deploy. Moreover, we also include a demonstration of four distinct ways of accessing the compiled code from another R package: pure and straight-up C, similarly pure C++, inclusion of the header in C++ as well as via Rcpp. The other advance is the resurrection of the second generator for the Exponential distribution. And following Burkardt we expose the Uniform too. The main upside of these generators is their excellent speed, as can be seen in the comparison against the default R generators generated by the example script timings.R:

Needless to say, speed is not everything. This PRNG comes from the time of 32-bit computing, so the generator period is likely to be shorter than that of newer high-quality generators. If in doubt, forgo speed and stick with the high-quality default generators.

This release essentially just completes the DESCRIPTION file and README.md now that this is a CRAN package. The short NEWS entry follows.

Changes in version 0.0.2 (2025-02-07)

  • Complete DESCRIPTION and README.md following initial CRAN upload

Courtesy of my CRANberries, there is a diffstat report relative to previous release. For more information, see the package page or the git repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

07 February, 2025 02:29PM

February 06, 2025

RcppArmadillo 14.2.3-1 on CRAN: Small Upstream Fix

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1215 other packages on CRAN, downloaded 38.2 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 612 times according to Google Scholar.

Conrad released a minor version 14.2.3 yesterday. As it has been two months since the last minor release, we prepared a new version for CRAN too which arrived there early this morning.

The changes since the last CRAN release are summarised below.

Changes in RcppArmadillo version 14.2.3-1 (2025-02-05)

  • Upgraded to Armadillo release 14.2.3 (Smooth Caffeine)

    • Minor fix for declaration of xSYCON and xHECON functions in LAPACK

    • Fix for rare corner-case in reshape()

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

06 February, 2025 02:35PM

Dominique Dumont

Drawbacks of using Cookiecutter with Cruft

Hi

Cookiecutter is a tool for building coding project templates. It’s often used to provide a scaffolding to build lots of similar projects. I’ve seen it used to create Symfony projects and several cloud infrastructures deployed with Terraform. This tool was useful to accelerate the creation of new projects. 🏃

Since these templates were bound to evolve, the teams providing these templates relied on cruft to update the code provided by the template in their users’ code. In other words, they wanted their users to apply a diff of the template modifications to their code.
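
For readers unfamiliar with the tooling, here is a minimal sketch of that intended workflow; the template URL and project directory are made up for illustration:

# create a project from the template; cruft records which template commit was used
cruft create https://github.com/example-org/project-template

# later, from inside the generated project, check whether the template moved on...
cruft check

# ...and try to apply the template changes to the generated code
cruft update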

At the beginning, all was fine. But problems began to appear during the lifetime of these projects.

What went wrong ?

In both cases, we had the following scenario:

  • user team:
    • 🙂 creates a new project with the cookiecutter template
    • makes modifications to their code, including to code provided by the template
  • meanwhile, the provider team:
    • makes modifications to the cookiecutter template
    • 🙂 releases a new template version
    • 🙂 asks its users to update the code brought by the template using cruft
  • user team then:
    • 🤨 runs cruft to update the template code
    • 😵‍💫 discovers a lot of code conflicts (similar to git merge conflicts)
    • 🤮 often rolls back the cruft update and gives up on template updates

The user team giving up on updates is a major problem because these updates may bring security or compliance fixes. 🚨

Note that code conflicts seen with cruft are similar to git merge conflicts, but harder to resolve because, unlike with a git merge, there’s no common ancestor, so 3-way merges are not possible.

From an organisational point of view, the main problem is the ambiguous ownership of the functionality brought by template code: who owns this code? The provider team who writes the template, or the user team who owns the repository of the code generated from the template? Conflicts are bound to happen.

Possible solutions to get out of this tar pit:

  • Assume that templates are one-shot. Template updates are not practical in the long run.
  • Make sure that templates are as thin as possible. They should contain minimal logic.
  • Move most if not all logic into separate libraries or scripts that are owned by the provider team. This way, updates coming from the provider team can be managed like external dependencies, by upgrading the version of a dependency.

Of course your users won’t be happy to be faced with a manual migration from the old big template to the new one with external dependencies. On the other hand, this may be easier to sell than updates based on cruft since the painful work will happen once. Further updates will be done by incrementing dependency versions (which can be automated with renovate).

If many projects are to be created with this template, it may be more practical to provide a CLI that will create a skeleton project. See for instance the terragrunt scaffold command.

My name is Dominique Dumont, I’m a devops freelance. You can find the devops and audit services I propose on my website or reach out to me on LinkedIn.

All the best

06 February, 2025 01:49PM by dod

hackergotchi for Bits from Debian

Bits from Debian

Proxmox Platinum Sponsor of DebConf25

proxmox-logo

We are pleased to announce that Proxmox has committed to sponsor DebConf25 as a Platinum Sponsor.

Proxmox develops powerful, yet easy-to-use Open Source server software. The product portfolio from Proxmox, including server virtualization, backup, and email security, helps companies of any size, sector, or industry to simplify their IT infrastructures. The Proxmox solutions are based on the great Debian platform, and we are happy that we can give back to the community by sponsoring DebConf25.

With this commitment as Platinum Sponsor, Proxmox is contributing to the Debian annual Developers' conference, directly supporting the progress of Debian and Free Software. Proxmox contributes to strengthening the community that collaborates on Debian projects from all around the world throughout the year.

Thank you very much, Proxmox, for your support of DebConf25!

Become a sponsor too!

DebConf25 will take place from 14 to 20 July 2025 in Brest, France, and will be preceded by DebCamp, from 7 to 13 July 2025.

DebConf25 is accepting sponsors! Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, and visit the DebConf25 website at https://debconf25.debconf.org/sponsors/become-a-sponsor/.

06 February, 2025 10:50AM by Sahil Dhiman

Sven Hoexter

GKE version 1.31.1-gke.1678000+ is a baddy

Just a "warn your brothers" for people foolish enough to use GKE and run on the Rapid release channel.

The update from version 1.31.1-gke.1146000 to 1.31.1-gke.1678000 is causing trouble whenever NetworkPolicy resources and a readinessProbe (or health check) are configured. As a workaround we started to remove the NetworkPolicy resources, e.g. when kustomize is involved, with a patch like this:

- patch: |-
    $patch: delete
    apiVersion: "networking.k8s.io/v1"
    kind: NetworkPolicy
    metadata:
        name: dummy
  target:
    kind: NetworkPolicy
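
After applying the rendered manifests, a quick sanity check (plain kubectl, nothing GKE-specific) that no NetworkPolicy objects remain is:

# list any remaining NetworkPolicy objects across all namespaces
kubectl get networkpolicy --all-namespaces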

We tried to update to the latest version - right now 1.31.1-gke.2008000 - which did not change anything. Behaviour is pretty much erratic, sometimes it still works and sometimes the traffic is denied. It also seems that there is some relevant fix in 1.31.1-gke.1678000 because that is now the oldest release of 1.31.1 which I can find in the regular and rapid release channels. The last known good version 1.31.1-gke.1146000 is not available to try a downgrade.

Update: 1.31.4-gke.1372000 in late January 2025 seems to finally fix it.

06 February, 2025 10:39AM

February 05, 2025

hackergotchi for Alberto García

Alberto García

Keeping your system-wide configuration files intact after updating SteamOS

Introduction

If you use SteamOS and you like to install third-party tools or modify the system-wide configuration some of your changes might be lost after an OS update. Read on for details on why this happens and what to do about it.


As you all know SteamOS uses an immutable root filesystem and users are not expected to modify it because all changes are lost after an OS update.

However this does not include configuration files: the /etc directory is not part of the root filesystem itself. Instead, it’s a writable overlay and all modifications are actually stored under /var (together with all the usual contents that go in that filesystem such as logs, cached data, etc).

/etc contains important data that is specific to that particular machine like the configuration of known network connections, the password of the main user and the SSH keys. This configuration needs to be kept after an OS update so the system can keep working as expected. However the update process also needs to make sure that other changes to /etc don’t conflict with whatever is available in the new version of the OS, and there have been issues due to some modifications unexpectedly persisting after a system update.

SteamOS 3.6 introduced a new mechanism to decide what to keep after an OS update, and the system now keeps a list of configuration files that are allowed to be kept in the new version. The idea is that only the modifications that are known to be important for the correct operation of the system are applied, and everything else is discarded1.

However, many users want to be able to keep additional configuration files after an OS update, either because the changes are important for them or because those files are needed for some third-party tool that they have installed. Fortunately the system provides a way to do that, and users (or developers of third-party tools) can add a configuration file to /etc/atomic-update.conf.d, listing the additional files that need to be kept.

There is an example in /etc/atomic-update.conf.d/example-additional-keep-list.conf that shows what this configuration looks like.

Sample configuration file for the SteamOS updater
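
As a rough sketch of how a third-party tool (or a curious user) could hook into this, one can copy the shipped example and edit it to list the extra paths to preserve; the my-tool.conf file name below is purely illustrative:

# inspect the shipped example first
cat /etc/atomic-update.conf.d/example-additional-keep-list.conf

# use it as a starting point for your own keep list (file name is made up)
sudo cp /etc/atomic-update.conf.d/example-additional-keep-list.conf \
        /etc/atomic-update.conf.d/my-tool.conf
sudo nano /etc/atomic-update.conf.d/my-tool.conf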

Developers who are targeting SteamOS can also use this same method to make sure that their configuration files survive OS updates. As an example of an actual third-party project that makes use of this mechanism you can have a look at the DeterminateSystems Nix installer:

https://github.com/DeterminateSystems/nix-installer/blob/v0.34.0/src/planner/steam_deck.rs#L273

As usual, if you encounter issues with this or any other part of the system you can check the SteamOS issue tracker. Enjoy!


  1. A copy is actually kept under /etc/previous to give the user the chance to recover files if necessary, and up to five previous snapshots are kept under /var/lib/steamos-atomupd/etc_backup ↩

05 February, 2025 04:13PM by berto

Reproducible Builds

Reproducible Builds in January 2025

Welcome to the first report in 2025 from the Reproducible Builds project!

Our monthly reports outline what we’ve been up to over the past month and highlight items of news from elsewhere in the world of software supply-chain security when relevant. As usual, though, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website.

Table of contents:

  1. reproduce.debian.net
  2. Two new academic papers
  3. Distribution work
  4. On our mailing list…
  5. Upstream patches
  6. diffoscope
  7. Website updates
  8. Reproducibility testing framework

reproduce.debian.net

The last few months saw the introduction of reproduce.debian.net. Announced at the recent Debian MiniDebConf in Toulouse, reproduce.debian.net is an instance of rebuilderd operated by the Reproducible Builds project. Powering that is rebuilderd, our server designed to monitor the official package repositories of Linux distributions and attempt to reproduce the observed results there.

This month, however, we are pleased to announce that in addition to the existing amd64.reproduce.debian.net and i386.reproduce.debian.net architecture-specific pages, we now build for three more architectures (for a total of five) — arm64, armhf and riscv64.


Two new academic papers

Giacomo Benedetti, Oreofe Solarin, Courtney Miller, Greg Tystahl, William Enck, Christian Kästner, Alexandros Kapravelos, Alessio Merlo and Luca Verderame published an interesting article recently. Titled An Empirical Study on Reproducible Packaging in Open-Source Ecosystem, the abstract outlines its optimistic findings:

[We] identified that with relatively straightforward infrastructure configuration and patching of build tools, we can achieve very high rates of reproducible builds in all studied ecosystems. We conclude that if the ecosystems adopt our suggestions, the build process of published packages can be independently confirmed for nearly all packages without individual developer actions, and doing so will prevent significant future software supply chain attacks.

The entire PDF is available online to view.


In addition, Julien Malka, Stefano Zacchiroli and Théo Zimmermann of Télécom Paris’ in-house research laboratory, the Information Processing and Communications Laboratory (LTCI) published an article asking the question: Does Functional Package Management Enable Reproducible Builds at Scale?.

Answering strongly in the affirmative, the article’s abstract reads as follows:

In this work, we perform the first large-scale study of bitwise reproducibility, in the context of the Nix functional package manager, rebuilding 709,816 packages from historical snapshots of the nixpkgs repository[. We] obtain very high bitwise reproducibility rates, between 69 and 91% with an upward trend, and even higher rebuildability rates, over 99%. We investigate unreproducibility causes, showing that about 15% of failures are due to embedded build dates. We release a novel dataset with all build statuses, logs, as well as full diffoscopes: recursive diffs of where unreproducible build artifacts differ.

As above, the entire PDF of the article is available to view online.


Distribution work

There has been the usual work in various distributions this month, such as:

  • 10+ reviews of Debian packages were added, 11 were updated and 10 were removed this month adding to our knowledge about identified issues. A number of issue types were updated also.

  • The FreeBSD Foundation announced that “a planned project to deliver zero-trust builds has begun in January 2025”. Supported by the Sovereign Tech Agency, this project is centered on the various build processes, and that the “primary goal of this work is to enable the entire release process to run without requiring root access, and that build artifacts build reproducibly – that is, that a third party can build bit-for-bit identical artifacts.” The full announcement can be found online, which includes an estimated schedule and other details.


On our mailing list…

On our mailing list this month:

  • Following up on a substantial amount of previous work pertaining to the Sphinx documentation generator, James Addison asked a question about the relationship between the SOURCE_DATE_EPOCH environment variable and testing, which generated a number of replies.

  • Adithya Balakumar of Toshiba asked a question about whether it is possible to make ext4 filesystem images reproducible. Adithya’s issue is that even the smallest amount of post-processing of the filesystem results in the modification of the “Last mount” and “Last write” timestamps.

  • James Addison also investigated an interesting issue surrounding our disorderfs filesystem. In particular:

    FUSE (Filesystem in USErspace) filesystems such as disorderfs do not delete files from the underlying filesystem when they are deleted from the overlay. This can cause seemingly straightforward tests — for example, cases that expect directory contents to be empty after deletion is requested for all files listed within them — to fail.


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 285, 286 and 287 to Debian:

  • Security fixes:

    • Validate the --css command-line argument to prevent a potential Cross-site scripting (XSS) attack. Thanks to Daniel Schmidt from SRLabs for the report. []
    • Prevent XML entity expansion attacks. Thanks to Florian Wilkens from SRLabs for the report. [][]
    • Print a warning if we have disabled XML comparisons due to a potentially vulnerable version of pyexpat. []
  • Bug fixes:

    • Correctly identify changes to only the line-endings of files; don’t mark them as Ordering differences only. []
    • When passing files on the command line, don’t call specialize(…) before we’ve checked that the files are identical or not. []
    • Do not exit with a traceback if paths are inaccessible, either directly, via symbolic links or within a directory. []
    • Don’t cause a traceback if cbfstool extraction failed. []
    • Use the surrogateescape mechanism to avoid a UnicodeDecodeError and crash when decoding any zipinfo output that is not UTF-8 compliant. []
  • Testsuite improvements:

    • Don’t mangle newlines when opening test fixtures; we want them untouched. []
    • Move to assert_diff in test_text.py. []
  • Misc improvements:

    • Drop unused subprocess imports. [][]
    • Drop an unused function in iso9600.py. []
    • Inline a call and check of Config().force_details; no need for an additional variable in this particular method. []
    • Remove an unnecessary return value from the Difference.check_for_ordering_differences method. []
    • Remove unused logging facility from a few comparators. []
    • Update copyright years. [][]

In addition, fridtjof added support for the ASAR .tar-like archive format. [][][][] Lastly, Vagrant Cascadian updated diffoscope in GNU Guix to version 285 [][] and 286 [][].


strip-nondeterminism is our sister tool to remove specific non-deterministic results from a completed build. This month version 1.14.1-1 was uploaded to Debian unstable by Chris Lamb, making the following changes:

  • Clarify the --verbose and non --verbose output of bin/strip-nondeterminism so we don’t imply we are normalizing files that we are not. []
  • Bump Standards-Version to 4.7.0. []


Website updates

There were a large number of improvements made to our website this month, including:


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In January, a number of changes were made by Holger Levsen, including:

  • reproduce.debian.net-related:

    • Add support for rebuilding the armhf architecture. [][]
    • Add support for rebuilding the arm64 architecture. [][][][]
    • Add support for rebuilding the riscv64 architecture. [][]
    • Move the i386 builder to the osuosl5 node. [][][][]
    • Don’t run our rebuilders on a public port. [][]
    • Add database backups on all builders and add links. [][]
    • Rework and dramatically improve the statistics collection and generation. [][][][][][]
    • Add contact info to the main page [], thumbnails [] as well as the new, missing architectures. []
    • Move the amd64 worker to the osuosl4 node. []
    • Run the underlying debrebuild script under nice. []
    • Try to use TMPDIR when calling debrebuild. [][]
  • buildinfos.debian.net-related:

    • Stop creating buildinfo-pool_${suite}_${arch}.list files. []
    • Temporarily disable automatic updates of pool links. []
  • FreeBSD-related:

    • Fix the sudoers to actually permit builds. []
    • Disable debug output for FreeBSD rebuilding jobs. []
    • Upgrade to FreeBSD 14.2 [] and document that bmake was installed on the underlying FreeBSD virtual machine image [].
  • Misc:

    • Update the ‘real’ year to 2025. []
    • Don’t try to install a Debian bookworm kernel from ‘backports’ on the infom08 node which is running Debian trixie. []
    • Don’t warn about system updates for systems running Debian testing. []
    • Fix a typo in the ZOMBIES definition. [][]

In addition:

  • Ed Maste modified the FreeBSD build system to the clean the object directory before commencing a build. []

  • Gioele Barabucci updated the rebuilder stats to first add a category for network errors [] as well as to categorise failures without a diffoscope log [].

  • Jessica Clarke also made some FreeBSD-related changes, including:

    • Ensuring we clean up the object directory for second build as well. [][]
    • Updating the sudoers for the relevant rm -rf command. []
    • Update the cleanup_tmpdirs method to match other removals. []
  • Jochen Sprickerhof:

  • Roland Clobus:

    • Update the reproducible_debstrap job to call Debian’s debootstrap with the full path [] and to use eatmydata as well [][].
    • Make some changes to deduce the CPU load in the debian_live_build job. []

Lastly, both Holger Levsen [] and Vagrant Cascadian [] performed some node maintenance.


If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

05 February, 2025 11:49AM

February 04, 2025

Dominique Dumont

Azure API throttling strikes back

Hi

In my last blog, I explained how we resolved a throttling issue involving Azure storage API. In the end, I mentioned that I was not sure of the root cause of the throttling issue.

Even though we no longer had any problems in the dev and preprod clusters, we still faced throttling issues in prod. The main difference between these environments is that we have about 80 PVs in prod versus 15 in the other environments. Given that we manage 1500 pods in prod, 80 PVs does not look like a lot. 🤨

To continue the investigation, I’ve modified k8s-scheduled-volume-snapshotter to limit the number of snapshots done in a single cron run (see add maxSnapshotCount parameter pull request).

In prod, we used the modified snapshotter to trigger snapshots one by one.

Even with all previous snapshots cleaned up, we could not trigger a single new snapshot without being throttled 🕳. I guess that, in the cron job, just checking the list of PVs to snapshot was enough to exhaust our API quota. 😒

The Azure docs mention that a leaky bucket algorithm is used for throttling. A full bucket holds tokens for 250 API calls, and the bucket gets 25 new tokens per second. Looks like that's not enough.

I was puzzled 😵‍💫 and out of ideas 😶.

I looked for similar problems in AKS issues on GitHub, where I found this comment that recommends using the useDataPlaneAPI parameter in the CSI file driver. That was it! 😃

I was flabbergasted 🤯 by this parameter: why is the CSI file driver able to use 2 APIs? Why is one of them so limited? And more importantly, why is the limited API the default one?

Anyway, setting useDataPlaneAPI: "true" in our VolumeSnapshotClass manifest was the right solution. This indeed solved the throttling issue in our prod cluster. ⚕️
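
For reference, a minimal sketch of such a manifest, assuming the Azure Files CSI driver name file.csi.azure.com and that the option goes into the parameters map like the driver's other options; the class name is made up:

kubectl apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: azurefile-snapclass   # illustrative name
driver: file.csi.azure.com
deletionPolicy: Delete
parameters:
  useDataPlaneAPI: "true"     # the parameter discussed above
EOF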

But not the snapshot issue 😑. Amongst the 80 PVs, I still had 2 snapshots failing. 🦗

Fortunately, the error was mentioned in the description of the failed snapshots: we had too many (200) snapshots for these shared volumes.

What ?? 😤 All these snapshots were cleaned up last week.

I then tried to delete these snapshots through the Azure console. But the console failed to delete these snapshots due to API throttling. Looks like the Azure console is not using the right API. 🤡

Anyway, I went back to the solution explained in my previous blog and listed all snapshots with the az command. I indeed had a lot of snapshots, many of them dated Jan 19 and 20. There was often a new bogus snapshot created every minute.

These were created during the first attempt at fixing the throttling issue. I guess that even though the CSI file driver was throttled, a snapshot was still created in the storage account, but the CSI driver did not see it and retried a minute later 💥. What a mess.

Anyway, I’ve again cleaned up these bogus snapshots 🧨, and now snapshot creation is working fine 🤸🏻‍♂️.

For now.

All the best.

04 February, 2025 01:23PM by dod

Paul Wise

FLOSS Activities January 2025

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Sponsors

All work was done on a volunteer basis.

04 February, 2025 02:44AM

February 02, 2025

hackergotchi for Bits from Debian

Bits from Debian

Bits from the DPL

Dear Debian community,

this is bits from DPL for January.

Sovereign Tech Agency

I was recently pointed to Technologies and Projects supported by the Sovereign Tech Agency which is financed by the German Federal Ministry for Economic Affairs and Climate Action. It is a subsidiary of the Federal Agency for Disruptive Innovation, SPRIND GmbH.

It is worth sending applications there for distinct projects as that is their preferred method of funding. Distinguished developers can also apply for a fellowship position that pays up to 40hrs / week (32hrs when freelancing) for a year. This is esp. open to maintainers of larger numbers of packages in Debian (or any other Linux distribution).

There might be a chance that some of the Debian-related projects submitted to the Google Summer of Code that did not get funded could be retried with those foundations. As per the FAQ of the project: "The Sovereign Tech Agency focuses on securing and strengthening open and foundational digital technologies. These communities working on these are distributed all around the world, so we work with people, companies, and FOSS communities everywhere."

Similar funding organizations include the Open Technology Fund and FLOSS/fund. If you have a Debian-related project that fits these funding programs, they might be interesting options. This list is by no means exhaustive—just some hints I’ve received and wanted to share. More suggestions for such opportunities are welcome.

Year of code reviews

On the debian-devel mailing list, there was a long thread titled "Let's make 2025 a year when code reviews became common in Debian". It initially suggested something along the lines of: "Let's review MRs in Salsa." The discussion quickly expanded to include patches that have been sitting in the BTS for years, which deserve at least the same attention. One idea I'd like to emphasize is that associating BTS bugs with MRs could be very convenient. It’s not only helpful for documentation but also the easiest way to apply patches.

I’d like to emphasize that no matter what workflow we use—BTS, MRs, or a mix—it is crucial to uphold Debian’s reputation for high quality. However, this reputation is at risk as more and more old issues accumulate. While Debian is known for its technical excellence, long-standing bugs and orphaned packages remain a challenge. If we don’t address these, we risk weakening the high standards that Debian is valued for. Revisiting old issues and ensuring that unmaintained packages receive attention is especially important as we prepare for the Trixie release.

Debian Publicity Team will no longer post on X/Twitter

The Press Team has my full support in its decision to stop posting on X. As per the Publicity delegation:

  • The team is in charge of deciding the most suitable publication venue or venues for announcements and when they are published.

the team once decided to join Twitter, but circumstances have since changed. The current Press delegates have the institutional authority to leave X, just as their predecessors had the authority to join. I appreciate that the team carefully considered the matter, reinforced by the arguments developed on the debian-publicity list, and communicated its reasoning openly.

Kind regards,

Andreas.

02 February, 2025 11:00PM by Andreas Tille

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppUUID 1.1.2 on CRAN: Newly Adopted Package

The RcppUUID package on CRAN has been providing UUIDs (based on the underlying Boost library) for several years. Written by Artem Klemsov and maintained in this gitlab repo, the package is a very nice example of clean and straightforward library binding.

When we did our annual BH upgrade to 1.87.0 and checked reverse dependencies, we noticed that RcppUUID needed a small and rather minor update, which we showed as a short diff in an issue filed. Neither I nor CRAN heard from Artem, so the package ended up being archived last week. This in turn led me to make this minimal update to 1.1.2 to resurrect it, which CRAN processed more or less like a regular update given this explanation, and so it arrived last Friday.

Courtesy of my CRANberries, there is also a ‘new package’ note (no diffstat report yet). More detailed information is on the RcppUUID page, or the github repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

02 February, 2025 10:38PM

hackergotchi for Colin Watson

Colin Watson

Free software activity in January 2025

Most of my Debian contributions this month were sponsored by Freexian. If you appreciate this sort of work and are at a company that uses Debian, have a look to see whether you can pay for any of Freexian‘s services; as well as the direct benefits, that revenue stream helps to keep Debian development sustainable for me and several other lovely people.

You can also support my work directly via Liberapay.

Python team

We finally made Python 3.13 the default version in testing! I fixed various bugs that got in the way of this:

As with last month, I fixed a few more build regressions due to the removal of a deprecated intersphinx_mapping syntax in Sphinx 8.0:

I ported a few packages to Django 5.1:

I ported python-pypump to IPython 8.0.

I fixed python-datamodel-code-generator to handle isort 6, and contributed that upstream.

I fixed some packages to tolerate future versions of dh-python that will drop their dependency on python3-setuptools:

I removed the old python-celery-common transitional package from celery, since nothing in Debian needs it any more.

I fixed or helped to fix various other build/test failures:

I upgraded these packages to new upstream versions:

Rust team

I fixed rust-pyo3-ffi to avoid explicit Python version dependencies that were getting in the way of making Python 3.13 the default version.

Security tools packaging team

I uploaded libevt to fix a build failure on i386 and to tolerate future versions of dh-python that will drop their dependency on python3-setuptools.

Installer team

I helped with some testing of a debian-installer-utils patch as part of the /usr move. I need to get around to uploading this, since it looks OK now.

Other small things

Helmut Grohne reached out for help debugging a multi-arch coinstallability problem (you know it’s going to be complicated when even Helmut can’t figure it out on his own …) in binutils, and we had a call about that.

I reviewed and applied a new Romanian translation of debconf’s manual pages.

I did my twice-yearly refresh of debmirror’s mirror_size documentation, and applied a contribution to improve the example debmirror.conf.

I fixed an arguable preprocessor string handling bug in man-db, and applied a fix for out-of-tree builds.

02 February, 2025 07:48PM by Colin Watson

hackergotchi for Joachim Breitner

Joachim Breitner

Coding on my eInk Tablet

For many years I wished I had a setup that would allow me to work (that is, code) productively outside in the bright sun. It’s winter right now, but when it’s summer again it’s always a bit. This weekend I got closer to that goal.

TL;DR: Using code-server on a beefy machine seems to be quite neat.

Passively lit coding

Personal history

Looking back at my own old blog entries I find one from 10 years ago describing how I bought a Kobo eBook reader with the intent of using it as an external monitor for my laptop. It seems that I got a proof-of-concept setup working, using VNC, but it was tedious to set up, and I never actually used that. I subsequently noticed that the eBook reader is rather useful to read eBooks, and it has been in heavy use for that ever since.

Four years ago I gave this old idea another shot and bought an Onyx BOOX Max Lumi. This is an A4-sized tablet running Android and had the very promising feature of an HDMI input. So hopefully I’d attach it to my laptop and it just works™. Turns out that this never worked as well as I hoped: Even if I set the resolution to exactly the tablet’s screen’s resolution I got blurry output, and it also drained the battery a lot, so I gave up on this. I subsequently noticed that the tablet is rather useful to take notes, and it has been in sporadic use for that.

Going off on this tangent: I later learned that the HDMI input of this device appears to the system like a camera input, and I don’t have to use Boox’s “monitor” app but could use other apps like FreeDCam as well. This somehow managed to fix the resolution issues, but the setup still wasn’t as convenient to be used regularly.

I also played around with pure terminal approaches, e.g. SSH’ing into a system, but since my usual workflow was never purely text-based (I was at least used to using a window manager instead of a terminal multiplexer like screen or tmux) that never led anywhere either.

VSCode, working remotely

Since these attempts I have started a new job working on the Lean theorem prover, and working on or with Lean basically means using VSCode. (There is a very good neovim plugin as well, but I’m using VSCode nevertheless, if only to make sure I am dogfooding our default user experience).

My colleagues have said good things about using VSCode with the remote SSH extension to work on a beefy machine, so I gave this a try now as well, and while it’s not a complete game changer for me, it does make certain tasks (rebuilding everything after switching branches, running the test suite) very convenient. And it’s a bit spooky to run these workloads without the laptop’s fan spinning up.

In this setup, the workspace is remote, but VSCode still runs locally. But it made me wonder about my old goal of being able to work reasonably efficient on my eInk tablet. Can I replicate this setup there?

VSCode itself doesn’t run on Android directly. There are projects that run a Linux chroot or run in termux on the Android system, and then you can use VNC to connect to it (e.g. on Andronix)… but that did not seem promising. It seemed fiddly, and I probably should take it easy on the tablet’s system.

code-server, running remotely

A more promising option is code-server. This is a fork of VSCode (actually of VSCodium) that runs completely on the remote machine, and the client machine just needs a browser. I set that up this weekend and found that I was able to do a little bit of work reasonably.

Access

With code-server one has to decide how to expose it safely enough. I decided against the tunnel-over-SSH option, as I expected that to be somewhat tedious to set up (both initially and for each session) on the android system, and I liked the idea of being able to use any device to work in my environment.

I also decided against the more involved “reverse proxy behind proper hostname with SSL” setups, because they involve a few extra steps, and some of them I cannot do as I do not have root access on the shared beefy machine I wanted to use.

That left me with the option of using code-server’s built-in support for self-signed certificates and a password:

$ cat .config/code-server/config.yaml
bind-addr: 1.2.3.4:8080
auth: password
password: xxxxxxxxxxxxxxxxxxxxxxxx
cert: true

With trust-on-first-use this seems reasonably secure.

Update: I noticed that the browsers would forget that I trust this self-signed cert after restarting the browser, and also that I cannot “install” the page (as a Progressive Web App) unless it has a valid certificate. But since I don’t have superuser access to that machine, I can’t just follow the official recommendation of using a reverse proxy on port 80 or 443 with automatic certificates. Instead, I pointed a hostname that I control to that machine, obtained a certificate manually on my laptop (using acme.sh) and copied the files over, so the configuration now reads as follows:

bind-addr: 1.2.3.4:3933
auth: password
password: xxxxxxxxxxxxxxxxxxxxxxxx
cert: .acme.sh/foobar.nomeata.de_ecc/foobar.nomeata.de.cer
cert-key: .acme.sh/foobar.nomeata.de_ecc/foobar.nomeata.de.key

(This is getting very specific to my particular needs and constraints, so I’ll spare you the details.)

Service

To keep code-server running I created a systemd service that’s managed by my user’s systemd instance:

~ $ cat ~/.config/systemd/user/code-server.service
[Unit]
Description=code-server
After=network-online.target

[Service]
Environment=PATH=/home/joachim/.nix-profile/bin:/nix/var/nix/profiles/default/bin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
ExecStart=/nix/var/nix/profiles/default/bin/nix run nixpkgs#code-server

[Install]
WantedBy=default.target

(I am using nix as a package manager on a Debian system there, hence the additional PATH and complex ExecStart. If you have a more conventional setup then you do not have to worry about Environment and can likely use ExecStart=code-server.)

For this to survive me logging out I had to ask the system administrator to run loginctl enable-linger joachim, so that systemd allows my jobs to linger.
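
With the unit file in place and lingering enabled, starting it is the usual user-level systemd routine:

$ systemctl --user daemon-reload
$ systemctl --user enable --now code-server.service
$ systemctl --user status code-server.service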

Git credentials

The next issue to be solved was how to access the git repositories. The work is all on public repositories, but I still need a way to push my work. With the classic VSCode-SSH-remote setup from my laptop, this is no problem: My local SSH key is forwarded using the SSH agent, so I can seamlessly use that on the other side. But with code-server there is no SSH key involved.

I could create a new SSH key and store it on the server. That did not seem appealing, though, because SSH keys on Github always have full access. It wouldn’t be horrible, but I still wondered if I can do better.

I thought of creating fine-grained personal access tokens that only allow me to push code to specific repositories, and nothing else, and just storing them permanently on the remote server. Still a neat and convenient option, but creating PATs for our org requires approval and I didn’t want to bother anyone on the weekend.

So I am experimenting with Github’s git-credential-manager now. I have configured it to use git’s credential cache with an elevated timeout, so that once I log in, I don’t have to log in again for one workday.

$ nix-env -iA nixpkgs.git-credential-manager
$ git-credential-manager configure
$ git config --global credential.credentialStore cache
$ git config --global credential.cacheOptions "--timeout 36000"

To log in, I have to visit https://github.com/login/device on an authenticated device (e.g. my phone) and enter an 8-character code. Not too shabby in terms of security. I only wish that webpage would not require me to press Tab after each character…

This still grants rather broad permissions to the code-server, but at least only temporarily.

Android setup

On the client side I could now open https://host.example.com:8080 in Firefox on my eInk Android tablet, click through the warning about self-signed certificates, log in with the fixed password mentioned above, and start working!

I switched to a theme that supposedly is eInk-optimized (eInk by Mufanza). It’s not perfect (e.g. git diffs are unhelpful because it is not possible to distinguish deleted from added lines), but it’s a start. There are more eInk themes on the official Visual Studio Marketplace, but because code-server is a fork it cannot use that marketplace, and for example this theme isn’t on Open-VSX.

For some reason the F11 key doesn’t work, but going fullscreen is crucial, because screen estate is scarce in this setup. I can go fullscreen using VSCode’s command palette (Ctrl-P) and invoking the command there, but Firefox often jumps out of the fullscreen mode, which is annoying. I still have to pay attention to when that’s happening; maybe it’s the Esc key, which I am of course using a lot due to me using vim bindings.

A more annoying problem was that on my Boox tablet, sometimes the on-screen keyboard would pop up, which is seriously annoying! It took me a while to track this down: The Boox has two virtual keyboards installed: The usual Google AOSP keyboard, and the Onyx Keyboard. The former is clever enough to stay hidden when there is a physical keyboard attached, but the latter isn’t. Moreover, pressing Shift-Ctrl on the physical keyboard rotates through the virtual keyboards. Now, VSCode has many keyboard shortcuts that require Shift-Ctrl (especially on an eInk device, where you really want to avoid using the mouse). And the limited settings exposed by the Boox Android system do not allow you to configure that or disable the Onyx keyboard! To solve this, I had to install the KISS Launcher, which would allow me to see more Android settings, and in particular allow me to disable the Onyx keyboard. So this is fixed.

I was hoping to improve the experience even more by opening the web page as a Progressive Web App (PWA), as described in the code-server FAQ. Unfortunately, that did not work. Firefox on Android did not recognize the site as a PWA (even though it recognizes a PWA test page). And I couldn’t use Chrome either because (unlike Firefox) it would not consider a site with a self-signed certificate as a secure context, and then code-server does not work fully. Maybe this is just some bug that gets fixed in later versions.

Now that I use a proper certificate, I can use it as a Progressive Web App, and with Firefox on Android this starts the app in full-screen mode (no system bars, no location bar). The F11 key still doesn’t work, and using the command palette to enter fullscreen does nothing visible, but then Esc leaves that fullscreen mode and I suddenly have the system bars again. But maybe if I just don’t do that I get the full screen experience. We’ll see.

I did not work enough with this yet to assess how much the smaller screen estate, the lack of colors and the slower refresh rate will bother me. I probably need to hide Lean’s InfoView more often, and maybe use the Error Lens extension, to avoid having to split my screen vertically.

I also cannot easily work on a park bench this way, with a tablet and a separate external keyboard. I’d need at least a table, or some additional piece of hardware that turns tablet + keyboard into some laptop-like structure that I can put on my, well, lap. There are cases for Onyx products that include a keyboard, and maybe they work on the lap, but they don’t have the Trackpoint that I have on my ThinkPad TrackPoint Keyboard II, and how can you live without that?

Conclusion

After this initial setup chances are good that entering and using this environment is convenient enough for me to actually use it; we will see when it gets warmer.

A few bits could be better. In particular logging in and authenticating GitHub access could be both more convenient and more safe – I could imagine that when I open the page I confirm that on my phone (maybe with a fingerprint), and that temporarily grants access to the code-server and to specific GitHub repositories only. Is that easily possible?

02 February, 2025 03:07PM by Joachim Breitner (mail@joachim-breitner.de)

hackergotchi for Junichi Uekawa

Junichi Uekawa

February.

February. This is entrance exam season for Tokyo Junior High Schools. Good luck to those who are going through it now.

02 February, 2025 05:56AM by Junichi Uekawa

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppSpdlog 0.0.20 on CRAN: New Upstream, New Features

Version 0.0.20 of RcppSpdlog arrived on CRAN early this morning and has been uploaded to Debian. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want that was written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site.

This release updates the code to the version 1.15.1 of spdlog which was released this morning as well. It also contains a contributed PR which illustrates logging in a multithreaded context.

The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.20 (2025-02-01)

  • New multi-threaded logging example (Young Geun Kim and Dirk via #22)

  • Upgraded to upstream release spdlog 1.15.1

Courtesy of my CRANberries, there is also a diffstat report. More detailed information is on the RcppSpdlog page, or the package documentation site.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

02 February, 2025 01:48AM

February 01, 2025

hackergotchi for Guido Günther

Guido Günther

Free Software Activities January 2025

Another short status update of what happened on my side last month. Mostly focused on quality of life improvements in phosh and cleaning up and improving phoc this time around (including catching up with wlroots git), but some improvements for other things like phosh-osk-stub happened on the side too.

phosh

  • Fix crash when switching between some fractional scales (MR)
  • Make layer surface code more flexible and fade in system modal dialogs (MR)
  • Auto close quick setting status pages (MR)
  • Cleanup and undraft the captive portal (MR). If this lands we have another 2y old MR out of the way.
  • Brush up monitor scaling quick setting originally submitted by Adam Honse (MR)
  • Fix modem interface regression introduced by the rework for cell broadcast (MR)
  • Fill background with primary color (MR), also ensures fallback
  • Release Phosh 0.44.1 (MR)
  • Don't forget to update background on dark uri changes (MR) when background can't be loaded
  • Fix crash in mpris widget (MR)
  • Reduce restyling on state changes (MR)
  • Dev Doc updates (MR)
  • Drop long unused widget (MR)
  • Improve Wi-Fi portal notification handling (MR)
  • Support sound capability on the notification server (MR)
  • Use systemd-cat to launch session (MR)

phoc

  • layer-surface: Don't arrange surfaces / set focus on finalize (MR)
  • Update to wlroots 0.18.2 and some cleanups (MR)
  • Run scan-build in CI and fix detected issues (MR)
  • Simplify XWayland handling in PhocDesktop (MR)
  • Handle xdg's suspended surface state (in phone mode) (MR)
  • Invalidate layer shell list less often and take opacity into account for visibility check (MR)
  • Unbreak touch point debugging / deduplicate code (MR)
  • Release Phoc 0.44.1 (MR)
  • Fix initial alpha == 0.0 (MR)
  • Fix leak in output-shield and allow to set easing function (MR)
  • Smoothen mode/scale/orientation changes (MR)
  • Fix thumbnail rendering of floating windows (MR)
  • Fix damage tracking debugging (MR)
  • Allow to toggle some debugging flags at runtime (MR)
  • Fix subsurface damage tracking regression caused by the wlroots 0.18.x switch (MR)
  • Catch up with wlroots 0.19.x again (MR)
  • Backport the sensible bits of the 0.19.x branch to main to smoothen the next upgrade (MR)
  • Fix touch drag (MR) - basically the wlroots patch from below.
  • Make PhocViewChild less of a snow flake (MR)
  • Fix popup reposition damage (MR)
  • Draft: Deduplicate the View and LayerSurface subsurface/popup handling (MR). Needs 625 to land first.
  • popup: Try harder to find a suitable output (MR)

phosh-osk-stub

  • Let long press on shift toggle CapsLock (MR)
  • Add minimal GObject for application (MR). Lets merge the bits of the 1y old MR that still apply.
  • Unbrush 2y old MR to get rid of more globals (MR)
  • Update Unicode data, thanks GTK devs! (MR)

xdg-desktop-portal-phosh

  • Use GTK 4.17's portal avoidance (MR)

phosh-recipes

libcmatrix

  • Use Authorization header: (MR)

phrog

  • Ship example greetd config (MR)

Debian

  • Update libphosh-rs (MR)
  • Update phoc to new git snapshot (MR)
  • Upload phosh 0.44.1
  • Backport touch fix (MR)

git-buildpackage

  • Make it work with Python 3.13, sigh (MR)
  • Typo fixes (MR)
  • Release 0.9.37 (MR)
  • Run codespell in CI (MR)

livi

  • Suspend video stream when toplevel is suspended (MR) - saves battery
  • Release 0.3.1 (MR)

feedbackd

  • Allow events to override the sound feedback with custom sounds (MR). Allows desktop/mobile shells like phosh to honour application prefs for notifications.

Wayland protocols

  • Propose notch/cutout support protocol (MR)

Wlroots

  • Allow to access opacity boolean (MR)
  • Unbreak drag and drop via touch (MR)

Bug reports

  • udev regression affecting gmobile (Bug). Many thanks to Yu Watanabe for providing the fix so quickly

Reviews

This is not code by me but reviews on other peoples code. The list is incomplete, but I hope to improve on this in the upcoming months. Thanks for the contributions!

  • phosh: Uninstall action (MR) - merged
  • phosh: Add home-enabled property (MR) - merged
  • phosh: Emergency prefs dialog improvements (MR) - merged
  • phosh: Bump gtk versions in ui file (MR) - merged
  • phosh: Show week number (MR) - merged
  • phosh: compile schemas for plugins (MR) - merged
  • phosh: Reduce API surface (MR)
  • phosh: Use AdwEntryRow (MR) - merged
  • phosh-osk-stub: Il layout additions (MR) - merged
  • phosh-mobile-settings: Use 'meson setup' in CI (MR) - merged
  • phosh-tour autostart (MR)
  • livi flatpak update (MR) - merged
  • debian: phosh: Recommend kbd (MR) - merged
  • iio-sensor-proxy libssc support (MR)
  • git-buildpackage: Spelling fixes (MR) - merged
  • git-buildpackage: DEP spelling consistency (MR) - merged
  • git-buildpackage: Add branch layout diagram (MR) - merged

Help Development

If you want to support my work see donations.

Comments?

Join the Fediverse thread

01 February, 2025 11:24AM

Thomas Koch

Architecture Decision Logs

Posted on February 1, 2025

I recently discovered “Architecture Decision Logs”, or Architecture Decision Records (ADL/ADR), in a software project and really like the idea of explicitly writing down such decisions. You can find complex templates, theory and process for these documents that might be appropriate for big projects. For a small project I don’t think these are necessary; a free-form text file is probably fine.
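
As a purely hypothetical illustration (the date, decision and reasoning below are invented), such a free-form entry can be as short as:

    2025-02-01: Store configuration in a single TOML file instead of YAML.
    Why: comments are supported, there is no surprising implicit typing,
    and the parser we already depend on handles it.
    Alternatives considered: YAML (implicit typing), JSON (no comments).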

Besides benefits of an ADL listed elsewhere, I also see benefits especially for free software projects:

  • Potential contributors can quickly decide whether they want to align with the decisions.
  • Discussions about project directions can be handled with less emotion.
  • A decision to fork a project can be based on diverging architecture decisions.
  • Potential users can decide whether the software aligns with their needs.
  • Code readers might have fewer WTF moments in which they believe the code author was stupid, incompetent and out of their mind.

The purpose of ADLs overlaps with Project Requirements and Design documents (PRD, DD). While the latter should in theory be written before coding starts, ADLs are written during the development of the project and capture the thought process.

Thus ADLs are, in my opinion, more aligned with the reality of (agile) software development, while PRDs and DDs are more aligned with hierarchical organizations in which development is driven by management decisions. As a consequence, PRDs and DDs often don’t have much in common with the real software, or they hinder the development process since programmers feel pressured not to deviate from them.

01 February, 2025 12:00AM

January 31, 2025

hackergotchi for Daniel Pocock

Daniel Pocock

Arrested: Albanian Outreachy whistleblowers, Sonny Piers GNOME & Debian connections

Last year, we calculated that rogue Debianists have spent over $120,000 trying to censor web sites. Why are they giving so much money to the lawyerists? It comes down to the Debian suicide cluster. We can see that in the demand below, where they ask for the police to use all methods at their disposal to hunt down whistleblowers talking about the suicide cluster and abuse.

Axel Beckert, ETH Zurich, arrest whistleblowers

As it turns out, everybody already knows who the whistleblowers are. In fact, for some time now, they have been openly paying money to the whistleblowers, in the form of internships and jobs.

Even more fascinating, the Swiss police had already arrested the Albanian Outreachies when they arrived at Zurich airport on 19 September 2017. I was waiting at the airport to meet them and they were detained for a couple of hours in a cell. I took a photo of them when they were leaving captivity:

Kristi Progri, Anisa Kuci, Outreachy, GNOME, albania, whistleblowers

While they were in captivity, the Swiss police called me and spoke to me to ask questions about their itinerary in Switzerland and who was paying for these women to travel.

Looking at the other lawyerist documents, we can see that rogue Debianists are frequently claiming that these women were young female developers:

Jonathan Cohen, Debian, trafficking

If the discussions about the risk of trafficking are baseless, why did the Kanton Zurich police want to ask the same questions about the frequent movements of these women between Albania and the Schengen zone?

Why did the rogue Debianists pay $120,000 to lawyerists to insist that these women are developers? Looking at their profiles on the GNOME web site, they are employed to do administrative work. They were never trained or employed as developers anywhere.

The original Outreach Program for Women (OPW) started in the GNOME Foundation so it is interesting that both of these women have subsequently been employed there. OPW was renamed to Outreachy but it is the same program.

One of the women was an Outreachy intern for Mozilla.

The other woman was given a ticket from Albania to Brazil for DebConf19. She sat beside the former leader Chris Lamb at the DebConf dinner and a few weeks later an Outreachy internship was created for her. After Outreachy, she went to work for Wikimedia Italia for a while.

In an earlier blog, I explained that these women are two of the whistleblowers who revealed evidence about abuse in an Albanian organization funded by larger free software organizations.

In April 2024, during the process that censored the debian.news web site on World Press Freedom Day, I revealed that I had sent a confidential complaint about the underage issues to the Mozilla staff.

In May 2024, immediately after my disclosure, the GNOME Foundation Board frantically held a series of meetings about "personnel" issues:

GNOME Foundation Board minutes

In the last meeting, on 27 May 2024, the membership issue is mentioned. They removed Sonny Piers with the nasty "for cause" ramblings about a secret expulsion. This is total nonsense, you can't expel somebody "for cause" without being able to say what the "cause" actually is. There is a real possibility that Sonny Piers simply asked questions about the personnel matters noted in the previous board meetings.

The Executive Director, Holly Millions, resigned at much the same time. Maybe she learned the truth about the GNOME / Albania connection and didn't want anything to do with it.

In the first week of June, the entire Albanian web site was taken down and the staff profile of Kristi Progri on the GNOME site was changed to delete her links to the Albanian group.

These things are not coincidences. It looks like the employment of the Albanian women at GNOME and the elimination of evidence on the Albanian web site are all connected. These things happened at exactly the same time.

In October 2024, GNOME Foundation published a news report telling us they have to let go of Caroline Henriksen (Creative Director) and Melissa Wu (Director of Community Development) due to a budget shortfall.

The same news report promises transparency.

Yet at the same time, GNOME secretly added a second Albanian woman to their employee list.

Looking at the staff list from 21 June 2024 in the Wayback Machine we can only see one Albanian woman, Kristi Progri.

Looking at the staff list from 19 July 2024 we can see both Albanian women. Kristi Progri is still there as Director of Program Management and they added Anisa Kuci as an Administrative Assistant.

Both of these women started their careers on the same path at the same time. Both of them were in a police cell together in September 2017. Why is one of them given such a senior title and the other has a very junior title? Is this equality?

There are various hints that GNOME may have been paying Anisa Kuci for some time before they publicly added her name on the public web site. Why would GNOME be secretly paying money to an Albanian woman? We can see that Anisa was present in the photos from GUADEC 2022 in Mexico.

Then we can see the Conduct team mafia page. A snapshot from 27 March 2023 does not include the Albanian woman.

A later snapshot on 8 September 2023 includes the name Anisa Kuci. Does that mean that GNOME added the second Albanian woman to the payroll in secret a year before putting her name on their team list?

Did Sonny Piers discover this secret and ask questions about it?

Sonny's own blog report tells us:

The process and decision shocked me. I know people are looking for answers, but I want to protect people involved and the project/foundation. It was never an interpersonal conflict for me.

Reading between the lines, it looks like he asked questions about the personnel issues and somebody got really scared and decided to bundle him out the back door as quickly as possible.

They bamboozled him into silence with the false promise of mediation, which is noted in the GNOME forum post:

(criminal defamation by GNOMEists:) A Code of Conduct complaint was also made against Sonny Piers. The Foundation is engaged in a mediation process with him, which is still ongoing and so we are unable to share more information at this time.

But this is nonsense. After they have defamed him publicly, there is no real opportunity for mediation, he needs compensation and the community needs answers.

While Sonny and his supporters were hoping for this mediation, the toffs in the open source mafia used every face-to-face meeting in the last six months to further reinforce the persecution and turn people against Sonny Piers. They avoid leaving any written evidence of these tactics because they want to give Sonny false hope of a solution. They are afraid that Sonny or somebody else might start leaking the secret minutes of board meetings and gnome-private discussions.

There is a huge ethical problem when they pretend that the Conduct team members are volunteers but they are really staff members on a secret payroll. Staff members are an extension of the executive director so they are not really independent and they will have a tendency to make decisions to please their boss.

Being more specific, staff members on the Conduct team will generally try to cover up mistakes by fellow staff and censor anybody who asks questions about the staff expenditures. Yet with the critical state of the budget, those questions are vital to the survival of the GNOME Foundation.

Ironically, this is just how things worked under the Albanian dictator Enver Hoxha.

In every building, at least one neighbor was a member of the secret police or an informer for the police. These people would get cash bonuses and other privileges in exchange for reports about neighbors asking political questions.

The thing to remember is that both of these women were in the Albanian group from the beginning. They were responsible for managing other volunteers so they know the name and age of every girl who came to the OSCAL conferences and hackerspace. They know the names of developers from various companies who came to visit. As long as they are on the payroll somewhere those details will not be mentioned.

Please see the chronological history of how the Debian harassment and abuse culture evolved.

31 January, 2025 09:00PM

hackergotchi for Gunnar Wolf

Gunnar Wolf

ChatGPT is bullshit

This post is an unpublished review for ChatGPT is bullshit

As people around the world come to understand how LLMs behave, more and more wonder why these models hallucinate and what can be done to reduce it. This provocatively named article by Michael Townsen Hicks, James Humphries and Joe Slater is an excellent primer for better understanding how LLMs work and what to expect from them.

As humans who carry out our relations using language as our main tool, we are easily in awe of the apparent ease with which ChatGPT (the first widely available, and to this day probably the best known, LLM-based automated chatbot) simulates human-like understanding, and of how it helps us carry out even daunting data aggregation tasks. It is common for people to ask ChatGPT for an answer and, if it gets part of the answer wrong, to excuse it by stating that it's just a hallucination. Townsen et al. invite us to switch from that characterization to a more correct one: LLMs are bullshitting. The term is formally presented by Frankfurt [1]. To bullshit is not the same as to lie, because lying requires knowing (and wanting to cover) the truth. A bullshitter does not necessarily know the truth; they just have to provide a compelling description, regardless of whether it is aligned with the truth.

After introducing Frankfurt's ideas, the authors explain the fundamental ideas behind LLM-based chatbots such as ChatGPT: a Generative Pre-trained Transformer (GPT) has as its only goal to produce human-like text, which it does mainly by matching the input's high-dimensional abstract vector representation and probabilistically emitting the next token (word), iterating on the text produced so far. Clearly, a GPT's task is not to seek truth or to convey useful information: it is built to provide a normal-seeming response to the prompts provided by its user. Core data are not queried to find optimal solutions for the user's requests; answers are generated on the requested topic, attempting to mimic the style of the document set it was trained on.

Erroneous data emitted by an LLM is thus not comparable with what a person might hallucinate, but appears because the model has no understanding of truth; in a way, this fits very well with the current state of the world, a time often termed the age of post-truth [2]. Requesting an LLM to provide truth in its answers is basically impossible, given the difference between intelligence and consciousness: following Harari's definitions [3], LLM systems, or any AI-based system, can be seen as intelligent, as they have the ability to attain goals in various, flexible ways, but they cannot be seen as conscious, as they have no ability to experience subjectivity. That is, the LLM is, by definition, bullshitting its way towards an answer: its goal is to provide an answer, not to interpret the world in a trustworthy way.

The authors close their article with a plea for literature on the topic to adopt the more correct “bullshit” term instead of the vacuous, anthropomorphizing “hallucination”. Of course, since the word is already loaded with a negative meaning, it is an unlikely request to see granted.

This is a great article that mixes together Computer Science and Philosophy, and can shed some light on a topic that is hard to grasp for many users.

[1] Frankfurt, Harry (2005). On Bullshit. Princeton University Press.

[2] Zoglauer, Thomas (2023). Constructed truths: truth and knowledge in a post-truth world. Springer.

[3] Harari, Yuval Noah (2023). Nexus: A Brief History of Information Networks From the Stone Age to AI. Random House.

31 January, 2025 06:52PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

zigg 0.0.1 on CRAN: New Package!

Thrilled to announce a new package: zigg. It arrived on CRAN today after a few days of review in the ‘newbies’ queue. zigg provides the Ziggurat pseudo-random number generator for Normal, Exponential and Uniform draws proposed by Marsaglia and Tsang (JSS, 2000), and extended by Leong et al. (JSS, 2005).

I had picked up their work in package RcppZiggurat and updated its code for the 64-bit world we now live in. That package already provided the Normal generator along with several competing implementations, which it compared rigorously and timed. As one of the generators was based on the GNU GSL via the implementation of Voss, we always ended up with a run-time dependency on the GSL too. No more: this new package is zero-dependency, zero-suggests and hence very easy to deploy. Moreover, we also include a demonstration of four distinct ways of accessing the compiled code from another R package: pure and straight-up C, similarly pure C++, inclusion of the header in C++, as well as via Rcpp.

The other advance is the resurrection of the second generator for the Exponential distribution. And following Burkardt we expose the Uniform too. The main upside of these generators is their excellent speed, as can be seen in the comparison against the default R generators produced by the example script timings.R:

Needless to say, speed is not everything. This PRNG comes from the time of 32-bit computing, so the generator period is likely to be shorter than that of newer high-quality generators. If in doubt, forgo speed and stick with the high-quality default generators.

The short NEWS entry follows.

Changes in version 0.0.1 (2025-01-30)

  • Initial version and CRAN upload

For more, see the package page or the git repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.

31 January, 2025 05:49PM

Russell Coker

January 29, 2025

hackergotchi for Keith Packard

Keith Packard

picolibc-i18n

Internationalization support in Picolibc

There are two major internationalization APIs in the C library: locales and iconv. Iconv is an isolated component which only performs charset conversion in ways that don't interact with anything else in the library. Locales affect pretty much every API that deals with strings and covers charset conversion along with a huge range of localized information from character classification to formatting of time, money, people's names, addresses and even standard paper sizes.
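
For readers unfamiliar with it, here is a minimal sketch of standard POSIX iconv usage; nothing picolibc-specific is assumed here:

    #include <iconv.h>
    #include <stdio.h>

    int main(void)
    {
        /* Convert a Latin-1 string to UTF-8 with the POSIX iconv API */
        iconv_t cd = iconv_open("UTF-8", "ISO-8859-1");
        if (cd == (iconv_t)-1)
            return 1;

        char in[] = "caf\xe9";                 /* "café" in ISO-8859-1 */
        char out[16] = "";
        char *inp = in, *outp = out;
        size_t inleft = sizeof in - 1, outleft = sizeof out - 1;

        if (iconv(cd, &inp, &inleft, &outp, &outleft) == (size_t)-1) {
            iconv_close(cd);
            return 1;
        }
        iconv_close(cd);

        printf("%s\n", out);                   /* the same text, now UTF-8 encoded */
        return 0;
    }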

Picolibc inherits its implementation of both of these from newlib. Given that embedded applications rarely need advanced functionality from either of these APIs, I hadn't spent much time exploring this space.

Newlib locale code

When run on Cygwin, Newlib's locale support is quite complete as it leverages the underlying Windows locale support. Without Windows support, everything aside from charset conversion and character classification data is stubbed out at the bottom of the stack. Because the implementation can support full locale functionality, the implementation is designed for that, with large data structures and lots of code.

Charset conversion and character classification data for locales is all built-in; none of that can be loaded at runtime. There is support for all of the ISO-8859 charsets, three JIS variants, a bunch of Windows code pages and a few other single-byte encodings.

One oddity in this code is that when using a JIS locale, wide characters are stored in EUC-JP rather than Unicode. Every other locale uses Unicode. This means APIs like wctype are implemented by mapping the JIS-encoded character to Unicode and then using the underlying Unicode character classification tables. One consequence of this is that there isn't any Unicode to JIS mapping provided as it isn't necessary.

When testing the charset conversion and Unicode character classification data, I found numerous minor errors and a couple of pretty significant ones. The JIS conversion code had the most serious issue I found; most of the conversions are in a 2d array which is manually indexed with the wrong value for the length of each row. This led to nearly every translated value being incorrect.

The charset conversion tables and Unicode classification data are now generated using python charset support and the standard Unicode data files. In addition, tests have been added which compare Picolibc to the system C library for every supported charset.

Newlib iconv code

The iconv charset support is completely separate from the locale charset support with a much wider range of supported targets. It also supports loading charset data from files at runtime, which reduces the size of application images.

Because the iconv and locale implementations are completely separate, the charset support isn't the same. Iconv supports a lot more charsets, but it doesn't support all of those available to locales. For example, Iconv has Big5 support which locale lacks. Conversely, locale has Shift-JIS support which iconv does not.

There's also a difference in how charset names are mapped in the two APIs. The locale code has a small fixed set of aliases, which doesn't include things like US-ASCII or ANSI X3.4. In contrast, the iconv code has an extensive database of charset aliases which are compiled into the library.

Picolibc has a few tests for the iconv API which verify charset names and perform some translations. Without an external reference, it's hard to know if the results are correct.

POSIX vs C internationalization

In addition to including the iconv API, POSIX extends locale support in a couple of ways:

  1. Exposing locale objects via the newlocale, uselocale, duplocale and freelocale APIs.

  2. uselocale sets a per-thread locale, rather than the process-wide locale.

Goals for Picolibc internationalization support

For charsets, supporting UTF-8 should cover the bulk of embedded application needs, and even that is probably more than what most applications require. Most (all?) compilers use Unicode for wide character and string constants. That means wchar_t needs to be Unicode in every locale.

Aside from charset support, the rest of the locale infrastructure is heavily focused on creating human-consumable strings. I don't think it's a stretch to say that none of this is very useful these days, even for systems with sophisticated user interactions. For picolibc, the cost to provide any of this would be high.

Having two completely separate charset conversion datasets makes for a confusing and error-prone experience for developers. Replacing iconv with code that leverages the existing locale support for translating between multi-byte and wide-character representations will save a bunch of source code and improve consistency.

Embedded systems can be very sensitive to memory usage, both read-only and read-write. Applications not using internationalization capabilities shouldn't pay a heavy premium even when the library binary is built with support. For the most sensitive targets, the library should be configurable to remove unnecessary functionality.

Picolibc needs to be conforming with at least the C language standard, and as much of POSIX as makes sense. Fortunately, the requirements for C are modest as it only includes a few locale-related APIs and doesn't include iconv.

Finally, picolibc should test these APIs to make sure they conform with relevant standards, especially character set translation and character classification. The easiest way to do this is to reference another implementation of the same API and compare results.

Switching to Unicode for JIS wchar_t

This involved ripping the JIS to Unicode translations out of all of the wide character APIs and inserting them into the translations between multi-byte and wide-char representations. The missing Unicode to JIS translation was kludged by iterating over all JIS code points until a matching Unicode value was found. That's an obvious place for a performance improvement, but at least it works.

Tiny locale

This is a minimal implementation of locales which conforms with the C language standard while providing only charset translation and character classification data. It handles all of the existing charsets, but splits things into three levels

  1. ASCII
  2. UTF-8
  3. Extended, including any or all of:
       a. ISO 8859
       b. Windows code pages and other 8-bit encodings
       c. JIS (JIS, EUC-JP and Shift-JIS)

When built for ASCII-only, all of the locale support is short-circuited, except for error checking. In addition, support in printf and scanf for wide characters is removed by default (it can be re-enabled with the -Dio-wchar=true meson option). This offers the smallest code size. Because the wctype APIs (e.g. iswupper) are all locale-specific, this mode restricts them to ASCII-only, which means they become wrappers on top of the ctype APIs with added range checking.
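
As a rough illustration of that shape (this is not the actual picolibc source, just a sketch of the idea):

    #include <ctype.h>
    #include <wctype.h>

    /* In ASCII-only mode a wide-character classifier can reduce to the
     * corresponding ctype call plus a range check. */
    static int ascii_only_iswupper(wint_t wc)
    {
        if ((unsigned long)wc > 0x7f)   /* outside ASCII (or WEOF): never upper case */
            return 0;
        return isupper((int)wc);
    }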

When built for UTF-8, character classification for wide characters uses tables that provide the full Unicode range. Setlocale now selects between two locales, "C" and "C.UTF-8". Any locale name other than "C" selects the UTF-8 version. If the locale name contains "." or "-", then the rest of the locale name is taken to be a charset name and matched against the list of supported charsets. In this mode, only "us_ascii", "ascii" and "utf-8" are recognized.

Because a single byte of a utf-8 string with the high-bit set is not a complete character, all of the ctype APIs in this mode can use the same implementation as the ASCII-only mode. This means the small ctype implementation is available.

Calling setlocale(LC_ALL, "C.UTF-8") will allow the application to use the APIs which translate between multi-byte and wide-characters to deal with UTF-8 encoded strings. In addition, scanf and printf can read and write UTF-8 strings into wchar_t strings.
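
A minimal sketch of that usage, relying only on standard C APIs (this is illustrative, not code from picolibc itself):

    #include <locale.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <wchar.h>

    int main(void)
    {
        /* Select the UTF-8 locale so multi-byte strings are treated as UTF-8 */
        if (setlocale(LC_ALL, "C.UTF-8") == NULL)
            return 1;

        const char *utf8 = "caf\xc3\xa9";   /* "café" encoded as UTF-8 */
        wchar_t wide[8];

        /* Convert the multi-byte string into wide (Unicode) characters */
        size_t n = mbstowcs(wide, utf8, sizeof wide / sizeof wide[0]);
        if (n == (size_t)-1)
            return 1;

        /* %ls converts the wide string back to multi-byte output */
        printf("%zu characters: %ls\n", n, wide);
        return 0;
    }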

Locale names are converted into locale IDs, an enumeration which lists the available locales. Each ID implies a specific charset as that's the only thing which differs between them. This means a locale can be encoded in a few bytes rather than an array of strings.

In terms of memory usage, applications not using locales and not using the wctype APIs should see only a small increase in code space. That's due to the wchar_t support added to printf and scanf, which need to translate between multi-byte and wide-character representations. There aren't any tables required as ASCII and UTF-8 are directly convertible to Unicode. On ARM-v7m, the added code in printf and scanf adds up to about 1kB and another 32 bytes of RAM is used.

The big difference when enabling extended charset support is that all of the charset conversion and character classification operations become table driven and dependent on the locale. Depending on the extended charsets supported, these can be quite large. With all of the extended charsets included, this adds an additional 30kB of code and static data and uses another 56 bytes of RAM.

There are two known gaps in functionality compared with the newlib code:

  1. Locale strings that encode different locales for different categories. That's nominally required by POSIX as LC_ALL is supposed to return a string sufficient to restore the locale, but the only category which actually matters is LC_CTYPE.

  2. No nl_langinfo support. This would be fairly easy to add, returning appropriate constant values for each parameter.

Tiny locale was merged to picolibc main in this PR.

Tiny iconv

Replacing the bulky newlib iconv code was far easier than swapping locale implementations. Essentially all that iconv does is compute two functions, one which maps from multi-byte to wide-char in one locale and another which maps from wide-char to multi-byte in another locale.

Once the JIS locales were fixed to use Unicode, the new iconv implementation was straightforward. POSIX doesn't provide any _l version of mbrtowc or wcrtomb, so using standard C APIs would have been clunky. Instead, the implementation uses the internal APIs to compute the correct charset conversion functions. The entire implementation fits in under 200 lines of code.
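
Conceptually, the conversion loop has roughly this shape. The sketch below uses the standard process-locale mbrtowc()/wcrtomb() pair for clarity, whereas the real implementation uses picolibc's internal per-charset functions so the input and output sides can use different charsets:

    #include <limits.h>
    #include <string.h>
    #include <wchar.h>

    /* Decode each multi-byte character to a wide character, then re-encode
     * it for the output buffer. Returns the number of bytes written, or
     * (size_t)-1 on error. */
    static size_t convert(const char *in, size_t in_len, char *out, size_t out_cap)
    {
        mbstate_t dec = { 0 }, enc = { 0 };
        size_t written = 0;

        while (in_len > 0) {
            wchar_t wc;
            size_t used = mbrtowc(&wc, in, in_len, &dec);
            if (used == (size_t)-1 || used == (size_t)-2)
                return (size_t)-1;        /* invalid or truncated input */
            if (used == 0)
                used = 1;                 /* consumed an embedded NUL byte */

            char buf[MB_LEN_MAX];
            size_t made = wcrtomb(buf, wc, &enc);
            if (made == (size_t)-1 || made > out_cap - written)
                return (size_t)-1;        /* unencodable or out of space */

            memcpy(out + written, buf, made);
            written += made;
            in += used;
            in_len -= used;
        }
        return written;
    }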

Tiny iconv is in progress in this PR.

Future directions

Right now, both of these new bits of code sit in the source tree parallel to the old versions. I'm not seeing any particular reason to keep the old versions around; they have provided a useful point of comparison in developing the new code, but I don't think they offer any compelling benefits going forward.

29 January, 2025 09:53PM

Russ Allbery

Review: The Sky Road

Review: The Sky Road, by Ken MacLeod

Series: Fall Revolution #4
Publisher: Tor
Copyright: 1999
Printing: August 2001
ISBN: 0-8125-7759-0
Format: Mass market
Pages: 406

The Sky Road is the fourth book in the Fall Revolution series, but it represents an alternate future that diverges after (or during?) the events of The Star Fraction. You probably want to read that book first, but I'm not sure reading The Stone Canal or The Cassini Division adds anything to this book other than frustration. Much more on that in a moment.

Clovis colha Gree is an aspiring doctoral student in history with a summer job as a welder. He works on the platform for the project, which the reader either slowly discovers from the book or quickly discovers from the cover is a rocket to get to orbit. As the story opens, he meets (or, as he describes it, is targeted by) a woman named Merrial, a tinker who works on the guidance system. The early chapters provide only a few hints about Clovis's world: a statue of the Deliverer on a horse that forms the backdrop of their meeting, the casual carrying of weapons, hints that tinkers are socially unacceptable, and some division between the white logic and the black logic in programming.

Also, because this is a Ken MacLeod novel, everyone is obsessed with smoking and tobacco the way that the protagonists of erotica are obsessed with sex.

Clovis's story is one thread of this novel. The other, told in the alternating chapters, is the story of Myra Godwin-Davidova, chair of the governing Council of People's Commissars of the International Scientific and Technical Workers' Republic, a micronation embedded in post-Soviet Kazakhstan. Series readers will remember Myra's former lover, David Reid, as the villain of The Stone Canal and the head of the corporation Mutual Protection, which is using slave labor (sort of) to support a resurgent space movement and its attempt to take control of a balkanized Earth. The ISTWR is in decline and a minor power by all standards except one: They still have nuclear weapons.

So, first, we need to talk about the series divergence.

I know from reading about this book on-line that The Sky Road is an alternate future that does not follow the events of The Stone Canal and The Cassini Division. I do not know this from the text of the book, which is completely silent about even being part of a series.

More annoyingly, while the divergence in the Earth's future compared to The Cassini Division is obvious, I don't know what the Jonbar hinge is. Everything I can find on-line about this book is maddeningly coy. Wikipedia claims the divergence happens at the end of The Star Fraction. Other reviews and the Wikipedia talk page claim it happens in the middle of The Stone Canal. I do have a guess, but it's an unsatisfying one and I'm not sure how to test its correctness. I suppose I shouldn't care and instead take each of the books on their own terms, but this is the type of thing that my brain obsesses over, and I find it intensely irritating that MacLeod didn't explain it in the books themselves. It's the sort of authorial trick that makes me feel dumb, and books that gratuitously make me feel dumb are less enjoyable to read.

The second annoyance I have with this book is also only partly its fault. This series, and this book in particular, is frequently mentioned as good political science fiction that explores different ways of structuring human society. This was true of some of the earlier books in a surprisingly superficial way. Here, I would call it hogwash.

This book, or at least the Myra portion of it, is full of people doing politics in a tactical sense, but like the previous books of this series, that politics is mostly embedded in personal grudges and prior romantic relationships. Everyone involved is essentially an authoritarian whose ability to act as they wish is only contested by other authoritarians and is largely unconstrained by such things as persuasion, discussions, elections, or even theory. Myra and most of the people she meets are profoundly cynical and almost contemptuous of any true discussion of political systems. This is the trappings and mechanisms of politics without the intellectual debate or attempt at consensus, turning it into a zero-sum game won by whoever can threaten the others more effectively.

Given the glowing reviews I've seen in relatively political SF circles, presumably I am missing something that other people see in MacLeod's approach. Perhaps this level of pettiness and cynicism is an accurate depiction of what it's like inside left-wing political movements. (What an appalling condemnation of left-wing political movements, if so.) But many of the on-line reviews lead me to instead conclude that people's understanding of "political fiction" is stunted and superficial. For example, there is almost nothing Marxist about this book — it contains essentially no economic or class analysis whatsoever — but MacLeod uses a lot of Marxist terminology and sets half the book in an explicitly communist state, and this seems to be enough for large portions of the on-line commentariat to conclude that it's full of dangerous, radical ideas. I find this sadly hilarious given that MacLeod's societies tend, if anything, towards a low-grade libertarianism that would be at home in a Robert Heinlein novel. Apparently political labels are all that's needed to make political fiction; substance is optional.

So much for the politics. What's left in Clovis's sections is a classic science fiction adventure in which the protagonist has a radically different perspective from the reader and the fun lies in figuring out the world-building through the skewed perspective of the characters. This was somewhat enjoyable, but would have been more fun if Clovis had any discernible personality. Sadly he instead seems to be an empty receptacle for the prejudices and perspective of his society, which involve a lot of quasi-religious taboos and an essentially magical view of the world. Merrial is a more interesting character, although as always in this series the romance made absolutely no sense to me and seemed to be conjured by authorial fiat and weirdly instant sexual attraction.

Myra's portion of the story was the part I cared more about and was more invested in, aided by the fact that she's attempting to do something more interesting than launch a crewed space vehicle for no obvious reason. She at least faces some true moral challenges with no obviously correct response. It's all a bit depressing, though, and I found Myra's unwillingness to ground her decisions in a more comprehensive moral framework disappointing. If you're going to make a protagonist the ruler of a communist state, even an ironic one, I'd like to hear some real political philosophy, some theory of sociology and economics that she used to justify her decisions. The bits that rise above personal animosity and vibes were, I think, said better in The Cassini Division.

This series was disappointing, and I can't say I'm glad to have read it. There is some small pleasure in finishing a set of award-winning genre books so that I can have a meaningful conversation about them, but the awards failed to find me better books to read than I would have found on my own. These aren't bad books, but the amount of enjoyment I got out of them didn't feel worth the frustration. Not recommended, I'm afraid.

Rating: 6 out of 10

29 January, 2025 05:16AM

January 28, 2025

hackergotchi for Bits from Debian

Bits from Debian

New Debian Developers and Maintainers (November and December 2024)

The following contributors got their Debian Developer accounts in the last two months:

  • Ananthu C V (weepingclown)
  • Andrea Pappacoda (tachi)
  • Athos Coimbra Ribeiro (athos)
  • Gioele Barabucci (gioele)
  • Jongmin Kim (jmkim)
  • Shengqi Chen (harry)
  • Frans Spiesschaert (frans)

The following contributors were added as Debian Maintainers in the last two months:

  • Tianyu Chen
  • Emmanuel FARHI
  • наб
  • Nicolas Schodet

Congratulations!

28 January, 2025 07:00PM by Jean-Pierre Giraud

Russ Allbery

Review: Moose Madness

Review: Moose Madness, by Mar Delaney

Publisher: Kalikoi
Copyright: May 2021
ASIN: B094HGT1ZB
Format: Kindle
Pages: 68

Moose Madness is a sapphic shifter romance novella (on the short side for a novella) by the same author as Wolf Country. It was originally published in the anthology Her Wild Soulmate, which appears to be very out of print.

Maggie (she hates the nickname Moose) grew up in Moose Point, a tiny fictional highway town in (I think) Alaska. (There is, unsurprisingly, an actual Moose Point in Alaska, but it's a geographic feature and not a small town.) She stayed after graduation and is now a waitress in the Moose Point Pub. She's also a shifter; specifically, she is a moose shifter like her mother, the town mayor. (Her father is a fox shifter.) As the story opens, the annual Moose Madness festival is about to turn the entire town into a blizzard of moose kitsch.

Fiona Barton was Maggie's nemesis in high school. She was the cool, popular girl, a red-headed wolf shifter whose friend group teased and bullied awkward and uncoordinated Maggie mercilessly. She was also Maggie's impossible crush, although the very idea seemed laughable. Fi left town after graduation, and Maggie hadn't thought about her for years. Then she walks into Moose Point Pub dressed in biker leathers, with piercings and one side of her head shaved, back in town for a wedding in her pack.

Much to the shock of both Maggie and Fi, they realize that they're soulmates as soon as their eyes meet. Now what?

If you thought I wasn't going to read the moose and wolf shifter romance once I knew it existed, you do not know me very well. I have been saving it for when I needed something light and fun. It seemed like the right palate cleanser after a very disappointing book.

Moose Madness takes place in the same universe as Wolf Country, which means there are secret shifters all over Alaska (and presumably elsewhere) and they have the strong magical version of love at first sight. If one is a shifter, one knows immediately as soon as one locks eyes with one's soulmate and this feeling is never wrong. This is not my favorite romance trope, but if I get moose shifter romance out of it, I'll endure.

As you can tell from the setup, this is enemies-to-lovers, but the whole soulmate thing shortcuts the enemies to lovers transition rather abruptly. There's a bit of apologizing and air-clearing at the start, but most of the novella covers the period right after enemies have become lovers and are getting to know each other properly. If you like that part of the arc, you will probably enjoy this, but be warned that it's slight and somewhat obvious. There's a bit of tension from protective parents and annoying pack mates, but it's sorted out quickly and easily. If you want the characters to work for the relationship, this is not the novella for you. It's essentially all vibes.

I liked the vibes, though! Maggie is easy to like, and Fi does a solid job apologizing. I wish there was quite a bit more moose than we get, but Delaney captures the combination of apparent awkwardness and raw power of a moose and has a good eye for how beautiful large herbivores can be. This is not the sort of book that gives a moment's thought to wolves being predators and moose being, in at least some sense, prey animals, so if you are expecting that to be a plot point, you will be disappointed. As with Wolf Country, Delaney elides most of the messier and more ethically questionable aspects of sometimes being an animal.

This is a sweet, short novella about two well-meaning and fundamentally nice people who are figuring out that middle school and high school are shitty and sometimes horrible but don't need to define the rest of one's life. It's very forgettable, but it made me smile, and it was indeed a good palate cleanser.

If you are, like me, the sort of person who immediately thought "oh, I have to read that" as soon as you saw the moose shifter romance, keep your expectations low, but I don't think this will disappoint. If you are not that sort of person, you can safely miss this one.

Rating: 6 out of 10

28 January, 2025 05:02AM

January 27, 2025

Review: The House That Fought

Review: The House That Fought, by Jenny Schwartz

Series: Uncertain Sanctuary #3
Publisher: Jenny Schwartz
Copyright: December 2020
Printing: September 2024
ASIN: B0DBX6GP8Z
Format: Kindle
Pages: 199

The House That Fought is the third and final book of the self-published space fantasy trilogy starting with The House That Walked Between Worlds. I read it as part of the Uncertain Sanctuary omnibus, which is reflected in the sidebar metadata.

At the end of the last book, one of Kira's random and vibe-based trust decisions finally went awry. She has been betrayed! She's essentially omnipotent, the betrayal does not hurt her in any way, and, if anything, it helps the plot resolution, but she has to spend some time feeling bad about it first. Eventually, though, the band of House residents return to the problem of Earth's missing magic.

By Earth here, I mean our world, which technically isn't called Earth in the confusing world-building of this series. Earth within this universe is an archetypal world that is the origin world for humans, the two types of dinosaurs, and Neanderthals. There are numerous worlds that have split off from it, including Human, the one world where humans are dominant, which is what we think of as Earth and what Kira calls Earth half the time. And by worlds, I mean entire universes (I think?), because traveling between "worlds" is dimensional travel, not space travel. But there is also space travel?

The world building started out confusing and has degenerated over the course of the series. Given that the plot, such as it is, revolves around a world-building problem, this is not a good sign.

Worse, though, is that the quality of the writing has become unedited, repetitive drivel. I liked the first book and enjoyed a few moments of the second book, but this conclusion is just bad. This is the sort of book that the maxim "show, don't tell" was intended to head off. The dull, thudding description of the justification for every character emotion leaves no room for subtlety or reader curiosity.

Evander was elf and I was human. We weren't the same. I had magic. He had the magic I'd unconsciously locked into his augmentations. We were different and in love. Speaking of our differences could be a trigger.

I peeked at him, worried. My customary confidence had taken a hit.

"We're different," he answered my unspoken question. "And we work anyway. We'll work to make us work."

There is page after page after page of this sort of thing: facile emotional processing full of cliches and therapy-speak, built on the most superficial of relationships. There's apparently a romance now, which happened with very little build-up, no real discussion or communication between the characters, and only the most trite and obvious relationship work.

There is a plot underneath all this, but it's hard to make it suspenseful given that Kira is essentially omnipotent. Schwartz tries to turn the story into a puzzle that requires Kira to figure out what's going on before she can act, but this is undermined by the confusing world-building. The loose ends the plot has accumulated over the previous two books are mostly dropped, sometimes in a startlingly casual way. I thought Kira would care who killed her parents, for example; apparently, I was wrong.

The previous books caught my attention with a more subtle treatment of politics than I expect from this sort of light space fantasy. The characters had, I thought, a healthy suspicion of powerful people and a willingness to look for manipulation or ulterior motives. Unfortunately, we discover here that this is not due to an appreciation of the complexity of power and motive in governments. Instead, it's a reflexive bias against authority and structured society that sounds like an Internet libertarian complaining about taxes. Powerful people should be distrusted because all governments are corrupt and bad and steal your money in order to waste it. Oh, except for the cops and the military; they're generally good people you should trust.

In retrospect, I should have expected this turn given the degree to which Schwartz stressed the independence of sorcerers. I thought that was going somewhere more interesting than sorcerers as self-appointed vigilantes who are above the law and can and should do anything they damn well please. Sadly, it was not.

Adding to the lynch mob feeling, the ending of this book is a deeply distasteful bit of magical medieval punishment that I thought was vile, and which is, of course, justified by bad things happening to children. No societal problems were solved, but Kira got her petty revenge and got to be gleeful and smug about it. This is apparently what passes for a happy ending.

I don't even know what to say about the bizarre insertion of Christianity, which makes little sense given the rest of the world-building. It's primarily a way for Kira to avoid understanding or thinking about an important part of the plot. As sadly seems to often be the case in books like this, Kira's faith doesn't appear to prompt any moral analysis or thoughtful ethical concern about her unlimited power, just certainty that she's right and everyone else is wrong.

This was dire. It is one of those self-published books that I feel a little bad about writing this negative of a review about, because I think most of the problem was that the author's skill was not up to the story that she wanted to tell. This happens a lot in self-published fiction, particularly since Kindle Unlimited has started rewarding quantity over quality. But given how badly the writing quality degraded over the course of the series, and how offensive the ending was, I do want to warn other people off of the series.

There is so much better fiction out there. Avoid this one, and probably the rest of the series unless you're willing to stop after the first book.

Rating: 2 out of 10

27 January, 2025 05:14AM

January 26, 2025

Review: Dark Matters

Review: Dark Matters, by Michelle Diener

Series: Class 5 #4
Publisher: Eclipse
Copyright: October 2019
ISBN: 0-6454658-6-0
Format: Kindle
Pages: 307

Dark Matters is the fourth book in the science fiction semi-romance Class 5 series. There are spoilers for all of the previous books, and although enough is explained that you could make sense of the story starting here, I wouldn't recommend it. As with the other books in the series, it follows new protagonists, but the previous protagonists make an appearance.

You will be unsurprised to hear that the Tecran kidnapped yet another Earth woman. The repetitiveness of the setup would be more annoying if the book took itself too seriously, but it doesn't, and so I mostly find it entertaining. I thought Diener was going to dodge the obvious series structure, but now I am wondering if we're going to end up with one woman per Class 5 ship after all.

Lucy is not on a ship, however, Tecran or otherwise. She is a captive in a military research facility on the Tecran home world. The Tecran are in very deep trouble given the events of the previous book and have decided that Lucy's existence is a liability. Only the intervention of some sympathetic Tecran scientists she partly befriended during her captivity lets her escape the facility before it's destroyed. Now she's alone, on an alien world, being hunted by the military.

It's not entirely the fault of this book that it didn't tell the story that I wanted to read. The setup for Dark Matters implies this book will see the arrival of consequences for the Tecran's blatant violations of the Sentient Beings Agreement. I was looking forward to a more political novel about how such consequences could be administered. This is the sort of problem that we struggle with in our politics: Collective punishment isn't acceptable, but there have to be consequences sufficient to ensure that a state doesn't repeat the outlawed behavior, and yet attempting to deliver those consequences feels like occupation and can set off worse social ruptures and even atrocities. I wasn't expecting that deep of political analysis of what is, after all, a lighthearted SF adventure series, but Diener has been willing to touch on hard problems. The ethics of violence has been an ongoing theme of the series.

Alas for me, this is not what we get. The arriving cavalry, in the form of a Class 5 and the inevitable Grih hunk to serve as the love interest du jour, quickly become more interested in helping Lucy elude pursuers (or escape captors) than in the delicate political situation. The conflict between the local population is a significant story element, but only as backdrop. Instead, this reads like a thriller or an action movie, complete with alien predators and a cinematic set piece finale.

The political conflict between the Tecran and the United Council does reach a conclusion of sorts, but it's not that satisfying. Perhaps some of the political fallout will happen in future books, but here Diener simplifies the morality of the story in the climax and dodges out of the tricky ethical and social challenge of how to punish a sovereign nation. One of the things I like about this series is that it takes moral indignation seriously, but now that Diener has raised the (correct) complication that people have strong motivations to find excuses for the actions of their own side, I hope she can find a believable political resolution that isn't simple brute force.

This entry in the series wasn't bad, but it didn't grab me. Lucy was fine as a protagonist; her ability to manipulate the Tecran into making mistakes fits the longer time she's had to study them and keeps her distinct from the other protagonists. But the small bit of politics we do see is unsatisfying and conveniently simplistic, and this book mostly degenerates into generic action sequences. Bane, the Class 5 ship featured in this story, is great when he's active, and I continue to be entertained by the obsession the Class 5 ships have with Earth women, but he's sidelined for too much of the story. I felt like Diener focused on the least interesting part of the story setup.

If you've read this far, there's nothing wrong with this entry. You'll probably want to keep reading. But it felt like a missed opportunity.

Followed in publication order by Dark Ambitions, a novella that returns to Rose to tell a side story. The next novel is Dark Class, in which we'll presumably see the last kidnapped Earth woman.

Rating: 6 out of 10

26 January, 2025 05:35AM

January 25, 2025

hackergotchi for Steve Kemp

Steve Kemp

The CP/M emulator now works better!

I keep saying I'm "done" with my CP/M emulator, but then I keep overhauling it in significant ways. Today is no exception. In the past the emulator used breakpoints to detect when calls to the system BIOS, or BDOS, were made. That was possible because the BIOS and BDOS entry points are at predictable locations. For example a well-behaved program might make a system call with code like this:

    LD A,42
    LD C,4
    CALL 0x0005

So setting a breakpoint on 0x0005 would let you detect a system-call was being made, inspect the registers to see which system-call was being made and then carry out the appropriate action in your emulator before returning control back to the program. Unfortunately some binaries patch the RAM, changing the contents of the entry points, or changing internal jump-tables, etc. The end result is that sometimes code running at the fixed addresses is not your BIOS at all, but something else. By trapping/faulting/catching execution here you break things, badly.

So today's new release fixes that! No more breakpoints. Instead we deploy a "real BDOS" in RAM that will route system-calls to our host emulator via a clever trick. For BDOS functions the C-register will contain the system call to operate, our complete BDOS implementation is:

    OUT (C),C
    RET

The host program can catch writes to output ports, and will know that "OUT (3), 3" means "Invoke system call #3", for example. This means binary patches to entry-points, or any internal jump-tables won't confuse things and so long as control eventually reaches my BIOS or BDOS code areas things will work.
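
For illustration, here is a rough sketch of what the host side of that trick can look like. The types and handler names below are invented for this example and are not the emulator's actual code:

    #include <stdint.h>
    #include <stdio.h>

    /* Because the in-RAM BDOS stub executes "OUT (C),C", the port number
     * seen on an output-port write is itself the BDOS function number. */
    typedef struct {
        uint8_t a, c, d, e;       /* the few Z80 registers used here */
        uint8_t ram[65536];
    } cpu_t;

    static void bdos_console_output(cpu_t *cpu)   /* BDOS 2: character in E */
    {
        putchar(cpu->e);
    }

    static void bdos_print_string(cpu_t *cpu)     /* BDOS 9: '$'-terminated string at DE */
    {
        uint16_t addr = (uint16_t)((cpu->d << 8) | cpu->e);
        while (cpu->ram[addr] != '$')
            putchar(cpu->ram[addr++]);
    }

    /* Called by the CPU core whenever the guest executes an OUT instruction. */
    void out_port_write(cpu_t *cpu, uint8_t port, uint8_t value)
    {
        (void)value;                              /* equals the port for "OUT (C),C" */
        switch (port) {
        case 2:  bdos_console_output(cpu); break;
        case 9:  bdos_print_string(cpu);   break;
        default: fprintf(stderr, "unimplemented BDOS call %u\n", port);
        }
    }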

I also added a new console-input driver, since I have a notion of pluggable input and output devices, which just reads input from a file. Now I can prove that my code works. Pass the following file to the input-driver and we have automated testing:

A:
ERA HELLO.COM
ERA HELLO.HEX
ERA HELLO.PRN
hello
ASM HELLO
LOAD HELLO
DDT HELLO.com
t
t
t
t
t
t
t
t
t
C-c
EXIT

Here we:

  • Erase "HELLO.COM", "HELLO.HEX", "HELLO.PRN"
  • Invoke "hello[.com]" (which will fail, as we've just deleted it).
  • Then we assemble "HELLO.ASM" to "HELLO.HEX", then to "HELLO.COM"
  • Invoke DDT, the system debugger, and tell it to trace execution a bunch of times.
  • Finally we exit the debugger with "Ctrl-C"
  • And exit the emulator with "exit"

I can test the output and confirm there are no regressions. Neat.

Anyway new release, today. Happy.

25 January, 2025 11:00PM

hackergotchi for Bits from Debian

Bits from Debian

Infomaniak Platinum Sponsor of DebConf25

infomaniaklogo

We are pleased to announce that Infomaniak has committed to sponsor DebConf25 as a Platinum Sponsor.

Infomaniak is Switzerland’s leading developer of Web technologies. With operations all over Europe and based exclusively in Switzerland, the company designs and manages its own data centers powered by 100% renewable energy, and develops all its solutions locally, without outsourcing. With millions of users and the trust of public and private organizations across Europe - such as RTBF, the United Nations, central banks, over 3,000 radio and TV stations, as well as numerous cities and security bodies - Infomaniak stands for sovereign, sustainable and independent digital technology. The company offers a complete suite of collaborative tools, cloud hosting, streaming, marketing and events solutions, while being owned by its employees and self-financed exclusively by its customers.

With this commitment as Platinum Sponsor, Infomaniak is contributing to the Debian annual Developers' conference, directly supporting the progress of Debian and Free Software. Infomaniak contributes to strengthen the community that collaborates on Debian projects from all around the world throughout all of the year.

Thank you very much, Infomaniak, for your support of DebConf25!

Become a sponsor too!

DebConf25 will take place from 14th to 20th July 2025 in Brest, France, and will be preceded by DebCamp, from 7th to 13th July 2025.

DebConf25 is accepting sponsors! Interested companies and organizations should contact the DebConf team through sponsors@debconf.org, or visit the DebConf25 website at https://debconf25.debconf.org/sponsors/become-a-sponsor/.

25 January, 2025 10:22AM by Sahil Dhiman

January 24, 2025

hackergotchi for Jonathan Dowland

Jonathan Dowland

January 22, 2025

hackergotchi for Jonathan McDowell

Jonathan McDowell

Christmas Movies

I watch a lot of films. Since “completing” the IMDB Top 250 back in 2016 I’ve kept an eye on it, and while I don’t go out of my way to watch the films that newly appear in it I generally sit at over 240 watched. I should note I don’t consider myself a film buff/critic, however. I watch things for enjoyment, and a lot of the time that’s kicking back and relaxing and disengaging my brain. So I don’t get into writing reviews, just high level lists of things I’ve watched, sometimes with a few comments.

With that in mind, let’s talk about Christmas movies. Yes, I appreciate it’s the end of January, but generally during December we watch things that have some sort of Christmas theme. New ones if we can find them, but also some of what we consider “classics”. This almost always starts with Scrooged after we’ve put up the tree. I don’t always like Bill Murray (I couldn’t watch The Life Aquatic with Steve Zissou and I think Lost in Translation is overrated), but he’s in a bunch of things I really like, and Scrooged is one of those.

I don’t care where you sit on whether Die Hard is a Christmas movie or not, it’s a good movie and therefore regularly gets a December watch. Die Hard 2 also fits into that category of “sequel at least as good as the original”, though Helen doesn’t agree. We watched it anyway, and I finally made the connection between the William Sadler hotel scene and Michael Rooker’s in Mallrats.

It turns out I’m a Richard Curtis fan. Love Actually has not aged well; most times I watch it I find something new questionable about it, and I always end up hating Alan Rickman for cheating on Emma Thompson, but I do like watching it. He had a new one, That Christmas, out this year, so we watched it as well.

Another new-to-us film this year was Spirited. I generally like Ryan Reynolds, and Will Ferrell is good as long as he’s not too overboard, so I had high hopes. I enjoyed it, but for some reason not as much as I’d expected, and I doubt it’s getting added to the regular watch list.

Larry doesn’t generally like watching full length films, but he (and we), enjoyed The Grinch, which I actually hadn’t seen before. He’s not as fussed on The Muppet Christmas Carol, but we watched it every year, generally on Christmas or Boxing Day. Favourite thing I saw on the Fediverse in December was “Do you know there’s a book of The Muppet Christmas Carol, and they don’t mention that there’s muppets in it once?”

There are various other light-hearted Christmas films we regularly watch. This year included The Holiday (I have so many issues with even just the practicalities of a short-notice house swap), and Last Christmas (lots of George Michael music, what’s not to love? Also it was only on this watch through that we realised the lead character is the Mother of Dragons).

We started, but could not finish, Carry On. I saw it described somewhere as copaganda, and that feels accurate. It does not accurately reflect any of my interactions with TSA at airports, especially during busy periods.

Things we didn’t watch this year, but are regularly in the mix, include Fatman, Violent Night (looking forward to the sequel, hopefully this year), and Lethal Weapon. Klaus is kinda at the other end of the spectrum, but very touching, and we’ve watched it a couple of years now.

Given what we seem to like, any suggestions for other films to add? It’s nice to have enough in the mix that we get some variety every year.

22 January, 2025 01:32PM

January 21, 2025

hackergotchi for Daniel Pocock

Daniel Pocock

Dr Richard Stallman in Montpellier, Robert Edward Ernest Pocock in France

Dr Richard Stallman has made the voyage from Boston to Montpellier, giving a talk to advance the freedom and liberty of the French.

My great grandfather Robert Edward Ernest Pocock made a long journey from Australia to France in 1916. Private Pocock arrived in France on 21 November 1916:

Robert Edward Ernest Pocock

All the documents can be found online at RecordSearch.NAA.gov.au using his name or service number 10768.

Shortly after arriving, Private Pocock was seconded to the 8th Field Artillery Brigade. The Australian War Memorial has pictures of the brigade promoting the liberty of France and Belgium:

9th Field Artillery Brigade, Australia, France, 1917

 

9th Field Artillery Brigade, Australia, France, 1917

 

9th Field Artillery Brigade, Australia, France, 1917

 

In March 2021, rogue developers began attacking Dr Richard Stallman with a cyberbullying campaign disguised as a petition. Their attacks were shut down, just like the fascists who invaded France.

I published this blog supporting the liberty of Dr Stallman. My blog and my Fedora account were immediately censored.

I created the WeMakeFedora.org planet site in parallel with the official Planet Fedora site so that people could continue reading the blog posts from censored developers.

Since then, over $120,000 has been spent by various organizations trying to censor my blog posts and hurt my family.

What does money like that buy?

Rogue Debianists are boasting about getting an invalid legal judgment from corrupt Swiss lawyers on the anniversary of the Kristallnacht. Look at the date, 10 November 2023. Kristallnacht was the beginning of the Holocaust. Debian money is paying for this extreme bullying. The Swiss Intellectual Property Office confirmed it is invalid.

They fooled corrupt WIPO lawyers to write more insults censoring the former debian.news site on World Press Freedom Day (3 May), all because I supported Dr Stallman's right to due process.

Dr Stallman is still standing and so am I. Here are some photos from Montpellier:

Richard Stallman, Montpellier

 

Richard Stallman, Montpellier

 

Richard Stallman, Montpellier

 

Richard Stallman, Montpellier

 

Richard Stallman, Montpellier

 

I asked the last question, asking Dr Stallman about the relationship between FSF and FSFE. Dr Stallman responded in French, confirming that there have been problems with FSFE since before the community elected me as the Fellowship representative in 2017.

Notice that the date when FSFE published the results of the election was 25 April 2017. That is ANZAC Day, the day when Australia remembers our anciens combattants.

Daniel Pocock, ANZAC Day

The day that they tried to attack my home was 21 November 2023. The attack failed and was remotely recorded. Notice that is 107 years after my great grandfather arrived in France on 21 November 1916.

In 2025, we are remembering the 80th anniversary of the end of World War 2 and Nazism. Dr Stallman is promoting freedom but we can't take freedom for granted. At FOSDEM, people need to ask questions about the waste of money and the rise of fascism in the largest free software organizations.

Please see the chronological history of how the Debian harassment and abuse culture evolved.

21 January, 2025 02:00AM

January 20, 2025

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Migrating away from bcachefs

Pretty much exactly a year ago, I posted about how I was trying out this bcachefs thing, being cautiously optimistic (but reminding you to keep backups). Now I'm going the other way; I've converted my last bcachefs filesystem to XFS, and I don't intend to look at it again in the near future.

What changed in the meantime? Well, the short version is: I no longer trust bcachefs' future. Going into a new filesystem is invariably filled with rough edges, and I totally accepted that (thus the backups). But you do get a hope that things will get better, and for a filesystem developed by a single person (Kent Overstreet), that means you'll need to trust that person to a fairly large degree. Having both hung out in #bcache and seen how this plays out on LKML and against Debian, I don't really have that trust anymore.

To be clear: I've seen my share of bugs. Whenever you see Kent defending his filesystem, he usually emphasizes how he has a lot of happy users and very few bugs left and things are going to be great after Just The Next Refactor. (Having to call out this explicitly all the time is usually a bad sign in itself.) But, well, I've had catastrophic data loss bugs that went unfixed for weeks despite multiple people reporting them. I've seen strange read performance issues. I've had oopses. And when you go and ask about why you get e.g. hang messages in the kernel log, you get “oh, yeah, that's a known issue with compression, we're not entirely sure what to do about it”.

There are more things: SSD promotion/demotion doesn't always work. Erasure coding is known-experimental. Userspace fsck frequently hangs my system during boot (I need to go into a debug console and kill mount, and then the kernel mounts the filesystem). umount can take minutes sometimes. The filesystem claims to support quotas, but there's no way to actually make the Linux standard quota tools enable quotas on a multi-device filesystem. And you'll generally need to spend 8% on spare space for copygc, even if your filesystem consists of entirely static files.

You could argue that since I didn't report all of these bugs, I cannot expect them to be fixed either. But here's the main issue for me: Reporting bugs to bcachefs is not a pleasant experience. You hang around in #bcache on IRC, and perhaps Kent is awake, perhaps he's not, perhaps things get fixed or perhaps other things take priority. But you can expect to get flamed about running Debian, or perhaps more accurately, not being willing to spearhead Kent's crusade against Debian's Rust packaging policies. (No, you cannot stay neutral. No, you cannot just want to get your filesystem fixed. You are expected to actively go and fight the Rust team on Kent's behalf.) Kent has made it clear that for distributions to carry bcachefs-tools (which you need to, among other things, mount filesystems), it's his way or the highway; ship exactly what he wants in the way that he wants it, or just go away. (Similarly, it's the “kernel's culture” and “an mm maintainer” that are the problems; it's not like Kent ought to change the way he wants to develop or anything.)

So I simply reverted back to something tried and trusted. It's a bit sad to lose the compression benefits, but I can afford those 100 extra gigabytes of disk space. And it was nice to have partial-SSD-partial-HDD filesystems (it worked better than dm-cache for me), but it turns out 1TB SSDs are cheap now and I could have my key filesystems (/home and /var/lib/postgres) entirely on SSD instead.

Look, I'm not saying bcachefs sucks. Nor that it is doomed; perhaps Kent will accept that he needs to work differently for it to thrive in the kernel (and the Linux ecosystem as a whole), no matter how right he feels he is. But with a filesystem this young, you inevitably have to accept some rough edges in return for some fun. And at some point, the fun just stopped for me.

Perhaps in a couple of years?

20 January, 2025 08:45PM

hackergotchi for Jonathan Dowland

Jonathan Dowland

dsafilter 20th Anniversary

Happy 20th birthday, dsafilter!

dsafilter is a mail filter I wrote two decades ago to solve a problem I had: I was dutifully subscribed to debian-security-announce to learn of new security package updates, but most were not relevant to me.

The filter creates a new, summarizing mail, reporting on whether the DSA was applicable to any package installed on the system running the filter, and attaches the original DSA mail for reference. Users can then choose to drop mails for packages that aren't relevant.

In 2005 I'd been a Debian user for about 6 years, I'd met a few Debian developers in person and I was interested in getting involved. I started my journey to Developer later that same year. I published dsafilter, and I think I sent an announcement to debian-devel, but didn't do a great deal to socialise it, so I suspect nobody else is using it.

That said, I have been using it for those two decades, and I still am! What's notable to me about that is that I haven't had to modify the script at all to keep up with software changes, in particular, from the interpreter. I wrote it as a Ruby script. If I had chosen Perl, it would probably be the same story, but if I'd chosen Python, there's no chance at all that it would still be working today.

If it sounds interesting to you, please give it a try. I think it might be due some spring cleaning.

20 January, 2025 06:33PM

hackergotchi for Jaldhar Vyas

Jaldhar Vyas

January 19, 2025

François Marier

Blocking comment spammers on an Ikiwiki blog

Despite comments on my ikiwiki blog being fully moderated, spammers have been increasingly posting link spam comments on my blog. While I used to use the blogspam plugin, the underlying service was likely retired circa 2017 and its public repositories are all archived.

It turns out that there is a relatively simple way to drastically reduce the amount of spam submitted to the moderation queue: ban the datacentre IP addresses that spammers are using.

Looking up AS numbers

It all starts by looking at the IP address of a submitted comment. From there, we can look it up using whois:

$ whois -r 2a0b:7140:1:1:5054:ff:fe66:85c5

% This is the RIPE Database query service.
% The objects are in RPSL format.
%
% The RIPE Database is subject to Terms and Conditions.
% See https://docs.db.ripe.net/terms-conditions.html

% Note: this output has been filtered.
%       To receive output for a database update, use the "-B" flag.

% Information related to '2a0b:7140:1::/48'

% Abuse contact for '2a0b:7140:1::/48' is 'abuse@servinga.com'

inet6num:       2a0b:7140:1::/48
netname:        EE-SERVINGA-2022083002
descr:          servinga.com - Estonia
geoloc:         59.4424455 24.7442221
country:        EE
org:            ORG-SG262-RIPE
mnt-domains:    HANNASKE-MNT
admin-c:        CL8090-RIPE
tech-c:         CL8090-RIPE
status:         ASSIGNED
mnt-by:         MNT-SERVINGA
created:        2020-02-18T11:12:49Z
last-modified:  2024-12-04T12:07:26Z
source:         RIPE

% Information related to '2a0b:7140:1::/48AS207408'

route6:         2a0b:7140:1::/48
descr:          servinga.com - Estonia
origin:         AS207408
mnt-by:         MNT-SERVINGA
created:        2020-02-18T11:18:11Z
last-modified:  2024-12-11T23:09:19Z
source:         RIPE

% This query was served by the RIPE Database Query Service version 1.114 (SHETLAND)

The important bit here is this line:

origin:         AS207408

which refers to Autonomous System 207408, owned by a hosting company in Germany called Servinga.

Looking up IP blocks

Autonomous Systems are essentially organizations to which IPv4 and IPv6 blocks have been allocated.

These allocations can be looked up easily on the command line either using a third-party service:

$ curl -sL https://ip.guide/as207408 | jq .routes.v4 >> servinga
$ curl -sL https://ip.guide/as207408 | jq .routes.v6 >> servinga

or a local database downloaded from IPtoASN.

This is what I ended up with in the case of Servinga:

[
  "45.11.183.0/24",
  "80.77.25.0/24",
  "194.76.227.0/24"
]
[
  "2a0b:7140:1::/48"
]

Preventing comment submission

While I do want to eliminate this source of spam, I don't want to block these datacentre IP addresses outright since legitimate users could be using these servers as VPN endpoints or crawlers.

I therefore added the following to my Apache config to restrict the CGI endpoint (used only for write operations such as commenting):

<Location /blog.cgi>
        Include /etc/apache2/spammers.include
        Options +ExecCGI
        AddHandler cgi-script .cgi
</Location>

and then put the following in /etc/apache2/spammers.include:

<RequireAll>
    Require all granted

    # https://ipinfo.io/AS207408
    Require not ip 45.11.183.0/24
    Require not ip 80.77.25.0/24
    Require not ip 194.76.227.0/24
    Require not ip 2a0b:7140:1::/48
</RequireAll>

Finally, I can restart the website and commit my changes:

$ apache2ctl configtest && systemctl restart apache2.service
$ git commit -a -m "Ban all IP blocks from Servinga"

Future improvements

I will likely automate this process in the future, but at the moment my blog can go for a week without a single spam message (down from dozens every day). It's possible that I've already cut off the worst offenders.
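
If I do automate it, a rough sketch could regenerate the include file from a list of AS numbers using the same ip.guide lookups shown above (the update-spammers.sh helper and its argument handling are hypothetical):

#!/bin/sh
# hypothetical helper: regenerate the Apache include from a list of AS numbers
# usage: update-spammers.sh 207408 [more AS numbers...]
set -e
OUT=/etc/apache2/spammers.include
{
  echo "<RequireAll>"
  echo "    Require all granted"
  for asn in "$@"; do
    echo ""
    echo "    # https://ipinfo.io/AS${asn}"
    curl -sL "https://ip.guide/as${asn}" \
      | jq -r '(.routes.v4 + .routes.v6)[] | "    Require not ip " + .'
  done
  echo "</RequireAll>"
} > "$OUT"
apache2ctl configtest && systemctl restart apache2.service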

I have published the list I am currently using.

19 January, 2025 09:00PM

Monitoring and Time-Shifting YouTube Podcasts

While most podcasts are available on multiple platforms and either offer an RSS feed or have one that can be discovered, some are only available in the form of a YouTube channel. Thankfully, it's possible to both monitor them for new episodes (i.e. new videos), and time-shift the audio for later offline listening.

Subscribing to a channel via RSS is possible thanks to the built-in, but not easily discoverable, RSS feeds. See these instructions for how to do it. As an example, the RSS feed for the official Government of BC channel is https://www.youtube.com/feeds/videos.xml?channel_id=UC6n9tFQOVepHP3TIeYXnhSA.

When it comes to downloading the audio, the most reliable tool I have found is yt-dlp. Since the exact arguments needed to download just the audio as an MP3 are a bit of a mouthful, I wrote a wrapper script which also does a few extra things:

  • cleans up the filename so that it can be stored on any filesystem
  • adds ID3 tags so that MP3 players can have the metadata they need to display and group related podcast episodes together

If you find that script handy, you may also want to check out the script I have in the same GitHub repo to turn arbitrary video files into a podcast.
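
For reference, a minimal sketch of the kind of yt-dlp invocation such a wrapper can build on (the URL is a placeholder; the extra filename cleanup and tagging mentioned above happen on top of this):

yt-dlp --extract-audio --audio-format mp3 \
  --restrict-filenames --embed-metadata \
  -o '%(upload_date)s-%(title)s.%(ext)s' \
  'https://www.youtube.com/watch?v=VIDEO_ID'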

19 January, 2025 08:46PM

January 18, 2025

Dominique Dumont

How we solved storage API throttling on our Azure Kubernetes clusters

Hi

This issue was quite puzzling, so I’m sharing how we investigated it. I hope it can be useful for you.

My client informed me that he was no longer able to install new instances of his application.

k9s showed that only some pods could not be created: the ones that create a persistent volume (PV). The description of these pods showed an HTTP error 429: new PVCs could not be created because we were throttled by the Azure storage API.

This issue was confirmed by the Azure diagnostic console on Kubernetes (menu “Diagnose and solve problems” → “Cluster and Control Plane Availability and Performance” → “Azure Resource Request Throttling”).

We had a lot of throttling:

2025-01-18_11-01-k8s-throttles.png

Which were explained by the high call rate:

2025-01-18_11-01-k8s-calls.png

The first clue was found at the bottom of Azure diagnostic page:

2025-01-18_11-27-throttles-by-user-agent.png

According to this page, throttling is done by services whose user agent is:

Go/go1.23.1 (amd64-linux) go-autorest/v14.2.1 Azure-SDK-For-Go/v68.0.0
storage/2021-09-01microsoft.com/aks-operat azsdk-go-armcompute/v1.0.0 (go1.22.3; linux)

The main information is Azure-SDK-For-Go, which means the program making all these calls to the storage API is written in Go. All our services are written in TypeScript or Rust, so they are not suspects.

That leaves the controllers running in the kube-system namespace. I could not find anything suspect in the logs of these services.

At that point I was convinced that a component in Kubernetes control plane was making all those calls. Unfortunately, AKS is managed by Microsoft and I don’t have access to the control plane logs.

However, we realized that we had quite a lot of volumesnapshots created in our clusters by k8s-scheduled-volume-snapshotter (a quick way to count them is shown after the list):

  • about 1800 on dev instead of 240
  • 1070 on preprod instead of 180
  • 6800 on prod instead of 2400
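
For reference, these counts can be obtained with something like the following, assuming the snapshot CRDs are installed and kubectl points at the right cluster:

# total number of volumesnapshots across the cluster
kubectl get volumesnapshots --all-namespaces --no-headers | wc -l
# broken down by namespace
kubectl get volumesnapshots --all-namespaces --no-headers | awk '{print $1}' | sort | uniq -c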

We suspected that the Kubernetes reconciliation loop was being throttled when checking the status of all these snapshots. Maybe so, but we also had the same issues and throttle rates on preprod and prod, where the numbers of snapshots were quite different.

We tried to get more information using Azure console on our snapshot account, but it was also broken by the throttling issue.

We were so puzzled that we decided to try Léodagan’s advice (tout crâmer pour repartir sur des bases saines, loosely translated as “burn everything down to start from scratch”) and we destroyed 🧨 our dev cluster piece by piece while checking whether the throttling stopped.

First, we removed all our applications: no change.

Then all ancillary components like rabbitmq and cert-manager were removed: no change. 😶

Then we tried to remove the namespace containing our applications. But we faced another issue: Kubernetes was unable to remove the namespace because it could not destroy some PVCs and volumesnapshots. That was actually good news, because it meant that we were close to the actual issue. 🤗

🪓 We managed to destroy the PVCs and volumesnapshots by removing their finalizers. Finalizers are markers that tell Kubernetes that something needs to be done before actually deleting a resource.

The finalizers were removed with a command like:

kubectl patch volumesnapshots ${volumesnapshot} \
  -p '{"metadata":{"finalizers":null}}' --type merge

Then we made our first bit of progress: the throttling and high call rate stopped on our dev cluster.

To make sure that the snapshots were the issue, we re-installed the ancillary components and our applications. Everything was copacetic. 👌

So, the problem was indeed with PVC and snapshots.

Even though we have backups outside of Azure, we weren’t really thrilled at trying Léodagan’s method 💥 on our prod cluster…

So we looked for a better fix to try on our preprod cluster.

Poking around in PVCs and volumesnapshots, I finally found this error message in the description of a volumesnapshotcontent:

Code="ShareSnapshotCountExceeded" Message="The total number of snapshots
for the share is over the limit."

The number of snapshots found in our cluster was not that high. So I wanted to check the snapshots present in our storage account using the Azure console, which was still broken. ⚰️

Fortunately, Azure CLI is able to retry HTTP calls when getting 429 errors. I managed to get a list of snapshots with

az storage share list --account-name [redacted] --include-snapshots \
    | tee preprod-list.json

There, I found a lot of snapshots dating back from 2024. These were no longer managed by Kubernetes and should have been cleaned up. That was our smoking gun.

I guess that we had a chain of events like:

  • too many snapshots in some volumes
  • Kubernetes control plane tries to reconcile its internal status with Azure resources and frequently retries snapshot creation
  • API throttling kicks in
  • client not happy ☹️

To make things worse, k8s-scheduled-volume-snapshotter creates new snapshots when it cannot list the old ones. So we had 4 new snapshots per day instead of one. 🌊

Since we understood the chain of events, fixing the issue was not too difficult (but quite long 😵‍💫):

  1. stop k8s-scheduled-volume-snapshotter by disabling its cron job
  2. delete all volumesnapshots and volumesnapshotcontents from k8s
  3. since the Azure API was throttled, we also had to remove their finalizers
  4. delete all snapshots from Azure using the az command and a Perl script (this step took several hours; a sketch of the idea follows this list)
  5. re-enable k8s-scheduled-volume-snapshotter
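
For step 4, here is a rough sketch of that kind of cleanup, rewritten with jq instead of our Perl script; it assumes the list output carries a snapshot timestamp field and that az storage share delete accepts --snapshot (double-check against your CLI version before running it):

# keep only the entries that are snapshots, i.e. that carry a snapshot timestamp
az storage share list --account-name [redacted] --include-snapshots \
    | jq -r '.[] | select(.snapshot != null) | "\(.name) \(.snapshot)"' > snapshots.txt

# delete them one by one; az retries on 429 errors, so this is slow but steady
while read -r share ts; do
  az storage share delete --account-name [redacted] --name "$share" --snapshot "$ts"
done < snapshots.txt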

After these steps, preprod was back to normal. 🎯 I’m now applying the same recipe on prod. 💊

We still don’t know why we had all these stale snapshots. It may have been a human error or a bug in k8s-scheduled-volume-snapshotter.

Anyway, to avoid this problem in the future, we will:

  • setup an alert on the number of snapshots per volume
  • check with the k8s-scheduled-volume-snapshotter author how to better cope with throttling

My name is Dominique Dumont, I’m a devops freelance. You can find the devops and audit services I propose on my website or reach out to me on LinkedIn.

All the best

18 January, 2025 09:01AM by dod

January 17, 2025

hackergotchi for C.J. Adams-Collier

C.J. Adams-Collier

Security concerns regarding OpenSSH mac sha1 in Debian

What is HMAC?

HMAC stands for Hash-Based Message Authentication Code. It’s a specific way to use a cryptographic hash function (like SHA-1, SHA-256, etc.) along with a secret key to produce a unique “fingerprint” of some data. This fingerprint allows someone else with the same key to verify that the data hasn’t been tampered with.

How HMAC Works

  • Keyed Hashing: The core idea is to incorporate the secret key into the hashing process. This is done in a specific way to prevent clever attacks that might try to bypass the security.
  • Inner and Outer Hashing: HMAC uses two rounds of hashing. First, the message and a modified version of the key are hashed together. Then, the result of that hash, along with another modified version of the key, are hashed again. This two-step process adds an extra layer of protection.
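
To make this concrete, here is what computing an HMAC looks like on the command line with OpenSSL (the key and message are just placeholders):

$ echo -n "some message" | openssl dgst -sha256 -hmac "secret-key"

Only someone holding the same key can recompute and verify the resulting tag; changing a single byte of the message or the key produces a completely different value.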

HMAC in OpenSSH

OpenSSH uses HMAC to ensure the integrity of messages sent back and forth during an SSH session. This prevents an attacker from subtly modifying data in transit.

HMAC-SHA1 with OpenSSH: Is it Weak?

SHA-1 itself is considered cryptographically broken. This means that with enough computing power, it’s possible to find collisions (two different messages that produce the same hash). However, HMAC-SHA1 is generally still considered secure for most purposes. This is because exploiting weaknesses in SHA-1 to break HMAC-SHA1 is much more difficult than just finding collisions in SHA-1.

Should you use it?

While HMAC-SHA1 might still be okay for now, it’s best practice to move to stronger alternatives like HMAC-SHA256 or HMAC-SHA512. OpenSSH supports these, and they provide a greater margin of safety against future attacks.
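
On a regular Debian system, a minimal sketch of doing so looks like this, assuming your sshd_config includes the standard /etc/ssh/sshd_config.d directory (the algorithm list is only an example; adjust it for your clients):

# see which MAC algorithms your OpenSSH build supports
ssh -Q mac
# pin sshd to SHA-2 based MACs, then validate and reload
echo 'MACs hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,hmac-sha2-256,hmac-sha2-512' \
    > /etc/ssh/sshd_config.d/50-macs.conf
sshd -t && systemctl reload ssh.service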

In Summary

HMAC is a powerful tool for ensuring data integrity. Even though SHA-1 has weaknesses, HMAC-SHA1 in OpenSSH is likely still safe for most users. However, to be on the safe side and prepare for the future, switching to HMAC-SHA256 or HMAC-SHA512 is recommended.

Following are instructions for creating dataproc clusters with sha1 mac support removed:

I can appreciate an excess of caution, and I can offer you some code to produce Dataproc instances which do not allow HMAC authentication using sha1.

Place code similar to this in a startup script or an initialization action that you reference when creating a cluster with gcloud dataproc clusters create:

#!/bin/bash
# remove mac specification from sshd configuration
sed -i -e 's/^macs.*$//' /etc/ssh/sshd_config
# place a new mac specification at the end of the service configuration
ssh -Q mac | perl -e \
  '@mac=grep{ chomp; ! /sha1/ } <STDIN>; print("macs ", join(",",@mac), $/)' >> /etc/ssh/sshd_config
# reload the new ssh service configuration
systemctl reload ssh.service

If this code is hosted on GCS, you can refer to it with

--initialization-actions=CLOUD_STORAGE_URI,[...]

or

--metadata startup-script-url=CLOUD_STORAGE_URI,[...]

17 January, 2025 10:47PM by C.J. Collier

hackergotchi for Daniel Pocock

Daniel Pocock

Pocock shot in the face, shot in the back, shot on Hitler’s birthday saving France, Belgium and FOSDEM

Every participant in World War I has a remarkable story to tell but there are things about the story of John Smith "Jack" Pocock that are still resonating to this day.

There are a range of connections beyond the name Pocock but we haven't quite confirmed exactly how Jack fits into the family tree. For now, all we can say is that he was a cousin of my great grandfather Robert Edward Ernest Pocock who also fought in World War I. Pocock family history / genealogy page.

John Smith Pocock was born in Bendigo and I also spent a few years in Bendigo. My father was also John.

On his enlistment papers he uses the name John Smith Pocock but elsewhere, including his tombstone, he is referred to as Jack Pocock. Our undergraduate engineering project involved an artificial intelligence framework called JACK(TM); it was subsequently used in the first autonomous drone mission at Graytown, which is not far from Bendigo.

As FOSDEM, the annual free software conference in Belgium's capital, approaches, it is important to remember that FOSDEM and Belgium wouldn't be the same if Australians like my great grandfather and his cousin hadn't travelled halfway around the world to ensure that Belgium and France would continue to be the free countries that they are today.

Jack Pocock's medical history and series of hospitalizations show a dramatic similarity to the rhetoric of voluntary groups on the Internet today. Jack's first hospitalization resulted from being shot in the face on 28 May 1917.

John Smith Jack Pocock, Bendigo

On 12 October 1917, at the battle of Passchendaele, Jack was shot in the back.

John Smith Jack Pocock, Bendigo

Passchendaele was a particularly notorious battleground due to the mud. Many wounded soldiers fell off the duckboards and became stuck or drowned in the mud. Tens of thousands of these casualties were never recovered. Reading the stories about the mud at Passchendaele reminds me of the quagmire in the FSFE GA where everybody is pretending to promote openness and transparency while hiding their own conflicts of interest. Hitler himself was known to have been at Passchendaele around the time that Jack was shot in the back.

After another hospitalization, Jack returned to the battlefield again in 1918. He was injured again on 20 April 1918, the birthday of Adolf Hitler, and he lost his left arm. 20 April, Hitler's birthday, was also the day my father died. Ever since then, my family and I have been attacked constantly by the German fascists at the FSFE in Berlin.

Jack was given a hero's welcome when he returned to Bendigo. Belgium and France were liberated thanks to the sacrifices of men like this.

John Smith Jack Pocock, Bendigo

At FOSDEM in 2013, we had a panel discussion about free, open, secure and convenient communications.

Daniel Pocock

17 January, 2025 09:30PM

Russell Coker

Systemd Hardening and Sending Mail

A feature of systemd is the ability to reduce the access that daemons have to the system. The restrictions include access to certain directories, system calls, capabilities, and more. The systemd.exec(5) man page describes them all [1]. To see an overview of the security of daemons run “systemd-analyze security” and to get details of one particular daemon run a command like “systemd-analyze security mon.service”.

I created a Debian wiki page for a systemd-analyze security goal [2]. At this time release goals aren’t a serious thing for Debian so this won’t result in release critical bug reports, but it is still something we can aim for.

For a simple daemon (e.g. BIND, dhcpd, and syslogd) this isn't difficult to do. It might be difficult to understand the implications of some changes (especially when restricting system calls) but you can do some quick tests. The functionality of such programs has a limited scope and once you get it basically working it's done.
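
As an illustration only, a hardening drop-in for a hypothetical simple daemon might start out like this (mydaemon.service is a placeholder; the directives come from systemd.exec(5)):

mkdir -p /etc/systemd/system/mydaemon.service.d
cat > /etc/systemd/system/mydaemon.service.d/hardening.conf <<'EOF'
[Service]
# no privilege escalation, read-only filesystem, no access to /home
NoNewPrivileges=yes
ProtectSystem=strict
ProtectHome=yes
PrivateTmp=yes
ProtectKernelTunables=yes
ProtectControlGroups=yes
RestrictAddressFamilies=AF_UNIX AF_INET AF_INET6
SystemCallFilter=@system-service
EOF
systemctl daemon-reload
systemctl restart mydaemon.service
# then check whether the score improved
systemd-analyze security mydaemon.service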

For some daemons it's harder. Network-Manager is one of the well known slightly more difficult cases as it could do things like starting a VPN connection. The larger scope and the use of plugins makes it difficult to test the combinations. The systemd restrictions apply to child processes too, unlike restrictions by SE Linux and AppArmor, which permit a child process to run in a different security context.

The messages when a daemon fails due to systemd restrictions are usually unclear which makes things harder to setup and makes it more important to get it right.

My “mon” package (which I forked upstream as etbe-mon [3]) is one of the difficult daemons, as local tests can involve probing large parts of the system. But I have got that working reasonably well for most cases.

I have a bug report about running mon with Exim [4]. The problem with this is that Exim has a single process model which means that the process doing local delivery can be a child of the process that initially received the message. So the main mon process needs all the access for delivering mail (writing to /home etc). This also means that every other child of mon will get such access including programs that receive untrusted data from the Internet. Most of the extra access needed by Exim is not a problem, but /home access is a potential risk. It also means that more effort is needed when reviewing the access control.

The problem with this Exim design is that it applies to many daemons. Every daemon that sends email or that potentially could send email in some configuration needs extra access to be granted.

Can Exim be configured to have its “sendmail -T” type operation just write a file in a spool directory for another program to process? Do we need to grant permissions to most of the system just for Exim?

17 January, 2025 04:50AM by etbe

January 16, 2025

hackergotchi for Adnan Hodzic

Adnan Hodzic

How I replaced myself with a genAI chatbot using Gemini

It’s been 5 years since I created auto-cpufreq. Today, it has over 6000 stars on GitHub, attracting 97 contributors, releasing 47 versions, and reaching what...

16 January, 2025 03:46PM by Adnan Hodzic

January 15, 2025

hackergotchi for Thomas Lange

Thomas Lange

FAI 6.2.5 and new ISO available

The new year starts with a FAI release. FAI 6.2.5 is available and contains many small improvements. A new feature is that the command fai-cd can now create ISOs for the ARM64 architecture.

The FAIme service uses the newest FAI version and the most recent Debian point release, 12.9. The FAI CD images were also updated. The Debian packages of FAI 6.2.5 are available for Debian stable (aka bookworm) via the FAI repository by adding this line to sources.list:

deb https://fai-project.org/download bookworm koeln

Using the tool extrepo, you can also add the FAI repository to your host:

# extrepo enable fai

FAI 6.2.5 will soon be available in Debian testing via the official Debian mirrors.

15 January, 2025 11:40AM

January 14, 2025

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Montreal Subway Foot Traffic Data, 2024 edition

Another year of data from Société de Transport de Montréal, Montreal's transit agency!

A few highlights this year:

  1. The closure of the Saint-Michel station had a drastic impact on D'Iberville, the station closest to it.

  2. The opening of the Royalmount shopping center nearly doubled the traffic of the De La Savane station.

  3. The Montreal subway continues to grow, but has not yet recovered from the pandemic. Berri-UQAM station (the largest one) is still below 1 million entries per quarter compared to its pre-pandemic record.

By clicking on a subway station, you'll be redirected to a graph of the station's foot traffic.

Licences

  • The subway map displayed on this page, the original dataset and my modified dataset are licensed under CC0 1.0: they are in the public domain.

  • The R code I wrote is licensed under the GPLv3+. It's pretty much the same code as last year. STM apparently changed (again!) the way they are exporting data, and since it's now somewhat sane, I didn't have to rely on a converter script.

14 January, 2025 05:00AM by Louis-Philippe Véronneau

January 11, 2025

Andrew Cater

20250111 Release media testing for Debian 12.9

We're part way through the testing of release media, with RattusRattus, Isy, Sledge, smcv and Helen in Cambridge, a new tester Blew in Manchester, another new tester MerCury[m], and also highvoltage in South Africa.

Everything is going well so far and we're chasing through the test schedule.

Sorry not to be there in Cambridgeshire with friends - but the room is fairly small and busy :) 


[UPDATE/EDIT - at 20250111 1701 - we're pretty much complete on the testing]

11 January, 2025 05:59PM by Andrew Cater (noreply@blogger.com)

January 09, 2025

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

RIP vorlon

I was very sad to hear that Steve Langasek, aka vorlon, has passed away from cancer. I hadn't talked to him in many years, but I did meet him at Debconf a couple of times, and more importantly: I was there when he was Release Manager for Debian.

Steve stepped up as one of the RMs at a point where Debian's releases were basically a hell march. Releases would drag on for years, freezes would be forever, at some point not a single package came through to testing over a glibc issue. In that kind of environment, and despite no small amount of toxicity surrounding it all, Steve pulled through and managed not only one, but two releases. If you've only seen the release status of Debian after this period, you won't really know how much must have happened in that decade.

The few times I met Steve, he struck me as not only knowledgeable, but also kind and not afraid to step up for people even if it went against the prevailing winds. I wish we could all learn from that. Rest in peace, Steve, your passing is a huge loss for our communities.

09 January, 2025 08:00PM

Reproducible Builds

Reproducible Builds in December 2024

Welcome to the December 2024 report from the Reproducible Builds project!

Our monthly reports outline what we’ve been up to over the past month and highlight items of news from elsewhere in the world of software supply-chain security when relevant. As ever, however, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website.

Table of contents:

  1. reproduce.debian.net
  2. debian-repro-status
  3. On our mailing list
  4. Enhancing the Security of Software Supply Chains
  5. diffoscope
  6. Supply-chain attack in the Solana ecosystem
  7. Website updates
  8. Debian changes
  9. Other development news
  10. Upstream patches
  11. Reproducibility testing framework

reproduce.debian.net

Last month saw the introduction of reproduce.debian.net. Announced at the recent Debian MiniDebConf in Toulouse, reproduce.debian.net is an instance of rebuilderd operated by the Reproducible Builds project. rebuilderd is our server designed to monitor the official package repositories of Linux distributions and attempt to reproduce the observed results there.

This month, however, we are pleased to announce that not only does the service now produce graphs, the reproduce.debian.net homepage itself has become a “start page” of sorts, and the amd64.reproduce.debian.net and i386.reproduce.debian.net pages have emerged. The first of these rebuilds the amd64 architecture, naturally, but it also is building Debian packages that are marked with the ‘no architecture’ label, all. The second builder is, however, only rebuilding the i386 architecture.

Both of these services were also switched to reproduce the Debian trixie distribution instead of unstable, which started with 43% of the archive rebuilt, of which 79.3% was reproduced successfully. This is very much a work in progress, and we’ll start reproducing Debian unstable soon.

Our i386 hosts are very kindly sponsored by Infomaniak whilst the amd64 node is sponsored by OSUOSL — thank you! Indeed, we are looking for more workers for more Debian architectures; please contact us if you are able to help.


debian-repro-status

Reproducible builds developer kpcyrd has published a client program for reproduce.debian.net (see above) that queries the status of the locally installed packages and rates the system with a percentage score. This tool works analogously to arch-repro-status for the Arch Linux Reproducible Builds setup.

The tool was packaged for Debian and is currently available in Debian trixie: it can be installed with apt install debian-repro-status.


On our mailing list

On our mailing list this month:

  • Bernhard M. Wiedemann wrote a detailed post on his “long journey towards a bit-reproducible Emacs package.” In his interesting message, Bernhard goes into depth about the tools that they used and the lower-level technical details of, for instance, compatibility with the version for glibc within openSUSE.

  • Shivanand Kunijadar posed a question pertaining to the reproducibility issues with encrypted images. Shivanand explains that they must “use a random IV for encryption with AES CBC. The resulting artifact is not reproducible due to the random IV used.” The message resulted in a handful of replies, hopefully helpful!

  • User Danilo posted an interesting question related to their attempts at achieving reproducible builds for Threema Desktop 2.0. The question resulted in a number of replies attempting to find the right combination of compiler and linker flags (for example).

  • Longstanding contributor David A. Wheeler wrote to our list announcing the release of the “Census III of Free and Open Source Software: Application Libraries” report written by Frank Nagle, Kate Powell, Richie Zitomer and David himself. As David writes in his message, the report attempts to “answer the question ‘what is the most popular Free and Open Source Software (FOSS)?’”.

  • Lastly, kpcyrd followed-up to a post from September 2024 which mentioned their desire for “someone” to implement “a hashset of allowed module hashes that is generated during the kernel build and then embedded in the kernel image”, thus enabling a deterministic and reproducible build. However, they are now reporting that “somebody implemented the hash-based allow list feature and submitted it to the Linux kernel mailing list”. Like kpcyrd, we hope it gets merged.


Enhancing the Security of Software Supply Chains: Methods and Practices

Mehdi Keshani of the Delft University of Technology in the Netherlands has published their thesis on “Enhancing the Security of Software Supply Chains: Methods and Practices”. Their introductory summary begins with an outline of software supply chains and the importance of the Maven ecosystem before outlining the issues that it faces “that threaten its security and effectiveness”. To address these:

First, we propose an automated approach for library reproducibility to enhance library security during the deployment phase. We then develop a scalable call graph generation technique to support various use cases, such as method-level vulnerability analysis and change impact analysis, which help mitigate security challenges within the ecosystem. Utilizing the generated call graphs, we explore the impact of libraries on their users. Finally, through empirical research and mining techniques, we investigate the current state of the Maven ecosystem, identify harmful practices, and propose recommendations to address them.

A PDF of Mehdi’s entire thesis is available to download.


diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 283 and 284 to Debian:

  • Update copyright years. []
  • Update tests to support file 5.46. [][]
  • Simplify tests_quines.py::test_{differences,differences_deb} to simply use assert_diff and not mangle the test fixture. []


Supply-chain attack in the Solana ecosystem

A significant supply-chain attack impacted Solana, an ecosystem for decentralised applications running on a blockchain.

Hackers targeted the @solana/web3.js JavaScript library and embedded malicious code that extracted private keys and drained funds from cryptocurrency wallets. According to some reports, about $160,000 worth of assets were stolen, not including SOL tokens and other crypto assets.


Website updates

Similar to last month, there was a large number of changes made to our website this month, including:

  • Chris Lamb:

    • Make the landing page hero look nicer when the vertical height component of the viewport is restricted, not just the horizontal width.
    • Rename the “Buy-in” page to “Why Reproducible Builds?” []
    • Removing the top black border. [][]
  • Holger Levsen:

  • hulkoba:

    • Remove the sidebar-type layout and move to a static navigation element. [][][][]
    • Create and merge a new Success stories page, which “highlights the success stories of Reproducible Builds, showcasing real-world examples of projects shipping with verifiable, reproducible builds. These stories aim to enhance the technical resilience of the initiative by encouraging community involvement and inspiring new contributions.”. []
    • Further changes to the homepage. []
    • Remove the translation icon from the navigation bar. []
    • Remove unused CSS styles pertaining to the sidebar. []
    • Add sponsors to the global footer. []
    • Add extra space on large screens on the Who page. []
    • Hide the side navigation on small screens on the Documentation pages. []


Debian changes

There were a significant number of reproducibility-related changes within Debian this month, including:

  • Santiago Vila uploaded version 0.11+nmu4 of the dh-buildinfo package. In this release, the dh_buildinfo becomes a no-op — ie. it no longer does anything beyond warning the developer that the dh-buildinfo package is now obsolete. In his upload, Santiago wrote that “We still want packages to drop their [dependency] on dh-buildinfo, but now they will immediately benefit from this change after a simple rebuild.”

  • Holger Levsen filed Debian bug #1091550 requesting a rebuild of a number of packages that were built with a “very old version” of dpkg.

  • Fay Stegerman contributed to an extensive thread on the debian-devel development mailing list on the topic of “Supporting alternative zlib implementations”. In particular, Fay wrote about her results experimenting whether zlib-ng produces identical results or not.

  • kpcyrd uploaded a new rust-rebuilderd-worker, rust-derp, rust-in-toto and debian-repro-status to Debian, which passed successfully through the so-called NEW queue.

  • Gioele Barabucci filed a number of bugs against the debrebuild component/script of the devscripts package, including:

    • #1089087: Address a spurious extra subdirectory in the build path.
    • #1089201: Extra zero bytes added to .dynstr when rebuilding CMake projects.
    • #1089088: Some binNMUs have a 1-second offset in some timestamps.
  • Gioele Barabucci also filed a bug against the dh-r package to report that the Recommends and Suggests fields are missing from rebuilt R packages. At the time of writing, this bug has no patch and needs some help to make over 350 binary packages reproducible.

  • Lastly, 8 reviews of Debian packages were added, 11 were updated and 11 were removed this month adding to our knowledge about identified issues.


Other development news

In other ecosystem and distribution news:

  • Lastly, in openSUSE, Bernhard M. Wiedemann published another report for the distribution. There, Bernhard reports about the success of building ‘R-B-OS’, a partial fork of openSUSE with only 100% bit-reproducible packages. This effort was sponsored by the NLNet NGI0 initiative.


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In November, a number of changes were made by Holger Levsen, including:

  • reproduce.debian.net-related:

    • Add a new i386.reproduce.debian.net rebuilder. [][][][][][]
    • Make a number of updates to the documentation. [][][][][]
    • Run i386.reproduce.debian.net on a public port to allow external workers. []
    • Add a link to the /api/v0/pkgs/list endpoint. []
    • Add support for a statistics page. [][][][][][]
    • Limit build logs to 20 MiB and diffoscope output to 10 MiB. []
    • Improve the frontpage. [][]
    • Explain that we’re testing arch:any and arch:all on the amd64 architecture, but only arch:any on i386. []
  • Misc:

    • Remove code for testing Arch Linux, which has moved to reproduce.archlinux.org. [][]
    • Don’t install dstat on Jenkins nodes anymore as it’s been removed from Debian trixie. []
    • Prepare the infom08-i386 node to become another rebuilder. []
    • Add debug date output for benchmarking the reproducible_pool_buildinfos.sh script. []
    • Install installation-birthday everywhere. []
    • Temporarily disable automatic updates of pool links on buildinfos.debian.net. []
    • Install Recommends by default on Jenkins nodes. []
    • Rename rebuilder_stats.py to rebuilderd_stats.py. []
    • r.d.n/stats: minor formatting changes. []
    • Install files under /etc/cron.d/ with the correct permissions. []

… and Jochen Sprickerhof made the following changes:

Lastly, Gioele Barabucci also classified packages affected by 1-second offset issue filed as Debian bug #1089088 [][][][], Chris Hofstaedtler updated the URL for Grml’s dpkg.selections file  [], Roland Clobus updated the Jenkins log parser to parse warnings from diffoscope [] and Mattia Rizzolo banned a number of bots and crawlers from the service [][].


If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

09 January, 2025 12:00PM

January 08, 2025

John Goerzen

Censorship Is Complicated: What Internet History Says about Meta/Facebook

In light of this week’s announcement by Meta (Facebook, Instagram, Threads, etc), I have been pondering this question: Why am I, a person that has long been a staunch advocate of free speech and encryption, leery of sites that talk about being free speech-oriented? And, more to the point, why am I — a person that has been censored by Facebook for mentioning the Open Source social network Mastodon — not cheering a “lighter touch”?

The answers are complicated, and take me back to the early days of social networking. Yes, I mean the 1980s and 1990s.

Before digital communications, there were barriers to reaching a lot of people. Especially money. This led to a sort of self-censorship: it may be legal to write certain things, but would a newspaper publish a letter to the editor containing expletives? Probably not.

As digital communications started to happen, suddenly people could have their own communities. Not just free from the same kinds of monetary pressures, but free from outside oversight (parents, teachers, peers, community, etc.) When you have a community that the majority of people lack the equipment to access — and wouldn’t understand how to access even if they had the equipment — you have a place where self-expression can be unleashed.

And, as J. C. Herz covers in what is now an unintentional history (her book Surfing on the Internet was published in 1995), self-expression WAS unleashed. She enjoyed the wit and expression of everything from odd corners of Usenet to the text-based open world of MOOs and MUDs. She even talks about groups dedicated to insults (flaming) in positive terms.

But as I’ve seen time and again, if there are absolutely no rules, then whenever a group gets big enough — more than a few dozen people, say — there are troublemakers that ruin it for everyone. Maybe it’s trolling, maybe it’s vicious attacks, you name it — it will arrive and it will be poisonous.

I remember the debates within the Debian community about this. Debian is one of the pillars of the Internet today, a nonprofit project with free speech in its DNA. And yet there were inevitably the poisonous people. Debian took too long to learn that allowing those people to run rampant was causing more harm than good, because having a well-worn Delete key and a tolerance for insults became a requirement for being a Debian developer, and that drove away people that had no desire to deal with such things. (I should note that Debian strikes a much better balance today.)

But in reality, there were never absolutely no rules. If you joined a BBS, you used it at the whim of the owner (the “sysop” or system operator). The sysop may be a 16-yr-old running it from their bedroom, or a retired programmer, but in any case they were letting you use their resources for free and they could kick you off for any or no reason at all. So if you caused trouble, or perhaps insulted their cat, you’re banned. But, in all but the smallest towns, there were other options you could try.

On the other hand, sysops enjoyed having people call their BBSs and didn’t want to drive everyone off, so there was a natural balance at play. As networks like Fidonet developed, a sort of uneasy approach kicked in: don’t be excessively annoying, and don’t be easily annoyed. Like it or not, it seemed to generally work. A BBS that repeatedly failed to deal with troublemakers could risk removal from Fidonet.

On the more institutional Usenet, you generally got access through your university (or, in a few cases, employer). Most universities didn’t really even know they were running a Usenet server, and you were generally left alone. Until you did something that annoyed somebody enough that they tracked down the phone number for your dean, in which case real-world consequences would kick in. A site may face the Usenet Death Penalty — delinking from the network — if they repeatedly failed to prevent malicious content from flowing through their site.

Some BBSs let people from minority communities such as LGBTQ+ thrive in a place of peace from tormentors. A lot of them let people be themselves in a way they couldn’t be “in real life”. And yes, some harbored trolls and flamers.

The point I am trying to make here is that each BBS, or Usenet site, set their own policies about what their own users could do. These had to be harmonized to a certain extent with the global community, but in a certain sense, with BBSs especially, you could just use a different one if you didn’t like what the vibe was at a certain place.

That this free speech ethos survived was never inevitable. There were many attempts to regulate the Internet, and it was thanks to the advocacy of groups like the EFF that we have things like strong encryption and a degree of freedom online.

With the rise of the very large platforms — and here I mean CompuServe and AOL at first, and then Facebook, Twitter, and the like later — the low-friction option of just choosing a different place started to decline. You could participate on a Fidonet forum from any of thousands of BBSs, but you could only participate in an AOL forum from AOL. The same goes for Facebook, Twitter, and so forth. Not only that, but as social media became conceived of as very large sites, it became impossible for a person with enough skill, funds, and time to just start a site themselves. Instead of needing a few thousand dollars of equipment, you’d need tens or hundreds of millions of dollars of equipment and employees.

All that means you can’t really run Facebook as a nonprofit. It is a business. It should be absolutely clear to everyone that Facebook’s mission is not the one they say it is — “[to] give people the power to build community and bring the world closer together.” If that was their goal, they wouldn’t be creating AI users and AI spam and all the rest. Zuck isn’t showing courage; he’s sucking up to Trump and those that will pay the price are those that always do: women and minorities.

Really, the point of any large social network isn’t to build community. It’s to make the owners their next billion. They do that by convincing people to look at ads on their site. Zuck is as much a windsock as anyone else; he will adjust policies in whichever direction he thinks the wind is blowing so as to let him keep putting ads in front of eyeballs, and stomp all over principles — even free speech — doing it. Don’t expect anything different from any large commercial social network either. Bluesky is going to follow the same trajectory as all the others.

The problem with a one-size-fits-all content policy is that the world isn’t that kind of place. For instance, I am a pacifist. There is a place for a group where pacifists can hang out with each other, free from the noise of the debate about pacifism. And there is a place for the debate. Forcing everyone that signs up for the conversation to sign up for the debate is harmful. Preventing the debate is often also harmful. One company can’t square this circle.

Beyond that, the fact that we care so much about one company is a problem on two levels. First, it indicates how susceptible people are to misinformation and such. I don’t have much to offer on that point. Secondly, it indicates that we are too centralized.

We have a solution there: Mastodon. Mastodon is a modern, open source, decentralized social network. You can join any instance, easily migrate your account from one server to another, and so forth. You pick an instance that suits you. There are thousands of others you can choose from. Some aggressively defederate with instances known to harbor poisonous people; some don’t.

And, to harken back to the BBS era, if you have some time, some skill, and a few bucks, you can run your own Mastodon instance.

Personally, I still visit Facebook on occasion because some people I care about are mainly there. But it is such a terrible experience that I rarely do. Meta is becoming irrelevant to me. They are on a path to becoming irrelevant to many more as well. Maybe this is the moment to go “shrug, this sucks” and try something better.

(And when you do, feel free to say hi to me at @jgoerzen@floss.social on Mastodon.)

08 January, 2025 02:59PM by John Goerzen

January 07, 2025

Jonathan Wiltshire

Using TPM for Automatic Disk Decryption in Debian 12

These days it’s straightforward to have reasonably secure, automatic decryption of your root filesystem at boot time on Debian 12. Here’s how I did it on an existing system which already had a stock kernel, secure boot enabled, grub2 and an encrypted root filesystem with the passphrase in key slot 0.

There’s no need to switch to systemd-boot for this setup but you will use systemd-cryptenroll to manage the TPM-sealed key. If that offends you, there are other ways of doing this.

Caveat

The parameters I’ll seal a key against in the TPM include a hash of the initial ramdisk. This is essential to prevent an attacker from swapping the image for one which discloses the key. However, it also means the key has to be re-sealed every time the image is rebuilt. This can be frequent, for example when installing/upgrading/removing packages which include a kernel module. You won’t get locked out (as long as you still have a passphrase in another slot), but will need to re-seal the key to restore the automation.

You can also choose not to include this parameter for the seal, but that opens the door to such an attack.

Caution: these are the steps I took on my own system. You may need to adjust them to avoid ending up with a non-booting system.

Check for a usable TPM device

We’ll bind the secure boot state, kernel parameters, and other boot measurements to a decryption key. Then, we’ll seal it using the TPM. This prevents the disk being moved to another system, the boot chain being tampered with and various other attacks.

# apt install tpm2-tools
# systemd-cryptenroll --tpm2-device list
PATH        DEVICE     DRIVER 
/dev/tpmrm0 STM0125:00 tpm_tis

Clean up older kernels including leftover configurations

I found that previously-removed (but not purged) kernel packages sometimes cause dracut to try installing files to the wrong paths. Identify them with:

# apt install aptitude
# aptitude search '~c'

Change search to purge, or be more selective; this part is left as an exercise for the reader.
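
If the list looks sane, one way to clean it all up in one go (this is my own shortcut, not part of the original steps; double-check the output of the search first) is to purge everything aptitude still knows about in the removed-but-not-purged state:

# aptitude purge '~c'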

Switch to dracut for initramfs images

Unless you have a particular requirement for the default initramfs-tools, replace it with dracut and customise:

# mkdir /etc/dracut.conf.d
# echo 'add_dracutmodules+=" tpm2-tss crypt "' > /etc/dracut.conf.d/crypt.conf
# apt install dracut

Remove root device from crypttab, configure grub

Remove (or comment) the root device from /etc/crypttab and rebuild the initial ramdisk with dracut -f.
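
For illustration only, the crypttab entry below is invented to match the device names used later in this post; the exact line on your system will differ. The idea is simply to comment it out and rebuild:

/etc/crypttab (root entry commented out):

#luks-deff56a9-8f00-4337-b34a-0dcda772e326 UUID=deff56a9-8f00-4337-b34a-0dcda772e326 none luks,discard

# dracut -f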

Edit /etc/default/grub and add 'rd.auto rd.luks=1' to GRUB_CMDLINE_LINUX. Re-generate the config with update-grub.
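
With no other customizations, the relevant line in /etc/default/grub would end up looking something like this (append to whatever parameters are already there on your system):

GRUB_CMDLINE_LINUX="rd.auto rd.luks=1"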

At this point it’s a good idea to sanity-check the initrd contents with lsinitrd. Then, reboot using the new image to ensure there are no issues. This will also have up-to-date TPM measurements ready for the next step.
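
For example, one quick check (my own suggestion, not a required step) is to confirm that the crypt and TPM dracut modules actually made it into the image:

# lsinitrd -m | grep -E 'tpm2-tss|crypt'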

Identify device and seal a decryption key

# lsblk -ip -o NAME,TYPE,MOUNTPOINTS
NAME                                                    TYPE  MOUNTPOINTS
/dev/nvme0n1p4                                          part  /boot
/dev/nvme0n1p5                                          part  
`-/dev/mapper/luks-deff56a9-8f00-4337-b34a-0dcda772e326 crypt 
  |-/dev/mapper/lv-var                                  lvm   /var
  |-/dev/mapper/lv-root                                 lvm   /
  `-/dev/mapper/lv-home                                 lvm   /home

In this example my root filesystem is in a container on /dev/nvme0n1p5. The existing passphrase key is in slot 0.

# systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7+8+9+14 /dev/nvme0n1p5
Please enter current passphrase for disk /dev/nvme0n1p5: ********
New TPM2 token enrolled as key slot 1.

The PCRs I chose (7, 8, 9 and 14) correspond to the secure boot policy, kernel command line (to prevent init=/bin/bash-style attacks), files read by grub including that crucial initrd measurement, and secure boot MOK certificates and hashes. You could also include PCR 5 for the partition table state, and any others appropriate for your setup.
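
If you are curious about the current values of those PCRs before sealing (not required for the procedure), tpm2-tools can display them:

# tpm2_pcrread sha256:7,8,9,14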

Reboot

You should now be able to reboot and the root device will be unlocked automatically, provided the secure boot measurements remain consistent.

The key slot protected by a passphrase (mine is slot 0) is now your recovery key. Do not remove it!
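
When a rebuilt initrd changes the measurements and the TPM stops unlocking the disk (see the caveat above), re-sealing is a matter of wiping the TPM2 slot and enrolling again. A rough sketch, assuming the same device and PCRs as above (you will be prompted for the recovery passphrase):

# systemd-cryptenroll --wipe-slot=tpm2 --tpm2-device=auto --tpm2-pcrs=7+8+9+14 /dev/nvme0n1p5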


Please consider supporting my work in Debian and elsewhere through Liberapay.

07 January, 2025 11:03PM by Jonathan

Thorsten Alteholz

My Debian Activities in December 2024

Debian LTS

This was the hundred-and-twenty-sixth month in which I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

I worked on updates for ffmpeg and haproxy in all releases. Along the way I marked more CVEs as not-affected than I had to fix, so in the end no haproxy upload was needed at all. Unfortunately testing ffmpeg was not as easy; the recommended approach of “just check whether mpv can play random videos” is not really satisfying. So that upload will only happen in January.

I also wonder whether fixing glewlwyd is really worth the effort, as the software is already EOL upstream.

Debian ELTS

This month was the seventy-seventh ELTS month. During my allocated time I worked on ffmpeg, haproxy, amanda and kmail-account-wizard.

As with LTS, all CVEs of haproxy and some of ffmpeg could be marked as not-affected, and testing the other packages was/is not really straightforward. So the final uploads will only happen in January as well.

Debian Printing

Unfortunately I didn’t find any time to work on this topic.

Debian Matomo

Thanks a lot to William Desportes for all fixes of my bad PHP packaging.

Debian Astro

This month I uploaded new packages or new upstream or bugfix versions of:

I again sponsored an upload of calceph.

Debian IoT

This month I uploaded new upstream or bugfix versions of:

Debian Mobcom

This month I uploaded new packages or new upstream or bugfix versions of:

misc

This month I uploaded new upstream or bugfix versions of:

I also sponsored uploads of emacs-lsp-docker, emacs-dape, emacs-oauth2, gpgmngr, libjs-jush.

FTP master

This month I accepted 330 and rejected 13 packages. The overall number of packages that got accepted was 335.

07 January, 2025 12:29PM by alteholz

Enrico Zini

Debugging printing to a remote printer

I upgraded to Debian testing/trixie, and my network printer stopped appearing in print dialogs. These are notes from the debugging session.

Check firewall configuration

I tried out KDE, which installed plasma-firewall, which installed firewalld, which by default closes the ports used for printing.

For extra fun, appindicators are not working in Gnome, so firewall-applet is currently useless; one can however run firewall-config manually, or use the command line, which might be more user-friendly than the UI.

Step 1: change the zone for the home wifi to "Home":

firewall-cmd  --zone home --list-interfaces
firewall-cmd  --zone home --add-interface wlp1s0

Step 2: make sure the home zone can print:

firewall-cmd --zone home --list-services
firewall-cmd --zone home --add-service=ipp
firewall-cmd --zone home --add-service=ipp-client
firewall-cmd --zone home --add-service=mdns

I searched and searched but I could not find out whether ipp is needed, ipp-client is needed, or both are needed.
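
One way to narrow it down (a guess at a procedure, not something I actually ran) would be to remove one service at a time and retry printing; and, as far as I understand firewalld, the runtime changes above also need to be made permanent to survive a reload:

firewall-cmd --zone home --remove-service=ipp-client
# retry printing; re-add the service if discovery breaks
firewall-cmd --zone home --add-service=ipp-client
# persist whatever runtime configuration ended up working
firewall-cmd --runtime-to-permanent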

Check if avahi can see the printer

Is the printer advertised correctly over mdns?

When it didn't work:

$ avahi-browse -avrt
= wlp1s0 IPv6 Brother HL-2030 series @ server                UNIX Printer         local
   hostname = [server.local]
   address = [...ipv6 address...]
   port = [0]
   txt = []
= wlp1s0 IPv4 Brother HL-2030 series @ server                UNIX Printer         local
   hostname = [server.local]
   address = [...ipv4 address...]
   port = [0]
   txt = []

$ avahi-browse -rt _ipp._tcp
[empty]

When it works:

$ avahi-browse -avrt
= wlp1s0 IPv6 Brother HL-2030 series @ server                Secure Internet Printer local
   hostname = [server.local]
   address = [...ipv6 address...]
   port = [631]
   txt = ["printer-type=0x1046" "printer-state=3" "Copies=T" "TLS=1.2" "UUID=…" "URF=DM3" "pdl=application/octet-stream,application/pdf,application/postscript,image/jpeg,image/png,image/pwg-raster,image/urf" "product=(HL-2030 series)" "priority=0" "note=" "adminurl=https://server.local.:631/printers/Brother_HL-2030_series" "ty=Brother HL-2030 series, using brlaser v6" "rp=printers/Brother_HL-2030_series" "qtotal=1" "txtvers=1"]
= wlp1s0 IPv6 Brother HL-2030 series @ server                UNIX Printer         local
   hostname = [server.local]
   address = [...ipv6 address...]
   port = [0]
   txt = []
= wlp1s0 IPv4 Brother HL-2030 series @ server                Secure Internet Printer local
   hostname = [server.local]
   address = [...ipv4 address...]
   port = [631]
   txt = ["printer-type=0x1046" "printer-state=3" "Copies=T" "TLS=1.2" "UUID=…" "URF=DM3" "pdl=application/octet-stream,application/pdf,application/postscript,image/jpeg,image/png,image/pwg-raster,image/urf" "product=(HL-2030 series)" "priority=0" "note=" "adminurl=https://server.local.:631/printers/Brother_HL-2030_series" "ty=Brother HL-2030 series, using brlaser v6" "rp=printers/Brother_HL-2030_series" "qtotal=1" "txtvers=1"]
= wlp1s0 IPv4 Brother HL-2030 series @ server                UNIX Printer         local
   hostname = [server.local]
   address = [...ipv4 address...]
   port = [0]
   txt = []

$ avahi-browse -rt _ipp._tcp
+ wlp1s0 IPv6 Brother HL-2030 series @ server                Internet Printer     local
+ wlp1s0 IPv4 Brother HL-2030 series @ server                Internet Printer     local
= wlp1s0 IPv4 Brother HL-2030 series @ server                Internet Printer     local
   hostname = [server.local]
   address = [...ipv4 address...]
   port = [631]
   txt = ["printer-type=0x1046" "printer-state=3" "Copies=T" "TLS=1.2" "UUID=…" "URF=DM3" "pdl=application/octet-stream,application/pdf,application/postscript,image/jpeg,image/png,image/pwg-raster,image/urf" "product=(HL-2030 series)" "priority=0" "note=" "adminurl=https://server.local.:631/printers/Brother_HL-2030_series" "ty=Brother HL-2030 series, using brlaser v6" "rp=printers/Brother_HL-2030_series" "qtotal=1" "txtvers=1"]
= wlp1s0 IPv6 Brother HL-2030 series @ server                Internet Printer     local
   hostname = [server.local]
   address = [...ipv6 address...]
   port = [631]
   txt = ["printer-type=0x1046" "printer-state=3" "Copies=T" "TLS=1.2" "UUID=…" "URF=DM3" "pdl=application/octet-stream,application/pdf,application/postscript,image/jpeg,image/png,image/pwg-raster,image/urf" "product=(HL-2030 series)" "priority=0" "note=" "adminurl=https://server.local.:631/printers/Brother_HL-2030_series" "ty=Brother HL-2030 series, using brlaser v6" "rp=printers/Brother_HL-2030_series" "qtotal=1" "txtvers=1"]

Check if cups can see the printer

From CUPS' Using Network Printers:

$ /usr/sbin/lpinfo --include-schemes dnssd -v

network dnssd://Brother%20HL-2030%20series%20%40%20server._ipp._tcp.local/cups?uuid=
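
Another quick check (an assumption on my part, not something from the original session) is to ask CUPS directly which destinations it can see, including those discovered over mDNS:

$ lpstat -e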

Debugging session interrupted

At this point, the printer appeared.

It could be that:

In the end, debugging failed successfully, and this log now remains as a reference for possible further issues.

07 January, 2025 11:40AM