
November 20, 2024

Ian Jackson

The Rust Foundation's 2nd bad draft trademark policy

tl;dr: The Rust Foundation’s new trademark policy still forbids unapproved modifications: this would forbid both the Rust Community’s own development work(!) and normal Free Software distribution practices.

Background

In April 2023 I wrote about the Rust Foundation’s ham-fisted and misguided attempts to update the Rust trademark policy. This turned into drama.

The new draft

Recently, the Foundation published a new draft. It’s considerably less bad, but the most serious problem, which I identified last year, remains.

It prevents redistribution of modified versions of Rust without pre-approval from the Rust Foundation (subject to some limited exceptions). The people who wrote this evidently haven’t realised that distributing modified versions is how free software development works. I.e., the draft Rust trademark policy even forbids making a GitHub branch for an MR to contribute to Rust!

It’s also very likely unacceptable to Debian. Rust is still on track to repeat the Firefox/Iceweasel debacle.

Below is a copy of my formal response to the consultation. The consultation closes at 07:59:00 UTC tomorrow (21st November), ie, at the end of today (Wednesday) US Pacific time, so if you want to reply, do so quickly.

My consultation response

Hi. My name is Ian Jackson. I write as a Rust contributor and as a Debian Developer with first-hand experience of Debian’s approach to trademarks. (But I am not a member of the Debian Rust Packaging Team.)

Your form invites me to state any blocking concerns. I’m afraid I have one:

PROBLEM

The policy on distributing modified versions of Rust (page 4, 8th bullet) is far too restrictive.

PROBLEM - ASPECT 1

On its face the policy forbids making a clone of the Rust repositories on a git forge, and pushing a modified branch there. That is publicly distributing a modified version of Rust.

I.e., the current policy forbids the Rust community’s own development workflow!

PROBLEM - ASPECT 2

The policy also does not meet the needs of Software-Freedom-respecting downstreams, including community Linux distributions such as Debian.

There are two scenarios (fuzzy, and overlapping) which provide a convenient framing to discuss this:

Firstly, in practical terms, Debian may need to backport bugfixes, or sometimes other changes. Sometimes Debian will want to pre-apply bugfixes or changes that have been contributed by users, and are intended eventually to go upstream, but are not included upstream in official Rust yet. This is a routine activity for a distribution. The policy, however, forbids it.

Secondly, Debian, as a point of principle, requires the ability to diverge from upstream if and when Debian decides that this is the right choice for Debian’s users. The freedom to modify is a key principle of Free Software. This includes making changes that the upstream project disapproves of. Examples where Debian has made changes that upstream did not approve of include removing user-tracking code, and disabling obsolescence “timebombs” that stop a particular version working after a certain date.

Overall, while alignment in values between Debian and Rust seems to be very good right now, modifiability is a matter of non-negotiable principle for Debian. The 8th bullet point on page 4 of the PDF does not give Debian (and Debian’s users) these freedoms.

POSSIBLE SOLUTIONS

Other formulations, or an additional permission, seem like they would be able to meet the needs of both Debian and Rust.

The first thing to recognise is that forbidding modified versions is probably not necessary to prevent language ecosystem fragmentation. Many other programming languages are distributed under fully Free Software licences without such restrictive trademark policies. (For example, Python; I’m sure a thorough survey would find many others.)

The scenario that would be most worrying for Rust would be “embrace - extend - extinguish”. In projects with a copyleft licence, this is not a concern, but Rust is permissively licenced. However, one way to address this would be to add a permission allowing distribution of modified versions without prior approval, provided that the modified source code is also made available under the original Rust licence.

I suggest therefore adding the following 2nd sub-bullet point to the 8th bullet on page 4:

  • changes which are shared, in source code form, with all recipients of the modified software, and publicly licenced under the same licence as the official materials.

This means that downstreams who fear copyleft have the option of taking Rust’s permissive copyright licence at face value, but are limited in the modifications they may make, unless they rename. Conversely downstreams such as Debian who wish to operate as part of the Free Software ecosystem can freely make modifications.

It also, obviously, covers the Rust Community’s own development work.

NON-SOLUTIONS

Some upstreams, faced with this problem, have offered Debian a special permission: i.e., said that it would be OK for Debian to make whatever modifications Debian wants. But Debian will not accept any Debian-specific permissions.

Debian could of course rename their Rust compiler. Debian has chosen to rename in the past: infamously, a similar policy by Mozilla resulted in Debian distributing Firefox under the name Iceweasel for many years. This is a PR problem for everyone involved, and results in a good deal of technical inconvenience and makework.

“Debian could seek approval for changes, and the Rust Foundation would grant that approval quickly”. This is unworkable on a practical level - requests for permission do not fit into Debian’s workflow, and the resulting delays would be unacceptable. But, more fundamentally, Debian rightly insists that it must have the freedom to make changes that the Foundation do not approve of. (For example, if a future Rust shipped with telemetry features Debian objected to.)

“Debian and Rust could compromise”. However, Debian is an ideological as well as technological project. The principles I have set out are part of Debian’s Foundation Documents - they are core values for Debian. When Debian makes compromises, it does so very slowly and with great deliberation, using its slowest and most heavyweight constitutional governance processes. Debian is not likely to want to engage in such a process for the benefit of one programming language.

“Users will get Rust from upstream”. This is currently often the case. Right now, Rust is moving very quickly, and by Debian standards is very new. As Rust becomes more widely used, more stable, and more part of the infrastructure of the software world, it will need to become part of standard, stable, reliable, software distributions. That means Debian.

(The consultation was a Google Forms page with a single text field, so the formatting isn’t great. I have edited the formatting very lightly to avoid rendering bugs here on my blog.)


20 November, 2024 12:50PM

Russell Coker

Solving Spam and Phishing for Corporations

Centralisation and Corporations

An advantage of a medium to large company is that it permits specialisation. For example, I’m currently working in the IT department of a medium sized company, and because we have standardised hardware (Dell Latitude and Precision laptops, Dell Precision Tower workstations, and Dell PowerEdge servers) and I am involved in fixing all Linux compatibility issues on it, I can fix most problems in a small fraction of the time it would take me on a random computer. There is scope for a lot of debate about the extent to which companies should standardise and centralise things. But for computer problems, which can escalate quickly from minor to serious if not approached in the correct manner, it’s clear that a good deal of centralisation is appropriate.

Among people doing technical computer work such as programming, a large portion of the employees are computer hobbyists who like to fiddle with computers. But if the support system is run well, even they will appreciate having computers just work most of the time, and having someone immediately recognise a large portion of the failures, like the issues with NVidia drivers that I have documented, so that first line support can implement workarounds without the need for a lengthy investigation.

A big problem with email in the modern Internet is the prevalence of Phishing scams. The current corporate approach to this is to send out test Phishing email to people and then force computer security training on everyone who clicks on them. One problem with this is that attackers only need to fool one person on one occasion, and when you have hundreds of people doing something that’s not part of their core work on rare occasions, they will periodically get it wrong. When every test Phishing run finds several people who need extra training, it seems obvious to me that this isn’t a solution that’s working well. I will concede that the majority of people who click on the test Phishing email would probably realise their mistake if asked to enter the password for the corporate email system, but I think it’s still clear that this isn’t a great solution.

Let’s imagine for the sake of discussion that everyone in a company was 100% accurate at identifying Phishing email and other scam email, if that was the case would the problem be solved? I believe that even in that hypothetical case it would not be a solved problem due to the wasted time and concentration. People can spend minutes determining if a single email is legitimate. On many occasions I have had relatives and clients forward me email because they are unsure if it’s valid, it’s great that they seek expert advice when they are unsure about things but it would be better if they didn’t have to go to that effort. What we ideally want to do is centralise the anti-Phishing and anti-spam work to a small group of people who are actually good at it and who can recognise patterns by seeing larger quantities of spam. When a spam or Phishing message is sent to 600 people in a company you don’t want 600 people to individually consider it, you want one person to recognise it and delete/block all 600. If 600 people each spend one minute considering the matter then that’s 10 work hours wasted!

The Rationale for Human Filtering

For personal email human filtering usually isn’t viable because people want privacy. But corporate email isn’t private: it’s expected that the company can read it under certain circumstances (in most jurisdictions), and it’s normal to have email open in public areas of the office where colleagues might see it. You can visit gmail.com on your lunch break to read personal email, but every company policy (and common sense) says to not have actually private correspondence on company systems.

The amount of time spent by reception staff in sorting out such email would be less than that taken by individuals. When someone sends a spam to everyone in the company instead of 500 people each spending a couple of minutes working out whether it’s legit you have one person who’s good at recognising spam (because it’s their job) who clicks on a “remove mail from this sender from all mailboxes” button and 500 messages are deleted and the sender is blocked.

Delaying email would be a concern. It’s standard practice for CEOs (and C*Os at larger companies) to have a PA receive their email and forward the ones that need their attention, so human vetting of email can work without unreasonable delays. If we had someone checking all email for the entire company, email to the senior people would probably never get noticeably delayed, and while people like me would get their mail delayed on occasion, people doing technical work generally don’t have notifications turned on for email because it’s a distraction and a fast response isn’t needed. There are a few senders where a fast response is required, mostly corporations sending a “click this link within 10 minutes to confirm your password change” email. Setting up rules for all such senders that are relevant to work wouldn’t be difficult.

How to Solve This

Spam and Phishing became serious problems over 20 years ago and we have had 20 years of evolution of email filtering which still hasn’t solved the problem. The vast majority of email addresses in use are run by major managed service providers and they haven’t managed to filter out spam/phishing mail effectively, so I think we should assume that it’s not going to be solved by filtering. There is talk about what “AI” technology might do for filtering spam/phishing, but that same technology can produce better-crafted hostile email to avoid filters.

An additional complication for corporate email filtering is that some criteria that are used to filter personal email don’t apply to corporate mail. If someone sends email to me personally about millions of dollars then it’s obviously not legit. If someone sends email to a company then it could be legit. Companies routinely have people emailing potential clients about how their products can save millions of dollars and make purchases over a million dollars. This is not a problem that’s impossible to solve, it’s just an extra difficulty that reduces the efficiency of filters.

It seems to me that the best solution to the problem involves having all mail filtered by a human. A company could configure their mail server to not accept direct external mail for any employee’s address. Then people could email files to colleagues etc without any restriction, but spam and phishing wouldn’t be a problem. The issue is how to manage inbound mail. One possibility is to have addresses of the form it+russell.coker@example.com (for me as an employee in the IT department), and you would have a team of people who would read those mailboxes and forward mail to the right people if it seemed legit. Having addresses like it+russell.coker means that all mail to the IT department would be received into folders of the same account, so it could be filtered by someone with a suitable security level without requiring any special configuration of the mail server. So the person who reads the it mailbox would have a folder named russell.coker receiving mail addressed to me. The system could be configured to automate the processing of mail from known good addresses (and even domains), so they could just put in a rule saying that when Dell sends DMARC authenticated mail to it+$USER it gets immediately directed to $USER. This is the sort of thing that can be automated in the email client (mail filtering is becoming a common feature in MUAs).
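As an illustration of the routing just described, here is a minimal sketch (my own, with hypothetical addresses and an assumed allow list, not a description of any real deployment) of splitting department+user addresses and giving DMARC-authenticated known-good senders a fast path past the human filtering team:

```python
# Sketch: route inbound mail addressed to department+user style addresses
# (e.g. it+russell.coker@example.com) either straight to the employee, when
# the sender is allow-listed and DMARC-authenticated, or into the
# department's screening mailbox for human review.

from email.message import EmailMessage

# Hypothetical allow list: senders whose authenticated mail skips review.
ALLOWED_SENDERS = {"noreply@dell.com"}

def route(msg: EmailMessage) -> str:
    """Return the folder a message should be delivered into."""
    local, _, _domain = msg["To"].partition("@")
    dept, _, user = local.partition("+")
    authenticated = msg.get("Authentication-Results", "").find("dmarc=pass") != -1
    if msg["From"] in ALLOWED_SENDERS and authenticated:
        return f"{user}/INBOX"          # deliver straight to the employee
    return f"{dept}/screening/{user}"   # hold for the human filtering team

msg = EmailMessage()
msg["To"] = "it+russell.coker@example.com"
msg["From"] = "noreply@dell.com"
msg["Authentication-Results"] = "mx.example.com; dmarc=pass"
print(route(msg))  # → russell.coker/INBOX
```

A real deployment would express this as mail server or MUA filter rules rather than standalone Python, but the routing logic would be the same.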

For a FOSS implementation of such things, the server side (including extracting account data from a directory to determine which department a user is in) would be about a day’s work. Then an option would be to modify a webmail program to have extra functionality for approving senders and sending change requests to the server to automatically direct future mail from the same sender. As an aside, I have previously worked on a project that had a modified version of the Horde webmail system to do this sort of thing for challenge-response email, adding certain automated messages to the allow-list.

The Change

One of the first things to do is to configure the system to add every recipient of an outbound message to the allow list for receiving a reply. Having a script go through the sent-mail folders of all accounts and add the recipients to the allow lists would be easy and would catch the common cases.
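A rough sketch of that harvesting script (my own illustration with made-up messages, not a drop-in tool; a real version would read each account's Maildir or mbox folders via Python's mailbox module and feed the result to the mail server's allow-list mechanism):

```python
# Sketch: harvest an allow list from sent mail. Every address we have
# previously written to becomes an approved sender for replies.

import email
from email.utils import getaddresses

def harvest_allow_list(raw_messages):
    """Collect recipient addresses from raw RFC 2822 sent messages."""
    allowed = set()
    for raw in raw_messages:
        msg = email.message_from_string(raw)
        # getaddresses handles "Name <addr>" forms and comma-separated lists.
        pairs = getaddresses(msg.get_all("To", []) + msg.get_all("Cc", []))
        allowed.update(addr.lower() for _, addr in pairs if addr)
    return allowed

sent = [
    "To: Alice <alice@example.net>, bob@example.org\nSubject: hi\n\nbody",
    "To: carol@example.com\nCc: bob@example.org\nSubject: re\n\nbody",
]
print(sorted(harvest_allow_list(sent)))
# → ['alice@example.net', 'bob@example.org', 'carol@example.com']
```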

But even with processing the sent mail folders, going from a working system without such things to a system like this will take some time for the initial work of adding addresses to the allow lists, particularly for domain wide additions of all the sites that send password confirmation messages. You would need rules to redirect inbound mail addressed to the old addresses into the new scheme, and then a huge amount of mail would need to be categorised. If you have 600 employees and the average amount of time taken on the first day is 10 minutes per user then that’s 100 hours of work, about 12 work days. If you had everyone from the IT department, reception, and executive assistants working on it that would be viable. After about a week there wouldn’t be much work involved in maintaining it. Then after that it would be a net win for the company.

The Benefits

If the average employee spends one minute a day dealing with spam and phishing email then with 600 employees that’s 10 hours of wasted time per day, effectively wasting one employee’s work! I’m sure that’s the low end of the range; 5 minutes average per day doesn’t seem unreasonable, especially when people are unsure about phishing email and send it to Slack so multiple employees spend time analysing it. So hostile email could be wasting the equivalent of 5 employees, while avoiding that would take only a fraction of the time of a few people, adding up to less than an hour of total work per day.
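The estimates above can be checked with some quick arithmetic:

```python
# Back-of-the-envelope check of the wasted-time estimates.
EMPLOYEES = 600

def wasted_hours_per_day(minutes_per_employee):
    """Company-wide hours lost per day to handling hostile email."""
    return EMPLOYEES * minutes_per_employee / 60

print(wasted_hours_per_day(1))  # → 10.0 (roughly one 8-hour workday, plus)
print(wasted_hours_per_day(5))  # → 50.0 (several employees' worth of time)
```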

Then there’s the training time for phishing mail. Instead of having every employee spend half an hour doing email security training every few months (that’s 300 hours or 7.5 working weeks every time you do it) you just train the few experts.

In addition to saving time there are significant security benefits to having experts deal with possibly hostile email. Someone who deals with a lot of phishing email is much less likely to be tricked.

Will They Do It?

They probably won’t do it any time soon. I don’t think it’s expensive enough for companies yet. Maybe government agencies already have equivalent measures in place, but for regular corporations it’s probably regarded as too difficult to change anything and the costs aren’t obvious. For 30 years I have been unsuccessful in suggesting that managers spend slightly more on computer hardware to save significant amounts of worker time.

20 November, 2024 05:22AM by etbe

Arnaud Rebillout

Installing an older Ansible version via pipx

Latest Ansible requires Python 3.8 on the remote hosts

... and therefore, hosts running Debian Buster are now unsupported.

Monday, I updated the system on my laptop (Debian Sid), and I got the latest version of ansible-core, 2.18:

$ ansible --version | head -1
ansible [core 2.18.0]

To my surprise, Ansible started to fail with some remote hosts:

ansible-core requires a minimum of Python version 3.8. Current version: 3.7.3 (default, Mar 23 2024, 16:12:05) [GCC 8.3.0]

Yep, I do have to work with hosts running Debian Buster (a.k.a. oldoldstable). While Buster is old, it's still out there, and it's still supported via Freexian’s Extended LTS.

How are we going to keep managing those machines? Obviously, we'll need an older version of Ansible.

Pipx to the rescue

TL;DR

pipx install --include-deps ansible==10.6.0
pipx inject ansible dnspython    # for community.general.dig

Installing Ansible via pipx

Lately I discovered pipx and it's incredibly simple, so I thought I'd give it a try for this use-case.

Reminder: pipx allows users to install Python applications in isolated environments. In other words, it doesn't make a mess with your system like pip does, and it doesn't require you to learn how to set up Python virtual environments by yourself. It doesn't ask for root privileges either, as it installs everything under ~/.local/.

First thing to know: pipx install ansible won't cut it, as it doesn't install the whole Ansible suite. Instead we need to use the --include-deps flag in order to install all the Ansible commands.

The output should look something like this:

$ pipx install --include-deps ansible==10.6.0
  installed package ansible 10.6.0, installed using Python 3.12.7
  These apps are now globally available
    - ansible
    - ansible-community
    - ansible-config
    - ansible-connection
    - ansible-console
    - ansible-doc
    - ansible-galaxy
    - ansible-inventory
    - ansible-playbook
    - ansible-pull
    - ansible-test
    - ansible-vault
done! ✨ 🌟 ✨

Note: at the moment 10.6.0 is the latest release of the 10.x branch, but make sure to check https://pypi.org/project/ansible/#history and install whatever is the latest on this branch. The 11.x branch doesn't work for us, as it's the branch that comes with ansible-core 2.18, and we don't want that.

Next: do NOT run pipx ensurepath, even though pipx might suggest that. This is not needed. Instead, check your ~/.profile, it should contain these lines:

# set PATH so it includes user's private bin if it exists
if [ -d "$HOME/.local/bin" ] ; then
    PATH="$HOME/.local/bin:$PATH"
fi

Meaning: ~/.local/bin/ should already be in your path, unless it's the first time you installed a program via pipx and the directory ~/.local/bin/ was just created. If that's the case, you have to log out and log back in.

Now, let's open a new terminal and check if we're good:

$ which ansible
/home/me/.local/bin/ansible

$ ansible --version | head -1
ansible [core 2.17.6]

Yep! And that's working already, I can use Ansible with Buster hosts again.

What's cool is that we can run ansible to use this specific Ansible version, but we can also run /usr/bin/ansible to run the latest version that is installed via APT.

Injecting Python dependencies needed by collections

Quickly enough, I realized something odd: apparently the plugin community.general.dig didn't work anymore. After some research, I found a one-liner to test that:

# Works with APT-installed Ansible? Yes!
$ /usr/bin/ansible all -i localhost, -m debug -a msg="{{ lookup('dig', 'debian.org./A') }}"
localhost | SUCCESS => {
    "msg": "151.101.66.132,151.101.2.132,151.101.194.132,151.101.130.132"
}

# Works with pipx-installed Ansible? No!
$ ansible all -i localhost, -m debug -a msg="{{ lookup('dig', 'debian.org./A') }}"
localhost | FAILED! => {
  "msg": "An unhandled exception occurred while running the lookup plugin 'dig'.
  Error was a <class 'ansible.errors.AnsibleError'>, original message: The dig
  lookup requires the python 'dnspython' library and it is not installed."
}

The issue here is that we need python3-dnspython, which is installed on my system, but is not installed within the pipx virtual environment. It seems that the way to go is to inject the required dependencies in the venv, which is (again) super easy:

$ pipx inject ansible dnspython
  injected package dnspython into venv ansible
done! ✨ 🌟 ✨

Problem fixed! Of course you'll have to iterate to install other missing dependencies, depending on which Ansible external plugins are used in your playbooks.

Closing thoughts

Hopefully there's nothing left to discover and I can get back to work! If there are more quirks and rough edges, drop me an email so that I can update this blog post.

Let me also credit another useful blog post on the matter: https://unfriendlygrinch.info/posts/effortless-ansible-installation/

20 November, 2024 12:00AM by Arnaud Rebillout

November 19, 2024

Daniel Pocock

Jérémy Bobbio (Lunar), Magna Carta & Debian Freedoms: RIP

Jérémy Bobbio (Lunar) passed away on 8 November 2024. It is uncanny but that is exactly 30 years after Oregon voted to legalize euthanasia and it is exactly ten years after Lunar disclosed his cancer diagnosis on a Debian mailing list, debian-private, the gossip network that is being used to spread rumors about developers. While Lunar advanced computer security in many technical initiatives, the gossip on debian-private enables social engineering attacks; therefore, gossip is like a cancer too.

Here is the message from 26 November 2014, it was sent some days after the diagnosis, hence the observation that he lived with the disease for almost exactly 10 years. He thought it would just be a couple of months. It turns out people make mistakes.

Subject: [semi-VAC] A couple of months?
Date: Wed, 26 Nov 2014 20:42:26 +0100
From: Lunar <lunar@debian.org>
To: debian-private@lists.debian.org

My fellow Debianites,

I have been diagnosed with kidney cancer, and it seems there's some
metastases in my lungs. A few doctors have decided it's worth doing many
things to make me live some more. (Another privilege to acknowledge.)
I'm going to follow them and see where it goes…

The first things are going to come up pretty quickly now, but I'm not
sure how long treatments will last or how much they will affect me. What
is clear is that getting better will take most of my life in the next
weeks and probably months.

Please take care of my packages if I'm not responsive; or if I'm
responsive but don't follow-up afterwards. Basically, consider me
unreliable.

I want to keep working on reproducible builds at least a little, as
it's a good source of pleasure. You are welcome to join the fun. :)

[ Never to be disclosed. ]

Be well,
-- 
Lunar                                .''`. lunar@debian.org                    : :Ⓐ  :  # apt-get install anarchism
                                    `. `'`                                       `-   

Moving on from that, we can see that Lunar made significant contributions to the harassment of Dr Jacob Appelbaum. It appears that Lunar was opposed to everything that has been achieved for the right to due process since the signing of the Magna Carta over eight hundred years ago.

It is really important to look at this email now because the cool kids on debian-private might be using similar tactics in the GNOME conspiracy against Sonny Piers.

Lunar did not allow the cancer to get in the way of his fight against due process. He even went beyond that, advocating for gaslighting. Look at the comments about creating a "support group" to brainwash Dr Appelbaum to believe he really might be a rapist:

Subject: Re: Jacob Appelbaum and harrassement
Date: Fri, 17 Jun 2016 02:12:01 +0200
From: Jérémy Bobbio <lunar@debian.org>
To: debian-private@lists.debian.org
CC: da-manager@debian.org

Hi!

Since these stories have been published, I kept myself referring to a
zine [1] jointly done by two organizations working on supporting
survivors of sexual assault and ending rape culture. I strongly
recommend you have a look, it's not that long.

The zine list nine principles on how to support survivors of sexual
assault. They've been very helpful to me, you might want to have a look.

 [1]: http://www.phillyspissed.net/sites/default/files/survivor-support.pdf

Konstantinos Margaritis:
> I'm curious, is everyone else OK with expulsion, without having heard the side of
> the accused first? Disclaimer, I do not have any interest to the Tor project, and
> this is the first time I've actually heard of Jacob and his behaviour. I'm assuming
> that all the stories are true, but I'm not at all comfortable with expulsion,
> or any other "punishment" coming officially from Debian, without first hearing his
> side and at least having given him the right to respond to many of the emails here.

The process you describe is modeled on “coercive justice”. I think we
should instead be supporting survivors, and making sure that Debian can
be as safe and welcoming as possible.


The first principle given in the above zine is “Health & Safety First”,
and the second is “Restore Choice”.

So let's hear what people who have been abused has to say on what needs
to be done for their health and safety:

Alison Macrina who has been advocating Debian and its derivatives to
many libraries and activists has made the following demand in her
statement, amongst others [2]:

    Jake must be excluded from all community activities as a
    precondition for healing.

Isis Lovecruft who attended DebConf13 and is a longtime Debian user and
advocate has made the following demand to the communities, amongst
others [3]:

    We need to entirely remove abusers from our communities, until such
    a time as they have sufficiently demonstrated to their victims that
    their abusive behaviours will no longer continue. Jake should be
    removed from all places where his victims, their loved ones, and
    friends might come into any form of contact with him. Given the
    enormous amounts of pain myself and the other victims have gone
    through, the draining emotional stress, and (please excuse my rather
    dark humour) the development time wasted, I am not willing to
    revisit this issue for at least four years.  After that time has
    passed, it may be possible to reassess whether there is any path
    forward for Jake.

As such I support preventing Jacob Appelbaum from participating in
Debian until a process took place for him to work on his issues enough
to make those who have been abused and their friends confident that he
will not commit more abuses.

 [2]: https://medium.com/@flexlibris/theres-really-no-such-thing-as-the-voiceless-92b3fa45134d
 [3]: https://blog.patternsinthevoid.net/the-forest-for-the-trees.html


If people want to care about Jake, I suggest listening to Alison again:

    People who love Jake and want him to heal should make a support
    group for him. Those people should bear in mind that he has not
    apologized nor admitted to any wrongdoing, and they should hold him
    accountable for what he’s done.

For how this can be done, you can get some ideas from the work of Philly
Stands Up which they have documented [4].

 [4]: https://communityaccountability.files.wordpress.com/2012/06/philly-stands-up.pdf


Given some of the emails I've read here, we do have work to do in order
to keep Debian as safe and welcoming as possible. We do have to educate
ourselves and newcomers on boundaries, consent, gender-based violence,
abuse prevention, accountability processes…

Others already wrote a few things about this, so I'm not going to
develop further, and discussing this on public channels might probably
more effective.



Oh, and principle 8 is “It's Not About You”.

Seriously, it's not. Your needs won't help the people who have been
abused or Jake.

-- 
Lunar                                .''`. lunar@debian.org                    : :Ⓐ  :  # apt-get install anarchism
                                    `. `'`                                       `-   

In my last couple of blog posts on my own site and the Software Freedom Institute site, I've discussed the invalid Swiss judgment based on lies and forgeries. One thing that people have failed to notice is that the false judgment attacks the principle of free redistribution. It is this paragraph here:

DFSG judgment

Translated to English, it says:

Daniel Pocock has revoked the Debian Project Code of Conduct and stated that he has the right to authorize joint authors to use the name Debian in domains. On the site debiangnulinux.org, he has used the Debian open use logo and he has offered a copy of Debian for people to download.

The Debian Code of Conduct was never accepted or consented to by the vast majority of Debian co-authors. Less than 25% of co-authors consented to the Code of Conduct. Therefore, it was not even valid in the first place. You can't revoke a Code of Conduct that wasn't valid in the first place.

The use of the Debian open use logo is authorized very clearly.

If somebody was distributing a virus or some other random software under the name Debian it would be confusing and wrong. But what they accuse me of doing is distributing a genuine copy of Debian. Debian was the birthplace of the Debian Free Software Guidelines and the right to distribute genuine copies of Debian has always been there.

Moreover, any co-author or joint author of intellectual property has the right to unilaterally redistribute copies of the joint work as they see fit, according to this legal guidance from UC Berkeley:

Joint authorship occurs when two or more people work together on a creative work. In this case, all creators have equal rights to distribute and alter the work, and they must split profits among each other.

That is always the case unless we have entered into a signed agreement with colleagues whereby we agree to only distribute the work through a specific agent or channel. Employment contracts for IT workers almost always include provisions to prevent such unilateral distribution but in Debian, we are not employees and we never signed anything giving up our rights under copyright law.

Pretending that some of us don't have the right to redistribute copies of Debian is a form of gaslighting, a lot like gaslighting Dr Appelbaum with a "support group" to brainwash him to believe the social media rumors that he could be a rapist.

Therefore, while the judgment is invalid, it appears to be contradicting the DFSG. What they have written is actually worse than IBM Red Hat's decision to restrict the RHEL source code.

Lunar, Magna Carta & Debian Social Contract. Rest In Peace.

19 November, 2024 09:00PM

Google-funded group distributed invalid Swiss judgment to deceive Midlands-North-West

A blog has appeared on the Software Freedom Institute web site proving that the defamatory Swiss judgment was invalid from the beginning. In fact, within a few weeks, it had been shot down in a response from the Swiss Intellectual Property Office.

The rogue Debianists had clearly known since December 2023 that their judgment document was invalid. Nonetheless, on the day before Ireland voted, they published the invalid judgment anyway, along with a deceptive translation of what was in it.

By publishing a document that they knew to be invalid, and doing so the day before voting, just hours before the news moratorium, they clearly intended to deceive the Irish media and the voters of Midlands-North-West.

On some metrics, like infrastructure, the region Midlands-North-West is one of the most disadvantaged in Europe. Therefore, the attempt by foreigners funded by Google to deceive this community is outrageous.

The blog by the Software Freedom Institute reveals that the publishers of the false document received Google funding. Looking at the DebConf web site, we can see that Google regularly sponsors the annual conference for Debianists.

The legal dispute started in early 2023. We can see that one of the leading sponsors of DebConf23 was Infomaniak, a competitor of the Software Freedom Institute in the Swiss market.

From the DebConf23 sponsors page, we can see that both Google and Infomaniak, a competitor of the Institute, had a hand in Debian's funding during the period of this dispute:

DebConf23, infomaniak, Google, sponsors

19 November, 2024 11:30AM

November 18, 2024

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppArmadillo 14.2.0-1 on CRAN: New Upstream Minor

armadillo image

Armadillo is a powerful and expressive C++ template library for linear algebra and scientific computing. It aims towards a good balance between speed and ease of use, has a syntax deliberately close to Matlab, and is useful for algorithm development directly in C++, or quick conversion of research code into production environments. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 1191 other packages on CRAN, downloaded 37.2 million times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint / vignette) by Conrad and myself has been cited 603 times according to Google Scholar.

Conrad released a minor version 14.2.0 a few days ago after we spent about two weeks with several runs of reverse-dependency checks covering corner cases. After a short delay at CRAN due to a false positive on a test, a package whose tests also failed under the previous version, and some concern over new deprecation warnings when using the headers directly (as e.g. the mlpack R package does), we are now on CRAN. I noticed a missing feature under large ‘64bit word’ (for large floating-point matrices) and added an exporter for icube going to double to support the 64-bit integer range (as we already did, of course, for vectors and matrices). Changes since the last CRAN release are summarised below.

Changes in RcppArmadillo version 14.2.0-1 (2024-11-16)

  • Upgraded to Armadillo release 14.2.0 (Smooth Caffeine)

    • Faster handling of symmetric matrices by inv() and rcond()

    • Faster handling of hermitian matrices by inv(), rcond(), cond(), pinv(), rank()

    • Added solve_opts::force_sym option to solve() to force the use of the symmetric solver

    • More efficient handling of compound expressions by solve()

  • Added exporter specialisation for icube for the ARMA_64BIT_WORD case

Courtesy of my CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the Rcpp R-Forge page.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

18 November, 2024 10:31PM

hackergotchi for C.J. Adams-Collier

C.J. Adams-Collier

Managing HPE SAS Controllers

Notes to self. And anyone else who might find them useful. Following are some ssacli commands which I use infrequently enough that they fall out of cache. This may repeat information in other blogs, but since I search my posts first when commands slip my mind, I thought I’d include them here, too.

hpacucli is the wrong command. Use ssacli instead.

$ KR='/usr/share/keyrings/hpe.gpg'
$ for fingerprint in \
  882F7199B20F94BD7E3E690EFADD8D64B1275EA3 \
  57446EFDE098E5C934B69C7DC208ADDE26C2B797 \
  476DADAC9E647EE27453F2A3B070680A5CE2D476 ; do \
    curl "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0x${fingerprint}" \
      | gpg --no-default-keyring --keyring "${KR}" --import ; \
  done
$ gpg --list-keys --no-default-keyring --keyring "${KR}" 
/usr/share/keyrings/hpe.gpg
---------------------------
pub   rsa2048 2012-12-04 [SC] [expired: 2022-12-02]
      476DADAC9E647EE27453F2A3B070680A5CE2D476
uid           [ expired] Hewlett-Packard Company RSA (HP Codesigning Service)

pub   rsa2048 2014-11-19 [SC] [expired: 2024-11-16]
      882F7199B20F94BD7E3E690EFADD8D64B1275EA3
uid           [ expired] Hewlett-Packard Company RSA (HP Codesigning Service) - 1

pub   rsa2048 2015-12-10 [SCEA] [expires: 2025-12-07]
      57446EFDE098E5C934B69C7DC208ADDE26C2B797
uid           [ unknown] Hewlett Packard Enterprise Company RSA-2048-25 
$ echo "deb [signed-by=${KR}] http://downloads.linux.hpe.com/SDR/repo/mcp bookworm/current non-free" \
  | sudo dd of=/etc/apt/sources.list.d/hpe.list status=none
$ sudo apt-get update
$ sudo apt-get install -y -qq ssacli > /dev/null 2>&1
$ sudo ssacli ctrl all show status

HPE Smart Array P408i-p SR Gen10 in Slot 3
   Controller Status: OK
   Cache Status: OK
   Battery/Capacitor Status: OK

$ sudo ssacli ctrl all show detail
HPE Smart Array P408i-p SR Gen10 in Slot 3
   Bus Interface: PCI
   Slot: 3
   Serial Number: PFJHD0ARCCR1QM
   RAID 6 Status: Enabled
   Controller Status: OK
   Hardware Revision: B
   Firmware Version: 2.65
   Firmware Supports Online Firmware Activation: True
   Driver Supports Online Firmware Activation: True
   Rebuild Priority: High
   Expand Priority: Medium
   Surface Scan Delay: 3 secs
   Surface Scan Mode: Idle
   Parallel Surface Scan Supported: Yes
   Current Parallel Surface Scan Count: 1
   Max Parallel Surface Scan Count: 16
   Queue Depth: Automatic
   Monitor and Performance Delay: 60  min
   Elevator Sort: Enabled
   Degraded Performance Optimization: Disabled
   Inconsistency Repair Policy: Disabled
   Write Cache Bypass Threshold Size: 1040 KiB
   Wait for Cache Room: Disabled
   Surface Analysis Inconsistency Notification: Disabled
   Post Prompt Timeout: 15 secs
   Cache Board Present: True
   Cache Status: OK
   Cache Ratio: 10% Read / 90% Write
   Configured Drive Write Cache Policy: Disable
   Unconfigured Drive Write Cache Policy: Default
   Total Cache Size: 2.0
   Total Cache Memory Available: 1.8
   Battery Backed Cache Size: 1.8
   No-Battery Write Cache: Disabled
   SSD Caching RAID5 WriteBack Enabled: True
   SSD Caching Version: 2
   Cache Backup Power Source: Batteries
   Battery/Capacitor Count: 1
   Battery/Capacitor Status: OK
   SATA NCQ Supported: True
   Spare Activation Mode: Activate on physical drive failure (default)
   Controller Temperature (C): 53
   Cache Module Temperature (C): 43
   Capacitor Temperature  (C): 40
   Number of Ports: 2 Internal only
   Encryption: Not Set
   Express Local Encryption: False
   Driver Name: smartpqi
   Driver Version: Linux 2.1.18-045
   PCI Address (Domain:Bus:Device.Function): 0000:11:00.0
   Negotiated PCIe Data Rate: PCIe 3.0 x8 (7880 MB/s)
   Controller Mode: Mixed
   Port Max Phy Rate Limiting Supported: False
   Latency Scheduler Setting: Disabled
   Current Power Mode: MaxPerformance
   Survival Mode: Enabled
   Host Serial Number: 2M20040D1Q
   Sanitize Erase Supported: True
   Sanitize Lock: None
   Sensor ID: 0
      Location: Capacitor
      Current Value (C): 40
      Max Value Since Power On: 42
   Sensor ID: 1
      Location: ASIC
      Current Value (C): 53
      Max Value Since Power On: 55
   Sensor ID: 2
      Location: Unknown
      Current Value (C): 43
      Max Value Since Power On: 45
   Sensor ID: 3
      Location: Cache
      Current Value (C): 43
      Max Value Since Power On: 44
   Primary Boot Volume: None
   Secondary Boot Volume: None

$ sudo ssacli ctrl all show config

HPE Smart Array P408i-p SR Gen10 in Slot 3  (sn: PFJHD0ARCCR1QM)



   Internal Drive Cage at Port 1I, Box 2, OK



   Internal Drive Cage at Port 2I, Box 2, OK


   Port Name: 1I (Mixed)

   Port Name: 2I (Mixed)

   Array A (SAS, Unused Space: 0  MB)

      logicaldrive 1 (1.64 TB, RAID 6, OK)

      physicaldrive 1I:2:1 (port 1I:box 2:bay 1, SAS HDD, 300 GB, OK)
      physicaldrive 1I:2:2 (port 1I:box 2:bay 2, SAS HDD, 1.2 TB, OK)
      physicaldrive 1I:2:3 (port 1I:box 2:bay 3, SAS HDD, 300 GB, OK)
      physicaldrive 1I:2:4 (port 1I:box 2:bay 4, SAS HDD, 1.2 TB, OK)
      physicaldrive 2I:2:5 (port 2I:box 2:bay 5, SAS HDD, 300 GB, OK)
      physicaldrive 2I:2:6 (port 2I:box 2:bay 6, SAS HDD, 300 GB, OK)
      physicaldrive 2I:2:7 (port 2I:box 2:bay 7, SAS HDD, 1.2 TB, OK)
      physicaldrive 2I:2:8 (port 2I:box 2:bay 8, SAS HDD, 1.2 TB, OK)

   SEP (Vendor ID HPE, Model Smart Adapter) 379  (WWID: 51402EC013705E88, Port: Unknown)

$ sudo ssacli ctrl slot=3 pd 2I:2:7 show detail

HPE Smart Array P408i-p SR Gen10 in Slot 3

   Array A

      physicaldrive 2I:2:7
         Port: 2I
         Box: 2
         Bay: 7
         Status: OK
         Drive Type: Data Drive
         Interface Type: SAS
         Size: 1.2 TB
         Drive exposed to OS: False
         Logical/Physical Block Size: 512/512
         Rotational Speed: 10000
         Firmware Revision: U850
         Serial Number: KZGN1BDE
         WWID: 5000CCA01D247239
         Model: HGST    HUC101212CSS600
         Current Temperature (C): 46
         Maximum Temperature (C): 51
         PHY Count: 2
         PHY Transfer Rate: 6.0Gbps, Unknown
         PHY Physical Link Rate: 6.0Gbps, Unknown
         PHY Maximum Link Rate: 6.0Gbps, 6.0Gbps
         Drive Authentication Status: OK
         Carrier Application Version: 11
         Carrier Bootloader Version: 6
         Sanitize Erase Supported: False
         Shingled Magnetic Recording Support: None
         Drive Unique ID: 5000CCA01D247238
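One more invocation I reach for when pulling a failing disk, written down here as a hedged sketch: `modify led=on/off` is part of ssacli's physical-drive command set, and the slot/bay address is the one shown above, so adjust both for your controller. The guard makes the snippet a harmless no-op on machines without ssacli.

```shell
# Blink the locate LED on the bay holding drive 2I:2:7 so it can be
# found in the chassis, then switch it off again. Guarded so the
# snippet does nothing on machines without ssacli installed.
if command -v ssacli >/dev/null 2>&1; then
  sudo ssacli ctrl slot=3 pd 2I:2:7 modify led=on
  sleep 10
  sudo ssacli ctrl slot=3 pd 2I:2:7 modify led=off
  STATUS="led blinked"
else
  STATUS="ssacli not installed"
fi
echo "$STATUS"
```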

18 November, 2024 07:21PM by C.J. Collier

hackergotchi for Philipp Kern

Philipp Kern

debian.org now supports Security Key-backed SSH keys

debian.org's infrastructure now supports using Security Key-backed SSH keys. DDs (and guests) can use the mail gateway to add SSH keys of the types sk-ecdsa-sha2-nistp256@openssh.com and sk-ssh-ed25519@openssh.com to their LDAP accounts.

This was done in support of hardening our infrastructure: Hopefully we can require these hardware-backed keys for sensitive machines in the future, to have some assertion that it is a human that is connecting to them.

As some of us shell to machines a little too often, I also wrote a small SSH CA that issues short-lived certificates (documentation). It requires the user to login via SSH using an SK-backed key and then issues a certificate that is valid for less than a day. For cases where you need to frequently shell to a machine or to a lot of machines at once that should be a nice compromise of usability vs. security.

The capabilities of various keys differ a lot and it is not always easy to determine what feature set they support. Generally SK-backed keys work with FIDO U2F keys, if you use the ecdsa key type. Resident keys (i.e. keys stored on the token, to be used from multiple devices) require FIDO2-compatible keys. no-touch-required is its own maze, e.g. the flag is not properly restored today when pulling the public key from a resident key. The latter is also one reason for writing my own CA.

Someone™ should write up a matrix on what is supported where and how. In the meantime it is probably easiest to generate an ed25519 key - or if that does not work an ecdsa key - and make a backup copy of the resulting on-disk key file. And copy that around to other devices (or OSes) that require access to the key.
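For completeness, generating such a key might look like the sketch below. Assumptions not taken from the post: OpenSSH 8.2 or newer built with FIDO support, and an illustrative temporary key directory with a plain-ed25519 fallback so the snippet runs even with no token plugged in. This is not part of the debian.org instructions.

```shell
# Sketch only. Generating an SK-backed ed25519 key prompts for a touch on
# the FIDO token; stderr is silenced and we fall back to a plain ed25519
# key when no token is present, so the snippet still runs anywhere.
KEYDIR="$(mktemp -d)"
ssh-keygen -t ed25519-sk -f "$KEYDIR/id_ed25519_sk" -N '' -q 2>/dev/null \
  || ssh-keygen -t ed25519 -f "$KEYDIR/id_ed25519" -N '' -q
# The resulting .pub file is what gets sent to the LDAP mail gateway.
ls "$KEYDIR"
```

If `ed25519-sk` is rejected by your token, `-t ecdsa-sk` is the FIDO-U2F-compatible alternative the post mentions.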

18 November, 2024 04:43PM by Philipp Kern (noreply@blogger.com)

Russ Allbery

Review: Delilah Green Doesn't Care

Review: Delilah Green Doesn't Care, by Ashley Herring Blake

Series: Bright Falls #1
Publisher: Jove
Copyright: February 2022
ISBN: 0-593-33641-0
Format: Kindle
Pages: 374

Delilah Green Doesn't Care is a sapphic romance novel. It's the first of a trilogy, although in the normal romance series fashion each book follows a different protagonist and has its own happy ending. It is apparently classified as romantic comedy, which did not occur to me while reading but which I suppose I can see in retrospect.

Delilah Green got the hell out of Bright Falls as soon as she could and tried not to look back. After her father died, her step-mother lavished all of her perfectionist attention on her overachiever step-sister, leaving Delilah feeling like an unwanted ghost. She escaped to New York where there was space for a queer woman with an acerbic personality and a burgeoning career in photography. Her estranged step-sister's upcoming wedding was not a good enough reason to return to the stifling small town of her childhood. The pay for photographing the wedding was, since it amounted to three months of rent and trying to sell photographs in galleries was not exactly a steady living. So back to Bright Falls Delilah goes.

Claire never left Bright Falls. She got pregnant young and ended up with a different life than she expected, although not a bad one. Now she's raising her daughter as a single mom, running the town bookstore, and dealing with her unreliable ex. She and Iris are Astrid Parker's best friends and have been since fifth grade, which means she wants to be happy for Astrid's upcoming wedding. There's only one problem: the groom. He's a controlling, boorish ass, but worse, Astrid seems to turn into a different person around him. Someone Claire doesn't like.

Then, to make life even more complicated, Claire tries to pick up Astrid's estranged step-sister in Bright Falls's bar without recognizing her.

I have a lot of things to say about this novel, but here's the core of my review: I started this book at 4pm on a Saturday because I hadn't read anything so far that day and wanted to at least start a book. I finished it at 11pm, having blown off everything else I had intended to do that evening, completely unable to put it down.

It turns out there is a specific type of romance novel protagonist that I absolutely adore: the sarcastic, confident, no-bullshit character who is willing to pick the fights and say the things that the other overly polite and anxious characters aren't able to get out. Astrid does not react well to criticism, for reasons that are far more complicated than it may first appear, and Claire and Iris have been dancing around the obvious problems with her surprise engagement. As the title says, Delilah thinks she doesn't care: she's here to do a job and get out, and maybe she'll get to tweak her annoying step-sister a bit in the process. But that also means that she is unwilling to play along with Astrid's obsessively controlling mother or her obnoxious fiance, and thus, to the barely disguised glee of Claire and Iris, is a direct threat to the tidy life that Astrid's mother is trying to shoehorn her daughter into.

This book is a great example of why I prefer sapphic romances: I think this character setup would not work, at least for me, in a heterosexual romance. Delilah's role only works if she's a woman; if a male character were the sarcastic conversational bulldozer, it would be almost impossible to avoid falling into the gender stereotype of a male rescuer. If this were a heterosexual romance trying to avoid that trap, the long-time friend who doesn't know how to directly confront Astrid would have to be the male protagonist. That could work, but it would be a tricky book to write without turning it into a story focused primarily on the subversion of gender roles. Making both protagonists women dodges the problem entirely and gives them so much narrative and conceptual space to simply be themselves, rather than characters obscured by the shadows of societal gender rules.

This is also, at its core, a book about friendship. Claire, Astrid, and Iris have the sort of close-knit friend group that looks exclusive and unapproachable from the outside. Delilah was the stereotypical outsider, mocked and excluded when they thought of her at all. This, at least, is how the dynamics look at the start of the book, but Blake did an impressive job of shifting my understanding of those relationships without changing their essential nature. She fleshes out all of the characters, not just the romantic leads, and adds complexity, nuance, and perspective. And, yes, past misunderstanding, but it's mostly not the cheap sort that sometimes drives romance plots. It's the misunderstanding rooted in remembered teenage social dynamics, the sort of misunderstanding that happens because communication is incredibly difficult, even more difficult when one has no practice or life experience, and requires knowing oneself well enough to even know what to communicate.

The encounter between Delilah and Claire in the bar near the start of the book is the cornerstone of the plot, but the moment that grabbed me and pulled me in was Delilah's first interaction with Claire's daughter Ruby. That was the point when I knew these were characters I could trust, and Blake never let me down. I love how Ruby is handled throughout this book, with all of the messy complexity of a kid of divorced parents with her own life and her own personality and complicated relationships with both parents that are independent of the relationship their parents have with each other.

This is not a perfect book. There's one prank scene that I thought was excessively juvenile and should have been counter-productive, and there's one tricky question of (nonsexual) consent that the book raises and then later seems to ignore in a way that bugged me after I finished it. There is a third-act breakup, which is not my favorite plot structure, but I think Blake handles it reasonably well. I would probably find more niggles and nitpicks if I re-read it more slowly. But it was utterly engrossing reading that exactly matched my mood the day that I picked it up, and that was a fantastic reading experience.

I'm not much of a romance reader and am not the traditional audience for sapphic romance, so I'm probably not the person you should be looking to for recommendations, but this is the sort of book that got me to immediately buy all of the sequels and start thinking about a re-read. It's also the sort of book that dragged me back in for several chapters when I was fact-checking bits of my review. Take that recommendation for whatever it's worth.

Content note: Reviews of Delilah Green Doesn't Care tend to call it steamy or spicy. I have no calibration for this for romance novels. I did not find it very sex-focused (I have read genre fantasy novels with more sex), but there are several on-page sex scenes if that's something you care about one way or the other.

Followed by Astrid Parker Doesn't Fail.

Rating: 9 out of 10

18 November, 2024 04:20AM


November 17, 2024

Russ Allbery

Review: Dark Deeds

Review: Dark Deeds, by Michelle Diener

Series: Class 5 #2
Publisher: Eclipse
Copyright: January 2016
ISBN: 0-6454658-4-4
Format: Kindle
Pages: 340

Dark Deeds is the second book of the self-published Class 5 science fiction romance series. It is a sequel to Dark Horse and will spoil the plot of that book, but it follows the romance series convention of switching to a new protagonist in the same universe and telling a loosely-connected story.

Fiona, like Rose in the previous book, was kidnapped by the Tecran in one of their Class 5 ships, although that's not entirely obvious at the start of the story. The book opens with her working as a slave on a Garmman trading ship while its captain works up the nerve to have her killed. She's spared this fate when the ship is raided by Krik pirates. Some brave fast-talking, and a touch of honor among thieves, lets her survive the raid and be rescued by a pursuing Grih battleship, with a useful electronic gadget as a bonus.

The author uses the nickname "Fee" for Fiona throughout this book and it was like nails on a chalkboard every time. I had to complain about that before getting into the review.

If you've read Dark Horse, you know the formula: lone kidnapped human woman, major violations of the laws against mistreatment of sentient beings that have the Grih furious on her behalf, hunky Grih starship captain who looks like a space elf, all the Grih are fascinated by her musical voice, she makes friends with a secret AI... Diener found a formula that worked well enough that she tried it again, and it would not surprise me if the formula repeated through the series. You should not go into this book expecting to be surprised.

That said, the formula did work the first time, and it largely does work again. I thoroughly enjoyed Dark Horse and wanted more, and this is more, delivered on cue. There are worse things, particularly if you're a Kindle Unlimited reader (I am not) and are therefore getting new installments for free. The Tecran fascination with kidnapping human women is explained sufficiently in Fiona's case, but I am mildly curious how Diener will keep justifying it through the rest of the series. (Maybe the formula will change, but I doubt it.)

To give Diener credit, this is not a straight repeat of the first book. Fiona is similar to Rose but not identical; Rose had an unshakable ethical calm, and Fiona is more of a scrapper. The Grih are not stupid and, given the amount of chaos Rose unleashed in the previous book, treat the sudden appearance of another human woman with a great deal more caution and suspicion. Unfortunately, this also means far less of my favorite plot element of the first book: the Grih being constantly scandalized and furious at behavior the protagonist finds sadly unsurprising.

Instead, this book has quite a bit more action. Dark Horse was mostly character interactions and tense negotiations, with most of the action saved for the end. Dark Deeds replaces a lot of the character work with political plots and infiltrating secret military bases and enemy ships. The AI (named Eazi this time) doesn't show up until well into the book and isn't as much of a presence as Sazo. Instead, there's a lot more of Fiona being drafted into other people's fights, which is entertaining enough while it's happening but which wasn't as delightful or memorable as Rose's story.

The writing continues to be serviceable but not great. It's a bit cliched and a bit awkward.

Also, Diener uses paragraph breaks for emphasis.

It's hard to stop noticing it once you see it.

Thankfully, once the story gets going and there's more dialogue, she tones that down, or perhaps I stopped noticing. It's that kind of book (and that kind of series): it's a bit rough to get started, but then there's always something happening, the characters involve a whole lot of wish-fulfillment but are still people I like reading about, and it's the sort of unapologetic "good guys win" type of light science fiction that is just the thing when one simply wants to be entertained. Once I get into the book, it's easy to overlook its shortcomings.

I spent Dark Horse knowing roughly what would happen but wondering about the details. I spent Dark Deeds fairly sure of the details and wondering when they would happen. This wasn't as fun of an experience, but the details were still enjoyable and I don't regret reading it. I am hoping that the next book will be more of a twist, or will have a character more like Rose (or at least a character with a better nickname). Sort of recommended if you liked Dark Horse and really want more of the same.

Followed by Dark Minds, which I have already purchased.

Rating: 6 out of 10

17 November, 2024 05:55AM

November 15, 2024

hackergotchi for Daniel Pocock

Daniel Pocock

Euthanasia perception, legacy & Debian Suicide Cluster

Euthanasia legislation is currently subject to very active discussion in the Oireachtas, Irish parliament and Westminster, the British parliament.

The news media has given a lot of weight to testimonials from those with terminal illness and those who are close family members facing the ethical challenges alone. By way of example, the Guardian published a report about Paola Marra, the late wife of Blur drummer Dave Rowntree.

At the same time, we can see rival news reports have appeared. For example, Switzerland is well known for making euthanasia available lawfully but nonetheless, there are still cases of euthanasia that have taken place outside the legal framework. Police in the Kanton of Schaffhausen are currently investigating use of the 3D-printed "Sarco" suicide pod.

In Canada, a judge had to intervene to prevent a euthanasia taking place on economic rather than medical grounds.

The injunction, signed by Justice Simon R. Coval, is the first of its kind issued in the province and was issued on Saturday, the day before the woman was scheduled to die.

It prevents Dr. Ellen Wiebe or any other doctor from “causing the death” of the 53-year-old woman “by MAID or any other means.” It followed a notice of civil claim alleging Wiebe negligently approved the procedure for a patient who does not legally qualify.

When talking about the appearance of a Debian Suicide Cluster, or a suicide cluster in any organization for that matter, it is important to try to work out where it began and who appeared to be the first suicide: the search for suicide zero, so to speak. Given that the cause of death is not always disclosed, the perception that a particular death was a suicide could be as significant, for contagion purposes, as a confirmed suicide.

Did racist Swiss jurists ask me to kill myself too?

After all the evidence about a Debian Suicide Cluster began to appear, one of the racist Swiss women, Pascale Koester from Walder Wyss, had her man-slave apprentice send me the insult below. His reference to "persisting" reminded me of the email Jason Gunthorpe wrote about Joel "Espy" Klecker.

Subject: RE: 753935 - Debian [WW-DMS.FID901178]
Date: Wed, 26 Apr 2023 16:32:17 +0000
From: Nassisi Gillian <gillian.nassisi@walderwyss.com>
To: daniel@softwarefreedom.institute <daniel@softwarefreedom.institute>
CC: Köster Pascale <pascale.koester@walderwyss.com>

Mr Pocock,

We acknowledge receipt of both your emails dated 31 March and 21 April 2023.

First of all, our client does not see any offer to settle the case in your emails.

Furthermore, as long as you persist in your actions, we are instructed not to enter into any discussion or settlement agreement with you.

Please instruct your lawyer to contact us for any further communications.

All rights reserved.

Kind regards,

Pascale Köster and Gillian Nassisi

Gillian Nassisi
MLaw, Trainee Lawyer

Walder Wyss Ltd. | www.walderwyss.com
Zurich | Geneva | Basel | Berne | Lausanne | Lugano
3 Boulevard du Théâtre, P.O. Box, 1211 Geneva 3, Switzerland
Phone direct: +41586583077 | Phone: +41 58 658 30 00 |Fax:  +41 58 658 59 59

This e-mail message is for the sole attention and use of the intended recipient. It is confidential and may contain privileged information.
Please notify us immediately if you have received it in error and delete it from your system. Thank you.

Here is the full version of the email quoted in the earlier blog post about Klecker. In the previous blog, I only showed the first half of the email; now I show the full email. Notice the comment that Klecker didn't have the will to continue, in other words, he didn't have the will to persist: that is the link to the racist Swiss email above.

Racist Swiss women like Pascale Koester and Albane de Ziegler interfering in the lives of volunteers are a lot like this terrible disease, Duchenne muscular dystrophy, which wears people down both physically and emotionally to the point where they may decide to end it before it kills them in a physical sense.

It is important to remember that the manner in which these borderline nazis at Walder Wyss intruded on my family and me occurred after we had already had the horrible experience of racial harassment from the cat-hating organizer of the far-right seniors group. After that incident in 2017, I left Zurich. As any cat will tell you, if you lie down with dogs, you get up with fleas.

Subject: RE: [jwk@espy.org: Joel Klecker]
Date: Fri, 14 Jul 2000 20:40:00 -0600 (MDT)
From: Jason Gunthorpe <jgg@ualberta.ca>
To: debian-private@lists.debian.org

On Tue, 11 Jul 2000, Brent Fulgham wrote:

> > It's very hard for me to even send this message. This is a 
> > great loss to us all.
 > First, I'd like to extend my condolences to Joel's family.  It
> is still very hard to believe this has happened.  Joel was

> always just another member of the project -- no one knew (or
> at least I did not know) that he was facing such terrible 
> hardships.  Debian is poorer for his loss.

Some of us did know, but he never wished to give specifics. I do not think
he wanted us to really know. I am greatly upset that I was unable to at
least be there with him on the 9th when he decided he could no longer
continue.. 

> I know we are dedicating the release to him, but what about 
> having a "Klecker" box as a permanent part of Debian?  If we

I will see to this.

It is nice to see all the support for Joel

Jason

Am I the only Debian Developer who read the entire history and formed the impression that Klecker's death may have been a case of euthanasia, in other words, that Klecker chose to take his own life in the context of medically assisted suicide?

It is awkward to ask questions like this but it is important to remember that the former Debian leader, Chris Lamb, decided to force these discussions into the open at the time my father was dying. Lamb is an arrogant toff who forced that upon my family. Here is that recording from the Swiss harassment judgment where I was explaining to the court the difficulty of dealing with a racist Swiss landlady at the same time that my father had a stroke on the other side of the world in Australia:

Stepping back a bit, we can see that Klecker's home state, Oregon, was the first US state to introduce assisted-suicide legislation. Therefore, as Klecker was growing up, he and his family were exposed to a lot of public debate about euthanasia laws. The debate began in the early nineteen nineties, when Klecker was twelve or thirteen years old. A public referendum, Ballot Measure 16, approved the concept on 8 November 1994, when Klecker was 16 years old. (Coincidentally, Jérémy Bobbio aka ‘Lunar’ died of cancer in Rennes, France on 8 November 2024, 30 years after Oregon's referendum.) Implementation of the act was delayed for almost three years due to interventions by the former president and the US Supreme Court. Oregon residents suffering from terminal illness could begin availing of euthanasia in October 1997.

It must have been an unpleasant time for his family to have this heightened public awareness of the euthanasia issue at almost exactly the same time that Klecker's childhood was visibly robbed from him by the evolution of his medical condition.

Under the euthanasia law, the doctor who completes the death certificate will not mention that euthanasia was a factor:

Q: What is listed as the cause of death on death certificates for patients who die under the Death with Dignity Act?

A: The Oregon Health Authority, Center for Health Statistics recommends that physicians record the underlying terminal disease as the cause of death and mark the manner of death “natural”. This is intended to balance the confidentiality of patients and their families, while ensuring that we have complete information for statistical purposes.

A death certificate is a legal document that has two primary purposes: (1) to certify the occurrence of death for legal matters (e.g., settling the estate), and (2) to document causes of death for public health statistics. To ensure that we have accurate and complete data on patients who have ingested the medications, the Oregon Health Authority regularly matches the names of persons for whom a DWDA prescription is written with death certificates. The Attending Physician is then required to complete a follow-up form with information about whether the death resulted from ingesting the medications, or from the underlying disease.

Therefore, unless the patient or their family chooses to disclose the decision to euthanise, we have no way to be sure that it was euthanasia.

Nonetheless, if Klecker did choose euthanasia it would make him a pioneer of the concept. Cumulative statistics are published about the program; here is the report including all data up to 2023.

In the year that Klecker died, 39 people received a prescription for the medication and 29 died after taking the medication.

The statistics go into greater detail and provide further fuel for debate. For example, we can see that only sixty-three percent of people die within an hour of taking the medication while almost seven percent take more than six hours to die.

It is now seven years since my father had a stroke and the Debian people are still sooking that I wasn't fully available to do things like mentoring for Google, things that Google never pays us to do anyway.

Klecker's father sent the report below. As in all the other emails, nobody states whether it was medically assisted suicide, but his time of death in the early hours of the morning, at 4:29 am, is consistent with the pattern of taking the medication before going to bed. Another key point in the email is that the Sun Ultra 30 and monitor were already packed up to be sent back to Ben Collins. We should not jump to conclusions about this: Klecker was well organized and, even if it was not euthanasia, the doctors may have been able to give him some advance warning about the rate at which his illness would progress.

Subject: Re: [jwk@espy.org: Joel Klecker]
Resent-Date: 11 Jul 2000 15:47:21 -0000
Resent-From: debian-private@lists.debian.org
Date: Tue, 11 Jul 2000 10:47:19 -0500 (EST)
From: matthew.r.pavlovich.1 <mpav@purdue.edu>
To: Ben Collins <bcollins@debian.org>
CC: Debian Private List <debian-private@lists.debian.org>


To hear how the Internet and the Debian Project brought importance to
someone who was battling a tragic illness is uplifting.  
It makes all the disagreements look very insignificant.  Lets rock this
release.  For Joel.

Matthew R. Pavlovich


On Tue, 11 Jul 2000, Ben Collins wrote:

> It's very hard for me to even send this message. This is a great loss to
> us all.
> 
> ----- Forwarded message from J Klecker <jwk@espy.org> -----
> 
> X-From_: jwk@espy.org  Tue Jul 11 10:52:41 2000
> Date: Tue, 11 Jul 2000 07:58:57 -0700
> From: J Klecker <jwk@espy.org>
> X-Mailer: Mozilla 4.61 (Macintosh; I; PPC)
> X-Accept-Language: en,pdf
> To: bcollins@debian.org, Jeff Klecker <jklecker@norpac.com>
> CC: Dianne Klecker <dgk@espy.org>, jwk@espy.org
> Subject: Joel Klecker
> 
> Mr. Collins,
> My son Joel died this morning at 4:29 am after a lifelong battle with
> Duchenne Muscular Dystrophy. Joel was 21 years old. The Debian Project
> was of paramount importance to him.
> I am not technically skilled  enough to maintain Joel's system or to
> repair glitches.  However, the system will continue to run as long as
> possible (access as per normal). Please continue to use any information
> or resources of value to the project.
> I have sent the Sun Ultra 30 and monitor to you via UPS. Please notify
> me of arrival. (I feel quite responsible since it was of great
> importance to Joel)
> Will you please notify any fellow developers with whom Joel worked. We
> would welcome contacts from any of you. Either through this E-mail
> channel, jwk@espy.org     by phone at 503-769-7373   or by that "other"
> mail:   Jeffrey and Dianne Klecker and brother Ben Klecker
>             1385 East Virginia St
>             Stayton, Or 97383
> Thank you and we miss him very much,  Jeff Klecker
> 
> 
> 
> ----- End forwarded message -----
> 
> -- 
>  -----------=======-=-======-=========-----------=====------------=-=------
> /  Ben Collins  --  ...on that fantastic voyage...  --  Debian GNU/Linux   \
> `  bcollins@debian.org  --  bcollins@openldap.org  --  bcollins@linux.com  '
>  `---=========------=======-------------=-=-----=-===-======-------=--=---'
> 
> 
> -- 
> Please respect the privacy of this mailing list.
> 
> To UNSUBSCRIBE, email to debian-private-request@lists.debian.org
> with a subject of "unsubscribe". Trouble? Contact listmaster@lists.debian.org
> 
> 

For the purposes of a suicide cluster analysis, we really don't need to violate Klecker's privacy or make any assertion that he chose euthanasia. All we are concerned with is whether exposure to the emails about this death might have been a contagion factor in any subsequent suicide decisions.

I first volunteered to be a mentor for Debian and Google Summer of Code back in 2012. I had volunteered for this program every year for six years. Not long after my father had a stroke, I sent an email to Molly de Blanc telling her that I wouldn't be fully available for mentoring in Google Summer of Code in 2018. Here is that email:

Subject: Re: Google Summer of Code 2018
Date: Mon, 22 Jan 2018 08:41:49 +0100
From: Daniel Pocock <daniel@pocock.pro>
To: Molly de Blanc <deblanc@riseup.net>

On 22/01/18 02:25, Molly de Blanc wrote:
> I mmissed this on the application before! We need 2-5 administrators for
> the application. Who else wants to be one?
>

You can use my name temporarily while looking for other people to help
you in this role.

Google knows me as a previous administrator for Ganglia in GSoC and I've
met most of the Google people too.

However, I can't officially commit to help with the duties of an
administrator right now.

Regards,

Daniel

The Walder Wyss Swiss lawyers demanded more publicity about the Debian toxic culture. They have been extremely rude to my family and me. By spreading gossip and rumors about volunteers and our families, they create a situation where the only way to proceed is to publish emails like this so people can trace the root cause of these deaths.

Here is where the borderline nazis from Zurich demand that I publish something:

Pascale Koester, Alban de Ziegler, switzerland, nazi

 


In this email, we see an example of Klecker's thinking about a week before he died: he was making arrangements for the Ultra 30. The machine was on loan from Sun Microsystems, who are now part of Oracle Corporation. In other words, they had sent this machine to the home of a dying young man so he could do unpaid work for them and, even as he prepared to die, he had to make arrangements to pack it up and return it to them.

Subject: Re: [jwk@espy.org: Joel Klecker]
Resent-Date: 12 Jul 2000 01:23:28 -0000
Resent-From: debian-private@lists.debian.org
Date: Tue, 11 Jul 2000 21:23:06 -0400
From: Ben Collins <bcollins@debian.org>
To: Erick Kinnee <erick@kinnee.net>, Brent Fulgham <brent.fulgham@xpsystems.com>, Dirk Eddelbuettel <edd@debian.org>, Debian Private List <debian-private@lists.debian.org>

On Tue, Jul 11, 2000 at 06:04:30PM -0500, Erick Kinnee wrote:
> On Tue, Jul 11, 2000 at 03:05:10PM -0700, Brent Fulgham wrote:
> > > As far as the naming goes, shouldn't it be 'espy' rather than 
> > > 'Klecker' ?
> > > 
> > You're absolutely right... that's a much better choice.
> > 
> > It might be nice to mention this to his family whenever a
> > machine is made available.
> 
> What about that Ultra 30?

The UltraSPARC 30 wont be on a network. Joel was already making
arrangements to send it back to me, about a week before this had occured.
My intention then was to hold on to it so that I could have a SCSI system
to test with (I don't have any SCSI UltraSPARC's to work with), and send
my U5 to another developer so that they could assist with the SPARC port.
Since those arrangements are already in place, I sort of need to stick
with them to avoid calling Sun too much.

Ben

-- 
 -----------=======-=-======-=========-----------=====------------=-=------
/  Ben Collins  --  ...on that fantastic voyage...  --  Debian GNU/Linux   \
`  bcollins@debian.org  --  bcollins@openldap.org  --  bcollins@linux.com  '
 `---=========------=======-------------=-=-----=-===-======-------=--=---'

On the day before Klecker died, he used IRC to tell people about it. Once again, I didn't want to tell people everything about my family, but Google and the social media cabals have continued spreading gossip and smear campaigns about "behavior" at a time of grief.

Subject: Distressing news
Date: Sun, 9 Jul 2000 10:30:01 -0400
From: Ben Collins <bcollins@debian.org>
To: debian-private@lists.debian.org

For those of you that do not IRC....

06:42 *Espy* is saying 'goodbye' on IRC today, spread the word
06:42 <woot> Espy: oh?
06:44 <woot> Espy: would you elaborate on that?
06:46 <Espy> terminal illness       close to death
06:46 <Joy> Espy: you're kidding, right?
06:47 <Espy> uh, no
06:47 <woot> Espy: what brought this on?
06:48 <Joy> Espy: omg!
06:50 <woot> Espy: have you known about this long?
06:53 <Espy> I've had this disease my entire life
06:55 <Espy> been bedridden at least as long as I've been a developer
06:57 <Espy> Duchenne Muscular Dystrophy
07:02 <Espy> http://mdausa.org/disease/dmd.html

Espy is Joel Klecker, our long time devoted glibc maintainer. A few days
ago he joined IRC and told us he was giving up his packages to deal with
this very important juncture in his life. He has unsub'd from all lists
but -private, and one other.

Our thoughts and best wishes should go out to him. It might sound corny to
some, but since Joel has dedicated so much time to Debian, even though he
has also had to deal with this condition, I think we should dedicate our
next release to him.

-- 
 -----------=======-=-======-=========-----------=====------------=-=------
/  Ben Collins  --  ...on that fantastic voyage...  --  Debian GNU/Linux   \
`  bcollins@debian.org  --  bcollins@openldap.org  --  bcollins@linux.com  '
 `---=========------=======-------------=-=-----=-===-======-------=--=---'

When we talk about contagion, it is interesting to see if any subsequent suicides have features that can be traced to the communications about Klecker's death.

As we saw above, Klecker wanted to tell people, to communicate, before the end came. Frans Pop, the Debian Day volunteer suicide victim, did much the same thing: he also sent a note on debian-private the night before Debian Day. It is eerie that Klecker and Pop both wanted to transfer control of their packages before they died.

Klecker and Pop both made some effort to coordinate the transfer of their equipment to other developers too. We can see how people removed the machines (and possible evidence) from his home.

Finally, a more mundane matter. Frans was hosting/using a number of machines at his house and asked that they be passed back to Debian. Please contact me *off-list* if you can help. His parents live a fair way from the town where he lived, so will need to arrange to travel there to meet people. They'd therefore appreciate it if one person can take care of everything.

Klecker, Pop and Adrian von Bidder had all been outspoken about philosophical aspects of computing, technology and free software. Does this make them more predisposed to suicide, or was it simply coincidence?

Klecker and von Bidder both had a Debian logo on their grave:

Joel Espy Klecker, Adrian von Bidder

After the death of Klecker, people decided to dedicate the release to him and name a machine after him. Here is an email about the release, notice it was sent before the time of death:

Subject: Re: draft dedication note
Date: Sun, 9 Jul 2000 23:18:37 +0300
From: Antti-Juhani Kaijanaho <gaia@iki.fi>
To: Debian Private List <debian-private@lists.debian.org>

On Sun, Jul 09, 2000 at 03:58:51PM -0400, Ben Collins wrote:
> Dedicated to Joel Klecker
> 
> Although he has not left us, and will always be in our thoughts, we are
> decicating this release of Debian to him. Throughout his long association
> with our project, he has given unselfishly to Free Software. Most of us
> were oblivious to what Joel was facing, and are only now seeing what a
> true and giving person he was, and what a friend we will be losing. So as
> a show of our appreciation, this one's for you, Joel.
> 
>  * The "Joel Klecker" release

If I did not know what is going on, I'd find this note puzzling.
What is it that "Joel was facing"?  Why are we "losing" him?  If we
want to publicize this, I think it should be with a little more clarity.

Also, I believe it is customary to use the past tense only of people
who have died.

(BTW^2, will this release be Debian 2.2 "potato" or Debian 2.2 "Joel
Klecker" or something else?)
-- 
%%% Antti-Juhani Kaijanaho % gaia@iki.fi % http://www.iki.fi/gaia/ %%%

Here is the discussion about naming the machine after him:

Subject: Re: [jwk@espy.org: Joel Klecker]
Date: Tue, 11 Jul 2000 14:57:32 -0700 (PDT)
From: Sean 'Shaleh' Perry <shaleh@valinux.com>
Organization: VA Linux
To: Dirk Eddelbuettel <edd@debian.org>
CC: Debian Private List <debian-private@lists.debian.org>


On 11-Jul-2000 Dirk Eddelbuettel wrote:
> 
> Seconded.
> 
> As far as the naming goes, shouldn't it be 'espy' rather than 'Klecker' ?
> 

probably, just did not feel like arguing over something like this.

In the end, they named the machine klecker.debian.org and it is still there today.

Klecker's death was not the only influence in the Debian world. For example, Thiemo Seufer died and Frans Pop commented on it before his own death:

Subject: Re: Thiemo Seufer
Date: Sat, 27 Dec 2008 04:45:10 +0100
From: Frans Pop <elendil@planet.nl>
To: debian-private@lists.debian.org

On Friday 26 December 2008, Martin Michlmayr wrote:
> I'm sorry to inform you that Thiemo Seufer died in a car accident
> this morning.  I was told that a big, fast moving car collided with
> his car, forcing his car from the high way.

That is very sad news and a great loss to the project. My condolences to his family and friends.

Frans

Looking through these discussions, it is noteworthy that human grief is being trivialized into text-based communications. This could be another factor: when a community doesn't have real-world mechanisms to process grief and resorts to a gossip channel like debian-private, the grief may never be processed properly.

Subject: Re: Privacy of -private list ; missing even one bit of common sense and decency
Date: Sun, 22 Aug 2010 01:01:34 -0500
From: Gunnar Wolf <gwolf@gwolf.org>
To: George Danchev <danchev@spnet.net>
CC: debian-private@lists.debian.org

George Danchev dijo [Sun, Aug 22, 2010 at 08:06:01AM +0300]:
> > > > And I guess that quite a few people share your view.  It's considered
> > > > as a crime or a sin in many religions.
> > > > 
> > > > I personally do not like this point of view at all, but it
> > > > unfortunately probably has to be respected.
> > > 
> > > As a general rule I don't think that we have to respect the views of a
> > > religious minority
> > 
> > I wouldn't say it's a minority. According to wikipedia, the catholic
> > dogmas consider this as a crime, and islam considers this as a sin or
> > even a crime. That already makes quite a few people.
> 
> Just as a data point, there are few countries where Euthanasia [1] is legal 
> and the Netherlands is one of them, so I guess first of all we must respect 
> countries legislation and peoples own legal decision. Yes, I know, we do. 
> Also, I don't want to speculate if that is the case with our fellow DD Frans, 
> since I simply don't know. I've never worked with him closely, however I 
> acknowledge the huge amount of valuable contributions he invested in Debian.

Please. Stop it. So much arguing about what you know shit about, and
that took the life of one of our group, a respected and hard-working
person, makes me sick.

We know nothing about the fact. Probably, we will never know. Stop,
please, hallucinating about the situations that led him to do what he
did. 

Now we can see things are getting even worse: we have these fascist lawyers who come along and look for any excuse to extinguish people. In the Debian world, we are not employees so we can't be sacked. Therefore, the lawyers will try anything, whether it is the rogue UDRP harassment in the WeMakeFedora case or even trying to have a developer forcefully euthanised. If there is a law for it, and if the rogue corporations are willing to pay for it, some lawyer will try to exploit that law to meet their evil objectives.

Euthanasia going global

There is a huge contradiction: one of the world's most advanced superpowers, the United States, remains very skeptical of euthanasia. Very few states copied the example from Oregon to authorize any form of assisted suicide. Yet at the same time US industry is leading the world in artificial intelligence, a technology that could be even more catastrophic than nuclear war if we lose control of it.

Subject: Re: ideology an free software
Date: Wed, 29 Sep 2004 12:43:59 +0200
From: Jose Carlos Garcia Sogo <jsogo@debian.org>
To: debian-private@lists.debian.org

El mié, 29-09-2004 a las 11:12 +0100, David Spreen escribió:
> hey there,
> 
> the recent dwn issue's story about a debian surveillance robot made me 
> think about some ethical issues raised by free software. what about 
> debian powered weapon systems?
> 
> who of you would not work for a weapon-company but conforms with the 
> dfsg-guideline "no discrimination of use"? would you like to see your 
> work to be part of a weapon killing people?

  No discrimination means no discrimination. If you start
discriminating, you'll never end doing it. Every developer can have some
reason for discriminating some group, person or goal. (Do you like
Debian(-med) being used in abortions, euthanasia, investigation with
animals, ...?)
  Also, we cannot impose further restrictions in licenses as (L)GPL,
OSI*, BSD,...

  Cheers,

P.S: BTW, I guess that we would like to see Debian being used in any
spatial program... take into account that the technology needed for
launching people out in the space is the same than the one needed for
putting nuclear warfares out there.

-- 
Jose Carlos Garcia Sogo
   jsogo@debian.org

As we rush to construct Artificial Intelligence systems for the military, is it a sign that humanity as a whole is engaging in a form of euthanasia?

The recent news about the World Central Kitchen drone executions in Gaza put the risk of autonomous weapons back into public consciousness. For those who have a background in this industry, those concerns are not actually very new; we've been thinking about these possibilities for a long time. The message above was from 2004.

My old friend Peter Eckersley warned that Google should not help the US military with AI.

Then again, how did I find out so much about the first AI-powered autonomous drone being tested at Graytown in Australia and then featured in a journal on the Pentagon web site in 2004? The software quoted in the article, JACK, was a student project at the University of Melbourne in 1999. The top guns (excuse the pun) of the CS department were put on it. One of our fellow team members is now Google Professor of Computer Science at Oxford. Peter would be neither amused nor surprised.

15 November, 2024 12:00PM

November 14, 2024

Reproducible Builds

Reproducible Builds mourns the passing of Lunar

The Reproducible Builds community sadly announces it has lost its founding member.

Jérémy Bobbio aka ‘Lunar’ passed away on Friday November 8th in palliative care in Rennes, France.

Lunar was instrumental in starting the Reproducible Builds project in 2013 as a loose initiative within the Debian project. Many of our earliest status reports were written by him and many of our key tools in use today are based on his design.

Lunar was a resolute opponent of surveillance and censorship, and he possessed an unwavering energy that fueled his work on Reproducible Builds and Tor. Without Lunar’s far-sightedness, drive and commitment to enabling teams around him, Reproducible Builds and free software security would not be in the position it is in today. His contributions will not be forgotten, and his high standards and drive will continue to serve as an inspiration to us as well as for the other high-impact projects he was involved in.

Lunar’s creativity, insight and kindness were often noted. He will be greatly missed.


Other tributes:

14 November, 2024 03:00PM

Swiss JuristGate

Edouard Bolleter & PME Magazine news report reads like paid advertising

A news report by Edouard Bolleter of PME Magazine.

He has written a news report that feels like a paid advertisement.

He wrote "a legal services insurance unlimited for the private individuals and the small businesses" and later on "We are the only insurer to accept businesses marked like a risk, those who are most frequently rejected by the legal expenses insurance market".

Monsieur Bolleter does not ask any difficult questions. The journalists in Switzerland are afraid of criminal prosecution/persecution for writing any inconvenient truths.

If it seems too good to be true, it probably is.

The law office who wants to democratize the law

No, jurists are not only for big companies! The proof is Real-Protect, an unlimited legal services insurance for the private individuals and small businesses.

Edouard Bolleter, 20.07.2018

Mathieu Parreaux, employee and co-founder of the law office Parreaux, Thiébaud & Partners, launched Real-Protect, whose terribly democratic concept and startup spirit should appeal to the bosses of French-speaking SMEs. The young company offers unlimited legal protection for individuals and businesses. The firm is made up of general lawyers (more than 10 people) and works with a network of partner lawyers registered with the Geneva, Vaud, Valais, Fribourg and Neuchâtel Bars. Real-Protect already has 450 clients with rates starting at 24.90 francs per month.

75% of clients are small businesses

Originality of the approach: any client can receive legal advice, orally or in writing, and without limits. “We are the only ones to accept companies labeled as being at risk, which are most often rejected by legal protection insurers on the market. These are primarily companies active in real estate. Paradoxically, we are also the only legal protection to enter into the matter when it comes to attacking the opposing party. These different points allow us to welcome everyone,” defends Mathieu Parreaux.

In addition to legal protection, the firm meets the tailor-made needs of SMEs, which represent 75% of its clientele. “Starting a business requires funds and 99% of SMEs start their business without contracts or general conditions, or with models taken from the internet, which is extremely dangerous. Whether they are partnerships or capital companies, we structure our prices according to their budget in order to allow them to build solid legal positions from the start,” explains Mathieu Parreaux.

The service is indeed targeted at SMEs with corporate law, contract law, tax law or prosecution law. “Our entire legal apparatus is built to support SMEs from A to Z, advising them on their structure, drafting their contracts, general conditions, etc. And also for more specific questions, in the event of a merger, acquisition, or transformation of companies,” concludes the lawyer.

14 November, 2024 02:00PM

Stefano Zacchiroli

In memory of Lunar

In memory of Lunar

I've had the incredible fortune to share the geek path of Lunar through life on multiple occasions. First, in Debian, beginning some 15+ years ago, where we were fellow developers and participated in many DebConf editions together.

Then, on the deontology committee of Nos Oignons, a non-profit organization initiated by Lunar to operate Tor relays in France. This was with the goal of diversifying relay operators and increasing access to censorship-resistance technology for everyone in the world. It was something truly innovative and unheard of at the time in France.

Later, as a member of the steering committee of Reproducible Builds, a project that Lunar brought to widespread geek popularity with a seminal "Birds of a Feather" session at DebConf13 (and then many other talks with fellow members of the project in the years to come). A decade later, Reproducible Builds is having a major impact throughout the software industry, primarily due to growing fears about the security of the software supply chain.

Finally, we had the opportunity to recruit Lunar a couple of years ago at Software Heritage, where he insisted on working until he was able to, as part of a team he loved, and that loved him back. In addition to his numerous technical contributions to the initiative, he also facilitated our first ever multi-day team seminar. The event was so successful that it has been confirmed as a long-awaited yearly recurrence by all team members.

I fondly remember one of the last conversations I had with Lunar, a few months ago, when he told me how proud he was not only of having started Nos Oignons and contributed to the ignition of Reproducible Builds, but specifically about the fact that both initiatives were now thriving without being dependent on him. He was likely thinking about a future world without him, but also realizing how impactful his activism had been on the past and present world.

Lunar changed the world for the better and left behind a trail of love and fond memories.

Che la terra ti sia lieve, compagno.

--- Zack

14 November, 2024 01:56PM

November 13, 2024

hackergotchi for Daniel Pocock

Daniel Pocock

Jonathan Carter & Debian betrayed Joel Espy Klecker

In the last blog about Klecker, I looked at how Debianists deceived the community about their knowledge of his illness. In fact, while Klecker was alive, it looks like they deceived him too and after he died, they bounced the cheque, metaphorically.

Looking at Debian mailing lists today we frequently see people using gmail.com addresses to obfuscate their identity and who they work for.

Back in the days of Klecker and Bruce Perens, it was more common for people to use an email address associated with their employment or institution. Perens had used his pixar.com email address and Mark Shuttleworth used a thawte.com email address. We can look through the debian-private archives that have been disclosed in recent years and see many other company names too.

Was young Klecker starstruck by these big names, and was this a reason he was willing to work for free while bedridden with a terrible illness? Perhaps not. I found various pieces of evidence suggesting why Klecker was willing to work on Debian without payment. This email from debian-private gives us a hint about how much weight Klecker put on the names/trademarks of institutions in email addresses:

Subject: Re: State of the project
Date: Thu, 20 Nov 2014 18:31:09 +0100
From: Marc Haber <mh+debian-private@zugschlus.de>
Organization: private site, see http://www.zugschlus.de/ for details
To: debian-private@lists.debian.org

On Tue, 18 Nov 2014 23:43:06 +1000, Anthony Towns <aj@erisian.com.au>
wrote:
>?   "?Debian always was known for its communication "style". There were
>even shirts sold
><http://www.infodrom.org/Debian/events/LinuxTag2002/t-shirts.html> in
>memory of Espy Klecker with a quote he is known for: Morons. I'm surrounded
>by morons."
>
>That's totally my favourite Debian shirt. 

Mine as well. And it would be soooo un-CoC-compliant!

Greetings
Marc
-- 
-------------------------------------- !! No courtesy copies, please !! -----
Marc Haber         |   " Questions are the         | Mailadresse im Header
Mannheim, Germany  |     Beginning of Wisdom "     | http://www.zugschlus.de/
Nordisch by Nature | Lt. Worf, TNG "Rightful Heir" | Fon: *49 621 72739834


-- 
Please respect the privacy of this mailing list. Some posts may be declassified
3 years after posting as per http://www.debian.org/vote/2005/vote_002

Here is the shirt:

Joel Espy Klecker, T-shirt, morons

If Klecker felt these people were morons, why did he hang around?

The answer is on Klecker's web site which has been preserved for us like a time capsule at the Wayback Machine.

It is on the bottom of every page too.

The web site is at www.espy.org.

At the bottom of the page we can see a copy of the logo for the Free Speech Online Blue Ribbon Campaign from the EFF.

It goes much further than Klecker. After Klecker died, they named a server after him, klecker.debian.org and people were keen to place their own personal web sites and opinions there. Here is an example of a message that appeared after the September 11 attacks, which initiated many other private discussions too.

Subject: On WTC events sympathize or comdemnation expression
Date: Wed, 12 Sep 2001 19:52:13 -0300
From: Pablo Lorenzzoni <spectra@debian.org>
Reply-To: spectra@debian.org
Organization: Projeto Debian
To: debian-private@lists.debian.org

Hello ALL!

	I were watching this nonsense flame-war about a possible Debian manifestation regarding WTC events. Let me state my position:

1 - I am not against any way of life... let it be islamic, catholic, agnostic, american, whatever...
2 - I am against every violation of Human Rights. It doesn't matter which circunstances surround it.
3 - I believe Debian Project is not a political organization per se. But it is very much clear (at least to me), that been the *only* (AFAIK) open-source distribution of a universal OS, Debian project have political strength. This also means we have a political responsability, just like the ones who vote for president but weren't ever candidate, or political-faction-affiliate.
4 - I'd like to express my sympathy to the victims of every Human Rights violation... but, of course, it is out of my reach. However, I **do** want to express it every chance I've got. This is one good chance.
5 - I believe that there's no distinction between technical and other fields. After all, the humans made everything happen. In a moment like the present one, these "walls" of "we are a technical community, so we have nothing to do with it" just don't apply at all. First of all we are humans.... then geeks.
6 - The first thing I've done as soon as I heard what happened was to try to find out if any of us were among the victims. AFAIK we are all safe. The reason I've mentioned it is that if one of us were hurt, probably this discussion would never start and our main webpage would be all black for the next 30 days.... The fact we are having this discussion scaries me very much. Are we loosing our humanity?

	Well... once this said, I think a polite, well-written expression of sympathy in our main project webpage would be appreciatted by everybody. Not a comdemnation. Not pointing fingers. Just a brief statement that we think human lifes are too important to be ended the way those hundreds ended.
	Maybe there're people here that disagree with me... maybe this never reach our main webpage... so, in my webspace under klecker, I've already pointed to Orange Ribbon Campaign Against Terrorism (same way I've pointed to EFF's Blue Ribbon one).... I suggest that everybody that agrees with me do the same with the webpages they host (not just under klecker).... the URL is http://www.comnet.com.br/or/

	Now, please: observe that I am behind a thick wall of amianthus and all flames will go straight and silently to /dev/null ;-)
	Feel free to quote or Cc this message.

	[]s

	Pablo
-- 
Pablo Lorenzzoni (Spectra) <spectra@debian.org>
GnuPG PubKey at search.keyserver.net (Key ID: 268A084D)
Webpage: http://people.debian.org/~spectra/ 

What we can see from this email is that people did not agree with each other on certain topics but they could still post blogs about those topics. Many people supported the Blue Ribbon Campaign. The Debian Social Contract, clause 3, states "We will not hide problems" and many people interpret that as a free speech philosophy.

It looks like Joel Espy Klecker was one of many people who gave their time and effort to co-authorship of Debian based on a philosophical belief that they were contributing to a free future for humanity.

Given that Klecker knew he didn't have long to live (he even mentioned his imminent death before dying), the time that he contributed had a greater value than the time other people of similar skill level contribute.

When the Code of Conduct gaslighting was imposed upon Debian, the people who pushed for that CoC were bouncing the cheque that was due to Debian's founders and earliest co-authors.

Only 25% of Debianists actually consented to the CoC. Those who voted NO and those who did not vote at all did not give consent. Klecker, being deceased, was unable to consent. His copyright interest in Debian would have passed to his family and technically, they would have the right to consent in his place up to the point where his copyright expires in the year 2070. They were never consulted or asked if they consent to this retrospective change to the agreement between authors.

We can see that Klecker's beliefs were betrayed again when Debian listmasters began censoring people for using phrases like "wayward communications".

After this outbreak of fascism on debian.org mailing lists, people moved the metaphors and serious discussions to other web sites. Financial reports from Software in the Public Interest, Inc. show over $120,000 spent on legal fees, which appears to coincide with the censorship of domain names.

The censorship decision, with overtones of Nazism, was another insult to Klecker's legacy. Oddly enough, the censorship decision was signed on World Press Freedom Day.

This is why it is so important for me to document the story of Joel Espy Klecker & Debian. As we are all co-authors, we are all in a relationship with each other under copyright law until 70 years after the last one of us dies. We can't ignore that or let somebody's girlfriend come along and snuff out our moral rights for the sake of pretending to be a family all the time.

The tale of Enron's pension scheme

During the Enron era, Enron employees were encouraged to invest their pension funds in the share market, and many purchased shares of their own employer, Enron.

From the New York Times:

The lawsuit says that Enron schemed to pump up the price of the stock artificially and violated its fiduciary duty to its employees by failing to act in their best interests.

Developers pooling our copyright interests into the co-authorship of Debian GNU/Linux are, in some ways, like the Enron employees pooling their pension assets into the stock of their employer.

When Klecker was contributing to Debian, he thought he was advancing Free Speech Online, and when Enron employees contributed to the 401k pension scheme, they thought they were advancing their future retirement interests. In both cases, for the developers and the Enron employees alike, our futures are being ripped out from underneath us when small groups of people change the rules or rig the system.

Red Hat refusal to release source code for RHEL

Red Hat's refusal to release source code for RHEL also feels like a betrayal of the principles that encouraged unpaid volunteers to co-author Fedora and RHEL in the first place. The copyright interests of Fedora joint authorship are very similar to the interests of Debian joint authorship.

Read more about Red Hat unilaterally restricting source code access without consent of the joint authors.

Did Debianists set out to deceive us from the outset or was the decision made retrospectively?

This point is not really clear.

Looking through the leaked debian-private emails gives some hints that some people anticipated fooling future contributors.

Some of these discussions only took place after Joel Espy Klecker was already subscribed to debian-private so he may have been aware this type of thing was going on or could happen in the future.

Please see the chronological history of how the Debian harassment and abuse culture evolved.

13 November, 2024 11:00AM

Russell Coker

Modern Sleep

Julius wrote an insightful blog post about the “modern sleep” issue with Windows [1]. Basically Microsoft decided that the right way to run laptops is to never entirely sleep, which uses more battery but gives better options for waking up and doing things. I agree with Microsoft in concept; this is a problem that can be solved. A phone can run for 24+ hours without ever fully sleeping; a laptop has a more power-hungry CPU and peripherals but also a much larger battery, so it should be able to do the same. Some of the reviews for Snapdragon Windows laptops claim up to 22 hours of actual work without charging! So having suspend not really stop the system should be fine.

The ability of a phone to never fully sleep is a change in the quality of the usage experience: you can access it and have it respond immediately, and all manner of services can be checked for new updates which may require a notification to the user. The XMPP protocol (AKA Jabber) was invented in 1999, which was before laptops were common, and Instant Message systems were common long before then. But using Jabber or another IM system on a desktop was a very different experience to using it on a laptop, and using it on a phone is different again. “Modern sleep” allows laptops to act like phones in regard to such messaging services. Currently I have Matrix IM clients running on my Android phone and Linux laptop; if I get a notification that requires much typing to respond, I get out my laptop. If I had an ARM based laptop that never fully shut down I would have much less need for Matrix on a phone.

Making “modern sleep” popular will lead to more development of OS software to work with it. For Linux this will hopefully mean that regular Linux distributions (as opposed to Android which while running a Linux kernel is very different to Debian etc) get better support for such things and therefore become more usable on phones. Debian on a Librem 5 or PinePhonePro isn’t very usable due to battery life issues.
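On Linux you can already see which suspend variants the kernel offers via sysfs; “s2idle” (suspend-to-idle) is the closest analogue of Windows “modern sleep”. A minimal check (the exact values depend on your kernel and firmware):

```shell
# Show the supported suspend modes; the bracketed entry is the one in use.
# Typical output looks something like: s2idle [deep]
if [ -r /sys/power/mem_sleep ]; then
    cat /sys/power/mem_sleep
else
    echo "no /sys/power/mem_sleep (kernel without suspend-to-RAM support?)"
fi
```

Writing “s2idle” to that file (as root) switches the system to suspend-to-idle, at the cost of higher battery drain while suspended.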

A laptop with an LTE card can be used for full mobile phone functionality, and with “modern sleep” this is a viable option. I am tempted to make a laptop with an LTE card and a bluetooth headset a replacement for my phone. Some people will say “what if someone tries to call you when it’s not convenient to have your laptop with you”; my response is “what if people learn to not expect me to answer the phone at any time, as they managed that in the 90s”. Seriously, SMS or Matrix me if you want an instant response, and if you want a long chat schedule it via SMS or Matrix.

Dell has some useful advice about how to use their laptops (and probably most laptops from recent times) in this regard [2]. You can’t close the lid before unplugging the power cable; you have to unplug first and then close. You shouldn’t put a laptop in a sealed bag for travel either. This is a terrible situation: you can put a tablet in a bag without taking any special precautions when unplugging, and laptops should work the same. The end result of what Microsoft, Dell, Intel, and others are doing will be good, but they are making some silly design choices along the way! I blame Intel mostly for selling laptop CPUs with TDPs >40W!

For an amusing take on this Linus Tech Tips has a video about being forced to use MacBooks by Microsoft’s implementation of Modern Sleep [3].

I’ll try out some ARM laptops in the near future and blog about how well they work on Debian.

13 November, 2024 10:10AM by etbe

Nazi.Compare

Alexander Wirt (formorer), Wayward people & Debian censorship

Every few days somebody asks me what was the wayward word or comment that snowballed into Debian's $120,000 legal bills.

We know that in the case of Dr Norbert Preining, he was punished for using the word "it" as a pronoun for a person. Dr Preining's native language is not English and he doesn't live in a country where English has a significant role.

Back in the day, the German administration we came to know as Nazis was obsessed with both censorship and the micro-managing of language. Even in choosing a word for journalists (Schriftleiter) they were very conscious of the implications of the word that they chose.

When we talk about the Nazis in English, sometimes we use the original German word and sometimes we use an English word. For example, the Germans used the phrase Endlösung der Judenfrage and in English we translate it as Final Solution to the Jewish question. There was no "question" (Frage) as such; the phrase simply obfuscates the reference to genocide.

Alexander Wirt (formorer), an employee of NetApp, is one of the Debian mailing list censors. His role could be thought of like those journalists and newspaper editors who agreed to become trained and registered as good Schriftleiter.

The word wayward is used in various contexts. For example, in an article about the racist Utopia, they tell us who would be exterminated and it wasn't just the Jews and gypsies:

These included, on the one hand, members of their own 'Aryan race' who they considered weak or wayward (such as the 'congenitally sick', the 'asocial', and homosexuals), and on the other those who were defined as belonging to 'foreign races'.

The word wayward is a very general adjective that can be used in many contexts. For example, it has also been used to describe people who are ethnically Jewish but don't identify as such:

Wayward Jews, God-fearing Gentiles, or Curious Pagans? Jewish Normativity and the Sambathions

... At stake was whether these people were Jews and the ways in which diaspora Jews and their host communities influenced one another ...

Back in the day, it looks like being wayward, whether Jewish or LGBT, would attract undue attention from the state.

Now, in some groups like Debian, it appears the LGBT agitators have taken things to the opposite extreme. Even referring to a wayward horse that I saw escaping last week would get me in trouble, just as this reference to wayward communication caused a knee-jerk fascist reaction from Debian censorship.

Is there some secret list of words that we are not allowed to use any more? When I heard about the defamation of Sonny Piers by GNOME fascism and their refusal to tell us why they attacked him, I wondered if it was something trivial like this: did Sonny use a word like "it" or "wayward" without permission?

When a family, workplace or community works like this, where people are attacked for things they had no way to anticipate, we use the metaphor that you feel like you are walking on eggshells. Metaphors have been banned too.

Subject: Re: Your attitude on debian mailinglists
Date: Sat, 29 Dec 2018 14:59:04 +0100
From: Alexander Wirt <formorer@formorer.de>
To: Daniel Pocock <daniel@pocock.pro>
CC: listmaster@lists.debian.org

[ ... snip various iterations of threats and blackmail ... ]

> Hi Alex,
> 
> Please tell me which email and which insults you are referring to
<5c987a44-b6c6-ce21-020c-9402940f2fde@pocock.pro>

That is exactly that type of mail I was talking about. Starting with the subject and continueing with the body. 
I don't want to get too much into details, but phrases like
"sustained this state of hostility" or "wayward" are not acceptable.
Especially since I asked you to cool down and step back a bit. 
Alex

Alexander wants to create a fake community where everybody pretends to be happy all the time, even when we are targeted with insults, threats, plagiarism and other offences by the people who think they are holier-than-thou.

13 November, 2024 09:00AM

hackergotchi for Louis-Philippe Véronneau

Louis-Philippe Véronneau

Montreal's Debian & Stuff - November 2024

Our Debian User Group met on November 2nd after a somewhat longer summer hiatus than normal. It was lovely to see a bunch of people again and to be able to dedicate a whole day to hacking :)

Here is what we did:

lavamind:

  • reproduced puppetdb FTBFS #1084038 and reported the issue upstream
  • uploaded a new upstream version for pgpainless (1.6.8-1)
  • uploaded a new revision for ruby-moneta (1.6.0-3)
  • sent an inquiry to the backports team about #1081696

pollo:

  • reviewed & merged many lintian merge requests, clearing out most of the queue
  • uploaded a new lintian release (1.120.0)
  • worked on unblocking the revival of lintian.debian.org (many thanks to anarcat and pkern)
  • apparently (kindly) told people to rtfm at least 4 times :)

anarcat:

LeLutin:

  • opened an RFS on the ruby team mailing list for the new upstream version of ruby-necromancer
  • worked on packaging the new upstream version of ruby-pathspec

tvaz:

  • did AM (Application Manager) work

tassia:

  • explored the Debian Jr. project (website, wiki, mailing list, salsa repositories)
  • played a few games for Nico's entertainment :-)
  • built and tested a Debian Jr. live image

Pictures

This time around, we went back to Foulab. Thanks for hosting us!

As always, the hacklab was full of interesting stuff and I took a few (bad) pictures for this blog post:

  • Two old video cameras and a 'My First Sony' tape recorder
  • An ALP HT-286 machine with a very large 'turbo' button
  • A New Hampshire 'IPROUTE' vanity license plate

13 November, 2024 12:25AM by Louis-Philippe Véronneau

November 12, 2024

hackergotchi for Paul Tagliamonte

Paul Tagliamonte

Complex for Whom?

In basically every engineering organization I’ve ever regarded as particularly high functioning, I’ve sat through one specific recurring conversation – a conversation about “complexity”. Things are good or bad because they are or aren’t complex, architectures need to be redone because they’re too complex – some refactor of whatever it is won’t work because it’s too complex. You may have even been a part of some of these conversations – or even been the one advocating for simple light-weight solutions. I’ve done it. Many times.

Rarely, if ever, do we talk about complexity within its rightful context – complexity for whom. Is a solution complex because it’s complex for the end user? Is it complex if it’s complex for an API consumer? Is it complex if it’s complex for the person maintaining the API service? Is it complex if it’s complex for someone outside the team maintaining it to understand? Complexity within a problem domain, I’ve come to believe, is fairly zero-sum – there’s a fixed amount of complexity in the problem to be solved, and you can choose to either solve it, or leave it for those downstream of you to solve on their own.

That being said, while I believe there is a lower bound in complexity to contend with for a problem, I do not believe there is an upper bound to the complexity of solutions possible. It is always possible, and in fact, very likely that teams create problems for themselves while trying to solve a problem. The rest of this post is talking to the lower bound. When getting feedback on an early draft of this blog post, I’ve been informed that Fred Brooks coined a term for what I call “lower bound complexity” – “Essential Complexity”, in the paper “No Silver Bullet—Essence and Accident in Software Engineering”, which is a better term and can be used interchangeably.

Complexity Culture

In a large enough organization, where the team is high functioning enough to have and maintain trust amongst peers, members of the team will specialize. People will begin to engage with subsets of the work to be done, and begin to have their efficacy measured against that part of the organization’s problems. Incentives shift, and over time it becomes increasingly likely that two engineers may have two very different priorities when working on the same system together. Someone accountable for uptime and tasked with responding to outages will begin to resist changes. Someone accountable for rapidly delivering features will resist gates between them and their users. Companies (either wittingly or unwittingly) will deal with this by tasking engineers with both production (feature development) and operational tasks (maintenance), so the difference in incentives isn’t usually as bad as it could be.

When we get a bunch of folks from far-flung corners of an organization in a room, fire up a slide deck and throw up some aspirational to-be architecture diagram in order to get a sign-off to solve some problem (be it someone needs a credible promotion packet, new feature needs to get delivered, or the system has begun to fail and needs fixing), the initial reaction will, more often than I’d like, start to devolve into a discussion of how this is going to introduce a bunch of complexity, going to be hard to maintain, why can’t you make it less complex?

Right around here is when I start to try and contextualize the conversation happening around me – understand what complexity is being discussed, and who is taking on that burden. Think about who should be owning that problem, and work through the tradeoffs involved. Is it best solved here, or left to consumers (be them other systems, developers, or users)? Should something become an API call’s optional param, taking on all the edge-cases, or should users have to implement the logic using the data you return (leaving everyone else to take on all the edge-cases and maintenance)? Should you process the data, or require the user to preprocess it for you?
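As a toy illustration of that optional-param tradeoff (hypothetical names, not any real API): either the service owns the filtering edge-cases once, or every consumer re-implements them.

```python
# Hypothetical event-listing API, for illustration only.

def _fetch_all():
    # Stand-in for the real data source.
    return [{"ts": 1}, {"ts": 5}]

# Option A: the service takes on the complexity behind an optional parameter.
def list_events(since=None, tz="UTC"):
    """Return events, filtered and timezone-tagged by the service."""
    events = _fetch_all()
    if since is not None:
        events = [e for e in events if e["ts"] >= since]
    return [{**e, "tz": tz} for e in events]

# Option B: return raw data and leave the edge-cases to every caller.
def list_events_raw():
    return _fetch_all()

# With option B, each caller re-implements (and re-debugs) the filter:
recent = [e for e in list_events_raw() if e["ts"] >= 3]
```

Neither option removes the filtering complexity from the problem domain; the choice only moves it across the API boundary.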

Frequently it’s right to make an active and explicit decision to simplify and leave problems to be solved downstream, since they may not actually need to be solved – or perhaps you expect consumers will want to own the specifics of how the problem is solved, in which case you leave lots of documentation and examples. Many other times, especially when it’s something downstream consumers are likely to hit, it’s best solved internal to the system, since the only thing that can come of leaving it unsolved are bugs, frustration and half-correct solutions. This is a grey-space of tradeoffs, not a clear decision tree. No one wants the software manifestation of a katamari ball or a junk drawer, nor does anyone want a half-baked service unable to handle the simplest use-case.

Head-in-sand as a Service

Popoffs about how complex something is, are, to a first approximation, best understood as meaning “complicated for the person making comments”. A lot of the #thoughtleadership believe that an AWS hosted EKS k8s cluster running images built by CI talking to an AWS hosted PostgreSQL RDS is not complex. They’re right. Mostly right. This is less complex – less complex for them. It’s not, however, without complexity and its own tradeoffs – it’s just complexity that they do not have to deal with. Now they don’t have to maintain machines that have pesky operating systems or hard drive failures. They don’t have to deal with updating the version of k8s, nor ensuring the backups work. No one has to push some artifact to prod manually. Deployments happen unattended. You click a button and get a cluster.

On the other hand, developers outside the ops function need to deal with troubleshooting CI, debugging access control rules encoded in turing complete YAML, permissions issues inside the cluster due to whatever the fuck a service mesh is, everyone needs to learn how to use some k8s tools they only actually use during a bad day, likely while doing some x.509 troubleshooting to connect to the cluster (an internal only endpoint; just port forward it) – not to mention all sorts of rules to route packets to their project (a single repo’s binary being run in 3 containers on a single vm host).

Beyond that, there’s the invisible complexity – complexity on the interior of a service you depend on. I think about the dozens of teams maintaining the EKS service (which is either run on EC2 instances, or alternately, EC2 instances in a trench coat, moustache and even more shell scripts), the RDS service (also EC2 and shell scripts, but this time accounting for redundancy, backups, availability zones), scores of hypervisors pulled off the shelf (xen, kvm) smashed together with the ones built in-house (firecracker, nitro, etc) running on hardware that has to be refreshed and maintained continuously. Every request processed by network ACL rules, AWS IAM rules, security group rules, using IP space announced to the internet wired through IXPs directly into ISPs. I don’t even want to begin to think about the complexity inherent in how those switches are designed. Shitloads of complexity to solve problems you may or may not have, or even know you had.

What’s more complex? An app running in an in-house 4u server racked in the office’s telco closet in the back running off the office Verizon line, or an app running four hypervisors deep in an AWS datacenter? Which is more complex to you? What about to your organization? In total? Which is more prone to failure? Which is more secure? Is the complexity good or bad? What type of Complexity can you manage effectively? Which threaten the system? Which threaten your users?

COMPLEXIVIBES

This extends beyond Engineering. Decisions regarding “what tools are we able to use” – be them existing contracts with cloud providers, CIO mandated SaaS products, a list of the only permissible open source projects – will incur costs in terms of expressed “complexity”. Pinning open source projects to a fixed set makes SBOM production “less complex”. Using only one SaaS provider’s product suite (even if it’s terrible, because it has all the types of tools you need) makes accreditation “less complex”. If all you have is a contract with Pauly T’s lowest price technically acceptable artisanal cloudary and haberdashery, the way you pay for your compute is “less complex” for the CIO shop, though you will find yourself building your own hosted database template, mechanism to spin up a k8s cluster, and all the operational and technical burden that comes with it. Or you won’t, and you’ll make it everyone else’s problem in the organization. Nothing you can do will solve for the fact that you must now deal with this problem somewhere, because it was less complicated for the business to put the workloads on the existing contract with a cut-rate vendor.

Suddenly, the decision to “reduce complexity” because of an existing contract vehicle has resulted in a huge amount of technical risk and maintenance burden being onboarded. Complexity you would otherwise externalize has now been taken on internally. In a large enough organization (specifically, in this case, I’m talking about you, bureaucracies), this is largely ignored or accepted as normal, since the personnel cost is understood to be free to everyone involved. Doing it this way is more expensive, more work, less reliable and less maintainable, and yet, somehow, is, in a lot of ways, “less complex” to the organization. It’s particularly bad with bureaucracies, since screwing up a contract will get you into much more trouble than delivering a broken product, leaving basically no reason for anyone to care to fix this.

I can’t shake the feeling that for every story of technical mandates gone awry, somewhere just out of sight there’s a decisionmaker optimizing for what they believe to be the least amount of complexity – least hassle, fewest unique cases, most consistency – as they can. They freely offload complexity from their accreditation and risk acceptance functions through mandates. They will never have to deal with it. That does not change the fact that someone does.

TC;DR (TOO COMPLEX; DIDN’T REVIEW)

We wish to rid ourselves of systemic Complexity – after all, complexity is bad, simplicity is good. Removing upper-bound own-goal complexity (“accidental complexity” in Brooks’s terms) is important, but once you hit the lower bound, the tradeoffs become zero-sum. Removing complexity from one part of the system means that somewhere else – maybe outside your organization or in a non-engineering function – it must grow back. Sometimes the opposite is the case, such as when a previously manual business process is automated. Maybe that’s a good idea. Maybe it’s not. All I know is that what doesn’t help the situation is conflating complexity with everything we don’t like – legacy code, maintenance burden or toil, cost, delivery velocity.

  • Complexity is not the same as proclivity to failure. The most reliable systems I’ve interacted with are unimaginably complex, with layers of internal protection to prevent complete failure. This has its own set of costs which other people have written about extensively.
  • Complexity is not cost. Sometimes the cost of taking all the complexity in-house is less, for whatever value of cost you choose to use.
  • Complexity is not absolute. Something simple from one perspective may be wildly complex from another. The impulse to burn down complex sections of code is helpful to have generally, but sometimes things are complicated for a reason, even if that reason exists outside your codebase or organization.
  • Complexity is not something you can remove without introducing complexity elsewhere. Just as not making a decision is a decision itself; choosing to require someone else to deal with a problem rather than dealing with it internally is a choice that needs to be considered in its full context.

Next time you’re sitting through a discussion and someone starts to talk about all the complexity about to be introduced, I want to pop up in the back of your head, politely asking what does complex mean in this context? Is it lower bound complexity? Is this complexity desirable? Does what they’re saying mean something along the lines of I don’t understand the problems being solved, or something along the lines of this problem should be solved elsewhere? Do they believe this will result in more work for them in a way that you don’t see? Should this not be solved at all, by changing the bounds of what we accept or redefining the understood limits of this system? Is the perceived complexity a result of a decision elsewhere? Who’s taking this complexity on – or, more to the point, is failing to address complexity required by the problem leaving it to others? Does it impact others? How specifically? What are you not seeing?

What can change?

What should change?

12 November, 2024 08:21PM

Nazi.Compare

Evolution of euthanasia & WIPO UDRP similarities exposed by W. Scott Blackmer

Wikipedia has a long article on Aktion T4, the Nazi-era euthanasia program. It is not necessary to read the whole thing, simply picking out a couple of lines gives us the gist of it:

From August 1939, the Interior Ministry registered children with disabilities, requiring doctors and midwives to report all cases of newborns with severe disabilities; the 'guardian' consent element soon disappeared.

...

The reports were assessed by a panel of medical experts, of whom three were required to give their approval before a child could be killed.

...

When the Second World War began in September 1939, less rigorous standards of assessment and a quicker approval process were adopted. Older children and adolescents were included and the conditions covered came to include ...

In effect, it became a slippery slope. The euthanasia program wasn't even well intentioned to begin with, but once the legal framework existed, enthusiasts were constantly looking for ways to evade checks and balances.

Now we see the same slippery slope phenomenon with the WIPO UDRP.

In the beginning, it was an attempt to prevent extreme and obvious acts of cybersquatters hijacking trademarks.

Have a look at the most recent Debian UDRP defamation:

One of the disputed domain names, <debian.video>, shows videos of the Respondent at a DEBIAN development conference in 2013, as well as audio recordings from software development conferences in 2012.

In fact, Debian funds paid for volunteers to travel to those conferences and give the presentations. There is nothing in "bad faith" about publishing the videos of those events.

Most websites at the disputed domain names display the Complainant’s trademarked “swirl” logo in the upper left corner

In fact, the Debian logo page tells us that it is an open use logo: there is an unrestricted license to use it. Therefore, what we see in practice is that WIPO UDRP lawyers such as W. Scott Blackmer are well and truly in the slippery slope phase. Here is the open logo license:

The Debian Open Use Logo(s) are Copyright (c) 1999 Software in the Public Interest, Inc., and are released under the terms of the GNU Lesser General Public License, version 3 or any later version, or, at your option, of the Creative Commons Attribution-ShareAlike 3.0 Unported License.

W. Scott Blackmer was clearly informed that it was an open use logo but he simply ignored the evidence in the response.

The Aktion T4 report notes:

More pressure was placed on parents to agree to their children being sent away. Many parents suspected what was happening and refused consent, especially when it became apparent that institutions for children with disabilities were being systematically cleared of their charges. The parents were warned that they could lose custody of all their children and if that did not suffice, the parents could be threatened with call-up for 'labour duty'

Clearly, the nasty accusations of "bad faith" are being used to scare other joint authors of large copyrighted works into believing that they can't use the name of their work, or they will be publicly shamed on the WIPO web site.

In some cases families could tell that the causes of death in certificates were false, e.g. when a patient was claimed to have died of appendicitis, even though his appendix had been removed some years earlier.

These comments about appendicitis sound a lot like the open use logo case. If the appendix had been removed there can not be appendicitis. If the open logo can be used under a license then there can not be bad faith.

It appears that the Nazi euthanasia doctors and some WIPO UDRP panels are simply pushing headstrong over the top of the facts and working to targets. The Nazis had targets for killing, and the UDRP panels appear obsessed with censoring.

Many domain name owners are only paying a small fee of $10 to $20 per year for their domain name. The cost of paying lawyers to respond to every frivolous UDRP demand is disproportionate to the cost of the domain name. Furthermore, the cost of going to court to appeal a blatantly wrong defamation is even more astronomically out of proportion to the cost of the domain name.

Therefore, when dealing with volunteers, the WIPO UDRP lawyers seem to know they can get away with anything.

The report on child euthanasia notes that children were still being euthanised even after allied troops had taken over:

The last child to be killed under Aktion T4 was Richard Jenne on 29 May 1945, in the children's ward of the Kaufbeuren-Irsee state hospital in Bavaria, Germany, more than three weeks after US Army troops had occupied the town.

In other words, the medical panels and the legal panels that make these decisions seem to be operating out of habit. Even when the legal environment changed and the territory was under western law, the medical and legal processes in the clinics continued to kill out of habit alone.

When some institutions refused to co-operate, teams of T4 doctors (or Nazi medical students) visited and compiled the lists, sometimes in a haphazard and ideologically motivated way.

In the W. Scott Blackmer defamation, section 6 concludes: "The Panel finds that the Complainant has established the third element of the Policy with respect to all fourteen of the disputed domain names."

In other words, W. Scott Blackmer hasn't really looked for the merits of the content on a site-by-site basis, he has decided to extinguish them all with a single brush stroke. In the following paragraph, like the Nazi doctors, he compiles a big list: "For the foregoing reasons, in accordance with paragraphs 4(i) of the Policy and 15 of the Rules, the Panel orders that the disputed domain names <debian.chat>, <debiancommunity.org>, <debian.day>, <debian.family>, <debian.finance>, <debian.giving>, <debiangnulinux.org>, <debian.guide>, <debian.news>, <debian.plus>, <debianproject.community>, <debianproject.org>, <debian.team>, and <debian.video> be transferred to the Complainant."

Ironically, one of the domains that the WIPO UDRP panel was so eager to censor was the former debian.day site with the story of the Debian Day Volunteer Suicide. This is significant because the death appears to be part of a wider suicide cluster, giving weight to the argument that discussion of the suicides is in the public interest. A single, one-off case of suicide may be a private matter but a suicide timed around the project anniversary and forming part of a cluster suggests there is good cause for public discussion.

12 November, 2024 06:30PM

Sven Hoexter

fluxcd: Validate flux-system Root Kustomization

Not entirely sure how people use fluxcd, but I guess most people have something like a flux-system Flux Kustomization as the root from which they add more Flux Kustomizations to their Kubernetes cluster. Here all of that lives in a monorepo, and since we're all human, people figure out different ways to break it, which brings the reconciliation of the flux controllers down. Thus we set out to do some pre-flight validations.

Note1: We do not use flux variable substitutions for those root kustomizations, so if you use those, you have to put additional work into the validation and pipe things through flux envsubst.

First Iteration: Just Run kustomize Like Flux Would Do It

With a folder structure where we've a cluster folder with subfolders per cluster, we just run a for loop over all of them:

for CLUSTER in ${CLUSTERS}; do
    pushd clusters/${CLUSTER}

    # validate if we can create and build a flux-system like kustomization file
    kustomize create --autodetect --recursive
    if ! kustomize build . -o /dev/null 2> error.log; then
        echo "Error building flux-system kustomization for cluster ${CLUSTER}"
        cat error.log
    fi

    popd
done

Second Iteration: Make Sure Our Workload Subfolders Have a kustomization.yaml

Next someone figured out that you can delete some YAML files from a workload subfolder, including the kustomization.yaml, but not all of them. That left behind a resource definition which lacks some other referenced objects, but is still happily included in the root kustomization by kustomize create and flux, which of course did not work.

Thus we started to catch that as well in our growing for loop:

for CLUSTER in ${CLUSTERS}; do
    pushd clusters/${CLUSTER}

    # validate if we can create and build a flux-system like kustomization file
    kustomize create --autodetect --recursive
    if ! kustomize build . -o /dev/null 2> error.log; then
        echo "Error building flux-system kustomization for cluster ${CLUSTER}"
        cat error.log
    fi

    # validate if we always have a kustomization file in folders with yaml files
    for CLFOLDER in $(find . -type d); do
        test -f ${CLFOLDER}/kustomization.yaml && continue
        test -f ${CLFOLDER}/kustomization.yml && continue
        if [[ $(find ${CLFOLDER} -maxdepth 1 \( -name '*.yaml' -o -name '*.yml' \) -type f|wc -l) != 0 ]]; then
            echo "Error Cluster ${CLUSTER} folder ${CLFOLDER} lacks a kustomization.yaml"
        fi
    done

    popd
done

Note2: I shortened those snippets to the core parts. In our case some things are a bit specific to how we implemented the execution of those checks in GitHub Actions workflows. I hope that's enough to convey the idea of what to check for.
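The kustomization-file check is simple enough to port to other tooling as well. As a rough illustration (hypothetical, not part of our actual pipeline), the same rule can be expressed in a few lines of Python:

```python
from pathlib import Path


def folders_missing_kustomization(root: str) -> list[str]:
    """Return folders under `root` that contain YAML manifests
    but no kustomization.yaml/.yml (mirrors the shell check above)."""
    missing = []
    for folder in [Path(root), *Path(root).rglob("*")]:
        if not folder.is_dir():
            continue
        # any *.yaml or *.yml file directly in this folder?
        has_yaml = any(folder.glob("*.yaml")) or any(folder.glob("*.yml"))
        has_kustomization = (folder / "kustomization.yaml").is_file() \
            or (folder / "kustomization.yml").is_file()
        if has_yaml and not has_kustomization:
            missing.append(str(folder))
    return missing
```

A CI job could call folders_missing_kustomization() once per cluster folder and fail when the returned list is non-empty.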

12 November, 2024 04:05PM

hackergotchi for James Bromberger

James Bromberger

My own little server

In 2004, I was living in London, and decided it was time I had my own little virtual private server somewhere online. As a Debian developer since the start of 2000, it had to be Debian, and it still is… This was before “cloud” as we know it today. Virtual Private Servers (VPS) was a … Continue reading "My own little server"

12 November, 2024 12:34PM by james

Swiss JuristGate

Litigium: Nati Gomez, Mathieu Parreaux, jurist who didn't pass bar exam on cross-border radio program with Benjamin Smadja

Radio Lac is the third most popular radio station in the Lake Geneva region covering Switzerland and France.

The reception area includes all the lakeside cities of Geneva, Nyon, Morges, Lausanne, Vevey and Montreux as well as the cross-border regions.

The transmitter for the region of Geneva is actually situated on Mount Saléve, at the cable car station in French territory. The inhabitants of French cities Annemasse, Thonon, Evian, Saint-Julien-en-Genevois, Saint-Genis-Pouilly, Ferney-Voltaire, Gex and Divonne are in the reception area.

The jurists of Mathieu Parreaux published several documents about their legal insurance services for cross-border commuters and residents of France.

In our last blog we discovered that Monsieur Parreaux didn't pass the bar exam in either Switzerland or France.

Each week, Mathieu Parreaux and his colleague Nati Gomez responded to legal questions on the radio program of Benjamin Smadja, Radio Lac (Media One Group).

The insurance company gained 20,000 clients. How many clients found Parreaux, Thiébaud & Partners thanks to free publicity on Radio Lac? How many clients killed themselves?

A program from 5 November 2018 where they discuss customs charges for cross-border commuters:

 

Mont Saléve

Daniel Pocock, author of this site, passed the amateur radio exam at age 14

He has provided many services on a voluntary basis since he was 14 years old. Why do the Swiss jurists insult the families of unpaid volunteers? Is that racism?

Daniel Pocock, radio amateur

 

Daniel Pocock, EI7JIB, IRTS, elected

12 November, 2024 11:30AM

Clémence Lamirand published an article in AGEFI, Mathieu Parreaux never passed the bar exam

A news report was published by Clémence Lamirand at the bureau AGEFI.

She wrote (original in French): "The firm is young, like most of the employees who work there. The founder, Mathieu Parreaux, has not yet passed the bar exam. For the moment, the business is his priority; the final exams will come later."

The reporter, Madame Lamirand, doesn't ask difficult questions. Journalists in Switzerland fear criminal prosecution for writing any form of inconvenient truth.

A legal practice under construction

The firm Parreaux, Thiébaud & Partners, based in Geneva, offers legal protection by subscription. Portrait of the very young company.

Clémence Lamirand, 21 May 2018, 20:49

The firm is young, like most of the employees who work there. Its founder, Mathieu Parreaux, has not yet passed his bar exam. For the moment he is giving priority to his business; the final exams will come later. Founded in 2017 and based in Geneva, the legal practice appears to be evolving quickly. Recently, ten jurists were hired. At the start of the year, the firm merged with the Lausanne services company Thiébaud to create the legal practice Parreaux, Thiébaud & Partners. "That company was specialised in insurance," explains Mathieu Parreaux. "We, for our part, had our own competencies in legal protection. Our recent merger now allows us to be present in both fields, in both cantons." The legal practice today employs around twenty people. Among the jurists, some hold the bar certificate (six), others do not. Parreaux, Thiébaud & Partners also works with external, independent lawyers, who can take over when the jurists cannot pursue the defence of their clients, for example in proceedings before a criminal court. "The status of jurist has many advantages, but it does not give access everywhere. We have therefore formed partnerships with some fifteen professionals present in all the cantons of French-speaking Switzerland," says Mathieu Parreaux. "In the future, we would like to cover the whole of Switzerland. For that we must find the right lawyers and the right jurists and encourage them to join our structure. However, we do not want to grow too fast and wish to progress intelligently." A firm that wants to be different from the others: today, Parreaux, Thiébaud & Partners, which also works with notaries, aims to offer broad and affordable legal services.
"That is our whole philosophy," enthuses the young entrepreneur. "Our firm is a unique structure that wants to offer its clients varied services and responsive client support, all at a suitable price." Its founder specialises in contract law, tax law and company law. He has surrounded himself with specialists in various fields. "With varied competencies, our advisers can respond quickly and effectively to our clients," explains Mathieu Parreaux. "We currently cover 44 areas of law." The firm thus offers legal advice and conciliation. The specialists draft all types of legal documents for companies, employment contracts as well as general terms and conditions, for example. A private legal helpline is provided. "We do everything to anticipate and be proactive," the founder summarises. "We try to settle disputes before they escalate." Legal protection by subscription: for some weeks now, Parreaux, Thiébaud & Partners has been offering legal protection, for individuals as well as companies, in the form of a subscription. With a commitment of 3, 5 or 8 years, a company can subscribe to Real-Protect. "We give advice orally but also in writing," says Mathieu Parreaux, "which commits us. What's more, our advice is unlimited. We want to really be there for our clients. Always at a reasonable cost." "The price is what attracted me first," admits Jessy Kadimadio, a client who has just launched a property agency and engaged the firm to draft contracts, "but I was then pleasantly surprised by their availability. I was also won over by the outsider character of this young company."

12 November, 2024 10:00AM

November 11, 2024

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

RcppSpdlog 0.0.19 on CRAN: New Upstream, New Features

Version 0.0.19 of RcppSpdlog arrived on CRAN early this morning and has been uploaded to Debian. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want, written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site.

This release updates the code to version 1.15.0 of spdlog, which was released on Saturday, and contains fmt 11.0.2. It also contains a contributed PR which allows use of std::format under C++20, bypassing fmt (with some post-merge polish too), and another PR correcting a documentation double entry.

The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.19 (2024-11-10)

  • Support use of std::format under C++20 via opt-in define instead of fmt (Xanthos Xanthopoulos in #19)

  • An erroneous duplicate log=level documentation level was removed (Contantinos Giachalis in #20)

  • Upgraded to upstream release spdlog 1.15.0 (Dirk in #21)

  • Partially revert / simplify src/formatter.cpp accomodating both #19 and previous state (Dirk in #21)

Courtesy of my CRANberries, there is also a diffstat report. More detailed information is on the RcppSpdlog page, or the package documentation site. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

11 November, 2024 05:47PM

Antoine Beaupré

Why I should be running Debian unstable right now

So, a common theme on the Internet is that Debian is "so old". And they're right: I am getting close to the stage where I feel a little laggy: I am using a bunch of backports for packages I need, and I'm missing a bunch of other packages that just landed in unstable and didn't make it to backports for various reasons.

I disagree that "old" is a bad thing: we definitely run Debian stable on a fleet of about 100 servers and can barely keep up; I would make it older. And "old" is a good thing: (port) wine and (any) beer need time to age properly, and so do humans, although some humans never seem to grow old enough to find wisdom.

But at this point, on my laptop, I am feeling like I'm missing out. This page, therefore, is an evolving document that is a twist on the classic NewIn game. Last time I played seems to be #newinwheezy (2013!), so really, I'm due for an update. (To be fair to myself, I do keep tabs on upgrades quite well at home and work, which do have their share of "new in", just after the fact.)

New packages to explore

Those tools are shiny new things available in unstable or perhaps Trixie (testing) already that I am not using yet, but I find interesting enough to list here.

  • backdown: clever file deduplicator
  • codesearch: search all of Debian's source code (tens of thousands of packages) from the commandline! (see also dcs-cli, not in Debian)
  • dasel: JSON/YML/XML/CSV parser, similar to jq, but different syntax, not sure I'd grow into it, but often need to parse YML like JSON and failing
  • fyi: notify-send replacement
  • git-subrepo: git-submodule replacement I am considering
  • gtklock: swaylock replacement with bells and whistles, particularly interested in showing time, battery and so on
  • hyprland: possible Sway replacement, but there are rumors of a toxic community (rebuttal, I haven't reviewed either in detail), so approach carefully
  • kooha: simple screen recorder with audio support, currently using wf-recorder which is a more.. minimalist option
  • linescroll: rate graphs on live logs, mostly useful on servers though
  • memray: Python memory profiler
  • ruff: faster Python formatter and linter, a flake8/black/isort replacement, alas not mypy/LSP unfortunately; designed to be run alongside such a tool, which is not possible in Emacs eglot right now, but is possible in lsp-mode
  • sfwbar: pretty status bar, may replace waybar, which i am somewhat unhappy with (my UTC clock disappears randomly)
  • shoutidjc: streaming workstation, currently using butt but it doesn't support HTTPS correctly
  • spytrap-adb: cool spy gear
  • trippy: trippy network analysis tool, kind of an improved MTR
  • yubikey-touch-detector: notifications for when I need to touch my YubiKey

New packages I won't use

Those are packages that I have tested because I found them interesting, but ended up not using, but I think people could find interesting anyways.

  • kew: surprisingly fast music player, parsed my entire library (which is huge) instantaneously and just started playing (I still use Supersonic, for which I maintain a flatpak on my Navidrome server)
  • mdformat: good markdown formatter (think black or gofmt, but for markdown), but it didn't actually do what I needed, and it's not quite as opinionated as it should (or could) be

Backports already in use

Those are packages I already use regularly, which have backports or that can just be installed from unstable:

  • asn: IP address forensics
  • markdownlint: markdown linter, I use that a lot
  • poweralertd: pops up "your battery is almost empty" messages
  • sway-notification-center: used as part of my status bar, yet another status bar basically, a little noisy, stuck in a libc dep update
  • tailspin: used to color logs

Out of date packages

Those are packages that are in Debian stable (Bookworm) already, but that are somewhat lacking and could benefit from an upgrade.

Last words

If you know of cool things I'm missing out of, then by all means let me know!

That said, overall, this is a pretty short list! I have most of what I need in stable right now, and if I wasn't a Debian developer, I don't think I'd be making the jump now. But considering how much easier it is to develop Debian (and how important it is to test the next release!), I'll probably upgrade soon.

Previously, I was running Debian testing (which is why the slug on that article is why-trixie), but now I'm actually considering just running unstable on my laptop directly. It's been a long time since we had any significant instability there, and I can typically deal with whatever happens, except maybe when I'm traveling, and then it's easy to prepare for that (just pin testing).

11 November, 2024 04:17PM

hackergotchi for Gunnar Wolf

Gunnar Wolf

Why academics under-share research data - A social relational theory

This post is a review for Computing Reviews of Why academics under-share research data - A social relational theory, an article published in the Journal of the Association for Information Science and Technology.

As an academic, I have cheered for and welcomed the open access (OA) mandates that, slowly but steadily, have been accepted in one way or another throughout academia. It is now often accepted that public funds mean public research. Many of our universities or funding bodies will demand it, with varying intensity: sometimes they demand research be published in an OA venue, sometimes a mandate will only "prefer" it. Lately, some journals and funding bodies have expanded this mandate toward open science, requiring not only research outputs (that is, articles and books) to be published openly but also the data backing the results to be made public. As a person who has been involved with free software promotion since the mid-1990s, it was natural for me to join the OA movement and to celebrate when various universities adopt such mandates.

Now, what happens after a university or funding body adopts such a mandate? Many individual academics cheer, as it is the "right thing to do." However, the authors observe that this is not really followed thoroughly by academics. What can be observed, rather, is the slow pace or "feet dragging" of academics when they are compelled to comply with OA mandates, or even an outright refusal to do so. If OA and open science are close to the ethos of academia, why aren't more academics enthusiastically sharing the data used for their research? This paper finds a subversive practice embodied in the refusal to comply with such mandates, and explores a hypothesis based on Karl Marx's theory of the productive worker and Pierre Bourdieu's ideas of symbolic capital.

The paper explains that academics, as productive workers, become targets for exploitation: given that it’s not only the academics’ sharing ethos, but private industry’s push for data collection and industry-aligned research, they adapt to technological changes and jump through all kinds of hurdles to create more products, in a result that can be understood as a neoliberal productivity measurement strategy. Neoliberalism assumes that mechanisms that produce more profit for academic institutions will result in better research; it also leads to the disempowerment of academics as a class, although they are rewarded as individuals due to the specific value they produce.

The authors continue by explaining how open science mandates seem to ignore the historical ways of collaboration in different scientific fields, and exploring different angles of how and why data can be seen as “under-shared,” failing to comply with different aspects of said mandates. This paper, built on the social sciences tradition, is clearly a controversial work that can spark interesting discussions. While it does not specifically touch on computing, it is relevant to Computing Reviews readers due to the relatively high percentage of academics among us.

11 November, 2024 02:53PM

hackergotchi for Thomas Lange

Thomas Lange

Using NIS (Network Information Service) in 2024

The topic of this posting already tells you that an old Unix guy tells stories about old techniques.

I'm a happy NIS (formerly YP) user since 30+ years. I started using it with SunOS 4.0, later using it with Solaris and with Linux since 1999.

In the past, a colleague was unhappily using NIS+ when, after a short time, he couldn't log in as root because of some well-known bugs and wrong configs. NIS+ was also much slower than my NIS setup. I know organisations using NIS for more than 80,000 user accounts in 2024.

I know the security implications of NIS but I can live with them, because I manage all computers in the network that have access to the NIS maps. And NIS on Linux offers to use shadow maps, which are only accessible to the root account. My users are forced to use very long passwords.

Unfortunately NIS support for the PAM modules was removed in Debian in pam 1.4.0-13, which means Debian 12 (bookworm) is lacking NIS support in PAM, but otherwise it is still supported. This only affects changing the NIS password via passwd. You can still authenticate users and use other NIS maps.

But yppasswd is deprecated and you should not use it! If you use yppasswd it may generate a new password hash using the old DES crypt algorithm, which is very weak and only uses the first 8 characters of your password. Do not use yppasswd any more! yppasswd only detects DES, MD5, SHA256 and SHA512 hashes, but for me and some colleagues it only creates weak DES hashes after a password change. yescrypt hashes, which are the default in Debian 12, are not supported at all. The solution is to use the plain passwd program.

On the NIS master, you should setup your NIS configuration to use /etc/shadow and /etc/passwd even if your other NIS maps are in /var/yp/src or similar. Make sure to have these lines in your /var/yp/Makefile:

PASSWD      = /etc/passwd
SHADOW      = /etc/shadow

Call make once, and it will generate the shadow and passwd map. You may want to set the variable MINUID which defines which entries are not put into the NIS maps.

On all NIS clients you still need the entries (for passwd, shadow, group,...) that point to the nis service. E.g.:

passwd:         files nis systemd
group:          files nis systemd
shadow:         files nis

You can remove all occurrences of "nis" in your /etc/pam.d/common-password file.

Then you can use the plain passwd program to change your password on the NIS master. But this does not call make in /var/yp for updating the NIS shadow map.

Let's use inotify(7) for that. First, create a small shell script /usr/local/sbin/shadow-change:

#! /bin/sh

PATH=/usr/sbin:/usr/bin

# only watch the /etc/shadow file
if [ "$2" != "shadow" ]; then
  exit 0
fi

cd /var/yp || exit 3
sleep 2
make

Then install the package incron.

# apt install incron
# echo root >> /etc/incron.allow
# incrontab -e

Add this line:

/etc    IN_MOVED_TO     /usr/local/sbin/shadow-change $@ $# $%

It's not possible to use IN_MODIFY or watch other events on /etc/shadow directly, because the passwd command creates a /etc/nshadow file, deletes /etc/shadow and then moves nshadow to shadow. inotify on a file does not work after the file was removed.

You can see the logs from incrond by using:

# journalctl _COMM=incrond

e.g.:

Oct 01 12:21:56 kueppers incrond[6588]: starting service (version 0.5.12, built on Jan 27 2023 23:08:49)
Oct 01 13:43:55 kueppers incrond[6589]: table for user root created, loading
Oct 01 13:45:42 kueppers incrond[6589]: PATH (/etc) FILE (shadow) EVENT (IN_MOVED_TO)
Oct 01 13:45:42 kueppers incrond[6589]: (root) CMD ( /usr/local/sbin/shadow-change /etc shadow IN_MOVED_TO)

I've disabled the execution of yppasswd using dpkg-divert:

# dpkg-divert --local --rename --divert /usr/bin/yppasswd-disable /usr/bin/yppasswd
# chmod a-rwx /usr/bin/yppasswd-disable

Do not forget to limit the access to the shadow.byname map in ypserv.conf and general access to NIS in ypserv.securenets.
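As a sketch of what such a restriction can look like (the addresses below are examples only, adapt them to your network; the ypserv.conf rule format is host:domain:map:security):

```
# /etc/ypserv.conf: require a privileged source port for the shadow maps
*               : *     : shadow.byname         : port
*               : *     : passwd.adjunct.byname : port

# /etc/ypserv.securenets: one "netmask network" pair per line
255.255.255.255 127.0.0.1
255.255.255.0   192.168.1.0
```

With "port", ypserv only answers queries for those maps coming from privileged ports (i.e. from root processes on the client), and securenets limits which networks may talk to the NIS server at all.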

I've also discovered the package pamtester, which is a nice package for testing your pam configs.

11 November, 2024 10:20AM

Vincent Bernat

Customize Caddy's plugins with Nix

Caddy is an open-source web server written in Go. It handles TLS certificates automatically and comes with a simple configuration syntax. Users can extend its functionality through plugins1 to add features like rate limiting, caching, and Docker integration.

While Caddy is available in Nixpkgs, adding extra plugins is not simple.2 The compilation process needs Internet access, which Nix denies during build to ensure reproducibility. When trying to build the following derivation using xcaddy, a tool for building Caddy with plugins, it fails with this error: dial tcp: lookup proxy.golang.org on [::1]:53: connection refused.

{ pkgs }:
pkgs.stdenv.mkDerivation {
  name = "caddy-with-xcaddy";
  nativeBuildInputs = with pkgs; [ go xcaddy cacert ];
  unpackPhase = "true";
  buildPhase =
    ''
      xcaddy build --with github.com/caddy-dns/powerdns@v1.0.1
    '';
  installPhase = ''
    mkdir -p $out/bin
    cp caddy $out/bin
  '';
}

Fixed-output derivations are an exception to this rule and get network access during build. They need to specify their output hash. For example, the fetchurl function produces a fixed-output derivation:

{ stdenv, fetchurl }:
stdenv.mkDerivation rec {
  pname = "hello";
  version = "2.12.1";
  src = fetchurl {
    url = "mirror://gnu/hello/hello-${version}.tar.gz";
    hash = "sha256-jZkUKv2SV28wsM18tCqNxoCZmLxdYH2Idh9RLibH2yA=";
  };
}

To create a fixed-output derivation, you need to set the outputHash attribute. The example below shows how to output Caddy’s source code, with some plugin enabled, as a fixed-output derivation using xcaddy and go mod vendor.

pkgs.stdenvNoCC.mkDerivation rec {
  pname = "caddy-src-with-xcaddy";
  version = "2.8.4";
  nativeBuildInputs = with pkgs; [ go xcaddy cacert ];
  unpackPhase = "true";
  buildPhase =
    ''
      export GOCACHE=$TMPDIR/go-cache
      export GOPATH="$TMPDIR/go"
      XCADDY_SKIP_BUILD=1 TMPDIR="$PWD" \
        xcaddy build v${version} --with github.com/caddy-dns/powerdns@v1.0.1
      (cd buildenv* && go mod vendor)
    '';
  installPhase = ''
    mv buildenv* $out
  '';

  outputHash = "sha256-F/jqR4iEsklJFycTjSaW8B/V3iTGqqGOzwYBUXxRKrc=";
  outputHashAlgo = "sha256";
  outputHashMode = "recursive";
}

With a fixed-output derivation, it is up to us to ensure the output is always the same:

  • we ask xcaddy to not compile the program and keep the source code,3
  • we pin the version of Caddy we want to build, and
  • we pin the version of each requested plugin.

You can use this derivation to override the src attribute in pkgs.caddy:

pkgs.caddy.overrideAttrs (prev: {
  src = pkgs.stdenvNoCC.mkDerivation { /* ... */ };
  vendorHash = null;
  subPackages = [ "." ];
});

Check out the complete example in the GitHub repository. To integrate into a Flake, add github:vincentbernat/caddy-nix as an overlay:

{
  inputs = {
    nixpkgs.url = "nixpkgs";
    flake-utils.url = "github:numtide/flake-utils";
    caddy.url = "github:vincentbernat/caddy-nix";
  };
  outputs = { self, nixpkgs, flake-utils, caddy }:
    flake-utils.lib.eachDefaultSystem (system:
      let
        pkgs = import nixpkgs {
          inherit system;
          overlays = [ caddy.overlays.default ];
        };
      in
      {
        packages = {
          default = pkgs.caddy.withPlugins {
            plugins = [ "github.com/caddy-dns/powerdns@v1.0.1" ];
            hash = "sha256-F/jqR4iEsklJFycTjSaW8B/V3iTGqqGOzwYBUXxRKrc=";
          };
        };
      });
}

Update (2024-11)

This flake won’t work with Nixpkgs 24.05 or older because it relies on this commit to properly override the vendorHash attribute.


  1. This article uses the term “plugins,” though Caddy documentation also refers to them as “modules” since they are implemented as Go modules. ↩︎

  2. This has been a feature request for quite some time. A proposed solution has been rejected. The one described in this article is a bit different. ↩︎

  3. This is not perfect: if the source code produced by xcaddy changes, the hash would change and the build would fail. ↩︎

11 November, 2024 07:35AM by Vincent Bernat

November 10, 2024

Nazi.Compare

Joan Meyer correctly linked Gideon Cody raid on Marion County Record to Kristallnacht

Earlier this year, I traveled to Marion in Kansas, United States, for the anniversary of the raid on the Marion County Record.

We watched the documentary about the raid, Unwarranted: The Senseless Death of Journalist Joan Meyer which was produced by Jaime Green and Travis Heying. The moment where Joan Meyer called the police nazis jumped out at me. I made a mental note to include it here in the nazi.compare web site but I wanted to review it carefully and give it the justice it deserves.

I opened up the video on the anniversary of the Kristallnacht and the evidence jumped out at me. I don't think anybody has noticed it before but Joan was right on the money about nazi stuff.

The Kristallnacht occurred on the night of 9 to 10 November 1938. It was a giant pogrom by Nazi party members. The police did not participate but they didn't try to stop it either.

However, the Jewish press were not attacked during the Kristallnacht.

In fact, Hitler's Nazis attacked the Jewish press on the previous night, 8 November.

Looking at the body cam footage where Joan Meyer accuses Gideon Cody and his police colleagues of "nazi stuff", we can see a time and date stamp at the bottom right corner. The date of the raid is written in the United States date format, Month/Day/Year, 08/11/2023 which was 11 August 2023. When we see the date 08/11/2023 in Europe, for example, in Germany, we would interpret that as Day/Month/Year, in other words, that is how Europeans and Germans write 8 November 2023, the day that Nazis raided the Jewish press in advance of the Kristallnacht.

Jewish publications banned

 

Here is the section of the video where Joan Meyer makes the Nazi comment, look at the date stamp at the bottom right corner, it is 08/11/2023 as in 8 November for Europe:

FSFE censored communications from the elected representatives

While thinking about the way the Nazis gave these censorship orders the night before the Kristallnacht, I couldn't help thinking about the orders from Matthias Kirschner and Heiki Lõhmus at FSFE when they wanted to censor communications from the elected Fellowship representatives.

Berlin police have declined to help FSFE shut down web sites that are making accurate FSFE / Nazi comparisons.

This policy determines conditions and rights of the FSFE bodies (staffers,
GA members, local and topical teams) or members of the FSFE community to
mass mail registered FSFE community members who have opted in to receive
information about FSFE's activities.

## Definitions

For the purpose of this document:
 * all registered FSFE community members who have opted in to receive
   information about FSFE's activities are referred to as "recipients".
 * mass emails that we send out to recipients are referred to as "mailings".
 * mailings that are only sent to recipients who live in a certain area (a
   municipality or a language zone or similar) or that are part of a topical
   team are referred to as "select mailings" and mails to all recipients of
   the FSFE are referred to as "overall mailings".


## Considerations

 * Mailings should be sent to better integrate our community into important
   aspects of our work. Examples include - but are not limited to -
   information about critical happenings that require their input or activity,
   milestones we have achieved and thank-yous, engagement in internal FSFE
   processes, and fundraising.
 * Mailings should be properly balanced between delivering information and
   getting to the point.
 * Mailings should contain material/information that can be considered
   worthy of our supporters' interest.
 * Mailings are not to spread general news - that is what we have the
   newsletter and our news items for.
 * You can find help on editing mailings by reading through our
   press release guidelines: https://wiki.fsfe.org/Internal/PressReleaseGuide
 * All community members are invited to use select mailings for evaluations,
   to inform about certain aspects of FSFE's work, to organise events and
   activities or other extraordinary purposes.


## Policies

 * Mailings must not be against FSFE's interests and conform to our Code of
   Conduct.
 * All overall mailings have to involve the PR team behind pr@lists.fsfe.org
   for a final edit. In urgent cases, review by the PR team may be skipped
   with approval of the responsible authority.
 * All select mailings need approval by the relevant country or topical team
   coordinator or - in absence - by the Community Coordinator or the Executive
   Council.
 * All overall mailings need the approval of the Executive Council.
 * All mailings need to be reviewed by someone with the authority to approve
   the mailing. Nobody may review or approve a mailing they have prepared on
   their own.
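The approval rules above can be cross-checked with a small sketch (my own toy model in JavaScript, not anything FSFE publishes; the names and object shape are illustrative only):

```javascript
// Toy encoding of the approval policy above.
function requiredApprovals(mailing) {
  const approvals = [];
  if (mailing.scope === "overall") {
    // Overall mailings: PR team final edit (skippable in urgent cases, with
    // approval of the responsible authority) plus Executive Council approval.
    if (!mailing.urgent) approvals.push("PR team (pr@lists.fsfe.org)");
    approvals.push("Executive Council");
  } else {
    // Select mailings: the relevant coordinator, falling back to the
    // Community Coordinator or the Executive Council in their absence.
    approvals.push(mailing.coordinator ?? "Community Coordinator or Executive Council");
  }
  return approvals;
}

function mayApprove(mailing, reviewer) {
  // Nobody may review or approve a mailing they prepared themselves.
  return reviewer !== mailing.preparedBy;
}

console.log(requiredApprovals({ scope: "overall", urgent: false }));
```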

Gideon Cody of Marion County


10 November, 2024 11:00PM


Dirk Eddelbuettel

inline 0.3.20: Mostly Maintenance

A new release of the inline package got to CRAN today, marking the first release in three and a half years. inline facilitates writing code in-line in simple string expressions or short files. The package was used quite extensively by Rcpp in the very early days before Rcpp Attributes arrived on the scene, providing an even better alternative for its use cases. inline is still used by rstan and a number of other packages.

This release was tickled by a change in r-devel just this week, and the corresponding ‘please fix or else’ email I received this morning. R_NO_REMAP is now the default in r-devel, and while we had already converted most (old-style) calls into the API to use the now-mandatory Rf_ prefix, the package contained a few remaining cases in examples as well as one in code generation. The release also contains a helpful contributed PR making an error message a little clearer, plus several small and common maintenance changes around continuous integration, package layout and the repository.

The NEWS extract follows and details the changes some more.

Changes in inline version 0.3.20 (2024-11-10)

  • Error message formatting is improved for compileCode (Alexis Derumigny in #25)

  • Switch to using Authors@R, other general packaging maintenance for continuous integration and repository

  • Use Rf_ in a handful of cases as R-devel now mandates it

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues).

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

10 November, 2024 07:29PM

Reproducible Builds

Reproducible Builds in October 2024

Welcome to the October 2024 report from the Reproducible Builds project.

Our reports attempt to outline what we’ve been up to over the past month, highlighting news items from elsewhere in tech where they are related. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website.

Table of contents:

  1. Beyond bitwise equality for Reproducible Builds?
  2. ‘Two Ways to Trustworthy’ at SeaGL 2024
  3. Number of cores affected Android compiler output
  4. On our mailing list…
  5. diffoscope
  6. IzzyOnDroid passed 25% reproducible apps
  7. Distribution work
  8. Website updates
  9. Reproducibility testing framework
  10. Supply-chain security at Open Source Summit EU
  11. Upstream patches

Beyond bitwise equality for Reproducible Builds?

Jens Dietrich and Tim White of Victoria University of Wellington, New Zealand, along with Behnaz Hassanshahi and Paddy Krishnan of Oracle Labs Australia, published a paper entitled “Levels of Binary Equivalence for the Comparison of Binaries from Alternative Builds”:

The availability of multiple binaries built from the same sources creates new challenges and opportunities, and raises questions such as: “Does build A confirm the integrity of build B?” or “Can build A reveal a compromised build B?”. To answer such questions requires a notion of equivalence between binaries. We demonstrate that the obvious approach based on bitwise equality has significant shortcomings in practice, and that there is value in opting for alternative notions. We conceptualise this by introducing levels of equivalence, inspired by clone detection types.

A PDF of the paper is freely available.
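To see why bitwise equality can be too strict, consider a toy comparison (my own illustration, not taken from the paper) in which two builds differ only in an embedded timestamp:

```javascript
// Two "binaries" from alternative builds of the same source: identical except
// for an embedded build timestamp. Bitwise comparison rejects them, while a
// normalising comparison that masks the timestamp accepts them.
const buildA = Buffer.from("MAGIC|ts=2024-10-01T10:00:00|payload=abc123");
const buildB = Buffer.from("MAGIC|ts=2024-10-02T09:30:00|payload=abc123");

const bitwiseEqual = buildA.equals(buildB);

// A coarser notion of equivalence: strip the timestamp field before comparing.
const normalise = (buf) => buf.toString().replace(/ts=[^|]*/, "ts=0");
const normalisedEqual = normalise(buildA) === normalise(buildB);

console.log({ bitwiseEqual, normalisedEqual });
// { bitwiseEqual: false, normalisedEqual: true }
```

The paper's levels of equivalence generalise this idea far beyond timestamps, but the shape of the problem is the same.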


‘Two Ways to Trustworthy’ at SeaGL 2024

On Friday 8th November, Vagrant Cascadian will present a talk entitled Two Ways to Trustworthy at SeaGL in Seattle, WA.

Founded in 2013, SeaGL is a free, grassroots technical summit dedicated to spreading awareness and knowledge about free source software, hardware and culture. Vagrant’s talk:

[…] delves into how two project[s] approaches fundamental security features through Reproducible Builds, Bootstrappable Builds, code auditability, etc. to improve trustworthiness, allowing independent verification; trustworthy projects require little to no trust.

Exploring the challenges that each project faces due to very different technical architectures, but also contextually relevant social structure, adoption patterns, and organizational history should provide a good backdrop to understand how different approaches to security might evolve, with real-world merits and downsides.


Number of cores affected Android compiler output

Fay Stegerman wrote that the cause of the Android toolchain bug from September’s report that she reported to the Android issue tracker has been found and the bug has been fixed.

the D8 Java to DEX compiler (part of the Android toolchain) eliminated a redundant field load if running the class’s static initialiser was known to be free of side effects, which ended up accidentally depending on the sharding of the input, which is dependent on the number of CPU cores used during the build.

To make it easier to understand the bug and the patch, Fay also made a small example to illustrate when and why the optimisation involved is valid.
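Fay's example is the authoritative one; purely as a rough illustration of the failure mode, here is a toy model (entirely my own, not D8's actual algorithm) in which an "optimisation" decision depends on how the input is sharded, and sharding depends on the core count:

```javascript
// Toy model: an optimiser may elide a field load only if the class's static
// initialiser is visible in the same shard. Sharding by core count then leaks
// into the output -- the same input compiles differently on different machines.
function compile(classes, cores) {
  // Round-robin sharding, as a stand-in for any core-count-dependent split.
  const shards = Array.from({ length: cores }, () => []);
  classes.forEach((c, i) => shards[i % cores].push(c));

  return shards.flatMap((shard) =>
    shard.map((c) => {
      const initVisible = shard.some((o) => o.name === c.initialiserOf);
      return `${c.name}:${initVisible ? "elide-load" : "keep-load"}`;
    })
  );
}

const classes = [
  { name: "A", initialiserOf: "B" }, // optimising A requires seeing B
  { name: "B", initialiserOf: "B" },
];

console.log(compile(classes, 1)); // one shard: the load is elided for A
console.log(compile(classes, 2)); // A and B split apart: the load is kept
```

The output differs depending on `cores`, which is exactly the kind of nondeterminism reproducible-builds testing is designed to surface.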


On our mailing list…

On our mailing list this month:


diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 279, 280, 281 and 282 to Debian:

  • Ignore errors when listing .ar archives (#1085257). []
  • Don’t try and test with systemd-ukify in the Debian stable distribution. []
  • Drop Depends on the deprecated python3-pkg-resources (#1083362). []

In addition, Jelle van der Waa added support for Unified Kernel Image (UKI) files. [][][] Furthermore, Vagrant Cascadian updated diffoscope in GNU Guix to version 282. [][]


IzzyOnDroid passed 25% reproducible apps

The IzzyOnDroid project has passed a notable milestone: over 25% of the ~1,200 Android apps provided by their repository (official APKs built by the original application developers) have now been confirmed reproducible by a rebuilder.


Distribution work

In Debian this month:

  • Holger Levsen uploaded devscripts version 2.24.2, including many changes to the debootsnap, debrebuild and reproducible-check scripts. This is the first time that debrebuild actually works (using sbuild’s unshare backend). As part of this, Holger also fixed an issue in the reproducible-check script where a typo in the code led to incorrect results []

  • Recently, a news entry was added to snapshot.debian.org’s homepage, describing the recent changes that made the system stable again:

    The new server has no problems keeping up with importing the full archives on every update, as each run finishes comfortably in time before it’s time to run again. [While] the new server is the one doing all the importing of updated archives, the HTTP interface is being served by both the new server and one of the VM’s at LeaseWeb.

    The entry lists a number of specific updates surrounding the API endpoints and rate limiting.

  • Lastly, 12 reviews of Debian packages were added, 3 were updated and 18 were removed this month adding to our knowledge about identified issues.

Elsewhere in distribution news, Zbigniew Jędrzejewski-Szmek performed another rebuild of Fedora 42 packages, with the headline result being that 91% of the packages are reproducible. Zbigniew also reported a reproducibility problem with QImage.

Finally, in openSUSE, Bernhard M. Wiedemann published another report for that distribution.


Website updates

There were an enormous number of improvements made to our website this month, including:

  • Alba Herrerias:

    • Improve consistency across distribution-specific guides. []
    • Fix a number of links on the Contribute page. []
  • Chris Lamb:

  • hulkoba

  • James Addison:

    • Huge and significant work on an (as-yet-unmerged) quickstart guide to be linked from the homepage [][][][][]
    • On the homepage, link directly to the Projects subpage. []
    • Relocate “dependency-drift” notes to the Volatile inputs page. []
  • Ninette Adhikari:

    • Add a brand new ‘Success stories’ page that “highlights the success stories of Reproducible Builds, showcasing real-world examples of projects shipping with verifiable, reproducible builds”. [][][][][][]
  • Pol Dellaiera:

    • Update the website’s README page for building the website under NixOS. [][][][][]
    • Add a new academic paper citation. []

Lastly, Holger Levsen filed an extensive issue detailing a request to create an overview of recommendations and standards in relation to reproducible builds.


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In October, a number of changes were made by Holger Levsen, including:

  • Add a basic index.html for rebuilderd. []
  • Update the nginx.conf configuration file for rebuilderd. []
  • Document how to use a rescue system for Infomaniak’s OpenStack cloud. []
  • Update usage info for two particular nodes. []
  • Fix up a version skew check to fix the name of the riscv64 architecture. []
  • Update the rebuilderd-related TODO. []

In addition, Mattia Rizzolo added a new IP address for the inos5 node [] and Vagrant Cascadian brought 4 virt nodes back online [].


Supply-chain security at Open Source Summit EU

The Open Source Summit EU took place recently, and covered plenty of topics related to supply-chain security, including:


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:



Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

10 November, 2024 06:26PM

Thorsten Alteholz

My Debian Activities in October 2024

FTP master

This month I accepted 398 and rejected 22 packages. The overall number of packages that got accepted was 441.

In case your RM bug is not closed within a month, you can assume that either the conversion of the subject of the bug email to the corresponding dak command did not work or you still need to take care of reverse dependencies. The dak command related to your removal bug can be found here.

Unfortunately the behavior of some project members caused a decline in team members' motivation to work on these bugs. When I look at these bugs, I just copy and paste the above-mentioned dak commands. If they don’t work, I don’t have the time to debug what is going wrong. So please read the docs and take care of it yourself. Please also keep in mind that you need to close the bug or set a moreinfo tag if you don’t want anybody to act on your removal bug.

Debian LTS

This was my hundred-twenty-fourth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:

  • [DLA 3925-1] asterisk security update to fix two CVEs related to privilege escalation and DoS
  • [DLA 3940-1] xorg-server update to fix one CVE related to privilege escalation

Last but not least I did a week of FD this month and attended the monthly LTS/ELTS meeting.

Debian ELTS

This month was the seventy-fifth ELTS month. During my allocated time I uploaded or worked on:

  • [ELA-1198-1] cups security update for one CVE in Buster to fix the IPP-attribute-related CVEs.
  • [ELA-1199-1] cups security update for two CVEs in Stretch to fix the IPP-attribute-related CVEs.
  • [ELA-1216-1] graphicsmagick security update for one CVE in Jessie.
  • [ELA-1217-1] asterisk security update for two CVEs in Buster related to privilege escalation.
  • [ELA-1218-1] asterisk security update for two CVEs in Stretch related to privilege escalation and DoS.
  • [ELA-1223-1] xorg-server security update for one CVE in Jessie, Stretch and Buster related to privilege escalation.

I also did a week of FD and attended the monthly LTS/ELTS meeting.

Debian Printing

Unfortunately I didn’t find any time to work on this topic.

Debian Matomo

Unfortunately I didn’t find any time to work on this topic.

Debian Astro

Unfortunately I didn’t find any time to work on this topic.

Debian IoT

This month I uploaded new upstream or bugfix versions of:

  • pywws (yes, again this month)

Debian Mobcom

This month I uploaded new packages or new upstream or bugfix versions of:

misc

This month I uploaded new upstream or bugfix versions of:

10 November, 2024 12:26AM by alteholz

November 09, 2024


Jonathan Dowland

Progressively enhancing CGI apps with htmx

I was interested in learning about htmx, so I used it to improve the experience of posting comments on my blog.

It seems much of modern web development is structured around having a JavaScript program on the front-end (browser) which exchanges data encoded in JSON asynchronously with the back-end servers. htmx uses a novel (or throwback) approach: it asynchronously fetches snippets of HTML from the back-end, and splices the results into the live page. For example, a htmx-powered button may request a URI on the server, receive HTML in response, and then the button itself would be replaced by the resulting HTML, within the page.

I experimented with incorporating it into an existing, old-school CGI web app: IkiWiki, which I became a co-maintainer of this year, and powers my blog. Throughout this project I referred to the excellent book Server-Driven Web Apps with htmx.
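The fetch-and-splice cycle described above can be sketched without htmx at all. The following toy (plain JavaScript over strings, standing in for real DOM operations; the selector is a naive tag match, not real CSS) mimics what an hx-get with hx-select and hx-swap does:

```javascript
// Toy model of htmx's cycle: fetch an HTML document, select a fragment from
// it, and splice that fragment into the current page. Strings stand in for
// the DOM here.
function hxSwap(page, responseDoc, selectTag, targetMarker) {
  // "hx-select": extract the first <selectTag>...</selectTag> fragment.
  const m = responseDoc.match(new RegExp(`<${selectTag}[\\s\\S]*?</${selectTag}>`));
  if (!m) return page;
  // "hx-swap=beforeend" on the target: insert just after the marker element.
  return page.replace(targetMarker, targetMarker + m[0]);
}

const page = '<article>post</article><div class="addcomment"></div>';
const response = '<html><body><form id="editcomment">…</form></body></html>';

console.log(hxSwap(page, response, "form", '<div class="addcomment"></div>'));
```

Real htmx does all of this declaratively via attributes, as the rest of this post shows.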

Comment posting workflow

I really value blog comments, but the UX for posting them on my blog was a bit clunky. It went like this:

  1. you load a given page (such as this blog post), which is a static HTML document. There's a link to add a comment to the page.

  2. The link loads a new page which is generated dynamically and served back to you via CGI. This contains a HTML form for you to write your comment.

  3. The form submits to the server via HTTP POST. IkiWiki validates the form content. Various static pages (in particular the one you started on, in Step 1) are regenerated.

  4. the server response to the request in (3) is a HTTP 302 redirect, instructing the browser to go back to the page in Step 1.

First step: fetching a comment form

First, I wanted the "add a comment" link to present the edit box in the current page. This step was easiest: add four attributes to the "comment on this page" anchor tag:

hx-get="<CGI ENDPOINT GOES HERE>"
    suppresses the normal behaviour of the tag, so clicking on it doesn't load a new page, and instead issues an asynchronous HTTP GET to the CGI end-point, which returns the full HTML document for the comment edit form.

hx-select=".editcomment form"
    extracts the edit-comment form from within that document.

hx-swap=beforeend and hx-target=".addcomment"
    append (courtesy of beforeend) the form into the source page after the "add comment" anchor tag (.addcomment).

Now, clicking "comment on this page" loads in the edit-comment box below it without moving you away from the source page. All that without writing any new code!

Second step: handling previews

The old Preview Comment page


In the traditional workflow, clicking on "Preview" loaded a new page containing the edit form (but not the original page or any existing comments) with a rendering of the comment-in-progress below it. I wasn't originally interested in supporting the "Preview" feature, but I needed to for reasons I'll explain later.

Rather than load new pages, I wanted "Preview" to splice a rendering of the comment-in-progress into the current page's list of comments, marked up to indicate that it's a preview.

IkiWiki provides some templates which you can override to customise your site. I've long overridden page.tmpl, the template used for all pages. I needed to add a new empty div tag in order to have a "hook" to target with the previewed comment.

The rest of this was achieved with htmx attributes on the "Preview" button, similar to in the last step: hx-post to define a target URI when you click the button (and specify HTTP POST); hx-select to filter the resulting HTML and extract the comment; hx-target to specify where to insert it.

Now, clicking "Preview" does not leave the current page, but fetches a rendering of your comment-in-progress, and splices it into the comment list, appropriately marked up to be clear it's a preview.

Third step: handling submitted comments

IkiWiki is highly configurable, and many different things could happen once you post a comment.

On my personal blog, all comments are held for moderation before they are published. The page you were served after submitting a comment was rather bare-bones, a status message "Your comment will be posted after moderator review", without the original page content or comments.

I wanted your comment to appear in the page immediately, albeit marked up to indicate it was awaiting review. Since the traditional workflow didn't render or present your comment to you, I had to cheat.

handling moderated comments

Moderation message upon submitting a comment


One of my goals with this project was not to modify IkiWiki itself. I had to break this rule for moderated comments. When returning the "comment is moderated" page, IkiWiki uses HTTP status code 200, the same as for other scenarios. I wrote a tiny patch to return HTTP 202 (Accepted, but not processed) instead.

I now have to write some actual JavaScript. htmx emits the htmx:beforeSwap event after an AJAX call returns, but before the corresponding swap is performed. I wrote a function that is triggered on this event, filters for HTTP 202 responses, triggers the "Preview" button, and then alters the result to indicate a moderated, rather than previewed, comment. (That's why I bothered to implement previews). You can read the full function here: jon.js.
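As a rough sketch only (the helper functions and exact event wiring here are my reconstruction; the real code is in jon.js), the shape of that hook is:

```javascript
// Sketch of a htmx:beforeSwap hook for moderated comments: detect HTTP 202
// responses and divert them to the preview path instead of the default swap.
function isModeratedResponse(detail) {
  return detail.xhr && detail.xhr.status === 202;
}

// Hypothetical helpers standing in for the real logic in jon.js.
function triggerPreview() { /* click the hidden "Preview" button */ }
function markAsModerated() { /* relabel the preview as awaiting moderation */ }

function onBeforeSwap(evt) {
  if (!isModeratedResponse(evt.detail)) return;
  evt.detail.shouldSwap = false; // don't splice in the bare status page
  triggerPreview();              // re-use the preview machinery...
  markAsModerated();             // ...then relabel the result
}

// Only wire up the listener where a DOM exists (i.e. in a browser).
if (typeof document !== "undefined") {
  document.body.addEventListener("htmx:beforeSwap", onBeforeSwap);
}
```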

Summary

I've done barely any front-end web development for years and I found working with htmx to be an enjoyable experience.

You can leave a comment on this very blog post if you want to see it in action. I couldn't resist adding an easter egg: Brownie points if you can figure out what it is.

Adding htmx to an existing CGI-based website let me improve one of the workflows in a gracefully-degrading way (without JavaScript, the old method will continue to work fine) without modifying the existing application itself (well, almost) and without having to write very much code of my own at all: nearly all of the configuration was declarative.

09 November, 2024 09:16PM


Daniel Pocock

Joel Espy Klecker, unpaid, terminally ill youth labor & Debian knew it

According to the official history of Debian, which was moved here after my last blog on Klecker (see snapshot / archive copy), no one knew that Joel "Espy" Klecker was a terminally ill teenager working without pay from his sickbed. Here is the same text that I copied in my first step into the Klecker case.

On July 11th, 2000, Joel Klecker, who was also known as Espy, passed away at 21 years of age. No one who saw 'Espy' in #mklinux, the Debian lists or channels knew that behind this nickname was a young man suffering from a form of Duchenne muscular dystrophy. Most people only knew him as 'the Debian glibc and powerpc guy' and had no idea of the hardships Joel fought. Though physically impaired, he shared his great mind with others.

Joel Klecker (also known as Espy) will be missed.

In fact, they did know. That puts the Debian history page on the list of Debian's lies.

Subject: RE: [jwk@espy.org: Joel Klecker]
Date: Fri, 14 Jul 2000 20:40:00 -0600 (MDT)
From: Jason Gunthorpe <jgg@ualberta.ca>
To: debian-private@lists.debian.org
CC: Debian Private List 


On Tue, 11 Jul 2000, Brent Fulgham wrote:

> > It's very hard for me to even send this message. This is a 
> > great loss to us all.
 > First, I'd like to extend my condolences to Joel's family.  It
> is still very hard to believe this has happened.  Joel was

> always just another member of the project -- no one knew (or
> at least I did not know) that he was facing such terrible 
> hardships.  Debian is poorer for his loss.

Some of us did know, but he never wished to give specifics. I do not think
he wanted us to really know. I am greatly upset that I was unable to at

[ ... snip .... ]

This case is so bad that I am going to have to write multiple blogs to dissect some of the messages in the threads about the casualty.

Joel Espy Klecker, Debian

An obituary was published in the newspaper:

Joel Edmund Klecker

Aug. 29, 1978 - July 11, 2000

STAYTON - Joel Klecker, 21, died Tuesday of muscular dystrophy.

He was born in Salem and raised in Stayton. He attended Stayton public schools and Stayton High School. He was a Debian software project developer, one of 500 worldwide, worked on Apple computers and was a computer enthusiast.

Survivors include his parents, Dianne and Jeffrey Klecker of Stayton; brother, Ben of Stayton; and grandparents, Roy and Yvonne Welstad of Aumsville.

Services will be 2 p.m. Saturday at Calvary Lutheran Church, where he was a member. Interment will be at Lone Oak Grier Cemetery in Sublimity. Arrangements are by Restlawn Funeral Home in Salem.

Contributions: Muscular Dystrophy Association, 4800 Macadam Ave., Portland, OR 97201.

Klecker was born 29 August 1978. These messages hint that his first packages may have been contributed in November or December 1997. Message 1, message 2 and message 3.

At the time, he would have been 19 years old, still a teenager, when he began doing unpaid work for the other Debian cabal members.

Many of the Debianists today obfuscate who they really work for to try and make it look like Debian is a hobby or a "Family", but the impersonation of family is a fallacy.

Jason Gunthorpe (jgg), who is now with NVIDIA, clearly knew some things about it.

Jason Gunthorpe, Debian, NVIDIA

We don't know which people had knowledge of Klecker's situation or which organizations they worked for. During the Debian trademark dispute, a list of organizations using Klecker's work was submitted to the Swiss trademark office. The submission was written in Italian, but the names of these companies are clear. They all assert they are using Debian. Did they know that unpaid youth labor, terminally ill teenagers, bed-ridden, had been writing and testing the packages for them?

The names of the companies are copied below. Remember, Mark Shuttleworth sold his first business Thawte for $700 million about eight months before Klecker died.

While Klecker was bed-ridden, here is that jet-ridden picture:

Mark Shuttleworth, private jet, Canonical, Ubuntu

Is it really fair that Klecker, his family and many other volunteers get nothing at all from Debian? Or is that modern slavery? The US State Dept definition of Modern Slavery is extremely broad and includes all kinds of deceptive work practices.

Please see the chronological history of how the Debian harassment and abuse culture evolved.

As already noted in the opening of this paragraph, the DEBIAN project dates back to 1997, and each country has within it an active community of volunteer developers who design, test and distribute programs based on the multi-platform DEBIAN operating system for the most varied uses; every year since 2004, as can be seen at https://www.debconf.org/, an international conference has been organised, attended by the volunteer developers who make up the various national communities active in each country of the world (the list of countries can be seen in the macro-group Entries by region and includes, inter alia, Switzerland, France, Germany, Italy, the United Kingdom, Poland, Austria, Spain, Norway, Belgium, the USA etc., as well as extending to South-East Asia, Africa and Latin America).

Consulting, for example, the document DebConf13 - Sponsors for the 2013 conference, visible in the Section Swiss Debian Community, one notes that the numerous sponsors prominently include Google, the very well-known creator of the famous search engine of the same name, and HP, the well-known hardware manufacturer; likewise, consulting the document DebConf15 - Sponsors visible in the Section European Debian Community, the sponsors of the 2015 conference again include Google and HP, along with IBM, the well-known software and hardware producer, VALVE, the well-known online video game distributor, Fujitsu, the well-known hardware manufacturer, and BMW GROUP, the well-known car manufacturer. As for the most recent conference, held in 2022, the document DebConf22 sponsorship brochure visible in the Section European Debian Community lists among the sponsors, besides Google, Lenovo, the well-known hardware manufacturer, and Infomaniak, the largest website hosting provider in Switzerland (a company which, in addition to sponsoring numerous editions of the annual conference, also offers streaming and video-on-demand services, hosting more than 200,000 domains, 150,000 websites and 350 radio/TV stations).

The same type of information, at the European level, can be found under the entry Entries in section European Debian Community in the Entries by section section.

Other sponsors of the various annual conferences include the University of Zurich Department of Informatics, the ETH Zurich Department of Electrical Engineering, PricewaterhouseCoopers (the very well-known auditing firm), Amazon Web Services (the cloud computing and web services platform owned by Amazon, the very well-known US e-commerce company), Roche (the well-known pharmaceutical company), Univention Corporate Server (a well-known German producer of open-source software for managing complex IT infrastructures), Hitachi (a well-known hardware manufacturer), the Canton of Neuchâtel etc., along with a substantial number of other private companies and other bodies; for an overview of the annual conferences of the last ten years, the related promotional documentation can be viewed by filtering the Categories and searching for the entry Community – DebConf.

Looking also at the macro-group Entries by year, material has been collected covering the last decade of activity of the DEBIAN project, 2012/2022, comprising documents from various sources in the specialist and general press, attestations from national and foreign universities and research centres, attestations from users of the DEBIAN software in their own entrepreneurial/commercial activity, etc.

SPI's DEBIAN software is in fact used both by numerous private companies in Switzerland and the European Union, and by numerous institutional and research bodies active in the most varied fields.

The documents available in Entries by section under the entry Cooperation with private companies in Switzerland show that LIIP www.liip.ch, a well-known Swiss company (with offices in Lausanne, Fribourg, Bern, Basel, Zurich and St. Gallen) providing internet-related services such as domain registration for websites, server configuration, hosting and website-creation services, online advertising campaigns and social-network management, is itself a user of Debian and, moreover, one of the sponsors of the volunteer developers' annual conferences.

Another document included under the same entry, Debian Training Courses in Switzerland, shows that training courses are held on the DEBIAN software; also under the same entry, the document Microsoft Azure available from new cloud regions in Switzerland for all customers shows that the cloud computing service of Microsoft (the very well-known software producer), Microsoft Azure, offers the DEBIAN software among its selection of available software.

Also under the same entry, to show how thoroughly the DEBIAN software permeates all sectors of the relevant circles, there are documents attesting, for example, that an osteopathic centre in Lausanne, https://osteo7-7.ch/, operates servers running the DEBIAN software, that the Zurich publishing house Ringier AG, active in the markets for daily newspapers, periodicals, television, the web and advertising, has worked with the DEBIAN software, and that Lenovo (the well-known hardware manufacturer) has likewise taken an interest in the DEBIAN software.

The documents available in Entries by section under the entry Cooperation with private companies in the European Union show the renown of the DEBIAN software among various companies located in different European countries, for example Logikon labs http://www.logikonlabs.com/ in Greece, Servicio Técnico, Open Tech S.L https://www.opentech.es/ in Spain, ALTISCENE http://www.altiscene.fr/, Logilab https://www.logilab.fr/ and Bureau d'études et valorisations archéologiques Éveha https://www.eveha.fr/ in France, 99ideas https://99ideas.pl/ and Roan Agencja Interaktywna https://roan24.pl/ in Poland, and Mendix Technology https://www.mendix.com/ in the Netherlands; given the particularly high number of documents, 135 in total, the reader is invited to review the large number of small, medium and large companies that have the DEBIAN software at the base of their IT systems.

Consulting the headings Research & Papers, Institutional/Governmental cooperation and Miscellaneous in the
Entries by section area, one can find a series of documents concerning popular and scientific articles,
research essays, thesis abstracts, monographs, short guides, etc., relating to the DEBIAN software,
produced, inter alia, by ETH Zurich, the University of Edinburgh, the University of
Oxford, EPFL in Lausanne, the University of Geneva, the University of Rome Tor Vergata,
the European Synchrotron Radiation Facility in Grenoble, the WSL Institute for Snow and Avalanche
Research SLF in Davos, the Universidad Politécnica de Madrid, the Scuola Specializzata Superiore di
Economia - Sezione di Informatica di Gestione of Canton Ticino, the International Telecommunication
Union in Geneva, the BBC in the United Kingdom, CERN in Geneva, the University of
Glasgow, the University of Durham, etc.

The headings Swiss press coverage and European press coverage in the Entries by section area include a
press review, at both Swiss and European level, of articles about the DEBIAN software published,
inter alia, by www.netzwoche.ch, Swiss IT Magazine, RTS Info, Corriere della Sera,
Linux Magazine, www.heise.de, www.gamestar.de, The Guardian, the BBC, L'Espresso, Il Disinformatico
(a blog run by the well-known Ticino journalist Paolo Attivissimo), Linux User, www.computerbase.de,
www.derstandard.at, https://blog.programster.org/, www.digi.no, Linux Magazine https://www.linux-magazine.com/, etc.

Reviewing the headings Attestation & Statements by third parties, Switzerland, in the Entries by section column,
one notes that various actors belonging to the relevant commercial circles, made up of consumers,
distribution channels and traders, have issued explicit attestations of the renown and recognition of the
DEBIAN mark in Switzerland for the goods claimed in class 9 (“Logiciels de système d'exploitation et
centres publics de traitement de l'information.”):

- the WSL Institute for Snow and Avalanche Research SLF in Davos;
- the Department of Informatics of the University of Zurich;
- the internet service provider www.oriented.net of Basel;
- the osteopathic centre Osteo 7/7 www.osteo7-7.ch with offices in Lausanne and Geneva;
- CERN www.home.web.cern.ch in Geneva, through Ing. Javier Serrano in his capacity as BE-CEM
- Electronics Design and Low-level software (EDL) Section Leader at CERN;
- www.infomaniak.com, the largest website hosting provider in Switzerland (the company also offers
streaming and video-on-demand services, hosting more than 200,000 domains, 150,000 websites and 350
radio/TV stations), through the CEO of Infomaniak.com, Boris Siegenthaler;
- www.liip.ch, a well-known Swiss company (with offices in Lausanne, Fribourg, Bern, Basel, Zurich and St. Gallen)
active in the provision of internet-related services such as, for example, the registration of domains for
websites, server configuration, hosting and website-creation services, internet advertising campaigns
and social network management, through the co-founder and partner of LIIP, Gerhard Andrey;
- www.microsoft.com, a very well-known producer of software and cloud services (Windows, Microsoft Azure, etc.),
through Sarah Novotny, Director of Open Source Strategy at Microsoft;
- www.microsoft.com, a very well-known producer of software and cloud services (Windows, Microsoft Azure, etc.),
through Ing. KY Srinivasan, Distinguished Engineer at Microsoft;
- www.microsoft.com, a very well-known producer of software and cloud services (Windows, Microsoft Azure, etc.),
through Ing. Joshua Poulson, Program Manager at Microsoft;
- CERN www.home.web.cern.ch in Geneva, through Dr. Axel Nauman in his capacity as Senior
applied physicist and ROOT Project Leader at CERN;
- www.univention.com, one of the leading providers of open source software in the fields of identity
management and application integration and distribution in Europe and Switzerland, with thousands of
users and partner organisations, through the CEO of Univention, Peter H. Ganten.

Under the heading immediately below the one above, reviewing the entries Attestation & Statements by third parties,
European Union, in the Entries by section column, one notes that various actors belonging to the relevant
commercial circles, made up of consumers, distribution channels and traders, have issued explicit
attestations of the renown and recognition of the DEBIAN mark in Europe for the goods claimed
in class 9 (“Logiciels de système d'exploitation et centres publics de traitement de l'information.”) for
a total of no fewer than 146 records (the first 25 of which are listed below):

- the Rost-Lab Bioinformatics Group of the German university of Munich;
- the Greek company Logikon Labs of Athens;
- the Department of Engineering of the Italian University of Rome Tor Vergata;
- the Spanish company Servicio Técnico, Open Tech SL of Las Palmas;
- the French company ALTISCENE of Toulouse;
- the Polish company Zakład Gospodarowania Nieruchomościami w Dzielnicy Mokotów m.st. of Warsaw;
- the French company Logilab of Paris;
- the Swedish company www.Bayour.com of Gothenburg;
- the French institution ESRF (European Synchrotron Radiation Facility) of Grenoble;
- the Austrian company www.mur.at of Graz;
- the Polish company www.Dictionaries24.com of Poznan;
- the French non-profit organisation TuxFamily;
- the German institution LINKES FORUM of Kreis;
- the Polish company www.99ideas.com of Gliwice;
- the Departamento de Arquitectura y Tecnología de Sistemas Informáticos (Facultad de Informática)
of the Spanish Universidad Politécnica de Madrid;
- the Italian company Reware Soc. Coop of Rome;
- the Polish company Roan Agencja Interaktywna of Gorzów;
- the Slovak company RoDi of Zilina;
- the Dutch company Mendix Technology of Rotterdam;
- the French organisation Bureau d'études et valorisations archéologiques Éveha of Limoges;
- the Dutch company AlterWeb;
- the Electronics Research Group of the British University of Aberdeen;
- the Dutch company MrHostman of Montfoort;
- the Polish company System rezerwacji online Nakiedy of Gdansk;
as well as, as noted, the remaining testimonies and attestations issued by the most varied companies and
various public and private bodies based, respectively, in Switzerland, Italy, Germany, the United Kingdom,
France, Poland, Austria, Spain, the Netherlands, Norway, Belgium, the Czech Republic, Sweden, Bulgaria,
Greece, Finland, Kosovo, Slovakia, Bosnia, Denmark, Hungary, Lithuania and Romania, which we invite
the reader to review.

To demonstrate, both by name and in quantitative terms, the spread of users of the DEBIAN software,
an extract from the website of the DEBIAN project https://www.debian.org/users/index.it.html is
reproduced below, through which one can browse all the attestations voluntarily left on the DEBIAN
project site www.debian.org by end users of the DEBIAN software, from the most varied backgrounds
(each name is an interactive link to the site https://www.debian.org/users/index.it.html):

Educational institutions (educational)
Electronics Research Group, University of Aberdeen, Aberdeen, Scotland
Department of Informatics, University of Zurich, Zurich, Switzerland
General Students' Committee (AStA), Saarland University, Saarbrücken, Germany
Athénée Royal de Gembloux, Gembloux, Belgium
Computer Science, Brown University, Providence, RI, USA
Sidney Sussex College, University of Cambridge, UK
CEIC, Scuola Normale Superiore di Pisa, Italy
Mexican Space Weather Service (SCiESMEX), Geophysics Institute campus Morelia (IGUM),
National University of Mexico (UNAM), Mexico
COC Araraquara, Brazil
Departamento de Arquitectura y Tecnología de Sistemas Informáticos (Facultad de Informática),
Universidad Politécnica de Madrid, Madrid, Spain
Department of Control Engineering, Faculty of Electrical Engineering, Czech Technical University,
Czech Republic
Swiss Federal Institute of Technology Zurich, Department of Physics, ETH Zurich, Switzerland
Genomics Research Group, CRIBI - Università di Padova, Italy
Dipartimento di Geoscienze, Università degli Studi di Padova, Italy
Nucleo Lab, Universidad Mayor de San Andrés, Bolivia
Department of Physics, Harvard University, USA
Infowebhosting, Perugia, Italy
Medical Information System Laboratory, Doshisha University, Kyoto, Japan
Bioinformatics & Theo. Biology Group, Dept. of Biology, Technical University Darmstadt,
Germany
Center for Climate Risk and Opportunity Management in Southeast Asia and Pacific, Indonesia
Laboratorio de Comunicaciones Digitales, Universidad Nac. de Cordoba, Argentina
Laboratorio di Calcolo e Multimedia, Università degli Studi di Milano, Italy
Department of Engineering, University of Rome Tor Vergata, Italy
Lycée Molière, Belgium
Max Planck Institute for Informatics, Saarbrücken, Germany
Computer Department, Model Engineering College, Cochin, India
Medicina - Facultad de Ciencias Médicas, Universidad Nacional del Comahue, Cipolletti, Río
Negro, Argentina
Artificial Intelligence Lab, Massachusetts Institute of Technology, USA
Montana Tech, Butte, Montana, USA
Mittelschule, Montessoriverein Chemnitz, Chemnitz, Germany
Laboratory GQE-Le Moulon / CNRS / INRAE, Gif-sur-Yvette, France
Department of Measurement and Control Technology MRT (Department of Mechanical
Engineering), University of Kassel, Germany
Department of Computer Science & Engineering, Muthayammal Engineering College, Rasipuram,
Tamilnadu, India
Spanish Bioinformatics Institute, Spanish National Cancer Research Centre, Madrid, Spain
NI, Núcleo de Informática, Brazil
Software & Networking Lab, National University of Oil and Gas, Ivano-Frankivsk, Ukraine
Parallel Processing Group, Department of Computer Science and Engineering, University of
Ioannina, Ioannina, Greece
Departamento de Matemática -- Universidade Federal do Paraná, Brazil
Departamento de Informática -- Universidade Federal do Paraná, Brazil
Protein Design Group, National Center for Biotechnology, Spain
Rost Lab/Bioinformatics Group, Technical University of Munich, Germany
Department of Computer Science, University of Salzburg, Salzburg, Austria
Don Bosco Technical Institute, Sunyani, Ghana
Instituto de Robótica y Automática, Escuela Superior de Ingenieros, University of Sevilla, Spain
Computer Engineering Department, Sharif University of Technology, Iran
Dipartimento di Scienze Statistiche, Università di Padova, Italy
School of Mathematics, Tata Institute of Fundamental Research, Bombay, India
Department of Computer and Engineering, Thiagarajar College of Engineering, Madurai, India
Library and IT Services, Tilburg University, Tilburg, the Netherlands
Computer Science Department, Trinity College, Hartford Connecticut, USA
Turnkey IT Training Institute, Colombo, Sri Lanka.
System Department, University of Santander, Cúcuta, Colombia
Academic Administration, Universidad de El Salvador, El Salvador
Universitas Indonesia (UI), Depok, Indonesia
Laboratoire de Chimie physique, CNRS UMR 8000, Université Paris-Sud, Orsay, France
Dirección de Tecnología e Informática, Universidad Nacional Experimental de Guayana, Puerto
Ordaz, Venezuela
School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
International Arctic Research Center, University of Alaska Fairbanks, USA
Laboratoire VERIMAG, CNRS/Grenoble INP/Université Joseph Fourier, France
Centre for Information Technology, University of West Bohemia, Pilsen, Czech Republic
Game Development Club, Worcester Polytechnic Institute, Worcester MA, USA

Commercial (commercial)

IT Department, 100ASA Srl, Dragoni, Italy
99ideas, Gliwice, Poland
Tech Dept, ABC Startsiden AS, Oslo, Norway
Admins.CZ, Prague, Czech Republic
AdvertSolutions.com, United Kingdom
Kancelaria Adwokacka Adwokat Wiktor Gamracki, Rzeszów, Poland
Adwokat radca prawny, Poznan Lodz, Poland
AFR@NET, Tehran, Iran
African Lottery, Cape Town, South Africa
AKAOMA Consulting, France
Alfabet Sukcesu, Lubliniec, Poland
AlterWeb
Altiria, Spain
ALTISCENE, Toulouse, France
Anykey Solutions, Sweden
JSC VS, Russia
Apache Auto Parts Incorporated, Parma USA
Applied Business Solutions, São Paulo, Brazil
Archiwwwe, Stockholm, Sweden
Computational Archaeology Division, Arc-Team, Cles, Italy
Articulate Labs, Inc., Dallas, TX, US
Athena Capital Research, USA
Atrium 21 Sp. z o.o. Warsaw, Poland
Co. AUSA, Almacenes Universales SA, Cuba
Agencja interaktywna Avangardo, Szczecin, Poland
Axigent Technologies Group, Inc., Amarillo, Texas, USA
Ayonix, Inc., Japan
AZ Imballaggi S.r.l., Pontedera, Italy
Backblaze Inc, USA
Baraco Compañia Anónima, Venezuela
Big Rig Tax, USA
BioDec, Italy
bitName, Italy
BMR Genomics, Padova, Italy
B-Open Solutions srl, Italy
Braithwaite Technology Consultants Inc., Canada
BrandLive, Warsaw, Poland
calbasi.net web developers, Catalonia, Spain
Camping Porticciolo, Bracciano (Rome), Italy
CAROL - Cooperativa dos Agricultores da Região de Orlândia, Orlândia, São Paulo, Brazil
Centros de Desintoxicación 10, Grupo Dropalia, Alicante, Spain
Charles Retina Institute, Tennessee, USA
Chrysanthou & Chrysanthou LLC, Nicosia, Cyprus
CIE ADEMUR, Spain
CLICKPRESS Internet agency, Iserlohn, Germany
Code Enigma
Companion Travel LLC, Tula, Russia
Computación Integral, Chile
Computerisms, Yukon, Canada
CRX LTDA, Santiago, Chile
CyberCartes, Marseilles, France
DataPath Inc. - Software Solutions for Employee Benefit Plans, USA
Datasul Paranaense, Curitiba PR, Brazil
Internal IT, Dawan, France
DEQX, Australia
Diciannove Soc. Coop., Italy
DigitalLinx, Kansas City, MO, USA
Directory Wizards Inc, Delaware, USA
IT / Sales Department, Diversicom Corp of Riverview, USA
Dubiel Vitrum, Rabka, Rabka, Poland
Eactive, Wroclaw, Poland
eCompute Corporation, Japan
Agencja Interaktywna Empressia, Poznan, Poland
enbuenosaires.com, Buenos Aires, Argentina
Eniverse, Warsaw, Poland
Epigenomics, Berlin, Germany
Essential Systems, UK
Ethan Clark Air Conditioning, Houston, Texas, USA
EuroNetics Operation KB, Sweden
Bureau d'études et valorisations archéologiques Éveha, Limoges, France
Fahrwerk Kurierkollektiv UG, Berlin, Germany
Faunalia, AP, Italy
Flamingo Agency, Chicago, IL, USA
Freeside Internet Services, Inc., USA
Frogfoot Networks, South Africa
French Travel Organisation, Nantes, France
Fusion Marketing, Cracow, Poland
IT, Geodata Danmark, Denmark
GigaTux, London, UK
Globalways AG, Germany
GNUtransfer - GNU Hosting, Mar del Plata, Argentina
G.O.D. Gesellschaft für Organisation und Datenverarbeitung mbH, Germany
Goodwin Technology, Springvale, Maine, USA
GPLHost LLC, Wilmington, Delaware, USA; GPLHost UK LTD, London, United Kingdom;
GPLHost Networks PTE LTD, Singapore, Singapore
Hermes IT, Romania
HeureKA -- Der EDV Dienstleister, Austria
HostingChecker, Varna, Bulgaria
Hostsharing eG (Cooperation), Germany
Hotel in Rome, Foggia, Italy
Huevo Vibrador, Madrid, Spain
ICNS, X-tec GmbH, Germany
Instasent, Madrid, Spain
IT outsourcing department, InTerra Ltd., Russian Federation
IreneMilito.it, Cosenza, Italy
Iskon Internet d.d., Croatia
IT Lab, Foggia, Italy
Keliweb SRL, Cosenza, Italy
Kosmetyczny Outlet, KosmetycznyOutlet, Wroclaw, Poland
Kulturystyka.sklep.pl sp. z o.o, Kleszczow, Poland
Linden Lab, San Francisco, California, USA
Linode, USA
LinuxCareer.com, Rendek Online Media, Australia
Linuxlabs, Krakow, Poland
IT services, Lixper S.r.L., Italy
Logikon Labs, Athens, Greece
Logilab, Paris, France
Madkom Ltd. (Madkom Sp. z o.o.), Poland
Inmobiliaria Mar Menuda SA, Tossa de Mar, Spain
IT Services, Medhurst Communications Ltd, UK
Media Design, The Netherlands
Mediasecure, London, United Kingdom
Megaserwis S.C. Serwis laptopów i odzyskiwanie danych, Warsaw, Poland
Mendix Technology, Rotterdam, the Netherlands
Mobusi Mobile Performance Advertising, Los Angeles, California, USA
Molino Harinero Sula, S.A., Honduras
MrHostman, Montfoort, The Netherlands
MTTM La Fraternelle, France
System rezerwacji online Nakiedy, Gdansk, Poland
New England Ski Areas Council, USA
IT Ops, NG Communications bvba, Kortenberg, Belgium
nPulse Technologies, LLC, Charlottesville, VA, USA
Oktet Labs, Saint Petersburg, Russia
One-Eighty Out, Incorporated, Colorado Springs, Colorado, USA
Servicio Técnico, Open Tech S.L., Las Palmas, Spain
oriented.net Web Hosting, Basel, Switzerland
Osteo 7/7, Lausanne and Geneva, Switzerland
IT Department, OutletPC, Henderson, NV, USA
Parkyeri, Istanbul, Turkey
Pelagicore AB, Gothenburg, Sweden
Development and programming, www.perfumesyregalos.com, Spain
PingvinBolt webshop, Hungary
DeliveryHero Holding GmbH, IT System Operations, Berlin, Germany
Portantier Information Security, Buenos Aires, Argentina
Pouyasazan, Isfahan-Iran
PR International Ltd, Kings Langley, Hertfordshire, UK
PROBESYS, Grenoble, France
Agencja interaktywna Prodesigner, Szczecin, Poland
Questia Media
RatujLaptopa, Warsaw, Poland
The Register, Situation Publishing, UK
NOC, RG3.Net, Brazil
RHX Studio Associato, Italy
Roan Agencja Interaktywna, Gorzów Wielkopolski, Poland
RoDi, Zilina, Slovakia
Rubbettino Editore, Soveria Mannelli (CZ), Italy
Industrial Router Group, RuggedCom, Canada
RV-studio, Zielonka, Poland
S4 Hosting, Lithuania
Salt Edge Inc., Toronto, Canada
Ing. Salvatore Capolupo, Cosenza, Italy
Santiago Engenharia LTDA, Brazil
SCA Packaging Deutschland Stiftung & Co. KG, IS Department (HO-IS), Germany
Overstep SRL, Via Marco Simone, 80 00012 Guidonia Montecelio, Rome, Italy
ServerHost, Bucharest, Romania
Seznam.cz, a.s., Czech Republic
Shellrent Srl, Vicenza, Italy
Siemens
Information Technology Dep., SIITE SRLS, Lodi / Milano, Italy
SilverStorm Technologies, Pennsylvania, USA
Sinaf Seguros, Brazil
Skroutz S.A., Athens, Greece
SMS Masivos, Mexico
Auto Service Cavaliere, Rome, Italy
soLNet, s.r.o., Czech Republic
Som Tecnologia, Girona, Spain
Software Development, SOURCEPARK GmbH, Berlin, Germany
Computer Division, Stabilys Ltd, London, United Kingdom
Departamento de administración y servicios, SW Computacion, Argentina
Taxon Estudios Ambientales, SL, Murcia, Spain
ITW TechSpray, Amarillo, TX, USA
Tehran Raymand Co., Tehran, Iran
Telsystem, Telecomunicacoes e Sistemas, Brazil
The Story, Poland
TI Consultores, consulting technologies of information and businesses, Nicaragua
CA, Telegraaf Media ICT, Amsterdam, the Netherlands
T-Mobile Czech Republic a. s.
Nomura Technical Management Office Ltd., Kobe, Japan
TomasiL stone engravings, Italy
Tri-Art Manufacturing, Canada
EDI Team, Hewlett Packard do Brasil, São Paulo, Brazil
Trovalost, Cosenza, Italy
Taiwan uCRobotics Technology, Inc., Taoyuan, Taiwan (ROC)
United Drug plc, Ireland
Koodiviidakko Oy, Finland
Departamento de Sistemas, La Voz de Galicia, A Coruña, Spain
VPSLink, USA
Wavecon GmbH, Fürth, Germany
WTC Communications, Canada
Wyniki Lotto Web Page, Poznan, Poland
Software Development, XSoft Ltd., Bulgaria
Zomerlust Systems Design (ZSD), Cape Town, South Africa

Non-profit organizations (non-profit)

Bayour.com, Gothenburg, Sweden
Eye Of The Beholder BBS, Fidonet Technology Network, Catalonia/Spain
Dictionaries24.com, Poznan, Poland
Beyond Disability Inc., Pearcedale, Australia
E.O. Ospedali Galliera, Italy
ESRF (European Synchrotron Radiation Facility), Grenoble, France
F-Droid - the definitive source for free software Android apps
GreenNet Ltd., UK
GREFA, Grupo para la rehabilitación de la fauna autóctona y su hábitat, Majadahonda, Madrid,
Spain
GSI Helmholtzzentrum für Schwerionenforschung GmbH, Darmstadt, Germany
LINKES FORUM in Oberberg e.V., Gummersbach, Oberbergischer Kreis, Germany
MAG4 Piemonte, Torino, Italy
Mur.at - Verein zur Förderung von Netzwerkkunst, Graz, Austria
High School Technology Services, Washington DC USA
PRINT, Espace autogéré des Tanneries, France
Reware Soc. Coop - Impresa Sociale, Rome, Italy
Systems Support Group, The Wellcome Trust Sanger Institute, Cambridge, UK
SARA, Netherlands
Institute for Snow and Avalanche Research (SLF), Swiss Federal Institute for Forest, Snow and
Landscape Research (WSL), Davos, Switzerland
SRON: Netherlands Institute for Space Research
TuxFamily, France

Government institutions (government)

Agência Nacional de Vigilância Sanitária - ANVISA (Health Surveillance National Agency) -
Gerência de Infra-estrutura e Tecnologia (GITEC), Brazil
Directorate of Information Technology, Council of Europe, Strasbourg, France
Gerencia de Redes, Eletronorte S/A, Brazil
European Audiovisual Observatory, Strasbourg, France
Informatique, Financière agricole du Québec, Canada
Informática de Municípios Associados - IMA, Governo Municipal, Campinas/SP, Brazil
Bureau of Immigration, Philippines
Institute of Mathematical Sciences, Chennai, India
INSEE (National Institute for Statistics and Economic Studies), France
London Health Sciences Centre, Ontario, Canada
Lorient Agglomération, Lorient France
Ministry of Foreign Affairs, Dominican Republic
Procempa, Porto Alegre, RS, Brazil
SEMAD, Secretaria de Estado de Meio Ambiente e Desenvolvimento Sustentável, Goiânia/GO,
Brasil
Servizio Informativo Comunale, Comune di Riva del Garda, ITALY
St. Joseph's Health Care London, Ontario, Canada
State Nature Conservation Agency, Slovakia
Servicio de Prevencion y Lucha Contra Incendios Forestales, Ministerio de Produccion Provincia de
Rio Negro, Argentina
Vermont Department of Taxes, State of Vermont, USA
Zakład Gospodarowania Nieruchomościami w Dzielnicy Mokotów m.st. Warszawy, Warsaw, Poland

Please see the chronological history of how the Debian harassment and abuse culture evolved.

09 November, 2024 02:00PM

November 07, 2024

hackergotchi for Jonathan Dowland

Jonathan Dowland

John Carpenter's "The Fog"

'The Fog' 7 inch vinyl record

A gift from my brother. Coincidentally I’ve had John Carpenter’s “Halloween” echoing around my head for weeks: I’ve been deconstructing it and trying to learn to play it.

07 November, 2024 09:51AM

November 06, 2024

hackergotchi for Bits from Debian

Bits from Debian

Bits from the DPL

Dear Debian community,

this is Bits from DPL for October. In addition to a summary of my recent activities, I aim to include newsworthy developments within Debian that might be of interest to the broader community. I believe this provides valuable insights and fosters a sense of connection across our diverse projects. Also, I welcome your feedback on the format and focus of these Bits, as community input helps shape their value.

Ada Lovelace Day 2024

As outlined in my platform, I'm committed to increasing the diversity of Debian developers. I hope the recent article celebrating Ada Lovelace Day 2024, featuring interviews with women in Debian, will serve as inspiration for more women to join our community.

MiniDebConf Cambridge

This was my first time attending the MiniDebConf in Cambridge, hosted at the ARM building. I thoroughly enjoyed the welcoming atmosphere of both MiniDebCamp and MiniDebConf. It was wonderful to reconnect with people who hadn't made it to the last two DebConfs, and, as always, there was plenty of hacking, insightful discussions, and valuable learning.

If you missed the recent MiniDebConf, there's a great opportunity to attend the next one in Toulouse. It was recently decided to include a MiniDebCamp beforehand as well.

FTPmaster accepts MRs for DAK

At the recent MiniDebConf in Cambridge, I discussed potential enhancements for DAK to make life easier for both FTP Team members and developers. For those interested, the document "Hacking on DAK" provides guidance on setting up a local DAK instance and developing patches, which can be submitted as MRs.

As a perfectly random example of such improvements, the older MR "Add commands to accept/reject updates from a policy queue" might give you some inspiration.

At MiniDebConf, we compiled an initial list of features that could benefit both the FTP Team and the developer community. While I had preliminary discussions with the FTP Team about these items, not all ideas had consensus. I aim to open a detailed, public discussion to gather broader feedback and reach a consensus on which features to prioritize.

  • Accept+Bug report

Sometimes, packages are rejected not because of DFSG-incompatible licenses but due to other issues that could be resolved within an existing package (as discussed in my DebConf23 BoF, "Chatting with ftpmasters"[1]). During the "Meet the ftpteam" BoF (a log/transcription of the BoF can be found here), a new option was proposed for FTP Team members reviewing packages in NEW, pending until the MR gets accepted:

Accept + Bug Report

This option would allow a package to enter Debian (in unstable or experimental) with an automatically filed RC bug report. The RC bug would prevent the package from migrating to testing until the issues are addressed. To ensure compatibility with the BTS, which only accepts bug reports for existing packages, a delayed job (24 hours post-acceptance) would file the bug.
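The BTS accepts new bug reports as mail to submit@bugs.debian.org with pseudo-headers (Package:, Version:, Severity:) at the top of the message body. As a minimal sketch of what the delayed job's auto-filed RC bug could look like (the package name, wording, and helper are hypothetical; the actual DAK implementation is still to be designed):

```python
from email.message import EmailMessage

def rc_bug_mail(package: str, version: str, summary: str, body: str) -> EmailMessage:
    """Compose an RC ("serious") bug report in the format the Debian BTS
    expects: pseudo-headers at the top of the mail body."""
    msg = EmailMessage()
    msg["To"] = "submit@bugs.debian.org"
    msg["Subject"] = f"{package}: {summary}"
    msg.set_content(
        f"Package: {package}\n"
        f"Version: {version}\n"
        f"Severity: serious\n"
        "\n"
        f"{body}\n"
    )
    return msg

# hypothetical package accepted from NEW 24 hours earlier
mail = rc_bug_mail("examplepkg", "1.0-1", "issues noted during NEW review",
                   "These issues block migration to testing until addressed.")
```

A "serious" severity is what makes the bug release-critical and hence blocks testing migration.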

  • Binary name changes - for instance if done to experimental not via new

When binary package names change, currently the package must go through the NEW queue, which can delay the availability of updated libraries. Allowing such packages to bypass the queue could expedite this process. A configuration option to enable this bypass specifically for uploads to experimental may be useful, as it avoids requiring additional technical review for experimental uploads.

Previously, I believed the requirement for binary name changes to pass through NEW was due to a missing feature in DAK, possibly addressable via an MR. However, in discussions with the FTP Team, I learned this is a matter of team policy rather than technical limitation. I haven't found this policy documented, so it may be worth having a community discussion to clarify and reach consensus on how we want to handle binary name changes to get the MR sensibly designed.

  • Remove dependency tree

When a developer requests the removal of a package – whether entirely or for specific architectures – RM bugs must be filed for the package itself as well as for each package depending on it. It would be beneficial if the dependency tree could be automatically resolved, allowing either:

a) the DAK removal tooling to remove the entire dependency tree
   after prompting the bug report author for confirmation, or

b) the system to auto-generate corresponding bug reports for all
   packages in the dependency tree.

The latter option might be better suited for implementation in an MR for reportbug. However, given the possibility of large-scale removals (for example, targeting specific architectures), having appropriate tooling for this would be very beneficial.
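Whichever option is chosen, the core of it is resolving the reverse-dependency closure, which is a plain graph traversal. A sketch under simplified assumptions (package names hypothetical; the real archive also has virtual packages, architectures, and build-dependencies to consider):

```python
from collections import deque

def removal_closure(start: str, rdeps: dict[str, set[str]]) -> set[str]:
    """Given a map from package name to the set of packages depending on it
    (reverse dependencies), return every package that would have to be
    removed along with `start`."""
    seen = {start}
    queue = deque([start])
    while queue:
        pkg = queue.popleft()
        for dependent in rdeps.get(pkg, ()):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen

# toy archive: libfoo <- foo-utils <- foo-extras
rdeps = {"libfoo": {"foo-utils"}, "foo-utils": {"foo-extras"}}
print(sorted(removal_closure("libfoo", rdeps)))  # ['foo-extras', 'foo-utils', 'libfoo']
```

With option (a) the tooling would act on this set after confirmation; with option (b) it would file one RM bug per member of the set.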

In my opinion the proposed DAK enhancements aim to support both FTP Team members and uploading developers. I'd be very pleased if these ideas spark constructive discussion and inspire volunteers to start working on them--possibly even preparing to join the FTP Team.

On the topic of ftpmasters: an ongoing discussion with SPI lawyers is currently reviewing the non-US agreement established 22 years ago. Ideally, this review will lead to a streamlined workflow for ftpmasters, removing certain hurdles that were originally put in place due to legal requirements, which were updated in 2021.

Contacting teams

My outreach efforts to Debian teams have slowed somewhat recently. However, I want to emphasize that anyone from a packaging team is more than welcome to reach out to me directly. My outreach emails aren't following any specific order--just my own somewhat naïve view of Debian, which I'm eager to make more informed.

Recently, I received two very informative responses: one from the Qt/KDE Team, which thoughtfully compiled input from several team members into a shared document. The other was from the Rust Team, where I received three quick, helpful replies–one of which included an invitation to their upcoming team meeting.

Interesting readings on our mailing lists

I consider the following threads on our mailing lists interesting reading and would like to add some comments.

Sensible languages for younger contributors

Though the discussion on debian-devel about programming languages took place in September, I recently caught up with it. I strongly believe Debian must continue evolving to stay relevant for the future.

"Everything must change, so that everything can stay the same." -- Giuseppe Tomasi di Lampedusa, The Leopard

I encourage constructive discussions on integrating programming languages in our toolchain that support this evolution.

Concerns regarding the "Open Source AI Definition"

A recent thread on the debian-project list discussed the "Open Source AI Definition". This topic will impact Debian in the future, and we need to reach an informed decision. I'd be glad to see more perspectives in the discussions, particularly on finding a sensible consensus, understanding how FTP Team members view their delegated role, and considering whether their delegation might need adjustments for clarity on this issue.

Kind regards, Andreas.

06 November, 2024 11:00PM by Andreas Tille

hackergotchi for Jaldhar Vyas

Jaldhar Vyas

Making America Great Again


Justice For Peanut

Some interesting takeaways (With the caveat that exit polls are not completely accurate and we won't have the full picture for days.)

  • President Trump seems to have won the popular vote, which I believe no Republican has done since Reagan.

  • Apparently women didn't particularly care about abortion (CNN said only 14% considered it their primary issue). There is a noticeable divide, but it is single versus married, not women versus men per se.

  • Hispanics who are here legally voted against Hispanics coming here illegally. Latinx's didn't vote for anything because they don't exist.

  • The infamous MSG rally joke had no effect on the voting habits of Puerto Ricans.

  • Republicans have taken the Senate and if trends continue as they are will retain control of the House of Representatives.

  • President Biden may have actually been a better candidate than Border Czar Harris.

06 November, 2024 07:11AM

November 05, 2024

Nazi.Compare

Linus Torvalds' self-deprecating LKML CoC mail linked to Hitler's first writing: Gemlich letter

The first piece of anti-semitic writing attributed to Adolf Hitler is the Gemlich letter.

After World War I, Hitler remained in the German army. He was posted to an intelligence role in Munich. Adolf Gemlich wrote a letter about the Jewish question. Hitler's superior, Karl Mayr, asked Hitler to write the response.

The Gemlich letter was written on 16 September 1919, while Hitler was still an army officer, well before Hitler became Fuhrer.

One of the key points in the letter states that there should be a Code of Conduct (CoC) for Jewish people:

legally fight and remove the privileges enjoyed by the Jews as opposed to other foreigners living among us

So there would be one set of laws for everybody else and a second set of laws, or a CoC, for the Jews.

The other key point in the Gemlich letter is "behavior":

there lives amongst us a non-German, alien race, unwilling and indeed unable to shed its racial characteristics, its particular feelings, thoughts and ambitions

On 16 September 2018 Linus Torvalds posted the email announcing he has to submit himself to the code of conduct on the Linux Kernel Mailing List and mind his behavior.

Linus tells us he is taking a break, in other words, some of his privileges are on hold for a while.

Could the date of the email be a secret hint from Linus that he doesn't approve of the phenomenon of CoC gaslighting?

We saw the same thing in Afghanistan. When the Taliban took back control of the country, women had to change their behavior and become better at listening to the demands from their masters.

From	Linus Torvalds 
Date	Sun, 16 Sep 2018 12:22:43 -0700
Subject	Linux 4.19-rc4 released, an apology, and a maintainership note
[ So this email got a lot longer than I initially thought it would
get,  but let's start out with the "regular Sunday release" part ]

Another week, another rc.

Nothing particularly odd stands out on the technical side in the
kernel updates for last week - rc4 looks fairly average in size for
this stage in the release cycle, and all the other statistics look
pretty normal too.

We've got roughly two thirds driver fixes (gpu and networking look to
be the bulk of it, but there's smaller changes all over in various
driver subsystems), with the rest being the usual mix: core
networking, perf tooling updates, arch updates, Documentation, some
filesystem, vm and minor core kernel fixes.

So it's all fairly small and normal for this stage.  As usual, I'm
appending the shortlog at the bottom for people who want to get an
overview of the details without actually having to go dig in the git
tree.

The one change that stands out and merits mention is the code of
conduct addition...

[ And here comes the other, much longer, part... ]

Which brings me to the *NOT* normal part of the last week: the
discussions (both in public mainly on the kernel summit discussion
lists and then a lot in various private communications) about
maintainership and the kernel community.  Some of that discussion came
about because of me screwing up my scheduling for the maintainer
summit where these things are supposed to be discussed.

And don't get me wrong.  It's not like that discussion itself is in
any way new to this week - we've been discussing maintainership and
community for years. We've had lots of discussions both in private and
on mailing lists.  We have regular talks at conferences - again, both
the "public speaking" kind and the "private hallway track" kind.

No, what was new last week is really my reaction to it, and me being
perhaps introspective (you be the judge).

There were two parts to that.

One was simply my own reaction to having screwed up my scheduling of
the maintainership summit: yes, I was somewhat embarrassed about
having screwed up my calendar, but honestly, I was mostly hopeful that
I wouldn't have to go to the kernel summit that I have gone to every
year for just about the last two decades.

Yes, we got it rescheduled, and no, my "maybe you can just do it
without me there" got overruled.  But that whole situation then
started a whole different kind of discussion.  And kind of
incidentally to that one, the second part was that I realized that I
had completely mis-read some of the people involved.

This is where the "look yourself in the mirror" moment comes in.

So here we are, me finally on the one hand realizing that it wasn't
actually funny or a good sign that I was hoping to just skip the
yearly kernel summit entirely, and on the other hand realizing that I
really had been ignoring some fairly deep-seated feelings in the
community.

It's one thing when you can ignore these issues.  Usually it’s just
something I didn't want to deal with.

This is my reality.  I am not an emotionally empathetic kind of person
and that probably doesn't come as a big surprise to anybody.  Least of
all me.  The fact that I then misread people and don't realize (for
years) how badly I've judged a situation and contributed to an
unprofessional environment is not good.

This week people in our community confronted me about my lifetime of
not understanding emotions.  My flippant attacks in emails have been
both unprofessional and uncalled for.  Especially at times when I made
it personal.  In my quest for a better patch, this made sense to me.
I know now this was not OK and I am truly sorry.

The above is basically a long-winded way to get to the somewhat
painful personal admission that hey, I need to change some of my
behavior, and I want to apologize to the people that my personal
behavior hurt and possibly drove away from kernel development
entirely.

I am going to take time off and get some assistance on how to
understand people’s emotions and respond appropriately.

Put another way: When asked at conferences, I occasionally talk about
how the pain-points in kernel development have generally not been
about the _technical_ issues, but about the inflection points where
development flow and behavior changed.

These pain points have been about managing the flow of patches, and
often been associated with big tooling changes - moving from making
releases with "patches and tar-balls" (and the _very_ painful
discussions about how "Linus doesn't scale" back 15+ years ago) to
using BitKeeper, and then to having to write git in order to get past
the point of that no longer working for us.

We haven't had that kind of pain-point in about a decade.  But this
week felt like that kind of pain point to me.

To tie this all back to the actual 4.19-rc4 release (no, really, this
_is_ related!) I actually think that 4.19 is looking fairly good,
things have gotten to the "calm" period of the release cycle, and I've
talked to Greg to ask him if he'd mind finishing up 4.19 for me, so
that I can take a break, and try to at least fix my own behavior.

This is not some kind of "I'm burnt out, I need to just go away"
break.  I'm not feeling like I don't want to continue maintaining
Linux. Quite the reverse.  I very much *do* want to continue to do
this project that I've been working on for almost three decades.

This is more like the time I got out of kernel development for a while
because I needed to write a little tool called "git".  I need to take
a break to get help on how to behave differently and fix some issues
in my tooling and workflow.

And yes, some of it might be "just" tooling.  Maybe I can get an email
filter in place so at when I send email with curse-words, they just
won't go out.  Because hey, I'm a big believer in tools, and at least
_some_ problems going forward might be improved with simple
automation.

I know when I really look “myself in the mirror” it will be clear it's
not the only change that has to happen, but hey...  You can send me
suggestions in email.

I look forward to seeing you at the Maintainer Summit.

                Linus

05 November, 2024 05:00PM

November 04, 2024

Sven Hoexter

Google CloudDNS HTTPS Records with ipv6hint

I naively provisioned an HTTPS record at Google CloudDNS like this via terraform:

resource "google_dns_record_set" "testv6" {
    name         = "testv6.some-domain.example."
    managed_zone = "some-domain-example"
    type         = "HTTPS"
    ttl          = 3600
    rrdatas      = ["1 . alpn=\"h2\" ipv4hint=\"198.51.100.1\" ipv6hint=\"2001:DB8::1\""]
}

This results in a permanent diff because the Google CloudDNS API seems to parse the record content and store the ipv6hint expanded (removing the :: notation) and in all lowercase as 2001:db8:0:0:0:0:0:1. Thus, to fix the permanent diff, we have to use it like this:

resource "google_dns_record_set" "testv6" {
    name = "testv6.some-domain.example."
    managed_zone = "some-domain-example"
    type = "HTTPS"
    ttl = 3600
    rrdatas = ["1 . alpn=\"h2\" ipv4hint=\"198.51.100.1\" ipv6hint=\"2001:db8:0:0:0:0:0:1\""]
}

Guess I should be glad that they already support HTTPS records natively, and not bicker too much about the implementation details.
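The canonical form the API stores can be reproduced up front, so records can be written pre-normalized. A minimal Python sketch using the stdlib `ipaddress` module, assuming the API simply expands `::` and lowercases each group without zero-padding, as observed above:

```python
import ipaddress

def clouddns_ipv6hint(addr: str) -> str:
    # Expand '::', lowercase, and drop leading zeros per group --
    # the form the CloudDNS API appears to store (e.g. 2001:db8:0:0:0:0:0:1).
    exploded = ipaddress.IPv6Address(addr).exploded  # zero-padded groups
    return ":".join(format(int(group, 16), "x") for group in exploded.split(":"))

print(clouddns_ipv6hint("2001:DB8::1"))  # 2001:db8:0:0:0:0:0:1
```

Running the rrdatas value through a helper like this before provisioning would avoid the permanent diff without hand-expanding addresses.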

04 November, 2024 01:13PM

November 03, 2024

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Ultimate rules as a service

Since WFDF changed their ultimate rules web site to be less-than-ideal (in the name of putting everything into Wordpress…), I made my own, at urules.org. It was a fun journey; I've never fiddled with PWAs before, and I was a bit surprised how low-level it all was. I assumed that since my page is just a bunch of HTML files and ~100 lines of JS, I could just bundle that up—but no, that is something they expect a framework to do for you.

The only primitive you get is seemingly that you can fire up your own background service worker (JS running in its own, locked-down context) and that gets to peek at every HTTP request done and possibly intercept it. So you can use a Web Cache (seemingly a separate concept from web local storage?), insert stuff into that, and then query it to intercept requests. It doesn't feel very elegant, perhaps?

It is a bit neat that I can use this to make my own bundling, though. All the pages and images (painfully converted to SVG to save space and re-flow for mobile screens, mostly by simply drawing over bitmaps by hand in Inkscape) are stuck into a JSON dictionary, compressed using the slowest compressor I could find and then downloaded as a single 159 kB bundle. It makes the site actually sort of weird to navigate; since it pretty quickly downloads the bundle in the background, everything goes offline and the speed of loading new pages just feels… off somehow. As if it's not a Serious Web Page if there's no load time.

Of course, this also means that I couldn't cache PNGs, because have you ever tried to have non-UTF-8 data in a JSON sent through N layers of JavaScript? :-)
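The usual workaround for binary data in JSON is base64, at the cost of roughly a third more bytes on the wire. A small illustration (the PNG bytes and the path key are just stand-ins):

```python
import base64
import json

png_bytes = bytes(range(256))  # stand-in for real PNG data; raw bytes aren't valid JSON
encoded = base64.b64encode(png_bytes).decode("ascii")
bundle = json.dumps({"images/logo.png": encoded})

# base64 maps every 3 input bytes to 4 output characters: ~33% overhead
assert len(encoded) == 344  # ceil(256 / 3) * 4

# Round-tripping through JSON recovers the original bytes exactly
roundtrip = base64.b64decode(json.loads(bundle)["images/logo.png"])
assert roundtrip == png_bytes
```

For a bundle that is already squeezed through a slow compressor, that overhead largely compresses away, but the decode step on every request is still extra work.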

03 November, 2024 10:48AM

hackergotchi for Guido Günther

Guido Günther

Free Software Activities October 2024

Another short status update of what happened on my side last month. Besides a phosh bugfix release, improving text input and selection was a prevalent pattern again, resulting in improvements in the compositor, the OSK, and some apps.

phosh

  • Install gir (MR). Needed for e.g. Debian to properly package the Rust bindings.
  • Try harder to find an app icon when showing notifications (MR)
  • Add a simple Pomodoro timer plugin (MR)
  • Small screenshot manager fixes (MR)
  • Tweak portals configuration (MR)
  • Consistent focus style on lock screen and settings (MR). Improves the visual appearance as the dotted focus frame doesn't match our otherwise colored focus frames
  • Don't focus buttons in settings (MR). Improves the visual appearance as attention isn't drawn to the button focus.
  • Close Phosh's settings when activating a Settings panel (MR)

phoc

  • Improve cursor and cursor theme handling, hide mouse pointer by default (MR)
  • Don't submit empty preedit (MR)
  • Fix flickering selection bubbles in GTK4's text input fields (MR)
  • Backport two more fixes and release 0.41.1 (MR)

phosh-mobile-settings

  • Allow to select default text completer (MR, MR)
  • Don't crash when we fail to load a pref plugin (MR)

libphosh-rs

  • Update with current gir and allow to use status pages (MR)
  • Expose screenshot manager and build without warnings (MR). (Improved further by a follow up MR from Sam)
  • Fix clippy warnings and add clippy to CI (MR)

phosh-osk-stub

  • presage: Always set predictors (MR). Avoids surprises with unwanted predictors.
  • Install completer information (MR)
  • Handle overlapping touch events (MR). This should improve fast typing.
  • Allow plain ctrl and alt in the shortcuts bar (MR)
  • Use Adwaita background color to make the OSK look more integrated (MR)
  • Use StyleManager to support accent colors (MR)
  • Fix emoji section selection in RTL locales (MR)
  • Don't submit empty preedit (MR). Helps to better preserve text selections.

phosh-osk-data

  • Add scripts to build word corpus from Wikipedia data (MR) See here for the data.

xdg-desktop-portal-phosh

  • Release 0.42~rc1 (MR)
  • Fix HighContrast (MR)

Debian

  • Collect some of the QCom workarounds in a package (MR). This is not meant to go into Debian proper but it's nicer than doing all the mods by hand and forgetting which files were modified.
  • q6voiced: Fix service configuration (MR)
  • chatty: Enable clock test again (MR), and then unbreak translations (MR)
  • phosh: Ship gir for libphosh-rs (MR)
  • phoc: Backport input method related fix (MR)
  • Upload initial package of phosh-osk-data: Status in NEW
  • Upload initial package of xdg-desktop-portal-phosh: Status in NEW
  • Backport phosh-osk-stub abbrev fix (MR)
  • phoc: Update to 0.42.1 (MR)
  • mobile-tweaks: Enable zram on Librem 5 and PP (MR)

ModemManager

  • Some further work on the Cell Broadcast to address comments (MR)

Calls

  • Further improve daemon mode (MR) (mentioned last month already but got even simpler)

GTK

  • Handle Gtk{H,V}Separator when migrating UI files to GTK4 (MR)

feedbackd

  • Modernize README a bit (MR)

Chatty

  • Use special event for SMS (MR)
  • Another QoL fix when using OSK (MR)
  • Fix printing time diffs on 32bit architectures (MR)

libcmatrix

  • Use endpoints for authenticated media (MR). Needed to support v1.11 servers.

phosh-ev

  • Switch to GNOME 47 runtime (MR)

git-buildpackage

  • Don't use deprecated pkg-resources (MR)

Unified push specification

  • Expand on DBus activation a bit (MR)

swipeGuess

  • Small build improvement and mention phosh-osk-stub (Commit)

wlr-clients

  • Fix -o option and add help output (MR)

iotas (Note taking app)

  • Don't take focus with header bar buttons (MR). Makes typing faster (as the OSK won't hide) and thus using the header bar easier

Flare (Signal app)

  • Don't take focus when sending messages, adding emojis or attachments (MR). Makes typing faster (as the OSK won't hide) and thus using those buttons easier

xdg-desktop-portal

  • Use categories that work for both xdg-spec and the portal (MR)

Reviews

This is not code by me but reviews of other people's code. The list is fairly incomplete; I hope to improve on this in the upcoming months:

  • phosh-tour: add first login mode (MR)
  • phosh: Animate swipe closing notifications (MR)
  • iio-sensor-proxy: Report correct value on claim (MR)
  • iio-sensor-proxy: face-{up,down} (MR)
  • phosh-mobile-settings: Squeekboad scaling (MR)
  • libcmatrix: Misc cleanups/fixes (MR)
  • phosh: Notification separator improvements (MR)
  • phosh: Accent colors (MR)

Help Development

If you want to support my work see donations. This includes a list of hardware we want to improve support for. Thanks a lot to all current and past donors.

03 November, 2024 10:17AM

hackergotchi for Junichi Uekawa

Junichi Uekawa

Doing more swimming in everyday life for the past few months.

Doing more swimming in everyday life for the past few months. Seems like I am keeping that up.

03 November, 2024 09:24AM by Junichi Uekawa

November 02, 2024

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Rcpp 1.0.13-1 on CRAN: Hot Fix

rcpp logo

A hot-fix release 1.0.13-1, consisting of two small PRs relative to the last regular CRAN release 1.0.13, just arrived on CRAN. When we prepared 1.0.13, we included a change related to the ‘tightening’ of the C API of R itself. Sadly, we pinned an expected change to ‘comes with next (minor) release 4.4.2’ rather than now ‘next (normal aka major) release 4.5.0’. And now that R 4.4.2 is out (as of two days ago) we accidentally broke building against the header file with that check. Whoops. Bugs happen, and we are truly sorry—but this is now addressed in 1.0.13-1.

The normal (bi-annual) release cycle will resume with 1.0.14 slated for January. As you can see from the NEWS file of the development branch, we have a number of changes coming. You can safely access that release candidate version, either off the default branch at github or via r-universe artifacts.

The list below details all changes, as usual. The only other change concerns the now-mandatory use of Authors@R.

Changes in Rcpp release version 1.0.13-1 (2024-11-01)

  • Changes in Rcpp API:

    • Use read-only VECTOR_PTR and STRING_PTR only with R 4.5.0 or later (Kevin in #1342 fixing #1341)
  • Changes in Rcpp Deployment:

    • Authors@R is now used in DESCRIPTION as mandated by CRAN

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments, etc. should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues).

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

02 November, 2024 09:13PM

Russell Coker

More About the Yoga Gen3

Two months ago I bought a Thinkpad X1 Yoga Gen3 [1]. I’m still very happy with it, the screen is a great improvement over the FullHD screen on my previous Thinkpad. I have yet to discover what’s the best resolution to have on a laptop if price isn’t an issue, but it’s at least 1440p for a 14″ display, that’s 210DPI. The latest Thinkpad X1 Yoga is the 7th gen and has up to 3840*2400 resolution on the internal display for 323DPI. Apple apparently uses the term “Retina Display” to mean something in the range of 250DPI to 300DPI, so my current laptop is below “Retina” while the most expensive new Thinkpads are above it.

I did some tests on external displays and found that this Thinkpad along with a Dell Latitude of the same form factor and about the same age can only handle one 4K display on a Thunderbolt dock and one on HDMI. On Reddit u/Carlioso1234 pointed out this specs page which says it supports a maximum of 3 displays including the built in TFT [2]. The Thunderbolt/USB-C connection has a maximum resolution of 5120*2880 and the HDMI port has a maximum of 4K. The latest Yoga can support four displays total which means 2*5K over Thunderbolt and one 4K over HDMI. It would be nice if someone made a 8000*2880 ultrawide display that looked like 2*5K displays when connected via Thunderbolt. It would also be nice if someone made a 32″ 5K display, currently they all seem to be 27″ and I’ve found that even for 4K resolution 32″ is better than 27″.

With the typical configuration of Linux and the BIOS, the Yoga Gen3 will have its touch screen stop working after suspend. I have confirmed this for stylus use, but as the finger-touch functionality is broken I couldn't confirm it for finger input. On r/thinkpad u/p9k told me how to fix this problem [3]. I had to set the BIOS to Win 10 Sleep aka Hybrid sleep and then put the following in /etc/systemd/system/thinkpad-wakeup-config.service :

# https://www.reddit.com/r/thinkpad/comments/1blpy20/comment/kw7se2l/?context=3

[Unit]
Description=Workarounds for sleep wakeup source for Thinkpad X1 Yoga 3
After=sysinit.target
After=systemd-modules-load.service

[Service]
Type=oneshot
ExecStart=/bin/sh -c "echo 'enabled' > /sys/devices/platform/i8042/serio0/power/wakeup"
ExecStart=/bin/sh -c "echo 'enabled' > /sys/devices/platform/i8042/serio1/power/wakeup"
ExecStart=/bin/sh -c "echo 'LID' > /proc/acpi/wakeup"

[Install]
WantedBy=multi-user.target

Now it works fine, for stylus at least. I still get kernel error messages like the following which don’t seem to cause problems:

wacom 0003:056A:5146.0005: wacom_idleprox_timeout: tool appears to be hung in-prox. forcing it out.

When it wasn’t working I got the above but also kernel error messages like:

wacom 0003:056A:5146.0005: wacom_wac_queue_insert: kfifo has filled, starting to drop events

This change affected the way suspend etc operate. Now when I connect the laptop to power it will leave suspend mode. I’ve configured KDE to suspend when the lid is closed and there’s no monitor connected.

02 November, 2024 08:05AM by etbe

Moving Between Devices

I previously wrote about the possibility of transferring work between devices as an alternative to “convergence” (using a phone or tablet as a desktop) [1]. This idea has been implemented in some commercial products already.

MrWhosTheBoss made a good YouTube video reviewing recent Huawei products [2]. At 2:50 in that video he shows how you can link a phone and tablet, control one from the other, drag and drop of running apps and files between phone and tablet, mirror the screen between devices, etc. He describes playing a video on one device and having it appear on the other, I hope that it actually launches a new instance of the player app as the Google Chromecast failed in the market due to remote display being laggy. At 7:30 in that video he starts talking about the features that are available when you have multiple Huawei devices, starting with the ability to move a Bluetooth pairing for earphones to a different device.

At 16:25 he shows what Huawei is doing to get apps going, including allowing apk files to be downloaded and creating what they call “Quick Apps”, which are instances of a web browser configured to just use one web site and make it look like a discrete app. We need something like this for FOSS phone distributions – does anyone know of a browser that’s good for it?

Another thing that we need is an easy way of transferring open web pages between systems. Chrome allows sending pages between systems but it’s proprietary, limited to Chrome only, and also takes an unreasonable amount of time. KDEConnect allows sharing clipboard contents, which can be used to send URLs that can then be pasted into a browser, but the process of copying the URL, sending it via KDEConnect, and pasting it into the other device is unreasonably slow. The design of Chrome with a “Send to your devices” menu option from the tab bar is OK. But ideally we need a “Send to device” for all tabs of a window as well, and we need it to run on free software and support using your own server, not someone else’s server (AKA “the cloud”). Some of the KDEConnect functionality but using a server rather than a direct connection over the same Wifi network (or LAN if bridged to Wifi) would be good.

What else do we need?

02 November, 2024 08:03AM by etbe

What is a Workstation?

I recently had someone describe a Mac Mini as a “workstation”, which I strongly disagree with. The Wikipedia page for Workstation [1] says that it’s a type of computer designed for scientific or technical use, for a single user, and would commonly run a multi-user OS.

The Mac Mini runs a multi-user OS and is designed for a single user. The issue is whether it is for “scientific or technical use”. A Mac Mini is a nice little graphical system which could be used for CAD and other engineering work. But I believe that the low capabilities of the system and lack of expansion options make it less of a workstation.

The latest versions of the Mac Mini (to be officially launched next week) have up to 64G of RAM and up to 8T of storage. That is quite decent compute power for a small device. For comparison the HP ML 110 Gen9 workstation I’m currently using was released in 2021 and has 256G of RAM and has 4 * 3.5″ SAS bays so I could easily put a few 4TB NVMe devices and some hard drives larger than 10TB. The HP Z640 workstation I have was released in 2014 and has 128G of RAM and 4*2.5″ SATA drive bays and 2*3.5″ SATA drive bays. Previously I had a Dell PowerEdge T320 which was released in 2012 and had 96G of RAM and 8*3.5″ SAS bays.

In CPU and GPU power the recent Mac Minis will compare well to my latest workstations. But they compare poorly to workstations from as much as 12 years ago for RAM and storage. Which is more important depends on the task, if you have to do calculations on 80G of data with lots of scans through the entire data set then a system with 64G of RAM will perform very poorly and a system with 96G and a CPU less than half as fast will perform better. A Dell PowerEdge T320 from 2012 fully loaded with 192G of RAM will outperform a modern Mac Mini on many tasks due to this and the T420 supported up to 384G.

Another issue is generic expansion options. I expect a workstation to have a number of PCIe slots free for GPUs and other devices. The T320 I used to use had a PCIe power cable for a power hungry GPU and I think all the T320 and T420 models with high power PSUs supported that.

I think that a usable definition of a “workstation” is a system having a feature set that is typical of servers (ECC RAM, lots of storage for RAID, maybe hot-swap storage devices, maybe redundant PSUs, and lots of expansion options) while also being suitable for running on a desktop or under a desk. The Mac Mini is nice for running on a desk but that’s the only workstation criteria it fits. I think that ECC RAM should be a mandatory criteria and any system without it isn’t a workstation. That excludes most Apple hardware. The Mac Mini is more of a thin-client than a workstation.

My main workstation with ECC RAM could run 3 VMs that each have more RAM than the largest Mac Mini that will be sold next week.

If 32G of non-ECC RAM is considered enough for a “workstation” then you could get an Android phone that counts as a workstation – and it will probably cost less than a Mac Mini.

02 November, 2024 05:03AM by etbe

November 01, 2024

hackergotchi for Colin Watson

Colin Watson

Free software activity in October 2024

Almost all of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay.

Ansible

I noticed that Ansible had fallen out of Debian testing due to autopkgtest failures. This seemed like a problem worth fixing: in common with many other people, we use Ansible for configuration management at Freexian, and it probably wouldn’t make our sysadmins too happy if they upgraded to trixie after its release and found that Ansible was gone.

The problems here were really just slogging through test failures in both the ansible-core and ansible packages, but their test suites are large and take a while to run so this took some time. I was able to contribute a few small fixes to various upstreams in the process:

This should now get back into testing tomorrow.

OpenSSH

Martin-Éric Racine reported that ssh-audit didn’t list the ext-info-s feature as being available in Debian’s OpenSSH 9.2 packaging in bookworm, contrary to what OpenSSH upstream said on their specifications page at the time. I spent some time looking into this and realized that upstream was mistakenly saying that implementations of ext-info-c and ext-info-s were added at the same time, while in fact ext-info-s was added rather later. ssh-audit now has clearer output, and the OpenSSH maintainers have corrected their specifications page.

I looked into a report of an ssh failure in certain cases when using GSS-API key exchange (which is a Debian patch). Once again, having integration tests was a huge win here: the affected scenario is quite a fiddly one, but I was able to set it up in the test, and thereby make sure it doesn’t regress in future. It still took me a couple of hours to get all the details right, but in the past this sort of thing took me much longer with a much lower degree of confidence that the fix was correct.

On upstream’s advice, I cherry-picked some key exchange fixes needed for big-endian architectures.

Python team

I packaged python-evalidate, needed for a new upstream version of buildbot.

The Python 3.13 transition rolls on. I fixed problems related to it in htmlmin, humanfriendly, postgresfixture (contributed upstream), pylint, python-asyncssh (contributed upstream), python-oauthlib, python3-simpletal, quodlibet, zope.exceptions, and zope.interface.

A trickier Python 3.13 issue involved the cgi module. Years ago I ported zope.publisher to the multipart module because cgi.FieldStorage was broken in some situations, and as a result I got a recommendation into Python’s “dead batteries” PEP 594. Unfortunately there turns out to be a name conflict between multipart and python-multipart on PyPI; python-multipart upstream has been working to disentangle this, though we still need to work out what to do in Debian. All the same, I needed to fix python-wadllib and multipart seemed like the best fit; I contributed a port upstream and temporarily copied multipart into Debian’s python-wadllib source package to allow its tests to pass. I’ll come back and fix this properly once we sort out the multipart vs. python-multipart packaging.

tzdata moved some timezone definitions to tzdata-legacy, which has broken a number of packages. I added tzdata-legacy build-dependencies to alembic and python-icalendar to deal with this in those packages, though there are still some other instances of this left.

I tracked down an nltk regression that caused build failures in many other packages.

I fixed Rust crate versioning issues in pydantic-core, python-bcrypt, and python-maturin (mostly fixed by Peter Michael Green and Jelmer Vernooij, but it needed a little extra work).

I fixed other build failures in entrypoints, mayavi2, python-pyvmomi (mostly fixed by Alexandre Detiste, but it needed a little extra work), and python-testing.postgresql (ditto).

I fixed python3-simpletal to tolerate future versions of dh-python that will drop their dependency on python3-setuptools.

I fixed broken symlinks in python-treq.

I removed (build-)depends on python3-pkg-resources from alembic, autopep8, buildbot, celery, flufl.enum, flufl.lock, python-public, python-wadllib (contributed upstream), pyvisa, routes, vulture, and zodbpickle (contributed upstream).

I upgraded astroid, asyncpg (fixing a Python 3.13 failure and a build failure), buildbot (noticing an upstream test bug in the process), dnsdiag, frozenlist, netmiko (fixing a Python 3.13 failure), psycopg3, pydantic-settings, pylint, python-asyncssh, python-bleach, python-btrees, python-cytoolz, python-django-pgtrigger, python-django-test-migrations, python-gssapi, python-icalendar, python-json-log-formatter, python-pgbouncer, python-pkginfo, python-plumbum, python-stdlib-list, python-tokenize-rt, python-treq (fixing a Python 3.13 failure), python-typeguard, python-webargs (fixing a build failure), pyupgrade, pyvisa, pyvisa-py (fixing a Python 3.13 failure), toolz, twisted, vulture, waitress (fixing CVE-2024-49768 and CVE-2024-49769), wtf-peewee, wtforms, zodbpickle, zope.exceptions, zope.interface, zope.proxy, zope.security, and zope.testrunner to new upstream versions.

I tried to fix a regression in python-scruffy, but I need testing feedback.

I requested removal of python-testing.mysqld.

01 November, 2024 12:19PM by Colin Watson

Russ Allbery

Review: Overdue and Returns

Review: Overdue and Returns, by Mark Lawrence

Publisher: Mark Lawrence
Copyright: June 2023
Copyright: February 2024
ASIN: B0C9N51M6Y
ASIN: B0CTYNQGBX
Format: Kindle
Pages: 99

Overdue is a stand-alone novelette in the Library Trilogy universe. Returns is a collection of two stories, the novelette "Returns" and the short story "About Pain." All of them together are about the length of a novella, so I'm combining them into a single review.

These are ancillary stories in the same universe as the novels, but not necessarily in the same timeline. (Trying to fit "About Pain" into the novel timeline will give you a headache and I am choosing to read it as author's fan fiction.) I'm guessing they're part of the new fad for releasing short fiction on Amazon to tide readers over and maintain interest between books in a series, a fad about which I have mixed feelings. Given the total lack of publisher metadata in either the stories or on Amazon, I'm assuming they were self-published even though the novels are published by Ace, but I don't know that for certain.

There are spoilers for The Book That Wouldn't Burn, so don't read these before that novel. There are no spoilers for The Book That Broke the World, and I don't think the reading order would matter.

I found all three of these stories irritating and thuddingly trite. "Returns" is probably the best of the lot in terms of quality of storytelling, but I intensely dislike the structural implications of the nature of the book at its center and am therefore hoping that it's non-canonical.

I would not waste your time with these even if you are enjoying the novels.

"Overdue": Three owners of the same bookstore at different points in time have encounters with an albino man named Yute who is on a quest. One of the owners is trying to write a book, one of them is older, depressed, and closed off, and one of them has regular conversations with her sister's ghost. The nature of the relationship between the three is too much of a spoiler, but it involves similar shenanigans as The Book That Wouldn't Burn.

Lawrence uses my least favorite resolution of benign ghost stories. The story tries very hard to sell it as a good thing, but I thought it was cruel and prefer fantasy that rejects both branches of that dilemma. Other than that, it was fine, I guess, although the moral was delivered with all of the subtlety of the last two minutes of a Saturday morning cartoon. (5)

"Returns": Livira returns a book deep inside the library and finds that she can decipher it, which leads her to a story about Yute going on a trip to recover another library book. This had a lot of great Yute lines, plus I always like seeing Livira in exploration mode. The book itself is paradoxical in a causality-destroying way, which is handwaved away as literal magic. I liked this one the best of the three stories, but I hope the world-building of the main series does not go in this direction and I'm a little afraid it might. (6)

"About Pain": A man named Holden runs into a woman named Clovis at the gym while carrying a book titled Catcher that his dog found and that he's returning to the library. I thoroughly enjoy Clovis and was happy to read a few more scenes about her. Other than that, this was fine, I guess, although it is a story designed to deliver a point and that point is one that appears in every discussion of classics and re-reading that has ever happened on the Internet. Also, I know I'm being grumpy, but Lawrence's puns with authors and character names are chapter-epigraph amusing but not short-story-length funny. Yes, yes, his name is Holden, we get it. (5)

Rating: 5 out of 10

01 November, 2024 04:11AM

Paul Wise

FLOSS Activities October 2024

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Sponsors

All work was done on a volunteer basis.

01 November, 2024 01:10AM

October 31, 2024

Gunnar Wolf

Do you have a minute..?

Do you have a minute...?

…to talk about the so-called “Intellectual Property”?

31 October, 2024 10:07PM

October 30, 2024

Dirk Eddelbuettel

gcbd 0.2.7 on CRAN: More Mere Maintenance

Another pure maintenance release 0.2.7 of the gcbd package is now on CRAN. The gcbd package provides a benchmarking framework for LAPACK and BLAS operations (as these libraries can be exchanged in a plug-and-play sense on suitable OSs) and records the results in a local database. Its original motivation was to also compare against GPU-based operations. However, it is challenging to keep CUDA working, and packages on CRAN providing the basic functionality appear to come and go, so testing the GPU feature can be difficult. The main point of gcbd is now to actually demonstrate that ‘yes indeed’ we can just swap BLAS/LAPACK libraries without any change to R or R packages. The ‘configure / rebuild R for xyz’ advice often seen, with ‘xyz’ being Goto or MKL, is simply wrong: you really can just swap them (on proper operating systems, and R configs – see the package vignette for more). But no matter how often we aim to correct this record, it invariably raises its head another time.
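
A hedged illustration of the "just swap them" point: on Debian-based systems the BLAS/LAPACK implementation is selected through the alternatives system, so no R rebuild is involved. The package and link names below follow Debian's multiarch conventions and are not taken from the post:

```shell
# List the registered BLAS implementations (reference BLAS, OpenBLAS,
# ATLAS, BLIS, ... depending on what is installed)
update-alternatives --list libblas.so.3-x86_64-linux-gnu

# Interactively switch the system-wide BLAS and LAPACK; R picks up the
# change on its next start, with no reconfiguration or rebuild.
sudo update-alternatives --config libblas.so.3-x86_64-linux-gnu
sudo update-alternatives --config liblapack.so.3-x86_64-linux-gnu
```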

This release accommodates a CRAN change request as we were referencing the (now only suggested) package gputools. As hinted in the previous paragraph, it was once on CRAN but is not right now so we adjusted our reference.

CRANberries also provides a diffstat report for the latest release.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

30 October, 2024 01:10AM

October 29, 2024

Sven Hoexter

GKE version 1.31.1-gke.1678000+ is a baddy

Just a "warn your brothers" for people foolish enough to use GKE and run on the Rapid release channel.

Update from version 1.31.1-gke.1146000 to 1.31.1-gke.1678000 is causing trouble whenever NetworkPolicy resources and a readinessProbe (or health check) are configured. As a workaround we started to remove the NetworkPolicy resources. E.g. when kustomize is involved with a patch like this:

- patch: |-
    $patch: delete
    apiVersion: "networking.k8s.io/v1"
    kind: NetworkPolicy
    metadata:
        name: dummy
  target:
    kind: NetworkPolicy

We tried to update to the latest version - right now 1.31.1-gke.2008000 - which did not change anything. Behaviour is pretty much erratic, sometimes it still works and sometimes the traffic is denied. It also seems that there is some relevant fix in 1.31.1-gke.1678000 because that is now the oldest release of 1.31.1 which I can find in the regular and rapid release channels. The last known good version 1.31.1-gke.1146000 is not available to try a downgrade.

29 October, 2024 11:28AM

October 28, 2024

Thomas Lange

30.000 FAIme jobs created in 7 years

The number of FAIme jobs has reached 30.000. Yeah!
At the end of this November the FAIme web service for building customized ISOs turns 7 years old. It had reached 10.000 jobs in March 2021 and 20.000 jobs in June 2023. A nice increase in usage.

Here are some statistics for the jobs processed in 2024:

Type of jobs

3%     cloud image
11%     live ISO
86%     install ISO

Distribution

2%     bullseye
8%     trixie
12%     ubuntu 24.04
78%     bookworm

Misc

  • 18%   used a custom postinst script
  • 11%   provided their ssh pub key for passwordless root login
  • 50%   of the jobs didn't include a desktop environment at all; of the others, GNOME, XFCE, KDE and the Ubuntu desktop were used the most.
  • The biggest ISO was a FAIme job which created a live ISO with a desktop and some additional packages. This job took 30 min to finish and the resulting ISO was 18G in size.

Execution Times

The cloud and live ISOs need more time for their creation because the FAIme server needs to unpack and install all packages. For the install ISO the packages are only downloaded. The number of software packages also affects the build time. Every ISO is built in a VM on an old 6-core E5-1650 v2. Times given are calculated from the jobs of the past two weeks.

Job type     Avg     Max
install no desktop     1 min     2 min
install GNOME     2 min     5 min

The times for Ubuntu without and with desktop are one minute higher than those mentioned above.

Job type     Avg     Max
live no desktop     4 min     6 min
live GNOME     8 min     11 min

The times for cloud images are similar to live images.

A New Feature

For a few weeks now, the system has been showing the number of jobs ahead of you in the queue when you submit a job that cannot be processed immediately.

The Next Milestone

At the end of this year the FAI project will be 25 years old. If you have a success story about your FAI usage to share, please post it to the linux-fai mailing list or send it to me. Do you know the FAI questionnaire? A lot of reports are already available.

Here's an overview of what happened in the past 20 years in the FAI project.

About FAIme

FAIme is the service for building your own customized ISO via a web interface. You can create an installation or live ISO or a cloud image. Several Debian releases can be selected and also Ubuntu server or Ubuntu desktop installation ISOs can be customized. Multiple options are available like selecting a desktop and the language, adding your own package list, choosing a partition layout, adding a user, choosing a backports kernel, adding a postinst script and some more.

28 October, 2024 11:57AM

October 27, 2024

Enrico Zini

Typing decorators for class members with optional arguments

This looks straightforward and is far from it. I expect tool support will improve in the future. Meanwhile, this blog post serves as a step-by-step explanation of what is going on in code that I'm about to push to my team.

Let's take this relatively straightforward python code. It has a function printing an int, and a decorator that makes it argument optional, taking it from a global default if missing:

from unittest import mock

default = 42


def with_default(f):
    def wrapped(self, value=None):
        if value is None:
            value = default
        return f(self, value)

    return wrapped


class Fiddle:
    @with_default
    def print(self, value):
        print("Answer:", value)


fiddle = Fiddle()
fiddle.print(12)
fiddle.print()


def mocked(self, value=None):
    print("Mocked answer:", value)


with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    fiddle.print()

It works nicely as expected:

$ python3 test0.py
Answer: 12
Answer: 42
Mocked answer: 12
Mocked answer: None

It lacks functools.wraps and typing, though. Let's add them.

Adding functools.wraps

Adding a simple @functools.wraps, mock unexpectedly stops working:

# python3 test1.py
Answer: 12
Answer: 42
Mocked answer: 12
Traceback (most recent call last):
  File "/home/enrico/lavori/freexian/tt/test1.py", line 42, in <module>
    fiddle.print()
  File "<string>", line 2, in print
  File "/usr/lib/python3.11/unittest/mock.py", line 186, in checksig
    sig.bind(*args, **kwargs)
  File "/usr/lib/python3.11/inspect.py", line 3211, in bind
    return self._bind(args, kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/inspect.py", line 3126, in _bind
    raise TypeError(msg) from None
TypeError: missing a required argument: 'value'

This is the new code, with explanations and a fix:

# Introduce functools
import functools
from unittest import mock

default = 42


def with_default(f):
    @functools.wraps(f)
    def wrapped(self, value=None):
        if value is None:
            value = default
        return f(self, value)

    # Fix:
    # del wrapped.__wrapped__

    return wrapped


class Fiddle:
    @with_default
    def print(self, value):
        assert value is not None
        print("Answer:", value)


fiddle = Fiddle()
fiddle.print(12)
fiddle.print()


def mocked(self, value=None):
    print("Mocked answer:", value)


with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    # mock's autospec uses inspect.signature, which follows __wrapped__ set
    # by functools.wraps, which points to the wrong signature: the idea that
    # value is optional is now lost
    fiddle.print()
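
The failure above can be reproduced in isolation: inspect.signature (which autospec relies on) follows __wrapped__ by default, and deleting it restores the wrapper's own signature. A minimal self-contained sketch (the names are mine, not from the code above):

```python
import functools
import inspect


def with_default(f):
    @functools.wraps(f)
    def wrapped(self, value=None):
        return f(self, value if value is not None else 42)

    return wrapped


class Fiddle:
    @with_default
    def print(self, value):
        pass


# inspect.signature follows __wrapped__ to the original function,
# whose value parameter has no default:
print(inspect.signature(Fiddle.print))  # (self, value)

# Deleting __wrapped__ makes the wrapper's own signature visible again:
del Fiddle.print.__wrapped__
print(inspect.signature(Fiddle.print))  # (self, value=None)
```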

Adding typing

For simplicity, from now on let's change Fiddle.print to match its wrapped signature:

      # Give up with making value not optional, to simplify things :(
      def print(self, value: int | None = None) -> None:
          assert value is not None
          print("Answer:", value)

Typing with ParamSpec

# Introduce typing, try with ParamSpec
import functools
from typing import TYPE_CHECKING, ParamSpec, Callable
from unittest import mock

default = 42

P = ParamSpec("P")


def with_default(f: Callable[P, None]) -> Callable[P, None]:
    # Using ParamSpec we forward arguments, but we cannot use them!
    @functools.wraps(f)
    def wrapped(self, value: int | None = None) -> None:
        if value is None:
            value = default
        return f(self, value)

    return wrapped


class Fiddle:
    @with_default
    def print(self, value: int | None = None) -> None:
        assert value is not None
        print("Answer:", value)

mypy complains inside the wrapper, because while we forward arguments we don't constrain them, so we can't be sure there is a value in there:

test2.py:17: error: Argument 2 has incompatible type "int"; expected "P.args"  [arg-type]
test2.py:19: error: Incompatible return value type (got "_Wrapped[P, None, [Any, int | None], None]", expected "Callable[P, None]")  [return-value]
test2.py:19: note: "_Wrapped[P, None, [Any, int | None], None].__call__" has type "Callable[[Arg(Any, 'self'), DefaultArg(int | None, 'value')], None]"

Typing with Callable

We can use explicit Callable argument lists:

# Introduce typing, try with Callable
import functools
from typing import TYPE_CHECKING, Callable, TypeVar
from unittest import mock

default = 42

A = TypeVar("A")


# Callable cannot represent the fact that the argument is optional, so now mypy
# complains if we try to omit it
def with_default(f: Callable[[A, int | None], None]) -> Callable[[A, int | None], None]:
    @functools.wraps(f)
    def wrapped(self: A, value: int | None = None) -> None:
        if value is None:
            value = default
        return f(self, value)

    return wrapped


class Fiddle:
    @with_default
    def print(self, value: int | None = None) -> None:
        assert value is not None
        print("Answer:", value)


if TYPE_CHECKING:
    reveal_type(Fiddle.print)

fiddle = Fiddle()
fiddle.print(12)
# !! Too few arguments for "print" of "Fiddle"  [call-arg]
fiddle.print()


def mocked(self, value=None):
    print("Mocked answer:", value)


with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    fiddle.print()

Now mypy complains when we try to omit the optional argument, because Callable cannot represent optional arguments:

test3.py:32: note: Revealed type is "def (test3.Fiddle, Union[builtins.int, None])"
test3.py:37: error: Too few arguments for "print" of "Fiddle"  [call-arg]
test3.py:46: error: Too few arguments for "print" of "Fiddle"  [call-arg]

typing's documentation says:

Callable cannot express complex signatures such as functions that take a variadic number of arguments, overloaded functions, or functions that have keyword-only parameters. However, these signatures can be expressed by defining a Protocol class with a __call__() method:

Let's do that!

Typing with Protocol, take 1

# Introduce typing, try with Protocol
import functools
from typing import TYPE_CHECKING, Protocol, TypeVar, Generic, cast
from unittest import mock

default = 42

A = TypeVar("A", contravariant=True)


class Printer(Protocol, Generic[A]):
    def __call__(_, self: A, value: int | None = None) -> None:
        ...


def with_default(f: Printer[A]) -> Printer[A]:
    @functools.wraps(f)
    def wrapped(self: A, value: int | None = None) -> None:
        if value is None:
            value = default
        return f(self, value)

    return cast(Printer, wrapped)


class Fiddle:
    # function has a __get__ method to generated bound versions of itself
    # the Printer protocol does not define it, so mypy is now unable to type
    # the bound method correctly
    @with_default
    def print(self, value: int | None = None) -> None:
        assert value is not None
        print("Answer:", value)


if TYPE_CHECKING:
    reveal_type(Fiddle.print)

fiddle = Fiddle()
# !! Argument 1 to "__call__" of "Printer" has incompatible type "int"; expected "Fiddle"
fiddle.print(12)
fiddle.print()


def mocked(self, value=None):
    print("Mocked answer:", value)


with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    fiddle.print()

New mypy complaints:

test4.py:41: error: Argument 1 to "__call__" of "Printer" has incompatible type "int"; expected "Fiddle"  [arg-type]
test4.py:42: error: Missing positional argument "self" in call to "__call__" of "Printer"  [call-arg]
test4.py:50: error: Argument 1 to "__call__" of "Printer" has incompatible type "int"; expected "Fiddle"  [arg-type]
test4.py:51: error: Missing positional argument "self" in call to "__call__" of "Printer"  [call-arg]

What happens with class methods is that the function object has a __get__ method that generates a bound version of itself. Our Printer protocol does not define it, so mypy is now unable to type the bound method correctly.
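
The descriptor machinery in question can be observed directly: plain functions implement __get__, and attribute access on an instance invokes it to build the bound method. A tiny illustration, separate from the example above:

```python
class Fiddle:
    def print(self, value):
        return value


fiddle = Fiddle()

# The raw function lives in the class dict; instance attribute access
# calls its __get__ to produce a bound method with self filled in.
unbound = Fiddle.__dict__["print"]
bound = unbound.__get__(fiddle, Fiddle)

assert bound(12) == fiddle.print(12) == 12
assert bound.__func__ is unbound
```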

Typing with Protocol, take 2

So... we add the function descriptor methods to our Protocol!

A lot of this is taken from this discussion.

# Introduce typing, try with Protocol, harder!
import functools
from typing import TYPE_CHECKING, Protocol, TypeVar, Generic, cast, overload, Union
from unittest import mock

default = 42

A = TypeVar("A", contravariant=True)

# We now produce typing for the whole function descriptor protocol
#
# See https://github.com/python/typing/discussions/1040


class BoundPrinter(Protocol):
    """Protocol typing for bound printer methods."""

    def __call__(_, value: int | None = None) -> None:
        """Bound signature."""


class Printer(Protocol, Generic[A]):
    """Protocol typing for printer methods."""

    # noqa annotations are overrides for flake8 being confused, giving either D418:
    # Function/ Method decorated with @overload shouldn't contain a docstring
    # or D105:
    # Missing docstring in magic method
    #
    # F841 is for vulture being confused:
    #   unused variable 'objtype' (100% confidence)

    @overload
    def __get__(  # noqa: D105
        self, obj: A, objtype: type[A] | None = None  # noqa: F841
    ) -> BoundPrinter:
        ...

    @overload
    def __get__(  # noqa: D105
        self, obj: None, objtype: type[A] | None = None  # noqa: F841
    ) -> "Printer[A]":
        ...

    def __get__(
        self, obj: A | None, objtype: type[A] | None = None  # noqa: F841
    ) -> Union[BoundPrinter, "Printer[A]"]:
        """Implement function descriptor protocol for class methods."""

    def __call__(_, self: A, value: int | None = None) -> None:
        """Unbound signature."""


def with_default(f: Printer[A]) -> Printer[A]:
    @functools.wraps(f)
    def wrapped(self: A, value: int | None = None) -> None:
        if value is None:
            value = default
        return f(self, value)

    return cast(Printer, wrapped)


class Fiddle:
    # Printer now also defines __get__, implementing the function
    # descriptor protocol, so mypy can type the bound method correctly
    @with_default
    def print(self, value: int | None = None) -> None:
        assert value is not None
        print("Answer:", value)


fiddle = Fiddle()
fiddle.print(12)
fiddle.print()


def mocked(self, value=None):
    print("Mocked answer:", value)


with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    fiddle.print()

It works! It's typed! And mypy is happy!

27 October, 2024 03:46PM

October 26, 2024

Steve McIntyre

Mini-Debconf in Cambridge, October 10-13 2024

Group photo

Again this year, Arm offered to host us for a mini-debconf in Cambridge. Roughly 60 people turned up on 10-13 October to the Arm campus, where they made us really welcome. They even had some Debian-themed treats made to spoil us!

Cakes

Hacking together

minicamp

For the first two days, we had a "mini-debcamp" with disparate group of people working on all sorts of things: Arm support, live images, browser stuff, package uploads, etc. And (as is traditional) lots of people doing last-minute work to prepare slides for their talks.

Sessions and talks

Secure Boot talk

Saturday and Sunday were two days devoted to more traditional conference sessions. Our talks covered a typical range of Debian subjects: a DPL "Bits" talk, an update from the Release Team, live images. We also had some wider topics: handling your own data, what to look for in the upcoming Post-Quantum Crypto world, and even me talking about the ups and downs of Secure Boot. Plus a random set of lightning talks too! :-)

Video team awesomeness

Video team in action

Lots of volunteers from the DebConf video team were on hand too (both on-site and remotely!), so our talks were both streamed live and recorded for posterity - see the links from the individual talk pages in the wiki, or http://meetings-archive.debian.net/pub/debian-meetings/2024/MiniDebConf-Cambridge/ for the full set if you'd like to see more.

A great time for all

Again, the mini-conf went well and feedback from attendees was very positive. Thanks to all our helpers, and of course to our sponsor: Arm for providing the venue and infrastructure for the event, and all the food and drink too!

Photo credits: Andy Simpkins, Mark Brown, Jonathan Wiltshire. Thanks!

26 October, 2024 08:54PM

October 25, 2024

Jonathan Dowland

Behringer Model-D (synths I didn't buy)

Whilst researching what synth to buy, I learned of the Behringer1 Model-D2: a 2018 clone of the 1970 Moog Minimoog, in a desktop form factor.

Behringer Model-D

In common with the original Minimoog, it's a monophonic analogue synth, featuring three audible oscillators3, Moog's famous ladder filter and a basic envelope generator. The Model-D has lost the keyboard from the original and added some patch points for the different stages, enabling some slight re-routing of the audio components.

1970 Moog Minimoog

Since I was focussing on more fundamental, back-to-basics instruments, this was very appealing to me. I'm very curious to find out what's so compelling about the famous Moog sound. The relative lack of features feels like an advantage: less to master. The additional patch points make it a little more flexible and offer a potential gateway into the world of modular synthesis. The Model-D is also very affordable: about £200. I'll never own a real Moog.

For this to work, I would need to supplement it with some other equipment. I'd need a keyboard (or press the Micron into service as a controller); I would want some way of recording and overdubbing (same as with any synth). There are no post-mix effects on the Model-D, such as delay, reverb or chorus, so I may also want something to add those.

What stopped me was partly the realisation that there was little chance that a perennial beginner, such as I, could eke anything novel out of a synthesiser design that's 54 years old. Perhaps that shouldn't matter, but it gave me pause. Whilst the Model-D has patch points, I don't have anything to connect to them, and I'm firmly wanting to avoid the Modular Synthesis money pit. The lack of effects and polyphony could make it hard to live-sculpt a tone.

I started characterizing the Model-D as the "heart" choice, but it seemed wise to instead go for a "head" choice.

Maybe another day!


  1. There's a whole other blog post of material I could write about Behringer and their clones of classic synths, some long out of production, and others, not so much. But, I decided to skip on that for now.
  2. taken from the fact that the Minimoog was a productised version of Moog's fourth internal prototype, the model D.
  3. 2 oscillators is more common in modern synths

25 October, 2024 03:56PM

October 23, 2024

Why hardware synths?

Russell wrote a great comment on my last post (thanks!):

What benefits do these things offer when a general purpose computer can do so many things nowadays? Is there a USB keyboard that you can connect to a laptop or phone to do these things? I presume that all recent phones have the compute power to do all the synthesis you need if you have the right software. Is it just a lack of software and infrastructure for doing it on laptops/phones that makes synthesisers still viable?

I've decided to turn my response into a post of its own.

The issue is definitely not compute power. You can indeed attach a USB keyboard to a computer and use a plethora of software synthesisers, including very faithful emulations of all the popular classics. The raw compute power of modern hardware synths is comparatively small: I’ve been told the modern Korg digital synths are on a par with a raspberry pi. I’ve seen some DSPs which are 32 bit ARMs, and other tools which are roughly equivalent to arduinos.

I can think of four reasons hardware synths remain popular with some despite the above:

  1. As I touched on in my original synth post, computing dominates my life outside of music already. I really wanted something separate from that to keep mental distance from work.

  2. Synths have hard real-time requirements. They don't have raw power in compute terms, but they absolutely have to do their job within microseconds of being instructed to, with no exceptions. Linux still has a long way to go for hard real-time.

  3. The Linux audio ecosystem is… complex. Dealing with pipewire, pulseaudio, jack, alsa, oss, and anything else I've forgotten, as well as their failure modes, is too time consuming.

  4. The last point is to do with creativity and inspiration. A good synth is more than the sum of its parts: it's an instrument, carefully designed and its components integrated by musically-minded people who have set out to create something to inspire. There are plenty of synths which aren't good instruments, but have loads of features: they’re boxes of "stuff". Good synths can't do it all: they often have limitations which you have to respond to, work around or with, creatively. This was expressed better than I could by Trent Reznor in the video archetype of a synthesiser:

23 October, 2024 09:51AM

Arturia Microfreak

Arturia Microfreak. [© CC-BY-SA 4](https://commons.wikimedia.org/wiki/File:MicroFreak.jpg)

I nearly did, but ultimately I didn't buy an Arturia Microfreak.

The Microfreak is a small form factor hybrid synth with a distinctive style. It's priced at the low end of the market and it is overflowing with features. It has a weird 2-octave keyboard which is a stylophone-style capacitive strip rather than weighted keys. It seems to have plenty of controls, but given the amount of features it has, much of that functionality is inevitably buried in menus. The important stuff is front and centre, though. The digital oscillators are routed through an analog filter. The Microfreak gained sampler functionality in a firmware update that surprised and delighted its owners.

I watched a load of videos about the Microfreak, but the above review from musician Stimming stuck in my mind because it made a comparison between the Microfreak and Teenage Engineering's OP-1.

The Teenage Engineering OP-1.

I'd been lusting after the OP-1 since it appeared in 2011: a pocket-sized1 music making machine with eleven synthesis engines, a sampler, and less conventional features such as an FM radio, a large colour OLED display, and a four track recorder. That last feature in particular was really appealing to me: I loved the idea of having an all-in-one machine to try and compose music. Even then, I was not keen on involving conventional computers in music making.

Of course in many ways it is a very compromised machine. I never did buy an OP-1, and by now they've replaced it with a new model (the OP-1 field) that costs 50% more (but doesn't seem to do 50% more). I'm still not buying one.

Framing the Microfreak in terms of the OP-1 made the penny drop for me. The Microfreak doesn't have the four-track functionality, but almost no synth has: I'm going to have to look at something external to provide that. But it might capture a similar sense of fun; it's something I could use on the sofa, in the spare room, on the train, during lunchbreaks at work, etc.

On the other hand, I don't want to make the same mistake as with the Micron: too much functionality requiring some experience to understand what you want so you can go and find it in the menus. I also didn't get a chance to audition the unusual keyboard: there's only one music store carrying synths left in Newcastle and they didn't have one.

So I didn't buy the Microfreak. Maybe one day in the future once I'm further down the road. Instead, I started to concentrate my search on more fundamental, back-to-basics instruments…


  1. Big pockets, mind

23 October, 2024 09:51AM