Debian is a trademark of Software in the Public Interest, Inc. This site is operated independently in the spirit of point three of the Debian Social Contract which tells us "We will not hide problems."


November 12, 2024

Sven Hoexter

fluxcd: Validate flux-system Root Kustomization

Not entirely sure how other people use fluxcd, but I guess most have something like a flux-system Flux Kustomization as the root from which further Flux Kustomizations are added to their Kubernetes cluster. Here all of that lives in a monorepo, and since we're all human, people figure out different ways to break it, which brings the reconciliation of the Flux controllers down. Thus we set out to do some pre-flight validation.

Note1: We do not use flux variable substitutions for those root kustomizations, so if you use those, you'll have to put additional work into the validation and pipe things through flux envsubst.
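If you do use variable substitution, a minimal validation sketch (assuming the substitution variables your manifests reference are exported in the environment; flux envsubst is part of the flux CLI) could look like this:

```sh
# cluster_name is a hypothetical variable; export whatever your manifests reference
export cluster_name=staging
kustomize build . | flux envsubst > /dev/null
```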

First Iteration: Just Run kustomize Like Flux Would Do It

With a folder structure where we have a clusters folder containing subfolders per cluster, we just run a for loop over all of them:

for CLUSTER in ${CLUSTERS}; do
    pushd clusters/${CLUSTER}

    # validate if we can create and build a flux-system like kustomization file
    kustomize create --autodetect --recursive
    if ! kustomize build . -o /dev/null 2> error.log; then
        echo "Error building flux-system kustomization for cluster ${CLUSTER}"
        cat error.log
    fi

    popd
done

Second Iteration: Make Sure Our Workload Subfolders Have a kustomization.yaml

Next someone figured out that you can delete some YAML files from a workload subfolder, including the kustomization.yaml, but not all of them. That left behind a resource definition lacking some of the other objects it references, which was still happily included in the root kustomization by kustomize create and flux, and which of course did not work.

Thus we started to catch that as well in our growing for loop:

for CLUSTER in ${CLUSTERS}; do
    pushd clusters/${CLUSTER}

    # validate if we can create and build a flux-system like kustomization file
    kustomize create --autodetect --recursive
    if ! kustomize build . -o /dev/null 2> error.log; then
        echo "Error building flux-system kustomization for cluster ${CLUSTER}"
        cat error.log
    fi

    # validate if we always have a kustomization file in folders with yaml files
    for CLFOLDER in $(find . -type d); do
        test -f ${CLFOLDER}/kustomization.yaml && continue
        test -f ${CLFOLDER}/kustomization.yml && continue
        if [ $(find ${CLFOLDER} -maxdepth 1 \( -name '*.yaml' -o -name '*.yml' \) -type f | wc -l) != 0 ]; then
            echo "Error Cluster ${CLUSTER} folder ${CLFOLDER} lacks a kustomization.yaml"
        fi
    done

    popd
done
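The per-folder check can also be factored into a small POSIX shell function, which makes it easier to test outside the loop (the function name is my own, not from the original snippet):

```shell
# Succeed if the folder has a kustomization file, or contains no top-level
# YAML files at all; fail if loose YAML files exist without a kustomization.
check_kustomization() {
    dir="$1"
    [ -f "${dir}/kustomization.yaml" ] && return 0
    [ -f "${dir}/kustomization.yml" ] && return 0
    [ $(find "${dir}" -maxdepth 1 \( -name '*.yaml' -o -name '*.yml' \) -type f | wc -l) = 0 ]
}
```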

Note2: I shortened those snippets to the core parts. In our case some things are a bit specific to how we implemented the execution of those checks in GitHub Actions workflows. I hope that's enough to convey the idea of what to check for.

12 November, 2024 02:24PM


James Bromberger

My own little server

In 2004, I was living in London, and decided it was time I had my own little virtual private server somewhere online. As a Debian developer since the start of 2000, it had to be Debian, and it still is… This was before “cloud” as we know it today. Virtual Private Servers (VPS) was a … Continue reading "My own little server"

12 November, 2024 12:34PM by james

Swiss JuristGate

Litigium: Nati Gomez, Mathieu Parreaux, jurist who didn't pass bar exam on cross-border radio program with Benjamin Smadja

Radio Lac is the third most popular radio station in the Lake Geneva region covering Switzerland and France.

The reception area includes all the lakeside cities of Geneva, Nyon, Morges, Lausanne, Vevey and Montreux as well as the cross-border regions.

The transmitter for the region of Geneva is actually situated on Mount Salève, at the cable car station in French territory. The inhabitants of the French cities of Annemasse, Thonon, Evian, Saint-Julien-en-Genevois, Saint-Genis-Pouilly, Ferney-Voltaire, Gex and Divonne are in the reception area.

The jurists of Mathieu Parreaux published several documents about their legal insurance services for cross-border commuters and residents of France.

In our last blog we discovered that Monsieur Parreaux didn't pass the bar exam either in Switzerland or in France.

Each week, Mathieu Parreaux and his colleague Nati Gomez responded to legal questions on the radio program of Benjamin Smadja, Radio Lac (Media One Group).

The insurance company gained 20,000 clients. How many clients found Parreaux, Thiébaud & Partners thanks to free publicity on Radio Lac? How many clients killed themselves?

A program from 5 November 2018 where they discuss customs charges for cross-border commuters:

 

Mont Salève

Daniel Pocock, author of this site passed the amateur radio exam at age 14

He has provided many services on a voluntary basis since he was 14 years old. Why do the Swiss jurists insult the families of unpaid volunteers? Is that racism?

Daniel Pocock, radio amateur

 

Daniel Pocock, EI7JIB, IRTS, elected

12 November, 2024 11:30AM

Clémence Lamirand published an article in AGEFI: Mathieu Parreaux never passed the bar exam

A news report was published by Clémence Lamirand at the bureau AGEFI.

She wrote (original in French): "The firm is young, like the majority of employees who work there. The founder, Mathieu Parreaux, has not yet passed the bar exam. For the moment, the business is his priority; the final exams will come later."

The reporter, Madame Lamirand doesn't pose difficult questions. Journalists in Switzerland fear criminal prosecution for writing any form of inconvenient truth.

A legal firm under construction

The Geneva-based firm Parreaux, Thiébaud & Partners offers legal protection on a subscription basis. A portrait of the very young company.

Clémence Lamirand, 21 May 2018, 8:49 PM

(Translated from the French original.) The firm is young, like most of the employees who work there. Its founder, Mathieu Parreaux, has not yet taken his bar exam. For the moment he is giving priority to his business; the final exams will come later. Founded in 2017 and based in Geneva, the legal firm seems to be evolving rapidly. Recently, ten jurists were hired. At the beginning of the year, the firm merged with the Lausanne services company Thiébaud to create the legal firm Parreaux, Thiébaud & Partners. "That company specialised in insurance," explains Mathieu Parreaux. "For our part, we had our own competence in legal protection. Our recent merger now allows us to be present in both fields, in both cantons." The firm today employs around twenty people. Among the jurists, some hold the bar certificate (six of them), others do not. Parreaux, Thiébaud & Partners also works with external, independent lawyers who can take over when the jurists cannot continue the defence of their clients, for example in a trial before the criminal court. "The status of jurist has many advantages, but it does not open every door. We have therefore formed partnerships with some fifteen professionals present in all the cantons of French-speaking Switzerland," says Mathieu Parreaux. "In the future, we would like to cover all of Switzerland. To do so, we must find the right lawyers and jurists and persuade them to join our structure. However, we do not want to grow too fast and wish to progress intelligently."

A firm that wants to be different from the others

Today Parreaux, Thiébaud & Partners, which also works with notaries, aims to offer broad and affordable legal services. "That is our whole philosophy," enthuses the young entrepreneur. "Our firm is a unique structure that wants to offer its clients varied services and responsive customer service, all at a suitable price." Its founder specialises in contract law, tax law and company law, and has surrounded himself with specialists in various fields. "With varied competencies, our advisers can respond to our clients quickly and effectively," explains Mathieu Parreaux. "We currently cover 44 areas of law." The firm thus offers legal advice and conciliation. Its specialists draft all types of legal documents for companies, such as employment contracts and general terms and conditions, and a private legal helpline is provided. "We do everything to anticipate and be proactive," the founder sums up. "We try to settle disputes before they escalate."

Legal protection by subscription

For a few weeks now, Parreaux, Thiébaud & Partners has been offering legal protection for individuals and companies in the form of a subscription. With a commitment of 3, 5 or 8 years, a company can subscribe to Real-Protect. "We give advice orally but also in writing," says Mathieu Parreaux, "which commits us. Moreover, our advice is unlimited. We want to be genuinely there for our clients, always at a reasonable cost." "The price is what attracted me first," admits Jessy Kadimadio, a client who has just launched a property management agency and engaged the firm to draft contracts, "but I was then pleasantly surprised by their availability. I was also won over by the outsider character of this young company."

12 November, 2024 10:00AM

November 11, 2024


Dirk Eddelbuettel

RcppSpdlog 0.0.19 on CRAN: New Upstream, New Features

Version 0.0.19 of RcppSpdlog arrived on CRAN early this morning and has been uploaded to Debian. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want, written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site.

This release updates the code to version 1.15.0 of spdlog, which was released on Saturday, and contains fmt 11.0.2. It also contains a contributed PR which allows use of std::format under C++20, bypassing fmt (with some post-merge polish too), and another PR correcting a duplicated documentation entry.

The NEWS entry for this release follows.

Changes in RcppSpdlog version 0.0.19 (2024-11-10)

  • Support use of std::format under C++20 via opt-in define instead of fmt (Xanthos Xanthopoulos in #19)

  • An erroneous duplicate log=level documentation entry was removed (Contantinos Giachalis in #20)

  • Upgraded to upstream release spdlog 1.15.0 (Dirk in #21)

  • Partially revert / simplify src/formatter.cpp accommodating both #19 and previous state (Dirk in #21)

Courtesy of my CRANberries, there is also a diffstat report. More detailed information is on the RcppSpdlog page, or the package documentation site. If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

11 November, 2024 05:47PM

Antoine Beaupré

Why I should be running Debian unstable right now

So a common theme on the Internet about Debian is that it's "so old". And right, I am getting close to the stage where I feel a little laggy: I am using a bunch of backports for packages I need, and I'm missing a bunch of other packages that just landed in unstable and didn't make it to backports for various reasons.

I disagree that "old" is a bad thing: we definitely run Debian stable on a fleet of about 100 servers and can barely keep up; I would make it older. And "old" is a good thing: (port) wine and (any) beer need time to age properly, and so do humans, although some humans never seem to grow old enough to find wisdom.

But at this point, on my laptop, I am feeling like I'm missing out. This page, therefore, is an evolving document that is a twist on the classic NewIn game. Last time I played seems to be #newinwheezy (2013!), so really, I'm due for an update. (To be fair to myself, I do keep tabs on upgrades quite well at home and work, which do have their share of "new in", just after the fact.)

New packages to explore

Those tools are shiny new things available in unstable or perhaps Trixie (testing) already that I am not using yet, but I find interesting enough to list here.

  • backdown: clever file deduplicator
  • codesearch: search all of Debian's source code (tens of thousands of packages) from the commandline! (see also dcs-cli, not in Debian)
  • dasel: JSON/YML/XML/CSV parser, similar to jq, but different syntax, not sure I'd grow into it, but often need to parse YML like JSON and failing
  • fyi: notify-send replacement
  • git-subrepo: git-submodule replacement I am considering
  • gtklock: swaylock replacement with bells and whistles, particularly interested in showing time, battery and so on
  • hyprland: possible Sway replacement, but there are rumors of a toxic community (a rebuttal exists; I haven't reviewed either in detail), so approach carefully
  • kooha: simple screen recorder with audio support, currently using wf-recorder which is a more... minimalist option
  • linescroll: rate graphs on live logs, mostly useful on servers though
  • memray: Python memory profiler
  • ruff: faster Python formatter and linter, a flake8/black/isort replacement, alas not mypy/LSP; it is designed to be run alongside such a tool, which is not possible in Emacs eglot right now, but is possible in lsp-mode
  • sfwbar: pretty status bar, may replace waybar, which i am somewhat unhappy with (my UTC clock disappears randomly)
  • shoutidjc: streaming workstation, currently using butt but it doesn't support HTTPS correctly
  • spytrap-adb: cool spy gear
  • trippy: trippy network analysis tool, kind of an improved MTR
  • yubikey-touch-detector: notifications for when I need to touch my YubiKey

New packages I won't use

Those are packages that I have tested because I found them interesting, but ended up not using, but I think people could find interesting anyways.

  • kew: surprisingly fast music player, parsed my entire library (which is huge) instantaneously and just started playing (I still use Supersonic, for which I maintain a flatpak on my Navidrome server)
  • mdformat: good markdown formatter (think black or gofmt, but for markdown); it didn't actually do what I needed, and it's not quite as opinionated as it should (or could) be

Backports already in use

Those are packages I already use regularly, which have backports or that can just be installed from unstable:

  • asn: IP address forensics
  • markdownlint: markdown linter, I use that a lot
  • poweralertd: pops up "your battery is almost empty" messages
  • sway-notification-center: used as part of my status bar, yet another status bar basically, a little noisy, stuck in a libc dep update
  • tailspin: used to color logs

Out of date packages

Those are packages that are in Debian stable (Bookworm) already, but that are somewhat lacking and could benefit from an upgrade.

Last words

If you know of cool things I'm missing out of, then by all means let me know!

That said, overall, this is a pretty short list! I have most of what I need in stable right now, and if I wasn't a Debian developer, I don't think I'd be doing the jump now. But considering how much easier it is to develop Debian (and how important it is to test the next release!), I'll probably upgrade soon.

Previously, I was running Debian testing (which is why the slug on that article is why-trixie), but now I'm actually considering just running unstable on my laptop directly anyways. It's been a long time since we had any significant instability there, and I can typically deal with whatever happens, except maybe when I'm traveling, and then it's easy to prepare for that (just pin testing).
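For reference, "pin testing" can be as simple as a snippet like this in /etc/apt/preferences.d/ (a minimal sketch, not from the original article; the priorities are a matter of taste):

```
Package: *
Pin: release a=testing
Pin-Priority: 900

Package: *
Pin: release a=unstable
Pin-Priority: 500
```

With testing pinned higher, apt keeps the system on testing until you explicitly pull something from unstable.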

11 November, 2024 04:17PM


Gunnar Wolf

Why academics under-share research data - A social relational theory

This post is a review for Computing Reviews of Why academics under-share research data - A social relational theory, an article published in the Journal of the Association for Information Science and Technology.

As an academic, I have cheered for and welcomed the open access (OA) mandates that, slowly but steadily, have been accepted in one way or another throughout academia. It is now often accepted that public funds mean public research. Many of our universities or funding bodies will demand it, with varying intensity: sometimes they demand research be published in an OA venue, sometimes a mandate will only "prefer" it. Lately, some journals and funding bodies have expanded this mandate toward open science, requiring not only research outputs (that is, articles and books) to be published openly but for the data backing the results to be made public as well. As a person who has been involved with free software promotion since the mid-1990s, it was natural for me to join the OA movement and to celebrate when various universities adopted such mandates.

Now, what happens after a university or funder body adopts such a mandate? Many individual academics cheer, as it is the "right thing to do." However, the authors observe that this is not really followed thoroughly by academics. What can be observed, rather, is the slow pace or "feet dragging" of academics when they are compelled to comply with OA mandates, or even an outright refusal to do so. If OA and open science are close to the ethos of academia, why aren't more academics enthusiastically sharing the data used for their research? This paper finds a subversive practice embodied in the refusal to comply with such mandates, and explores a hypothesis based on Karl Marx's productive worker theory and Pierre Bourdieu's ideas of symbolic capital.

The paper explains that academics, as productive workers, become targets for exploitation: given that it’s not only the academics’ sharing ethos, but private industry’s push for data collection and industry-aligned research, they adapt to technological changes and jump through all kinds of hurdles to create more products, in a result that can be understood as a neoliberal productivity measurement strategy. Neoliberalism assumes that mechanisms that produce more profit for academic institutions will result in better research; it also leads to the disempowerment of academics as a class, although they are rewarded as individuals due to the specific value they produce.

The authors continue by explaining how open science mandates seem to ignore the historical ways of collaboration in different scientific fields, and exploring different angles of how and why data can be seen as “under-shared,” failing to comply with different aspects of said mandates. This paper, built on the social sciences tradition, is clearly a controversial work that can spark interesting discussions. While it does not specifically touch on computing, it is relevant to Computing Reviews readers due to the relatively high percentage of academics among us.

11 November, 2024 02:53PM


Thomas Lange

Using NIS (Network Information Service) in 2024

The topic of this posting already tells you that an old Unix guy tells stories about old techniques.

I've been a happy NIS (formerly YP) user for 30+ years. I started using it with SunOS 4.0, later using it with Solaris and, since 1999, with Linux.

In the past, a colleague was not happy using NIS+ when he couldn't log in as root after a short time because of some well-known bugs and wrong configs. NIS+ was also much slower than my NIS setup. I know organisations using NIS for more than 80,000 user accounts in 2024.

I know the security implications of NIS but I can live with them, because I manage all computers in the network that have access to the NIS maps. And NIS on Linux offers to use shadow maps, which are only accessible to the root account. My users are forced to use very long passwords.

Unfortunately NIS support for the PAM modules was removed in Debian in pam 1.4.0-13, which means Debian 12 (bookworm) is lacking NIS support in PAM, but otherwise it is still supported. This only affects changing the NIS password via passwd. You can still authenticate users and use other NIS maps.

But yppasswd is deprecated and you should not use it! If you use yppasswd it may generate a new password hash by using the old DES crypt algorithm, which is very weak and only uses the first 8 chars in your password. Do not use yppasswd any more! yppasswd only detects DES, MD5, SHA256 and SHA512 hashes, but for me and some colleagues it only creates weak DES hashes after a password change. yescrypt hashes which are the default in Debian 12 are not supported at all. The solution is to use the plain passwd program.

On the NIS master, you should set up your NIS configuration to use /etc/shadow and /etc/passwd even if your other NIS maps are in /var/yp/src or similar. Make sure to have these lines in your /var/yp/Makefile:

PASSWD      = /etc/passwd
SHADOW      = /etc/shadow

Call make once, and it will generate the shadow and passwd maps. You may want to set the variable MINUID, which defines which entries are not put into the NIS maps.
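For example (the values are illustrative and depend on your site's UID/GID ranges), the relevant variables sit near the top of /var/yp/Makefile:

```make
# accounts below these IDs (system users) are not exported into the NIS maps
MINUID=1000
MINGID=1000
```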

On all NIS clients you still need the entries (for passwd, shadow, group,...) that point to the nis service. E.g.:

passwd:         files nis systemd
group:          files nis systemd
shadow:         files nis

You can remove all occurrences of "nis" in your /etc/pam.d/common-password file.

Then you can use the plain passwd program to change your password on the NIS master. But this does not call make in /var/yp for updating the NIS shadow map.

Let's use inotify(7) for that. First, create a small shell script /usr/local/sbin/shadow-change:

#! /bin/sh

PATH=/usr/sbin:/usr/bin

# only watch the /etc/shadow file
if [ "$2" != "shadow" ]; then
  exit 0
fi

cd /var/yp || exit 3
sleep 2
make

Then install the package incron.

# apt install incron
# echo root >> /etc/incron.allow
# incrontab -e

Add this line:

/etc    IN_MOVED_TO     /usr/local/sbin/shadow-change $@ $# $%

It's not possible to use IN_MODIFY or watch other events on /etc/shadow directly, because the passwd command creates a /etc/nshadow file, deletes /etc/shadow and then moves nshadow to shadow. inotify on a file does not work after the file was removed.

You can see the logs from incrond by using:

# journalctl _COMM=incrond
e.g.

Oct 01 12:21:56 kueppers incrond[6588]: starting service (version 0.5.12, built on Jan 27 2023 23:08:49)
Oct 01 13:43:55 kueppers incrond[6589]: table for user root created, loading
Oct 01 13:45:42 kueppers incrond[6589]: PATH (/etc) FILE (shadow) EVENT (IN_MOVED_TO)
Oct 01 13:45:42 kueppers incrond[6589]: (root) CMD ( /usr/local/sbin/shadow-change /etc shadow IN_MOVED_TO)

I've disabled the execution of yppasswd using dpkg-divert:

# dpkg-divert --local --rename --divert /usr/bin/yppasswd-disable /usr/bin/yppasswd
# chmod a-rwx /usr/bin/yppasswd-disable

Do not forget to limit the access to the shadow.byname map in ypserv.conf and general access to NIS in ypserv.securenets.
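A sketch of what that could look like (illustrative addresses; check the ypserv.conf(5) man page for the exact semantics):

```
# /etc/ypserv.conf: host : domain : map : security
# "port" means only requests from privileged ports may read the map
*               : *     : shadow.byname  : port

# /etc/ypserv.securenets: netmask followed by network
255.255.255.255 127.0.0.1
255.255.255.0   192.168.10.0
```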

I've also discovered the package pamtester, which is a nice package for testing your pam configs.
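For example, to check that authentication works for a given account (alice is a placeholder user name):

```
$ pamtester login alice authenticate
```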

11 November, 2024 10:20AM

Vincent Bernat

Customize Caddy's plugins with Nix

Caddy is an open-source web server written in Go. It handles TLS certificates automatically and comes with a simple configuration syntax. Users can extend its functionality through plugins1 to add features like rate limiting, caching, and Docker integration.

While Caddy is available in Nixpkgs, adding extra plugins is not simple.2 The compilation process needs Internet access, which Nix denies during build to ensure reproducibility. When trying to build the following derivation using xcaddy, a tool for building Caddy with plugins, it fails with this error: dial tcp: lookup proxy.golang.org on [::1]:53: connection refused.

{ pkgs }:
pkgs.stdenv.mkDerivation {
  name = "caddy-with-xcaddy";
  nativeBuildInputs = with pkgs; [ go xcaddy cacert ];
  unpackPhase = "true";
  buildPhase =
    ''
      xcaddy build --with github.com/caddy-dns/powerdns@v1.0.1
    '';
  installPhase = ''
    mkdir -p $out/bin
    cp caddy $out/bin
  '';
}

Fixed-output derivations are an exception to this rule and get network access during build. They need to specify their output hash. For example, the fetchurl function produces a fixed-output derivation:

{ stdenv, fetchurl }:
stdenv.mkDerivation rec {
  pname = "hello";
  version = "2.12.1";
  src = fetchurl {
    url = "mirror://gnu/hello/hello-${version}.tar.gz";
    hash = "sha256-jZkUKv2SV28wsM18tCqNxoCZmLxdYH2Idh9RLibH2yA=";
  };
}

To create a fixed-output derivation, you need to set the outputHash attribute. The example below shows how to output Caddy’s source code, with some plugin enabled, as a fixed-output derivation using xcaddy and go mod vendor.

pkgs.stdenvNoCC.mkDerivation rec {
  pname = "caddy-src-with-xcaddy";
  version = "2.8.4";
  nativeBuildInputs = with pkgs; [ go xcaddy cacert ];
  unpackPhase = "true";
  buildPhase =
    ''
      export GOCACHE=$TMPDIR/go-cache
      export GOPATH="$TMPDIR/go"
      XCADDY_SKIP_BUILD=1 TMPDIR="$PWD" \
        xcaddy build v${version} --with github.com/caddy-dns/powerdns@v1.0.1
      (cd buildenv* && go mod vendor)
    '';
  installPhase = ''
    mv buildenv* $out
  '';

  outputHash = "sha256-F/jqR4iEsklJFycTjSaW8B/V3iTGqqGOzwYBUXxRKrc=";
  outputHashAlgo = "sha256";
  outputHashMode = "recursive";
}

With a fixed-output derivation, it is up to us to ensure the output is always the same:

  • we ask xcaddy to not compile the program and keep the source code,3
  • we pin the version of Caddy we want to build, and
  • we pin the version of each requested plugin.
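When writing such a derivation for the first time, the output hash is not yet known. A common trick (my addition, not from the original article) is to start with a placeholder and copy the real hash from the resulting mismatch error:

```nix
  # temporary placeholder; the failed build prints the hash to substitute
  outputHash = pkgs.lib.fakeHash;
  outputHashAlgo = "sha256";
  outputHashMode = "recursive";
```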

You can use this derivation to override the src attribute in pkgs.caddy:

pkgs.caddy.overrideAttrs (prev: {
  src = pkgs.stdenvNoCC.mkDerivation { /* ... */ };
  vendorHash = null;
  subPackages = [ "." ];
});

Check out the complete example in the GitHub repository. To integrate into a Flake, add github:vincentbernat/caddy-nix as an overlay:

{
  inputs = {
    nixpkgs.url = "nixpkgs";
    flake-utils.url = "github:numtide/flake-utils";
    caddy.url = "github:vincentbernat/caddy-nix";
  };
  outputs = { self, nixpkgs, flake-utils, caddy }:
    flake-utils.lib.eachDefaultSystem (system:
      let
        pkgs = import nixpkgs {
          inherit system;
          overlays = [ caddy.overlays.default ];
        };
      in
      {
        packages = {
          default = pkgs.caddy.withPlugins {
            plugins = [ "github.com/caddy-dns/powerdns@v1.0.1" ];
            hash = "sha256-F/jqR4iEsklJFycTjSaW8B/V3iTGqqGOzwYBUXxRKrc=";
          };
        };
      });
}
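With that flake in place, the customized server can be built and inspected the usual way (the attribute name follows the flake above; list-modules is a standard Caddy subcommand):

```
$ nix build .#default
$ ./result/bin/caddy list-modules
```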

  1. This article uses the term “plugins,” though Caddy documentation also refers to them as “modules” since they are implemented as Go modules. ↩︎

  2. This has been a feature request for quite some time. A proposed solution has been rejected. The one described in this article is a bit different. ↩︎

  3. This is not perfect: if the source code produced by xcaddy changes, the hash would change and the build would fail. ↩︎

11 November, 2024 07:35AM by Vincent Bernat

November 10, 2024

Nazi.Compare

Joan Meyer correctly linked Gideon Cody's raid on the Marion County Record to Kristallnacht

Earlier this year, I traveled to Marion in Kansas, United States, for the anniversary of the raid on the Marion County Record.

We watched the documentary about the raid, Unwarranted: The Senseless Death of Journalist Joan Meyer, which was produced by Jaime Green and Travis Heying. The moment where Joan Meyer calls the police Nazis jumped out at me. I made a mental note to include it here on the nazi.compare web site, but I wanted to review it carefully and do it the justice it deserves.

I opened up the video on the anniversary of the Kristallnacht and the evidence jumped out at me. I don't think anybody has noticed it before but Joan was right on the money about nazi stuff.

The Kristallnacht occurred on the night of 9 to 10 November 1938. It was a giant pogrom by Nazi party members. The police did not participate but they didn't try to stop it either.

However, the Jewish press were not attacked during the Kristallnacht.

In fact, Hitler's Nazis attacked the Jewish press on the previous night, 8 November.

Looking at the body cam footage where Joan Meyer accuses Gideon Cody and his police colleagues of "nazi stuff", we can see a time and date stamp in the bottom right corner. The date of the raid is written in the United States format, Month/Day/Year: 08/11/2023, which was 11 August 2023. In Europe, for example in Germany, the date 08/11/2023 would be read as Day/Month/Year; in other words, that is how Europeans and Germans write 8 November 2023, the day the Nazis raided the Jewish press in advance of the Kristallnacht.

Jewish publications banned

 

Here is the section of the video where Joan Meyer makes the Nazi comment, look at the date stamp at the bottom right corner, it is 08/11/2023 as in 8 November for Europe:

FSFE censored communications from the elected representatives

While thinking about the way the Nazis gave these censorship orders the night before the Kristallnacht, I couldn't help thinking about the orders from Matthias Kirschner and Heiki Lõhmus at FSFE when they wanted to censor communications from the elected Fellowship representatives.

Berlin police have declined to help FSFE shut down web sites that are making accurate FSFE / Nazi comparisons.

This policy determines conditions and rights of the FSFE bodies (staffers,
GA members, local and topical teams) or members of the FSFE community to
mass mail registered FSFE community members who have opted in to receive
information about FSFE's activities.

## Definitions

For the purpose of this document:
 * all registered FSFE community members who have opted in to receive
   information about FSFE's activities are referred to as "recipients".
 * mass emails that we send out to recipients are referred to as "mailings".
 * mailings that are only sent to recipients who live in a certain area (a
   municipality or a language zone or similar) or that are part of a topical
   team are referred to as "select mailings" and mails to all recipients of
   the FSFE are referred to as "overall mailings".


## Considerations

 * Mailings should be sent to better integrate our community in important
   aspects of our work, which can be for example - but is not limited to -
   information about critical happenings that we need their input or activity
   for, milestones we have achieved and thank you's, engagement in the inner FSFE
   processes and fundraising.
 * Mailings should be properly balanced between delivering information and
   getting to the point.
 * Mailings should contain material/information that can be considered worth
   of our supporters' interests.
 * Mailings are not to spread general news - that is what we have the
   newsletter and our news items for.
 * You can find help on editing mailings by reading through our
   press release guidelines: https://wiki.fsfe.org/Internal/PressReleaseGuide
 * All community members are invited to use select mailings for evaluations,
   to inform about certain aspects of FSFE's work, to organise events and
   activities or other extraordinary purposes.


## Policies

 * Mailings must not be against FSFE's interests and conform to our Code of
   Conduct.
 * All overall mailings have to involve the PR team behind pr@lists.fsfe.org
   for a final edit. In urgent cases, review by the PR team may be skipped
   with approval of the responsible authority.
 * All select mailings need approval by the relevant country or topical team
   coordinator or - in absence - by the Community Coordinator or the Executive
   Council.
 * All overall mailings need the approval of the Executive Council.
 * All mailings need to be reviewed by someone with the authority to approve
   the mailing. Nobody may review or approve a mailing they have prepared on
   their own.

Gideon Cody of Marion County

10 November, 2024 11:00PM

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

inline 0.3.20: Mostly Maintenance

A new release of the inline package got to CRAN today, marking the first release in three and a half years. inline facilitates writing code in-line in simple string expressions or short files. The package was used quite extensively by Rcpp in the very early days before Rcpp Attributes arrived on the scene providing an even better alternative for its use cases. inline is still used by rstan and a number of other packages.

This release was tickled by a change in r-devel just this week, and the corresponding ‘please fix or else’ email I received this morning. R_NO_REMAP is now the default in r-devel, and while we had already converted most (old-style) calls into the API to use the now-mandatory Rf_ prefix, the package contained a few remaining cases in examples as well as one in code generation. The release also contains a helpful contributed PR making an error message a little clearer, plus several small and common maintenance changes around continuous integration, package layout and the repository.

The NEWS extract follows and details the changes some more.

Changes in inline version 0.3.20 (2024-11-10)

  • Error message formatting is improved for compileCode (Alexis Derumigny in #25)

  • Switch to using Authors@R, other general packaging maintenance for continuous integration and repository

  • Use Rf_ in a handful of cases as R-devel now mandates it

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues).

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

10 November, 2024 07:29PM

Reproducible Builds

Reproducible Builds in October 2024

Welcome to the October 2024 report from the Reproducible Builds project.

Our reports attempt to outline what we’ve been up to over the past month, highlighting news items from elsewhere in tech where they are related. As ever, if you are interested in contributing to the project, please visit our Contribute page on our website.

Table of contents:

  1. Beyond bitwise equality for Reproducible Builds?
  2. ‘Two Ways to Trustworthy’ at SeaGL 2024
  3. Number of cores affected Android compiler output
  4. On our mailing list…
  5. diffoscope
  6. IzzyOnDroid passed 25% reproducible apps
  7. Distribution work
  8. Website updates
  9. Reproducibility testing framework
  10. Supply-chain security at Open Source Summit EU
  11. Upstream patches

Beyond bitwise equality for Reproducible Builds?

Jens Dietrich and Tim White of Victoria University of Wellington, New Zealand, along with Behnaz Hassanshahi and Paddy Krishnan of Oracle Labs Australia, published a paper entitled “Levels of Binary Equivalence for the Comparison of Binaries from Alternative Builds”:

The availability of multiple binaries built from the same sources creates new challenges and opportunities, and raises questions such as: “Does build A confirm the integrity of build B?” or “Can build A reveal a compromised build B?”. To answer such questions requires a notion of equivalence between binaries. We demonstrate that the obvious approach based on bitwise equality has significant shortcomings in practice, and that there is value in opting for alternative notions. We conceptualise this by introducing levels of equivalence, inspired by clone detection types.

A PDF of the paper is freely available.


‘Two Ways to Trustworthy’ at SeaGL 2024

On Friday 8th November, Vagrant Cascadian will present a talk entitled Two Ways to Trustworthy at SeaGL in Seattle, WA.

Founded in 2013, SeaGL is a free, grassroots technical summit dedicated to spreading awareness and knowledge about free source software, hardware and culture. Vagrant’s talk:

[…] delves into how two project[s] approaches fundamental security features through Reproducible Builds, Bootstrappable Builds, code auditability, etc. to improve trustworthiness, allowing independent verification; trustworthy projects require little to no trust.

Exploring the challenges that each project faces due to very different technical architectures, but also contextually relevant social structure, adoption patterns, and organizational history should provide a good backdrop to understand how different approaches to security might evolve, with real-world merits and downsides.


Number of cores affected Android compiler output

Fay Stegerman wrote that the cause of the Android toolchain bug from September’s report that she reported to the Android issue tracker has been found and the bug has been fixed.

the D8 Java to DEX compiler (part of the Android toolchain) eliminated a redundant field load if running the class’s static initialiser was known to be free of side effects, which ended up accidentally depending on the sharding of the input, which is dependent on the number of CPU cores used during the build.

To make it easier to understand the bug and the patch, Fay also made a small example to illustrate when and why the optimisation involved is valid.


On our mailing list…

On our mailing list this month:


diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 279, 280, 281 and 282 to Debian:

  • Ignore errors when listing .ar archives (#1085257). []
  • Don’t try and test with systemd-ukify in the Debian stable distribution. []
  • Drop Depends on the deprecated python3-pkg-resources (#1083362). []

In addition, Jelle van der Waa added support for Unified Kernel Image (UKI) files. [][][] Furthermore, Vagrant Cascadian updated diffoscope in GNU Guix to version 282. [][]


IzzyOnDroid passed 25% reproducible apps

The IzzyOnDroid project has passed a significant milestone: over 25% of the ~1,200 Android apps provided by their repository (official APKs built by the original application developers) have now been confirmed to be reproducible by a rebuilder.


Distribution work

In Debian this month:

  • Holger Levsen uploaded devscripts version 2.24.2, including many changes to the debootsnap, debrebuild and reproducible-check scripts. This is the first time that debrebuild actually works (using sbuild’s unshare backend). As part of this, Holger also fixed an issue in the reproducible-check script where a typo in the code led to incorrect results []

  • Recently, a news entry was added to snapshot.debian.org’s homepage, describing the recent changes that made the system stable again:

    The new server has no problems keeping up with importing the full archives on every update, as each run finishes comfortably in time before it’s time to run again. [While] the new server is the one doing all the importing of updated archives, the HTTP interface is being served by both the new server and one of the VM’s at LeaseWeb.

    The entry lists a number of specific updates surrounding the API endpoints and rate limiting.

  • Lastly, 12 reviews of Debian packages were added, 3 were updated and 18 were removed this month adding to our knowledge about identified issues.

Elsewhere in distribution news, Zbigniew Jędrzejewski-Szmek performed another rebuild of Fedora 42 packages, with the headline result being that 91% of the packages are reproducible. Zbigniew also reported a reproducibility problem with QImage.

Finally, in openSUSE, Bernhard M. Wiedemann published another report for that distribution.


Website updates

There were an enormous number of improvements made to our website this month, including:

  • Alba Herrerias:

    • Improve consistency across distribution-specific guides. []
    • Fix a number of links on the Contribute page. []
  • Chris Lamb:

  • hulkoba

  • James Addison:

    • Huge and significant work on an (as-yet-unmerged) quickstart guide to be linked from the homepage [][][][][]
    • On the homepage, link directly to the Projects subpage. []
    • Relocate “dependency-drift” notes to the Volatile inputs page. []
  • Ninette Adhikari:

    • Add a brand new ‘Success stories’ page that “highlights the success stories of Reproducible Builds, showcasing real-world examples of projects shipping with verifiable, reproducible builds”. [][][][][][]
  • Pol Dellaiera:

    • Update the website’s README page for building the website under NixOS. [][][][][]
    • Add a new academic paper citation. []

Lastly, Holger Levsen filed an extensive issue detailing a request to create an overview of recommendations and standards in relation to reproducible builds.


Reproducibility testing framework

The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility. In October, a number of changes were made by Holger Levsen, including:

  • Add a basic index.html for rebuilderd. []
  • Update the nginx.conf configuration file for rebuilderd. []
  • Document how to use a rescue system for Infomaniak’s OpenStack cloud. []
  • Update usage info for two particular nodes. []
  • Fix up a version skew check to fix the name of the riscv64 architecture. []
  • Update the rebuilderd-related TODO. []

In addition, Mattia Rizzolo added a new IP address for the inos5 node [] and Vagrant Cascadian brought 4 virt nodes back online [].


Supply-chain security at Open Source Summit EU

The Open Source Summit EU took place recently, and covered plenty of topics related to supply-chain security, including:


Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:



Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

10 November, 2024 06:26PM

Thorsten Alteholz

My Debian Activities in October 2024

FTP master

This month I accepted 398 and rejected 22 packages. The overall number of packages that got accepted was 441.

In case your RM bug is not closed within a month, you can assume that either the conversion of the subject of the bug email to the corresponding dak command did not work or you still need to take care of reverse dependencies. The dak command related to your removal bug can be found here.

Unfortunately the behavior of some project members caused a decline in team members' motivation to work on these bugs. When I look at these bugs, I just copy and paste the above mentioned dak commands. If they don’t work, I don’t have the time to debug what is going wrong. So please read the docs and take care of it yourself. Please also keep in mind that you need to close the bug or set a moreinfo tag if you don’t want anybody to act on your removal bug.

Debian LTS

This was the hundred-and-twenty-fourth month in which I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:

  • [DLA 3925-1] asterisk security update to fix two CVEs related to privilege escalation and DoS
  • [DLA 3940-1] xorg-server update to fix one CVE related to privilege escalation

Last but not least I did a week of FD this month and attended the monthly LTS/ELTS meeting.

Debian ELTS

This month was the seventy-fifth ELTS month. During my allocated time I uploaded or worked on:

  • [ELA-1198-1] cups security update for one CVE in Buster to fix the IPP attribute related CVEs.
  • [ELA-1199-1] cups security update for two CVEs in Stretch to fix the IPP attribute related CVEs.
  • [ELA-1216-1] graphicsmagick security update for one CVE in Jessie.
  • [ELA-1217-1] asterisk security update for two CVEs in Buster related to privilege escalation.
  • [ELA-1218-1] asterisk security update for two CVEs in Stretch related to privilege escalation and DoS.
  • [ELA-1223-1] xorg-server security update for one CVE in Jessie, Stretch and Buster related to privilege escalation.

I also did a week of FD and attended the monthly LTS/ELTS meeting.

Debian Printing

Unfortunately I didn’t find any time to work on this topic.

Debian Matomo

Unfortunately I didn’t find any time to work on this topic.

Debian Astro

Unfortunately I didn’t find any time to work on this topic.

Debian IoT

This month I uploaded new upstream or bugfix versions of:

  • pywws (yes, again this month)

Debian Mobcom

This month I uploaded new packages or new upstream or bugfix versions of:

misc

This month I uploaded new upstream or bugfix versions of:

10 November, 2024 12:26AM by alteholz

November 09, 2024

hackergotchi for Jonathan Dowland

Jonathan Dowland

Progressively enhancing CGI apps with htmx

I was interested in learning about htmx, so I used it to improve the experience of posting comments on my blog.

It seems much of modern web development is structured around having a JavaScript program on the front-end (browser) which exchanges data encoded in JSON asynchronously with the back-end servers. htmx uses a novel (or throwback) approach: it asynchronously fetches snippets of HTML from the back-end, and splices the results into the live page. For example, a htmx-powered button may request a URI on the server, receive HTML in response, and then the button itself would be replaced by the resulting HTML, within the page.

I experimented with incorporating it into an existing, old-school CGI web app: IkiWiki, which I became a co-maintainer of this year, and powers my blog. Throughout this project I referred to the excellent book Server-Driven Web Apps with htmx.

Comment posting workflow

I really value blog comments, but the UX for posting them on my blog was a bit clunky. It went like this:

  1. You load a given page (such as this blog post), which is a static HTML document. There's a link to add a comment to the page.

  2. The link loads a new page which is generated dynamically and served back to you via CGI. This contains an HTML form for you to write your comment.

  3. The form submits to the server via HTTP POST. IkiWiki validates the form content. Various static pages (in particular the one you started on, in Step 1) are regenerated.

  4. The server's response to the request in Step 3 is an HTTP 302 redirect, instructing the browser to go back to the page in Step 1.

First step: fetching a comment form

First, I wanted the "add a comment" link to present the edit box in the current page. This step was easiest: add four attributes to the "comment on this page" anchor tag:

hx-get="<CGI ENDPOINT GOES HERE>"
suppresses the normal behaviour of the tag, so clicking on it doesn't load a new page, and instead issues an asynchronous HTTP GET to the CGI end-point, which returns the full HTML document for the comment edit form.

hx-select=".editcomment form"
extracts the edit-comment form from within that document.

hx-swap=beforeend and hx-target=".addcomment"
append (courtesy of beforeend) the form into the source page after the "add comment" anchor tag (.addcomment)
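Putting those four attributes together, the anchor tag might look roughly like this. This is an illustrative sketch, not IkiWiki's actual markup: the CGI URL and the surrounding div are placeholders.

```html
<!-- Sketch: a comment link progressively enhanced with htmx.
     The href still works without JavaScript; when htmx is loaded,
     the hx-* attributes take over. The CGI URL is a placeholder. -->
<div class="addcomment">
  <a href="/ikiwiki.cgi?do=comment;page=blog/post"
     hx-get="/ikiwiki.cgi?do=comment;page=blog/post"
     hx-select=".editcomment form"
     hx-swap="beforeend"
     hx-target=".addcomment">comment on this page</a>
</div>
```

With hx-swap="beforeend" the extracted form is appended as the last child of the .addcomment element, so the edit box appears directly below the link.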

Now, clicking "comment on this page" loads in the edit-comment box below it without moving you away from the source page. All that without writing any new code!

Second step: handling previews

The old Preview Comment page

In the traditional workflow, clicking on "Preview" loaded a new page containing the edit form (but not the original page or any existing comments) with a rendering of the comment-in-progress below it. I wasn't originally interested in supporting the "Preview" feature, but I needed to for reasons I'll explain later.

Rather than load new pages, I wanted "Preview" to splice a rendering of the comment-in-progress into the current page's list of comments, marked up to indicate that it's a preview.

IkiWiki provides some templates which you can override to customise your site. I've long overridden page.tmpl, the template used for all pages. I needed to add a new empty div tag in order to have a "hook" to target with the previewed comment.

The rest of this was achieved with htmx attributes on the "Preview" button, similar to the last step: hx-post to define a target URI to request when you click the button (and to specify HTTP POST); hx-select to filter the resulting HTML and extract the comment; hx-target to specify where to insert it.
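A sketch of what the "Preview" button could look like with those attributes applied. The endpoint and selector/class names here are invented for illustration; the real target is the empty div hook added to page.tmpl.

```html
<!-- Sketch: the Preview button, enhanced in the same way.
     Endpoint and selector names are illustrative placeholders. -->
<input type="submit" name="Preview" value="Preview"
       hx-post="/ikiwiki.cgi"
       hx-select=".commentpreview"
       hx-target=".comment-preview-hook"
       hx-swap="beforeend"/>
```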

Now, clicking "Preview" does not leave the current page, but fetches a rendering of your comment-in-progress, and splices it into the comment list, appropriately marked up to be clear it's a preview.

Third step: handling submitted comments

IkiWiki is highly configurable, and many different things could happen once you post a comment.

On my personal blog, all comments are held for moderation before they are published. The page you were served after submitting a comment was rather bare-bones, a status message "Your comment will be posted after moderator review", without the original page content or comments.

I wanted your comment to appear in the page immediately, albeit marked up to indicate it was awaiting review. Since the traditional workflow didn't render or present your comment to you, I had to cheat.

handling moderated comments

Moderation message upon submitting a comment

One of my goals with this project was not to modify IkiWiki itself. I had to break this rule for moderated comments. When returning the "comment is moderated" page, IkiWiki uses HTTP status code 200, the same as for other scenarios. I wrote a tiny patch to return HTTP 202 (Accepted, but not processed) instead.

I now have to write some actual JavaScript. htmx emits the htmx:beforeSwap event after an AJAX call returns, but before the corresponding swap is performed. I wrote a function that is triggered on this event, filters for HTTP 202 responses, triggers the "Preview" button, and then alters the result to indicate a moderated, rather than previewed, comment. (That's why I bothered to implement previews). You can read the full function here: jon.js.
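A minimal sketch of such a handler, assuming the event shape documented by htmx (detail.xhr carries the response, detail.shouldSwap controls whether the swap happens). The helper name and the inline comments are mine; the real logic is in jon.js.

```javascript
// Sketch only: the real implementation lives in jon.js.
// Pure helper so the decision is easy to test: the IkiWiki patch
// answers HTTP 202 when a comment is accepted but held for moderation.
function isModeratedResponse(status) {
  return status === 202;
}

// Wire it up only when a DOM is available (i.e. in a browser).
if (typeof document !== 'undefined') {
  document.body.addEventListener('htmx:beforeSwap', function (evt) {
    if (!isModeratedResponse(evt.detail.xhr.status)) return;
    evt.detail.shouldSwap = true; // swap the 202 response body in anyway
    // ...then trigger the "Preview" flow and re-label the spliced-in
    // comment as awaiting moderation rather than as a preview...
  });
}
```

Keeping the status check in a pure function separates the (testable) decision from the DOM wiring.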

Summary

I've done barely any front-end web development for years and I found working with htmx to be an enjoyable experience.

You can leave a comment on this very blog post if you want to see it in action. I couldn't resist adding an easter egg: Brownie points if you can figure out what it is.

Adding htmx to an existing CGI-based website let me improve one of the workflows in a gracefully-degrading way (without JavaScript, the old method will continue to work fine) without modifying the existing application itself (well, almost) and without having to write very much code of my own at all: nearly all of the configuration was declarative.

09 November, 2024 09:16PM

hackergotchi for Daniel Pocock

Daniel Pocock

Joel Espy Klecker, unpaid, terminally ill youth labor & Debian knew it

According to the official history of Debian, which was moved here after my last blog on Klecker (see snapshot / archive copy), no one knew that Joel "Espy" Klecker was a terminally ill teenager working without pay from his sickbed. Here is the same text that I copied in my first step into the Klecker case.

On July 11th, 2000, Joel Klecker, who was also known as Espy, passed away at 21 years of age. No one who saw 'Espy' in #mklinux, the Debian lists or channels knew that behind this nickname was a young man suffering from a form of Duchenne muscular dystrophy. Most people only knew him as 'the Debian glibc and powerpc guy' and had no idea of the hardships Joel fought. Though physically impaired, he shared his great mind with others.

Joel Klecker (also known as Espy) will be missed.

In fact, they did know. The Debian history page belongs on the list of Debian's lies.

Subject: RE: [jwk@espy.org: Joel Klecker]
Date: Fri, 14 Jul 2000 20:40:00 -0600 (MDT)
From: Jason Gunthorpe 
To: debian-private@lists.debian.org
CC: Debian Private List 


On Tue, 11 Jul 2000, Brent Fulgham wrote:

> > It's very hard for me to even send this message. This is a
> > great loss to us all.
>
> First, I'd like to extend my condolences to Joel's family.  It
> is still very hard to believe this has happened.  Joel was
> always just another member of the project -- no one knew (or
> at least I did not know) that he was facing such terrible
> hardships.  Debian is poorer for his loss.

Some of us did know, but he never wished to give specifics. I do not think
he wanted us to really know. I am greatly upset that I was unable to at

[ ... snip .... ]

This case is so bad that I am going to have to write multiple blogs to dissect some of the messages in the threads about the casualty.

Joel Espy Klecker, Debian

An obituary was published in the newspaper:

Joel Edmund Klecker

Aug. 29, 1978 - July 11, 2000

STAYTON - Joel Klecker, 21, died Tuesday of muscular dystrophy.

He was born in Salem and raised in Stayton. He attended Stayton public schools and Stayton High School. He was a Debian software project developer, one of 500 worldwide, worked on Apple computers and was a computer enthusiast.

Survivors include his parents, Dianne and Jeffrey Klecker of Stayton; brother, Ben of Stayton; and grandparents, Roy and Yvonne Welstad of Aumsville.

Services will be 2 p.m. Saturday at Calvary Lutheran Church, where he was a member. Interment will be at Lone Oak Grier Cemetery in Sublimity. Arrangements are by Restlawn Funeral Home in Salem.

Contributions: Muscular Dystrophy Association, 4800 Macadam Ave., Portland, OR 97201.

Klecker was born 29 August 1978. These messages hint that his first packages may have been contributed in November or December 1997. Message 1, message 2 and message 3.

At the time, he would have been 19 years old, still a teenager, when he began doing unpaid work for the other Debian cabal members.

Many of the Debianists today obfuscate who they really work for to try and make it look like Debian is a hobby or a "Family", but the impersonation of family is a fallacy.

Jason Gunthorpe (jgg), who is now with NVIDIA, clearly knew some things about it.

Jason Gunthorpe, Debian, NVIDIA

We don't know which people had knowledge of Klecker's situation or which organizations they worked for. During the Debian trademark dispute, a list of organizations using Klecker's work was submitted to the Swiss trademark office. Though the submission was written in Italian, the names of these companies are clear. They all assert they are using Debian. Did they know that unpaid, terminally ill, bed-ridden teenagers had been writing and testing the packages for them?

The names of the companies are copied below. Remember, Mark Shuttleworth sold his first business, Thawte, for $700 million about eight months before Klecker died.

While Klecker was bed-ridden, here is that jet-ridden picture:

Mark Shuttleworth, private jet, Canonical, Ubuntu

Is it really fair that Klecker, his family and many other volunteers get nothing at all from Debian? Or is that modern slavery? The US State Dept definition of Modern Slavery is extremely broad and includes all kinds of deceptive work practices.

Please see the chronological history of how the Debian harassment and abuse culture evolved.

Come già riportato nell’incipit del presente paragrafo, la nascita del progetto DEBIAN risale al 1997, ed ogni paese ha al proprio interno una comunità attiva di sviluppatori volontari che si occupano di progettare,
testare e distribuire programmi basati sul sistema operativo multipiattaforma DEBIAN per i più svariati usi;
ogni anno, a partire dal 2004, come visibile qui https://www.debconf.org/ è stata organizzata una conferenza
internazionale alla quale partecipano tutti gli sviluppatori volontari che fanno parte delle diverse comunità
nazionali attive in ciascun paese del mondo (la lista dei paesi è visibile nel macrogruppo Entries by region ed
annovera, inter alia, Svizzera, Francia, Germania, Italia, Regno Unito, Polonia, Austria, Spagna, Norvegia,
Belgio, USA ecc, oltre ad estendersi a Sud est asiatico, all’Africa e all’America Latina).

Consultando, ad esempio, il documento DebConf13 - Sponsors relativo alla conferenza per l’anno 2013
visibile nella Section Swiss Debian Community, si nota che fra i numerosi sponsor spiccano Google, la
arcinota società creatrice del famosissimo ed omonimo motore di ricerca, e HP, il noto produttore di
hardware; allo stesso modo, consultando il documento DebConf15 - Sponsors visibile nella Section
European Debian Community, fra gli sponsor della conferenza per l’anno 2015, è possibile annoverare
nuovamente Google, HP oltre a IBM, il noto produttore di software e hardware, VALVE, il noto distributore
di videogiochi online, Fujitsu, il noto produttore di hardware, nonché BMW GROUP, il noto produttore di
autoveicoli. Per quanto concerne l’ultima conferenza svoltasi appunto nel 2022, visionando il documento
DebConf22 sponsorship brochure visibile nella Section European Debian Community, è possibile
annoverare tra gli sponsor, oltre a Google, anche Lenovo, il noto produttore di hardware, e Infomaniak, il più
grande fornitore di hosting per siti internet della Svizzera (tale azienda, oltre ad aver sponsorizzato numerose
edizioni della conferenza annuale, offre anche servizi di streaming e video on demand, ospitando più di
200.000 domini, 150.000 siti web e 350 stazioni radio/TV).

La stessa tipologia di informazioni, a livello europeo, può essere rinvenuta consultando la voce Entries in
section European Debian Communiy nella sezione Entries by section.

Fra gli altri sponsor di cui alle diverse conferenze tenutesi annualmente, si annoverano, inoltre, l’Università
di Zurigo-Dipartimento di informatica, il Politecnico di Zurigo-Dipartimento di Ingegneria elettrica, la
PricewaterhouseCooper (notissima società di revisione), Amazon Web Services (la piattaforma di cloud
computing e servizi web di proprietà di Amazon, la arcinota società di commercio elettronico statunitense),
Roche (la nota casa farmaceutica), Univention Corporate Server (nota società tedesca produttrice di software
open source per la gestione di infrastrutte informatiche complesse), Hitachi (noto produttore di hardware) il
Cantone di Neuchâtel ecc., oltre ad un nutrito numero di altre società private ed altri enti; per una
panoramica delle conferenze annuali tenutesi negli ultimi dieci anni è possibile osservare la relativa
documentazione promozionale filtrando le Categories e cercando la voce Community – DebConf.

Osservando inoltre il macro gruppo Entries by year, è stato anche raccolto materiale volto a coprire l’ultimo
decennio di attività, 2012/2022, del progetto DEBIAN compiegando documenti provenienti da diverse fonti
della stampa specializzata e non, attestazioni da parte di Università e Centri di ricerca nazionali ed esteri,
attestazioni da parte di utilizzatori del software DEBIAN per la propria attività imprenditoriale/commerciale
ecc.

Il software DEBIAN di SPI è infatti utilizzato sia da numerose società private in Svizzera e nella Unione
Europea, sia da numerosi enti istituzionali e di ricerca attivi nei più svariati ambiti.

I documenti disponibili nella Entries by section alla voce Cooperation with private companies in Switzerland
mostrano infatti come LIIP www.liip.ch una nota società svizzera (con sedi a Losanna, Friburgo, Berna,
Basilea, Zurigo e San Gallo) attiva nella prestazione di servizi connessi alla rete internet quali, ad esempio,
registrazione di domini per siti internet, configurazione di Server, servizi di hosting e per la creazione di siti
web, campagne pubblicitarie via internet, gestione dei social network, sia anch’essa una utilizzatrice di
Debian e, fra l’altro, anche uno degli sponsor delle conferenze annuali degli sviluppatori volontari.

Un altro documento incluso sotto la stessa voce, Debian Training Courses in Switzerland, mostra come
vengano tenuti corsi di formazione sul software DEBIAN; sempre sotto la stessa voce, il documento
Microsoft Azure available from new cloud regions in Switzerland for all customers mostra come il servizio
di cloud computing di Microsoft (notissimo produttore di software), Microsot Azure, offra il software
DEBIAN tra la selezione dei software messi a disposizione.

Sempre sotto la stessa voce, per mostrare la trasversalità e la permeazione del software DEBIAN in tutti i
settori delle cerchie interessate, vi sono documenti che attestano, ad esempio, come un centro osteopatico a
Losanna, https://osteo7-7.ch/ operi con server dotati del software DEBIAN, che la casa editrice Ringier AG
di Zurigo attiva nei mercati di quotidiani, periodici, televisione, web e raccolta pubblicitaria si sia occupata
del software DEBIAN e che Lenovo (noto produttore di hardware) si sia interessato anch’esso al software
DEBIAN.

I documenti disponibili nella Entries by section alla voce Cooperation with private companies in the
European Union mostrano la notorietà del software DEBIAN presso svariate società localizzate in diversi
paesi europei, ad esempio, Logikon labs http://www.logikonlabs.com/ in Grecia, Servicio Técnico, Open
Tech S.L https://www.opentech.es/ in Spagna, ALTISCENE http://www.altiscene.fr/, Logilab
https://www.logilab.fr/, Bureau d'études et valorisations archéologiques Éveha https://www.eveha.fr/ in
Francia, 99ideas https://99ideas.pl/ e Roan Agencja Interaktywna https://roan24.pl/ in Polonia, Mendix
Technology https://www.mendix.com/ in Olanda; in ragione del numero particolarmente elevato di
documenti, pari a 135 unità, si invita pertanto a prendere visione dell’elevato di società piccole, medie e
grandi che vedono il software DEBIAN alla base dei loro sistemi informatici.

Consultando le voci Research & Papers, Institutional/Governmental cooperation e Miscellaneous nella
sezione Entries by section, si possono rinvenire una serie di documenti inerenti articoli divulgativi,
scientifici, saggi di ricerca, abstract di tesi, monografie, brevi guide ecc., aventi ad oggetto il software
DEBIAN, realizzati, inter alia, dal Politecnico di Zurigo, dall’Università di Edimburgo, dall’Università di
Oxford, dall’EPFL di Losanna, dalla Università di Ginevra, dall’Università di Roma Tor Vergata,
dall’European Synchrotron Radiation Facility di Grenoble, dal WSL Istituto per lo studio della neve e delle
valanghe SLF di Davos, dall’Università Politécnica di Madrid, dalla Scuola Specializzata Superiore di
Economia-Sezione di Informatica di Gestione del Canton Ticino, dalla Unione internazionale delle
telecomunicazioni di Ginevra, dalla BBC del Regno Unito, dal CERN di Ginevra, dall’Università di
Glasgow, dalla Università di Durham ecc.

The entries Swiss press coverage and European press coverage in the Entries by section area include a press review, at both the Swiss and the European level, of articles about the DEBIAN software published, inter alia, by www.netzwoche.ch, Swiss IT Magazine, RTS Info, Corriere della Sera, Linux Magazine, www.heise.de, www.gamestar.de, The Guardian, the BBC, L'Espresso, Il Disinformatico (a blog run by the well-known Ticino journalist Paolo Attivissimo), Linux user, www.computerbase.de, www.derstandard.at, https://blog.programster.org/, www.digi.no, Linux Magazine https://www.linux-magazine.com/, etc.

Reviewing the entries Attestation & Statements by third parties, Switzerland, in the Entries by section column, one can see that several actors belonging to the relevant commercial circles, made up of consumers, distribution channels and traders, have given explicit attestations of the renown and recognition of the DEBIAN trademark in Switzerland for the goods claimed in class 9 (“Logiciels de système d'exploitation et centres publics de traitement de l'information.”):

- the WSL Institute for Snow and Avalanche Research SLF in Davos;
- the Department of Informatics of the University of Zurich;
- the internet service provider www.oriented.net of Basel;
- the osteopathic centre Osteo 7/7 www.osteo7-7.ch, with offices in Lausanne and Geneva;
- CERN www.home.web.cern.ch in Geneva, through Ing. Javier Serrano in his capacity as BE-CEM - Electronics Design and Low-level software (EDL) Section Leader at CERN;
- www.infomaniak.com, the largest website hosting provider in Switzerland (the company also offers streaming and video-on-demand services, hosting more than 200,000 domains, 150,000 websites and 350 radio/TV stations), through the CEO of Infomaniak.com, Boris Siegenthaler;
- www.liip.ch, a well-known Swiss company (with offices in Lausanne, Fribourg, Bern, Basel, Zurich and St. Gallen) providing internet-related services such as domain registration for websites, server configuration, hosting and website creation services, online advertising campaigns and social network management, through LIIP co-founder and partner Gerhard Andrey;
- www.microsoft.com, the very well-known producer of software and cloud services (Windows, Microsoft Azure, etc.), through Sarah Novotny, Director of Open Source Strategy at Microsoft;
- www.microsoft.com, the very well-known producer of software and cloud services (Windows, Microsoft Azure, etc.), through Ing. KY Srinivasan, Distinguished Engineer at Microsoft;
- www.microsoft.com, the very well-known producer of software and cloud services (Windows, Microsoft Azure, etc.), through Ing. Joshua Poulson, Program Manager at Microsoft;
- CERN www.home.web.cern.ch in Geneva, through Dr. Axel Naumann in his capacity as Senior applied physicist and ROOT Project Leader at CERN;
- www.univention.com, one of the leading providers of open source software in the fields of identity management and application integration and deployment in Europe and Switzerland, with thousands of users and partner organizations, through the CEO of Univention, Peter H. Ganten.

Under the heading immediately below the previous one, reviewing the entries Attestation & Statements by third parties, European Union, in the Entries by section column, one can see that several actors belonging to the relevant commercial circles, made up of consumers, distribution channels and traders, have given explicit attestations of the renown and recognition of the DEBIAN trademark in Europe for the goods claimed in class 9 (“Logiciels de système d'exploitation et centres publics de traitement de l'information.”), for a total of no fewer than 146 records (the first 25 of which are listed below):

- the Rost-Lab Bioinformatics Group of the Technical University of Munich, Germany;
- the Greek company Logikon Labs of Athens;
- the Department of Engineering of the University of Rome Tor Vergata, Italy;
- the Spanish company Servicio Técnico, Open Tech SL of Las Palmas;
- the French company ALTISCENE of Toulouse;
- the Polish entity Zakład Gospodarowania Nieruchomościami w Dzielnicy Mokotów m.st. of Warsaw;
- the French company Logilab of Paris;
- the Swedish organization www.Bayour.com of Gothenburg;
- the French institution ESRF (European Synchrotron Radiation Facility) of Grenoble;
- the Austrian organization www.mur.at of Graz;
- the Polish company www.Dictionaries24.com of Poznan;
- the French non-profit organization TuxFamily;
- the German organization LINKES FORUM of Oberbergischer Kreis;
- the Polish company www.99ideas.com of Gliwice;
- the Departamento de Arquitectura y Tecnología de Sistemas Informáticos (Facultad de Informática) of the Spanish Universidad Politécnica de Madrid;
- the Italian company Reware Soc. Coop of Rome;
- the Polish company Roan Agencja Interaktywna of Gorzów;
- the Slovak company RoDi of Zilina;
- the Dutch company Mendix Technology of Rotterdam;
- the French organization Bureau d'études et valorisations archéologiques Éveha of Limoges;
- the Dutch company AlterWeb;
- the Electronics Research Group of the University of Aberdeen, Scotland;
- the Dutch company MrHostman of Montfoort;
- the Polish company System rezerwacji online Nakiedy of Gdansk;
in addition, as noted, to the remaining testimonials and attestations given by the most varied companies and by various public and private bodies based in Switzerland, Italy, Germany, the United Kingdom, France, Poland, Austria, Spain, the Netherlands, Norway, Belgium, the Czech Republic, Sweden, Bulgaria, Greece, Finland, Kosovo, Slovakia, Bosnia, Denmark, Hungary, Lithuania and Romania, all of which readers are invited to review.

To demonstrate, both by name and in quantitative terms, the spread of the users of the DEBIAN software, an extract from the DEBIAN project website https://www.debian.org/users/index.it.html is reproduced below, through which one can browse all the attestations voluntarily left on the DEBIAN project site www.debian.org by end users of the DEBIAN software from a wide variety of backgrounds (each name is an interactive link on https://www.debian.org/users/index.it.html):

Educational institutions (educational)
Commercial (commercial)
Non-profit organizations (non-profit)
Government bodies (government)

Educational institutions (educational)
Electronics Research Group, University of Aberdeen, Aberdeen, Scotland
Department of Informatics, University of Zurich, Zurich, Switzerland
General Students' Committee (AStA), Saarland University, Saarbrücken, Germany
Athénée Royal de Gembloux, Gembloux, Belgium
Computer Science, Brown University, Providence, RI, USA
Sidney Sussex College, University of Cambridge, UK
CEIC, Scuola Normale Superiore di Pisa, Italy
Mexican Space Weather Service (SCiESMEX), Geophysics Institute campus Morelia (IGUM),
National University of Mexico (UNAM), Mexico
COC Araraquara, Brazil
Departamento de Arquitectura y Tecnología de Sistemas Informáticos (Facultad de Informática),
Universidad Politécnica de Madrid, Madrid, Spain
Department of Control Engineering, Faculty of Electrical Engineering, Czech Technical University,
Czech Republic
Swiss Federal Institute of Technology Zurich, Department of Physics, ETH Zurich, Switzerland
Genomics Research Group, CRIBI - Università di Padova, Italy
Dipartimento di Geoscienze, Università degli Studi di Padova, Italy
Nucleo Lab, Universidad Mayor de San Andrés, Bolivia
Department of Physics, Harvard University, USA
Infowebhosting, Perugia, Italy
Medical Information System Laboratory, Doshisha University, Kyoto, Japan
Bioinformatics & Theo. Biology Group, Dept. of Biology, Technical University Darmstadt,
Germany
Center for Climate Risk and Opportunity Management in Southeast Asia and Pacific, Indonesia
Laboratorio de Comunicaciones Digitales, Universidad Nac. de Cordoba, Argentina
Laboratorio di Calcolo e Multimedia, Università degli Studi di Milano, Italy
Department of Engineering, University of Rome Tor Vergata, Italy
Lycée Molière, Belgium
Max Planck Institute for Informatics, Saarbrücken, Germany
Computer Department, Model Engineering College, Cochin, India
Medicina - Facultad de Ciencias Médicas, Universidad Nacional del Comahue, Cipolletti, Río
Negro, Argentina
Artificial Intelligence Lab, Massachusetts Institute of Technology, USA
Montana Tech, Butte, Montana, USA
Mittelschule, Montessoriverein Chemnitz, Chemnitz, Germany
Laboratory GQE-Le Moulon / CNRS / INRAE, Gif-sur-Yvette, France
Department of Measurement and Control Technology MRT (Department of Mechanical
Engineering), University of Kassel, Germany
Department of Computer Science & Engineering, Muthayammal Engineering College, Rasipuram,
Tamilnadu, India
Spanish Bioinformatics Institute, Spanish National Cancer Research Centre, Madrid, Spain
NI, Núcleo de Informática, Brazil
Software & Networking Lab, National University of Oil and Gas, Ivano-Frankivsk, Ukraine
Parallel Processing Group, Department of Computer Science and Engineering, University of
Ioannina, Ioannina, Greece
Departamento de Matemática -- Universidade Federal do Paraná, Brazil
Departamento de Informática -- Universidade Federal do Paraná, Brazil
Protein Design Group, National Center for Biotechnology, Spain
Rost Lab/Bioinformatics Group, Technical University of Munich, Germany
Department of Computer Science, University of Salzburg, Salzburg, Austria
Don Bosco Technical Institute, Sunyani, Ghana
Instituto de Robótica y Automática, Escuela Superior de Ingenieros, University of Sevilla, Spain
Computer Engineering Department, Sharif University of Technology, Iran
Dipartimento di Scienze Statistiche, Università di Padova, Italy
School of Mathematics, Tata Institute of Fundamental Research, Bombay, India
Department of Computer and Engineering, Thiagarajar College of Engineering, Madurai, India
Library and IT Services, Tilburg University, Tilburg, the Netherlands
Computer Science Department, Trinity College, Hartford Connecticut, USA
Turnkey IT Training Institute, Colombo, Sri Lanka
System Department, University of Santander, Cúcuta, Colombia
Academic Administration, Universidad de El Salvador, El Salvador
Universitas Indonesia (UI), Depok, Indonesia
Laboratoire de Chimie physique, CNRS UMR 8000, Université Paris-Sud, Orsay, France
Dirección de Tecnología e Informática, Universidad Nacional Experimental de Guayana, Puerto
Ordaz, Venezuela
School of Computer Science and Engineering, University of New South Wales, Sydney, Australia
International Arctic Research Center, University of Alaska Fairbanks, USA
Laboratoire VERIMAG, CNRS/Grenoble INP/Université Joseph Fourier, France
Centre for Information Technology, University of West Bohemia, Pilsen, Czech Republic
Game Development Club, Worcester Polytechnic Institute, Worcester MA, USA

Commercial (commercial)

IT Department, 100ASA Srl, Dragoni, Italy
99ideas, Gliwice, Poland
Tech Dept, ABC Startsiden AS, Oslo, Norway
Admins.CZ, Prague, Czech Republic
AdvertSolutions.com, United Kingdom
Kancelaria Adwokacka Adwokat Wiktor Gamracki, Rzeszów, Poland
Adwokat radca prawny, Poznan Lodz, Poland
AFR@NET, Tehran, Iran
African Lottery, Cape Town, South Africa
AKAOMA Consulting, France
Alfabet Sukcesu, Lubliniec, Poland
AlterWeb
Altiria, Spain
ALTISCENE, Toulouse, France
Anykey Solutions, Sweden
JSC VS, Russia
Apache Auto Parts Incorporated, Parma USA
Applied Business Solutions, São Paulo, Brazil
Archiwwwe, Stockholm, Sweden
Computational Archaeology Division, Arc-Team, Cles, Italy
Articulate Labs, Inc., Dallas, TX, US
Athena Capital Research, USA
Atrium 21 Sp. z o.o. Warsaw, Poland
Co. AUSA, Almacenes Universales SA, Cuba
Agencja interaktywna Avangardo, Szczecin, Poland
Axigent Technologies Group, Inc., Amarillo, Texas, USA
Ayonix, Inc., Japan
AZ Imballaggi S.r.l., Pontedera, Italy
Backblaze Inc, USA
Baraco Compañia Anónima, Venezuela
Big Rig Tax, USA
BioDec, Italy
bitName, Italy
BMR Genomics, Padova, Italy
B-Open Solutions srl, Italy
Braithwaite Technology Consultants Inc., Canada
BrandLive, Warsaw, Poland
calbasi.net web developers, Catalonia, Spain
Camping Porticciolo, Bracciano (Rome), Italy
CAROL - Cooperativa dos Agricultores da Região de Orlândia, Orlândia, São Paulo, Brazil
Centros de Desintoxicación 10, Grupo Dropalia, Alicante, Spain
Charles Retina Institute, Tennessee, USA
Chrysanthou & Chrysanthou LLC, Nicosia, Cyprus
CIE ADEMUR, Spain
CLICKPRESS Internet agency, Iserlohn, Germany
Code Enigma
Companion Travel LLC, Tula, Russia
Computación Integral, Chile
Computerisms, Yukon, Canada
CRX LTDA, Santiago, Chile
CyberCartes, Marseilles, France
DataPath Inc. - Software Solutions for Employee Benefit Plans, USA
Datasul Paranaense, Curitiba PR, Brazil
Internal IT, Dawan, France
DEQX, Australia
Diciannove Soc. Coop., Italy
DigitalLinx, Kansas City, MO, USA
Directory Wizards Inc, Delaware, USA
IT / Sales Department, Diversicom Corp of Riverview, USA
Dubiel Vitrum, Rabka, Rabka, Poland
Eactive, Wroclaw, Poland
eCompute Corporation, Japan
Agencja Interaktywna Empressia, Poznan, Poland
enbuenosaires.com, Buenos Aires, Argentina
Eniverse, Warsaw, Poland
Epigenomics, Berlin, Germany
Essential Systems, UK
Ethan Clark Air Conditioning, Houston, Texas, USA
EuroNetics Operation KB, Sweden
Bureau d'études et valorisations archéologiques Éveha, Limoges, France
Fahrwerk Kurierkollektiv UG, Berlin, Germany
Faunalia, AP, Italy
Flamingo Agency, Chicago, IL, USA
Freeside Internet Services, Inc., USA
Frogfoot Networks, South Africa
French Travel Organisation, Nantes, France
Fusion Marketing, Cracow, Poland
IT, Geodata Danmark, Denmark
GigaTux, London, UK
Globalways AG, Germany
GNUtransfer - GNU Hosting, Mar del Plata, Argentina
G.O.D. Gesellschaft für Organisation und Datenverarbeitung mbH, Germany
Goodwin Technology, Springvale, Maine, USA
GPLHost LLC, Wilmington, Delaware, USA; GPLHost UK LTD, London, United Kingdom;
GPLHost Networks PTE LTD, Singapore, Singapore
Hermes IT, Romania
HeureKA -- Der EDV Dienstleister, Austria
HostingChecker, Varna, Bulgaria
Hostsharing eG (Cooperation), Germany
Hotel in Rome, Foggia, Italy
Huevo Vibrador, Madrid, Spain
ICNS, X-tec GmbH, Germany
Instasent, Madrid, Spain
IT outsourcing department, InTerra Ltd., Russian Federation
IreneMilito.it, Cosenza, Italy
Iskon Internet d.d., Croatia
IT Lab, Foggia, Italy
Keliweb SRL, Cosenza, Italy
Kosmetyczny Outlet, KosmetycznyOutlet, Wroclaw, Poland
Kulturystyka.sklep.pl sp. z o.o, Kleszczow, Poland
Linden Lab, San Francisco, California, USA
Linode, USA
LinuxCareer.com, Rendek Online Media, Australia
Linuxlabs, Krakow, Poland
IT services, Lixper S.r.L., Italy
Logikon Labs, Athens, Greece
Logilab, Paris, France
Madkom Ltd. (Madkom Sp. z o.o.), Poland
Inmobiliaria Mar Menuda SA, Tossa de Mar, Spain
IT Services, Medhurst Communications Ltd, UK
Media Design, The Netherlands
Mediasecure, London, United Kingdom
Megaserwis S.C. Serwis laptopów i odzyskiwanie danych, Warsaw, Poland
Mendix Technology, Rotterdam, the Netherlands
Mobusi Mobile Performance Advertising, Los Angeles, California, USA
Molino Harinero Sula, S.A., Honduras
MrHostman, Montfoort, The Netherlands
MTTM La Fraternelle, France
System rezerwacji online Nakiedy, Gdansk, Poland
New England Ski Areas Council, USA
IT Ops, NG Communications bvba, Kortenberg, Belgium
nPulse Technologies, LLC, Charlottesville, VA, USA
Oktet Labs, Saint Petersburg, Russia
One-Eighty Out, Incorporated, Colorado Springs, Colorado, USA
Servicio Técnico, Open Tech S.L., Las Palmas, Spain
oriented.net Web Hosting, Basel, Switzerland
Osteo 7/7, Lausanne and Geneva, Switzerland
IT Department, OutletPC, Henderson, NV, USA
Parkyeri, Istanbul, Turkey
Pelagicore AB, Gothenburg, Sweden
Development and programming, www.perfumesyregalos.com, Spain
PingvinBolt webshop, Hungary
DeliveryHero Holding GmbH, IT System Operations, Berlin, Germany
Portantier Information Security, Buenos Aires, Argentina
Pouyasazan, Isfahan-Iran
PR International Ltd, Kings Langley, Hertfordshire, UK
PROBESYS, Grenoble, France
Agencja interaktywna Prodesigner, Szczecin, Poland
Questia Media
RatujLaptopa, Warsaw, Poland
The Register, Situation Publishing, UK
NOC, RG3.Net, Brazil
RHX Studio Associato, Italy
Roan Agencja Interaktywna, Gorzów Wielkopolski, Poland
RoDi, Zilina, Slovakia
Rubbettino Editore, Soveria Mannelli (CZ), Italy
Industrial Router Group, RuggedCom, Canada
RV-studio, Zielonka, Poland
S4 Hosting, Lithuania
Salt Edge Inc., Toronto, Canada
Ing. Salvatore Capolupo, Cosenza, Italy
Santiago Engenharia LTDA, Brazil
SCA Packaging Deutschland Stiftung & Co. KG, IS Department (HO-IS), Germany
Overstep SRL, Via Marco Simone, 80 00012 Guidonia Montecelio, Rome, Italy
ServerHost, Bucharest, Romania
Seznam.cz, a.s., Czech Republic
Shellrent Srl, Vicenza, Italy
Siemens
Information Technology Dep., SIITE SRLS, Lodi / Milano, Italy
SilverStorm Technologies, Pennsylvania, USA
Sinaf Seguros, Brazil
Skroutz S.A., Athens, Greece
SMS Masivos, Mexico
Auto Service Cavaliere, Rome, Italy
soLNet, s.r.o., Czech Republic
Som Tecnologia, Girona, Spain
Software Development, SOURCEPARK GmbH, Berlin, Germany
Computer Division, Stabilys Ltd, London, United Kingdom
Departamento de administración y servicios, SW Computacion, Argentina
Taxon Estudios Ambientales, SL, Murcia, Spain
ITW TechSpray, Amarillo, TX, USA
Tehran Raymand Co., Tehran, Iran
Telsystem, Telecomunicacoes e Sistemas, Brazil
The Story, Poland
TI Consultores, consulting technologies of information and businesses, Nicaragua
CA, Telegraaf Media ICT, Amsterdam, the Netherlands
T-Mobile Czech Republic a. s.
Nomura Technical Management Office Ltd., Kobe, Japan
TomasiL stone engravings, Italy
Tri-Art Manufacturing, Canada
EDI Team, Hewlett Packard do Brasil, São Paulo, Brazil
Trovalost, Cosenza, Italy
Taiwan uCRobotics Technology, Inc., Taoyuan, Taiwan (ROC)
United Drug plc, Ireland
Koodiviidakko Oy, Finland
Departamento de Sistemas, La Voz de Galicia, A Coruña, Spain
VPSLink, USA
Wavecon GmbH, Fürth, Germany
WTC Communications, Canada
Wyniki Lotto Web Page, Poznan, Poland
Software Development, XSoft Ltd., Bulgaria
Zomerlust Systems Design (ZSD), Cape Town, South Africa

Non-profit organizations (non-profit)

Bayour.com, Gothenburg, Sweden
Eye Of The Beholder BBS, Fidonet Technology Network, Catalonia/Spain
Dictionaries24.com, Poznan, Poland
Beyond Disability Inc., Pearcedale, Australia
E.O. Ospedali Galliera, Italy
ESRF (European Synchrotron Radiation Facility), Grenoble, France
F-Droid - the definitive source for free software Android apps
GreenNet Ltd., UK
GREFA, Grupo para la rehabilitación de la fauna autóctona y su hábitat, Majadahonda, Madrid,
Spain
GSI Helmholtzzentrum für Schwerionenforschung GmbH, Darmstadt, Germany
LINKES FORUM in Oberberg e.V., Gummersbach, Oberbergischer Kreis, Germany
MAG4 Piemonte, Torino, Italy
Mur.at - Verein zur Förderung von Netzwerkkunst, Graz, Austria
High School Technology Services, Washington DC USA
PRINT, Espace autogéré des Tanneries, France
Reware Soc. Coop - Impresa Sociale, Rome, Italy
Systems Support Group, The Wellcome Trust Sanger Institute, Cambridge, UK
SARA, Netherlands
Institute for Snow and Avalanche Research (SLF), Swiss Federal Institute for Forest, Snow and
Landscape Research (WSL), Davos, Switzerland
SRON: Netherlands Institute for Space Research
TuxFamily, France

Government bodies (government)

Agência Nacional de Vigilância Sanitária - ANVISA (Health Surveillance National Agency) -
Gerência de Infra-estrutura e Tecnologia (GITEC), Brazil
Directorate of Information Technology, Council of Europe, Strasbourg, France
Gerencia de Redes, Eletronorte S/A, Brazil
European Audiovisual Observatory, Strasbourg, France
Informatique, Financière agricole du Québec, Canada
Informática de Municípios Associados - IMA, Governo Municipal, Campinas/SP, Brazil
Bureau of Immigration, Philippines
Institute of Mathematical Sciences, Chennai, India
INSEE (National Institute for Statistics and Economic Studies), France
London Health Sciences Centre, Ontario, Canada
Lorient Agglomération, Lorient France
Ministry of Foreign Affairs, Dominican Republic
Procempa, Porto Alegre, RS, Brazil
SEMAD, Secretaria de Estado de Meio Ambiente e Desenvolvimento Sustentável, Goiânia/GO,
Brasil
Servizio Informativo Comunale, Comune di Riva del Garda, ITALY
St. Joseph's Health Care London, Ontario, Canada
State Nature Conservation Agency, Slovakia
Servicio de Prevencion y Lucha Contra Incendios Forestales, Ministerio de Produccion Provincia de
Rio Negro, Argentina
Vermont Department of Taxes, State of Vermont, USA
Zakład Gospodarowania Nieruchomościami w Dzielnicy Mokotów m.st. Warszawy, Warsaw, Poland

Please see the chronological history of how the Debian harassment and abuse culture evolved.

09 November, 2024 02:00PM

November 08, 2024

Daniel Pocock, Nomination for Ireland, Dublin Bay South, General Election 2024

Polling day in Ireland is anticipated to be Friday, 29 November 2024.

If you are reading a copy of this page on a third-party site, please click here to view the most recent version of the page. New information will be added during the month of November and after polling finishes.

Videos

I'm the only candidate with the skills and experience to handle issues relating to Artificial Intelligence, Cybersecurity, AirBNB and the risks of technology for your children.

Those issues have a much bigger impact on your job and housing than the immigration problem.

Why vote for and promote Mr Pocock for Dublin Bay South?

(More details will be added here, please check again from time to time)

Experience in multiple countries (UK, France, Switzerland, Singapore, Australia) and multiple business sectors (finance, technology and non-profit).

Education/Online safety for your kids, cover-ups: In 2007, a school teacher quit his job to run a social enterprise doing open source software. Open source cabals led us down this path with plenty of flattery, but "payment with recognition" doesn't help families. The former school teacher became one of several victims in the Debian suicide cluster. Another engineer died on our wedding day, cause of death suppressed. If the culture of big tech is not safe for school teachers, and if this culture is not even safe for the engineers who make it, how can it be safe for your kids? Read about the Debian Suicide Cluster.

Health and safety, mental health, children and technology: Mr Pocock has been outspoken about these issues in the field of software engineering and the impact of technology on society. Read the blog post "Google, FSFE and Child Labor" that prompted attempts to have Mr Pocock censored. Read Mr Pocock's analysis of the Debian suicide cluster. Watch Mr Pocock's appearance at the United Nations Forum on Business and Human Rights in Geneva.

Trains: while some climate topics are controversial, there appears to be very broad public support for bringing back Ireland's missing trains. Mr Pocock is the only candidate who has the experience of a community campaign that saved a train in Australia. He can bring this international experience and success mentality to Ireland.

Empathy: when women in the free software movement complained, Mr Pocock listened and provided support to victims. When Mr Pocock heard about people in County Donegal losing their homes, he could empathise from his own experience.

Outsiders urgently needed for Ireland's toughest problems. From the Stardust cover-up to the "war zones" in our health system to the protracted defective concrete (Mica) blocks crisis, Ireland is dogged by leadership failures. Dublin Bay South has four seats in the Dáil. Giving one of those seats to a highly experienced member of the Irish diaspora is the only way to break out of the buck-passing culture.

Ireland's PR voting system provides a unique opportunity for independent candidates like Daniel Pocock to scrutinize the work of the parties we elect in all levels of government.

Register to vote

Urgent: please make sure you are registered to vote. The deadline to register is only a few days away.

Postal voting

Urgent: if you cannot visit the polling booth in person on 29 November, please apply for a postal vote before the deadline.

Platform

Full details of my platform will be posted throughout the campaign and this page will be updated with links to policy and campaign issues.

In short, I want to bring practical solutions to the real problems facing this district. As one of very few professional engineers (MIEI, MIEAust) contesting the election, I'm shocked by the stories of defective construction materials leaving families homeless in various parts of the country. Given the tragedy at my own home in 2023, I hope to bring both experience and empathy.

In fact, the crisis of defective blocks in Irish homes has some remarkable similarities to the Swiss JuristGate crisis, whereby many cross-border workers and small businesses found our legal protection insurance to be worthless.

Please see my biography for background details.

Donate

If you are eligible to make donations, please follow the instructions here.

Volunteer

Anybody is welcome to volunteer for my campaign. Please help people find my web site and discuss my blogs with people. Please contact me by email if you would like to get involved in the campaign.

Threats against the Pocock campaign

I have already received numerous threats of defamation, physical and sexual violence. These will be documented to help the community understand the challenges of talking about inconvenient truths.

Trivia

The West Clare Hotelier was elected on my birthday. Make of that what you will. It is also the day the Berlin Wall came down.

Why on earth would anybody want to run for public office?

Research found that some of the world's largest companies have spent over $120,000 trying to censor me even before I announced my candidacy. I guess that I might have something to say that is in the public interest.

Vote [1] Daniel POCOCK

Daniel Pocock, European Parliament election 2024
Daniel Pocock, European Parliament election 2024

08 November, 2024 12:00PM

List of Debian lies and deception

There have been some hideous lies from the Debian Project Leader, Debian Press team and other rogue members of Debian over many years. Volunteers are pointing out some of the blatant lies in the official Debian history and other documents. This page has been created to provide an index of the lies. It is a placeholder page right now and will be updated shortly with a very long list.

Pinocchio, Debian lies and deception

08 November, 2024 12:00PM

Ireland North West Infrastructure crisis compared to Latin America

People are complaining about infrastructure and public services all over Ireland.

Yet in the Midlands-North-West region, things are far worse than in the rest of the country, and the European Commission (EC) has produced some rankings, the Regional Competitiveness Index, which appear to confirm what people are saying in the street.

During the European campaigns in May 2024, people frequently mentioned that the region was scraping the bottom of the barrel on one particular metric for infrastructure. I wanted to know where this figure comes from and what it really means.

The EC produced a rather large table with multiple metrics for each region. The data set is called the EU Regional Competitiveness Index 2.0 - 2022 edition. There is a web site where people can visualize some of the data.

The interactive charts are nice but to really understand the data it is helpful to load it into a spreadsheet so we can look at the region names and numbers side-by-side.

Further down the page there is a list of documents to download. One of those documents is a spreadsheet with the data; this is not the one we want, but it is useful nonetheless. The title is RCI 2.0 - Raw data 2022, revised (backup copy of the data).

The document of interest is the spreadsheet RCI 2.0 - scores 2022 edition and back-calculations for 2019 and 2016 editions, revised. This takes the raw data values and combines them into scores for each region and each category (backup copy of the data).

The metric that has been mentioned so widely during the election campaigns is in column I, the "Infrastructure Pillar". We can highlight the entire table and sort (Descending order) on the Infrastructure Pillar to see where Ireland North West really appears in the table.

Looking at the table like this gives us some insights that we could not get from the interactive mapping tool. I'll elaborate below, but let's begin by looking at the table.
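The same highlight-and-sort step can be sketched in a few lines of plain Python. This is only an illustration of the sorting described in this article, not the EC's methodology; the four sample rows reuse "Infrastructure Pillar" scores (column I) quoted elsewhere in this article, and the spreadsheet file name in the comment is hypothetical.

```python
# In practice you would first export the EC spreadsheet to CSV and load it,
# e.g. with csv.reader(open("rci_2.0_scores_2022.csv"))  (hypothetical file name).
# Here we inline four rows from the table: (region name, NUTS code, score).
rows = [
    ("Northern and Western", "IE04", 34.1),
    ("Thessalia", "EL61", 58.6),
    ("Guyane", "FRY3", 27.0),
    ("Ile-de-France", "FR10", 185.8),
]

# Sort in descending order on the Infrastructure Pillar score, exactly as
# done in the spreadsheet (highlight the table, sort descending on column I).
ranked = sorted(rows, key=lambda row: row[2], reverse=True)

for position, (name, nuts, score) in enumerate(ranked, start=1):
    print(f"{position:3d}  {name:25s} {nuts:5s} {score:6.1f}")
```

Running this puts Ile-de-France at the top and Guyane at the bottom, with Ireland North West (IE04) just above it, mirroring what the full table shows.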


     A            B          C                     D     I
1    REGION NAME  NUTS code  Stage of development  Rank  Infrastructure Pillar
2 Ile-de-France FR10 MD 3 185.8
3 Utrecht NL31 MD 1 171.6
4 Noord-Brabant NL41 MD 4 168.3
5 Zuid-Holland NL33 MD 2 160.9
6 Antwerpen BE21 MD 11 160.4
7 Gelderland NL22 MD 9 159.7
8 Darmstadt DE71 MD 18 155.7
9 Comunidad de Madrid ES30 MD 36 154.3
10 Amsterdam and its commuting zone NL_C MD 4 153.6
11 Cataluña ES51 MD 108 152.4
12 Karlsruhe DE12 MD 19 150.5
13 Liège BE33 TR 59 150.1
14 Köln DEA2 MD 17 149.0
15 Düsseldorf DEA1 MD 16 148.9
16 Rheinhessen-Pfalz DEB3 MD 27 147.2
17 Brussels and its commuting zone BE_C MD 8 145.1
18 Limburg (NL) NL42 MD 13 145.0
19 Oberbayern DE21 MD 15 144.9
20 Picardie FRE2 TR 107 144.1

[ rows deleted ]
165 Kentriki Makedonia EL52 LD 199 61.3
166 Sardegna ITG2 LD 203 61.3
167 Basse-Normandie FRD1 TR 125 60.9
168 Mazowiecki regionalny PL92 LD 177 60.5
169 Ciudad de Melilla ES64 LD 194 60.5
170 Severovýchod CZ05 TR 123 60.1
171 Jihozápad CZ03 TR 131 59.8
172 Länsi-Suomi FI19 TR 56 58.9
173 Kujawsko-pomorskie PL61 LD 171 58.9
174 Thessalia EL61 LD 208 58.6
175 Opolskie PL52 LD 167 58.1
176 Região Autónoma da Madeira PT30 LD 182 57.6
177 Umbria ITI2 TR 163 57.3
178 Kärnten AT21 MD 91 57.0
179 Övre Norrland SE33 MD 75 56.7
180 Valle d’Aosta/Vallée d’Aoste ITC2 MD 175 56.6
181 Lubuskie PL43 LD 171 56.5
182 Limousin FRI2 TR 116 56.2
183 Małopolskie PL21 LD 127 55.9
184 Voreio Aigaio EL41 LD 217 55.2
185 Západné Slovensko SK02 LD 164 54.5
186 Peloponnisos EL65 LD 215 53.5
187 La Réunion FRY4 LD 183 53.3
188 Região Autónoma dos Açores PT20 LD 206 53.3
189 Warmińsko-mazurskie PL62 LD 188 52.6
190 Corse FRM0 TR 184 52.0
191 Podlaskie PL84 LD 181 51.8
192 Sterea Elláda EL64 LD 228 51.8
193 Martinique FRY2 LD 149 51.2
194 Southern IE05 MD 94 50.7
195 Jadranska Hrvatska HR03 LD 186 50.7
196 Dél-Alföld HU33 LD 192 50.4
197 Észak-Alföld HU32 LD 202 48.9
198 Guadeloupe FRY1 LD 188 48.5
199 Vidurio ir vakarų Lietuvos regionas LT02 LD 151 48.0
200 Ionia Nisia EL62 LD 218 47.3
201 Sud-Muntenia RO31 LD 230 46.8
202 Észak-Magyarország HU31 LD 207 46.3
203 Notio Aigaio EL42 LD 224 45.8
204 Podkarpackie PL82 LD 169 45.1
205 Ciudad de Ceuta ES63 LD 211 44.7
206 Pohjois- ja Itä-Suomi FI1D TR 84 44.6
207 Yugoiztochen BG34 LD 227 44.2
208 Dytiki Elláda EL63 LD 220 44.0
209 Panonska Hrvatska HR02 LD 195 41.5
210 Mayotte FRY5 LD 205 41.1
211 Molise ITF2 LD 197 40.2
212 Basilicata ITF5 LD 201 39.9
213 Lubelskie PL81 LD 180 37.9
214 Kriti EL43 LD 209 37.5
215 Prov. Autonoma di Trento ITH2 MD 141 36.5
216 Sud-Est RO22 LD 234 36.0
217 Dél-Dunántúl HU23 LD 197 35.1
218 Prov. Autonoma di Bolzano/Bozen ITH1 MD 160 34.8
219 Northern and Western IE04 TR 114 34.1
220 Anatoliki Makedonia, Thraki EL51 LD 225 34.0
221 Vest RO42 LD 223 33.5
222 Východné Slovensko SK04 LD 193 32.6
223 Severoiztochen BG33 LD 221 32.5
224 Nord-Est RO21 LD 233 31.8
225 Åland FI20 MD 106 30.0
226 Świętokrzyskie PL72 LD 187 30.0
227 Severen tsentralen BG32 LD 212 28.0
228 Guyane FRY3 LD 210 27.0
229 Stredné Slovensko SK03 LD 176 26.9
230 Sud-Vest Oltenia RO41 LD 231 26.7
231 Centru RO12 LD 229 24.8
232 Dytiki Makedonia EL53 LD 216 22.9
233 Nord-Vest RO11 LD 226 21.7
234 Severozapaden BG31 LD 232 19.8
235 Ipeiros EL54 LD 213 19.5

The numbers in the left column are the row numbers from the spreadsheet, so they are off by one. We can see that Ireland "Northern and Western" is at row number 219, very close to the bottom of the spreadsheet.

The first two letters of the NUTS code tell us the country. We can see that most of the regions at the bottom of the list are the rural areas of eastern European countries like Romania (RO) and Bulgaria (BG). Many people left those countries to come and work in Dublin under the Freedom of Movement system.

The scores were calculated in 2022. The following year, on 28 February 2023, there was a notorious train crash in the Tempe Valley of Thessaly, Greece on the opposite side of the EU from Ireland.

When the infrastructure scores were calculated in Brussels, Thessaly was placed above Ireland North West. We can see Thessalia at row number 174 in the table. It has a score of 58.6. The score for Ireland North West is even lower, just 34.1.

After the Greek train crash, there was a lot of debate about the poor quality of infrastructure in their region. Should residents of Ireland's north and west be concerned?

Surely it couldn't get any worse? What does worse look like after all?

This is where it was much more useful to look at the table instead of using the interactive charts and maps on the web site. When I opened the table, I immediately saw the name "Guyane" just below Ireland. Guyane is an administrative region of France, but it is actually located in South America. The only reason it is included in the table at all is that it is controlled by an EU country, France. So the table is telling us that Ireland North West is comparable to a region in South America. French Guiana has a score of 27.0, so Ireland North West is a lot closer to French Guiana than it is to Thessaly.

According to the Wikipedia page for French Guiana, forty-one percent of the territory is part of the Amazon rainforest anyway. In other words, it is closer to Paddington Bear (trailer) than to Paddington railway station.

Paddington bear

 

Sadly, there was no Celtic Tiger for the north and west. However, there is a list of carnivorous mammals in French Guiana that includes Cougars and Jaguars.

 

Cougar, French Guiana

Act now: elections, donations, your support makes a difference

If you care about this issue, please read more about how you can support my work.

08 November, 2024 09:45AM

Apple Tax funds: railways, defective concrete blocks in Ireland’s North and West

Ireland has recently been told we have to accept and spend the long disputed EUR 14 billion of tax on the Apple computer business.

We are now approaching a general election where the political parties are likely to make promises about how they spend the money.

This is one good reason for having the election three months early: Ireland can have a public debate about the money before the government starts to sign contracts and write cheques. The new government, whoever it is, will hopefully have both a mandate and the competence to use the money wisely.

Apple may be one of Ireland's biggest taxpayers but they don't get a vote as a company. Employees who are Irish citizens will be able to vote. Nonetheless, Apple is giving us giant hints that nobody has commented on publicly so far.

We can use OpenStreetMap to look at the location of Apple's headquarters in California and we can see two things very quickly.

The first is the railways. Railway lines are a big issue throughout Ireland. In many cases, the railways are already there but they are poorly maintained and haven't been used for years. The railway that passes on the north side of Apple is the Caltrain peninsula line.

Hopefully the new government will see this as some kind of an omen and use some Apple money for rail infrastructure projects.

On the south side of Apple's HQ is the Vasona Industrial Lead, and look where it goes: the local cement quarry. Of course, this is the other big issue in the north and west of Ireland, but it is also an issue for Ireland as a whole. Thousands of families in counties Donegal and Mayo, and to a lesser extent in other regions, have suffered from the use of defective concrete in their homes. Many of the homes need to be totally rebuilt. It is a scandal that has been going on for years. There is a risk of fatal accidents and of suicides by stressed homeowners. On top of that, people regularly ask why so few financial companies offer products like insurance and mortgages in the Irish republic. The unusual and uncertain nature of defective houses may deter more financial companies from offering services to Ireland as a whole.

The poor regulation of concrete production in Ireland reminds me of JuristGate, the crisis in the legal insurance industry in Switzerland. People seemed to know that something was wrong with the product for many years before the consumers found out they had been sold a lemon.

JuristGate

The City of Cupertino provides an interactive web site where we can see the real time impact of quarry dust on local residents like Apple, their employees and families.

News reports describe over 2,100 code violations, $12.7 million in penalties paid by quarry operators and complaints by the local community. Yet none of them refer to the supply of defective concrete blocks. The concrete produced there was of the highest standard; among other things, the quarry supplied all the US military facilities throughout the entire Pacific region in WW2.

Here is the map of the region. Is there a hidden message in it that Apple tax revenues should be spent on trains and 100 percent redress for defective concrete?

Apple, Cupertino, tax, Ireland, defective concrete, railways

 

Daniel Pocock, Galway, Dail, Ireland


08 November, 2024 09:45AM

November 07, 2024

hackergotchi for Jonathan Dowland

Jonathan Dowland

John Carpenter's "The Fog"

'The Fog' 7 inch vinyl record

A gift from my brother. Coincidentally I’ve had John Carpenter’s “Halloween” echoing around my head for weeks: I’ve been deconstructing it and trying to learn to play it.

07 November, 2024 09:51AM

November 06, 2024

hackergotchi for Bits from Debian

Bits from Debian

Bits from the DPL

Dear Debian community,

This is Bits from DPL for October. In addition to a summary of my recent activities, I aim to include newsworthy developments within Debian that might be of interest to the broader community. I believe this provides valuable insights and fosters a sense of connection across our diverse projects. Also, I welcome your feedback on the format and focus of these Bits, as community input helps shape their value.

Ada Lovelace Day 2024

As outlined in my platform, I'm committed to increasing the diversity of Debian developers. I hope the recent article celebrating Ada Lovelace Day 2024–featuring interviews with women in Debian–will serve as an inspiring motivation for more women to join our community.

MiniDebConf Cambridge

This was my first time attending the MiniDebConf in Cambridge, hosted at the ARM building. I thoroughly enjoyed the welcoming atmosphere of both MiniDebCamp and MiniDebConf. It was wonderful to reconnect with people who hadn't made it to the last two DebConfs, and, as always, there was plenty of hacking, insightful discussions, and valuable learning.

If you missed the recent MiniDebConf, there's a great opportunity to attend the next one in Toulouse. It was recently decided to include a MiniDebCamp beforehand as well.

FTPmaster accepts MRs for DAK

At the recent MiniDebConf in Cambridge, I discussed potential enhancements for DAK to make life easier for both FTP Team members and developers. For those interested, the document "Hacking on DAK" provides guidance on setting up a local DAK instance and developing patches, which can be submitted as MRs.

As a perfectly random example of such improvements, an older MR, "Add commands to accept/reject updates from a policy queue", might give you some inspiration.

At MiniDebConf, we compiled an initial list of features that could benefit both the FTP Team and the developer community. While I had preliminary discussions with the FTP Team about these items, not all ideas had consensus. I aim to open a detailed, public discussion to gather broader feedback and reach a consensus on which features to prioritize.

  • Accept+Bug report

Sometimes, packages are rejected not because of DFSG-incompatible licenses but due to other issues that could be resolved within an existing package (as discussed in my DebConf23 BoF, "Chatting with ftpmasters"[1]). During the "Meet the ftpteam" BoF (a log/transcription of the BoF can be found here), a new option was proposed for FTP Team members reviewing packages in NEW, to be used until the MR gets accepted:

Accept + Bug Report

This option would allow a package to enter Debian (in unstable or experimental) with an automatically filed RC bug report. The RC bug would prevent the package from migrating to testing until the issues are addressed. To ensure compatibility with the BTS, which only accepts bug reports for existing packages, a delayed job (24 hours post-acceptance) would file the bug.

  • Binary name changes - for instance, if done to experimental, not via NEW

When binary package names change, currently the package must go through the NEW queue, which can delay the availability of updated libraries. Allowing such packages to bypass the queue could expedite this process. A configuration option to enable this bypass specifically for uploads to experimental may be useful, as it avoids requiring additional technical review for experimental uploads.

Previously, I believed the requirement for binary name changes to pass through NEW was due to a missing feature in DAK, possibly addressable via an MR. However, in discussions with the FTP Team, I learned this is a matter of team policy rather than technical limitation. I haven't found this policy documented, so it may be worth having a community discussion to clarify and reach consensus on how we want to handle binary name changes to get the MR sensibly designed.

  • Remove dependency tree

When a developer requests the removal of a package – whether entirely or for specific architectures – RM bugs must be filed for the package itself as well as for each package depending on it. It would be beneficial if the dependency tree could be automatically resolved, allowing either:

a) the DAK removal tooling to remove the entire dependency tree
   after prompting the bug report author for confirmation, or

b) the system to auto-generate corresponding bug reports for all
   packages in the dependency tree.

The latter option might be better suited for implementation in an MR for reportbug. However, given the possibility of large-scale removals (for example, targeting specific architectures), having appropriate tooling for this would be very beneficial.

In my opinion the proposed DAK enhancements aim to support both FTP Team members and uploading developers. I'd be very pleased if these ideas spark constructive discussion and inspire volunteers to start working on them--possibly even preparing to join the FTP Team.

On the topic of ftpmasters: an ongoing discussion with SPI lawyers is currently reviewing the non-US agreement established 22 years ago. Ideally, this review will lead to a streamlined workflow for ftpmasters, removing certain hurdles that were originally put in place due to legal requirements, which were updated in 2021.

Contacting teams

My outreach efforts to Debian teams have slowed somewhat recently. However, I want to emphasize that anyone from a packaging team is more than welcome to reach out to me directly. My outreach emails aren't following any specific order--just my own somewhat naïve view of Debian, which I'm eager to make more informed.

Recently, I received two very informative responses: one from the Qt/KDE Team, which thoughtfully compiled input from several team members into a shared document. The other was from the Rust Team, where I received three quick, helpful replies–one of which included an invitation to their upcoming team meeting.

Interesting readings on our mailing lists

I consider the following threads on our mailing lists to be interesting reading and would like to add some comments.

Sensible languages for younger contributors

Though the discussion on debian-devel about programming languages took place in September, I recently caught up with it. I strongly believe Debian must continue evolving to stay relevant for the future.

"Everything must change, so that everything can stay the same." -- Giuseppe Tomasi di Lampedusa, The Leopard

I encourage constructive discussions on integrating programming languages in our toolchain that support this evolution.

Concerns regarding the "Open Source AI Definition"

A recent thread on the debian-project list discussed the "Open Source AI Definition". This topic will impact Debian in the future, and we need to reach an informed decision. I'd be glad to see more perspectives in the discussions, particularly on finding a sensible consensus, understanding how FTP Team members view their delegated role, and considering whether their delegation might need adjustments for clarity on this issue.

Kind regards, Andreas.

06 November, 2024 11:00PM by Andreas Tille

hackergotchi for Jaldhar Vyas

Jaldhar Vyas

Making America Great Again


Justice For Peanut

Some interesting takeaways (With the caveat that exit polls are not completely accurate and we won't have the full picture for days.)

  • President Trump seems to have won the popular vote, which, I believe, no Republican has done since Reagan.

  • Apparently women didn't particularly care about abortion (CNN said only 14% considered it their primary issue). There is a noticeable divide, but it is single versus married, not women versus men per se.

  • Hispanics who are here legally voted against Hispanics coming here illegally. Latinx's didn't vote for anything because they don't exist.

  • The infamous MSG rally joke had no effect on the voting habits of Puerto Ricans.

  • Republicans have taken the Senate and if trends continue as they are will retain control of the House of Representatives.

  • President Biden may have actually been a better candidate than Border Czar Harris.

06 November, 2024 07:11AM

November 05, 2024

Nazi.Compare

Linus Torvalds' self-deprecating LKML CoC mail linked to Hitler's first writing: Gemlich letter

The first piece of anti-semitic writing attributed to Adolf Hitler is the Gemlich letter.

After World War I, Hitler remained in the German army. He was posted to an intelligence role in Munich. Adolf Gemlich wrote a letter about the Jewish question. Hitler's superior, Karl Mayr, asked Hitler to write the response.

The Gemlich letter was written on 16 September 1919, while Hitler was still an army officer, well before he became Führer.

One of the key points in the letter states that there should be a Code of Conduct (CoC) for Jewish people:

legally fight and remove the privileges enjoyed by the Jews as opposed to other foreigners living among us

So there would be one set of laws for everybody else and a second set of laws, or a CoC, for the Jews.

The other key point in the Gemlich letter is "behavior":

there lives amongst us a non-German, alien race, unwilling and indeed unable to shed its racial characteristics, its particular feelings, thoughts and ambitions

On 16 September 2018, Linus Torvalds posted the email to the Linux Kernel Mailing List announcing that he would submit himself to the code of conduct and mind his behavior.

Linus tells us he is taking a break, in other words, some of his privileges are on hold for a while.

Could the date of the email be a secret hint from Linus that he doesn't approve of the phenomenon of CoC gaslighting?

We saw the same thing in Afghanistan. When the Taliban took back control of the country, women had to change their behavior and become better at listening to the demands from their masters.

From	Linus Torvalds 
Date	Sun, 16 Sep 2018 12:22:43 -0700
Subject	Linux 4.19-rc4 released, an apology, and a maintainership note
[ So this email got a lot longer than I initially thought it would
get,  but let's start out with the "regular Sunday release" part ]

Another week, another rc.

Nothing particularly odd stands out on the technical side in the
kernel updates for last week - rc4 looks fairly average in size for
this stage in the release cycle, and all the other statistics look
pretty normal too.

We've got roughly two thirds driver fixes (gpu and networking look to
be the bulk of it, but there's smaller changes all over in various
driver subsystems), with the rest being the usual mix: core
networking, perf tooling updates, arch updates, Documentation, some
filesystem, vm and minor core kernel fixes.

So it's all fairly small and normal for this stage.  As usual, I'm
appending the shortlog at the bottom for people who want to get an
overview of the details without actually having to go dig in the git
tree.

The one change that stands out and merits mention is the code of
conduct addition...

[ And here comes the other, much longer, part... ]

Which brings me to the *NOT* normal part of the last week: the
discussions (both in public mainly on the kernel summit discussion
lists and then a lot in various private communications) about
maintainership and the kernel community.  Some of that discussion came
about because of me screwing up my scheduling for the maintainer
summit where these things are supposed to be discussed.

And don't get me wrong.  It's not like that discussion itself is in
any way new to this week - we've been discussing maintainership and
community for years. We've had lots of discussions both in private and
on mailing lists.  We have regular talks at conferences - again, both
the "public speaking" kind and the "private hallway track" kind.

No, what was new last week is really my reaction to it, and me being
perhaps introspective (you be the judge).

There were two parts to that.

One was simply my own reaction to having screwed up my scheduling of
the maintainership summit: yes, I was somewhat embarrassed about
having screwed up my calendar, but honestly, I was mostly hopeful that
I wouldn't have to go to the kernel summit that I have gone to every
year for just about the last two decades.

Yes, we got it rescheduled, and no, my "maybe you can just do it
without me there" got overruled.  But that whole situation then
started a whole different kind of discussion.  And kind of
incidentally to that one, the second part was that I realized that I
had completely mis-read some of the people involved.

This is where the "look yourself in the mirror" moment comes in.

So here we are, me finally on the one hand realizing that it wasn't
actually funny or a good sign that I was hoping to just skip the
yearly kernel summit entirely, and on the other hand realizing that I
really had been ignoring some fairly deep-seated feelings in the
community.

It's one thing when you can ignore these issues.  Usually it’s just
something I didn't want to deal with.

This is my reality.  I am not an emotionally empathetic kind of person
and that probably doesn't come as a big surprise to anybody.  Least of
all me.  The fact that I then misread people and don't realize (for
years) how badly I've judged a situation and contributed to an
unprofessional environment is not good.

This week people in our community confronted me about my lifetime of
not understanding emotions.  My flippant attacks in emails have been
both unprofessional and uncalled for.  Especially at times when I made
it personal.  In my quest for a better patch, this made sense to me.
I know now this was not OK and I am truly sorry.

The above is basically a long-winded way to get to the somewhat
painful personal admission that hey, I need to change some of my
behavior, and I want to apologize to the people that my personal
behavior hurt and possibly drove away from kernel development
entirely.

I am going to take time off and get some assistance on how to
understand people’s emotions and respond appropriately.

Put another way: When asked at conferences, I occasionally talk about
how the pain-points in kernel development have generally not been
about the _technical_ issues, but about the inflection points where
development flow and behavior changed.

These pain points have been about managing the flow of patches, and
often been associated with big tooling changes - moving from making
releases with "patches and tar-balls" (and the _very_ painful
discussions about how "Linus doesn't scale" back 15+ years ago) to
using BitKeeper, and then to having to write git in order to get past
the point of that no longer working for us.

We haven't had that kind of pain-point in about a decade.  But this
week felt like that kind of pain point to me.

To tie this all back to the actual 4.19-rc4 release (no, really, this
_is_ related!) I actually think that 4.19 is looking fairly good,
things have gotten to the "calm" period of the release cycle, and I've
talked to Greg to ask him if he'd mind finishing up 4.19 for me, so
that I can take a break, and try to at least fix my own behavior.

This is not some kind of "I'm burnt out, I need to just go away"
break.  I'm not feeling like I don't want to continue maintaining
Linux. Quite the reverse.  I very much *do* want to continue to do
this project that I've been working on for almost three decades.

This is more like the time I got out of kernel development for a while
because I needed to write a little tool called "git".  I need to take
a break to get help on how to behave differently and fix some issues
in my tooling and workflow.

And yes, some of it might be "just" tooling.  Maybe I can get an email
filter in place so at when I send email with curse-words, they just
won't go out.  Because hey, I'm a big believer in tools, and at least
_some_ problems going forward might be improved with simple
automation.

I know when I really look “myself in the mirror” it will be clear it's
not the only change that has to happen, but hey...  You can send me
suggestions in email.

I look forward to seeing you at the Maintainer Summit.

                Linus

05 November, 2024 05:00PM

November 04, 2024

Sven Hoexter

Google CloudDNS HTTPS Records with ipv6hint

I naively provisioned an HTTPS record at Google CloudDNS like this via terraform:

resource "google_dns_record_set" "testv6" {
    name         = "testv6.some-domain.example."
    managed_zone = "some-domain-example"
    type         = "HTTPS"
    ttl          = 3600
    rrdatas      = ["1 . alpn=\"h2\" ipv4hint=\"198.51.100.1\" ipv6hint=\"2001:DB8::1\""]
}

This results in a permanent diff because the Google CloudDNS API seems to parse the record content and stores the ipv6hint expanded (removing the :: notation) and in all lowercase as 2001:db8:0:0:0:0:0:1. Thus, to fix the permanent diff, we have to use it like this:

resource "google_dns_record_set" "testv6" {
    name = "testv6.some-domain.example."
    managed_zone = "some-domain-example"
    type = "HTTPS"
    ttl = 3600
    rrdatas = ["1 . alpn=\"h2\" ipv4hint=\"198.51.100.1\" ipv6hint=\"2001:db8:0:0:0:0:0:1\""]
}
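As a sanity check, the normalized form that CloudDNS appears to store can be reproduced with Python's `ipaddress` module. This is an illustrative sketch of the observed behaviour (the helper name is mine; Google does not document this normalization):

```python
import ipaddress

def clouddns_ipv6hint(addr: str) -> str:
    """Mimic how the CloudDNS API appears to store ipv6hint values:
    fully expanded (no ::), lowercase, leading zeros stripped per group."""
    exploded = ipaddress.IPv6Address(addr).exploded  # e.g. '2001:0db8:0000:...'
    return ":".join(f"{int(group, 16):x}" for group in exploded.split(":"))

print(clouddns_ipv6hint("2001:DB8::1"))  # 2001:db8:0:0:0:0:0:1
```

Running the address through such a helper before writing it into the terraform resource keeps the value identical to what the API echoes back, which is what makes the diff go away.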

Guess I should be glad that they already support HTTPS records natively, and not bicker too much about the implementation details.

04 November, 2024 01:13PM

November 03, 2024

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Ultimate rules as a service

Since WFDF changed their ultimate rules web site to be less-than-ideal (in the name of putting everything into Wordpress…), I made my own, at urules.org. It was a fun journey; I've never fiddled with PWAs before, and I was a bit surprised how low-level it all was. I assumed that since my page is just a bunch of HTML files and ~100 lines of JS, I could just bundle that up—but no, that is something they expect a framework to do for you.

The only primitive you get is seemingly that you can fire up your own background service worker (JS running in its own, locked-down context) and that gets to peek at every HTTP request done and possibly intercept it. So you can use a Web Cache (seemingly a separate concept from web local storage?), insert stuff into that, and then query it to intercept requests. It doesn't feel very elegant, perhaps?

It is a bit neat that I can use this to make my own bundling, though. All the pages and images (painfully converted to SVG to save space and re-flow for mobile screens, mostly by simply drawing over bitmaps by hand in Inkscape) are stuck into a JSON dictionary, compressed using the slowest compressor I could find and then downloaded as a single 159 kB bundle. It makes the site actually sort of weird to navigate; since it pretty quickly downloads the bundle in the background, everything goes offline and the speed of loading new pages just feels… off somehow. As if it's not a Serious Web Page if there's no load time.

Of course, this also means that I couldn't cache PNGs, because have you ever tried to have non-UTF-8 data in a JSON sent through N layers of JavaScript? :-)
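One way around the non-UTF-8 problem would be to base64-encode binary assets before putting them in the JSON bundle. Here is a minimal sketch of that idea in Python (the bundle layout is invented for illustration; it is not what urules.org actually does):

```python
import base64
import json

def build_bundle(assets: dict[str, bytes]) -> str:
    """Pack assets into one JSON string; text goes in verbatim, binary data
    (like PNGs) is base64-encoded so the JSON stays valid UTF-8."""
    bundle = {}
    for path, data in assets.items():
        try:
            bundle[path] = {"enc": "text", "data": data.decode("utf-8")}
        except UnicodeDecodeError:
            bundle[path] = {"enc": "base64",
                            "data": base64.b64encode(data).decode("ascii")}
    return json.dumps(bundle)

def read_asset(bundle_json: str, path: str) -> bytes:
    """Recover the original bytes for one asset from the bundle."""
    entry = json.loads(bundle_json)[path]
    if entry["enc"] == "base64":
        return base64.b64decode(entry["data"])
    return entry["data"].encode("utf-8")

b = build_bundle({"index.html": b"<html></html>", "logo.png": b"\x89PNG\r\n"})
print(read_asset(b, "logo.png"))  # b'\x89PNG\r\n'
```

The cost is roughly a third more bytes for each binary entry before compression, which is presumably part of why hand-drawn SVGs were the more attractive option.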

03 November, 2024 10:48AM

hackergotchi for Guido Günther

Guido Günther

Free Software Activities October 2024

Another short status update of what happened on my side last month. Besides a phosh bugfix release, improving text input and selection was a prevalent pattern again, resulting in improvements in the compositor, the OSK and some apps.

phosh

  • Install gir (MR). Needed for e.g. Debian to properly package the Rust bindings.
  • Try harder to find an app icon when showing notifications (MR)
  • Add a simple Pomodoro timer plugin (MR)
  • Small screenshot manager fixes (MR)
  • Tweak portals configuration (MR)
  • Consistent focus style on lock screen and settings (MR). Improves the visual appearance as the dotted focus frame doesn't match our otherwise colored focus frames
  • Don't focus buttons in settings (MR). Improves the visual appearance as attention isn't drawn to the button focus.
  • Close Phosh's settings when activating a Settings panel (MR)

phoc

  • Improve cursor and cursor theme handling, hide mouse pointer by default (MR)
  • Don't submit empty preedit (MR)
  • Fix flickering selection bubbles in GTK4's text input fields (MR)
  • Backport two more fixes and release 0.41.1 (MR)

phosh-mobile-settings

  • Allow to select default text completer (MR, MR)
  • Don't crash when we fail to load a pref plugin (MR)

libphosh-rs

  • Update with current gir and allow to use status pages (MR)
  • Expose screenshot manager and build without warnings (MR). (Improved further by a follow up MR from Sam)
  • Fix clippy warnings and add clippy to CI (MR)

phosh-osk-stub

  • presage: Always set predictors (MR). Avoids surprises with unwanted predictors.
  • Install completer information (MR)
  • Handle overlapping touch events (MR). This should improve fast typing.
  • Allow plain ctrl and alt in the shortcuts bar (MR)
  • Use Adwaita background color to make the OSK look more integrated (MR)
  • Use StyleManager to support accent colors (MR)
  • Fix emoji section selection in RTL locales (MR)
  • Don't submit empty preedit (MR). Helps to better preserve text selections.

phosh-osk-data

  • Add scripts to build word corpus from Wikipedia data (MR). See here for the data.

xdg-desktop-portal-phosh

  • Release 0.42~rc1 (MR)
  • Fix HighContrast (MR)

Debian

  • Collect some of the QCom workarounds in a package (MR). This is not meant to go into Debian proper but it's nicer than doing all the mods by hand and forgetting which files were modified.
  • q6voiced: Fix service configuration (MR)
  • chatty: Enable clock test again (MR), and then unbreak translations (MR)
  • phosh: Ship gir for libphosh-rs (MR)
  • phoc: Backport input method related fix (MR)
  • Upload initial package of phosh-osk-data: Status in NEW
  • Upload initial package of xdg-desktop-portal-phosh: Status in NEW
  • Backport phosh-osk-stub abbrev fix (MR)
  • phoc: Update to 0.42.1 (MR)
  • mobile-tweaks: Enable zram on Librem 5 and PP (MR)

ModemManager

  • Some further work on the Cell Broadcast support to address comments (MR)

Calls

  • Further improve daemon mode (MR) (mentioned last month already but got even simpler)

GTK

  • Handle Gtk{H,V}Separator when migrating UI files to GTK4 (MR)

feedbackd

  • Modernize README a bit (MR)

Chatty

  • Use special event for SMS (MR)
  • Another QoL fix when using OSK (MR)
  • Fix printing time diffs on 32bit architectures (MR)

libcmatrix

  • Use endpoints for authenticated media (MR). Needed to support v1.11 servers.

phosh-ev

  • Switch to GNOME 47 runtime (MR)

git-buildpackage

  • Don't use deprecated pkg-resources (MR)

Unified push specification

  • Expand on DBus activation a bit (MR)

swipeGuess

  • Small build improvement and mention phosh-osk-stub (Commit)

wlr-clients

  • Fix -o option and add help output (MR)

iotas (Note taking app)

  • Don't take focus with header bar buttons (MR). Makes typing faster (as the OSK won't hide) and thus using the header bar easier

Flare (Signal app)

  • Don't take focus when sending messages, adding emojis or attachments (MR). Makes typing faster (as the OSK won't hide) and thus using those buttons easier

xdg-desktop-portal

  • Use categories that work for both xdg-spec and the portal (MR)

Reviews

This is not code by me but reviews on other peoples code. The list is fairly incomplete, hope to improve on this in the upcoming months:

  • phosh-tour: add first login mode (MR)
  • phosh: Animate swipe closing notifications (MR)
  • iio-sensor-proxy: Report correct value on claim (MR)
  • iio-sensor-proxy: face-{up,down} (MR)
  • phosh-mobile-settings: Squeekboad scaling (MR)
  • libcmatrix: Misc cleanups/fixes (MR)
  • phosh: Notification separator improvements (MR)
  • phosh: Accent colors (MR)

Help Development

If you want to support my work see donations. This includes a list of hardware we want to improve support for. Thanks a lot to all current and past donors.

03 November, 2024 10:17AM

hackergotchi for Junichi Uekawa

Junichi Uekawa

Doing more swimming in everyday life for the past few months.

Doing more swimming in everyday life for the past few months. Seems like I am keeping that up.

03 November, 2024 09:24AM by Junichi Uekawa

November 02, 2024

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

Rcpp 1.0.13-1 on CRAN: Hot Fix

rcpp logo

A hot-fix release 1.0.13-1, consisting of two small PRs relative to the last regular CRAN release 1.0.13, just arrived on CRAN. When we prepared 1.0.13, we included a change related to the ‘tightening’ of the C API of R itself. Sadly, we pinned an expected change to ‘comes with next (minor) release 4.4.2’ rather than now ‘next (normal aka major) release 4.5.0’. And now that R 4.4.2 is out (as of two days ago) we accidentally broke building against the header file with that check. Whoops. Bugs happen, and we are truly sorry—but this is now addressed in 1.0.13-1.

The normal (bi-annual) release cycle will resume with 1.0.14 slated for January. As you can see from the NEWS file of the development branch, we have a number of changes coming. You can safely access that release candidate version, either off the default branch at github or via r-universe artifacts.

The list below details all changes, as usual. The only other change concerns the now-mandatory use of Authors@R.

Changes in Rcpp release version 1.0.13-1 (2024-11-01)

  • Changes in Rcpp API:

    • Use read-only VECTOR_PTR and STRING_PTR only with R 4.5.0 or later (Kevin in #1342 fixing #1341)
  • Changes in Rcpp Deployment:

    • Authors@R is now used in DESCRIPTION as mandated by CRAN
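For context, R encodes version numbers with the `R_Version(v,p,s)` macro from Rversion.h, roughly `v*65536 + p*256 + s`, and headers gate API use by comparing `R_VERSION` against it. A small Python sketch of why a guard pinned to 4.4.2 misfires once R 4.4.2 actually ships (illustrative only, not the actual Rcpp header logic):

```python
def r_version(v: int, p: int, s: int) -> int:
    """Roughly mirrors R's R_Version(v,p,s) macro: v*65536 + p*256 + s."""
    return v * 65536 + p * 256 + s

building_against = r_version(4, 4, 2)  # headers of the just-released R 4.4.2

# The 1.0.13 guard expected the C API change to land in 4.4.2, so it fires:
print(building_against >= r_version(4, 4, 2))  # True: check wrongly enabled
# The 1.0.13-1 hot fix pins the guard to 4.5.0 instead:
print(building_against >= r_version(4, 5, 0))  # False: old accessors kept
```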

Thanks to my CRANberries, you can also look at a diff to the previous release. Questions, comments, etc. should go to the rcpp-devel mailing list off the R-Forge page. Bug reports are welcome at the GitHub issue tracker as well (where one can also search among open or closed issues).

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

02 November, 2024 09:13PM

Russell Coker

More About the Yoga Gen3

Two months ago I bought a Thinkpad X1 Yoga Gen3 [1]. I’m still very happy with it, the screen is a great improvement over the FullHD screen on my previous Thinkpad. I have yet to discover what’s the best resolution to have on a laptop if price isn’t an issue, but it’s at least 1440p for a 14″ display, which is 210DPI. The latest Thinkpad X1 Yoga is the 7th gen and has up to 3840*2400 resolution on the internal display for 323DPI. Apple apparently uses the term “Retina Display” to mean something in the range of 250DPI to 300DPI, so my current laptop is below “Retina” while the most expensive new Thinkpads are above it.

I did some tests on external displays and found that this Thinkpad along with a Dell Latitude of the same form factor and about the same age can only handle one 4K display on a Thunderbolt dock and one on HDMI. On Reddit u/Carlioso1234 pointed out this specs page which says it supports a maximum of 3 displays including the built in TFT [2]. The Thunderbolt/USB-C connection has a maximum resolution of 5120*2880 and the HDMI port has a maximum of 4K. The latest Yoga can support four displays total which means 2*5K over Thunderbolt and one 4K over HDMI. It would be nice if someone made a 8000*2880 ultrawide display that looked like 2*5K displays when connected via Thunderbolt. It would also be nice if someone made a 32″ 5K display, currently they all seem to be 27″ and I’ve found that even for 4K resolution 32″ is better than 27″.

With the typical configuration of Linux and the BIOS, the Yoga Gen3 will have its touch screen stop working after suspend. I have confirmed this for stylus use; as the finger-touch functionality is broken on my system, I couldn’t confirm it for finger input. On r/thinkpad u/p9k told me how to fix this problem [3]. I had to set the BIOS to Win 10 Sleep aka Hybrid sleep and then put the following in /etc/systemd/system/thinkpad-wakeup-config.service :

# https://www.reddit.com/r/thinkpad/comments/1blpy20/comment/kw7se2l/?context=3

[Unit]
Description=Workarounds for sleep wakeup source for Thinkpad X1 Yoga 3
After=sysinit.target
After=systemd-modules-load.service

[Service]
Type=oneshot
ExecStart=/bin/sh -c "echo 'enabled' > /sys/devices/platform/i8042/serio0/power/wakeup"
ExecStart=/bin/sh -c "echo 'enabled' > /sys/devices/platform/i8042/serio1/power/wakeup"
ExecStart=/bin/sh -c "echo 'LID' > /proc/acpi/wakeup"

[Install]
WantedBy=multi-user.target
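After creating the unit file, it needs to be enabled so the workaround runs at boot; presumably the usual systemd workflow applies (the unit name matches the file above):

```shell
# Pick up the new unit file and enable the oneshot service:
sudo systemctl daemon-reload
sudo systemctl enable --now thinkpad-wakeup-config.service
```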

Now it works fine, for stylus at least. I still get kernel error messages like the following which don’t seem to cause problems:

wacom 0003:056A:5146.0005: wacom_idleprox_timeout: tool appears to be hung in-prox. forcing it out.

When it wasn’t working I got the above but also kernel error messages like:

wacom 0003:056A:5146.0005: wacom_wac_queue_insert: kfifo has filled, starting to drop events

This change affected the way suspend etc operate. Now when I connect the laptop to power it will leave suspend mode. I’ve configured KDE to suspend when the lid is closed and there’s no monitor connected.

02 November, 2024 08:05AM by etbe

Moving Between Devices

I previously wrote about the possibility of transferring work between devices as an alternative to “convergence” (using a phone or tablet as a desktop) [1]. This idea has been implemented in some commercial products already.

MrWhosTheBoss made a good YouTube video reviewing recent Huawei products [2]. At 2:50 in that video he shows how you can link a phone and tablet, control one from the other, drag and drop of running apps and files between phone and tablet, mirror the screen between devices, etc. He describes playing a video on one device and having it appear on the other, I hope that it actually launches a new instance of the player app as the Google Chromecast failed in the market due to remote display being laggy. At 7:30 in that video he starts talking about the features that are available when you have multiple Huawei devices, starting with the ability to move a Bluetooth pairing for earphones to a different device.

At 16:25 he shows what Huawei is doing to get apps going including allowing apk files to be downloaded and creating what they call “Quick Apps” which are instances of a web browser configured to just use one web site and make it look like a discrete app, we need something like this for FOSS phone distributions – does anyone know of a browser that’s good for it?

Another thing that we need is an easy way of transferring open web pages between systems. Chrome allows sending pages between systems, but it’s proprietary, limited to Chrome only, and also takes an unreasonable amount of time. KDEConnect allows sharing clipboard contents, which can be used to send URLs that can then be pasted into a browser, but the process of copying a URL, sending it via KDEConnect, and pasting it into the other device is unreasonably slow. The design of Chrome with a “Send to your devices” menu option from the tab bar is OK, but ideally we need a “Send to device” option for all tabs of a window as well, and it needs to run on free software and support using your own server rather than someone else’s server (AKA “the cloud”). Some of the KDEConnect functionality, but using a server rather than a direct connection over the same Wifi network (or LAN if bridged to Wifi), would be good.

What else do we need?

02 November, 2024 08:03AM by etbe

What is a Workstation?

I recently had someone describe a Mac Mini as a “workstation”, which I strongly disagree with. The Wikipedia page for Workstation [1] says that it’s a type of computer designed for scientific or technical use, for a single user, and would commonly run a multi-user OS.

The Mac Mini runs a multi-user OS and is designed for a single user. The issue is whether it is for “scientific or technical use”. A Mac Mini is a nice little graphical system which could be used for CAD and other engineering work. But I believe that the low capabilities of the system and lack of expansion options make it less of a workstation.

The latest versions of the Mac Mini (to be officially launched next week) have up to 64G of RAM and up to 8T of storage. That is quite decent compute power for a small device. For comparison the HP ML 110 Gen9 workstation I’m currently using was released in 2021 and has 256G of RAM and has 4 * 3.5″ SAS bays so I could easily put a few 4TB NVMe devices and some hard drives larger than 10TB. The HP Z640 workstation I have was released in 2014 and has 128G of RAM and 4*2.5″ SATA drive bays and 2*3.5″ SATA drive bays. Previously I had a Dell PowerEdge T320 which was released in 2012 and had 96G of RAM and 8*3.5″ SAS bays.

In CPU and GPU power the recent Mac Minis will compare well to my latest workstations. But they compare poorly to workstations from as much as 12 years ago for RAM and storage. Which is more important depends on the task, if you have to do calculations on 80G of data with lots of scans through the entire data set then a system with 64G of RAM will perform very poorly and a system with 96G and a CPU less than half as fast will perform better. A Dell PowerEdge T320 from 2012 fully loaded with 192G of RAM will outperform a modern Mac Mini on many tasks due to this and the T420 supported up to 384G.

Another issue is generic expansion options. I expect a workstation to have a number of PCIe slots free for GPUs and other devices. The T320 I used to use had a PCIe power cable for a power hungry GPU and I think all the T320 and T420 models with high power PSUs supported that.

I think that a usable definition of a “workstation” is a system having a feature set that is typical of servers (ECC RAM, lots of storage for RAID, maybe hot-swap storage devices, maybe redundant PSUs, and lots of expansion options) while also being suitable for running on a desktop or under a desk. The Mac Mini is nice for running on a desk but that’s the only workstation criteria it fits. I think that ECC RAM should be a mandatory criteria and any system without it isn’t a workstation. That excludes most Apple hardware. The Mac Mini is more of a thin-client than a workstation.

My main workstation with ECC RAM could run 3 VMs that each have more RAM than the largest Mac Mini that will be sold next week.

If 32G of non-ECC RAM is considered enough for a “workstation” then you could get an Android phone that counts as a workstation – and it will probably cost less than a Mac Mini.

02 November, 2024 05:03AM by etbe

November 01, 2024

hackergotchi for Colin Watson

Colin Watson

Free software activity in October 2024

Almost all of my Debian contributions this month were sponsored by Freexian.

You can also support my work directly via Liberapay.

Ansible

I noticed that Ansible had fallen out of Debian testing due to autopkgtest failures. This seemed like a problem worth fixing: in common with many other people, we use Ansible for configuration management at Freexian, and it probably wouldn’t make our sysadmins too happy if they upgraded to trixie after its release and found that Ansible was gone.

The problems here were really just slogging through test failures in both the ansible-core and ansible packages, but their test suites are large and take a while to run so this took some time. I was able to contribute a few small fixes to various upstreams in the process:

This should now get back into testing tomorrow.

OpenSSH

Martin-Éric Racine reported that ssh-audit didn’t list the ext-info-s feature as being available in Debian’s OpenSSH 9.2 packaging in bookworm, contrary to what OpenSSH upstream said on their specifications page at the time. I spent some time looking into this and realized that upstream was mistakenly saying that implementations of ext-info-c and ext-info-s were added at the same time, while in fact ext-info-s was added rather later. ssh-audit now has clearer output, and the OpenSSH maintainers have corrected their specifications page.

I looked into a report of an ssh failure in certain cases when using GSS-API key exchange (which is a Debian patch). Once again, having integration tests was a huge win here: the affected scenario is quite a fiddly one, but I was able to set it up in the test, and thereby make sure it doesn’t regress in future. It still took me a couple of hours to get all the details right, but in the past this sort of thing took me much longer with a much lower degree of confidence that the fix was correct.

On upstream’s advice, I cherry-picked some key exchange fixes needed for big-endian architectures.

Python team

I packaged python-evalidate, needed for a new upstream version of buildbot.

The Python 3.13 transition rolls on. I fixed problems related to it in htmlmin, humanfriendly, postgresfixture (contributed upstream), pylint, python-asyncssh (contributed upstream), python-oauthlib, python3-simpletal, quodlibet, zope.exceptions, and zope.interface.

A trickier Python 3.13 issue involved the cgi module. Years ago I ported zope.publisher to the multipart module because cgi.FieldStorage was broken in some situations, and as a result I got a recommendation into Python’s “dead batteries” PEP 594. Unfortunately there turns out to be a name conflict between multipart and python-multipart on PyPI; python-multipart upstream has been working to disentangle this, though we still need to work out what to do in Debian. All the same, I needed to fix python-wadllib and multipart seemed like the best fit; I contributed a port upstream and temporarily copied multipart into Debian’s python-wadllib source package to allow its tests to pass. I’ll come back and fix this properly once we sort out the multipart vs. python-multipart packaging.

tzdata moved some timezone definitions to tzdata-legacy, which has broken a number of packages. I added tzdata-legacy build-dependencies to alembic and python-icalendar to deal with this in those packages, though there are still some other instances of this left.

I tracked down an nltk regression that caused build failures in many other packages.

I fixed Rust crate versioning issues in pydantic-core, python-bcrypt, and python-maturin (mostly fixed by Peter Michael Green and Jelmer Vernooij, but it needed a little extra work).

I fixed other build failures in entrypoints, mayavi2, python-pyvmomi (mostly fixed by Alexandre Detiste, but it needed a little extra work), and python-testing.postgresql (ditto).

I fixed python3-simpletal to tolerate future versions of dh-python that will drop their dependency on python3-setuptools.

I fixed broken symlinks in python-treq.

I removed (build-)depends on python3-pkg-resources from alembic, autopep8, buildbot, celery, flufl.enum, flufl.lock, python-public, python-wadllib (contributed upstream), pyvisa, routes, vulture, and zodbpickle (contributed upstream).

I upgraded astroid, asyncpg (fixing a Python 3.13 failure and a build failure), buildbot (noticing an upstream test bug in the process), dnsdiag, frozenlist, netmiko (fixing a Python 3.13 failure), psycopg3, pydantic-settings, pylint, python-asyncssh, python-bleach, python-btrees, python-cytoolz, python-django-pgtrigger, python-django-test-migrations, python-gssapi, python-icalendar, python-json-log-formatter, python-pgbouncer, python-pkginfo, python-plumbum, python-stdlib-list, python-tokenize-rt, python-treq (fixing a Python 3.13 failure), python-typeguard, python-webargs (fixing a build failure), pyupgrade, pyvisa, pyvisa-py (fixing a Python 3.13 failure), toolz, twisted, vulture, waitress (fixing CVE-2024-49768 and CVE-2024-49769), wtf-peewee, wtforms, zodbpickle, zope.exceptions, zope.interface, zope.proxy, zope.security, and zope.testrunner to new upstream versions.

I tried to fix a regression in python-scruffy, but I need testing feedback.

I requested removal of python-testing.mysqld.

01 November, 2024 12:19PM by Colin Watson

Russ Allbery

Review: Overdue and Returns

Review: Overdue and Returns, by Mark Lawrence

Publisher: Mark Lawrence
Copyright: June 2023
Copyright: February 2024
ASIN: B0C9N51M6Y
ASIN: B0CTYNQGBX
Format: Kindle
Pages: 99

Overdue is a stand-alone novelette in the Library Trilogy universe. Returns is a collection of two stories, the novelette "Returns" and the short story "About Pain." All of them together are about the length of a novella, so I'm combining them into a single review.

These are ancillary stories in the same universe as the novels, but not necessarily in the same timeline. (Trying to fit "About Pain" into the novel timeline will give you a headache and I am choosing to read it as author's fan fiction.) I'm guessing they're part of the new fad for releasing short fiction on Amazon to tide readers over and maintain interest between books in a series, a fad about which I have mixed feelings. Given the total lack of publisher metadata in either the stories or on Amazon, I'm assuming they were self-published even though the novels are published by Ace, but I don't know that for certain.

There are spoilers for The Book That Wouldn't Burn, so don't read these before that novel. There are no spoilers for The Book That Broke the World, and I don't think the reading order would matter.

I found all three of these stories irritating and thuddingly trite. "Returns" is probably the best of the lot in terms of quality of storytelling, but I intensely dislike the structural implications of the nature of the book at its center and am therefore hoping that it's non-canonical.

I would not waste your time with these even if you are enjoying the novels.

"Overdue": Three owners of the same bookstore at different points in time have encounters with an albino man named Yute who is on a quest. One of the owners is trying to write a book, one of them is older, depressed, and closed off, and one of them has regular conversations with her sister's ghost. The nature of the relationship between the three is too much of a spoiler, but it involves similar shenanigans as The Book That Wouldn't Burn.

Lawrence uses my least favorite resolution of benign ghost stories. The story tries very hard to sell it as a good thing, but I thought it was cruel and prefer fantasy that rejects both branches of that dilemma. Other than that, it was fine, I guess, although the moral was delivered with all of the subtlety of the last two minutes of a Saturday morning cartoon. (5)

"Returns": Livira returns a book deep inside the library and finds that she can decipher it, which leads her to a story about Yute going on a trip to recover another library book. This had a lot of great Yute lines, plus I always like seeing Livira in exploration mode. The book itself is paradoxical in a causality-destroying way, which is handwaved away as literal magic. I liked this one the best of the three stories, but I hope the world-building of the main series does not go in this direction and I'm a little afraid it might. (6)

"About Pain": A man named Holden runs into a woman named Clovis at the gym while carrying a book titled Catcher that his dog found and that he's returning to the library. I thoroughly enjoy Clovis and was happy to read a few more scenes about her. Other than that, this was fine, I guess, although it is a story designed to deliver a point and that point is one that appears in every discussion of classics and re-reading that has ever happened on the Internet. Also, I know I'm being grumpy, but Lawrence's puns with authors and character names are chapter-epigraph amusing but not short-story-length funny. Yes, yes, his name is Holden, we get it. (5)

Rating: 5 out of 10

01 November, 2024 04:11AM

Paul Wise

FLOSS Activities October 2024

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Sponsors

All work was done on a volunteer basis.

01 November, 2024 01:10AM

October 31, 2024

hackergotchi for Gunnar Wolf

Gunnar Wolf

Do you have a minute..?

Do you have a minute...?

…to talk about the so-called “Intellectual Property”?

31 October, 2024 10:07PM

October 30, 2024

Russell Coker

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

gcbd 0.2.7 on CRAN: More Mere Maintenance

Another pure maintenance release 0.2.7 of the gcbd package is now on CRAN. The gcbd package proposes a benchmarking framework for LAPACK and BLAS operations (as the library can be exchanged in a plug-and-play sense on suitable OSs) and records results in a local database. Its original motivation was to also compare to GPU-based operations. However, it is challenging to keep CUDA working, and packages on CRAN providing the basic functionality appear to come and go, so testing the GPU feature can be difficult. The main point of gcbd is now to actually demonstrate that ‘yes indeed’ we can just swap BLAS/LAPACK libraries without any change to R, or R packages. The ‘configure / rebuild R for xyz’ often seen with ‘xyz’ being Goto or MKL is simply plain wrong: you really can just swap them (on proper operating systems, and R configs – see the package vignette for more). But no matter how often we aim to correct this record, it invariably raises its head another time.

This release accommodates a CRAN change request as we were referencing the (now only suggested) package gputools. As hinted in the previous paragraph, it was once on CRAN but is not right now so we adjusted our reference.

CRANberries also provides a diffstat report for the latest release.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

30 October, 2024 01:10AM

October 29, 2024

Sven Hoexter

GKE version 1.31.1-gke.1678000+ is a baddy

Just a "warn your brothers" for people foolish enough to use GKE and run on the Rapid release channel.

Update from version 1.31.1-gke.1146000 to 1.31.1-gke.1678000 is causing trouble whenever NetworkPolicy resources and a readinessProbe (or health check) are configured. As a workaround we started to remove the NetworkPolicy resources. E.g. when kustomize is involved with a patch like this:

- patch: |-
    $patch: delete
    apiVersion: "networking.k8s.io/v1"
    kind: NetworkPolicy
    metadata:
        name: dummy
  target:
    kind: NetworkPolicy

We tried to update to the latest version - right now 1.31.1-gke.2008000 - which did not change anything. Behaviour is pretty much erratic, sometimes it still works and sometimes the traffic is denied. It also seems that there is some relevant fix in 1.31.1-gke.1678000 because that is now the oldest release of 1.31.1 which I can find in the regular and rapid release channels. The last known good version 1.31.1-gke.1146000 is not available to try a downgrade.

29 October, 2024 11:28AM

October 28, 2024

hackergotchi for Thomas Lange

Thomas Lange

30.000 FAIme jobs created in 7 years

The number of FAIme jobs has reached 30.000. Yeah!
At the end of this November the FAIme web service for building customized ISOs turns 7 years old. It had reached 10.000 jobs in March 2021 and 20.000 jobs in June 2023. A nice increase in usage.

Here are some statistics for the jobs processed in 2024:

Type of jobs

3%     cloud image
11%     live ISO
86%     install ISO

Distribution

2%     bullseye
8%     trixie
12%     ubuntu 24.04
78%     bookworm

Misc

  • 18%   used a custom postinst script
  • 11%   provided their ssh pub key for passwordless root login
  • 50%   of the jobs didn't include a desktop environment at all; the others mostly used GNOME, XFCE, KDE or the Ubuntu desktop.
  • The biggest ISO was a FAIme job which created a live ISO with a desktop and some additional packages. This job took 30 min to finish and the resulting ISO was 18G in size.

Execution Times

The cloud and live ISOs need more time for their creation because the FAIme server needs to unpack and install all packages. For the install ISO the packages are only downloaded. The amount of software packages also affects the build time. Every ISO is built in a VM on an old 6-core E5-1650 v2. Times given are calculated from the jobs of the past two weeks.

Job type     Avg     Max
install no desktop     1 min     2 min
install GNOME     2 min     5 min

The times for Ubuntu without and with desktop are one minute higher than those mentioned above.

Job type     Avg     Max
live no desktop     4 min     6 min
live GNOME     8 min     11 min

The times for cloud images are similar to live images.

A New Feature

For a few weeks now, the system has been showing the number of jobs ahead of you in the queue when you submit a job that cannot be processed immediately.

The Next Milestone

At the end of this year the FAI project will be 25 years old. If you have a success story of your FAI usage to share, please post it to the linux-fai mailing list or send it to me. Do you know the FAI questionnaire? A lot of reports are already available.

Here's an overview of what happened in the past 20 years in the FAI project.

About FAIme

FAIme is the service for building your own customized ISO via a web interface. You can create an installation or live ISO or a cloud image. Several Debian releases can be selected and also Ubuntu server or Ubuntu desktop installation ISOs can be customized. Multiple options are available like selecting a desktop and the language, adding your own package list, choosing a partition layout, adding a user, choosing a backports kernel, adding a postinst script and some more.

28 October, 2024 11:57AM

October 27, 2024

Enrico Zini

Typing decorators for class members with optional arguments

This looks straightforward and is far from it. I expect tool support will improve in the future. Meanwhile, this blog post serves as a step by step explanation for what is going on in code that I'm about to push to my team.

Let's take this relatively straightforward python code. It has a function printing an int, and a decorator that makes it argument optional, taking it from a global default if missing:

from unittest import mock

default = 42


def with_default(f):
    def wrapped(self, value=None):
        if value is None:
            value = default
        return f(self, value)

    return wrapped


class Fiddle:
    @with_default
    def print(self, value):
        print("Answer:", value)


fiddle = Fiddle()
fiddle.print(12)
fiddle.print()


def mocked(self, value=None):
    print("Mocked answer:", value)


with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    fiddle.print()

It works nicely as expected:

$ python3 test0.py
Answer: 12
Answer: 42
Mocked answer: 12
Mocked answer: None

It lacks functools.wraps and typing, though. Let's add them.

Adding functools.wraps

After adding a simple @functools.wraps, mock unexpectedly stops working:

# python3 test1.py
Answer: 12
Answer: 42
Mocked answer: 12
Traceback (most recent call last):
  File "/home/enrico/lavori/freexian/tt/test1.py", line 42, in <module>
    fiddle.print()
  File "<string>", line 2, in print
  File "/usr/lib/python3.11/unittest/mock.py", line 186, in checksig
    sig.bind(*args, **kwargs)
  File "/usr/lib/python3.11/inspect.py", line 3211, in bind
    return self._bind(args, kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib/python3.11/inspect.py", line 3126, in _bind
    raise TypeError(msg) from None
TypeError: missing a required argument: 'value'

This is the new code, with explanations and a fix:

# Introduce functools
import functools
from unittest import mock

default = 42


def with_default(f):
    @functools.wraps(f)
    def wrapped(self, value=None):
        if value is None:
            value = default
        return f(self, value)

    # Fix:
    # del wrapped.__wrapped__

    return wrapped


class Fiddle:
    @with_default
    def print(self, value):
        assert value is not None
        print("Answer:", value)


fiddle = Fiddle()
fiddle.print(12)
fiddle.print()


def mocked(self, value=None):
    print("Mocked answer:", value)


with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    # mock's autospec uses inspect.getsignature, which follows __wrapped__ set
    # by functools.wraps, which points to a wrong signature: the idea that
    # value is optional is now lost
    fiddle.print()
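The interaction can be shown in isolation: inspect.signature() follows the __wrapped__ attribute that functools.wraps sets, so autospec sees the original, non-optional signature; deleting __wrapped__, as in the commented-out fix above, restores the wrapper's signature. A minimal sketch of what the fix changes:

```python
import functools
import inspect

default = 42


def with_default(f):
    @functools.wraps(f)
    def wrapped(self, value=None):
        if value is None:
            value = default
        return f(self, value)

    return wrapped


class Fiddle:
    @with_default
    def print(self, value):
        print("Answer:", value)


# signature() follows __wrapped__ and reports the original, non-optional
# signature; this is exactly what mock's autospec then enforces:
print(inspect.signature(Fiddle.print))  # (self, value)

# Deleting __wrapped__ makes signature() describe the wrapper instead:
del Fiddle.print.__wrapped__
print(inspect.signature(Fiddle.print))  # (self, value=None)
```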

Adding typing

For simplicity, from now on let's change Fiddle.print to match its wrapped signature:

      # Give up with making value not optional, to simplify things :(
      def print(self, value: int | None = None) -> None:
          assert value is not None
          print("Answer:", value)

Typing with ParamSpec

# Introduce typing, try with ParamSpec
import functools
from typing import TYPE_CHECKING, ParamSpec, Callable
from unittest import mock

default = 42

P = ParamSpec("P")


def with_default(f: Callable[P, None]) -> Callable[P, None]:
    # Using ParamSpec we forward arguments, but we cannot use them!
    @functools.wraps(f)
    def wrapped(self, value: int | None = None) -> None:
        if value is None:
            value = default
        return f(self, value)

    return wrapped


class Fiddle:
    @with_default
    def print(self, value: int | None = None) -> None:
        assert value is not None
        print("Answer:", value)

mypy complains inside the wrapper, because while we forward arguments we don't constrain them, so we can't be sure there is a value in there:

test2.py:17: error: Argument 2 has incompatible type "int"; expected "P.args"  [arg-type]
test2.py:19: error: Incompatible return value type (got "_Wrapped[P, None, [Any, int | None], None]", expected "Callable[P, None]")  [return-value]
test2.py:19: note: "_Wrapped[P, None, [Any, int | None], None].__call__" has type "Callable[[Arg(Any, 'self'), DefaultArg(int | None, 'value')], None]"

Typing with Callable

We can use explicit Callable argument lists:

# Introduce typing, try with Callable
import functools
from typing import TYPE_CHECKING, Callable, TypeVar
from unittest import mock

default = 42

A = TypeVar("A")


# Callable cannot represent the fact that the argument is optional, so now mypy
# complains if we try to omit it
def with_default(f: Callable[[A, int | None], None]) -> Callable[[A, int | None], None]:
    @functools.wraps(f)
    def wrapped(self: A, value: int | None = None) -> None:
        if value is None:
            value = default
        return f(self, value)

    return wrapped


class Fiddle:
    @with_default
    def print(self, value: int | None = None) -> None:
        assert value is not None
        print("Answer:", value)


if TYPE_CHECKING:
    reveal_type(Fiddle.print)

fiddle = Fiddle()
fiddle.print(12)
# !! Too few arguments for "print" of "Fiddle"  [call-arg]
fiddle.print()


def mocked(self, value=None):
    print("Mocked answer:", value)


with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    fiddle.print()

Now mypy complains when we try to omit the optional argument, because Callable cannot represent optional arguments:

test3.py:32: note: Revealed type is "def (test3.Fiddle, Union[builtins.int, None])"
test3.py:37: error: Too few arguments for "print" of "Fiddle"  [call-arg]
test3.py:46: error: Too few arguments for "print" of "Fiddle"  [call-arg]

typing's documentation says:

Callable cannot express complex signatures such as functions that take a variadic number of arguments, overloaded functions, or functions that have keyword-only parameters. However, these signatures can be expressed by defining a Protocol class with a __call__() method:

Let's do that!

Typing with Protocol, take 1

# Introduce typing, try with Protocol
import functools
from typing import TYPE_CHECKING, Protocol, TypeVar, Generic, cast
from unittest import mock

default = 42

A = TypeVar("A", contravariant=True)


class Printer(Protocol, Generic[A]):
    def __call__(_, self: A, value: int | None = None) -> None:
        ...


def with_default(f: Printer[A]) -> Printer[A]:
    @functools.wraps(f)
    def wrapped(self: A, value: int | None = None) -> None:
        if value is None:
            value = default
        return f(self, value)

    return cast(Printer, wrapped)


class Fiddle:
    # function has a __get__ method to generate bound versions of itself
    # the Printer protocol does not define it, so mypy is now unable to type
    # the bound method correctly
    @with_default
    def print(self, value: int | None = None) -> None:
        assert value is not None
        print("Answer:", value)


if TYPE_CHECKING:
    reveal_type(Fiddle.print)

fiddle = Fiddle()
# !! Argument 1 to "__call__" of "Printer" has incompatible type "int"; expected "Fiddle"
fiddle.print(12)
fiddle.print()


def mocked(self, value=None):
    print("Mocked answer:", value)


with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    fiddle.print()

New mypy complaints:

test4.py:41: error: Argument 1 to "__call__" of "Printer" has incompatible type "int"; expected "Fiddle"  [arg-type]
test4.py:42: error: Missing positional argument "self" in call to "__call__" of "Printer"  [call-arg]
test4.py:50: error: Argument 1 to "__call__" of "Printer" has incompatible type "int"; expected "Fiddle"  [arg-type]
test4.py:51: error: Missing positional argument "self" in call to "__call__" of "Printer"  [call-arg]

What happens with class methods, is that the function object has a __get__ method that generates a bound versions of itself. Our Printer protocol does not define it, so mypy is now unable to type the bound method correctly.
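The descriptor mechanism can be seen in isolation in this minimal sketch (not from the post; the class name is reused only for illustration): attribute access on an instance invokes the function's __get__, which returns the bound method.

```python
class Fiddle:
    def print(self, value=None):
        return value


fiddle = Fiddle()

# Looking the function up in the class dict and calling its __get__ is
# what Python does implicitly for the attribute access `fiddle.print`:
bound = Fiddle.__dict__["print"].__get__(fiddle, Fiddle)

assert bound.__self__ is fiddle  # `self` is already filled in
assert bound(12) == 12
assert bound() is None
```

This is why a Protocol that only defines __call__ loses the bound-method typing: mypy has no __get__ to consult.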

Typing with Protocol, take 2

So... we add the function descriptor methods to our Protocol!

A lot of this is taken from this discussion.

# Introduce typing, try with Protocol, harder!
import functools
from typing import TYPE_CHECKING, Protocol, TypeVar, Generic, cast, overload, Union
from unittest import mock

default = 42

A = TypeVar("A", contravariant=True)

# We now produce typing for the whole function descriptor protocol
#
# See https://github.com/python/typing/discussions/1040


class BoundPrinter(Protocol):
    """Protocol typing for bound printer methods."""

    def __call__(_, value: int | None = None) -> None:
        """Bound signature."""


class Printer(Protocol, Generic[A]):
    """Protocol typing for printer methods."""

    # noqa annotations are overrides for flake8 being confused, giving either D418:
    # Function/ Method decorated with @overload shouldn't contain a docstring
    # or D105:
    # Missing docstring in magic method
    #
    # F841 is for vulture being confused:
    #   unused variable 'objtype' (100% confidence)

    @overload
    def __get__(  # noqa: D105
        self, obj: A, objtype: type[A] | None = None  # noqa: F841
    ) -> BoundPrinter:
        ...

    @overload
    def __get__(  # noqa: D105
        self, obj: None, objtype: type[A] | None = None  # noqa: F841
    ) -> "Printer[A]":
        ...

    def __get__(
        self, obj: A | None, objtype: type[A] | None = None  # noqa: F841
    ) -> Union[BoundPrinter, "Printer[A]"]:
        """Implement function descriptor protocol for class methods."""

    def __call__(_, self: A, value: int | None = None) -> None:
        """Unbound signature."""


def with_default(f: Printer[A]) -> Printer[A]:
    @functools.wraps(f)
    def wrapped(self: A, value: int | None = None) -> None:
        if value is None:
            value = default
        return f(self, value)

    return cast(Printer, wrapped)


class Fiddle:
    # the function's __get__ method is now described by the Printer
    # protocol, so mypy can type both the unbound function and the
    # bound method correctly
    @with_default
    def print(self, value: int | None = None) -> None:
        assert value is not None
        print("Answer:", value)


fiddle = Fiddle()
fiddle.print(12)
fiddle.print()


def mocked(self, value=None):
    print("Mocked answer:", value)


with mock.patch.object(Fiddle, "print", autospec=True, side_effect=mocked):
    fiddle.print(12)
    fiddle.print()

It works! It's typed! And mypy is happy!

27 October, 2024 03:46PM

October 26, 2024

hackergotchi for Steve McIntyre

Steve McIntyre

Mini-Debconf in Cambridge, October 10-13 2024

Group photo

Again this year, Arm offered to host us for a mini-debconf in Cambridge. Roughly 60 people turned up on 10-13 October to the Arm campus, where they made us really welcome. They even had some Debian-themed treats made to spoil us!

Cakes

Hacking together

minicamp

For the first two days, we had a "mini-debcamp" with a disparate group of people working on all sorts of things: Arm support, live images, browser stuff, package uploads, etc. And (as is traditional) lots of people doing last-minute work to prepare slides for their talks.

Sessions and talks

Secure Boot talk

Saturday and Sunday were two days devoted to more traditional conference sessions. Our talks covered a typical range of Debian subjects: a DPL "Bits" talk, an update from the Release Team, live images. We also had some wider topics: handling your own data, what to look for in the upcoming Post-Quantum Crypto world, and even me talking about the ups and downs of Secure Boot. Plus a random set of lightning talks too! :-)

Video team awesomeness

Video team in action

Lots of volunteers from the DebConf video team were on hand too (both on-site and remotely!), so our talks were both streamed live and recorded for posterity - see the links from the individual talk pages in the wiki, or http://meetings-archive.debian.net/pub/debian-meetings/2024/MiniDebConf-Cambridge/ for the full set if you'd like to see more.

A great time for all

Again, the mini-conf went well and feedback from attendees was very positive. Thanks to all our helpers, and of course to our sponsor: Arm for providing the venue and infrastructure for the event, and all the food and drink too!

Photo credits: Andy Simpkins, Mark Brown, Jonathan Wiltshire. Thanks!

26 October, 2024 08:54PM

Russell Coker

The CUPS Vulnerability

The Announcement

Late last month there was an announcement of a “severity 9.9 vulnerability” allowing remote code execution that affects “all GNU/Linux systems (plus others)” [1]. For something to affect all Linux systems that would have to be either a kernel issue or a sshd issue. The announcement included complaints about the lack of response of vendors and “And YES: I LOVE hyping the sh1t out of this stuff because apparently sensationalism is the only language that forces these people to fix”.

He seems to have a different experience of reporting bugs than I do: I have had plenty of success getting bugs fixed without hyping them. I just report the bug, wait a while, and it gets fixed. I have reported potential security bugs without even bothering to try and prove that they were exploitable (any situation where you can make a program crash is potentially exploitable), I just report it and it gets fixed. I was very dubious about his ability to determine how serious a bug is and to accurately report it, so this wasn’t a situation where I was waiting for it to be disclosed to discover if it affected me. I was quite confident that my systems wouldn’t be at any risk.

Analysis

Not All Linux Systems Run CUPS

When it was published my opinion was proven to be correct, it turned out to be a series of CUPS bugs [2]. To describe that as “all GNU/Linux systems (plus others)” seems like a vast overstatement, maybe a good thing to say if you want to be a TikTok influencer but not if you want to be known for computer security work.

For the Debian distribution the cups-browsed package (which seems to be the main exploitable one) is recommended by cups-daemon; as I have my Debian systems configured to not install recommended packages by default, it wasn’t installed on any of my systems. Also the vast majority of my systems don’t do printing and therefore don’t have any part of CUPS installed.

CUPS vs NAT

The next issue is that in Australia most home ISPs don’t have IPv6 enabled and CUPS doesn’t do the things needed to allow receiving connections from the outside world via NAT with IPv4. If inbound port 631 is blocked on both TCP and UDP as is the default on Australian home Internet or if there is a correctly configured firewall in place then the network is safe from attack. There is a feature called uPnP port forwarding [3] to allow server programs to ask a router to send inbound connections to them, this is apparently usually turned off by default in router configuration. If it is enabled then there are Debian packages of software to manage this, the miniupnpc package has the client (which can request NAT changes on the router) [4]. That package is not installed on any of my systems and for my home network I don’t use a router that runs uPnP.

The only program I knowingly run that uses uPnP is Warzone2100 and as I don’t play network games that doesn’t happen. Also as an aside in version 4.4.2-1 of warzone2100 in Debian and Ubuntu I made it use Bubblewrap to run the game in a container. So a Remote Code Execution bug in Warzone 2100 won’t be an immediate win for an attacker (exploits via X11 or Wayland are another issue).

MAC Systems

Debian has had AppArmor enabled by default since Buster was released in 2019 [5]. There are claims that AppArmor will stop this exploit from doing anything bad.

To check SE Linux access I first use the “semanage fcontext” command to check the context of the binary, cupsd_exec_t means that the daemon runs as cupsd_t. Then I checked what file access is granted with the sesearch program, mostly just access to temporary files, cupsd config files, the faillog, the Kerberos cache files (not used on the Kerberos client systems I run), Samba run files (might be a possibility of exploiting something there), and the security_t used for interfacing with kernel security infrastructure. I then checked the access to the security class and found that it is permitted to check contexts and access-vectors – not access that can be harmful.

The next test was to use sesearch to discover what capabilities are granted, which unfortunately includes the sys_admin capability, that is a capability that allows many sysadmin tasks that could be harmful (I just checked the Fedora source and Fedora 42 has the same access). Whether the sys_admin capability can be used to do bad things with the limited access cupsd_t has to device nodes etc is not clear. But this access is undesirable.

So the SE Linux policy in Debian and Fedora will stop cupsd_t from writing SETUID programs that can be used by random users for root access and stop it from writing to /etc/shadow etc. But the sys_admin capability might allow it to do hostile things and I have already uploaded a changed policy to Debian/Unstable to remove that. The sys_rawio capability also looked concerning but it’s apparently needed to probe for USB printers and as the domain has no access to block devices it is otherwise harmless. Below are the commands I used to discover what the policy allows and the output from them.

# semanage fcontext -l|grep bin/cups-browsed
/usr/bin/cups-browsed                              regular file       system_u:object_r:cupsd_exec_t:s0 
# sesearch -A -s cupsd_t -c file -p write
allow cupsd_t cupsd_interface_t:file { append create execute execute_no_trans getattr ioctl link lock map open read rename setattr unlink write };
allow cupsd_t cupsd_lock_t:file { append create getattr ioctl link lock open read rename setattr unlink write };
allow cupsd_t cupsd_log_t:file { append create getattr ioctl link lock open read rename setattr unlink write };
allow cupsd_t cupsd_runtime_t:file { append create getattr ioctl link lock open read rename setattr unlink write };
allow cupsd_t cupsd_rw_etc_t:file { append create getattr ioctl link lock open read rename setattr unlink write };
allow cupsd_t cupsd_t:file { append create getattr ioctl link lock open read rename setattr unlink write };
allow cupsd_t cupsd_tmp_t:file { append create getattr ioctl link lock open read rename setattr unlink write };
allow cupsd_t faillog_t:file { append getattr ioctl lock open read write };
allow cupsd_t init_tmpfs_t:file { append getattr ioctl lock read write };
allow cupsd_t krb5_host_rcache_t:file { append create getattr ioctl link lock open read rename setattr unlink write }; [ allow_kerberos ]:True
allow cupsd_t print_spool_t:file { append create getattr ioctl link lock open read relabelfrom relabelto rename setattr unlink write };
allow cupsd_t samba_var_t:file { append getattr ioctl lock open read write };
allow cupsd_t security_t:file { append getattr ioctl lock open read write };
allow cupsd_t security_t:file { append getattr ioctl lock open read write }; [ allow_kerberos ]:True
allow cupsd_t usbfs_t:file { append getattr ioctl lock open read write };
# sesearch -A -s cupsd_t -c security
allow cupsd_t security_t:security check_context; [ allow_kerberos ]:True
allow cupsd_t security_t:security { check_context compute_av };
# sesearch -A -s cupsd_t -c capability
allow cupsd_t cupsd_t:capability net_bind_service; [ allow_ypbind ]:True
allow cupsd_t cupsd_t:capability { audit_write chown dac_override dac_read_search fowner fsetid ipc_lock kill net_bind_service setgid setuid sys_admin sys_rawio sys_resource sys_tty_config };
# sesearch -A -s cupsd_t -c capability2
allow cupsd_t cupsd_t:capability2 { block_suspend wake_alarm };
# sesearch -A -s cupsd_t -c blk_file

Conclusion

This is an example of how not to handle security issues. Some degree of promotion is acceptable but this is very excessive and will result in people not taking security announcements seriously in future. I wonder if this is even a good career move by the researcher in question, will enough people believe that they actually did something good in this that it outweighs the number of people who think it’s misleading at best?

26 October, 2024 06:51AM by etbe

October 25, 2024

hackergotchi for Jonathan Dowland

Jonathan Dowland

Behringer Model-D (synths I didn't buy)

Whilst researching what synth to buy, I learned of the Behringer1 Model-D2: a 2018 clone of the 1970 Moog Minimoog, in a desktop form factor.

Behringer Model-D

In common with the original Minimoog, it's a monophonic analogue synth, featuring three audible oscillators3, Moog's famous ladder filter and a basic envelope generator. The Model-D has lost the keyboard from the original and added some patch points for the different stages, enabling some slight re-routing of the audio components.

1970 Moog Minimoog

Since I was focussing on more fundamental, back-to-basics instruments, this was very appealing to me. I'm very curious to find out what's so compelling about the famous Moog sound. The relative lack of features feels like an advantage: less to master. The additional patch points make it a little more flexible and offer a potential gateway into the world of modular synthesis. The Model-D is also very affordable: about £200. I'll never own a real Moog.

For this to work, I would need to supplement it with some other equipment. I'd need a keyboard (or press the Micron into service as a controller); I would want some way of recording and overdubbing (same as with any synth). There are no post-mix effects on the Model-D, such as delay, reverb or chorus, so I may also want something to add those.

What stopped me was partly the realisation that there was little chance that a perennial beginner, such as I, could eke anything novel out of a synthesiser design that's 54 years old. Perhaps that shouldn't matter, but it gave me pause. Whilst the Model-D has patch points, I don't have anything to connect to them, and I'm firmly wanting to avoid the Modular Synthesis money pit. The lack of effects, and polyphony could make it hard to live-sculpt a tone.

I started characterizing the Model-D as the "heart" choice, but it seemed wise to instead go for a "head" choice.

Maybe another day!


  1. There's a whole other blog post of material I could write about Behringer and their clones of classic synths, some long out of production, and others, not so much. But, I decided to skip on that for now.
  2. taken from the fact that the Minimoog was a productised version of Moog's fourth internal prototype, the model D.
  3. Two oscillators are more common in modern synths

25 October, 2024 03:56PM

October 23, 2024

Why hardware synths?

Russell wrote a great comment on my last post (thanks!):

What benefits do these things offer when a general purpose computer can do so many things nowadays? Is there a USB keyboard that you can connect to a laptop or phone to do these things? I presume that all recent phones have the compute power to do all the synthesis you need if you have the right software. Is it just a lack of software and infrastructure for doing it on laptops/phones that makes synthesisers still viable?

I've decided to turn my response into a post of its own.

The issue is definitely not compute power. You can indeed attach a USB keyboard to a computer and use a plethora of software synthesisers, including very faithful emulations of all the popular classics. The raw compute power of modern hardware synths is comparatively small: I’ve been told the modern Korg digital synths are on a par with a Raspberry Pi. I’ve seen some DSPs which are 32-bit ARMs, and other tools which are roughly equivalent to Arduinos.

I can think of four reasons hardware synths remain popular with some despite the above:

  1. As I touched on in my original synth post, computing dominates my life outside of music already. I really wanted something separate from that to keep mental distance from work.

  2. Synths have hard real-time requirements. They don't have raw power in compute terms, but they absolutely have to do their job within microseconds of being instructed to, with no exceptions. Linux still has a long way to go for hard real-time.

  3. The Linux audio ecosystem is… complex. Dealing with pipewire, pulseaudio, jack, alsa, oss, and anything else I've forgotten, as well as their failure modes, is too time consuming.

  4. The last point is to do with creativity and inspiration. A good synth is more than the sum of its parts: it's an instrument, carefully designed and its components integrated by musically-minded people who have set out to create something to inspire. There are plenty of synths which aren't good instruments, but have loads of features: they’re boxes of "stuff". Good synths can't do it all: they often have limitations which you have to respond to, work around or with, creatively. This was expressed better than I could by Trent Reznor in the video archetype of a synthesiser.

23 October, 2024 09:51AM

Arturia Microfreak

Arturia Microfreak. © CC-BY-SA 4 (https://commons.wikimedia.org/wiki/File:MicroFreak.jpg)

I nearly did, but ultimately I didn't buy an Arturia Microfreak.

The Microfreak is a small form factor hybrid synth with a distinctive style. It's priced at the low end of the market and it is overflowing with features. It has a weird 2-octave keyboard which is a stylophone-style capacitive strip rather than weighted keys. It seems to have plenty of controls, but given the amount of features it has, much of that functionality is inevitably buried in menus. The important stuff is front and centre, though. The digital oscillators are routed through an analog filter. The Microfreak gained sampler functionality in a firmware update that surprised and delighted its owners.

I watched a load of videos about the Microfreak, but the above review from musician Stimming stuck in my mind because it made a comparison between the Microfreak and Teenage Engineering's OP-1.

The Teenage Engineering OP-1.

I'd been lusting after the OP-1 since it appeared in 2011: a pocket-sized1 music making machine with eleven synthesis engines, a sampler, and less conventional features such as an FM radio, a large colour OLED display, and a four track recorder. That last feature in particular was really appealing to me: I loved the idea of having an all-in-one machine to try and compose music. Even then, I was not keen on involving conventional computers in music making.

Of course in many ways it is a very compromised machine. I never did buy an OP-1, and by now they've replaced it with a new model (the OP-1 field) that costs 50% more (but doesn't seem to do 50% more). I'm still not buying one.

Framing the Microfreak in terms of the OP-1 made the penny drop for me. The Microfreak doesn't have the four-track functionality, but almost no synth has: I'm going to have to look at something external to provide that. But it might capture a similar sense of fun; it's something I could use on the sofa, in the spare room, on the train, during lunchbreaks at work, etc.

On the other hand, I don't want to make the same mistake as with the Micron: too much functionality requiring some experience to understand what you want so you can go and find it in the menus. I also didn't get a chance to audition the unusual keyboard: there's only one music store carrying synths left in Newcastle and they didn't have one.

So I didn't buy the Microfreak. Maybe one day in the future once I'm further down the road. Instead, I started to concentrate my search on more fundamental, back-to-basics instruments…


  1. Big pockets, mind

23 October, 2024 09:51AM

October 22, 2024

hackergotchi for Dirk Eddelbuettel

Dirk Eddelbuettel

drat 0.2.5 on CRAN: Small Updates

drat user

A new minor release of the drat package arrived on CRAN today, which is just over a year since the previous release. drat stands for drat R Archive Template, and helps with easy-to-create and easy-to-use repositories for R packages. Since its inception in early 2015 it has found reasonably widespread adoption among R users because repositories with marked releases are the better way to distribute code.

Because for once it really is as your mother told you: Friends don’t let friends install random git commit snapshots. Properly rolled-up releases it is. Just how CRAN shows us: a model that has demonstrated for over two-and-a-half decades how to do this. And you can too: drat is easy to use, documented by six vignettes and just works. Detailed information about drat is at its documentation site. That said, and ‘these days’, if you mainly care about github code then r-universe is there too, also offering binaries it makes and all that jazz. But sometimes you just want to, or need to, roll a local repository and drat can help you there.

This release contains a small PR (made by Arne Holmin just after the previous release) adding support for an ‘OSflavour’ variable (helpful for macOS). We also corrected an issue with one test file being insufficiently careful of using git2r only when installed, and as usual did a round of maintenance for the package concerning both continuous integration and documentation.

The NEWS file summarises the release as follows:

Changes in drat version 0.2.5 (2024-10-21)

  • Function insertPackage has a new optional argument OSflavour (Arne Holmin in #142)

  • A test file conditions correctly about git2r being present (Dirk)

  • Several smaller packaging updates and enhancements to continuous integration and documentation have been added (Dirk)

Courtesy of my CRANberries, there is a comparison to the previous release. More detailed information is on the drat page as well as at the documentation site.

If you like this or other open-source work I do, you can sponsor me at GitHub.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

22 October, 2024 12:38AM

October 21, 2024

Sven Hoexter

Terraform: Making Use of Precondition Checks

I'm in the unlucky position to have to deal with GitHub. Thus I've a terraform module in a project which deals with populating organization secrets in our GitHub organization, and assigning repositories access to those secrets.

Since the GitHub terraform provider internally works mostly with repository IDs, not slugs (the human readable organization/repo format), we've to do some mapping in between. In my case it looks like this:

#tfvars Input for Module
org_secrets = {
    "SECRET_A" = {
        repos = [
            "infra-foo",
            "infra-baz",
            "deployment-foobar",
        ]
    }
    "SECRET_B" = {
        repos = [
            "job-abc",
            "job-xyz",
        ]
    }
}

# Module Code
/*
Limitation: The GH search API which is queried returns at most 1000
results. Thus whenever we reach that limit this approach will no longer work.
The query is also intentionally limited to internal repositories right now.
*/
data "github_repositories" "repos" {
    query           = "org:myorg archived:false -is:public -is:private"
    include_repo_id = true
}

/*
The properties of the github_repositories.repos data source queried
above contains only lists. Thus we've to manually establish a mapping
between the repository names we need as a lookup key later on, and the
repository id we got in another list from the search query above.
*/
locals {
    # Assemble the set of repository names we need repo_ids for
    repos = toset(flatten([for v in var.org_secrets : v.repos]))

    # Walk through all names in the query result list and check
    # if they're also in our repo set. If yes add the repo name -> id
    # mapping to our resulting map
    repos_and_ids = {
        for i, v in data.github_repositories.repos.names : v => data.github_repositories.repos.repo_ids[i]
        if contains(local.repos, v)
    }
}

resource "github_actions_organization_secret" "org_secrets" {
    for_each        = var.org_secrets
    secret_name     = each.key
    visibility      = "selected"
    # the logic how the secret value is sourced is omitted here
    plaintext_value = data.xxx
    selected_repository_ids = [
        for r in each.value.repos : local.repos_and_ids[r]
        if can(local.repos_and_ids[r])
    ]
}
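The name to ID mapping in the locals block is just a zip-and-filter. The same logic sketched in Python, with made-up repository data (purely illustrative, not from the module), may make the HCL for-expression easier to read:

```python
# The github_repositories data source returns two parallel lists:
# repository names and repository IDs, in matching order.
names = ["infra-foo", "infra-baz", "job-abc", "unrelated-repo"]
repo_ids = [101, 102, 103, 104]

# local.repos: every repo name referenced anywhere in org_secrets
wanted = {"infra-foo", "infra-baz", "job-abc"}

# Equivalent of the HCL for-expression with the contains() filter:
# walk both lists in lockstep and keep only the names we reference.
repos_and_ids = {
    name: rid
    for name, rid in zip(names, repo_ids)
    if name in wanted
}
assert repos_and_ids == {"infra-foo": 101, "infra-baz": 102, "job-abc": 103}
```

The can() filter in the resource then simply skips any repo name that did not make it into this map.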

Now if we do something bad, delete a repository and forget to remove it from the configuration for the module, we receive some error message that a (numeric) repository ID could not be found. Pretty much useless for the average user because you've to figure out which repository is still in the configuration list, but got deleted recently.

Luckily terraform has supported precondition checks since version 1.2, which we can use in an output block to report which repository is missing. What we need is the set of missing repositories and the validation condition:

locals {
    # Debug facility in combination with an output and precondition check
    # There we can report which repository we still have in our configuration
    # but no longer get as a result from the data provider query
    missing_repos = setsubtract(local.repos, data.github_repositories.repos.names)
}

# Debug facility - If we can not find every repository in our
# search query result, report those repos as an error
output "missing_repos" {
    value = local.missing_repos
    precondition {
        condition     = length(local.missing_repos) == 0
        error_message = format("Repos in config missing from resultset: %v", local.missing_repos)
    }
}
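In Python terms, setsubtract plus the precondition amount to a set difference and an emptiness check (again with hypothetical data, just to show the shape of the logic):

```python
# local.repos: repository names still referenced in the configuration
configured = {"infra-foo", "infra-baz", "deployment-foobar"}
# data.github_repositories.repos.names: what the search query returned
found = ["infra-foo", "deployment-foobar"]

# setsubtract(local.repos, ...) is a plain set difference:
missing = configured - set(found)
assert missing == {"infra-baz"}

# The precondition fails, naming the culprits, whenever missing is non-empty:
if missing:
    error_message = "Repos in config missing from resultset: %s" % sorted(missing)
    assert error_message == "Repos in config missing from resultset: ['infra-baz']"
```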

Now you only have to be aware that GitHub is GitHub and the TF provider has open bugs, but is not supported by GitHub and you will encounter inconsistent results. But it works, even if your terraform apply failed that way.

21 October, 2024 01:28PM

Russ Allbery

California general election

As usual with these every-two-year posts, probably of direct interest only to California residents. Maybe the more obscure things we're voting on will be a minor curiosity to people elsewhere. I'm a bit late this year, although not as late as last year, so a lot of people may have already voted, but I've been doing this for a while and wanted to keep it up.

This post will only be about the ballot propositions. I don't have anything useful to say about the candidates that isn't hyper-local. I doubt anyone who has read my posts will be surprised by which candidates I'm voting for.

As always with California ballot propositions, it's worth paying close attention to which propositions were put on the ballot by the legislature, usually because there's some state law requirement (often that I disagree with) that they be voted on by the public, and propositions that were put on the ballot by voter petition. The latter are often poorly written and have hidden problems. As a general rule of thumb, I tend to default to voting against propositions added by petition. This year, one can conveniently distinguish by number: the single-digit propositions were added by the legislature, and the two-digit ones were added by petition.

Proposition 2: YES. Issue $10 billion in bonds for public school infrastructure improvements. I generally vote in favor of spending measures like this unless they have some obvious problem. The opposition argument is a deranged rant against immigrants and government debt and fails to point out actual problems. The opposition argument also claims this will result in higher property taxes and, seriously, if only that were true. That would make me even more strongly in favor of it.

Proposition 3: YES. Enshrines the right to marriage without regard to sex or race into the California state constitution. This is already the law given US Supreme Court decisions, but fixing California state law is a long-overdue and obvious cleanup step. One of the quixotic things I would do if I were ever in government, which I will never be, would be to try to clean up the laws to make them match reality, repealing all of the dead clauses that were overturned by court decisions or are never enforced. I am in favor of all measures in this direction even when I don't agree with the direction of the change; here, as a bonus, I also strongly agree with the change.

Proposition 4: YES. Issue $10 billion in bonds for infrastructure improvements to mitigate climate risk. This is basically the same argument as Proposition 2. The one drawback of this measure is that it's kind of a mixed grab bag of stuff and probably some of it should be supported out of the general budget rather than bonds, but I consider this a minor problem. We definitely need to ramp up climate risk mitigation efforts.

Proposition 5: YES. Reduces the required super-majority to pass local bond measures for affordable housing from 67% to 55%. The fact that this requires a supermajority at all is absurd, California desperately needs to build more housing of any kind however we can, and publicly funded housing is an excellent idea.

Proposition 6: YES. Eliminates "involuntary servitude" (in other words, "temporary" slavery) as a legally permissible punishment for crimes in the state of California. I'm one of the people who think the 13th Amendment to the US Constitution shouldn't have an exception for punishment for crimes, so obviously I'm in favor of this. This is one very, very tiny step towards improving the absolutely atrocious prison conditions in the state.

Proposition 32: YES. Raises the minimum wage to $18 per hour from the current $16 per hour, over two years, and ties it to inflation. This is one of the rare petition-based propositions that I will vote in favor of because it's very straightforward, we clearly should be raising the minimum wage, and living in California is absurdly expensive because we refuse to build more housing (see Propositions 5 and 33). The opposition argument is the standard lie that a higher minimum wage will increase unemployment, which we know from numerous other natural experiments is simply not true.

Proposition 33: NO. Repeals Costa-Hawkins, which prohibits local municipalities from enacting rent control on properties built after 1995. This one is going to split the progressive vote rather badly, I suspect.

California has a housing crisis caused by not enough housing supply. It is not due to vacant housing, as much as some people would like you to believe that; the numbers just don't add up. There are way more people living here and wanting to live here than there is housing, so we need to build more housing.

Rent control serves a valuable social function of providing stability to people who already have housing, but it doesn't help, and can hurt, the project of meeting actual housing demand. Rent control alone creates a two-tier system where people who have housing are protected but people who don't have housing have an even harder time getting housing than they do today. It's therefore quite consistent with the general NIMBY playbook of trying to protect the people who already have housing by making life harder for the people who do not, while keeping the housing supply essentially static.

I am in favor of rent control in conjunction with real measures to increase the housing supply. I am therefore opposed to this proposition, which allows rent control without any effort to increase housing supply. I am quite certain that, if this passes, some municipalities will use it to make constructing new high-density housing incredibly difficult by requiring it all be rent-controlled low-income housing, thus cutting off the supply of multi-tenant market-rate housing entirely. This is already a common political goal in the part of California where I live. Local neighborhood groups advocate for exactly this routinely in local political fights.

Give me a mandate for new construction that breaks local zoning obstructionism, including new market-rate housing to maintain a healthy lifecycle of housing aging into affordable housing as wealthy people move into new market-rate housing, and I will gladly support rent control measures as part of that package. But rent control on its own just allocates winners and losers without addressing the underlying problem.

Proposition 34: NO. This is an excellent example of why I vote against petition propositions by default. This is a law designed to affect exactly one organization in the state of California: the AIDS Healthcare Foundation. The reason for this targeting is disputed; one side claims it's because of the AHF support for Proposition 33, and another side claims it's because AHF is a slumlord abusing California state funding. I have no idea which side of this is true. I also don't care, because I am fundamentally opposed to writing laws this way. Laws should establish general, fair principles that are broadly applicable, not be written with bizarrely specific conditions (health care providers that operate multifamily housing) that will only be met by a single organization. This kind of nonsense creates bad legal codes and the legal equivalent of technical debt. Just don't do this.

Proposition 35: YES. I am, reluctantly, voting in favor of this even though it is a petition proposition because it looks like a useful simplification and cleanup of state health care funding, makes an expiring tax permanent, and is supported by a very wide range of organizations that I generally trust to know what they're talking about. No opposition argument was filed, which I think is telling.

Proposition 36: NO. I am resigned to voting down attempts to start new "war on drugs" nonsense for the rest of my life because the people who believe in this crap will never, ever, ever stop. This one has bonus shoplifting fear-mongering attached, something that touches on nasty local politics that have included large retail chains manipulating crime report statistics to give the impression that shoplifting is up dramatically. It's yet another round of the truly horrific California "three strikes" criminal penalty obsession, which completely misunderstands both the causes of crime and the (almost nonexistent) effectiveness of harsh punishment as deterrence.

21 October, 2024 12:03AM

October 20, 2024

hackergotchi for Bits from Debian

Bits from Debian

Ada Lovelace Day 2024 - Interview with some Women in Debian

Alt Ada Lovelace portrait

Ada Lovelace Day was celebrated on October 8 in 2024, and on this occasion, to celebrate and raise awareness of the contributions of women to the STEM fields we interviewed some of the women in Debian.

Here we share their thoughts, comments, and concerns with the hope of inspiring more women to become part of the Sciences, and of course, to work inside of Debian.

This article was simulcast on the debian-women mailing list.

Beatrice Torracca

1. Who are you?

I am Beatrice, I am Italian. Internet technology and everything computer-related is just a hobby for me, not my line of work or the subject of my academic studies. I have too many interests and too little time. I would like to do lots of things and at the same time I am too Oblomovian to do any.

2. How did you get introduced to Debian?

As a user I started using newsgroups when I had my first dialup connection and there was always talk about this strange thing called Linux. Since moving from DR DOS to Windows was a shock for me, feeling like I lost the control of my machine, I tried Linux with Debian Potato and I never strayed away from Debian since then for my personal equipment.

3. How long have you been into Debian?

Define "into". As a user... since Potato, too many years to count. As a contributor, a similar amount of time, since early 2000 I think. My first archived email about contributing to the translation of the descriptions of Debian packages dates from 2001.

4. Are you using Debian in your daily life? If yes, how?

Yes!! I use testing. I have it on my desktop PC at home and I have it on my laptop. The desktop is where I have a local IMAP server that fetches all the mails of my email accounts, and where I sync and back up all my data. On both I do day-to-day stuff (from email to online banking, from shopping to taxes), all forms of entertainment, a bit of work if I have to work from home (GNU R for statistics, LibreOffice... the usual suspects). At work I am required to have another OS, sadly, but I am working on setting up a Debian Live system to use there too. Plus if at work we start doing bioinformatics there might be a Linux machine in our future... I will of course suggest and hope for a Debian system.

5. Do you have any suggestions to improve women's participation in Debian?

This is a tough one. I am not sure. Maybe, more visibility for the women already in the Debian Project, and make the newcomers feel seen, valued and welcomed. A respectful and safe environment is key too, of course, but I think Debian made huge progress in that aspect with the Code of Conduct. I am a big fan of promoting diversity and inclusion; there is always room for improvement.

Ileana Dumitrescu (ildumi)

1. Who are you?

I am just a girl in the world who likes cats and packaging Free Software.

2. How did you get introduced to Debian?

I was tinkering with a computer running Debian a few years ago, and I decided to learn more about Free Software. After a search or two, I found Debian Women.

3. How long have you been into Debian?

I started looking into contributing to Debian in 2021. After contacting Debian Women, I received a lot of information and helpful advice on different ways I could contribute, and I decided package maintenance was the best fit for me. I eventually became a Debian Maintainer in 2023, and I continue to maintain a few packages in my spare time.

4. Are you using Debian in your daily life? If yes, how?

Yes, it is my favourite GNU/Linux operating system! I use it for email, chatting, browsing, packaging, etc.

5. Do you have any suggestions to improve women's participation in Debian?

The mailing list for Debian Women may attract more participation if it is utilized more. It is where I started, and I imagine participation would increase if it is more engaging.

Kathara Sasikumar (kathara)

1. Who are you?

I'm Kathara Sasikumar, 22 years old and a recent Debian user turned Maintainer from India. I try to become a creative person through sketching or playing guitar chords, but it doesn't work! xD

2. How did you get introduced to Debian?

When I first started college, I was that overly enthusiastic student who signed up for every club and volunteered for anything that crossed my path just like every other fresher.

But then, the pandemic hit, and like many, I hit a low point. COVID depression was real, and I was feeling pretty down. Around this time, the FOSS Club at my college suddenly became more active. My friends, knowing I had a love for free software, pushed me to join the club. They thought it might help me lift my spirits and get out of the slump I was in.

At first, I joined only out of peer pressure, but once I got involved, the club really took off. FOSS Club became more and more active during the pandemic, and I found myself spending more and more time with it.

A year later, we had the opportunity to host a MiniDebConf at our college, where I got to meet a lot of Debian developers and maintainers. Attending their talks and talking with them gave me a wider perspective on Debian, and I loved the Debian philosophy.

At that time, I had been distro hopping but never quite settled down. I occasionally used Debian but never stuck around. However, after the MiniDebConf, I found myself using Debian more consistently, and it truly connected with me. The community was incredibly warm and welcoming, which made all the difference.

3. How long have you been into Debian?

Now, I've been using Debian as my daily driver for about a year.

4. Are you using Debian in your daily life? If yes, how?

It has become my primary distro, and I use it every day for continuous learning and working on various software projects with free and open-source tools. Plus, I've recently become a Debian Maintainer (DM) and have taken on the responsibility of maintaining a few packages. I'm looking forward to contributing more to the Debian community 🙂

Rhonda D'Vine (rhonda)

1. Who are you?

My name is Rhonda, my pronouns are she/her, or per/pers. I'm 51 years old, working in IT.

2. How did you get introduced to Debian?

I was already looking into Linux because of university, first it was SuSE. And people played around with gtk. But when they packaged GNOME and it just didn't even install I looked for alternatives. A working colleague from back then gave me a CD of Debian. Though I couldn't install from it because Slink didn't recognize the pcmcia drive. I had to install it via floppy disks, but apart from that it was quite well done. And the early GNOME was working, so I never looked back. 🙂

3. How long have you been into Debian?

Even before I was more involved, a colleague asked me whether I could help with translating the release documentation. That was my first contribution to Debian, for the slink release in early 1999. And I was using some other software before on my SuSE systems, and I wanted to continue to use them on Debian obviously. So that's how I got involved with packaging in Debian. But I continued to help with translation work, for a long period of time I was almost the only person active for the German part of the website.

4. Are you using Debian in your daily life? If yes, how?

Being involved with Debian was a big part of the reason I got into my jobs since a long time now. I always worked with maintaining Debian (or Ubuntu) systems. Privately I run Debian on my laptop, with occasionally switching to Windows in dual boot when (rarely) needed.

5. Do you have any suggestions to improve women's participation in Debian?

There are factors that we can't influence, like that a lot of women are pushed into care work because patriarchal structures work that way, and don't have the time nor energy to invest a lot into other things. But we could learn to appreciate smaller contributions better, and not focus so much on the quantity of contributions. When we look at longer discussions on mailing lists, those that write more mails actually don't contribute more to the discussion, they often repeat themselves without adding more substance. Through working on our own discussion patterns this could create a more welcoming environment for a lot of people.

Sophie Brun (sophieb)

1. Who are you?

I'm a 44-year-old French woman. I'm married and I have 2 sons.

2. How did you get introduced to Debian?

In 2004 my boyfriend (now my husband) installed Debian on my personal computer to introduce me to Debian. I knew almost nothing about Open Source. During my engineering studies, a professor mentioned the existence of Linux, Red Hat in particular, but without giving any details.

I learnt Debian by using and reading (in advance) The Debian Administrator's Handbook.

3. How long have you been into Debian?

I've been a user since 2004. But I only started contributing to Debian in 2015: I had quit my job and I wanted to work on something more meaningful. That's why I joined my husband in Freexian, his company. Unlike most people I think, I started contributing to Debian for my work. I only became a DD in 2021 under gentle social pressure and when I felt confident enough.

4. Are you using Debian in your daily life? If yes, how?

Of course I use Debian in my professional life for almost all the tasks: from administrative tasks to Debian packaging.

I also use Debian in my personal life. I have very basic needs: Firefox, LibreOffice, GnuCash and Rhythmbox are the main applications I need.

Sruthi Chandran (srud)

1. Who are you?

A feminist, a librarian turned Free Software advocate and a Debian Developer. Part of Debian Outreach team and DebConf Committee.

2. How did you get introduced to Debian?

I got introduced to the free software world and Debian through my husband. I attended many Debian events with him. During one such event, out of curiosity, I participated in a Debian packaging workshop. Just after that I visited a Tibetan community in India and they mentioned that there was no proper Tibetan font in GNU/Linux. Tibetan font was my first package in Debian.

3. How long have you been into Debian?

I have been contributing to Debian since 2016 and Debian Developer since 2019.

4. Are you using Debian in your daily life? If yes, how?

I haven't used any other distro on my laptop since I got introduced to Debian.

5. Do you have any suggestions to improve women's participation in Debian?

I have been actively mentoring newcomers to Debian since I started contributing myself. I especially work towards reducing the gender gap inside the Debian and Free Software community in general. In my experience, I believe that visibility of already existing women in the community will encourage more women to participate. Also I think we should reintroduce mentoring through debian-women.

Tássia Camões Araújo (tassia)

1. Who are you?

Tássia Camões Araújo, a Brazilian living in Canada. I'm a passionate learner who tries to push myself out of my comfort zone and always find something new to learn. I also love to mentor people on their learning journey. But I don't consider myself a typical geek. My challenge has always been to not get distracted by the next project before I finish the one I have in my hands. That said, I love being part of a community of geeks and feel empowered by it. I love Debian for its technical excellence, and it's always reassuring to know that someone is taking care of the things I don't like or can't do. When I'm not around computers, one of my favorite things is to feel the wind on my cheeks, usually while skating or riding a bike; I also love music, and I'm always singing a melody in my head.

2. How did you get introduced to Debian?

As a student, I was privileged to be introduced to FLOSS at the same time I was introduced to computer programming. My university could not afford to have labs in the usual proprietary software model, and what seemed like a limitation at the time turned out to be a great learning opportunity for me and my colleagues. I joined this student-led initiative to "liberate" our servers and build LTSP-based labs - where a single powerful computer could power a few dozen diskless thin clients. How revolutionary it was at the time! And what an achievement! From students to students, all using Debian. Most of that group became close friends; I've married one of them, and a few of them also found their way to Debian.

3. How long have you been into Debian?

I first used Debian in 2001, but my first real connection with the community was attending DebConf 2004. Since then, going to DebConfs has become a habit. It is that moment in the year when I reconnect with the global community and my motivation to contribute is boosted. And you know, in 20 years I've seen people become parents, grandparents, children grow up; we've had our own child and had the pleasure of introducing him to the community; we've mourned the loss of friends and healed together. I'd say Debian is like family, but not the kind you get at random once you're born, Debian is my family by choice.

4. Are you using Debian in your daily life? If yes, how?

These days I teach at Vanier College in Montréal. My favorite course to teach is UNIX, which I have the pleasure of teaching mostly using Debian. I try to inspire my students to discover Debian and other FLOSS projects, and we are happy to run a FLOSS club with participation from students, staff and alumni. I love to see these curious young minds put to the service of FLOSS. It is like recruiting soldiers for a good battle, and one that can change their lives, as it certainly did mine.

5. Do you have any suggestions to improve women's participation in Debian?

I think the most effective way to inspire other women is to give visibility to active women in our community. Speaking at conferences, publishing content, being vocal about what we do so that other women can see us and see themselves in those positions in the future. It's not easy, and I don't like being in the spotlight. It took me a long time to get comfortable with public speaking, so I can understand the struggle of those who don't want to expose themselves. But I believe that this space of vulnerability can open the way to new connections. It can inspire trust and ultimately motivate our next generation. It's with this in mind that I publish these lines.

Another point we can't neglect is that in Debian we work on a volunteer basis, and this in itself puts us at a great disadvantage. In our societies, women usually take a heavier load than their partners in terms of caretaking and other invisible tasks, so it is hard to afford the free time needed to volunteer. This is one of the reasons why I bring my son to the conferences I attend, and so far I have received all the support I need to attend DebConfs with him. It is a way to share the caregiving burden with our community - it takes a village to raise a child. Besides allowing us to participate, it also serves to show other women (and men) that you can have a family life and still contribute to Debian.

My feeling is that we are not doing super well in terms of diversity in Debian at the moment, but that should not discourage us at all. That's the way it is now, but that doesn't mean it will always be that way. I feel like we go through cycles. I remember times when we had many more active female contributors, and I'm confident that we can improve our ratio again in the future. In the meantime, I just try to keep going, do my part, attract those I can, reassure those who are too scared to come closer. Debian is a wonderful community, it is a family, and of course a family cannot do without us, the women.

These interviews were conducted via email exchanges in October, 2024. Thanks to all the wonderful women who participated. We really appreciate your contributions to Debian and to Free/Libre software.

20 October, 2024 10:01PM by Anupa Ann Joseph

Nazi.Compare

Nazi research into Jewish smell, Hitler's love of dogs & the SVP in Switzerland

Hitler and the Nazis were obsessed with the idea that Jews could be identified by a distinctive smell. While America was building the A-bomb, Hitler diverted science funding to research the Jewish smell. The smell was rumored to resemble sulfur.

More recently, research has considered the similarities in accusations of an African smell and a Jewish smell:

It makes the case that there was a shift in the way that smell, beginning in the late nineteenth century, was used to not simply demarcate groups but, in addition, to supposedly detect ‘race’ and ethnicity.

Dogs have a very strong sense of smell and coincidentally, it is documented that Adolf Hitler loved dogs and there are rumors, harder to substantiate, that Hitler was not fond of cats or maybe even afraid of them.

It is easy to see why a fascist dictator would prefer dogs and not cats. Dogs can be trained and they are obedient like foot-soldiers in the army. Cats, on the other hand, do not obey human commands or Codes of Conduct imposed upon them.

Prominent Debian Developer Daniel Pocock has recently released details of the Swiss harassment judgment. His former landlady, an organizer of the SVP senioren (far right Swiss seniors group) had started rumors about a smell coming from Pocock's cats. Even the judge asked if it could be acceptable to pose questions about this imaginary smell. Obviously the judge was not familiar with this awkward similarity to the persecution of Jewish and African people throughout history.

20 October, 2024 06:00AM

October 15, 2024

Andrew Cater

Mini-DebConf Cambridge 20241013 1300

 LATE NEWS

 I haven't blogged until now: I should have done from Thursday onwards.

It's a joy to be here in Cambridge at ARM HQ. Lots of people I recognise from last year are here; lots are *not* here because this mini-conference is a month before the next one in Toulouse, and many people can't attend both.

Two days' worth of chatting, working on bits and pieces, and informal meetings was a very good and useful way to build relationships and let teams find some space for themselves.

Lots of quiet hacking going on - a few loud conversations. A new ARM machine in mini-ITX format - see Steve McIntyre's blog on planet.debian.org about Rock 5 ITX.

Two days' worth of talks for Saturday and Sunday. For some people, this is their first time. Lightning talks are particularly good to break down barriers - three slides and five minutes (and the chance for a bit of gamesmanship to break the rules creatively).

Longer talks: a couple from Steve Capper of ARM were particularly helpful to those interested in upcoming development. A couple of the talks in the schedule are traditional: if the release team are here, they tell us what they are doing, for example.

ARM are main sponsors and have been very generous in giving us conference and facilities space. Fast network, coffee and interested people - what's not to like :)

[EDIT/UPDATE - And my talk is finished and went fairly well: slides have now been uploaded and the talk is linked from the Mini-DebConf pages]

15 October, 2024 10:13PM by Andrew Cater (noreply@blogger.com)

Iustin Pop

Optical media lifetime - one data point

Way back (more than 10 years ago) when I was doing DVD-based backups, I knew that normal DVDs/Blu-Rays are no long-term archival solution, and that if I was serious about doing optical media backups, I'd need to switch to M-Disc. I actually bought a (small) stack of M-Disc Blu-Rays, but never used them.

I then switched to other backup solutions, and forgot about the whole topic. Until, this week, while sorting stuff, I happened upon a set of DVD backups from a range of years, and was very curious whether they were still readable after so many years.

And, to my surprise, there were no surprises! Went backward in time, and:

  • 2014, TDK DVD+R, fully readable
  • 2012, JVC DVD+R and TDK DVD+R, fully readable
  • 2010, Verbatim DVD+R, fully readable
  • 2009/2008/2007, Verbatim DVD+R, 4 DVDs, fully readable

I also found a stack of dual-layer DVD+Rs from 2012-2014, some for sure Verbatim, and some unmarked (they were intended to be printed on), but likely Verbatim as well. All worked just fine. Just that, even at ~8GiB per disk, backing up raw photo files took way too many disks, even in 2014 😅.

At this point I was happy that all 12+ DVDs I found, ranging from 10 to 14 years, are all good. Then I found a batch of 3 CDs! Here the results were mixed:

  • 2003: two TDK “CD-R80”, “Mettalic”, 700MB: fully readable, after 21 years!
  • unknown year, likely around 1999-2003, but no later, “Creation” CD-R, 700MB: read errors to the extent I can’t even read the disk signature (isoinfo -d).

I think the takeaway is that all the explicitly selected media - TDK, JVC and Verbatim - held for 10-20 years. Valid reads from summer 2003 are mind-boggling for me, for (IIRC) organic media - not sure about the “TDK Mettalic” substrate. And when you just pick whatever (“Creation”), well, the results are mixed.
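For anyone repeating this kind of longevity check, a stable checksum from a full end-to-end read is a simple smoke test (isoinfo -d only looks at the metadata). A minimal sketch; /dev/sr0 is an assumption, substitute your actual drive:

```shell
# Hash a complete read of a disc (or any block device / image file).
# dd reports unreadable sectors on stderr; a clean run plus the same
# hash on a later re-read means the disc is still fully readable.
check_disc() {
    dd if="${1:-/dev/sr0}" bs=2048 status=none | sha256sum
}
```

Running check_disc on the same disc years apart should print an identical hash.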

Note that in all this, it was about CDs and DVDs. I have no idea how Blu-Rays behave, since I don’t think I ever wrote a Blu-Ray. In any case, surprising to me, and makes me rethink a bit my backup options. Sizes from 25 to 100GB Blu-Rays are reasonable for most critical data. And they’re WORM, as opposed to most LTO media, which is re-writable (and to some small extent, prone to accidental wiping).

Now, I should check those M-Discs to see if they can still be written to, after 10 years 😀

15 October, 2024 02:00PM

October 14, 2024

hackergotchi for Philipp Kern

Philipp Kern

Touch Notifications for YubiKeys

When setting up your YubiKey you have the option to require the user to touch the device to authorize an operation (be it signing, decrypting, or authenticating). While web browsers often provide clear prompts for this, other applications like SSH or GPG will not. Instead the operation will just hang without any visual indication that user input is required. The YubiKey itself will blink, but depending on where it is plugged in that is not very visible.

yubikey-touch-detector (fresh in unstable) solves this issue by providing a way for your desktop environment to signal the user that the device is waiting for a touch. It provides an event feed on a socket that other components can consume. It comes with libnotify support and there are some custom integrations for other environments.

For GNOME and KDE, libnotify support should be sufficient; however, you still need to turn it on:

$ mkdir -p ~/.config/yubikey-touch-detector
$ sed -e 's/^YUBIKEY_TOUCH_DETECTOR_LIBNOTIFY=.*/YUBIKEY_TOUCH_DETECTOR_LIBNOTIFY=true/' \
  < /usr/share/doc/yubikey-touch-detector/examples/service.conf.example \
  > ~/.config/yubikey-touch-detector/service.conf
$ systemctl --user restart yubikey-touch-detector

I would still have preferred a more visible, more modal prompt. I guess that would be an exercise for another time, listening to the socket and presenting a window. But for now, desktop notifications will do for me.
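As a sketch of that exercise, the snippet below consumes the event feed directly. I'm assuming the socket lives at $XDG_RUNTIME_DIR/yubikey-touch-detector.socket and that each event is a fixed 5-byte message such as GPG_1 (touch required) or GPG_0 (operation finished), as I understand the upstream README; verify both against your installed version.

```python
import os
import socket

# Assumed from the upstream README: socket path and 5-byte messages
# like b"GPG_1" / b"U2F_0". Check these against your installed version.
SOCKET_PATH = os.path.join(
    os.environ.get("XDG_RUNTIME_DIR", "/run/user/%d" % os.getuid()),
    "yubikey-touch-detector.socket",
)

def parse_events(read):
    """Yield (kind, waiting) tuples from a recv-like callable.

    `read(n)` must return up to n bytes, or b"" on EOF.  Each wire
    event is exactly 5 bytes, e.g. b"GPG_1" (touch now required)
    or b"GPG_0" (the operation finished).
    """
    buf = b""
    while True:
        chunk = read(4096)
        if not chunk:
            return
        buf += chunk
        while len(buf) >= 5:
            msg, buf = buf[:5], buf[5:]
            yield msg[:3].decode("ascii"), msg[4:5] == b"1"

def watch():
    """Connect to the detector and print touch prompts as they arrive."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(SOCKET_PATH)
        for kind, waiting in parse_events(s.recv):
            print(("YubiKey is waiting for a touch (%s)" if waiting
                   else "Touch request finished (%s)") % kind)
```

Swapping the print calls for a small always-on-top window would give the more modal prompt I was wishing for.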

PS: I have not managed to get SSH's no-touch-required to work with YubiKey 4, while it works just fine with a YubiKey 5.

14 October, 2024 10:39AM by Philipp Kern (noreply@blogger.com)

October 13, 2024

hackergotchi for Andy Simpkins

Andy Simpkins

The state of the art

A long time ago….

A long time ago a computer was a woman (almost exclusively women, I think, not men) who was employed to do a lot of repetitive mathematics – typically for accounting and stock / order processing.

Then along came Lyons, who deployed an artificial computer to perform the same task, only with fewer errors in less time. Modern day computing was born – we had entered the age of the Digital Computer.

These computers were large, consumed huge amounts of power but were precise, and gave repeatable, verifiable results.

Over time the huge mainframe digital computers shrank in size, increased in performance, and consumed far less power – so much so that they often didn’t need the specialist CFC-based, refrigerated liquid cooling systems of their bigger mainframe counterparts, only requiring forced air flow, and occasionally just convection cooling. They shrank so far and became cheap enough that the Personal Computer came to be, replacing the mainframe and its time-shared resources with a machine per user. Desktop or even portable “laptop” computers were everywhere.

We networked them together, so now we could share information around the office. A few computers were given the specialist task of being available all the time so that we could share documents or host databases; these servers were basically PCs designed to operate 24×7, usually more powerful than their desktop counterparts (or at least with faster storage and networking).

Next we joined these networks together and the internet was born. The dream of a paperless office might actually be realised – we could now send documents from one organisation (or individual) to another via email. We could make our specialist applications available outside the office, and web servers / web apps came of age.

Fast forward a few years and all of a sudden we need huge data-halls filled with “Rack scale” machines augmented with exotic GPUs and NPUs, again with refrigerated liquid cooling, all to do the same tasks we were previously doing without the magical buzzword that has been named AI; because we all need another dot-com bubble or blockchain bandwagon to jump aboard. Our AI-enabled searches take slightly longer, consume magnitudes more power, and best of all the results we are given may or may not be correct...

Progress: less precise answers, taking longer, consuming more power, without any verification, often giving a different result if you repeat your question – AND we still need a personal computing device to access this wondrous thing.

Remind me again why we are here?

(timelines and huge swathes of history simply ignored to make an attempted comic point – this is intended to make a point and not be scholarly work)

13 October, 2024 03:15PM by andy

October 11, 2024

hackergotchi for Steve McIntyre

Steve McIntyre

Rock 5 ITX

It's been a while since I've posted about arm64 hardware. The last machine I spent my own money on was a SolidRun Macchiatobin, about 7 years ago. It's a small (mini-ITX) board with a 4-core arm64 SoC (4 * Cortex-A72) on it, along with things like a DIMM socket for memory, lots of networking, 3 SATA disk interfaces.

The Macchiatobin was a nice machine compared to many earlier systems, but it took quite a bit of effort to get it working to my liking. I replaced the on-board U-Boot firmware binary with an EDK2 build, and that helped. After a few iterations we got a new build including graphical output on a PCIe graphics card. Now it worked much more like a "normal" x86 computer.

I still have that machine running at home, and it's been a reasonably reliable little build machine for arm development and testing. It's starting to show its age, though - the onboard USB ports no longer work, and so it's no longer useful for doing things like installation testing. :-/

So...

I was involved in a conversation in the #debian-arm IRC channel a few weeks ago, and diederik suggested the Radxa Rock 5 ITX. It's another mini-ITX board, this time using a Rockchip RK3588 CPU. Things have moved on - the CPU is now an 8-core big.LITTLE config: 4*Cortex A76 and 4*Cortex A55. The board has NVMe on-board, 4*SATA, built-in Mali graphics from the CPU, soldered-on memory. Just about everything you need on an SBC for a small low-power desktop, a NAS or whatever. And for about half the price I paid for the Macchiatobin. I hit "buy" on one of the listed websites. :-)

A few days ago, the new board landed. I picked the version with 24GB of RAM and bought the matching heatsink and fan. I set it up in an existing case borrowed from another old machine and tried the Radxa "Debian" build. All looked OK, but I clearly wasn't going to stay with that. Onwards to running a native Debian setup!

I installed an EDK2 build from https://github.com/edk2-porting/edk2-rk3588 onto the onboard SPI flash, then rebooted with a Debian 12.7 (Bookworm) arm64 installer image on a USB stick. How much trouble could this be?

I was shocked! It Just Worked (TM)

I'm running a standard Debian arm64 system. The graphical installer ran just fine. I installed onto the NVMe, adding an Xfce desktop for some simple tests. Everything Just Worked. After many years of fighting with a range of different arm machines (from simple SBCs to desktops and servers), this was without doubt the most straightforward setup I've ever done. Wow!

It's possible to go and spend a lot of money on an Ampere machine, and I've seen them work well too. But for a hobbyist user (or even a smaller business), the Rock 5 ITX is a lovely option. Total cost to me for the board with shipping fees, import duty, etc. was just over £240. That's great value, and I can wholeheartedly recommend this board!

The two things that are missing compared to the Macchiatobin? This is soldered-on memory (but hey, 24G is plenty for me!) It also doesn't have a PCIe slot, but it has sufficient onboard network, video and storage interfaces that I think it will cover most people's needs.

Where's the catch? It seems these are very popular right now, so it can be difficult to find these machines in stock online.

FTAOD, I should also point out: I bought this machine entirely with my own money, for my own use for development and testing. I've had no contact with the Radxa or Rockchip folks at all here, I'm just so happy with this machine that I've felt the need to shout about it! :-)

Here's some pictures...

Rock 5 ITX top view

Rock 5 ITX back panel view

Rock 5 EDK2 startup

Rock 5 xfce login

Rock 5 ITX running Firefox

11 October, 2024 01:53PM

October 10, 2024

hackergotchi for Sean Whitton

Sean Whitton

sway-completing-read

I finally figured out how to have an application launcher with my usual Emacs completion keybindings:

This is with Icomplete. If you use another completion framework it will look different. Crucially, it’s what you are already used to using inside Emacs, with the same completion style (flex vs. orderless vs. …), bindings, etc.

Here is my Sway binding:

    bindsym p exec i3-dmenu-desktop \
        --dmenu="dmenu_emacsclient 'Application: '", \
        mode "default"

(for me this is inside a mode { } block)

The dmenu_emacsclient script is here. It relies on the function spw/sway-completing-read from my init.el.

As usual, this code is available for your reuse under the terms of the GNU GPL. Please see the license and copyright information in the linked files.

You also probably want a for_window directive in your Sway config to enable floating the window, and perhaps to resize it. Enjoy having your Emacs completion bindings for application launching, too!
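For reference, such a directive might look like the following; the criteria here are my assumption (the actual `app_id` and title depend on how dmenu_emacsclient creates its Emacs frame), so check yours with `swaymsg -t get_tree` and adjust:

    # Float and shrink the completion frame (app_id/title are guesses)
    for_window [app_id="emacs" title="^Application: "] floating enable, resize set 800 200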

10 October, 2024 05:23AM

hackergotchi for Gunnar Wolf

Gunnar Wolf

Started a guide to writing FUSE filesystems in Python

As DebConf22 was coming to an end, in Kosovo, talking with Eeveelweezel they invited me to prepare a talk to give for the Chicago Python User Group. I replied that I’m not really that much of a Python guy… But would think about a topic. Two years passed. I met Eeveelweezel again at DebConf24 in Busan, South Korea. And the topic came up again. I had thought of some ideas, but none really pleased me. Again, I do write some Python when needed, and I teach using Python, as it’s the language I find my students can best cope with. But delivering a talk to ChiPy?

On the other hand, I have long used a very simplistic and limited filesystem I’ve designed as an implementation project at class: FIUnamFS (for “Facultad de Ingeniería, Universidad Nacional Autónoma de México”: the Engineering Faculty of Mexico’s National University, where I teach. Sorry, the link is in Spanish, but you will find several implementations of it from the students 😉). It is a toy filesystem, with as many bad characteristics as you can think of, but easy to specify and implement. It is based on contiguous file allocation, has no support for sub-directories, and is often limited to the size of a 1.44MB floppy disk.

As I give this filesystem as a project to my students (and not as a mere homework), I always ask them to try and provide a good, polished, professional interface, not just the simplistic menu I often get. And I tell them the best possible interface would be if they provide support for FIUnamFS transparently, usable by the user without thinking too much about it. With high probability, that would mean: Use FUSE.

Python FUSE

But, in the six semesters I’ve used this project (with 30-40 students per semester group), only one student has bitten the bullet and presented a FUSE implementation.

Maybe this is because it’s not easy to understand how to build a FUSE-based filesystem from a high-level language such as Python? Yes, I’ve seen several implementation examples and even nice web pages (e.g. the examples shipped with the python-fuse module, Stavros’ passthrough filesystem, and Dave’s filesystem based upon, and further explaining, Stavros’, and several others) explaining how to provide basic functionality. I found a particularly useful presentation by Matteo Bertozzi presented ~15 years ago at PyCon4… But none of those is IMO followable enough by itself. Also, most of them are very old (maybe the world is telling me something that I refuse to understand?).

And of course, there isn’t a single interface to work from. In Python only, we can find python-fuse, Pyfuse, Fusepy… Where to start from?

…So I set out to try and help.

Over the past couple of weeks, I have been slowly working on my own version, and presenting it as a progressive set of tasks, adding filesystem calls, and being careful to thoroughly document what I write (but… maybe my documentation ends up obfuscating the intent? I hope not — and, read on, I’ve provided some remediation).

I registered a GitLab project for a hand-holding guide to writing FUSE-based filesystems in Python. This is a project where I present several working FUSE filesystem implementations, some of them RAM-based, some passthrough-based, and I intend to add to this also filesystems backed on pseudo-block-devices (for implementations such as my FIUnamFS).

So far, I have added five stepwise pieces, starting from the barest possible empty filesystem, and adding system calls (and functionality) until (so far) either a read-write filesystem in RAM with basic stat() support or a read-only passthrough filesystem.
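To give a feel for what those first stepwise pieces involve, here is a minimal sketch of my own (not code from the guide): the getattr() and readdir() handlers that even an empty filesystem must answer. With fusepy, a class like this would subclass fuse.Operations and be mounted via fuse.FUSE(); it is shown standalone here so the logic can be read and run in isolation.

```python
import errno
import stat
import time


class MiniRamFS:
    """Sketch of the first two calls every FUSE filesystem needs:
    getattr() and readdir().  With fusepy this would subclass
    fuse.Operations and be mounted with fuse.FUSE(MiniRamFS(), mnt);
    it stands alone here so the logic is easy to follow and test."""

    def __init__(self):
        now = time.time()
        # The root directory is the only entry in our in-RAM "disk".
        self.files = {
            "/": dict(st_mode=stat.S_IFDIR | 0o755, st_nlink=2,
                      st_size=0, st_ctime=now, st_mtime=now,
                      st_atime=now),
        }

    def getattr(self, path, fh=None):
        # FUSE expects a stat-like mapping, or ENOENT for unknown paths.
        if path not in self.files:
            raise OSError(errno.ENOENT, "no such file or directory")
        return self.files[path]

    def readdir(self, path, fh=None):
        # "." and ".." plus every known entry except the root itself.
        return [".", ".."] + [p.lstrip("/")
                              for p in self.files if p != "/"]
```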

I think providing fun or useful examples is also a good way to get students to use what I’m teaching, so I’ve added some ideas I’ve had: DNS Filesystem, on-the-fly markdown compiling filesystem, unzip filesystem and uncomment filesystem.

They all provide something that could be seen as useful, in a way that’s easy to teach, in just some tens of lines. And, in case my comments/documentation are too long to read, uncommentfs will happily strip all comments and whitespace automatically! 😉
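To illustrate the core idea behind an uncomment filesystem, here is a naive sketch of the underlying transformation (my own, not code from the project; it knowingly mishandles a '#' inside a string literal):

```python
def uncomment(source: str) -> str:
    """Strip comments and blank lines from Python-style source.

    Naive sketch: splits on the first '#', so a '#' inside a string
    literal would be (wrongly) treated as a comment too.
    """
    kept = []
    for line in source.splitlines():
        code = line.split("#", 1)[0].rstrip()
        if code:  # drop lines that end up empty
            kept.append(code)
    return "\n".join(kept)
```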

So… I will be delivering my talk tomorrow (2024.10.10, 18:30 GMT-6) at ChiPy (virtually). I am also presenting this talk virtually at Jornadas Regionales de Software Libre in Santa Fe, Argentina, next week (virtually as well). And also in November, in person, at nerdear.la, that will be held in Mexico City for the first time.

Of course, I will also share this project with my students in the next couple of weeks… And hope it manages to lure them into implementing FUSE in Python. At some point, I shall report!

Update: After delivering my ChiPy talk, I have uploaded it to YouTube: A hand-holding guide to writing FUSE-based filesystems in Python, and after presenting at Jornadas Regionales, I present you the video in Spanish here: Aprendiendo y enseñando a escribir sistemas de archivo en espacio de usuario con FUSE y Python.

10 October, 2024 01:07AM

October 09, 2024

hackergotchi for Ben Hutchings

Ben Hutchings

FOSS activity in September 2024

09 October, 2024 10:57PM by Ben Hutchings

October 08, 2024

Thorsten Alteholz

My Debian Activities in September 2024

FTP master

This month I accepted 441 and rejected 29 packages. The overall number of packages that got accepted was 448.

I couldn’t believe my eyes, but this month I really accepted the same number of packages as last month.

Debian LTS

This was my hundred-twenty-third month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:

  • [unstable] libcupsfilters security update to fix one CVE related to validation of IPP attributes obtained from remote printers
  • [unstable] cups-filters security update to fix two CVEs related to validation of IPP attributes obtained from remote printers
  • [unstable] cups security update to fix one CVE related to validation of IPP attributes obtained from remote printers
  • [DSA 5778-1] prepared package for cups-filters security update to fix two CVEs related to validation of IPP attributes obtained from remote printers
  • [DSA 5779-1] prepared package for cups security update to fix one CVE related to validation of IPP attributes obtained from remote printers
  • [DLA 3905-1] cups-filters security update to fix two CVEs related to validation of IPP attributes obtained from remote printers
  • [DLA 3904-1] cups security update to fix one CVE related to validation of IPP attributes obtained from remote printers

Despite the announcement, the package libppd in Debian is not affected by the CVEs related to CUPS. By pure chance there is an unrelated package with the same name in Debian. I also answered some questions about the CUPS related uploads. Due to the CUPS issues, I postponed my work on other packages to October.

Last but not least I did a week of FD this month and attended the monthly LTS/ELTS meeting.

Debian ELTS

This month was the seventy-fourth ELTS month. During my allocated time I uploaded or worked on:

  • [ELA-1186-1] cups-filters security update for two CVEs in Stretch and Buster to fix the IPP attribute related CVEs.
  • [ELA-1187-1] cups-filters security update for one CVE in Jessie to fix the IPP attribute related CVE (the version in Jessie was not affected by the other CVE).

I also started to work on updates for cups in Buster, Stretch and Jessie, but their uploads will happen only in October.

I also did a week of FD and attended the monthly LTS/ELTS meeting.

Debian Printing

This month I uploaded …

  • libcupsfilters to also fix a dependency and autopkgtest issue besides the security fix mentioned above.
  • splix for a new upstream version. This package is managed now by OpenPrinting.

Last but not least I tried to prepare an update for hplip. Unfortunately this is a nerve-stretching task and I need some more time.

This work is generously funded by Freexian!

Debian Matomo

This month I even found some time to upload packages that are dependencies of Matomo …

This work is generously funded by Freexian!

Debian Astro

This month I uploaded a new upstream or bugfix version of:

Most of the uploads were related to package migration to testing. As some of them are in non-free or contrib, one has to build all binary versions. From my point of view handling packages in non-free or contrib could be very much improved, but well, they are not part of Debian …

Anyway, starting in December there is an Outreachy project that takes care of automatic updates of these packages. So hopefully it will be much easier to keep those packages up to date. I will keep you informed.

Debian IoT

This month I uploaded new upstream or bugfix versions of:

Debian Mobcom

This month I did source uploads of all the packages that were prepared last month by Nathan and started the transition. It went rather smoothly except for a few packages where the new version did not propagate to the tracker and they got stuck in old failing autopkgtests. Anyway, in the end all packages migrated to testing.

I also uploaded new upstream releases or fixed bugs in:

misc

This month I uploaded new upstream or bugfix versions of:

Most of those uploads were needed to help packages to migrate to testing.

08 October, 2024 09:49PM by alteholz

hackergotchi for Steinar H. Gunderson

Steinar H. Gunderson

Pimp my SV08

The Sovol SV08 is a 3D printer which is a semi-assembled clone of Voron 2.4, an open-source design. It's not the cheapest of printers, but for what you get, it's extremely good value for money—as long as you can deal with certain, err, quality issues.

Anyway, I have one, and one of the fun things about an open design is that you can switch out things to your liking. (If you just want a tool, buy something else. Bambu P1S, for instance, if you can live with a rather closed ecosystem. It's a bit like an iPhone in that aspect, really.) So I've put together a spreadsheet with some of the more common choices:

Pimp my SV08

It doesn't contain any of the really difficult mods, and it also doesn't cover pure printables. And none of the dreaded macro stuff that people seem to be obsessing over (it's really like being in the 90s with people's mIRC scripts all over again sometimes :-/), except where needed to make hardware work.

08 October, 2024 05:41PM