After my last post
on superimposed codes, I discovered that OEIS already had a sequence
for it (I had just missed it due to a slightly different convention),
namely A286874 (and its sister sequence
A303977, which lists the number of distinct
maximal solutions). However, very few terms of this sequence were known;
in particular, it was known that a(12) >= 20 (easily proved by simply
demonstrating a set of twenty 12-bit numbers with the desired property),
but it wasn't known if the value could be higher (i.e., whether there
existed a 12-bit set with 21 elements or more). The SAT solver wasn't
really working well for this anymore, so I thought: can I just brute-force it?
I.e., can I enumerate all 12-bit 20-element sets and then see if any of them
have room for a 21st element?
Now, obviously you cannot run a completely dumb brute force. The raw state
space is 12*20 = 240 bits, and going through 2^240 different options is
simply not feasible. But it's a good place to start, and then we can start employing
tricks from there. (I'm sure there are fancier ways somehow, but this
one was what I chose. I'm no genius with mathematics, but I can write
code.)
So I started with a 20-level deep for loop, with each element counting from
0 to 4095 (inclusive). Now, there are some speedups that are obvious; for
instance, once you have two elements, you can check that neither is a
subset of the other (which is, except in some edge cases with small sets
that we don't need to worry about here, a looser condition than what
we're trying to test for), and then skip the remaining 18 levels.
Similarly, once we have the first three elements, we can start testing
whether one is a subset of the OR of the other two, and abort similarly.
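As a concrete sketch of these tests (my code, not the post's, assuming each element sits in the low 12 bits of a uint32_t), both checks reduce to a few bitwise operations:

#include <cstdint>

// x is a subset of y iff every bit of x is also set in y.
inline bool is_subset(uint32_t x, uint32_t y) {
    return (x & ~y) == 0;
}

// The triple test: no element may be a subset of the OR of the other two.
inline bool triple_ok(uint32_t a, uint32_t b, uint32_t c) {
    return !is_subset(a, b | c) &&
           !is_subset(b, a | c) &&
           !is_subset(c, a | b);
}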
Furthermore, we can start considering symmetries. We only care about
solutions that are qualitatively distinct, in that the ordering of the
elements doesn't matter and the ordering of the bits doesn't matter either.
So we can simply consider only sequences where the elements are in increasing order,
which is extremely simple, very cheap, and nets us a speedup of 20! ~= 2.4 *
10^18. We have to be a bit careful, though, because this symmetry can
conflict with other symmetries that we'd like to use for speedup.
For instance, it would be nice to impose the condition that the elements
must be in order of increasing population count (number of set bits),
but if we do this at the same time as the “strictly increasing” condition,
we'll start missing valid solutions. (I did use a very weak variant of
it, though: no element can have a smaller popcount than the first one.
Otherwise, you could just swap those two elements and shuffle the columns
around to restore both orderings, so the two conditions don't conflict.)
However, there is more that we can do which isn't in conflict. In particular,
let's say (writing only 5-bit elements for brevity) that we are
considering candidates for the first element:
00011
00101
00110
10010
These are all, obviously, the same (except that the latter ones will be
more restrictive); we could just shuffle bits around and get the same thing.
So we impose a new symmetry: Whenever we introduce new bits (bits that were
never set in any previous element), they need to start from the right. So now this start
of a sequence is valid:
00011
00101
but this is not:
00011
01001
The reason is, again, that we could get the first sequence from the second
by flipping the second and third bit (counting from the left). This is cheap
and easy to test for, and is not in conflict with our “increasing” criterion
as long as we make this specific choice.
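A minimal sketch of that test (my reading of the rule, not the post's actual code): the candidate's new bits must occupy the lowest positions never used by any earlier element, with no gaps.

#include <cstdint>

// seen = OR of all earlier elements; bits = word width (12 here).
// Any bit of v not in seen must be the next-lowest unused position.
inline bool new_bits_from_the_right(uint32_t v, uint32_t seen, int bits) {
    uint32_t new_bits = v & ~seen;
    uint32_t unused = ~seen & ((1u << bits) - 1);  // positions never used so far
    while (new_bits != 0) {
        uint32_t lowest_unused = unused & -unused;  // lowest unused position
        if ((new_bits & lowest_unused) == 0) return false;  // gap: outlawed
        new_bits &= ~lowest_unused;
        unused &= ~lowest_unused;
    }
    return true;
}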
But we can extend this even further. Look at these two alternatives:
00111
01001
and
00111
01010
They are also obviously equivalent as prefixes (just swap the fourth and
fifth bits), so we don't want to keep both. We make a restriction very
similar to the one before: if two bit positions are identical across all
previous elements, then the new element must fill bits from the right. (If they're not, then we cannot
impose a restriction.) This is also fairly easy to do with some bit fiddling,
although my implementation only considers consecutive bits. (It's not in
conflict with the strictly-increasing criterion, again because it only
makes values lower, not higher. It is, in a sense, a non-decreasing criterion
on the columns.)
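One possible implementation of the adjacent-column variant (again a sketch under my reading of the rule; names are mine): for each pair of neighboring columns that all previous elements leave indistinguishable, the new element may not set the left bit without also setting the right one.

#include <cstdint>

// prev points to the n elements chosen so far; bits is the width.
// In columns j+1 and j that agree in every previous element, the
// pattern "10" is outlawed, since swapping the two columns would
// yield an equivalent but lexicographically smaller prefix.
inline bool columns_ok(uint32_t v, const uint32_t *prev, int n, int bits) {
    for (int j = 0; j + 1 < bits; ++j) {
        bool distinguished = false;
        for (int i = 0; i < n; ++i) {
            if (((prev[i] >> j) & 1) != ((prev[i] >> (j + 1)) & 1)) {
                distinguished = true;
                break;
            }
        }
        if (!distinguished && ((v >> (j + 1)) & 1) && !((v >> j) & 1))
            return false;  // "10" in an undistinguished column pair
    }
    return true;
}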
And finally, consider these two sequences (with some other elements
in-between):
00111
01001
.....
10011
and
00111
01011
.....
10001
They are also equivalent; if you exchange the first and second bits and then
swap the order of the two elements, you end up with the same set. So this brings us to the
last symmetry: If you introduce a new bit (or more generally N new bits),
then you are not allowed to later introduce a value that is the same bit
shifted further to the left, with the other bits being lower. So the
second sequence would be outlawed.
Now, how do we do all of these tests efficiently? (In particular, the last
symmetry, while it helped a lot in reducing the number of duplicate
solutions, wasn't a speed win at first.) My first choice was to just generate
code that did all the tests, and did them as fast as possible. This was
actually quite efficient, although it took GCC several minutes to compile
(and Clang even more, although the resulting code wasn't much faster).
Amusingly, this code ended up with an IPC above 6 on my Zen 3 (5950X);
no need for hyperthreading here! I don't think I've ever seen real-life
code this taxing on the execution units, even though this code is naturally
extremely branch-heavy. Modern CPUs are amazing beasts.
It's a bit wasteful that we have 64-bit ALUs (and 256-bit SIMD ALUs) and
use them to do AND/OR on 12 bits at a time. So I tried various tricks with
packing the values to do more tests at a time, but unfortunately, it only
led to slowdowns. So eventually, I settled on a very different solution:
Bitsets. At any given time, we have a 4096-bit set of valid future values
for the inner for loops. Whenever we decide on a value, we look up in a
set of pregenerated tables and just AND them into our set. For instance,
if we just picked the value 3 (00011), we look up into the “3” table
and it will instantly tell us that values like 7 (00111), 11 (01011),
and many others are going to be invalid for all inner iterations and we
can just avoid considering them altogether. (Iterating over only the
set bits in a bitset is pretty fast in general, using only standard
tricks.) This saves us from testing any further value against these
illegals, so it's super-fast. The resulting tables are large (~4 GB),
since they are indexed by pairs of values, so
this essentially transforms our high-ALU problem into a memory-bound
problem, but it's still easily worth it (I think it gave a speedup
of something like 80x). The actual ANDing is easily done with AVX2,
256 bits at a time.
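For illustration, here are sketches of both primitives (my code, not the post's; a 4096-bit set stored as 64 uint64_t words, with names and layout assumed):

#include <immintrin.h>
#include <bit>
#include <cstdint>
#include <cstddef>

// AND a pregenerated table row into the set of still-valid values,
// 256 bits per iteration with AVX2 (16 iterations for 4096 bits).
void and_into(uint64_t *set, const uint64_t *table_row) {
    for (size_t i = 0; i < 64; i += 4) {
        __m256i a = _mm256_loadu_si256((const __m256i *)(set + i));
        __m256i b = _mm256_loadu_si256((const __m256i *)(table_row + i));
        _mm256_storeu_si256((__m256i *)(set + i), _mm256_and_si256(a, b));
    }
}

// Visit only the set bits, using the standard clear-lowest-bit trick.
template <typename F>
void for_each_set_bit(const uint64_t *set, F &&f) {
    for (size_t i = 0; i < 64; ++i)
        for (uint64_t w = set[i]; w != 0; w &= w - 1)
            f(i * 64 + std::countr_zero(w));  // index of this set bit
}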
This optimization not only made the last symmetry-breaking feasible,
but also sped up the entire process enough (you essentially get O(n)
bitset intersections instead of O(n²) new tests per level) that it
went from a “multiple machines, multiple months” project to
running comfortably within a day on my 5950X (~6 core-days).
I guess that's maybe a bit anticlimactic. I had to move the database I used
for work distribution onto the machine itself, or else the latency
would have killed me. It found the five different solutions very
quickly and then a couple of thousand duplicates of them (filtering
those out efficiently is kind of a tricky problem in itself!),
and then confirmed there were no others. I submitted it to OEIS,
and it should hopefully go through the editing process fairly fast.
The obvious next question is: Can we calculate a(13) in the same way?
Unfortunately, it seems the answer is no. Recompiling the same code
with 13-bit parameters (taking the LUTs up to ~31 GB, still within
the amount of RAM I've got) and making a 25-level instead of a 20-level
deep for loop, and then running for a while, it seems that we're looking
at roughly 4,000–5,000 core-years. That is infeasible unless you've got
a lot of money to burn (with spot VMs on GCE, you're talking about roughly
half a million dollars, give or take) on something that isn't a very
important problem in computer science.
In theory, there's still hope, though: The fact that we're still finding
the same solution ~1000x (down from ~100000x before the last symmetries
were added!) indicates that there's some more symmetry
that we could in theory exploit and break (and that factor 1000 is likely
to be much larger for 25 elements than for 20). So if someone more creative
than me could invent code for identifying them—or some other way of rejecting
elements early—we could perhaps identify a(13). But I don't think that's
happening anytime soon. Brute force found its sweet spot and I'm happy
about that, but it doesn't scale forever. :-)
This was my hundred-thirty-second month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:
[DLA 4221-1] libblockdev security update fixing one embargoed CVE related to obtaining full root privileges.
[hardening udisks2] uploaded new version of udisks2 with a hardening patch related to DLA 4221-1
[DLA 4235-1] sudo security update to fix one embargoed CVE related to a local privilege escalation.
[#1106867] got permission to upload kmail-account-wizard; the package was marked as accepted in July.
This month I also did a week of FD duties and attended the monthly LTS/ELTS meeting.
Debian ELTS
This month was the eighty-third ELTS month. During my allocated time I uploaded or worked on:
[ELA-1465-1] libblockdev security update to fix one embargoed CVE in Buster, related to obtaining full root privileges.
[ELA-1475-1] gst-plugins-good1.0 security update to fix 16 CVEs in Stretch. This also included cherry-picking other commits to make these fixes possible.
[ELA-1476-1] sudo security update to fix one embargoed CVE in Buster, Stretch and Jessie. The fix prevents a local privilege escalation.
This month I also did a week of FD duties and attended the monthly LTS/ELTS meeting.
… ta-lib to close at least one RFP by uploading a real package
Unfortunately I stumbled over a discussion about RFPs. One part of those involved wanted to automatically close older RFPs, the other part just wanted to keep them. But nobody suggested to really take care of those RFPs. Why is it easier to spend time talking about something instead of solving the real problem? Anyway, I had a look at those open RFPs. Some of them can just be closed because they weren’t closed when the corresponding package was uploaded. For some others the corresponding software has not seen any upstream activity for several years and depends on older software no longer in Debian (like Python 2). Such bugs can just be closed. Some requested software only works together with long-gone technology (for example the open Twitter API). Such bugs can just be closed. Last but not least, even the old RFPs contain nice software that is still maintained upstream and useful. One example is ta-lib, which I uploaded in June. So, please, let’s put our money where our mouths are. My diary of closed RFP bugs is on people.d.o. If only ten people follow suit, all bugs can be closed within a year.
FTP master
It is still this time of the year when just a few packages arrive in NEW: it is Hard Freeze. So please don’t hold it against me that I enjoy the sun more than processing packages in NEW. This month I accepted 104 and rejected 13 packages. The overall number of packages that got accepted was 105.
For some time now I was looking for a device to replace my Thinkpad. It’s a 14"
device, but that’s too big for my taste. I am a big fan of small notebooks, so
when frame.work announced their 12" laptop, I took the chance and ordered one
right away.
I was in one of the very early batches and got my package a couple of days ago.
When ordering, I chose the DIY edition, but in the end there was not that much
of DIY to do: I had to plug in the storage and the memory, put the keyboard in
and tighten some screws. There are very detailed
instructions with a lot of
photos that tell you which part to put where, which is nice.
My first impressions of the device are good - it is heavier than I anticipated,
but very well made. It is very easy to assemble and disassemble and it feels
like it can take a hit.
When I started it the first time it took some minutes to boot because of the
new memory module, but then it told me right away that it could not detect an
operating system. As usual when I want to install a new system, I created
a GRML live usb system and tried to boot from this
USB device. But the Framework BIOS did not want to let me boot GRML, telling
me it is blocked by the current security policy. So I started to look in the
BIOS where I could find the SecureBoot configuration, but there was no such
setting anywhere. I then resorted to a Debian Live image, which was allowed
to boot.
I only learned later that the SecureBoot setting is in a separate section
that is not part of the main BIOS configuration dialog. There is an “Administer
Secure Boot” icon which you can choose when starting the device, but
apparently only before you try to load an image that is not
allowed.
I always use my personal minimal install
script to
install my Debian systems, so it did not make that much of a difference to use
Debian Live instead of GRML. I only had to apt install debootstrap before
running the script.
I updated the install script to default to trixie and to also install
shim-signed and after successful installation booted into Debian 13 on the
Framework 12. Everything seems to work fine so far. WiFi works. For sway to
start I had to install firmware-intel-graphics. The touchscreen works without
me having to configure anything (though I don’t have a frame.work stylus, as they
are not yet available), also changing the brightness of the screen worked right
away. The keyboard feels very nice, likewise the touchpad, which I configured
to allow tap-to-click using the tap enabled option of
sway-input.
One small downside of the keyboard is that it does not have a backlight, which
was a surprise. But given that this is a frame.work laptop, there are chances
that a future generation of the keyboard will have backlight support.
The screen of the laptop can be turned all the way around to the back of the
laptops body, so it can be used as a tablet. In this mode the keyboard gets
disabled to prevent accidentally pushing keys when using the device in tablet
mode.
For online meetings I still prefer using headphones with cables over Bluetooth
ones, so I’m glad that the laptop has a headphone jack on the side.
Above the screen there are a camera and a microphone, which both have separate
physical switches to disable them.
I ordered a couple of expansion cards; in the current setup I use two USB-C,
one HDMI and one USB-A. I also ordered a 1TB expansion card and only used this
to transfer my /home, but I soon realized that the card got rather hot, so I
probably won’t use it as a permanent expansion.
I cannot yet say a lot about how long the battery lasts, but I will bring
the laptop to DebConf 25; I guess I’ll find out there. There I might also
have a chance to test if the screen is bright enough to be usable outdoors ;)
In June there was an extended discussion about the ongoing challenges
around mentoring newcomers in Debian. As many of you know, this is a
topic I’ve cared about deeply--long before becoming DPL. In my view, the
issue isn’t just a matter of lacking tools or needing to “try harder” to
attract contributors. Anyone who followed the discussion will likely
agree that it’s more complex than that.
I sometimes wonder whether Debian’s success contributes to the problem.
From the outside, things may appear to “just work”, which can lead to
the impression: “Debian is doing fine without me--they clearly have
everything under control.” But that overlooks how much volunteer effort
it takes to keep the project running smoothly.
We should make it clearer that help is always needed--not only in
packaging, but also in writing technical documentation, designing web
pages, reaching out to upstreams about license issues, finding sponsors,
or organising events. (Speaking from experience, I would have
appreciated help in patiently explaining Free Software benefits to
upstream authors.) Sometimes we think too narrowly about what newcomers
can do, and also about which tasks could be offloaded from overcommitted
contributors.
In fact, one of the most valuable things a newcomer can contribute is
better documentation. Those of us who’ve been around for years may be
too used to how things work--or make assumptions about what others
already know. A person who just joined the project is often in the best
position to document what’s confusing, what’s missing, and what they
wish they had known sooner.
In that sense, the recent "random new contributor’s experience"
posts might be a useful starting point for further reflection. I think
we can learn a lot from positive user stories, like this recent
experience of a newcomer
adopting the courier package. I'm absolutely convinced that those who just
found their way into Debian have valuable perspectives--and that we stand to
learn the most from listening to them.
Lucas Nussbaum has volunteered to handle the paperwork and submit a
request on Debian’s behalf to LLM providers,
aiming to secure project-wide access for Debian Developers. If successful,
every DD will be free to use this access--or not--according to their own
preferences.
For at least 12 years laptops have been defaulting to not having the traditional PC 101-key keyboard function key behaviour, and instead have had other functions like controlling the volume, with a key labelled Fn to toggle between them. It’s been a BIOS option to control whether traditional function keys or controls for volume etc are the default, and for at least 12 years I’ve configured all my laptops to have the traditional function keys as the default.
Recently I’ve been working in corporate IT and have been exposed to many laptops where the BIOS defaults make those keys change volume etc, with no reasonable option for addressing it. This has made me reconsider the options for configuring these things.
The F1 key launching help doesn’t seem to get much use. The main help option in practice is Google (I anticipate controversy about this and welcome comments) and all the software vendors are investigating LLM options for help, which probably won’t involve F1.
F2 is for renaming files but doesn’t get much use. Probably most people who use graphical file managers use the right mouse button for it. I use it when sorting a selection of photos.
F3 is for launching a search (which is CTRL-F in most programs).
ALT-F4 is for closing a window which gets some use, although for me the windows I close are web browsers (via CTRL-W) and terminals (via CTRL-D).
F5 is for reloading a page which is used a lot in web browsers.
F6 moves the input focus to the URL field of a web browser.
F8 is for moving a file which in the degenerate case covers the rename functionality of F2.
F11 is for full-screen mode in browsers which is sometimes handy.
The keys F1, F3, F4, F7, F9, F10, and F12 don’t get much use for me and for the people I observe. The F2 and F8 keys aren’t useful in most programs, F6 is only really used in web browsers – but the web browser counts as “most programs” nowadays.
Here’s the description of Thinkpad Fn keys [2]. I use Thinkpads for fun and Dell laptops for work, so it would be nice if they both worked in similar ways but of course they don’t. Dell doesn’t document how their Fn keys are laid out, but the relevant bit is that F1 to F4 are the same as on Thinkpads which is convenient as they are the ones that are likely to be commonly used and needed in a hurry.
I have used the KDE settings on my Thinkpad to map the F1 to F3 function keys to the Fn equivalents, which are F1 for mute-audio, F2 for vol-down, and F3 for vol-up, to allow using them without holding down the Fn key, while having other function keys such as F5 and F6 keep their usual GUI functionality. Now I have to train myself to use F8 in situations where I usually use F2, at least when using a laptop.
The only other Fn combinations I use are F5 and F6 for controlling screen brightness, but that’s not something I use much.
It’s annoying that the laptop manufacturers forced me to this. Having an Fn key to get extra functions and not needing 101+ keys on a laptop-sized device is a reasonable design choice. But they could have done away with the PrintScreen key to make space for something else. Also, on Thinkpads the touchpad is something that could obviously be removed to gain some extra space, as the TrackPoint does all that’s needed in that regard.
There are many negative articles about “AI” (which is not actual Artificial Intelligence, also known as “AGI”), most of which I think are overblown and often ridiculous.
Resource Usage
Complaints about resource usage are common; training Llama 3.1 could apparently produce as much pollution as “10,000 round trips by car between Los Angeles and New York City”. That’s not great, but when you compare it to the actual number of people doing such drives in the US and the number of people taking commercial flights on that route, it doesn’t seem like such a big deal. Apparently commercial passenger jets cause CO2 emissions per passenger about equal to a car with 2 people. Why is it relevant whether pollution comes from running servers, driving cars, or steel mills? Why not just tax polluters for the damage they do and let the market sort it out? People in the US make a big deal about not being communist, so why not have a capitalist solution: make it more expensive to do undesirable things and let the market sort it out?
ML systems are a less bad use of compute resources than Bitcoin; at least ML systems give some useful results, while Bitcoin has nothing good going for it.
The result of the dot-com bubble bursting was a lot of Internet companies going bankrupt, the investors in those companies losing money, and other companies then buying up their assets and making profitable companies. The cheap Internet we now have was built on the hardware from bankrupt companies which was sold for far less than the manufacture price. That allowed it to scale up from modem speeds to ADSL without the users paying enough to cover the purchase of the infrastructure. In the early 2000s I worked for two major Dutch ISPs that went bankrupt (not my fault) and one of them continued operations in the identical manner after having the stock price go to zero (I didn’t get to witness what happened with the other one). As far as I’m aware random Dutch citizens and residents didn’t suffer from this and employees just got jobs elsewhere.
There are good things being done with ML systems and when companies like OpenAI go bankrupt other companies will buy the hardware and do good things.
NVidia isn’t ever going to have the future sales that would justify a market capitalisation of almost 4 trillion US dollars. This market cap can support paying for new research and purchasing rights to patented technology, in a similar way to how the high stock price of Google supported buying YouTube, DoubleClick, and Motorola Mobility, which are the keys to Google’s profits now.
The Real Upsides of ML
Until recently I worked for a company that used ML systems to analyse drivers for signs of fatigue, distraction, or other inappropriate things (smoking, which is illegal while driving in China, using a mobile phone, etc). That work was directly aimed at saving human lives, with a significant secondary aim of saving wear on vehicles (in the mining industry drowsy drivers damage truck tires and that’s a huge business expense).
There are many applications of ML in medical research such as recognising cancer cells in tissue samples.
There are many less important uses for ML systems, such as recognising different types of pastries to correctly bill bakery customers – technology that was apparently repurposed for recognising cancer cells.
The ability to recognise objects in photos is useful. It can be used by people who want to learn about random objects they see and could be used for helping young children learn about their environment. It also has some potential for assisting visually impaired people; it wouldn’t be good for safety-critical systems (don’t cross a road because an ML system says there are no cars coming) but could be useful for identifying objects (is this a lemon or a lime). The Humane AI Pin had some real potential to do good things but there wasn’t a suitable business model [2]; I think that someone will develop similar technology in a useful way eventually.
Even without trying to do what the Humane AI Pin attempted, there are many ways for ML based systems to assist phone and PC use.
ML systems allow analysing large quantities of data and giving information that may be correct. When used by a human who knows how to recognise good answers this can be an efficient way of solving problems. I personally have solved many computer problems with the help of LLM systems while skipping over many results that were obviously wrong to me. I believe that any expert in any field that is covered in the LLM input data could find some benefits from getting suggestions from an LLM. It won’t necessarily allow them to solve problems that they couldn’t solve without it but it can provide them with a set of obviously wrong answers mixed in with some useful tips about where to look for the right answers.
I don’t think it’s reasonable to expect ML systems to make as much impact on society as the industrial revolution, or the agricultural revolutions which took society from more than 90% farm workers to less than 5%. That doesn’t mean everything will be fine, but it is something that can seem OK after the changes have happened. I’m not saying “apart from the death and destruction everything will be good”; the death and destruction are optional. Improvements in manufacturing and farming didn’t have to involve poverty and death for many people; improvements to agriculture didn’t have to involve overcrowding and death from disease. This was an issue of political decisions that were made.
The Real Problems of ML
Political decisions that are being made now have the aim of making the rich even richer and leaving more people in poverty and in many cases dying due to being unable to afford healthcare. The ML systems that aim to facilitate such things haven’t been as successful as evil people have hoped but it will happen and we need appropriate legislation if we aren’t going to have revolutions.
There are documented cases of suicide being inspired by ChatGPT systems [4]. There have been people inspired towards murder by ChatGPT systems, but AFAIK no-one has actually succeeded in such a crime yet. There are serious issues that need to be addressed with the technology and with legal constraints about how people may use it. It’s interesting to consider the possible uses of ChatGPT systems for providing suggestions to a psychologist; maybe ChatGPT systems could be used to alleviate mental health problems.
The use of LLM systems for cheating on assignments etc isn’t a real issue. People have been cheating on assignments since organised education was invented.
There is a real problem of ML systems based on biased input data issuing decisions that are the average of the bigotry of the people who provided the input. That isn’t going to be worse than the current situation of bigoted humans making decisions based on hate and preconceptions, but it will be more insidious. It is possible to test for this; for example, a bank could test its mortgage-approval ML system by changing one factor at a time (name, gender, age, address, etc) and seeing if it changes the answer. If it turns out that the ML system is biased on names, then the input data could have names removed. If it turns out to be biased about address, then there could be weights put in to oppose that.
For a long time there has been excessive trust in computers. Computers aren’t magic; they just do maths really fast and implement choices based on the work of programmers - who have all the failings of other humans. Excessive trust in a rule-based system is less risky than excessive trust in an ML system where no-one really knows why it makes the decisions it makes.
Self-driving cars kill people; this is the truth that Tesla stockholders don’t want people to know.
Companies that try to automate everything with “AI” are going to be in for some nasty surprises. Getting computers to do everything that humans do in any job would require a large part of actual intelligence in a computer, which, if it is achieved, will raise an entirely different set of problems.
I’ve previously blogged about ML Security [5]. I don’t think this will be any worse than all the other computer security problems in the long term, although it will be more insidious.
How Will It Go?
Companies spending billions of dollars without firm plans for how to make money are going to go bankrupt no matter what business they are in. Companies like Google and Microsoft can waste some billions of dollars on AI chat systems and still keep going as successful businesses. Companies like OpenAI that do nothing other than such chat systems won’t fare well. But their assets can be used by new companies when sold at less than 10% of the purchase price.
Companies like NVidia that have high stock prices based on the supposed ongoing growth in use of their hardware will have their stock prices crash. But the new technology they develop will be used by other people for other purposes. If hospitals can get cheap diagnostic ML systems because of unreasonable investment into “AI” then that could be a win for humanity.
Companies that bet their entire business on AI even when it’s not necessarily their core business (as Tesla has done with self driving) will have their stock price crash dramatically at a minimum and have the possibility of bankruptcy. Having Tesla go bankrupt is definitely better than having people try to use them as self driving cars.
Armadillo is a powerful
and expressive C++ template library for linear algebra and scientific
computing. It aims towards a good balance between speed and ease of use,
has a syntax deliberately close to Matlab, and is useful for algorithm
development directly in C++, or quick conversion of research code into
production environments. RcppArmadillo
integrates this library with the R environment and language–and is
widely used by (currently) 1241 other packages on CRAN, downloaded 40.4 million
times (per the partial logs from the cloud mirrors of CRAN), and the CSDA paper (preprint
/ vignette) by Conrad and myself has been cited 634 times according
to Google Scholar.
Conrad released a minor
version 14.6.0 yesterday which offers new accessors for non-finite
values. And despite being in Beautiful British Columbia on vacation, I
had wrapped up two rounds of reverse dependency checks preparing his
14.6.0 release, and shipped this to CRAN this morning where it passed
with flying colours and no human intervention—even with over 1200
reverse dependencies. The changes since the last CRAN release are summarised
below.
Changes in
RcppArmadillo version 14.6.0-1 (2025-07-02)
Upgraded to Armadillo release 14.6.0 (Caffe Mocha)
Added balance() to transform matrices so that column
and row norms are roughly the same
Added omit_nan() and omit_nonfinite()
to extract elements while omitting NaN and non-finite values
Added find_nonnan() for finding indices of non-NaN
elements
Added standalone replace() function
The fastLm() help page now mentions that options to
solve() can control its behavior.
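As a hedged illustration of the new accessors (function names taken from the changelog above; the exact signatures are my assumption, following Armadillo's usual convention of returning the surviving elements as a vector), a small RcppArmadillo function could look like:

// [[Rcpp::depends(RcppArmadillo)]]
#include <RcppArmadillo.h>

// Sketch using an Armadillo 14.6.0 accessor named in the changelog.
// [[Rcpp::export]]
arma::vec keep_finite(const arma::mat &X) {
    return arma::omit_nonfinite(X);  // drop NaN and +/-Inf entries
}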
With a friendly Canadian hand wave from vacation in Beautiful British
Columbia, and speaking on behalf of the Rcpp Core Team, I am excited to
share that the (regularly scheduled bi-annual) update to Rcpp just brought version 1.1.0 to CRAN. Debian builds have been prepared and
uploaded, Windows and macOS builds should appear at CRAN in the next few
days, as will builds for different Linux distributions–and of course r2u should catch up
tomorrow as well.
The key highlight of this release is the switch to C++11 as minimum
standard. R itself did so in
release 4.0.0 more than half a decade ago; if someone is really tied to
an older version of R and an equally old compiler then using an
older Rcpp with it has to be
acceptable. Our own tests (using continuous integration at GitHub) still go back all the way to R
3.5.* and work fine (with a new-enough compiler). In the
previous release post, we commented that we had only one reverse
dependency (falsely) come up in the tests by CRAN; this time there were none
among the well over 3000 packages using Rcpp at CRAN. Which really is quite
amazing, and possibly also a testament to our rigorous continued testing
of our development and snapshot releases on the key branch.
This release continues with the six-month January-July cycle started
with release
1.0.5 in July 2020. As just mentioned, we do of course make interim
snapshot ‘dev’ or ‘rc’ releases available. While we no longer regularly
update the Rcpp drat
repo, the r-universe
page and repo now really fill this role admirably (and with many
more builds besides just source). We continue to strongly encourage
their use and testing—I run my systems with these versions which tend to
work just as well, and are of course also fully tested against all
reverse-dependencies.
Rcpp has long established itself
as the most popular way of enhancing R with C or C++ code. Right now,
3038 packages on CRAN depend on
Rcpp for making analytical code go
faster and further. On CRAN, 13.6% of all packages depend (directly) on
Rcpp, and 61.3% of all compiled
packages do. From the cloud mirror of CRAN (which is but a subset of all
CRAN downloads), Rcpp has been
downloaded 100.8 million times. The two published papers (also included
in the package as preprint vignettes) have, respectively, 2023 (JSS, 2011) and 380 (TAS, 2018)
citations, while the book (Springer useR!,
2013) has another 695.
As mentioned, this release switches to C++11 as the minimum standard.
The diffstat display in the CRANberries
comparison to the previous release shows how several (generated)
source files with C++98 boilerplate have now been removed; we also
flattened a number of if/else sections we no
longer need to cater to older compilers (see below for details). We also
accommodated the demands for tighter use of the C API
of R by removing DATAPTR and CLOENV use. A
number of other changes are detailed below.
The full list below details all changes, their respective PRs and, if
applicable, issue tickets. Big thanks from all of us to all
contributors!
Changes in
Rcpp release version 1.1.0 (2025-07-01)
Changes in Rcpp API:
C++11 is now the required minimal C++ standard
The std::string_view type is now covered by
wrap() (Lev Kandel in #1356 as discussed
in #1357)
A last remaining DATAPTR use has been converted to
DATAPTR_RO (Dirk in #1359)
Under R 4.5.0 or later, R_ClosureEnv is used instead
of CLOENV (Dirk in #1361 fixing #1360)
Use of lsInternal switched to
lsInternal3 (Dirk in #1362)
Removed compiler detection macro in a header cleanup setting
C++11 as the minimum (Dirk in #1364 closing #1363)
Variadic templates are now used unconditionally given C++11 (Dirk
in #1367
closing #1366)
Remove RCPP_USING_CXX11 as a #define as
C++11 is now a given (Dirk in #1369)
Additional cleanup for __cplusplus checks (Iñaki in
#1371 fixing #1370)
Unordered set construction no longer needs a macro for the
pre-C++11 case (Iñaki in #1372)
Lambdas are supported in Rcpp Sugar functions (Iñaki in #1373)
The Date(time)Vector classes now have a default ctor (Dirk in #1385 closing #1384)
Fixed an issue where Rcpp::Language would duplicate its arguments
(Kevin in #1388, fixing #1386)
Changes in Rcpp Attributes:
The C++26 standard now has plugin support (Dirk in #1381 closing #1380)
Changes in Rcpp Documentation:
Several typos were corrected in the NEWS file (Ben Bolker in #1354)
The Rcpp Libraries vignette mentions PACKAGE_types.h
to declare types used in RcppExports.cpp (Dirk in #1355)
The vignettes bibliography file was updated to current package
versions, and now uses doi references (Dirk in #1389)
Changes in Rcpp Deployment:
Rcpp.package.skeleton() creates ‘URL’ and
‘BugReports’ if given a GitHub username (Dirk in #1358)
R 4.4.* has been added to the CI matrix (Dirk in #1376)
Tests involving NA propagation are skipped under linux-arm64 as
they are under macos-arm (Dirk in #1379 closing #1378)
Another short status update of what happened on my side last
month. Phosh 0.48.0 is out with nice improvements, phosh.mobi e.V. is
alive, helped a bit to get cellbroadcastd
out, osk bugfixes and some more:
My Debian contributions this month were all
sponsored by
Freexian. This was a very light month; I did a few things that were easy or
that seemed urgent for the upcoming trixie release, but otherwise most of my
energy went into
Debusine. I’ll be giving
a talk about that at DebConf in a couple of weeks; this is the first DebConf
I’ll have managed to make it to in over a decade, so I’m pretty excited.
After reading a bunch of recent discourse about X11 and Wayland, I decided
to try switching my laptop (a Framework 13 AMD running Debian trixie with
GNOME) over to Wayland. I don’t remember why it was running X; I think I
must have either inherited some configuration from my previous laptop (in
which case it could have been due to anything up to ten years ago or so), or
else I had some initial problem while setting up my new laptop and failed to
make a note of it. Anyway, the switch was hardly noticeable, which was great.
One problem I did notice is that my preferred terminal emulator, pterm,
crashed after the upgrade. I run a slightly-modified version from git to
make some small terminal emulation changes that I really must either get
upstream or work out how to live without one of these days, so it took me a
while to notice that it only crashed when running from the packaged version,
because the crash was in code that only runs when pterm has a set-id bit.
I reported this upstream, they quickly fixed
it,
and I
backported
it to the Debian package.
As I often do, this year I have also prepared a set of personalized maps
for your OpenPGP keysigning at DebConf25, in Brest!
What is that, dare you ask?
One of the not-to-be-missed traditions of DebConf is a Key-Signing Party
(KSP) that spans the whole conference! Travelling from all the corners of
the world to a single, large group gathering, we have the ideal opportunity
to spread some communicable diseases (er, trust) on your peers’
identities and strengthen Debian’s OpenPGP keyring.
But whom should you approach for keysigning?
Go find yourself in the nice listing I have
prepared. By clicking on your
long keyid (in my case, the link labeled 0x2404C9546E145360), anybody can
download your certificate (public key + signatures). The
SVG and
PNG
links will yield a graphic version of your position within the DC25
keyring, and the
TXT
link will give you a textual explanation of it. (of course, your links will
differ, yada yada…)
Please note this is still a preview of our KSP information: You will
notice there are several outstanding things for me to fix before marking
the file as final. First, some names have encoding issues I will
fix. Second, some keys might be missing — if you submitted your key as part
of the conference registration form but it is not showing, it must be
because my scripts didn’t find it in any of the queried keyservers. My
scripts are querying the following servers:
Make sure your key is available in at least some of them; I will try to do
a further run on Friday, before travelling, or shortly after arriving in
France.
If you didn’t submit your key in time, but you will be at DC25, please
mail me stating [DC25 KSP] in your mail title, and I will manually add it
to the list.
On (hopefully!) Friday, I’ll post the final, canonical KSP coordination
page; you should download it and calculate its SHA256 sum. We will have
printed out convenience sheets to help you do your keysigning at the front
desk.
Just when you thought it was safe to go to court, think again.
Your lawyer might not have your best interests at heart and even
worse, they may be working for the other side.
In 2014, journalists discovered Victoria Police had a secret
informer, a mole snitching on the underworld, identified by the
code name Lawyer X.
It was beyond embarrassing: not only did police have the burden
of protecting their secret informer, they may also have to
protect her relatives who share the same name. The most
notable among them was the informer's uncle,
James Gobbo,
a supreme court judge who subsequently served as Governor
of the State of Victoria.
There is absolutely no suggestion that Lawyer X's
relatives had anything to do with her misdeeds. Nonetheless,
the clients she betrayed were the biggest crooks in town,
until, of course, her unethical behavior gave them the opportunity
to have those convictions overturned and present themselves as
model citizens once again. Any relatives or
former business associates of Lawyer X, including
the former governor, would be in danger for the rest of their
lives.
James Gobbo and his son
James Gobbo junior are both Old Xaverians,
graduates of Melbourne's elite Jesuit school for boys, like my
father and me.
Lawyer X was eventually revealed to be
Nicola Gobbo,
a graduate of the elite girls school Genazzano FCJ College.
My aunt, that is my father's sister, also went to Genazzano.
Alumni communications typically refer to Old Xaverians with
the symbols "OX" and the year of graduation, for example,
"OX96" for somebody who graduated in 1996.
Whenever a scandal like this arises, if the suspect is a
graduate of one of these elite schools, the newspapers will be
very quick to dramatize the upper class background.
The case of Lawyer X was head and shoulders above
any other scandal: a former prefect and class captain who
made a career out of partying with drug lords, having their children
and simultaneously bugging their conversations for the police.
Stories like this are inconvenient for those elite schools
but in reality, I don't feel the schools are responsible when
one of these unlucky outcomes arises. The majority of students
are getting a head start in life but there is simply nothing that any
school can do to prevent one or two alumni going off the rails
like this.
Having been through this environment myself, I couldn't
believe what I was seeing in 2023 when the Swiss financial regulator (FINMA)
voluntarily published a few paragraphs from a secret judgment, using the
code name "X" to refer to a whole law office (cabinet juridique in
French) of jurists in Geneva who had ripped off their clients.
The Gobbo family, Genazzano FCJ College and alumni have finally been
vindicated. The misdeeds of Lawyer X pale in comparison to the
crimes of the Swiss law firm X.
Lawyer X was a former member of a political party.
One of the jurists from Law firm X was working for the rogue law office
at the same time that he was a member of Geneva city council.
He is a member of the same political party as the Swiss president from
that era.
In 1993, Lawyer X was an editor of Farrago, Australia's leading
student newspaper. Law firm X used the
Swiss media to write positive stories about their company.
When the same company was outlawed, nanny-state laws prevented the media from reporting
anything at all about its downfall. Ironically,
one of my former clients was also an editor of Farrago before he became
Australia's Minister for Finance. The word Farrago gives a fascinating
insight into the life of Lawyer X.
Here is a sample sentence using the word Farrago in the
Cambridge dictionary:
... told us a farrago of lies
When FINMA revealed the secret judgment shuttering Law Firm X,
Urban Angehrn, the FINMA director, resigned citing health reasons.
His dramatic resignation helped bury news stories about the Law firm X judgment.
In Australia, a number of chief commissioners have resigned. In fact, Victoria
Police have been through three leaders in the last year.
Who predicted Elon Musk would acquire Twitter?
In 2018, I attended the UN Forum on Business and Human Rights,
where I made this brief intervention predicting the future of Facebook and
Twitter. When
Elon Musk purchased Twitter in 2022, he called it X.
Go figure.
On Monday I had my Viva Voce (PhD defence), and passed (with minor
corrections).
Post-viva refreshment
It's a relief to have passed after 8 years of work. I'm not quite
done of course, as I have the corrections to make! Once those are
accepted I'll upload my thesis here.
We are pleased to announce that AMD
has committed to sponsor DebConf25 as a
Platinum Sponsor.
The AMD ROCm platform includes programming models, tools, compilers,
libraries, and runtimes for AI and HPC solution development on AMD GPUs.
Debian is an officially supported platform for AMD ROCm and a growing
number of components are now included directly in the Debian distribution.
For more than 55 years AMD has driven innovation in high-performance
computing, graphics and visualization technologies.
AMD is deeply committed to supporting and contributing to open-source
projects, foundations, and open-standards organizations, taking pride in
fostering innovation and collaboration within the open-source community.
With this commitment as Platinum Sponsor, AMD is contributing to the
annual Debian Developers’ Conference, directly supporting the progress
of Debian and Free Software.
AMD contributes to strengthening the worldwide community that
collaborates on Debian projects year-round.
Thank you very much, AMD, for your support of DebConf25!
Become a sponsor too!
DebConf25 will take place from 14 to 20
July 2025 in Brest, France, and will be preceded by DebCamp, from 7 to 13
July 2025.
Debian uses LDAP for storing information about users, hosts and other
objects. The wrapping around this is called userdir-ldap, or ud-ldap
for short. It provides a mail gateway, web UI and a couple of schemas
for different object types.
Back in late 2018 and early 2019, we (DSA) removed support for ISO5218
in userdir-ldap, and removed the corresponding data. This made some
people upset, since they were using that information, as imprecise as
it was, to infer people’s pronouns. ISO5218 has four values for sex:
unknown, male, female, and N/A. This might have been acceptable when
the standard was new (in 1976), but it wasn’t acceptable any longer in
2018.
A couple of days ago, I finally got around to adding support to
userdir-ldap to let people specify their pronouns. As it should be,
it’s a free-form text field. (We don’t have localised fields in LDAP,
so it probably makes sense for people to put the English version of
their pronouns there, but the software does not try to control that.)
So far, it’s only exposed through the LDAP gateway, not in the web UI.
If you’re a Debian developer, you can set your pronouns using
echo "pronouns: he/him" | gpg --clearsign | mail changes@db.debian.org
I see that four people have already done so in the time I’ve taken to
write this post.
JP was puzzled that using podman run --memory=2G … would not result in the 2G limit being visible inside the container.
While we were able to identify this as a virtualization problem — tools like free(1) only look at /proc/meminfo and that is not virtualized inside a container; you'd have to look at /sys/fs/cgroup/memory.max and friends instead — I couldn't leave it at that.
And then I remembered there is actually something that can provide a virtual (cgroup-aware) /proc for containers: LXCFS!
But does it work with Podman?!
I always used it with LXC, but there is technically no reason why it wouldn't work with a different container solution — cgroups are cgroups after all.
As we all know: there is only one way to find out!
Take a fresh Debian 12 VM, install podman and verify things behave as expected:
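For example (a minimal reproduction; the exact image and command are my choice, not the original transcript):

podman run --rm --memory=2G debian:12 grep MemTotal /proc/meminfo
# reports the host's total memory, not the 2G limit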
And after installing (and starting) lxcfs, we can use the virtual /proc/meminfo it generates by bind-mounting it into the container (LXC does that part automatically for us):
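Roughly like this (the lxcfs path is the one the package ships; the exact invocation is my reconstruction):

apt install lxcfs
# bind-mount the cgroup-aware meminfo over the container's /proc/meminfo
podman run --rm --memory=2G \
  -v /var/lib/lxcfs/proc/meminfo:/proc/meminfo \
  debian:12 grep MemTotal /proc/meminfo
# now reports the 2G limit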
The same of course works with all the other proc entries lxcfs provides (cpuinfo, diskstats, loadavg, meminfo, slabinfo, stat, swaps, and uptime here), just bind-mount them.
And yes, free(1) now works too!
bash-5.1# free -m
               total        used        free      shared  buff/cache   available
Mem:            2048           3        1976           0          67        2044
Swap:              0           0           0
Just don't blindly mount the whole /var/lib/lxcfs/proc over the container's /proc.
It did work (as in: "bash and free didn't crash") for me, but with /proc/$PID etc missing, I bet things will go south pretty quickly.
The late Pope Francis asked a group of roughly four hundred bishops to work together from 2021 to 2024 to analyze the way Catholic
faithful interact and develop as a movement. Formally, this committee of bishops was given the title
Synod on Synodality. The term Synod is widely used in all Christian religions to refer to committees, councils or meetings of such groups at any level of the church hierarchy. The term
synodality is specific to the Catholic Church. The Synod has an official web page where it
tries to explain synodality.
Several working groups were created on a wide range of topics. In this analysis, I will limit myself to working group number three, which examined the topic
of mission in the digital environment. Afterwards, I will provide some of my own evidence on the topics the working group is considering.
Amateur radio
packet repeaters
also fall within the scope, although amateur radio licenses do not permit the explicit transmission of religious material.
The Vatican was an early adopter of shortwave radio. Pope Leo XIV and Monsignor Lucio Adrian Ruiz, secretary of the Dicastery for Communication, visited the headquarters of Vatican Radio this week:
Reading the results of both the working group and the Synod as a whole, I believe that the Church as a whole has decided neither to embrace nor to reject
social control media. They are acknowledging that it is part of the digital landscape and trying to decide how the Church relates to it.
How the synodal process evolved at a high level
Before going into the details, here is an overview of the process and of the reports published at various points in time, with direct links to the translated editions.
The main website of the Synod is
www.Synod.va and it is available in several languages. Apparently, the content was created in Italian and translated into English and other languages. This makes it a bit harder to read.
In October 2023, a long meeting was held in Rome during which an initial draft of the report was drawn up.
Key points of the final report in relation to the digital environment
In point 58, the report notes that Christians may attempt to proclaim the Gospel through their participation in a digital environment.
58. ... Christians, each according to their diverse roles - within the family and other states of life; in the workplace and in their professions; engaged civilly, politically, socially or ecologically; in the development of a culture inspired by the Gospel, including the evangelisation of the digital environment - walk the paths of the world and proclaim the Gospel where they live, sustained by the gifts of the Spirit.
59. In doing so, they ask the Church not to abandon them, but rather to make them feel sent and sustained in mission.
This point seems to encourage the Church to reflect on the situation faced by those who are under the influence of a digital environment, but it does not necessarily imply that the digital environment is either good or bad.
In point 112, concerning mobility, which includes people at all levels of society, the report notes:
Some maintain strong bonds with their country of origin, especially thanks to digital media, and for this reason may find it difficult to establish bonds in their new country; others find themselves living without roots.
This is an excellent observation. In Europe, I have met couples whose relationships depend entirely on the devices they use for machine translation. When newcomers arrive in a town, WhatsApp culture encourages the neighbors to spend weeks or months talking behind their backs without ever looking them in the eye.
113. The spread of digital culture, particularly evident among young people, is profoundly changing their experience of space and time, influencing their daily activities, communication and interpersonal relationships, including faith. The opportunities offered by the internet are reshaping relationships, bonds and boundaries. Today we often experience loneliness and marginalisation, even though we are more connected than ever. Moreover, those with their own economic and political interests can use
social media to spread ideologies and generate aggressive and manipulative forms of polarisation. We are not well prepared for this and must dedicate resources so that the digital environment becomes a prophetic space for mission and proclamation. Local Churches must encourage, support and accompany those who engage in mission in the digital environment. Christian digital communities and groups, particularly young people, are also called to reflect on the way they create bonds of belonging, promoting encounter and dialogue. They must offer formation to their peers, developing a synodal way of being Church. The internet, constituted as a web of connections, offers new opportunities to better live out the synodal dimension of the Church.
This paragraph acknowledges the dangers of digital technology, particularly
social control media, and the key words are "We are not well prepared for this". However, it suggests that local churches should use "encouragement" to reduce these risks online. I don't think "encourage" is the right word to use, but nor do I think they should discourage.
149. The synodal process has insistently drawn attention to certain specific areas of formation of the People of God for synodality. The first of these concerns the impact of the digital environment on learning processes, on concentration, on the perception of self and of the world, and on the building of interpersonal relationships. Digital culture constitutes a crucial dimension of the Church's witness in contemporary culture and an emerging missionary field. This requires ensuring that the Christian message is present online in reliable ways that do not ideologically distort its content. Although digital media have great potential to improve our lives, they can also cause harm and injury through bullying, disinformation, sexual exploitation and addiction. The Church's educational institutions must help children and adults develop critical skills for navigating the web safely.
These comments are very pertinent and very consistent with my testimony, part of which is reproduced later in this report.
150. Another area of great importance is the promotion in all ecclesial contexts of a culture of safeguarding, making communities ever safer places for minors and vulnerable persons.
When I raised this topic in free software communities, my family was attacked mercilessly. See the
emails I sent at the end of 2017 and the comments about IBM
Red Hat later in this report.
Fonti relative al gruppo di lavoro tre, la missione in un ambiente digitale
Il sito web di Synod.va ha pubblicato l'elenco di
tutti i gruppi di lavoro . Il sito web include un breve video su ciascun gruppo e un link ai loro rapporti più recenti.
Il video del gruppo di lavoro tre dura poco meno di due minuti. Ecco alcune delle citazioni chiave e le mie osservazioni:
"Oggi le persone, soprattutto i giovani, hanno imparato a vivere contemporaneamente e senza soluzione di continuità sia negli spazi digitali che in quelli fisici."
Le affermazioni contenute nel video non sono quelle presentate nel rapporto finale. Ci arriveremo. Ciononostante, ogni volta che
si parla
di controllo sociale sui media , si tende a generalizzare sull'impossibilità di vivere senza. Ogni volta che vediamo un'affermazione come questa, è importante contestarla.
"In che modo la Chiesa utilizza e si appropria della cultura digitale?"
La domanda retorica è interessante. In realtà, i superpoteri della Silicon Valley usano e si appropriano di qualsiasi contenuto che forniamo loro. La chiesa non usa loro, usa noi. Come pensi che siano diventati così ricchi?
Una domanda più appropriata potrebbe essere: "In che modo la Chiesa
supplisce alle carenze delle culture digitali?".
"Questo ambiente è ormai "indistinguibile dalla sfera della vita quotidiana".
Papa Francesco era un uomo intelligente e aveva intorno a sé persone intelligenti, tra cui il defunto Cardinale Pell. Possiamo far risalire questa citazione al pensiero di Alan Turing. Turing è considerato il padre dell'informatica e un martire. Turing ci ha trasmesso esattamente lo stesso concetto nel leggendario test di Turing, che lo stesso Turing definì il gioco dell'imitazione nel 1949.
Un altro modo di interpretare questo fenomeno è dire che le masse sono state plagiate dai signori della Silicon Valley.
The choices made by Facebook's leadership are a huge problem - for children, for public safety, for democracy - and that is why I came forward. And let's be clear: it doesn't have to be this way. We are here today because of the deliberate choices Facebook has made.
The working group summary continues...
"To proclaim the Gospel effectively in our contemporary culture, we must discern the opportunities and challenges presented by this new dimension of 'place'"
This particular quote acknowledges that there are both opportunities and challenges. The jubilee year is dedicated to hope, and I sincerely hope that the members of the working group are reading the news from the whistleblowers, the child psychologists and even the coroners who are warning us about the impact of Facebook and its ilk.
Nonetheless, the report includes the phrase "greater immersion", and I feel the Church should not assume that "greater immersion" is a default course of action.
The summary also touches on the concept of jurisdiction. The Catholic Church has traditionally organised itself along geographical lines. The internet allows people to connect and form virtual communities without any geographical connection whatsoever.
Incidentally, before the internet, the Church could move high-risk priests from one parish to another without worrying that anybody would make the connection. I went through the documents of the Australian Royal Commission with a fine-tooth comb and found this note from the legendary Father X___:
This means that if anyone in Australia were to hear that Father Z___ is receiving treatment over something that happened in Boston and went there to find out more, they would reach a dead end.
The letter in question was written just before the internet became a mainstream reality. Reading those words today is a stark reminder of how much the internet is turning our lives upside down.
The working group goes on to state that it is looking for "practical recommendations or proposals" from the whole community on any topic related to the Church's mission in the digital environment.
People engaged in the free software movement, whether they are Catholic or not, can contact their local diocese to find out who is coordinating the local response to these challenges.
Another phrase that caught my attention:
"today we live in a digital culture"
Not exactly. Some would say that a digital culture is being imposed on us. Institutions such as politics and the media are dependent on it and put it on a pedestal. It is therefore all the more vital that other institutions, such as the Church, take on the task of questioning every aspect of digital culture and promoting viable alternatives.
Life without mobile phones, life without apps
Mobile phones and apps are closely related. Some people choose to live without a smartphone; in other words, they only have half the problems of a full mobile phone. Some people also choose to have a smartphone without the Google or Apple app store, for example, those who install Replicant or LineageOS and use the F-Droid app store to limit their phone to ethical apps.
In practical terms, there are people who cannot find their way around their home town without using their phone. An interesting question for the church is: what percentage of the faithful would be unable to identify the most direct route from their home to the nearest church without using an app? It would be interesting to analyse the responses by various factors, such as age and the number of years of residence in the parish.
Another key question, closely related to the previous one, is: how many parishioners can remember the mass times and the key events of the parish calendar without looking at their phone? It is great to have this information visible on the parish website; however, when people are genuinely engaged in the parish and the community, this information is committed to memory. The more widely this knowledge is held throughout a community, the more resilient that community is.
Authentication systems undermine human dignity
Today we frequently see companies insisting that they need our mobile phone numbers to "authenticate" us or to "sign" documents by SMS.
This kind of thing is particularly sinister. Many people are familiar with the Nazi practice of branding identification numbers onto the skin of Jewish prisoners. Mobile phone numbers serve a similar function. Even though the numbers are not branded onto our skin, it is often inconvenient for people to change their number.
There are many closely related phenomena, including websites that require users to authenticate through a Gmail or Facebook account.
At the level of Church, state, education, healthcare and financial services, it is vital to ensure that everybody can participate in the manner they wish without giving up their dignity.
The Church needs to speak up about these topics with the same voice it uses on topics such as abortion.
Consent needs to be emphasised
Concerns about consent and coercion have become a hot topic in today's world. Ironically, the social control media platforms that pretend to help women find a platform violate the principle of consent in many other ways.
Consider, for example, those who put time into creating a profile on Facebook or Twitter, sometimes over many years, connecting with hundreds or thousands of followers, only to find that they are required to add their mobile phone number to their account. If they don't, the account is locked. There is no genuine technical reason for a mobile phone number to be attached to the account, as many of these services worked in exactly the same way for many years before such demands became common.
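On that point, it is easy to demonstrate that authentication does not technically require a phone number. Standard one-time-password schemes such as TOTP (RFC 6238) derive login codes from nothing more than a shared secret and the current time. The following is a minimal sketch using only the Python standard library; the secret shown is a throwaway example for illustration, not anything issued by a real service.
# Minimal sketch of RFC 6238 time-based one-time passwords (TOTP).
# It shows that strong two-factor authentication needs only a shared
# secret: no mobile phone number, no SMS gateway, no third party.
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Return the current one-time code for a base32-encoded secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period          # current time step
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Throwaway demo secret; real services issue theirs via QR code.
print(totp("JBSWY3DPEHPK3PXP"))
A service verifying such a code needs nothing from the user except the code itself, which is exactly how these platforms operated for years before the phone number demands began.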
People do not freely consent to sharing their phone numbers with Mark Zuckerberg and Elon Musk. The services have been bastardised to ambush their users with these demands.
It is significant that this culture of ambush and coercion is creeping into society. In Australia, Chanel Contos launched a highly publicised petition/journal with stories from women at elite private schools who felt they had been ambushed, bullied and coerced into unwanted physical encounters.
Ironically, Ms Contos publicised her concerns through the very same platforms that are undermining our understanding of consent and privacy.
The Church itself has had to do some deep soul-searching on the topics of consent and abuse of power. This puts it in an interesting position, where we can argue that even considering some of the most shocking revelations of abuse, those responsible are the lesser evil compared to the overlords of Silicon Valley.
It is striking how quickly the institutions of Silicon Valley abandoned all checks and balances, seeing fit to do whatever pleases them. The Catholic Church and other religious institutions can now draw on what they have learned from critically analysing their own mistakes, and warn society how foolish it would be to go down the same road with these digital gangsters.
Digital technology is much more than social control media
The church is no stranger to technology. The first printing presses were installed on church premises. Caxton installed England's first printing press at Westminster Abbey. Other sites included Oxford and St Albans Abbey. Before printing, reading and writing were the preserve of clerics, and many of their works existed only in Latin. Printing made it possible to mass-produce Bibles in German and English. This, in turn, had an enormous impact on the standardisation of language, just as it helped standardise the moral attitudes that Silicon Valley is now demolishing beneath us. The King James Version of the Bible is widely recognised for its impact on the English language.
The standardisation of language was only one side effect of this invention. The Reformation was another. As people acquired books and the ability to read, they became less dependent on the clergy.
Likewise, social control media is having an impact on our culture, for better and for worse. Just as the printing press enabled the Reformation, social control media may lead to further changes in the way human beings organise themselves around religious structures and beliefs. The lords of Silicon Valley are actively contemplating these roles. Elon Musk has even dressed up as Satan. If the Catholic Church does not offer a convincing alternative to these shifts in power, it will be stripped of its influence.
Frances Haugen (Facebook whistleblower): almost nobody outside Facebook knows what happens inside Facebook. The company's leadership keeps vital information from the public, from the US government, from its shareholders and from governments around the world. The documents I have provided show that Facebook has repeatedly misled us about what its own research reveals about the safety of children, about its role in spreading hateful and divisive messages, and much more.
While previous generations turned to their clergy for advice, and later read the Bible for themselves, young people today turn to a search engine, and tomorrow they may be relying on artificial intelligence. We can already see how search engines, social media and AI bots push people into ever-greater conflict with their neighbours, or lead them down dark paths of isolation, self-harm and suicide.
Catholic Church resources relevant to the digital environment
The Catholic Church plays a major role in education and schools, so it can see the impact of social control media at first hand, and it can enforce bans for children and provide training to staff and parents.
Teachers, whether employed by church or state, have reported a rise in bullying from parents who gang up on messaging apps. In a recent case, British police sent six officers to humiliate a parent who had used WhatsApp to protest about the local school. The conflict, the adversarial nature of this environment and the enormous waste of police resources are all consequences of the way the technology is designed and deployed in society. Every incident like this offers a glimpse of the opportunities for the Catholic Church to ask, "is there a better way?".
Frances Haugen's words help explain to the parents of young children why they were besieged by six police officers:
I saw Facebook repeatedly encounter conflicts between its own profits and our safety. Facebook consistently resolved these conflicts in favour of its own profits. The result has been a system that amplifies division, extremism and polarisation, and undermines societies around the world.
The Catholic Church is a significant employer in many countries. This gives it the power to make decisions about the use of mobile phones and messaging apps within the employer/employee relationship. An employer cannot ban employees from using these devices in their own time, but it can decide to eliminate their official use for work purposes. The employer/employee relationship offers yet another opportunity to provide training on the importance of human dignity over the demands of our devices.
The public agenda in the digital environment, the abortion of our species
With many politicians and journalists now living their lives under social control media, their ability to judge which topics deserve public debate is heavily influenced by whatever topics appear to be trending online. Topics are assumed to trend online as a consequence of public interest, when in reality the operators of the online platforms exert their influence to ensure that certain issues appear to grow organically, while significant but inconvenient topics are conveniently buried in the news feed.
In this context, the Catholic Church offers an alternative route for putting issues on the agenda of public debate, regardless of whether a particular issue appears to be "trending" or not. This power is most often used for issues close to Church teaching, such as lobbying on abortion, but there is no reason why the Church cannot use the same resources to lobby against the abortion of the human race by artificial intelligence.
Helping the victims of discrimination by the lords of Silicon Valley and the online mobs
The origins of the Catholic Church go back to the persecution of Jesus and the martyrs Saint Peter and Saint Paul.
But to pass from ancient examples, let us come to those who, in the times closest to us, have contended for the faith. Let us take the noble examples of our own generation. Through jealousy and envy, the greatest and most righteous pillars of the Church were persecuted and contended even unto death. Let us set before our eyes the good Apostles. Peter, through unjust envy, endured not one or two but many labours, and at last, having borne his testimony, departed to the place of glory that was his due. Paul, too, through envy, showed by example the prize that is given to patience: seven times he was in chains; he was banished; he was stoned; having become a herald both in the East and in the West, he gained the noble renown due to his faith; and having preached righteousness to the whole world, and having come to the farthest bounds of the West, and having borne witness before the rulers, he at last departed from the world and went to the holy place, having become the greatest example of patience. (First Epistle of Clement to the Corinthians, 5:1 - 5:7)
These words describe the persecution of Peter and Paul under the Emperor Nero, almost two thousand years ago.
Eight hundred years ago came the Magna Carta which, over time, inspired the United States Bill of Rights, the Universal Declaration of Human Rights and the abolition of the death penalty.
Yet today we see the lords of Silicon Valley wanting to throw all of that out the window and take us back to the days of Nero.
Everyone has the right freely to participate in the cultural life of the community, to enjoy the arts and to share in scientific advancement and its benefits.
Everyone has the right to the protection of the moral and material interests resulting from any scientific, literary or artistic production of which he is the author.
When we visit the websites of well-known free software projects such as Debian and Fedora, we see them openly declaring their desire to censor certain people. Anybody who speaks up about ethical issues in our industry has been subjected to these extreme reprisals from time to time.
The similarities between these cases, and the growing list of victims, are clear evidence that these are not random events. There is a coordinated effort to roll back or circumvent civil rights. If a digital space or digital world exists, then it bears a chilling resemblance to the world in which the Roman emperors used gruesome executions to perpetuate control through fear.
The Catholic Church can go looking for the victims who have been cancelled, the victims who have been deplatformed and those who have something to say about human dignity in the age of artificial intelligence. Whether these people are Catholic or not, the concerns that independent experts have tried to investigate and publicise must be raised above the noise produced by the public relations departments.
At the same time, the horrific impact inflicted on our families is often hidden from public view.
Children in the digital environment
It is significant that we found very similar tactics being used by Harvey Weinstein and by Chris Lamb, a former leader of the Debian Project.
This is significant because Lamb was trained through the Google Summer of Code and funded by Google, which also made a substantial payment of $300,000 shortly before three victims exposed the scandal. Despite Debian's promise of transparency, the money was only revealed more than six months later, and Google's name has never been publicly connected to those numbers.
When Weinstein had concerns about the behaviour of certain women, he would send nasty gossip about their "behaviour" to others in the industry. There is something snobbish about these attitudes to human behaviour.
"Ricordo che la Miramax ci disse che lavorare con loro era un incubo e che avremmo dovuto evitarli a tutti i costi. Probabilmente era il 1998", ha detto Jackson.
"All'epoca non avevamo motivo di mettere in dubbio ciò che queste persone ci stavano dicendo, ma a posteriori mi rendo conto che molto probabilmente si trattava della campagna diffamatoria della Miramax in pieno svolgimento."
Diverse persone si sono fatte avanti dimostrando che Chris Lamb stava facendo esattamente la stessa cosa nel suo ruolo in Debian. Secondo la legge sul copyright, i coautori non hanno alcun obbligo nei confronti della persona eletta a ricoprire di volta in volta il ruolo di Debian Project Leader. Siamo tutti uguali.
Subject: Re: Debian Developer status
Date: Tue, 18 Dec 2018 10:36:09 +0900
From: Norbert Preining <norbert@preining.info>
To: Daniel Pocock <daniel@pocock.pro>
Hi Daniel,
even though fighting a case like this in the UK is above my
capabilities and financial means,
I am afraid that Lamb has actually ruined an application at a company
in New York, a Debian-related job. If that has happened, and I can
reasonably document it, I would consider a defamation lawsuit.
> Lamb is a UK resident and sends emails from the UK
> https://regainyourname.com/news/cyberbullying-cyberstalking-and-online-harassment-a-uk-study/
Thanks for the links, I will keep them in mind.
Norbert
--
PREINING Norbert http://www.preining.info
Accelia Inc. + JAIST + TeX Live + Debian Developer
GPG: 0x860CDC13 fp: F7D8 A928 26E3 16A1 9FA0 ACF0 6CAC A448 860C DC13
Even more disturbing is the fact that Lamb began attacking my family at around the same time that Cardinal George Pell was convicted in 2018. A second cousin of mine was a member of Cardinal George Pell's former choir in Melbourne. Lamb and his accomplices, funded by Google, spread anonymous rumours of abuse.
Several people came forward with evidence that Lamb was behaving like Weinstein, spreading rumours behind our backs. When Dr Preining and I spoke up, a third victim saw the scandal and identified himself publicly on Christmas Day:
Subject: Re: Censorship in Debian
Date: Tue, 25 Dec 2018 23:44:38 +0100
From: martin f krafft
Organisation: The Debian project
To: debian-project@lists.debian.org
Hello project,
It is very sad to read about what is going on.
I know of at least one other case in which DAM and AH
have acted outside their mandate, threatening
expulsion from the project and choosing very selectively whom they communicate with.
I know this because I was the one being targeted.
Neither DAM nor AH (the same people still active today) made
a single attempt to hear me. Not one of my e-mails to DAM or AH
was ever answered.
Instead, DAM handed down a verdict and influenced other people to the
point that "because DAM ruled" was given as the reason for other
measures. This was an unconstitutional abuse of DAM's powers, and in
the case of AH, the whole mess also bordered on defamation. Among others,
the current DPL Chris Lamb promised a review in due course, but
nothing ever happened.
... [ snip ] ...
But if it is not safe for the engineers who develop this technology, it is certainly not safe for children.
On 5 October 2021, I raised the concerns about children in this culture with the report Google, FSFE & Child Labor.
Red Hat, a subsidiary of IBM since 2019, started legal action to censor and discredit my concerns. They accused me of bad faith for publishing that article. Yet the legal panel found that Red Hat was harassing me and that it was committing an abuse of the administrative procedure.
The irony, of course, is that Cardinals wear red hats, like the name of the company Red Hat that was found to be abusing me. Chris Lamb of Debian had spread the rumours about my family when Cardinal Pell was convicted.
The way all of this has intersected with our lives and our faith is mind-boggling: the rumours of abuse after the conviction of the late Cardinal Pell, my visit to the Carabinieri on the day of the Cardinal's death, the wedding day, Palm Sunday, a copycat suicide (unconfirmed), the crucifixion of Dr Stallman at Easter and Debian's Christmas lynchings. As they say in the detective movies, follow the money.
The digital environment subjects parishioners to third-party surveillance
The Catholic Church was born out of persecution, and it must be remembered that surveillance is a pillar of persecution.
The fact that the biggest services, such as Google, Facebook and Twitter, are all apparently free is evidence that they derive all their profits from their ability to conduct effective surveillance and manipulation of the population.
At one time, the Church performed similar roles. The faithful submitted to a form of surveillance through the sacrament of confession, where they received counsel from their priest. Priests sought to exert a certain influence from the pulpit, backed by the threat of excommunication and, from time to time, the inquisition or the persecution of somebody who was ahead of his time, such as Galileo.
If the technology companies can approximate all of these functions so effectively with algorithms, we run the risk that religion becomes redundant.
Therefore, trying to perform the role of the Church through a medium that is itself a substitute for religion is very much like digging one's own grave.
Through a series of public inquiries and whistleblower revelations, we have learnt the extent to which these overlords are stripping us of our dignity. Their goal is to anticipate our every decision, to influence whom we talk to, how we vote and every single cent in our budget.
If every one of these decisions is controlled, and even micromanaged for us with scientific precision, down to the last cent in our bank account each month, by the influence of algorithms, what space remains in our conscience for the influence of the Gospel?
Mission: staying relevant
Therefore, the question put to the working group on the mission in the digital environment could be rephrased as follows: how does religion, of any kind, remain relevant at all?
Today, by tradition, in many families in the wealthier cultures, the church is a place for weddings, funerals and sometimes the education of children.
If the church is to equip its parishioners for technology, rather than lose them to technology, we need to ask ourselves questions about some of the topics raised by the free software movement:
How do we ensure that every person has full control over their own devices, including the right to repair and the right to change the operating system?
How do we develop strategies to protect people from the risks of technology? For example, social control media allows small but very noisy groups to do serious harm to their victims through the deliberate and repeated spreading of gossip and defamation. It is becoming ever harder to ensure that no person or minority is singled out for online vendettas. How do we support the people targeted by these toxic individuals? How do we ensure that every person and every group can take their turn to speak?
Mission: protecting society from the same mistakes
Australia went through the process of setting up a Royal Commission into abuses committed by a wide range of institutions, including the Church. Yet for many people who had died, or who had lost family members, their health or their careers, it came too late. Wouldn't it be better to intervene with such sweeping measures before catastrophic failures occur? The time has come to direct the same level of scrutiny at the executives of social control media and at the exploitation and manipulation of public opinion at every level.
Conclusion
Social control media is rapidly becoming a front for artificial intelligence. As the Turing test (the imitation game) has suggested to us since 1950, it is inevitable that each new iteration of this phenomenon becomes ever harder to distinguish from reality. As such, it may present itself not only as a substitute for our fellow human beings but also as an alternative to the Church. People may be conditioned to accept it as their God. In other words, social control media may make the Church irrelevant and, having done so, may go on to make humanity irrelevant.
Just look at how they have sneered at me since my father's death. The rudeness I endure almost daily began at a moment of grief. People are brainwashed into setting aside even the most basic respect for human dignity, respect for the family, at a moment of grief, and it becomes just another opportunity to exploit somebody for entertainment. This aspect of my life has been created entirely by social control media and by the people who are defining that space in my profession.
In her testimony to Congress, Frances Haugen told us:
I believe what I did was right and necessary for the common good, but I know Facebook has infinite resources, which it could use to destroy me.
In 2018, I attended the UN Forum on Business and Human Rights in Geneva, where I made some brief comments about Facebook and Twitter having fallen into the wrong hands. The UN Forum took place at the same time as the jury was considering the charges against Cardinal George Pell. Pell was convicted, and these social control media platforms filled up with rumours about me and my family, exactly the phenomena that Haugen herself appears to be afraid of.
Here is the video with the comments I made at the UN Forum. I spoke for barely forty-three seconds, and they spent $120,000 attacking my family.
L'environnement numérique soumet les paroissiens à la surveillance de tiers
L’Église catholique est née de la persécution et il faut se rappeler que la surveillance est la pierre angulaire de la persécution.
Le fait que les plus grands services, comme Google, Facebook et Twitter, soient tous ostensiblement gratuits est la preuve qu’ils tirent tous leurs profits de leur capacité à surveiller et à manipuler efficacement la population.
Autrefois, l'Église remplissait des rôles similaires. Les fidèles se soumettaient à une forme de surveillance par le sacrement de la confession, où ils recevaient les conseils de leur prêtre. Les prêtres cherchaient à exercer une certaine influence depuis la chaire, menaçant l'excommunication et, de temps à autre, l'inquisition ou la persécution de personnes en avance sur leur temps, comme Galilée.
Si les entreprises technologiques peuvent approximer toutes ces fonctions de manière aussi efficace grâce à des algorithmes, nous courons le risque que la religion devienne redondante.
Par conséquent, tenter de jouer le rôle de l’Église à travers un média qui se substitue à celui de la religion revient à creuser sa propre tombe.
Grâce à une série d'enquêtes publiques et de lanceurs d'alerte, nous avons constaté à quel point ces seigneurs nous privent de notre dignité. Leur objectif est d'anticiper chacune de nos décisions, d'influencer nos interlocuteurs, nos votes et le moindre centime de notre budget.
Si chacune de ces décisions est contrôlée et même microgérée pour nous, avec une précision scientifique, jusqu’au dernier centime de notre compte bancaire chaque mois, par l’influence des algorithmes, quelle place reste-t-il dans notre conscience pour l’influence de l’Évangile ?
Mission : rester pertinent
Par conséquent, la question assignée au groupe de travail sur la
mission dans l’environnement numérique
pourrait être reformulée ainsi : comment la religion, quelle que soit sa nature, reste-t-elle pertinente ?
Pour de nombreuses familles des cultures aisées d’aujourd’hui, l’Église est engagée par tradition dans les mariages, les funérailles et parfois dans l’éducation des enfants.
Pour que l’Église puisse donner du pouvoir à ses paroissiens grâce à la technologie, plutôt que de les perdre à cause de la technologie, nous devons nous poser des questions sur certains des sujets soulevés par le mouvement du logiciel libre.
Comment garantir que chaque personne ait le contrôle total de ses appareils, y compris le droit de les réparer et le droit de modifier le système d’exploitation.
Élaborer des stratégies pour protéger les individus des risques liés à la technologie. Par exemple,
les médias de contrôle social permettent à des groupes restreints, mais très bruyants, de nuire gravement à leurs victimes par la diffusion délibérée et répétée de rumeurs et de diffamations. Il devient de plus en plus difficile de garantir qu'aucune personne ni minorité ne soit exclue par les vendettas en ligne. Comment apporter un soutien aux personnes ciblées par ces individus toxiques ? Comment garantir que chaque personne et chaque groupe puisse s'exprimer à son tour ?
Mission : protéger la société des mêmes erreurs
L'Australie a mis en place une commission royale d'enquête sur les abus commis par diverses institutions, dont l'Église. Pourtant, il était trop tard pour nombre de personnes décédées ou ayant perdu des proches, la santé ou leur carrière. Ne serait-il pas judicieux d'intervenir aussi vigoureusement avant plutôt qu'après des échecs catastrophiques ? Il est grand temps d'exercer le même contrôle sur
les dirigeants
des médias, qui exercent un contrôle social , ainsi que sur l'exploitation et la manipulation du public à de multiples niveaux.
Conclusion
Les médias de contrôle social deviennent rapidement une façade pour l'intelligence artificielle. Comme le test de Turing (jeu d'imitation) nous l'a suggéré depuis 1949, il est inévitable que chaque nouvelle itération de ce phénomène devienne de plus en plus indiscernable de la réalité. De ce fait, ils pourraient se présenter non seulement comme un substitut à nos semblables, mais aussi comme une alternative à l'Église. Les gens pourraient être dupés et l'accepter comme leur Dieu. Autrement dit,
les médias de contrôle social pourraient rendre l'Église insignifiante, et par la suite, rendre l'humanité insignifiante.
Il suffit de voir les grimaces des gens après la mort de mon père. L'impolitesse que je subis presque quotidiennement a commencé dans une période de deuil. On leur a inculqué le respect le plus élémentaire de la dignité humaine, le respect de la famille dans un moment de deuil, et cela devient une nouvelle occasion de se servir les uns des autres à des fins récréatives. Cet aspect de ma vie a été entièrement créé par
les médias sociaux
et ceux qui définissent cet espace dans ma propre profession.
Dans son témoignage devant le Congrès, Frances Haugen nous a dit :
Je crois que ce que j’ai fait était juste et nécessaire pour le bien commun, mais je sais que Facebook dispose de ressources infinies, qu’il pourrait utiliser pour me détruire.
En 2018, j'ai assisté au Forum des Nations Unies sur les entreprises et les droits de l'homme à Genève, où j'ai brièvement commenté la situation de Facebook et Twitter, tombés entre de mauvaises mains. Le Forum des Nations Unies s'est tenu au moment même où le jury examinait les accusations portées contre le cardinal George Pell. Pell a été condamné et ces
plateformes
de contrôle social se sont répandues dans les rumeurs concernant ma famille et moi, le phénomène même que Haugen elle-même semble redouter.
Voici la vidéo avec les commentaires que j'ai faits au Forum de l'ONU. J'ai parlé à peine quarante-trois secondes et ils ont dépensé 120 000 dollars pour attaquer ma famille.
The late Pope Francis asked a group of approximately four hundred
bishops to work together from 2021 to 2024 on a review of how people of
Catholic faith interact and advance as a movement. In formal
terms, this committee of bishops was given the title
Synod on Synodality. The term Synod is used widely
in all Christian religions to refer to committees, boards or meetings of
those groups at any level of the church hierarchy. The term
Synodality is specific to the Catholic Church. The Synod has
an official web page where they
attempt to explain Synodality.
Various working groups were created on a wide range of topics. In this
review, I am only looking at working group three, which examined the topic
of the mission in the digital environment. I then go on to
provide some of my own evidence about the topics the working group
is considering.
In recent news unrelated to the Synod, the diocese of Paderborn
(central/northern Germany) announced that it will try to use TikTok to
connect with young people. The scope of working group three is very wide
and is not limited to social control media platforms. I take it to cover
all forms of digital technology. Even
amateur radio packet repeaters are in scope, although
amateur radio licensing doesn't allow the explicit transmission of
religious material.
The Vatican was an early adopter of shortwave radio. Pope Leo XIV and
Monsignor Lucio Adrian Ruiz, secretary of the Dicastero per la Comunicazione,
visited Vatican Radio's broadcasting facility this week:
Reading the outputs from both the working group and the overall
Synod, I feel that the church as a whole has not decided whether to
embrace or reject
social control media. It acknowledges
that social control media is part of the digital landscape and is
trying to work out how the church relates to it.
How the Synod process evolved at a high level
Before delving into the details, here is an overview of
the process and the reports that came out at different times,
with direct links to the translated editions.
The main web site for the Synod is at
www.Synod.va and it is available
in various languages. It appears that the content was created in
Italian and translated to English and other languages. This makes
it a little bit more difficult to read.
There was an extended gathering in Rome in October 2023 where
an initial draft report was produced.
Key points from the final report as it relates to the digital environment
At point 58, the report notes that Christians may be attempting to
proclaim the Gospel through their participation in a digital environment.
58. ... Christians, each according to their diverse roles - within the family and
other states of life; in the workplace and in their professions; engaged civilly, politically,
socially or ecologically; in the development of a culture inspired by the Gospel, including the
evangelisation of the digital environment - walk the paths of the world and proclaim the Gospel
where they live, sustained by the gifts of the Spirit.
59. In doing so, they ask the Church not to abandon them but rather to enable them to feel
that they are sent and sustained in mission.
This point appears to encourage the church to contemplate the situation
faced by those under the influence of a digital environment, but it does not
necessarily imply the digital environment is good or bad.
At point 112, concerning mobility, which includes people from all levels
of society, the report notes:
Some maintain strong bonds with their country of origin, especially with
the help of digital media, and thus can find it difficult to form connections
in their new country; others find themselves living without roots.
This is an excellent observation. In Europe, I've met couples who
have relationships entirely dependent upon devices they use for
automated machine translation. When new people arrive in town, the
WhatsApp culture encourages neighbors to spend weeks or months talking
behind their backs without ever looking them in the eye.
113. The spread of digital culture, particularly evident among young people, is profoundly
changing their experience of space and time; it influences their daily activities, communication
and interpersonal relationships, including faith. The opportunities that the internet provides are
reshaping relationships, bonds and boundaries. Nowadays, we often experience loneliness and
marginalisation, even though we are more connected than ever. Moreover, those with their own
economic and political interests can exploit
social media to spread ideologies and generate
aggressive and manipulative forms of polarisation. We are not well prepared for this and ought
to dedicate resources to ensure that the digital environment becomes a prophetic space for
mission and proclamation. Local Churches should encourage, sustain and accompany those
who are engaged in mission in the digital environment. Christian digital communities and
groups, particularly young people, are also called to reflect on how they create bonds of
belonging, promoting encounter and dialogue. They need to offer formation among their peers,
developing a synodal way of being Church. The internet, constituted as a web of connections,
offers new opportunities to better live the synodal dimension of the Church.
This paragraph acknowledges the dangers of digital technology, especially
social control media, and the key words are
"We are not well prepared for this". Yet it suggests that local churches
should "encourage" more engagement with these online risks. I don't feel
"encourage" is the right word here, but I don't think they should
discourage either.
149. The synodal process has insistently drawn attention to some specific areas of
formation of the People of God for synodality. The first of these concerns the impact of the
digital environment on learning processes, concentration, the perception of self and the world,
and the building of interpersonal relationships. Digital culture constitutes a crucial dimension
of the Church’s witness in contemporary culture and an emerging missionary field. This
requires ensuring that the Christian message is present online in reliable ways that do not
ideologically distort its content. Although digital media has great potential to improve our lives,
it can also cause harm and injury through bullying, misinformation, sexual exploitation and
addiction. Church educational institutions must help children and adults develop critical skills
to safely navigate the web.
These comments are very relevant and very consistent with my own
testimony, some of which is reproduced later in this report.
150. Another area of great importance is the promotion in all ecclesial contexts of a
culture of safeguarding, making communities ever safer places for minors and vulnerable
persons.
When I raised this topic in the free software communities, my family
was attacked ruthlessly. See the
emails I sent at the end of 2017 and comments about IBM
Red Hat later in this
report.
Sources related to working group three, the mission in a digital environment
The Synod.va web site published a list of
all the working groups. The web site includes a brief video about
each group and a link to their most recent reports.
The video for working group three lasts a little bit less than two
minutes. Here are some of the key quotes and my own observations:
"Today, people, especially the young, have learnt to
live simultaneously and seamlessly in both digital and
physical spaces."
I feel that statement is quite wrong. People have learnt how to use
digital spaces. One recent research report suggests that
nearly seventy percent of young people feel bad after using social media.
In other words, they feel pressured into using it. Therefore, they
are not living seamlessly. People are suffering.
The statements made in the video are not the statements
presented in the final report. We will get to that. Nonetheless, whenever
social control media is mentioned, there is a tendency for
people to make these generalisations about being unable to live without
it. Every time we see a statement like this, it is important to
challenge it.
"How does the church use and approriate the digital culture?"
The rhetorical question is interesting. In reality, the Silicon
Valley overloads use and appropriate any content that we give them.
The church doesn't use them, they use us. How do you think they got
so rich?
A better question might be "how does the church
complement the shortcomings of digital cultures?".
"This environment
is now “indistinguishable from the sphere of everyday life.”",
Pope Francis was a smart guy and he had some smart people around him,
including the late Cardinal Pell. We can trace that quote right back to the
thinking of Alan Turing. Turing is considered to be the grandfather of computer
science and a martyr. Turing gave us exactly the same concept in the
legendary Turing test, which Turing himself called the imitation game in
1949.
Another way to interpret this phenomena is to say that the masses
have been brainwashed by the Silicon Valley overlords.
Frances Haugen, the Facebook whistleblower, told the US Congress:
"The choices being made by
Facebook’s leadership are a huge problem — for children, for public safety,
for democracy — that is why I came forward. And let’s be clear:
it doesn’t have to be this way. We are here today because of
deliberate choices Facebook has made."
The summary from the working group goes on...
"To proclaim the Gospel effectively in our contemporary
culture, we must discern the opportunities and challenges
presented by this new dimension of the “place”"
That particular quote acknowledges that there are both
opportunities and challenges. The jubilee year is all about hope,
and I really hope the working group members are reading the warnings
from whistleblowers, child psychologists and
even coroners about the impact of Facebook and its ilk.
Nonetheless, the report includes the phrase "greater immersion"
and I feel the church should not assume "greater immersion" is a default
course of action.
The summary also touches on the concept of jurisdiction. The
Catholic Church has traditionally organized itself on a geographical
basis. The Internet allows people to connect and form virtual
communities without any geographical connection.
On a side note, in the days before the Internet, the church could
move high-risk priests from a parish on one side of the city
to the other without worrying about anybody joining
the dots. I went through the papers from Australia's Royal Commission
meticulously and found this note from the legendary Father X___:
That means that if anyone in Australia, learning that
Father Z___ had treatment because of something that happened in Boston,
went there to find out, they would run into a dead end.
The letter in question was penned just before the Internet came
into public consciousness. Looking at those words today, it is a
stark reminder of how the Internet is turning life on its head.
The working group goes on to comment that they are seeking
"practical recommendations or proposals" from across the community,
on any topic related to the Church's mission in the digital environment.
People engaged in the free software movement, whether they are
Catholic or not, can contact their local diocese to find out who
is locally coordinating the response to these challenges.
Another phrase that caught my eye:
"today we live in a digital culture"
Not exactly. Some people would say that a digital culture is being
imposed on us. Institutions like politics and the media are hooked on it
and they put it up on a pedestal. Therefore, it is even more vital that
other institutions, such as the church, take the role of questioning
everything about digital culture and also maintaining viable alternatives.
Life without mobile phones, life without apps
Mobile phones and apps are closely related. There are some people
who choose to live without a smart phone; in other words, they
only have half the problems of a full mobile phone. Some people also
choose to have smart phones without the Google or Apple app store,
for example, people who install
Replicant or
LineageOS and use the
F-Droid app store to limit their phone to ethical apps.
In practical terms, there are people who are unable to navigate their
home town without using their phone. An interesting question arises
for the church: what proportion of followers are unable to identify the
most direct route from their home to their closest church without looking
at an app? It would be interesting to analyze the responses based on
various factors such as age and years of residence in the parish.
Another key question, closely related to the above, is how many
parishioners can recall regular mass times and key events in the parish
calendar without looking at their phone? It is great to have this
information visible on the parish web site; nonetheless, when
people are truly engaged in the parish and the community, this
information will be committed to memory. The more pervasive this
information is in a community, the more resilient the community.
Authentication systems undermining human dignity
Today we frequently see companies insisting they need to have
our mobile phone numbers to "authenticate" us or to "sign" documents
by text message.
This type of thing is particularly creepy. Many people are familiar
with the Nazi-era practice of burning identification numbers into the
skin of Jewish prisoners. Mobile phone numbers serve a similar
functional purpose. Even though the numbers are not physically
burnt into our skin, it is often inconvenient for people to change
their number.
There are many closely related phenomena, including web sites
demanding users authenticate themselves from a Gmail or Facebook
account.
At the level of the church, the state, education, health care and
financial services, it is vital to ensure everybody can participate
in the way they want to without giving up their dignity.
The church needs to become just as vocal about these topics
as it is about themes such as abortion.
Need to emphasize consent
Concerns about consent and coercion have become a big topic in
the world today. Ironically, the
social control media platforms
pretending to help give women a platform are violating the
principle of consent in so many other ways.
Consider, for example, people who spent time creating a profile
on Facebook or Twitter, sometimes over many years, connecting with
hundreds or thousands of followers, and who are then confronted with the
demand to add their mobile phone number to their account. If they
don't add their mobile phone number, their account is blocked. There
is no genuine technical reason to have a mobile phone number in the
account as many of these services worked exactly the same way for
many years before such demands became commonplace.
People are not freely consenting to share their phone numbers
with Mark Zuckerberg and Elon Musk. The services have been bastardized
to ambush their users with these demands.
Significantly, this culture of ambushing and coercing people
trickles down into society. In Australia, Chanel Contos started
a highly publicized petition/journal with stories from women at
elite private schools who felt they had been ambushed, bullied and
coerced into unwanted physical encounters.
Ironically, Miss Contos publicized her concerns through the very
same platforms that are undermining our understanding of consent and
privacy.
The church itself has had to do a lot of soul searching on topics
of consent and abuses of power. This puts the church in an interesting
position where we can say that even considering some of the most shocking
revelations about abuse, those responsible are the lesser evil compared to
the overlords in Silicon Valley.
It is remarkable how quickly the institutions of Silicon Valley have
abandoned all checks and balances and seen fit to do as they please.
The Catholic Church and other religious institutions can now
take what they have learnt from the critical analysis of their own mistakes
and warn society how stupid it would be to go down the same path again
with these digital gangsters.
Digital technology is much more than social control media
The church is not new to technology. Early printing presses
were installed in church premises. Caxton installed England's
first press at Westminster Abbey. Other sites included Oxford
and St Alban's Abbey. Prior to the printing press, reading and
writing were activities reserved for clerics and many of their
works only existed in Latin. The printing press enabled the
mass production of bibles in the German and English languages. This,
in turn, had a huge impact on the standardization of the language,
just as it helped standardize the moral attitudes that Silicon Valley
is ripping out from underneath us. The King James Version of the bible is
widely recognized for its impact on the English language.
The standardization of language was only one side-effect of
this invention. The reformation was another. As people gained
books and the power of reading, they became less dependent upon
the clerics.
Likewise,
social control media today is having an impact on our culture,
for better or worse. Just as printing presses enabled the reformation,
social control media may lead to further changes in the way we humans organize
ourselves around religious structures and beliefs. The overlords
in Silicon Valley are actively contemplating these roles for themselves.
Elon Musk has even dressed up as Satan. If the Catholic Church doesn't
offer a compelling alternative to these power shifts, the outcome will
be taken out of the church's hands.
Frances Haugen (Facebook whistleblower): almost no one outside of Facebook knows
what happens inside Facebook. The company’s leadership keeps vital information from
the public, the U.S. government, its shareholders, and governments around the world.
The documents I have provided prove that Facebook has repeatedly misled us about
what its own research reveals about the safety of children, its role in spreading hateful
and polarizing messages, and so much more.
Whereas previous generations went to clerics for advice, followed
by reading the bible themselves, the youth today go to a search engine
and tomorrow people may be putting their faith in artificial intelligence.
We can already see evidence of search engines,
social control media and
AI bots guiding people to increased levels of conflict with their
neighbors or putting people on dark paths of isolation, self-harm and
suicide.
Catholic Church resources relevant to digital environment
The Catholic Church has a big role in education and schools, therefore,
the church can see the impact of
social control media and the church can
enforce bans for children and provide training to staff and parents.
Teachers, as employees of the church or the state, have reported a
rise in bullying from parents who group together on messaging apps.
In one recent case,
British police sent six officers to humiliate a parent who had used
WhatsApp to agitate about the local
school. The conflict, the adversarial nature of this environment and
the huge waste of police resources are all consequences of the way
the technology is designed and used in society. Each incident like
this provides an insight about opportunities for the Catholic Church
to ask "is there a better way?".
Words from Frances Haugen help explain the six police officers
laying siege to the parents of small children:
I saw that Facebook repeatedly encountered conflicts
between its own profits and our safety. Facebook consistently resolved those conflicts
in favor of its own profits. The result has been a system that amplifies division,
extremism, and polarization — and undermining societies around the world.
The Catholic Church is a large employer in many countries.
This gives the church the ability to make decisions about the use
of mobile phones and messaging apps in the employer/employee
relationship. An employer can't prohibit staff from using these
things in their personal time but they can decide to eliminate
any official use of these gimmicks for work purposes. The employer/employee
relationship provides another opportunity to provide training about the
importance of human dignity above the demands of our devices.
The public agenda in the digital environment, abortion of our species
With many politicians and journalists now living their lives through
social control media, their ability to evaluate which issues are worthy
of public debate is heavily influenced by the issues that are supposedly
trending online. There is a notion that issues are trending online
as a consequence of public interest while the reality is the managers
of online platforms exert influence to ensure some issues appear
to grow organically while significant but inconvenient topics are
conveniently buried in the flood of news.
In this context, the Catholic Church provides an alternative
route to put issues on the agenda for public discussion, regardless of
whether a particular issue appears to be "trending" or not. This
power is most often used for issues close to the church's teaching,
such as lobbying about abortion, but there is no reason the church
can't use the same resources to lobby against the abortion of
the human race by AI.
Aid for victims of discrimination by Silicon Valley overlords and online
mobs
The Catholic Church traces its origins to the persecution of Jesus
and the martyrs Saint Peter and Saint Paul.
"But let us pass from ancient examples, and come unto those who have in the times nearest to us, wrestled for the faith. Let us take the noble examples of our own generation. Through jealousy and envy the greatest and most just pillars of the Church were persecuted, and came even unto death. Let us place before our eyes the good Apostles. Peter, through unjust envy, endured not one or two but many labours, and at last, having delivered his testimony, departed unto the place of glory due to him. Through envy Paul, too, showed by example the prize that is given to patience: seven times was he cast into chains; he was banished; he was stoned; having become a herald, both in the East and in the West, he obtained the noble renown due to his faith; and having preached righteousness to the whole world, and having come to the extremity of the West, and having borne witness before rulers, he departed at length out of the world, and went to the holy place, having become the greatest example of patience." (first epistle of Clement to the Corinthians, 5:1 - 5:7)
These words account for the persecution of Peter and Paul under
the Emperor Nero almost two thousand years ago.
Eight hundred years ago, the Magna Carta arrived and over time,
it has inspired the US Bill of Rights, the Universal Declaration
of Human Rights and the abolition of capital punishment.
Yet today we see the Silicon Valley overlords wishing to throw all of
that out the window and take us back to the time of Nero.
Everyone has the right freely to participate in the cultural life of the community, to enjoy the arts and to share in scientific advancement and its benefits.
Everyone has the right to the protection of the moral and material interests resulting from any scientific, literary or artistic production of which he is the author.
When we look at the web sites of well known free software projects like
Debian and Fedora, we see them openly proclaiming their desire to censor
certain people. Anybody who speaks up about ethical issues in our
industry has been subject to these extreme reprisals from time to time.
The similarities between these cases and the growing list of victims
are clear proof that they are not random. There is a coordinated effort
to roll back or circumvent civil rights. If a digital space or digital
world does exist, then it is eerily similar to the world where Roman
Emperors used grisly executions to perpetuate control through fear.
The Catholic Church can seek out the victims who have been canceled,
victims who have been de-platformed and people who have
something to say about human dignity in the era of AI. Whether
these people are
Catholics or not, the concerns
that independent experts
have been trying to research and publicize need to be elevated above
the noise from public relations departments.
At the same time, the horrific impact inflicted on our families is
often hidden from public view.
Children in the digital environment
It is telling that we found very similar tactics used by
Harvey Weinstein and Chris Lamb, former leader of the Debian Project.
This is significant because Lamb was trained through the Google
Summer of Code and funded by Google, including a large payment of
$300,000 shortly before three victims revealed the scandal.
Despite Debian's promise of transparency, the money was only revealed
more than six months later and Google's name is never publicly
connected to the numbers.
When Weinstein had concerns about the behavior of some women,
he would send nasty rumors about "behavior" to other people in the
industry. There's something snobby about these attitudes to
human behavior.
When women made complaints to the police, the film director
Peter Jackson spoke up and
confirmed Weinstein had been using these dirty tricks,
spreading rumors about behavior of women who were not
submissive enough for his liking.
"I recall Miramax telling us they were a nightmare to work with and we should avoid them at all costs. This was probably in 1998," Jackson said.
"At the time, we had no reason to question what these guys were telling us - but in hindsight, I realise that this was very likely the Miramax smear campaign in full swing."
A range of people have come forward showing that Chris Lamb was doing
exactly the same thing in his role at Debian. Under copyright law,
co-authors do not have any obligation to the person elected to
serve as Debian Project Leader from time to time. We are all equals.
Subject: Re: Debian Developer status
Date: Tue, 18 Dec 2018 10:36:09 +0900
From: Norbert Preining <norbert@preining.info>
To: Daniel Pocock <daniel@pocock.pro>
Hi Daniel,
even if, going through a lawsuite like this in the UK is out and above
my abilities and financial possibilities.
But I am scared that Lamb actually also hosed an application for a
company in NY, a job related to Debian. If that has happened, and I can
reasonably document it, I would consider a defamation law suite.
> Lamb is a resident of the UK and sending emails from the UK
> https://regainyourname.com/news/cyberbullying-cyberstalking-and-online-harassment-a-uk-study/
Thanks for the links, I will keep them in mind.
Norbert
--
PREINING Norbert http://www.preining.info
Accelia Inc. + JAIST + TeX Live + Debian Developer
GPG: 0x860CDC13 fp: F7D8 A928 26E3 16A1 9FA0 ACF0 6CAC A448 860C DC13
Even more disturbing, Lamb started his attacks on my family at
the very same time that Cardinal George Pell was convicted in 2018.
My second cousin had been a member of Cardinal George Pell's former
choir in Melbourne. Lamb and his co-conspirators, funded by Google,
started anonymous rumors about abuse.
Multiple people came forward with evidence that Lamb was behaving
like Weinstein, spreading the rumors behind our backs. When
Dr Preining and I spoke up, a third victim saw the scandal and
identified himself publicly on Christmas Day:
Subject: Re: Censorship in Debian
Date: Tue, 25 Dec 2018 23:44:38 +0100
From: martin f krafft
Organization: The Debian project
To: debian-project@lists.debian.org
Hello project,
It's very sad to read about what's going on.
I know that there's been at least another case, in which DAM and AH
have acted outside their mandate, threatening with project
expulsion, and choosing very selectively with whom they communicate.
I know, because I was being targeted.
Neither DAM nor AH (the same people still active today) made
a single attempt to hear me. None of my e-mails to either DAM or AH
were ever answered.
Instead, DAM ruled a verdict, and influenced other people to the
point that "because DAM ruled" was given as a reason for other
measures. This was an unconstitutional abuse of DAM's powers, and in
the case of AH, the whole mess also bordered on libel. Among others,
the current DPL Chris Lamb promised a review in due time, but
nothing ever happened.
... [ snip ] ...
Yet if it is not safe for the engineers who make this technology,
it is certainly not safe for kids.
On 5 October 2021, I raised the concerns about children in this culture
with the report
Google, FSFE & Child Labor.
Red Hat, a subsidiary of IBM since 2019, started legal action to
censor and discredit my concerns. They accused me of
bad faith for publishing that article. Yet the legal panel ruled that
Red Hat was harassing me and engaged in an abuse of the
administrative procedure.
The irony, of course, is that the Cardinals wear red hats, like the
name of the company
Red Hat who
were found to be abusing me. Chris Lamb
at Debian had started the rumors about my family when
Cardinal Pell was convicted.
The manner in which this intersected our lives and our faith,
the abuse rumors after the late Cardinal Pell's conviction,
my visit to the Carabinieri on the day the Cardinal died,
the wedding day, on Palm Sunday, being a copy-cat (unconfirmed) suicide,
the crucifixion of Dr Stallman at Easter and
the Debian Christmas lynchings, it is staggering. As they say in crime
movies, follow the money.
Digital environment subjects parishioners to third-party surveillance
The Catholic Church was born out of persecution and it has to be
remembered that surveillance is a cornerstone of persecution.
The fact that the largest services, like Google, Facebook and Twitter
are all ostensibly free is proof that they gain all of their profit
from their ability to conduct effective surveillance and manipulation
of the population.
At one time, the church used to fulfil similar roles. Followers
would submit themselves to a form of surveillance through the sacrament
of confession, where they would receive counsel from their priest.
Priests seek to exert some influence from the pulpit, with the threat
of excommunication and, from time to time, the odd inquisition or
persecution of somebody who was ahead of his time like Galileo.
If tech companies can approximate all these functions so effectively
with algorithms, we run the risk that religion becomes redundant.
Therefore, attempting to perform the church's role through a medium
that is substituting itself for the role of religion is a lot like
digging one's own grave.
Through a series of public inquiries and whistleblowers, we've
heard the extent to which these overlords are stripping away our dignity.
Their goal is to anticipate our every decision, influence who we talk to,
influence how we vote and influence every last cent in our budget.
If every one of those decisions is controlled and even micromanaged
for us, with scientific precision, right down to the last cent in our
bank account each month, by the influence of algorithms,
what space is left in our consciousness for the influence of the Gospel?
Mission: remaining relevant
Therefore, the question assigned to the working group about the
mission in the digital environment
could be rephrased as how does religion, of any nature, remain
relevant at all?
For many families in affluent cultures today, the church is engaged
out of tradition for weddings, funerals and sometimes education for
the children.
For the church to empower parishioners with technology, rather than
losing parishioners to technology, we need to ask questions about some
of the topics raised by the free software movement.
How to ensure each person has full control over their devices,
including the right to repair and the right to change the operating system.
Develop strategies to protect people from the risks of technology.
For example,
social control media allows small but very noisy groups to
do intense harm to their victims with the deliberate and repeated spread
of gossip and defamation. It is becoming harder and harder to ensure that
no person or minority is excluded by online vendettas. How to provide
support to people targeted by these toxic people?
How to ensure that every person and group can take their turn to speak?
Mission: protecting society from the same mistakes
Australia went through the process of having a Royal Commission
into abuses by a wide range of institutions, including the church.
Yet that was too late for many of the people who have either died or
lost their family members, health and careers. Wouldn't it be great
to make such strong interventions before rather than after catastrophic
failures have occurred? It is high time for the same level of scrutiny on
social control media bosses and the exploitation and manipulation
of the public on multiple levels.
Conclusion
Social control media is rapidly becoming a front for artificial
intelligence. As the Turing test (imitation game) has suggested
to us since 1950, it is inevitable that each new iteration of this
phenomenon will become more and more indistinguishable from reality.
As such, it may present itself not only as a substitute for fellow
human beings but as an alternative
to the church. People may be duped into accepting it as their God.
In other words,
social control media may make the church irrelevant
and after it does that, it may go on to make humanity irrelevant.
Just look at the way people make faces at me after my father died.
The rudeness I experience on an almost daily basis started at a time of grief.
People are brainwashed to set aside even the most basic respect
for human dignity, the respect for a family at a time of grief
and it just becomes another opportunity to use each other for sport.
This aspect of my life was entirely created by
social control media
and the people who are defining that space in my own profession.
In her testimony to Congress, Frances Haugen told us:
I believe what I did was right and necessary for the common good — but I know
Facebook has infinite resources, which it could use to destroy me.
In 2018, I attended the UN Forum on Business and Human Rights in Geneva,
making some brief comments about Facebook and Twitter falling into the
wrong hands. The UN Forum occurred at the same time the jury was considering
the charges against Cardinal George Pell. Pell was convicted and these
social control media platforms filled up with rumors about my family
and me, the very phenomenon Haugen herself seems to be afraid of.
A new minor release 0.2.6 of our RcppRedis
package arrived on CRAN today.
RcppRedis
is one of several packages connecting R to the fabulous Redis in-memory datastructure store (and
much more). It works equally well with the newer fork Valkey. RcppRedis
does not pretend to be feature complete, but it may do some things
faster than the other interfaces, and also offers an optional coupling
with MessagePack binary
(de)serialization via RcppMsgPack. The
package has been “deployed in production” as a risk / monitoring tool on
a trading floor for several years. It also supports pub/sub
dissemination of streaming market data as per this
earlier example.
This update brings new functions del, lrem,
and lmove (for the matching Redis / Valkey commands) which
may be helpful in using Redis (or Valkey) as a job queue.
We also extended the publish accessor by supporting text
(i.e. string) mode along with raw or
rds (the prior default which always serialized R objects), just as
listen already worked with these three cases. The change
makes it possible to publish from R to subscribers not running R, as they
cannot rely on the R deserializer. An example is provided by almm, a live market
monitor, which we introduced in this
blog post. Apart from that the continuous integration script
received another mechanical update.
The detailed changes list follows.
Changes in version 0.2.6
(2025-06-24)
The commands DEL, LREM and
LMOVE have been added
The continuous integration setup was updated once more
The pub/sub publisher now supports a type argument similar to the
listener; this allows string message publishing for non-R
subscribers
to /etc/rc.local. After that I only had to enable the Sensors plugin below Statistics -> Setup -> General plugins and check 'Monitor all except specified' in its "Configure" dialog.
Single signon is a pretty vital part of modern enterprise security. You have users who need access to a bewildering array of services, and you want to be able to avoid the fallout of one of those services being compromised and your users having to change their passwords everywhere (because they're clearly going to be using the same password everywhere), or you want to be able to enforce some reasonable MFA policy without needing to configure it in 300 different places, or you want to be able to disable all user access in one place when someone leaves the company, or, well, all of the above. There's any number of providers for this, ranging from it being integrated with a more general app service platform (eg, Microsoft or Google) or a third party vendor (Okta, Ping, any number of bizarre companies). And, in general, they'll offer a straightforward mechanism to either issue OIDC tokens or manage SAML login flows, requiring users present whatever set of authentication mechanisms you've configured.
This is largely optimised for web authentication, which doesn't seem like a huge deal - if I'm logging into Workday then being bounced to another site for auth seems entirely reasonable. The problem is when you're trying to gate access to a non-web app, at which point consistency in login flow is usually achieved by spawning a browser and somehow managing submitting the result back to the remote server. And this makes some degree of sense - browsers are where webauthn token support tends to live, and it also ensures the user always has the same experience.
But it works poorly for CLI-based setups. There are basically two options - you can use the device code authorisation flow, where you perform authentication on what is nominally a separate machine to the one requesting it (but in this case is actually the same) and as a result end up with a straightforward mechanism to have your users socially engineered into giving Johnny Badman a valid auth token despite webauthn nominally being unphishable (as described years ago), or you reduce that risk somewhat by spawning a local server and POSTing the token back to it - which works locally but doesn't work well if you're dealing with trying to auth on a remote device. The user experience for both scenarios sucks, and it reduces a bunch of the worthwhile security properties that modern MFA supposedly gives us.
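For concreteness, here is a rough sketch of that second option, the local-server variant, in Python (my own illustration; the authorisation endpoint and client ID are placeholders rather than any particular provider's API, and it catches the result as a GET redirect where some flows POST it instead):

# Spawn a throwaway HTTP server on a random localhost port, point the
# browser at the identity provider with that port as the redirect
# target, and block until the browser delivers the result back to us.
import http.server
import urllib.parse
import webbrowser

result = {}

class Callback(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # The IdP redirects back with the code/token in the query string.
        query = urllib.parse.urlparse(self.path).query
        result.update(urllib.parse.parse_qs(query))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"Authenticated - you can close this tab.")

    def log_message(self, *args):
        pass  # keep the CLI output clean

server = http.server.HTTPServer(("127.0.0.1", 0), Callback)
port = server.server_address[1]
# https://idp.example.com/authorize and client_id=my-cli are made up.
webbrowser.open(
    "https://idp.example.com/authorize?response_type=code"
    f"&client_id=my-cli&redirect_uri=http://127.0.0.1:{port}/callback"
)
server.handle_request()  # blocks until the redirect arrives
print("auth result:", result)

The sketch also makes the failure mode obvious: if the CLI is running on a remote machine over SSH, 127.0.0.1 is not where the user's browser is, and the whole scheme falls over.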
There's a third approach, which is in some ways the obviously good approach and in other ways is obviously a screaming nightmare. All the browser is doing is sending a bunch of requests to a remote service and handling the response locally. Why don't we just do the same? Okta, for instance, has an API for auth. We just need to submit the username and password to that and see what answer comes back. This is great until you enable any kind of MFA, at which point the additional authz step is something that's only supported via the browser. And basically everyone else is the same.
Of course, when we say "That's only supported via the browser", the browser is still just running some code of some form and we can figure out what it's doing and do the same. Which is how you end up scraping constants out of Javascript embedded in the API response in order to submit that data back in the appropriate way. This is all possible but it's incredibly annoying and fragile - the contract with the identity provider is that a browser is pointed at a URL, not that any of the internal implementation remains consistent.
I've done this. I've implemented code to scrape an identity provider's auth responses to extract the webauthn challenges and feed those to a local security token without using a browser. I've also written support for forwarding those challenges over the SSH agent protocol to make this work with remote systems that aren't running a GUI. This week I'm working on doing the same again, because every identity provider does all of this differently.
There's no fundamental reason all of this needs to be custom. It could be a straightforward "POST username and password, receive list of UUIDs describing MFA mechanisms, define how those MFA mechanisms work". That even gives space for custom auth factors (I'm looking at you, Okta Fastpass). But instead I'm left scraping JSON blobs out of Javascript and hoping nobody renames a field, even though I only care about extremely standard MFA mechanisms that shouldn't differ across different identity providers.
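To illustrate what such a spec could look like, here is an entirely hypothetical client sketch - none of these endpoints, fields, or UUIDs exist in any real identity provider today, it's just the shape the paragraph above is asking for:

import requests  # hypothetical uniform-auth client sketch

IDP = "https://idp.example.com"  # placeholder identity provider

# Step 1: submit primary credentials.
session = requests.post(
    f"{IDP}/v1/authn",
    json={"username": "user", "password": "hunter2"},
).json()

# Step 2: the response lists available MFA mechanisms by well-known UUID,
# so a client can pick one it knows how to drive without scraping anything.
for factor in session["factors"]:
    print(factor["uuid"], factor["kind"])  # e.g. "webauthn", "totp"

# Step 3: complete one factor through a defined, provider-neutral exchange.
challenge = requests.post(
    f"{IDP}/v1/factor/{session['factors'][0]['uuid']}",
    json={"state": session["state"]},
).json()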
Someone, please, write a spec for this. Please don't make it be me.
If we ever thought a couple of years or decades of constant use would get
humankind to understand how an asymmetric key pair is to be handled… It’s
time we moved back to square one.
I had to do an online trámite (procedure) with the Mexican federal government to get a
statement certifying I successfully finished my studies, and I found this
jewel of user interface:
So… I have to:
Submit the asymmetric key I use for tax purposes, as that’s the ID the
government has registered for me. OK, I didn’t expect it to be used for
this purpose as well, but I’ll accept it. Of course, in our tax system
many people don’t require having a public key generated (“easier”
regimes are authenticated by password only), but all professionals with
a cédula profesional (everybody getting a university title) are now
compelled to do this step.
Not only do I have to submit my certificate (public key)… but also the
private part (and, of course, the password that secures it).
I understand I’m interacting with a Javascript thingie that runs only
client-side, and I trust it is not shipping my private key to their
servers. But given it is an opaque script, I have no assurance about
it. And, of course, this irks me because I am who I am and because I’ve
spent several years thinking about cryptography. But for regular people,
it just looks like a stupid inconvenience: they have to upload two weird
files with odd names and provide a password. What for?
This is beyond stupid. I’m baffled.
(of course, I did it, because I need the fsckin’ document. Oh, and of
course, I paid my MX$1770, ≈€80, for it… which does not make me too
happy for a trámite that’s not even shuffling papers, only storing the right
bits in the right corner of the right datacenter, but anyhow…)
In 2021, the late Pope Francis initiated the
Synod on Synodality, a process that concluded with a final report in October 2024.
The
list of working groups includes a group dedicated to the challenges of polygamy, especially in regions where the church may recruit new followers who already have multiple partners in their family.
The final report of the October 2024 Synod mentioned polygamy only once. Apparently, the working group did not identify a way forward that the bishops could agree on, and it remains an open topic for the Church.
Among all the Christian religions, the Catholic Church is one of the strictest on the subject of polygamy. Catechism of the Catholic Church, par. 2387:
Polygamy is not in accord with the moral law. [Conjugal] communion is radically contradicted by polygamy; this, in fact, directly negates the plan of God which was revealed from the beginning, because it is contrary to the equal personal dignity of men and women who in matrimony give themselves with a love that is total and therefore unique and exclusive.
Note that the word exclusive is part of the
Catholic
definition.
One could argue that some people are now so totally engrossed in
social control media that they no longer have an exclusive mental bond with their partner in the real world.
Facebook chooses what information billions of people see, shaping their perception of reality. Even those who don't use Facebook are influenced by the radicalization of those who do. A company that has control over our deepest thoughts, feelings and behaviors needs real oversight.
In other words, Facebook's algorithms have become a third person in many marriages. Facebook's algorithms are supplementing parents' decisions about their children, and not in a positive sense.
I saw that Facebook repeatedly encountered conflicts between its own profits and our safety. Facebook consistently resolved those conflicts in favor of its own profits. The result has been a system that amplifies division, extremism, and polarization, undermining societies around the world. In some cases, this dangerous online talk has led to actual violence that harms and even kills people. In other cases, their profit-optimizing machine is generating self-harm and self-hate, especially among vulnerable groups, like teenage girls. These problems have been confirmed again and again by Facebook's own internal research.
Alan Turing foresaw this phenomenon in 1950 with his proposal for the imitation game. Today we call it the Turing test. The implication of Turing's thinking is that, with each new iteration of the algorithms, it becomes harder and harder for a human being to distinguish the algorithms from a real human being.
If a human being cannot distinguish the algorithms from another real human being, then it is logical to assume that they may begin to form emotional bonds with the algorithms and with personas created by artificial
intelligence.
Much has been written in research studies about the interaction between
social control media and dopamine in the brain. Our brains can receive dopamine stimuli naturally, for example, when a baby smiles at us, and they can receive stimuli when we see something artificial, such as an AI-generated video of a baby on Facebook. More research is needed to understand the extent to which these substitute stimuli undermine family functioning in the real world.
But it is not only dopamine that comes into play. Oxytocin, often nicknamed the "cuddle hormone", also plays a role in our online social bonds. When we interact positively on social media, our brains release oxytocin, creating a sense of connection and trust. It is as if our brains cannot tell the difference between a virtual hug and a real one.
Alarming.
We should consider this phenomenon a form of virtual polygamy or cyberpolygamy, and when we discuss the challenges of polygamy, it may not be right to focus on polygamy in Africa without simultaneously talking about the virtual phenomenon.
Looking at open relationships in the open source software ecosystem, many of these aspects are hinted at but never stated. In 2016, rumors spread about a developer, Dr Jacob Appelbaum. Several news articles appeared. The magazine
Die Zeit published an article entitled
"What has this man done?". Anybody sharing links to the article was immediately punished in some communities. The article states:
Sitting across from them is a young American woman. She had only met the others a couple of days earlier, but she seems uncomfortable at this party. She doesn't talk much, but listens warmly to what is being said.
...
There are about 20 guests at Mr Appelbaum's party: programmers, hackers and activists from all over the world.
One theme related to the crisis around Dr Appelbaum is the concept of open relationships in free and open source software communities. When the crisis began in 2016, there was a lot of discussion about what really happened at the parties. Press reports appeared. People found it embarrassing.
These are the people creating the technological foundations for many of the online services we depend on. Therefore, if the phenomenon of polygamy holds in these communities, it is inevitable that it will become morally acceptable in the technologies extrapolated from our work too.
Woody Allen released the film
Vicky Cristina Barcelona in 2008. We noticed parallels in the DebConf room lists that people now share.
The Debian Pregnancy Cluster followed and shortly afterwards, in 2014, it was decided to organize
the Women's MiniDebConf in Barcelona, just like in the film. Others dropped out. As far as I know, the event was never repeated.
The Debian cases may be an extreme example, typical of cult-like groups, but the phenomenon of virtual polygamy in
social control media appears to pose a much broader risk.
Frances Haugen, the Facebook whistleblower, handed over an enormous quantity of documents revealing the extent to which Facebook's algorithms ingratiate themselves with their subjects. Haugen demonstrated the hold Facebook has over certain types of subjects, for example, teenage girls with eating disorders.
This reprogramming of the brain, the substitution of human love with virtual love, is not a problem only in relationships between husband and wife or between parents and children. Just think of the
death of Abraham Raji at DebConf23 in India.
A couple of days after Abraham drowned, they took a group photo in the hotel pool and published it with the caption "Come on in and join us".
Compare this with Amnesty International's response to the suicide of two employees. Amnesty International commissioned a series of external reports and published them promptly so that all of its donors, volunteers and employees could read them. After the
Debian Suicide Cluster, no report was ever published. Large sums of money were spent
trying to prevent the publication of the evidence about the deaths.
To an outside observer, the way these groups copy and paste a standard statement about each death and then carry on as if nothing happened can appear extremely insensitive. We have to look more closely to understand the dynamics of these relationships. Many of these people rarely meet in the real world. If ninety-nine percent of the relationship with Abraham was based on electronic communications, does that mean people had not formed a human relationship with him before meeting him for the first time at the conference?
This is puzzling. Taking a step back, we find that people had a not-quite-human relationship with the deceased volunteer, while on the other hand, with
social control media, some people bond with the algorithms and the experiences even more strongly than they bond with family life in the real world.
In other words, we cannot simply worry about the impact of hidden friendships on
social control media, we have to worry about the algorithms themselves reprogramming those parts of the human mind that are normally reserved for the exclusive aspect of a marital relationship. Or for what was considered exclusive in healthy marriages before the advent of
social control media.
It is important to look at a complete diagram like this because some of these people are actively involved in cyberbullying attacks against other open source software developers. To stop the cyberbullying, we need to identify its origins.
The Debianists try to present themselves as a quasi-professional organization. They boast grandiose titles and steal Code of Conduct jargon from more credible organizations. If they want to use these grandiose titles and the Code of Conduct jargon, they also have an obligation to disclose all of their romantic and financial conflicts of interest. Trying to hide these conflicts of interest behind excuses about privacy and harassment is immoral and dishonest.
When we make all these relationships public, when we see that all the people holding important titles on different teams are romantically linked, we can see that Debian is not a professional organization at all; it is more like a student fraternity or an amateur theatre group.
For some time I’ve been noticing news reports about PFAS [1]. I hadn’t thought much about that issue; I grew up when leaded petrol was standard, when almost all thermometers had mercury, when all small batteries had mercury, and I had generally considered that I had already had so many nasty chemicals in my body that, as long as I didn’t eat bottom-feeding seafood often, I didn’t have much to worry about. I already had a higher risk of a large number of medical issues than I’d like due to decisions made before I was born, and there’s not much to do about it given that there are regulations restricting the emissions of lead, mercury etc.
I just watched a Veritasium video about Teflon and the PFAS poisoning related to its production [2]. This made me realise that it’s more of a problem than I thought, and it’s a problem that’s getting worse. PFAS levels in the parts-per-trillion range in the environment can cause parts-per-billion levels in the body, which increases the risks of several cancers and causes other health problems. Fortunately there is some work being done on water filtering; you can get filters for home use now, and they are working on filters that can work at a sufficient scale for a city water plant.
Also they noted that donating blood regularly can decrease levels of PFAS in the bloodstream. So presumably people who have medical conditions that require receiving donated blood regularly will have really high levels.
When I was younger, and definitely naïve, I was so looking forward to AI, which
would help us write lots of good, reliable code faster. Well, principally me, not
thinking about what impact it would have industry-wide. Other more general concerns,
like societal issues, the role of humans in the future and so on, were totally not on
my radar.
At the same time, I didn’t expect this would actually happen. Even years later,
things didn’t change dramatically. Even the first release of ChatGPT a few years
back didn’t click for me, as the limitations were still significant.
Hints of serious change
The first hint of the change, for me, was when a few months ago (yes, behind the
curve), I asked ChatGPT to re-explain a concept to me, and it just wrote a lot
of words, but without a clear explanation. On a whim, I asked Grok—then recently
launched, I think—to do the same. And for the first time, the explanation
clicked and I felt I could have a conversation with it. Of course, now I forgot
again that theoretical CS concept, but the first step was done: I can ask an LLM
to explain something, and it will, and I can have a back and forth logical
discussion, even if on some theoretical concept. Additionally, I learned that
not all LLMs are the same, and that means there’s real competition and that
leapfrogging is possible.
Another thing I tried to adopt early but failed to get mileage out of
was GitHub Copilot (in VSC). I tried it, it helped, but I didn’t feel any
speed-up at all. Then more recently, in May, I asked Grok what’s the state of
the art in AI-assisted coding. It said either Claude in a browser tab, or in VSC
via continue.dev extension.
The continue.dev extension/tooling is a bit of a strange/interesting thing. It
seems to want to be a middle-man between the user and actual LLM services, i.e.
you pay a subscription to continue.dev, not to Anthropic itself, and they manage
the keys/APIs, for whatever backend LLMs you want to use. The integration with
Visual Studio Code is very nice, but I don’t know if long-term their business
model will make sense. Well, not my problem.
Claude: reverse engineering my old code and teaching new concepts
So I installed the latter and subscribed, thinking 20 CHF for a month is good
for testing. I skipped the tutorial model/assistant, created a new one from
scratch, just enabled Claude 3.7 Sonnet, and started using it. And then, my mind
was blown, not just by the LLM, but by the ecosystem. As said, I’ve used GitHub
Copilot before, but it didn’t seem effective. I don’t know if a threshold has
been reached, or whether Claude (3.7 at that time) is just better than ChatGPT.
I didn’t use the AI to write (non-trivial) code for me, at most boilerplate
snippets. But I used it both as a partner for discussion - “I want to do x, what
do you think, A or B?” - and as a teacher, especially for frontend topics, which
I’m not familiar with.
Since May, in mostly fragmented sessions, I’ve achieved more than in the last
two years: migration from old-school JS to ECMA modules, a webpack setup (reducing
bundle size by 50%), replacing an old Javascript library with hand-written code
using modern APIs, implementing the zoom feature together with all of keyboard,
mouse, touchpad and touchscreen support, simplifying layout from manually
computed to automatic layout, and finding a bug in webkit for which it also
wrote a cool minimal test (cool, as in, way better than I’d have ever, ever
written, because for me it didn’t matter that much). And more. Could I have done
all this? Yes, definitely, nothing was especially tricky here. But hours and
hours of reading MDN, scouring Stack Overflow and Reddit, and lots of trial and
error. So doable, but much more toily.
This, to me, feels like cheating. 20 CHF per month to make me 3x more productive
is free money—well, except that I don’t make money on my code which is written
basically for myself. However, I don’t get stuck anymore searching for hours on the
web for guidance; I ask my question, and I get at least a direction if not an answer,
and I’m finished way earlier. I can now actually juggle more hobbies, in the
same amount of time, if my personal code takes less time or differently said, if
I’m more efficient at it.
Not all is roses, of course. Once, it did write code with such an endearing
error that it made me laugh. It was so blatantly obvious that you shouldn’t keep
other state in the array that holds pointer status, because that confuses the
calculation of “how many pointers are down” - probably to itself too, had I
asked. But I didn’t, since it felt a bit embarrassing to point out such a dumb
mistake. Yes, I’m anthropomorphising again, because this is the easiest way to
deal with things.
In general, it does an OK-to-good-to-sometimes-awesome job, and the best thing
is that it summarises documentation and all of Reddit and Stack Overflow. And
gives links to those.
Now, I have no idea yet what this means for the job of a software engineer. If
on open source code, my own code, it makes me 3x faster—reverse engineering my
code from 10 years ago is no small feat—for working on large codebases, it
should do at least the same, if not more.
As an example of how open-ended the assistance can be, at one point, I started
implementing a new feature—threading a new attribute to a large number of call
points. This is not complex at all, just add a new field to a Haskell record,
and modifying everything to take it into account, populate it, merge it when
merging the data structures, etc. The code is not complex, tending toward
boilerplate a bit, and I was wondering about a few possible choices for
implementation, so, with just a few lines of code written that were not even
compiling, I asked “I want to add a new feature, should I do A or B if I want it
to behave like this”, and the answer was something along the lines of “I see
you want to add the specific feature you were working on, but the implementation
is incomplete, you still need to do X, Y and Z”. My mind was blown at this
point, as I thought, if the code doesn’t compile, surely the computer won’t be
able to parse it, but this is not a program, this is an LLM, so of course it
could read it kind of as a human would. Again, the code complexity is not
great, but the fact that it was able to read a half-written patch, understand
what I was working towards, and reason about it, was mind-blowing, and scary. Like
always.
Non-code writing
Now, after all this, while writing a recent blog post, I thought—this is going
to be public anyway, so let me ask Claude what it thinks about it. And I was
very surprised, again: gone was all the pain of rereading my post three times to
catch typos (easy) or phrasing structure issues. It gave me very clear points,
and helped me cut 30-40% of the total time. So not only coding, but
wordsmithing too has changed. If I were an author, I’d be delighted (and scared).
Here is the overall reply it gave me:
Spelling and grammar fixes, all of them on point except one mistake (I claimed
I didn’t capitalize one word, but I did). To the level of a good grammar
checker.
Flow Suggestions, which was way beyond normal spelling and grammar. It felt
like a teacher telling me to do better in my writing, i.e. nitpicking on
things that actually were true even if they’d still work: lousy phrase
structure, still understandable, but lousy nevertheless.
Other notes: an overall summary. This was mostly just praising my post 😅. I
wish LLMs were not so focused on “praise the user”.
So yeah, this speeds me up to about 2x on writing blog posts, too. It definitely
feels not fair.
Whither the future?
After all this, I’m a bit flabbergasted. Gone are the 2000’s with code without
unittests, gone are the 2010’s without CI/CD, and now, mid-2020’s, gone is the
lone programmer that scours the internet to learn new things, alone?
What this all means for our skills in software development, I have no idea,
except I know things have irreversibly changed (a Butlerian Jihad aside). Do I
learn better with a dedicated tutor even if I don’t fight with the problem for
so long? Or is struggling in finding good docs the main method of learning? I
don’t know yet. I feel like I understand the topics I’m discussing with the AI,
but who knows in reality what it will mean long term in terms of “stickiness” of
learning. For better or for worse, things have changed. After all the
advances over the last five centuries in the mechanical sciences, automation has now come to
some aspects of intellectual work.
Maybe this is the answer to the ever-growing complexity of tech stacks? I.e. a
return of the lone programmer that builds things end-to-end, but with AI taming
the complexity added in the last 25 years? I can dream, of course, but this also
means that the industry overall will increase in complexity even more, because
large companies tend to do that, so maybe a net effect of not much…
One thing I did learn so far is that my expectation that AI (at this level) will
only help junior/beginner people, i.e. it would flatten the skills band, is not
true. I think AI can speed up at least the middle band, likely the middle top
band, I don’t know about the 10x programmers (I’m not one of them). So, my
question about AI now is how to best use it, not to lament how all my learning
(90% self learning, to be clear) is obsolete. No, it isn’t. AI helps me start
and finish one migration (that I delayed for ages), then start the second, in
the same day.
At the end of this—a bit rambling—reflection on the past month and a half, I
still have many questions about AI and humanity. But one has been answered: yes,
“AI”, quotes or no quotes, already has changed this field (producing software),
and we’ve not seen the end of it, for sure.
I had a peculiar question at work recently, and it went off of a tangent that
was way too long and somewhat interesting, so I wanted to share.
The question is: Can you create a set of N-bit numbers (codes), so that
a) Neither is a subset of each other, and
b) Neither is a subset of the OR of two of the others?
Of course, you can trivially do this (e.g., for N=5, choose 10000, 01000,
00100 and so on), but how many can you make for a
given N? This is seemingly an open question, but at least I found that
they are called (1,2) superimposed codes and have history at least
back to this 1964 paper.
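To make the property concrete, here is a small Python checker (my own sketch, not something from the paper); codes are represented as N-bit integers, so "is a subset of" is just a bitwise test:

def is_superimposed(codes):
    n = len(codes)
    # (a) no code is a subset of another: a is a subset of b iff a | b == b.
    for i in range(n):
        for j in range(n):
            if i != j and codes[i] | codes[j] == codes[j]:
                return False
    # (b) no code is a subset of the OR of two others.
    for i in range(n):
        for j in range(i + 1, n):
            union = codes[i] | codes[j]
            for k in range(n):
                if k != i and k != j and codes[k] | union == union:
                    return False
    return True

# The trivial N=5 example: five one-hot codes.
print(is_superimposed([0b10000, 0b01000, 0b00100, 0b00010, 0b00001]))  # True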
They present a fairly elegant (but definitely non-optimal) way of
constructing them for certain N; let me show an example for N=25:
We start by counting 3-digit numbers (k=3) in base 5 (q=5):
000
001
002
003
004
010
011
etc…
Now we have 5^3 numbers. Let's set out to give them the property that we
want.
This code (set of numbers) trivially has distance 1; that is, every number
differs from every other number by at least one digit. We'd like to increase
that distance so that it is at least as large as k.
Reed-Solomon gives us an
optimal way of doing that; for every number, we add two checksum digits and
R-S will guarantee that the resulting code has distance 3. (Just trust me
on this, I guess. It only works for q >= (k+1)/2, though, and q must be
a power of an odd prime because otherwise the group theory doesn't work out.)
We now have a set of 5-digit numbers with distance 3. But if we now take any
three numbers from this set, there is at least one digit where all three must
differ, since the distance is larger than half the number of digits: Two
numbers A and B differ from each other in at least 3 of the 5 digits, and A
and C also have to differ from each other in at least 3 of the 5 digits. There
just isn't room for A and B to be the same in all the places that A differs
from C.
To modify this property into the one that we want, we encode each digit into
binary using one-hot encoding (00001, 00010, 00100, etc.). Now our 5-digit
numbers are 25-bit numbers. And due to the "all different" property in the
previous paragraph, we also have our superimposition property; there's at
least one 5-bit group where A|B shares no bits with C. So this gives us a
25-bit set with 125 different values and our desired property.
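If I have understood the construction correctly, it fits in a few lines of Python; this is my own sketch of the q=5, k=3 example, using polynomial evaluation over GF(5) as the Reed-Solomon step:

from itertools import product

q, k = 5, 3  # 3 message digits in base 5, plus 2 checksum digits

def rs_encode(msg):
    # Evaluate the degree-<k polynomial whose coefficients are the message
    # digits at all q points of GF(q); an [n=q, k] Reed-Solomon code has
    # minimum distance n - k + 1 = 3, which is exactly what we need.
    return [sum(c * x**i for i, c in enumerate(msg)) % q for x in range(q)]

def one_hot(word):
    # Map each base-5 digit to a 5-bit one-hot group, giving a 25-bit integer.
    value = 0
    for digit in word:
        value = (value << q) | (1 << digit)
    return value

codes = [one_hot(rs_encode(msg)) for msg in product(range(q), repeat=k)]
print(len(set(codes)))  # 125 distinct 25-bit values

Feeding codes into the is_superimposed() checker from earlier confirms the property; the check is cubic in the set size, but still quick at 125 values.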
This isn't necessarily an optimal code (and the authors are very clear on
that), but it's at least systematic and easy to extend to larger sizes.
(I used a SAT solver to extend this to 170 different values, just by keeping
the 125 first and asking for 45 more that were not in conflict. 55 more
was evidently hard.) The paper has tons more information, including some
stuff based on Steiner systems
that I haven't tried to understand. And of course, there are tons more
later papers, including one by Erdős. :-)
I've applied for an account at OEIS so I can add
a sequence for the maximum number of possible codes for each N.
It doesn't have many terms known yet, because the SAT solver struggles
hard with this (at least in my best formulation), but at least it will
give the next person something to find when they are searching. :-)
The Linux kernel has an interesting file descriptor called pidfd. As the name implies, it is a file descriptor to a pid or a specific process. The nice thing about it is that it is guaranteed to refer to the specific process you expected when you got that pidfd. A process ID, or PID, has no reuse guarantees, which means what you think process 1234 is and what the kernel knows process 1234 to be could be different, because your process exited and the process IDs have looped around.
pidfds are *odd*: they’re half a “normal” file descriptor and half… something else. That means some file descriptor things work and some fail in odd ways. stat() works, but using them as the first parameter of openat() will fail.
One thing you can do with them is use epoll() on them to get process status, in fact the pidfd_open() manual page says:
A PID file descriptor returned by pidfd_open() (or by clone(2) with the CLONE_PIDFD flag) can be used for the following purposes:
…
A PID file descriptor can be monitored using poll(2), select(2), and epoll(7). When the process that it refers to terminates, these interfaces indicate the file descriptor as readable.
So if you want to wait until something terminates, then you can just find the pidfd of the process and sit an epoll_wait() onto it. Simple, right? Except it’s not quite true.
procps issue #386 stated that if you have a list of processes, then pidwait only finds half of them. I’d like to thank Steve, the issue reporter, for the initial work on this. The odd thing is that for every exited process, you get two epoll events. You get an EPOLLIN first, then an EPOLLIN | EPOLLHUP after that. Steve suggested the first was when the process exits, the second when the process has been collected by the parent.
I have a collection of oddball processes, including ones that make zombies. A zombie is a child that has exited but has not been wait()ed on by its parent. In other words, if a parent doesn't collect its dead child, then the child becomes a zombie. The test program spawns a child, which exits after some seconds. The parent waits longer, calls wait(), waits some more, then exits. Running pidwait we can see the following epoll events:
When the child exits, EPOLLIN on the child is triggered. At this stage the child is a zombie.
When the parent calls wait(), then EPOLLIN | EPOLLHUP on the child is triggered.
When the parent exits, EPOLLIN then EPOLLIN | EPOLLHUP on the parent is triggered. That is, two events for the one thing.
If you want to use epoll() to know when a process terminates, then you need to decide on what you mean by that:
If you mean it has exited, but not collected yet (e.g. a zombie possibly) then you need to select on EPOLLIN only.
If you mean the process is fully gone, then EPOLLHUP is a better choice. You can even change the epoll_ctl() call to use this instead.
A “zombie trigger” (EPOLLIN with no subsequent EPOLLHUP) is a bit tricky to work out. There is no guarantee the two events have to be in the same epoll, especially if the parent is a bit tardy on their wait() call.
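To make that concrete, here's a minimal sketch (mine, not procps code) that waits for a process to be fully gone, treating EPOLLHUP as the end condition. It assumes Linux 5.3+ for pidfd support and glibc 2.36+ for the pidfd_open() wrapper; on older glibc you'd call syscall(SYS_pidfd_open, pid, 0) instead.

#include <stdio.h>
#include <stdlib.h>
#include <sys/epoll.h>
#include <sys/pidfd.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <pid>\n", argv[0]);
        return 1;
    }
    pid_t pid = (pid_t)atoi(argv[1]);

    int pidfd = pidfd_open(pid, 0);
    if (pidfd < 0) {
        perror("pidfd_open");
        return 1;
    }

    int epfd = epoll_create1(0);
    /* Ask for EPOLLIN so we also see the "exited but maybe not reaped yet"
       state; EPOLLHUP is reported regardless of the mask we set. */
    struct epoll_event ev = { .events = EPOLLIN, .data = { .fd = pidfd } };
    epoll_ctl(epfd, EPOLL_CTL_ADD, pidfd, &ev);

    for (;;) {
        struct epoll_event out;
        if (epoll_wait(epfd, &out, 1, -1) < 0) {
            perror("epoll_wait");
            return 1;
        }
        if (out.events & EPOLLHUP) {
            /* Exited *and* collected by its parent: fully gone. */
            printf("%d is fully gone\n", pid);
            break;
        }
        /* EPOLLIN only: exited, possibly lingering as a zombie.  Drop the
           EPOLLIN interest so a long-lived zombie doesn't make this loop
           spin; EPOLLHUP will still wake us up. */
        printf("%d exited (possibly a zombie)\n", pid);
        ev.events = 0;
        epoll_ctl(epfd, EPOLL_CTL_MOD, pidfd, &ev);
    }
    close(pidfd);
    return 0;
}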
What does not work is having two variables which validate each other, e.g.
variable "nat_min_ports" {
description = "Minimal amount of ports to allocate for 'min_ports_per_vm'"
default = 32
type = number
validation {
condition = (
var.nat_min_ports >= 32 &&
var.nat_min_ports <= 32768 &&
var.nat_min_ports < var.nat_max_ports
)
error_message = "Must be between 32 and 32768 and less than 'nat_max_ports'"
}
}
variable "nat_max_ports" {
description = "Maximal amount of ports to allocate for 'max_ports_per_vm'"
default = 16384
type = number
validation {
condition = (
var.nat_max_ports >= 64 &&
var.nat_max_ports <= 65536 &&
var.nat_max_ports > var.nat_min_ports
)
error_message = "Must be between 64 and 65536 and above 'nat_min_ports'"
}
}
That led directly to the following rather opaque error message:
Received an error
Error: Cycle: module.gcp_project_network.var.nat_max_ports (validation), module.gcp_project_network.var.nat_min_ports (validation)
Removed the sort-of-duplicate check var.nat_max_ports > var.nat_min_ports on
nat_max_ports to break the cycle.
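For reference, the de-cycled nat_max_ports then looks roughly like this, keeping only the range check and enforcing the cross-variable relation solely from the nat_min_ports side:

variable "nat_max_ports" {
  description = "Maximal amount of ports to allocate for 'max_ports_per_vm'"
  default     = 16384
  type        = number
  validation {
    condition = (
      var.nat_max_ports >= 64 &&
      var.nat_max_ports <= 65536
    )
    error_message = "Must be between 64 and 65536"
  }
}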
23 years ago I was in a bad place. I'd quit my first attempt at a PhD for various reasons that were, with hindsight, bad, and I was suddenly entirely aimless. I lucked into picking up a sysadmin role back at TCM where I'd spent a summer a year before, but that's not really what I wanted in my life. And then Hanna mentioned that her PhD supervisor was looking for someone familiar with Linux to work on making Dasher, one of the group's research projects, more usable on Linux. I jumped.
The timing was fortuitous. Sun were pumping money and developer effort into accessibility support, and the Inference Group had just received a grant from the Gatsby Foundation that involved working with the ACE Centre to provide additional accessibility support. And I was suddenly hacking on code that was largely ignored by most developers, supporting use cases that were irrelevant to most developers. Being in a relatively green field space sounds refreshing, until you realise that you're catering to actual humans who are potentially going to rely on your software to be able to communicate. That's somewhat focusing.
This was, uh, something of an on-the-job learning experience. I had to catch up with a lot of new technologies very quickly, but that wasn't the hard bit - what was difficult was realising I had to cater to people who were dealing with use cases that I had no experience of whatsoever. Dasher was extended to allow text entry into applications without needing to cut and paste. We added support for introspection of the current application's UI so menus could be exposed via the Dasher interface, allowing people to fly through menu hierarchies and pop open file dialogs. Text-to-speech was incorporated so people could rapidly enter sentences and have them spoken out loud.
But what sticks with me isn't the tech, or even the opportunities it gave me to meet other people working on the Linux desktop and forge friendships that still exist. It was the cases where I had the opportunity to work with people who could use Dasher as a tool to increase their ability to communicate with the outside world, whose lives were transformed for the better because of what we'd produced. Watching someone use your code and realising that you could write a three line patch that had a significant impact on the speed they could talk to other people is an incomparable experience. It's been decades and in many ways that was the most impact I've ever had as a developer.
I left after a year to work on fruitflies and get my PhD, and my career since then hasn't involved a lot of accessibility work. But it's stuck with me - every improvement in that space is something that has a direct impact on the quality of life of more people than you expect, but is also something that goes almost unrecognised. The people working on accessibility are heroes. They're making all the technology everyone else produces available to people who would otherwise be blocked from it. They deserve recognition, and they deserve a lot more support than they have.
But when we deal with technology, we deal with transitions. A lot of the Linux accessibility support depended on X11 behaviour that is now widely regarded as a set of misfeatures. It's not actually good to be able to inject arbitrary input into an arbitrary window, and it's not good to be able to arbitrarily scrape out its contents. X11 never had a model to permit this for accessibility tooling while blocking it for other code. Wayland does, but suffers from the surrounding infrastructure not being well developed yet. We're seeing that happen now, though - Gnome has been performing a great deal of work in this respect, and KDE is picking that up as well. There isn't a full correspondence between X11-based Linux accessibility support and Wayland, but for many users the Wayland accessibility infrastructure is already better than with X11.
That's going to continue improving, and it'll improve faster with broader support. We've somehow ended up with the bizarre politicisation of Wayland as being some sort of woke thing while X11 represents the Roman Empire or some such bullshit, but the reality is that there is no story for improving accessibility support under X11 and sticking to X11 is going to end up reducing the accessibility of a platform.
When you read anything about Linux accessibility, ask yourself whether you're reading something written by either a user of the accessibility features, or a developer of them. If they're neither, ask yourself why they actually care and what they're doing to make the future better.
A few months ago I bought an Intel Arc B580 for the main purpose of getting 8K video going [1]. I had briefly got it working in a test PC but then I wanted to deploy it on my HP z840 that I use as a build server and for playing with ML stuff [2]. I only did brief tests of it previously and this was my first attempt at installing it in a system I use. My plan was to keep the NVidia RTX A2000 in place and run 2 GPUs, that’s not an uncommon desire among people who want to do ML stuff and it’s the type of thing that the z840 is designed for, the machine has slots 2, 4, and 6 being PCIe*16 so it should be able to fit 3 cards that each take 2 slots. So having one full size GPU, the half-height A2000, and a NVMe controller that uses *16 to run four NVMe devices should be easy.
Intel designed the B580 to use every millimeter of space possible while still being able to claim to be a 2 slot card. On the circuit board side there is a plastic cover over the board that takes all the space before the next slot so a 2 slot card can’t go on that side without having its airflow blocked. On the other side it takes all the available space so that any card that wants to blow air through can’t fit and also such that a medium size card (such as the card for 4 NVMe devices) would block its airflow. So it’s impossible to have a computer with 6 PCIe slots run the B580 as well as 2 other full size *16 cards.
Support for this type of GPU is something vendors like HP should consider when designing workstation class systems. For HP there is no issue of people installing motherboards in random cases (the HP motherboard in question uses proprietary power connectors and won’t even boot with an ATX PSU without significant work). So they could easily design a motherboard and case with a few extra mm of space between pairs of PCIe slots. The cards that are double width are almost always *16 so you could pair up a *16 slot and another slot and have extra space on each side of the pair. I think for most people a system with 6 PCIe slots with a bit of extra space for GPU cooling would be more useful than having 7 PCIe slots. But as HP have full design control they don’t even need to reduce the number of PCIe slots, they could just make the case taller. If they added another 4 slots and increased the case size accordingly it still wouldn’t be particularly tall by the standards of tower cases from the 90s! The z8 series of workstations are the biggest workstations that HP sells so they should design them to do these things. At the time that the z840 was new there was a lot of ML work being done and HP was selling them as ML workstations, they should have known how people would use them and design them accordingly.
So I removed the NVidia card and decided to run the system with just the Arc card, things should have been fine but Intel designed the card to be as high as possible and put the power connector on top. This prevented installing the baffle for directing air flow over the PCIe slots and due to the design of the z840 (which is either ingenious or stupid depending on your point of view) the baffle is needed to secure the PCIe cards in place. So now all the PCIe cards are just secured by friction in the slots, this isn’t an unusual situation for machines I assemble but it’s not something I desired.
This is the first time I’ve felt compelled to write a blog post reviewing a product before even getting it working. But the physical design of the B580 is outrageously impractical unless you are designing your entire computer around the GPU.
As an aside the B580 does look very nice. The plastic surround is very fancy, it’s a pity that it interferes with the operation of the rest of the system.
In short, the world has moved on to hosting and working with source code in Git repositories. In Debian, we work with source packages that are used to generate the binary artifacts that users know as .deb files. In Debian, there is so much tooling and culture built around this. For example, our workflow passes what we call the island test – you could take every source package in Debian along with you to an island with no Internet, and you’ll still be able to rebuild or modify every package. When changing the workflows, you risk losing benefits like this, and over the years there have been a number of different ideas on how to move to a purely or partially git flow for Debian, none that really managed to gain enough momentum or project-wide support.
Tag2upload makes a lot of sense. It doesn’t take away any of the benefits of the current way of working (whether technical or social), but it does make some aspects of Debian packages significantly simpler and faster. Even so, if you’re a Debian Developer and more familiar with how the sausage is made, you’ll have noticed that this has been a very long road for the tag2upload maintainers, they’ve hit multiple speed bumps since 2019, but with a lot of patience and communication and persistence from all involved (and almost even a GR), it is finally materializing.
Performing my first tag2upload
So, first, I needed to choose which package I want to upload. We’re currently in hard freeze for the trixie release, so I’ll look for something simple that I can upload to experimental.
I chose bundlewrap, it’s quite a straightforward python package, and updates are usually just as straightforward, so it’s probably a good package to work on without having to deal with extra complexities in learning how to use tag2upload.
So, I do the usual uscan and dch -i to update my package…
And then I realise that I still want to build a source package to test it in cowbuilder. Hmm, I remember that Helmut showed me that building a source package isn’t necessary with sbuild, but I have a habit of breaking my sbuild configs somehow, so I guess I should revisit that.
So, I do a dpkg-buildpackage -S -sa and test it out with cowbuilder, because that’s just how I roll (at least for now, fixing my local sbuild setup is yak shaving for another day, let’s focus!).
I end up with a binary that looks good, so I’m satisfied that I can upload this package to the Debian archives. So, time to configure tag2upload.
The first step is to set up the webhook in Salsa. I was surprised to find two webhooks already configured:
I know of KGB that posts to IRC; I didn’t know before that this was the mechanism it uses to do that. Nice! Also don’t know what the tagpending one does, I’ll go look into that some other time.
Configuring a tag2upload webhook is quite simple, add a URL, call the name tag2upload, and select only tag push events:
I ran the test webhook, and it returned a code 400 message about a missing ‘message’ header, which the documentation says is normal.
Next, I install git-debpush from experimental.
The wiki page simply states that you can use the git-debpush command to upload, but doesn’t give any examples on how to use it, and its manpage doesn’t either. And when I run just git-debpush I get:
jonathan@lapcloud:~/devel/debian/python-team/bundlewrap/bundlewrap-4.23.1$ git-debpush
git-debpush: check failed: upstream tag upstream/4.22.0 is not an ancestor of refs/heads/debian/master; probably a mistake ('upstream-nonancestor' check)
pristine-tar is /usr/bin/pristine-tar
git-debpush: some check(s) failed; you can pass --force to ignore them
I have no idea what that’s supposed to mean. I was also not sure whether I should tag anything to begin with, or if some part of the tag2upload machinery automatically does it. I think I might have tagged debian/4.23-1 before tagging upstream/4.23 and perhaps it didn’t like it, I reverted and did it the other way around and got a new error message. Progress!
jonathan@lapcloud:~/devel/debian/python-team/bundlewrap/bundlewrap-4.23.1$ git-debpush
git-debpush: could not determine the git branch layout
git-debpush: please supply a --quilt= argument
Looking at the manpage, it looks like --quilt=baredebian matches my package the best, so I try that:
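$ git-debpush --quilt=baredebian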
Ooh! That looked like it did something! And a minute later I received the notification of the upload in my inbox:
So, I’m not 100% sure that this makes things much easier for me than doing a dput, but, it’s not any more difficult or more work either (once you know how it works), so I’ll be using git-debpush from now on, and I’m sure as I get more used to the git workflow of doing things I’ll understand more of the benefits. And at last, my one last use case for using FTP is now properly dead. RIP FTP :)
I’ve been meaning to write a post about this bug for a while, so here
it is (before I forget the details!).
First, I’d like to thank a few people:
My friend Gabriel F. T. Gomes, who helped with debugging and simply
talking about the issue. I love doing some pair debugging, and I
noticed that he also had a great time diving into the internals of
glibc and libgcc.
My teammate Dann Frazier, who always provides invaluable insights
and was there to motivate me to push a bit further in order to
figure out what was going on.
The upstream GCC and glibc developers who finally drove the
investigation to completion and came up with an elegant fix.
I’ll probably forget some details because it’s been more than a week
(and life at $DAYJOB moves fast), but we’ll see.
The background story
Wolfi OS takes security seriously, and one of the things we have is a
package which sets the hardening compiler flags for C/C++ according to
the best practices recommended by OpenSSF. At the time of this
writing, these flags are (in GCC’s spec file parlance):
The important part for our bug is the usage of -z now and
-fno-strict-aliasing.
As I was saying, these flags are set for almost every build, but
sometimes things don’t work as they should and we need to disable
them. Unfortunately, one of these problematic cases has been glibc.
There was an attempt to enable hardening while building glibc, but
that introduced a strange breakage to several of our packages and had
to be reverted.
Things stayed pretty much the same until a few weeks ago, when I
started working on one of my roadmap items: figure out why hardening
glibc wasn’t working, and get it to work as much as possible.
Reproducing the bug
I started off by trying to reproduce the problem. It’s important to
mention this because I often see young engineers forgetting to check
if the problem is even valid anymore. I don’t blame them; the anxiety
to get the bug fixed can be really blinding.
Fortunately, I already had one simple test to trigger the failure.
All I had to do was install the py3-matplotlib package and then
invoke:
$ python3 -c 'import matplotlib'
This would result in an abort with a coredump.
I followed the steps above, and readily saw the problem manifesting
again. OK, first step is done; I wasn’t getting out easily from this
one.
Initial debug
The next step is to actually try to debug the failure. In an ideal
world you get lucky and are able to spot what’s wrong after just a few
minutes. Or even better: you also can devise a patch to fix the bug
and contribute it to upstream.
I installed GDB, and then ran the python3 command above inside it.
When the abort happened, I issued a backtrace command inside GDB
to see where exactly things had gone wrong. I got a stack trace
similar to the following:
#0 0x00007c43afe9972c in __pthread_kill_implementation () from /lib/libc.so.6
#1 0x00007c43afe3d8be in raise () from /lib/libc.so.6
#2 0x00007c43afe2531f in abort () from /lib/libc.so.6
#3 0x00007c43af84f79d in uw_init_context_1[cold] () from /usr/lib/libgcc_s.so.1
#4 0x00007c43af86d4d8 in _Unwind_RaiseException () from /usr/lib/libgcc_s.so.1
#5 0x00007c43acac9014 in __cxxabiv1::__cxa_throw (obj=0x5b7d7f52fab0, tinfo=0x7c429b6fd218 <typeinfo for pybind11::attribute_error>, dest=0x7c429b5f7f70 <pybind11::reference_cast_error::~reference_cast_error() [clone .lto_priv.0]>)
at ../../../../libstdc++-v3/libsupc++/eh_throw.cc:93
#6 0x00007c429b5ec3a7 in ft2font__getattr__(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >) [clone .lto_priv.0] [clone .cold] () from /usr/lib/python3.13/site-packages/matplotlib/ft2font.cpython-313-x86_64-linux-gnu.so
#7 0x00007c429b62f086 in pybind11::cpp_function::initialize<pybind11::object (*&)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >), pybind11::object, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >, pybind11::name, pybind11::scope, pybind11::sibling>(pybind11::object (*&)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >), pybind11::object (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >), pybind11::name const&, pybind11::scope const&, pybind11::sibling const&)::{lambda(pybind11::detail::function_call&)#1}::_FUN(pybind11::detail::function_call&) [clone .lto_priv.0] ()
from /usr/lib/python3.13/site-packages/matplotlib/ft2font.cpython-313-x86_64-linux-gnu.so
#8 0x00007c429b603886 in pybind11::cpp_function::dispatcher(_object*, _object*, _object*) () from /usr/lib/python3.13/site-packages/matplotlib/ft2font.cpython-313-x86_64-linux-gnu.so
...
Huh. Initially this didn’t provide me with much information. There
was something strange seeing the abort function being called right
after _Unwind_RaiseException, but at the time I didn’t pay much
attention to it.
OK, time to expand our horizons a little. Remember when I said that
several of our packages would crash with a hardened glibc? I decided
to look for another problematic package so that I could make it crash
and get its stack trace. My thinking here is that maybe if I can
compare both traces, something will come up.
I happened to find an old discussion where Dann Frazier mentioned that
Emacs was also crashing for him. He and I share the Emacs passion,
and I totally agreed with him when he said that “Emacs crashing is
priority -1!” (I’m paraphrasing).
I installed Emacs, ran it, and voilà: the crash happened again. OK,
that was good. When I ran Emacs inside GDB and asked for a backtrace,
here’s what I got:
#0 0x00007eede329972c in __pthread_kill_implementation () from /lib/libc.so.6
#1 0x00007eede323d8be in raise () from /lib/libc.so.6
#2 0x00007eede322531f in abort () from /lib/libc.so.6
#3 0x00007eede262879d in uw_init_context_1[cold] () from /usr/lib/libgcc_s.so.1
#4 0x00007eede2646e7c in _Unwind_Backtrace () from /usr/lib/libgcc_s.so.1
#5 0x00007eede3327b11 in backtrace () from /lib/libc.so.6
#6 0x000059535963a8a1 in emacs_backtrace ()
#7 0x000059535956499a in main ()
Ah, this backtrace is much simpler to follow. Nice.
Hmmm. Now the crash is happening inside _Unwind_Backtrace. A
pattern emerges! This must have something to do with stack unwinding
(or so I thought… keep reading to discover the whole truth). You
see, the backtrace function (yes, it’s a function) and C++’s
exception handling mechanism use similar techniques to do their jobs,
and it pretty much boils down to unwinding frames from the stack.
I looked into Emacs’ source code, specifically the emacs_backtrace
function, but could not find anything strange over there. This bug
was probably not going to be an easy fix…
The quest for a minimal reproducer
Being able to easily reproduce the bug is awesome and really helps
with debugging, but even better is being able to have a minimal
reproducer for the problem.
You see, py3-matplotlib is a huge package and pulls in a bunch of
extra dependencies, so it’s not easy to ask other people to “just
install this big package plus these other dependencies, and then run
this command…”, especially if we have to file an upstream bug and
talk to people who may not even run the distribution we’re using. So
I set up to try and come up with a smaller recipe to reproduce the
issue, ideally something that’s not tied to a specific package from
the distribution.
Having all the information gathered from the initial debug session,
especially the Emacs backtrace, I thought that I could write a very
simple program that just invoked the backtrace function from glibc
in order to trigger the code path that leads to _Unwind_Backtrace.
Here’s what I wrote:
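Essentially something like this (a sketch along the lines described above; the original program may have differed in details):

#include <execinfo.h>
#include <stdio.h>

int
main (void)
{
  /* Calling glibc's backtrace() exercises _Unwind_Backtrace in libgcc,
     which is the code path that was crashing.  */
  void *buffer[64];
  int n = backtrace (buffer, 64);
  printf ("got %d frames\n", n);
  return 0;
}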
After compiling it, I determined that yes, the problem did happen with
this small program as well. There was only a small nuisance: the
manifestation of the bug was not deterministic, so I had to execute
the program a few times until it crashed. But that’s much better than
what I had before, and a small price to pay. Having a minimal
reproducer pretty much allows us to switch our focus to what really
matters. I wouldn’t need to dive into Emacs’ or Python’s source code
anymore.
At the time, I was sure this was a glibc bug. But then something else
happened.
GCC 15
I had to stop my investigation efforts because something more
important came up: it was time to upload GCC 15 to Wolfi. I spent a
couple of weeks working on this (it involved rebuilding the whole
archive, filing hundreds of FTBFS bugs, patching some programs, etc.),
and by the end of it the transition went smoothly. When the GCC 15
upload was finally done, I switched my focus back to the glibc
hardening problem.
The first thing I did was to… yes, reproduce the bug again. It had
been a few weeks since I had touched the package, after all. So I
built a hardened glibc with the latest GCC and… the bug did not
happen anymore!
Fortunately, the very first thing I thought was “this must be GCC”,
so I rebuilt the hardened glibc with GCC 14, and the bug was there
again. Huh, unexpected but very interesting.
Diving into glibc and libgcc
At this point, I was ready to start some serious debugging. And then
I got a message on Signal. It was one of those moments where two
minds think alike: Gabriel decided to check how I was doing, and I was
thinking about him because this involved glibc, and Gabriel
contributed to the project for many years. I explained what I was
doing, and he promptly offered to help. Yes, there are more people
who love low level debugging!
We spent several hours going through disassemblies of certain functions
(because we didn’t have any debug information in the beginning),
trying to make sense of what we were seeing. There was some heavy GDB
involved; unfortunately I completely lost the session’s history
because it was done inside a container running inside an ephemeral VM.
But we learned a lot. For example:
It was hard to actually understand the full stack trace leading to
uw_init_context_1[cold]. _Unwind_Backtrace obviously didn’t
call it (it called uw_init_context_1, but what was that [cold]
doing?). We had to investigate the disassembly of
uw_init_context_1 in order to determine where
uw_init_context_1[cold] was being called.
The [cold] suffix is a GCC function attribute that can be used to
tell the compiler that the function is unlikely to be reached. When
I read that, my mind immediately jumped to “this must be an
assertion”, so I went to the source code and found the spot.
We were able to determine that the return code of
uw_frame_state_for was 5, which means _URC_END_OF_STACK.
That’s why the assertion was triggering.
After finding these facts without debug information, I decided to bite
the bullet and recompiled GCC 14 with -O0 -g3, so that we could
debug what uw_frame_state_for was doing. After banging our heads a
bit more, we found that fde is NULL at this excerpt:
// ...
fde = _Unwind_Find_FDE (context->ra + _Unwind_IsSignalFrame (context) - 1,
                        &context->bases);
if (fde == NULL)
  {
#ifdef MD_FALLBACK_FRAME_STATE_FOR
    /* Couldn't find frame unwind info for this function.  Try a
       target-specific fallback mechanism.  This will necessarily
       not provide a personality routine or LSDA.  */
    return MD_FALLBACK_FRAME_STATE_FOR (context, fs);
#else
    return _URC_END_OF_STACK;
#endif
  }
// ...
We’re debugging on amd64, which means that
MD_FALLBACK_FRAME_STATE_FOR is defined and therefore is called. But
that’s not really important for our case here, because we had
established before that _Unwind_Find_FDE would never return NULL
when using a non-hardened glibc (or a glibc compiled with GCC 15). So
we decided to look into what _Unwind_Find_FDE did.
The function is complex because it deals with .eh_frame, but we
were able to pinpoint the exact location where find_fde_tail (one of
the functions called by _Unwind_Find_FDE) is returning NULL:
if (pc < table[0].initial_loc + data_base)
  return NULL;
We looked at the addresses of pc and table[0].initial_loc + data_base, and
found that the former fell within libgcc's text section, while the latter
fell within the text section of /lib/ld-linux-x86-64.so.2.
At this point, we were already too tired to continue. I decided to
keep looking at the problem later and see if I could get any further.
Bisecting GCC
The next day, I woke up determined to find what changed in GCC 15 that
caused the bug to disappear. Unless you know GCC’s internals like
they are your own home (which I definitely don’t), the best way to do
that is to git bisect the commits between GCC 14 and 15.
I spent a few days running the bisect. It took me more time than I’d
have liked to find the right range of commits to pass git bisect
(because of how branches and tags are done in GCC’s repository), and I
also had to write some helper scripts that:
Modified the gcc.yaml package definition to make it build with the
commit being bisected.
Built glibc using the GCC that was just built.
Ran tests inside a docker container (with the recently built glibc
installed) to determine whether the bug was present.
At the end, I had a commit to point to:
commit 99b1daae18c095d6c94d32efb77442838e11cbfb
Author: Richard Biener <rguenther@suse.de>
Date: Fri May 3 14:04:41 2024 +0200
tree-optimization/114589 - remove profile based sink heuristics
Makes sense, right?! No? Well, it didn’t for me either. Even after
reading what was changed in the code and the upstream bug fixed by the
commit, I was still clueless as to why this change “fixed” the problem
(I say “fixed” because it may very well be an unintended consequence
of the change, and some other problem might have been introduced).
Upstream takes over
After obtaining the commit that possibly fixed the bug, while talking
to Dann and explaining what I did, he suggested that I should file an
upstream bug and check with them. Great idea, of course.
It’s a bit long, very dense and complex, but ultimately upstream was
able to find the real problem and have a patch accepted in just two
days. Nothing like knowing the code base. The initial bug became:
In the end, the problem was indeed in how the linker defines
__ehdr_start, which, according to the code (from
elf/dl-support.c):
if (_dl_phdr == NULL)
  {
    /* Starting from binutils-2.23, the linker will define the
       magic symbol __ehdr_start to point to our own ELF header
       if it is visible in a segment that also includes the phdrs.
       So we can set up _dl_phdr and _dl_phnum even without any
       information from auxv.  */
    extern const ElfW(Ehdr) __ehdr_start attribute_hidden;
    assert (__ehdr_start.e_phentsize == sizeof *GL(dl_phdr));
    _dl_phdr = (const void *) &__ehdr_start + __ehdr_start.e_phoff;
    _dl_phnum = __ehdr_start.e_phnum;
  }
But the following definition is the problematic one (from elf/rtld.c):
This symbol (along with its counterpart, __ehdr_end) was being
run-time relocated when it shouldn’t be. The fix that was pushed
added optimization barriers to prevent the compiler from doing the
relocations.
I don’t claim to fully understand what was done here, and Jakub’s
analysis is a thing to behold, but in the end I was able to confirm
that the patch fixed the bug. And in the end, it was indeed a glibc
bug.
Conclusion
This was an awesome bug to investigate. It’s one of those that
deserve a blog post, even though some of the final details of the fix
flew over my head.
I’d like to start blogging more about this sort of bug, because I’ve
encountered my fair share of them throughout my career. And it was
great being able to do some debugging with another person, exchange
ideas, learn things together, and ultimately share that deep
satisfaction when we find why a crash is happening.
I have at least one more bug in my TODO list to write about (another
one with glibc, but this time I was able to get to the end of it and
come up with a patch). Stay tuned.
P.S.: After having published the post I realized that I forgot to
explain why the -z now and -fno-strict-aliasing flags were
important.
-z now is the flag that I determined to be the root cause of the
breakage. If I compiled glibc with every hardening flag except -z now, everything worked. So initially I thought that the problem had
to do with how ld.so was resolving symbols at runtime. As it turns
out, this ended up being more a symptom than the real cause of the
bug.
As for -fno-strict-aliasing, a Gentoo developer who commented on the
GCC bug above mentioned that this OpenSSF bug had a good point against
using this flag for hardening. I still have to do a deep dive on what
was discussed in the issue, but this is certainly something to take
into consideration. There’s this very good write-up about strict
aliasing in general if you’re interested in understanding it better.
Everybody is trying out AI assistants these days, so I figured I'd jump on that train and see how fast it derails.
I went with CodeRabbit because I've seen it on YouTube — ads work, I guess.
I am trying to answer the following questions:
Did the AI find things that humans did not find (or didn't bother to mention)?
Did the AI output help the humans with the review (useful summary etc)?
Did the AI output help the humans with the code (useful suggestions etc)?
Was the AI output misleading?
Was the AI output distracting?
To reduce the amount of output and not to confuse contributors, CodeRabbit was configured to only do reviews on demand.
What follows is a rather unscientific evaluation of CodeRabbit based on PRs in two Foreman-related repositories,
looking at the summaries CodeRabbit posted as well as the comments/suggestions it had about the code.
The summary CodeRabbit posted is technically correct.
This update introduces several changes across CI configuration, Ansible roles, plugins, and test playbooks. It expands CI test coverage to a new Ansible version, adjusts YAML key types in test variables, refines conditional logic in Ansible tasks, adds new default variables, and improves clarity and consistency in playbook task definitions and debug output.
Yeah, it does all of that, all right.
But it kinda misses the point that the addition here is "Ansible 2.19 support", which starts with adding it to the CI matrix and then adjusting the code to actually work with that version.
Also, the changes are not for "clarity" or "consistency", they are fixing bugs in the code that the older Ansible versions accepted, but the new one is more strict about.
Then it adds a table with the changed files and what changed in there.
To me, as the author, it felt redundant, and IMHO doesn't add any clarity to understand the changes.
(And yes, same "clarity" vs bugfix mistake here, but that makes sense as it apparently miss-identified the change reason)
And then the sequence diagrams…
They probably help if you have a dedicated change to a library or a library consumer,
but for this PR it's just noise, especially as it only covers two of the changes (addition of 2.19 to the test matrix and a change to the inventory plugin), completely ignoring other important parts.
Overall verdict: noise, don't need this.
comments posted
CodeRabbit also posted 4 comments/suggestions to the changes.
Guard against undefined result.task
IMHO a valid suggestion, even if on the picky side as I am not sure how to make it undefined here.
I ended up implementing it, even if with slightly different (and IMHO better readable) syntax.
Valid complaint? Probably.
Useful suggestion? So-So.
Wasted time? No.
Inconsistent pipeline in when for composite CV versions
That one was funny! The original complaint was that the when condition used slightly different data manipulation than the data that was passed when the condition was true.
The code was supposed to do "clean up the data, but only if there are any items left after removing the first 5, as we always want to keep 5 items".
And I do agree with the analysis that it's badly maintainable code.
But the suggested fix was to re-use the data in the variable we later use for performing the cleanup.
While this is (to my surprise!) valid Ansible syntax, it didn't make the code much more readable as you need to go and look at the variable definition.
The better suggestion then came from Ewoud: to compare the length of the data with the number we want to keep.
Humans, so smart!
But Ansible is not Ewoud's native turf, so he asked whether there is a more elegant way to count how much data we have than to use | list | count in Jinja (the data comes from a Python generator, so needs to be converted to a list first).
And the AI helpfully suggested to use | count instead!
However, count is just an alias for length in Jinja, so it behaves identically and needs a list.
Luckily the AI quickly apologized for being wrong after being pointed at the Jinja source and didn't try to waste my time any further.
Had I not known about the count alias, we'd have committed that suggestion and let CI fail before reverting it again.
Valid complaint? Yes.
Useful suggestion? Nope.
Wasted time? Yes.
Apply the same fix for non-composite CV versions
The very same complaint was posted a few lines later, as the logic there is very similar — just slightly different data to be filtered and cleaned up.
Interestingly, here the suggestion also was to use the variable.
But there is no variable with the data!
The text actually says one needs to "define" it, yet the "committable suggestion" doesn't contain that part.
Interestingly, when asked where it sees the "inconsistency" in that hunk, it said the inconsistency is with the composite case above.
That however is nonsense, as while we want to keep the same number of composite and non-composite CV versions,
the data used in the task is different — it even gets consumed by a totally different playbook — so there can't be any real consistency between the branches.
Valid complaint? Yes (the expression really could use some cleanup).
Useful suggestion? Nope.
Wasted time? Yes.
I ended up applying the same logic as suggested by Ewoud above, as that refactoring was possible in a consistent way.
Ensure consistent naming for Oracle Linux subscription defaults
One of the changes in Ansible 2.19 is that Ansible fails when there are undefined variables, even if they are only undefined for cases where they are unused.
CodeRabbit complains that the names of the defaults I added are inconsistent.
And that is technically correct.
But those names are already used in other places in the code, so I'd have to refactor more to make it work properly.
Once pointed at the fact that the variables already exist,
the AI is, as usual, quick to apologize, yay.
The repository module was updated to support additional parameters for repository synchronization and authentication. New options were added for ansible collections, ostree, Python packages, and yum repositories, including authentication tokens, filtering controls, and version retention settings. All changes were limited to module documentation and argument specification.
But it doesn't add anything you'd not get from looking at the diff, especially as it contains a large documentation chunk explaining those parameters.
No sequence diagram this time.
That's a good thing!
Overall verdict: noise (even if the amount is small), don't need this.
comments posted
CodeRabbit generated two comments for this PR.
Interestingly, none of them overlapped with the issues ansible-lint and friends found.
get rid of the FIXMEs
Yepp, that's fair
Valid complaint? Yes.
Useful suggestion? Nope. (But it's not possible in this case!)
Wasted time? No.
add validation for the new parameters
Yepp, I forgot these (not intentionally!).
The diff it suggests is nonsense, as it doesn't take into account the existing Ansible and Yum validations, but it clearly has read them as the style etc of the new ones matches.
It also managed to group the parameters correctly by repository type, so it's something.
if module.foreman_params['content_type'] != 'ansible_collection':
    invalid_list = [key for key in ['ansible_collection_requirements'] if key in module.foreman_params]
    if invalid_list:
        module.fail_json(msg="({0}) can only be used with content_type 'ansible_collection'".format(",".join(invalid_list)))
+
+# Validate ansible_collection specific parameters
+if module.foreman_params['content_type'] != 'ansible_collection':
+    invalid_list = [key for key in ['ansible_collection_auth_token', 'ansible_collection_auth_url'] if key in module.foreman_params]
+    if invalid_list:
+        module.fail_json(msg="({0}) can only be used with content_type 'ansible_collection'".format(",".join(invalid_list)))
+
+# Validate ostree specific parameters
+if module.foreman_params['content_type'] != 'ostree':
+    invalid_list = [key for key in ['depth', 'exclude_refs', 'include_refs'] if key in module.foreman_params]
+    if invalid_list:
+        module.fail_json(msg="({0}) can only be used with content_type 'ostree'".format(",".join(invalid_list)))
+
+# Validate python package specific parameters
+if module.foreman_params['content_type'] != 'python':
+    invalid_list = [key for key in ['excludes', 'includes', 'package_types', 'keep_latest_packages'] if key in module.foreman_params]
+    if invalid_list:
+        module.fail_json(msg="({0}) can only be used with content_type 'python'".format(",".join(invalid_list)))
+
+# Validate yum specific parameter
+if module.foreman_params['content_type'] != 'yum' and 'upstream_authentication_token' in module.foreman_params:
+    module.fail_json(msg="upstream_authentication_token can only be used with content_type 'yum'")
Interestingly, it also said "Note: If 'python' is not a valid content_type, please adjust the validation accordingly." which is quite a hint at a bug in itself.
The module currently does not even allow to create content_type=python repositories.
That should have been more prominent, as it's a BUG!
Valid complaint? Yes.
Useful suggestion? Mostly (I only had to merge the Yum and Ansible branches with the existing code).
It did misinterpret the change to a test playbook as an actual "behavior" change:
"Introduced new playbook variables for database configuration" — there is no database configuration in this repository, just the test playbook using the same metadata as a consumer of the library.
Later on it does say "Playbook metadata and test fixtures", so… unclear whether this is a misinterpretation or just badly summarized.
As long as you also look at the diff, it won't confuse you, but if you're using the summary as the sole source of information (bad!) it would.
This time the sequence diagram is actually useful, yay.
Again, not 100% accurate: it's missing the fact that saving the parameters is hidden behind an "if enabled" flag — something it did represent correctly for loading them.
Overall verdict: not really useful, don't need this.
comments posted
Here I was a bit surprised, especially as the nitpicks were useful!
Persist-path should respect per-user state locations (nitpick)
My original code used os.environ.get('OBSAH_PERSIST_PATH', '/var/lib/obsah/parameters.yaml') for the location of the persistence file.
CodeRabbit correctly pointed out that this won't work for non-root users and one should respect XDG_STATE_HOME.
Ewoud did point that out in his own review, so I am not sure whether CodeRabbit came up with this on its own, or also took the human comments into account.
The suggested code seems fine too — just doesn't use /var/lib/obsah at all anymore.
This might be a good idea for the generic library we're working on here, and then be overridden to a static /var/lib path in a consumer (which always runs as root).
In the end I did not implement it, but mostly because I was lazy and was sure we'd override it anyway.
Valid complaint? Yes.
Useful suggestion? Yes.
Wasted time? Nope.
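For the curious, the direction of that suggestion is roughly this (my own sketch, not CodeRabbit's verbatim diff):

import os

def persist_path():
    # fall back to the XDG default (~/.local/state) if XDG_STATE_HOME is unset
    state_home = os.environ.get('XDG_STATE_HOME',
                                os.path.expanduser('~/.local/state'))
    return os.environ.get('OBSAH_PERSIST_PATH',
                          os.path.join(state_home, 'obsah', 'parameters.yaml'))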
Positional parameters are silently excluded from persistence (nitpick)
The library allows you to generate both positional (foo without --) and non-positional (--foo) parameters, but the code I wrote would only ever persist non-positional parameters.
This was intentional, but there is no documentation of the intent in a comment — which the rabbit thought would be worth pointing out.
It's a fair nitpick and I ended up adding a comment.
Valid complaint? Yes.
Useful suggestion? Yes.
Wasted time? Nope.
Enforce FQDN validation for database_host
The library has a way to perform type checking on passed parameters, and one of the supported types is "FQDN" — so a fully qualified domain name, with dots and stuff.
The test playbook I added has a database_host variable, but I didn't bother adding a type to it, as I don't really need any type checking here.
While using "FQDN" might be a bit too strict here — technically a working database connection can also use a non-qualified name or an IP address, I was positively surprised by this suggestion.
It shows that the rest of the repository was taken into context when preparing the suggestion.
Valid complaint? In the context of a test, no. Would that be a real command definition, yes.
Useful suggestion? Yes.
Wasted time? Nope.
reset_args() can raise AttributeError when a key is absent
This is a correct finding, the code is not written in a way that would survive if it tries to reset things that are not set.
However, that's only true for the case where users pass in --reset-<parameter> without ever having set the parameter before.
The complaint about the part where the parameter is part of the persisted set but not in the parsed args is wrong — as parsed args inherit from the persisted set.
The suggested code is not well readable, so I ended up fixing it slightly differently.
Valid complaint? Mostly.
Useful suggestion? Meh.
Wasted time? A bit.
Persisted values bypass argparse type validation
When persisting, I just yaml.safe_dump the parsed parameters, which means the YAML will contain native types like integers.
The argparse documentation warns that the type checking argparse does only applies to strings and is skipped if you pass anything else (via default values).
While correct, it doesn't really hurt here as the persisting only happens after the values were type-checked.
So there is not really a reason to type-check them again.
Well, unless the type changes, anyway.
Not sure what I'll do with this comment.
Valid complaint? Nah.
Useful suggestion? Nope.
Wasted time? Not much.
consider using contextlib.suppress
This was added when I asked CodeRabbit for a re-review after pushing some changes.
Interestingly, the PR already contained try: … except: pass code before, and it did not flag that.
Also, the code suggestion contained import contextlib in the middle of the code, instead of at the top of the file.
Who would do that?!
But the comment as such was valid, so I fixed it in all places it is applicable, not only the one the rabbit found.
Valid complaint? Yes.
Useful suggestion? Nope.
Wasted time? Nope.
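For reference, the shape of that fix (with the import where it belongs) is simply:

import contextlib

# equivalent to "try: ... except KeyError: pass", just more explicit;
# "persisted" and "key" are illustrative names
with contextlib.suppress(KeyError):
    del persisted[key]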
workaround to ensure LCE and CV are always sent together
A workaround was added to the _update_entity method in the ForemanAnsibleModule class to ensure that when updating a host, both content_view_id and lifecycle_environment_id are always included together in the update payload. This prevents partial updates that could cause inconsistencies.
Partial updates are not a thing.
The workaround is purely for the fact that Katello expects both parameters to be sent,
even if only one of them needs an actual update.
No diagram, good.
Overall verdict: misleading summaries are bad!
comments posted
Given a small patch, there was only one comment.
Implementation looks correct, but consider adding error handling for robustness.
This reads correct at first glance.
More error handling is always better, right?
But if you dig into the argumentation, you see it's wrong.
Either:
we're working with a Katello setup and the host we're updating has content, so CV and LCE will be present
we're working with a Katello setup and the host has no content (yet), so CV and LCE will be "updated" and we're not running into the workaround
we're working with a plain Foreman, then both parameters are not even accepted by Ansible
The AI accepted defeat once I asked it to analyze things in more detail, but why did I have to ask in the first place?!
Valid complaint? Nope.
Useful suggestion? Nope.
Wasted time? Yes, as I've actually tried to come up with a case where it can happen.
Summary
Well, idk, really.
Did the AI find things that humans did not find (or didn't bother to mention)?
Yes. It's debatable whether these were useful (see e.g. the database_host example), but I tend to be in the "better to nitpick/suggest more and dismiss than overlook" team, so IMHO a positive win.
Did the AI output help the humans with the review (useful summary etc)?
In my opinion it did not.
The summaries were either "lots of words, no real value" or plain wrong.
The sequence diagrams were not useful either.
Luckily all of that can be turned off in the settings, which is what I'd do if I'd continue using it.
Did the AI output help the humans with the code (useful suggestions etc)?
While the actual patches it posted were "meh" at best, there were useful findings that resulted in improvements to the code.
Was the AI output misleading?
Absolutely! The whole Jinja discussion would have been easier without the AI "help".
Same applies for the "error handling" in the workaround PR.
Was the AI output distracting?
The output is certainly a lot, so yes I think it can be distracting.
As mentioned, I think dropping the summaries can make the experience less distracting.
What does all that mean?
I will disable the summaries for the repositories, but will leave the @coderabbitai review trigger active if someone wants an AI-assisted review.
This won't be something that I'll force on our contributors and maintainers, but they surely can use it if they want.
But I don't think I'll be using this myself on a regular basis.
Yes, it can be made "usable". But so can vim ;-)
Also, I'd prefer to have a junior human asking all the questions and making bad suggestions, so they can learn from it, and not some planet burning machine.
I'm lucky enough to have a weird niche ISP available to me, so I'm paying $35 a month for around 600MBit symmetric data. Unfortunately they don't offer static IP addresses to residential customers, and nor do they allow multiple IP addresses per connection, and I'm the sort of person who'd like to run a bunch of stuff myself, so I've been looking for ways to manage this.
What I've ended up doing is renting a cheap VPS from a vendor that lets me add multiple IP addresses for minimal extra cost. The precise nature of the VPS isn't relevant - you just want a machine (it doesn't need much CPU, RAM, or storage) that has multiple world routeable IPv4 addresses associated with it and has no port blocks on incoming traffic. Ideally it's geographically local and peers with your ISP in order to reduce additional latency, but that's a nice to have rather than a requirement.
By setting that up you now have multiple real-world IP addresses that people can get to. How do we get them to the machine in your house you want to be accessible? First we need a connection between that machine and your VPS, and the easiest approach here is Wireguard. We only need a point-to-point link, nothing routable, and none of the IP addresses involved need to have anything to do with any of the rest of your network. So, on your local machine you want something like:
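[Interface]
# key and addresses are illustrative; only consistency between the ends matters
PrivateKey = <local machine private key>
Address = 867.420.696.005/32

[Peer]
PublicKey = <VPS public key>
# 51820 is just the usual WireGuard port; use whatever the VPS listens on
Endpoint = <VPS public IP>:51820
# vpswgaddr: the point-to-point address you gave the VPS end of the link
AllowedIPs = vpswgaddr/32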
The addresses here are (other than the VPS address) arbitrary - but they do need to be consistent, otherwise Wireguard is going to be unhappy and your packets will not have a fun time. Bring that interface up with wg-quick and make sure the devices can ping each other. Hurrah! That's the easy bit.
Now you want packets from the outside world to get to your internal machine. Let's say the external IP address you're going to use for that machine is 321.985.520.309 and the wireguard address of your local system is 867.420.696.005. On the VPS, you're going to want to do:
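# rewrite anything arriving for the public address to the Wireguard peer
iptables -t nat -A PREROUTING -d 321.985.520.309 -j DNAT --to-destination 867.420.696.005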
Now, all incoming packets for 321.985.520.309 will be rewritten to head towards 867.420.696.005 instead (make sure you've set net.ipv4.ip_forward to 1 via sysctl!). Victory! Or is it? Well, no.
What we're doing here is rewriting the destination address of the packets so instead of heading to an address associated with the VPS, they're now going to head to your internal system over the Wireguard link. Which is then going to ignore them, because the AllowedIPs statement in the config only allows packets coming from your VPS, and these packets still have their original source IP. We could rewrite the source IP to match the VPS IP, but then you'd have no idea where any of these packets were coming from, and that sucks. Let's do something better. On the local machine, in the peer, let's update AllowedIPs to 0.0.0.0/0 to permit packets from any source to appear over our Wireguard link. But if we bring the interface up now, it'll try to route all traffic over the Wireguard link, which isn't what we want. So we'll add table = off to the interface stanza of the config to disable that, and now we can bring the interface up without breaking everything but still allowing packets to reach us. However, we do still need to tell the kernel how to reach the remote VPN endpoint, which we can do with ip route add vpswgaddr dev wg0. Add this to the interface stanza as:
PostUp = ip route add vpswgaddr dev wg0
PreDown = ip route del vpswgaddr dev wg0
That's half the battle. The problem is that they're going to show up there with the source address still set to the original source IP, and your internal system is (because Linux) going to notice it has the ability to just send replies to the outside world via your ISP rather than via Wireguard and nothing is going to work. Thanks, Linux. Thinux.
But there's a way to solve this - policy routing. Linux allows you to have multiple separate routing tables, and define policy that controls which routing table will be used for a given packet. First, let's define a new table reference. On the local machine, edit /etc/iproute2/rt_tables and add a new entry that's something like:
1 wireguard
where "1" is just a standin for a number not otherwise used there. Now edit your wireguard config and replace table=off with table=wireguard - Wireguard will now update the wireguard routing table rather than the global one. Now all we need to do is to tell the kernel to push packets into the appropriate routing table - we can do that with ip rule add from localaddr lookup wireguard, which tells the kernel to take any packet coming from our Wireguard address and push it via the Wireguard routing table. Add that to your Wireguard interface config as:
PostUp = ip rule add from localaddr lookup wireguard
PreDown = ip rule del from localaddr lookup wireguard
And now your local system is effectively on the internet.
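Putting all the pieces together, the local machine's config ends up looking roughly like this (same placeholder names as before; the key and port fields are illustrative):

[Interface]
PrivateKey = <local machine private key>
Address = 867.420.696.005/32
Table = wireguard
PostUp = ip route add vpswgaddr dev wg0
PreDown = ip route del vpswgaddr dev wg0
PostUp = ip rule add from 867.420.696.005 lookup wireguard
PreDown = ip rule del from 867.420.696.005 lookup wireguard

[Peer]
PublicKey = <VPS public key>
Endpoint = <VPS public IP>:51820
AllowedIPs = 0.0.0.0/0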
You can do this for multiple systems - just configure additional Wireguard interfaces on the VPS and make sure they're all listening on different ports. If your local IP changes then your local machines will end up reconnecting to the VPS, but to the outside world their accessible IP address will remain the same. It's like having a real IP without the pain of convincing your ISP to give it to you.
The Internet has changed a lot in the last 40+ years. Fads have come and gone.
Network protocols have been designed, deployed, adopted, and abandoned.
Industries have come and gone. The types of people on the internet have changed
a lot. The number of people on the internet has changed a lot, creating an
information medium unlike anything ever seen before in human history. There’s a
lot of good things about the Internet as of 2025, but there’s also an
inescapable hole in what it used to be, for me.
I miss being able to throw a site up to send around to friends to play with
without worrying about hordes of AI-feeding HTML combine harvesters DoS-ing my
website, costing me thousands in network transfer for the privilege. I miss
being able to put a lightly authenticated game server up and not worry too much
at night – wondering if that process is now mining bitcoin. I miss being able
to run a server in my home closet. Decades of cat and mouse games have rendered
running a mail server nearly impossible. Those who are “brave” enough to try
are met with weekslong stretches of delivery failures and countless hours
yelling ineffectually into a pipe that leads from the cheerful lobby of some
disinterested corporation directly into a void somewhere 4 layers below ground
level.
I miss the spirit of curiosity, exploration, and trying new things. I miss
building things for fun without having to worry about being too successful,
after which “security” offices start demanding my supplier paperwork in
triplicate as heartfelt thanks from their engineering teams. I miss communities
that are run because it is important to them, not for ad revenue. I miss
community operated spaces and having more than four websites that are all full
of nothing except screenshots of each other.
Every other page I find myself on now has an AI generated click-bait title,
shared for rage-clicks all brought-to-you-by-our-sponsors, completely covered
wall-to-wall with popup modals, telling me how much they respect my privacy,
with the real content hidden at the bottom bracketed by deceptive ads served by
companies that definitely know which new coffee shop I went to last month.
This is wrong, and those who have seen what was know it.
I can’t keep doing it. I’m not doing it any more. I reject the notion that
this is as it needs to be. It is wrong. The hole left in what the Internet used
to be must be filled. I will fill it.
What comes before part b?
Throughout the 2000s, some of my favorite memories were from LAN parties at my
friends’ places. Dragging your setup somewhere, long nights playing games,
goofing off, even building software all night to get something working—being
able to do something fiercely technical in the context of a uniquely social
activity. It wasn’t really much about the games or the projects—it was an
excuse to spend time together, just hanging out. A huge reason I learned so
much in college was that campus was a non-stop LAN party – we could freely
stand up servers, talk between dorms on the LAN, and hit my dorm room computer
from the lab. Things could go from individual to social in the matter of
seconds. The Internet used to work this way—my dorm had public IPs handed out
by DHCP, and my workstation could serve traffic from anywhere on the internet.
I haven’t been back to campus in a few years, but I’d be surprised if this were
still the case.
In December of 2021, three of us got together and connected our houses together
in what we now call The Promised LAN. The idea is simple—fill the hole we feel
is gone from our lives. Build our own always-on 24/7 nonstop LAN party. Build a
space that is intrinsically social, even though we’re doing technical things.
We can freely host insecure game servers or one-off side projects without
worrying about what someone will do with it.
Over the years, it’s evolved very slowly—we haven’t pulled any all-nighters.
Our mantra has become “old growth”, building each layer carefully. As of May
2025, the LAN is now 19 friends running around 25 network segments. Those 25
networks are connected to 3 backbone nodes, exchanging routes and IP traffic
for the LAN. We refer to the set of backbone operators as “The Bureau of LAN
Management”. Combined decades of operating critical infrastructure have
driven The Bureau to make a set of well-understood, boring, predictable,
interoperable and easily debuggable decisions to make this all happen.
Nothing here is exotic or even technically interesting.
Applications of trusting trust
The hardest part, however, is rejecting the idea that anything outside our own
LAN is untrustworthy—nearly irreversible damage inflicted on us by the
Internet. We have solved this by not solving it. We strictly control
membership—the absolute hard minimum for joining the LAN requires 10 years of
friendship with at least one member of the Bureau, with another 10 years of
friendship planned. Members of the LAN can veto new members even if all other
criteria are met. Even with those strict rules, there’s no shortage of friends
that meet the qualifications—but we are not equipped to take that many folks
on. It’s hard to join—both socially and technically. Doing something malicious
on the LAN requires a lot of highly technical effort upfront, and it would
endanger a decade of friendship. We have relied on those human, social,
interpersonal bonds to bring us all together. It’s worked for the last 4 years,
and it should continue working until we think of something better.
We assume roommates, partners, kids, and visitors all have access to The
Promised LAN. If they’re let into our friends' networks, that trust extends
transitively to us—I trust them to be on mine. This LAN is
not for “security”, rather, the network border is a social one. Benign
“hacking”—in the original sense of misusing systems to do fun and interesting
things—is encouraged. Robust ACLs and firewalls on the LAN are, by definition,
an interpersonal—not technical—failure. We all trust every other network
operator to run their segment in a way that aligns with our collective values
and norms.
Over the last 4 years, we’ve grown our own culture and fads—around half of the
people on the LAN have thermal receipt printers with open access, for printing
out quips or jokes on each other’s counters. It’s incredible how much network
transport and a trusting culture gets you—there’s a 3-node IRC network, exotic
hardware to gawk at, radios galore, a NAS storage swap, LAN only email, and
even a SIP phone network of “redphones”.
DIY
We do not wish to, nor will we, rebuild the internet. We do not wish to, nor
will we, scale this. We will never be friends with enough people, as hard as we
may try. Participation hinges on us all having fun. As a result, membership
will never be open, and we will never have enough connected LANs to deal with
the technical and social problems that start to happen with scale. This is a
feature, not a bug.
This is a call for you to do the same. Build your own LAN. Connect it with
friends’ homes. Remember what is missing from your life, and fill it in. Use
software you know how to operate and get it running. Build slowly. Build your
community. Do it with joy. Remember how we got here. Rebuild a community space
that doesn’t need to be mediated by faceless corporations and ad revenue. Build
something sustainable that brings you joy. Rebuild something you use daily.
Yesterday I took some time to upload the current state of what will
eventually become vym 3 to experimental. If you're a user of this
tool you can give it a try, but be aware that the file format has changed and
can't be processed with vym releases before 2.9.500! So it's
important to keep a backup until you're sure that you're ready
to move on. On the technical side this is also the switch from Qt5 to Qt6.
I was not aware that one can write bad Markdown, since Markdown has such a
simple syntax, that I thought you just write, and it’s fine. Naïve, I know!
I’ve started editing the files for this blog/site with Visual Studio Code too,
and I had from another project the markdown lint
extension
installed, so as I was opening old files, more and more problems appeared. On a
whim, I searched and found the “lint all files” command, and after running it,
oops—more than 400 problems!
Now, some of them were entirely trivial and a matter of subjective style, like
mixing both underscore and asterisk for emphasis in a single file, and asterisks
and dashes for list items. Others, seemingly trivial like tab indentation, were
actually also causing rendering issues, so fixing that solved a real cosmetic
issue.
But some of the issues flagged were actual problems. For example, one sentence
I had was:
there seems to be some race condition between <something> and ntp
Here “something” was interpreted as an (invalid) HTML tag, and not rendered at
all.
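One way to fix that is to keep such placeholders out of HTML-tag territory, for example by marking them as inline code:

    there seems to be some race condition between `<something>` and ntp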
Another problem, but more minor, was that I had links to Wikipedia with spaces
in the link name, which Visual Studio Code breaks at the first space, rather
than using encoded spaces or underscores, as Wikipedia generates today. In the
rendered output, Pandoc seemed to do the right thing though.
However, the most interesting issue flagged was non-descriptive link text, i.e. links of the form:
for more details, see [here](http://example.com).
Which works for non-visually impaired people, but not for people using assistive
technologies. And while trying to fix this, it turns out that you can do much
better, for everyone, because “here” is really non-descriptive. You can use
either the content as label (“an article about configuring BIND”), or the
destination (“an article on this-website”), rather than the plain “here”.
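As a concrete (made-up) example with the same placeholder URL:

    for more details, see [an article about configuring BIND](http://example.com).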
The only check I had to tweak was the trailing punctuation
check in headers, as I really like to write headers that end with exclamation
marks. I like exclamation marks in general! So why not use them in headers too?
The question mark is allowlisted by default, though I use it rarely.
During the changes/tweaks, I also did random improvements, but I didn’t change
the updated tag, since most of them were minor. But a non-minor thing was
tweaking the CSS for code blocks, since I had a really stupid non-symmetry
between top and bottom padding (5px vs 0), which I don’t know where it came
from. But the MDN article on
padding has as an
example exactly what I had (except combined, I had it split). Did I just copy
blindly? Possible…
So, all good then, and I hope this doesn’t trigger a flow of updates on any
aggregators, since all the changes were really trivial. And while I don’t write
often, I did touch about 60 posts or pages, ouch! Who knew that changing editors
can have such a large impact 😆
This time I seem to be settling on either Commit Mono or Space
Mono. For now I'm using Commit Mono because it's a little more
compressed than Fira and does have an italic version. I don't like how
Space Mono's parentheses (()) are "squarish"; they feel visually
ambiguous with the square brackets ([]), a big no-no for my primary
use case (code).
So here I am using a new font, again. It required changing a bunch of
configuration files in my home directory (which is in a private
repository, sorry) and Emacs configuration (thankfully that's
public!).
One gotcha is that I realized I didn't actually have a global font
configuration in Emacs, as some Faces define their own font
family, which overrides the frame defaults.
This is what it looks like, before:
Fira Mono
After:
Commit Mono
(Notice how those screenshots are not sharp? I'm surprised too. The
originals look sharp on my display, I suspect this is something to
do with the Wayland transition. I've tried with both grim and
flameshot, for what it's worth. Update: turns out this is a really
complicated issue having to do with displaying images as well as
screenshots, see the issues in shotman and grim.)
And here is an update of those in a single screenshot with the new
test sheet:
Fira and Commit mono with the new test sheet, generated
with foot -W 80x63 -T pop-up -f 'Commit mono:size=12' --hold sh -c
"sed -n '/```/,/```/{/```/d;p}' *fonts-again.md ; printf 'Commit
mono'" 2>/dev/null and foot -W 80x61 -T pop-up -f 'Fira
mono:size=12' --hold sh -c "sed -n '/```/,/```/{/```/d;p}'
*fonts-again.md ; printf 'Fira mono'" 2>/dev/null.
They are pretty similar! Commit Mono feels a bit more vertically
compressed, maybe too much so actually -- the line height feels too
low. But it's heavily customizable so that's something that's
relatively easy to fix, if it's really a problem. Its weight is also a
little heavier and wider than Fira which I find a little distracting
right now, but maybe I'll get used to it.
I like how the ampersand (&) is more traditional, although I'll miss
the exotic one Fira produced... I like how the back quotes (`,
GRAVE ACCENT) drop down low, nicely aligned with the apostrophe. As
I mentioned before, I like how the bar on the "f" aligns with the
tops of the other letters, something in Fira Mono that really annoys me now
that I've noticed it (it's not aligned!).
A UTF-8 test file
Here's the test sheet I've made up to test various characters. I could
have sworn I had a good one like this lying around somewhere but
couldn't find it so here it is, I guess.
So there you have it, got completely nerd swiped by typography
again. Now I can go back to writing a too-long proposal again.
Sources and inspiration for the above:
the unicode(1) command, to lookup individual characters to
disambiguate, for example, - (U+002D HYPHEN-MINUS, the minus
sign next to zero on US keyboards) and − (U+2212 MINUS SIGN, a
math symbol)
searchable list of characters and their names - roughly
equivalent to the unicode(1) command, but in one page, amazingly
the /usr/share/unicode database doesn't have any one file like
this
UTF-8 encoded plain text file - nice examples of edge cases,
curly quotes example and box drawing alignment test which,
incidentally, showed me I needed specific faces customisation in
Emacs to get the Markdown code areas to display properly, also the
idea of comparing various dashes
In my previous blog post about fonts, I
had a list of alternative fonts, but it seems people are not digging
through this, so I figured I would redo the list here to preempt "but
have you tried Jetbrains mono" kind of comments.
My requirements are:
no ligatures: yes, in the previous post, I wanted ligatures but
I have changed my mind. After testing this, I find them distracting,
confusing, and they often break the monospace nature of the display
(note that some folks wrote Emacs code to selectively enable
ligatures, which is an interesting compromise)
monospace: this is to display code
italics: often used when writing Markdown, where I do make use of
italics... Emacs falls back to underlining text when lacking italics
which is hard to read
free-ish, ultimately should be packaged in Debian
Here is the list of alternatives I have considered in the past and why
I'm not using them:
agave: recommended by tarzeau, not sure I like the lowercase
a, a bit too exotic, packaged as fonts-agave
Cascadia code: optional ligatures, multilingual, not liking the
alignment, ambiguous parenthesis (look too much like square
brackets), new default for Windows Terminal and Visual Studio,
packaged as fonts-cascadia-code
Fira Code: ligatures, was using Fira Mono from which it is derived,
lacking italics except for forks, interestingly, Fira Code succeeds
the alignment test but Fira Mono fails to show the X signs properly!
packaged as fonts-firacode
Hack: no ligatures, very similar to Fira, italics, good
alternative, fails the X test in box alignment, packaged as
fonts-hack
IBM Plex: irritating website, replaces Helvetica as the IBM
corporate font, no ligatures by default, italics, proportional alternatives,
serifs and sans, multiple languages, partial failure in box alignment test (X signs),
fancy curly braces contrast perhaps too much with the rest of the
font, packaged in Debian as fonts-ibm-plex
Inconsolata: no ligatures, maybe italics? more compressed than
others, feels a little out of balance because of that, packaged in
Debian as fonts-inconsolata
Intel One Mono: nice legibility, no ligatures, alignment issues
in box drawing, not packaged in Debian
Iosevka: optional ligatures, italics, multilingual, good
legibility, has a proportional option, serifs and sans, line height
issue in box drawing, fails dash test, not in Debian
Monoid: optional ligatures, feels much "thinner" than
Jetbrains, not liking alignment or spacing on that one, ambiguous
2Z, problems rendering box drawing, packaged as fonts-monoid
Mononoki: no ligatures, looks good, good alternative, suggested
by the Debian fonts team as part of fonts-recommended, problems
rendering box drawing, em dash bigger than en dash, packaged as
fonts-mononoki
spleen: bitmap font, old school, spacing issue in box drawing
test, packaged as fonts-spleen
sudo: personal project, no ligatures, zero originally not
dotted, relied on metrics for legibility, spacing issue in box
drawing, not in Debian
victor mono: italics are cursive by default (distracting),
ligatures by default, looks good, more compressed than commit mono,
good candidate otherwise, has a nice and compact proof sheet
So, if I get tired of Commit Mono, I will probably try, in order:
Hack
Jetbrains Mono
IBM Plex Mono
Iosevka, Mononoki and Intel One Mono are also good options, but have
alignment problems. Iosevka is particularly disappointing as the EM
DASH metrics are just completely wrong (much too wide).
Also note that there is now a package in Debian called fnt to
manage fonts like this locally, including in-line previews (that don't
work in bookworm but should be improved in trixie and later).
Today we reconnect to a previous post, namely #36
on pub/sub for live market monitoring with R and Redis. It
introduced both Redis as well as the
(then fairly recent) extensions to RcppRedis to
support the publish-subscribe (“pub/sub”) model of Redis. In short, it manages both subscribing
clients as well as producers for live, fast and lightweight data
transmission. Using pub/sub is generally more efficient than the
(conceptually simpler) ‘poll-sleep’ loops, as polling creates CPU and
network load. Subscriptions are lighter weight as they get notified; they
are also a little (but not much!) more involved as they require a
callback function.
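(For a quick feel of the model, the stock command-line client can play both roles; the channel name and payload here are made up:)

    # terminal 1: block and wait for messages on a channel
    redis-cli subscribe quotes
    # terminal 2: publish a message to that channel
    redis-cli publish quotes "AAPL 212.33"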
We should mention that Redis has a
recent fork in Valkey that arose when
the former did one of these not-uncommon-among-db-companies license
suicides—which, happy to say, they reversed more recently—so that we now
have both the original as well as this leading fork (among others). Both
work, the latter is now included in several Linux distros, and the C
library hiredis used to
connect to either is still licensed permissively as well.
All this came about because Yahoo! Finance recently had another
‘hiccup’ in which they changed something, leaving some data clients
hiccuping. This includes the GNOME applet Stocks Extension
I had been running. There is a lively discussion on its issue
#120 suggesting, for example, a curl wrapper (which then makes each
access a new system call).
Separating data acquisition and presentation
becomes an attractive alternative, especially given how the standard
Python and R accessors to the Yahoo! Finance service continued to work
(and how per post
#36 I already run data acquisition). Moreover, and somewhat
independently, it occurred to me that the cute (and both funny in its
pun, and very pretty in its display) ActivateLinux
program might offer an easy-enough way to display updates on the
desktop.
There were two aspects to address. First, the subscription side
needed to be covered in either plain C or C++. That, it turns out, is
very straightforward: there is existing documentation and there are prior
examples (e.g. at StackOverflow) as well as the ability to have an LLM
generate a quick stanza as I did with Claude. A modified variant is now
in the example
repo ‘redis-pubsub-examples’ in file subscriber.c.
It is deliberately minimal and the directory does not even have a
Makefile: just compile and link against both
libevent (for the event loop controlling this) and
libhiredis (for the Redis or Valkey connection). This
should work on any standard Linux (or macOS) machine with those two
(very standard) libraries installed.
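(Something along these lines should do; the output name is arbitrary:)

    # build the minimal subscriber, linking the two libraries mentioned above
    cc -o subscriber subscriber.c -levent -lhiredis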
The second aspect was trickier. While we can get Claude to modify the
program to also display under x11, it still uses a single controlling
event loop. It took a little bit of probing on my end to understand
how to modify (the x11 use of) ActivateLinux,
but as always it was reasonably straightforward in the end: instead of
one single while loop awaiting events, we now first check
for pending events and deal with them if present, but otherwise do not
idle and wait but continue … in another loop that also checks on the Redis or Valkey “pub/sub” events. So two thumbs up
to vibe coding
which clearly turned me into an x11-savvy programmer too…
The result is in a new (and currently fairly bare-bones) repo almm. It includes all
files needed to build the application, borrowed with love from ActivateLinux
(which is GPL-licensed, as is of course our minimal extension) and adds
the minimal modifications we made, namely linking with
libhiredis and some minimal changes to
x11/x11.c. (Supporting wayland as well is on the TODO list,
and I also need to release a new RcppRedis version
to CRAN as one currently needs
the GitHub version.)
We also made a simple mp4 video with a sound overlay which describes
the components briefly:
Comments and questions welcome. I will probably add a little bit of
command-line support to almm. Selecting the
symbol subscribed to is currently done in the most minimal way via
environment variable SYMBOL (NB: not SYM as
the video using the default value shows). I also worked out how to show
the display on only one of my multiple monitors, so I may add an explicit
screen id selector too. A little bit of discussion (including minimal Docker use around r2u) is also in issue
#121 where I first floated the idea of having StocksExtension
listen to Redis (or Valkey). Other suggestions are most
welcome, please use issue tickets at the almm repository.
I have a few pictures on this blog, mostly in earlier years, because even with
small pictures, the git repository soon became 80MiB—this is not much in
absolute terms, but the actual Markdown/Haskell/CSS/HTML total size is tiny
compared to the pictures, PDFs and fonts. I realised I needed a better solution,
probably about ten years ago, and that I should investigate
git-annex. Then time passed, and I heard
about git-lfs, so I thought that’s the way forward.
Now, I recently got interested again into doing something about this repository,
and started researching.
Detour: git-lfs
I was sure that git-lfs, being supported by large providers, would be the
modern solution. But to my surprise, git-lfs is very server centric, which in
hindsight makes sense, but for a home setup, it’s not very good. Maybe I
misunderstood, but git-lfs is more a protocol/method for a forge to store
files, rather than an end-user solution. But then you need to back up those files
separately (together with the rest of the forge), or implement another way of
safeguarding them.
Further details such as the fact that it keeps two copies of the files (one in
the actual checked-out tree, one in internal storage) means it’s not a good
solution. Well, for my blog yes, but not in general. Then posts on Reddit about
horror stories—people being locked out of github due to quota, as an example, or
this Stack Overflow
post
about git-lfs constraining how one uses git, convinced me that’s not what I
want. To each their own, but not for me—I might want to push this blog’s repo to
github, but I definitely wouldn’t want in that case to pay for github storage
for my blog images (which are copies, not originals). And yes, even in 2025,
those quotas are real—GitHub
limits—and
I agree with GitHub, storage and large bandwidth can’t be free.
Back to the future: git-annex
So back to git-annex. I thought it was going to be a simple thing, but oh boy,
was I wrong. It took me half a week of continuous (well, in free time) reading
and discussions with LLMs to understand a bit how it works. I think, honestly,
it’s a bit too complex, which is why the workflows
page lists seven (!) levels of
workflow complexity, from fully-managed, to fully-manual. IMHO, respect to the
author for the awesome tool, but if you need a web app to help you manage git,
it hints that the tool is too complex.
I made the mistake of running git annex sync once, to realise it actually
starts pushing to my upstream repo and creating new branches and whatnot, so
after enough reading, I settled on workflow 6/7, since I don’t want another tool
to manage my git history. Maybe I’m an outlier here, but everything “automatic”
is a bit too much for me.
Once you do manage to understand how git-annex works (on the surface, at least), it
is a pretty cool thing. It uses a git-annex git branch to store
metainformation, and that is relatively clean. If you do run git annex sync,
it creates some extra branches, which I don’t like, but meh.
Trick question: what is a remote?
One of the most confusing things about git-annex was understanding its “remote”
concept. I thought a “remote” is a place where you replicate your data. But no,
that’s a special remote. A normal remote is a git remote, but one which is
expected to be git/ssh/with command line access. So if you have a git+ssh
remote, git-annex will not only try to push its above-mentioned branch, but
also copy the files. If such a remote is on a forge that doesn’t support
git-annex, then it will complain and get confused.
Of course, if you read the extensive docs, you just do git config
remote.<name>.annex-ignore true, and it will understand that it should not
“sync” to it.
But, aside from this case, git-annex expects that all checkouts and clones of
the repository are both metadata and data. And if you do any annex commands in
them, all other clones will know about them! This can be unexpected, and you
find people complaining about it, but nowadays there’s a solution:
git clone … dir && cd dir
git config annex.private true
git annex init "temp copy"
This is important. Any “leaf” git clone must be followed by that annex.private true config, especially on CI/CD machines. Honestly, I don’t understand why
by default clones should be official data stores, but it is what it is.
I settled on not making any of my checkouts “stable”, but only the actual
storage places. Except those are not git repositories, but just git-annex
storage things. I.e., special remotes.
Is it confusing enough yet? 😄
Special remotes
The special remotes, as said, are what I expected to be the normal git-annex
remotes, i.e. places where the data is stored. But well, they exist, and while
I’m only using a couple of simple ones, there is a large number of
them. Among the interesting
ones: git-lfs, a
remote that allows also storing the git repository itself
(git-remote-annex),
although I’m a bit confused about this one, and most of the common storage
providers via the rclone
remote.
Plus, all of the special remotes support encryption, so this is a really neat
way to store your files across a large number of things, and handle replication,
number of copies, from which copy to retrieve, etc., as you wish.
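(For instance, the replication policy is a one-liner; numcopies is a standard git-annex setting:)

    # ask git-annex to maintain at least two copies of every annexed file
    git annex numcopies 2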
And many other features
git-annex has tons of other features, so to some extent, the sky’s the limit.
Automatic selection of what to add to git-annex vs plain git, encryption handling,
number of copies, clusters, computed files, etc. etc. etc. I still think it’s
cool but too complex, though!
Uses
Aside from my blog post, of course.
I’ve seen blog posts/comments about people using git-annex to track/store their
photo collection, and I could see very well how the remote encrypted repos—any
of the services supported by rclone could be an N+2 copy or so. For me, tracking
photos would be a bit too tedious, but it could maybe work after more research.
A more practical thing would probably be replicating my local movie collection
(all legal, to be clear) better than “just run rsync from time to time” and
tracking the large files in it via git-annex. That’s an exercise for another
day, though, once I get more mileage with it - my blog pictures are copies, so I
don’t care much if they get lost, but movies are primary online copies, and I
don’t want to re-dump the discs. Anyway, for later.
Migrating to git-annex
Migrating here means ending in a state where all large files are in git-annex,
and the plain git repo is small. Just moving the files to git annex at the
current head doesn’t remove them from history, so your git repository is still
large; it won’t grow in the future, but remains with old size (and contains the
large files in its history).
In my mind, a nice migration would be: run a custom command, and all the history
is migrated to git-annex, so I can go back in time and still use git-annex.
I naïvely expected this would be easy and already available, only to find
comments on the git-annex site with unsure git-filter-branch calls and some
web discussions. This is the
discussion
on the git annex website, but it didn’t make me confident it would do the right
thing.
But that discussion is now 8 years old. Surely in 2025, with git-filter-repo,
it’s easier? And, maybe I’m missing something, but it is not. From the point
of view of plain git it’s easy, but git-annex stores its data in git itself,
so doing this properly across successive steps of
a repo (when replaying the commits) is, I think, not well-defined behaviour.
So I was stuck here for a few days, until I got an epiphany: As I’m going to
rewrite the repository, of course I’m keeping a copy of it from before
git-annex. If so, I don’t need the history, back in time, to be correct in the
sense of being able to retrieve the binary files too. It just needs to be
correct from the point of view of the actual Markdown and Haskell files that
represent the “meat” of the blog.
This simplified the problem a lot. At first, I wanted to just skip these files,
but this could also drop commits (git-filter-repo, by default, drops the commits
if they’re empty), and removing the files loses information - when they were
added, what were the paths, etc. So instead I came up with a rather clever idea,
if I might say so: since git-annex replaces files with symlinks already, just
replace the files with symlinks in the whole history, except symlinks that
are dangling (to represent the fact that files are missing). One could also use
empty files, but empty files are more “valid” in a sense than dangling symlinks,
hence why I settled on those.
Doing this with git-filter-repo is easy, in newer versions, with the
new --file-info-callback. Here is the simple code I used:
This goes and replaces files with a symlink to nowhere, but the symlink should
explain why it’s dangling. Then later renames or moving the files around work
“naturally”, as the rename/mv doesn’t care about file contents. Then, when the
filtering is done:
copy the (binary) files from the original repository
since they’re named the same, and in the same places, git sees a type change
then simply run git annex add on those files
For me it was easy as all such files were in a few directories, so just copying
those directories back, a few git-annex add commands, and done.
Of course, then adding a few rsync remotes, git annex copy --to, and the
repository was ready.
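(As a sketch of that last step, with a hypothetical remote name and URL:)

    # define an rsync special remote and replicate the annexed files to it
    git annex initremote backup type=rsync rsyncurl=backup.example.com:/srv/annex encryption=none
    git annex copy --to backup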
Well, I also found a bug in my own Hakyll setup: on a fresh clone, when the
large files are just dangling symlinks, the builder doesn’t complain, just
ignores the images. Will have to fix.
Other resources
This is a blog that I read at the beginning, and I found it very useful as an
intro: https://switowski.com/blog/git-annex/. It didn’t help me understand how
it works under the covers, but it is well written. The author does use the
‘sync’ command though, which is too magic for me, but also agrees about its
complexity 😅
The proof is in the pudding
And now, for the actual first image to be added that never lived in the old
plain git repository. It’s not full-res/full-size, it’s cropped a bit on the
bottom.
Earlier in the year, I went to Paris for a very brief work trip, and I walked
around a bit—it was more beautiful than what I remembered from way way back. So
a bit random selection of a picture, but here it is:
Large language models (LLMs) have awed the world, emerging as the fastest-growing application of all time–ChatGPT reached 100 million active users in January 2023, just two months after its launch. After an initial cycle, they have gradually been mostly accepted and incorporated into various workflows, and their basic mechanics are no longer beyond the understanding of people with moderate computer literacy. Now, given that the technology is better understood, we face the question of how convenient LLM chatbots are for different occupations. This paper embarks on the question of whether LLMs can be useful for networking applications.
This paper systematizes querying three popular LLMs (GPT-3.5, GPT-4, and Claude 3) with questions taken from several network management online courses and certifications, and presents a taxonomy of six axes along which the incorrect responses were classified:
Accuracy: the correctness of the answers provided by LLMs;
Detectability: how easily errors in the LLM output can be identified;
Cause: for each incorrect answer, the underlying causes behind the error;
Explainability: the quality of the explanations with which the LLMs support their answers;
Effects: the impact of wrong answers on users; and
Stability: whether a minor change, such as a change in the order of the prompts, yields vastly different answers for a single query.
The authors also measure four strategies toward improving answers:
Self-correction: giving the original question and received answer back to the LLM, as well as the expected correct answer, as part of the prompt;
One-shot prompting: adding to the prompt “when answering user questions, follow this example” followed by a similar correct answer;
Majority voting: using the answer that most models agree upon; and
Fine-tuning: further training on a specific dataset to adapt the LLM to a particular task or domain.
The authors observe that, while some of those strategies were marginally useful, they sometimes resulted in degraded performance.
The authors queried the commercially available instances of Gemini and GPT, which achieved scores over 90 percent for basic subjects but fared notably worse in topics that require understanding and converting between different numeric notations, such as working with Internet protocol (IP) addresses, even if they are trivial (that is, presenting the subnet mask for a given network address expressed as the typical IPv4 dotted-quad representation).
As a last item in the paper, the authors compare performance with three popular open-source models: Llama3.1, Gemma2, and Mistral with their default settings. Although those models are almost 20 times smaller than the GPT-3.5 commercial model used, they reached comparable performance levels. Sadly, the paper does not delve deeper into these models, which can be deployed locally and adapted to specific scenarios.
The paper is easy to read and does not require deep mathematical or AI-related knowledge. It presents a clear comparison along the described axes for the 503 multiple-choice questions presented. This paper can be used as a guide for structuring similar studies over different fields.
If you ever face the need to activate the PROXY protocol in HAProxy
(e.g. if you're as unlucky as I am, and you have to use a Google Cloud TCP
proxy load balancer), be aware that there are two ways to do that.
Both are part of the frontend configuration.
accept-proxy
This one is the big hammer and forces the usage of the PROXY protocol
on all connections. Sample:
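A minimal sketch (the frontend name and port are examples):

    frontend fe_proxy
        bind :8443 accept-proxy
        default_backend be_app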
If you have to, e.g. during a phase of migrations, receive traffic directly, without
the PROXY protocol header and from a proxy with the header there is also a more
flexible option based on a tcp-request connection action. Sample:
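A sketch along those lines (names and port are examples; verify the current GCP ranges yourself):

    frontend fe_proxy
        bind :8443
        # require a PROXY header only for connections from the load balancer ranges
        tcp-request connection expect-proxy layer4 if { src 130.211.0.0/22 35.191.0.0/16 }
        default_backend be_app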
Source addresses here are those of GCP global TCP proxy frontends. Replace with whatever
suits your case. Since this is happening just after establishing a TCP connection,
there is barely anything else available to match on besides the source address.
Yes, something was setting an ACL on it. Thus began the saga of figuring out what was doing that.
Firing up inotifywatch, I saw it was systemd-udevd or its udev-worker. But cranking up logging on that to maximum only showed me that uaccess was somehow doing this.
I started digging. uaccess turned out to be almost entirely undocumented. People say to use it, but there’s no description of what it does or how. Its purpose appears to be to grant access to devices to those logged in to a machine by dynamically adding them to ACLs for devices. OK, that’s a nice goal, but why was machine A doing this and not machine B?
I dug some more. I came across a hint that uaccess may only do that for a “seat”. A seat? I’ve not heard of that in Linux before.
Turns out there’s some information (older and newer) about this out there. Sure enough, on the machine with KDE, loginctl list-sessions shows me on seat0, but on the machine where I log in from ttyUSB0, it shows an empty seat.
But how to make myself part of the seat? I tried various udev rules to add the “seat” or “master-of-seat” tags, but nothing made any difference.
I finally gave up and did the old-fashioned rule to just make it work already:
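(Something like this in a local rules file; the file name, group and mode are whatever fits your system:)

    # /etc/udev/rules.d/99-usbserial.rules -- hypothetical; grant group access the old way
    SUBSYSTEM=="tty", KERNEL=="ttyUSB*", GROUP="dialout", MODE="0660"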
This was my hundred-thirty-first month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian. During my allocated time I uploaded or worked on:
[DLA 4168-1] openafs security update of three CVEs related to theft of credentials, crashes or buffer overflows.
[DLA 4196-1] kmail-account-wizard security update to fix one CVE related to a man-in-the-middle attack when using http instead of https to get some configuration.
[DLA 4198-1] espeak-ng security update to fix five CVEs related to buffer overflow or underflow in several functions and a floating point exception. Thanks to Samuel Thibault for having a look at my debdiff.
[#1106867] created Bookworm pu-bug for kmail-account-wizard. Thanks to Patrick Franz for having a look at my debdiff.
I also continued my work on libxmltok and suricata. This month I also had to do some support on seger, for example to inject packages newly needed for builds.
Debian ELTS
This month was the eighty-second ELTS month. During my allocated time I uploaded or worked on:
[ELA-1444-1] kmail-account-wizard security update to fix two CVEs in Buster related to a man-in-the-middle attack when using http instead of https to get some configuration. The other issue is about a misleading UI, in which the state of encryption is shown wrong.
[ELA-1445-1] espeak-ng security update to fix five CVEs in Stretch and Buster. The issues are related to buffer overflow or underflow in several functions and a floating point exception.
All packages I worked on have been on the list of longstanding packages. For example espeak-ng has been on this list for more than nine months. I now understand that there is a reason why packages are on this list. Some parts of the software have been almost completely reworked, so that the patches need a “reverse” rework. For some packages this is easy, but for others this rework needs quite some time. I also continued to work on libxmltok and suricata.
Debian Printing
Unfortunately I didn’t find any time to work on this topic.
Thanks a lot to the Release Team who quickly handled all my unblock bugs!
FTP master
It is this time of the year when just a few packages arrive in NEW: it is Hard Freeze. So I enjoy this period and basically just take care of kernels or other important packages. As people seem to be more interested in discussions than in fixing RC bugs, my period of rest seems to continue for a while. So thanks for all these valuable discussions and really thanks to the few people who still take care of Trixie. This month I accepted 146 and rejected 10 packages. The overall number of packages that got accepted was 147.
My Debian contributions this month were all
sponsored by
Freexian. Things were a bit quieter than usual, as for the most part I was
sticking to things that seemed urgent for the upcoming trixie release.
After my appeal for help last month to
debug intermittent sshd crashes, Michel
Casabona helped me put together an environment where I could reproduce it,
which allowed me to track it down to a root
cause and fix it. (I
also found a misuse of
strlcpy affecting at
least glibc-based systems in passing, though I think that was unrelated.)
I backported fixes for some security vulnerabilities to unstable (since
we’re in freeze now, it’s not always appropriate to upgrade to new
upstream versions):
Recently someone in our #remotees channel at work asked about WFH setups and given quite a few things changed in mine, I thought it's time to post an update.
But first, a picture!
(Yes, it's cleaner than usual, how could you tell?!)
desk
It's still the same Flexispot E5B, no change here. After 7 years (I bought mine in 2018) it still works fine.
If I had to buy a new one, I'd probably get a four-legged one for more stability (they've become quite affordable now), but there is no immediate need for that.
chair
It's still the IKEA Volmar. Again, no complaints here.
hardware
Now here we finally have some updates!
laptop
A Lenovo ThinkPad X1 Carbon Gen 12, Intel Core Ultra 7 165U, 32GB RAM, running Fedora (42 at the moment).
It's connected to a Lenovo ThinkPad Thunderbolt 4 Dock. It just works™.
workstation
It's still the P410, but mostly unused these days.
monitor
An AOC U2790PQU 27" 4K. I'm running it at 150% scaling, which works quite decently these days (no comparison to when I got it).
speakers
As the new monitor didn't want to take the old Dell soundbar, I have upgraded to a pair of Alesis M1Active 330 USB.
It's not a Shure, for sure, but does the job well and Christian was quite satisfied with the results when we recorded the Debian and Foreman specials of Focus on Linux.
keyboard
It's still the ThinkPad Compact USB Keyboard with TrackPoint.
I had to print a few fixes and replacement parts for it, but otherwise it's doing great.
Replacement feet, because I broke one while cleaning the keyboard.
USB cable clamp, because it kept falling out and disconnecting.
Seems Lenovo stopped making those, so I really shouldn't break it any further.
mouse
Logitech MX Master 3S. The surface of the old MX Master 2 got very sticky at some point and it had to be replaced.
other
notepad
I'm still terrible at remembering things, so I still write them down in an A5 notepad.
whiteboard
I've also added a (small) whiteboard on the wall right of the desk, mostly used for long term todo lists.
coaster
Turns out Xeon-based coasters are super stable, so it lives on!
yubikey
Yepp, still a thing. Still USB-A because... reasons.
headphones
Still the Bose QC25, by now on the third set of ear cushions, but otherwise working great and the odd 15€ cushion replacement does not justify buying anything newer (which would have the same problem after some time, I guess).
I did add a cheap (~10€) Bluetooth-to-Headphonejack dongle, so I can use them with my phone too (shakes fist at modern phones).
And I do use the headphones more in meetings, as the Alesis speakers fill the room more with sound and thus sometimes produce a bit of an echo.
charger
The Bose need AAA batteries, and so do some other gadgets in the house, so there is a technoline BC 700 charger for AA and AAA on my desk these days.
light
Yepp, I've added an IKEA Tertial and an ALDI "face" light.
No, I don't use them much.
KVM switch
I've "built" a KVM switch out of an USB switch, but given I don't use the workstation that often these days, the switch is also mostly unused.
Welcome to our 5th report from the Reproducible Builds project in 2025! Our monthly reports outline what we’ve been up to over the past month, and highlight items of news from elsewhere in the increasingly-important area of software supply-chain security. If you are interested in contributing to the Reproducible Builds project, please do visit the Contribute page on our website.
Security audit of Reproducible Builds tools published
The Open Technology Fund’s (OTF) security partner Security Research Labs recently conducted an audit of some specific parts of tools developed by Reproducible Builds. This form of security audit, sometimes called a “whitebox” audit, is a form of testing in which auditors have complete knowledge of the item being tested. The auditors assessed the various codebases for resilience against hacking, with key areas including differential report formats in diffoscope, common client web attacks, command injection, privilege management, hidden modifications in the build process and attack vectors that might enable denials of service.
The audit focused on three core Reproducible Builds tools: diffoscope, a Python application that unpacks archives of files and directories and transforms their binary formats into human-readable form in order to compare them; strip-nondeterminism, a Perl program that improves reproducibility by stripping out non-deterministic information such as timestamps or other elements introduced during packaging; and reprotest, a Python application that builds source code multiple times in various environments in order to test reproducibility.
[Colleagues] approached me to talk about a reproducibility issue they’d been having with some R code. They’d been running simulations that rely on generating samples from a multivariate normal distribution, and despite doing the prudent thing and using set.seed() to control the state of the random number generator (RNG), the results were not computationally reproducible. The same code, executed on different machines, would produce different random numbers. The numbers weren’t “just a little bit different” in the way that we’ve all wearily learned to expect when you try to force computers to do mathematics. They were painfully, brutally, catastrophically, irreproducibly different. Somewhere, somehow, something broke.
present attestable builds, a new paradigm to provide strong source-to-binary correspondence in software artifacts. We tackle the challenge of opaque build pipelines that disconnect the trust between source code, which can be understood and audited, and the final binary artifact, which is difficult to inspect. Our system uses modern trusted execution environments (TEEs) and sandboxed build containers to provide strong guarantees that a given artifact was correctly built from a specific source code snapshot. As such it complements existing approaches like reproducible builds which typically require time-intensive modifications to existing build configurations and dependencies, and require independent parties to continuously build and verify artifacts.
The authors compare “attestable builds” with reproducible builds by noting an attestable build requires “only minimal changes to an existing project, and offers nearly instantaneous verification of the correspondence between a given binary and the source code and build pipeline used to construct it”, and proceed by determining that “the overhead (42 seconds start-up latency and 14% increase in build duration) is small in comparison to the overall build time.”
Timo Pohl, Pavel Novák, Marc Ohm and Michael Meier have published a paper called Towards Reproducibility for Software Packages in Scripting Language Ecosystems. The authors note that past research into Reproducible Builds has focused primarily on compiled languages and their ecosystems, with a further emphasis on Linux distribution packages:
However, the popular scripting language ecosystems potentially face unique issues given the systematic difference in distributed artifacts. This Systemization of Knowledge (SoK) [paper] provides an overview of existing research, aiming to highlight future directions, as well as chances to transfer existing knowledge from compiled language ecosystems. To that end, we work out key aspects in current research, systematize identified challenges for software reproducibility, and map them between the ecosystems.
Ultimately, the three authors find that the literature is “sparse”, focusing on few individual problems and ecosystems, and therefore identify space for more critical research.
Distribution work
In Debian this month:
Ian Jackson filed a bug against the debian-policy package in order to delve into an issue affecting Debian’s support for cross-architecture compilation, multiple-architecture systems, reproducible builds’ SOURCE_DATE_EPOCH environment variable and the ability to recompile already-uploaded packages to Debian with a new/updated toolchain (binNMUs). Ian identifies a specific case, specifically in the libopts25-dev package, involving a manual page that had interesting downstream effects, potentially affecting backup systems. The bug generated a large number of replies, some of which have references to similar or overlapping issues, such as this one from 2016/2017.
There is now a “Reproducibility Status” link for each app on f-droid.org, listed on every app’s page. Our verification server shows ✔️ or 💔 based on its build results, where ✔️ means our rebuilder reproduced the same APK file and 💔 means it did not. The IzzyOnDroid repository has developed a more elaborate system of badges which displays a ✅ for each rebuilder. Additionally, there is a sketch of a five-level graph to represent some aspects about which processes were run.
Hans compares the approach with projects such as Arch Linux and Debian that “provide developer-facing tools to give feedback about reproducible builds, but do not display information about reproducible builds in the user-facing interfaces like the package management GUIs.”
Arnout Engelen of the NixOS project has been working on reproducing the minimal installation ISO image. This month, Arnout has successfully reproduced the build of the minimal image for the 25.05 release without relying on the binary cache. Work on also reproducing the graphical installer image is ongoing.
In openSUSE news, Bernhard M. Wiedemann posted another monthly update for their work there.
diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. This month, Chris Lamb made the following changes, including preparing and uploading versions 295, 296 and 297 to Debian:
Don’t rely on zipdetails’ --walk argument being available, and only add that argument on newer versions after we test for that. […]
Review and merge support for NuGet packages from Omair Majid. […]
Merge support for an lzma comparator from Will Hollywood. […][…]
Chris also merged an impressive changeset from Siva Mahadevan to make disorderfs more portable, especially on FreeBSD. disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues […]. This was then uploaded to Debian as version 0.6.0-1.
Lastly, Vagrant Cascadian updated diffoscope in GNU Guix to version 296 […][…] and 297 […][…], and disorderfs to version 0.6.0 […][…].
Website updates
Once again, there were a number of improvements made to our website this month including:
Incorporated a number of fixes for the JavaScript SOURCE_DATE_EPOCH snippet from Sebastian Davis, which did not handle non-integer values correctly. […]
Remove the JavaScript example that uses a ‘fixed’ timezone on the SOURCE_DATE_EPOCH page. […]
Reproducibility testing framework
The Reproducible Builds project operates a comprehensive testing framework running primarily at tests.reproducible-builds.org in order to check packages and other artifacts for reproducibility.
However, Holger Levsen posted to our mailing list this month in order to bring a wider awareness to funding issues faced by the Oregon State University (OSU) Open Source Lab (OSL). As mentioned on OSL’s public post, “recent changes in university funding makes our current funding model no longer sustainable [and that] unless we secure $250,000 in committed funds, the OSL will shut down later this year”. As Holger notes in his post to our mailing list, the Reproducible Builds project relies on hardware nodes hosted there. Nevertheless, Lance Albertson of OSL posted an update to the funding situation later in the month with broadly positive news.
Migrating the central jenkins.debian.net server from AMD Opteron to Intel Haswell CPUs. Thanks to IONOS for hosting this server since 2012.
After testing it for almost ten years, the i386 architecture has been dropped from tests.reproducible-builds.org. This is because, with the upcoming release of Debian trixie, i386 is no longer supported as a ‘regular’ architecture — there will be no official kernel and no Debian installer for i386 systems. As a result, a large number of nodes hosted by Infomaniak have been retooled from i386 to amd64.
Another node, ionos17-amd64.debian.net, which is used for verifying packages for all.reproduce.debian.net (hosted by IONOS) has had its memory increased from 40 to 64GB, and the number of cores doubled to 32 as well. In addition, two nodes generously hosted by OSUOSL have had their memory doubled to 16GB.
Lastly, we have been granted access to more riscv64 architecture boards, so now we have seven such nodes, all with 16GB memory and 4 cores that are verifying packages for riscv64.reproduce.debian.net. Many thanks to PLCT Lab, ISCAS for providing those.
Outside of this, a number of smaller changes were also made by Holger Levsen:
Fix a (harmless) typo in the multiarch_versionskew script. […]
In addition, Jochen Sprickerhof made a series of changes related to reproduce.debian.net:
Add out of memory detection to the statistics page. […]
Reverse the sorting order on the statistics page. […][…][…][…]
Improve the spacing between statistics groups. […]
Update a (hard-coded) line number in error message detection pertaining to a debrebuild line number. […]
Support Debian unstable in the rebuilder-debian.sh script. […][…]
Rely on rebuildctl to sync only ‘arch-specific’ packages. […][…]
Upstream patches
The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. This month, we wrote a large number of such patches, including:
0xFFFF: Use SOURCE_DATE_EPOCH for date in manual pages.
Finally, if you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:
The Two Cultures is a term first used by C.P. Snow in a 1959
speech and monograph focused on the split between humanities and the
sciences. Decades later, the term was (quite famously) re-used by Leo
Breiman in a (somewhat prophetic) 2001
article about the split between ‘data models’ and ‘algorithmic
models’. In this note, we argue that statistical computing practice and
deployment can also be described via this Two Cultures
moniker.
Referring to the term linking these foundational pieces is of course
headline bait. Yet when preparing for the discussion of r2u in the invited talk in
Mons (video,
slides),
it occurred to me that there is in fact a wide gulf between two
alternative approaches of using R and, specifically,
deploying packages.
On the one hand we have the approach described by my friend Jeff as “you go to the Apple store,
buy the nicest machine you can afford, install what you need and
then never ever touch it”. A computer / workstation / laptop is
seen as an immutable object where every attempt at change may
lead to breakage, instability, and general chaos—and is hence best
avoided. If you know Jeff, you know he exaggerates. Maybe only slightly
though.
Similarly, an entire sub-culture of users striving for
“reproducibility” (and sometimes also “replicability”) does the same.
This is for example evidenced by the popularity of package renv by Rcpp collaborator and pal Kevin. The expressed hope is
that by nailing down a (sub)set of packages, outcomes are constrained to
be unchanged. Hope springs eternal, clearly. (Personally, if need be, I
do the same with Docker containers and their respective
Dockerfile.)
On the other hand, ‘rolling’ is a fundamentally different approach. One
(well known) example is Google building “everything at @HEAD”. The entire (ginormous)
code base is considered as a mono-repo which at any point in
time is expected to be buildable as is. All changes made are pre-tested
to be free of side effects to other parts. This sounds hard, and likely
is more involved than an alternative of a ‘whatever works’ approach of
independent changes and just hoping for the best.
Another example is a rolling (Linux) distribution as for example Debian. Changes are first committed to
a ‘staging’ place (Debian calls this the ‘unstable’ distribution) and,
if no side effects are seen, propagated after a fixed number of days to
the rolling distribution (called ‘testing’). With this mechanism,
‘testing’ should always be installable too. And based on the rolling
distribution, at certain times (for Debian roughly every two years) a
release is made from ‘testing’ into ‘stable’ (following more elaborate
testing). The released ‘stable’ version is then immutable (apart from
fixes for seriously grave bugs and of course security updates). So this
provides the connection between frequent and rolling updates, and
produces immutable fixed set: a release.
This Debian approach has been influential for many other
projects—including CRAN as can
be seen in aspects of its system providing a rolling set of curated
packages. Instead of a staging area for all packages, extensive tests
are made for candidate packages before adding an update. This aims to
ensure quality and consistency—and has worked remarkably well. We argue
that it has clearly contributed to the success and renown of CRAN.
Now, when accessing CRAN
from R, we fundamentally have
two accessor functions. But seemingly only one is widely known
and used. In what we may call ‘the Jeff model’, everybody is happy to
deploy install.packages() for initial
installations.
One of my #rstats coding rituals is that every time I load a @vincentab.bsky.social package
I go check for a new version because invariably it’s been updated with
18 new major features 😆
And that is why we have two cultures.
Because some of us, yours truly included, also use
update.packages() at recurring (frequent !!) intervals:
daily or near-daily for me. The goodness and, dare I say, gift of
packages is not limited to those by my pal Vincent. CRAN updates all the time, and
updates are (generally) full of (usually excellent) changes, fixes, or
new features. So update frequently! Doing (many but small) updates
(frequently) is less invasive than (large, infrequent) ‘waterfall’-style
changes!
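A no-frills way to make that a habit is a one-liner, run from a shell or a cron job (ask = FALSE merely skips the per-package prompting):

    # update all installed packages non-interactively
    Rscript -e 'update.packages(ask = FALSE)'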
But the fear of change, or disruption, is clearly pervasive. One can
only speculate why. Is the experience of updating so painful on other
operating systems? Is it maybe a lack of exposure / tutorials on best
practices?
These ‘Two Cultures’ coexist. When I delivered the talk in Mons, I
briefly asked for a show of hands among all the R users in the audience to see who
in fact does use update.packages() regularly. And maybe a
handful of hands went up: surprisingly few!
Now back to the context of installing packages: Clearly ‘only
installing’ has its uses. For continuous integration checks we generally
install into ephemeral temporary setups. Some debugging work may be with
one-off container or virtual machine setups. But all other uses may well
be under ‘maintained’ setups. So consider calling
update.packages() once in a while. Or even weekly or daily.
The rolling feature of CRAN is a real benefit, and it is
there for the taking and enrichment of your statistical computing
experience.
So to sum up, the real power is to use
install.packages() to obtain fabulous new statistical
computing resources, ideally in an instant; and
update.packages() to keep these fabulous resources
current and free of (known) bugs.
For both tasks, relying on binary installations accelerates
and eases the process. And where available, using binary
installation with system-dependency support as r2u does makes it easier
still, following the r2u slogan of ‘Fast. Easy.
Reliable. Pick All Three.’ Give it a try!
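By way of illustration (a sketch assuming an Ubuntu system with the r2u repository configured as described in its documentation), a CRAN package then arrives as a single pre-built binary, system dependencies included:
$ sudo apt install r-cran-rcpp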
As I wrote in my last post, Twitter's new encrypted DM infrastructure is pretty awful. But the amount of work required to make it somewhat better isn't large.
When Juicebox is used with HSMs, it supports encrypting the communication between the client and the backend. This is handled by generating a unique keypair for each HSM. The public key is provided to the client, while the private key remains within the HSM. Even if you can see the traffic sent to the HSM, it's encrypted using the Noise protocol and so the user's encrypted secret data can't be retrieved.
But this is only useful if you know that the public key corresponds to a private key in the HSM! Right now there's no way to know this, and it gets worse: the client doesn't have the public key built into it; it's supplied as a response to an API request made to Twitter's servers. Even if the current keys are associated with the HSMs, Twitter could swap them out with ones that aren't, terminate the encrypted connection at their endpoint, and then fake your query to the HSM and get the encrypted data that way. Worse, this could be done for specific targeted users, without any indication to the user that this has happened, making it almost impossible to detect in general.
This is at least partially fixable. Twitter could prove to a third party that their Juicebox keys were generated in an HSM, and the key material could be moved into clients. This makes attacking individual users more difficult (the backdoor code would need to be shipped in the public client), but can't easily help with the website version[1] even if a framework exists to analyse the clients and verify that the correct public keys are in use.
It's still worse than Signal. Use Signal.
[1] Since they could still just serve backdoored Javascript to specific users. This is, unfortunately, kind of an inherent problem when it comes to web-based clients - we don't have good frameworks to detect whether the site itself is malicious.
(Edit: Twitter could improve this significantly with very few changes - I wrote about that here. It's unclear why they'd launch without doing that, since it entirely defeats the point of using HSMs)
When Twitter[1] launched encrypted DMs a couple of years ago, it was the worst kind of end-to-end encrypted - technically e2ee, but in a way that made it relatively easy for Twitter to inject new encryption keys and get everyone's messages anyway. It was also lacking a whole bunch of features such as "sending pictures", so the entire thing was largely a waste of time. But a couple of days ago, Elon announced the arrival of "XChat", a new encrypted messaging platform "built on Rust with (Bitcoin style) encryption, whole new architecture". Maybe this time they've got it right?
tl;dr - no. Use Signal. Twitter can probably obtain your private keys, and admit that they can MITM you and have full access to your metadata.
The new approach is pretty similar to the old one in that it's based on pretty straightforward and well tested cryptographic primitives, but merely using good cryptography doesn't mean you end up with a good solution. This time they've pivoted away from using the underlying cryptographic primitives directly and into higher level abstractions, which is probably a good thing. They're using Libsodium's boxes for message encryption, which is, well, fine? It doesn't offer forward secrecy (if someone's private key is leaked then all existing messages can be decrypted) so it's a long way from the state of the art for a messaging client (Signal's had forward secrecy for over a decade!), but it's not inherently broken or anything. It is, however, written in C, not Rust[2].
That's about the extent of the good news. Twitter's old implementation involved clients generating keypairs and pushing the public key to Twitter. Each client (a physical device or a browser instance) had its own private key, and messages were simply encrypted to every public key associated with an account. This meant that new devices couldn't decrypt old messages, and also meant there was a maximum number of supported devices and terrible scaling issues and it was pretty bad. The new approach generates a keypair and then stores the private key using the Juicebox protocol. Other devices can then retrieve the private key.
Doesn't this mean Twitter has the private key? Well, no. There's a PIN involved, and the PIN is used to generate an encryption key. The stored copy of the private key is encrypted with that key, so if you don't know the PIN you can't decrypt the key. So we brute force the PIN, right? Juicebox actually protects against that - before the backend will hand over the encrypted key, you have to prove knowledge of the PIN to it (this is done in a clever way that doesn't directly reveal the PIN to the backend). If you ask for the key too many times while providing the wrong PIN, access is locked down.
But this is true only if the Juicebox backend is trustworthy. If the backend is controlled by someone untrustworthy[3] then they're going to be able to obtain the encrypted key material (even if it's in an HSM, they can simply watch what comes out of the HSM when the user authenticates if there's no validation of the HSM's keys). And now all they need is the PIN. Turning the PIN into an encryption key is done using the Argon2id key derivation function, using 32 iterations and a memory cost of 16MB (the Juicebox white paper says 16KB, but (a) that's laughably small and (b) the code says 16 * 1024 in an argument that takes kilobytes), which makes it computationally and moderately memory expensive to generate the encryption key used to decrypt the private key. How expensive? Well, on my (not very fast) laptop, that takes less than 0.2 seconds. How many attempts to I need to crack the PIN? Twitter's chosen to fix that to 4 digits, so a maximum of 10,000. You aren't going to need many machines running in parallel to bring this down to a very small amount of time, at which point private keys can, to a first approximation, be extracted at will.
Juicebox attempts to defend against this by supporting sharding your key over multiple backends, and only requiring a subset of those to recover the original. Twitter does seem to be making use of this - it uses three backends and requires data from at least two, but all the backends used are under x.com so are presumably under Twitter's direct control. Trusting the keystore without needing to trust whoever's hosting it requires a trustworthy communications mechanism between the client and the keystore. If the device you're talking to can prove that it's an HSM that implements the attempt limiting protocol and has no other mechanism to export the data, this can be made to work. Signal makes use of something along these lines using Intel SGX for contact list and settings storage and recovery, and Google and Apple also have documentation about how they handle this in ways that make it difficult for them to obtain backed up key material. Twitter has no documentation of this, and as far as I can tell does nothing to prove that the backend is in any way trustworthy. (Edit to add: The Juicebox API does support authenticated communication between the client and the HSM, but that relies on you having some way to prove that the public key you're presented with corresponds to a private key that only exists in the HSM. Twitter gives you the public key whenever you communicate with them, so even if they've implemented this properly you can't prove they haven't made up a new key and MITMed you the next time you retrieve your key)
On the plus side, Juicebox is written in Rust, so Elon's not 100% wrong. Just mostly wrong.
But ok, at least you've got viable end-to-end encryption even if someone can put in some (not all that much, really) effort to obtain your private key and render it all pointless? Actually no, since you're still relying on the Twitter server to give you the public key of the other party and there's no out of band mechanism to do that or verify the authenticity of that public key at present. Twitter can simply give you a public key where they control the private key, decrypt the message, and then reencrypt it with the intended recipient's key and pass it on. The support page makes it clear that this is a known shortcoming and that it'll be fixed at some point, but they said that about the original encrypted DM support and it never was, so that's probably dependent on whether Elon gets distracted by something else again. And the server knows who you're messaging and when, even if they haven't bothered to break your private key, so there's a lot of metadata leakage.
Signal doesn't have these shortcomings. Use Signal.
[1] I'll respect their name change once Elon respects his daughter
[2] There are implementations written in Rust, but Twitter's using the C one with these JNI bindings
[3] Or someone nominally trustworthy but who's been compelled to act against your interests - even if Elon were absolutely committed to protecting all his users, his overarching goals for Twitter require him to have legal presence in multiple jurisdictions that are not necessarily above placing employees in physical danger if there's a perception that they could obtain someone's encryption keys
Despite comments on my ikiwiki blog being fully moderated, spammers have
been increasingly posting link spam comments on my blog. While I used to use
the blogspam plugin, the
underlying service was likely retired circa
2017 and its public
repositories are all archived.
It turns out that there is a relatively simple way to drastically reduce the
amount of spam submitted to the moderation queue: ban the datacentre IP
addresses that spammers are using.
Looking up AS numbers
It all starts by looking at the IP address of a submitted comment; in this case, 2a0b:7140:1:1:5054:ff:fe66:85c5.
From there, we can look it up using whois:
$ whois -r 2a0b:7140:1:1:5054:ff:fe66:85c5
% This is the RIPE Database query service.
% The objects are in RPSL format.
%
% The RIPE Database is subject to Terms and Conditions.
% See https://docs.db.ripe.net/terms-conditions.html
% Note: this output has been filtered.
% To receive output for a database update, use the "-B" flag.
% Information related to '2a0b:7140:1::/48'
% Abuse contact for '2a0b:7140:1::/48' is 'abuse@servinga.com'
inet6num: 2a0b:7140:1::/48
netname: EE-SERVINGA-2022083002
descr: servinga.com - Estonia
geoloc: 59.4424455 24.7442221
country: EE
org: ORG-SG262-RIPE
mnt-domains: HANNASKE-MNT
admin-c: CL8090-RIPE
tech-c: CL8090-RIPE
status: ASSIGNED
mnt-by: MNT-SERVINGA
created: 2020-02-18T11:12:49Z
last-modified: 2024-12-04T12:07:26Z
source: RIPE
% Information related to '2a0b:7140:1::/48AS207408'
route6: 2a0b:7140:1::/48
descr: servinga.com - Estonia
origin: AS207408
mnt-by: MNT-SERVINGA
created: 2020-02-18T11:18:11Z
last-modified: 2024-12-11T23:09:19Z
source: RIPE
% This query was served by the RIPE Database Query Service version 1.114 (SHETLAND)
The important bit here is this line:
origin: AS207408
which refers to Autonomous System 207408, owned by a hosting company in Germany called Servinga.
Alternatively, you can use this WHOIS server with much better output:
$ whois -h whois.cymru.com -v 2a0b:7140:1:1:5054:ff:fe66:85c5
AS | IP | BGP Prefix | CC | Registry | Allocated | AS Name
207408 | 2a0b:7140:1:1:5054:ff:fe66:85c5 | 2a0b:7140:1::/48 | DE | ripencc | 2017-07-11 | SERVINGA-EE, DE
Looking up IP blocks
Autonomous Systems are essentially organizations to which IPv4 and IPv6
blocks have been allocated. These allocations can be looked up easily on
the command line, either using a third-party service or a local database
downloaded from IPtoASN.
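As a sketch of one such lookup (the exact command from the original post is not reproduced here; whois.radb.net is a public routing registry, and this inverse-origin query is a standard way to list the prefixes registered to an AS):
$ whois -h whois.radb.net -- '-i origin AS207408' | grep -E '^route6?:'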
In the case of Servinga, this is what I ended up with: the three IPv4
blocks and one IPv6 block listed in the Apache configuration below.
Preventing comment submission
While I do want to eliminate this source of spam, I don't want to block
these datacentre IP addresses outright since legitimate users could be using
these servers as VPN endpoints or crawlers.
I therefore added the following to my Apache config to restrict the CGI
endpoint (used only for write operations such as commenting):
<Location /blog.cgi>
Include /etc/apache2/spammers.include
Options +ExecCGI
AddHandler cgi-script .cgi
</Location>
and then put the following in /etc/apache2/spammers.include:
<RequireAll>
Require all granted
# https://ipinfo.io/AS207408
Require not ip 46.11.183.0/24
Require not ip 80.77.25.0/24
Require not ip 194.76.227.0/24
Require not ip 2a0b:7140:1::/48
</RequireAll>
Finally, I can restart the website and commit my changes:
$ apache2ctl configtest && systemctl restart apache2.service
$ git commit -a -m "Ban all IP blocks from Servinga"
Future improvements
I will likely automate this process in the future, but at the moment my
blog can go for a week without a single spam message (down from dozens every
day). It's possible that I've already cut off the worst offenders. I have
published the list I am currently using.
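A hedged sketch of what that automation could look like (reusing the RADb query shown earlier; the output would still deserve a manual review):
$ whois -h whois.radb.net -- '-i origin AS207408' | awk '/^route6?:/ { print "Require not ip " $2 }'
with the resulting lines pasted inside the <RequireAll> block of /etc/apache2/spammers.include.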