AI generated code and its quality.
22 February, 2026 05:45AM by Junichi Uekawa
I want to get back into the habit of blogging, but I've struggled. I've had several ideas of topics to try and write about, but I've not managed to put aside the time to do it. I thought I'd try and bash out a one-take, stream-of-consciousness-style post now, to get back into the swing.
I'm writing from the lounge of my hotel room in Lanzarote, where my family have gone for the school break. The weather at home has been pretty awful this year, and this week is traditionally quite miserable at the best of times. It's been dry with highs of around 25℃.
It's been an unusual holiday in one respect: one of my kids is struggling with Autistic Burnout. We were really unsure whether taking her was a good idea, and certainly towards the beginning of the holiday felt we may have made a mistake. Writing now, at the end, I'm not so sure. But we're very unlikely to have anything resembling a traditional summer holiday for the foreseeable future.
Managing Autistic Burnout, and the ways the UK healthcare and education systems manage it (or fail to), has been a huge part of my recent life. Perhaps I should write more about that. This coming week the government are likely to publish plans for reforming Special Needs support in education. Like many other parents, we wait in hope and fear to see what they plan.
In anticipation of spending a lot of time in the hotel room with my preoccupied daughter I (unusually) packed a laptop and set myself a nerd-task: writing a Pandoc parser ("reader") for the MoinMoin Wiki markup language. There's some unfinished prior art from around 2011 by Simon Michael (of hledger) to work from.
The motivation was our plan to migrate the Debian Wiki away from MoinMoin. We've since decided to approach that differently but I might finish the Reader anyway, it's been an interesting project (and a nice excuse to write Haskell) and it will be useful for others.
Unusually (for me) I've not been reading fiction on this trip: I took with me Human Compatible by Prof Stuart Russell: discussing how to solve the problem of controlling a future Artificial Intelligence. I've largely avoided the LLM hype cycle we're suffering through at the moment, and I have several big concerns about it (moral, legal, etc.), and felt it was time to try and make my concerns more well-formed and test them. This book has been a big help in doing so, although it doesn't touch on the issue of copyright, which is something I am particularly interested in at the moment.
tl;dr:
To the question “what does it take to upgrade OpenStack?”, my personal answer is: less than 2K lines of dash script. Here, I'll describe its internals, and why I believe it is the correct solution.
Why write this blog post
During FOSDEM 2024, I was asked “how do you handle upgrades?”. I answered with a big smile and a short “with a very small shell script”, as I couldn’t explain in 2 minutes how it was done. But just saying “it is great this way” doesn’t give readers enough hints to be trusted. Why and how did I do it the right way? This blog post is an attempt to answer this question more deeply.
Upgrading OpenStack in production
I wrote this script maybe 2 or 3 years ago, though I’m only blogging about it today because… I did such an upgrade on a public cloud in production last Tuesday evening (ie: the first region of the Infomaniak public cloud). I’d say the cluster is moderately large (as of today: about 8K+ VMs running, 83 compute nodes, 12 network nodes, … for a total of 10880 physical CPU cores and 125 TB of RAM if I only count the compute servers). It took “only” 4 hours to do the upgrade (though I already wrote some more code to speed this up for next time…). It went super smoothly, without a glitch. I mostly just sat, reading the script output… and went to bed once it finished running. The next day, all my colleagues at Infomaniak were nicely congratulating me that it went that smoothly (a big thanks to all of you who did). I couldn’t dream of a smoother upgrade! :)
Still not impressed? Boring read? Yeah… let’s dive into more technical details.
Intention behind the implementation
My script isn’t perfect, and I won’t ever pretend it is. But at least, it does minimize the downtime of every OpenStack service. It also is a “by the book” implementation of what’s written in the OpenStack docs, following every upstream advice. As a result, it is fully seamless for some OpenStack services, and as HA as OpenStack can be for others. The upgrade process is of course idempotent and can be re-run in case of failure. Here’s why.
General idea
My upgrade script does things in a certain order, respecting what is documented about upgrades in the OpenStack documentation. It basically does:
Installing dependencies
The first thing the upgrade script does is:
For this last step, a static list of all needed dependency upgrades is maintained between each release of OpenStack, and for each type of node. Then, for all packages in this list, the script checks with dpkg-query that the package is really installed, and with apt-cache policy that it really is going to be upgraded (maybe there’s an easier way to do this?). This way, no package is marked as manually installed by mistake during the upgrade process. This ensures that “apt-get --purge autoremove” really does what it should, and that the script is really idempotent.
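For illustration, here is a minimal sketch of such a check (this is not the actual script; the `will_upgrade` helper is made up for this example), with the version comparison factored into a small function so it can be exercised with canned `apt-cache policy`-style output:

```shell
# Sketch only: decide whether a package from the static dependency list
# has a pending upgrade, based on "apt-cache policy"-style output.
# will_upgrade is a hypothetical helper, not part of the real script.
will_upgrade() {
    awk '/Installed:/ { i = $2 }
         /Candidate:/ { c = $2 }
         END { if (i != c) print "yes"; else print "no" }'
}

# The real check would feed actual tool output, e.g.:
#   dpkg-query -W -f='${db:Status-Status}' "$pkg"    # "installed" if present
#   apt-cache policy "$pkg" | will_upgrade           # "yes" if upgrade pending
printf 'Installed: 2:24.0.1-1\nCandidate: 2:25.0.0-1\n' | will_upgrade   # -> yes
```

A package that is both installed and has a differing candidate can then be safely apt-get installed without accidentally flipping its manual/auto mark.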
The idea, then, is that once all dependencies are installed, upgrading and restarting the leaf packages (ie: OpenStack services like Nova, Glance, Cinder, etc.) is very fast, because the apt-get command doesn’t need to install any dependencies. So at this point, doing “apt-get install python3-cinder” for example (which will also, thanks to dependencies, upgrade cinder-api and cinder-scheduler if it’s on a controller node) only takes a few seconds. This principle applies to all nodes (controller nodes, network nodes, compute nodes, etc.), which helps a lot in speeding up the upgrade and reducing unavailability.
hapc
At its core, the oci-cluster-upgrade-openstack-release script uses haproxy-cmd (ie: /usr/bin/hapc) to drain each to-be-upgraded API server from haproxy. Hapc is a simple Python wrapper around the haproxy admin socket: it sends commands to it through an easy-to-understand CLI. So it is possible to reliably upgrade an API service only after it’s drained away. Draining means simply waiting for the last query to finish and the client to disconnect from HTTP before giving the backend server any more queries. If you do not know hapc / haproxy-cmd, I recommend trying it: it’s going to be hard for you to stop using it once you’ve tested it. Its bash-completion script makes it VERY easy to use, and it is helpful in production. But not only that: it is also nice to have when writing this type of upgrade script. Let’s dive into haproxy-cmd.
Example on how to use haproxy-cmd
Let me show you. First, ssh into one of the 3 controllers and find where the virtual IP (VIP) is located with “crm resource locate openstack-api-vip” or with a (simpler) “crm status”. Let’s ssh into the server that holds the VIP, and now, let’s drain it away from haproxy.
$ hapc list-backends
$ hapc drain-server --backend glancebe --server cl1-controller-1.infomaniak.ch --verbose --wait --timeout 50
$ apt-get install glance-api
$ hapc enable-server --backend glancebe --server cl1-controller-1.infomaniak.ch
Upgrading the control plane
My upgrade script leverages hapc just like above. For each OpenStack project, it’s done in this order on the first node holding the VIP:
Starting at [1], the risk is that other nodes may have a new version of the database schema, but an old version of the code that isn’t compatible with it. But it doesn’t take long, because the next step is to take care of the other (usually 2) nodes of the OpenStack control plane:
So while there’s technically zero downtime, some issues between [1] and [2] above may still happen, because the new DB schema and the old code (both for API and other services) are up and running at the same time. However, these are supposed to be rare cases (some OpenStack projects don’t even have DB changes between some OpenStack releases, and most queries often continue to work with the upgraded DB), and the cluster will be like that only for a very short time, so that’s fine, and better than a full API downtime.
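To make the ordering concrete, here is a hypothetical sketch of the drain/upgrade/re-enable cycle on the node holding the VIP (the function name is made up and the real script differs; it mirrors the manual hapc example above):

```shell
# Hypothetical sketch of the per-service cycle on the VIP node.
# On this first node, the script would also run the project's DB schema
# migration (step [1] above) right after the package upgrade.
upgrade_api_service() {
    backend="$1"; server="$2"; pkg="$3"
    # drain: let in-flight queries finish, stop routing new ones here
    hapc drain-server --backend "$backend" --server "$server" --wait --timeout 50
    # fast, since all dependencies were upgraded beforehand
    apt-get install -y "$pkg"
    # put the freshly upgraded API server back into rotation
    hapc enable-server --backend "$backend" --server "$server"
}
```

For example, `upgrade_api_service glancebe cl1-controller-1.infomaniak.ch glance-api` would perform the same three steps shown in the hapc session above.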
Satellite services
Then there are the satellite services that need to be upgraded, like Neutron, Nova, Cinder. Nova is the least problematic, as it has all the code to rewrite JSON object schemas on-the-fly so that it continues to work during an upgrade. Though it’s a known issue that Cinder doesn’t have that feature (last time I checked), and it’s probably the same for Neutron (maybe recent-ish versions of OpenStack do use oslo.versionedobjects?). Anyway, upgrades on these nodes are done right after the control plane for each service.
Parallelism and upgrade timings
As we’re dealing with potentially hundreds of nodes per cluster, a lot of operations are performed in parallel. I chose to simply use the shell’s & background jobs with some “wait” calls so that not too many jobs run in parallel. For example, when disabling SSH on all nodes, this is done 24 nodes at a time, which is fine. The number of parallel nodes depends on the type of operation: while it’s perfectly OK to disable puppet on 24 nodes at the same time, it is not OK to do that with Neutron services. In fact, each time a Neutron agent is restarted, the script explicitly waits for 30 seconds. This conveniently avoids a hailstorm of messages in RabbitMQ, and neutron-rpc-server becoming too busy. All of this waiting is necessary, and it is one of the reasons why it can sometimes take that long to upgrade a (moderately big) cluster.
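That batching pattern can be sketched with nothing but `&` and `wait` (illustrative only; `do_one` stands in for the real per-node work):

```shell
# Illustrative sketch of bounded parallelism with plain shell jobs:
# launch at most MAX background jobs, then wait for the batch to finish.
MAX=24

run_batched() {
    n=0
    for node in "$@"; do
        do_one "$node" &        # the real script would ssh/upgrade here
        n=$((n + 1))
        if [ "$n" -ge "$MAX" ]; then
            wait                # block until the whole batch is done
            n=0
        fi
    done
    wait                        # catch the final partial batch
}

# stand-in for the real per-node work
do_one() { echo "done: $1"; }
```

Calling `run_batched node01 node02 …` then prints one `done:` line per node, with at most 24 jobs in flight at any time; a per-service sleep (like the 30 seconds after each Neutron agent restart) would simply go inside `do_one`.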
Not using config management tooling
Some of my colleagues would have preferred that I used something like Ansible. However, there’s no reason to use such a tool if the idea is just to run some shell commands on every server. It is way more efficient (in terms of programming) to just use bash / dash to do the work. And if you want my point of view about Ansible: using YAML for doing such programming would be crazy. YAML is simply not suited for a job where if, case, and loops are needed. I am well aware that Ansible has workarounds and it could be done, but it wasn’t my choice.
21 February, 2026 12:44AM by Goirand Thomas
We are pleased to announce that Proxmox has committed to sponsor DebConf26 as a Platinum Sponsor.
Proxmox develops powerful, yet easy-to-use open-source server solutions. The comprehensive open-source ecosystem is designed to manage diverse IT landscapes, from single servers to large-scale distributed data centers. Our unified platform integrates server virtualization, easy backup, and rock-solid email security, ensuring seamless interoperability across the entire portfolio. With the Proxmox Datacenter Manager, the ecosystem also offers a "single pane of glass" for centralized management across different locations.
Since 2005, all Proxmox solutions have been built on the rock-solid Debian platform. We are proud to return to DebConf26 as a sponsor because the Debian community provides the foundation that makes our work possible. We believe in keeping IT simple, open, and under your control.
Thank you very much, Proxmox, for your support of DebConf26!
DebConf26 will take place from July 20th to 25th, 2026 in Santa Fe, Argentina, and will be preceded by DebCamp, from July 13th to 19th, 2026.
DebConf26 is accepting sponsors! Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, and visit the DebConf26 website at https://debconf26.debconf.org/sponsors/become-a-sponsor/.
20 February, 2026 05:26PM by Leonardo Martínez, Santiago Ruano Rincón
In 2025, US President Donald Trump signed an executive order creating the US federal Bitcoin reserve.
When cryptocurrencies are seized from criminals, they are placed in the reserve. If the reserve did not exist, the authorities could sell seized Bitcoins on a Bitcoin exchange.
From a market perspective, the government's willingness to hold Bitcoins rather than selling them immediately has various implications.
If the government were selling them, this would put downward pressure on prices.
Instead, holding them gives them more legitimacy and may entice other people to purchase Bitcoins.
Governments with a strong attitude against climate change would be far less likely to hold Bitcoins due to the associated electricity waste.
Originally, some Bitcoin founders and proponents argued for Bitcoin to exist as a currency out of government control.
When I hear Bitcoin proponents talking about their enthusiasm for government participation, it raises questions about the philosophy of Bitcoin. Has the philosophy changed over time, is it viable for different participants to have different philosophies or is the question of philosophy irrelevant?
For those who like the philosophy of having some wealth in an asset outside legal and political control, their needs are already well met by gold and silver bullion. The notion that Bitcoin would be a convenient digital replacement for physical metal ownership is a dangerous fantasy. The very people who could benefit most from buying gold and silver bullion are giving their money to the crypto barons. The crypto barons take that money and buy more gold and silver for themselves.
A small government holding some Bitcoin is not the same as a government controlling the Bitcoin market.
Nonetheless, as a large government acquires more and more Bitcoins they will have a bigger incentive to control the technology, for better or worse. Some US government agencies and some of the largest US tech companies may have the resources to outsmart the Bitcoin industry. Maybe they already have done so. Maybe they even created it in the first place.
In comparison, if the government put the same amount of money into equities, buying shares in small businesses, this funds innovation and new ideas. New companies create jobs and the equities eventually pay dividends to the investors. Bitcoins don't create jobs and they don't pay dividends.
In fact, I think the real motivation for those people who want the government to hold Bitcoins is the realization that the process of building a reserve takes more Bitcoins out of circulation. This, in turn, pushes up the price. Eventually, the private owners of Bitcoins who promoted that policy, having already acquired their Bitcoins at cheaper prices before the federal government, will be able to sell their personal holdings of Bitcoin for a profit. In other words, over time, the system transfers wealth from the public purse to the private purses of the people who got in first. The public do not appear to gain any benefit from that transaction.
Continue reading the inconvenient truth about cryptocurrency.
The author holds an MIT MicroMasters in Data, Economics and Development Policy. He does not hold any crypto "assets". Swiss financial regulator FINMA will neither confirm nor deny an investigation on this blog precipitated the resignation of their deputy CEO .
sq network keyserver search $id ; sq cert export --cert=$id > $id.asc
This is also known as: "ifconfig is not installed by default anymore, how do I do this only with the ip command?"

I have been slowly training my brain to use the new commands but I sometimes forget some. So, here's a couple of equivalences from the old net-tools package to the new iproute2, about 10 years late:
| net-tools | iproute2 | shorter form | what it does |
|---|---|---|---|
| `arp -an` | `ip neighbor` | `ip n` | show the ARP (neighbor) table |
| `ifconfig` | `ip address` | `ip a` | show current IP address |
| `ifconfig` | `ip link` | `ip l` | show link stats (up/down/packet counts) |
| `route` | `ip route` | `ip r` | show or modify the routing table |
| `route add default GATEWAY` | `ip route add default via GATEWAY` | `ip r a default via GATEWAY` | add default route to GATEWAY |
| `route del ROUTE` | `ip route del ROUTE` | `ip r d ROUTE` | remove ROUTE (e.g. default) |
| `netstat -anpe` | `ss --all --numeric --processes --extended` | `ss -anpe` | list listening processes, less pretty |
Also note that I often alias ip to ip -br -c as it provides a
much prettier output.
Compare, before:
anarcat@angela:~> ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host noprefixroute
valid_lft forever preferred_lft forever
2: wlan0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff permaddr xx:xx:xx:xx:xx:xx
altname wlp166s0
altname wlx8cf8c57333c7
4: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
20: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether xx:xx:xx:xx:xx:xx brd ff:ff:ff:ff:ff:ff
inet 192.168.0.108/24 brd 192.168.0.255 scope global dynamic noprefixroute eth0
valid_lft 40699sec preferred_lft 40699sec
After:
anarcat@angela:~> ip -br -c a
lo UNKNOWN 127.0.0.1/8 ::1/128
wlan0 DOWN
virbr0 DOWN 192.168.122.1/24
eth0 UP 192.168.0.108/24
I don't even need to redact MAC addresses! It also affects the display of the other commands, which look similarly neat.
Also imagine pretty colors above.
Finally, I don't have a cheat sheet for iw vs iwconfig (from
wireless-tools) yet. I just use NetworkManager now and rarely have
to mess with wireless interfaces directly.
For context, there are traditionally two ways of configuring the network in Linux:

- ifconfig, arp, route and netstat, which are part of the net-tools package
- the ip command, from the iproute2 package

It seems like the latter was made "important" in Debian in 2008, which means every release since Debian 5 "lenny" has featured the ip command.
The former net-tools package was demoted in December 2016 which
means every release since Debian 9 "stretch" ships without an
ifconfig command unless explicitly requested. Note that this was
mentioned in the release notes in a similar (but, IMHO, less
useful) table.
(Technically, the net-tools Debian package source still indicates it
is Priority: important but that's a bug I have just filed.)
Finally, and perhaps more importantly, the name iproute is hilarious
if you are a bilingual french speaker: it can be read as "I proute"
which can be interpreted as "I fart" as "prout!" is the sound a fart
makes. The fact that it's called iproute2 makes it only more
hilarious.
The FAI.me service has reached another milestone:
The 42,000th job was submitted via the web interface since the beginning of this service in 2017.
The idea was to provide a simple web interface for end users for creating the configs for the fully automatic installation with only minimal questions and without knowing the syntax of the configuration files. Thanks a lot for using this service and for all your feedback.
The next job can be yours!
P.S.: I'd like to get more feedback for the FAI.me service. What do you like most? What's missing? Do you have any success story about how you use the customized ISO for your deployment? Please fill out the FAI questionnaire or send feedback via email to fai.me@fai-project.org
FAI.me is the service for building your own customized images via a web interface. You can create an installation or live ISO or a cloud image. For Debian, multiple release versions can be chosen, as well as installations for Ubuntu Server, Ubuntu Desktop, or Linux Mint.
Multiple options are available, like selecting different desktop environments, the language and keyboard, and adding a user with a password. Optional settings include adding your own package list, choosing a backports kernel, adding a postinst script, adding an ssh public key, choosing a partition layout, and some more.
The eighteenth release of the qlcal package arrived at CRAN today. There has been no calendar update in QuantLib 1.41, so it has been relatively quiet since the last release last summer, but we now added a nice new feature (more below), leading to a new minor release version.
qlcal delivers the calendaring parts of QuantLib. It is provided (for the R package) as a set of included files, so the package is self-contained and does not depend on an external QuantLib library (which can be demanding to build). qlcal covers over sixty country / market calendars and can compute holiday lists, their complement (i.e. business day lists) and much more. Examples are in the README at the repository, the package page, and of course at the CRAN package page.
This release makes it (much) easier to work with multiple calendars. The previous setup remains: the package keeps one ‘global’ (and hidden) calendar object which can be set, queried, altered, etc. But now we added the ability to hold instantiated calendar objects in R. These are external pointer objects, and we can pass them to functions requiring a calendar. If no such optional argument is given, we fall back to the global default as before. Similarly, for functions operating on one or more dates, we now simply default to the current date if none is given. That means we can now say
> sapply(c("UnitedStates/NYSE", "Canada/TSX", "Australia/ASX"),
         \(x) qlcal::isBusinessDay(xp=qlcal::getCalendar(x)))
UnitedStates/NYSE        Canada/TSX     Australia/ASX
             TRUE              TRUE              TRUE

to query today (February 18) in several markets, or compare to two days ago when Canada and the US both observed a holiday:

> sapply(c("UnitedStates/NYSE", "Canada/TSX", "Australia/ASX"),
         \(x) qlcal::isBusinessDay(as.Date("2026-02-16"), xp=qlcal::getCalendar(x)))
UnitedStates/NYSE        Canada/TSX     Australia/ASX
            FALSE             FALSE              TRUE

The full details from NEWS.Rd follow.
Changes in version 0.1.0 (2026-02-18)

- Invalid calendars return id ‘TARGET’ now
- Calendar objects can be created on the fly and passed to the date-calculating functions; if missing, the global one is used
- For several functions, a missing date object now implies computation on the current date, e.g. isBusinessDay()
Courtesy of my CRANberries, there is a diffstat report for this release. See the project page and package documentation for more details, and more examples.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
Edited 2026-02-21 to correct a minor earlier error: it referenced a QuantLib 1.42 release which does not (yet) exist.

This disturbing and amusing article describes how an OpenAI investor appears to be having psychological problems related to SCP-based text generated by ChatGPT [2]. Definitely going to be a recursive problem as people who believe in it invest in it.
An interesting analysis of dbus and the design for a more secure replacement [3].
Ploum wrote an insightful article about the problems caused by the Github monopoly [5]. Radicale sounds interesting.
Niki Tonsky wrote an interesting article about the UI problems with Tahoe (the latest macOS release) due to trying to make an icon for everything [6]. They have a really good writing style as well as being well researched.
This video about designing a C64 laptop is a masterclass in computer design [9].
Ron Garrett wrote an insightful blog post about abortion [11].
Bruce Schneier and Nathan E. Sanders wrote an insightful article about the potential of LLM systems for advertising and enshittification [12]. We need serious legislation about this ASAP!
17 February, 2026 08:09AM by etbe
In the Tor Project system administrators' team (colloquially known as TPA), we've recently changed how we make decisions, which means you'll get clearer communications from us about upcoming changes, or targeted questions about a proposal.
Note that this change only affects the TPA team. At Tor, each team has its own way of coordinating and making decisions, and so far this process is only used inside TPA. We encourage other teams inside and outside Tor to evaluate this process to see if it can improve their own processes and documentation.
We had traditionally been using a "RFC" ("Request For Comments") process and have recently switched to "ADR" ("Architecture Decision Record").
The ADR process is, for us, pretty simple. It consists of three things:
As team lead, the first thing I did was to propose a new template (in ADR-100), a variation of the Nygard template. The TPA variation of the template is similarly simple, as it has only 5 headings, and is worth quoting in full:
Context: What is the issue that we're seeing that is motivating this decision or change?
Decision: What is the change that we're proposing and/or doing?
Consequences: What becomes easier or more difficult to do because of this change?
More Information (optional): What else should we know? For larger projects, consider including a timeline and cost estimate, along with the impact on affected users (perhaps including existing Personas). Generally, this includes a short evaluation of alternatives considered.
Metadata: status, decision date, decision makers, consulted, informed users, and link to a discussion forum
The previous RFC template had 17 (seventeen!) headings, which encouraged much longer documents. Now, the decision record will be easier to read and digest at one glance.
An immediate effect of this is that I've started using GitLab issues more for comparisons and brainstorming. Instead of dumping in a document all sorts of details like pricing or in-depth alternatives comparison, we record those in the discussion issue, keeping the document shorter.
The whole process is simple enough that it's worth quoting in full as well:
Major decisions are introduced to stakeholders in a meeting, smaller ones by email. A delay allows people to submit final comments before adoption.
Now, of course, the devil is in the details (and ADR-101), but the point is to keep things simple.
A crucial aspect of the proposal, which Jacob Kaplan-Moss calls the one weird trick, is to "decide who decides". Our previous process was vague about who makes the decision and the new template (and process) clarifies decision makers, for each decision.
Conversely, some decisions degenerate into endless discussions around trivial issues because too many stakeholders are consulted, a problem known as the Law of triviality, also known as the "Bike Shed syndrome".
The new process better identifies stakeholders:
Picking those stakeholders is still tricky, but our definitions are more explicit and aligned to the classic RACI matrix (Responsible, Accountable, Consulted, Informed).
Finally, a crucial part of the process (ADR-102) is to decouple the act of making and recording decisions from communicating about the decision. Those are two radically different problems to solve. We have found that a single document can't serve both purposes.
Because ADRs can affect a wide range of things, we don't have a specific template for communications. We suggest the Five Ws method (Who? What? When? Where? Why?) and, again, to keep things simple.
The ADR process is not something I invented. I first stumbled upon it in the Thunderbird Android project. Then, in parallel, I was in the process of reviewing the RFC process, following Jacob Kaplan-Moss's criticism of the RFC process. Essentially, he argues that:
And, indeed, I have been guilty of many of those issues. A verbose writer, I have written extremely long proposals that I suspect no one has ever fully read. Some proposals were adopted by exhaustion, or ignored because the right stakeholders were not looped in.
Our discussion issue on the topic has more details on the issues I found with our RFC process. But to give credit to the old process, it did serve us well while it was there: it's better than nothing, and it allowed us to document a staggering number of changes and decisions (95 RFCs!) made over the course of 6 years of work.
We're still experimenting with the communication around decisions, as this text might suggest. Because it's a separate step, we also have a tendency to forget or postpone it, like this post, which comes a couple of months late.
Previously, we'd just ship a copy of the RFC to everyone, which was easy and quick, but incomprehensible to most. Now we need to write a separate communication, which is more work but, hopefully, worth it, as the result is more digestible.
We can't wait to hear what you think of the new process and how it works for you, here or in the discussion issue! We're particularly interested in people that are already using a similar process, or that will adopt one after reading this.
Note: this article was also published on the Tor Blog.
You might see a verification screen pop up on more and more Debian web properties. Unfortunately the AI world of today is meeting web hosts that use Perl CGIs and are not built as multi-tiered scalable serving systems. The issues have been at three layers:
Optimally we would go and solve some scalability issues with the services, however there is also a question of how much we want to be able to serve - as AI scraper demand is just a steady stream of requests that are not shown to humans.
DSA has now stood up some VMs with Varnish for proxying. Incoming TLS is terminated by hitch, and TLS "on-loading" (re-encrypting towards the backends) is done using haproxy. That way TLS goes in and TLS goes out. While Varnish does cache if the content is cacheable (e.g. does not depend on cookies), that is not the primary reason for using it: it can be used for flexible query and response rewriting.
If no cookie with a proof of work is provided, the user is redirected to a challenge page that does some webcrypto in Javascript, because that looked similar to what other projects do (e.g. haphash, which originally inspired the solution). However, so far it looks like scrapers generally do not run with Javascript enabled, so this whole crypto proof-of-work business could probably be replaced with just a Javascript-based redirect. The existing solution also has big (security) holes in it. And, as we found out, Firefox is slower at webcrypto than Chrome. I have recently reduced the complexity, so you should notice it blocking you significantly less.
Once you have the cookie, you can keep accessing the site for as long as the cookie is valid. Please do not make any assumptions about the cookies, or you will be broken in the future.
For legitimate scrapers that obey robots.txt, there is now an automatically generated IP allowlist in place (thanks, Marco d'Itri). Turns out that the search engines do not actually run Javascript either and then loudly complain about the redirect to the challenge page. Other bots are generally exempt.
I hope that right now we found sort of the sweet spot where the admins can stop spending human time on updating firewall rules and the services are generally available, reasonably fast, and still indexed. In case you see problems or run into a block with your own (legitimate) bots, please let me know.
16 February, 2026 07:55PM by Philipp Kern (noreply@blogger.com)
What if I told you there is a way to configure the network on any Linux server that requires no extra software at all (no systemd-networkd, ifupdown, NetworkManager, nothing)?

It has literally 8 different caveats on top of that, but is still totally worth your time.
People following Debian development might have noticed there are now four ways of configuring the network on a Debian system. At least that is what the Debian wiki claims, namely:
ifupdown (/etc/network/interfaces): traditional static configuration system, mostly for workstations and servers, that has been in Debian forever (since at least 2000), documented in the Debian wiki
NetworkManager: self-proclaimed "standard Linux network configuration", mostly used on desktops but technically supports servers as well, see the Debian wiki page (introduced in 2004)
systemd-networkd: used more for servers, see Debian reference Doc Chapter 5 (introduced some time around Debian 8 "jessie", in 2015)
Netplan: latest entry (2018), YAML-based configuration abstraction layer on top of the above two, see also Debian reference Doc Chapter 5 and the Debian wiki
At this point, I feel ifupdown is on its way out, possibly replaced
by systemd-networkd. NetworkManager already manages most desktop
configurations.
The method is this:
ip= on the Linux kernel command line: for servers with a single IPv4 or IPv6 address, no software required other than the kernel and a boot loader (since 2002 or older).

So by "new" I mean "new to me". This option is really old. The nfsroot.txt file where it is documented predates the git import of the Linux kernel: it's part of the 2005 git import of 2.6.12-rc2, so it's already 20+ years old. The oldest trace I found is in this 2002 commit, which imports the whole file at once, but the option might go back as far as 1996-1997, if the copyright on the file is correct and the option was present back then.
The trick is to add an ip= parameter to the kernel's
command-line. The syntax, as mentioned above, is in nfsroot.txt
and looks like this:
ip=<client-ip>:<server-ip>:<gw-ip>:<netmask>:<hostname>:<device>:<autoconf>:<dns0-ip>:<dns1-ip>:<ntp0-ip>
Most settings are pretty self-explanatory, if you ignore the useless ones:
<client-ip>: IP address of the server
<gw-ip>: address of the gateway
<netmask>: netmask, in quad notation
<device>: interface name, if multiple are available
<autoconf>: how to configure the interface, namely:
  off or none: no autoconfiguration (static)
  on or any: use any protocol (default)
  dhcp: essentially like on for all intents and purposes
<dns0-ip>, <dns1-ip>: IP addresses of the primary and secondary name servers, exported to /proc/net/pnp, which can be symlinked to /etc/resolv.conf

We're ignoring these options:

<server-ip>: IP address of the NFS server, exported to /proc/net/pnp
<hostname>: name of the client, typically sent in DHCP requests, which may lead to a DNS record being created in some networks
<ntp0-ip>: exported to /proc/net/ipconfig/ntp_servers, unused by the kernel

Note that the Red Hat manual has a different opinion:
ip=[<server-id>]:<gateway-IP-number>:<netmask>:<client-hostname>:<interface>:[dhcp|dhcp6|auto6|on|any|none|off]
It's essentially the same (although server-id is weird), and the
autoconf variable has other settings, so that's a bit odd.
For example, this command-line setting:
ip=192.0.2.42::192.0.2.1:255.255.255.0:::off
... will set the IP address to 192.0.2.42/24 and the gateway to 192.0.2.1. This will properly guess the network interface if there's a single one.
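To make the field positions concrete, here is a sketch that assembles a static configuration from shell variables. The variable names and addresses are mine, purely for illustration:

```shell
# Field order: <client>:<server>:<gw>:<netmask>:<hostname>:<device>:<autoconf>:<dns0>:<dns1>:<ntp0>
# Unused fields are simply left empty.
client=192.0.2.42
gateway=192.0.2.1
netmask=255.255.255.0
dns0=192.0.2.53
cmdline="ip=${client}::${gateway}:${netmask}::eth0:off:${dns0}"
echo "$cmdline"
# → ip=192.0.2.42::192.0.2.1:255.255.255.0::eth0:off:192.0.2.53
```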
A DHCP only configuration will look like this:
ip=::::::dhcp
Of course, you don't want to type this by hand every time you boot the machine. You need to configure the kernel command line permanently, and how to do that depends on your boot loader.
With GRUB, you need to edit (on Debian), the file /etc/default/grub
(ugh) and find a line like:
GRUB_CMDLINE_LINUX=""
and change it to:
GRUB_CMDLINE_LINUX="ip=::::::dhcp"
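A minimal sketch of that edit, done here on a throwaway copy so it is safe to run anywhere. On a real Debian system you would edit /etc/default/grub itself and then run update-grub to regenerate the boot configuration:

```shell
# Simulate /etc/default/grub in a temporary file.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX=""
EOF
# Put the ip= setting inside the (empty) GRUB_CMDLINE_LINUX quotes.
sed -i 's/^GRUB_CMDLINE_LINUX=""/GRUB_CMDLINE_LINUX="ip=::::::dhcp"/' "$cfg"
grep '^GRUB_CMDLINE_LINUX=' "$cfg"
# → GRUB_CMDLINE_LINUX="ip=::::::dhcp"
```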
For systemd-boot UKI setups, it's simpler: just add the setting to
the /etc/kernel/cmdline file. Don't forget to include anything
that's non-default from /proc/cmdline.
This assumes the Cmdline=@/etc/kernel/cmdline setting in /etc/kernel/uki.conf. See 2025-08-20-luks-ukify-conversion for my minimal documentation on this.
This is perhaps where this is much less portable than it might first look, because of course each distribution has its own way of configuring those options. Here are some that I know of:
Arch Linux (/etc/default/grub, /boot/loader/entries/arch.conf for systemd-boot, or /etc/kernel/cmdline for UKI)
Fedora/RHEL (/etc/default/grub, maybe more; RHEL mentions grubby, possibly some systemd-boot things here as well)
Gentoo (/etc/default/grub, /efi/loader/entries/gentoo-sources-kernel.conf for systemd-boot, or /etc/kernel/install.d/95-uki-with-custom-opts.install)

It's interesting that /etc/default/grub is consistent across all distributions above, while the systemd-boot setups are all over the place (except for the UKI case); I would have expected those to be more standard than GRUB.
If dropbear-initramfs is setup, it already requires you to have
such a configuration, and it might not work out of the box.
This is because, by default, it disables the interfaces configured in the kernel after completing its tasks (typically unlocking the encrypted disks).
To fix this, you need to disable that "feature" in the dropbear-initramfs configuration:
IFDOWN="none"
This will keep dropbear-initramfs from disabling the configured
interface.
Traditionally, I've set up my servers with ifupdown and my laptops with NetworkManager, because that's essentially the default. But on some machines, I've started using systemd-networkd because ifupdown has ... issues, particularly with reloading network configurations. ifupdown is an old hack: it feels like legacy, and is Debian-specific.
Not excited about configuring another service, I figured I would try something else: just configure the network at boot, through the kernel command-line.
I was already doing such configurations for dropbear-initramfs (see this documentation), which requires the network to be up for unlocking the full-disk encryption keys.
So in a sense, this is a "Don't Repeat Yourself" solution.
Also known as: "wait, that works?" Yes, it does! That said...
This is useful for servers where the network configuration will not change after boot. Of course, this won't work on laptops or any mobile device.
This only works for configuring a single, simple, interface. You can't configure multiple interfaces, WiFi, bridges, VLAN, bonding, etc.
It does support IPv6 and feels like the best way to configure IPv6 hosts: true zero configuration.
It likely does not work with a dual-stack IPv4/IPv6 static configuration. It might work with a dynamic dual stack configuration, but I doubt it.
I don't know what happens when a DHCP lease expires. No daemon seems to be running so I assume leases are not renewed, so this is more useful for static configurations, which includes server-side reserved fixed IP addresses. (A non-renewed lease risks getting reallocated to another machine, which would cause an addressing conflict.)
It will not automatically reconfigure the interface on link
changes, but ifupdown does not either.
It will not write /etc/resolv.conf for you but the dns0-ip
and dns1-ip do end up in /proc/net/pnp which has a compatible
syntax, so a common configuration is:
ln -s /proc/net/pnp /etc/resolv.conf
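The symlink works because /proc/net/pnp uses resolv.conf syntax. A mocked-up illustration follows; the addresses are invented, and on a real host you would just cat /proc/net/pnp:

```shell
# Fake up /proc/net/pnp contents in a temp file, to show the format
# that makes the /etc/resolv.conf symlink trick possible.
pnp=$(mktemp)
cat > "$pnp" <<'EOF'
nameserver 192.0.2.53
nameserver 192.0.2.54
EOF
# The libc resolver, reading this through the symlink, sees ordinary
# "nameserver" lines and uses both servers.
grep -c '^nameserver' "$pnp"
# → 2
```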
I have not really tested this at scale: only on a single test server at home.
Yes, that's a lot of caveats, but it happens to cover a lot of machines for me, and it works surprisingly well. My main doubts are about long-term DHCP behaviour, but I don't see why that would be a problem with a statically defined lease.
Once you have this configuration, you don't need any "user" level network system, so you can get rid of everything:
apt purge systemd-networkd ifupdown network-manager netplan.io
Note that ifupdown (and probably others) leaves stray files in (e.g.) /etc/network, which you might want to clean up, or keep in case all this fails and I have put you in utter misery. Configuration files for other packages might also be left behind; I haven't tested this, no warranty.
This whole idea came from the A/I folks (not to be confused with AI) who have been doing this forever, thanks!
Note: I have not published blog posts about my academic papers over the past few years. To ensure that my blog contains a more comprehensive record of my published papers and to surface these for folks who missed them, I will be periodically (re)publishing blog posts about some “older” published projects.
It seems natural to think of online communities competing for the time and attention of their participants. Over the last few years, I’ve worked with a team of collaborators—led by Nathan TeBlunthuis—to use mathematical and statistical techniques from ecology to understand these dynamics. What we’ve found surprised us: competition between online communities is rare and typically short-lived.
When we started this research, we figured competition would be most likely among communities discussing similar topics. As a first step, we identified clusters of such communities on Reddit. One surprising thing we noticed in our Reddit data was that many of these communities that used similar language also had very high levels of overlap among their users. This was puzzling: why were the same groups of people talking to each other about the same things in different places? And why don’t they appear to be in competition with each other for their users’ time and activity?
We didn’t know how to answer this question using quantitative methods. As a result, we recruited and interviewed 20 active participants in clusters of highly related subreddits with overlapping user bases (for example, one cluster was focused on vintage audio).
We found that the answer to the puzzle lay in the fact that the people we talked to were looking for three distinct things from the communities they participated in.
Critically, we also found that these three things represented a “trilemma,” and that no single community can meet all three needs. You might find two of the three in a single community, but you could never have all three.

The end result is something I recognize in how I engage with online communities on platforms like Reddit. People tend to engage with a portfolio of communities that vary in size, specialization, topical focus, and rules. Compared with any single community, such overlapping systems can provide a wider range of benefits. No community can do everything.
This work was published as a paper at CSCW: TeBlunthuis, Nathan, Charles Kiene, Isabella Brown, Laura (Alia) Levi, Nicole McGinnis, and Benjamin Mako Hill. 2022. “No Community Can Do Everything: Why People Participate in Similar Online Communities.” Proceedings of the ACM on Human-Computer Interaction 6 (CSCW1): 61:1-61:25. https://doi.org/10.1145/3512908.
This work was supported by the National Science Foundation (awards IIS-1908850, IIS-1910202, and GRFP-2016220885). A full list of acknowledgements is in the paper.
16 February, 2026 03:13AM by Benjamin Mako Hill
When the price of gold and silver bullion recently shot up, many leaders in the Bitcoin community expressed surprise that Bitcoin prices didn't go up too. In fact, Bitcoin crashed.
The last time the Strait of Hormuz was closed, the prices of gold and silver bullion doubled. Bitcoin didn't exist back in 1972, so nobody can say what will happen to Bitcoin prices if the same circumstances were repeated.
The USS Gerald R. Ford was christened on 9 November 2013 and incidentally, President Trump was elected for the first time on 9 November 2016.
News reports tell us that President Trump has asked the Ford to leave the Caribbean and travel to Iran. Don't be surprised if the USS Gerald R. Ford never reaches the gulf: it may show up next week in Cuba or Greenland instead.
When President Trump was elected back in 2016, I published a blog about two movies that anticipate the Trump administration. One of them concerns conflict in the Strait of Hormuz, and there has never been a better time to watch it.
If Bitcoin prices do take a big fall so quickly after the last big drop then it may permanently tarnish the reputation of Bitcoin and other cryptocurrencies too. If new users become scared of buying Bitcoins then the existing users will find it harder to convert their coins back to cash when necessary.
Continue reading the inconvenient truth about cryptocurrency.
The author holds an MIT MicroMasters in Data, Economics and Development Policy. He does not hold any crypto "assets". Swiss financial regulator FINMA will neither confirm nor deny that an investigation on this blog precipitated the resignation of their deputy CEO.
tag2upload allows authorised Debian contributors to upload to Debian simply by pushing a signed git tag to Debian’s gitlab instance, Salsa.
We have recently announced that tag2upload is, in our opinion, now very stable, and ready for general use by all Debian uploaders.
tag2upload, as part of Debian’s git transition programme, is very flexible - it needs to support a large variety of maintainer practices. And it’s relatively unopinionated, wherever that’s possible. But, during the open beta, various contributors emailed us asking for Debian packaging git workflow advice and recommendations.
This post is an attempt to give some more opinionated answers, and guide you through modernising your workflow.
(This article is aimed squarely at Debian contributors. Much of it will make little sense to Debian outsiders.)
git offers a far superior development experience to patches and tarballs. Moving tasks from a tarballs and patches representation to a normal, git-first, representation, makes everything simpler.
dgit and tag2upload automatically do many things that have to be done manually, or with separate commands, in dput-based upload workflows.
They will also save you from a variety of common mistakes. For example, you cannot accidentally overwrite an NMU, with tag2upload or dgit. These many safety catches mean that our software sometimes complains about things, or needs confirmation, when more primitive tooling just goes ahead. We think this is the right tradeoff: it’s part of the great care we take to avoid our software making messes. Software that has your back is very liberating for the user.
tag2upload makes it possible to upload with very small amounts of data transfer, which is great in slow or unreliable network environments. The other week I did a git-debpush over mobile data while on a train in Switzerland; it completed in seconds.
See the Day-to-day work section below to see how simple your life could be.
Most Debian contributors have spent months or years learning how to work with Debian’s tooling. You may reasonably fear that our software is yet more bizarre, janky, and mistake-prone stuff to learn.
We promise (and our users tell us) that’s not how it is. We have spent a lot of effort on providing a good user experience. Our new git-first tooling, especially dgit and tag2upload, is much simpler to use than source-package-based tooling, despite being more capable.
The idiosyncrasies and bugs of source packages, and of the legacy archive, have been relentlessly worked around and papered over by our thousands of lines of thoroughly-tested defensive code. You too can forget all those confusing details, like our users have! After using our systems for a while you won’t look back.
And, you shouldn’t fear trying it out. dgit and tag2upload are unlikely to make a mess. If something is wrong (or even doubtful), they will typically detect it, and stop. This does mean that starting to use tag2upload or dgit can involve resolving anomalies that previous tooling ignored, or passing additional options to reassure the system about your intentions. So admittedly it isn’t always trivial to get your first push to succeed.
One of Debian’s foundational principles is that we publish the source code.
Nowadays, the vast majority of us, and of our upstreams, are using git. We are doing this because git makes our life so much easier.
But, without tag2upload or dgit, we aren’t properly publishing our work! Yes, we typically put our git branch on Salsa, and point Vcs-Git at it. However:
The git tree there might not be the actual source: it might be patches-unapplied, contain only debian/, or something even stranger.
There is no guarantee that the debian/1.2.3-7 tag on salsa corresponds precisely to what was actually uploaded. dput-based tooling (such as gbp buildpackage) doesn't cross-check the .dsc against git.
This means that the git repositories on Salsa cannot be used by anyone who needs things that are systematic and always correct. They are OK for expert humans, but they are awkward (even hazardous) for Debian novices, and you cannot use them in automation. The real test is: could you use Vcs-Git and Salsa to build a Debian derivative? You could not.
tag2upload and dgit do solve this problem. When you upload, they:
push an archive/debian/1.2.3-7 tag to a single central git repository, *.dgit.debian.org;
add a Dgit field to the .dsc so that clients can tell (using the ftpmaster API) that this was a git-based upload, what the corresponding git objects are, and where to find them.
This dependably conveys your git history to users and downstreams, in a standard, systematic and discoverable way. tag2upload and dgit are the only system which achieves this.
(The client is dgit clone, as advertised in e.g. dgit-user(7). For dput-based uploads, it falls back to importing the source package.)
tag2upload is a substantial incremental improvement to many existing workflows. git-debpush is a drop-in replacement for building, signing, and uploading the source package.
So, you can just adopt it without completely overhauling your packaging practices. You and your co-maintainers can even mix-and-match tag2upload, dgit, and traditional approaches, for the same package.
Start with the wiki page and git-debpush(1) (ideally from forky aka testing).
You don’t need to do any of the other things recommended in this article.
The rest of this article is a guide to adopting the best and most advanced git-based tooling for Debian packaging.
Your current approach uses the “patches-unapplied” git branch format used with gbp pq and/or quilt, and often used with git-buildpackage. You previously used gbp import-orig.
You are fluent with git, and know how to use Merge Requests on gitlab (Salsa). You have your origin remote set to Salsa.
Your main Debian branch name on Salsa is master. Personally I think we should use main but changing your main branch name is outside the scope of this article.
You have enough familiarity with Debian packaging including concepts like source and binary packages, and NEW review.
Your co-maintainers are also adopting the new approach.
tag2upload and dgit (and git-debrebase) are flexible tools and can help with many other scenarios too, and you can often mix-and-match different approaches. But, explaining every possibility would make this post far too confusing.
This article will guide you in adopting:
In Debian we need to be able to modify the upstream-provided source code. Those modifications are the Debian delta. We need to somehow represent it in git.
We recommend storing the delta as git commits to those upstream files, by picking one of the following two approaches.
rationale
Much traditional Debian tooling, like quilt and gbp pq, uses the "patches-unapplied" branch format, which stores the delta as patch files in debian/patches/, in a git tree full of unmodified upstream files. This is clumsy to work with, and can even be an alarming beartrap for Debian outsiders.
Option 1: simply use git, directly, including git merge.
Just make changes directly to upstream files on your Debian branch, when necessary. Use plain git merge when merging from upstream.
This is appropriate if your package has no or very few upstream changes. It is a good approach if the Debian maintainers and upstream maintainers work very closely, so that any needed changes for Debian are upstreamed quickly, and any desired behavioural differences can be arranged by configuration controlled from within debian/.
This is the approach documented more fully in our workflow tutorial dgit-maint-merge(7).
Option 2: Adopt git-debrebase.
git-debrebase helps maintain your delta as a linear series of commits (very like a "topic branch" in git terminology). The delta can be reorganised, edited, and rebased. git-debrebase is designed to help you carry a significant and complicated delta series.
The older versions of the Debian delta are preserved in the history. git-debrebase makes extra merges to make a fast-forwarding history out of the successive versions of the delta queue branch.
This is the approach documented more fully in our workflow tutorial dgit-maint-debrebase(7).
Examples of complex packages using this approach include src:xen and src:sbcl.
We recommend using upstream git, only and directly. You should ignore upstream tarballs completely.
rationale
Many maintainers have been importing upstream tarballs into git, for example by using gbp import-orig. But in reality the upstream tarball is an intermediate build product, not (just) source code. Using tarballs rather than git exposes us to additional supply chain attacks; indeed, the key activation part of the xz backdoor attack was hidden only in the tarball!

git offers better traceability than so-called "pristine" upstream tarballs. (The word "pristine" is even a joke by the author of pristine-tar!)
First, establish which upstream git tag corresponds to the version currently in Debian. For the sake of readability, I'm going to pretend that the upstream version is 1.2.3, and that upstream tagged it v1.2.3.
Edit debian/watch to contain something like this:
version=4
opts="mode=git" https://codeberg.org/team/package refs/tags/v(\d\S*)

You may need to adjust the regexp, depending on your upstream's tag name convention. If debian/watch had a files-excluded, you'll need to make a filtered version of upstream git.
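If you want to sanity-check the tag pattern before running uscan, a rough offline test is possible. Note that grep's ERE below (v[0-9][^ ]*) is only an approximation of the Perl-style v(\d\S*) that uscan actually uses, and the tag names are made up:

```shell
# Approximate the watch-file pattern v(\d\S*) and try it against
# some example tag names.
out=$(for tag in v1.2.3 v2.0 vfoo 1.2.3; do
    if printf '%s\n' "$tag" | grep -qE '^v[0-9][^ ]*$'; then
        echo "$tag matches"
    else
        echo "$tag does not match"
    fi
done)
printf '%s\n' "$out"
# Prints: v1.2.3 and v2.0 match; vfoo and 1.2.3 do not.
```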
From now on we’ll generate our own .orig tarballs directly from git.
rationale
We need some "upstream tarball" for the 3.0 (quilt) source format to work with. It needs to correspond to the git commit we're using as our upstream. We don't need or want to use a tarball from upstream for this. The .orig is just needed so a nice legacy Debian source package (.dsc) can be generated.
Probably, the current .orig in the Debian archive is an upstream tarball, which may be different from the output of git-archive, and may even have different contents from what's in git. The legacy archive has trouble with differing .origs for the "same upstream version".
So we must — until the next upstream release — change our idea of the upstream version number. We’re going to add +git to Debian’s idea of the upstream version. Manually make a tag with that name:
git tag -m "Compatibility tag for orig transition" v1.2.3+git v1.2.3~0
git push origin v1.2.3+git

If you are doing the packaging overhaul at the same time as a new upstream version, you can skip this part.
Prepare a new branch on top of upstream git, containing what we want:
git branch -f old-master # make a note of the old git representation
git reset --hard v1.2.3 # go back to the real upstream git tag
git checkout old-master :debian # take debian/* from old-master
git commit -m "Re-import Debian packaging on top of upstream git"
git merge --allow-unrelated-histories -s ours -m "Make fast forward from tarball-based history" old-master
git branch -d old-master # it's incorporated in our history now

If there are any patches, manually apply them to your main branch with git am, and delete the patch files (git rm -r debian/patches, and commit). (If you've chosen this workflow, there should be hardly any patches.)
rationale
These are some pretty nasty git runes, indeed. They’re needed because we want to restart our Debian packaging on top of a possibly quite different notion of what the upstream is.
Convert the branch to git-debrebase format and rebase onto the upstream git:
git-debrebase -fdiverged convert-from-gbp upstream/1.2.3
git-debrebase -fdiverged -fupstream-not-ff new-upstream 1.2.3+git

If you had patches which patched generated files which are present only in the upstream tarball, and not in upstream git, you will encounter rebase conflicts. You can drop hunks editing those files, since those files are no longer going to be part of your view of the upstream source code at all.
rationale
The force option -fupstream-not-ff will be needed this one time, because your existing Debian packaging history is (probably) not based directly on the upstream history. -fdiverged may be needed because git-debrebase might spot that your branch is not based on dgit-ish git history.
Manually make your history fast forward from the git import of your previous upload.
dgit fetch
git show dgit/dgit/sid:debian/changelog
# check that you have the same version number
git merge -s ours --allow-unrelated-histories -m 'Declare fast forward from pre-git-based history' dgit/dgit/sid

Delete any existing debian/source/options and/or debian/source/local-options.
If you chose Option 1 (plain git merge): change debian/source/format to 1.0, and add debian/source/options containing -sn.
rationale
We are using the “1.0 native” source format. This is the simplest possible source format - just a tarball. We would prefer “3.0 (native)”, which has some advantages, but dpkg-source between 2013 (wheezy) and 2025 (trixie) inclusive unjustifiably rejects this configuration.
You may receive bug reports from over-zealous folks complaining about the use of the 1.0 source format. You should close such reports, with a reference to this article and to #1106402.
If you chose Option 2 (git-debrebase): ensure that debian/source/format contains 3.0 (quilt).
Now you are ready to do a local test build.
Edit README.source to at least mention dgit-maint-merge(7) or dgit-maint-debrebase(7), and to tell people not to try to edit or create anything in debian/patches/. Consider saying that uploads should be done via dgit or tag2upload.
Check that your Vcs-Git is correct in debian/control. Consider deleting or pruning debian/gbp.conf, since it isn’t used by dgit, tag2upload, or git-debrebase.
Add a note to debian/changelog about the git packaging change.
git-debrebase new-upstream will have added a “new upstream version” stanza to debian/changelog. Edit that so that it instead describes the packaging change. (Don’t remove the +git from the upstream version number there!)
In “Settings” / “Merge requests”, change “Squash commits when merging” to “Do not allow”.
rationale
Squashing could destroy your carefully-curated delta queue. It would also disrupt git-debrebase’s git branch structure.
gitlab is a giant pile of enterprise crap. It is full of startling bugs, many of which reveal a fundamentally broken design. It is only barely Free Software in practice for Debian (in the sense that we are very reluctant to try to modify it). The constant-churn development approach and open-core business model are serious problems. It’s very slow (and resource-intensive). It can be depressingly unreliable. That Salsa works as well as it does is a testament to the dedication of the Debian Salsa team (and those who support them, including DSA).
However, I have found that despite these problems, Salsa CI is well worth the trouble. Yes, there are frustrating days when work is blocked because gitlab CI is broken and/or one has to keep mashing “Retry”. But, the upside is no longer having to remember to run tests, track which of my multiple dev branches tests have passed on, and so on. Automatic tests on Merge Requests are a great way of reducing maintainer review burden for external contributions, and helping uphold quality norms within a team. They’re a great boon for the lazy solo programmer.
The bottom line is that I absolutely love it when the computer thoroughly checks my work. This is tremendously freeing, precisely at the point when one most needs it — deep in the code. If the price is to occasionally be blocked by a confused (or broken) computer, so be it.
Create debian/salsa-ci.yml containing
include:
  - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/recipes/debian.yml

In your Salsa repository, under "Settings" / "CI/CD", expand "General Pipelines" and set "CI/CD configuration file" to debian/salsa-ci.yml.
rationale
Your project may have an upstream CI config in .gitlab-ci.yml. But you probably want to run the Debian Salsa CI jobs.

You can add various extra configuration to debian/salsa-ci.yml to customise it. Consult the Salsa CI docs.
Add to debian/salsa-ci.yml:
.git-debrebase-prepare: &git-debrebase-prepare
# install the tools we'll need
- apt-get update
- apt-get --yes install git-debrebase git-debpush
# git-debrebase needs git user setup
- git config user.email "salsa-ci@invalid.invalid"
- git config user.name "salsa-ci"
# run git-debrebase make-patches
# https://salsa.debian.org/salsa-ci-team/pipeline/-/issues/371
- git-debrebase --force
- git-debrebase make-patches
# make an orig tarball using the upstream tag, not a gbp upstream/ tag
# https://salsa.debian.org/salsa-ci-team/pipeline/-/issues/541
- git-deborig
.build-definition: &build-definition
extends: .build-definition-common
before_script: *git-debrebase-prepare
build source:
extends: .build-source-only
before_script: *git-debrebase-prepare
variables:
# disable shallow cloning of git repository. This is needed for git-debrebase
GIT_DEPTH: 0

rationale
Unfortunately the Salsa CI pipeline currently lacks proper support for git-debrebase (salsa-ci#371) and has trouble directly using upstream git for orig tarballs (salsa-ci#541).
These runes were based on those in the Xen package. You should subscribe to the tickets #371 and #541 so that you can replace the clone-and-hack when proper support is merged.
Push this to salsa and make the CI pass.
If you configured the pipeline filename after your last push, you will need to explicitly start the first CI run. That’s in “Pipelines”: press “New pipeline” in the top right. The defaults will very probably be correct.
In your project on Salsa, go into “Settings” / “Repository”. In the section “Branch rules”, use “Add branch rule”. Select the branch master. Set “Allowed to merge” to “Maintainers”. Set “Allowed to push and merge” to “No one”. Leave “Allow force push” disabled.
This means that the only way to land anything on your mainline is via a Merge Request. When you make a Merge Request, gitlab will offer “Set to auto-merge”. Use that.
gitlab won’t normally merge an MR unless CI passes, although you can override this on a per-MR basis if you need to.
(Sometimes, immediately after creating a merge request in gitlab, you will see a plain “Merge” button. This is a bug. Don’t press that. Reload the page so that “Set to auto-merge” appears.)
Ideally, your package would have meaningful autopkgtests (DEP-8 tests). This makes Salsa CI more useful for you, and also helps detect and defend you against regressions in your dependencies.
The Debian CI docs are a good starting point. In-depth discussion of writing autopkgtests is beyond the scope of this article.
With this capable tooling, most tasks are much easier.
Make all changes via a Salsa Merge Request. So start by making a branch that will become the MR branch.
On your MR branch you can freely edit every file. This includes upstream files, and files in debian/.
For example, you can:
git cherry-pick an upstream commit.
git am a patch from a mailing list or from the Debian Bug System.
git revert an earlier commit, even an upstream one.
When you have a working state of things, tidy up your git branch:
With the plain-git workflow (Option 1): use git rebase -i to squash/edit/combine/reorder commits.
With git-debrebase (Option 2): use git-debrebase -i to squash/edit/combine/reorder commits; when you are happy, run git-debrebase conclude.
Do not edit debian/patches/. With git-debrebase, this is purely an output. Edit the upstream files directly instead. To reorganise/maintain the patch queue, use git-debrebase -i to edit the actual commits.
Push the MR branch (topic branch) to Salsa and make a Merge Request.
Set the MR to “auto-merge when all checks pass”. (Or, depending on your team policy, you could ask for an MR Review of course.)
If CI fails, fix up the MR branch, squash/tidy it again, force push the MR branch, and once again set it to auto-merge.
An informal test build can be done like this:
apt-get build-dep .
dpkg-buildpackage -uc -b

Ideally this will leave git status clean, with no modified or un-ignored untracked files. If it shows untracked files, add them to .gitignore or debian/.gitignore as applicable.
If it dirties the tree, consider trying to make it stop doing that. The easiest way is probably to build out-of-tree, if supported upstream. If this is too difficult, you can leave the messy build arrangements as they are, but you’ll need to be disciplined about always committing, using git clean and git reset, and so on.
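For instance (all filenames hypothetical), a debian/.gitignore covering typical build leavings might contain:

```
# debian/.gitignore -- build products left by dpkg-buildpackage/debhelper
files
*.substvars
*.debhelper
debhelper-build-stamp
.debhelper/
foo/
```

Here foo/ stands for the binary package's staging directory under debian/.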
For formal binary builds, including for testing, use dgit sbuild as described below for uploading to NEW.
Start an MR branch for the administrative changes for the release.
Document all the changes you’re going to release, in the debian/changelog.
gbp dch can help write the changelog for you:
dgit fetch sid
gbp dch --ignore-branch --since=dgit/dgit/sid --git-log=^upstream/main

Rationale:

--ignore-branch is needed because gbp dch wrongly thinks you ought to be running this on master, but of course you’re running it on your MR branch.

--git-log=^upstream/main excludes all upstream commits from the listing used to generate the changelog. (I’m assuming you have an upstream remote and that you’re basing your work on their main branch.) If there was a new upstream version, you’ll usually want to write a single line about that, and perhaps summarise anything really important.
(For the first upload after switching to using tag2upload or dgit you need --since=debian/1.2.3-1, where 1.2.3-1 is your previous DEP-14 tag, because dgit/dgit/sid will be a dsc import, not your actual history.)
Change UNRELEASED to the target suite, and finalise the changelog. (Note that dch will insist that you at least save the file in your editor.)
dch -r
git commit -m 'Finalise for upload' debian/changelog

Make an MR of these administrative changes, and merge it. (Either set it to auto-merge and wait for CI, or if you’re in a hurry double-check that it really is just a changelog update so that you can be confident about telling Salsa to “Merge unverified changes”.)
Now you can perform the actual upload:
git checkout master
git pull --ff-only # bring the gitlab-made MR merge commit into your local tree
git-debpush --quilt=linear

--quilt=linear is needed only the first time, but it is very important that first time, to tell the system the correct git branch layout.
If your package is NEW (completely new source, or has new binary packages) you can’t do a source-only upload. You have to build the source and binary packages locally, and upload those build artifacts.
Happily, given the same git branch you’d tag for tag2upload, and assuming you have sbuild installed and a suitable chroot, dgit can help take care of the build and upload for you:
Prepare the changelog update and merge it, as above. Then:
Create the orig tarball and launder the git-debrebase branch:
git-deborig
git-debrebase quick

Rationale:
Source package format 3.0 (quilt), which is what I’m recommending here for use with git-debrebase, needs an orig tarball; it would also be needed for 1.0-with-diff.
Build the source and binary packages, locally:
dgit sbuild
dgit push-built

Rationale:

You don’t have to use dgit sbuild, but it is usually convenient to do so, because unlike sbuild, dgit understands git. Also it works around a gitignore-related defect in dpkg-source.
Find the new upstream version number and corresponding tag. (Let’s suppose it’s 1.2.4.) Check the provenance:
git verify-tag v1.2.4

Rationale:
Not all upstreams sign their git tags, sadly. Sometimes encouraging them to do so can help. You may need to use some other method(s) to check that you have the right git commit for the release.
Simply merge the new upstream version and update the changelog:
git merge v1.2.4
dch -v1.2.4-1 'New upstream release.'

Rebase your delta queue onto the new upstream version:
git debrebase new-upstream 1.2.4

If there are conflicts between your Debian delta for 1.2.3, and the upstream changes in 1.2.4, this is when you need to resolve them, as part of git merge or git (deb)rebase.
After you’ve completed the merge, test your package and make any further needed changes. When you have it working in a local branch, make a Merge Request, as above.
git-based sponsorship is super easy! The sponsee can maintain their git branch on Salsa, and do all normal maintenance via gitlab operations.
When the time comes to upload, the sponsee notifies the sponsor that it’s time. The sponsor fetches and checks out the git branch from Salsa, does their checks, as they judge appropriate, and when satisfied runs git-debpush.
As part of the sponsor’s checks, they might want to see all changes since the last upload to Debian:
dgit fetch sid
git diff dgit/dgit/sid..HEAD

Or to see the Debian delta of the proposed upload:

git verify-tag v1.2.3
git diff v1.2.3..HEAD ':!debian'

Or to show all the delta as a series of commits:

git log -p v1.2.3..HEAD ':!debian'

Don’t look at debian/patches/. It can be absent or out of date.
Fetch the NMU into your local git, and see what it contains:
dgit fetch sid
git diff master...dgit/dgit/sid

If the NMUer used dgit, then git log dgit/dgit/sid will show you the commits they made.
Normally the best thing to do is to simply merge the NMU, and then do any reverts or rework in followup commits:
git merge dgit/dgit/sid

You should git-debrebase quick at this stage, to check that the merge went OK and the package still has a lineariseable delta queue.
Then make any followup changes that seem appropriate. Supposing your previous maintainer upload was 1.2.3-7, you can go back and see the NMU diff again with:
git diff debian/1.2.3-7...dgit/dgit/sid

The actual changes made to upstream files will always show up as diff hunks to those files. diff commands will often also show you changes to debian/patches/. Normally it’s best to filter them out with git diff ... ':!debian/patches'
If you’d prefer to read the changes to the delta queue as an interdiff (diff of diffs), you can do something like
git checkout debian/1.2.3-7
git-debrebase --force make-patches
git diff HEAD...dgit/dgit/sid -- :debian/patches

to diff against a version with debian/patches/ up to date. (The NMU, in dgit/dgit/sid, will necessarily have the patches already up to date.)
Some upstreams ship non-free files of one kind or another. Often these are just in the tarballs, in which case basing your work on upstream git avoids the problem. But if the files are in upstream’s git trees, you need to filter them out.
This advice is not for (legally or otherwise) dangerous files. If your package contains files that may be illegal, or hazardous, you need much more serious measures. In this case, even pushing the upstream git history to any Debian service, including Salsa, must be avoided. If you suspect this situation you should seek advice, privately and as soon as possible, from dgit-owner@d.o and/or the DFSG team. Thankfully, legally dangerous files are very rare in upstream git repositories, for obvious reasons.
Our approach is to make a filtered git branch, based on the upstream history, with the troublesome files removed. We then treat that as the upstream for all of the rest of our work.
Rationale:
Yes, this will end up including the non-free files in the git history, on official Debian servers. That’s OK. What’s forbidden is non-free material in the Debianised git tree, or in the source packages.
git checkout -b upstream-dfsg v1.2.3
git rm nonfree.exe
git commit -m "upstream version 1.2.3 DFSG-cleaned"
git tag -s -m "upstream version 1.2.3 DFSG-cleaned" v1.2.3+ds1
git push origin upstream-dfsg

And now, use 1.2.3+ds1, and the filtered branch upstream-dfsg, as the upstream version, instead of 1.2.3 and upstream/main. Follow the steps for Convert the git branch or New upstream version, as applicable, adding +ds1 into debian/changelog.
If you missed something and need to filter out more non-free files, re-use the same upstream-dfsg branch and bump the ds version, e.g. v1.2.3+ds2.
git checkout upstream-dfsg
git merge v1.2.4
git rm additional-nonfree.exe # if any
git commit -m "upstream version 1.2.4 DFSG-cleaned"
git tag -s -m "upstream version 1.2.4 DFSG-cleaned" v1.2.4+ds1
git push origin upstream-dfsg

If the files you need to remove keep changing, you could automate things with a small shell script debian/rm-nonfree containing appropriate git rm commands. If you use git rm -f it will succeed even if the git merge from real upstream has conflicts due to changes to non-free files.
Rationale:

Ideally uscan, which has a way of representing DFSG filtering patterns in debian/watch, would be able to do this, but sadly the relevant functionality is entangled with uscan’s tarball generation.
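The suggested debian/rm-nonfree automation can be sketched in a throwaway repository like this (all filenames, and the demo repository itself, are hypothetical):

```shell
# Throwaway demo of the debian/rm-nonfree idea; filenames are hypothetical.
set -e
demo=$(mktemp -d)
cd "$demo"
git init -q
git config user.email demo@example.invalid
git config user.name 'demo user'
touch nonfree.exe src.c
git add .
git commit -qm 'import upstream 1.2.3'
# This is what debian/rm-nonfree itself would contain: -f succeeds even
# if a merge left the non-free file conflicted, and --ignore-unmatch
# tolerates files that are absent in this upstream version.
git rm -qf --ignore-unmatch nonfree.exe additional-nonfree.exe
git commit -qm 'upstream version 1.2.3 DFSG-cleaned'
git ls-files
```

In a real package you would run the script on the upstream-dfsg branch, right after the git merge from upstream, before committing the DFSG-cleaned state.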
Tarball contents: If you are switching from upstream tarballs to upstream git, you may find that the git tree is significantly different.
It may be missing files that your current build system relies on. If so, you definitely want to be using git, not the tarball. Those extra files in the tarball are intermediate built products, but in Debian we should be building from the real source! Fixing this may involve some work, though.
gitattributes:
For Reasons the dgit and tag2upload system disregards and disables the use of .gitattributes to modify files as they are checked out.
Normally this doesn’t cause a problem so long as any orig tarballs are generated the same way (as they will be by tag2upload or git-deborig). But if the package or build system relies on them, you may need to institute some workarounds, or replicate the effect of the gitattributes as commits in git.
git submodules: git submodules are terrible and should never ever be used. But not everyone has got the message, so your upstream may be using them.
If you’re lucky, the code in the submodule isn’t used, in which case you can git rm the submodule.
I’ve tried to cover the most common situations. But software is complicated and there are many exceptions that this article can’t cover without becoming much harder to read.
You may want to look at:
dgit workflow manpages: As part of the git transition project, we have written workflow manpages, which are more comprehensive than this article. They’re centered around use of dgit, but also discuss tag2upload where applicable.
These cover a much wider range of possibilities, including (for example) choosing different source package formats, how to handle upstreams that publish only tarballs, etc. They are correspondingly much less opinionated.
Look in dgit-maint-merge(7) and dgit-maint-debrebase(7). There is also dgit-maint-gbp(7) for those who want to keep using gbp pq and/or quilt with a patches-unapplied branch.
NMUs are very easy with dgit. (tag2upload is usually less suitable than dgit, for an NMU.)
You can work with any package, in git, in a completely uniform way, regardless of maintainer git workflow. See dgit-nmu-simple(7).
Native packages (meaning packages maintained wholly within Debian) are much simpler. See dgit-maint-native(7).
tag2upload documentation: The tag2upload wiki page is a good starting point. There’s the git-debpush(1) manpage of course.
dgit reference documentation:
There is a comprehensive command-line manual in dgit(1). Description of the dgit data model and Principles of Operation is in dgit(7), including coverage of out-of-course situations.
dgit is a complex and powerful program, so this reference material can be overwhelming. We recommend starting with a guide like this one, or the dgit-…(7) workflow tutorials.
Design and implementation documentation for tag2upload is linked to from the wiki.
Debian’s git transition blog post from December.
tag2upload and dgit are part of the git transition project, and aim to support a very wide variety of git workflows. tag2upload and dgit work well with existing git tooling, including git-buildpackage-based approaches.
git-debrebase is conceptually separate from, and functionally independent of, tag2upload and dgit. It’s a git workflow and delta management tool, competing with gbp pq, manual use of quilt, git-dpm and so on.
git-debrebase reference documentation:
Of course there’s a comprehensive command-line manual in git-debrebase(1).
git-debrebase is quick and easy to use, but it has a complex data model and sophisticated algorithms. This is documented in git-debrebase(5).
The following 42 15-bit values form a 2-disjunctive matrix (that is, no union of two values contain or equal a third value), or equivalently, a superimposed code:
000000000011111 000000011100011 000000101101100 000001010110100 000001101010001 000001110001010 000010011011000 000100100110010 000110010000110 000110100001001 000111001100000 001000110000101 001010000110001 001010101000010 001011000001100 001100001010100 001100010101000 001101000000011 010001000101001 010010001000101 010010110100000 010011000010010 010100001001010 010100010010001 010101100000100 011000000100110 011000100011000 011001011000000 100001001000110 100010000101010 100010100010100 100011010000001 100100000100101 100100111000000 100101000011000 101000001001001 101000010010010 101001100100000 110000001110000 110000010001100 110000100000011 111110000000000
This shows that A286874 a(15) >= 42.
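The 2-disjunctness claim is mechanical to verify; here is a sketch in Python, with the 42 codewords copied from the list above:

```python
# Verify the 2-disjunctive (superimposed) property of the 42 codewords:
# the OR of any two codewords must neither contain nor equal any third,
# distinct codeword.
WORDS = """
000000000011111 000000011100011 000000101101100 000001010110100 000001101010001 000001110001010
000010011011000 000100100110010 000110010000110 000110100001001 000111001100000 001000110000101
001010000110001 001010101000010 001011000001100 001100001010100 001100010101000 001101000000011
010001000101001 010010001000101 010010110100000 010011000010010 010100001001010 010100010010001
010101100000100 011000000100110 011000100011000 011001011000000 100001001000110 100010000101010
100010100010100 100011010000001 100100000100101 100100111000000 100101000011000 101000001001001
101000010010010 101001100100000 110000001110000 110000010001100 110000100000011 111110000000000
""".split()

vals = [int(w, 2) for w in WORDS]

def is_2_disjunctive(vals):
    """True iff no union of two values covers a third, distinct value."""
    n = len(vals)
    for i in range(n):
        for j in range(i, n):  # i == j also guards against plain containment
            union = vals[i] | vals[j]
            for k in range(n):
                if k != i and k != j and (union | vals[k]) == union:
                    return False
    return True

print(len(vals), is_2_disjunctive(vals))
```

The nested loop is a direct transcription of the definition: for every pair of codewords (including a word with itself, which also catches one word containing another), no third word may be covered by their union.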
If I had to make a guess, I'd say the equality holds, but I have nowhere near the computing resources to actually find the answer for sure. Stay tuned for news about a(14), though.
Registration and the Call for Proposals for DebConf 26 are now open. The 27th edition of the Debian annual conference will be held from July 20th to July 25th, 2026, in Santa Fe, Argentina.
The conference days will be preceded by DebCamp, which will take place from July 13th to July 19th, 2026.
The registration form can be accessed on the DebConf 26 website. After creating an account, click "register" in the profile section.
As always, basic registration for DebConf is free of charge for attendees. If you are attending the conference in a professional capacity or as a representative of your company, we kindly ask that you consider registering in one of our paid categories to help cover the costs of organizing the conference and to support subsidizing other community members.
The last day to register with guaranteed swag is June 14th.
We also encourage eligible individuals to apply for a diversity bursary. Travel, food, and accommodation bursaries are also available. More details can be found on the bursary info page.
The last day to apply for a bursary is April 1st. Applicants should receive feedback on their bursary application by May 1st.
The call for proposals for talks, discussions and other activities is also open. To submit a proposal you need to create an account on the website, and then use the "Submit Talk" button in the profile section.
The last day to submit and have your proposal be considered for the main conference schedule, with video coverage guaranteed, is April 1st.
DebConf 26 is also accepting sponsors. Interested companies and organizations may contact the DebConf team through sponsors@debconf.org or visit the DebConf 26 website.
See you in Santa Fe,
The DebConf 26 Team
14 February, 2026 12:15PM by Carlos Henrique Lima Melara, Santiago Ruano Rincón
US Justice Department officials have recently released another three million Jeffrey Epstein documents. One of those was an email discussion about a cryptocurrency investment. Epstein immediately suspected it was a pump-and-dump scheme and he felt that his involvement would be a bad idea due to "questionable ethics".
The fact this email was exposed now is another unlucky coincidence for Bitcoin. The original cryptocurrency has been on the ropes after suddenly losing half of its value. News reports talk of a death spiral. Anybody who is about to try cryptocurrency for the first time is going to take one look at the very steep fall on the charts and they are going to try something else instead, like the rush for a traditional safe haven: gold and silver bullion.
Bitcoins in circulation are allegedly worth $2.2 trillion. That figure is only valid in the hypothetical scenario where every Bitcoin could be simultaneously converted to US dollars. In practice, markets don't work that way.
We can roughly estimate the amount of electrical energy used to produce the Bitcoins in circulation. From there, we would anticipate that Bitcoin miners have sold enough Bitcoins to pay their bills and keep mining more.
In practice, it looks like some of the smartest investors were selling their Bitcoins while the price was over $100,000 and they used that money to buy gold and silver bullion before the prices of those metals started to gain momentum.
Where did that money come from though? New investors who decided to try cryptocurrency for the first time. Those are the people holding on to their Bitcoins right now hoping the price will go back up again.
Continue reading the inconvenient truth about cryptocurrency.
The author holds an MIT MicroMasters in Data, Economics and Development Policy. He does not hold any crypto "assets". Swiss financial regulator FINMA will neither confirm nor deny that an investigation on this blog precipitated the resignation of their deputy CEO.
Version 0.0.27 of RcppSpdlog arrived on CRAN moments ago, and will be uploaded to Debian and built for r2u shortly. The (nice) documentation site will be refreshed too. RcppSpdlog bundles spdlog, a wonderful header-only C++ logging library with all the bells and whistles you would want that was written by Gabi Melman, and also includes fmt by Victor Zverovich. You can learn more at the nice package documentation site.
Brian Ripley has now turned C++20 on as a default for R-devel (aka R 4.6.0 ‘to be’), and this turned up misbehavior in packages using RcppSpdlog, such as our spdl wrapper (offering a nicer interface from both R and C++), when relying on std::format. So for now, we turned this off and remain with fmt::format from the fmt library while we investigate further.
The NEWS entry for this release follows.
Changes in RcppSpdlog version 0.0.27 (2026-02-11)
- Under C++20 or later, keep relying on fmt::format until issues experienced using std::format can be identified and resolved
Courtesy of my CRANberries, there is also a diffstat report detailing changes. More detailed information is on the RcppSpdlog page, or the package documentation site.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
Uma das consequências do surto de projetos Debian é que os primeiros membros dessa comunidade estão crescendo e alguns deles também querem participar do Debianismo . Ou, no mínimo, querem incluir o Debian em seus currículos e usar a marca registrada como uma ferramenta para se apropriarem do trabalho substancial realizado por desenvolvedores e inovadores que impulsionaram o ecossistema de software livre.
Para ver como isso funcionará para essas crianças, podemos analisar casos já existentes com sobrinhas, sobrinhos, primos e namoradas.
Não consigo imaginar outra organização onde falaríamos sobre familiares de outras pessoas dessa forma. No Debianismo , porém, minha família vem sendo atacada há anos. Divulgaram rumores sobre estagiários que buscavam um relacionamento comigo durante o Google Summer of Code e o Outreachy . Enquanto esses rumores persistirem, é justo que eu possa publicar informações para defender a honra da minha família e dos meus ex-estagiários. Em outras palavras, é necessário analisar todos os relacionamentos dentro do grupo. Isso inclui relacionamentos amorosos e também casos em que o filho ou sobrinho de alguém parece estar furando a fila.
Phil Wyett reclamou recentemente que estava esperando há sete anos para que sua inscrição como desenvolvedor do Debian fosse concluída. Ele ficou desiludido e parou de contribuir completamente .
Quando um dos casais influentes do Debianismo apresentar o seu filho ao Debianismo , será que a criança ficará sete anos na fila de novos mantenedores, como aconteceu com Phil Wyett ?
Quando forem atribuídas bolsas de viagem para participação na DebConf , as crianças, especialmente as meninas, receberão automaticamente o subsídio para a diversidade?
Em 2024, no Dia de Ada Lovelace , Tassia falou sobre ser esposa de um debian :
A maioria daquele grupo se tornou amiga íntima; casei-me com uma delas... tivemos nosso próprio filho e tivemos o prazer de apresentá-lo à comunidade; lamentamos a perda de amigos e nos curamos juntos.
Março de 2014: veja a programação da Women's MiniDebConf em Barcelona . Podemos ver que Tiago e sua esposa Tassia vieram do Canadá para Barcelona.
Bartosz Fenski percebeu o que estava acontecendo e renunciou:
Para: <debian-private@lists.debian.org> Não tenho estado muito ativo ultimamente, mas com a recente atividade do nosso líder e o apoio oficial ao women-mini-debconf, quero me retirar do projeto. Não quero mais fazer parte dele. Por favor, aprovem minha declaração de aposentadoria. Nos vemos em um projeto onde todos serão bem-vindos. Não apenas mulheres.
Bartosz não é sexista e não está reclamando das mulheres. Todos sabem que existe uma cultura de nepotismo e favoritismo. Certas pessoas pareciam estar pedindo fundos do Debian para ir a Barcelona com seus parceiros.
Poucos meses depois de Barcelona, Tássia Camões Araújo recebeu o título de desenvolvedora Debian não desenvolvedora e foi imediatamente nomeada para a equipe de três coordenadores da DebConf .
Agora, avançamos dez anos. Em 22 de outubro de 2024, aconteceu o evento Online MiniDebConf Brasil 2024. Houve uma sessão, em português, comparando eventos online e presenciais . A sessão contou com um moderador e três palestrantes. A programação lista seus nomes como Paulo Santana , Antonio Terceiro , Thais Araujo e Tássia Camões Araújo .
Paulo apresenta os outros três participantes. Usando a tradução automática do YouTube, vemos o que ele está dizendo: os três palestrantes são todos parentes. Não fica claro se Thais é irmã, prima ou sobrinha de Tássia . Seu sobrenome sugere que ela seja prima ou sobrinha. Aliás, a palavra nepotismo deriva da palavra sobrinho, a forma masculina de sobrinha.
Lembrem-se que Tássia já havia atuado como uma das coordenadoras da DebConf e Antonio Terceiro foi nomeado para o comitê da DebConf em 2019. Paulo nos contou que Thais Rebouças de Araujo tem parentesco com essas duas pessoas que influenciam as oportunidades da DebConf.
Lembrem-se, em 2019, a DebConf19 foi realizada no Brasil. Um grande grupo de mulheres da Albânia e do Kosovo participou da conferência. Nenhuma delas precisou pagar pelas próprias passagens aéreas ou hospedagem. Tudo o que precisavam foi totalmente pago. No jantar da conferência, quatro dessas mulheres sentaram-se à mesma mesa que o ex-líder do Debian, Chris Lamb . Algumas semanas depois do jantar, a mulher albanesa sentada mais perto de Lamb ganhou um estágio na Outreachy . Evidência: um líder apaixonado?
Entre 2019 e 2022, espalharam inúmeros rumores acusando minha última estagiária do Google Summer of Code de ter um relacionamento comigo. A mulher tinha metade da minha idade e já era casada. Neste vídeo, ela nos conta como tentaram manipulá-la .
Aqui estão alguns exemplos da turba Debian/Zizian , pessoas que eu nunca conheci, espalhando boatos:
e no ano seguinte voltou a acontecer:
Em 2022, os desajustados gastaram mais de 120 mil dólares em honorários advocatícios para insistir que houve irregularidades , apesar de a mulher ter nos dito claramente que eu era um ótimo mentor e que ela não queria fazer parte de um grupo que se comporta pior do que um grêmio estudantil .
Em 12 de setembro de 2023, Thais Rebouças de Araujo usou o nome Thais Rebouças na programação da DebConf23 . Sem seu nome completo, não podemos afirmar se ela é parente de Tassia .
Em 15 de setembro de 2023, Thaís Rebouças de Araujo , usando o nome Thais R. Araujo, solicita a entrada na equipe Debian Python para ajudar na manutenção dos pacotes Robber , Delta e Pytest-executable .
Ela menciona Debian Python em seu perfil do LinkedIn :
Lembra quando Vincent Danjean entrou para o Debian? Ele disse ao debian-private (vazou) :
Quando participei do processo de NM, eu sabia que precisava fornecer algumas informações pessoais para que meu AM me conhecesse melhor. Também sabia que esses e-mails seriam lidos pelo FD e pelo DAM. Mas sabia que eles não seriam públicos, nem mesmo entre outros DD. Então, escrevi algumas coisas sobre minha vida pessoal. Também escrevi opiniões sobre outros DD (meu AM me pediu para reagir a algumas afirmações públicas desses DD). Não quero que esses e-mails se tornem públicos (nem mesmo entre DD e outras pessoas externas) porque não me lembro exatamente do que escrevi, não quero verificar/reescrever/filtrar as partes públicas/privadas... Esses e-mails nunca foram destinados a serem lidos por pessoas além daquelas que já os leram.
Alguém pediu a Thaís Rebouças de Araujo que fornecesse garantias? Ou ela estava imune a tais pedidos devido aos seus relacionamentos com outros membros do grupo?
Em 7 de junho de 2024, Thaís Rebouças de Araujo registrou uma empresa em seu próprio nome, número de registro 55.442.988/0001-30, com capital social de R$ 2.500 (Fonte: registro comercial ). Suas participações subsequentes em conferências parecem ter sido benéficas para seus negócios.
Quando registrei minha empresa em abril de 2021, os debianistas desonestos imediatamente começaram a atacar minha família e a espalhar boatos. Lembram-se de quando atacaram o Dr. Norbert Preining no Natal de 2018? Este era o tópico "Pedido de experiências com Norbert Preining" :
Em 18 de julho de 2024, os desajustados do GNOME abriram um tópico em seu fórum Discourse espalhando difamação sobre Sonny Piers . Mais de 21.200 pessoas já o visualizaram.
Em 28 de julho de 2024, Tássia Camões Araújo e Thaís Rebouças de Araújo farão uma palestra juntas. Mais uma vez, na programação , vemos apenas o nome de Thaís Rebouças , sem qualquer indício de parentesco entre elas. No início da palestra, cada uma apresenta a outra ao público, sem mencionar qualquer ligação familiar.
DebConf101: esqueça o mérito, o que importa é quem você conhece.
No dia 22 de outubro de 2024, ocorreu o evento Online MiniDebConf Brasil 2024, conforme descrito anteriormente, onde foi revelado que muitas dessas pessoas são parentes entre si.
On 14 July 2025, at DebConf25 in France, Tássia Camões Araújo and Thaís Rebouças de Araujo repeat the same talk from the previous year, DebConf 101. Therefore, it looks like Tássia Camões Araújo and Thaís Rebouças de Araujo both got funding for a trip to France.
Em dezembro de 2025, Thaís Rebouças de Araujo se formou. O plano deu certo e ela foi contratada antes mesmo de se graduar. Atualmente, trabalha na empresa Place Tecnologia e Inovação SA .
O que vemos aqui é bastante trágico para todos os outros na história do Debian e do software livre. Tássia Camões Araújo é esposa de Tiago , então ambos viajam juntos. Não está claro se eles solicitaram e receberam financiamento do Debian em todas as viagens. Agora, Thaís Rebouças de Araújo chegou, deu uma palestra com sua tia (?) e já fez pelo menos três viagens de longa distância saindo do Brasil, visitando a Índia em 2023, a Coreia do Sul em 2024 e a França em 2025.
Lembrem-se, Abraham Raji trabalhou muito na organização da DebConf23 na Índia. Os voluntários foram solicitados a contribuir com seu próprio dinheiro para o custo do passeio de um dia. Raji optou por não pagar e ficou de fora da excursão de caiaque paga, foi deixado sozinho para nadar sem supervisão e se afogou, uma morte evitável .
O custo das passagens aéreas da sobrinha de Tássia Camões Araújo, do Brasil para a Índia, para a DebConf23, é maior do que o custo de oferecer um passeio de um dia gratuito para todos.
Trazer a sobrinha de alguém do Brasil para a Índia foi considerado um gasto mais importante do que manter o grupo unido e em segurança durante a excursão de um dia.
Para o bem ou para o mal, esta parece ser a situação que a Igreja Católica estava tentando evitar quando decidiu que era melhor para os padres serem celibatários .
Numa organização comunitária local, como por exemplo, um clube desportivo amador, é bastante comum que várias pessoas da mesma família participem, porque todas vivem no mesmo bairro.
Os debianistas afirmam ser uma organização baseada no mérito. O Contrato Social Debian promete transparência. Quem trabalha remotamente não consegue saber facilmente quando alguém dá dinheiro para sua namorada ou sobrinha. Neste caso, como em muitos outros, a mulher usa diferentes variações do seu nome. Algumas mulheres usam pseudônimos. Quem consulta a programação da conferência ou a assiste online fica sem saber muito sobre esses relacionamentos.
As candidatas aos estágios da Outreachy foram solicitadas a dedicar tempo a tarefas de programação não remuneradas para competir por mérito. Como essas mulheres se sentem ao ver que a sobrinha de alguém recebeu oportunidades de viagem e palestras em todas as conferências?
Todos os outros candidatos a financiamento devem compartilhar os links públicos para seus trabalhos. Aqui estão os links que encontramos para Thaís Rebouças de Araujo :
É interessante comparar as contribuições de Thaís Rebouças de Araujo com as de pessoas da mesma idade nos tempos áureos do voluntariado no Debian. Pessoas como Shaya Potter , Joel "Espy" Klecker e Chris Rutter . Os primeiros vazamentos do debian-private incluem muitas evidências de como as pessoas costumavam se juntar e contribuir espontaneamente antes que o dinheiro chegasse e corrompesse a comunidade.
Na maioria das organizações, não consigo imaginar que as pessoas seriam pressionadas a escrever blogs e e-mails documentando conflitos de interesse como este. Na época em que meu pai faleceu, a turba Debian/Zizian começou sua obsessão em atacar minha família e espalhar boatos de que eu estava tendo um relacionamento com uma estagiária do Google Summer of Code . Eles até tentaram enganar a estagiária para que ela fizesse uma denúncia contra mim. Ela se recusou e não queria ser associada a essas pessoas que usam mulheres para espalhar boatos .
Os esnobes usam a marca registrada Debian e as doações para denunciar e eliminar pessoas que dedicaram décadas de trabalho genuíno ao desenvolvimento de código aberto. Em seguida, esses mesmos agressores virtuais usam a marca registrada, o dinheiro, as oportunidades de palestras e tudo o mais para ajudar seus parceiros românticos, filhos e sobrinhos a construírem uma imagem pública para si mesmos.
A hipocrisia é extraordinária. Milhares de dólares de fundos de diversidade foram gastos levando Thaís Rebouças de Araujo e outros amigos pessoais a conferências para que pudessem incluir o Debian em seus currículos e obter vantagem no mercado de trabalho. No entanto, pessoas que realizam uma quantidade enorme de trabalho real, como o Dr. Norbert Preining , Sonny Piers , Richard Stallman e eu, fomos alvo de ataques cruéis e desonestos às nossas reputações. Então, esses impostores chegam e tomam o nosso lugar. Quando os impostores aparecem em um palco na DebConf ou quando se autodenominam Desenvolvedores Debian , estão se apropriando do crédito pelo trabalho que todos os outros fizeram no passado. Estão se apoiando nos ombros de gigantes.
Eles estão envergonhando e humilhando desenvolvedores de verdade por cometerem os menores erros com linguagem e pronomes politicamente corretos, e depois colocam a sobrinha de alguém em um pedestal e fingem que ela é a próxima Linus Torvalds .
Na palestra na DebConf23, a câmera percorre a sala. Podemos ver que a sala está quase vazia. Muitas palestras também estavam quase vazias. Todas as pessoas interessantes, como Richard Stallman , Linus Torvalds e eu, estamos sujeitos à censura. As pessoas não esperam que ninguém diga nada interessante, então não se dão ao trabalho de ir à DebConf.
A expressão favorita de Joel "Espy" Klecker é atemporal. Descanse em paz, Espy (1979-2000).
Consulte mais informações sobre o cluster de gravidez do Debian .
Veja o histórico cronológico de como a cultura de assédio e abuso no Debian evoluiu .
Bartosz n'est pas sexiste et ne se plaint pas des femmes. Tout le monde sait qu'il existe une culture du népotisme et du favoritisme. Il semblerait que certaines personnes aient demandé des fonds de Debian pour se rendre à Barcelone avec leurs conjoints.
Quelques mois après Barcelone, Tássia Camões Araújo a reçu le titre de développeuse Debian non-développeuse , puis a été immédiatement nommée dans l'équipe des trois présidents de DebConf .
Dix ans plus tard, le 22 octobre 2024 s'est tenu l'événement Online MiniDebConf Brasil 2024. Une session, en portugais, comparait les événements en ligne et en présentiel . Elle était animée par un modérateur et comptait trois intervenants : Paulo Santana , Antonio Terceiro , Thais Araujo et Tássia Camões Araújo .
Paulo présente les trois autres participants. Grâce à la traduction automatique de YouTube, on comprend ce qu'il dit : les trois intervenants sont apparentés. On ignore si Thaïs est la sœur, la cousine ou la nièce de Tássia . Son nom de famille laisse penser qu'elle est cousine ou nièce. D'ailleurs, le mot « népotisme » est dérivé du mot « neveu », la forme masculine de « nièce ».
Rappelons que Tássia avait auparavant occupé l'un des postes de présidente de DebConf et qu'Antonio Terceiro avait été nommé au comité de DebConf en 2019. Paulo nous a indiqué que Thais Rebouças de Araujo est liée à ces deux personnes qui ont une influence sur les opportunités offertes par DebConf.
Rappelez-vous, en 2019, la DebConf19 s'est tenue au Brésil. Un important groupe de femmes albanaises et kosovares y a participé. Aucune d'entre elles n'a eu à payer son billet d'avion ni son hébergement. Tout était pris en charge. Lors du dîner de gala, quatre d'entre elles étaient assises à la même table que Chris Lamb , l'ancien dirigeant de Debian . Quelques semaines plus tard, la Albanaise assise le plus près de Lamb a décroché un stage chez Outreachy . Preuve : un dirigeant sous le charme ?
Entre 2019 et 2022, ils ont répandu de nombreuses rumeurs accusant ma dernière stagiaire du programme Google Summer of Code d'avoir eu une relation avec moi. Cette femme avait la moitié de mon âge et était déjà mariée. Dans cette vidéo, elle raconte comment ils ont tenté de la manipuler .
Voici quelques exemples de la foule Debian/Zizian , des gens que je n'ai jamais rencontrés, qui répandent des rumeurs :
et l'année suivante, la question est revenue sur le tapis :
En 2022, ces marginaux ont dépensé plus de 120 000 $ en frais juridiques pour insister sur le fait qu’il y avait eu des actes répréhensibles , malgré le fait que la femme nous ait clairement dit que j’étais un excellent mentor et qu’elle ne voulait pas faire partie d’un groupe qui se comporte pire qu’un syndicat étudiant .
Le 12 septembre 2023, Thais Rebouças de Araujo a utilisé le nom de Thais Rebouças dans le programme de DebConf23 . Sans son nom complet, il est impossible de déterminer son lien de parenté avec Tassia .
Le 15 septembre 2023, Thaís Rebouças de Araujo , sous le nom de Thais R. Araujo, demande à rejoindre l'équipe Debian Python et à aider à maintenir les paquets Robber , Delta et Pytest-executable .
Elle mentionne Debian Python sur son profil LinkedIn :
Vous vous souvenez quand Vincent Danjean a rejoint Debian ? Il a déclaré à debian-private (informations divulguées) :
Lors de ma procédure de gestion des comptes, je savais que je devais fournir des informations personnelles afin que mon gestionnaire de compte me connaisse mieux. Je savais également que ces courriels seraient lus par mon responsable de compte et mon gestionnaire de compte. Cependant, je savais qu'ils resteraient confidentiels, même entre les autres gestionnaires de comptes. J'ai donc inclus des éléments de ma vie privée. J'ai également donné mon avis sur d'autres gestionnaires de comptes (mon gestionnaire de compte m'a demandé de réagir à certaines déclarations publiques de ces gestionnaires). Je ne souhaite pas que ces courriels deviennent publics (même entre les gestionnaires de comptes et des personnes extérieures) car je ne me souviens plus exactement de leur contenu et je ne veux pas avoir à les vérifier, les réécrire ou les filtrer en fonction de leur nature publique ou privée. Ces courriels n'ont jamais été destinés à être lus par d'autres personnes que celles qui les ont déjà lus.
Est-ce que quelqu'un a demandé à Thaís Rebouças de Araujo de fournir des garanties ? Ou était-elle immunisée contre de telles demandes en raison de ses relations avec d'autres membres du groupe ?
Le 7 juin 2024, Thaís Rebouças de Araujo a immatriculé une entreprise à son nom, sous le numéro 55 442 988/0001-30, avec un capital social de 2 500 BRL (Source : registre du commerce ). Ses interventions ultérieures lors de conférences semblent toutes avoir été bénéfiques pour son entreprise.
Lorsque j'ai immatriculé mon entreprise en avril 2021, les débianistes dissidents ont immédiatement commencé à s'en prendre à ma famille et à répandre des rumeurs. Vous souvenez-vous de leur attaque contre le Dr Norbert Preining à Noël 2018 ? Voici le sujet de la discussion intitulée « Appel à témoignages concernant Norbert Preining » :
Le 18 juillet 2024, les membres dissidents de GNOME ouvrent une discussion sur leur forum Discourse , propageant des propos diffamatoires à l'encontre de Sonny Piers . Plus de 21 200 personnes l'ont consultée à ce jour.
Le 28 juillet 2024, Tássia Camões Araújo et Thaís Rebouças de Araujo donneront une conférence ensemble. Une fois encore, le programme ne mentionne que le nom de Thaís Rebouças, sans laisser transparaître leur lien de parenté. Au début de la conférence, elles présenteront chacune l'autre au public, sans faire mention de ce lien.
DebConf101 : Oubliez le mérite, tout est question de relations.
Le 22 octobre 2024 a eu lieu l'événement Online MiniDebConf Brasil 2024, comme décrit précédemment, où il a été révélé que beaucoup de ces personnes sont liées entre elles.
Le 14 juillet 2025, lors de la DebConf25 en France, Tássia Camões Araújo et Thaís Rebouças de Araujo répètent le même discours de l'année précédente, DebConf 101 . Il semble donc que Tássia Camões Araújo et Thaís Rebouças de Araujo aient tous deux obtenu un financement pour un voyage en France.
En décembre 2025, Thaís Rebouças de Araujo a obtenu son diplôme. Son plan a fonctionné et elle a été embauchée avant même d'être diplômée. Elle travaille désormais pour la société Place Tecnologia e Inovacao SA .
Ce que nous observons ici est assez tragique pour tous les autres acteurs de l'histoire de Debian et du logiciel libre. Tássia Camões Araújo est l'épouse de Tiago , ce qui leur permet de voyager ensemble. On ignore s'ils ont sollicité et obtenu des fonds de Debian pour chacun de leurs voyages. Thaís Rebouças de Araujo est désormais arrivée ; elle a donné une conférence avec sa tante (?) et a effectué au moins trois longs voyages depuis le Brésil, se rendant en Inde en 2023, en Corée du Sud en 2024 et en France en 2025.
Rappelons qu'Abraham Raji s'était beaucoup investi dans l'organisation de DebConf23 en Inde. Les bénévoles étaient invités à participer financièrement à l'excursion d'une journée. Raji a refusé de payer et n'a donc pas pu participer à l'excursion payante en kayak. Livré à lui-même, sans surveillance, il s'est noyé, une mort qui aurait pu être évitée .
Le coût des vols pour la nièce de Tássia Camões Araújo, qui doit venir du Brésil en Inde pour la DebConf23, est supérieur au coût d'une excursion d'une journée gratuite pour tous.
Amener la nièce de quelqu'un du Brésil en Inde était considéré comme une dépense plus importante que d'assurer la sécurité et la cohésion du groupe pendant l'excursion d'une journée.
Pour le meilleur ou pour le pire, cela ressemble à la situation que l'Église catholique essayait d'éviter lorsqu'elle a décidé qu'il était préférable que les prêtres soient célibataires .
Dans une association locale, par exemple un club sportif amateur, il est tout à fait normal que plusieurs personnes d'une même famille y participent car elles vivent toutes dans le même quartier.
Les débianistes se présentent comme une organisation fondée sur le mérite. Le Contrat social de Debian promet la transparence. Il est difficile pour les personnes travaillant à distance de savoir si de l'argent est versé à la petite amie ou à la nièce d'un membre de la communauté. Dans ce cas précis, comme dans beaucoup d'autres, la femme utilise différentes variantes de son nom. Certaines femmes ont même recours à des pseudonymes. Les personnes qui consultent le programme de la conférence ou la suivent en ligne ignorent tout de ces relations.
Les candidates aux stages d'Outreachy devaient consacrer du temps à des tâches de programmation non rémunérées afin de se mesurer à leurs compétences. Que ressentent ces femmes lorsqu'elles voient que la nièce de quelqu'un a bénéficié de voyages et d'opportunités de prise de parole à chaque conférence ?
Chaque autre candidat à un financement doit partager les liens publics vers ses travaux. Voici les liens que nous avons trouvés pour Thaís Rebouças de Araujo :
Il est intéressant de comparer les contributions de Thaís Rebouças de Araujo à celles de personnes du même âge, à l'apogée du bénévolat Debian, comme Shaya Potter , Joel « Espy » Klecker et Chris Rutter . Les premières fuites de debian-private contiennent de nombreux éléments montrant comment les gens rejoignaient et contribuaient spontanément avant que l'argent n'en pervertisse l'esprit.
Dans la plupart des organisations, j'imagine mal que l'on incite les gens à rédiger des articles de blog et des courriels pour documenter des conflits d'intérêts de ce genre. Au moment du décès de mon père, la communauté Debian/Zizian s'est acharnée sur ma famille, répandant des rumeurs selon lesquelles j'entretenais une relation avec une stagiaire du Google Summer of Code . Ils ont même tenté de la piéger pour qu'elle porte plainte contre moi. Elle a refusé, ne souhaitant pas être associée à ces personnes qui instrumentalisent les femmes pour lancer des rumeurs .
Les snobs utilisent la marque Debian et les dons pour dénoncer et faire taire ceux qui ont consacré des décennies au véritable développement de logiciels libres. Puis, ces mêmes cyberharceleurs exploitent la marque, l'argent, les opportunités de prise de parole et autres ressources pour aider leurs conjoints, enfants et neveux/nièces à se construire une notoriété.
L'hypocrisie est sidérante. Des milliers de dollars de fonds pour la diversité ont été dépensés pour emmener Thaís Rebouças de Araujo et d'autres amis à des conférences, afin qu'ils puissent ajouter Debian à leur CV et se donner un avantage sur le marché du travail. Pourtant, des personnes qui accomplissent un travail considérable, comme le Dr Norbert Preining , Sonny Piers , Richard Stallman et moi-même, avons tous été victimes d'attaques vicieuses et malhonnêtes contre notre réputation. Puis ces imposteurs arrivent et prennent notre place. Lorsqu'ils montent sur scène à la DebConf ou lorsqu'ils se prétendent développeurs Debian , ils s'attribuent le mérite du travail accompli par tous les autres. Ils se tiennent sur les épaules de géants.
Ils font honte et humilient de vrais développeurs pour la moindre erreur de langage et de pronoms « woke », puis ils encensent la nièce de quelqu'un et prétendent qu'elle est la prochaine Linus Torvalds .
Lors de ma présentation à DebConf23, la caméra a balayé la salle. On a constaté qu'elle était presque vide. De nombreuses conférences se sont déroulées dans des salles quasi désertes. Toutes les personnalités intéressantes, comme Richard Stallman , Linus Torvalds et moi-même, sont victimes de censure. Les gens ne s'attendent à rien d'intéressant, alors ils ne prennent même pas la peine d'aller à DebConf.
L'expression favorite de Joel « Espy » Klecker est intemporelle. Repose en paix, Espy (1979-2000).
Veuillez consulter plus d'informations sur le cluster de grossesse Debian .
Veuillez consulter l' historique chronologique de l'évolution de la culture du harcèlement et des abus au sein de Debian .
About 80% of my Debian contributions this month were sponsored by Freexian, as well as one direct donation via GitHub Sponsors (thanks!). If you appreciate this sort of work and are at a company that uses Debian, have a look to see whether you can pay for any of Freexian‘s services; as well as the direct benefits, that revenue stream helps to keep Debian development sustainable for me and several other lovely people.
You can also support my work directly via Liberapay or GitHub Sponsors.
New upstream versions:
Fixes for Python 3.14:
Fixes for pytest 9:
Porting away from the deprecated pkg_resources:
Other build/test failures:
global logged_msgs is unused: name is never assigned in scope (NMU)
I investigated several more build failures and suggested removing the packages in question:
Other bugs:
Alejandro Colomar reported that man(1) ignored the MANWIDTH environment variable in some circumstances. I investigated this and fixed it upstream.
I contributed an ubuntu-dev-tools patch to stop recommending sudo.
I added forky support to the images used in Salsa CI pipelines.
I began working on getting a release candidate of groff 1.24.0 into experimental, though haven’t finished that yet.
I worked on some lower-priority security updates for OpenSSH.
08 February, 2026 07:30PM by Colin Watson
Both R and Python make it reasonably easy to work with compiled extensions. But how to access objects in one environment from the other and share state or (non-trivial) objects remains trickier. Recently (and while r-forge was ‘resting’ so we opened GitHub Discussions) a question was asked concerning R and Python object pointer exchange.
This led to a pretty decent discussion, including Arrow interchange demos (pretty ideal when dealing with data.frame-alike objects), but once the focus is on more ‘library-specific’ objects from a given (C or C++, say) library it is less clear what to do, or how involved it may get.
R has external pointers, and these make it feasible to instantiate the same object in Python. To demonstrate, I created a pair of (minimal) packages wrapping a lovely (small) class from the excellent spdlog library by Gabi Melman, and more specifically in an adapted-for-R version (to avoid some R CMD check nags) in my RcppSpdlog package. It is essentially a nicer/fancier C++ version of the tic() and toc() timing scheme. When an object is instantiated, it ‘starts the clock’, and when we access it later it prints the time elapsed in microsecond resolution. In Modern C++ this takes little more than keeping an internal chrono object.
Which makes for a nice, small, yet specific object to pass to Python. So the R side of the package pair instantiates such an object and accesses its address. For different reasons, sending a ‘raw’ pointer across does not work so well, but a string with the printed address works fabulously (and is a paradigm used by other packages, so we did not invent this). Over on the Python side of the package pair, we then take this string representation and pass it to a little bit of pybind11 code to instantiate a new object. This can of course also expose functionality such as the ‘show time elapsed’ feature, either formatted or just numerically, of interest here.
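The address-as-string round trip can be sketched in pure Python with ctypes standing in for the pybind11 string constructor described above. To be clear, the Stopwatch structure, its field, and the variable names below are illustrative assumptions, not the actual chronometre/RcppSpdlog code:

```python
import ctypes

# Illustrative stand-in for the C++ stopwatch class; not the real
# RcppSpdlog object, just a structure we can take the address of.
class Stopwatch(ctypes.Structure):
    _fields_ = [("ticks", ctypes.c_long)]

sw = Stopwatch(42)

# Serialise the address as a "0x..." string, much as
# xptr::xptr_address() does on the R side.
addr_string = hex(ctypes.addressof(sw))

# Reconstruct an object from the string address, as the pybind11
# string constructor in the post does on the Python side.
alias = ctypes.cast(int(addr_string, 16), ctypes.POINTER(Stopwatch)).contents

assert alias.ticks == 42   # both names refer to the same memory
alias.ticks += 1
assert sw.ticks == 43      # mutation is visible through the original name
```

Here ctypes merely keeps the sketch dependency-free; the caveat from the post applies doubly, since nothing checks that the address actually points at an object of the expected type.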
And that is all there is to it! This can also be done from R, thanks to reticulate, as the demo() (also shown in the package README.md) shows:
> library(chronometre)
> demo("chronometre", ask=FALSE)
demo(chronometre)
---- ~~~~~~~~~~~
> #!/usr/bin/env r
>
> stopifnot("Demo requires 'reticulate'" = requireNamespace("reticulate", quietly=TRUE))
> stopifnot("Demo requires 'RcppSpdlog'" = requireNamespace("RcppSpdlog", quietly=TRUE))
> stopifnot("Demo requires 'xptr'" = requireNamespace("xptr", quietly=TRUE))
> library(reticulate)
> ## reticulate and Python in general these days really want a venv so we will use one,
> ## the default value is a location used locally; if needed create one
> ## check for existing virtualenv to use, or else set one up
> venvdir <- Sys.getenv("CHRONOMETRE_VENV", "/opt/venv/chronometre")
> if (dir.exists(venvdir)) {
+     use_virtualenv(venvdir, required = TRUE)
+ } else {
+     ## create a virtual environment, but make it temporary
+     Sys.setenv(RETICULATE_VIRTUALENV_ROOT=tempdir())
+     virtualenv_create("r-reticulate-env")
+     virtualenv_install("r-reticulate-env", packages = c("chronometre"))
+     use_virtualenv("r-reticulate-env", required = TRUE)
+ }
> sw <- RcppSpdlog::get_stopwatch() # we use a C++ struct as example
> Sys.sleep(0.5) # imagine doing some code here
> print(sw) # stopwatch shows elapsed time
0.501220
> xptr::is_xptr(sw) # this is an external pointer in R
[1] TRUE
> xptr::xptr_address(sw) # get address, format is "0x...."
[1] "0x58adb5918510"
> sw2 <- xptr::new_xptr(xptr::xptr_address(sw)) # cloned (!!) but unclassed
> attr(sw2, "class") <- c("stopwatch", "externalptr") # class it .. and then use it!
> print(sw2) # `xptr` allows us to clone and use
0.501597
> sw3 <- ch$Stopwatch( xptr::xptr_address(sw) ) # new Python object via string ctor
> print(sw3$elapsed()) # shows output via Python I/O
datetime.timedelta(microseconds=502013)
> cat(sw3$count(), "\n") # shows double
0.502657
> print(sw) # object still works in R
0.502721
The same object, instantiated in R, is used in Python and thereafter again in R. While this object is minimal in features, the concept of passing a pointer is universal: we could use it for any interesting object that R can access and Python too can instantiate. Obviously, there be dragons when passing pointers, so one may want to ascertain that headers from corresponding compatible versions are used and so on, but the principle is unaffected and should just work.
Both parts of this pair of packages are now at the corresponding repositories: PyPI and CRAN. As I commonly do here on package (change) announcements, I include the (minimal so far) set of high-level changes for the R package.
Changes in version 0.0.2 (2026-02-05)
- Removed the now-replaced unconditional virtualenv use in the demo, given the preceding conditional block
- Updated README.md with badges and an updated demo
Changes in version 0.0.1 (2026-01-25)
- Initial version and CRAN upload
Questions, suggestions, bug reports, … are welcome at either the (now awoken from the R-Forge slumber) Rcpp mailing list or the newer Rcpp Discussions.
This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. If you like this or other open-source work I do, you can sponsor me at GitHub.
I have unearthed a few old articles typed during my adolescence, between 1996 and 1998. Unremarkable at the time, these pages now compose, three decades later, the chronicle of a vanished era.1
The word “blog” does not exist yet. Wikipedia has yet to come. Google has not been born. AltaVista reigns over searches, while already struggling to embrace the nascent immensity of the web2. To meet someone, you had to agree in advance and prepare your route on paper maps. 🗺️
The web is taking off. The CSS specification has just emerged, HTML tables still serve for page layout. Cookies and advertising banners are making their appearance. Pages are adorned with music and videos, forcing browsers to arm themselves with plugins. Netscape Navigator sits on 86% of the territory, but Windows 95 now bundles Internet Explorer to quickly catch up. Facing this offensive, Netscape open-sources its browser.
France falls behind. Outside universities, Internet access remains expensive and laborious. Minitel still reigns, offering a phone directory, train tickets, remote shopping. This was not yet possible with the Internet: buying a CD online was a pipe dream. Encryption suffers from inappropriate regulation: the DES algorithm is capped at 40 bits and cracked in a few seconds.
These pages bear the trace of the web’s adolescence. Thirty years have passed. The same battles continue: data selling, advertising, monopolies.
Most articles linked here are not translated from French to English. ↩︎
I recently noticed that Google no longer fully indexes my blog. For example, it is no longer possible to find the article on lanĉo. I assume this is a consequence of the explosion of AI-generated content or a change in priorities for Google. ↩︎
08 February, 2026 02:51PM by Vincent Bernat
This was my one-hundred-and-thirty-ninth month of doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian (as the LTS and ELTS teams have now been merged, there is only one paragraph left for both activities).
During my allocated time I uploaded or worked on:
I also attended the monthly LTS/ELTS meeting. While working on updates, I stumbled upon packages whose CVEs had been postponed for a long time even though their CVSS scores were rather high. I wonder whether one should pay more attention to postponed issues; otherwise one could just as well have marked them as ignored already.
Unfortunately I didn’t find any time to work on this topic.
This month I worked on unifying packaging across Debian and Ubuntu. This makes it easier to work on those packages independently of the platform used.
This work is generously funded by Fre(i)e Software GmbH!
This month I uploaded a new upstream version or a bugfix version of:
Unfortunately I didn’t find any time to work on this topic.
Unfortunately I didn’t find any time to work on this topic.
This month I uploaded a new upstream version or a bugfix version of:
Unfortunately this month I was distracted from my normal Debian work by other unpleasant things, so the paragraphs above are mostly empty. I now have to think about how much of my spare time I will be able to dedicate to Debian in the future.
08 February, 2026 01:25PM by alteholz
Another year of data from Société de Transport de Montréal, Montreal's transit agency!
A few highlights this year:
The Saint-Michel station closed for emergency repairs in November 2024, and traffic never bounced back to its pre-closure levels; it is still stuck somewhere around 2022 Q2 levels. I wonder if this could be caused by the roadwork on Jean-Talon for the new Blue Line stations making it harder for folks in Montreal-Nord to reach the station by bus.
The opening of the Royalmount shopping center has had a durable impact on traffic at the De la Savane station. I reported on this last year, and it seems this wasn't just a fad.
With the completion of the Deux-Montagnes branch of the Réseau express métropolitain (REM, an above-ground light-rail transit network still under construction), the transfer stations to the Montreal subway have seen major traffic increases. The Édouard-Montpetit station has nearly reached its previous all-time record of 2015, and the McGill station has recovered from the general slump all the other stations have had in 2025.
The Assomption station, which used to have one of the lowest ridership numbers on the subway network, has seen tremendous growth in the past few years. This is mostly explained by the many high-rise projects built around the station since the end of the COVID-19 pandemic.
Although still subject to very high seasonality, the Jean-Drapeau station broke its previous record of 2019, a testament to the continued drawing power of the various summer festivals taking place on the Sainte-Hélène and Notre-Dame islands.
More generally, it seems the Montreal subway has had a pretty bad year. Traffic had been slowly climbing back since the COVID-19 pandemic, but this is the first year since 2020 that such a sharp decline can be witnessed. Even major stations like Jean-Talon or Lionel-Groulx are on a downward trend, and it is pretty worrisome.
As for causes, a few things come to mind. First of all, as the number of Montrealers commuting to work by bike continues to rise1, a modal shift from public transit to active mobility is to be expected. As local experts put it, this is not uncommon and has been seen in other cities before.
Another important factor that certainly turned people away from the subway this year has been the impact of the continued housing crisis in Montreal. As more and more people are evicted from their apartments, many have been seeking shelter in subway stations.
Sadly, this also brought an unprecedented wave of incivilities. As riders' sense of security sharply decreased, the STM eventually resorted to banning unhoused people from sheltering in the subway. This decision did bring some peace back to the network, but one can posit the damage had already been done and many casual riders are still avoiding the subway for this reason.
Finally, the weeks-long STM workers' strike in Q4 had an important impact on overall traffic, as it severely reduced the subway's operating hours. As with the previous item, once people find alternative ways to get around, it's always harder to bring them back.
Hopefully, my 2026 report will be a more cheerful one...
By clicking on a subway station, you'll be redirected to a graph of the station's foot traffic.