No, compromising on the core thing you care about for a "seat at the table" is not how you win. It is how you lose. It is how you lose the game, the metagame, and your soul. All at once.
When you do not have a seat at the table, you are not in the game, and winning is impossible. As long as you are a player, options remain: if you cannot win outright, you can at least drag the game to a draw, change the rules, or make a loss survivable.
The system in question is a distributed system; an interaction within it such as a "confession" involves a ridiculous amount of distributed processing, far beyond the two nodes that participated in the original exchange.
An issue with this approach is that engaging in this way can start to reset your standards for "toxic people", and not in the cheerful "I'd like to buy the world a coke" manner.
One other issue I've had when I have tried to do this is that the "big" horrible issues are largely systemic rather than interpersonal: it doesn't matter who is operating the "baby seal blender"; its operation is both the harm being done and the reason "baby-seal-smoothies-r-us" exists, so unless you shut down the very profitable baby-seal-smoothie business, the harm isn't going to stop.
Not to say that those issues are universally applicable, but rather to note that when you dance with the devil you need to observe how the devil is dancing with you; if you're going to go that way you need to be really careful in ways you don't need to be careful if you, say, just go work in a situation where the harm you create is less obvious and immediate.
Is there a good reason why upgrades need to stress-test the whole system? Can't they go slowly, throttling resource usage to background levels?
They involve heavy CPU use and stress the whole system completely unnecessarily; during these stress tests the system easily hits the highest temperature the device has ever seen. If something fails or gets corrupted during that strain, it's a system-level corruption...
Incidentally, Linux kernel upgrades are no better. During DKMS updates the CPU load skyrockets, and then the reboot is always sketchy. There's no guarantee that nothing will go wrong; a Secure Boot issue after a kernel upgrade in particular can be a nightmare.
To answer your question: it helps to explain what the upgrade process entails.
In the case of Linux DKMS updates: DKMS recompiles your installed kernel modules to match the new kernel. A kernel update sometimes updates the system compiler as well; in that case it can benefit performance and stability to have all your existing modules rebuilt with the new compiler version. The new kernel ships with a new build environment, which DKMS uses to recompile existing kernel modules so they stay consistent with the new kernel and build system.
Also, kernel modules and drivers may have many code paths that should only run on specific kernel versions. This is called "conditional compilation," a technique programmers use to develop cross-platform software. Think of it as one set of source files that generates wildly different binaries depending on the machine that compiled it. By recompiling the source code after the new kernel is installed, the resulting binary may be drastically different from the one built against the previous kernel. Source code compiled against a 10-year-old kernel might contain different code paths and routines than the same source code compiled against the latest kernel.
Compiling source code is incredibly taxing on the CPU and takes significantly longer when CPU usage is throttled; compiling large modules on extremely slow systems can take hours. Managing hardware health and temperatures is mostly a hardware-level decision controlled by firmware on the hardware itself. That is usually abstracted away from software developers, who need to be certain that the machine running their code is functional and stable enough to run it. This is why we have "minimum hardware requirements."
Imagine if every piece of software contained code to monitor and manage CPU cooling. You would have software fighting each other over hardware priorities. You would have different systems for control, with some more effective and secure than others. Instead the hardware is designed to do this job intrinsically, and developers are free to focus on the output of their code on a healthy, stable system. If a particular system is not stable, that falls on the administrator of that system. By separating the responsibility between software, hardware, and implementation we have clear boundaries between who cares about what, and a cohesive operating environment.
The default could be that a background upgrade should not be a foreground stress test.
Imagine you are driving a car and, from time to time, without any warning, it suddenly starts accelerating and decelerating aggressively. Your powertrain, engine, and brakes accumulate wear and tear, and, oh, at random that car also spins out and rolls, killing everyone inside (data loss).
This is roughly how current unattended upgrades work.
I'm curious: where does it have to be disclosed? If a company paid a few legitimate reddit account owners to review its post and upvote it, and disclosed this activity in a DISCLOSURES.txt available on its website, would that be legal?
Where would one find some reddit users willing to do such reviews, by the way?