> Anyway, if you want to fuck with them ask them how they avoid L1/L2/L3 cache misses with all that code separation. They obviously don’t, but you’re very likely to get a puzzled look, as nobody ever taught them how a computer actually works.
It hardly even matters now, because each major function will have to wait on the scheduler queue until the cluster manages to assign it a container, then incur a dozen kinds of network slowness spinning up and initializing, then cold-start an interpreter, just to check a value in a structure that was painfully serialized and deserialized on its way in as arguments and context, only to decide based on that check that it doesn’t need to do anything after all and shut down.
So why would you want to add to that? A loop in which you change a few attributes on a thousand entities will run 20 times slower when you cause cache misses, even worse if your cloud provider isn’t using fast RAM. Then add the steady drag of virtual dispatch as your vtable class hierarchy grows — every call is another pointer chase the branch predictor has to guess at and the inliner can’t see through — and you’re piling poor performance on top of your already poor performance.
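A minimal C++ sketch of that thousand-entity loop, both ways. The `Entity`/`IEntity` names and layout are my own invention for illustration: the first version keeps entities packed in one contiguous array with direct calls, so the prefetcher can stream cache lines; the second scatters each entity behind its own heap allocation and a vtable, so every iteration is a pointer chase to a likely-cold line plus an indirect call.

```cpp
#include <memory>
#include <vector>

// Hypothetical hot-loop entity: a few attributes touched every iteration.
struct Entity {
    float x = 0.0f, y = 0.0f;
};

// Cache-friendly: contiguous storage, plain calls. The hardware prefetcher
// streams the array and the compiler can inline and vectorize the body.
float update_contiguous(std::vector<Entity>& entities) {
    float sum = 0.0f;
    for (Entity& e : entities) {
        e.x += 1.0f;
        e.y += 1.0f;
        sum += e.x;
    }
    return sum;
}

// Cache-hostile: each entity is its own heap allocation reached through a
// base-class pointer, so every iteration dereferences into a (likely cold)
// cache line and dispatches through a vtable.
struct IEntity {
    virtual ~IEntity() = default;
    virtual void step() = 0;
    virtual float x() const = 0;
};

struct HeapEntity : IEntity {
    float px = 0.0f, py = 0.0f;
    void step() override { px += 1.0f; py += 1.0f; }
    float x() const override { return px; }
};

float update_scattered(std::vector<std::unique_ptr<IEntity>>& entities) {
    float sum = 0.0f;
    for (auto& e : entities) {
        e->step();      // virtual call: indirect branch, no inlining
        sum += e->x();  // another indirect call per entity
    }
    return sum;
}
```

Both versions compute exactly the same result; the only differences are memory layout and dispatch, which is the whole point — the slowdown comes from where the bytes live, not from what the code does.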
Which might have made sense if spreading your code out over 20 files in 5 projects gave you something in return. But I’d argue that it didn’t just cause your CPU, but also your brain, to have memory issues while working on the code.
Processor cache? Lol.