It's you. Every trader who does not have the insider information loses. That's how markets work, after all: they aggregate information by rewarding its use. Anyone who has information and uses it is rewarded, and anyone who does not is punished.
Even when you are a passive investor, you lose. You essentially buy shares at random points in time. When that point happens to fall between an insider's trade and the public disclosure of the insider information, you get a worse price for that trade.
It isn't even relevant whether the insider buys stocks or other securities directly or trades in futures instead. All information you enter into the market through trades permeates the whole market through arbitrage regardless of where you enter that information.
Ohhhh trust me, I have, assuming you mean "Disable animations". The three duration scale developer settings too. Thank you for suggesting it, though, just in case.
Some apps do respect it, but sometimes it's hardcoded, and OS settings don't seem to override it. Even the OS doesn't respect it in some cases, but I think it used to. Flutter apps? Forget about it.
KolibriOS would fit in there, even with the data in memory. You cannot load it into the cache directly, but when the cache capacity is larger than all the data you read there should be no cache eviction and the OS and all data should end up in the cache more or less entirely. In other words it should be really, really fast, which KolibriOS already is to begin with.
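For scale, a back-of-the-envelope fit check (numbers are assumptions: KolibriOS famously ships as a ~1.44 MB floppy image, and a 192 MB L3 part exposes roughly 96 MB per CCD):

```python
# Rough capacity check. Sizes below are assumptions, not measurements:
# KolibriOS distributes as a ~1.44 MB floppy image; a 192 MB L3 part
# gives each CCD about 96 MB of local L3.
MIB = 2**20
image_size = int(1.44 * MIB)   # whole OS image
l3_per_ccd = 96 * MIB          # L3 reachable from one chiplet

headroom = l3_per_ccd / image_size
print(f"{headroom:.0f}x headroom")  # leaves room for working data too
```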
I thought there was an MSR buried deep somewhere that enables "Cache as RAM" mode and basically maps the cache into the memory address space or something like that.
Lol, a quick Google search leads me to a LinkedIn post with all the gory technical details?
Unless you lay everything out contiguously in memory, you'll still get cache eviction due to associativity, depending on the eviction policy of the CPU. But certainly DOS or even early Windows 95 could conceivably run entirely out of the cache.
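To make the associativity point concrete, here's a toy set-index calculation (parameters assumed for illustration: 64-byte lines, 8 ways, a 32 MiB slice, plain modulo indexing with no hashing). Any two addresses a multiple of `SETS * LINE` apart land in the same set, so nine such hot lines evict each other even while most of the cache sits empty:

```python
# Toy model of cache set mapping. All parameters are assumed:
LINE = 64          # cache line size in bytes
WAYS = 8           # associativity
SIZE = 32 * 2**20  # 32 MiB slice
SETS = SIZE // (LINE * WAYS)

def cache_set(addr):
    # physical address -> set index (simple modulo indexing, no hashing)
    return (addr // LINE) % SETS

a = 0x0000_0000
b = a + SETS * LINE * 3  # any multiple of SETS * LINE aliases back to set 0
print(cache_set(a), cache_set(b))  # both 0: with 8 ways, a 9th such
                                   # address forces an eviction
```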
Windows 95 only needed 4 MB of RAM and 50 MB of disk, so that's certainly doable. The trick is to have a hypervisor spread that allocation across cache lines.
Yeah, cache eviction is the reason I was assuming it is "probably not possible architecturally", but I also figured there could be features beyond my knowledge that might make it possible.
Edit: Also this 192MB of L3 is spread across two Zen CCDs, so it's not as simple as "throw it all in L3" either, because any given core would only have access to half of that.
Well, yeah, reality strikes again. All you need is an exploit in the microcode to gain access to AMD's equivalent to the ME and now you can just map the cache as memory directly. Maybe. Can microcode do this or is there still hardware that cannot be overcome by the black magic of CPU microcode?
That assumes KolibriOS or any major component is pinned to one core and one cache slice instead of getting dragged between CCDs or losing memory affinity. Throw actual users, IO, and interrupts at it and you get traffic across chiplets, or at least across L3 groups, so the nice 'everything lives in cache' story falls apart fast.
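On Linux the pinning half of that is at least scriptable; a minimal sketch (assuming, and this is just an assumption, that the first half of the CPU list maps to one CCD; the real L3 grouping is in `/sys/devices/system/cpu/cpu*/cache/index3/shared_cpu_list`):

```python
import os

# Restrict this process to the first half of its allowed CPUs, on the
# assumption (!) that those share one L3; verify against shared_cpu_list.
cpus = sorted(os.sched_getaffinity(0))
one_ccd = set(cpus[: max(1, len(cpus) // 2)])
os.sched_setaffinity(0, one_ccd)
print(sorted(os.sched_getaffinity(0)))
```

This keeps the scheduler from dragging the process across chiplets, but it does nothing about the interrupt and IO traffic mentioned above.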
Nice demo, bad model. The funny part is that an entire OS can fit in cache now, the hard part is making the rest of the system act like that matters.
It isn't adequately explained by incompetence. This is straight out of the boiling-the-frog playbook. Nothing about this is new or unexpected; we have plenty of history showing how these things go down. First they make installing device-owner-chosen software ridiculously laborious. Then they remove the option altogether.