No, most games don't do it this way now. When I created this type of networking in 1994 it was to solve the particular problem of lots of units and low bandwidth. Now that bandwidth is much less of a consideration, games typically use an authoritative server (even if that server is a 'headless' process that runs on a machine alongside a client). All clients send their turns to the server and it sends out the authoritative results to all the clients. Unreal and Unity both have documentation, I believe, on how their networking works at a high level - those architectures are really adequate for most cases.
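To make that concrete, here's a minimal sketch of such an authoritative-server tick (the names - ClientInput, Snapshot, AuthoritativeServer - are mine for illustration, not Unreal's or Unity's actual API):

```cpp
#include <cstdint>
#include <queue>

struct ClientInput { uint32_t clientId; uint32_t tick; int command; };
struct Snapshot    { uint32_t tick; /* authoritative state delta */ };

class AuthoritativeServer {
public:
    // Clients send their inputs; only the server ever mutates game state.
    void receive(const ClientInput& in) { pending_.push(in); }

    // Called at a fixed rate (e.g. 30 Hz): drain inputs, step the
    // simulation once, and emit one snapshot to broadcast to all clients.
    Snapshot tick() {
        while (!pending_.empty()) {
            apply(pending_.front());   // validate, then apply to the world
            pending_.pop();
        }
        ++tick_;
        return Snapshot{tick_};
    }

private:
    void apply(const ClientInput&) { /* mutate authoritative world state */ }
    std::queue<ClientInput> pending_;
    uint32_t tick_ = 0;
};
```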
There are still limitations to this architecture and instances where you have to run something else. If I were doing an RTS with thousands of units, for example, I'd still prefer the lock-step version described above. Or, for a more concrete example, right now I'm revamping the multiplayer architecture for a game with a huge open world - and the only way to get it to work without paying for a dedicated server farm is to run a distributed-authority key-value store.
Human cognitive reaction times are a lot more latent than connections are now. The quality of play perceived by the players in the studies I did back in the day was much more about consistency of responsiveness than the raw number of milliseconds. The best possible human reaction time for cognitive tasks is still around 250 msec for most players - it would be great to see updated data for tournament players (who are a different class of player) on what their actual perception-to-action time is in their favorite games.

The AOK-and-beyond code used an adaptive scaling system to go faster when the network would reliably move packets more quickly - so it would auto-adjust to 'LAN speed' (actually a combination of the best render speed of the slowest PC plus an estimate of the round-trip latency).

Also - the command confirmation is not waiting on round-trip latency. You get confirmation when the command goes into the local buffer - 'command accepted' - and once that happens it is going to execute, so you get the confirm bark from the unit or building queue, or the movement arrow triggers. The commands actually execute simultaneously when all machines run the turn.
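A rough sketch of those two ideas - adaptive turn scaling and immediate local command acknowledgment (constants and names here are illustrative, not the actual AOK tuning):

```cpp
#include <algorithm>
#include <cstdint>
#include <deque>

struct Command { uint32_t executeTurn; int order; };

// The communication-turn length adapts to the slower of two measurements:
// the worst render time among all machines and the estimated round-trip
// latency - so on a fast LAN the game auto-adjusts toward 'LAN speed'.
uint32_t nextTurnMs(uint32_t slowestFrameMs, uint32_t roundTripEstMs) {
    uint32_t turn = std::max(slowestFrameMs, roundTripEstMs);
    return std::max(50u, std::min(turn, 1000u));  // keep within sane bounds
}

class CommandBuffer {
public:
    // 'Command accepted': the confirm bark / movement arrow fires the
    // moment the command enters the local buffer - no network round trip.
    void issue(const Command& c) {
        queue_.push_back(c);
        playConfirmFeedback();   // immediate local acknowledgment
    }
private:
    void playConfirmFeedback() { /* bark sound, movement arrow, etc. */ }
    std::deque<Command> queue_;
};
```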
>Human cognitive reaction times are a lot more latent than connections are now. [...] The best possible human reaction time for cognitive tasks is still around 250 msec for most players
This is an important realization. Our brain can perceive reasonably fast actions, but our reaction is much slower. Under good conditions we can easily tell a 60fps animation from a 30fps animation from a 10fps slideshow, but the fastest reaction time we can manage is around 100ms (the time one frame is visible at 10fps).
We are reasonably tolerant of latency because our brain has quite high latency itself, and all our actions have to account for that (for example, the point in time where you decide to release a ball you are throwing is very different from the point where it is actually released). On top of that, many real-world interactions behave similarly to latency (e.g. springs). What throws us off is inconsistent latency, because then we are suddenly unable to predict when to perform an action in order to have its effect at the desired point in time.
The 250ms pure read-react time deals with arbitrary events, but when we can chunk reactions into a practiced technique our precision goes way up, to nearly the individual millisecond: thus musicians can play rapid passages with unusual rhythms in time if they have a chance to plan and prepare, but they lose this ability when dealing with unusually high latency (extreme reverb, an amp across the stage, digital audio with huge buffer sizes). The technique, after all, depends on fast, confirming feedback that your execution is correct.
And like you say, "bouncy" latency is even more disruptive. We can adjust to a small and consistent lag, but inconsistency will degrade any level of skill.
> And like you say, "bouncy" latency is even more disruptive
The technical term for this is "jitter". Networking and telecoms people pay a lot of attention to this metric, both for the reasons you cite and because jitter is much more noticeable than high-but-constant latency in voice or video communications.
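For reference, the standard interarrival-jitter estimator from RFC 3550 (section 6.4.1), the one RTP voice/video stacks report, can be sketched as:

```cpp
#include <cmath>

// D is the change in one-way transit time between consecutive packets;
// the running estimate smooths |D| with a gain of 1/16.
struct JitterEstimator {
    double jitterMs    = 0.0;   // smoothed jitter estimate
    double lastTransit = 0.0;   // previous (arrival - send) time
    bool   first       = true;

    void onPacket(double sendMs, double arriveMs) {
        double transit = arriveMs - sendMs;
        if (!first) {
            double d = std::fabs(transit - lastTransit);
            jitterMs += (d - jitterMs) / 16.0;  // RFC 3550 formula
        }
        lastTransit = transit;
        first = false;
    }
};
```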
People played LAN games all the time with very low latency - I actually had to patch the code specially for the exhibition match at the AOK release-to-manufacture party, because the LAN they were using had <1 msec latency. We had never seen anything close to that even in our office, so my code didn't handle it well and performed poorly on that optimal network.
The word "Networked" could mean either a LAN (Local Area Network) or an internet connection.
The word "networked" is instead used to distinguish from "non-networked" gaming, which involves playing on a single PC. This could be either a single-player game like the Starcraft campaign, a turn-based hotseat game like Civilization hotseat, or a simultaneous shared-keyboard game like Achtung, die Kurve!
(Original author here) - Here was a really hard-to-find bug: in some instances more than one fish resource could be placed in the same location. That meant the game would work fine until someone fished that same spot a second time, and then the fishing boats would diverge in the different simulations. The world sync check only counted the tile contents, so we didn't see it.

There were actually two RNGs in the game. One was synchronized with the same start seed on all machines (basically a shared random pool) and used for combat and whatnot; the other was unsynchronized and used for animation variance, etc. - things that weren't gameplay related. Not knowing which one to use in a specific case (e.g. animal facing seems like animation, but it definitely affected gameplay if the animals could be hunted) could alter the code path and cause an out-of-sync condition.
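A sketch of that two-stream split (illustrative structure - the actual AOE code differed in the details):

```cpp
#include <cstdint>
#include <random>

struct GameRandom {
    std::mt19937 synced;    // same seed on every machine: combat, spawns
    std::mt19937 unsynced;  // local-only: animation variance, particles

    explicit GameRandom(uint32_t sharedSeed)
        : synced(sharedSeed), unsynced(std::random_device{}()) {}

    // Anything that can affect gameplay MUST draw from the synced
    // stream, or the simulations diverge (the animal-facing trap above).
    int gameplayRoll(int lo, int hi) {
        return std::uniform_int_distribution<int>(lo, hi)(synced);
    }
    // Cosmetic-only draws; never feed these back into the simulation.
    int cosmeticRoll(int lo, int hi) {
        return std::uniform_int_distribution<int>(lo, hi)(unsynced);
    }
};
```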
If you don't have an unsynchronized RNG, then anything unsynchronized can't use an RNG at all - and unsynchronized content matters, because it improves the game's experience by spending local resources on things that don't really need to be shared or sent over the network.
For example, maybe in an FPS some of the non-gameplay-critical graphics use particle generators for a cool effect that not all players see (because it's behind a building for some of them and thus doesn't even need to be rendered). If those generators used a synchronized RNG, then every player would have to do the computations for every particle effect happening anywhere, just so that the combat and other gameplay-important RNG values would stay in sync when they really need to be.
There were a lot of extra checks and code around the synchronized RNG - the results were loggable, for instance, to track down sync failures - so bloating that up with every random fire flicker from a burning castle would have been crazy.
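The logging hook on the synchronized stream might have looked roughly like this (hypothetical structure - the original surely logged more context than a tag and a value):

```cpp
#include <cstdio>
#include <random>

// Each synchronized draw can record a call-site tag and the result, so
// logs from two machines can be diffed to find the first divergent draw.
class LoggedSyncedRng {
public:
    explicit LoggedSyncedRng(unsigned seed) : rng_(seed) {}

    int roll(int lo, int hi, const char* tag) {
        int v = std::uniform_int_distribution<int>(lo, hi)(rng_);
        if (log_) std::fprintf(log_, "%s: %d\n", tag, v);
        return v;
    }
    void enableLogging(std::FILE* f) { log_ = f; }
private:
    std::mt19937 rng_;
    std::FILE* log_ = nullptr;
};
```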
(Author of the original article here) It was really pretty naive - nobody would plan an RTS now without planning a series of patches for the inevitable adjustments. Even for AOK we planned to patch and adjust. The 'general argument' was an MS publisher stance, as quoted by Matt Pritchard - doing a patch in those days through the MS system meant a lengthy and expensive full test process, a rollout, the creation of a patch system, etc. - so it was something you planned and budgeted for. The concept of a 'day-one patch' would have been pretty horrific. Patching is now something you integrate, plan for, and expect - because we aren't shipping gold masters to a printing company.
The Command & Conquer approach (used by them and some others in the era of gold masters) was to release expansions that incorporated balance and other fixes as part of the game's release cycle. In those days they didn't do independent patches because of the logistical complexity. It was an entirely different ecosystem before the Internet was commonplace.