> Load in latest db dump, may take as long as it wants.
400TB? That's about a week or more, right?
> Then start replication and catch up on the delay.
Then you have the changes that accumulated during that delay, roughly ±1TB. Syncing those changes takes a few more days, while new changes keep coming in.
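To put numbers on the catch-up argument: replication only converges when the replica applies changes faster than new writes arrive. A back-of-the-envelope sketch (all rates here are made-up assumptions, not from the article):

```python
def catch_up_time_hours(backlog_tb: float, apply_tb_per_hour: float,
                        write_tb_per_hour: float) -> float:
    """Time for a replica to drain its backlog while new writes keep arriving.

    Converges only when the apply rate exceeds the incoming write rate;
    otherwise the replica falls further behind forever.
    """
    if apply_tb_per_hour <= write_tb_per_hour:
        raise ValueError("replication never catches up: apply rate <= write rate")
    return backlog_tb / (apply_tb_per_hour - write_tb_per_hour)

# Hypothetical numbers: 1 TB backlog, replica applies 0.05 TB/h,
# writes arrive at 0.03 TB/h.
print(round(catch_up_time_hours(1.0, 0.05, 0.03)))  # 50 hours, i.e. ~2 days
```

So whether "a few days more" is right depends entirely on the gap between apply rate and write rate, not on the backlog size alone.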
They said "current requests are buffered", which is impossible, especially for long-running (optionally distributed) transactions still in progress (those can take hours or days for analytics workloads).
Overall this article is BS, or some super-custom case that's irrelevant for common systems. You can't migrate without downtime; it's physically impossible.
"Take snapshot and begin streaming replication"... like to where? The snapshot isn't even prepared fully yet and definitely hasn't reached the target. Where are you dumping/keeping those replication logs for the time being?
Secondly, how are you handling database state changes from realtime update queries? They are definitely still going into the source tables at this point.
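The standard CDC answer to this, if that's what they mean, would be: note the change-log position before the copy starts, copy the snapshot, then replay everything that landed after that position. A toy sketch (my own model, not their actual tooling):

```python
# Toy model of snapshot-plus-replay: the change log is read from the position
# noted *before* the snapshot copy starts, so updates that land in the source
# while the copy runs are not lost -- they are replayed on the target afterwards.

source = {"a": 1, "b": 2}
change_log = []  # stands in for the WAL/binlog the replication stream reads

def write(key, value):
    source[key] = value
    change_log.append((key, value))

snapshot_pos = len(change_log)   # remember the log position at snapshot time
snapshot = dict(source)          # "copy" the snapshot (long-running in reality)

write("a", 99)                   # realtime updates arriving mid-copy
write("c", 3)

target = dict(snapshot)
for key, value in change_log[snapshot_pos:]:  # replay changes since the snapshot
    target[key] = value

print(target == source)  # True: target converged without blocking writes
```

In a real system the "buffer" is just the database's own write-ahead log / binlog, which is retained until the replica has consumed it.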
I don't get this. I'm still stuck on point 1... I've read it twice already.
He can't. It's not a reference, just a bunch of CLI examples. Please learn what a reference is. Even the docs are BS; wonderful product. Overall this article is typical advertising and clickbait.
The code is open source though, you can read it. The CLI examples point you towards the relevant bits of the actual database code to read.
For my own sake, I'm not sure what is so surprising here. "Turn up a hot second replica and fail over to it intentionally behind a global load balancer" is pretty well-trodden ground.
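The failover itself is a tiny, atomic step once the replica is caught up; only in-flight requests on the old primary need draining. A minimal model of that cutover (the class and function names here are illustrative, not any real API):

```python
# Minimal model of "hot replica behind a load balancer": traffic flips
# atomically once the replica reports zero lag, so the cutover window is
# seconds of request draining, not days of data copying.

class LoadBalancer:
    def __init__(self, backend: str):
        self.backend = backend

    def route(self, request: str) -> str:
        return f"{self.backend}:{request}"

def cutover(lb: LoadBalancer, replica_lag_bytes: int, new_backend: str) -> None:
    if replica_lag_bytes != 0:
        raise RuntimeError("refuse to fail over while the replica is behind")
    lb.backend = new_backend  # single atomic switch

lb = LoadBalancer("primary")
print(lb.route("SELECT 1"))   # primary:SELECT 1
cutover(lb, 0, "replica")
print(lb.route("SELECT 1"))   # replica:SELECT 1
```

The long data copy happens in the background while the old primary keeps serving; downtime is bounded by the flip, not the copy.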
YES!! But the article presents it as a 400TB+ migration without downtime. This is impossible. That's why it looks like clickbait and advertising for a product.
Thank you for the link, but it's not the same case ;) Google used storage switching with a mixed-mode migration, i.e. migration on demand, as data was accessed by users. The API had a compatibility layer to read/write from/to both storage systems (I built this kind of migration mechanism about a decade ago). And Google spent about 8 years on that migration, which is fine. This article, by contrast, is about database migration as a potentially periodic process (for critical schema changes, for example), and that's what they describe: take a snapshot, race against the changes accumulating on top of it, etc. I think we can leave it here. It's not a zero-downtime solution, because such a thing doesn't exist.
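The "compatibility layer" approach described above can be sketched in a few lines: reads fall through to the old store and migrate records lazily on access, while writes land only in the new store. A toy version (store names and helpers are my own, purely illustrative):

```python
# Toy "mixed mode" migration layer: reads migrate records on demand when users
# access them; writes go straight to the new store. Over time the old store
# drains without any bulk copy or downtime window.

old_store = {"user:1": "alice", "user:2": "bob"}
new_store = {}

def read(key: str) -> str:
    if key in new_store:
        return new_store[key]
    value = old_store[key]
    new_store[key] = value      # migrate on demand, as the comment describes
    return value

def write(key: str, value: str) -> None:
    new_store[key] = value      # new writes land only in the new store

print(read("user:1"))           # alice (now also copied into new_store)
write("user:3", "carol")
print("user:1" in new_store, "user:3" in old_store)  # True False
```

The cost of this style is exactly what's noted above: the dual-path layer has to live in the API for as long as the migration runs, which can be years.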
So you don't understand how something works. That's fine. But to then say the article and/or tech are BS is... a choice.
This work has been and is being used by some of the largest sites / apps in the world including Uber, Slack, GitHub, Square... But sure, "it's BS, super custom, and irrelevant". Gee, yer super smart! Thank you for the amazing insights. 5 stars.
What they're talking about is Rosetta's macOS frameworks compiled for Intel being kept around (which Intel macOS apps use, e.g. if you run some old <xxx>.app that isn't available for Apple Silicon).
The low-level Rosetta translation layer (which is what containers use) will be kept, and they'll even keep it for Intel games, as they say in the OP.
You might've tried it during an arms-race moment. YT is constantly changing its anti-blocking measures, and uBO and uBO Lite are constantly responding. uBO had the same issue.
uBO Lite does lack custom filters and custom filter lists. It also doesn't have sync, but uBO didn't do sync well anyway. Also sync is far less useful without custom filters.