You do realise that almost all connections are long-lived and burst up and down in throughput? So the 10,000 “heaviest” connections right now are not the same as in, say, 3 seconds from now?
So you propose constantly swapping in and out connections from “hardware NAT” to “software NAT”? What heuristic will you use to decide which connections go where?
Such a heuristic will probably look a lot like QoS, which is even more (much more!) resource-hungry than NAT.
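To make the churn problem concrete, here's a toy sketch (all names and numbers are hypothetical, and the traffic model is deliberately crude: a persistent heavy-tailed base rate per connection, multiplied by an independent per-interval burst factor). It picks the top 10,000 connections by current throughput twice, a few seconds apart, and counts how many "hardware NAT" slots would need swapping:

```python
import random

random.seed(42)

NUM_CONNS = 100_000  # connections on a hypothetical ISP edge box
TOP_K = 10_000       # hypothetical "hardware NAT" slots

# Long-lived flows: a persistent heavy-tailed base rate per connection...
base_rate = {c: random.paretovariate(1.2) for c in range(NUM_CONNS)}

def sample_throughput():
    # ...modulated by an independent burst factor each measurement interval.
    return {c: base_rate[c] * random.expovariate(1.0) for c in base_rate}

def top_k(rates, k):
    return set(sorted(rates, key=rates.get, reverse=True)[:k])

now = top_k(sample_throughput(), TOP_K)
later = top_k(sample_throughput(), TOP_K)  # "3 seconds later"

churn = len(now - later)
print(f"{churn} of {TOP_K} hardware slots would need swapping")
```

Even with a model this favourable to the offload idea (the base rates never change at all), the burstiness alone forces thousands of swaps per interval, and every swap means migrating NAT state between the fast path and the slow path.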
At which point will the obvious conclusion be, “maybe the carriers who actually deal with these problems have a point, NAT is indeed a significant amount of complexity, and let’s be happy IPv6 starts to make actual economic sense?”
I'd bet that even across an ISP network of a million users, 80% of the traffic at any point in time is within 10,000 connections.