Phones and the apps that run on them have adapted to run reasonably well with intermittently available, slow, pricey networks and limited power budgets. Does Plan 9 have a smart way to deal with these network and power constraints? It seems unlikely, since it wasn't designed for them.
To me that's a bit like arguing that Linux/Windows/OS X should have driven broadband take-up.
What actually happened was that a meta-OS - the web - drove broadband take-up, and broadband providers scrambled to improve the technology to meet demand.
The demand for super-bandwidth mobile services isn't there in anything like the same way. And there's no equivalent meta-OS for mobile.
But... at some point we're going to be moving to non-local storage and non-local processing, and Linux isn't really ideal for that.
My guess is that will happen when computing finally starts moving past concepts that were developed in the late 1960s. AI may well be a driver of non-local distributed computing that isn't based on the cycles-as-utility or cycles-as-private-resource models we're stuck with now.
Is it too unrealistic to consider the possibility that the web could evolve into a single connected intelligent application that automatically load-balances and distributes cycles and storage across all connected devices?
The intelligent way to reduce a radio's power consumption is to not leave it on all the time. Instead you batch up requests and do them all at once. For example, a phone OS has a special push notification service so that status updates from different apps get delivered to the phone in the same radio cycle.
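A rough sketch of the batching idea, in Python. The class, the flush interval, and the request strings are all illustrative, not any real phone API; the point is just that N queued updates cost one radio wakeup instead of N:

```python
class NotificationBatcher:
    """Coalesce outbound updates so the radio wakes once per flush
    instead of once per update. Purely illustrative: real phone OSes
    do this inside their push notification service."""

    def __init__(self):
        self.pending = []
        self.radio_wakeups = 0

    def enqueue(self, update):
        # No radio activity here; the update just waits in the batch.
        self.pending.append(update)

    def flush(self):
        # One radio wakeup delivers everything queued since last flush.
        if not self.pending:
            return []
        self.radio_wakeups += 1
        delivered, self.pending = self.pending, []
        return delivered

batcher = NotificationBatcher()
for app in ["mail", "chat", "news"]:
    batcher.enqueue(f"{app}: status update")
delivered = batcher.flush()
print(len(delivered), batcher.radio_wakeups)  # 3 updates, 1 wakeup
```

Without batching, those three apps would each have powered the radio up on their own schedule.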
This is fundamentally different from a world where you assume always-on connectivity. It's more like the old pre-Internet days where email and Usenet were stored and forwarded.
If you have a compute task to accomplish on a phone, there's a tradeoff between doing an RPC and computing it locally: which uses less power, and which has lower latency? As phone CPUs get faster, it becomes more feasible to do compute-only tasks locally. This also improves availability.
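A back-of-the-envelope version of that tradeoff. Every number and function name here is a made-up placeholder (you'd measure these on real hardware); it only shows the shape of the comparison, and why a faster CPU tips it toward local compute:

```python
def cheaper_locally(local_cpu_joules, radio_joules_per_byte,
                    request_bytes, response_bytes):
    """True if doing the work on-device costs less energy than
    shipping it over the radio. All inputs are hypothetical."""
    rpc_joules = radio_joules_per_byte * (request_bytes + response_bytes)
    return local_cpu_joules < rpc_joules

# Hypothetical numbers: 10 KB of radio traffic at 0.1 mJ/byte is 1 J,
# so any local computation cheaper than that wins.
print(cheaper_locally(0.05, 0.0001, 2000, 8000))
# A slower CPU (more joules per task) flips the answer:
print(cheaper_locally(2.0, 0.0001, 2000, 8000))
```

As CPUs get more efficient, `local_cpu_joules` shrinks while the radio cost per byte stays roughly fixed, which is the trend the comment describes.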
So I'd argue that the trend is more towards offline computing and data synchronization protocols, not always-connected computing.
It's an incredibly slow wire protocol if there's any latency involved. Copying a large file from Bell Labs' servers to my California-based system via 9P took an order of magnitude longer than using HTTP between the same systems. Unfortunately that's largely baked into the design of 9P: a simple client waits for each reply before sending the next read, so every message payload costs a full round trip, while HTTP just streams the file.
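The latency sensitivity falls out of simple arithmetic: with one outstanding read at a time, throughput is capped at the message size divided by the round-trip time, no matter how fat the pipe is. The message size and RTT below are illustrative, not measurements from that transfer:

```python
def sequential_throughput_bps(msize_bytes, rtt_s):
    """Ceiling on throughput when each read waits for the previous
    reply, as a naive 9P client does: one payload per round trip."""
    return msize_bytes / rtt_s

# Illustrative: 8 KiB reads over an 80 ms cross-country RTT.
kib_per_s = sequential_throughput_bps(8192, 0.080) / 1024
print(round(kib_per_s))  # 100 KiB/s, regardless of link bandwidth
```

A streaming protocol like HTTP keeps the pipe full and is limited by bandwidth instead, which is where the order-of-magnitude gap comes from.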
aan(1) can handle the network bit. I use it to mount my fs on my laptop over my phone's connection, so if the net drops it doesn't bother me as much. 9front is pretty average on battery; I get the same usage times as FreeBSD.