If you could spawn a coroutine and pass it around as a value, you could implement your effects just by calling coroutine.resume(...) and coroutine.yield(...). The reason this is a more powerful abstraction than async/await is that it's colorless: structs that could be sync or async just need a field that's Option&lt;Coroutine&gt;, and when they want to be "effectful" they do something like
if let Some(co) = self.coroutine.as_mut() {
    // hypothetical yield-from-outside API; `async` is a reserved keyword
    // in Rust, so the field is named `coroutine` here
    let next_arg = co.yield(next_result);
}
Consider deserializing JSON with serde_json over a TCP socket. There's no need for an AsyncRead trait, because the asynchronicity has no color: internally, the implementation of std::io::Read just checks whether there is a coroutine to yield to before blocking.
I'm less compelled by eliminating code duplication for trivial helpers like try_async_map; being able to duct-tape together major libraries with a few lines of code is much more compelling, and it solves real problems: the serde_json-over-TCP example above is a problem codebases face today.
I can think of a few arguments against this, but I don't see it as structurally that different from algebraic effects.
I agree that try_async_map is whatever, but today, if you want to call several fallible functions from different libraries and propagate the errors, you end up in a bit of a mess.