Fetching JS over the network is slow (multiple orders of magnitude slower than parsing).
Optimized compilation is slow too (roughly an order of magnitude slower than parsing).
But you can't even do baseline compilation without knowing what's in your environment. For example, JSC (from WebKit) compiles to bytecode, but that compilation happens lazily, as functions are first executed. (There is a quick pass to check syntax beforehand; V8 does this too, AFAIR.)
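You can observe this two-phase behavior from script. The sketch below (my own illustration, not engine internals) uses the `Function` constructor: syntax errors are caught eagerly at construction, even in a function that is never invoked, while failures that only show up during actual compilation/execution are deferred until the first call.

```javascript
// 1) Syntax errors are caught eagerly by the quick up-front pass,
//    even though neverCalled is never invoked:
let syntaxCaught = false;
try {
  new Function("function neverCalled() { return 1 +; }");
} catch (e) {
  syntaxCaught = e instanceof SyntaxError;
}
console.log("eager syntax check:", syntaxCaught);  // true

// 2) By contrast, a reference to a missing binding only fails when
//    the function body actually runs -- full processing was deferred:
const lazy = new Function("return definedNowhere;");
let runtimeCaught = false;
try {
  lazy();
} catch (e) {
  runtimeCaught = e instanceof ReferenceError;
}
console.log("deferred failure:", runtimeCaught);  // true
```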
But OK, let's say you're parsing. You parse, create an AST, and go to compile it. For V8, the result of compilation is native code! You can't cache that effectively; it embeds too many runtime constants. JSC does produce bytecode, but that bytecode is also dependent on runtime object layouts: for example, if you reference a binding in the global object, and the binding is present, the compiler will produce an indexed lookup into the global. That depends on what's in the global object when the compilation unit (function, eval, or global code) is compiled. Caching is a security / correctness risk, in that context.
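A rough, runnable illustration of that dependency (the variable name `x` and the setup are mine, and this shows the observable semantics, not actual bytecode): the very same source text means different things depending on what the global object looks like, which is why compiled output keyed on one global layout can't be blindly reused under another.

```javascript
// Same source text, evaluated under two different global layouts.
const src = "typeof x === 'undefined' ? 'lookup misses' : x";

// Context A: the binding is present in the global object, so a
// compiler could legally emit a fast indexed lookup for `x`.
globalThis.x = 42;
const a = eval(src);           // 42

// Context B: the binding is gone. Bytecode cached from context A,
// with a baked-in global slot index, would now be wrong.
delete globalThis.x;
const b = eval(src);           // 'lookup misses'

console.log(a, b);
```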
And then you have the negative impacts of caches on browser memory footprint (and associated GC cost).
It's no wonder modern JS engines prefer to just keep the source around and re-parse on demand.
You might be surprised. I ran some benchmarks[0] and found significant parse times, with significant differences between browsers. tldr: a) parsing has a measurable cost, b) the parser is different from the interpreter, and the speed of one has little or no bearing on the other.
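For anyone wanting a rough feel for this, here's a minimal sketch of how to time parsing in isolation (my own toy version, not the linked benchmark): generate a large source string and time `new Function`, which must at least do the eager syntax pass without running the body, then time the first execution separately.

```javascript
// Build a big chunk of synthetic JS: nFunctions small function
// declarations, joined into one source string.
function makeSource(nFunctions) {
  const parts = [];
  for (let i = 0; i < nFunctions; i++) {
    parts.push(`function f${i}(a, b) { return a * ${i} + b; }`);
  }
  return parts.join("\n");
}

const src = makeSource(20000);

const t0 = Date.now();
const compiled = new Function(src);   // parse (and lazy-compile) only
const t1 = Date.now();
compiled();                           // now the top level actually runs
const t2 = Date.now();

console.log(`parse: ${t1 - t0} ms, first execution: ${t2 - t1} ms`);
```

Numbers vary a lot between engines, which is consistent with the point above that parser speed and interpreter speed are largely independent.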
To be fair, parsing does take time. And if you're parsing 5MB+ of JS generated by something like Emscripten or GWT, it can take a sizable amount of time. It just tends to be small in comparison to the combination of network traffic and actual compilation.