It's interesting you mention Web Audio, because Web Audio, AFAIK, is based on a tried-and-true, mature audio API (Core Audio on OS X; Chris Rogers worked on both) that has had tons of feedback from the professional audio community for years. I wonder how much of the objection stems from Mozilla's counter-proposal, which essentially delegated DSP to JS, having been rejected?
And how do you define working in the open? Isn't dropping design docs and sample implementations into open repositories, and then iterating on them in public, open? Asm.js was just dropped like a bomb on everyone: when they announced it, they had running code already, and V8 at the time ran it very poorly. When Mozilla announced their Device API effort, they completely avoided the parallel W3C effort at first (I don't blame them). There's a bizarre mismatch in treatment between Google's work on projects in the open and Mozilla's code dumps, which get a pass.
The specs-first, implementation-second approach is IMHO a retcon of how things are done in the industry. Rough consensus and running code is the usual model: you need experience from implementations in the wild to inform the specification, and feedback from actual people trying to develop applications or implementations. Pretty much everything that went into each revision of OpenGL started life as a "behind closed doors" OpenGL extension that was subsequently generalized and incorporated into the base spec.
We tried the "do everything at the standards committee first" model in the 2000s at the W3C, and the web stagnated: XHTML, XForms, XSL, XPath, XPointer, SVG, SMIL, on and on. A long list of mostly failed standards that took forever to get standardized, and by the time they did, the world had moved on. Really, only SVG remains viable, and its spec is crazy complex.
Look at how C# evolved vs. Java. Java worked via a community process with a "don't break existing semantics" constraint, and as a result we got really shitty generics and hardly any improvements to the bytecode or the VM. Microsoft took a shotgun to C# 1.0, and as a result, IMHO, C# 3.0+ is way better than Java.
Frankly, I don't believe JavaScript's semantic model can be evolved without breaking backwards compatibility. For example, JavaScript is fundamentally single-threaded with no shared memory, yet today's devices, even mobile ones, have 2-8 cores. You can introduce shared memory, but then what? How do you implement concurrency primitives? You'd be retrofitting a memory model into a language that never had one, unlike Java, which introduced the JMM early on. To me, JavaScript's memory and execution model makes it fundamentally unsuited to serving as an intermediate representation for portable, high-performance game code. Yes, the demos are impressive, but who is going to ship Infinity Blade 3 as asm.js on mobile when there's a native OpenGL ES API? The TC39 committee is essentially operating under design constraints that, out of concern for backwards compatibility, inhibit solving real problems with the language. It's mostly syntactic sugar.
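The retrofitting problem above can be sketched in a few lines of plain JS. Anything crossing a worker boundary today is structured-cloned, i.e. copied, and `structuredClone` below stands in for what `postMessage` does to its argument (a minimal illustration, not a proposal):

```javascript
// JS's threading story is message passing: postMessage structured-clones
// its argument, so a worker always sees a *copy*, never the same object.
// structuredClone models that copy step directly.
const state = { counter: 0 };

// What a hypothetical worker would receive via postMessage(state):
const workerView = structuredClone(state);
workerView.counter += 1; // the "worker" increments its own copy

console.log(state.counter);      // 0: the original is untouched
console.log(workerView.counter); // 1: only the copy changed

// With no shared memory, there is nothing for a mutex or an atomic
// compare-and-swap to operate on - hence the memory-model problem.
```

Because the two sides can never observe the same mutable object, data races are impossible, but so are the locks and atomics that multi-core code is built on.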
Everything has to start somewhere. Java started out from zero, backed only by Sun. C# as well. A whole community of libraries and tools had to be built. All languages go through this process.
I work on GWT, which outputs JS, so I understand the power of leveraging existing libraries and toolchains. But there are always pioneers and early adopters who want to work on fresh new platforms and throw away existing paradigms. Why oppose them?
I think it is important for people to sometimes ignore what exists and see what can be done if you start from scratch. That's how we get breakthroughs instead of incrementalism.
Web Audio supposedly being based on Core Audio is beside the point; the web isn't OS X. And I never claimed it was designed by someone ignorant, or without awareness of prior experience.
I already made this point, but I'll make it again: the problem here is not that Google engineers are stupid; the problem is that they are making decisions behind closed doors based only on their own knowledge. It doesn't matter that their knowledge may have come from decades of experience; it's very hard to accurately predict customer needs from your own knowledge alone.
For the Web Audio API specifically, its problems run the gamut: poorly specified features; obvious missing features needed for basic use cases; clumsy APIs that require lots of boilerplate code and generate GC pressure to do simple things; and a myriad of race conditions resulting from a mixture of underspecification and specified behaviors that don't make sense in JS. It also shipped in Chrome with a couple of blatantly bad design decisions that have only now been removed from the spec because crogers didn't even remember they were there, among them an API that blocked the UI thread to decode audio and wasn't clearly documented as doing so. My objection to the Web Audio API comes from being forced to use it for over a year due to Google's reluctance to ship a working implementation of <audio>, thereby forcing their half-baked API on everyone. I can't comment on DSP-in-JS as a feature because I have absolutely no interest in it; I just want to play sounds.
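To make the boilerplate/GC point concrete, here's a hedged, browser-only sketch (the helper name `playBuffer` is mine, not from any spec): AudioBufferSourceNode is one-shot, so every playback allocates a fresh source node, versus the near one-liner the <audio> element gives you.

```javascript
// The <audio> way: one reusable object per sound.
//   var clip = new Audio('jump.ogg');
//   clip.play();

// The Web Audio way: decode once, then allocate a brand-new source node
// for every single playback, because source nodes are one-shot.
// playBuffer is a hypothetical helper, not part of the spec.
function playBuffer(ctx, buffer) {
  var source = ctx.createBufferSource(); // fresh allocation per play
  source.buffer = buffer;
  source.connect(ctx.destination);
  source.start(0);                       // can never be started again
  return source;                         // one more object for the GC
}
```

In a game firing dozens of sound effects per second, that per-play allocation is exactly the GC pressure being complained about.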
Can't comment on the Device API effort either, other than that I sat in on a couple of conference calls about one of them a couple of years back. It was definitely open to the public, and I believe some of the participants were random community contributors, not just Mozilla or Google employees.
How do I define working in the open? Here are a few things worth trying:
1. Don't drop a huge new surface area of APIs and behaviors on people in the form of a new prefixed feature in your browser that is immediately set in stone because big names (e.g. Rovio) are using it in content that can't be broken.
2. Actually specify the thing you claim to be proposing so that other people can implement it compatibly, instead of leaving out parts of it that shipped content uses and underspecifying other parts.
3. Actively solicit collaboration from users and/or other browser vendors before shipping your prefixed APIs and setting them in stone. If this was done for Web Audio, I certainly never saw it; it seemed to be news to everyone. Even the initial publication of the draft spec had one name on it.
Again, I've never argued in this thread that something has to be a W3C/WHATWG spec before you can implement it, just that you have to actually collaborate with the community. I see a lot of hiveminding here where I say 'in the open' and people like you interpret that to mean 'by committee', when the former does not require the latter. Just actively communicate with developers and tell them why it seems like you're screwing them, instead of expecting them to bend over and put up with whatever you've decided - for them - is best. "Specs-first" is not a term I have ever used or implied here either, and it feels like another hivemind PoV where you're used to the word "open" meaning something other than what it actually means.
Maybe asm.js felt like it was dropped 'like a bomb on everyone' to you from inside Google HQ, but as a complete non-player in its development, I was aware of it months before it ever shipped in a Firefox nightly. The people behind it were publishing draft papers that described intended behavior, demonstrating sample code, developing a compiler that targeted it in the open, developing a validator in the open, and actively soliciting (and responding to) feedback from all takers. This is what I mean by working in the open: active communication, frequent sharing, and actually responding to feedback in the early stages of development. The asm.js spec and validator lived in a public git repository (accepting issue reports and pull requests) from pretty close to the beginning, before the feature ever shipped, prefixed or otherwise. I cannot say the same for any Google prototype I ever interacted with, even if I applaud your frequent publication of papers after you've done a huge chunk of the design and engineering work. Those papers are great!
It's also important to point out that asm.js degrades gracefully, so other browser vendors didn't have to do any work to be compatible with test cases. The same is not true for NaCl or Web Audio. Dart passes this bar, as long as you are willing to tolerate dart2js's rough edges. When they supposedly 'dropped it like a bomb' on you, the demos were 100% possible to run in any browser, regardless of asm.js support. When you guys dropped Web Audio like a bomb on developers (while <audio> was still completely broken), we were presented with two equally awful choices: continue to have nonfunctional, crashy audio in Chrome, or build a Chrome-specific audio backend for our games and hope it would work. Asm.js presented no such dilemma for anyone.
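That graceful degradation is easy to illustrate with a toy (my own illustrative module, not shipped code): an asm.js module is ordinary JavaScript, so every engine runs it unmodified, while an asm.js-aware engine can additionally validate and ahead-of-time compile it.

```javascript
// A toy asm.js-style module. It is plain JavaScript: any engine runs it
// as-is; an engine that recognizes "use asm" can validate and AOT-compile.
function AddModule(stdlib, foreign, heap) {
  "use asm";
  function add(a, b) {
    a = a | 0;            // parameter coercion: a is an int32
    b = b | 0;
    return (a + b) | 0;   // int32 result, no boxing
  }
  return { add: add };
}

var mod = AddModule({}, {}, new ArrayBuffer(0x10000));
console.log(mod.add(2, 3)); // 5, whether or not the engine knows asm.js
```

The `| 0` coercions are exactly what lets an asm.js validator prove the function is pure int32 arithmetic; to every other engine they're just ordinary bitwise ops with the same result.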
If you're arguing in favor of the strength of breakthroughs instead of incrementalism, I won't disagree with you - sometimes we really need them. But Dart doesn't feel at all like a breakthrough to me, so maybe that's why I don't get excited by the promises or consider it worth shipping an entirely new VM as part of browsers and complicating the whole web platform with things like cross-language GC and a required compiler toolchain.