I run the latest Firefox, so it should work. There's always a risk when going from HTML5 -> Web Audio: there's an occasional blip that's impossible to avoid (or at least, I have never found a solution). It doesn't happen every time, though. Try going from track 2 to track 3 in the second tab of the demo (if both are "READY" as web audio).
The problem with exclusively using the web audio API is that the entire track must be loaded into memory before playing it, whereas HTML5 loads progressively. So we use both to balance the techniques.
In prior versions of the library, we'd load the track via Web Audio in parallel with the HTML5 playback and make the switch mid-track, so even when it does blip it's far less noticeable. I'm considering adding that to the new version.
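Roughly, the handoff looks something like this (a minimal sketch, not the library's actual API; the URL and function name are made up):

```ts
// Sketch of the HTML5 -> Web Audio handoff. The <audio> element starts
// playing progressively right away while the full track is decoded in
// parallel; playback is then handed off to a Web Audio buffer mid-track.
const ctx = new AudioContext();
const el = new Audio('/tracks/02.mp3'); // hypothetical URL
void el.play();

async function switchToWebAudio(url: string): Promise<AudioBufferSourceNode> {
  const res = await fetch(url);
  const buffer = await ctx.decodeAudioData(await res.arrayBuffer());

  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(ctx.destination);

  // Start the buffer at the element's current position, then silence the
  // element. Any blip at this seam is what the parallel loading minimizes.
  source.start(0, el.currentTime);
  el.pause();
  return source;
}

void switchToWebAudio('/tracks/02.mp3');
```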
Another alternative is building a custom buffer using Range requests and driving playback exclusively via the Web Audio API. But obviously that is a far more complex undertaking (and requires the server to support Range requests). I'm open to implementing it, though.
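For the curious, the fetching side of that idea might look like the sketch below; the genuinely hard parts (incrementally decoding each chunk and scheduling it gaplessly through the Web Audio API) are left out:

```ts
// Rough sketch of the Range-request idea (not implemented): pull the file
// down in byte ranges so playback could start before the whole track has
// been downloaded. Chunk size is arbitrary.
const CHUNK = 256 * 1024;

async function* fetchRanges(url: string): AsyncGenerator<ArrayBuffer> {
  let offset = 0;
  for (;;) {
    const res = await fetch(url, {
      headers: { Range: `bytes=${offset}-${offset + CHUNK - 1}` },
    });
    // 206 Partial Content means the server honored the Range header.
    if (res.status !== 206) throw new Error('server does not support Range requests');

    const chunk = await res.arrayBuffer();
    if (chunk.byteLength === 0) return;
    yield chunk;

    offset += chunk.byteLength;
    const total = res.headers.get('Content-Range')?.split('/')[1];
    if (total && offset >= Number(total)) return;
  }
}
```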
Gapless 5 was actually the precursor to this library over a decade ago, so Rego deserves full credit. They built the first example of gapless playback on the web and I took inspiration from their techniques.
Gapless 5 has a built-in UI and style. Our library is headless: you bring your own UI and controls. It just depends on what your use case is.
RSC by design does not ship everything to the client. That's one of its basic premises. It ships markup that gets composed with client interactivity where needed, but you can shed a lot of the code required to curate that markup.
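As a rough illustration of that split (a sketch assuming a Next.js-style app directory; the file, component, and endpoint names are made up), only the small interactive piece ends up in the client bundle while the data fetching and markup stay on the server:

```tsx
// app/albums/page.tsx -- a server component by default. It does the data
// work and ships only markup; PlayButton is the only code the browser downloads.
import { PlayButton } from './PlayButton';

export default async function Albums() {
  const albums: { id: string; title: string }[] = await fetch(
    'https://api.example.com/albums', // hypothetical endpoint
  ).then((r) => r.json());

  return (
    <ul>
      {albums.map((a) => (
        <li key={a.id}>
          {a.title} <PlayButton id={a.id} />
        </li>
      ))}
    </ul>
  );
}

// --- app/albums/PlayButton.tsx (separate file; the directive must come first) ---
'use client';

export function PlayButton({ id }: { id: string }) {
  return <button onClick={() => console.log('play', id)}>Play</button>;
}
```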
I obviously meant traditional React components, not RSC. RSC can eliminate some client code, but they can be very awkward to use in practice, and the lines between server and client get blurry really fast. The mental model is difficult for many to fully grok. I say this as someone who has led engineering teams with folks of varying skill levels. RSCs are not worth the extra complexity and mental overhead they bring.
This is pretty fascinating and comes with some complicated AI-world incentives that I've been ruminating on lately. The better you document your work, the stronger contracts you define, the easier it is for someone to clone your work. I wouldn't be surprised if we end up seeing open source commercial work bend towards the SQLite model (open core, private tests). There's no way Cloudflare could have pulled this off without Next's very own tests.
Speaking more about the framework itself, the only real conclusion I have here is that I feel server components are a misunderstood and under-utilized pattern, and any attempt to simplify their DX is a win in my book.
Next is very complex, largely because it has grown incrementally while staying somewhat backwards compatible. A framework that starts from the current API surface and grows from there can be more malleable and make some tough decisions at the outset.
Crazy to see it's already being run on a .gov domain[0]. TTFGOV as a new adoption metric?
> The better you document your work, the stronger contracts you define, the easier it is for someone to clone your work.
Well said; this is my thinking as well. One person or organization can do the hard work of testing multiple approaches to the API, establishing and revising best practices, and developing an ecosystem. Then once things are fairly stable and well-understood, another person can just yoink it.
I have little empathy for Vercel, and here they're kind of hoist by their own petard, having induced frustration in people who don't use their hosting; but I'm concerned about how smaller-scale projects (including copyleft ones) will be laundered and extinguished.
> Then once things are fairly stable and well-understood, another person can just yoink it.
That transparency & availability for community contributions or forks is the point of open-source.
If you're only using open-source as marketing because you're bad at marketing, then you should probably go closed source & find a non-technical business partner.
Whoever "yoinks" the package runs into the same problem because they now have to build credibility somehow to actually profit from it.
Established corporations will be doing the yoinking, with pre-existing credibility. There's a huge incentive to offer these copied services for cents on the dollar as a way to kill the competition.
It'll be interesting to see if this happens at a service level too. Like how lots of companies offer an S3-compatible API, will companies start offering similar services and building a compatibility layer over the top as an easy way for customers to transition? You could use the existing service as a test suite to check that your compatibility API behaves the same as the original product.
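As a toy sketch of that (using the AWS SDK for JavaScript; the compatibility endpoint and bucket are placeholders, and a real harness would exercise far more operations and edge cases):

```ts
// Use the original service as the oracle: issue the same request to real S3
// and to the compatibility layer, then diff the results.
import { S3Client, ListObjectsV2Command } from '@aws-sdk/client-s3';

const real = new S3Client({ region: 'us-east-1' });
const clone = new S3Client({
  region: 'us-east-1',
  endpoint: 'https://s3-compatible.example.com', // hypothetical compatibility layer
});

async function compareListObjects(bucket: string): Promise<boolean> {
  const params = { Bucket: bucket };
  const [a, b] = await Promise.all([
    real.send(new ListObjectsV2Command(params)),
    clone.send(new ListObjectsV2Command(params)),
  ]);

  // Compare the object keys returned by each service.
  const keys = (r: typeof a) => (r.Contents ?? []).map((o) => o.Key).sort();
  return JSON.stringify(keys(a)) === JSON.stringify(keys(b));
}
```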
> There's no way Cloudflare could have pulled this off without next's very own tests.
I'm very unconvinced. History has shown us very complex systems being reverse engineered without access to the source code. With access to the source code, coupled with the rapid iteration of AI, I don't see any real moat here; at best a slight delay.
There was a recent post on here where the creator of Ladybird (Andreas Kling) translated a chunk of his novel browser from C++ to Rust in two weeks -- a feat he estimated would take him months: https://ladybird.org/posts/adopting-rust/
I, in my own way, have discovered that recent versions of Claude are extremely (as in, super-humanly) good at rewriting or porting. Apparently, if recently released coding agents have a predefined target and a good test suite, you can basically tell them that you want X (a well-defined target with a good suite of tests) written in Y (the language/framework you want X written in but it isn't) -- and a week or two later you have a working version.
I have spent the last month wrapping my head around the idea that there is a class of tasks in software engineering that is now solved for not very much money at all. More or less every single aspirational idea I have ever had over the last 20 years or so, I have begun embarking on within the last two months.
I am curious, have you attempted to do this to any binary packed with commercial obfuscation/"virtualization" schemes (e.g. Oreans' Themida/Code Virtualizer and VMProtect)?
No, I would need to find a binary to test on. I suspect it would produce horrible code at the decompiler layer but ultimately I would expect that function signatures are still relatively clean?
It's scary -- once you get the differential testing harness set up, it seems to be just a matter of time/tokens for it to stubbornly work through it.
Source code is one thing; tests covering the codebase are another.
And if you just copy the source code or translate it one-to-one into a new language, rather than make a behavioral copy, there will be copyright issues.
The tests are absolutely essential, otherwise there's no signal to guide the LLM towards correct behavior and hallucinations accumulate until any hope of forward progress collapses.
> I wouldn't be surprised if we end up seeing open source commercial work bend towards the SQLite model (open core, private tests).
Wouldn't this just mean that the actual open source is the tests? Or the spec? Or... the artifact which acts as the seed for the program, whatever that ends up being?
I'm not sure about this. LLMs can extract both documentation and tests from bare source code. That said I think you're correct that having an existing quality test suite to run against is a huge help.
Yup! We’re just one link in the chain (bands, tapers, archivists, & the listeners), but I appreciate the sentiment. Alec and I have been running Relisten for over a decade and we’ve put a lot of work into it these past few years.
Unfortunately even if you pick nominally-equal-width glyphs, on the web you can still get screwed over by font substitution/fallback done by the browser.
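If you want to see whether it's biting you, a quick and purely illustrative check is to measure the rendered widths and compare:

```ts
// Measure two glyphs that are nominally the same width and see whether the
// browser's font fallback made them differ. The font stack and the Braille
// characters are just examples of fallback-prone picks.
function widthsMatch(a: string, b: string, font = '16px monospace'): boolean {
  const ctx = document.createElement('canvas').getContext('2d')!;
  ctx.font = font;
  return Math.abs(ctx.measureText(a).width - ctx.measureText(b).width) < 0.01;
}

// Braille blank vs. full cell -- often pulled from different fallback fonts.
console.log(widthsMatch('\u2800', '\u28FF'));
```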
I agree with you that we should care more about resource usage, but it's a false comparison. Backend devs control where their code runs, frontend devs don't.
You can make more precise decisions when you have complete control over the environment. When you don't, you have to make trade-offs. In this case, the trade-off is accepting higher RAM usage in exchange for universality (Electron and JavaScript). It doesn't seem to have slowed Discord's adoption rate.
Even if they built their desktop apps with native code and UI, they'd have to build a JS website in parallel.