
> just burn all the things and start over

Wrap all of this with an "IN MY OPINION"...

That would make things worse, because we'd make the same mistakes again. I've been on many start-over projects (Xeon Phi, for example, threw out the P4 pipeline and went back to the Pentium U/V). It doesn't work. You know what the most robust project I've worked on is? The instruction decoder on Intel CPUs. The validation tests go back to the late 1980s!

You make progress by building on top and fixing your mistakes because there literally IS NO OTHER WAY.

Go read about hemoglobin: it's one of the oldest genes in the genome, used by everything that uses blood to transport oxygen, and it is a GIGANTIC gene, full of redundancies. Essentially a billion years of evolution accreted one of the most robust and useful genes in our body, and there's nothing elegant about it.

I think that's where we are headed. Large systems that bulge with heft but contain so much redundant checking code that they become statistically more robust.



It really pisses me off when devs start ripping out asserts and tests that fail, with the justification "but they have been failing for a while, so clearly they aren't needed."

How about... no. Review the code and determine that the undefined behavior is understood well enough that we can accept the bad inputs. Those asserts & tests were created for a reason and need to be maintained, not just removed because no one can spot the subtle failure modes.
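
To make that concrete, a hypothetical sketch in C (the packet format and names are invented for illustration):

    #include <assert.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical packet parser: the length field must fit in the buffer. */
    int parse_packet(const uint8_t *buf, size_t buf_len)
    {
        assert(buf != NULL);
        assert(buf_len >= 4); /* 4-byte header: type, flags, 16-bit length */

        size_t payload_len = ((size_t)buf[2] << 8) | buf[3];

        /* If this assert "has been failing for a while", that is information:
         * some peer is sending lengths bigger than the buffer. Deleting it
         * turns a loud failure into a silent out-of-bounds read. */
        assert(payload_len <= buf_len - 4);

        /* ... actual parsing elided ... */
        return 0;
    }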


Yes, though fragile tests can be a sign that simplification might benefit your system.

Sometimes it's useful to just turn flaky failures into solid failures.
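
For example (illustrative C, not from any real codebase): promote the intermittent warning into a hard stop, so the run dies where the invariant broke instead of somewhere downstream.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Stand-in checksum, just for the example. */
    static uint32_t checksum(const uint8_t *buf, size_t len)
    {
        uint32_t sum = 0;
        for (size_t i = 0; i < len; i++)
            sum = sum * 31 + buf[i];
        return sum;
    }

    static void verify(const uint8_t *buf, size_t len, uint32_t expected)
    {
        if (checksum(buf, len) != expected) {
            /* The flaky version logged a warning and carried on, so the
             * corruption only surfaced occasionally, far from its cause. */
            fprintf(stderr, "checksum mismatch\n");
            abort(); /* solid failure, right where the invariant broke */
        }
    }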


> You make progress by building on top and fixing your mistakes because there literally IS NO OTHER WAY.

As long as you are talking about knowledge, not artefacts. There is indeed no choice but to accrete, organise, and correct knowledge over time, because anything you forget is something you might get wrong all over again.

Artefacts are different. It often makes sense to rebuild some of them from scratch, using more recent knowledge. We rarely do that, because short-term considerations usually win out (case in point: Qwerty).

> I think that's where we are headed. Large systems that bulge with heft but contain so many redundant checking code that they become statistically more robust.

Only if we give up any hope of improving performance, energy consumption, or die area. Right now the biggest gains can be found by removing cruft. See Vulkan for instance.


Vulkan is a good example indeed.

Most studios end up putting middleware on top of it to reduce Vulkan boilerplate to a more manageable level, which ironically makes some Vulkan code bases run slower than OpenGL AZDO, due to misunderstanding how to do the low-level work properly.


Vulkan is both a good and bad example.

Vulkan tries to remove the "bloat" of the driver by moving it into the engine (or the middleware the engine uses), which, yes, reimplements a pretty sizable chunk of what the driver used to do.

But it exposes the API in a way that requires domain-specific knowledge of how modern GPUs work, which requires, frankly, smarter engine developers. They need to stop only thinking in the ways OGL/D3D taught them to think, and need to also think like a driver developer, or possibly even a compiler developer.

OpenGL was designed wrong because no one knew what modern GPUs would eventually look like, so it tried to solve the problem at the wrong layer. Early fixed-function hardware worked pretty much the way OpenGL worked in the beginning; no one realized GPUs would eventually become, basically, massive highly parallel DSP-esque math coprocessors that are more complex than the systems that host them. OpenGL became a mess because newer styles of hardware kept getting bolted onto it (VBOs/IBOs/VAOs, the eventual move to unified buffers, compute shaders, fixed-function geometry then non-fixed-function geometry shaders, ubershaders, and the move from direct to deferred back to direct, etc.).
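
To give a sense of how explicit Vulkan is by comparison, a minimal sketch (real code would go on to pick a physical device, create a logical device, queues, a swapchain, command pools, sync primitives, render passes, pipelines...):

    #include <vulkan/vulkan.h>
    #include <stdio.h>

    int main(void)
    {
        /* Nothing is implicit in Vulkan: even "give me an API handle"
         * is a pair of explicitly filled structs. */
        VkApplicationInfo app = {
            .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
            .pApplicationName = "sketch",
            .apiVersion = VK_API_VERSION_1_0,
        };
        VkInstanceCreateInfo ci = {
            .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
            .pApplicationInfo = &app,
        };
        VkInstance instance;
        if (vkCreateInstance(&ci, NULL, &instance) != VK_SUCCESS) {
            fprintf(stderr, "vkCreateInstance failed\n");
            return 1;
        }
        /* ... everything the old driver stack did behind a simple
         * glClear() now lives in your engine ... */
        vkDestroyInstance(instance, NULL);
        return 0;
    }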


> They need to stop only thinking in the ways OGL/D3D taught them to think, and need to also think like a driver developer, or possibly even a compiler developer.

Which is exactly the opposite of how anyone who just wants to draw something wants to think.

Also, Vulkan is on its merry path to an endless list of extensions, so it will eventually match OpenGL's complexity in choosing which code paths to take, with the required cleverness of having to be a graphics programmer, driver developer, and compiler developer at the same time.

No wonder anyone who wants to stay sane picks up a middleware engine instead.


Vulkan was never meant to be directly used by people who “just want to draw stuff”. It was meant to give engine developers the tools required to squeeze more performance out of the GPUs.

This is a case of working as designed. The people trying to directly use Vulkan in their games without any middleware layer are just generally wrong here.

Regarding extensions, this is just what happens when you specify something that keeps evolving - there’s no getting around this. What you can do to minimise the complexity is decide to require certain extensions once they’ve been around for long enough. That’s what everyone does.
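
Concretely, something like this (a sketch; error handling elided, and VK_KHR_get_physical_device_properties2 is just a stand-in for whatever extension you've decided to hard-require):

    #include <vulkan/vulkan.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Return 1 if the given instance extension is available. */
    static int has_instance_ext(const char *name)
    {
        uint32_t count = 0;
        vkEnumerateInstanceExtensionProperties(NULL, &count, NULL);
        VkExtensionProperties *props = malloc(count * sizeof *props);
        vkEnumerateInstanceExtensionProperties(NULL, &count, props);
        int found = 0;
        for (uint32_t i = 0; i < count; i++)
            if (strcmp(props[i].extensionName, name) == 0)
                found = 1;
        free(props);
        return found;
    }

    int main(void)
    {
        /* Decide once which extensions are old enough to require outright. */
        if (!has_instance_ext(VK_KHR_GET_PHYSICAL_DEVICE_PROPERTIES_2_EXTENSION_NAME)) {
            fprintf(stderr, "required extension missing, refusing to start\n");
            return 1;
        }
        return 0;
    }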


It would help if Khronos would promote an API for people who “just want to draw stuff”, given that OpenGL 5.0 will most likely never happen and those people don't want to be stuck with OpenGL 4.6 forever.

That is not what everyone does, because Vulkan gets new extension updates almost every week.


> It would help if Khronos would promote an API for people who “just want to draw stuff”

Would it?

What you really want is a stable API to talk to the hardware. It doesn't really matter if the API is nigh unusable, as long as it is stable. Only when such stability is reached, can we reliably build stuff on top of it. Including a "just draw stuff" engine.

We could for instance re-implement Flash on top of Vulkan. Such a thing wouldn't need a standard body to get done and be usable by a wide range of people. (Though in this particular case, we'd likely have some standard body involved anyway, since there's already so much Flash code out there.)


So where do OpenGL developers move to, when 4.6 moves into "this legacy thing we would like to drop"?


You implement OpenGL as a middleware that speaks Vulkan.

Basically, ANGLE it, but for desktop OpenGL instead of GLES.

As an aside, Google is adding a Vulkan backend for ANGLE, and Microsoft seems to be adding a DX12 (which is basically DXVulkan) backend (to match the DX11 backend they gave Google) at some point in the future.

So, GLES, in its entirety, is now a community supported, open source, Vulkan middleware. No reason why we can't do that with desktop GL too.
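
To show the shape of such a shim, a toy sketch (nothing like ANGLE's actual architecture; the my_gl* names and ctx plumbing are invented): each GL entry point shadows state and replays it as Vulkan commands.

    #include <vulkan/vulkan.h>

    static struct {
        VkCommandBuffer cmd;     /* current command buffer, set up elsewhere */
        VkImage backbuffer;      /* swapchain image, set up elsewhere */
        VkClearColorValue clear; /* shadow of glClearColor state */
    } ctx;

    void my_glClearColor(float r, float g, float b, float a)
    {
        ctx.clear = (VkClearColorValue){ .float32 = { r, g, b, a } };
    }

    void my_glClear(void)
    {
        VkImageSubresourceRange range = {
            .aspectMask = VK_IMAGE_ASPECT_COLOR_BIT,
            .levelCount = 1,
            .layerCount = 1,
        };
        /* Assumes the image was already transitioned to TRANSFER_DST_OPTIMAL. */
        vkCmdClearColorImage(ctx.cmd, ctx.backbuffer,
                             VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL,
                             &ctx.clear, 1, &range);
    }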


So 10 years from now, those who don't want to be a mix of graphics developer, driver author, and compiler designer have to keep using a frozen API from 2018, without any access to whatever has changed on their computers during that decade.

All because providing something like MetalKit or DirectXTK is too much to ask of Khronos and LunarG.


But should it change? There have been issues where updating OpenGL support in drivers broke earlier apps due to accidental changes in how existing features were implemented.

Vulkan and DX12 are far less likely to break existing apps in the future, due to having far fewer core features.

It makes no sense to have what is essentially an entire legacy middleware in the driver when it no longer represents modern hardware.

Unlike GLES, OpenGL basically can never deprecate features, and D3D9 support will never, truly, die. It's a lot easier to just package a universal shim into existing legacy apps than it is to keep mangling drivers over the issue.


> It makes no sense to have what is essentially an entire legacy middleware in the driver when it no longer represents modern hardware.

Exactly, but it is the only API that Khronos is offering for those that don't want to be Vulkan experts.

Which leaves middleware engines, something totally unrelated to Khronos APIs, as the only future proof path for accessing GPU features on modern hardware.

As for drivers breaking down: the main reason Vulkan is now compulsory on Android as of version 10 is that, while it was optional, the few OEMs that bothered to support it did a very lousy job, so Google hopes that by making it compulsory and part of the Android CTS, the situation will improve.


> Vulkan and DX12 are far less likely to break existing apps in the future, due to having far fewer core features.

This is questionable. Vulkan by definition has basically no error checking in the driver, and while developers are supposed to use the validation layer, they may not do so, and even if they do, there are certainly plenty of incorrect things an application can do that won't be caught by validation.

Incorrect programs may still happen to run correctly on existing drivers, but then fail with a driver update that happens to change the undefined behavior.
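
For what it's worth, turning the checks on during development is one struct field away (VK_LAYER_KHRONOS_validation is the layer name shipped in the official SDK), though the point stands that nothing forces anyone to do it:

    #include <vulkan/vulkan.h>

    static const char *layers[] = { "VK_LAYER_KHRONOS_validation" };

    /* Debug-build instance creation: the validation layer catches a lot of
     * (but not all) incorrect usage that a release driver silently accepts
     * today and may break on tomorrow. */
    VkResult create_debug_instance(VkInstance *out)
    {
        VkInstanceCreateInfo ci = {
            .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
            .enabledLayerCount = 1,
            .ppEnabledLayerNames = layers,
        };
        return vkCreateInstance(&ci, NULL, out);
    }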


C compiler writers answered that conundrum a long time ago: "if you (even accidentally) rely on undefined behaviour, the warranty is void".

I don't necessarily agree, but as long as we have a way to avoid undefined behaviour (and at least in C there are ways to make pretty thorough checks), that stance works in practice.
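
For instance, with the sanitizers in gcc/clang, a classic bit of undefined behaviour is caught at runtime instead of silently "working" until a compiler update changes it:

    /* ub.c -- compile with: cc -fsanitize=undefined ub.c && ./a.out */
    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        int x = INT_MAX;
        /* Signed overflow is undefined behaviour in C. Without UBSan this
         * usually wraps quietly; with -fsanitize=undefined the runtime
         * reports it the moment it happens. */
        x = x + 1;
        printf("%d\n", x);
        return 0;
    }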


The checks that, according to most surveys and security reports, are used by a tiny part of the C community?

If it doesn't work for C regarding mainstream adoption, how come it will work for Vulkan?


It won't, unless only a fairly small elite ends up using Vulkan. And I believe that's indeed what will happen: Vulkan is low-level enough that, most likely, only engine devs and middleware devs will touch it.

You will of course have the occasional cowboy (which I personally am, though in a different domain), but that shouldn't matter that much in the grand scheme of things.

Now if you ask me, Vulkan is not enough. What we really want is a stable, usable hardware interface. Basically an ISA. The thing will have close to zero bugs, because hardware folks know how to properly test their designs. Undefined behaviour is likely unavoidable, but I believe it can be reduced to a reasonable minimum.

If AMD and NVidia started something like RISC-V, except for graphics cards, it would likely have a greater impact than RISC-V itself.


Perhaps WireGuard as a low-cruft replacement for OpenVPN is a good illustration of this.



> That LoadLibrary call is ugly but it makes the code work on old versions of Windows 95.

with the proper code comments for the hair (in the code or in the source repo) and regression tests it becomes possible to clean up and even rewrite.

floppy disks are no longer a thing and neither is windows 95. RAM & swap space are abundant, so that OOM may never happen any more IRL.

the app's whole architecture may have been carefully chosen for the hardware and simplistic compilers of another age.

is it okay never to rewrite a webpage where 50% of the codebase is IE6 hacks?

if it is code that must be maintained, then at some point the hair may need to be shaved; it simply cannot grow forever as the world moves on around it. the dogma of "never rewrite" is silly without further context.


>if it is code that must be maintained, then at some point the hair may need to be shaved; it simply cannot grow forever as the world moves on around it. the dogma of "never rewrite" is silly without further context.

"Shaving" seems more close to refactoring & partial rewrite than starting from scratch. Which is what we are talking about.

Still, you make good points. Maybe we should think of "restarting from scratch" the same way we do premature optimization: (1) don't do it; (2) don't do it too early; (3) if you must do it, measure first.

I think every developer dreams of rewriting from scratch because they hate how hacky and ugly their code is, probably due to rushed deadlines and because it was just supposed to be an MVP. They think about throwing it away, starting clean, and doing it the "proper way". This, IMO, is the wrong reason to start from scratch.

But if your technical debt is genuinely preventing your product from going where it needs to go or doing its job, that is the right reason. Again, you have to calculate whether the technical debt is greater than the cost of rewriting from scratch: re-opening old bugs and introducing new ones, and breaking your customers' workflow - they also invested a lot into your old program. How many times have you liked a program until "the damn devs went and ruined a good thing" by "fixing what wasn't broken"? Again, customers don't see or care about the code or technical debt. They just need to get their work done.


> April 6, 2000

Wow. That goes waaaaaaay back. It’s crazy to think that 1980 was to him at the time what 2000 is to us _right now_.


This is nothing new. Our parents all saw this already 50 years ago wherever they were working at the time.


Thank you for this, it's exactly what I'm getting at.



