
The problem is that this only works in one direction. You can calculate the stimulation of the photoreceptors for a given spectrum, but not the other way around. For example, the eye cannot distinguish purple light consisting of one specific wavelength from purple light mixed from red and blue wavelengths, because both produce the same stimulation of the receptors. So there is an infinite number of possible spectra for any given stimulation of the photoreceptors. All we can do is take the stimulation values (X, Y and Z) and convert from there to all kinds of color models and back.

Your approach would make a lot of sense for sensors that are full-spectrum analyzers, but the eye isn't one.
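To make the point concrete, here is a small sketch of metamerism. The Gaussian "cone" curves are illustrative stand-ins, not real L/M/S sensitivities: the eye projects an N-sample spectrum onto just three numbers, so adding anything from the null space of that projection yields a different spectrum with identical stimulation.

```python
import numpy as np

wavelengths = np.linspace(400, 700, 31)           # nm, coarse sampling

def gaussian(mu, sigma):
    return np.exp(-0.5 * ((wavelengths - mu) / sigma) ** 2)

# Toy receptor sensitivity curves (stand-ins for real cone responses):
S = np.stack([gaussian(565, 50),                  # "L" cone (toy)
              gaussian(535, 45),                  # "M" cone (toy)
              gaussian(445, 30)])                 # "S" cone (toy)

spectrum_a = gaussian(550, 20)                    # some light source

# Any vector in the null space of S can be added without changing the
# receptor response (ignoring physical non-negativity for this sketch):
_, _, Vt = np.linalg.svd(S)
null_vec = Vt[-1]                                 # direction S maps to ~0
spectrum_b = spectrum_a + 0.3 * null_vec          # a metamer of spectrum_a

resp_a, resp_b = S @ spectrum_a, S @ spectrum_b
print(np.allclose(resp_a, resp_b))                # same stimulation
print(np.allclose(spectrum_a, spectrum_b))        # but different spectra
```

Since S is 3×31, its null space is 28-dimensional, which is one way to see why "infinitely many spectra per stimulation" holds.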



You are talking about the inverse problem. https://en.wikipedia.org/wiki/Inverse_problem

Yes, because it's not a one-to-one map we cannot invert it uniquely. But that's OK: we can maintain a distribution over the possible frequencies consistent with the response. That's how it's done in other areas of mathematics where similar non-bijections arise.

Much thanks for answering though, because I suspect I am asking a very basic question.


You're correct, for what it's worth. I too have always wished that light was modeled based on physics, not on how humans happen to see.

Unfortunately the problem is data acquisition (cameras), and data creation (artists). You need lots of data to figure out e.g. what a certain metal's spectrum is, and it's not nearly as clear-cut as just painting RGB values onto a box in a game engine.

For better or worse, all our tools are set up to work in RGB, regardless of the color space you happen to be using. So your physics-based approach would have the monumental task of redefining how to create a texture in Photoshop, and how to specify a purple light in a game engine.

I think the path toward actual photorealism is to use ML models. They should be able to take ~any game engine's rendered frame as input, and output something closer to what you'd see in real life. And I'm pretty sure it can be done in realtime, especially if you're using a GAN based approach instead of diffusion models.


No need for ML. This already exists, the keyword to look for is "spectral rendering".

To add to the general thread: the diverse color spaces are there to answer questions that inherently involve how a typical human sees colors, so they _have_ to include biology, that's their whole point. For example:

- I want objects of a specific color (because of branding), how to communicate that to contractors, and how to check it?

- What's a correct processing chain from capturing an image to display/print, that guarantees that the image will look the same on all devices?


I see. Makes sense.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: