Posted by zdw 17 hours ago
I spent a good part of my career working in image processing.
That first image is pretty much exactly what a raw Bayer format looks like, without any color information. I find it gets even more interesting if we add the RGB colors and use non-square pixels.
Is the output produced by the sensor RGB or a single value per pixel?
In front of the sensor is a Bayer filter, which results in each physical pixel seeing illumination filtered to red, green, or blue.
From there, software on board the camera or in your RAW converter interpolates to create RGB values at each pixel. For example, if the local pixel is R-filtered, its G and B values are interpolated from nearby pixels with those filters.
https://en.wikipedia.org/wiki/Bayer_filter
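That interpolation step (demosaicing) can be sketched in a few lines. This is a toy bilinear version assuming the common RGGB tiling; real converters use edge-aware algorithms rather than plain averaging:

```python
import numpy as np

def conv3x3(img, k):
    """3x3 correlation with zero padding (kernel here is symmetric)."""
    p = np.pad(img, 1)
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += k[dy, dx] * p[dy:dy + h, dx:dx + w]
    return out

def demosaic_bilinear(raw):
    """Bilinear demosaic of a single-channel Bayer mosaic.

    Illustration only -- cameras and RAW converters ship far more
    sophisticated (edge-directed) interpolation than this."""
    h, w = raw.shape
    y, x = np.mgrid[0:h, 0:w]
    masks = {  # which color each photosite sees, assuming RGGB layout
        "R": (y % 2 == 0) & (x % 2 == 0),
        "G": (y % 2) != (x % 2),
        "B": (y % 2 == 1) & (x % 2 == 1),
    }
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5],
                       [0.25, 0.5, 0.25]])
    out = np.zeros((h, w, 3))
    for c, name in enumerate("RGB"):
        samples = np.where(masks[name], raw, 0.0)
        weights = masks[name].astype(float)
        # Normalized convolution: weighted average of the known
        # samples of this color around each pixel.
        out[..., c] = conv3x3(samples, kernel) / conv3x3(weights, kernel)
    return out
```

A uniform gray scene comes back uniform gray in all three channels, which is a quick sanity check that the per-channel averaging is doing what it should.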
There are alternatives such as what Fuji does with its X-trans sensor filter.
https://en.wikipedia.org/wiki/Fujifilm_X-Trans_sensor
Another alternative is Foveon (owned by Sigma now), which makes full-color pixel sensors, but they have not kept up with the state of the art.
https://en.wikipedia.org/wiki/Foveon_X3_sensor
This is also why Leica's B&W-sensor cameras have apparently higher sharpness and ISO sensitivity than the related color-sensor models: there is no filter in front of the sensor and no software interpolation happening.
https://en.wikipedia.org/wiki/Pixel_shift
EDIT: Sigma also has "Foveon" sensors that do not have the filter and instead stack multiple photosites (for different wavelengths) at each pixel.
Works great. Most astro shots are taken using a monochrome sensor and a filter wheel.
> filters are something like quantum dots that can be turned on/off
If anyone has this tech, plz let me know! Maybe an etalon?
https://en.wikipedia.org/wiki/Fabry%E2%80%93P%C3%A9rot_inter...
I have no idea, it was my first thought when I thought of modern color filters.
AKA imagine a camera with R/G/B filters being quickly rotated out for three exposures; then imagine it again, but with the technology integrated right into the sensor (and, ideally, with the sensor and switching mechanism fast enough to read out with a rolling shutter competitive with modern ILCs).
Edit: or maybe it does work? I've watched at least one movie on a DLP-type video projector with sequential colour and not noticed colour fringing. But still photos are much more demanding here.
R G B
B R G
G B R
Each RGB pixel would be a 2x2 grid of

```
G R
B G
```
So G appears twice as often as the other colors (this is mostly the same for both screen and sensor technology).
There are different ways to do the color filter layouts for screens and sensors (Fuji X-Trans has a different layout, for example).
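The 2:1 green weighting is easy to verify by tiling the 2x2 cell and counting filter colors. A small sketch (RGGB layout assumed here; GRBG and the other Bayer variants give the same ratio):

```python
import numpy as np
from collections import Counter

# Tile the 2x2 Bayer cell across an 8x8 sensor and count filter colors.
cell = np.array([["R", "G"],
                 ["G", "B"]])
mosaic = np.tile(cell, (4, 4))  # 8x8 grid of photosites
counts = Counter(mosaic.ravel())
print(counts)  # G: 32, R: 16, B: 16 -- twice as many green sites
```

The extra green sites mirror human vision: our eyes are most sensitive to luminance detail in the green part of the spectrum, so green doubles as the luminance channel.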
G G R R
G G R R
B B G G
B B G G
[0]: https://en.wikipedia.org/wiki/Bayer_filter

R G
G B
Then at a later stage the image is green because "there are twice as many green pixels in the filter matrix".

Generally we shoot "flat" (there are so many caveats to this, but I don't feel like getting bogged down in all of it; if you plan on getting down and dirty with colors and really grading, you generally shoot flat). The image that we hand over to the DIT/editor can be borderline grayscale in appearance. The colors are so muted and the dynamic range so wide that you basically have a highly muted image. The reason for this is that you then have the freedom to "push" the color and look in almost any direction, whereas if you have a very saturated, high-contrast image, you are more "locked" into that look. This matters more and more when you are using a compressed codec rather than something with an incredibly high bitrate or a raw codec, which is a whole other world that I'm also doing a bit of a disservice by oversimplifying.
Though this being HN it is incredibly likely I am telling few to no people anything new here lol
It's sort of the opposite of what's going on in photography, where you have a dedicated "raw" format with linear readings from the sensor. Without these formats, someone would probably have invented "log JPEG" or something similar to preserve more data in the highlights and shadows.
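The "log" idea can be illustrated with a toy transfer curve. This is a generic logarithmic curve, not any real camera's formula (ARRI Log C, Sony S-Log, etc. all use tuned constants and a linear toe), but it shows the point: equal steps in code value cover much smaller linear steps in the shadows, so more of the limited bit depth is spent where detail would otherwise be crushed.

```python
import math

BLACK = 0.001  # assumed noise floor; real curves pick this carefully

def log_encode(linear):
    """Toy log curve mapping linear light [BLACK, 1] onto [0, 1]."""
    return math.log(max(linear, BLACK) / BLACK) / math.log(1.0 / BLACK)

def log_decode(code):
    """Inverse of log_encode: code value back to linear light."""
    return BLACK * (1.0 / BLACK) ** code

# Half the code range is spent on the darkest scene values:
mid = log_decode(0.5)
print(f"{mid:.4f}")  # ~0.0316 -- codes 0..0.5 cover only ~3% of linear range
```

With an 8-bit compressed codec, a flat/log image like this keeps shadow and highlight detail recoverable in the grade; the same scene stored linearly would have posterized shadows long before delivery.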