
Posted by zdw 17 hours ago

What an unprocessed photo looks like (maurycyz.com)
1789 points | 289 comments
ChrisMarshallNY 15 hours ago|
That's a cool walkthrough.

I spent a good part of my career working in image processing.

That first image is pretty much exactly what a raw Bayer format looks like, without any color information. I find it gets even more interesting if we add the RGB colors and use non-square pixels.

MetaMalone 9 hours ago||
I have always wondered how, at the lowest level, a camera captures and processes photos. Much appreciated post.
XCSme 16 hours ago||
I am confused by the color filter step.

Is the output produced by the sensor RGB or a single value per pixel?

steveBK123 15 hours ago||
In its most raw form, a camera sensor only sees illumination, not color.

In front of the sensor is a Bayer filter, so each physical photosite sees illumination filtered through a red, green, or blue filter.

From there, the software onboard the camera or in your RAW converter interpolates to create RGB values at each pixel. For example, if the local pixel is R-filtered, its G and B values are interpolated from nearby pixels behind those filters.

https://en.wikipedia.org/wiki/Bayer_filter
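
Here's a minimal sketch of what that interpolation can look like: a plain bilinear demosaic for an RGGB layout. This is just an illustration (the function name and kernels are mine); real converters use much smarter edge-aware algorithms.

    import numpy as np
    from scipy.ndimage import convolve

    def demosaic_bilinear(raw):
        # raw: 2D array of photosite values under an RGGB Bayer filter
        h, w = raw.shape
        ys, xs = np.mgrid[0:h, 0:w]
        r_mask = (ys % 2 == 0) & (xs % 2 == 0)
        b_mask = (ys % 2 == 1) & (xs % 2 == 1)
        g_mask = (ys + xs) % 2 == 1
        # Kernels averaging whichever neighbors actually carry each color
        k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
        k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
        rgb = np.zeros((h, w, 3))
        for c, mask, k in [(0, r_mask, k_rb), (1, g_mask, k_g), (2, b_mask, k_rb)]:
            rgb[..., c] = convolve(np.where(mask, raw, 0.0), k, mode='mirror')
        return rgb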

There are alternatives such as what Fuji does with its X-trans sensor filter.

https://en.wikipedia.org/wiki/Fujifilm_X-Trans_sensor

Another alternative is Foveon (now owned by Sigma), which makes sensors that capture full color at every pixel, but they have not kept up with the state of the art.

https://en.wikipedia.org/wiki/Foveon_X3_sensor

This is also why Leica's B&W-sensor cameras have higher apparent sharpness and ISO sensitivity than the related color-sensor models: there is no filter in front of the sensor and no software interpolation happening.

XCSme 15 hours ago|||
What about taking 3 photos while quickly changing the filter (e.g. filters are something like quantum dots that can be turned on/off)?
lidavidm 15 hours ago|||
Olympus and other cameras can do this with "pixel shift": it uses the stabilization mechanism to quickly move the sensor by 1 pixel.

https://en.wikipedia.org/wiki/Pixel_shift
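
A rough sketch of how those shifted frames could be merged, assuming four already-registered monochrome exposures under an RGGB filter; the merge function and the (dy, dx) shift convention are my own assumptions, and real implementations also handle motion and sub-pixel alignment:

    import numpy as np

    CFA = np.array([[0, 1],   # channel per 2x2 position: 0=R, 1=G, 2=B
                    [1, 2]])  # i.e. the RGGB cell

    def merge_pixel_shift(frames, shifts):
        # frames: four aligned captures; shifts: sensor offsets used,
        # e.g. (0, 0), (0, 1), (1, 0), (1, 1)
        h, w = frames[0].shape
        ys, xs = np.mgrid[0:h, 0:w]
        rgb = np.zeros((h, w, 3))
        count = np.zeros((h, w, 3))
        for frame, (dy, dx) in zip(frames, shifts):
            chan = CFA[(ys + dy) % 2, (xs + dx) % 2]
            for c in range(3):
                m = chan == c
                rgb[..., c][m] += frame[m]
                count[..., c][m] += 1
        return rgb / count  # green is sampled twice per site, so average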

EDIT: Sigma also has "Foveon" sensors that do not have the filter and instead stack multiple sensor layers (for different wavelengths) at each pixel.

https://en.wikipedia.org/wiki/Foveon_X3_sensor

itishappy 15 hours ago||||
> What about taking 3 photos while quickly changing the filter

Works great. Most astro shots are taken using a monochrome sensor and filter wheel.

> filters are something like quantum dots that can be turned on/off

If anyone has this tech, plz let me know! Maybe an etalon?

https://en.wikipedia.org/wiki/Fabry%E2%80%93P%C3%A9rot_inter...
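
For the monochrome-sensor-plus-filter-wheel approach mentioned above, the merge step itself is about as simple as it gets; a sketch assuming three already-registered exposures (the per-filter gains are hypothetical white-balance factors):

    import numpy as np

    def stack_filtered(r_frame, g_frame, b_frame, gains=(1.0, 1.0, 1.0)):
        # One registered monochrome exposure per wheel position,
        # scaled and stacked into an H x W x 3 color image
        frames = (r_frame, g_frame, b_frame)
        return np.stack([g * f for f, g in zip(frames, gains)], axis=-1)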

XCSme 15 hours ago||
> If anyone has this tech, plz let me know!

I have no idea; it was just the first thing that came to mind when I thought about modern color filters.

card_zero 15 hours ago||
That's how the earliest color photography worked. "Making color separations by reloading the camera and changing the filter between exposures was inconvenient", notes Wikipedia.
to11mtm 15 hours ago||
I think they are both asking more about "per-pixel color filters"; that is, something like the sensor's filter glass, but where the color separators could change (at least per-line) fast enough to get a proper readout of the color information.

AKA imagine a camera with R/G/B filters being quickly rotated out for 3 exposures, then imagine it again with the technology integrated right into the sensor (and, ideally, the sensor and switching mechanism fast enough to read out with a rolling shutter competitive with modern ILCs).

MarkusWandel 15 hours ago||||
Works for static images, but if there's motion, the "changing the filters" part is never fast enough; there will always be colour fringing somewhere.

Edit: or maybe it does work? I've watched at least one movie on a DLP-type video projector with sequential colour and not noticed colour fringing. But still photos are much more demanding here.

numpad0 14 hours ago|||
You can use sets of exotic mirrors and/or prisms to split the incoming image into separate R, G, and B beams aimed at three independent monochrome sensors, through the same single lens and all at once. That's what "3CCD" cameras and their predecessors did.
stefan_ 15 hours ago|||
B&W sensors are generally more sensitive than their color versions, as all filters (going back to signal processing..) attenuate the signal.
wtallis 15 hours ago|||
The sensor outputs a single value per pixel. A later processing step is needed to interpret that data given knowledge about the color filter (usually Bayer pattern) in front of the sensor.
i-am-gizm0 15 hours ago|||
The raw sensor output is a single value per photosite, each of which is behind a red, green, or blue color filter. So to get a usable image (where each pixel has a value for all three colors), we have to reconstruct the missing colors from some number of neighboring photosites. This is the "debayering" process.
ranger207 16 hours ago||
It's a single value per pixel, but each pixel has a different color filter in front of it, so it's effectively that each pixel is one of R, G, or B
XCSme 15 hours ago||
So, for a 3x3 image, the input data would be 9 values like:

   R G B
   B R G
   G B R

?
jeeyoungk 15 hours ago|||
If you want a "3x3 colored image", you would need a 6x6 grid of Bayer filter pixels.

Each RGB pixel would be a 2x2 grid of

    G R
    B G

So G appears twice as often as the other colors (this is mostly the same for both screen and sensor technology).

There are different ways to do the color filter layouts for screens and sensors (Fuji's X-Trans has a different layout, for example).
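
A quick way to picture that: tile the 2x2 cell into the 6x6 mosaic a 3x3 color image needs (a throwaway snippet, nothing camera-specific):

    import numpy as np

    cell = np.array([['G', 'R'],
                     ['B', 'G']])
    mosaic = np.tile(cell, (3, 3))  # 6x6 photosites -> 3x3 color pixels
    print(mosaic)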

Lanzaa 15 hours ago||||
This depends on the camera and the sensor's Bayer filter [0]. For example, Quad Bayer uses a 4x4 pattern like:

    G G R R
    G G R R
    B B G G
    B B G G
[0]: https://en.wikipedia.org/wiki/Bayer_filter
card_zero 15 hours ago||||
In the example ("let's color each pixel ...") the layout is:

  R G
  G B
Then at a later stage the image is green because "There are twice as many green pixels in the filter matrix".
nomel 15 hours ago||
And this is important because our perception is more sensitive to luminance changes than to color, and since our eyes are most sensitive to green, luminance effectively is too. So you get higher perceived spatial resolution by using more green [1]. This is also why JPEG stores its chroma (color-difference) channels at lower resolution than luma, and why modern OLEDs usually use a PenTile layout, with only green at full resolution [2].

[1] https://en.wikipedia.org/wiki/Bayer_filter#Explanation

[2] https://en.wikipedia.org/wiki/PenTile_matrix_family
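
For reference, the standard Rec. 709 luma weights make the green dominance concrete:

    # Rec. 709 luma: green carries about 71% of perceived brightness,
    # which is why Bayer filters and PenTile layouts favor green
    def luma_709(r, g, b):
        return 0.2126 * r + 0.7152 * g + 0.0722 * b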

card_zero 15 hours ago|||
Funny that subpixels and camera sensors aren't using the same layouts.
userbinator 15 hours ago|||
PenTile displays are acceptable for photos and videos, but they look really horrible displaying text and fine detail, almost like what you'd see on an old triad-shadow-mask colour CRT.
jacktang 9 hours ago||
I fed the original photo to Nano Banana Pro, and it recovered it well. It also explained how to recover it.
CosmicShadow 10 hours ago||
Interesting to see this whole thing shown outside of Astrophotography, sometimes I forget it's the same stuff!
exabrial 15 hours ago||
I love the look of the final product after the manual work (not the one for comparison). Just something very realistic and wholesome about it, not pumped to 10 via AI or Instagram filters.
ws404 11 hours ago||
Did you steal that tree from Charlie Brown?
excalibur 9 hours ago|
Surprised that nobody else commented on this, it is a very sad tree.
Forgeties79 15 hours ago||
For those who are curious, this is basically what we do when we color grade in video production but taken to its most extreme. Or rather, stripped down to the most fundamental level. Lots of ways to describe it.

Generally we shoot “flat” (there are so many caveats to this, but I don’t feel like getting bogged down in all of it; if you plan on getting down and dirty with colors and really grading, you generally shoot flat). The image we hand over to the DIT/editor can be borderline grayscale in appearance: the colors are muted and the dynamic range is wide. The reason for this is that you then have the freedom to “push” the color and look in almost any direction, whereas if you start with a very saturated, high-contrast image, you are more “locked” into that look. This matters more and more when you are using a compressed codec rather than something with an incredibly high bitrate or a raw codec, which is a whole other world that I’m also doing a bit of a disservice to by oversimplifying.

Though, this being HN, it's incredibly likely I'm telling few to no people anything new here lol

nospice 15 hours ago|
"Flat" is a bit of a misnomer in this context. It's not flat, it's actually a logarithmic ("log profile") representation of data computed by the camera to allow a wider dynamic range to be squeezed into traditional video formats.

It's sort of the opposite of what's going on with photography, where you have a dedicated "raw" format with linear readings from the sensor. Without these formats, someone would probably have invented "log JPEG" or something like that to preserve more data in highlights and in the shadows.
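
To make that concrete, here's a toy log curve (not any vendor's actual Log profile; the shape parameter is made up) showing how a wide linear range gets squeezed into [0, 1] with extra precision in the shadows:

    import numpy as np

    def log_encode(linear, a=5.0):
        # Small values get proportionally more of the output range
        # than highlights do
        return np.log1p(a * linear) / np.log1p(a)

    def log_decode(encoded, a=5.0):
        # Exact inverse: the curve itself loses nothing; loss only comes
        # from quantizing the encoded values into a video format
        return np.expm1(encoded * np.log1p(a)) / a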

Forgeties79 11 hours ago|||
I said “flat” because I didn’t feel like going into “log” and color profiles and such, but I’ll admit I’m leaning hard into oversimplification, because log, raw, etc. get messy when discussing profiles vs. codecs/compression/etc. In video we still call some codecs “raw,” but it’s not necessarily the same as how the term is used in photography. RED’s raw codec, for example, comes in various compression ratios (5:1 tends to be the sweet spot IME), and that really messes with the whole idea of what raw even is. It’s all quasi-technical and somewhat inconsistent.
DustinBrett 10 hours ago||
2 top HN posts in 1 day, maurycyz is on fire!
gruez 15 hours ago|
Honestly, I don't think the gamma normalization step really counts as "processing", any more than gzip decompression counts as "processing" for the purposes of a "this is what an unprocessed HTML file looks like" demo. At the end of the day, it's the same information, just encoded differently. A similar argument can be made for the debayering step. If you ignore these two steps, the "processing" that happens looks far less dramatic.
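
As a concrete illustration of that point, a typical gamma curve (here sRGB's, with its published constants) is exactly invertible, so no information is gained or lost, just re-encoded:

    import numpy as np

    def srgb_encode(linear):
        # Linear -> sRGB: a reversible re-encoding of the same data
        linear = np.clip(linear, 0.0, 1.0)
        return np.where(linear <= 0.0031308,
                        12.92 * linear,
                        1.055 * linear ** (1 / 2.4) - 0.055)

    def srgb_decode(encoded):
        # sRGB -> linear: undoes the curve exactly
        return np.where(encoded <= 0.04045,
                        encoded / 12.92,
                        ((encoded + 0.055) / 1.055) ** 2.4)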