Posted by zdw 15 hours ago

What an unprocessed photo looks like (maurycyz.com)
1621 points | 273 comments
mrheosuper 4 hours ago|
>Our perception of brightness is non-linear.

Apart from brightness, it's everything. Loudness, temperature, etc.
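
As a rough illustration (my own sketch, not from the article): the standard sRGB transfer curve is roughly what display pipelines apply to linear values, and it's why linear data shown directly looks far too dark.

    import numpy as np

    def linear_to_srgb(x):
        # Standard sRGB encoding curve; x is linear light in [0, 1].
        return np.where(x <= 0.0031308,
                        12.92 * x,
                        1.055 * np.power(x, 1.0 / 2.4) - 0.055)

    # A mid-grey card reflects ~18% of the light, but gets encoded
    # (and so displayed) at roughly 46% of full brightness.
    print(linear_to_srgb(np.array([0.18])))   # ~0.46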

eru 12 hours ago||
> As a result of this, if the linear data is displayed directly, it will appear much darker then it should be.

Then -> than? (In case the author is reading comments here.)

Biganon 2 hours ago|
The author makes this error every single time, in both articles by him I've read today. For some reason, as a person whose native language is not English, this particular error pisses me off so much.
eru 12 hours ago||
> There’s nothing that happens when you adjust the contrast or white balance in editing software that the camera hasn’t done under the hood. The edited image isn’t “faker” then the original: they are different renditions of the same data.

Almost, but not quite? The camera works with more data than what's present in the JPG your image editing software sees.

doodlesdev 12 hours ago|
You can always edit the RAW files from the camera, which essentially means working with the same data the camera chip had available to generate the JPEGs.
eru 8 hours ago||
Not quite. At the very least, the RAW file is a static file. Whereas your camera chip can make interactive decisions.

In any case, RAW files aren't even all that raw. First, they are digitised. They often apply de-noising, digital conditioning (to take care of hot and dead pixels), lens correction. Some cameras even apply some lossy compression.
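
For the hot/dead pixel part, here's a minimal sketch of the kind of thing a RAW pipeline might do; the threshold and the 2-pixel neighbourhood (to stay within one Bayer colour) are my own illustrative choices, not any particular camera's algorithm.

    import numpy as np

    def suppress_hot_pixels(raw, threshold=4.0):
        # Compare each photosite to the median of its same-colour
        # neighbours (2 pixels away in a Bayer mosaic) and replace
        # it if it's wildly brighter. Purely illustrative.
        out = raw.astype(np.float32)
        p = np.pad(out, 2, mode='reflect')
        h, w = raw.shape
        neigh = np.stack([p[2 + dy:2 + dy + h, 2 + dx:2 + dx + w]
                          for dy, dx in [(-2, 0), (2, 0), (0, -2), (0, 2)]])
        med = np.median(neigh, axis=0)
        hot = out > threshold * (med + 1.0)
        out[hot] = med[hot]
        return out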

emodendroket 14 hours ago||
This is actually really useful. A lot of people demand an "unprocessed" photo but don't understand what they're actually asking for.
Dylan16807 5 hours ago|
They probably do know what they're asking for, they're just using an ambiguous word.
Toutouxc 4 hours ago||
My mirrorless camera shoots in RAW. When someone asks me if a certain photo was “edited”, I honestly don’t know what to answer. The files went through a RAW development suite that applied a bewildering amount of maths to transform them into a sRGB image. Some of the maths had sliders attached to it and I have moved some of the sliders, but their default positions were just what the software thought was appropriate. The camera isn’t even set to produce a JPEG + RAW combo, so there is literally no reference.
uolmir 14 hours ago||
This is a great write up. It's also weirdly similar to a video I happened upon yesterday playing around with raw Hubble imagery: https://www.youtube.com/watch?v=1gBXSQCWdSI

He takes a few minutes to get to the punch line. Feel free to skip ahead to around 5:30.

MetaMalone 7 hours ago||
I have always wondered how, at the lowest level, a camera captures and processes photos. Much appreciated post.
jacktang 7 hours ago||
I fed the original photo to Nano Banana Pro, and it recovered it well. It also explained how to recover it.
ChrisMarshallNY 13 hours ago||
That's a cool walkthrough.

I spent a good part of my career working in image processing.

That first image is pretty much exactly what raw Bayer data looks like, without any color information. I find it gets even more interesting if we add the RGB colors and use non-square pixels.

CosmicShadow 9 hours ago||
Interesting to see this whole thing shown outside of astrophotography; sometimes I forget it's the same stuff!
XCSme 14 hours ago|
I am confused by the color filter step.

Is the output produced by the sensor RGB or a single value per pixel?

steveBK123 14 hours ago||
In their most raw form, camera sensors only see illumination, not color.

In front of the sensor is a Bayer filter, which results in each physical pixel seeing illumination filtered through R, G, or B.

From there, the software onboard the camera or in your RAW converter interpolates RGB values at each pixel. For example, if the local pixel is R-filtered, its G & B values are interpolated from nearby pixels with those filters.

https://en.wikipedia.org/wiki/Bayer_filter
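
A minimal sketch of that interpolation (naive bilinear demosaicing of an RGGB mosaic; real cameras and RAW converters use far fancier, edge-aware algorithms):

    import numpy as np
    from scipy.ndimage import convolve

    def demosaic_bilinear(raw):
        # raw: 2D mosaic, RGGB layout (R at even/even, B at odd/odd).
        # Returns an HxWx3 RGB image via simple neighbour averaging.
        raw = raw.astype(np.float32)
        h, w = raw.shape
        y, x = np.mgrid[0:h, 0:w]
        masks = [(y % 2 == 0) & (x % 2 == 0),   # R photosites
                 (y % 2) != (x % 2),            # G photosites
                 (y % 2 == 1) & (x % 2 == 1)]   # B photosites
        k_rb = np.array([[.25, .5, .25], [.5, 1., .5], [.25, .5, .25]])
        k_g = np.array([[0., .25, 0.], [.25, 1., .25], [0., .25, 0.]])
        rgb = np.zeros((h, w, 3), dtype=np.float32)
        for c, (mask, k) in enumerate(zip(masks, [k_rb, k_g, k_rb])):
            rgb[..., c] = convolve(raw * mask, k, mode='mirror')
        return rgb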

There are alternatives such as what Fuji does with its X-trans sensor filter.

https://en.wikipedia.org/wiki/Fujifilm_X-Trans_sensor

Another alternative is Foveon (owned by Sigma now) which makes full color pixel sensors but they have not kept up with state of the art.

https://en.wikipedia.org/wiki/Foveon_X3_sensor

This is also why Leica's B&W-sensor cameras have higher apparent sharpness & ISO sensitivity than the related color-sensor models: there is no filter in front and no software interpolation happening.

XCSme 14 hours ago|||
What about taking 3 photos while quickly changing the filter (e.g. filters are something like quantum dots that can be turned on/off)?
lidavidm 13 hours ago|||
Olympus and other cameras can do this with "pixel shift": they use the stabilization mechanism to quickly move the sensor by 1 pixel.

https://en.wikipedia.org/wiki/Pixel_shift
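
A minimal sketch of that idea, under the simplest possible assumptions (static scene, RGGB layout, exactly four one-pixel shifts, frames already registered to scene coordinates; real implementations are far more careful):

    import numpy as np

    def combine_pixel_shift(frames, shifts=((0, 0), (0, 1), (1, 0), (1, 1))):
        # frames[k]: HxW mosaic shot with the sensor shifted by shifts[k],
        # so each scene pixel has been measured through R, G, G and B
        # filters across the four frames -- no interpolation needed.
        h, w = frames[0].shape
        y, x = np.mgrid[0:h, 0:w]
        rgb = np.zeros((h, w, 3), dtype=np.float32)
        count = np.zeros((h, w, 3), dtype=np.float32)
        for frame, (dy, dx) in zip(frames, shifts):
            py, px = (y + dy) % 2, (x + dx) % 2          # filter parity
            chan = np.where((py == 0) & (px == 0), 0,    # R
                   np.where((py == 1) & (px == 1), 2,    # B
                            1))                          # G
            np.add.at(rgb, (y, x, chan), frame.astype(np.float32))
            np.add.at(count, (y, x, chan), 1.0)
        return rgb / count   # green is averaged over its two samples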

EDIT: Sigma also has "Foveon" sensors that do not have the filter and instead stack multiple sensors (for different wavelengths) at each pixel.

https://en.wikipedia.org/wiki/Foveon_X3_sensor

itishappy 13 hours ago||||
> What about taking 3 photos while quickly changing the filter

Works great. Most astro shots are taken using a monochrome sensor and filter wheel.
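
At its most basic that workflow is just three mono exposures stacked into one colour image (hypothetical file names below; a real astro stack would also dark-subtract, flat-field and align the frames first):

    import numpy as np

    # One exposure per filter-wheel position, same monochrome sensor.
    r, g, b = (np.load(f"{name}.npy") for name in ("red", "green", "blue"))
    rgb = np.stack([r, g, b], axis=-1).astype(np.float32)
    rgb /= rgb.max()   # crude normalisation so it can be displayed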

> filters are something like quantum dots that can be turned on/off

If anyone has this tech, plz let me know! Maybe an etalon?

https://en.wikipedia.org/wiki/Fabry%E2%80%93P%C3%A9rot_inter...

XCSme 13 hours ago||
> If anyone has this tech, plz let me know!

I have no idea, it was my first thought when I thought of modern color filters.

card_zero 13 hours ago||
That's how the earliest color photography worked. "Making color separations by reloading the camera and changing the filter between exposures was inconvenient", notes Wikipedia.
to11mtm 13 hours ago||
I think they're both asking more about 'per-pixel color filters'; that is, something like a sensor filter/glass where the color separators could change (at least 'per-line') fast enough to get a proper readout of the color information.

AKA imagine a camera with R/G/B filters being quickly rotated out for 3 exposures, then imagine it again but the technology is integrated right into the sensor (and, ideally, the sensor and switching mechanism is fast enough to read out with rolling shutter competitive with modern ILCs)

MarkusWandel 13 hours ago||||
Works for static images, but if there's motion the "changing the filters" part is never fast enough; there will always be colour fringing somewhere.

Edit: or maybe it does work? I've watched at least one movie on a DLP-type video projector with sequential colour and not noticed colour fringing. But still photos have much higher demands here.

numpad0 12 hours ago|||
You can use sets of exotic mirrors and/or prisms to split incoming images into separate RGB beams into three independent monochrome sensors, through the same singular lens and all at once. That's what "3CCD" cameras and their predecessors did.
stefan_ 13 hours ago|||
B&W sensors are generally more sensitive than their color versions, as all filters (going back to signal processing..) attenuate the signal.
wtallis 14 hours ago|||
The sensor outputs a single value per pixel. A later processing step is needed to interpret that data given knowledge about the color filter (usually Bayer pattern) in front of the sensor.
i-am-gizm0 13 hours ago|||
The raw sensor output is a single value per sensor pixel, each of which is behind a red, green, or blue color filter. So to get a usable image (where each pixel has a value for all three colors), we have to interpolate each pixel's missing colors from nearby sensor pixels. This is the "debayering" (demosaicing) process.
ranger207 14 hours ago||
It's a single value per pixel, but each pixel has a different color filter in front of it, so it's effectively that each pixel is one of R, G, or B
XCSme 14 hours ago||
So, for a 3x3 image, the input data would be 9 values like:

   R G B
   B R G
   G B R

?
jeeyoungk 14 hours ago|||
If you want "3x3 colored image", you would need 6x6 of the bayer filter pixels.

Each RGB pixel would be 2x2 grid of

``` G R B G ```

So G appears twice as many as other colors (this is mostly the same for both the screen and sensor technology).

There are different ways to do the color filter layouts for screens and sensors (Fuji X-Trans have different layout, for example).
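
A tiny sketch of what that looks like tiled out (using the GRBG cell above; other sensors start the pattern on a different corner):

    import numpy as np

    cell = np.array([["G", "R"],
                     ["B", "G"]])        # one 2x2 Bayer cell (GRBG variant)
    mosaic = np.tile(cell, (3, 3))       # 6x6 photosites -> 3x3 RGB pixels
    print(mosaic)                        # G appears 18 times, R and B 9 each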

Lanzaa 13 hours ago||||
This depends on the camera and the sensor's Bayer filter [0]. For example, the quad Bayer uses a 4x4 layout like:

    G G R R
    G G R R
    B B G G
    B B G G
[0]: https://en.wikipedia.org/wiki/Bayer_filter
card_zero 14 hours ago||||
In the example ("let's color each pixel ...") the layout is:

  R G
  G B
Then at a later stage the image is green because "There are twice as many green pixels in the filter matrix".
nomel 13 hours ago||
And this is important because our perception is more sensitive to luminance changes than to color, and since our eyes are most sensitive to green, luminance is dominated by green. So, higher perceived spatial resolution by using more green [1]. This is also why JPEG stores the chroma (color) channels at lower resolution than luminance, and why modern OLEDs usually use a PenTile layout, with only green at full resolution [2].

[1] https://en.wikipedia.org/wiki/Bayer_filter#Explanation

[2] https://en.wikipedia.org/wiki/PenTile_matrix_family
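
A small sketch of how lopsided that weighting is: the Rec. 709 coefficients used for relative luminance in sRGB give green about 72% of the total (JPEG's actual Y' channel uses the older Rec. 601 weights, but green dominates there too):

    import numpy as np

    weights = np.array([0.2126, 0.7152, 0.0722])   # R, G, B (Rec. 709)

    def luma(rgb):
        # rgb: ...x3 array of linear RGB values
        return rgb @ weights

    print(luma(np.array([1.0, 0.0, 0.0])))   # pure red   -> ~0.21
    print(luma(np.array([0.0, 1.0, 0.0])))   # pure green -> ~0.72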

card_zero 13 hours ago|||
Funny that subpixels and camera sensors aren't using the same layouts.
userbinator 13 hours ago|||
Pentile displays are acceptable for photos and videos, but look really horrible displaying text and fine detail --- which looks almost like what you'd see on an old triad-shadow-mask colour CRT.