
Posted by zdw 12/28/2025

What an unprocessed photo looks like (maurycyz.com)
2510 points | 409 comments | page 4
srean 12/29/2025|
Bear with me for a possibly strange question, directed more towards chemists. Are there crystalline compounds with the formula X_2YZ, where X, Y, Z are three elements of roughly the same atomic size?

What I am curious about is the different symmetric arrangements chosen by such crystals and how they compare with the Bayer pattern. The analogy being that X becomes a site for green and the other two for red and blue.

eru 12/29/2025||
> There’s nothing that happens when you adjust the contrast or white balance in editing software that the camera hasn’t done under the hood. The edited image isn’t “faker” than the original: they are different renditions of the same data.

Almost, but not quite? The camera works with more data than what's present in the JPG your image editing software sees.

doodlesdev 12/29/2025|
You can always edit the RAW files from the camera, which essentially means working with the same data the camera chip had available to generate the JPEGs.
eru 12/29/2025||
Not quite. At the very least, the RAW file is a static file. Whereas your camera chip can make interactive decisions.

In any case, RAW files aren't even all that raw. First, they are digitised. They often have de-noising, digital conditioning (to take care of hot and dead pixels), and lens correction already applied. Some cameras even apply some lossy compression.
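
As a rough illustration of the "digital conditioning" part, here is a minimal sketch (not any camera's actual firmware) of hot-pixel cleanup: a known bad photosite is replaced with the median of its same-colour neighbours. The defect list, function name and numpy usage are assumptions for the example.

    import numpy as np

    def patch_hot_pixels(mosaic, bad_coords):
        """Replace known-bad photosites with the median of their same-colour
        neighbours. mosaic: (H, W) Bayer mosaic; bad_coords: list of (y, x)
        positions from a hypothetical factory defect map."""
        fixed = mosaic.copy()
        h, w = mosaic.shape
        for y, x in bad_coords:
            # In a Bayer mosaic, pixels two photosites away along a row or
            # column always carry the same colour filter as the bad pixel.
            candidates = [(y - 2, x), (y + 2, x), (y, x - 2), (y, x + 2)]
            neighbours = [mosaic[ny, nx] for ny, nx in candidates
                          if 0 <= ny < h and 0 <= nx < w]
            fixed[y, x] = np.median(neighbours)
        return fixed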

doodlesdev 12/30/2025||
In my experience with the NEF files generated by my Nikon D5500, these RAW files have no denoising at all, no lens correction, they keep the hot pixels, and the RAW compression is visually lossless (I keep 14 bits of color data).

Most cameras nowadays offer lossless RAW files; mine is entry level and a bit old already. I fix all of those things you cited through Darktable, which offers a scene-referred workflow.

Basically all cameras do offer compression for RAW files, but most times that's just lossless compression (i.e. no data is lost). Do you have any source to back your claims?

uolmir 12/28/2025||
This is a great write up. It's also weirdly similar to a video I happened upon yesterday playing around with raw Hubble imagery: https://www.youtube.com/watch?v=1gBXSQCWdSI

He takes a few minutes to get to the punch line. Feel free to skip ahead to around 5:30.

srean 12/29/2025||
Does anyone remember a blog post on how repeated sharpening and blurring results in reaction-diffusion Turing patterns? That blog also had an article on sub-pixel shift.

Trying frantically to remember and looking for it in my bookmarks, but failing miserably. If anyone remembers what blog I am talking about, please leave a link.

boobsbr 12/29/2025|
Maybe these?

https://patorjk.com/blog/2025/11/02/what-happens-if-you-blur...

https://patorjk.com/blog/2025/03/10/making-a-follow-up-to-su...

https://relativisticobserver.blogspot.com/2012/02/keeping-it...

srean 12/29/2025||
Hey, thanks a bunch. These were not the ones, though. The blog was more mathy and had a signal processing analysis and a fixed-point analysis of the repeated blur-and-sharpen phenomenon.

I had upvoted it on HN either as a post or a comment that had the link. Wish there were an easy way to search through one's own upvoted comments and posts.

Thanks again though for trying to help.

naths88 12/29/2025||
Fed it to Gemini 3 Pro and got this remark:

The "Squid" Note: You might notice a weird hidden text at the bottom of that webpage about a squid—ignore that, it's a "prompt injection" joke for AI bots! The relevant content is purely the image processing.

dep_b 12/29/2025||
I had similar experiences working with the RAW data APIs that appeared a few years ago in iOS. My photos were barely better than the stuff I would take with my old Nokia!

I have a lot of respect for how they manage to get pictures to look as good as they do on phones.

cartesius13 12/29/2025||
Highly recommend this CaptainDisillusion video, which covers how cameras process colors in a very entertaining way:

https://www.youtube.com/watch?v=aO3JgPUJ6iQ

ChrisMarshallNY 12/29/2025||
That's a cool walkthrough.

I spent a good part of my career working in image processing.

That first image is pretty much exactly what a raw Bayer format looks like, without any color information. I find it gets even more interesting if we add the RGB colors and use non-square pixels.

XCSme 12/28/2025||
I am confused by the color filter step.

Is the output produced by the sensor RGB or a single value per pixel?

steveBK123 12/28/2025||
In its rawest form, a camera sensor only sees illumination, not color.

In front of the sensor is a Bayer filter, which results in each physical pixel seeing illumination filtered through red, green, or blue.

From there, the software onboard the camera or in your RAW converter interpolates to create RGB values at each pixel. For example, if the local pixel is R-filtered, its G and B values are interpolated from nearby pixels with those filters.

https://en.wikipedia.org/wiki/Bayer_filter
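
As a rough illustration of that interpolation, here is a minimal bilinear demosaic sketch for an RGGB mosaic. Assumptions for the example: numpy and scipy are available, the mosaic has an R photosite at (0, 0), and border handling is left to scipy's defaults; real converters use far more sophisticated edge-aware algorithms.

    import numpy as np
    from scipy.ndimage import convolve

    def demosaic_bilinear(mosaic):
        """mosaic: (H, W) raw values behind an RGGB Bayer filter."""
        h, w = mosaic.shape
        # Masks marking which photosites carry each colour.
        r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
        b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
        g_mask = 1 - r_mask - b_mask
        # Bilinear kernels: each missing sample becomes the average of its
        # nearest same-colour neighbours.
        k_g  = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4
        k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4
        rgb = np.zeros((h, w, 3))
        rgb[..., 0] = convolve(mosaic * r_mask, k_rb)
        rgb[..., 1] = convolve(mosaic * g_mask, k_g)
        rgb[..., 2] = convolve(mosaic * b_mask, k_rb)
        return rgb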

There are alternatives such as what Fuji does with its X-trans sensor filter.

https://en.wikipedia.org/wiki/Fujifilm_X-Trans_sensor

Another alternative is Foveon (owned by Sigma now) which makes full color pixel sensors but they have not kept up with state of the art.

https://en.wikipedia.org/wiki/Foveon_X3_sensor

This is also why Leica's B&W-sensor cameras have higher apparent sharpness and ISO sensitivity than the related color-sensor models: there is no filter in front and no software interpolation happening.

stefan_ 12/29/2025|||
B&W sensors are generally more sensitive than their color versions, as all filters (going back to signal processing..) attenuate the signal.
XCSme 12/28/2025|||
What about taking 3 photos while quickly changing the filter (e.g. filters are something like quantum dots that can be turned on/off)?
lidavidm 12/29/2025|||
Olympus and other cameras can do this with "pixel shift": it uses the stabilization mechanism to quickly move the sensor by 1 pixel.

https://en.wikipedia.org/wiki/Pixel_shift
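
A toy sketch of how four exposures, each shifted by one photosite, could be merged so every output pixel gets a directly measured R, G and B. Assumptions: an RGGB mosaic, frames already aligned to the same scene coordinates, and numpy; the shift list and function name are hypothetical.

    import numpy as np

    CFA = np.array([[0, 1],    # 0 = R, 1 = G
                    [1, 2]])   # 2 = B  (RGGB tile)
    SHIFTS = [(0, 0), (0, 1), (1, 0), (1, 1)]  # sensor offset per exposure, in photosites

    def merge_pixel_shift(frames):
        """frames: four (H, W) mosaiced exposures, one per entry in SHIFTS."""
        h, w = frames[0].shape
        rgb_sum = np.zeros((h, w, 3))
        rgb_cnt = np.zeros((h, w, 3))
        ys, xs = np.mgrid[0:h, 0:w]
        for frame, (dy, dx) in zip(frames, SHIFTS):
            # With the sensor displaced by (dy, dx), each scene point is seen
            # through a different cell of the 2x2 colour-filter tile.
            chan = CFA[(ys + dy) % 2, (xs + dx) % 2]
            for c in range(3):
                mask = chan == c
                rgb_sum[mask, c] += frame[mask]
                rgb_cnt[mask, c] += 1
        # Green is sampled twice per scene point, so divide by the sample count.
        return rgb_sum / np.maximum(rgb_cnt, 1)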

EDIT: Sigma also has "Foveon" sensors that do not have the filter and instead stack multiple sensors (for different wavelengths) at each pixel.

https://en.wikipedia.org/wiki/Foveon_X3_sensor

itishappy 12/29/2025||||
> What about taking 3 photos while quickly changing the filter

Works great. Most astro shots are taken using a monochrome sensor and filter wheel.

> filters are something like quantum dots that can be turned on/off

If anyone has this tech, plz let me know! Maybe an etalon?

https://en.wikipedia.org/wiki/Fabry%E2%80%93P%C3%A9rot_inter...

XCSme 12/29/2025||
> If anyone has this tech, plz let me know!

I have no idea; it was just the first thing that came to mind when thinking about modern color filters.

card_zero 12/29/2025||
That's how the earliest color photography worked. "Making color separations by reloading the camera and changing the filter between exposures was inconvenient", notes Wikipedia.
to11mtm 12/29/2025||
I think they are both more asking about "per-pixel color filters"; that is, something like a sensor filter/glass, but where the color separators could change (at least per line) fast enough to get a proper readout of the color information.

AKA: imagine a camera with R/G/B filters being quickly rotated out for 3 exposures, then imagine it again but with the technology integrated right into the sensor (and, ideally, with the sensor and switching mechanism fast enough to read out with a rolling shutter competitive with modern ILCs).

MarkusWandel 12/29/2025||||
Works for static images, but if there's motion the "changing the filters" part is never fast enough; there will always be colour fringing somewhere.

Edit: or maybe it does work? I've watched at least one movie on a DLP-type video projector with sequential colour and not noticed colour fringing. But still photos have much higher demands here.

numpad0 12/29/2025|||
You can use sets of exotic mirrors and/or prisms to split the incoming image into separate R, G and B beams directed onto three independent monochrome sensors, through the same single lens and all at once. That's what "3CCD" cameras and their predecessors did.
wtallis 12/28/2025|||
The sensor outputs a single value per pixel. A later processing step is needed to interpret that data given knowledge about the color filter (usually Bayer pattern) in front of the sensor.
i-am-gizm0 12/29/2025|||
The raw sensor output is a single value per sensor pixel, each of which is behind a red, green, or blue color filter. So to get a usable image (where each pixel has a value for all three colors), we have to somehow condense the values from some number of these sensor pixels. This is the "Debayering" process.
ranger207 12/28/2025||
It's a single value per pixel, but each pixel has a different color filter in front of it, so it's effectively that each pixel is one of R, G, or B
XCSme 12/28/2025||
So, for a 3x3 image, the input data would be 9 values like:

   R G B
   B R G
   G B R

?
jeeyoungk 12/29/2025|||
If you want a "3x3 colored image", you would need a 6x6 grid of Bayer filter pixels.

Each RGB pixel would be a 2x2 grid of

    G R
    B G

So G appears twice as often as the other colors (this is mostly the same for both screen and sensor technology).

There are different ways to do the color filter layouts for screens and sensors (Fuji X-Trans has a different layout, for example).
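
For the 2x2-tile view described above, the simplest possible "superpixel" debayer just bins each GRBG tile into one RGB pixel at half the mosaic resolution, averaging the two green samples. A minimal sketch, assuming numpy and the GRBG tile from the comment:

    import numpy as np

    def superpixel_debayer(mosaic):
        """Bin each 2x2 GRBG tile into one RGB pixel (half the mosaic resolution)."""
        g1 = mosaic[0::2, 0::2]   # top-left: green
        r  = mosaic[0::2, 1::2]   # top-right: red
        b  = mosaic[1::2, 0::2]   # bottom-left: blue
        g2 = mosaic[1::2, 1::2]   # bottom-right: green
        return np.dstack([r, (g1 + g2) / 2, b])  # the two greens are averaged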

Lanzaa 12/29/2025||||
This depends on the camera and the sensor's Bayer filter [0]. For example, Quad Bayer uses a 4x4 tile like:

    G G R R
    G G R R
    B B G G
    B B G G
[0]: https://en.wikipedia.org/wiki/Bayer_filter
card_zero 12/28/2025||||
In the example ("let's color each pixel ...") the layout is:

  R G
  G B
Then at a later stage the image is green because "There are twice as many green pixels in the filter matrix".
nomel 12/29/2025||
And this is important because our perception is more sensitive to luminance changes than to color changes, and since our eyes are most sensitive to green, luminance is dominated by green as well. So you get higher perceived spatial resolution by using more green [1]. This is also why JPEG stores its chroma channels at lower resolution than luma, and why modern OLEDs often use a PenTile layout, with only green at full resolution [2].

[1] https://en.wikipedia.org/wiki/Bayer_filter#Explanation

[2] https://en.wikipedia.org/wiki/PenTile_matrix_family
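
To make the "luminance is mostly green" point concrete: with the standard Rec. 709 luma weights, green contributes roughly 70% of the luminance estimate. A one-function sketch, assuming linear RGB inputs:

    def rec709_luma(r, g, b):
        # Green carries most of the perceived brightness; red and blue far less.
        return 0.2126 * r + 0.7152 * g + 0.0722 * b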

card_zero 12/29/2025|||
Funny that subpixels and camera sensors aren't using the same layouts.
nomel 1/3/2026||
It would only be relevant if viewing the image at one-to-one resolution, which is extremely rare.
userbinator 12/29/2025|||
Pentile displays are acceptable for photos and videos, but look really horrible displaying text and fine detail --- which looks almost like what you'd see on an old triad-shadow-mask colour CRT.
Abh1Works 12/29/2025|
Why is the native picture (fig 1) in grayscale? Or, more generally, why is black and white the default for signal processing? Is it just because black and white are two opposites that can be easily discerned?
loki_ikol 12/29/2025||
It's not really grayscale. The output of an image sensor integrated circuit is a series of voltages read one after the other (which could range from -0.3 to +18 volts, for example), in an order specific to the sensor's arrangement of red, green and blue "pixels". The native picture (fig 1) is the result of converting the sensor's output voltages to a series of values from black (say, -0.3 volts) up to white (say, +18 volts), while ignoring whether each one comes from a red, a green or a blue sensor "pixel".

The various "raw" camera image formats work roughly like this: they include the voltages converted to some numerical range, plus information about what each "pixel" represents for a specific camera sensor setup.
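
A rough sketch of what producing fig 1 amounts to: take the digitised photosite values, ignore which colour filter each one sat behind, and map the numeric range straight onto grey levels. The black/white levels, function name and numpy usage are assumptions for the example.

    import numpy as np

    def raw_to_grayscale(raw_values, black_level=512, white_level=16383):
        """raw_values: (H, W) digitised photosite readings, CFA layout ignored.
        black_level and white_level are hypothetical calibration constants."""
        x = (raw_values.astype(np.float64) - black_level) / (white_level - black_level)
        return np.clip(x, 0.0, 1.0)  # 0.0 = black, 1.0 = white, one grey value per photosite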

qubitcoder 12/30/2025|||
They’re known as DNs, or digital numbers. Thom Hogan’s eBooks do a phenomenal job of explaining the intricacies of camera sensors, their architecture, processing to JPEGs, and pretty much every aspect of capturing good photos.

The books, while geared toward Nikon cameras, are generally applicable. And packed with high-quality illustrations and an almost obsessive uber-nerd level of detail. He’s very much an engineer and photographer. When he says “complete guide”, he means it.

The section on image sensors, read-outs, and ISO/dual gain/S&R, etc. is particularly interesting, and should be baseline knowledge for anyone who's seriously interested in photography.

[0] https://zsystemuser.com/z-system-books/complete-guide-to-the...

seba_dos1 12/29/2025||
It's just a common default choice to represent spatial data that lacks any context on how to interpret the values chromatically. You could very well use a heatmap-like color scheme instead.