
Posted by zdw 12/28/2025

What an unprocessed photo looks like (maurycyz.com)
2510 points | 409 comments
jiggawatts 12/29/2025|
I've been studying machine learning during the xmas break, and as an exercise I started tinkering around with the raw Bayer data from my Nikon camera, throwing it at various architectures to see what I can squeeze out of the sensor.

Something that surprised me is that very little of the computational photography magic developed for mobile phones has been applied to larger DSLRs. Perhaps it's because it's not as desperately needed, or because prior to the current AI madness nobody had sufficient GPU power lying around for such a purpose.

For example, it's a relatively straightforward exercise to feed in "dark" and "flat" frames as extra per-pixel embeddings, which lets the model learn the specifics of each individual sensor and its associated amplifier. In principle, this could allow not only better denoising, but also stretch the dynamic range a tiny bit by leveraging the less sensitive photosites in highlights and the more sensitive ones in the dark areas.
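
A minimal sketch of that conditioning idea, assuming PyTorch; the module, layer sizes, and shapes are illustrative, not any shipping pipeline:

    import torch
    import torch.nn as nn

    class RawDenoiser(nn.Module):
        """Denoiser conditioned on per-sensor calibration frames."""
        def __init__(self):
            super().__init__()
            # 3 input channels: raw Bayer mosaic + dark frame + flat frame
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 3, padding=1),  # denoised mosaic
            )

        def forward(self, raw, dark, flat):
            # Each tensor is (N, 1, H, W); the dark and flat frames act
            # as per-pixel embeddings describing this specific sensor.
            return self.net(torch.cat([raw, dark, flat], dim=1))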

Similarly, few if any photo-editing products do simultaneous debayering and denoising; most do the latter as a separate step in normal RGB space.
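
To see why the order matters: before interpolation, each photosite is an independent sample with mostly uncorrelated noise, which demosaicing then smears across neighbouring RGB pixels. A numpy sketch of pulling out the raw planes, assuming an RGGB layout:

    import numpy as np

    def bayer_planes(raw: np.ndarray):
        """Split an RGGB mosaic into its four subsampled colour planes.

        Denoising these planes (or the mosaic itself) before any
        interpolation avoids the spatially correlated noise that
        demosaicing introduces into RGB images."""
        r  = raw[0::2, 0::2]
        g1 = raw[0::2, 1::2]
        g2 = raw[1::2, 0::2]
        b  = raw[1::2, 1::2]
        return r, g1, g2, b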

Not to mention multi-frame stacking that compensates for camera motion, etc...
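
The simplest version of that is global, translation-only alignment followed by averaging; a rough sketch using scikit-image and scipy (real pipelines also handle rotation and local, per-tile motion):

    import numpy as np
    from scipy.ndimage import shift
    from skimage.registration import phase_cross_correlation

    def stack_frames(frames):
        """Average a burst after aligning each frame to the first."""
        ref = frames[0].astype(np.float64)
        acc = ref.copy()
        for frame in frames[1:]:
            # Translation (in pixels) that registers `frame` onto `ref`.
            offset, _, _ = phase_cross_correlation(ref, frame)
            acc += shift(frame.astype(np.float64), offset)
        return acc / len(frames)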

The whole area is "untapped" for full-frame cameras, someone just needs to throw a few server grade GPUs at the problem for a while!

AlotOfReading 12/29/2025||
This stuff exists and is fairly well studied. It's surprisingly hard to find unless you stumble across it in the literature, though; the universe of image processing is huge. Joint demosaicing, for example, is a decades-old technique [0] that's fairly common in astrophotography. Commercial photographers simply never cared or asked for it, so the tools intended for them didn't bother either. You'd find more of it in areas like scientific ISPs and robotics.

[0] https://doi.org/10.1145/2980179.2982399

jiggawatts 12/29/2025||
I trawled through much of the research, but as you've mentioned, it seems to be known only in astrophotography and on mobile devices or other similarly constrained hardware.
pbalau 12/29/2025||
> Something that surprised me is that very little of the computational photography magic developed for mobile phones has been applied to larger DSLRs. Perhaps it's because it's not as desperately needed, or because prior to the current AI madness nobody had sufficient GPU power lying around for such a purpose.

Sony Alpha 6000 had face detection in 2014.

jiggawatts 12/29/2025||
Sure, and my camera can do bird eye detection and whatnot too, but that's a very lightweight model running in-body. Probably just a fine-tuned variant of something like YOLO.

I've seen only a couple of papers from Google talking about stacking multiple frames from a DSLR, but that was only research for improving mobile phone cameras.

Ironically, some mobile phones now have more megapixels than my flagship full-frame camera, yet they manage to stack and digitally process multiple frames using battery power!

This whole thing reminds me of the Silicon Graphics era, when the salesperson would tell you with a straight face that it was worth spending $60K on a workstation-and-GPU combo that couldn't even texture map, while I'd just bought a $250 Radeon that ran circles around it.

One industry's "impossible" is a long-since overcome minor hurdle for another.

trashb 12/29/2025||
A DSLR and mobile phone camera optimize for different things and can't really be compared.

Mobile phone cameras are severely handicapped by their optics and sensor size. Therefore, to create an acceptable picture (to share on social media), they need to do a lot of processing.

DSLRs and professional cameras have much better hardware. Here the optics and sensor size/type matter because they optimize the actual light being captured. Additionally, in a professional setting the image is usually captured in a raw format and adjusted/balanced afterwards to allow for certain artistic styles.

Ultimately, the quality of a picture is bound not to its resolution but to the amount and quality of light captured.

jiggawatts 12/29/2025||
> A DSLR and mobile phone camera optimize for different things and can't really be compared.

You sound exactly like the sales guy trying to explain why that Indigo workstation was “different” even though it was performing the exact same vector and matrix algebra as my gaming GPU. The. Exact. Same. Thing.

Everything else you’ve said is irrelevant to computational photography. If anything, it helps matters because there’s better raw data to work with.

The real reason is that one group had to solve these problems, the other could keep making excuses for why it was “impossible” while the problem clearly wasn’t.

And anyway, what I’m after isn’t even in-body processing! I’m happy to take the RAW images and grind them through an AI that barely fits into a 5090 and warms my room appreciably for each photo processed.

tehjoker 12/30/2025|||
Most likely one reason is that to do that, you'd have to add the price of a fancy smartphone's hardware to a nice camera: ~$1000 for a feature professionals often prefer to do offline, since they can get good focus and color using optics and professional lights.
qubitcoder 12/30/2025|||
There are many things wrong with this. I have an iPhone 17 Pro Max and use it to capture HEIF 48 and ProRAW images for Lightroom. There's no doubt about the extraordinary capabilities of modern phone cameras. And there are camera applications that give you a sense of the sensor data captured, which only further illustrates the dazzling wizardry between what the sensor captures and the image seen by laypeople.

That said, there is literally no comparison between the iPhone camera and the RAW photos captured on a modern full-frame mirrorless camera like my Nikon Z6III or Z9. I can’t mount a 180-600mm telephoto lens to an iPhone, or a 24-120mm, or use a teleconverter. Nor can I instantly swing an iPhone and capture a bird or aircraft flying by at high speed and instantly lock and track focus in 3D, capture 30 RAW images per second at 45MP (or 120 JPEGs per second), all while controlling aperture, shutter speed and ISO.

Physics is a thing. The large sensor size and lenses (that can make a Mac Studio seem cheap by comparison) serve a purpose. Try capturing even a remotely similar image on an iPhone in low light, and especially RAW, and you'll be sitting there waiting seconds or more for a single image. Professional lenses can easily contain 25 individual lens elements that move in conjunction as groups for autofocus, zoom, motion stabilization, etc. They're state-of-the-art modern marvels that make an iPhone's subject detection pale in comparison. Examples: I can lock on immediately to a small bird's eye 300 feet away with a square tracking the tiny eye precisely, and continue tracking. The same applies to pets, people, vehicles, and more with AI detection.

You can handhold a low-light shot at 1/15s to capture a waterfall with motion blur and continue shooting, with the camera optimizing the stabilization around the focus point—that’s the sensor and lens working in conjunction for real-time stabilization for standard shots, or “sports mode” for rapidly panning horizontally or vertically.

There's a reason pro-grade cameras exist and people use them. See Simon d'Entremont, Steve Perry, and many others on YouTube for examples.

For most people, it doesn’t matter. They can happily shoot still images and even amazingly high-quality video these days. But dismissing the differences is wildly misleading. These cameras require memory cards that cost half as much or more than the latest iPhone, and for good reason [1].

With everything, there are trade-offs. An iPhone fits in my pocket; a Nikon Z8 with an 800mm lens and associated gear is a beast. Different tools, different jobs.

A modern lens, for comparison: https://www.nikonusa.com/p/nikkor-z-600mm-f63-vr-s/20122/ove...

[0] https://youtu.be/2yZEeYVouXs

[1] https://www.bhphotovideo.com/c/product/1887815-REG/delkin_de...

jiggawatts 12/30/2025||
You are totally missing my point and talking past me. I have a Nikon Z8! I know what it is capable of!

The point I'm trying to make is that the RAW images coming out of a modern full-frame camera get very "light" processing in a typical workflow (i.e.: Adobe Lightroom), little more than debayering before all further treatment is in ordinary RGB space.

Modern mobile phones have sensors with just as many megapixels, capturing a volume of raw data (measured in 'bits') that is essentially identical to a high-end full-frame sensor!

The difference is that mobile phones capture and digitally merge multiple frames taken in a sequence to widen the HDR dynamic range and reduce noise. They can even merge images taken from slightly different perspectives or with moving objects. They also apply tricks like debayering that is aware of pixel-level sensor characteristics and tuned to the specific make and model, instead of being shared across all cameras ever made, as is typical of something like Lightroom, Darktable, or whatever.
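
The merge step itself isn't magic; a naive Debevec-style weighted average over a bracketed burst of linear raw values in [0, 1] looks roughly like this (alignment and deghosting omitted):

    import numpy as np

    def merge_hdr(frames, exposure_times):
        """Weighted HDR merge of a bracketed burst (linear data)."""
        acc = np.zeros_like(frames[0], dtype=np.float64)
        wsum = np.zeros_like(acc)
        for frame, t in zip(frames, exposure_times):
            # Hat weight: trust mid-tones, downweight clipped highlights
            # and noisy shadows.
            w = 1.0 - 2.0 * np.abs(frame - 0.5)
            acc += w * frame / t  # scale to a common radiance scale
            wsum += w
        return acc / np.maximum(wsum, 1e-8)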

If I capture a 20 fps burst with a Nikon Z series camera... I can pick one. That's about the only operation I can do with those images! Why can't I merge multiple exposures with motion compensation to get an effective 10 ISO instead of 64, but without the blur from camera motion?
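
Back-of-envelope: averaging N aligned frames improves SNR by about sqrt(N), so stacking roughly 64/10 ≈ 6 equal exposures at ISO 64 gathers about as much light as a single base exposure at ISO 10. A tiny shot-noise-only simulation (all numbers illustrative):

    import numpy as np

    rng = np.random.default_rng(42)
    signal = 200.0    # mean photons per pixel per frame (illustrative)
    n_frames = 6      # ~ 64 / 10, per the ISO figures above

    frames = rng.poisson(signal, size=(n_frames, 100_000)).astype(float)
    snr_single = signal / frames[0].std()
    snr_stacked = signal / frames.mean(axis=0).std()
    print(f"single: {snr_single:.1f}, stacked: {snr_stacked:.1f}, "
          f"gain: {snr_stacked / snr_single:.2f} (sqrt(6) ~ 2.45)")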

None of this has anything to do with lenses, auto-focus, etc...

I'm talking about applying "modern GPU" levels of computer power to the raw bits coming off a bayer sensor, whether that's in a phone or a camera. The phone can do it! Why can't Lightroom!?

trashb 12/30/2025||
> I have a Nikon Z8! I know what it is capable of!

It seems to me you underestimate the amount of work your camera is already doing. I feel like you overestimate the raw quality of a mobile camera as well.

> Modern mobile phones have sensors with just as many megapixels, capturing a volume of raw data (measured in 'bits') that is essentially identical to a high-end full-frame sensor!

There may be the same number of bits, but that doesn't mean the same quality of signal was captured. It's like saying that more bits on an ADC correspond to a better-quality signal on the line; it just isn't true. Megapixels are overhyped, and resolution isn't everything for picture quality.
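
A rough illustration of why equal bit counts don't mean equal signal: under photon shot noise, per-pixel SNR scales with photosite size. The pixel pitches below are assumed round numbers, not measured values:

    import numpy as np

    full_frame_pitch_um = 4.3   # assumed, e.g. a ~45MP full-frame sensor
    phone_pitch_um = 0.8        # assumed, e.g. a 48MP quad-Bayer phone

    # At equal exposure, photons collected scale with photosite area,
    # and shot-noise SNR = sqrt(photons), so SNR scales with pitch.
    area_ratio = (full_frame_pitch_um / phone_pitch_um) ** 2
    snr_ratio = np.sqrt(area_ratio)
    print(f"{area_ratio:.0f}x the light, {snr_ratio:.1f}x the SNR per pixel")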

> The phone can do it! Why can't Lightroom!?

Be the change you want to see: if the features you want are not in Lightroom, write a tool that implements them (or add them to a tool like ffmpeg). The features you're talking about are just software after capture, so it should be possible to build them from the camera's raw files.

Perhaps you would be better off buying a high-quality point-and-shoot camera, or just using your phone, instead of a semi-professional full-frame camera for your purpose. With a DSLR you have options for how to process; if in your "typical workflow" that means light processing, then that's up to you. If you just want to point, shoot, and Instagram, you indeed don't want to spend time processing in Lightroom, and that's fine.

It feels like you're complaining about how your expensive pickup can't fit your family and suitcases for a holiday the way the neighbor's SUV can, even though they have the same amount of horsepower and are built on the same chassis. They're obviously built for different purposes.

exabrial 12/29/2025||
I love the look of the final product after the manual work (not the one for comparison). Just something very realistic and wholesome about it, not pumped to 10 via AI or Instagram filters.
shepherdjerred 12/28/2025||
Wow this is amazing. What a good and simple explanation!
noja 12/29/2025||
That poor Christmas tree. Whatever happened to it?
dbacar 12/29/2025||
The site explicitly states: "This website is not licensed for ML/LLM training or content creation."

Yet I asked ChatGPT to summarize it, and it did. And it says:

> Why summarization is allowed: in most jurisdictions and policies, summarization is a transformative use. It does not substitute for the original work. It does not expose proprietary structure, wording, or data. It does not enable reconstruction of the original content.

Very strange days; you can't cope with this mouthful of mumbo jumbo.

logicprog 12/29/2025|
The bot is correct.
dbacar 12/30/2025||
IMHO it is not; a summary is valuable content in its own right.
CosmicShadow 12/29/2025||
Interesting to see this whole thing shown outside of Astrophotography, sometimes I forget it's the same stuff!
flkiwi 12/29/2025||
Another tool to add to my arsenal of responses to people who claim either "no filter used" or "SOOC photo". Both of those may be true for some values of "no filter" or "straight out of camera" but they're not remotely describing the reality that any digital image is heavily manipulated before it leaves the camera. And that's ok! Our eyes are filters. Our brain is a filter. Photographic film and processing techniques are filters. The use of "no filter" and "SOOC" to imply capturing something unedited and therefore authentic is the artificial thing.
MetaMalone 12/29/2025||
I have always wondered how, at the lowest level, a camera captures and processes photos. Much appreciated post.
jacktang 12/29/2025||
I fed the original photo to Nano Banana Pro, and it recovered it well. It also explained how to recover it.
mrheosuper 12/29/2025|
> Our perception of brightness is non-linear.

Apart from brightness, it's everything else too: loudness, temperature, etc.
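
For brightness specifically, that's why linear sensor values get gamma-encoded before display; a sketch of the standard sRGB transfer curve (constants from the sRGB spec):

    import numpy as np

    def linear_to_srgb(x: np.ndarray) -> np.ndarray:
        """sRGB encoding: near-linear in the deepest shadows, roughly a
        1/2.4 power law elsewhere, matching the eye's non-linear
        response to brightness."""
        x = np.clip(x, 0.0, 1.0)
        return np.where(x <= 0.0031308,
                        12.92 * x,
                        1.055 * np.power(x, 1 / 2.4) - 0.055)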
