Posted by zdw 11 hours ago
A fun tangent on the "green cast" mentioned in the post: the reason the Bayer pattern is RGGB (50% green) isn't just about color balance, but spatial resolution. The human eye is most sensitive to green light, so that channel effectively carries the majority of the luminance (brightness/detail) data. In many advanced demosaicing algorithms, the pipeline actually reconstructs the green channel first to get a high-resolution luminance map, and then interpolates the red/blue signals—which act more like "color difference" layers—on top of it. We can get away with this because the human visual system is much more forgiving of low-resolution color data than it is of low-resolution brightness data. It’s the same psycho-visual principle that justifies 4:2:0 chroma subsampling in video compression.
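For anyone who wants to see the "green first, then color differences" idea concretely, here's a minimal sketch (my own illustration, not any camera's actual pipeline) on an RGGB mosaic, with a plain neighborhood average standing in for the edge-aware interpolation real algorithms use:

    import numpy as np

    def demosaic_green_first(mosaic):
        # mosaic: 2D float array, RGGB layout (R at (0,0), G at (0,1)/(1,0), B at (1,1)).
        h, w = mosaic.shape
        ys, xs = np.mgrid[0:h, 0:w]
        r_mask = (ys % 2 == 0) & (xs % 2 == 0)
        b_mask = (ys % 2 == 1) & (xs % 2 == 1)
        g_mask = ~(r_mask | b_mask)

        def fill(values, mask):
            # Average of known samples in each 3x3 neighborhood (edges wrap);
            # real pipelines use edge-aware interpolation instead.
            known = np.where(mask, values, 0.0)
            count = mask.astype(float)
            s = np.zeros_like(values)
            c = np.zeros_like(values)
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    s += np.roll(np.roll(known, dy, axis=0), dx, axis=1)
                    c += np.roll(np.roll(count, dy, axis=0), dx, axis=1)
            return np.where(mask, values, s / np.maximum(c, 1e-9))

        g = fill(mosaic, g_mask)          # 1. full-res green/luminance plane first
        r = g + fill(mosaic - g, r_mask)  # 2. red and blue interpolated as
        b = g + fill(mosaic - g, b_mask)  #    color differences on top of green
        return np.stack([r, g, b], axis=-1)

Half the sites are green, so that plane keeps most of the spatial detail, while the R-G and B-G difference planes are smooth enough that even crude interpolation does little visible damage.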
Also, for anyone interested in how deep the rabbit hole goes, looking at the source code for dcraw (or libraw) is a rite of passage. It’s impressive how many edge cases exist just to interpret the "raw" voltages from different sensor manufacturers.
The main reason is that green does indeed overwhelmingly contribute to perceptual luminance (over 70% in sRGB once gamma corrected: https://www.w3.org/TR/WCAG20/#relativeluminancedef) and modern demosaicking algorithms will rely on both derived luminance and chroma information to get a good result (and increasingly spatial information, e.g. "is this region of the image a vertical edge").
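For reference, the formula from that link is tiny in code; a quick sketch (sRGB components in [0, 1]):

    def srgb_to_linear(c):
        # Undo the sRGB transfer function first; luminance is defined on linear light.
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    def relative_luminance(r, g, b):
        lr, lg, lb = (srgb_to_linear(x) for x in (r, g, b))
        return 0.2126 * lr + 0.7152 * lg + 0.0722 * lb

    print(relative_luminance(1.0, 1.0, 1.0))  # 1.0 for white
    print(relative_luminance(0.0, 1.0, 0.0))  # ~0.715: green alone carries most of it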
I believe small neural networks are the current state of the art (e.g. trained to reverse a 16x16 color filter pattern for the given camera). What modern digital cameras actually use is all trade-secret stuff.
> green does indeed overwhelmingly contribute to perceptual luminance
so... if luminance contribution is different from "sensitivity" to you - what do you imply by sensitivity?
From the man page for ppmtopgm, the tool that converts the classic "ppm" (portable pixel map) format to "pgm" (portable graymap):
https://linux.die.net/man/1/ppmtopgm
The quantization formula ppmtopgm uses is g = .299 r + .587 g + .114 b.
You'll note the relatively high value of green there, making up nearly 60% of the luminosity of the resulting grayscale image. I also love the quote in there...
Quote
Cold-hearted orb that rules the night
Removes the colors from our sight
Red is gray, and yellow white
But we decide which is right
And which is a quantization error.
(context for the original - https://www.youtube.com/watch?v=VNC54BKv3mc )

After that, the next most important problem is that he operates in the wrong color space: he's boosting raw RGB channels rather than luminance. That means that some objects appear much too saturated.
So his photo isn't "unprocessed", it's just incorrectly processed.
>There’s nothing that happens when you adjust the contrast or white balance in editing software that the camera hasn’t done under the hood. The edited image isn’t “faker” than the original: they are different renditions of the same data.
When I worked at Amazon on the Kindle Special Offers team (ads on your eink Kindle while it was sleeping), the first implementation of auto-generated ads was by someone who didn't know that properly converting RGB to grayscale was a smidge more complicated than just averaging the RGB channels. So for ~6 months in 2015ish, you may have seen a bunch of ads that looked pretty rough. I think I just needed to add a flag to the FFmpeg call to get it to convert RGB to luminance before mapping it to the 4-bit grayscale needed.
I remember trying out some of the home-made methods while I was implementing a creative work section for a school assignment. It’s surprising how "flat" the basic average looks until you actually respect the coefficients (usually some flavor of 0.21R + 0.72G + 0.07B). I bet it's even more apparent in a 4-bit display.
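A quick sketch of that difference (Rec. 709-style weights, gamma ignored for simplicity, plus the 4-bit quantization an e-ink panel would need):

    def gray_average(r, g, b):
        return (r + g + b) / 3.0

    def gray_weighted(r, g, b):
        return 0.2126 * r + 0.7152 * g + 0.0722 * b

    def to_4bit(v):
        # Map 0..255 to the 16 gray levels of a 4-bit display.
        return round(v / 255.0 * 15)

    # Pure green: the naive average comes out far too dark.
    print(gray_average(0, 255, 0), gray_weighted(0, 255, 0))                    # 85.0 vs ~182.4
    print(to_4bit(gray_average(0, 255, 0)), to_4bit(gray_weighted(0, 255, 0)))  # 5 vs 11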
I went to a photoshop conference. There was a session on converting color to black and white. Basically at the end the presenter said you try a bunch of ways and pick the one that looks best.
(people there were really looking for the “one true way”)
I shot a lot of black and white film in college for our paper. One of my obsolete skills was thinking how an image would look in black and white while shooting, though I never understood the people who could look at a scene and decide to use a red filter..
These are the coefficients I use regularly.
It had to be precisely defined because of the requirements of backwards-compatible color transmission (YIQ is the common abbreviation for the NTSC color space, I being ~reddish and Q being ~blueish). Basically they treated B&W (technically monochrome) pictures the way B&W film and video tubes treated them: great in green, average in red, and poor in blue.
A bit unrelated: pre-color transition, the makeups used are actually slightly greenish too (which appears nicely in monochrome).
Page 5 has:
Eq' = 0.41 (Eb' - Ey') + 0.48 (Er' - Ey')
Ei' = -0.27(Eb' - Ey') + 0.74 (Er' - Ey')
Ey' = 0.30Er' + 0.59Eg' + 0.11Eb'
The last equation gives those coefficients.

Like q3_sqrt
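Those three equations translate almost directly into code; a small sketch (gamma-corrected R'G'B' values in [0, 1] assumed):

    def rgb_to_yiq(r, g, b):
        y = 0.30 * r + 0.59 * g + 0.11 * b   # what a B&W set actually displays
        q = 0.41 * (b - y) + 0.48 * (r - y)
        i = -0.27 * (b - y) + 0.74 * (r - y)
        return y, i, q

    print(rgb_to_yiq(1.0, 0.0, 0.0))  # pure red: Y = 0.30, I strongly positive (~reddish axis)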
There is no such thing as “unprocessed” data, at least that we can perceive.
And different films and photo papers can have totally different looks, defined by the chemistry of the manufacturer and however _they_ want things to look.
You’re right about Ansel Adams. He “dodged and burned” extensively (lightened and darkened areas when printing.) Photoshop kept the dodge and burn names on some tools for a while.
https://m.youtube.com/watch?v=IoCtni-WWVs
When we printed for our college paper we had a dial that could adjust the printed contrast a bit of our black and white “multigrade” paper (it added red light). People would mess with the processing to get different results too (cold/ sepia toned). It was hard to get exactly what you wanted and I kind of see why digital took over.
I pass on a gift I learned of from HN: Susan Sunday’s “On Photography”.
Out of curiosity: what led you to write "Susan Sunday" instead of "Susan Sontag"? (for other readers: "Sonntag" is German for "Sunday")
[1] https://dpreview.com/articles/9828658229/computational-photo...
How does this affect luminance perception for deuteranopes? (Since their color blindness is caused by a deficiency of the cones that detect green wavelengths)
Blue cones make little or no contribution to luminance. Red cones are sensitive across the full spectrum of visible light, but green cones have no sensitivity to the longest wavelengths [2]. Since protans don't have the "hardware" to sense long wavelengths, it's inevitable that they'd have unusual luminance perception.
I'm not sure why deutans have such a normal luminous efficiency curve (and I can't find anything in a quick literature search), but it must involve the blue cones, because there's no way to produce that curve from the red-cone response alone.
[1]: https://en.wikipedia.org/wiki/Luminous_efficiency_function#C...
[2]: https://commons.wikimedia.org/wiki/File:Cone-fundamentals-wi...
Also there’s something to be said about the fact that the eye is a squishy analog device, and so even if the medium-wavelength cones are deficient, the long-wavelength cones (red-ish) overlap in their light sensitivities with the medium ones, so…
This argument is very confusing: if the eye is most sensitive to green, less intensity/area should be necessary, not more.
But that is not the case: we are first measuring with cameras, and then presenting the image to human eyes. Being more sensitive to a colour means that the same measurement error will lead to more observable artifacts. So to maximize visual authenticity, the best we can do is make our cameras as (relatively) sensitive to green light as human eyes are.
The JPEGs cameras produce are heavily processed, and they are emphatically NOT "original". Taking manual control of that process to produce an alternative JPEG with different curves, mappings, calibrations, is not a crime.
>There’s nothing that happens when you adjust the contrast or white balance in editing software that the camera hasn’t done under the hood. The edited image isn’t “faker” than the original: they are different renditions of the same data.
I used to have a cheaper phone with a high-resolution camera and then later switched to an iPhone. The latter produced much nicer pictures; my old phone just produced very flat-looking ones.
People say that the iPhone camera automatically edits the images to look better. And in a way I notice that too. But that’s the wrong way of looking at it; the more-edited picture from the iPhone actually corresponds more to my perception when I’m actually looking at the scene. The white of the snow and glaciers and the deep blue sky really does look amazing in real life, and when my old phone captured it as a flat and disappointing-looking photo with less postprocessing than an iPhone, it genuinely failed to capture what I can see with my eyes. And the more vibrant post-processed colours of an iPhone really do look more like what I think I’m looking at.
https://puri.sm/posts/librem-5-photo-processing-tutorial/
Which is a pleasant read, and I like the pictures. Has the Librem 5's automatic JPEG output improved since you wrote the post about photography in Croatia (https://dosowisko.net/l5/photos/)?
I know my Sony gear can't call out to AI because the Wi-Fi sucks, like on every other Sony product, and barely works inside my house. But I also know the first ILC manufacturer that tries to put AI right into RAW files is probably the first to lose part of the photography market.
That said, I'm a purist to the point where I always offer RAWs for my work [0] and don't do any Photoshop/etc.: just D/A, horizon, brightness adjust/crop to taste.
Where phones can possibly do better is the smaller size and true MP structure of a cell phone camera sensor, which makes it easier to handle things like motion blur and rolling shutter.
But I have yet to see anything that gets closer to an ILC for true quality than the decade-plus-old PureView cameras on Nokia phones, probably partly because they often had large enough sensors.
There's only so much computation can do to simulate true physics.
[0] - I've found people -like- that. TBH, it helps that I tend to work cheap or for barter-type jobs in that scene, but it winds up being something where I've gotten repeat work, because they found that me plus a 'photoshop person' was cheaper than getting an AIO pro.
And then there's all the nonsense BigTech enables out of the box today with automated AI touch ups. That definitely qualifies as fakery although the end result may be visually pleasing and some people might find it desirable.
if you take that away, a picture is not very interesting: it's hyperrealistic, so not super creative a lot of the time (compared to e.g. paintings), and it doesn't even require the mastery of other mediums to get hyperrealism
I can't see infrared.
But they don't see it as IR. Instead, this infrared information just kind of irrevocably leaks into the RGB channels that we do perceive. With the unmodified camera on my Samsung phone, IR shows up kind of purple-ish. Which is... well... it's fake. Making invisible IR into visible purple is an artificially-produced artifact of the process that results in me being able to see things that are normally ~impossible for me to observe with my eyeballs.
When you generate your own "genuine" images using your digital camera(s), do you use an external IR filter? Or are you satisfied with knowing that the results are fake?
The green filter in your Samsung phone's Bayer matrix probably blocks IR better than the blue and red ones do.
Here's a random spectral sensitivity for a silicon sensor:
https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcRkffHX...
what's so special about green? oh so just because our eyes are more sensitive to green we should dedicate double the area to green in camera sensors? i mean, probably yes. but still. (⩺_⩹)
Why are YUV or other luminance-chrominance color spaces important for an RGB input? Because many processing steps and encoders work in YUV color spaces. This wasn't really covered in the article.
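A minimal sketch of the payoff (BT.601-style constants; not any particular encoder's exact pipeline): keep luma at full resolution, store the two chroma planes at quarter resolution (4:2:0).

    import numpy as np

    def rgb_to_ycbcr(rgb):
        # rgb: float array [..., 3] in [0, 1]
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y = 0.299 * r + 0.587 * g + 0.114 * b
        cb = (b - y) * 0.564
        cr = (r - y) * 0.713
        return y, cb, cr

    def subsample_420(plane):
        # Average each 2x2 block: half resolution in both axes, a quarter of the samples.
        h, w = plane.shape
        p = plane[:h - h % 2, :w - w % 2]
        return p.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    rgb = np.random.rand(4, 6, 3)
    y, cb, cr = rgb_to_ycbcr(rgb)
    print(y.shape, subsample_420(cb).shape)  # (4, 6) luma, (2, 3) chroma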
To truly record an appearance without reference to the sensory system of our species, you would need to encode the full electromagnetic spectrum from each point. Even then, you would still need to decide on a cutoff for the spectrum.
...and hope that nobody ever told you about coherence phenomena.
For instance, the Leica M series have specific monochrome versions with huge resolutions and better monochrome rendering.
You can also modify some cameras and remove the filter, but the results usually need processing. A side effect is that the now exposed sensor is more sensitive to both ends of the spectrum.
this is totally out of my own self-interest, no problems with its content
I'm imagining a sort of Logan's Run-like scifi setup where only people with a documented em dash before November 30, 2022, i.e. D(ash)-day, are left with permission to write.
At least Robespierre needed two sentences before condemning a man. Now the mob is lynching people on the basis of a single glyph.
I have actually been deliberately modifying my long-time writing style and use of punctuation to look less like an LLM. I'm not sure how I feel about this.
But now, likewise, I'm having to bail on em dashes. My last differentiator is that I always close-set the em dash—no spaces on either side, whereas ChatGPT typically opens them (AP style).
Russians have been using this for at least 15 years.
But anyway, it’s option-hyphen for an en dash and option-shift-hyphen for an em dash.
I also just stopped using them a couple years ago when the meme about AI using them picked up steam.
also your question implies a bad assumption even if you disclaim it. if you don't want to imply a bad assumption the way to do that is to not say the words, not disclaim them
“NO EM DASHES” is common system prompt behavior.
What bothers me as an old-school photographer is this. When you really pushed it with film (e.g. overprocess 400ISO B&W film to 1600 ISO and even then maybe underexpose at the enlargement step) you got nasty grain. But that was uniform "noise" all over the picture. Nowadays, noise reduction is impressive, but at the cost of sometimes changing the picture. For example, the IP cameras I have, sometimes when I come home on the bike, part of the wheel is missing, having been deleted by the algorithm as it struggled with the "grainy" asphalt driveway underneath.
Smartphone and dedicated digital still cameras aren't as drastic, but when zoomed in, or in low light, faces have a "painted" kind of look. I'd prefer honest noise, or better yet an adjustable denoising algorithm from "none" (grainy but honest) to what is now the default.
It was my fault that I didn't check the pictures while I was taking them. Imagine my disappointment when I checked them back at home: the Android camera had decided to apply some kind of AI filter to all the pictures. Now my grandparents don't look like themselves at all; they are just an AI version.
Heavy denoising is necessary for cheap IP cameras because they use cheap sensors paired with high f-number optics. Since you have a photography background you'll understand the tradeoff that you'd have to make if you could only choose one lens and f-stop combination but you needed everything in every scene to be in focus.
You can get low-light IP cameras or manual focus cameras that do better.
The second factor is the video compression ratio. The more noise you let through, the higher bitrate needed to stream and archive the footage. Let too much noise through for a bitrate setting and the video codec will be ditching the noise for you, or you'll be swimming in macroblocks. There are IP cameras that let you turn up the bitrate and decrease the denoise setting like you want, but be prepared to watch your video storage times decrease dramatically as most of your bits go to storing that noise.
> Smartphone and dedicated digital still cameras aren't as drastic, but when zoomed in, or in low light, faces have a "painted" kind of look. I'd prefer honest noise, or better yet an adjustable denoising algorithm from "none" (grainy but honest) to what is now the default.
If you have an iPhone then getting a camera app like Halide and shooting in one of the RAW formats will let you do this and more. You can also choose Apple ProRAW on recent iPhone Pro models which is a little more processed, but still provides a large amount of raw image data to work with.
This is interesting to think about, at least for us photo nerds. ;) I honestly think there are multiple right answers, but I have a specific one that I prefer. Applying the same transfer function to all pixels corresponds pretty tightly to film & paper exposure in analog photography. So one reasonable followup question is: did we count manually over- or under-exposing an analog photo to be manipulation or “processing”? Like you can’t see an image without exposing it, so even though there are timing & brightness recommendations for any given film or paper, generally speaking it’s not considered manipulation to expose it until it’s visible. Sometimes if we pushed or pulled to change the way something looks such that you see things that weren’t visible to the naked eye, then we call it manipulation, but generally people aren’t accused of “photoshopping” something just by raising or lowering the brightness a little, right?
When I started reading the article, my first thought was, ‘there’s no such thing as an unprocessed photo that you can see’. Sensor readings can’t be looked at without making choices about how to expose them, without choosing a mapping or transfer function. That’s not to mention that they come with physical response curves that the author went out of his way to sort-of remove. The first few dark images in there are a sort of unnatural way to view images, but in fact they are just as processed as the final image, they’re simply processed differently. You can’t avoid “processing” a digital image if you want to see it, right? Measuring light with sensors involves response curves, transcoding to an image format involves response curves, and displaying on monitor or paper involves response curves, so any image has been processed a bunch by the time we see it, right? Does that count as “processing”? Technically, I think exposure processing is always built-in, but that kinda means exposing an image is natural and not some type of manipulation that changes the image. Ultimately it depends on what we mean by “processing”.
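FWIW, the "same transfer function applied to every pixel" kind of edit is literally a couple of lines; a tiny sketch (linear-light values in [0, 1] assumed, with an exposure push as the arbitrary example):

    import numpy as np

    def push_exposure(linear_img, stops=1.0):
        # One global transfer function: every pixel is multiplied by 2**stops,
        # roughly like giving the film or paper more exposure.
        return np.clip(linear_img * (2.0 ** stops), 0.0, 1.0)

Nothing in there knows where a pixel is or what it belongs to, which is what distinguishes it from a local retouch.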
Reading reg plates would be a lot easier if I could sharpen the image myself rather than try to battle with the "turn it up to 11" approach by manufacturers.
Noise is part of the world itself.
I was honestly happier with the technically inferior iPhone 5 camera, the photos at least didn't look fake.
Artist develops a camera that takes AI-generated images based on your location. Paragraphica generates pictures based on the weather, date, and other information.
https://www.standard.co.uk/news/tech/ai-camera-images-paragr...
Years ago I became suspicious of my Samsung Android device when I couldn't produce a reliable likeness of an allergy induced rash. No matter how I lit things, the photos were always "nicer" than what my eyes recorded live.
The incentives here are clear enough - people will prefer a phone whose camera gives them an impression of better skin, especially when the applied differences are extremely subtle and don't scream airbrush. If brand-x were the only one to allow "real skin" into the gallery viewer, people and photos would soon be decried as showing 'x-skin', which would be considered gross. Heaven help you if you ever managed to get close to a mirror or another human.
To this day I do not know whether it was my imagination or whether some inline processing effectively does or did perform micro airbrushing on things like this.
Whatever did or does happen, the incentive is evergreen - media capture must flatter the expectations of its authors, without getting caught in its sycophancy. All the while, capacity improves steadily.
It gets even wilder when perceiving space and time as additional signal dimensions.
I imagine a sort of absolute reality that is the universe. And we’re all just sensor systems observing tiny bits of it in different and often overlapping ways.
It's the same reason that allows RGB screens to work. No screen has ever produced "real" yellow (for which there is a wavelength), but they still stimulate our trichromatic vision very similar to how actual yellow light would.
What a nice way to put it.
I've had this in mind at times in recent years due to $DAYJOB. We use simulation heavily to provide fake CPUs, hardware devices, what have you, with the goal of keeping our target software happy by convincing it that it's running in its native environment instead of on a developer laptop.
Just keep in mind that it's important not to go _too_ far down the rabbit hole, one can spend way too much time in "what if we're all just brains in jars?"-land.
A better discriminator might be global edits vs local edits, with local edits being things like retouching specific parts of the image to make desired changes, and one could argue that local edits are "more fake" than global edits, but it still depends on a thousand factors, most importantly intent.
"Fake" images are images with intent to deceive. By that definition, even an image that came straight out of the camera can be "fake" if it's showing something other than what it's purported to (e.g. a real photo of police violence but with a label saying it's in a different country is a fake photo).
What most people think when they say "fake", though, is a photo that has had filters applied, which makes zero sense. As the post shows, all photos have filters applied. We should get over that specific editing process, it's no more fake than anything else.
Filters themselves don't make it fake, just like words themselves don't make something a lie. How the filters and words are used, whether they bring us closer or further from some truth, is what makes the difference.
Photos implicitly convey, usually, 'this is what you would see if you were there'. Obviously filters can help with that, as in the OP, or hurt.
Artists, who use these tools with clear vision and intent to achieve specific goals, strangely never have this problem.
For example, I took a picture of my dog at the dog park the other day. I didn’t notice when framing the picture but on review at home, right smack in the middle of the lower 3rd of the photo and conveniently positioned to have your eyes led there by my dog’s pose and snout direction, was a giant, old, crusty turd. Once you noticed it, it was very hard to not see it anymore. So I broke out the photo editing tools and used some auto retouching tool to remove the turd. And lucky for me since the ground was mulch, the tool did a fantastic job of blending it out, and if I didn’t tell you it had been retouched, you wouldn’t know.
Is that a fake image? The subject of the photo was my dog. The purpose of the photo was to capture my dog doing something entertaining. When I was watching the scene with my own human eyes I didn’t see the turd. Nor was capturing the turd in the photo intended or essential to capturing what I wanted to capture. But I did use some generative tool (algorithmic or AI I couldn’t say) to convincingly replace the turd with more mulch. So does doing that make the image fake? I would argue no. If you ask me what the photo is, I say it’s a photo of my dog. The edit does not change my dog, nor change the surrounding to make the dog appear somewhere else or to make the dog appear to be doing something they weren’t doing were you there to witness it yourself. I do not intend the photo to be used as a demonstration of how clean that particular dog park is or was on that day, or even to be a photo representing that dog park at all. My dog happened to be in that locale when they did something I wanted a picture of. So to me that picture is no more fake than any other picture in my library. But a pure “differentiate on the tools” analysis says it is a fake image, content that wasn’t captured by the sensor is now in the image and content that was captured no longer is. Fake image then right?
I think the OP has it right, the intent of your use of the tool (and its effect) matters more than what specific tool you used.
The ones that make the annual rounds up here in New England are those foliage photos with saturation jacked. “Look at how amazing it was!” They’re easy to spot since doing that usually wildly blows out the blues in the photo unless you know enough to selectively pull those back.
If you want reality, go there in person and stop looking at photos. Viewing imagery is a fundamentally different type of experience.
We’ve had similar debates about art using miniatures and lens distortions versus photos since photography was invented — and digital editing fell on the lens trick and miniature side of the issue.
Portrait photography -- no, people don't look like that in real life with skin flaws edited out
Landscape photography -- no, the landscapes don't look like that 99% of the time, the photographer picks the 1% of the time when it looks surreal
Staged photography -- no, it didn't really happen
Street photography -- a lot of it is staged spontaneously
Product photography -- no, they don't look like that in normal lighting
A lot of armchair critics on the internet who only go out to their local park at high noon will say they look fake but they're not.
There are other elements I can spot realism where the armchair critic will call it a "bad photoshop". For example, a moon close to the horizon usually looks jagged and squashed due to atmospheric effects. That's the sign of a real moon. If it looks perfectly round and white at the horizon, I would call it a fake.
What about this? https://news.ycombinator.com/item?id=35107601
You can look it up because it's published on the web but IIRC it's generally what you'd expect. It's okay to do whole-image processing where all pixels have the same algorithm applied, like the basic brightness, contrast, color, tint, gamma, levels, cropping, scaling, etc. filters that have been standard for decades. The usual debayering and color space conversions are also fine. Selectively removing, adding or changing only some pixels or objects is generally not okay for journalistic purposes. Obviously, per-object AI enhancement of the type many mobile phones and social media apps apply by default doesn't meet such standards.
I downsized it to 170x170 pixels
My point is that there IS an experiment which would show that Samsung is doing some nonstandard processing likely involving replacement. The evidence provided is insufficient to show that
For example see
You can try to guess the location of edges to enhance them after upscaling, but it's guessing, and when the source has the detail level of a 170x170 moon photo a big proportion of the guessing will inevitably be wrong.
And in this case it would take a pretty amazing unblur to even get to the point it can start looking for those edges.
You did not link an example of upscaling, the before and after are the same size.
Unsharp filters enhance false edges on almost all images.
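A minimal unsharp-mask sketch shows why: the added "detail" is just (image - blurred), so noise and interpolation artifacts get amplified right along with real edges.

    import numpy as np

    def box_blur(img, k=3):
        # Crude box blur via shifted sums (edges wrap); real pipelines use Gaussians.
        out = np.zeros_like(img, dtype=float)
        for dy in range(-(k // 2), k // 2 + 1):
            for dx in range(-(k // 2), k // 2 + 1):
                out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        return out / (k * k)

    def unsharp_mask(img, amount=1.0):
        # Sharpened = original + amount * (original - blurred).
        return img + amount * (img - box_blur(img))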
If you claim either one of those are wrong, you're being ridiculous.
And to be clear, everyone including Apple has been doing this since at least 2017
The problem with what Samsung was doing is that it was moon-specific detection and replacement
I zoomed in on the monitor showing that image and, guess what, again you see slapped on detail, even in the parts I explicitly clipped (made completely 100% white):
The first words I said were that Samsung probably did this
And you're right that I didn't read the dozens of edits which were added after the original post. I was basing my arguments off everything before the "conclusion section", which it seems the author understands was not actually conclusive.
I agree that the later experiments, particularly the "two moons" experiment were decisive.
Also to be clear, I know that Samsung was doing this, because as I said I worked at a competitor. At the time I did my own tests on Samsung devices because I was also working on moon related image quality
i.e. Camera+Lens+ISO+SS+FStop+FL+TC (If present)+Filter (If present). Add focus distance if being super duper proper.
And some of that is to at least help provide the right requirements for anyone trying to recreate the shot.
Even that isn't all that clear-cut. Is noise removal a local edit? It only touches some pixels, but obviously, that's a silly take.
Is automated dust removal still global? The same idea, just a bit more selective. If we let it slide, what about automated skin blemish removal? Depth map + relighting, de-hazing, or fake bokeh? I think that modern image processing techniques really blur the distinction here because many edits that would previously need to be done selectively by hand are now a "global" filter that's a single keypress away.
Intent is the defining factor, as you note, but intent is... often hazy. If you dial down the exposure to make the photo more dramatic / more sinister, you're manipulating emotions too. Yet, that kind of editing is perfectly OK in photojournalism. Adding or removing elements for dramatic effect? Not so much.
The only process in the article that involves nearby pixels is to combine R G and B (and other G) into one screen pixel. (In principle these could be mapped to subpixels.) Everything fancier than that can be reasonably called some fake cosmetic bullshit.
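That combining step is roughly this sketch (an even-sized RGGB mosaic assumed; the article's exact code may differ):

    import numpy as np

    def bin_rggb(mosaic):
        # Collapse each 2x2 RGGB cell into one RGB pixel, averaging the two greens.
        r  = mosaic[0::2, 0::2]
        g1 = mosaic[0::2, 1::2]
        g2 = mosaic[1::2, 0::2]
        b  = mosaic[1::2, 1::2]
        return np.stack([r, (g1 + g2) / 2.0, b], axis=-1)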
Regarding sharpening and optical stuff, many modern camera lenses are built with the expectation that some of their optical properties will be easy to correct for in software, allowing the manufacturer to optimize for other properties.
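Geometric distortion is the classic example: the lens is allowed to keep it, and software removes it from a per-lens profile. A hedged sketch of the usual one-term radial model (the k1 coefficient here is a hypothetical placeholder for a profile value):

    def distorted_source_coords(xu, yu, k1):
        # For each pixel of the corrected output (xu, yu normalized to the optical center),
        # find where to sample in the distorted source: x_d = x_u * (1 + k1 * r^2).
        r2 = xu * xu + yu * yu
        scale = 1.0 + k1 * r2
        return xu * scale, yu * scale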
Raw formats usually carry "Bayer-filtered linear (well, almost linear) light in device-specific color space", not necessarily "raw unprocessed readings from the sensor array", although some vendors move it slightly more towards the latter than others.
Removing dust and blemishes entails looking at more than one pixel at a time.
Nothing in the basic processing described in the article does that.
For example if you add snow to a shot with masking or generative AI. It's fake because the real life experience was not actually snowing. You can't just hallucinate a major part of the image - that counts as fake to me. A major departure from the reality of the scene. Many other types of edits don't have this property because they are mostly based on the reality of what occurred.
I think for me this comes from an intrinsic valuing of the act/craft of photography, in the physical sense. Once an image is too digitally manipulated then it's less photography and more digital art.
So there are levels of image processing, and it would be wrong to dump them all in the same category.
I mean to some degree, human perception is a hallucination of reality. It is well known by magicians that if you know the small region of space that a person is focusing on, then you can totally change other areas of the scene without the person noticing.
Really good article though
Computer imaging is much wider than you think. It cares about the entire signal pipeline, from emission from a light source, to capture by a sensor, to re-emission from a display, to absorption in your eye, and how your brain perceives it. Just like our programming languages professor called us "Pythonized minds" for only knowing a tiny subset of programming, there is so much more to vision than the RGB we learn at school. Look up "Metamerism" for some entry-level fun. Color spaces are also fun and funky.
There are a lot of interesting papers in the field, and it's definitely worth reading some.
A highlight of my time at university.
== Tim's Vermeer ==
Specifically Tim's quote "There's also this modern idea that art and technology must never meet - you know, you go to school for technology or you go to school for art, but never for both... And in the Golden Age, they were one and the same person."
https://en.wikipedia.org/wiki/Tim%27s_Vermeer
https://www.imdb.com/title/tt3089388/quotes/?item=qt2312040
== John Lind's The Science of Photography ==
Best explanation I ever read on the science of photography https://johnlind.tripod.com/science/scienceframe.html
== Bob Atkins ==
Bob used to have some incredible articles on the science of photography that were linked from photo.net back when Philip Greenspun owned and operated it. A detailed explanation of digital sensor fundamentals (e.g. why bigger wells are inherently better) particularly sticks in my mind. They're still online (bookmarked now!)
https://www.bobatkins.com/photography/digital/size_matters.h...
This seems more a limitation of monitors. If you had very large bit depth, couldn't you just display images in linear light without gamma correction?
If you kept it linear all the way to the output pixels, it would look fine. You only have to go nonlinear because the screen expects nonlinear data. The screen expects this because it saves a few bits, which is nice but far from necessary.
To put it another way, it appears so dark because it isn't being "displayed directly". It's going directly out to the monitor, and the chip inside the monitor is distorting it.
Why exactly? My understanding is that gamma correction is effectively an optimization scheme during encoding to allocate bits in a perceptually uniform way across the dynamic range. But if you just have enough bits to work with and are not concerned with file sizes (and assuming all hardware could support these higher bit depths), then this shouldn't matter? IIRC, unlike CRTs, LCDs don't have a power-curve response in terms of the hardware anyway, and emulate the overall 2.2 TRC via a LUT. So you could certainly get monitors to accept linear input (assuming you manage to crank up the bit depth enough to the point where you're not losing perceptual fidelity), and just do everything in linear light.
In fact if you just encoded the linear values as floats that would probably give you best of both worlds, since floating point is basically log-encoding where density of floats is lower at the higher end of the range.
https://www.scantips.com/lights/gamma2.html (I don't agree with a lot of the claims there, but it has a nice calculator)
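To put a number on the "enough bits" question, a rough sketch counting how many 8-bit code values land in the darkest 1% of linear light for a plain linear encoding versus an sRGB-style transfer:

    def srgb_decode(e):
        # Encoded sRGB value [0, 1] -> linear light [0, 1].
        return e / 12.92 if e <= 0.04045 else ((e + 0.055) / 1.055) ** 2.4

    threshold = 0.01  # "the darkest 1% of linear light"
    linear_codes = sum(1 for c in range(256) if c / 255 < threshold)
    srgb_codes = sum(1 for c in range(256) if srgb_decode(c / 255) < threshold)
    print(linear_codes, srgb_codes)  # 3 vs 26: 8-bit linear starves the shadows

With enough integer bits, or a float encoding (which is denser near zero), that stops mattering, which is the point above.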
More importantly, the camera isn't recording blinding brightness in the first place! It'll say those pixels are pure white, which is probably a few hundred or thousand nits depending on shutter settings.