Posted by abe94 3 days ago
This method of color photography is absolutely fascinating and resulted in some of the best color photographs of the early 20th century.
The Library of Congress has a collection [1] of plates by Prokudin-Gorskii who was hired by the Czar to ride around Russia on a train and photograph the country in the years before WWI and the Revolution. In the last couple of decades someone restored and digitally aligned each color plate so now we have nearly 1,500 relatively high resolution color photographs of imperial Russia. He took photos of everything from Emirs to peasant girls to Tolstoy and all the architecture and scenery in between.
[1] https://www.loc.gov/collections/prokudin-gorskii/about-this-...
At the start of the photographic era, the state of the art for illustrations was to have an artist draw them and then manually engrave them onto a wood block, which was then used as a printing plate. There was a period when no method was available to convert photos to printing plates, so from that period you find prints of photos that someone has manually copied onto a wood engraving for publication.
Development of the plate produces a superposition of many different volume diffraction gratings mostly parallel to the surface of the plate. If the plate is bleached, these diffraction gratings become high efficiency phase gratings.
For playback/viewing, light from the illumination source is both diffracted and filtered in wavelength by the volume diffraction gratings. In a hologram, the diffraction gives the multiple perspectives that make the medium so cool. For Lippmann photographs, the camera has removed most of the perspective information and the dichroic or interference filtering of the gratings is the primary effect.
In either case, the final image is not stored in discrete layers but as an image-bearing volume distributed through the emulsion.
That's why it can't be effectively copied using 2D techniques. Since the image is 2D (maybe slightly 3D? I'd have to think about that), it can be copied using a standard photographic technique. But the interference gratings in the emulsion have some angular dependence on the light source and viewer's angle that wouldn't be present in a 2D copy. In this way they also look a bit like a reflection hologram.
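The angular dependence mentioned above can be sketched with the Bragg condition for a volume grating recorded parallel to the plate surface: the reflected wavelength shortens as you tilt away from the normal. This is my own illustrative calculation, not something from the thread; the refractive index value is an assumption.

```python
import math

# Sketch: Bragg reflection from a volume grating parallel to the plate
# surface, as in a Lippmann photograph. The reflected wavelength goes as
# lambda(theta) = lambda_0 * cos(theta_inside), where theta_inside follows
# from Snell's law (n ~ 1.5 assumed for gelatin).

def reflected_wavelength(lambda_normal_nm, view_angle_deg, n=1.5):
    """Wavelength selected at a given external viewing angle."""
    theta_out = math.radians(view_angle_deg)
    theta_in = math.asin(math.sin(theta_out) / n)  # refraction into emulsion
    return lambda_normal_nm * math.cos(theta_in)

# A grating that reflects 650 nm (red) head-on looks greener off-axis.
print(round(reflected_wavelength(650, 0)))    # 650
print(round(reflected_wavelength(650, 45)))   # ~573
```

A flat 2D copy reproduces only the head-on colors, which is one way to see why these plates resist ordinary reproduction.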
Regarding perspective through a lens: I'm imagining looking through my stereo microscope vs. my SLR. Does the fact that a lens has a single focal point get in the way of keeping depth information? Or could I split the image landing on the mirror in two and have a 3D stereo viewfinder for my Nikon, such that the view is stereo and it's only the film that's throwing out what direction the light is coming from? I'm reminded of the Lytro Illum light-field cameras; they only leaned on "focus after the fact" gimmicks. Maybe if they had tried it during a VR boom, to share "spatial photographs", they would have had access to a new market.
No: a single lens really does have a separate focal point for each wavelength, but the achromatic optical system in an SLR has at least two lens elements, so there is a range of wavelengths whose focal points fall very close to one designed point.
Mirror optics don't have chromatic aberration, but in the SLR case lenses are simply cheaper to produce.
Lytro uses a different idea: the wavefront from a nearby source is curved rather than flat (light from distant stars arrives effectively flat, because the distance is so large that the curvature is too slight to see), and in the Lytro each point is seen through a microlens by several pixels (each angle landing on a different pixel), so the camera can estimate the distance to the light source and use that information to reconstruct depth.
Unfortunately, the Lytro approach means that with a few-megapixel CCD you only get a few hundred kilopixels of reconstructed image, so in practice you need a hybrid approach: a classic high-resolution 2D sensor plus some sort of depth sensor. You also need extra processing power to compute the reconstructed image. So yes, in reality the latest Lytro camera used a huge CCD and a very powerful processor, all too expensive for COTS products (though acceptable in some cinema niches).
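The resolution tradeoff described above is just division: every angular sample a microlens records costs one sensor pixel. A rough sketch with my own example numbers (not Lytro's actual specs):

```python
# A plenoptic camera trades sensor pixels for angular samples. If each
# microlens covers an a x a patch of pixels, the refocusable output image
# has roughly one pixel per microlens.

def plenoptic_output_megapixels(sensor_megapixels, angular_samples_per_axis):
    return sensor_megapixels / angular_samples_per_axis ** 2

# e.g. a 40 MP sensor with 10x10 angular samples under each microlens
print(plenoptic_output_megapixels(40, 10))  # 0.4 megapixels
```

This is why a multi-megapixel sensor yields only hundreds of kilopixels of reconstructed image.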
Anybody could build a Lytro-like setup from an ordinary lens and CCD (and do the calculations on, say, a Raspberry Pi), but Lytro's patents now prohibit making money from it.
Unfortunately none of them are as well restored and presented as the Library of Congress collection. A lot of their photos are in books like Endzeit Europa [5] and other commercial media instead of in the public domain.
[1] https://www.telegraph.co.uk/news/picturegalleries/worldnews/...
[2] https://www.nationalgeographic.com/history/article/autochrom...
[3] https://www.vintag.es/2013/03/color-photographs-of-life-in-p...
[4] https://www.vintag.es/2012/12/beautiful-color-photos-of-hung...
[5] https://www.amazon.de/Endzeit-Europa-kollektives-deutschspra...
Maybe I've misunderstood the coding. Corrections are welcome.
[1] Crude overestimate, assume 5 bits per color for 20 bits per pixel. More accurate is log2(32 choose 4), which you can type into Google to get 15 bits.
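The footnote's figure is easy to check directly: choosing 4 of 32 wavelengths gives C(32, 4) distinct states, and the information content is the base-2 log of that.

```python
import math

# Verify the footnote: log2(32 choose 4) in bits per pixel.
states = math.comb(32, 4)
bits = math.log2(states)
print(states)          # 35960
print(round(bits, 2))  # 15.13
```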
https://ieeexplore.ieee.org/document/9438269
If, in our previous example, the 4 wavelengths were to be selected from a palette of 32 different wavelengths, a single worfel location could store ~36 kilobits of data. Thus, a 1 cm² medium with (10 μm)² data locations ((8 μm)² worfels with 2 μm spacing on all sides) = 1,000,000 worfels/cm². For example, (32!/((32−4)! · 4!)) = 35,960 distinct states. (An analogous use of formula (1) is drawing a hand of 5 playing cards from a 52-card deck, which yields 2,598,960 distinct hands.)
Applying the 35,960-state permutation table for k=4 (i.e., superimposing 4 wavelengths per worfel) and drawing from a palette, N, of 32 different wavelengths yields 35,960,000,000 bits (≈35.9 gigabits) per cm²; or 35.9 × (6.42 cm² per square inch) ≈ 230.4 gigabits/in². So, for an example 4″×5″ medium (20 in²), 20 × 230.4 ≈ 4.6 terabits per 4×5-inch medium.
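The quoted passage multiplies the number of states (35,960) by the worfel count, which treats each distinct state as a bit; the footnote's log2 reading gives a much smaller figure. Reproducing both interpretations (my own arithmetic on the quoted numbers):

```python
import math

# 10 um pitch on 1 cm^2 gives 1000 x 1000 data locations.
worfels_per_cm2 = 1_000_000
states = math.comb(32, 4)  # 35,960 distinct 4-of-32 wavelength sets

article_bits = states * worfels_per_cm2          # states counted as bits
info_bits = math.log2(states) * worfels_per_cm2  # information-theoretic

print(article_bits / 1e9)  # 35.96 Gbit/cm^2, matching the quoted figure
print(info_bits / 1e6)     # ~15.1 Mbit/cm^2 under the log2 reading
```

The ~2,400x gap between the two readings is exactly the confusion the parent comment is asking about.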
It was basically cathode ray tubes to expose photographic strips, an automatic chemical photo wet lab, robotic storage and retrieval of the developed film strips, and optical readout.
Absolutely bonkers.
Current fiber-optic systems already use multiple wavelengths: at the fiber input, 8 or more laser wavelengths (strictly speaking, bands rather than single wavelengths) are mixed, and at the output the mix is split with optical filters and each wavelength is processed separately.
But you don't have to use exact wavelengths here: you could use any wavelength within a band (yes, 4 colors in each of 40,000 bands looks possible), if you have a light source with tunable wavelength and a detector with enough precision, and the number of wavelengths could be much larger than in, say, an LCD or OLED.
Unfortunately, the authors of the article for some reason don't mention this important nuance, but it means that in reality, building this technology needs precision light source(s) with adjustable wavelengths (classic RGB is a combination of just 3 wavelengths, and modern AMOLEDs usually 4) and a precision compact spectrometer (for example, the one I've seen myself gives 1024 lines across the visible spectrum, so just 10 bits). None of this is impossible, but it will not be easy to achieve.
Or use a substitute technology: for example, compute the diffraction fringes digitally and then write them with a modern femtosecond laser.
You could achieve a similar result that way, but (in the femtosecond-laser case) at the cost of much more time and much more energy to create "forever" storage.
https://www.researchgate.net/publication/350499602_WORF_Writ...
This doesn't seem right to me, considering the amount and age of COTS hardware with a variety of flash-storage in them (Thinkpads, Nikon DSLRs etc.)
IIRC the shuttle’s magnetic-coil memory was hardened explicitly to defend against this sort of corruption, with additional windings to maintain a stronger charge state than would be used within the shield of the atmosphere.
DRAM/SRAM really do have problems with cosmic rays, but in LEO it's enough to use ECC, since DRAM refreshes its contents every few milliseconds. On deep-space missions (Mars and beyond), even hardened electronics hangs, as far as I can remember, roughly once a year.
Magnetic-core memory is not affected by cosmic rays (only its support circuits are), but unfortunately it is not dense enough for current storage demands (a micro-electronic magnetic-core technology exists, but even it cannot compete with CMOS).
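The reason ECC plus frequent refresh is enough in LEO is that a single-error-correcting code repairs any one flipped bit per word, so only two hits in the same word between scrubs cause data loss. A minimal illustrative sketch (Hamming(7,4); real DRAM ECC is SECDED over 64-bit words, this is just the principle):

```python
# Hamming(7,4): 4 data bits + 3 parity bits; any single flipped bit in
# the 7-bit codeword can be located and corrected.

def hamming74_encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # parity over positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # parity over positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the bad bit
    if syndrome:
        c = c.copy()
        c[syndrome - 1] ^= 1          # flip it back
    return [c[2], c[4], c[5], c[6]]   # extract the data bits

word = [1, 0, 1, 1]
code = hamming74_encode(word)
code[4] ^= 1                          # simulate a cosmic-ray bit flip
print(hamming74_correct(code) == word)  # True
```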
Unfortunately, ECC notebook (mobile) platforms are not produced anymore (yes, decades ago there were Sun notebooks built on a server platform, but they haven't been made in a long time), and the ISS uses off-the-shelf technology, which is why they have trouble with cosmic rays.
Because of this, the Shuttle flight-control computers were specially radiation-hardened, but still used DRAM.
Magnetic-core storage is simply not used in modern spacecraft; instead they use radiation-hardened CMOS, and for storage, cylindrical-magnetic-domain (bubble memory) technology (you can buy it easily on the open market, but it is still orders of magnitude less dense than even single-layer flash).
PS: to be strict, the Shuttle flight computers were upgraded at least once, but they were hardened semiconductor from the beginning.
My own laptop is such a Dell Precision model.
EDIT: Looking now at the Dell site, I see that buying a laptop with ECC memory has become much more difficult than a few years ago. For many of the "mobile workstations" ECC memory is not offered at this time, while for those where you can customize the laptop and choose ECC, the price is absolutely outrageous, e.g. $850 for 64 GB of ECC memory.
Of course, anyone sensible would buy the "mobile workstation" with the smallest and cheapest memory option, then they would buy separately 64 GB of ECC SODIMM memory at a price 4 times lower than demanded by Dell.
Most of the time I avoid reading about top ultrabooks, because the info is mostly useless to me, but this is the first time I've heard of a notebook with a server processor (Xeon 5xxx).
TECHNIQUE and METHOD are synonymous terms (don't quibble). Does anybody else find it irksome to build a sentence this way?