

What is the maximum (theoretical) possible camera sensor resolution?

The basic issue with maximum sensor resolution is the wavelength/’size’ of light:

Light

Visible light has a wavelength of around 500 nm, so one millimetre spans about 2,000 wavelengths (1 mm / 500 nm = 2,000), i.e. at most roughly 2,000 wavelength-sized pixels per millimetre.

That would make the maximum possible resolution:

(where full-frame 35 mm sensors measure 36 × 24 mm):

2,000 per mm × 36 mm = 72,000 pixels wide
2,000 per mm × 24 mm = 48,000 pixels tall
= 3,456,000,000 pixels
or
3,456 megapixels
or
≈3.5 gigapixels
Therefore: the theoretical maximum resolution for a full-frame sensor is roughly 3.5 gigapixels. That’s amazing…
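A quick sanity check of that arithmetic, as a minimal Python sketch (the 500 nm wavelength and the 36 × 24 mm sensor size are the assumptions from above):

```python
# Back-of-the-envelope: wavelength-limited pixel count for a full-frame sensor.
WAVELENGTH_NM = 500                 # assumed wavelength of visible light
SENSOR_W_MM, SENSOR_H_MM = 36, 24   # full-frame sensor dimensions

pixels_per_mm = 1_000_000 / WAVELENGTH_NM   # 1 mm = 1,000,000 nm -> 2,000/mm
width_px = pixels_per_mm * SENSOR_W_MM      # 72,000
height_px = pixels_per_mm * SENSOR_H_MM     # 48,000
total_px = width_px * height_px             # 3,456,000,000

print(f"{width_px:,.0f} x {height_px:,.0f} = {total_px:,.0f} pixels "
      f"(~{total_px / 1e9:.2f} gigapixels)")
```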

Optics

But it seems that the optics are the current limit:

“About 55 to 60 lpmm (line pairs per millimeter) is the max for the best quality glass currently available.”

http://www.velocityreviews.com/forums/t681591-highest-megapixels-possible-in-aps-cs.html

For a full-frame sensor (36 mm wide) that works out to 60 lp/mm × 36 mm = 2,160 line pairs across the frame, and at two pixels per line pair only about 4,320 × 2,880 pixels, i.e. roughly 12 megapixels (worked through in the sketch below)? No… that can’t be right. Anyone have any ideas here?
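Working the quoted figure through (a sketch assuming 60 lp/mm across the whole 36 × 24 mm frame and the Nyquist convention of two pixels per line pair):

```python
# Optics-limited pixel count from a lens resolution figure in line pairs/mm.
LP_PER_MM = 60                      # claimed limit of the best current glass
PX_PER_LP = 2                       # Nyquist: two pixels to resolve one line pair
SENSOR_W_MM, SENSOR_H_MM = 36, 24

width_px = LP_PER_MM * PX_PER_LP * SENSOR_W_MM    # 4,320
height_px = LP_PER_MM * PX_PER_LP * SENSOR_H_MM   # 2,880
print(f"{width_px:,} x {height_px:,} = {width_px * height_px / 1e6:.1f} megapixels")
# -> 4,320 x 2,880 = 12.4 megapixels
```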

Processing

Issues of how closely you can stack light sensors, and how we move on to super-resolution (where a longer capture time not only allows more photons to arrive, but lets them be interrogated for more information, with the best likely image computed from averages, so that the diffraction limit of the system is transcended and the noise calculated out), come into it after this. But a 21.6-quadrillion-pixel camera sounds pretty interesting.
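As a toy illustration of the averaging idea only (not a full super-resolution pipeline), stacking N noisy exposures of the same scene cuts the random noise by roughly a factor of √N; the scene and noise level below are made up for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)
scene = rng.uniform(0, 1, size=(64, 64))    # hypothetical ground-truth scene

def capture(noise_sigma=0.2):
    """One simulated exposure: the scene plus Gaussian sensor noise."""
    return scene + rng.normal(0, noise_sigma, size=scene.shape)

n_frames = 100
stacked = np.mean([capture() for _ in range(n_frames)], axis=0)

single_err = np.std(capture() - scene)      # ~0.2
stacked_err = np.std(stacked - scene)       # ~0.2 / sqrt(100) ≈ 0.02
print(f"single frame noise: {single_err:.3f}, "
      f"{n_frames}-frame average: {stacked_err:.3f}")
```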

Storage and processing capacity is just a matter of following Moore’s Law down the rabbit hole, so it’s not interesting from a theoretical point of view, though it is of course a major constraint for any practical implementation.


Bokeh and Sensor Size

Pogue discusses bokeh and depth of field in his updated article, and it’s quite interesting. It finally explains how sensor size actually does have a real (not merely relative, and so irrelevant) impact on depth of field and bokeh:

He refers to an article on Cambridge in Colour:
“As sensor size increases, the depth of field will decrease for a given aperture (when filling the frame with a subject of the same size and distance). This is because larger sensors require one to get closer to their subject, or to use a longer focal length in order to fill the frame with that subject. This means that one has to use progressively smaller aperture sizes in order to maintain the same depth of field on larger sensors. The following calculator predicts the required aperture and focal length in order to achieve the same depth of field (while maintaining perspective).

As an example calculation, if one wanted to reproduce the same perspective and depth of field on a full frame sensor as that attained using a 10 mm lens at f/11 on a camera with a 1.6X crop factor, one would need to use a 16 mm lens and an aperture of roughly f/18. Alternatively, if one used a 50 mm f/1.4 lens on a full frame sensor, this would produce a depth of field so shallow it would require an aperture of 0.9 on a camera with a 1.6X crop factor — not possible with consumer lenses!”
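The rule of thumb behind those examples reduces to scaling both focal length and f-number by the crop factor. A minimal sketch of that equivalence (assuming the simple crop-factor model; the actual Cambridge in Colour calculator is more detailed):

```python
# Equivalent focal length and aperture across sensor sizes, for the same
# framing, perspective and depth of field (simple crop-factor model).
def to_full_frame(focal_mm, f_number, crop_factor):
    """Crop-sensor setup -> full-frame equivalent."""
    return focal_mm * crop_factor, f_number * crop_factor

def to_crop(focal_mm, f_number, crop_factor):
    """Full-frame setup -> crop-sensor equivalent."""
    return focal_mm / crop_factor, f_number / crop_factor

print(to_full_frame(10, 11, 1.6))   # (16.0, 17.6)   -> ~16 mm at ~f/18
print(to_crop(50, 1.4, 1.6))        # (31.25, 0.875) -> needs ~f/0.9 on 1.6x
```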

Origins of Us

After a wonderful afternoon with Tom, Kirstin, Ella and Miles, Emily and I watched Origins of Us (on DVD), presented by Dr. Alice Roberts. It’s a very well-produced BBC series on human evolution, showing how with Homo erectus we became adapted not just to walking upright but to distance running, with a long abdomen for twisting, sweat systems and so on. Becoming upright then left us with hands no longer needed for locomotion, and we became more adept at using tools (more so than at making them, modern measurements show). Check it out, it’s well worth the watch. We’ve only seen episode 1 of 3 but are looking forward to the next two :-)
