Month: February 2013


A wonderful meeting at the British Museum today. I learnt that cuneiform may have started in Sumeria because the language was monosyllabic, and therefore easier to record. How fascinating is that? More to follow…


What is the maximum (theoretical) possible camera sensor resolution?

The basic issue with maximum sensor resolution is the wavelength, or 'size', of light:

Visible light has a wavelength of around 500 nm, so about 2,000 wavelengths fit in one millimetre.

That would make the maximum possible resolution/light sensitivity:

(Where 35mm sensors are 36*24mm):

2,000 per mm × 36 mm = 72,000 pixels wide
2,000 per mm × 24 mm = 48,000 pixels high
72,000 × 48,000 = 3,456,000,000 pixels
= 3,456 megapixels
≈ 3.5 thousand megapixels (3.5 gigapixels)

Therefore: the theoretical maximum resolution for a full-frame sensor is about 3,500 megapixels. That's amazing…
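The arithmetic above can be checked with a few lines of Python (the 500 nm figure and one-pixel-per-wavelength assumption are taken straight from the reasoning above):

```python
# Back-of-the-envelope: wavelength-limited pixel count for a 36 x 24 mm
# full-frame sensor, assuming one pixel per 500 nm wavelength of visible light.
WAVELENGTH_NM = 500
SENSOR_W_MM, SENSOR_H_MM = 36, 24

pixels_per_mm = 1_000_000 // WAVELENGTH_NM   # 1 mm = 1,000,000 nm -> 2,000
width_px = pixels_per_mm * SENSOR_W_MM       # 72,000
height_px = pixels_per_mm * SENSOR_H_MM      # 48,000
total_px = width_px * height_px              # 3,456,000,000

print(f"{width_px:,} x {height_px:,} = {total_px / 1e6:,.0f} megapixels")
```

Running it prints 72,000 x 48,000 = 3,456 megapixels, matching the figure above.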


But it seems that optics are the current limit:

“About 55 to 60 lpmm (line pairs per millimeter) is the max for the best quality glass currently available.”

For a full-frame sensor (36 mm wide), 60 lp/mm gives 60 × 36 = 2,160 line pairs across the frame, or roughly 4,320 pixels at two pixels per line pair. That seems surprisingly low. Anyone have any ideas here?
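As a sanity check, here is the same lens-limited estimate in Python. The two-pixels-per-line-pair (Nyquist) sampling factor is my assumption, not something stated in the quote:

```python
# Hedged sketch: pixel count implied by a lens resolving 60 line pairs/mm
# on a 36 x 24 mm sensor, sampled at two pixels per line pair (Nyquist).
LP_PER_MM = 60
SENSOR_W_MM, SENSOR_H_MM = 36, 24
PX_PER_LP = 2  # assumption: Nyquist sampling, two pixels per line pair

width_px = LP_PER_MM * SENSOR_W_MM * PX_PER_LP    # 4,320
height_px = LP_PER_MM * SENSOR_H_MM * PX_PER_LP   # 2,880
total_mp = width_px * height_px / 1e6

print(f"{width_px} x {height_px} = {total_mp:.1f} megapixels")
```

That works out to 4,320 × 2,880 ≈ 12.4 megapixels, which would make the glass, not the sensor, the bottleneck by a factor of hundreds.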


After this come questions of how closely you can stack light sensors, and of super-resolution: longer capture times not only allow more photons to arrive, but the readings can be interrogated for more information, with the most likely image computed from averages and noise calculated out, so that the diffraction limit of the system is transcended. Still, a 21.6-quadrillion-pixel camera sounds pretty interesting.
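The noise-averaging part of that idea is easy to illustrate. This toy sketch (my own illustration, not a real super-resolution pipeline, which also needs sub-pixel shifts and much more) just shows that averaging many noisy readings of the same value converges on the true value:

```python
import random

# Toy illustration: many noisy captures of the same scene value,
# averaged together, converge on the true value as the noise cancels.
random.seed(0)  # fixed seed so the result is reproducible
TRUE_VALUE = 100.0
NOISE_SIGMA = 10.0

readings = [TRUE_VALUE + random.gauss(0, NOISE_SIGMA) for _ in range(10_000)]
estimate = sum(readings) / len(readings)

print(f"single-reading noise ~{NOISE_SIGMA}, averaged estimate: {estimate:.1f}")
```

With 10,000 samples the standard error of the mean drops to about 0.1, a hundredfold improvement over a single reading.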

Storage and processing capacity are just a matter of following Moore's Law down the rabbit hole, so that's not interesting from a theoretical point of view, but it is of course a major constraint for a practical implementation.


Bokeh and Sensor Size

Pogue discusses bokeh and depth of field in his updated article, and it's quite interesting. It finally explains how sensor size really does have a practical (not merely relative) impact on depth of field and bokeh.

He refers to an article on Cambridge in Colour:
“As sensor size increases, the depth of field will decrease for a given aperture (when filling the frame with a subject of the same size and distance). This is because larger sensors require one to get closer to their subject, or to use a longer focal length in order to fill the frame with that subject. This means that one has to use progressively smaller aperture sizes in order to maintain the same depth of field on larger sensors. The following calculator predicts the required aperture and focal length in order to achieve the same depth of field (while maintaining perspective).

As an example calculation, if one wanted to reproduce the same perspective and depth of field on a full frame sensor as that attained using a 10 mm lens at f/11 on a camera with a 1.6X crop factor, one would need to use a 16 mm lens and an aperture of roughly f/18. Alternatively, if one used a 50 mm f/1.4 lens on a full frame sensor, this would produce a depth of field so shallow it would require an aperture of 0.9 on a camera with a 1.6X crop factor — not possible with consumer lenses!”
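The rule in the quote boils down to scaling both focal length and f-number by the crop factor. A minimal sketch of that equivalence (the function name is mine, chosen for illustration):

```python
# Sketch of the Cambridge in Colour equivalence rule: to keep the same
# perspective and depth of field when moving to a larger sensor, multiply
# both focal length and f-number by the crop factor.
def equivalent_on_full_frame(focal_mm: float, f_number: float,
                             crop_factor: float) -> tuple[float, float]:
    """Full-frame settings equivalent to the given crop-sensor settings."""
    return focal_mm * crop_factor, f_number * crop_factor

# The article's example: 10 mm at f/11 on a 1.6X crop body...
focal, aperture = equivalent_on_full_frame(10, 11, 1.6)
print(f"{focal:.0f} mm at f/{aperture:.0f}")  # 16 mm at f/18

# ...and in reverse: 50 mm f/1.4 on full frame divided by 1.6
print(f"{50 / 1.6:.1f} mm at f/{1.4 / 1.6:.1f}")  # ~31.3 mm at f/0.9
```

Both results match the quote: roughly a 16 mm lens at f/18, and the impossible f/0.9 on the crop body.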