
How Good Are Your Numbers?


It used to be simple.  Well, sort of simple anyway.  How sharp an image is depends on two things: the lens and the image-capturing device, whether film or an electronic sensor, and both vary greatly.

In equipment tests, we used to read a great deal more about lens performance than we do now.  There are many factors which determine whether or not you would want to buy a particular lens, such as rectilinearity, chromatic aberration, flare rejection, contrast, speed, and sundry other things unrelated to this piece.  Sharpness is interesting, though, because there is more to it than just resolution.  The resolving power of a lens is stated as the number of line pairs per millimetre it can distinguish.  But things are further complicated by apparent sharpness being a combination of resolution and contrast.  A low-contrast lens of very high resolving performance will ‘look’ like it has low resolving power in the real world of making photographic images.  But for the purposes of this rant, let’s imagine that we’re using a top-notch lens of impeccable pedigree and excellent light-bending manners.

The other side to this coin is the bit that actually captures the image.  Films resolve well or they resolve poorly, depending on their structure: the grain’s size, shape and distribution, the type of emulsion and the manner of its processing.  Electronic sensors don’t get a free ride either.  They vary massively.  And here’s the funny thing.  Most people think that the whole story is that the higher the pixel count, the better the resolving power of the sensor.  And the camera manufacturers’ marketing people know this.

It’s funny how it only appears to be the number that matters.  It has always amazed me that the obvious penny so seldom drops: there must be a difference between an expensive 12 megapixel camera and the 12 megapixel camera in a phone.  But nope, it’s the same number, so mine’s as big as yours, so “stick that in your pipe and smoke it.”  Very outdated reference there.  Sorry, couldn’t resist.

But here’s the thing … the photo sites (pixels) in the phone are teeny-weeny … no room for big ones, and those in the big camera are bigger … more room.  A bigger pixel can gather more light than a small one, so the signal it produces needs less amplification.  Also, the teeny-weeny photo sites are much nearer to each other on the phone than they are on its bigger sibling, and if you put two signal-generating devices close to each other they have a nasty habit of inducing interference in one another.  In other words, they talk to each other.  Bad idea.  And to make the whole situation worse than it would be anyway, amplifying the weaker signals (from the smaller pixels) adds another layer of interference.  All this interference is called “noise.”  Basically it means that the data (the numbers) allocated to that site address (pixel) are wrong.
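If you want a feel for why the bigger photo site wins, a toy simulation helps.  The sketch below is just that, a sketch: the photon counts are invented, not taken from any real sensor, but the underlying fact is real enough.  Photon arrival is random, so the noise on a site grows only as the square root of the light it collects, which is why the bigger bucket ends up with the cleaner signal before any amplification is even needed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Photon arrival is random (Poisson), so the shot noise on a photo site is
# roughly the square root of the number of photons it collects.
# The photon counts below are made up purely to illustrate the scaling.
small_pixel_photons = rng.poisson(lam=400, size=100_000)    # tiny phone photo site
big_pixel_photons   = rng.poisson(lam=8_000, size=100_000)  # roomier camera photo site

for name, photons in [("small pixel", small_pixel_photons),
                      ("big pixel", big_pixel_photons)]:
    snr = photons.mean() / photons.std()
    print(f"{name}: mean signal {photons.mean():.0f}, "
          f"noise {photons.std():.1f}, signal-to-noise {snr:.1f}")
```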

There are various ways to reduce this interference, one of which always makes me smile at the marvel of working on a microscopic scale.  Some sensors have trenches dug between the photo sites, reducing the crosstalk between them.  They call it deep trench isolation.  Can you imagine the equipment which can do that?

Many years ago I had a 5MP Olympus camera, with which I earned my living.  I was then seduced into buying a 9MP Fuji camera.  The lenses were comparable but the Olympus outperformed the Fuji hands down, even though the native (not perceived) resolution of the Fuji was higher.  The culprit was “noise.”  Fuji had crammed more pixels onto their sensor, which made it a good marketing bet, but the noise it produced made it a very poor photographic bet.

There’s a fun way you can reduce this noise, and the effect can be quite arresting.  The important thing is to start with a good image.  It might seem obvious, but you can’t make a silk purse out of a sow’s ear.  You can however “see” the silk purse, if you have one in the first place, by reducing the noise.  You see, oddly, noise is randomly generated.  If it wasn’t, this idea wouldn’t work.  Put your camera on a tripod and make an exposure of a scene which is repeatable (i.e. a still life), then make another identical exposure, process the two files in exactly the same way, and finally look at them: you will think you are looking at two identical pictures.  Until you go “pixel peeping,” that is.  Zoom in so you can examine the individual pixels.  When you compare the same pixel addresses in the two image files you will start to identify anomalies.  Differences which, on the face of it, shouldn’t be there.  But they are.  I know … odd!  But you can use this oddity to your advantage.
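A toy sketch makes the oddity concrete.  Nothing below comes from a real camera: the “scene” is a flat patch of invented values and the noise is plain Gaussian, standing in for whatever the sensor actually does, but it shows why two exposures of an unchanged scene still disagree pixel by pixel.

```python
import numpy as np

rng = np.random.default_rng(1)

# A made-up "scene": the true, noise-free pixel values (not real image data).
scene = np.full((4, 4), 100.0)

# Two "exposures" of the same scene: identical signal, independent random noise each time.
exposure_1 = scene + rng.normal(scale=5.0, size=scene.shape)
exposure_2 = scene + rng.normal(scale=5.0, size=scene.shape)

# Pixel peeping: the same pixel address holds different numbers in the two files,
# even though nothing in the scene moved.
print(np.round(exposure_1 - exposure_2, 1))
```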

Take four identical images of your chosen non-moving scene, being very fussy about not moving the tripod even slightly.  Then process the files and stack them in layers in Photoshop, applying 100% opacity to the bottom layer, 75% opacity to the next layer, 50% to the next and 25% to the top one.  Then flatten the stack and save the result as a new file.  Now you can compare that file to any of the originals.
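What the opacity stack is driving at is a per-pixel average of the four frames: the repeated scene reinforces itself while the random noise partly cancels.  (If you want the frames weighted exactly equally, the usual Photoshop trick is to give each layer an opacity of one over its position from the bottom: 100%, 50%, 33%, 25%.)  Here is a minimal sketch of the same averaging done outside Photoshop in Python, assuming four aligned 8-bit files with hypothetical names.

```python
import numpy as np
from PIL import Image

# Hypothetical filenames: four tripod-locked exposures, processed identically.
frames = ["still_life_1.tif", "still_life_2.tif",
          "still_life_3.tif", "still_life_4.tif"]

# Load each frame as floating point so the average isn't rounded prematurely.
stack = np.stack([np.asarray(Image.open(name), dtype=np.float64) for name in frames])

# Per-pixel average: the scene is identical in every frame, the noise is not.
averaged = stack.mean(axis=0)

# Assumes 8-bit files; scale the clip to 65535 for 16-bit ones.
Image.fromarray(np.clip(averaged, 0, 255).astype(np.uint8)).save("still_life_stacked.tif")
```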

You’ll be staggered by the noise reduction.  Nowadays some cameras do this automatically, but a lot of cameras with excellent optics do not, and their optical prowess is seldom truly seen because of the noise veil obscuring the truth.  Now you’ve had a look at the noise reduction, have a look at the image as you would normally view it (big computer monitor or print … not on your phone please … don’t get me started!) and you’ll see what your lens can really do.  You’ll think you’ve upgraded something but you haven’t.  You’ve just optimised what you already had.  By the way, the more image files you use the better the outcome, but the law of diminishing returns comes into play: random noise drops roughly with the square root of the number of frames, so four frames roughly halve it, but you’d need sixteen to halve it again.
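You can see that square-root behaviour in a quick simulation.  Again, the numbers are invented rather than measured from any camera; the point is only the shape of the curve and why the returns diminish.

```python
import numpy as np

rng = np.random.default_rng(2)

# Pure noise, no scene: how much of it survives averaging N frames.
single_frame_noise = 10.0
for n_frames in (1, 2, 4, 8, 16, 64):
    frames = rng.normal(scale=single_frame_noise, size=(n_frames, 100_000))
    residual = frames.mean(axis=0).std()
    print(f"{n_frames:3d} frames -> residual noise of roughly {residual:.2f}")
```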

So, the next time a salesman tries to upgrade you using numbers alone ask him … “how good are your numbers?”