VOLUME 27, NUMBER 8
Publication Date: 08/1/2012

What Is Resolution?
Fig. 1. In this thin lens model, an object of height H is projected onto the image plane as an image of height h.

Automated Optical Inspection (AOI) systems rely on something too often taken for granted: the lens system. The lens system provides the image acquisition capability needed in every AOI system — and in every camera, whether for AOI, machine vision, security applications, entertainment video, movies and still photography, or even pictures taken with a cell phone.

The concepts described here are common to all machine vision systems making use of an objective or lens, and camera or sensor.

By way of simple examples, the following will be explained:

Optical magnification is the ratio between an object's projected size, most commonly on the surface of the sensor, and its actual size.

Optical resolution is the smallest distance between two distinct objects that can still be reproduced as distinct at the projected surface.

Pixel resolution or pixel count is the number of pixels comprising an image, or the number of pixels on a sensor.

Pixel size is the physical size of the object area represented by one pixel of the object's projected image on the sensor.

Radiometric image resolution is the number of discrete levels present in the image. In a gray scale image, this is the number of gray levels used to represent intensity or reflectivity.
Fig. 2. Two points ΔH apart appear on the image plane δh apart.


Spatial image resolution is the smallest distance between two lines that can be resolved in an image, or the number of independent pixel values per unit length.

Image acquisition is the process by which objects become digital images, which can then be processed. Once the object is illuminated, an image can be acquired.

Each of the parts described above comes together to produce an image from which image processing algorithms can extract useful information. These algorithms ultimately determine the system resolution.

System Resolution is the smallest difference in a test characteristic that can be resolved, or detected, by the system.

An object seen by an AOI system is lit by the illumination system. Light rays emitted from the source follow the laws of reflection. Reflected rays are captured by the lens to form an image, or object projection.

Real systems make use of complex optical objectives rather than simple lenses. However, the principle can be well understood with a thin lens model (Figure 1).

Optical magnification, optical resolution, and wavelength.
The sidebar (see right, below) mathematically develops the relationship between optical magnification, optical resolution, and wavelength for the purposes of this discussion.
Fig. 3. Points are always projected on the image plane as a small disk known as an Airy Disk.


Best possible optical resolution.
For optical inspection the visible spectrum may be used, from ~400nm (violet, at the ultraviolet boundary) to ~750nm (red, at the infrared boundary).

λblue = 400nm = 0.40µm; λred = 750nm = 0.75µm

These values, 0.4–0.75µm, represent the best possible optical resolution. Typically, optical aberrations (from non-Gaussian optics) result in much worse values.

The projection on the image plane is a two-dimensional continuous projection of light energy.

In an AOI system a camera is used to capture the projection; the sensor is located at the image plane.


Different types of sensor are used in different types of cameras; however, regardless of the technology, the conversion of the analog, continuous light distribution into a digital image has three main steps.

Spatial Sampling, in which the continuous light distribution is converted from light energy to voltage. The continuous distribution becomes discrete; however, the signal remains analog.

Temporal Sampling is the snapshot, freezing the sampled voltages in time. This can be achieved with a trigger or an electronic shutter.

Quantizing is the final step, in which analog-to-digital convertors map the continuous voltage values onto a finite range of discrete values.
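As a simple illustration of the quantizing step, the following Python sketch (with hypothetical values; a real convertor does this in hardware) maps a continuous voltage onto 8-bit levels:

```python
# Hypothetical sketch of the quantizing step: a continuous sensor
# voltage in [0, v_max] is mapped to one of 2**bits discrete levels,
# as an 8-bit analog-to-digital convertor would do.

def quantize(voltage, v_max=1.0, bits=8):
    """Map an analog voltage in [0, v_max] to a discrete level."""
    levels = 2 ** bits                                  # 256 levels for 8 bits
    level = int(voltage / v_max * (levels - 1) + 0.5)   # round to nearest level
    return max(0, min(levels - 1, level))               # clamp to valid range

print(quantize(0.0))   # 0   (no light)
print(quantize(1.0))   # 255 (full scale)
print(quantize(0.5))   # 128 (mid scale)
```

Note that the continuous input range is collapsed to 256 values; any detail finer than one level is lost, which is why radiometric resolution matters later in this article.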

Spatial Sampling defines the pixel resolution. The sensor is made up of an evenly spaced grid of elements; each element measures the light falling on it as a voltage, which is converted to a discrete value by the analog-to-digital convertor.

The resulting image is a mosaic of these picture elements, or pixels. A sensor with 2048 columns and 1536 rows of pixels has a resolution of 3,145,728 pixels. Sensor pixel resolution is typically described in Mega Pixels (millions of pixels); in this example, 3 Mega Pixels.
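The pixel-count arithmetic above can be verified in a couple of lines (Python, purely for illustration):

```python
# Pixel resolution of the sensor from the example:
# 2048 columns by 1536 rows of pixels.
columns, rows = 2048, 1536
pixel_count = columns * rows
print(pixel_count)                      # 3145728
print(round(pixel_count / 1_000_000))   # about 3 "Mega Pixels"
```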

The physical size of a single element of the sensor, or pixel, varies between 5 and 15µm. More often, Pixel Size p is taken to mean the logical pixel size, which is the ratio of the dimension l of an object to the number of pixels n its projection occupies:

p = l / n
The logical pixel size depends on physical pixel size, optical magnification and spatial sampling.

If 1mm of a real object is projected onto 40 pixels, the logical pixel size is 25µm. For sensors with square pixels, which is the most common case, the logical pixel size in X and Y will be identical.
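The logical pixel size calculation can be sketched as follows (the function name is ours, introduced only for illustration):

```python
# Logical pixel size p = l / n: the dimension l of an object divided
# by the number of pixels n that its projection occupies on the sensor.

def logical_pixel_size_um(object_length_mm, pixels):
    """Return the logical pixel size in micrometers."""
    return object_length_mm * 1000.0 / pixels

# The article's example: 1mm of object projected onto 40 pixels.
print(logical_pixel_size_um(1.0, 40))   # 25.0 (µm)
```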
Fig. 5. Image projected by the lens of points A and B is captured by the sensor as A' and B'.


We have already seen the analog-to-digital conversion during quantizing of the image on the sensor: continuous values over a range are compartmentalized into discrete values.

Radiometric Resolution is the number of discrete levels that this value may have. Commonly used sensors use an 8-bit representation, resulting in 256 different values.

The higher the Radiometric Resolution, or the more discrete values available, the better small differences in object characteristics can be represented.
Fig. 6. An object projected onto the pixel grid.


Overall system resolution depends on all the factors explained above, but it is not strictly limited by them: smart algorithms make it possible to perform very precise measurements. Consider the imaging of a laser scan of a PCB. Scanning the PCB produces a height profile in which the height of the scanned object is directly proportional to the displacement of the line from the base line. The accuracy of simple pixel counting is limited to the logical pixel size; in the example image the logical pixel size is 20µm.

The number of pixels between the base line and the deposit may be counted in 20µm steps. However, this image has a radiometric resolution of 256. Using sub-pixel line detection algorithms, the exact position of the line can be determined.

Each line has a certain width, with somewhat blurred edges. The middle of each line has pixels with high intensity (above 220); towards the edges the pixels become darker until there is no longer a measurable reflection (value of 0). Each pixel row is a one-dimensional function of integer values and can be plotted. To find the edges, the second-order derivative of the line is computed.
Fig. 7. Continuous distribution (green curve) is converted to discrete voltages per pixel.


Edges are located at the zero crossings of the second-order derivative. By interpolating the points around a zero crossing we can define a continuous function, which gives the exact position of the zero crossing with sub-pixel accuracy. From the sub-pixel result a more accurate line can be calculated, independent of the logical pixel size. The described technique can deliver a tenfold improvement in accuracy: in the case of a 20µm logical pixel, the accuracy of the measurement can be up to 2µm.
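The zero-crossing idea can be sketched as a small toy example. This is an illustrative sketch, not the vendor's actual algorithm: the intensity row below is synthetic, the second derivative is a central difference, and the zero crossing between two samples of opposite sign is located by linear interpolation.

```python
# Toy sub-pixel edge location: compute the discrete second derivative
# of a 1-D intensity row and linearly interpolate its zero crossing.

def second_derivative(row):
    """Central-difference second derivative (two samples shorter than row)."""
    return [row[i - 1] - 2 * row[i] + row[i + 1] for i in range(1, len(row) - 1)]

def subpixel_edge(row):
    """Return the first zero crossing of the second derivative,
    as a sub-pixel position in the row's own pixel coordinates."""
    d2 = second_derivative(row)
    for i in range(len(d2) - 1):
        if d2[i] > 0 >= d2[i + 1] or d2[i] < 0 <= d2[i + 1]:
            # Linear interpolation between samples i and i+1;
            # the +1 shifts back to the original row's indexing.
            frac = d2[i] / (d2[i] - d2[i + 1])
            return i + 1 + frac
    return None

# A blurred edge: dark pixels rising to bright ones.
row = [0, 0, 10, 60, 180, 230, 240, 240]
print(subpixel_edge(row))   # 3.5 — the edge lies between pixels 3 and 4
```

Here pixel counting alone could only say the edge is at pixel 3 or 4; the interpolated zero crossing pins it to 3.5, a position finer than one logical pixel.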

Contact: ORPRO Vision GmbH, Hefehof 24, 31785 Hameln, Germany, +49(0) 5151 809 44 0 Web:
http://www.orprovision.com
