System Resolution is the smallest difference in a test characteristic that can be resolved, or detected, by the system.
An object seen by an AOI is lit by the illumination system. Light rays emitted from the source follow the laws of reflection. Reflected rays are captured by the lens to form an image, or object projection.
Real systems make use of complex optical objectives rather than simple lenses. However, the principle can be well understood with a thin-lens model (Figure 1).
Optical magnification, optical resolution, and wavelength.
The sidebar (see right, below) mathematically develops the relationship between optical magnification, optical resolution, and wavelength for the purposes of this discussion.

Fig. 3. A point is always projected onto the image plane as a small disk, known as the Airy disk.

Best possible optical resolution.
For optical inspection the visible spectrum may be used, from ~400 nm (bordering the ultraviolet) to ~750 nm (bordering the infrared):
λ(blue) = 400 nm = 0.40 µm; λ(red) = 750 nm = 0.75 µm
These values, 0.40–0.75 µm, represent the best possible Optical Resolution. Typically, optical aberrations (from non-Gaussian optics) will result in much worse values.
The projection on the image plane is a two-dimensional, continuous distribution of light energy.
In an AOI system a camera is used to capture this projection; the sensor is located at the image plane.


Different types of sensor are used in different types of cameras. Regardless of the technology, however, the conversion of the analog, continuous light distribution into a digital image has three main steps.
Spatial Sampling, in which the continuous light distribution is converted from light energy to voltage at each sensor element. The distribution becomes spatially discrete; however, the signal remains analog.
Temporal Sampling is the snapshot, freezing the sampled voltages in time. This can be achieved with a trigger or an electronic shutter.
Quantizing is the final step, in which an analog-to-digital converter takes the analog voltage and maps the continuous values onto a finite range of discrete values.
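The three steps can be sketched in a few lines of Python. This is a minimal illustration, not a real sensor model; the function name and the Gaussian-spot profile are assumptions for the example:

```python
import numpy as np

def digitize_light(profile, n_pixels=8, n_bits=8):
    """Turn a continuous 1-D intensity function into digital pixel values."""
    # 1) Spatial Sampling: measure the light falling on each of the
    #    evenly spaced sensor elements (discrete in space, still analog).
    x = np.linspace(0.0, 1.0, n_pixels)
    voltages = profile(x)                 # one analog voltage per element
    # 2) Temporal Sampling: the snapshot, freezing the voltages at one
    #    instant (a trigger or electronic shutter).
    snapshot = voltages.copy()
    # 3) Quantizing: an ADC maps the continuous voltage range onto a
    #    finite set of 2**n_bits discrete levels.
    levels = 2 ** n_bits
    digital = np.clip((snapshot * (levels - 1)).round(), 0, levels - 1)
    return digital.astype(int)

# A smooth spot as a stand-in for the continuous light distribution:
image_row = digitize_light(lambda x: np.exp(-((x - 0.5) ** 2) / 0.02))
```

With 8 bits, every resulting pixel value lies in the range 0–255.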
Spatial Sampling defines the pixel resolution. The sensor is made up of an evenly spaced grid of elements; each element measures the light falling on it as a voltage, which is converted to a discrete value by the analog-to-digital converter.
The resulting image is a mosaic of these picture elements, or pixels. A sensor with 2048 columns and 1536 rows of pixels has a resolution of 3,145,728 pixels. Sensor pixel resolution is typically described in megapixels (millions of pixels); in this example, 3 megapixels.
The physical size of a single element of the sensor, or pixel, varies between 5 and 15µm. More often, Pixel Size p is taken to mean the logical pixel size, which is the ratio of the dimension l of an object to the number of pixels n its projection occupies:
p = l / n
The logical pixel size depends on physical pixel size, optical magnification and spatial sampling.
If 1 mm of a real object is projected onto 40 pixels, the logical pixel size is 25 µm. For sensors with square pixels, which is the most common case, the logical pixel size in X and Y is identical.
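The worked example follows directly from p = l/n; the helper name below is illustrative:

```python
def logical_pixel_size_um(object_length_mm, n_pixels):
    """Logical pixel size p = l / n, returned in micrometres."""
    return object_length_mm * 1000.0 / n_pixels

# 1 mm of a real object projected onto 40 pixels:
p = logical_pixel_size_um(1.0, 40)   # 25.0 µm
```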

Fig. 5. Image projected by the lens of points A and B is captured by the sensor as A' and B'.

We have already seen the analog-to-digital conversion during quantizing of the image on the sensor: continuous values over a range are mapped to discrete values.
Radiometric Resolution is the number of discrete levels this value may have. Commonly used sensors use an 8-bit representation, resulting in 256 different values.
The higher the Radiometric Resolution, or the more discrete values available, the better small differences in object characteristics can be represented.
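The relationship between bit depth and the finest representable intensity difference can be sketched as follows (function names are illustrative):

```python
def grey_levels(bits):
    """Number of discrete values at a given radiometric resolution."""
    return 2 ** bits

def smallest_step(bits, full_scale=1.0):
    """Smallest intensity difference the representation can express."""
    return full_scale / (grey_levels(bits) - 1)

# 8 bits give 256 levels; 12 bits give 4096 levels, so a 12-bit sensor
# can represent roughly 16x smaller intensity differences.
```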

Fig. 6. An object projected onto the pixel grid.

Overall system resolution depends on all of the factors explained above, but it is not strictly limited by them: smart algorithms make very precise measurements possible. Consider the imaging of a laser scan of a PCB. The scan produces a height profile in which the height of the scanned object is directly proportional to the displacement of the laser line from the base line. The accuracy of simple pixel counting is limited to the logical pixel size; in the example image, the logical pixel size is 20 µm.
The number of pixels between the base line and the deposit can therefore only be counted in 20 µm steps. However, this image has a radiometric resolution of 256 levels, and using sub-pixel line-detection algorithms the exact position of the line can be determined.
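The pixel-counting limit can be written down directly; the helper name and the 7-pixel example are illustrative:

```python
def height_from_pixel_count(n_pixels, logical_pixel_um=20.0):
    """Height by simple pixel counting: resolvable only in 20 um steps."""
    return n_pixels * logical_pixel_um

# e.g. 7 pixels between base line and deposit:
height = height_from_pixel_count(7)   # 140.0 um, +/- one 20 um step
```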
Each line has a certain width, with somewhat blurred edges. The middle of each line has pixels with high intensity (above 220); towards the edges the pixels become darker until there is no longer a measurable reflection (value 0). Each pixel row is a one-dimensional function of integer values and can be plotted. To find the edges, the second-order derivative of this function is computed.

Fig. 7. Continuous distribution (green curve) is converted to discrete voltages per pixel.

Edges are located at the zero-crossings of the second-order derivative. By interpolating the points around a zero crossing we can define a continuous function, which gives the exact position of the zero crossing with sub-pixel accuracy. Using this sub-pixel result, a more accurate line position can be calculated, independent of the logical pixel size. The described technique can deliver a tenfold improvement in accuracy: in the case of a 20 µm logical pixel, the accuracy of the measurement can be up to 2 µm.
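A minimal sketch of the technique, assuming a 1-D intensity row and linear interpolation around the zero crossing (the function name and sample data are illustrative, not the vendor's implementation):

```python
import numpy as np

def subpixel_zero_crossings(row):
    """Locate edges of a 1-D intensity profile with sub-pixel accuracy.

    Edges sit where the second-order derivative crosses zero; each
    crossing is refined by linear interpolation between the two
    bracketing samples.
    """
    d2 = np.diff(np.asarray(row, dtype=float), n=2)
    crossings = []
    for i in range(len(d2) - 1):
        if d2[i] * d2[i + 1] < 0:            # sign change: zero crossing
            frac = d2[i] / (d2[i] - d2[i + 1])
            # np.diff(n=2) shifts indices by one relative to the row.
            crossings.append(i + 1 + frac)
    return crossings

# A blurred step edge sampled on the pixel grid:
row = [0, 0, 10, 60, 150, 220, 250, 255, 255]
edges = subpixel_zero_crossings(row)
```

For this sample row the single edge lands between pixels 3 and 4, at a fractional position no integer pixel count could express.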
Contact: ORPRO Vision GmbH, Hefehof 24, 31785 Hameln, Germany, +49(0) 5151 809 44 0 Web: