Understanding resolution for your vision application
Binder UK Ltd
Posted to News on 21st Nov 2017, 00:00

Many who are new to machine vision and image processing struggle to select a camera with the correct resolution for their application. A calculation can give you a good head start in determining your camera needs, but as Paul Scardino of Baumer explains, ultimately a practical evaluation is critical for success.

In selecting the correct resolution for a vision camera, several real-world issues come into play. Front or back lighting plays a crucial part. Camera calibration is necessary to correct for projection effects, such as the appearance of parallel rails converging in the distance. Part placement and lens type are hugely influential, especially when considering part thickness. Real-world cameras and lenses have to be taken into account. And software differences can have a big impact on the final image.

A sensor can be thought of as an x and y grid of pixels, which forms an image using light. The resolution of a sensor is given by the number of pixels in the x and y directions. For example, a common VGA sensor has a resolution of 640 × 480: 640 pixels in the x direction and 480 pixels in the y direction. This sensor has 307,200 pixels (640 × 480) and is therefore referred to as a 0.3 megapixel sensor.

A simple formula for calculating a starting point for the resolution a camera needs in either the x or y direction is:

Pixels needed = (Field of view × Sub-pixel accuracy) / Tolerance

Here the sub-pixel accuracy is the fraction of a pixel that the measurement software can resolve (for example, 0.25 for quarter-pixel software).

To understand this better, let’s look at a simple example. Let’s say we wish to measure a small part that always fits inside a 1 × 1 in area, and we wish to measure with a tolerance of 0.001 in. In this case, the FOV is 1 in, which is what the camera will always see. The tolerance is 0.001 in, and the software we are using is accurate to 1⁄4 of a pixel. Plugging those values in, we find we need a camera with at least 250 pixels across the field of view. Thus, a standard VGA camera would most likely do the job.
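
As a minimal sketch of this calculation (in Python; the function name pixels_needed and the quarter-pixel default are invented here for illustration, not taken from the article):

    # Rule-of-thumb pixel count for one axis of the sensor.
    # fov and tolerance are in the same unit (inches here);
    # subpixel_accuracy is the fraction of a pixel the software can resolve.
    def pixels_needed(fov, tolerance, subpixel_accuracy=0.25):
        return fov * subpixel_accuracy / tolerance

    print(pixels_needed(fov=1.0, tolerance=0.001))  # 250.0, so a VGA camera suffices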

Because many cameras do not have square sensors, such as our 640 × 480 camera, you can use the direction with the higher pixel count and gain accuracy. If we measured something in the 1 in FOV using the camera’s 640-pixel direction, the achievable tolerance would be (1 in) × (0.25) / 640, or around 0.0004 in. If the other direction is used, the achievable tolerance would be (1 in) × (0.25) / 480, or around 0.0005 in.
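
Turning the same relationship around gives the tolerance achievable along each axis of a given sensor (again only an illustrative sketch, reusing the quarter-pixel assumption):

    # Tolerance achievable along one axis with the given pixel count.
    def achievable_tolerance(fov, pixels, subpixel_accuracy=0.25):
        return fov * subpixel_accuracy / pixels

    print(achievable_tolerance(fov=1.0, pixels=640))  # ~0.00039 in
    print(achievable_tolerance(fov=1.0, pixels=480))  # ~0.00052 in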

This simple calculation should give you a starting point for choosing a camera with the correct resolution. However, it is based on a simplified view of a complicated vision system. In the real world, lighting is never perfect, parts are not infinitely thin, and parts cannot be perfectly placed directly under a perfect camera using a perfect lens. We therefore need to take several real-world issues into consideration:

  • Lighting
  • 2D Camera calibration
  • Part placement and lens type
  • Real world cameras and lenses
  • Software

A vision system uses sensors and light to create an image which is then processed. Thus, the light that is used to form this image is extremely important. Depending on what you are trying to measure, different kinds of lighting can be used. It is important to know how different lighting setups will change the image that will then be processed.

Back lighting is one of the best and most preferred methods for obtaining accurate measurements in a vision system. This is because the light travels directly from the source into the camera, so shadows and stray light are minimised. The result is a crisp, clean image with robust edges and plenty of contrast. In some cases back lighting will not be an option, or you will also need to inspect features on the top of your part; in those cases you will need to front light it as well.

It is important to test with different lighting and then look at the resulting image. Keep in mind that the software you are using may or may not always pick up the correct edges or features when measuring your parts. If your software is 1⁄4 sub-pixel accurate, it is only that accurate if the lighting produces an image of sufficient quality for the software to work as expected.

2D camera calibration

If we were to look at a pair of parallel lines stretching off into the distance, projection would make the lines appear to converge. Just as the human visual system has this issue, so does a vision system. Thus, it is not accurate simply to say that 1 pixel is 0.0015625 in (1 in divided by 640 pixels) everywhere in the image. In a modern vision system, this is corrected by camera calibration, so that the camera reports back the true dimensions of what you are measuring. The standard way to calibrate a vision camera is to place something of known size into the camera’s FOV and then use four or more points to set up the camera calibration.
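
The article does not name any particular software, but as one hedged illustration, this kind of planar (2D) calibration can be expressed as a homography, sketched here with OpenCV and NumPy; the four point coordinates are invented purely for the example:

    import numpy as np
    import cv2

    # Pixel locations of four known marks on the calibration target...
    pixel_pts = np.float32([[102, 98], [538, 101], [535, 420], [99, 417]])
    # ...and their true positions, in inches, on the calibration plane.
    world_pts = np.float32([[0.0, 0.0], [1.0, 0.0], [1.0, 0.75], [0.0, 0.75]])

    # Homography mapping the image plane onto the calibration plane.
    H, _ = cv2.findHomography(pixel_pts, world_pts)

    # Convert a measured pixel position into calibrated (inch) coordinates.
    measured_px = np.float32([[[320.0, 240.0]]])
    print(cv2.perspectiveTransform(measured_px, H))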

Once a system is calibrated, it is normally very accurate in the 2D plane it was calibrated from. The phrase “in the 2D plane it was calibrated from” is an important one. All 2D camera calibration routines are just that, 2D. They work well in calibrating all the points in one 2D plane (the calibration target) to the other 2D plane (the sensor of the camera).

If we only measure points that lie in the actual plane itself, we are all set, but that would require that the parts we measure be infinitely thin. But in the world of machine vision, we normally need to measure parts that have some thickness. So it is important to keep in mind that 2D camera calibrations are just that: 2D measurements – and we live in a 3D world.

Part placement and lens type

Associated with camera calibration is the idea of where you place your part with respect to the centre of the camera and lens, and what type of lens to use. The principal axis of a lens is normally aligned with the centre of the imaging sensor.

Imagine that we wish to measure the holes in a 3 × 3 in block that is 2 in thick, with 1⁄4 in holes evenly spaced. One might think that simply using a backlight with a standard camera and lens would give excellent results.

But due to the thickness of the part, the perfectly circular holes in the part appear non-circular in the resulting image. Only the hole aligned directly with the principal axis of the camera appears circular. For every other hole, some of the light that forms its edge in the image comes from the bottom of the part, some from the top, and some from anywhere in between. This issue can largely be solved by using a special lens called a telecentric lens, but normally at a much greater cost.
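
As a rough, hedged illustration of the size of this effect, the sketch below uses simple pinhole geometry; the 24 in working distance and the parallax_error helper are assumptions made only for this example:

    # Pinhole-geometry estimate of the apparent shift of a feature on top of a
    # thick part, relative to the calibration plane at the bottom of the part.
    # working_distance: lens to calibration plane; thickness: feature height
    # above that plane; offset: lateral distance from the principal axis.
    def parallax_error(offset, thickness, working_distance):
        return offset * thickness / (working_distance - thickness)

    # A hole 1 in off-axis on a 2 in thick part viewed from 24 in away appears
    # shifted by roughly 0.09 in, far larger than a 0.001 in tolerance.
    print(parallax_error(offset=1.0, thickness=2.0, working_distance=24.0))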

The parts we wish to measure in the real world are not infinitely thin and may move in any direction away from the principal axis of the lens and camera. Understanding this fact helps to determine the needed resolution of our camera, and what type and quality of lens is needed.

Real world cameras and lenses

Most cameras are composed of an imaging sensor and supporting electronics. The imaging sensor is attached to a PCB, which is then wired to the electronics. When the camera is turned on, the imaging sensor generates heat, which can physically shift the plane of the sensor with respect to the camera body. Since 2D camera calibration maps one 2D plane to another, any movement of the sensor will introduce errors into measurements that rely on that calibration.

Therefore, it is very important to have proper heat sinks and PCB layout so this issue is minimised. Make sure to check with your camera manufacturer to ensure a proper heat sink is built into the camera near the sensor. Not all cameras are created equal.

Every lens has a set of imperfections. Some of these imperfections can largely be corrected by 2D camera calibration, while others cannot. Software from one vendor will not implement exactly the same camera calibration as another, and the results may in fact differ considerably. It is important to know which camera and lens imperfections are corrected by the software you are using, as this will affect your measurements.
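
As one hedged example of what such a correction can look like, the sketch below applies OpenCV's radial/tangential distortion model; the camera matrix and distortion coefficients are placeholders that a real calibration routine (such as cv2.calibrateCamera) would normally supply:

    import numpy as np
    import cv2

    # Placeholder intrinsics and distortion coefficients (k1, k2, p1, p2, k3);
    # in practice these come from calibrating the actual camera and lens.
    camera_matrix = np.array([[800.0, 0.0, 320.0],
                              [0.0, 800.0, 240.0],
                              [0.0, 0.0, 1.0]])
    dist_coeffs = np.array([-0.12, 0.03, 0.0, 0.0, 0.0])

    # Stand-in image; in a real system this would be the captured frame.
    image = np.zeros((480, 640, 3), dtype=np.uint8)
    corrected = cv2.undistort(image, camera_matrix, dist_coeffs)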

Software

Each software vendor may offer similar image processing and machine vision routines, but these routines are implemented by different software engineers and may perform differently. There are countless routines in image processing and machine vision, and each can differ considerably in how it works.

Most routines work by finding edges or other features, which are derived from the contrast between different regions of the image. When the contrast or the features differ from one image to the next, the software may give different results. This is important to keep in mind when selecting the equipment that makes up the entire vision system, including the resolution of the camera.
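
As a simplified, hedged sketch of why contrast matters to such routines (the parabolic fit shown is one common sub-pixel technique, not necessarily what any particular vendor implements):

    import numpy as np

    # Locate an edge in a 1D intensity profile to sub-pixel precision by
    # fitting a parabola around the peak of the gradient magnitude.
    def subpixel_edge(profile):
        g = np.abs(np.diff(np.asarray(profile, dtype=float)))
        i = int(np.argmax(g))
        if 0 < i < len(g) - 1:
            denom = g[i - 1] - 2 * g[i] + g[i + 1]
            if denom != 0:
                return i + 0.5 * (g[i - 1] - g[i + 1]) / denom
        return float(i)

    # The same edge with high and with low contrast, plus identical noise.
    x = np.linspace(-5, 5, 50)
    edge = 1.0 / (1.0 + np.exp(-2.0 * x))
    noise = np.random.default_rng(0).normal(0.0, 2.0, x.size)
    print(subpixel_edge(200 * edge + 20 + noise))  # strong edge: noise has little effect
    print(subpixel_edge(30 * edge + 110 + noise))  # weak edge: noise competes with the edge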

Baumer Ltd

33/36 Shrivenham Hundred Business Park, Majors Road
Watchfield
SN6 8TZ
UNITED KINGDOM

+44 (0)1793 783839
