The challenges, the options, and the role of camera technology
For many years, most machine vision systems only operated in black-and-white. Even today, the majority of machine vision applications are still monochrome. Yet there is a growing number of applications where color imaging is required and provides significant advantages.
The use of color in machine vision applications has increased significantly in the past ten years. This has been accompanied by steady improvements in both the camera technology and the algorithms to support color machine vision applications. As a result, many more machine vision system designers are finding themselves facing new challenges as they embark on building systems where color is a critical factor.
Read on to learn more about the unique characteristics of color imaging in machine vision and find out which color imaging machine vision technologies best suit your application requirements.
In recent years, the quality of color machine vision systems has increased significantly. Part of this improvement has been the result of the increasing resolution of camera imagers. Since image sensors can’t actually “see” colors, color cameras must use filter arrays and other techniques to capture light in a way that allows color imaging information to be derived.
This process, however, typically reduces the effective resolution of the image. When most camera resolutions were under two megapixels, the resolution penalty on standard single-sensor color cameras made them unsuitable for many tasks. Now that cameras with resolutions of five megapixels or higher are commonplace, the resolution penalty has become less of a factor and machine vision designers can more easily meet their requirements using color cameras.
Along with the improvements in sensor technology have come software libraries and camera firmware much better tuned to color imaging requirements. Where once designing a color machine vision system required having extensive knowledge of color science and how to work with color image data, advanced software libraries and built-in camera capabilities have simplified the process to make working with color more straightforward than ever before.
A wide range of applications can benefit from the use of color imaging in machine vision. The majority of these color imaging applications fall into three broad categories:
Color imaging can provide you with additional data that can optimize your inspection process. Especially when your aim is to classify defects or check the shape of colored products, the use of color imaging is crucial. Take the inspection of color-coded wires for example. If you want to check if each wire is connected to the right connector on the board, your machine vision system has to be able to read the color of the wire and see if there’s a match.
Color imaging in machine vision can also be used to separate items based on their color. That means color imaging can be used in classifying or categorizing objects by color. Subsequently, it can also be used as a way of grading certain objects. For example, cherries, apples, and other fruits might be sorted by their color as a way of indicating ripeness.
Color detection aims to teach the camera what color it is looking at. For an application to use the data coming from the color machine vision camera, the host computer first needs to connect a color value to each pixel or to a histogram representing an area or blob of pixels. Once the application has assigned a color value, it can compare that against a target color or a range of target color values. This matching process can be used to make sure printed material matches a predefined corporate color, or to make sure that a car’s side view mirror matches the door panel, or for many other applications.
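The matching step described above can be sketched in a few lines. The following is a minimal illustration (function names are hypothetical): it averages the RGB values over a blob of pixels and compares the result to a target color using simple Euclidean distance. Production systems often use a perceptual difference metric (such as ΔE in CIE L*a*b*) instead.

```python
import numpy as np

def mean_color(pixels):
    """Average the RGB values over a region (blob) of pixels."""
    return np.asarray(pixels, dtype=float).reshape(-1, 3).mean(axis=0)

def matches_target(region_pixels, target_rgb, tolerance):
    """Return True if the region's mean color lies within `tolerance`
    (Euclidean distance in RGB space) of the target color."""
    distance = np.linalg.norm(mean_color(region_pixels) - np.asarray(target_rgb, dtype=float))
    return distance <= tolerance

# Example: a blob of reddish pixels checked against a target red
blob = [[200, 10, 12], [198, 14, 9], [202, 11, 10]]
print(matches_target(blob, target_rgb=[200, 12, 10], tolerance=15.0))  # True
```

The tolerance value encodes how much color variation the application will accept, which ties directly to the differentiation requirements discussed below.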
The degree to which a color machine vision system can perform in any of the broad application categories listed above depends on its ability to measure up to several key challenges:
Machine vision color cameras provide the host computer with pixel-level data generated by the reflected or incident light coming from the scene. While two cameras may both produce “pleasing” images, the specific pixel values may be different depending on the type, quality, and/or age of the cameras. The goal is to produce values that most closely match the “true” color value that might be calculated using precise laboratory equipment under the same lighting conditions.
Depending on the application, it might be crucial that the machine vision camera can distinguish subtle variances of the same color. For example, when multiple pieces of leather are used to create a purse or jacket, it is important that all the pieces for a single item have the same shade of color. Pieces with slightly different shades can be used together on other jackets or purses, but mixing different shades on a single item would not be acceptable.
Achieving high levels of differentiation requires color accuracy that can be calculated with a high degree of precision and repeatability. This is particularly challenging when light levels are lower because color values become compressed into a smaller range of possible values. The more sensitive the color camera, the better equipped it is to provide color differentiation, though other limiting factors may come into play.
Just like in monochrome applications, many color applications must distinguish small image details in order to perform their task. They may need to read bar codes or QR-codes, or may need to exactly identify the edge of an object to perform measurements or to determine shapes and positions.
Cameras that rely on color filter arrays and Bayer interpolation to derive color information produce soft or blurry edges, a byproduct of the process used to estimate the color value for each pixel. While for some applications this loss of detail may be acceptable or can be overcome by using higher resolution cameras, other applications may demand the use of prism camera technology in order to achieve the necessary combination of color accuracy and spatial precision.
Before beginning to evaluate camera options, one must first determine if the application is more suited to an area scan or a line scan type of color camera.
An area scan camera is typically used when the items that are being inspected, sorted, or analyzed have definite shapes or boundaries. For example, when you need to inspect individual, 3-dimensional items like fruit, boxes or printed circuit boards, an area scan camera, which uses a matrix of pixels to create a 2D image of each object, will most likely be your choice.
Area scan cameras can create an image of a defined area very quickly.
Area scan cameras are easier to set up and install than line scan cameras and serve a more general purpose.
In some situations, area scan cameras can be used for imaging of continuous, moving objects but only by capturing discrete overlapping images. This may require considerable processing effort to “stitch together” the pieces for proper analysis.
Line scan cameras are most effective when you need to inspect long, continuous items, or a variety of items that have many different lengths or sizes. When the object does not have a well-defined start, end, or even size to it, a line scan camera is most suitable for your application.
For example, for so-called “web” inspection - where you need to inspect rolls of paper, textiles, or steel - a line scan camera is the obvious choice. Likewise, conveyor systems filled with randomly arranged fruit or produce, or aerial imaging flying over forests or farmlands are good candidates for line scan cameras.
A line scan camera scans moving objects by repeatedly capturing a single line of pixels at a high frequency.
The continuous “linear images” that the camera captures can be analyzed instantaneously as they are captured, or can be reconstructed line-by-line by software into larger images for analysis.
Because they operate continuously, line scan cameras have a vertical resolution that is effectively “unlimited” and can easily construct two-dimensional images with much greater total resolution than area scan cameras.
If an area scan camera is the right choice for your application and you need to incorporate color into your machine vision system, there are two different area scan options to consider for color imaging: Bayer mosaic and prism-based, multi-sensor technology.
Bayer cameras rely on a predefined pattern of color filters which overlay the pixels on a camera’s imager. Calculating the red, green, and blue (RGB) color value for any specific pixel requires a process of interpolation which looks at surrounding pixels to estimate the values for the two colors not captured by that pixel’s filter.
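The interpolation step can be illustrated with a minimal bilinear sketch. This is not a production demosaicing algorithm (real cameras use more sophisticated edge-aware methods); it simply averages, for each pixel, whatever raw samples of each color fall inside its 3x3 neighborhood, assuming an RGGB filter layout:

```python
import numpy as np

def bayer_masks(h, w):
    """Boolean masks marking which sensor pixels sit under the R, G, and B
    filters of an RGGB Bayer pattern."""
    r = np.zeros((h, w), bool)
    g = np.zeros((h, w), bool)
    b = np.zeros((h, w), bool)
    r[0::2, 0::2] = True
    g[0::2, 1::2] = True
    g[1::2, 0::2] = True
    b[1::2, 1::2] = True
    return r, g, b

def demosaic_bilinear(raw):
    """Estimate a full RGB value for every pixel by averaging the raw
    samples of each color found in its 3x3 neighborhood."""
    h, w = raw.shape
    out = np.zeros((h, w, 3))
    padded = np.pad(raw.astype(float), 1)
    for c, mask in enumerate(bayer_masks(h, w)):
        pm = np.pad(mask, 1)
        acc = np.zeros((h, w))
        cnt = np.zeros((h, w))
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                win = (slice(1 + dy, h + 1 + dy), slice(1 + dx, w + 1 + dx))
                acc += padded[win] * pm[win]
                cnt += pm[win]
        out[:, :, c] = acc / np.maximum(cnt, 1)
    return out

# A uniform scene: R sites read 100, G sites 50, B sites 25.
h, w = 4, 4
r_mask, g_mask, b_mask = bayer_masks(h, w)
raw = np.zeros((h, w))
raw[r_mask] = 100
raw[g_mask] = 50
raw[b_mask] = 25
rgb = demosaic_bilinear(raw)  # every pixel recovers roughly (100, 50, 25)
```

The averaging over neighbors is exactly what softens edges and fine detail, which is why the effective resolution of a Bayer camera is lower than its pixel count suggests.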
When the price of the camera is an important decision factor
Bayer color cameras are far less expensive than prism cameras. Users can get a good, basic 5-megapixel Bayer area scan camera for less than half the price of a 3.2-megapixel prism camera.
When your machine vision system does not require exceptional color accuracy
Because Bayer cameras must “interpolate” (i.e., estimate) two of the three color values for each pixel, there can be a noticeable variance between the RGB values calculated by the Bayer algorithm and the “true” color values in the target scene. If colors in your application must be accurately captured and compared against predefined reference colors, the lower accuracy of Bayer data could create problems. But if the use of color only requires relative accuracy, i.e., how one color in the image compares to another, a Bayer camera should be sufficient.
When your application does not require subtle color differentiation
In addition to lower absolute color accuracy, Bayer filters also block a portion of the light falling on each pixel, resulting in a lower overall level of effective sensitivity. These factors typically reduce the ability of a Bayer camera to distinguish between very subtle color shades. However, for applications where relatively coarse color differentiation is all that is required, a Bayer camera may be more than adequate while allowing the user to take advantage of the lower price point.
When your machine vision system does not require a high level of spatial precision
As noted previously, the interpolation process used by a Bayer camera causes an overall loss of detail in edges, lines, and small printing within images. If the system you’re designing does not require exceptional spatial precision, or if you are amenable to the higher cost and processing overhead of a higher resolution camera, a Bayer area scan camera can still be the right choice for your application.
Multi-sensor prism cameras utilize high-quality prisms with dichroic filter coatings to split the incoming light to three separate imagers based on spectral wavelengths. The three precisely-aligned sensors provide an independent red, green and blue intensity value for each pixel in the image with no interpolation required.
When you need the highest possible color accuracy
With three separate imagers, a prism camera provides an exact R, G, and B value for each pixel. No interpolation means the values delivered to your application are more accurate than those coming from Bayer cameras, which can be vital when matching colors or distinguishing subtle shading variations.
When your application requires a camera with high sensitivity levels
Dichroic prism filters have higher light transmittance than Bayer filters, allowing more of the incoming light to pass through. Furthermore, with three sensors, prism cameras capture virtually all of the light entering the camera, while Bayer cameras block two-thirds of the wavelengths striking each pixel. Since most pixels are represented by some combination of red, green, and blue information, a significant amount of the incoming light never reaches the sensor in a Bayer camera. This gives prism cameras the advantage when better contrast and differentiation are needed, particularly in darker areas of an image.
When you want to detect and measure small details
Unlike the “softness” that blurs edges, printing, and details in a Bayer interpolated image, the output of a prism camera requires no interpolation, resulting in better spatial precision for applications that must read, measure, or otherwise analyze text or small image features.
When you need accurate, vivid colors across the full spectral range
Like the human eye, cameras rely on splitting visible light into three spectral bands representing the longer (reddish), medium (greenish), and shorter (bluish) wavelengths. Prism cameras are able to separate these three bands with very little overlap (referred to as color crosstalk). This keeps colors crisp and bright throughout the spectral range, compared to Bayer cameras, whose filters have much greater overlap, resulting in duller, muddier colors, especially for colors that fall between the three primaries.
When you require greater flexibility over some camera parameters
Many multi-sensor prism cameras allow users to control the settings on each sensor almost as if they were three separate cameras. This provides greater flexibility for white balancing, color enhancement, and other functions than is available with Bayer cameras.
If you’re building a machine vision system that requires the performance and flexibility of color line scan technology, there are also two different camera types you can consider: a trilinear or a prism camera.
Trilinear technology uses three separate imaging lines to capture RGB images. In the past, three distinct linear sensors were mounted as close together as possible, but today most newer cameras feature a single sensor with three closely-spaced lines of pixels.
Each line is equipped with polymer color filters over its pixels to capture one of the three primary colors (R, G, or B). By synchronizing the camera with the movement of the target, the images captured as each line passes over the same point on the target can be combined to provide RGB values for each pixel in the target line.
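The recombination of the three line streams can be sketched as follows. This is an illustrative model (function and parameter names are hypothetical, and the R-first line order is an assumption): because the three sensor lines are physically separated, the same target line reaches each color line a fixed number of scan triggers apart, so the streams must be combined with matching offsets.

```python
def align_trilinear(r_lines, g_lines, b_lines, spacing=1):
    """Combine the per-channel line streams of a trilinear sensor into full
    RGB lines. `spacing` is the gap (in scan periods) between adjacent sensor
    lines: assuming the R line passes over a target line first, target line i
    appears at stream index i for R, i + spacing for G, and i + 2*spacing for B."""
    n = len(b_lines) - 2 * spacing  # only these target lines have all three channels
    return [list(zip(r_lines[i], g_lines[i + spacing], b_lines[i + 2 * spacing]))
            for i in range(n)]

# Simulate a 5-line target moving under the sensor (values are pixel levels).
target = [[(i, 100 + i, 200 + i)] * 4 for i in range(5)]
r_stream = [[p[0] for p in line] for line in target]
g_stream = [[0] * 4] + [[p[1] for p in line] for line in target[:-1]]
b_stream = [[0] * 4] * 2 + [[p[2] for p in line] for line in target[:-2]]
rgb_lines = align_trilinear(r_stream, g_stream, b_stream, spacing=1)
# rgb_lines[i] now matches target[i] for the three fully covered lines
```

This offset model also makes the failure modes discussed below easy to see: if the target tilts, wobbles, or changes speed between triggers, the three indexed lines no longer correspond to the same physical line, producing color fringes.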
When the price of the camera is an important decision factor
Especially now that most trilinear cameras are built around a single, multi-line sensor, trilinear cameras offer a less expensive option than prism cameras. In addition to the lower camera cost, trilinear cameras also offer savings on the recommended lenses compared to those needed for prism cameras. Together, this can result in savings of 50% over a comparable prism camera. Be advised, however, that factors such as the need for higher intensity lighting and the more rapid degradation of polymer filters versus prism filters may negate many of these cost savings over the lifetime of the system.
When your application requires high-speed imaging
Trilinear cameras are known for their ability to deliver accurate (non-interpolated) RGB image data at fast line rates. The latest 4K models (4096 pixels per line) can operate as fast as 50 kHz to 70 kHz (50 to 70 thousand lines per second). Comparable prism cameras have traditionally not been able to match those line rates, though new prism cameras are now available with almost the same speed as the fastest trilinear cameras.

When you can guarantee a roughly perpendicular alignment
When trilinear cameras are tilted relative to the target, the distance from the target to each of the three sensor lines becomes different, slightly changing the length covered by each line on the target. If the tilt is small, compensation algorithms in the camera can make adjustments. But for larger angles, the offset can create color fringes (“halos”) or other artifacts in the image. A trilinear camera performs best when the angle to the target is close to perpendicular and does not require frequent changes.
When working with a flat surface with minimal undulations
Because the three lines needed to collect full RGB information are captured at slightly different points in time, ripples or other surface vibrations can cause the target to be closer or farther away when each line is captured. This can create pixel offsets and “halos” as described above. Similarly, discrete objects that wobble or roll while moving on a conveyor can cause inconsistency between the three captured lines. For best results, trilinear cameras should be used when the target is flat and any fluctuations are small.
When your system requires a resolution above 8K
The highest resolution for a prism line scan camera today is 8K. If your line scan system calls for a 16K line resolution or higher, trilinear will be the only option.
When your system requires a small, lightweight camera with low power consumption
Trilinear cameras are generally smaller than prism cameras, which must accommodate the prism and multiple imagers. On top of that, because a prism camera is bigger and has separate control of three imagers, it is naturally heavier and requires more power to operate.
Like trilinear cameras, prism line scan cameras use three separate lines to capture RGB information. But prism line scan cameras do so with three separate line sensors. These are mounted on a prism and aligned to a single optical plane so that all three sensors can capture the same line on the target at the same time – rather than sequentially as in a trilinear camera. Dichroic coatings on the prism split the image to the three separate sensors so that highly accurate RGB values are captured for each pixel in the line.
When you require ultimate color accuracy
The dichroic prism coatings used to separate the R, G, and B wavelengths offer more precise discrimination than the polymer filters on trilinear cameras. This results in less crosstalk between color channels and therefore better color accuracy, particularly where spectral bands overlap.
If your system requires an angled camera or if the speed of the conveyor belt varies
Using a trilinear camera in an angled position, or capturing objects on a conveyor belt with varying speed, can cause problems related to the spacing of the lines and the synchronization of the exposures. A prism camera, by contrast, captures one line at a time and splits it internally. When a prism camera is tilted relative to the target, all three lines still have the same length on the target, as opposed to a trilinear camera, where the angle creates a different length for each line (keystone effect). Similarly, a prism camera captures R, G, and B information for each line at the same time, so unlike with trilinear cameras, small variations in speed have no effect on the quality of the color data captured.
If there are ripples in the “web” or if objects change in orientation
Small waves in a continuous sheet of paper can create large problems for a trilinear camera because they change how each of the three lines sees the target, causing pixel offsets and color fringes. The same is true if three-dimensional objects roll or shift slightly, giving the object a slightly different orientation for each of the three lines on a trilinear camera. Prism-based line scan cameras avoid such issues thanks to their single optical plane, which ensures that each pixel on each sensor is focused on exactly the same point at all times, producing a clean image regardless of undulations or moving/rolling objects.
If you want more control over white balancing and color correction
Prism cameras allow independent control of the exposure settings for each of the three line scan sensors. This means white balancing can be done with exposure rather than with gain, as is required in a trilinear camera. If an absolute minimum of noise is required in an application, the combination of better sensitivity and exposure-based white balancing may point toward a prism camera as your best choice.
If you want your machine vision system to have more stability over time
The dichroic prism coatings not only have better light transmittance than trilinear polymer filters, they are also more stable over time. The better light transmittance means prism-based systems can use lower intensity (and lower cost) light sources and still achieve good exposures. Trilinear-based systems often require higher intensity lights to achieve the same exposure, which not only cost more but can cause even more rapid degradation of the color filters in the camera.
Whether or not a specific machine vision color camera is suitable for your application depends on a variety of factors, all of which need to be considered when developing the most suitable color machine vision system. Below are a few more camera issues to consider when developing a color machine vision system:
The first consideration is the level of color accuracy and differentiation necessary for your application. In certain applications, it is crucial that the machine vision camera can determine how far the detected color is from the target value. Machine vision users who require a high level of precision in this area need a more advanced camera than users for whom a lower level of precision and differentiation is acceptable.
As previously noted, interpolation and low sensitivity are the two biggest obstacles standing in the way of higher levels of color accuracy and differentiation. Interpolation can blur subtle differences in color because it averages surrounding pixels to determine each pixel's color value. Because of that, when your machine vision system attempts to differentiate subtle color variances, you may not know whether the shades of color are actually different or merely artifacts of the Bayer interpolation.
High degrees of color crosstalk influence the level of color accuracy that the machine vision camera can produce. High levels of crosstalk are the result of the considerable overlap between the spectral responses of the red, blue and green channels, as defined by either the Bayer color filters or the dichroic prism coatings. When there is a lot of overlap between channels, a significant amount of uncertainty is created for certain color families, particularly those in the yellow or teal families.
This can be very problematic when your machine vision system needs to distinguish different shades of these colors. Therefore, when developing a color machine vision system, it is important to consider what color families are essential for your analyses, and what levels of color crosstalk would be acceptable in your machine vision system.
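Where crosstalk is known and stable, it can be partially compensated in software with a color-correction matrix. The sketch below is illustrative only; the matrix shown is invented for the example, and real coefficients come from calibrating the camera against known color targets:

```python
import numpy as np

def apply_ccm(rgb, ccm):
    """Apply a 3x3 color-correction matrix: each corrected channel is a
    weighted mix of the raw channels, with negative off-diagonal weights
    subtracting light that leaked into the wrong channel."""
    return np.clip(np.asarray(rgb, float) @ np.asarray(ccm, float).T, 0, None)

# Illustrative matrix (not from any real camera) compensating for mutual
# red/green leakage; coefficients would normally come from calibration.
ccm = np.array([
    [ 1.1, -0.1,  0.0],
    [-0.1,  1.1,  0.0],
    [ 0.0,  0.0,  1.0],
])
corrected = apply_ccm([100, 50, 25], ccm)  # -> [105., 45., 25.]
```

Note that such a correction restores color accuracy but amplifies noise slightly, so minimizing crosstalk optically, as prism coatings do, remains preferable when subtle shades must be distinguished.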
Depending on your application, your machine vision system will require a specific level of sensitivity to light. Bayer, trilinear and prism cameras all transmit light differently and therefore vary in light sensitivity.
Bayer filters, for instance, are not only made of a material with lower light transmittance than the high-grade glass used in optical prisms; the mosaic methodology also causes each pixel to be sensitive to only one-third of the wavelengths that might fall upon it. Depending on the exact color of a given pixel, this can result in over half of the light striking the filter never reaching the sensor.
Based on the light levels under which your system will operate, and the levels of gain/noise that can be tolerated, you can choose the most suitable camera for your application.
White balancing is required for every machine vision application in which color is used. Without a well-defined baseline adjusted to the spectrum of the lighting that is being used by the system, there is no way to accurately capture true color values. Different methods of white balancing may be utilized, depending on the type of machine vision camera selected.
Bayer and trilinear cameras, for example, can only be white balanced by adding gain (amplification) to two of the three color channels in order to match the channel with the highest response. However, adding gain not only multiplies the signal, it also multiplies the noise in your image. Any additional gain required due to overall low-light conditions would then be added to this baseline. If ultra-low noise is a requirement, this factor may need to be addressed, either by increasing the amount of available light or by switching to a different camera type.
A prism camera, by contrast, provides independent control over each sensor, including shutter speed as well as gain. This creates the option to use shutter speed for white balancing, either by lengthening the exposure time for the two channels with lower response, or shortening the exposure time for the channel with the highest response. While noise may increase slightly if longer exposures are used, the increase is far less than when gain is applied. For some applications, this reduction in noise can be one of several justifications for the use of prism camera technology.
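The two white-balancing strategies can be compared with a small sketch (function names are hypothetical). Given the per-channel response to a white reference, the gain-based approach amplifies the weaker channels, while the exposure-based approach lengthens their exposure instead:

```python
def white_balance_gains(white_response):
    """Gain-based white balance (the only option on Bayer/trilinear cameras):
    compute the gain factor that lifts each channel to match the strongest one.
    Gains above 1.0 amplify noise along with the signal."""
    peak = max(white_response.values())
    return {ch: peak / v for ch, v in white_response.items()}

def white_balance_exposures(white_response, base_exposure_us):
    """Exposure-based white balance (possible on a prism camera with
    independent sensor control): lengthen exposure on weaker channels
    instead of adding gain, so read noise is not amplified."""
    peak = max(white_response.values())
    return {ch: base_exposure_us * peak / v for ch, v in white_response.items()}

# Example: a white target measured at mean 8-bit levels R=180, G=240, B=150
resp = {"R": 180, "G": 240, "B": 150}
gains = white_balance_gains(resp)              # G stays at 1.0; R and B amplified
exposures = white_balance_exposures(resp, 100) # microseconds per channel
```

With these numbers the blue channel would need 1.6x gain, or alternatively a 160 µs exposure against the green channel's 100 µs, trading a small amount of motion blur margin for noticeably less noise.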
Color artifacts are image defects – often falsely-colored pixels or patterns – caused by the way that the color information for the image is derived. Cameras that use estimation or interpolation to calculate color are most likely to exhibit color artifacts.
However, even trilinear cameras, which produce independent R, G, and B values (non-interpolated), can generate color artifacts due to spatial offsets caused by keystone effects, uneven surfaces, or slight timing variances. Because prism cameras have three individual sensors and use a single optical plane to capture the image, there is a very low risk of generating color artifacts with prism cameras. The most common types of color artifacts are:
Color aliasing refers to situations in which a line or the edge of an object with a particular color (for example, a dark blue diagonal line) shows different colors, such as reddish or yellowish pixels along its edges, when the image is examined at the pixel level.
This problem is most common with Bayer cameras because the interpolation technique used to assign an RGB value to each pixel uses a mixture of surrounding pixels which may have completely different colors than the line or edge itself.
Besides causing problems when capturing a single edge or line, large-scale aliasing can produce a moiré pattern when images contain fine repeating patterns. Although this effect can occur with any camera capturing higher spatial frequencies, a Bayer camera is, again because of the interpolation technique, especially prone to it.
Compared to a monochrome system, special care is required when determining the level of resolution needed in a color machine vision system. That’s because color technologies like Bayer interpolation greatly reduce the effective resolution of the camera. While a Bayer camera might have five million pixels (5 megapixels), the interpolation process “averages out” many of the small details, reducing the effective resolution to somewhere around one-third of the total pixel count.
Depending on the minimum feature size that your application must be able to detect/analyze, and the size of the field-of-view that must be covered, there are two possible courses of action:
You can choose a Bayer camera with a much higher resolution than you might use on a similar monochrome system. Of course, this typically comes with a higher price tag, more expensive optics, and a higher processing load on your host computer.
You can choose a prism camera with roughly the same base resolution as you would use on a similar monochrome system. A 3.2-megapixel prism camera is really a 3 x 3.2-megapixel camera, with three separate image sensors totaling 9.6 megapixels. Thus it can produce 24-bit, 3.2-megapixel output without the loss of resolution seen in Bayer cameras. Prism cameras are, as noted, more expensive than Bayer cameras. But when compared with all the associated costs of using a 9-megapixel Bayer camera, the overall comparison becomes much closer.
The information above applies only to area scan resolution. In the case of line scan systems, neither trilinear nor prism cameras rely on interpolation, so there is no significant reduction in effective resolution for either technology. However, some of the issues discussed in the chapter on Line Scan Cameras may limit a trilinear camera’s ability to discriminate small details compared to a prism camera with its single optical plane.
When developing a machine vision system, you need to decide which color space is best for your particular application. The exact color space depends on what the application is intended to do and how the color information will be analyzed.
For example, applications that simply display objects on a screen would naturally use standard RGB color spaces, since that is how all monitors construct the color of their pixels. But if you are dealing with printed material instead, a color space like Adobe RGB might be a better choice because it offers a slightly wider range of colors tailored to digital printing.
Other color spaces like HSI (hue, saturation, intensity) and the CIE XYZ or CIE L*a*b* color spaces use mathematical coordinates to describe colors in such a way that it is easier for certain applications to calculate color matches and color variances in terms of both degree and direction. In most applications, you will use algorithms and processing resources on the host computer to convert the RGB data coming from your camera(s) to the color space that is best suited to your application. However, in some situations, you may prefer the camera to perform this conversion while your host processing resources focus on other tasks. For these cases, it is worth selecting cameras which have built-in color space conversion capabilities.
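As an example of such a host-side conversion, the standard geometric RGB-to-HSI formulas can be implemented in a few lines. This is a generic textbook conversion, not any particular camera's built-in implementation; inputs are assumed to be normalized to the 0..1 range:

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert normalized RGB (0..1) to HSI: hue in degrees (0..360),
    saturation and intensity in 0..1 (standard geometric HSI formulas)."""
    i = (r + g + b) / 3.0
    s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    # Clamp guards against tiny floating-point excursions outside [-1, 1].
    h = 0.0 if den == 0 else math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    if b > g:
        h = 360.0 - h
    return h, s, i

# Pure red maps to hue 0 degrees, full saturation, intensity 1/3
hue, sat, inten = rgb_to_hsi(1.0, 0.0, 0.0)
```

Hue and saturation separate "what color" from "how bright", which is why HSI-style spaces make threshold-based color matching more robust to lighting variations than raw RGB.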
In some cases you may find it valuable to intentionally change the accuracy of your color. If so, color enhancement and optimization capabilities are worth considering when developing your machine vision system.
For example, if you want to detect a particular deviation in the image or distinguish two objects from each other, it can sometimes help to enhance a specific color in your image. Distinguishing blood cells from tissue, for example, can be done more easily when the red color in the image is enhanced. You can enhance colors in your image after capture by using an algorithm on the host computer. However, post-processing enhancements may be limited by the saturation or contrast of the raw image. Some cameras are equipped with color optimization capabilities that allow users to enhance specific primary or complementary colors up to 200 percent (2X) of their true value. System builders should consider whether such a capability can add value to their application or help differentiate it from competitive systems.