Understanding Colour Machine Vision

Digital cameras with colour image sensors are now commonplace. The same is true for the computing power and device interfaces necessary to handle the additional data from colour images. What’s more, as users become familiar and comfortable with machine vision technology, they seek to tackle more difficult or previously unsolvable applications. These circumstances combine to make colour machine vision an area of mounting interest. Colour machine vision poses unique challenges but it also brings some unique capabilities for manufacturing control and inspection.

The Colour Challenge

Colour is the manifestation of light from the visible part of the electromagnetic spectrum. It is perceived by an observer and is therefore subjective – two people may perceive different colours from the same object in the same scene. This difference in interpretation also extends to camera systems with their lenses and image sensors. A camera system’s response to colour varies not only between different makes and models of its components but also between components of the same make and model. Scene illumination adds further uncertainty by altering a colour’s appearance. These subtleties arise because light emanates with its own colour spectrum. Each object in a scene absorbs and reflects (i.e., filters) this spectrum differently, and the camera system responds to (i.e., accepts and rejects) the reflected spectrum in its own way. The challenge for colour machine vision is to deliver consistent analysis throughout a system’s operation – and between systems performing the same task – while also imitating a human’s ability to discern and interpret colours.

The majority of today’s machine vision systems successfully restrict themselves to greyscale image analysis. In certain instances, however, it is unreliable or even impossible to depend solely on intensity and/or geometric (i.e., shape) information. In these cases, the flexibility of colour machine vision software is needed to:

  • optimally convert an image from colour to monochrome for proper analysis using greyscale machine vision software tools
  • calculate the colour difference to identify anomalies
  • compare the colour within a region in an image against colour samples to assess if an acceptable match exists or to determine the best match
  • segment an image based on colour to separate objects or features from one another and from the background

Colour images contain more data to process (typically three times more) than greyscale images and require more intricate handling. Efficient and optimized algorithms are needed to analyze these images in a reasonable amount of time.

Matrox Imaging Colour Analysis Tools

Matrox Imaging provides a set of software tools to help identify parts, products and items using colour, assess quality from colour, and isolate features using colour. The colour matching tool determines the best matching colour from a collection of samples for each region of interest within an image. A colour sample can be specified either interactively from an image — with the ability to mask out undesired colours — or using numerical values. A colour sample can be a single colour or a distribution of colours (i.e., histogram). The colour matching method and the interpretation of colour differences can be manually adjusted to suit particular application requirements. The colour matching tool can also match each image pixel to colour samples to segment the image into appropriate elements for further analysis using other tools. The colour distance tool reveals the extent of colour differences within and between images, while the projection tool enhances colour to greyscale image conversion for analysis — again using other tools.

Calibration and Lighting

The majority of colour cameras feature a single sensor that employs a colour filter array (CFA) or mosaic. This mosaic typically consists of red (R), green (G), and blue (B) optical filters overlaid in a specific pattern on the pixels (Figure 1).

Figure 1 – Common Bayer colour filter array (CFA) or mosaic pattern.

A demosaicing operation – performed either by the camera or software – is needed to convert the raw sensor data into a proper colour image (i.e., with an RGB value for each pixel position). Several demosaicing techniques exist, each with a trade-off between speed and quality (i.e., introduction of colour artifacts). The demosaicing operation must also be adjusted to normalize the (RGB) response of the setup (i.e., camera system and illumination) and thus produce consistent colour images. The normalization factors are determined – most often automatically – by performing a white balance calibration: the machine vision system is presented with a sample deemed white, and the normalization factors needed to produce a white image are computed accordingly. Controlled scene illumination is also critical for effective colour machine vision – the light source, usually white and diffused, must provide a sufficiently consistent output and the scene must be adequately shrouded from the effects of varying ambient light.
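
The white balance step described above can be sketched in a few lines. This is a generic illustration (not Matrox's implementation): per-channel gains are derived from a region known to image a white sample, then applied to equalize the camera's RGB response.

```python
# Illustrative white-balance calibration sketch (not a vendor API):
# derive per-channel gains from a white reference patch, then apply them.
import numpy as np

def white_balance_gains(white_patch):
    """Compute per-channel gains from an RGB region imaged off a white sample."""
    means = white_patch.reshape(-1, 3).mean(axis=0)   # mean R, G, B response
    return means.max() / means                        # scale channels to match

def apply_gains(image, gains):
    """Apply the normalization gains and round/clip back to 8-bit range."""
    return np.clip(np.rint(image * gains), 0, 255).astype(np.uint8)

# A bluish cast on a white patch yields gains that boost red and green.
patch = np.full((4, 4, 3), (180, 200, 240), dtype=np.float64)
gains = white_balance_gains(patch)
balanced = apply_gains(patch, gains)
print(balanced[0, 0])  # -> [240 240 240], all channels equalized
```

In practice the gains would be computed once during calibration and applied to every subsequent image, as the paragraph above describes.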

The Right Colour Space

Typically, colour is represented mathematically by three components and is thus visualized as a point or region in 3D space. The most common colour spaces for machine vision are RGB, HSL, and CIELAB (Figure 2).

Figure 2 – The 3D representation of the RGB (left), HSL (middle) and CIELAB (right) colour spaces.

RGB is the most common colour space since it is used natively by most cameras and by all computer monitors. In HSL, a given colour is represented by its hue (H), saturation (S) or purity, and luminance (L) or brightness. The CIELAB colour space was created to mimic human perception: the numerical difference between colours is proportional to the difference typically perceived by a human observer (Figure 3).

Figure 3 – The differences between the colours marked A and B and the colour marked X are essentially the same in RGB (left) but substantially different in CIELAB (right), which better reflects typical human perception.

With HSL and CIELAB, it is easier to factor out luminance variations caused by non-uniform lighting, which adversely affect analysis. CIELAB is useful when automated inspection needs to replicate human inspection criteria.
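
The point about factoring out luminance can be seen with Python's standard colorsys module (the surface colours below are made-up values): the same surface imaged under full light and in shadow keeps the same hue while only its luminance changes.

```python
# Hedged sketch: RGB-to-HSL conversion with the standard colorsys module,
# showing hue is insensitive to the brightness changes of uneven lighting.
import colorsys

bright_red = (0.9, 0.1, 0.1)     # well-lit red surface (hypothetical values)
dim_red = (0.45, 0.05, 0.05)     # the same surface in shadow

h1, l1, s1 = colorsys.rgb_to_hls(*bright_red)  # note: returns (H, L, S)
h2, l2, s2 = colorsys.rgb_to_hls(*dim_red)

print(round(h1, 3), round(h2, 3))  # hue is identical: 0.0 0.0
print(round(l1, 2), round(l2, 2))  # luminance differs: 0.5 0.25
```

An inspection based on hue alone would therefore classify both pixels as the same red, where an intensity-based check would not.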

Colour Projection

Extracting just the intensity or luminance information from a colour image can render objects or features that differ only in colour indistinguishable from one another. Principal component projection is a tool provided in Matrox Imaging software that uses the trend in the colour distribution to optimize the conversion from colour to greyscale, minimizing the loss of critical image information (Figure 4).

Figure 4 – Extracting just the luminance information from a colour image (left) produces an image where the objects are indistinguishable from one another (center), while principal component projection produces an image that still differentiates the objects (right).
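
The general idea behind principal component projection can be sketched with NumPy. This is a generic textbook version, not the MIL implementation: pixels are projected onto the axis of greatest colour variance, preserving contrast that a plain luminance conversion would lose.

```python
# Illustrative principal component projection: project each pixel's colour
# onto the direction of maximal variance, yielding a greyscale image.
import numpy as np

def principal_projection(image):
    """Project an HxWx3 colour image onto its principal colour axis."""
    pixels = image.reshape(-1, 3).astype(np.float64)
    centered = pixels - pixels.mean(axis=0)
    # Right singular vectors give the directions of maximal colour variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    grey = centered @ vt[0]                 # 1D projection per pixel
    grey -= grey.min()                      # rescale to the 0..255 range
    grey *= 255.0 / max(grey.max(), 1e-12)
    return grey.reshape(image.shape[:2]).astype(np.uint8)

# Two colours with similar overall brightness remain separable afterwards.
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[:, 0] = (200, 0, 100)   # magenta-ish object
img[:, 1] = (0, 200, 100)   # green-ish object, similar brightness
out = principal_projection(img)
print(out[0, 0] != out[0, 1])  # True: the objects stay distinct in greyscale
```

A plain luminance conversion of the same two colours would produce nearly identical grey values, which is exactly the failure mode Figure 4 illustrates.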

Colour Distance

Colour distance measures the difference between colours. In its simplest form, the distance is computed between every pixel in an image and the corresponding pixel in a reference image, or between every pixel and a specific colour. The distance can be computed using various methods (e.g., Euclidean, Manhattan, and Mahalanobis; in the CIELAB space, the Euclidean distance corresponds to the classic Delta-E measure). Colour distance can be a simple and effective way of detecting defects best characterized by their colour. Matrox Imaging software includes a colour distance operation that is also the basis for colour matching.
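
Two of the distance metrics mentioned above can be sketched directly in NumPy. This is a minimal generic illustration (the threshold and colours are made-up values, not Matrox defaults): pixels whose distance from an expected colour exceeds a tolerance are flagged as defects.

```python
# Minimal sketch of per-pixel colour distance against a reference colour,
# using the Euclidean and Manhattan metrics.
import numpy as np

def euclidean_distance(image, ref):
    diff = image.astype(np.float64) - np.asarray(ref, dtype=np.float64)
    return np.sqrt((diff ** 2).sum(axis=-1))

def manhattan_distance(image, ref):
    diff = image.astype(np.float64) - np.asarray(ref, dtype=np.float64)
    return np.abs(diff).sum(axis=-1)

# Flag pixels whose colour strays too far from the expected product colour.
image = np.array([[[200, 40, 40], [60, 180, 60]]], dtype=np.uint8)
ref = (200, 40, 40)                         # expected (defect-free) colour
defects = euclidean_distance(image, ref) > 50.0
print(defects)  # [[False  True]]: the second pixel is a colour anomaly
```

Computing against a reference image instead of a single colour only changes `ref` to a second array of the same shape; the broadcasting logic is identical.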

Colour Matching

The colour matching tool provided in Matrox Imaging software performs one of two basic tasks: colour identification or supervised colour segmentation. Colour identification compares the colour in a given region to a set of predefined colour samples to determine the best match if one exists (Figure 5).

Figure 5 – Identifying if the colour in a region (i.e., banana) matches that of a single sample. The left and middle images are considered a match while the right one is not (i.e., too greenish).

The region whose colour needs to be identified is either known beforehand or located using another tool like geometric pattern recognition. Supervised colour segmentation consists of associating (and replacing) each pixel in an image or region with one of the predefined colour samples, thereby separating objects or features by their colour (Figure 6).

Figure 6 – Supervised colour segmentation separates regions of a clam for subsequent grading. Original image (left) courtesy of Lizotte Machine Vision.

Supervised colour segmentation is also used to obtain colour statistics on an image (i.e., how much of one colour sample is present versus another). A colour sample is defined either from a reference image or as a specific colour. If based on an image, the sample’s colour is derived from statistical analysis (i.e., mean or distribution). A target area in an image is matched either by comparing its statistics (i.e., mean or distribution) with those of each sample or by having each pixel vote for the closest sample. The mean-based method is quick but requires a carefully defined target area. The vote-based method is slower, but the target area can be more loosely defined and the method is more robust to outlying colours. The vote-based method also provides more detailed results and is the one used for supervised colour segmentation. The distribution-based (i.e., histogram) method is ideal for multi-coloured samples (Figure 7).

Figure 7 – Multi-colour samples (top row) used to identify correct (left) and incorrect (right) filling.
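
The vote-based method can be sketched generically (this is not the MIL API; the sample colours are invented for illustration): each pixel votes for its nearest sample, which simultaneously segments the image and yields the per-sample colour statistics mentioned above.

```python
# Hedged sketch of vote-based colour matching: every pixel votes for the
# nearest sample colour, segmenting the image and tallying colour statistics.
import numpy as np

def vote_match(image, samples):
    """Return per-pixel sample labels and the fraction of votes per sample."""
    pixels = image.reshape(-1, 3).astype(np.float64)
    sample_arr = np.asarray(samples, dtype=np.float64)
    # Euclidean distance from every pixel to every sample colour.
    dists = np.linalg.norm(pixels[:, None, :] - sample_arr[None, :, :], axis=2)
    labels = dists.argmin(axis=1)                      # each pixel's vote
    votes = np.bincount(labels, minlength=len(samples)) / len(labels)
    return labels.reshape(image.shape[:2]), votes

samples = [(255, 0, 0), (0, 255, 0)]        # red and green colour samples
image = np.array([[[250, 10, 5], [10, 240, 20]],
                  [[245, 20, 10], [5, 250, 15]]], dtype=np.uint8)
labels, votes = vote_match(image, samples)
print(labels)   # 0 = red sample, 1 = green sample, per pixel
print(votes)    # [0.5 0.5]: half the pixels voted for each sample
```

Replacing each label with its sample colour gives the segmented image; summing the votes over a region gives the kind of colour statistics used for grading.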

A score is computed to indicate how close the target colour is to each sample colour, and controls are provided to tailor the colour matching for a particular application. A match is reported if the score exceeds a threshold for the best colour sample (i.e., the acceptance level) and sufficiently exceeds the score of the next best colour sample (i.e., the relevance level); the relevance level resolves the situation where the score is deemed acceptable for two or more colour samples but too close between them for there to be a definite match. A colour distance tolerance adjusts how close the target colour needs to be to a sample colour to be considered a match.
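
The acceptance and relevance logic can be illustrated with a small decision function. This is an assumption-laden sketch of the idea, not Matrox's exact scoring: the relevance check is modelled here as a minimum margin over the runner-up, and all names and numbers are hypothetical.

```python
# Illustrative match decision: accept only if the best score clears the
# acceptance level AND beats the runner-up by the relevance margin.
def decide_match(scores, acceptance=70.0, relevance=10.0):
    """scores: dict mapping sample name -> similarity score (0..100)."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    best_name, best_score = ranked[0]
    if best_score < acceptance:
        return None                     # no sample is close enough
    if len(ranked) > 1 and best_score - ranked[1][1] < relevance:
        return None                     # ambiguous: two samples score too close
    return best_name

print(decide_match({"ripe": 92.0, "unripe": 60.0}))   # ripe
print(decide_match({"ripe": 88.0, "unripe": 85.0}))   # None (ambiguous)
print(decide_match({"ripe": 55.0, "unripe": 40.0}))   # None (below acceptance)
```

Tightening the acceptance level rejects marginal colours outright, while tightening the relevance margin forces a larger gap before a winner is declared.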

Optimized for Speed

Working in colour means that there is more data to process and the data requires more elaborate manipulation. Colour analysis tools must not only be accurate and robust to be effective, but they must also be optimized for speed. The Matrox Imaging colour analysis tools take full advantage of the vector (SIMD) instruction units in contemporary CPUs, as well as their multi-core designs.


The colour analysis tools included in the Matrox Imaging Library (MIL) software development kit and the Matrox Design Assistant interactive development environment offer the accuracy, robustness, flexibility, and speed to tackle colour applications with confidence. The colour tools are complemented by a comprehensive set of field-proven greyscale analysis tools (i.e., pattern recognition, blob analysis, gauging and measurement, ID mark reading, OCR, etc.). Moreover, application development is backed by the Matrox Imaging Vision Squad, a team dedicated to helping developers and integrators with application feasibility, strategy, and even prototyping.

Note: This article was originally published on Matrox's website.