
7 keys to integrating machine vision for in-line metrology

May 8, 2025
Solving the measurement puzzle at production speed

Machine Vision for Precision In-line Metrology

DAVID L. DECHOW

ENGINEER, PROGRAMMER, TECHNOLOGIST

David L. Dechow, engineer, programmer and technologist, will present “Fundamentals of Machine Vision” at 8 am on May 12; "Machine Vision for Precision In-line Metrology" at 2:30 pm on May 13; "Vision System Design" with Perry West, president, Automated Vision Systems, at 10:30 on May 14; and "Vision System Design," with Robert Tait, partner, Optical Metrology Solutions, at 8 am on May 16, during A3's Automate 2025 in Detroit.

The application of machine vision for in-line metrology provides valuable capabilities in production environments where precise part geometry influences quality and dictates functionality. Beyond catching individual non-conforming parts, it can, most significantly, deliver actionable data to manufacturing processes for continuous or real-time improvement. This automated imaging task, though, presents unique challenges in implementation. Here are seven practical considerations for integrating machine vision for metrology that can help you avoid potential issues and ensure success.

1. Understand gauging techniques and embrace the value proposition for in-line non-contact measurement.

Let’s start with a brief perspective. Imaging is widely employed in varying off-line metrology systems for many different use cases. Our focus, however, will be exclusively on automated systems that are directly integrated into the manufacturing process, providing continuous production measurements.

At the most fundamental level, measurement is about gauging techniques: using tools to measure objects or features. Measurement tasks range from simple 2D point-to-point distances to the analysis of more complex geometries, including 3D relationships. Handheld gauges and many partially or completely automated off-line systems, for example coordinate measuring machines (CMMs), get positional data by physically contacting an object at various points.

By comparison, machine vision uses imaging technologies and executes what can be described as non-contact measurement. The differences between contact and non-contact measurement techniques are not trivial, and it is important to understand how these compare to each other.

An imaging system captures object features from the camera's perspective, showing only surface edges of perpendicular features. For example, when measuring a machined hole diameter with a caliper gauge or CMM probe, the measurement device contacts the bore below the surface, but an image from above sees only the surface edges.

The key question is whether these measurements match physically and if there’s a consistent relationship to resolve differences. This challenge affects nearly any non-contact measurement application.

The takeaway is that differences exist between measurements taken with contact and non-contact gauging tools. While it might seem counterintuitive, both measurements can be precise and even correct, just taken from slightly different physical perspectives.

An important value proposition for non-contact in-line metrology, then, is having every part measured with precision and the ability to collect a very large amount of actionable data about manufacturing process trends. This is an important capability even if an individual measurement does not exactly correlate to a specific contact-based measurement. To be clear, the measurements will be closely related but may not be identical.

2. Learn about measurement metrics, tolerancing and gauge resolution.

In metrology, there are common but sometimes misunderstood or misused terms that describe the metrics by which one can evaluate the performance of a gauging system. Quality engineering professionals deal with these metrics and terms regularly, but a basic, practical understanding of them is very useful when implementing automated non-contact measurement.

Precision is the metric that quantifies the extent to which the system can repeat a measurement under identical conditions. Some substitute the terms “repeatability” and “reproducibility.” While there are formal tests and calculations to state the precision of a measurement process, in a practical sense some simple tests can tell us a lot about the system.

In testing repeatability, the same part is measured a statistically significant number of times under identical conditions, and the measurement results are collected. For each measured feature, the range of the data is observed.

Do not use an average deviation; the raw range between the minimum and maximum values is more important in this context because it reveals outliers that will skew the results and potentially result in a high level of false rejects during production.

Within this testing structure, a static repeatability test is where a single part remains undisturbed in the inspection system as new images and measurements are collected. This simple test reveals the effective gauging resolution of the imaging system itself. This can be considered the baseline for the capability of the system as implemented.

Often, this observed capability is much different from the estimated resolution listed by the imaging component manufacturer or calculated as pixel spatial resolution. A worse-than-expected range in this test might reveal deficiencies in sensor resolution specifications or algorithm selection.

There is a close relationship between system resolution and the stated measurement tolerance. Note that measurement tolerance is the expected range of measurement values that are acceptable for the real-world size of the feature. Gauge resolution is the amount of error that might be expected when making the measurement.

It’s common to say that the resolution of a gauge must be 1/10 of the tolerance range, but the actual gauge resolution required varies by application. In any case, the static test helps confirm that system resolution will be suitable for the expected tolerance range.
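To make the static test concrete, here is a minimal Python sketch of the range calculation and the 10:1 rule of thumb. The function name, the simulated readings and the result keys are illustrative, not part of any vision library:

```python
import numpy as np

def static_repeatability(measurements, tolerance_band):
    """Summarize a static repeatability test.

    measurements: repeated readings of the same feature on the same part,
    with the part undisturbed between acquisitions (real-world units).
    tolerance_band: total tolerance range (e.g., 0.10 for +/-0.05 mm).
    """
    m = np.asarray(measurements, dtype=float)
    spread = m.max() - m.min()  # raw min-to-max range, not an average deviation
    ratio = tolerance_band / spread if spread > 0 else float("inf")
    return {
        "range": spread,
        "tolerance_to_range_ratio": ratio,
        # rule of thumb from the text: gauge resolution ~1/10 of tolerance
        "meets_10_to_1": spread <= tolerance_band / 10.0,
    }

# Simulated example: 30 repeated readings of a nominal 5.000 mm bore,
# +/-0.05 mm total tolerance band
readings = 5.0 + np.random.default_rng(0).normal(0, 0.001, 30)
print(static_repeatability(readings, tolerance_band=0.10))
```

The observed range here is the baseline gauging resolution of the system as implemented, which can then be compared against the tolerance band.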

A further test to evaluate repeatability is to dynamically—under production conditions—measure the same features on the same part many times and again observe the range of differences in the reported real-world values. In this case the results can reveal deficiencies in the design and execution of the imaging acquisition components or the analysis algorithms being used relative to variations in the part presentation.

The difference between static and dynamic repeatability isolates the contribution of the automation and part presentation to the error stack-up for the inspection system architecture being used.
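One hedged way to quantify that contribution: if the imaging-only (static) and automation-induced error sources are assumed to be independent, their variances add, so the automation share can be estimated by subtracting variances. A sketch under that assumption, with a hypothetical helper name:

```python
import math

def automation_contribution(static_readings, dynamic_readings):
    """Estimate the part-presentation/automation share of measurement error.

    Assumes the static (imaging-only) and automation error sources are
    independent, so their variances add: var_dynamic ~= var_static + var_auto.
    Returns the estimated standard deviation attributable to automation.
    """
    def sample_std(xs):
        mean = sum(xs) / len(xs)
        return math.sqrt(sum((x - mean) ** 2 for x in xs) / (len(xs) - 1))

    s_static = sample_std(static_readings)
    s_dynamic = sample_std(dynamic_readings)
    var_auto = max(0.0, s_dynamic ** 2 - s_static ** 2)  # clamp noise below zero
    return math.sqrt(var_auto)
```

If the result is large relative to the static spread, part presentation and handling, not the imaging, dominate the error stack-up.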

Accuracy and trueness also are essential metrics in measurement. Trueness denotes the closeness of a measurement's average to the actual value. Accurate measurements exhibit high precision and trueness.

While it appears straightforward to define accuracy as the error relative to an “actual” value, determining this actual value can be problematic. Considering the differences between various gauging techniques, and notably between contact versus in-line non-contact methods, the actual value can be ambiguous and subjective.

Although this topic extends beyond the scope of this discussion, a practical approach is to ensure the system's repeatability to the required resolution first. If exact alignment with an arbitrary measurement is necessary, introducing a bias to the results may be appropriate.
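A small illustration of these metrics in Python, using the standard library only; the function names are illustrative, and the reference value, as noted above, may itself come from a contact gauge and carry its own perspective:

```python
import statistics

def gauge_metrics(readings, reference=None):
    """Precision (repeatability spread) and, if a reference value is
    available, trueness (bias of the mean) for a set of readings."""
    mean = statistics.fmean(readings)
    out = {
        "mean": mean,
        "precision_std": statistics.stdev(readings),
        "range": max(readings) - min(readings),
    }
    if reference is not None:
        # trueness: closeness of the measurement average to the reference
        out["bias"] = mean - reference
    return out

def apply_bias(reading, bias):
    """Align a non-contact reading to a contact-gauge convention by
    introducing a fixed bias, per the practical approach described above."""
    return reading - bias
```

Ensuring repeatability first, then applying a bias if exact alignment with a contact measurement is required, matches the practical sequence suggested above.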

3. Employ vision techniques that optimize resolution, feature segmentation and measurement analysis.

Machine vision systems for any application have three common processes requiring diligent and competent design and implementation to ensure a successful solution. These are acquisition, analysis and the handling of data and results.

Cameras, lighting and optics are the essential elements of the acquisition segment of a machine vision system. Image resolution significantly influences system resolution, though not exclusively; the selected optics also play a role.

The illumination components determine the system's capability to consistently segment and analyze features that require measurement. Accurately estimating the required resolution for achieving a specific level of measurement precision and correctly specifying the imaging component can be challenging.

Modern cameras are available with increasingly higher pixel counts. But a high-resolution imaging system over a larger field of view can create challenges in illumination geometry and coverage and in lens specification. Some applications may be better suited to multiple cameras at lower imaging resolutions and smaller fields of view.
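A quick back-of-the-envelope check of that trade-off, using illustrative numbers (a hypothetical 5,120-pixel-wide sensor, not a specific camera). Note that this is only the nominal field-of-view per pixel; the effective gauging resolution must still be verified by the static test described earlier:

```python
def pixel_spatial_resolution(fov_mm, pixels):
    """Nominal millimeters of field of view covered by one pixel."""
    return fov_mm / pixels

# One camera covering a 200 mm field of view...
single = pixel_spatial_resolution(200, 5120)   # ~0.039 mm/pixel
# ...versus two cameras each covering 100 mm
dual = pixel_spatial_resolution(100, 5120)     # ~0.020 mm/pixel
print(single, dual)
```

Halving the field of view per camera doubles the nominal spatial resolution, at the cost of extra cameras, lighting and integration effort.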

With smaller fields of view, a “telecentric” lens can be a good choice. These lenses have virtually no imaging angle: the chief rays are parallel to the optical axis, so feature size in the image changes very little with small variations in working distance. This type of lens is often referred to as a “gauging” lens. Useful in certain applications, telecentric lenses nonetheless have relatively small depths of field and limited fields of view.

The selection of software tools or algorithms for feature location and measurement requires some experience and frankly a large amount of testing. There may be multiple options or only one configuration that produces the best results.

Often feature variations that fall within the expected range of production variation must be accommodated and can be the hardest measurements to make repeatable from part to part. Nonetheless, there are some general strategies that can help ensure success.

Do not use search algorithms as tools for the actual measurement of a feature. These are repeatable to a point but mostly when the object is highly rigid and exhibits little variation part to part. Search tools are amazing at finding an object even with variation and deformities. However, the reported location point delivered by the algorithm will usually shift relative to the expected “golden part” location.

This is also true of deep learning tools like segmentation or anomaly detection. A better strategy is to use search as needed to generally locate an object and then implement discrete measurement tools like gradient edge detection with linear or circular regression to identify the exact location/size of a feature.
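As an illustration of the circular-regression step, the following is a minimal algebraic (Kasa-style) least-squares circle fit to a set of edge points. This is a sketch only; production vision tools layer subpixel edge extraction and outlier rejection on top of a fit like this:

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic least-squares circle fit to edge points.

    Uses the identity x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2),
    which is linear in the unknowns, so it can be solved directly.
    Returns (cx, cy, r).
    """
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x**2 + y**2
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = sol[0] / 2.0, sol[1] / 2.0
    r = np.sqrt(sol[2] + cx**2 + cy**2)
    return cx, cy, r
```

Fitting all detected edge points of a bore, rather than trusting a single reported location, is what makes the diameter measurement robust to individual noisy edges.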

Furthermore, it is almost always strategically beneficial to use analysis tools that provide subpixel results, like gradient edge detection. In the process of improving repeatability, these kinds of algorithms also effectively increase the baseline gauging resolution of the system. Just keep in mind that estimates of tool subpixel capability are often exaggerated. Only thorough testing can show what the resolution improvement will be in the production environment.
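For a flavor of how subpixel edge positions are obtained, here is one common technique: a parabolic refinement of the gradient peak along a 1D intensity profile. Commercial tools are more sophisticated; this sketch is illustrative only:

```python
import numpy as np

def subpixel_edge(profile):
    """Locate an edge in a 1D intensity profile with subpixel precision.

    Finds the peak of the gradient magnitude, then refines it with a
    parabolic fit through the peak and its two neighbors. Returns the
    edge position in pixel units.
    """
    g = np.abs(np.diff(np.asarray(profile, dtype=float)))  # gradient magnitude
    i = int(np.argmax(g))
    if i == 0 or i == len(g) - 1:
        return i + 0.5  # no neighbors available for refinement
    y0, y1, y2 = g[i - 1], g[i], g[i + 1]
    denom = y0 - 2.0 * y1 + y2
    offset = 0.0 if denom == 0 else 0.5 * (y0 - y2) / denom  # parabola vertex
    return i + 0.5 + offset
```

The refinement shifts the reported edge by a fraction of a pixel toward the true gradient peak, which is where the effective resolution gain over plain pixel counting comes from.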


4. Make the most of image calibration.

While calibration of cameras in metrology applications is not strictly necessary, it offers several advantages. Primarily, while virtually all imaging analysis fundamentally provides information in pixel space, calibration of the imaging system either automatically or on demand delivers results in real-world units.

When done correctly, the calibration also corrects many imaging inconsistencies, including lens perspective and any angular variation of the sensor relative to the desired plane of measurement, as defined by the distance from the sensor to the object surface.

When calibrating an imaging system whose lenses have an imaging angle—endocentric or fixed-focal-length lenses—the calibration is only accurate for measurements taken at the exact plane at which the calibration was performed. Some calibration procedures do, however, allow the selection of different sensor-to-part distances when transforming images or pixel data to real-world units.

A calibrated imaging system also may provide a rectified image where image inconsistencies are removed or flattened in the rectified image. Analysis within the rectified image might be easier and perhaps more precise than in the original because the feature representations are more consistent geometrically.

Calibration, typically performed using a specific calibration article, is a feature available in most machine vision systems or software.
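As a simplified illustration of the pixel-to-real-world step, the following fits a 2D affine map from corresponding points on a calibration article (for example, a dot grid). An affine model handles scale, rotation and shear; the calibration routines in commercial packages additionally correct lens distortion and full perspective, which this sketch does not:

```python
import numpy as np

def fit_pixel_to_world(pixel_pts, world_pts):
    """Fit a 2D affine transform mapping pixel coordinates to
    real-world units via least squares. Returns a 3x2 matrix M so
    that [u, v, 1] @ M gives the world coordinates."""
    P = np.asarray(pixel_pts, dtype=float)
    W = np.asarray(world_pts, dtype=float)
    A = np.column_stack([P, np.ones(len(P))])   # homogeneous pixel coords
    M, *_ = np.linalg.lstsq(A, W, rcond=None)
    return M

def to_world(M, pixel_pt):
    """Transform one pixel coordinate to real-world units."""
    u, v = pixel_pt
    return np.array([u, v, 1.0]) @ M
```

As with the calibrated systems described above, the fit is only valid at the plane of the calibration article; measuring at a different sensor-to-part distance requires recalibrating or rescaling.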

5. Address part presentation issues that can impact repeatability.

An important consideration in high-resolution, low-bias machine vision metrology for an in-line application is the repeatability of part presentation. The imaging, optics, resolution and algorithms might all function well in off-line or static testing, but achieving consistent and reliable inspection results in production can be challenging. This is often due to variability in part presentation, which can sometimes prevent certain measurements from being repeatable.

For instance, the small but deep bore hole discussed previously can be measured accurately when its face is perpendicular to the lens and the image is captured directly down the depth of the hole. However, if the part tilts even slightly, the hole may appear as an ellipse or be entirely obscured if backlit.

When using imaging for non-contact measurement, it is essential to minimize any variations in part presentation and acknowledge that part presentation will contribute to some stack-up error in the measurement. This should be considered when determining and specifying resolution, optics and lighting.

6. Test, test, test and then test.

This caveat is so critically important and so obvious that it should barely require explanation. Testing early and often is crucial in machine vision projects, yet it is frequently overlooked. Designing the initial architecture based on preliminary imaging and performance estimates is important, but actual implementation testing must begin at the project's start and continue until final acceptance. Early detection of issues can prevent system failures and even costly rebuilds later.

7. Use the data productively.

Recall the important value proposition for implementing in-line, 100% part inspection. It was not the guarantee of the most precise or accurate measuring system but rather the overarching value of having a wealth of actionable data to improve production quality. Yes, in-line metrology can catch out-of-tolerance parts in real time. However, using the information to manually or even autonomously adjust the manufacturing process is one of the greatest benefits of the technology.

These seven practical considerations are by no means the complete story of successful metrology, but hopefully they provide a good starting point for implementing this valuable machine vision task in your industrial automation setting.

About the Author

David L. Dechow | engineer, programmer, technologist

David L. Dechow is an engineer, programmer and technologist with expertise in the integration of machine vision, robotics and automation technologies. He has served various companies and most notably was founder, owner and principal engineer for two automation system integration firms. Dechow is a recipient of the Association for Advancing Automation (A3) Automated Imaging Achievement Award and is a member of the A3 Imaging Technology Strategy Board. As an educator within the machine-vision industry, he has participated in the training of hundreds of engineers as an instructor with the A3 Certified Vision Professional program. Contact him at [email protected].

