The maximum allowable time to get all the way through the loop a single time sets our update rate; a 10-ms loop, for example, yields 100 updates/sec. The rate can be limited by sensor speed or actuator bandwidth, but most often it is limited by that pesky Step 3, the actual calculations and decisions required to move the system from where it is to where we would like it to be.
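To make the timing budget concrete, here is a minimal fixed-rate loop sketch in Python; the sensor, controller and actuator functions are hypothetical stand-ins, not taken from any particular system. The point it illustrates: if Step 3 takes longer than the period, the update rate silently drops.

```python
import time
import random

PERIOD = 0.010    # 10-ms loop -> 100 updates/sec
SETPOINT = 50.0   # hypothetical target value

def read_sensor():
    # Stand-in for the real sensor; replace with actual acquisition.
    return SETPOINT + random.uniform(-1.0, 1.0)

def compute_command(error):
    # Step 3: in a real system, this is where the heavy math lives.
    return 0.5 * error  # toy proportional controller

def drive_actuator(command):
    pass  # stand-in for the real output stage

next_deadline = time.monotonic() + PERIOD
for _ in range(500):
    error = SETPOINT - read_sensor()        # sense and compare
    drive_actuator(compute_command(error))  # decide and act
    slack = next_deadline - time.monotonic()
    if slack < 0:
        print(f"overrun by {-slack * 1000:.1f} ms")  # Step 3 blew the budget
    else:
        time.sleep(slack)
    next_deadline += PERIOD
```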
Humans seem to perceive anything above about 20 images/sec as continuous; this is why movies work even though nothing on the screen is actually moving. CCD cameras have always been at least 30 or 60 frames/sec devices, dating back to early television days. Today, industrial-grade CCD and CMOS cameras offer resolutions from 640x480 up to tens of megapixels, with frame rates ranging from a few frames/sec on the largest arrays to hundreds of frames/sec at typical resolutions. So sensor speed is not really much of a limiting factor in typical vision applications. Success or failure really hinges on image consistency and the practicality of the image-analysis functions required.
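The resolution/frame-rate trade-off is largely a matter of raw bandwidth. A quick back-of-the-envelope calculation, assuming uncompressed 8-bit monochrome pixels, shows why the biggest arrays run slowly:

```python
def data_rate_mb_s(width, height, fps, bytes_per_pixel=1):
    """Raw pixel bandwidth in MB/s, ignoring protocol overhead."""
    return width * height * fps * bytes_per_pixel / 1e6

print(data_rate_mb_s(640, 480, 60))    # ~18 MB/s: comfortable on USB 2.0
print(data_rate_mb_s(2048, 2048, 30))  # ~126 MB/s: right at the Gigabit Ethernet ceiling
print(data_rate_mb_s(5120, 5120, 5))   # ~131 MB/s: a 26-MP sensor crawls at 5 frames/sec
```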
We often worry about computer-system reliability, but sensor reliability is just as critical. If the controller loses its ability to “see,” whether the sensor is a limit switch, a rotary encoder, a linear glass scale or a camera, it cannot perform correctly. Unfortunately, the more complex the sensor, the more likely the failure, and cameras are pretty complicated pieces of hardware; they often contain one or more CPUs with associated firmware just to operate themselves. Fortunately, industrial cameras have been available from companies like Dalsa, Panasonic, Pulnix/JAI and Sony for decades, and a host of other low-cost, high-performance, ruggedized and specialized cameras reliably cover an incredible spectrum of imaging needs.
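One practical defense is a frame watchdog: if a fresh image has not arrived within a timeout, declare the camera dead and drive the system to a safe state. A minimal sketch, where the timeout value and the fail-safe action are assumptions to be tuned per application:

```python
import time

class FrameWatchdog:
    """Declares the camera failed if no frame arrives within the timeout."""

    def __init__(self, timeout=0.5):  # 0.5 s is an assumed value, not a standard
        self.timeout = timeout
        self.last_frame = time.monotonic()

    def frame_arrived(self):
        # Call from the acquisition callback on every new image.
        self.last_frame = time.monotonic()

    def camera_alive(self):
        return time.monotonic() - self.last_frame < self.timeout

# In the control loop:
#   if not watchdog.camera_alive():
#       enter_safe_state()  # hypothetical fail-safe action
```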
A major new trend is to abandon custom image-acquisition hardware in favor of standard USB 2.0 or Gigabit Ethernet interfaces. This lowers the cost of ownership for vision hardware, dramatically reduces wiring cost, simplifies power distribution and opens up new opportunities for using redundant network pathways to keep pixels flowing from a primary or even a backup camera.
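As a sketch of that failover idea, here is how it might look with OpenCV's generic capture API; the device indices and retry policy are assumptions, and a production GigE Vision system would use the camera vendor's SDK instead:

```python
import cv2

CAMERA_IDS = [0, 1]  # primary, then backup (hypothetical device indices)

def open_first_working(ids):
    for cam_id in ids:
        cap = cv2.VideoCapture(cam_id)
        if cap.isOpened():
            return cap, cam_id
        cap.release()
    return None, None

cap, active = open_first_working(CAMERA_IDS)
while cap is not None:
    ok, frame = cap.read()
    if not ok:
        # The active camera stopped delivering pixels; fail over.
        cap.release()
        cap, active = open_first_working([i for i in CAMERA_IDS if i != active])
        continue
    # ... hand the frame to the analysis/feedback stage ...
```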
When is real-time vision realistic? Can we make a reliable closed-loop vision application? Absolutely, provided three things hold: we control all three parts of image consistency (object, illumination and background) as discussed in Part I; image-acquisition time at the necessary resolution fits within the real-time feedback requirements; and the image-processing and analysis algorithms can be designed and implemented to run inside that real-time window.
Many applications satisfy all three requirements. I’ve developed systems in factories that manufacture various web products requiring real-time control of edge location, web speed and web thickness. Here, we can build a light-tight box, fill it with predictable LED illuminators, perhaps use a prism or mirror to bring images of different features into the same camera, and use simple edge-finding and measurement algorithms to feed data back to a control PLC that adjusts servos and motor drives. With simple vision hardware, often just a PC or a so-called “smart camera,” update rates of 20–100 measurements/sec are easily attainable.
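The edge finding involved really is simple. Here is a sketch of a one-dimensional edge locator with linear subpixel interpolation, using NumPy; the threshold and the sample profile are illustrative only:

```python
import numpy as np

def find_edge(profile, threshold):
    """Return the subpixel position of the first dark-to-light crossing."""
    above = profile >= threshold
    crossings = np.where(~above[:-1] & above[1:])[0]
    if crossings.size == 0:
        return None  # no edge in this line of pixels
    i = crossings[0]
    # Linear interpolation between the two pixels straddling the threshold.
    return i + (threshold - profile[i]) / (profile[i + 1] - profile[i])

# Illustrative gray-level profile: dark web on the left, bright backlight on the right.
row = np.array([10, 12, 11, 40, 180, 200, 198], dtype=float)
print(find_edge(row, threshold=100.0))  # ~3.43 pixels
```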
Vision feedback for fiducial or pad alignment in semiconductor wafer and printed-circuit-board part placement has long been deployed without problems. Indeed, pattern finding and alignment remains one of the largest application spaces in the vision market. Here again, we can usually build light-tight boxes and control background and illumination with an iron fist.
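A basic form of pattern finding is normalized cross-correlation template matching; commercial tools layer rotation- and scale-tolerant geometric matching on top of it. A minimal sketch with OpenCV, where the file names and acceptance threshold are placeholders:

```python
import cv2

# Placeholders: a real system grabs frames live rather than reading files.
image = cv2.imread("board.png", cv2.IMREAD_GRAYSCALE)
fiducial = cv2.imread("fiducial.png", cv2.IMREAD_GRAYSCALE)

# Normalized correlation tolerates uniform brightness changes.
scores = cv2.matchTemplate(image, fiducial, cv2.TM_CCOEFF_NORMED)
_, best_score, _, best_xy = cv2.minMaxLoc(scores)

if best_score > 0.8:  # acceptance threshold: tune for your imagery
    x, y = best_xy    # top-left corner of the best match
    print(f"fiducial at ({x}, {y}), score {best_score:.2f}")
else:
    print("fiducial not found; check illumination and background")
```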
In spite of many successes, ongoing issues continue to plague industrial applications: inconsistent patterns, partially occluded or misshapen marks, and object inconsistency caused by lighting non-uniformity, surface-finish variation, mark discoloration, stamping depth and scratches. There is also a marketing-driven myth of extreme “subpixel resolution” that keeps tripping up engineers trying to solve real problems. However, if you can ensure image consistency, most existing vision solutions will work reliably for alignment, robot guidance and part-tracking applications.
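The subpixel math itself is legitimate; the myth is in the advertised precision. The usual trick is to fit a parabola through a match-score peak and its two neighbors, as sketched below, but a little noise on those neighbors shifts the estimate, so claims of hundredth-of-a-pixel repeatability demand exceptional image consistency to hold up. The score values here are illustrative only.

```python
import numpy as np

def subpixel_peak(scores):
    """Refine a peak location by fitting a parabola through three samples."""
    i = int(np.argmax(scores))
    if i == 0 or i == len(scores) - 1:
        return float(i)  # peak on the boundary: no refinement possible
    left, mid, right = scores[i - 1], scores[i], scores[i + 1]
    return i + 0.5 * (left - right) / (left - 2.0 * mid + right)

# Illustrative match scores along one axis.
scores = np.array([0.20, 0.55, 0.90, 0.70, 0.30])
print(subpixel_peak(scores))  # ~2.14: the true peak sits just right of pixel 2
```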
Ned Lecky is a mechanical and electrical engineer with 25 years of hands-on experience in control systems and machine vision. As owner of Lecky Integration, he consults for OEMs, system integrators, machine vision providers and large end users.