For machine vision, the basic differences between the two main camera sensor technologies are blurring and on the verge of disappearing. As a result, a long-running battle may soon end, and machine builders will then have one less choice to worry about.
Fundamentally, machine vision cameras fall into two camps. On one side are charge-coupled devices (CCDs). The alternative sensors are built using complementary metal-oxide-semiconductor (CMOS) technology, the same process that powers computer chips. The two have long been locked in a struggle that now looks to be ending.
"Just in the past year, it's become pretty clear that CMOS is going to be the dominant one," says Vineet Aggarwal, when discussing the future of the two technologies in industrial machine vision applications. Aggarwal is senior group manager for embedded systems products at National Instruments. The company works with many different vendors of machine vision cameras.
Both CCD and CMOS sensors are silicon-based, and both convert incoming photons into electrons. Of the two, CCD is the older technology. It was the only game in town from the mid-1970s until the mid-1990s, when the first commercial CMOS sensors appeared. One result of this head start is that CCD sensors were traditionally considered higher quality, a reputation defined by two key attributes.
"One of these was better signal-to-noise ratio, and the other was a lower number of dead pixels, or pixels that don't respond to light. But CMOS has improved greatly in the past few years and has closed the gap to the point where the two are almost interchangeable for most applications," says Rick Roszkowski, senior director of marketing for the vision products business unit of Cognex, which offers both CCD and CMOS sensors in its products.
An indication that CMOS vision sensors have gained parity is the fact that Japan's Sony, which has traditionally only produced CCD sensors, released its first global shutter CMOS sensor in early 2014. That development is particularly important to the machine vision market, according to Michael Gibbons, director of sales and marketing for Point Grey Research. The company makes products with both CCD and CMOS vision sensors.
CMOS has traditionally used a rolling shutter that sequentially exposes each line of pixels in the sensor. Thus, not all parts of a scene would be captured at the same instant in time, and so objects moving fast enough could be blurred. With a global shutter, on the other hand, the entire sensor is exposed at the same time, eliminating a source of image distortion. Hence, the growing availability of global-shutter CMOS sensors means the technology is better suited for a wider range of machine vision applications, Gibbons explains.
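The shutter difference described above can be sketched numerically. In this toy model (all parameters hypothetical), a vertical bar moves horizontally at constant speed; a rolling shutter exposes each sensor row slightly later than the one before, so every row records the bar at a different position, while a global shutter samples all rows at the same instant:

```python
# Toy model (hypothetical parameters): why a rolling shutter skews a moving object.
# A vertical bar moves right at constant speed; with a rolling shutter, row r is
# exposed r * T_ROW seconds later than row 0, so each row "sees" the bar at a
# different horizontal position.

ROWS = 8          # number of sensor rows (tiny, for illustration)
T_ROW = 0.001     # seconds between successive row exposures (rolling shutter)
SPEED = 2000.0    # bar speed, in pixels per second

def bar_position(t):
    """Horizontal position of the moving bar at time t, in pixels."""
    return SPEED * t

# Rolling shutter: row r is sampled at time r * T_ROW.
rolling = [bar_position(r * T_ROW) for r in range(ROWS)]

# Global shutter: every row is sampled at the same instant (t = 0 here).
global_ = [bar_position(0.0) for _ in range(ROWS)]

print("rolling:", rolling)   # positions drift row by row -> the bar appears slanted
print("global: ", global_)   # identical positions -> the bar appears straight
```

The row-by-row drift in the rolling-shutter readout is exactly the distortion that blurs or skews fast-moving parts on an inspection line, and why a global shutter matters for machine vision.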
Also important to the growing use of the newer sensor technology is that most consumer digital cameras use CMOS sensors. As a result, CMOS attracts the bulk of research and development spending.
That R&D work exploits a key characteristic of CMOS sensors—the light-to-electron-converting silicon sits adjacent to readout circuitry. This means that individual pixels can be read out in a largely parallel fashion, which makes the vision sensor capable of a faster frame rate. Another consequence is that analog-to-digital converters can be built in, enabling features like integrated gain, offset and dark-level adjustment. This makes it less expensive to integrate the sensor into a vision system and potentially reduces overall cost, according to Roszkowski.
Because CMOS sensor technology is both newer and evolving more rapidly, machine vision applications can benefit from such features as high dynamic range, variable trigger modes, light control output, windowing and on-chip image scaling, as well as the ability to exclude everything outside of multiple regions of interest.
However, it's not quite time just yet to abandon CCD in all machine vision applications. For one thing, the vision technology found in consumer cameras is close to but not exactly the same as that in industrial applications, which means machine vision cameras are not following precisely the consumer device cost curve. One reason is that consumer cameras squeeze a lot of pixels into a tiny chip, which means each pixel is small, perhaps 1.5 microns (µm) across. A machine vision camera has much larger pixels that are typically 4.5 µm in size. The advantage of bigger pixels is they collect more light, but the downside is the sensor chips are larger and therefore more costly for an equivalent number of pixels.
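The pixel-size trade-off above comes down to simple geometry: the light a pixel gathers scales with its area, and so does the silicon a sensor consumes. A quick sketch using the 1.5 µm and 4.5 µm figures from the text (the 2 MP pixel count is a hypothetical example):

```python
# Sketch of the pixel-size trade-off: light gathered scales with pixel area,
# and total sensor area (hence silicon cost) scales the same way.

consumer_pitch = 1.5   # um, typical small consumer-camera pixel (from the text)
machine_pitch = 4.5    # um, typical machine vision pixel (from the text)

# Light collected per pixel is proportional to pixel area (pitch squared).
light_ratio = (machine_pitch / consumer_pitch) ** 2
print(f"A {machine_pitch} um pixel collects ~{light_ratio:.0f}x the light "
      f"of a {consumer_pitch} um pixel.")

def sensor_area_mm2(pitch_um, megapixels=2.0):
    """Active sensor area in mm^2 for a given pixel pitch and pixel count
    (2 MP is a hypothetical example count)."""
    return megapixels * 1e6 * (pitch_um * 1e-3) ** 2

print(f"2 MP at {consumer_pitch} um: {sensor_area_mm2(consumer_pitch):.1f} mm^2")
print(f"2 MP at {machine_pitch} um: {sensor_area_mm2(machine_pitch):.1f} mm^2")
```

With these numbers, the larger machine vision pixel gathers roughly nine times the light, but a sensor with the same pixel count needs roughly nine times the silicon area—which is the cost penalty the text describes.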
"There's only one reason to go CCD over CMOS—low light levels. In all other situations, CMOS offers greater speed and equivalent sensitivity, while benefiting from advances in consumer camera technology," says Joachim Linkemann, senior product manager at Basler. Like the other camera manufacturers, Basler uses both CCD and CMOS sensors in its products.
Thanks to this edge in low-light sensitivity, CCD sensors are still preferred for scientific applications, such as those in astronomy and the life sciences. Both typically involve a photon-poor source, and so need to get the most out of any light that arrives. While industrial users typically don't face the same issues, there can be cases where light levels are low and CCD sensors are a better solution.
On the other hand, good lighting is almost always critical to machine vision success. Hence, the instances in an industrial setting where light levels are low enough to make CCD sensors strongly preferred over their CMOS counterparts could be rare.
However, even this low-light advantage is in jeopardy. Roszkowski reports changes are underway that promise to improve CMOS sensors in this area. "CMOS is moving to back-illuminated designs, which would allow them to be much more sensitive than current CMOS devices in the near future," he says.
This approach puts the light sensing material on the back side of the chip while the circuitry stays on the front. Consequently, there is no decrease of incoming light arising from shadows cast by metal traces, transistors or other circuit components.