In this episode of Control Intelligence, written by contributing editor Rick Rice, editor in chief Mike Bacidore explains the differences between machine vision and computer vision.
Transcript
Vision systems can really be divided into two separate but similar technologies: machine vision and computer vision. Both aim to mimic the function of the human eye, but they differ in their objectives and in how they collect and interpret information.
They both start with a captured image or images. Machine vision might be described more as an inspection tool with a specific goal, while computer vision could be described more as extracting as much information as possible from an image and other related sources. Machine vision depends solely on its own images, while computer vision can combine images from multiple sources to make a determination.
Machine vision, like other aspects of our industry, has evolved as technology has evolved. Some of the earliest vision systems were used in medical-device manufacturing. The cameras of that era had much lower resolution and relied on stationary or very slow-moving processes to capture the image, evaluate the information and make a decision.
One such application looked at a device used during childbirth. The device was produced by an injection-molding machine. The inspection system was used to make sure that the features were completely formed and to make sure that there was no leftover flashing from the molding process. In that early example, we were simply counting pixels to determine if too little or too much product was present on the inspected device.
Inspection methods are called “tools” in machine-vision systems. The toolsets of the earlier vision systems were rudimentary at best. Cameras captured images in black and white, and, with relatively low-resolution cameras, the tools were limited. We could check for the presence or absence of an object or feature, detect protrusions and look for a definable edge. With a reliable, consistent background, we could determine whether a feature was missing by counting the pixels of a known-good test piece and comparing that count to the subject piece. A good piece would fall within a certain range of pixels in the image.
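To make that pixel-counting idea concrete, here is a minimal sketch in Python using OpenCV and NumPy. The threshold value and the acceptable pixel range are illustrative assumptions, not values from the original application; a real system would tune both to the part and lighting.

```python
# A minimal sketch of the pixel-counting inspection described above.
# GOOD_MIN/GOOD_MAX and the threshold of 128 are assumed values.
import cv2
import numpy as np

GOOD_MIN, GOOD_MAX = 41_000, 45_000  # acceptable foreground-pixel range (assumed)

def inspect(image_path: str) -> bool:
    """Return True if the part's foreground pixel count falls in the good range."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    # Separate the part from a consistent background with a fixed threshold.
    _, binary = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)
    count = int(np.count_nonzero(binary))
    # Too few pixels suggests a missing feature; too many suggests excess material.
    return GOOD_MIN <= count <= GOOD_MAX
```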
Another limitation of these early inspection tools was the need to locate the subject piece accurately and repeatably within the camera's field of view. Over the years, this requirement has been eased by more sophisticated algorithms that first locate and orient the object within the field of view and then perform the inspection tasks on that presented image.
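As a rough illustration of that locate-then-inspect pattern, the sketch below uses OpenCV template matching to find the part before any inspection runs. Plain template matching handles translation only, so a real locating tool would add rotation handling; the score threshold here is an assumption.

```python
# A hedged sketch of locating a part in the field of view before inspecting it.
import cv2

def locate_part(scene_gray, template_gray, min_score=0.8):
    """Return (x, y) of the part's top-left corner, or None if not found."""
    result = cv2.matchTemplate(scene_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    # Only accept the match if the correlation score clears the threshold.
    return max_loc if max_val >= min_score else None
```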
One of the major improvements in vision systems has been the size of the package. Earlier systems consisted of a camera on a mount, an independent lighting system, a controlled environment, essentially a box around the inspection area, and a huge box that made up the smarts of the camera system.
Long cables connected the controller to the camera and lights. The controller was essentially an industrial PC. Specialty daughter boards provided the connection to the camera and peripheral devices. The software was proprietary and required a high degree of training to get the desired results.
While some high-end inspection systems still use a version of this architecture, most vision systems now have the lighting, camera and controller/interface all in one package that will fit in the palm of your hand. The system communicates via popular industrial network protocols, such as EtherNet/IP or Profibus. Setup is accomplished via a laptop running vendor-supplied software. Some vendors offer systems where the setup software resides on the camera package itself, so a simple browser connection to the camera is all that is needed to configure the system for operation.
One huge improvement in machine vision has been the real-world application of artificial intelligence to the inspection toolset. Traditionally, rules-based tools are used to make decisions. These include object location, bead and edge detection, measurement tools, and histogram and image-processing tools. More recently, texture- and color-based tools have been developed to complement the base tools.
The introduction of AI tools further enhances the rules-based tools by adding the ability to make determinations that would otherwise be far too complicated. These so-called deep-learning tools start with the introduction of many examples of good and bad product images. The larger the sample base, the more reliable the results.
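A minimal sketch of how such a deep-learning tool might be trained, assuming labeled folders of good and bad product images; the PyTorch model, image size and training loop below are simplified placeholders, not any vendor's actual implementation.

```python
# Train a binary good/bad classifier from example images.
# Assumes a hypothetical folder layout: samples/good/*.png and samples/bad/*.png.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
data = datasets.ImageFolder("samples", transform=tfm)  # classes: bad, good
loader = DataLoader(data, batch_size=16, shuffle=True)

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: good / bad
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):  # epoch count is arbitrary for the sketch
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```

The key point matches the text: the classifier's accuracy comes from the breadth of the labeled sample base, not from hand-written rules.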
One such application of AI tools is an alternative means of checking for open flaps on a carton. Traditionally, a series of photo eyes located at strategic places on an exit conveyor would be used to detect open major and minor flaps on a carton.
Cartons have minor flaps that fold down first; the major, or longer, flaps then fold down over the minor ones to seal the carton when a bead of glue is applied prior to folding the final flap down. Periodically, a minor flap will get missed or improperly folded and will stick completely out of the carton. Alternatively, the minor flap might be only partially folded, and a portion will stick out of the finished package.
Similarly, the final major flap may not get a good glue application and will not completely glue down to the opposing major flap to seal the package. Applying an array of photo sensors in a tight formation around the carton as it exits the packaging machine can catch any of these poorly sealed cartons, but this is a tedious task to set up and often misses packages.
The application is further complicated by the fact that the cartons are filled and sealed in a vertical orientation. After the final glue station, the cartons transition from vertical to horizontal to continue down the packaging line.
This is accomplished by simply knocking over the carton after it exits the machine. Unlike a horizontal cartoner, where the location of the package is absolutely controlled as it exits the machine, the vertical carton's final position on the exit conveyor is somewhat random, in that it might not be square to the conveyor, so the conventional photo-eye array will not work.
Choosing a vision system with advanced AI tools provides an answer to this issue. A single camera looking straight down gives only profiles of the cartons. We could present multiple passes of good and bad cartons, but that is still limited to two-dimensional profile views of the samples.
Adding more cameras, some mounted on angles rather than straight down, provides a much larger database of good and bad product profiles. The system is taught by passing a package through the field of view and then telling the system whether the product is acceptable. The same product can be passed through at different angles of skew, and the products can be flipped over to present an opposite view of the same defect. The system learns from each pass and builds a larger sample base upon which to make decisions. This deep-learning approach is key to coming up with a good solution for the application.
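In software terms, that teach-by-example process resembles data augmentation: each captured image can also be presented skewed and flipped to enlarge the sample base. The torchvision sketch below is illustrative; the rotation range and flip probability are assumptions.

```python
# Augmentation pipeline approximating the varied presentations described above.
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=15),    # simulate cartons skewed on the conveyor
    transforms.RandomHorizontalFlip(p=0.5),   # opposite view of the same defect
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# Substituting `augment` for the plain transform in the training sketch earlier
# multiplies the effective variety of good and bad examples.
```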
Machine vision does more than just parts inspection. The ever-expanding use of robots is further enhanced by vision systems. Packaging machines, for example, use a vision system to indicate not only the presence of a package but also its orientation.
This is especially important in applications where a product must be picked up at its geometric center and also accurately oriented for proper placement in the finished container. Identifying each object precisely as it moves down a conveyor and commanding the robot to pick up the package in the correct position and orientation is extremely important, and vision systems are more than up to the task.
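As a sketch of how a vision system can report both a pick point and an orientation, the OpenCV snippet below computes the largest object's geometric center from image moments and its angle from a minimum-area rectangle fit; the binarization threshold is an assumption.

```python
# Derive a pick pose (center + angle) for the largest object in a grayscale image.
import cv2

def pick_pose(gray):
    """Return ((cx, cy), angle_deg) for the largest object, or None if none found."""
    _, binary = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    if m["m00"] == 0:
        return None
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]  # geometric center
    (_, _), (_, _), angle = cv2.minAreaRect(largest)   # orientation in degrees
    return (cx, cy), angle
```

In practice the pixel coordinates and angle would be transformed into the robot's coordinate frame through a calibration step before commanding the pick.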
Like any camera application, lighting and background are very important to the overall success of the process. Advances in the types of lighting and the use of appropriate filters, based on the characteristics of the product being inspected, are additional considerations when applying machine vision to a project.
Machine-vision applications will continue to grow, and the reduction in package size, coupled with ease of use, will make them more attractive in the years to come. Costs continue to come down, making this element of a control system even more desirable in future designs. Many manufacturers provide add-on profiles that work with your favorite programmable controllers, so integrating a vision system into the final package is an appealing consideration.
If you've enjoyed this episode of Control Intelligence, don't miss our older episodes, and subscribe to find new podcasts in the future. You can find our podcast library at ControlDesign.com/Podcasts, or you can download all episodes through Apple Podcasts.