One of our clients, a food processor, fills an average of 800 cans per minute with a wide range of products such as vegetables, fruit and soup. An important challenge for the company is to ensure that the code printed on the bottom of each can matches the product in the can.
Current and upcoming food traceability legislation requires that food-processing companies have systems in place to provide a trail of information that follows each food item through the supply chain. To ensure food safety and efficient recalls, manufacturers must be able to identify and locate any item in that supply chain, and quickly trace it back to its source and forward to its destination.
To meet these requirements, many companies are implementing 2D barcodes, vision systems and image-based ID readers across the supply chain.
Many food canners use a "bright stacking" process: uncased, unlabeled cans are stored, commonly on pallets in a warehouse, after sterilization and cooling. The only indication of what's inside each can is a text code printed on its end. When the time comes to label a can, a vision system reads that code to make sure the right label is applied, then verifies that the can contents and label match.
The food processor in this application decided to begin reading the code on the bottom of each can to be sure that it's readable and matches the contents of the can. The goals were to prevent mislabeled products from reaching customers, prevent accidental product mixing on the line, and prevent shipping of mixed products to customers. The code is inspected directly after the canning process and then again just before the label is applied.
These goals were achieved with the installation of a machine vision system that reads the code on the can in just 60 ms—enough time to reject cans with incorrect or unreadable codes. The vision system uses pattern-matching to orient the code regardless of radial position, and then optical character recognition reads the code and matches it against the product being produced on the line.
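The accept/reject logic described here can be sketched as a simple decision function. The function name, the code values and the treatment of an unreadable code as `None` are illustrative assumptions, not the plant's actual software:

```python
# Hypothetical sketch of the accept/reject decision described above.
# A real In-Sight job implements this inside the vision system's own
# tool chain; the names and code values here are illustrative only.

def inspect_can(read_code, expected_code):
    """Return 'ACCEPT' if the OCR result matches the product being run
    on the line, otherwise 'REJECT' (unreadable or mismatched code)."""
    if read_code is None:           # OCR failed: code unreadable
        return "REJECT"
    if read_code != expected_code:  # wrong product on the line
        return "REJECT"
    return "ACCEPT"

print(inspect_can("SOUP", "SOUP"))  # matching code
print(inspect_can("CORN", "SOUP"))  # wrong product
print(inspect_can(None, "SOUP"))    # unreadable code
```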
The greatest challenges in the application are the high speed of the line, the fact that the code can sit at any radial position on the can as it moves down the line, and the varying finish of the cans, which ranges from dull to bright and can include watermarks. The food processor asked Puffin Automation in Eden Prairie, Minnesota, to find a way to print the code, read the code and reject cans with incorrect or unreadable codes. This was our first project with this customer, which had previously relied on periodic manual inspection by operators.
"We had to locate the four-character code in the radial direction, then read it within the 75-millisecond spacing between parts," says Darin Berg, partner at Puffin Automation. "We needed an exceptionally fast camera that could determine the radial position of the code quickly, and then provide a near-100% read rate, despite inevitable distortion of characters and variation in the background."
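The timing figures quoted in this article are consistent with the line speed: 800 cans per minute works out to exactly 75 ms of spacing per can, and a 60 ms read leaves a 15 ms margin for the reject decision. A quick check:

```python
# Line-speed arithmetic from the figures in this article: 800 cans per
# minute gives the per-can time budget, and the reported 60 ms read
# time determines the remaining margin for the reject decision.

cans_per_minute = 800
spacing_ms = 60_000 / cans_per_minute   # ms available per can
read_time_ms = 60                       # reported read time
margin_ms = spacing_ms - read_time_ms

print(spacing_ms)  # 75.0
print(margin_ms)   # 15.0
```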
Puffin Automation selected a Cognex In-Sight 5600 series vision system, the company's fastest. In-Sight vision systems use PatMax, a geometric pattern-matching technology for part and feature location. PatMax dramatically improves the ability to find objects despite changes in angle, size and shading. It learns an object's geometry using a set of boundary curves that are not tied to a pixel grid, and it then looks for similar shapes in the image without relying on specific gray levels. Image resolution is 640 x 480 (VGA) for this process. Once the code has been located, the system's OCRMax tool reads the characters.
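PatMax itself matches boundary geometry, not text, and its internals are proprietary. Purely as an illustration of the search idea, locating a pattern at an unknown radial position can be modeled as finding the rotation of a circular sequence that best matches a trained pattern:

```python
# Stand-in for the "locate at any radial position" step: model the can
# end as a circular sequence and search every rotation for the offset
# where a trained pattern scores best. This is only an illustration of
# the exhaustive-search idea, not how PatMax actually works.

def best_rotation(circular, pattern):
    """Return the offset at which `pattern` best matches the circular
    sequence, scored by the number of agreeing characters."""
    n = len(circular)
    doubled = circular + circular  # unwrap the circle

    def score(off):
        return sum(a == b for a, b in zip(doubled[off:off + len(pattern)], pattern))

    return max(range(n), key=score)

ring = "....A123........"
print(best_rotation(ring, "A123"))  # 4
```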
"What makes OCRMax unique in the industry is that it can read very challenging text, but is very simple to use," states Cognex's Ron Pulicari, marketing manager for the Americas. "The first step is to draw a region, kind of like using an image crop tool, around the characters you wish to read. OCRMax then automatically segments each character it finds. If the text is too hard to read—if, for example, the characters are touching, or they're too poorly printed to make out—OCRMax allows you to modify the automatic segmentation if necessary to properly segment the characters. After that, you just train your characters by simply typing in the text that OCRMax is looking at. Once you've trained the characters, you have the option of providing fielding information to further improve accuracy. Fielding essentially allows you to tell OCRMax if it should expect numbers, letters or both in certain parts of the text."
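The fielding step Pulicari describes can be approximated, purely as an illustration, with a regular expression that constrains which character class may appear at each position. The two-letter/two-digit layout here is an assumed example, not the plant's actual code format:

```python
# Hypothetical stand-in for OCRMax fielding: restrict each position of
# the four-character code to an expected character class, so OCR
# results that violate the layout are rejected outright.

import re

FIELDING = re.compile(r"^[A-Z]{2}[0-9]{2}$")  # two letters, two digits

def passes_fielding(text):
    """Reject OCR results that violate the expected field layout."""
    return bool(FIELDING.fullmatch(text))

print(passes_fielding("AB12"))  # True
print(passes_fielding("A812"))  # False: digit where a letter belongs
```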
Berg adds, "Cognex is the preferred vision sensor at this facility. The new OCRMax tool provided a robust solution, so there wasn't any need to evaluate other brands and options."