
Another look at vision systems

Jan. 22, 2021
Smaller applications can benefit from pared-down systems, and deep learning applications are growing for manufacturers of all sizes

Machine vision is part of most current machine applications. A plethora of options exists to match the right system with exactly what the application needs, no more and no less. Three manufacturers, Balluff, Pepperl+Fuchs and Cogniac, discuss their unique solutions. For three more machine vision building blocks, read "Take a closer look at vision systems."

Abbreviated package

For users that want more than a vision sensor, but not a fully stocked vision system, Balluff offers the SmartCamera Lite. “We currently have a smart camera offering, a fully fledged vision system, all in one, with embedded processor and software, and a bunch of extra stuff,” says Logan Welch, technical sales specialist for machine vision at Balluff. “However, we found that some people don’t need all those bells and whistles but don’t want to go down to a vision sensor.” The SmartCamera Lite is a “true vision system in an abbreviated package,” he says.

Balluff has had a lot of success with the original SmartCamera, Welch says, with customers that have reached the limits of a vision sensor but don’t need the full controller and multiple camera heads. “A good fit for this would be if somebody has a challenging application that they’ve struggled with solving with a vision sensor, and they’re ready to make that step up. This is that bridge product to bring you into a vision system,” Welch says.

The Balluff SmartCamera Lite is used for product quality control and defect detection (Figure 1). It can detect machine codes and text, including serial numbers, and can assist robots in position finding. Ethernet TCP/UDP and RS-232 are the only communications that the Lite version of the SmartCamera offers. “We stripped down the SmartCamera to make it cost-effective,” Welch says. An ideal application for the SmartCamera Lite, he says, would be a stand-alone machine or challenging vision applications that don’t require multiple cameras.

The solution is also very scalable, Welch says, with easy upgrades for firmware revisions or tool updates. “It’s basically a drag-and-drop process,” Welch says. It’s also simple to expand. “The beauty of being on an Ethernet bus is you can put as many cameras on it as you would like,” he says. “It’s cost-effective and you can buy multiple and still be at a lower investment than a vision controller.”

The system has enough storage to save images from the past few days, but if customers need more, Balluff can easily offload that data onto a file server.

“We really see the evolution away from vision sensors, into high-power vision systems and that’s why we’ve designed this product to meet the needs of that with the bells and whistles stripped down to help with cost,” Welch says.

Light section technology

For simpler machine vision applications that are less dependent on external lighting configurations, a sensor, rather than a full vision system, can adequately do the job. Pepperl+Fuchs offers the SmartRunner, which uses light section technology to make it a stand-out sensor for profile matching or parts-comparison inspections. Without external lighting and a control box, the stand-alone sensor works as one device and is typically smaller than most vision solutions, says Gerry Paci, product manager for vision products at Pepperl+Fuchs. The light section technology also allows for a smaller design.

“We promote it as something that is very simple,” Paci says. Light section technology is relatively new, he says, having been on the market for about five years.

“We’ve been pushing it as an alternative to the standard options when you talk about machine vision technology in general,” Paci says.

Figure 2: Light section technology is triangulation-based, which helps shrink the size of the housing.

The technology is triangulation-based. “It projects light onto the object, whether it’s a car part or a bottle, whatever the case may be, and the image it projects back gives us tons of details,” Paci says. “You get a profile that’s created when that snapshot is taken.”

Color, shine or shape doesn’t matter. “That’s one of the big advantages over a vision system. Whether the target is blue or pink, the SmartRunner technology does not care. It’s looking for a profile, so as long as that matches what you taught it within the tolerances that you allotted, you will get a reliable result every time,” Paci says.
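The match-within-tolerances idea Paci describes can be sketched in a few lines of Python. This is only an illustration, not Pepperl+Fuchs' actual algorithm; the profile format (equal-length lists of height samples in millimeters) and the `tol_mm` parameter are assumptions:

```python
def profile_matches(taught, measured, tol_mm=0.5):
    """Return True if every point of the measured profile lies within
    tol_mm of the taught reference profile (hypothetical format:
    equal-length lists of height samples in millimeters)."""
    if len(taught) != len(measured):
        return False
    return all(abs(t - m) <= tol_mm for t, m in zip(taught, measured))
```

A real sensor would also handle alignment and missing points, but the principle is the same: any deviation beyond the allotted tolerance fails the match.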

Light section technology uses a diffractive optical element (DOE), which helps to shrink the size of the housing (Figure 2). The triangulation profile in reference to where the camera and laser light are, along with some innovative mirrors, makes this design compact. “The key there is the diffractive optical element inside,” Paci says.
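The triangulation geometry behind this can be illustrated with the classic similar-triangles relation: distance = focal length × baseline / pixel displacement. The sketch below is a simplification (a real light-section sensor calibrates the full laser-camera geometry), and the function names and values are illustrative only:

```python
def triangulated_distance(disparity_px, focal_px, baseline_mm):
    """Classic triangulation: distance = f * b / disparity.
    Larger pixel displacement means the surface is closer."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

def height_profile(disparities, focal_px, baseline_mm, reference_mm):
    """Height of the surface above a flat reference plane, one value
    per image column of the projected laser line."""
    return [reference_mm - triangulated_distance(d, focal_px, baseline_mm)
            for d in disparities]
```

For example, with a 1,000-pixel focal length and a 50 mm baseline, a column whose laser line shifts by 100 pixels resolves to 500 mm away; columns with larger shifts resolve to raised features on the part.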

In the right application, this solution can provide a huge reduction in cost and installation time. “Parts verification, profile matching and profile comparisons, these are the applications where this technology really shines,” Paci says.

The technology is geared toward providing a simple yes-or-no result, but it does have more complex capabilities built in. It can provide raw data for evaluation or set tolerances for position, but, in general, the technology is sold as quick and simple. Because it needs no external lighting or additional controllers, it can also lower application costs.

The vision configurator software has a very small learning curve, Paci says, and it also includes a wizard that provides a tutorial for those who need more instruction. “If you have vision experience, the software is simple to use,” Paci says.

The system also makes applications very scalable. “Let’s say you have 10 to 20 sensors in your application. Once you program one sensor, you can save the data matrix code that has all that data inside one code. You can print it out and have it on your computer, and you can take a picture for the other sensors, right out of the box, and now they have the same program features,” Paci says.
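The cloning workflow Paci describes (serialize one sensor's configuration into a code, then show that code to the other sensors) can be approximated as below. This is a sketch only: the JSON payload and the setting names are invented, and rendering or scanning the actual Data Matrix symbol is the device's job:

```python
import json

def settings_to_payload(settings):
    """Serialize a sensor configuration to a compact string. A real
    sensor would render this payload as a printable Data Matrix code;
    the setting names used in the test are invented examples."""
    return json.dumps(settings, sort_keys=True)

def payload_to_settings(payload):
    """Decode a scanned payload back into a configuration dict,
    ready to apply to a factory-fresh sensor."""
    return json.loads(payload)
```

The round trip is lossless, which is what makes "take a picture, get the same program features" work across 10 or 20 sensors.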

Also read: Machine Vision Sees Further, Faster

The two biggest markets where this sensor has been successful are parts verification for the automotive industry and material handling. “In material handling, it’s quite vast,” Paci says. “It really has a lot of different applications. Packaging is another market where the sensor is shining.”

Continuous improvement

Artificial intelligence (AI)-based machine vision inspection tasks are getting smarter and quicker with deep learning and hyperparameter optimization. Cogniac offers a software-as-a-service (SaaS) AI platform that requires less expertise to deploy and adapt.

“It’s a drag-and-drop interface,” says Amy Wang, co-founder and vice president of systems for Cogniac. Users can create a workflow, with cameras and images coming in, and users designate what they want the system to detect. “Our system guides you through labeling some images,” Wang says. “Deep learning is what we call supervised learning. You train by examples. You provide examples of what you’re looking for.”
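Wang's point about supervised learning, training by labeled examples, can be shown with a deliberately tiny stand-in for a deep network: a nearest-centroid classifier. This is not Cogniac's model, only a sketch of the train-on-examples idea; the feature vectors and labels are invented:

```python
def train_centroids(examples):
    """Train by examples: average the feature vectors seen for each
    label. examples is a list of (feature_vector, label) pairs."""
    sums, counts = {}, {}
    for vec, label in examples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def predict(centroids, vec):
    """Classify a new vector as the label with the nearest centroid."""
    def sq_dist(c):
        return sum((a - b) ** 2 for a, b in zip(c, vec))
    return min(centroids, key=lambda lab: sq_dist(centroids[lab]))
```

A deep network learns far richer features, but the workflow is the same: label some examples of what you are looking for, train, then predict on new images.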

What makes Cogniac unique is how it leverages its hyperparameter optimization engine. Many deep-learning applications need data scientists to build the proper models for the application and implement solutions. “We have an algorithm that is able to automatically search through a vast parameter space to find the better model for your application,” Wang says.

Millions of parameters need to be configured to produce a good model, which is time-consuming work and requires highly skilled people with doctorate degrees. “There are many different competing families of open-source architecture,” Wang says. Many researchers are working in the field to develop better models every day. “What we did is we took all the best of what is out there, and theirs has become one of ours,” Wang says. The software automatically selects which architecture or which school of thought works better for the application.
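The automatic search through a parameter space that Wang describes can be sketched as a simple random search over candidate settings. Real hyperparameter optimization engines are far more sophisticated; the candidate dictionary and scoring function here are invented for illustration:

```python
import random

def random_search(candidates, evaluate, trials=20, seed=0):
    """Randomly sample combinations from a space of candidate settings
    (e.g. architecture family, depth, learning rate) and keep the one
    with the best score. candidates maps a name to a list of options."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(trials):
        params = {name: rng.choice(options)
                  for name, options in candidates.items()}
        score = evaluate(params)
        if score > best_score:
            best, best_score = params, score
    return best, best_score
```

In practice, `evaluate` would train and validate a model per trial, which is why automating this search saves so much expert time.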

Wang says AI-powered visual inspection solutions are used in five main types of applications: inspection, grading, kitting, measurement and asset tracking/management. Inspection is the most common use case for the Cogniac system. As an example, Cogniac works with a leading automobile manufacturer to help it improve the quality and safety of the aluminum-stamped products coming off its production line. A new 10-by-7-foot sheet of metal is stamped every four seconds, which is physically impossible for humans to inspect. The Cogniac system uses 29 high-resolution GigE Vision cameras to take images and flag defects.

Figure 3: A Georgia Pacific logging facility uses software to assess the form, health, size and shape of timber.

A Georgia Pacific (GP) logging facility uses Cogniac software to assess the form, health, size and shape of timber, which dictates payment for the loads (Figure 3). The mill doesn’t want logs that are too small, and logs that are too large can disrupt the milling process. Instead of human operators eyeballing timber sizes, GP contracted Cogniac to use computer-vision-based measurement to determine exact log size in real time on the trucks.

The system uses deep-learning models based on a convolutional neural network, a model loosely inspired by how the human brain connects neurons. “Our system evaluates which method is better. That happens automatically. With many other systems, you have to have someone that understands neural network models. In our system, it’s totally automatic,” Wang says.

Data augmentation is another area where Cogniac excels, taking data and manipulating it in different ways for training data-hungry deep-learning models. Typically, deep learning requires millions of labeled images to represent and train for every scenario. If users want to add new defects or additional functions, the neural network needs more training. With hyperparameter optimization, Cogniac can do some of that data augmentation automatically, reducing the time and labor needed for labeling images.
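In its simplest form, data augmentation generates label-preserving variants of each training image, for example flips and rotations, so one labeled example teaches the model several. The sketch below operates on a toy image stored as nested lists; production systems use image libraries and many more transforms:

```python
def augment(image):
    """Given an image as a list of pixel rows, return the original
    plus three label-preserving variants: horizontal flip, vertical
    flip, and 180-degree rotation."""
    h_flip = [row[::-1] for row in image]      # mirror left-right
    v_flip = image[::-1]                       # mirror top-bottom
    rot180 = [row[::-1] for row in image[::-1]]  # flip both axes
    return [image, h_flip, v_flip, rot180]
```

Each variant inherits the original's label, which is how augmentation multiplies a labeled dataset without any extra labeling effort.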

The system is also designed to involve human input when needed. “The beauty of deep learning is the model knows and predicts with confidence. It knows when it’s confident about its prediction or not,” Wang says. “If it’s not confident, it’s time to look into it because something has changed.”
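The confidence-based escalation Wang describes, auto-accepting confident predictions and routing uncertain ones to a person, can be sketched as a simple threshold. The 0.9 cutoff and the routing labels are assumptions, not Cogniac's actual values:

```python
def route(prediction, confidence, threshold=0.9):
    """Route a model prediction based on its confidence score
    (0.0 to 1.0). Confident results pass through automatically;
    uncertain ones are flagged for human review."""
    return "auto-accept" if confidence >= threshold else "human-review"
```

Low-confidence results are exactly the images worth a second look, because, as Wang notes, they usually mean something in the process has changed.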

The system typically requires an integrator for initial setup, but for small jobs Cogniac can do setup internally. “Our edge device, which typically sits in the production line, next to the camera, has lots of functionality to make the setup process as painless as possible,” Wang says.

Over time, the system gets smarter and is continuously improving on the model and the inspection workflow. “I like the concept of continuous improvement. It naturally works with our system and deep-learning systems and naturally fits with the manufacturing world. For all the top manufacturers, continuous improvement is in their bones. That’s how they get better over time,” Wang says. “It’s a snowballing effect. You just have to kick it once to start it rolling down the hill. Once it is rolling, it’s getting faster and more powerful.”

About the author: Anna Townshend
Anna Townshend has been a writer and journalist for almost 20 years. Previously, she was the editor of Marina Dock Age and International Dredging Review, published by The Waterways Journal, until she joined Putman Media in June 2020. She is the managing editor of Control Design and Plant Services. Email her at [email protected].