Machine vision is part of most current machine applications, and a plethora of options exists to match the right system with exactly what the application needs, no more and no less. Three manufacturers, Balluff, Pepperl+Fuchs and Cogniac, discuss their unique solutions. For three more machine vision building blocks, read "Take a closer look at vision systems."
Abbreviated package
For users who want more than a vision sensor, but not a fully stocked vision system, Balluff offers the SmartCamera Lite. "We currently have a smart camera offering, a fully fledged vision system, all in one, with embedded processor and software, and a bunch of extra stuff," says Logan Welch, technical sales specialist for machine vision at Balluff. "However, we found that some people don't need all those bells and whistles but don't want to go down to a vision sensor." The SmartCamera Lite is a "true vision system in an abbreviated package," he says.
Balluff has had a lot of success with the original SmartCamera, Welch says, with customers who have reached the limits of a vision sensor but don't need the full controller and multiple camera heads. "A good fit for this would be if somebody has a challenging application that they've struggled to solve with a vision sensor, and they're ready to make that step up. This is that bridge product to bring you into a vision system," Welch says.
The Balluff SmartCamera Lite is used for product quality control and defect detection (Figure 1). It can detect machine codes and text, including serial numbers, and can assist robots in position finding. Ethernet TCP/UDP and RS-232 are the only communications that the Lite version of the SmartCamera offers. "We stripped down the SmartCamera to make it cost-effective," Welch says. An ideal application for the SmartCamera Lite, he says, would be a stand-alone machine or a challenging vision application that doesn't require multiple cameras.
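Those interfaces make it straightforward to pull inspection results into a PC or controller over the plant network. As a rough illustration only, assuming the camera has been configured to push one line-delimited ASCII result string per inspection over TCP (a made-up format with placeholder IP address and port, not Balluff's documented protocol), a simple client might look like this:

```python
import socket

# Hypothetical example: poll a camera that has been configured to send one
# ASCII result line per inspection over TCP. The IP address, port and message
# format here are assumptions for illustration only.
CAMERA_IP = "192.168.0.10"
CAMERA_PORT = 50000          # placeholder; use the port set in the camera configuration

def read_results():
    with socket.create_connection((CAMERA_IP, CAMERA_PORT), timeout=5) as conn:
        buffer = b""
        while True:
            chunk = conn.recv(1024)
            if not chunk:
                break                      # camera closed the connection
            buffer += chunk
            while b"\n" in buffer:
                line, buffer = buffer.split(b"\n", 1)
                # e.g. "PASS;code=SN12345" in this made-up format
                yield line.decode("ascii").strip()

if __name__ == "__main__":
    for result in read_results():
        print("inspection result:", result)
```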
The solution is also very scalable, Welch says, with easy upgrades for firmware revisions or tool updates. "It's basically a drag-and-drop process," Welch says. It's also simple to expand. "The beauty of being on an Ethernet bus is you can put as many cameras on it as you would like," he says. "It's cost-effective and you can buy multiple and still be at a lower investment than a vision controller."
The system has enough storage to save images from the past few days, but if customers need more, Balluff can easily offload that data onto a file server.
"We really see the evolution away from vision sensors, into high-power vision systems, and that's why we've designed this product to meet the needs of that with the bells and whistles stripped down to help with cost," Welch says.
Light section technology
For a simpler machine vision application that is less dependent on external lighting configurations, a sensor, rather than a full vision system, can adequately do the job. Pepperl+Fuchs offers the SmartRunner, which uses light section technology to make it a stand-out sensor for profile matching or parts-comparison inspections. Without external lighting and a control box, the stand-alone sensor works as one device and is typically smaller than most vision solutions, says Gerry Paci, product manager for vision products at Pepperl+Fuchs. The light section technology also allows for a smaller design.
"We promote it as something that is very simple," Paci says. Light section technology is relatively new, he says, but it's been out for about five years.
"We've been pushing it as an alternative to the standard options when you talk about machine vision technology in general," Paci says.
Figure 2: Light section technology is triangulation-based, which helps shrink the size of the housing.
The technology is triangulation-based. "It projects light onto the object, whether it's a car part or a bottle, whatever the case may be, and it projects back, and the image it projects back gives us tons of detail," Paci says. "You get a profile that's created when that snapshot is taken."
Color, shine or shape doesn't matter. "That's one of the big advantages over a vision system. Whether the target is blue or pink, the SmartRunner technology does not care. It's looking for a profile, so as long as that matches what you taught it within the tolerances that you allotted, you will get a reliable result every time," Paci says.
Light section technology uses a diffractive optical element (DOE), which helps to shrink the size of the housing (Figure 2). The triangulation geometry, where the camera sits relative to the laser light, along with some innovative mirrors, makes this design so compact. "The key there is the diffractive optical element inside," Paci says.
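To picture the triangulation principle in the abstract (a simplified sketch, not the SmartRunner's actual optics or calibration model), the height of each point along the projected laser line can be recovered from how far the line shifts sideways in the camera image. The viewing angle and image scale below are made-up example values:

```python
import math

# Simplified laser-triangulation sketch, for illustration only.
# The laser projects a line straight down onto the part; the camera views the
# scene at an angle. A raised surface shifts the imaged line sideways, and
# that shift encodes height.
CAMERA_ANGLE_DEG = 30.0      # assumed angle between laser plane and camera axis
MM_PER_PIXEL = 0.05          # assumed image scale at the reference plane

def height_profile(line_shift_pixels):
    """Convert per-column line shifts (in pixels) into a height profile in mm."""
    tan_a = math.tan(math.radians(CAMERA_ANGLE_DEG))
    return [shift * MM_PER_PIXEL / tan_a for shift in line_shift_pixels]

# Example: shifts measured along the laser line for one snapshot
shifts = [0, 0, 4, 9, 9, 9, 4, 0, 0]
profile_mm = height_profile(shifts)
print([round(h, 2) for h in profile_mm])
```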
In the right application, this solution can provide a huge reduction in cost and installation time. "Parts verification, profile matching and file comparisons, these are the applications where this technology really shines," Paci says.
The technology is geared toward providing a simple yes-or-no result, but it does have more complex capability built in. It can provide raw data for evaluation or set tolerances for position, but, in general, the technology is sold as quick and simple. Because it needs no external lighting or additional controllers, it can also lower application costs.
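Conceptually, that yes-or-no decision comes down to comparing the measured profile against the taught reference point by point. A minimal sketch of the idea, with made-up profile values and tolerance rather than real sensor data or the vendor's matching algorithm, might look like this:

```python
# Minimal sketch of profile matching against a taught reference.
# Profiles and tolerance are illustrative values only.

def profile_ok(taught, measured, tolerance_mm=0.5):
    """Return True if every measured point lies within tolerance of the reference."""
    if len(taught) != len(measured):
        return False
    return all(abs(t - m) <= tolerance_mm for t, m in zip(taught, measured))

taught_profile = [0.0, 0.0, 2.1, 4.8, 4.8, 4.8, 2.1, 0.0, 0.0]
good_part      = [0.1, 0.0, 2.0, 4.9, 4.7, 4.8, 2.2, 0.0, 0.1]
deformed_part  = [0.1, 0.0, 2.0, 3.2, 4.7, 4.8, 2.2, 0.0, 0.1]

print(profile_ok(taught_profile, good_part))      # True  -> "yes"
print(profile_ok(taught_profile, deformed_part))  # False -> "no"
```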
The vision configurator software has a very small learning curve, Paci says, and it also includes a wizard that provides a tutorial for those who need more instruction. "If you have vision experience, the software is simple to use," Paci says.
The system also makes applications very scalable. "Let's say you have 10 to 20 sensors in your application. Once you program one sensor, you can save the data matrix code that has all that data inside one code. You can print it out and have it on your computer, and you can take a picture for the other sensors, right out of the box, and now they have the same program features," Paci says.
The two biggest markets where this sensor has been successful are parts verification for the automotive industry and material handling. "In material handling, it's quite vast," Paci says. "It really has a lot of different applications. Packaging is another market where the sensor is shining."
Continuous improvement
Artificial intelligence (AI)-based machine vision inspection tasks are getting smarter and quicker with deep learning and hyperparameter optimization. Cogniac offers a software-as-a-service (SaaS) AI platform that requires less expertise to deploy and adapt.
"It's a drag-and-drop interface," says Amy Wang, co-founder and vice president of systems for Cogniac. Users can create a workflow, with cameras and images coming in, and designate what they want the system to detect. "Our system guides you through labeling some images," Wang says. "Deep learning is what we call supervised learning. You train by examples. You provide examples of what you're looking for."
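Cogniac's own tooling is a web interface rather than code, but the underlying idea of training by labeled examples can be sketched generically. The snippet below is a bare-bones PyTorch illustration with random stand-in tensors in place of real labeled images; it is not Cogniac's platform or API:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

# Generic "training by examples" sketch: labeled images go in, a classifier
# that predicts "good" vs. "defect" comes out. The tensors below are random
# placeholders standing in for images a user labeled.
images = torch.rand(64, 3, 64, 64)            # 64 labeled example images
labels = torch.randint(0, 2, (64,))           # 0 = good, 1 = defect
loader = DataLoader(TensorDataset(images, labels), batch_size=8, shuffle=True)

model = nn.Sequential(                        # a tiny convolutional classifier
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(16 * 16 * 16, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):                        # learn from the labeled examples
    for x, y in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
```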
What makes Cogniac unique is how it leverages its hyperparameter optimization engine. Many deep-learning applications need data scientists to build the proper models for the application and implement solutions. "We have an algorithm that is able to automatically search through a vast parameter space to find the better model for your application," Wang says.
Millions of parameters need to be configured to produce a good model, which is time-consuming work and requires highly skilled people with doctorate degrees. "There are many different competing families of open-source architecture," Wang says. Many researchers are working in the field to develop better models every day. "What we did is we took all the best of what is out there, and theirs has become one of ours," Wang says. The software automatically selects which architecture or which school of thought works better for the application.
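Cogniac's search engine is proprietary, but the general idea of automated model selection can be sketched: sample candidate architectures and hyperparameters from a search space, train and validate each briefly, and keep the best performer. The search space, candidate count and scoring stub below are illustrative placeholders:

```python
import random

# Generic sketch of automated hyperparameter/architecture search.
SEARCH_SPACE = {
    "architecture": ["resnet-style", "vgg-style", "mobilenet-style"],
    "learning_rate": [1e-2, 1e-3, 1e-4],
    "batch_size": [8, 16, 32],
    "augmentation": [True, False],
}

def train_and_validate(config):
    """Stand-in for a short training run; returns a validation score."""
    return random.random()   # replace with real training and validation

best_score, best_config = -1.0, None
for _ in range(20):                                   # sample 20 candidates
    config = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
    score = train_and_validate(config)
    if score > best_score:
        best_score, best_config = score, config

print("selected model:", best_config, "score:", round(best_score, 3))
```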
Wang says the AI-powered visual inspection solution is used in five main types of applications: inspection, grading, kitting, measurement and asset tracking/management. Inspection is the most common use case for the Cogniac system. As an example, Cogniac works with a leading automobile manufacturer to help it improve the quality and safety of its stamped-aluminum products coming off the production line. A new 10-by-7-foot sheet of metal is stamped every four seconds, which is physically impossible for humans to inspect. The Cogniac system uses 29 high-resolution GigE Vision cameras to take images and flag defects.
Figure 3: A Georgia Pacific logging facility uses software to assess the form, health, size and shape of timber.
A Georgia Pacific (GP) logging facility uses Cogniac software to assess the form, health, size and shape of timber, which dictates payment for the loads (Figure 3). The mill doesn't want logs that are too small, and logs that are too large can disrupt the milling process. Instead of human operators eyeballing timber sizes, GP contracted Cogniac to use computer-vision-based measurement to determine exact log size in real time on the trucks.
The system uses deep-learning models based on a convolutional neural network, which tries to model how the human brain works and the connections between its neurons. "Our system evaluates which method is better. That happens automatically. With many other systems, you have to have someone that understands neural network models. In our system, it's totally automatic," Wang says.
Data augmentation is another area where Cogniac excels, taking data and manipulating it in different ways to train data-hungry deep-learning models. Typically, deep learning requires millions of labeled images to represent and train for every scenario. If users want to add new defects or additional functions, the neural network needs more training. With hyperparameter optimization, Cogniac can do some of that data augmentation automatically, reducing the time and labor needed for labeling images.
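In generic terms (this is not Cogniac's pipeline), augmentation means generating many plausible variants of each labeled image so the network effectively sees more training data than was labeled by hand. A minimal sketch with torchvision transforms, using a hypothetical image file name, could look like this:

```python
from PIL import Image
from torchvision import transforms

# Generic data-augmentation sketch: each labeled image is randomly flipped,
# rotated and recolored so the network sees many plausible variants of one
# example instead of requiring a new hand-labeled image for each variation.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(degrees=10),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.RandomResizedCrop(size=224, scale=(0.8, 1.0)),
])

image = Image.open("labeled_part.png")         # hypothetical labeled image
variants = [augment(image) for _ in range(8)]  # eight augmented training samples
```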
The system is also designed to involve human input when needed. "The beauty of deep learning is the model knows and predicts with confidence. It knows when it's confident about its prediction or not," Wang says. "If it's not confident, it's time to look into it because something has changed."
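That human-in-the-loop step can be pictured as simple confidence-based routing: high-confidence predictions are accepted automatically, and anything below a threshold is queued for a person to review. The threshold and labels below are made-up values; this is a generic sketch, not Cogniac's implementation:

```python
# Generic sketch of confidence-based review routing: predictions the model is
# sure about are accepted automatically; anything below a chosen confidence
# threshold is queued for a human to label, and that feedback becomes new
# training data.
CONFIDENCE_THRESHOLD = 0.85   # assumed value; tuned per application in practice

def route_prediction(label, confidence, review_queue):
    if confidence >= CONFIDENCE_THRESHOLD:
        return label                        # accept automatically
    review_queue.append((label, confidence))
    return "needs-human-review"

queue = []
print(route_prediction("defect", 0.97, queue))   # accepted
print(route_prediction("good", 0.55, queue))     # sent to a person
print("queued for review:", queue)
```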
The system typically requires an integrator for initial setup, but for small jobs Cogniac can do setup internally. "Our edge device, which typically sits in the production line, next to the camera, has lots of functionality to make the setup process as painless as possible," Wang says.
Over time, the system gets smarter and is continuously improving on the model and the inspection workflow. "I like the concept of continuous improvement. It naturally works with our system and deep-learning systems and naturally fits with the manufacturing world. For all the top manufacturers, continuous improvement is in their bones. That's how they get better over time," Wang says. "It's a snowballing effect. You just have to kick it once to start it rolling down the hill. Once it is rolling, it's getting faster and more powerful."
About the Author
Anna Townshend
Managing Editor
Anna Townshend has been a writer and journalist for 20 years. Previously, she was the editor of Marina Dock Age and International Dredging Review, until she joined Endeavor Business Media in June 2020. She is the managing editor of Control Design and Plant Services.
