[Image: Quality control in an automated production line with a camera]

How can vision enhance artificial intelligence?

Jan. 10, 2023
Vision systems provide the sensing capabilities that enable machine learning

Artificial intelligence (AI) has been all the buzz since the introduction of ChatGPT. This technology, developed by OpenAI and free to use, for now anyway, lets the user converse with the AI engine in plain English and gives the user answers to the questions asked.

It can answer questions about history, as well as about real-time data. It can even fix your code for you. While it may be concerning to some, the advancement of this technology raises the bar for new developments and innovation.

We have all heard of machine learning and how it can apply to the systems controlling our machines. The programmable logic controller (PLC) itself cannot machine-learn as such, so the data needed for learning has to come from the devices connected to the machine.
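To make that concrete, here is a minimal sketch, in Python, of what that data collection might look like: poll values from the devices wired to the machine and log them with timestamps for a learning system to chew on later. The read_tag() helper and the tag names are made up for illustration; a real system would use whatever driver (EtherNet/IP, Modbus or OPC UA) actually talks to the hardware.

```python
import csv
import random
import time
from datetime import datetime

# Hypothetical reader: stands in for whatever driver (EtherNet/IP, Modbus,
# OPC UA) pulls live values from the devices connected to the machine.
def read_tag(name):
    return round(random.uniform(0, 100), 2)  # simulated value for illustration

TAGS = ["ConveyorSpeed", "RejectCount", "CameraTriggerRate"]  # made-up tag names

def log_machine_data(path="machine_log.csv", period_s=1.0, samples=10):
    """Poll the connected devices and append timestamped rows for later learning."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp"] + TAGS)
        for _ in range(samples):
            writer.writerow([datetime.now().isoformat()] + [read_tag(t) for t in TAGS])
            time.sleep(period_s)

if __name__ == "__main__":
    log_machine_data()
```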

Part of the advancement in manufacturing has been a strong implementation of robotics. One issue that used to be prevalent is that the robot had to be programmed to a tolerance based on the application.

There is no room for error. If only they could “see” and adjust, if necessary. Well, it seems that they can, with vision systems that allow for communication to the robot controller, as well as integrated systems of multiple robots, cobots and people.

Full disclosure here: the robots I have worked on and with are the big fellas, such as welding robots that have been taught. I have no experience at all with these AI-enabled robots/cobots, so, when it is said that they can eliminate overlaps, distortions and misalignments, I can only surmise which applications they would be used in.

One application that comes to mind is food processing. I saw a video on LinkedIn in which a potato-sorting machine “saw” a bad spud and positioned a linear cylinder, which punched it out of the line. I may have simplified it, but the speed at which the vision system detected this bad actor was impressive.
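I can only guess at how that sorter is actually built, but as a rough sketch of the decision loop, assuming each passing spud is handed to us as a camera frame, OpenCV could flag dark blemishes and trip a hypothetical fire_reject_cylinder() output; the threshold values here are invented.

```python
import cv2  # OpenCV, a common choice for this kind of inspection

BLEMISH_THRESHOLD = 60      # gray levels darker than this count as blemish (made-up value)
MAX_BLEMISH_PIXELS = 500    # reject if the dark area is larger than this (made-up value)

def fire_reject_cylinder():
    """Hypothetical output: in a real machine this would pulse a PLC output
    or a solenoid valve driving the linear cylinder."""
    print("Reject!")

def inspect_spud(image_bgr):
    """Return True if the potato image shows too much dark blemish area."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    # THRESH_BINARY_INV marks pixels darker than the threshold as 255
    _, blemish_mask = cv2.threshold(gray, BLEMISH_THRESHOLD, 255, cv2.THRESH_BINARY_INV)
    return cv2.countNonZero(blemish_mask) > MAX_BLEMISH_PIXELS

# Usage, for each camera frame of a passing potato:
# if inspect_spud(frame):
#     fire_reject_cylinder()
```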

Food is not a fixed size, shape, color or consistency. If the application is packaging three heads of romaine lettuce, the heads will not all have the same dimensions. Suppose the vision system could determine on the fly which three heads are present and how much pressure the packaging machine would need to apply to insert the three heads into a bag and seal it.

While this could be an imaginary application—I just went shopping—it fits into the mindset that some processes need to adjust on a per-package basis.
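Purely to illustrate that imaginary lettuce application, here is a sketch that maps the head sizes reported by the vision system to a bagging-pressure setpoint on a per-package basis; every number in it is invented.

```python
# Purely illustrative: map the combined size of three lettuce heads to a
# bagging-pressure setpoint. All limits and units here are invented.
MIN_DIAMETER_CM = 10.0
MAX_DIAMETER_CM = 18.0
MIN_PRESSURE_KPA = 20.0
MAX_PRESSURE_KPA = 45.0

def pressure_setpoint(head_diameters_cm):
    """Scale packaging pressure with the average head size reported by vision."""
    avg = sum(head_diameters_cm) / len(head_diameters_cm)
    # Clamp to the calibrated range, then interpolate linearly.
    avg = max(MIN_DIAMETER_CM, min(MAX_DIAMETER_CM, avg))
    span = (avg - MIN_DIAMETER_CM) / (MAX_DIAMETER_CM - MIN_DIAMETER_CM)
    return MIN_PRESSURE_KPA + span * (MAX_PRESSURE_KPA - MIN_PRESSURE_KPA)

# pressure_setpoint([12.0, 14.5, 13.0]) -> roughly 30 kPa for a mid-sized trio
```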

What isn’t imaginary is the need for the food supply chain to be safe. The quality and safety of the food in our refrigerators and freezers are important. Human error during inspection can run quite high, so the industry has had to adjust to improve its quality ratings.

Food issues can include sizing, color, transient conditions and full dimensioning. Machine vision can detect all of these, as long as the resulting information is used for control, data gathering and diagnostics in the sorting and packaging systems.

A mortal human cannot process information fast enough to be of service in this age of high-speed sorting and conveyance. Profits are driving this move to speed, and the quality-control (QC) part of the equation has to keep up. Vision systems have kept pace with the needs of the industry.

Defect analysis is a primary goal of these systems. What defines a defect is very subjective. For characteristics like those mentioned above, a food group such as potatoes can have thresholds set to reject product based on the imaging of each individual veggie, if you will.
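As a sketch of what those thresholds might look like in practice, assuming the vision system has already extracted a few measurements per potato, a simple pass/fail grade could be as plain as the following; the limits are made up and would come from the grader's quality spec.

```python
# Made-up acceptance thresholds for a hypothetical potato line; real limits
# would come from the processor's quality specification.
LIMITS = {
    "min_length_mm": 45.0,
    "max_length_mm": 110.0,
    "max_blemish_pct": 3.0,   # percent of surface showing dark spots
    "max_green_pct": 1.0,     # percent of surface showing greening
}

def grade(measurements):
    """Return (accept, reasons) for one item, given per-image measurements."""
    reasons = []
    if not LIMITS["min_length_mm"] <= measurements["length_mm"] <= LIMITS["max_length_mm"]:
        reasons.append("size out of range")
    if measurements["blemish_pct"] > LIMITS["max_blemish_pct"]:
        reasons.append("too much blemish")
    if measurements["green_pct"] > LIMITS["max_green_pct"]:
        reasons.append("greening")
    return (len(reasons) == 0, reasons)

# Usage:
# grade({"length_mm": 72.0, "blemish_pct": 4.5, "green_pct": 0.2})
# -> (False, ["too much blemish"])
```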

A newer form of detection, which I have often wondered about, is: How ripe is that banana?

Ripeness is a thing. We have all seen green bananas on the shelf. We have also seen them black because they have over-ripened and are destined for the banana-bread pan. The supply chain for these beauties does not have inspection at all stages. Inspection at the source would determine whether the fruit is fit to be packaged and shipped.
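I can only speculate on how a vision system would score ripeness, but color is the obvious cue. Here is a minimal sketch, assuming a well-lit image of a single banana, that bins it from green to over-ripe by its average hue and brightness in HSV space; the cut-offs are invented and would need calibration.

```python
import cv2
import numpy as np

def ripeness_from_image(image_bgr):
    """Crudely bin a banana image into green / ripe / over-ripe by average color.
    The hue and brightness cut-offs are made up and would need calibration."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    hue = float(np.mean(hsv[:, :, 0]))     # OpenCV hue runs 0-179
    value = float(np.mean(hsv[:, :, 2]))   # overall brightness
    if value < 60:
        return "over-ripe"    # mostly dark or black peel
    if hue > 35:
        return "green"        # still in the green band
    return "ripe"             # yellow band
```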

I wrote a white paper on Franzia Wines and the use of supervisory control and data acquisition (SCADA) in the winemaking process. I learned about the crush, which is when the grapes are ready to be processed and begin fermentation. It was all about the sugar content of the grapes on the vine. The importance of that metric cannot be overstated. However, I suspect that metric is only available to a tester.

There are surely some things that high-speed vision can bring to the table and some things that it can’t. With the advent of AI, high-speed imagery, deep learning and data processing, I feel really good about our food quality and safety. Thank a camera today.

About the Author

Jeremy Pollard | CET

Jeremy Pollard, CET, has been writing about technology and software issues for many years. Pollard has been involved in control system programming and training for more than 25 years.
