Is machine vision changing robots, or are robots changing machine vision? With robotics playing such a pivotal role in the future of discrete manufacturing, we asked a seasoned panel of industry experts for their insights and predictions on the role of robots.
Q: The growth of the robot market has seen an equally robust uptick in the machine vision market. Can you explain the correlation between the two, if there is one? For example, how have improved machine vision systems impacted robots, or how has the increased use of robots necessitated more capable and flexible machine vision systems?
Craig Souser: All of our systems are vision-guided, so it’s a one-to-one correlation. More and more customers want inspection incorporated into the packaging operation, and a camera is a huge tool for this.
David Arens: Vision enhances both quality and flexibility in a robot system, and, in a way, it also provides a double-check on robot maintenance. If a tool's alignment is off because of damage or wear, vision makes it possible to compensate by performing machine offset adjustment checks periodically while the robot is running.
Corey Ryan: Obviously, since the standard robotics market is currently so much larger than the collaborative robots market, the machine vision market should see the same lift. Right now, the collaborative robotics market is focused heavily on having the robots designed for human-robot collaboration (HRC) without additional sensors and technology. Both standard robots and those designed specifically for HRC fit into the overall collaborative robotics market. There is a space for both, and most customers need help to understand the total spectrum and the role for different robots.
Allan Hottovy: When machine vision/robotics systems first came out, they were extremely hard to work with. As the world has become more digitally connected, the robot-machine-vision interface work has become vastly simpler. These systems are fast becoming commodity items that are flexible, easy to understand and interchangeable. Making the technology easier to use has sped up implementation.
Another important change is the feedback to the robot. In the past, you programmed a robot to do a task and the robot would do it, right or wrong. Now the robot can see the image in real time to validate that the part was placed correctly. And because the robot can see the part, self-teaching capabilities can be embedded into the system.
Alex Bonaire: As vision systems became easier to use and more cost-effective, their use in robotic applications increased dramatically. Before the widespread use of robotic vision, most robotic hardware had to be purposely designed to accommodate specific parts and applications. A robot that can see where parts are located in the environment requires less hard fixturing to locate parts and becomes more flexible in its ability to locate multiple parts with the same hardware. A vision system can also provide information to a robot that would otherwise be very difficult to obtain, such as cosmetic conditions that let the robot know whether a part is acceptable.
Scott Mabie: The improvement in vision systems has opened up more doors for robot applications, providing a more seamless integration of human senses and motion. In this case, the vision starts by identifying the part, its orientation and relationship to the robot. This information is fed to the robot and the robot motion begins. By using collaborative robots, the human operator can still be engaged in the process and handling other tasks that might best be solved by a person. By having these technologies combined with people, the number of applications can grow exponentially.
John Keinath: More and more manufacturers are requiring more error-proofing in their systems to reduce expensive quality issues with their products. Because manufacturers are requiring more quality inspections, machine builders are turning to vision solutions. A single camera can provide multiple inspections at once. However, cameras can only look at one side of the part, so machine builders have a couple of options. They can position multiple cameras at different sides of the part, or they can mount one camera to a robot and let the robot move the camera through the different inspections. There are also some 3D vision products that can create a 3D model of the part by moving a laser line across it. A robot works great for moving the 3D camera over the part.
Garrett Place: Robots have always been capable. They have also been, for the most part, blind. Advances in 2D vision have given these robots a monocular view of their environment. This works very well for structured environments, which is why we see so many fixtures in the robot cell. Advances in 3D vision technology, especially with time-of-flight (ToF) camera systems, have opened up new opportunities for robots. They allow the robot to perceive its environment the way we do—in 3D.
3D cameras are becoming smaller and more affordable. This allows for broader deployment of robots and allows them to work in more unstructured environments. This flexibility can reduce the overall cost of the robot integration and open new applications altogether.
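To make the idea of 3D perception concrete: a ToF camera returns a depth value per pixel, and converting those depths into 3D points is a standard pinhole-camera back-projection. The sketch below is illustrative only; the intrinsic values (fx, fy, cx, cy) are hypothetical placeholders, not parameters of any particular camera mentioned here.

```python
def depth_to_point(u, v, depth, fx, fy, cx, cy):
    """Back-project one pixel (u, v) with a metric depth reading
    into a 3D point (X, Y, Z) in the camera frame, using the
    standard pinhole camera model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Hypothetical intrinsics for illustration (not a real camera spec):
fx, fy = 500.0, 500.0   # focal lengths in pixels
cx, cy = 320.0, 240.0   # principal point (image center)

# A pixel at the image center maps straight down the optical axis:
print(depth_to_point(320, 240, 2.0, fx, fy, cx, cy))  # (0.0, 0.0, 2.0)

# A pixel 500 px right of center at 1 m depth sits 1 m to the side:
print(depth_to_point(820, 240, 1.0, fx, fy, cx, cy))  # (1.0, 0.0, 1.0)
```

Running this conversion over every pixel yields the point cloud that lets a robot reason about an unstructured scene instead of relying on fixtures.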
The advent of open-source tools such as the Robot Operating System (ROS) is also allowing robot integrators to apply these new sensors quickly and efficiently, without additional cost.
Homepage image courtesy of duntaro at FreeDigitalPhotos.net