In 2009, Max Falcone was lead engineer for Vision Aided Tooling at Comau, responsible for developing new product solutions using vision and recognition.
Robotic systems are becoming more common and widespread in industrial manufacturing, so reliable operation with as little downtime as possible is a standard expectation. Significant advantages can be realized when these robots are coupled with vision systems.
Most current vision systems require extensive support from trained experts, and their complexity makes them less reliable. Our company made a fundamental step change to simplify the programming and mechanical complexity of robotic guidance applications.
As our company learned through extensive installations, an innovative vision scheme is only half the battle. Mechanical reliability also plays a big role.
Comau, a worldwide integrator of automation systems headquartered in Southfield, Mich., develops turnkey automated assembly systems and produces robots, weld guns, conveyors, recognition systems and other critical components of automation.
Comau ranks among the largest integrators in the world, with 34 locations in 18 countries, and has worked with most of the major automotive manufacturers, first-tier suppliers and industrial/consumer manufacturing companies around the world.
Guidance Counseling
During 2008, engineers at Comau’s North American headquarters looked to develop a robotic guidance system that was an improvement over what they were purchasing from outside suppliers. The goal was to provide a robust system with lower acquisition and operating costs. As part of this improvement initiative, Comau developed its new RecogniSense software, which was introduced in January.
RecogniSense is, in our view, a groundbreaking visual recognition and guidance package. It functions very similarly to the human visual process. “This means it can learn a large number of objects and recognize any of the learned objects regardless of its orientation in the visual field of the camera,” says Mark Anderson, robotic product development engineer at Comau.
RecogniSense uses a conventional 2D camera yet provides three-dimensional data: the coordinate offsets from the taught position in six degrees of freedom, X, Y, Z, Rx, Ry and Rz. These offsets are what guide the industrial robot.
A Robot’s Viewpoint
“A traditional robot is programmed to pick up a part from the exact same location every time,” says Anderson. “If the part is even slightly out of place, the robot will fail to pick that part. The RecogniSense system is used to adjust the coordinates from where the robot expected to find the object to where it actually is located with only a single camera (Figure 1). The camera, computer and software work together with the robot to adjust the robot’s position, allowing retrieval of the part.”
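To make the idea concrete, the following minimal C++ sketch (not Comau’s actual code) shows how a 6-DOF offset reported by a vision system, expressed as X, Y, Z, Rx, Ry and Rz, could be composed with a taught pick pose to obtain the corrected position. The Euler rotation order (Rz·Ry·Rx), the units and the variable names are assumptions made for illustration; every robot controller has its own conventions.

// Illustrative sketch: apply a vision-measured 6-DOF offset to a taught pick pose.
#include <array>
#include <cmath>
#include <cstdio>

using Mat4 = std::array<std::array<double, 4>, 4>;

struct Pose {                 // position in mm, rotations in radians (assumed units)
    double x, y, z, rx, ry, rz;
};

// Build a homogeneous transform from a pose (rotation order Rz * Ry * Rx).
Mat4 toMatrix(const Pose& p) {
    const double cx = std::cos(p.rx), sx = std::sin(p.rx);
    const double cy = std::cos(p.ry), sy = std::sin(p.ry);
    const double cz = std::cos(p.rz), sz = std::sin(p.rz);
    Mat4 m = {{
        {{ cz * cy, cz * sy * sx - sz * cx, cz * sy * cx + sz * sx, p.x }},
        {{ sz * cy, sz * sy * sx + cz * cx, sz * sy * cx - cz * sx, p.y }},
        {{ -sy,     cy * sx,                cy * cx,                p.z }},
        {{ 0.0,     0.0,                    0.0,                    1.0 }}
    }};
    return m;
}

Mat4 multiply(const Mat4& a, const Mat4& b) {
    Mat4 r{};                                      // zero-initialized result
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}

int main() {
    const Pose taughtPick  { 1200.0, 350.0, 600.0, 0.0, 0.0, 0.0 };  // taught position
    const Pose visionOffset{ 4.2, -1.7, 0.9, 0.00, 0.01, -0.02 };    // reported by vision

    // Corrected pick frame = taught frame composed with the measured offset.
    const Mat4 corrected = multiply(toMatrix(taughtPick), toMatrix(visionOffset));
    std::printf("Corrected pick position: X=%.1f  Y=%.1f  Z=%.1f\n",
                corrected[0][3], corrected[1][3], corrected[2][3]);
    return 0;
}

In practice the offset would come from the recognition software and the corrected pose would be sent to the robot controller rather than printed.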
Camera, Action
Figure 1: Comau’s new software, which is compatible with any robot and with any GigE camera, reliably positions a robot using only one camera, reducing total solution costs for the end user.
Source: COMAU
To work effectively, adds Anderson, “current software programs require each robot to be equipped with more than one camera to ‘see’ all six degrees of freedom, or external aids such as structured lights must be incorporated into the vision system’s design.” Structured light, says Anderson, is a laser line or cross-hair used to recover the degrees of freedom in which the part has moved, which a conventional camera cannot see on its own.
RecogniSense software reliably positions a robot using only one camera. The software works with any GigE camera and any robot available on the market and reduces the number of cameras required per robot, lowering total solution costs for the end user.
“RecogniSense provides our robots true visual recognition,” continues Anderson. “The software emulates the visual cortex of the human brain, and teaches the system to recognize an object in the same way you would teach an infant. We teach the system an object by taking a picture of the object and naming it. All information pertaining to that object is stored in the system’s memory, which allows the system to recognize the target part from a 2D picture.” When the camera sees the object, the system recognizes it regardless of its orientation. The system, adds Anderson, then senses the relationship of the camera to the object and guides the robot to the taught orientation.
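As a rough illustration of the teach-then-recognize workflow Anderson describes, the hypothetical C++ interface below stores a named reference image for each taught object and later reports which object was found along with its 6-DOF offset. The class and method names are invented for this sketch and are not RecogniSense’s actual API.

// Hypothetical interface sketch of a teach-then-recognize workflow.
#include <map>
#include <optional>
#include <string>
#include <vector>

struct Image {                        // frame from the 2D camera
    int width = 0, height = 0;
    std::vector<unsigned char> pixels;
};

struct Offset6D {                     // deviation from the taught position
    double x, y, z, rx, ry, rz;
};

struct Match {
    std::string name;                 // which taught object was recognized
    Offset6D offset;                  // where it sits relative to the taught pose
};

class ObjectRecognizer {
public:
    // "Teach" an object: store a reference picture under a human-readable name.
    void teach(const std::string& name, const Image& referenceImage) {
        taught_[name] = referenceImage;   // a real system would store learned features
    }

    // Search a live frame for any taught object and report its 6-DOF offset.
    std::optional<Match> recognize(const Image& liveFrame) const {
        // Placeholder: a real recognizer matches learned features regardless of
        // the object's orientation in the camera's field of view.
        (void)liveFrame;
        return std::nullopt;
    }

private:
    std::map<std::string, Image> taught_;
};

The key point of the workflow is that teaching is a one-shot step per object, after which recognition and guidance need no further programming.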
When Vision Is Not Enough
To make our robotic guidance systems as reliable and cost-effective as possible, we needed to take our design one step further.
“Not only did we need to reduce the number of cameras in our systems, we also wanted to reduce the chances of system failure associated with cabling,” explains Tony Ventura, robotics and vision product line manager at Comau. “The more cables located on a robot, the higher the risk of failure.”
Comau was mounting standard GigE cameras with external power on the robots in its systems. Standard GigE cameras require two cables—one for communications and one for power. Two cables meant two times the opportunity for failure in a single-camera application. “With hundreds of cameras mounted on robots throughout an automated assembly line, cable-related chances of failure and the associated downtime represented too big a gamble,” states Ventura. “Further, if you have an application for which you need an external trigger, now three cables are involved, with three times the chance of a failure in the cabling.”
The Two Became as One
Comau engineers determined the best course of action would be to incorporate a Power-over-GigE camera into their systems. This camera design would require only one cable for communications, power and trigger. The only time a second cable would be needed would be infrequent cases in which an external trigger, such as a proximity switch or other external I/O, was required.
Comau contacted every known camera supplier on the market but could not find a viable Power-over-GigE camera. “All the camera manufacturers claimed they had Power over GigE in development but could not commit to a fulfillment date,” recalls Falcone.
In June 2008, Comau engineers attended The Vision Show in Boston, still searching for a suitable Power-over-GigE camera. That’s where they found the TXG camera being displayed by its developer, Baumer. They learned that a specially developed industrial power-injector module or multi-port power switch could provide power through the Cat. 6 Ethernet cable at distances of up to 100 m.
“By eliminating cables, the Power-over-GigE camera minimizes the risk of cable fatigue and greatly improves the integrity of a vision system,” says Doug Erlemann, Baumer’s camera business development manager. “In addition, the camera offers high-speed multi-camera operation and frame rates up to 90 frames/sec. The camera’s resolution ranges from VGA to 5 megapixels, and standard functions include gain, offset and exposure time settings.”
This camera had the PoGigE feature we were looking for, and most of its other performance specs matched those of the camera we had been using. In fact, Baumer is working to develop a couple of features that our existing camera had and the TXG does not yet include.
Comau took the Baumer SDK camera software and incorporated it into the RecogniSense software. This took some reworking on Baumer’s end because the SDK code was written in C# and the RecogniSense software was written in C++. Baumer was able to provide Comau with a working solution in a relatively short amount of time. Once the two software programs were communicating smoothly, the camera became a plug-and-play component in the system, operating efficiently and reliably on just one cable. The software recognized the camera and imported its first image immediately. Baumer’s SDK is now available in C, C++ and C# for .NET languages.
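The sketch below suggests one common way such an integration can be structured in C++: the guidance software talks to an abstract camera interface, and a thin adapter wraps the vendor SDK behind it so the camera becomes a plug-and-play component. It is illustrative only; it does not use Baumer’s actual SDK calls, and all names here are assumptions.

// Illustrative sketch: isolate a vendor camera SDK behind a thin C++ interface.
#include <memory>
#include <vector>

struct Frame {
    int width = 0, height = 0;
    std::vector<unsigned char> pixels;
};

// Abstract camera interface used by the guidance software.
class Camera {
public:
    virtual ~Camera() = default;
    virtual bool open() = 0;            // connect over the single GigE cable
    virtual bool grab(Frame& out) = 0;  // software- or hardware-triggered grab
    virtual void close() = 0;
};

// Adapter that would wrap the vendor's C/C++ SDK behind the interface above.
class GigECamera : public Camera {
public:
    bool open() override   { /* vendor SDK: enumerate and open the device */ return true; }
    bool grab(Frame& out) override {
        // vendor SDK: trigger, wait for the image, copy the buffer into 'out'
        out = Frame{};
        return true;
    }
    void close() override  { /* vendor SDK: release the device */ }
};

std::unique_ptr<Camera> makeCamera() {
    return std::make_unique<GigECamera>();  // swap adapters without touching callers
}

Keeping the SDK behind an interface like this is one reason a camera swap can feel plug-and-play: only the adapter changes, not the recognition or guidance code.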
Into Testing
Comau is using the camera in two prototype applications and expects to bring the cameras online at a major automotive customer during 2009.
“The advantage that the Baumer solution gives Comau is a simple, robust and clean single-cable solution for mounting a camera on a robot,” comments Ventura. “By reducing the number of cables on the robot arm, we’re confident we’ll achieve better MTTR and greater MTBF values, although it’s too soon in the evaluation to have solid metrics since we’re still collecting data. We’ve also reduced the complexity of the systems. We can send a signal well beyond 100 ft with a rather inexpensive cable that still has the high-flex rating needed for robotic applications.”
Anderson states that the camera itself is neat, complete and well-engineered. “The lock-style connections are strong, reliable, industrially accepted connectors,” he adds. “An integrated UV filter mounted in front of the CCD on the camera face eliminates our need to buy and install a separate filter to show true colors.”
Customer Satisfaction
“This solution allows us to provide our customers with a system that will have lower total lifecycle costs,” says Anderson. “Over the life of any system we provide, costs such as downtime and cabling greatly outweigh the acquisition cost of the camera.”
In addition, the GigE interface let us do away with expensive frame-grabber-based PC solutions and run the entire system with readily available off-the-shelf components.
With the reduced complexity, our company expects more potential users will be willing to try robotic guidance.
Dicey Application
Figure 2: The first public prototype of the PoGigE application was demonstrated at the 2008 FabTech Show in Las Vegas. Show attendees threw dice onto a craps table, and the robotic dealer located, retrieved and returned them to the shooter.
Source: COMAU
The first public prototype of the PoGigE camera application took place at the October 2008 FabTech Show in Las Vegas. In our booth, Comau set up a robotic craps dealer. When show attendees threw dice onto a craps table, the robotic dealer would locate the dice, retrieve them and return them to the shooter (Figure 2). The robotic dealer was a huge success throughout the show.
We envision finding applications for this solution in areas that include racking and deracking, wheel load, pallet stacking, random bin picking, wheel opening hemming, AGV guidance, power train engine assembly and part recognition.