
How do I visualize my vision application?

Aug. 28, 2019
What are the critical techniques, functions or checklists I should know to choose the vision hardware and communication method?

A Control Design reader writes: I am working on the design of a large automated assembly line with many cells assembling small parts using advanced automation, robots and motion control, and I think machine vision is the key to success. In several of the machine cells, the plan is to use cameras to find small parts or features and then use vision-guided robotics or guided motion to accurately pick and place parts. Vision inspection, alignment and 2D ID scanning are also needed. What am I getting myself into?

What are the critical techniques, functions or checklists I should know to choose the vision hardware and its communication method to the system controller, calibrate and scale the image to the robot or motion control system (pixels to mm) and make it easy for operators and support personnel to operate and adjust the vision systems?

ANSWERS

The eye of the machine

Our approach to vision is a bit different: while you can use a bolt-on vision system, we have integrated vision into our automation platform, along with machine control, robotics, networked safety and all other aspects of automation.

Any vision system supplier should be able to perform the tasks noted, such as identification, orientation, inspection, integration with robots and communication to PLCs. Through an approach that includes integration of the vision components, tight synchronization of lighting to camera, and vision to controller, factory calibration and an integrated software environment, we are mitigating the difficulty factor. The idea is to put vision applications within the reach of machine builders and users without vision experience.

So we couldn’t recommend a checklist, because our approach doesn’t require one. We think this will be the trend moving forward. Going the conventional route is going to require on-the-job training (OJT), formal training or a combination of the two.

In a conventional system, that training includes the communications between vision components such as lighting, camera and processor. Some vision systems require a dedicated network between the camera and a PC used for image processing, as well as a network from that PC to the PLCs and robot controllers. The latency of those communications and of the image processing needs to be accounted for.

Critical techniques include selecting the optimum lighting for an object’s geometry, color and reflectivity. And attenuation of ambient light can mean designing structures to prevent light infiltration in a conventional system, especially where long exposures are required. If objects to be sensed are moving fast, exposures need to be kept short, which can require higher-intensity lighting. Also, the colors of the lighting should be coordinated to the colors of the objects being sensed. These are in addition to the basic tasks to be performed.
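The exposure math is simple enough to sanity-check up front. Here is a minimal Python sketch of the speed-versus-exposure tradeoff described above; the part speed, optical scale and one-pixel blur budget are assumed example values, not figures from this answer:

def max_exposure_s(part_speed_mm_s, mm_per_pixel, max_blur_px=1.0):
    # Motion blur in pixels = speed * exposure / mm-per-pixel,
    # so solve for the exposure that keeps blur at the budget.
    return max_blur_px * mm_per_pixel / part_speed_mm_s

# Example: parts moving at 500 mm/s with 0.1-mm/pixel optics and a
# 1-px blur budget allow only 0.0002 s (200 us) of exposure, which
# is what pushes you toward short strobes of high-intensity light.
print(max_exposure_s(500.0, 0.1))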

Vision suppliers often have basic instructional materials available from their websites that will give you a better idea of “what you’re getting into.” Also, agreed that vision will be a critical component in such an advanced assembly system. We say that vision is the eye of the machine, and it will be worth considerable effort to ensure the productivity of the system for years to come.

Another suggestion would be to base the entire line on intelligent track technology that is designed to be integrated with vision, robots, cobots and machinery both in terms of nonsequential production for small batch sizes and extremely precise synchronized motion between the track’s shuttles and other devices.

DERRICK STACEY / product owner / B&R Industrial Automation

4 requirements

Vision-guided robotics are really in their infancy. We did one of these applications with a machine-builder integrator last year. Because it was an NDA application, all I can say about it are four things:

  1. Lighting is critical and must be regulated.
  2. Vision focal distance must be repeatable.
  3. Vision measurement surfaces should not be shiny. They must have definable matte finishes.
  4. Optics must be kept clean. Our job involved laser-welding a fuel-delivery component. If the argon gas was not kept in the right position, the weld splatter would build on the optics.

If all four of these are not adhered to, the system will deliver non-Six Sigma results.

DAVE MILLS / managing partner / CyberGear

Location and rotation

Consider a single part. The center of the part is at location X, Y, Z, but the part has a rotation around X, around Y and around Z. These six pieces of information are what every robot needs to interact with any part. If you are trying to pick up a penny on the floor with a two-finger gripper, you need the normal vector to the penny’s resting position, and you need the XYZ of the penny. Since pennies are round, the nominal rotation around Z (RZ) is pretty much irrelevant, so you really only need X, Y, Z, RX and RY—five pieces of information.
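To make the bookkeeping explicit, here is a trivial Python representation of those six pieces of information; the names and values are mine, for illustration only:

from dataclasses import dataclass

@dataclass
class PartPose:
    # The six pieces of information a robot needs for any part:
    x: float   # mm
    y: float   # mm
    z: float   # mm
    rx: float  # rotation around X, degrees
    ry: float  # rotation around Y, degrees
    rz: float  # rotation around Z, degrees

# The penny on the floor: X, Y, Z and the normal (RX, RY) matter,
# while RZ is irrelevant because pennies are round.
penny = PartPose(x=412.0, y=87.5, z=0.0, rx=0.0, ry=0.0, rz=0.0)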

Standard 2D-vision cameras give you three pieces of information when properly calibrated and registered with a robot—X, Y and rotation around Z. Getting this information out of a standard camera means first calibrating the camera to a specific plane to remove the lens’s radial distortion and then tying that plane to robot coordinates—fixing the field of view directly to coordinates in the robot. In this manner, if the camera reports a part precisely in the center of the image, you can tell the robot where to go in the X and Y axes to get to whatever was found there.
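As a rough illustration of that two-step calibration, here is a minimal Python/OpenCV sketch, assuming the lens distortion has already been removed (for example, with cv2.calibrateCamera and cv2.undistort) and that four taught points are known in both pixel and robot coordinates; all of the numbers are invented placeholders:

import numpy as np
import cv2

# Where four taught points appear in the image (pixels)...
pixel_pts = np.array([[102, 88], [1820, 95], [1812, 1040], [110, 1032]],
                     dtype=np.float32)
# ...and where the robot says those same points are (mm, robot frame).
robot_pts = np.array([[0.0, 0.0], [300.0, 0.0], [300.0, 200.0], [0.0, 200.0]],
                     dtype=np.float32)

# Fit the pixel-to-mm mapping for the fixed plane.
H, _ = cv2.findHomography(pixel_pts, robot_pts)

# A part found at the image center maps to robot X and Y like so:
part_px = np.array([[[960.0, 540.0]]], dtype=np.float32)
part_xy_mm = cv2.perspectiveTransform(part_px, H)[0, 0]
print(part_xy_mm)

With the mapping in hand, any part found in the image can be handed to the robot in millimeters.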

The problem with standard vision cameras is that they can only provide three of the six pieces of information you need to guide a robot. If you slide the part across a fixture plate, then you are “fixing” the Z axis, fixing the rotation around X and also fixing the rotation around Y, which makes a standard camera very useful in guiding a robot since it will return the X, Y, and rotation around Z—giving you all six pieces of information you need. This is why many people use a part-locating plate when using a camera to guide the robot.

If the part is not flat and does not sit well on a plate, then standard 2D cameras are not easy to use. Pointing two cameras at the same part can yield more than three pieces of information, but at a high cost and with lots of maintenance headaches. Furthermore, the cameras will share certain axes, so going from one camera to two may only give you one additional piece of information, such as an additional rotation or just the Z measurement. And calibrating the pair of cameras is no simple thing, nothing like just placing a calibration grid on the flat plate from the first example.

Then there is cost. Two cameras, two sets of lighting and the precise fixturing required usually make this a painful way to gain one or maybe two additional bits of information.

This is why people use sheet-of-light triangulation lasers such as the Hermary SL1880, the LMI Gocator or the Sick Ruler. They can give you four to six pieces of information, but at a high cost, and they also require specific motion to function, making the cost even higher, the maintenance more complex and the time to gather the information much longer.

So, the long and short of it is that controlling your part presentation is the key to using standard vision with a robot. If you are dealing with amorphous parts such as decorating birthday cakes, as an example, your vision system to guide your robots will be elaborate and will be difficult to maintain correctly since each bit of information it returns—X, Y, Z, RX, RY and RZ—needs proper adjustments and careful calibration to be accurate. If you are trying to find flat steel washers on a flat conveyor, then even the most basic vision system can probably be used at a very reasonable cost.

Vision-guided robotics have many opportunities for hidden complexity, and the hidden complexity can require extreme costs to accurately control and specialized training to maintain correctly. Selecting a team that is familiar with the places where this complexity can hide is important during the early stages of design to better control the variability of part presentations to lower total cost of system ownership, as well as increase uptime and lower maintenance costs. Choosing your team wisely can be the difference between a smooth-running line and a high-profile mistake.

DOUG TAYLOR / automation engineer / Concept Systems / Control System Integrators Association (CSIA) member

See the whole vision picture

Per the reader’s description, we have the following main challenges and requests:

  • looking for complete solution, not just focused on stand-alone vision; integration to the machines is key
  • using vision to full extent—quality inspection, recognition (2D codes) and positioning/motion guidance
  • ease of use for machine operators and maintenance personnel.

When considering the solution for a vision application being integrated to a complete system, it is important to take a step back and consider the total solution instead of just focusing on the vision system or sensor as a stand-alone device.

Since you have a mix of usage for the vision system/sensor and you are also looking for a solution that would be easily maintained by your team, it is recommended to have them working under the same software platform as the other components in your machine.

There are integrated development environments (IDEs) that can take care of the programming, testing and troubleshooting of the entire machine using a single software interface and a single project file to maintain.

Having all of the input/output/logic/safety/vision/motion devices under the same platform will also improve machine performance—they exchange data over the same machine-level network—and reduce engineering time when calibrating the components and setting up data exchange between them. On top of that, if your team needs to troubleshoot the machine for any error condition, that can be done via the same software platform and/or via the machine HMI. In an integrated development environment, a variety of components can be used to program an entire machine, such as a horizontal flow wrapper (Figure 1).

Another point to consider is to evaluate if you will need multiple smart cameras for your inspection needs—imager, processor, output in the same unit—or a vision system with remote cameras returning the image data for processing in the same controller. They could all be programmed using the same software interface, as well.

FERNANDO CALLEJON / product manager—vision / Omron Automation Americas

"Factors such as the smallest object, measurement accuracy needed, the image size, speed of image capture and processing and color range all affect camera and lens choices. Without defining exactly what you require, it’s impossible to know if a vision system fits the bill."

Build your system without restrictions

Let’s scale your problem down to a single, illustrative cell: a conveyor belt moving small parts for drilling. As the parts are scattered across the conveyor in random locations, a camera detects each part’s position. This information is sent to a robot, which picks the part and places it in the next processing location to drill a precise hole. From there, another camera quality-checks the hole. If the hole is within quality limits, a robot picks the part and places it onto the “good” conveyor. If not, the robot sends it to the reject bin.

This is a straightforward example of vision in action. All that’s needed is a simple check of the drilled hole, and so a smart camera or even a presence-absence checking photodetector could be enough. However, if your application warrants it, you can take things a whole lot further.

If the production line involves sorting and checking by microscopic detail, a high-powered vision system can be used. But, remember, no matter how good the hardware configuration is, if the image analysis capability is low, it isn’t a good machine-vision system. Good hardware must be installed to obtain good images, which should be analyzed by high-performance machine-vision software.

To ensure you get it right, define the key visual performance criteria. Factors such as the smallest object, measurement accuracy needed, the image size, speed of image capture and processing and color range all affect camera and lens choices. Without defining exactly what you require, it’s impossible to know if a vision system fits the bill.
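Those criteria can be turned into a quick sizing check. Here is a minimal Python sketch, assuming the common rule of thumb of roughly three pixels across the smallest feature, which is an assumption, not a universal specification:

def min_sensor_pixels(fov_mm, smallest_feature_mm, pixels_per_feature=3.0):
    # Pixels needed along one axis to resolve the smallest feature.
    return fov_mm / smallest_feature_mm * pixels_per_feature

# Example: a 200-mm field of view with a 0.2-mm smallest defect needs
# about 3,000 px across, so a common 1,920-px sensor would not fit the bill.
print(min_sensor_pixels(200.0, 0.2))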

Environmental factors must also be considered. Temperature, humidity and vibration can render some cameras unsuitable. Additionally, the physical space for installing the system can restrict camera and lens choices.

If you opt for multiple cameras to collect and process mountains of data, you may even need the system to run on a PC, rather than relying on local processing. With such processing power, your assembly line gains a wealth of additional capabilities, such as alerting personnel via email regarding machine or process problems. PC usage can also enable remote control of the system on mobile devices for easy adjustment of the vision system.

JONATHAN WILKINS / director of industrial parts / EU Automation

6 considerations

When it comes to designing the machine-vision solution in vision-guided robotic (VGR) applications, the designer faces two critical challenges. The first challenge is to quickly, accurately and robustly locate the part and determine its pose and position. This speaks directly to the importance of the pattern-search tools and the performance benefits of using best-in-class machine-vision geometric-pattern-search algorithms.

The second challenge lies with how the machine vision provides information to the robotic system for actuation. This entails calibrating the machine-vision coordinate system with the robot coordinate system and the real-world 3D space. Because the number and placement of cameras, product spacing, line throughput and many other considerations impact the success of the vision-robot collaboration, being able to automatically calibrate these separate systems so they accurately, reliably and robustly work together without the assistance of a trained vision engineer is paramount to the system’s total operational cost and successful runtime performance.

Below are six key considerations for a machine-vision-solution system that is designed for providing guidance for robots or motion systems in small-parts assembly. Depending upon where the machine builder or end user is in the VGR adoption journey, some of these topics are more important than others.

The six key considerations are:

  1. finding the part/pose
  2. establishing assembly methods/modality
  3. converting image coordinates to robot/motion coordinates
  4. deployment and support
  5. performance
  6. integration and application development.

Finding the part/pose: To find parts and their 3D positions or poses, the designer needs to extract the best image possible from the vision system (image formation), so that one or more features can be identified in the image (locating features) and then determine 2D/3D position of the part (part pose) based on the located features.

While many factors affect these three system functions, the most important considerations for image formation are the camera and its pixel size and resolution; optics; field of view (FoV); feature size; and lighting requirements. For locating features within the image, consider whether it’s better or more cost-effective to use one or multiple cameras and whether you require 2D or 3D image data. Is the part’s height sufficient that it must be taken into account for a VGR pick-and-place application? Lastly, should the solution require multiple cameras, the machine-vision software will have to reconcile those different coordinate systems to determine the part’s final 2D/3D position and pose.
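To make the located-features-to-pose step concrete, here is a minimal Python/OpenCV sketch using solvePnP; the feature layout, pixel detections and camera intrinsics are all illustrative assumptions, not values from this answer:

import numpy as np
import cv2

# Known 3D positions of four features on the part (mm, part frame).
model_pts = np.array([[0, 0, 0], [40, 0, 0], [40, 25, 0], [0, 25, 0]],
                     dtype=np.float32)
# Where those features were located in the image (pixels).
image_pts = np.array([[812, 404], [1108, 418], [1096, 602], [806, 588]],
                     dtype=np.float32)
# Intrinsics from a prior camera calibration (assumed values).
K = np.array([[1600.0, 0.0, 960.0],
              [0.0, 1600.0, 540.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)  # assume distortion already corrected

ok, rvec, tvec = cv2.solvePnP(model_pts, image_pts, K, dist)
# rvec/tvec give the part's pose in the camera frame; a separate
# camera-to-robot calibration carries it into robot coordinates.
print(ok, tvec.ravel())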

Establishing assembly methods/modality: A small-part pick-and-place application often implies assembly or sortation, so the designer needs to consider what information the robotic system will require after the part is located in 2D/3D space. VGR applications often require the machine-vision system to track the moving part and provide information about the robot’s position relative to other key objects or features. For example, does the robot guide a peg to a hole to lift an engine block into position? If so, the best algorithm for gauging the relative position of peg to hole may be a center-to-center alignment. In other cases, edges may be the best guidance mechanism, while a sortation application may dictate accurate point-to-point movement under machine-vision supervision.
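As a toy illustration of the center-to-center alignment mentioned above, assuming the peg and hole centers have already been detected and converted to robot-frame millimeters (the coordinates and tolerance are invented):

import math

def center_to_center(peg_xy, hole_xy, tol_mm=0.5):
    # Correction the robot needs to move the peg over the hole,
    # and whether the two centers are already within tolerance.
    dx = hole_xy[0] - peg_xy[0]
    dy = hole_xy[1] - peg_xy[1]
    return (dx, dy), math.hypot(dx, dy) <= tol_mm

offset, aligned = center_to_center((120.4, 88.1), (121.0, 87.6))
print(offset, aligned)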

Converting image coordinates to robot/motion coordinates: Calibrating between multiple cameras in a machine-vision system to establish a common global coordinate system that defines the workspace for the vision system, as well as calibrating the vision system to the robot, motion and operating coordinate systems, is critical to VGR success.

If all systems do not agree on where the robot’s end-of-arm tooling is located in the workspace, then errors and conflicts will interrupt/sabotage accurate guidance. Calibration may need to be performed multiple times a shift or only after some external event, such as an operator accidentally knocking a camera askew. However, whatever the cause, the VGR system needs to be able to quickly, accurately and repeatedly calibrate these different systems without the assistance of engineer-level system expertise. In short, the system must empower the operators and maintenance technicians to succeed.
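One way to make that operator-level recalibration concrete, as a minimal Python/OpenCV sketch for a flat 2D workspace (the taught points and tolerance are assumed examples): fit a pixel-to-millimeter transform from a handful of taught points and report the residual, so a technician can see at a glance whether a bumped camera needs attention.

import numpy as np
import cv2

# Taught points: pixel locations and the robot's mm coordinates.
pixel_pts = np.array([[100, 90], [1820, 100], [1800, 1030],
                      [110, 1040], [960, 560]], dtype=np.float32)
robot_pts = np.array([[0, 0], [300, 0], [300, 200],
                      [0, 200], [150, 100]], dtype=np.float32)

# Fit an affine pixel-to-mm transform for the fixed plane.
A, _ = cv2.estimateAffine2D(pixel_pts, robot_pts)

# Residual: map the pixels through A and compare to the robot points.
proj = pixel_pts @ A[:, :2].T + A[:, 2]
residual_mm = np.linalg.norm(proj - robot_pts, axis=1).max()
print("worst-case calibration error: %.3f mm" % residual_mm)
# Flag recalibration when the residual exceeds the cell's tolerance.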

Deployment and support: Regular calibration is just one important part of system maintenance and support. For example, should the system have planned downtime due to maintenance, upgrades or retasking, what expertise level is required to restart the line? How does a company not only maintain accuracy among systems on a single production line, but also among multiple production lines in geographically diverse locations? Setup time and robustness for new parts’ recipe and changeover is also important for mixed-use production lines.

Performance: System performance is defined by accuracy, cycle time and robustness (repeatability) across varying parts and environmental conditions, among other factors. While performance criteria are important considerations before the initial system design, we’re placing them here to emphasize that they must be continuously evaluated to maintain the VGR’s effectiveness and avoid unplanned downtime or unacceptable levels of false results.

Integration and application development: After defining the application, but before spending a lot of time developing the solution in house, consider the company’s internal vision expertise levels and the ease of use of its integrated development environment, namely the machine-vision software underlying the solution. The system will not require just machine-vision development, but also integration with the robot controller or PLC/PC controller for other motion systems. Do you have the expertise, time and toolsets to guarantee success? For example, VGR integrators often use extensive testing and simulation to reduce development time and costs. Do you have all the resources you need to reduce costs and deliver success?

Finally, your question also concerns serialization and using image-based readers to track production, including possibly barcodes and 2D machine-readable codes. For help with these applications, which are much less complex than VGR challenges, I would suggest you learn details about right-sizing your code reader to your application and discover other helpful tools and everything else you would need to set up a real-time track-and-trace system for your new VGR production line.

NILESH PRADHAN / senior product marketing manager / Cognex

Vision sensors

In the past, I would have suggested using a complete vision system. But technology has advanced, and there are now other options that offer similar functionality without the cost or complexity of traditional vision systems. Light section technology is based on triangulation but includes a 2D camera and a laser line—a combination that removes the need for external lighting and contrast. While vision sensors with light section technology cannot replace complete vision systems for all applications, they can likely offer a simpler, more economical solution for many tasks.

Vision sensors are perfect if you have targets that are black with a black background (no contrast). Small parts are also no issue, as vision sensors commonly offer a large field of view (more than 300 mm) in the x-axis. Position feedback is also possible in the x-axis and z-axis, which can greatly improve your process over time.

As far as communication goes, vision sensors provide flexibility and simplicity. Keep in mind, though, that they offer a relatively simple solution—digital outputs are the main data stream. This means a good or bad signal is sent when the part is scanned by the sensor; it is simply doing a quick match to verify the part is within the preset tolerances. If raw measurement data is the preferred output, that is also available. Overall, a vision sensor with light section technology is an excellent machine-vision option, especially when quick setup, low contrast or simple output data is needed.
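For a sense of the geometry behind light section technology, here is a simplified triangulation sketch in Python; real sensors calibrate this internally, and the viewing angle and line shift below are invented example values:

import math

def height_from_line_shift(shift_mm, camera_angle_deg):
    # With the laser plane perpendicular to the surface and the camera
    # viewing it at camera_angle_deg, a feature of height h shifts the
    # observed laser line by h * tan(angle), so h = shift / tan(angle).
    return shift_mm / math.tan(math.radians(camera_angle_deg))

# Example: a 2.0-mm line shift seen at a 30-degree viewing angle
# corresponds to roughly 3.46 mm of part height.
print(height_from_line_shift(2.0, 30.0))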

GERRY PACI / advanced positioning systems product manager / Pepperl+Fuchs

It’s the lighting

I know firsthand just how complex using robotics and vision can be for a first-timer; however, I can guarantee that, once completed, the rewards definitely outweigh the drawbacks.

In my 35 years of experience with automation, vision and robotics, I’ve designed and programmed many robotic cells which use vision. One application was to determine the angle of a hot bar and send that angle information to the robot. The robot would then adjust its pick angle and retrieve the bar in its proper rotation and load it into a machine for processing, much in the same way you propose to build your project.

To answer some of your more technical questions and concerns, let me start with the components used in our application and some of the details of the program to get you started:

  • Fanuc 710ic robot
  • Allen-Bradley 1756-L73 ControlLogix 5000 controller
  • Cognex IS7000 camera
  • Fujinon lens 4.151.

For choosing hardware for this application, I was limited by the plant and customer’s existing standards, which were Fanuc robots and Allen-Bradley controllers, so I needed to choose a vision system that would be compatible with these predetermined options.

Luckily, the Cognex vision system incorporates user-defined inputs and outputs, along with standard commands for trigger and data acquisition, and it gels nicely with the ControlLogix platform. Everything is Ethernet, so communications are simple to set up. If you choose that route, I suggest that, when setting up the Cognex vision module in ControlLogix, the module definition have the following:

  1. data bi-directional
  2. input results from sensor = DINT-4
  3. output data to sensor = DINT-4.

This will allow for the camera and the PLC to exchange data in 32-bit format. And bi-directional will allow for communications both ways.

The camera had to determine the angle of rotation and give that number to the robot; however, negative values are hard to interpret in robotics, so it was important to specify where the zero angle was. When reporting the camera angle, I would set one solitary bit and send it to the PLC. This bit determined negative or positive (1 = positive, 0 = negative); then I would send the remaining data as the angle, all in a single 32-bit DINT. It worked perfectly. The PLC would then interpret the data from the camera and calculate a number to send to the robot.
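Here is a hedged Python sketch of that sign-plus-magnitude packing; the bit position and the 0.01-degree scaling are assumptions for illustration, since the original packing lived in the camera job and the PLC:

SIGN_BIT = 31  # assume the top bit of the 32-bit DINT carries the sign

def pack_angle(angle_hundredths):
    # Pack a signed angle (in 0.01-degree counts) into one DINT:
    # 1 = positive, 0 = negative, remaining bits hold the magnitude.
    sign = 1 if angle_hundredths >= 0 else 0
    return (sign << SIGN_BIT) | abs(angle_hundredths)

def unpack_angle(dint):
    sign = 1 if (dint >> SIGN_BIT) & 1 else -1
    return sign * (dint & ((1 << SIGN_BIT) - 1))

word = pack_angle(-1234)              # -12.34 degrees
print(hex(word), unpack_angle(word))  # 0x4d2 -1234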

In my application, I only adjusted the y-axis of the robot; in your application, it may differ. The robot code was:

CALL PICKCALC ;

L P[2] 500mm/sec FINE Offset,PR[191:CONV OFFSET]

The PICKCALC program was a separate subroutine in the robot, which took in the camera data from the PLC and applied math to produce an offset which then was sent to PR[191]. When the robot reached this portion of the program, it would offset by that amount. Multiple offsets are available but harder to manipulate unless using more than one camera.

The most important, and probably most surprising, difficulty of dealing with vision isn’t the technologies themselves; it is the lighting. I cannot stress this enough. In any vision application, a successful outcome is based on what the camera sees. Blurry lines, shadows or improperly focused parts will do nothing but fail and cause unwanted results. A good lens and proper lighting are the key.

Another key element that you asked about is scaling your object. Cognex has a scaling feature that will allow the camera to be calibrated to millimeters or inches. Just use a ruler and measure; then apply. It’s very simple. This will allow for proper handling of the part at the robot end-of-arm tooling (EOAT).
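That ruler-based scaling reduces to a single division; the numbers below are invented for illustration:

known_length_mm = 100.0     # ruler segment measured in the image
measured_length_px = 842.0  # the same segment's length in pixels

mm_per_px = known_length_mm / measured_length_px

# Any subsequent pixel measurement can now be reported in millimeters:
feature_px = 215.0
print(feature_px * mm_per_px)  # roughly 25.5 mm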

If you are able, try to schedule adequate time for research and development. I found that R&D was definitely needed when first attempting my project. I tried different lens filters with different lighting and different angles of approach to determine the best output. I found in some cases I had to build a box around the part to eliminate unwanted lighting (bay doors opening and closing or plant windows when the sun comes out).

If you have the time to get your hands on some of the cameras and technologies and set up a lab to do some experimenting, it will help you greatly with not just your implementation, but also in selecting your vision system and components.

All the research prior to jumping in made all the difference in the world. Once I had favorable results the process took off and worked very well.

That brings me to your final point about making it easy for operators and support personnel to operate and adjust the vision systems. My experience was this: Too many hands spoil the cake.

What I mean by that is, as much as I wanted to make sure my system ran well, it was also imperative that only I programmed and maintained it. Changes in the vision program were made only by me, not as job security, but simply due to the complexity of the system. Giving others access to the camera would most likely have caused issues for me. However, once you have the solution implemented, tested and running, there is no reason the burden of maintaining the system should fall only on you, provided you keep diligent backups of the camera positions, documentation and adequate training for your support team.

DAVE HINCH / project engineer / Brave Control Solutions / Control System Integrators Association (CSIA) member


About the Author

Mike Bacidore | Editor in Chief

Mike Bacidore is chief editor of Control Design and has been an integral part of the Endeavor Business Media editorial team since 2007. Previously, he was editorial director at Hughes Communications and a portfolio manager of the human resources and labor law areas at Wolters Kluwer. Bacidore holds a BA from the University of Illinois and an MBA from Lake Forest Graduate School of Management. He is an award-winning columnist, earning multiple regional and national awards from the American Society of Business Publication Editors. He may be reached at [email protected] 
