
See what’s new with photoelectric sensors

March 6, 2017
Twice the distance and improved color detection result from CMOS laser.

Photoelectric sensors have been around for decades. The first photoelectric sensor was developed in the 1950s, so it’s not exactly breaking news that these exist. What has changed over the years is the sensor package and the technology inside. A photoelectric sensor uses a change in light intensity to determine the distance to, presence of or absence of an object. The light source is driven by an oscillator, so the emitted beam has a known wavelength and modulation frequency. That beam is bounced off the object or background, and the returning light is analyzed by a phototransistor to determine whether it came from the sensor’s own source; the result is then converted into an output used in the control logic. Most photoelectric sensors use infrared (IR) light, and you will find three basic deployments: opposed, retro-reflective and proximity.
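
To make the idea of a modulated beam concrete, here is a minimal Python sketch of how a receiver can separate its own emitter’s modulated light from steady ambient light. The carrier frequency, sampling rate and threshold are illustrative values, not figures from any particular sensor.

```python
import numpy as np

# Hypothetical parameters -- not taken from any specific sensor datasheet.
CARRIER_HZ = 10_000      # emitter modulation frequency
SAMPLE_HZ = 1_000_000    # receiver sampling rate
WINDOW_S = 0.002         # evaluation window (2 ms)

t = np.arange(0, WINDOW_S, 1 / SAMPLE_HZ)
reference = np.sign(np.sin(2 * np.pi * CARRIER_HZ * t))  # emitter drive signal

def beam_detected(received: np.ndarray, threshold: float = 0.3) -> bool:
    """Return True if the received light contains the emitter's modulation.

    Correlating against the known carrier rejects steady ambient light,
    which is why a modulated emitter is far less sensitive to sunlight or
    plant lighting than a plain, unmodulated one.
    """
    correlation = np.dot(received, reference) / len(reference)
    return correlation > threshold

# Simulated receiver signals: strong ambient light plus (optionally) the beam.
ambient_only = 0.8 + 0.05 * np.random.randn(len(t))
beam_plus_ambient = ambient_only + 0.5 * reference

print(beam_detected(ambient_only))        # False -- beam blocked by an object
print(beam_detected(beam_plus_ambient))   # True  -- beam reaching the receiver
```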

In the opposed mode, one module transmits the light while another receives it. This is the most accurate form of sensing, as the receiver will only accept the light that was sent by its matching transmitter. The object being sensed passes through the beam of light between transmitter and receiver, and the receiver produces an output based on the presence or absence of an object in that beam. These sensor pairs can take the form of a transmitter and receiver module mounted at a distance but facing each other, or of glass or plastic fiber-optic elements, again mounted facing each other at a distance, to produce a more precise beam of light.

A retro-reflective photoeye combines the transmitter and receiver into a common module and uses a reflective tape or target to return the transmitted beam to the receiver. Like the opposed mode, the receiver is looking for an object to block the beam of light that leaves the transmitter and is bounced back by the reflector or tape. These sensors can be somewhat unreliable because they depend on the transmitted beam being properly reflected back to the receiver, so they are susceptible to ambient light in the area of the application. This susceptibility can be reduced by polarizing the emitted light and filtering the receiver, so that light which does not return with the expected polarization, such as ambient light, is ignored.

A proximity photoeye uses the same principle as the retro-reflective photoeye but relies on the properties of the object itself to reflect light back to the receiver. Sometimes called a diffuse or background-suppression photoeye, this device directs the light beam toward a point in space and expects the reflected light to land at a particular spot on the face of the receiver. A deviation in that reflected light determines whether an object is in the beam.

Some diffuse photoeyes have a fixed focal point, while others are adjustable by way of a setscrew on the body of the sensor. A designer can use the fixed focus to advantage by mounting the sensor so that it ignores a background object while triggering on any object that falls between the sensor face and the limit (focal distance) of the sensor. For example, with a fixed-field photoeye rated at 4 inches, one can mount the sensor face 4.5 inches above a conveyor and then sense any object that stands taller than 0.5 inches above the surface of the conveyor.
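
As a quick sanity check of that arithmetic, here is a short Python sketch; the function name and numbers simply restate the example above and are not tied to any specific sensor.

```python
def min_detectable_height(sensing_range_in: float, mount_height_in: float) -> float:
    """Shortest object (measured from the conveyor surface) the photoeye will see.

    A fixed-field sensor only responds to targets closer than its cutoff
    (sensing_range_in from the sensor face), so anything whose top surface
    sits inside that field triggers the output, while the conveyor itself,
    being beyond the cutoff, is ignored.
    """
    return mount_height_in - sensing_range_in

# Sensor with a 4 in. fixed field, face mounted 4.5 in. above the belt:
print(min_detectable_height(4.0, 4.5))   # 0.5 -> objects taller than 1/2 in. are seen
```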

Let’s talk about the package for a moment. Who remembers the days when a photoeye came in a phenolic casing measuring 1.5 by 1.5 by 4 inches? Over the years, as with most things in our industry, greater capability has come in ever-shrinking packages. Most common sensors come in an industry-standard 18-mm body or barrel, and there are a number of companies out there making sensors to suit most applications. Up to this point in the discussion, most photoelectric sensors share the same properties and capabilities. Recent developments have added some great features to this often-used sensor.

Any time light is used as a medium for detection, there is a risk of interference from any number of causes. Development over the years has turned to different sources of light to improve the accuracy of sensing, as well as the distances over which the light can be transmitted. Diffuse sensors are nice because they don’t require a reflector, but they are limited by the properties of the object being sensed and its ability to reflect some of that transmitted light back at the receiving optics. Early light sources relied on a light-emitting diode (LED) to produce the necessary light; the common variants are visible red or invisible infrared light.

Newer sensors rely on the use of lasers to improve the range of photoelectric sensors. These distances can be from as little as 6 cm to as much as 5 m for a diffuse (proximity) sensor.

Within the past few years, a new technology has come into play. The use of a self-contained, complementary metal-oxide-semiconductor (CMOS) laser enables sensors to use both position and contrast to determine the presence of an object. Recently, a vendor demonstrated the use of Class II lasers, in place of the Class I lasers used previously, to enhance these devices yet again. In this package, the accuracy of the sensor is dramatically improved. Not only does it indicate position (distance) accurately, but it can use contrast and now texture to further confirm an object’s presence. The remarkable result is a sensor that can be used to find not only a piece of card stock on a conveyor, but also a brown or dark-blue eye spot (registration spot) on a shiny black film. That same sensor can also be used to detect color.

I have many applications where I’m tasked with accurately stopping packaging machines based on the presence of a blotch of color on a piece of shiny film running through a machine. The film’s distance to the sensor can waver as it moves, and ambient light bouncing off the glossy surface creates a great deal of reflectivity. The accuracy of the Class II sensor was amazing. I could teach the basic background-versus-target qualities and then go into a deep-teach mode, where I further defined the target conditions by moving the object around under the sensor to detune for variances in reflectivity due to the light and the texture of the material itself.
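
For readers curious what a teach/deep-teach cycle amounts to conceptually, here is a rough Python sketch that classifies a live reading against taught distance and contrast samples. It is only an illustration of the idea; the class name, tolerance factor and readings are made up, and this is not the vendor’s actual algorithm.

```python
import statistics

class TaughtTarget:
    """Collects readings during teach and accepts/rejects live readings afterward."""

    def __init__(self):
        self.samples = []          # (distance_mm, contrast) pairs seen during teach

    def teach(self, distance_mm: float, contrast: float):
        """Record one reading of the target; 'deep teach' is simply more of these,
        taken while the operator moves the target around under the sensor."""
        self.samples.append((distance_mm, contrast))

    def matches(self, distance_mm: float, contrast: float, k: float = 3.0) -> bool:
        """True if a live reading falls within k standard deviations of the taught values."""
        dists = [s[0] for s in self.samples]
        cons = [s[1] for s in self.samples]
        d_ok = abs(distance_mm - statistics.mean(dists)) <= k * (statistics.pstdev(dists) or 1.0)
        c_ok = abs(contrast - statistics.mean(cons)) <= k * (statistics.pstdev(cons) or 0.05)
        return d_ok and c_ok

# Teach the eye spot at several points as the film flutters under the sensor.
spot = TaughtTarget()
for d, c in [(52.0, 0.18), (53.5, 0.21), (51.2, 0.17), (54.0, 0.22)]:
    spot.teach(d, c)

print(spot.matches(52.8, 0.19))   # True  -- registration spot present
print(spot.matches(52.5, 0.75))   # False -- shiny background film
```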

A major advantage over the Class I device was the distance at which an accurate change in distance could be measured. One of the applications I use distance sensors for is to watch for the presence of a white or reflective film pouch in a stainless-steel bucket of a cartoner. The bucket moves by at 60-100 ft/min and has ridges on the bottom to facilitate the transfer of the pouch into the cartoner. In normal ambient conditions, those ridges kick off light in all directions as the bucket passes the sensor. With the Class I sensor, I had to mount the sensor within ½ inch of the top of the bucket, because of its limited range, to guarantee accurate sensing of the product in the bucket. With the Class II sensor, I have nearly twice the distance from the background and can still accurately detect the object.
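
It’s also worth a back-of-the-envelope check of how much time the sensor actually gets per bucket at those line speeds. The sketch below uses the 60-100 ft/min figures from the application; the 6-inch bucket length is an assumed, illustrative number.

```python
def dwell_time_ms(line_speed_ft_per_min: float, target_length_in: float) -> float:
    """Time (ms) the target spends passing a fixed sensing point."""
    speed_in_per_s = line_speed_ft_per_min * 12.0 / 60.0
    return target_length_in / speed_in_per_s * 1000.0

for speed in (60, 100):
    print(f"{speed} ft/min -> {dwell_time_ms(speed, 6.0):.0f} ms per 6 in. bucket")
# 60 ft/min -> 500 ms, 100 ft/min -> 300 ms, so the sensor's response time and
# output delay have to fit comfortably inside that window.
```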

One final accolade should be mentioned about this new technology. The product I find most difficult is a thin (less than ¼ inch) pouch of a shiny white material with blue printing on the surface. Conventional distance sensors could not always pick up the product, due both to the small change in distance and to the shiny white film reflecting ambient light. The blue print also caused issues, as it often absorbed the visible red light, creating blanks in my sensing window, so much so that my attempts to filter the signal would result in not picking up the pouch presence at all. The ability to deep-teach the sensor while seeing the target allowed me to teach out the blue printing and make the sensor consider that part of a good read, not part of the highly reflective, stainless-steel bucket passing by.

This advancement in technology has me thinking about a photoelectric sensor where I might normally have considered a vision system as a last resort. Imagine how that affects my budget for projects. Definitely worth a second thought.

About the author

Rick Rice is a controls engineer at Crest Foods, a dry-foods manufacturing and packaging company in Ashton, Illinois. With nearly 30 years’ experience in the field of automation, Rice has designed and programmed everything from automotive assembly lines, robots, palletizing and depalletizing equipment and conveyors to forming machines for the plastics industry, but most of his career has focused on OEM packaging machinery, with an emphasis on R&D for custom applications. Contact him at [email protected].
