Traceability is a term commonly used in most plants today. Whether it describes tracking received and shipped goods, tracking valuable assets down to their exact location, or tracking an item through production as it is built, traceability is usually associated with just two technologies: RFID and barcode. While these two technologies are critical in establishing a framework for traceability within the plant, other technologies can help tell the rest of the story.
Utilizing vision alongside a data collection technology adds another dimension to traceability by providing physical evidence in the form of an image. While vision cameras have been widely used in manufacturing for a long time, most cameras operate outside the traceability system: the vision system and the tracking system run independently. Although both end up sending data to the same place, that data must be transported and processed separately, which significantly increases network traffic.
Current vision technology allows images to be “stamped” with the information from the barcode or RFID tag. The image becomes a redundant layer of traceability, providing visual proof that everything happened correctly in the build process. In addition, instead of sending image files over the production network, they are sent through a separate channel to a server that holds all the process data from the tag and keeps the images associated with it. This frees up the production network and provides visual proof that the finished product is what it was meant to be.
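The “stamping” idea amounts to pairing each captured image with the tag data read at the same station, so the two travel as one traceability record. Here is a minimal sketch of that pairing; all names (file names, station IDs, the record layout) are hypothetical illustrations, not any vendor's API:

```python
import json
from datetime import datetime, timezone

def stamp_image(image_file: str, tag_id: str, station: str, passed: bool) -> dict:
    """Associate a captured image with the barcode/RFID value read at the
    same station, so the image becomes part of the traceability record."""
    record = {
        "image": image_file,
        "tag_id": tag_id,        # value read from the barcode or RFID tag
        "station": station,
        "passed": passed,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # In a real deployment this record (and the image itself) would travel
    # to the process-data server over a channel separate from the
    # production network, as described above.
    return record

record = stamp_image("frame_0042.png", "RFID-7731", "final-assembly", True)
print(json.dumps(record, indent=2))
```

A query against the server can then pull up the image and the process data for any tag ID together, rather than correlating two independent systems after the fact.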
Used separately, the three technologies mentioned above provide actionable data that allows manufacturers to make important decisions. Used together, they tell a complete story and provide visual evidence of every step along the way. This allows manufacturers to make more informed decisions based on the whole story, not just part of it.
The key to deploying a robust machine vision application in a factory automation setting is ensuring that you create the necessary environment for a stable image. The three areas you must focus on to ensure image stability are: lighting, lensing and material handling. For this blog, I will focus on the seven main lighting techniques that are used in machine vision applications.
On-Axis Ring Lighting
On-axis ring lighting is the most common type of lighting because in many cases it is integrated on the camera and available as one part number. When using this type of lighting you almost always want to be a few degrees off perpendicular (Image 1A). If you are perpendicular to the object you will get hot spots in the image (Image 1B), which is not desirable. When the camera with its ring light is tilted slightly off perpendicular you achieve the desired image (Image 1C).
Off-Axis Bright Field Lighting
Off-axis bright field lighting works by mounting a separate LED source at about 15 degrees off perpendicular, with the camera mounted perpendicular to the surface (Image 2A). This lighting technique works best on mostly flat surfaces. The main surface, or field, will be bright, and holes or indentations will be dark (Image 2B).
Dark Field Lighting
Dark field lighting must be mounted very close to the part, usually within an inch. The dark field LEDs need to be angled at least 45 degrees to create the desired effect (Image 3A). In short, it has the opposite effect of bright field lighting: the surface, or field, is dark, while indentations or bumps appear much brighter (Image 3B).
Back Lighting
Back lighting works by pointing the camera directly at the back light in a perpendicular mount, with the object you are inspecting positioned between the camera and the back light (Image 4A). This is the most robust lighting technique you can use because it creates a black target on a white background (Image 4B).
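Because backlighting yields a near-binary silhouette, downstream processing can be as simple as a fixed threshold. A toy sketch of measuring an object's width along one scan line of a synthetic backlit image (all values invented for illustration):

```python
# One scan line of a synthetic backlit image: bright background (250)
# with a dark 20-pixel-wide object (5) in the middle.
row = [250] * 100
for i in range(40, 60):
    row[i] = 5

# With backlighting, a fixed threshold cleanly separates target from field:
# the object is every pixel darker than the cutoff.
THRESHOLD = 128
width_px = sum(1 for value in row if value < THRESHOLD)
print(width_px)  # → 20
```

Multiplying the pixel count by the calibrated millimeters-per-pixel factor turns this into a physical measurement, which is why backlighting is favored for gauging applications like the hole-diameter example later in this post.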
Diffused Dome Lighting
Diffused dome lighting, aka the salad bowl light, works by mounting the camera over a hole at the top of the dome, with the LEDs mounted at the rim pointing straight up; the light reflects off the curved interior surface of the dome, creating very uniform illumination (Image 5A). Diffused dome lighting is used when the object you are inspecting is curved or non-uniform (Image 5B). Applied to an uneven surface or texture, this technique de-emphasizes hotspots and other sharp details, giving the image a sort of matte finish (Image 5C).
Diffused On-Axis Lighting
Diffused on-axis lighting, or DOAL, works by pointing an LED light source at a beam splitter so that the reflected light travels parallel to the camera's mounting axis (Image 6A). DOAL should only be used on flat surfaces where you are trying to suppress very shiny areas of the surface to create a uniform image. Applications like DVD, CD, or silicon wafer inspection are among the most common uses for this type of lighting.
Structured Laser Line Lighting
Structured laser line lighting works by projecting a laser line onto a three-dimensional object (Image 7A), resulting in an image that gives you information on the height of the object. The mounting angle between the camera and the laser line transmitter determines how large the laser line shift is for a given height (Image 7B). When there is no object, the laser line appears flat (Image 7C).
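The height recovery behind this technique is simple triangulation. Assuming the camera looks straight down and the laser is projected at a known angle from vertical, a surface raised by height h displaces the line sideways by shift = h · tan(angle), so the height follows from the observed shift. A minimal sketch (the geometry is idealized; real systems calibrate this mapping):

```python
import math

def height_from_shift(shift_mm: float, laser_angle_deg: float) -> float:
    """Laser triangulation: camera perpendicular to the surface, laser line
    projected at laser_angle_deg from vertical. A raised surface shifts the
    line by shift = h * tan(angle), so h = shift / tan(angle)."""
    return shift_mm / math.tan(math.radians(laser_angle_deg))

# A 2.0 mm lateral line shift with the laser mounted 45 degrees off vertical:
# tan(45°) = 1, so the measured height equals the shift.
print(round(height_from_shift(2.0, 45.0), 3))  # → 2.0
```

This also explains the note above about mounting angle: a steeper angle between camera and laser produces a larger shift per millimeter of height, improving resolution at the cost of occlusion on tall parts.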
Real Life Applications
The images below, (Image 8A) and (Image 8B), were used for an application that requires the pins of a connector to be counted. As you can see, the bright field lighting on the left does not produce a clear image, but the dark field lighting on the right does.
This next example, (Image 9A) and (Image 9B), comes from an application that requires a bar code to be read through a cellophane wrapper. The unclear image (Image 9A) was acquired using an on-axis ring light, while the use of dome lighting (Image 9B) resulted in a clear, easy-to-read image of the bar code.
This example, (Image 10A), (Image 10B), and (Image 10C), highlights different lighting techniques on the same object. In (Image 10A), backlighting is used to measure the smaller hole diameter. In (Image 10B), dome lighting is used to inspect the taper of the upper hole relative to the lower hole. In (Image 10C), dark field lighting is used to perform optical character recognition (OCR) on the object. Each of these effects could be a positive or a negative depending on what you are trying to accomplish.
While IO-Link is well established in automated production environments, some have overlooked the benefits IO-Link can deliver for machine vision products.
Any IO-Link device can be connected to and controlled by the PLC via a fieldbus interface. The key values are reduced installation costs and straightforward control and operation of IO-Link components; all the well-known IO-Link benefits apply.
Camera Mode – without PLC
In this mode, however, the camera controls the IO-Link interfaces directly: IO-Link I/O modules are automatically detected, configured, and controlled.
In a stand-alone installation, an optical inspection of a component is performed without a PLC: the operator places the component and presses a trigger button; the SmartCamera checks the part for completeness and production quality, sends a report to a separate customer server, and directly controls the connected devices via its IO-Link interface.
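That stand-alone cycle can be sketched as a short control loop. Everything below is a hypothetical placeholder to show the flow of trigger, inspection, reporting, and IO-Link output; it is not a vendor API:

```python
def run_inspection_cycle(trigger_pressed, inspect, send_report, set_output):
    """One no-PLC inspection cycle: wait for the operator's trigger,
    inspect, report to the server, and drive an IO-Link output directly."""
    if not trigger_pressed():
        return None                       # no part presented yet
    result = inspect()                    # camera checks completeness/quality
    send_report(result)                   # report to the customer's server
    set_output("reject_gate", not result["passed"])  # IO-Link output, hypothetical port name
    return result

# Exercising the loop with stand-in callables:
result = run_inspection_cycle(
    trigger_pressed=lambda: True,
    inspect=lambda: {"passed": True, "part": "housing-12"},
    send_report=lambda r: None,
    set_output=lambda port, on: None,
)
print(result)
```

The point of the sketch is that the camera itself owns the whole cycle; there is no PLC in the loop between trigger, verdict, and output.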
For more information about machine vision and optical identification see www.balluff.com
To OCV, or OCR: that is the question: Whether ’tis nobler to use OCV (Optical Character Verification) to verify print, Or OCR (Optical Character Recognition) to decode a sea of print troubles. And by decoding will turmoil end? No more to have the camera sleep; we program the TTL (Time to Live) That font won’t print correctly, ’tis a communication issue? The undiscover’d font no longer puzzles the will as I can check with OCV.
OCR software in machine vision has a library of numbers, letters, fonts, and special characters. Sometimes print is not readable when quality-checked against the ISO 1831:1980 specification library. Fortunately, we can teach printed characters using OCV. To verify the quality of print, it can be graded following ISO/IEC 15415, ISO/IEC 15416, and AIM DPM-1-2006 / ISO/IEC TR 29158. These standards also cover print quality grading when 1D or 2D barcodes are read.
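At its core, OCV does not decode the character at all: it compares the printed mark against a taught template and passes or fails it on similarity. A toy illustration of that verify-against-template idea using a tiny binary character template (real OCV tools grade many more print attributes than this single score):

```python
def similarity(taught, printed):
    """Fraction of pixels that match between a taught binary template and
    the printed sample of the same character (toy OCV check)."""
    matches = sum(1 for t, p in zip(taught, printed) if t == p)
    return matches / len(taught)

# 5x5 binary template of a taught "T", flattened row by row
taught_T = [1, 1, 1, 1, 1,
            0, 0, 1, 0, 0,
            0, 0, 1, 0, 0,
            0, 0, 1, 0, 0,
            0, 0, 1, 0, 0]

good_print = list(taught_T)            # printed exactly as taught
bad_print = list(taught_T)
bad_print[0] = 0                       # worn corners on the crossbar
bad_print[4] = 0

print(similarity(taught_T, good_print))  # → 1.0
print(similarity(taught_T, bad_print))   # → 0.92
```

With a pass threshold of, say, 0.95 (an arbitrary value for this sketch), the worn print fails verification even though an OCR engine might still decode it as a "T"; that distinction is exactly why print verification and print reading are separate tools.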
Hence, methinks even Shakespeare would be impressed by modern-day OCV and OCR technology.
Finding information that is not biased or a shrouded sales pitch for a company's products can be a difficult proposition in today's open-communication society. The world of machine vision is no exception, so when seeking unbiased information it can sometimes seem like the deck is stacked against you.
In parts one and two of this blog series, I described the typical packaging process, how actual runtime is defined, how vision is used to improve runtime, and how vision compares to the use of discrete sensors. In this last installment, I will present specific examples of how vision sensors have been used in packaging, along with two case studies exemplifying the benefits customers achieved by using vision in their processes.
In part one of this blog series, I described the typical packaging process and how runtime is actually broken down and defined in many processes. In this second part on vision sensors in packaging, I will describe specifically how vision is used to reduce planned and unplanned downtime, and compare discrete sensors with vision for achieving the same goals of error-proofing a process and improving runtime.
One of the things I am often asked is, "Why use machine vision in packaging?" There are many reasons, including dealing with the perceived complexity of serviceability and cost. In this three-part blog series I will show where the use of vision in packaging can significantly decrease a major cost factor called "planned downtime," along with other benefits, so stay tuned for my later posts.