Tackle Quality Issues and Improve OEE in Vision Systems for Packaging

Packaging industries must operate with the highest standards of quality and productivity. Overall Equipment Effectiveness (OEE) is a scoring system widely used to track production processes in packaging. An OEE score is calculated from data specifying quality (percent of good parts), performance (percent of nominal speed) and availability (percent of planned uptime).
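To make the math concrete, here is a minimal sketch of the OEE calculation in Python. The function and variable names are illustrative, not from any standard library:

```python
def oee(good_parts: int, total_parts: int,
        actual_rate: float, nominal_rate: float,
        uptime_hours: float, planned_hours: float) -> float:
    """Overall Equipment Effectiveness: the product of the quality,
    performance and availability ratios (each between 0.0 and 1.0)."""
    quality = good_parts / total_parts           # percent of good parts
    performance = actual_rate / nominal_rate     # percent of nominal speed
    availability = uptime_hours / planned_hours  # percent of planned uptime
    return quality * performance * availability

# Example: 97% good parts, 90% of nominal speed and 95% planned uptime
# combine to an OEE of roughly 83%.
print(f"OEE: {oee(970, 1000, 54.0, 60.0, 7.6, 8.0):.1%}")
```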

Quality issues can directly impact the customer, so it is essential to have processes in place to ensure the product is safe to use and appropriately labeled before it ships out. Additionally, defects in the packaging, such as dents, scratches and inadequate labeling, can affect customer confidence in a product and their willingness to buy it at the store. Issues with quality can lead to unplanned downtime, waste and loss of productivity, affecting all three metrics of the OEE score.


Traditionally, visual inspections and packaging line audits have been used to monitor quality; however, this manual approach can be challenging in high-volume applications. Sensing solutions can partly automate the process, but complex demands, including multiple package formats and product formulas on the same line, require the flexibility that machine vision offers. Machine vision is also a vital component in adding traceability down to the unit in case a quality defect or product recall does occur.


Vision systems can increase productivity in a packaging line by reducing the planned and unplanned downtime needed for manual quality inspection. Vision can reliably detect quality defects as soon as they happen. With this information, a company can make educated improvements to the equipment to improve repeatability and OEE and ensure that no defective product reaches the customers’ hands.

Some vision applications for quality assurance in packaging include:

  • Label inspection (presence, integrity, print quality, OCV/OCR)
    Check that a label is in place, lined up correctly and free of scratches and tears. Ensure that any printed graphics, codes and text are legible and printed with the expected quality. Use OCR (Optical Character Recognition) to read a lot number, expiration date or product information, then OCV (Optical Character Verification) to ensure legibility (a rough code sketch of this read-then-verify flow appears after this list).
  • Primary and secondary packaging inspection for dents and damage
    Inspect bottles, cans and boxes to make sure that their geometry has not been altered during the manufacturing process. For example, check that a bottle rim is circular and has not been crushed so that the bottle cap can be put on after filling with product.
  • Safety seal/cap presence and position verification
    Verify that a cap and/or seal has been placed correctly on a bottle, and that the container being used is the correct one for the formula or product being manufactured.
  • Product position verification in packages with multiple items
    In packages of solids, make sure they have been filled adequately and in the correct sequence. In the pharmaceutical industry, this can be used to check that blister packs have a pill in each space, and in the food industry to ensure that the correct food item is placed in each space of the package.
  • Certification of proper liquid level in containers
    For applications in which fill level can’t be measured reliably with traditional sensing technologies, vision systems can be used to ensure that a bottle has been filled to its nominal volume.
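As a rough illustration of the read-then-verify flow mentioned in the label inspection item above, here is a minimal sketch using the open-source OpenCV and pytesseract libraries. The file name, region coordinates and expected string are all hypothetical; a production system would use its vision vendor's dedicated OCR/OCV tools:

```python
import cv2
import pytesseract

# Load the label image captured by the camera (file name is a placeholder).
image = cv2.imread("label.png", cv2.IMREAD_GRAYSCALE)

# Threshold to sharpen the contrast between print and background.
_, binary = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Crop the region where the lot/expiration print is expected (hypothetical coordinates).
roi = binary[40:90, 120:420]

# OCR: read whatever characters are present in the region.
text = pytesseract.image_to_string(roi).strip()

# Verification: compare the read text against the expected print.
expected = "LOT 12345 EXP 2026-01"  # hypothetical expected string
if text != expected:
    print(f"Reject: expected {expected!r}, read {text!r}")
```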

The flexibility of vision systems allows for addressing these complex applications and many more with a well-designed vision solution.

For more information on Balluff vision solutions and applications, visit www.balluff.com.

Sensor and Device Connectivity Solutions For Collaborative Robots

Sensors and peripheral devices are a critical part of any robot system, including collaborative applications. A wide variety of sensors and devices are used on and around robots, along with actuation and signaling devices. Integrating these and connecting them to the robot control system and network can present challenges: multiple or long cables, slip rings, many terminations, high connection costs, inflexible configurations and difficult troubleshooting. But device-level protocols, such as IO-Link, provide simpler, cost-effective and “open” ways to connect these sensors to the control system.

Just as the human body requires eyes, ears, skin, nose and tongue to sense the environment around it so that action can be taken, a collaborative robot needs sensors to complete its programmed tasks. We’ve discussed the four modes of collaborative operation in previous blogs, detailing how each mode has special safety/sensing needs, but all modes share common needs: detecting work material, fixtures, gripper position, force, quality and other aspects of the manufacturing process. This is where sensors come in.

Typical collaborative robot sensors include inductive, photoelectric, capacitive, vision, magnetic, safety and other types of sensors. These sensors help the robot detect the position, orientation and type of objects, as well as its own position, so it can move accurately and safely within its surroundings. Other devices around a robot include valves, RFID readers/writers, indicator lights, actuators, power supplies and more.

The table below considers the four collaborative modes and the use of different types of sensors in these modes:

[Table 1: Use of sensor types across the four collaborative operating modes]

But how can users easily and cost-effectively connect this many sensors and devices to the robot control system? One solution is IO-Link. In the past, robot users would run cables from each sensor to the control system, resulting in long cable runs, wiring difficulties (cutting, stripping, terminating, labeling) and challenges with troubleshooting. IO-Link solves these issues through simple point-to-point wiring using off-the-shelf cables.
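To give a feel for how simple the data side becomes, below is a hedged sketch of decoding the process data an IO-Link master might deliver for an 8-port sensor hub. The one-bit-per-port byte layout is purely hypothetical; every real device documents its own mapping in its IODD description file:

```python
def decode_hub_byte(process_data: bytes) -> dict[int, bool]:
    """Map each hub port number (1-8) to its sensor state.

    Assumes a hypothetical layout of one status bit per port,
    with bit 0 corresponding to port 1 (1 = target detected).
    """
    byte = process_data[0]
    return {port + 1: bool(byte & (1 << port)) for port in range(8)}

# Example: a process-data byte with bits 0 and 2 set means the
# sensors on ports 1 and 3 are currently detecting a target.
print(decode_hub_byte(bytes([0b00000101])))
```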


Collaborative (and traditional) robot users face many challenges when connecting sensors and peripheral devices to their control systems. IO-Link addresses many of these issues and can offer significant benefits:

  • Reduced wiring through a single field network connection to hubs
  • Simple connectivity using off-the-shelf cables with plug connectors
  • Compatible with all major industrial Ethernet-based protocols
  • Easy tool change with Inductive Couplers
  • Advanced data/diagnostics
  • Parameterization of field devices
  • Faster/simpler troubleshooting
  • Support for implementation of IIoT/Industry 4.0 solutions

IO-Link: an excellent solution for simple, fast and cost-effective device connection to collaborative robots.

When to use optical filtering in a machine vision application

Industrial image processing is essentially a requirement in modern manufacturing. Vision solutions can deliver visual quality control, identification and positioning. While vision systems have gotten easier to install and use, there isn’t a one-size-fits-all solution. Knowing how and when you should use optical filtering in a machine vision application is a vital part of making sure your system delivers everything you need.

So when should you use optical filtering in your machine vision applications? ALWAYS. Optical filtering increases contrast, usable resolution and image quality, and most importantly, it dramatically reduces ambient light interference, which is the number one reason a machine vision application doesn’t work as expected.

Different applications require different types of filtering. I’ve highlighted the most common.

Bandpass Filtering

Different light spectra will enhance or de-emphasize certain aspects of the target you are inspecting. Therefore, the first thing you want to do is select the proper color/wavelength that will give you the best contrast for your application. For example, if you are using a red area light that transmits at 617 nm (Figure 1), you will want to select a filter (Figure 3) to attach to the lens (Figure 2) that passes the wavelength of the area light and filters out the rest of the color spectrum. This technique is called bandpass filtering (Figure 4).

This allows only the light from the area light to pass through while all other light is filtered out. To further illustrate the kinds of effects that can be emphasized or de-emphasized, we can look at the following images of the same product with different filters.
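As a simple numeric illustration of the principle, the snippet below models an idealized bandpass filter and checks which light sources it passes. The 40 nm passband is an example value, not any specific product's specification:

```python
def passes(filter_center_nm: float, filter_width_nm: float,
           wavelength_nm: float) -> bool:
    """Idealized bandpass filter: transmits only light inside the band."""
    return abs(wavelength_nm - filter_center_nm) <= filter_width_nm / 2

# A 617 nm bandpass filter with a hypothetical 40 nm passband:
print(passes(617, 40, 617))  # True  -> the red area light passes
print(passes(617, 40, 470))  # False -> blue ambient light is blocked
print(passes(617, 40, 550))  # False -> green ambient light is blocked
```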

Another example of bandpass filtering can be seen in Figure 9, which demonstrates the benefit of using a filter in an application to read the LOT code and best-before date. A blue LED light source and a blue bandpass filter make the information readable; without the filter, it isn’t.

[Figure 9: LOT code and best-before date made readable with a blue LED source and blue bandpass filter]

Narrow Bandpass Filtering

Narrow bandpass filtering, shown in Figure 10, is mostly used for laser line dimensional measurement applications, referenced in Figure 11. This technique creates more ambient light immunity than normal bandpass filtering. It also narrows the band of light reaching the sensor and creates a black-on-white effect, which is the desired outcome for this application.

Shortpass Filtering

Another optical filtering technique is shortpass filtering, shown in Figure 12, which is commonly used in color camera imaging because it filters out UV and IR light sources to give you a true color image.

[Figure 12: Shortpass filtering]

Longpass Filtering

Longpass filtering, referenced in Figure 13, is often used in IR applications where you want to suppress the visible light spectrum.

[Figure 13: Longpass filtering]

Neutral Density Filtering

Neutral density filtering is regularly used in LED inspection. Without filtering, light coming from the LEDs completely saturates the image, making it difficult, if not impossible, to do a proper inspection. Neutral density filtering acts like sunglasses for your camera: it reduces the amount of full-spectrum light the camera sees.
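The dimming effect of a neutral density filter is typically specified as an optical density (OD), where transmission drops by a factor of ten per unit of density. A quick worked example:

```python
def nd_transmission(optical_density: float) -> float:
    """Fraction of light transmitted by an ND filter of the given density."""
    return 10 ** -optical_density

# An OD 1.0 filter passes 10% of the light; OD 2.0 passes just 1%.
for od in (0.3, 1.0, 2.0):
    print(f"OD {od}: {nd_transmission(od):.1%} transmitted")
```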

Polarization Filtering

Polarization filtering is best to use when you have surfaces that are highly reflective or shiny. It can be deployed to reduce glare on your target. You can clearly see the benefits of this in Figure 14.

[Figure 14: Glare reduction with polarization filtering]

How flexible inspection capabilities help meet customization needs and deliver operational excellence

As the automotive industry introduces more options to meet the growing complexity of customer demands (such as an increased variety of trim options), new challenges arise for automotive manufacturers.

Demands of the market filter directly back to the manufacturing floor of tier suppliers, who must find the means to fulfill market requirements on a flexible industrial network, either new or existing. Their customers’ success depends on the tier supply chain delivering within a tight timeline; when pressure is applied to that ecosystem, meeting just-in-time (JIT) supply requirements becomes more difficult, resulting in increased operating costs and potential penalties.

Meeting customer requirements creates operational challenges, including production time lost to product variety and increased tool change time. Finding ways to simplify tool change and validate that the correct components are placed in the correct assembly or module to optimize production is now an industry priority. In addition, tracking and traceability are playing a strong role in ensuring the correct manufacturing process has been followed and implemented.

How can manufacturers implement highly flexible inspection capabilities that communicate directly with the process control network and/or MES network, and that allow inspection characteristics to be changed on the fly for different products on common tooling?

Smart Vision Inspection Systems

Compact smart vision inspection technology has come a long way from the temperamental systems of only a decade ago. Systems offered today have much more robust, intuitive software tools embedded directly in the device. These programming cockpit tools give end users at the plant the capability to execute fast, reliable solutions with proven algorithm tools. Multi-network protocols such as EtherNet/IP, PROFINET, TCP/IP (Gigabit Ethernet) and IO-Link are now a reality. Having multiple network capabilities delivers the opportunity not just to communicate the inspection result to the programmable logic controller (via the process network) but also to send image data, independent of the process network, via Gigabit Ethernet to the cloud or MES system. The ability to overlay relevant information onto the image, such as VIN, lot code or date code, is now achievable. In addition, camera housings have become more industrially robust, for example aluminum housings with an ingress protection rating of IP67.
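As an illustration of the image-overlay capability mentioned above, the sketch below stamps traceability text onto a captured frame using the open-source OpenCV library. The file names and overlay values are placeholders; a smart vision device would typically perform this step in its own firmware:

```python
import cv2

# Load a captured inspection image (file name is a placeholder).
frame = cv2.imread("inspection.png")

# Overlay traceability data in the top-left corner of the image.
overlay_text = "VIN 1HGCM82633A004352  LOT A17  2024-06-01"  # placeholder values
cv2.putText(frame, overlay_text, (10, 30),
            cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)

# Save the annotated image for the MES or cloud archive.
cv2.imwrite("inspection_annotated.png", frame)
```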

Industrial image processing is now a fixture within today’s manufacturing processes and is only growing. The technology can bring your company a step closer to enabling IIoT by bringing issues to your attention before they create downtime (predictive maintenance). Vision systems aid in reaching operational excellence as they uncover processing errors, reduce or eliminate scrap and provide meaningful feedback so corrective actions can be implemented.

How to Select the Best Lighting Techniques for Your Machine Vision Application

The key to deploying a robust machine vision application in a factory automation setting is ensuring that you create the necessary environment for a stable image.  The three areas you must focus on to ensure image stability are: lighting, lensing and material handling.  For this blog, I will focus on the seven main lighting techniques that are used in machine vision applications.

On-Axis Ring Lighting

On-axis ring lighting is the most common type of lighting because in many cases it is integrated on the camera and available as one part number. When using this type of lighting you almost always want to be a few degrees off perpendicular (Image 1A).  If you are perpendicular to the object you will get hot spots in the image (Image 1B), which is not desirable. When the camera with its ring light is tilted slightly off perpendicular you achieve the desired image (Image 1C).

Off-Axis Bright Field Lighting

Off-axis bright field lighting works by having a separate LED source mounted at about 15 degrees off perpendicular and having the camera mounted perpendicular to the surface (Image 2A). This lighting technique works best on mostly flat surfaces. The main surface or field will be bright, and the holes or indentations will be dark (Image 2B).

Dark Field Lighting

Dark field lighting must be positioned very close to the part, usually within an inch. The mounting angle of the dark field LEDs needs to be at least 45 degrees off perpendicular to create the desired effect (Image 3A). In short, it has the opposite effect of bright field lighting: the surface or field is dark, and the indentations or bumps will be much brighter (Image 3B).

Back Lighting

Back lighting works by having the camera pointed directly at the back light in a perpendicular mount.  The object you are inspecting is positioned in between the camera and the back light (Image 4A).  This lighting technique is the most robust that you can use because it creates a black target on a white background (Image 4B).
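Because backlighting produces a clean silhouette, the downstream image processing can be very simple. Here is a minimal sketch using the open-source OpenCV library (the file name and threshold value are illustrative):

```python
import cv2

# Backlit image: dark part silhouette on a bright background (placeholder file name).
img = cv2.imread("backlit_part.png", cv2.IMREAD_GRAYSCALE)

# Invert-threshold so the silhouette becomes the white foreground.
_, mask = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY_INV)

# The largest contour outlines the part; its area gives a quick go/no-go check.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
part = max(contours, key=cv2.contourArea)
print("Silhouette area (px):", cv2.contourArea(part))
```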

Diffused Dome Lighting

Diffused dome lighting, aka the salad bowl light, works by mounting the camera over a hole at the top of the bowl, with the LEDs mounted at the rim pointing straight up, so the light reflects off the curved interior of the bowl and creates very uniform illumination (Image 5A). Diffused dome lighting is used when the object you are inspecting is curved or non-uniform (Image 5B). Applied to an uneven surface or texture, this technique de-emphasizes hotspots and other sharp details and gives the image a sort of matte finish (Image 5C).

Diffused On-Axis Lighting

Diffused on-axis lighting, or DOAL, works by having an LED light source pointed at a beam splitter so that the reflected light travels parallel to the camera’s mounting axis (Image 6A). DOAL should only be used on flat surfaces where you are trying to diminish reflections from very shiny parts of the surface to create a uniform image. Applications like DVD, CD or silicon wafer inspection are some of the most common uses for this type of lighting.

[Image 6A: DOAL setup with beam splitter]

Structured Laser Line Lighting

Structured laser line lighting works by projecting a laser line onto a three-dimensional object (Image 7A), resulting in an image that gives you information on the height of the object. Depending on the mounting angle of the camera and laser line transmitter, the resulting laser line shift will be larger or smaller as you change the angle of the devices (Image 7B). When there is no object, the laser line will be flat (Image 7C).
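The underlying geometry is simple triangulation: if the laser projects perpendicular to the surface and the camera views the line at an angle from that axis, the sideways shift of the line in the image is proportional to the height of the object. A hedged sketch with illustrative numbers:

```python
import math

def height_from_shift(line_shift_mm: float, camera_angle_deg: float) -> float:
    """Convert an observed laser-line shift into object height.

    Simplified model: the laser projects perpendicular to the surface
    and the camera views it at camera_angle_deg from the laser axis.
    """
    return line_shift_mm / math.tan(math.radians(camera_angle_deg))

# Example: a 2.0 mm line shift viewed at 30 degrees implies ~3.5 mm of height.
print(f"Height: {height_from_shift(2.0, 30.0):.1f} mm")
```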

Real Life Applications 

The images below (Image 8A and Image 8B) were used for an application that requires the pins of a connector to be counted. As you can see, the bright field lighting on the left does not produce a clear image, but the dark field lighting on the right does.

This next example (Images 9A and 9B) was for an application that requires a bar code to be read through a cellophane wrapper. The unclear image (Image 9A) was acquired using an on-axis ring light, while dome lighting (Image 9B) resulted in a clear, easy-to-read image of the bar code.

This example (Images 10A, 10B and 10C) highlights different lighting techniques on the same object. In Image 10A, backlighting is being used to measure the smaller hole diameter. In Image 10B, dome lighting is being used to inspect the taper of the upper hole in reference to the lower hole. In Image 10C, dark field lighting is being used to do optical character recognition (OCR) on the object. Each of these could be viewed as a positive or negative depending on what you are trying to accomplish.

What’s So Smart About a Smart Camera?

Smart “things” are coming into the consumer market daily. If one Googles “Smart – Anything” they are sure to come up with pages of unique products which promise to make life easier. No doubt, there was a marketing consortium somewhere that chose to use the word “smart” to describe a device which includes many and variable features. The smart camera is a great example of one such product where its name only leads to more confusion due to the relative and ambiguous term used to summarize a large list of features. A smart camera, used in many manufacturing processes and applications, is essentially a more intuitive, all-in-one, plug-and-play, mid-level technology camera.

OK, so maybe the marketing consortium is on to something. “Smart” does indicate a lot of features in a simple, single word, but it is important to determine if those smart features translate into benefits that help solve problems. If a smart camera is really smart it should include the following list of benefits:

  • Intuitive: To say it is easy to use just doesn’t cut it. To say it is easy for a vision engineer to use doesn’t mean that it is easy for an operator, a controls engineer, a production engineer, etc. The camera should allow someone with basic vision knowledge and minimal vision experience to select tools (logically named) and solve general applications without having to consult the manufacturer for a two-day on-site visit for training and deployment.
  • All-In-One: The camera should house the whole package. This includes the software, manuals, network connections, etc. If the camera requires an external device like a laptop or an external switch to drive it, then it doesn’t qualify as smart.
  • Plug-and-play: Quick set up and deployment is the key. If the camera requires days of training and consultation just to get it up and running, then it’s not smart.
  • Relative technology: Smart cameras don’t necessarily need to have the highest end resolution, memory, or processing speed. These specs simply need to be robust enough to address the application. The best way to determine that is by conducting a feasibility study along with the manufacturer to make sure you are not paying for technology that won’t be needed or used.

Ultimately, a lot of things can be described as “smart”, but if you make the effort to investigate what smart actually means, it’s a whole lot easier to eliminate the “gotchas” that tend to pop up at the most inopportune times.

Note: As with any vision application, the most important things to consider are lighting, lenses and fixtures. I have heard vision gurus say those three things are more critical than the camera itself.

To OCV, or OCR, that is the question

To OCV, or OCR: that is the question:
Whether ’tis nobler to use OCV (Optical Character Verification) to verify print,
Or OCR (Optical Character Recognition) to decode a sea of print troubles.
And by decoding will turmoil end?
No more to have the camera sleep; we program the TTL (Time to Live)
That font won’t print correctly, ’tis a communication issue?
The undiscover’d font no longer puzzles the will as I can check with OCV.

OCR in machine vision software has a library of numbers, letters, fonts and special characters. Sometimes print is not readable when quality is checked against the ISO 1831:1980 specification. Fortunately, we can teach printed characters using OCV. To verify the quality of print, it can be graded following ISO 15415/15416 and AIM DPM-1-2006 (ISO/IEC 29158). These standards also check print quality when 1D or 2D barcodes are read.

Hence, methinks even Shakespeare would be impressed by modern-day OCV and OCR technology.

To learn more about machine vision visit www.balluff.us/vision.

Special thanks to Diane Weymier-Dodd for her contribution to this post. 

Reducing Planned/Unplanned Downtime with Vision Sensors; Part 3


In parts one and two of this blog series, I described the typical packaging process, how actual runtime is defined, how vision is used to improve runtime, and how vision compares to the use of discrete sensors. In this last installment, I will show specific examples of how vision sensors have been used in packaging, along with two case studies exemplifying the benefits customers achieved with the use of vision in their processes.


Reducing Planned/Unplanned Downtime with Vision Sensors; Part 2


In part one of this blog series, I described the typical packaging process and how runtime is actually broken down and defined. In this second part on vision sensors in packaging, I will specifically describe how vision is used to reduce planned and unplanned downtime, and compare discrete sensors versus vision for error-proofing a process and improving runtime.


Reducing Planned/Unplanned Downtime with Vision Sensors; Part 1


One of the things I am often asked is, “Why use machine vision in packaging?” There are many reasons for the question, including the perceived complexity, serviceability and cost of vision. In this three-part blog series, I will show where the use of vision in packaging can significantly decrease a major cost factor called “planned downtime”, along with other benefits, so stay tuned for my later posts.
