Machine Vision: 5 Simple Steps to Choose the Right Camera

The machine vision and industrial camera market offers thousands of models with different resolutions, sizes, speeds, color options, interfaces, prices, etc. So, how do you choose? Let's go through 5 simple steps that will make it easy to select the right camera for your application. 

1.  Defined task: color or monochrome camera  

2.  Amount of information: minimum number of pixels per object detail 

3.  Sensor resolution: formula for calculating the image sensor 

4.  Shutter technology: moving or static object 

5.  Interfaces and camera selector: let's pick the right model 

STEP 1 – Defined task  

It is always necessary to start with the size of the scanned object (X, Y) and to determine the smallest detail (d) that you want to distinguish with the camera.

For easier explanation, let's solve a measurement task. The same procedure, however, can be used for any other application.

In this task, the distance (D) between the centers of two holes must be determined with a measurement accuracy of (d). Using these values, we then derive the parameters for selecting the right image sensor and camera.

Example:
The distance (D) between 2 points must be measured with an accuracy (d) of 0.05 mm. Object size X = 48 mm (monochrome sensor, because color is not relevant here).

Note: Monochrome or color?
Color sensors use a Bayer color filter, which allows only one basic color to reach each pixel. The missing colors are determined by interpolating the neighboring pixels. Monochrome sensors are twice as light sensitive as color sensors and deliver a sharper image by acquiring more detail within the same number of pixels. For this reason, monochrome sensors are recommended whenever no color information is needed.
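To see the difference in practice, here is a minimal Python/OpenCV sketch. The RGGB pattern and frame size are assumptions for illustration; the actual Bayer layout depends on your sensor.

```python
import cv2
import numpy as np

# Simulated 8-bit raw frame from a color sensor with an RGGB Bayer filter.
# With a real camera you would grab this buffer from the vendor SDK instead.
raw = np.random.randint(0, 256, (2080, 3092), dtype=np.uint8)

# A color sensor needs this interpolation (demosaicing) step: each pixel
# saw only one basic color, so the two missing channels are estimated
# from neighboring pixels.
color = cv2.cvtColor(raw, cv2.COLOR_BayerRG2BGR)

# A monochrome sensor delivers the full-resolution intensity image
# directly -- no interpolation, hence the sharper result per pixel.
mono = raw

print(color.shape, mono.shape)  # (2080, 3092, 3) (2080, 3092)
```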

STEP 2 – Amount of information

Each type of application needs a different amount of information to be solved. This is expressed as a minimum number of pixels per object detail. Let's again use the monochrome option.

Minimum number of pixels per object detail:

  • Object detail measuring / detection: 3
  • Barcode line width: 2
  • Data Matrix code module width: 4
  • OCR character height: 16

Example:
Measuring requires 3 pixels per object detail for the necessary accuracy. The required accuracy (d), which is 0.05 mm in this example, is therefore imaged onto 3 pixels.

Note:
Each characteristic or application type presupposes a minimum number of pixels. This avoids the loss of information through sampling blur.

STEP 3 – Sensor resolution

We have already defined the object size as well as the required accuracy. As a next step, we are going to define the resolution of the camera, using a simple formula to calculate the image sensor.

S = (N x O) / d = (min. number of pixels per object detail x object size) / object detail size

Object size (O) can be described horizontally as well as vertically. Some sensors are square, which eliminates this problem 😊

Example:
S = (3 x 48 mm) / 0.05 mm = 2880 pixels

Looking at the available image sensors, the closest match is a model with a resolution of 3092 x 2080 => a 6.4-megapixel image sensor.

Note:
Pay attention to the format of the sensor.

For a correct calculation, it is necessary to check the resolution, not only in the horizontal but also in the vertical axis.
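This calculation is easy to script. Below is a minimal Python sketch of the formula above; the 32 mm object height used for the vertical axis is an assumed value for illustration.

```python
def required_resolution(pixels_per_detail, object_size_mm, detail_size_mm):
    """S = (N x O) / d from STEP 3."""
    return pixels_per_detail * object_size_mm / detail_size_mm

# Values from the example: 3 pixels per detail, 48 mm object, 0.05 mm accuracy.
horizontal = required_resolution(3, 48.0, 0.05)   # -> 2880 pixels

# Check the vertical axis too, e.g. for an assumed 32 mm object height.
vertical = required_resolution(3, 32.0, 0.05)     # -> 1920 pixels

print(horizontal, vertical)
# The closest standard sensor covering 2880 x 1920 is 3092 x 2080 (~6.4 MP).
```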


STEP 4 – Shutter technology

Global shutter versus rolling shutter.

Both technologies are standard in machine vision, and you can find hundreds of camera models with each.

Rolling shutter: exposes the scene line by line, which results in a time delay between successively acquired lines. Moving objects therefore appear blurred or distorted in the resulting image because of this "object time offset".

Pros:

    • More light sensitive
    • Less expensive
    • Smaller pixel size provides higher resolution with the same image format.

Cons:

    • Image distortion occurs on moving objects

Global shutter: used to get distortion-free images by exposing all pixels at the same time.

Pros:

    • Great for fast processes
    • Sharp images with no blur on moving objects.

Cons:

    • More expensive
    • Larger image format

Note:
The newest rolling shutter sensors have a feature called global reset mode, which starts the exposure of all rows simultaneously. However, the readout of the lines still works like a rolling shutter: line by line.

This means the bottom lines of the sensor are exposed to light longer! For this reason, this mode only makes sense if there is no extraneous light and the flash duration is shorter than or equal to the exposure time.

STEP 5 – Interfaces and camera selector

The final step is here:

You must consider the achievable speed (bandwidth) as well as the cable length of each camera interface technology.

USB2
Small, handy, and cost-effective, USB 2.0 industrial cameras have become integral parts of medical and microscopy systems. A wide range of variants is available: with or without housing, as board-level or single-board cameras, and with or without digital I/Os.

USB3/GigE Vision
Without standards, every manufacturer does their own thing, and many advantages customers learned to love with the GigE Vision standard would be lost. Like GigE Vision, USB3 Vision also defines:

    • a transport layer, which controls the detection of a device (Device Detection)
    • the configuration (Register Access)
    • the data streaming (Streaming Data)
    • the handling of events (Event Handling)
    • an established interface to GenICam

GenICam abstracts the access to the camera features for the user. The features are standardized (name and behavior) by the Standard Feature Naming Convention (SFNC). Additionally, it is possible to create vendor-specific features beyond the SFNC to differentiate from other vendors (quality of implementation). In contrast to GigE Vision, this time the mechanics (e.g. lockable cable connectors) are part of the standard, which leads to a more robust interface.
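As a quick sanity check when comparing interfaces, you can estimate the achievable frame rate from the interface bandwidth. The Python sketch below is a back-of-the-envelope example; the usable throughput figures are rough assumptions and ignore protocol overhead.

```python
# Assumed usable throughput per interface, in MB/s (rough, real-world
# values are lower than the nominal bit rates and vary by implementation).
INTERFACE_MB_PER_S = {
    "USB2": 40,          # ~480 Mbit/s nominal, much less usable in practice
    "GigE Vision": 100,  # ~1 Gbit/s nominal
    "USB3 Vision": 350,  # ~5 Gbit/s nominal
}

def max_fps(width, height, bytes_per_pixel, interface):
    """Estimate the bandwidth-limited frame rate for a given camera."""
    frame_bytes = width * height * bytes_per_pixel
    return INTERFACE_MB_PER_S[interface] * 1e6 / frame_bytes

# The 6.4 MP monochrome camera from STEP 3 (1 byte per pixel):
for name in INTERFACE_MB_PER_S:
    print(f"{name}: {max_fps(3092, 2080, 1, name):.1f} fps")
```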

I believe these five points will help you choose the most suitable camera. Are you still unclear? Do not hesitate to contact us, or contact me directly: I will be happy to consult on your project, needs, or any questions.


Which 3D Vision Technology is Best for Your Application?

3D machine vision. This is such a magical combination of words. There are dozens of different solutions on the market, but they are typically not universal enough, or they are so universal that they are not sufficient for your application. In this blog, I will introduce different approaches to 3D technology and review which principle will be best for future use.

Bonus:  I created a poll asking professionals what 3D vision technology they believe is best and I’ve shared the results.

Triangulation

One of the most used technologies in the 3D camera world is triangulation, which provides simple distance measurement by angular calculation. The reflected light falls onto a receiving element at an angle that depends on the distance. This standard method relies on a combination of a projector and a camera. There are two basic projection variants: a single-line structure and a two-dimensional geometric pattern.

A single projected line is used in applications where the object is moving under the camera. If you have a static object, then you can use multiple parallel lines that allow the evaluation of the complete scene/surface. This is done with a laser light shaped into a two-dimensional geometric pattern (“structured light”) typically using a diffractive optical element (DOE). The most common patterns are dot matrices, line grids, multiple parallel lines, and circles.
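To make the angular calculation concrete, here is a minimal Python sketch of single-line triangulation under a simplified geometry: laser projected perpendicular to the surface, camera viewing at a known angle, and a calibrated mm-per-pixel scale. All values are illustrative assumptions.

```python
import math

def height_from_shift(shift_px, mm_per_px, camera_angle_deg):
    """Simplified laser-line triangulation: a surface raised by height h
    shifts the imaged line sideways by h * tan(camera_angle)."""
    shift_mm = shift_px * mm_per_px
    return shift_mm / math.tan(math.radians(camera_angle_deg))

# Example: the line appears shifted 12 px, scale 0.1 mm/px, camera at 30 deg.
print(f"{height_from_shift(12, 0.1, 30):.2f} mm")  # ~2.08 mm
```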

Structured light

Another common principle of 3D camera technology is the structured light technique. The system contains at least one camera (it is most common to use two) and a projector. The projector creates narrow bands of light (patterns of parallel stripes are widely used) that illuminate the captured object. The cameras observe the resulting curved lines from different angles.

Projecting also depends on the technology which is used to create the pattern. Currently, the three most widespread digital projection technologies are:

  • transmissive liquid crystal
  • reflective liquid crystal on silicon (LCOS)
  • digital light processing (DLP)

Reflective and transparent surfaces create challenges.

Time of Flight (ToF)

For this principle, the camera contains a high-power LED which emits light that is reflected from the object and then returns to the image sensor. The distance from the camera to the object is calculated based on the time delay between transmitted and received light.

This really simple principle is used for many 3D applications. The most common wavelength is around 850 nm, in the near-infrared range, which is invisible to the human eye and eye-safe.
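The distance math itself is a one-liner, as the sketch below shows. Note that many ToF sensors actually measure the phase shift of modulated light rather than a raw pulse time, but the resulting distance relation is the same idea.

```python
SPEED_OF_LIGHT = 299_792_458  # m/s

def distance_m(round_trip_time_s):
    # The light travels to the object and back, hence the factor of 2.
    return SPEED_OF_LIGHT * round_trip_time_s / 2

# A 6.67 ns round trip corresponds to an object about 1 m away.
print(f"{distance_m(6.67e-9):.3f} m")
```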

It is especially useful that the camera can typically provide a 2D as well as a 3D picture at the same time.

An image sensor and LED emitter are combined in an all-in-one product, making it simple to integrate and easy to use. A drawback, however, is that the maximum resolution is VGA (640 x 480), and for Z resolution you should expect +/- 1 cm. On the other hand, it is an inexpensive solution with modest dimensions.

Likely applications include:

  • mobile robotics
  • door controls
  • localization of the objects
  • mobile phones
  • gaming consoles (Xbox with the Kinect camera) or the industrial version, Azure Kinect

Stereo vision

Stereo vision is a quite common 3D camera method that typically includes two area scan sensors (cameras). As with human vision, 3D information is obtained by comparing images taken from two locations.

The principle, sometimes called stereoscopic vision, captures the same scene from different angles. The depth information is then calculated from the image pixel disparities (difference in lateral position).

The matching process, finding the same information with the right and left cameras, is critical to data accuracy and density.
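Once matching pixels are found, the underlying relation is the classic pinhole stereo formula Z = f x B / d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity. A minimal Python sketch with assumed values:

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Classic pinhole stereo relation: Z = f * B / d."""
    return focal_px * baseline_mm / disparity_px

# Assumed setup: 1400 px focal length, 60 mm baseline between the cameras.
for d in (70, 35, 14):
    print(f"disparity {d} px -> {depth_from_disparity(1400, 60, d):.0f} mm")
# Note how depth resolution degrades as disparity shrinks with distance.
```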

Likely applications include:

  • Navigation
  • Bin-picking
  • Depalletization
  • Robotic guidance
  • Autonomous Guiding Vehicles
  • Quality control and product classification

I asked my friends, colleagues, professionals, as well as competitors on LinkedIn what the best 3D technology is and which technology will be used in the future. You can see the result here.

As you can see, over 50% of the people believe that there is no single principle which can solve every task in the 3D machine vision world. And maybe that's why machine vision is such a beautiful technology: many approaches, many solutions, and many smart people bringing answers from different perspectives.

Machine Vision: A Twenty-first Century Automation Solution

Lasers, scanners, fingerprint readers, and face recognition are not just science fiction anymore. I love seeing technology only previously imagined become reality through necessity and advances in technology. We, as a world economy, need to be able to verify who we are and ensure transactions are safe and materials and goods are tracked accurately. With this need came the evolution of laser barcode readers, fingerprint identification devices, and Face ID on your phone. Similar needs have pushed archaic devices to be replaced within factory automation for data collection.

When I began my career in control engineering in the 1990s, high-tech tools were limited to PLCs, frequency drives, and HMIs. The quality inspection data these devices relied on was collected mostly through limit switches and proximity sensors. Machine vision was still in its expensive and "cute" stage. With the need for more information, seriously accurate measurement, machining specs, and speed, machine vision has evolved, just like our personal technology has, to fill the needs of the modern time.

Machine vision has worked its way into the automation world as a need-to-have rather than a nice-to-have. With the ability to stack several tools and validations on top of each other within a fraction-of-a-second scan, we now have the data our era needs to stay competitive. Imagine an application requiring you to detect several material traits, measure the part, read a barcode for tracking, and validate a properly printed logo screened onto the finished product. Sure, you could use several individual laser sensors, barcode readers, and possibly even a vision sensor, all working in concert to achieve your goal. Or you could use a machine vision system to do all the above easily, with room to grow.

I say all of this because there is still resistance in the market to moving to machine vision due to historically high costs and complexity. Machine vision is here to stay and ready for your applications today. Think of it this way: how capable would you think a business is if they took out a carbon-copy credit card machine to run a payment for you? Keep that in mind before you start trying to solve applications with several sensors. Take advantage of the technology at your fingertips; don't hold on to nostalgia.

Buying a Machine Vision System? Focus on Capabilities, Not Cost

Gone are the days when an industrial camera was used only to take a picture and send it to a control PC. Machine vision systems are a much more sophisticated solution. Projects are increasingly demanding in terms of image processing, speed, size, complexity, defect recognition, and much more.

This, of course, contributes to the new approach in the field of software, where deep learning and artificial intelligence play a bigger and bigger role. A lot of effort often goes into improved image processing; however, only a few people have realized that part of it can already be handled by that little "dummy" industrial camera.

In the next few paragraphs, I will briefly explain how to achieve this in your application. Thanks to that, you will be able to gain some of these benefits:

  • Reduce the amount of data
  • Relieve the entire system
  • Generate the maximum performance potential
  • Simplify the hardware structure
  • Reduce the installation work required
  • Reduce your hardware costs
  • Reduce your software costs
  • Reduce your development expenses

How to achieve it?  

Try to use more intelligent industrial cameras, which have built-in internal memory, sometimes called a buffer. Together with an FPGA (field-programmable gate array), they can do a lot of work that your image processing software will appreciate. These functions are often called pre-processing features.

What if you have a project where the camera must send images much faster than the USB or Ethernet interface allows?

For simple cameras, this would mean using a much faster interface, which of course would make the complete solution more expensive. Instead, you can use the Smart Framer Recall function in standard USB and GigE cameras, which generates small preview images with reduced resolution (thumbnails) at a greatly accelerated frame rate; these are transferred to the host PC with IDs. At the same time, the corresponding image in full resolution is archived in the camera's image memory. If an image is required in full resolution, the application sends a request and the image is transferred in the same data stream as the preview images.

The function is explained in this video.
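Conceptually, the host-side logic looks something like the following Python sketch. The method names are hypothetical stand-ins, not a real SDK API; the actual feature names depend on the vendor's GenICam interface.

```python
def inspect_stream(camera, needs_full_res):
    """Hypothetical host-side loop for a thumbnail-plus-recall workflow.
    `camera.get_thumbnail()` and `camera.request_full_image()` are
    illustrative stand-ins for vendor-specific SDK calls."""
    while True:
        thumb = camera.get_thumbnail()      # small, fast preview frame + ID
        if thumb is None:                   # stream ended
            break
        if needs_full_res(thumb.image):
            # Recall the archived full-resolution frame by its ID; it is
            # delivered in the same data stream as the previews.
            yield camera.request_full_image(thumb.frame_id)
```

The point of the design is that full-resolution frames cross the interface only when the application actually asks for them.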

Is there a simpler option than a line scan camera? Yes!

Many people struggle to use line scan cameras, and it is understandable. They are not easy to configure, are hard to install, are difficult to set up properly, and few people can modify them. Instead, you can use an area scan camera in line scan mode. The biggest benefit is the standard interface: USB3 Vision and GigE Vision instead of CoaXPress and Camera Link. This enables inspection of round/rotating bodies or long/endless materials at high speed (like line scan cameras). Block scan mode acquires an area of interest (AOI) block which consists of several lines. The user defines the number of AOI blocks which are used to create one image. This minimizes the overhead you would otherwise have when transferring AOI blocks as single images over the USB3 Vision and GigE Vision protocols.

The function is explained in this video.
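Conceptually, block scan mode assembles many small line blocks into one frame before transfer. A minimal numpy sketch of that idea, with illustrative block and image sizes:

```python
import numpy as np

def assemble_block_scan(blocks):
    """Stack several AOI blocks (each a few sensor lines high) into one
    image, as block scan mode does before transferring a single frame."""
    return np.vstack(blocks)

# Example: 64 blocks of 8 lines x 2048 px each -> one 512 x 2048 image,
# transferred with the overhead of one frame instead of 64 single images.
blocks = [np.zeros((8, 2048), dtype=np.uint8) for _ in range(64)]
image = assemble_block_scan(blocks)
print(image.shape)  # (512, 2048)
```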

Polarization has never been easier

Sony came up with a completely new approach: a polarized image sensor. Until this was developed, everyone just used a polarization filter in front of the lens and combined it with polarized lighting. With the polarized sensor, a polarizer array sits above the pixel array, and each 2x2 pixel block contains polarizers at 0°, 45°, 90°, and 135°.
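From the four polarization channels you can compute the degree and angle of linear polarization using standard Stokes-parameter math. Below is a minimal numpy sketch; the sub-pixel layout shown is an assumption and should be checked against your sensor's documentation.

```python
import numpy as np

def polarization_analysis(i0, i45, i90, i135):
    """Degree and angle of linear polarization from the four sub-images
    of a polarized sensor (standard Stokes-parameter formulas)."""
    i0, i45, i90, i135 = (a.astype(np.float64) for a in (i0, i45, i90, i135))
    s0 = (i0 + i45 + i90 + i135) / 2               # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)  # 0..1
    aolp = 0.5 * np.arctan2(s2, s1)                # radians
    return dolp, aolp

# Each 2x2 pixel block of the raw frame holds one pixel per direction;
# the exact arrangement below is an assumed example.
raw = np.random.randint(0, 255, (2048, 2448)).astype(np.uint8)
i90, i45 = raw[0::2, 0::2], raw[0::2, 1::2]
i135, i0 = raw[1::2, 0::2], raw[1::2, 1::2]
dolp, aolp = polarization_analysis(i0, i45, i90, i135)
```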


What is the best part of it? It doesn't matter if you need a color or monochrome version. There are at least 5 applications where you want to use it:

  • Remove reflection -> multi-plane surfaces or bruise/defect detection
  • Visual inspection -> detect fine scratches or dust
  • Contrast improvement -> recognize similar objects or colors
  • 3D/stress recognition -> quality analysis
  • People/vehicle detection -> using your phone while driving

Liquid lenses are very popular in smart sensor technology. When and why would you want to use one with an industrial camera?  


A liquid lens is a single optical element, like a traditional lens made from glass. However, it also includes a cable to control the focal length, and it contains a sealed cell with water and oil inside. The technology uses an electrowetting process to achieve superior autofocus capabilities.

The benefits over traditional lenses are obvious. A liquid lens has no moving mechanical parts, which makes it highly resistant to shocks and vibrations. It is a perfect fit for applications where you need to observe or inspect objects of different sizes and/or at different working distances and you need to react very quickly. One liquid lens can do the work of multiple imaging systems.

Connecting a liquid lens requires an RS232 port in the camera plus DC power from 5 to 24 volts. An intelligent industrial camera can control the lens directly, and the lens uses the camera's power supply.
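If your camera does not drive the lens for you, the control loop can be as simple as writing commands to a serial port. The sketch below uses pyserial; the command format, baud rate, and diopter values are made-up placeholders, so substitute the protocol from your lens vendor's documentation.

```python
import serial  # pyserial

# Hypothetical setup -- port name and baud rate depend on your hardware.
port = serial.Serial("/dev/ttyUSB0", baudrate=38400, timeout=1)

def set_focal_power(diopters: float) -> None:
    """Send an illustrative, made-up ASCII command to the lens driver."""
    command = f"FP {diopters:.2f}\r\n".encode("ascii")
    port.write(command)

# Refocus between two working distances in software, with no moving parts:
set_focal_power(5.0)   # near object
set_focal_power(0.5)   # far object
```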


Reduce Packaging Downtime with Machine Vision

Packaging encompasses many different industries and typically has several stages in its process. Each industry uses packaging to accomplish specific tasks, well beyond just acting as a container for a product. The pharmaceutical industry, for example, typically uses its packaging as a means of dispensing as well as containing. The food and beverage industry uses packaging as a means of preventing contamination and creating differentiation from similar products. Consumer goods typically require unique product containment methods and have a need for "eye-catching" differentiation.

The packaging process typically has several stages. For example, you have primary packaging where the product is first placed in a package, whether that is form-fill-seal bagging or bottle fill and capping. Then secondary packaging that the consumer may see on the shelf, like cereal boxes or display containers, and finally tertiary packaging or transport packaging where the primary or secondary packaging is put into shipping form. Each of these stages require verification or inspection to ensure the process is running properly, and products are properly packaged.


Discrete vs. Vision-Based Error Proofing

With the use of machine vision technology, greater flexibility and more reliable operation of the packaging process can be achieved. Typically, in the past and still today, discrete sensors have been used to look for errors and manage product change-over detection. But these simple discrete sensing solutions come with limitations in flexibility, time-consuming fixture change-overs, and more potential for errors, costing thousands of dollars in lost product and production time. This can translate to more expensive and less competitively priced products on the store shelves.

There are two ways implementing machine vision can benefit the scheduled line time. The first is reducing planned downtime by reducing product change-over and fixturing change time. The other is decreasing unplanned downtime by catching errors right away and dynamically rejecting them, or by bringing attention to line issues requiring correction and preventing waste. The greatest benefit vision can have for production line time is in reducing planned downtime for things like product changeovers. This is a repeatable benefit that can dramatically reduce operating costs and increase planned runtime. The opportunities for vision to reduce unplanned downtime include eliminating line jams due to incorrectly fed packaging materials, misaligned packages, or undetected open flaps on cartons. Others include improperly capped bottles causing jams or spills, and improper adjustments or low ink causing illegible labeling and barcodes.

Cost and reliability of any technology that improves the packaging process should always be proportional to the benefit it provides. Vision technologies today, like smart cameras, offer the advantages of lower costs and simpler operation, especially compared to the older, more expensive and typically purpose-built vision system counterparts. These new vision technologies can also replace entire sensor arrays, and, in many cases, most of the fixturing at or even below the same costs, while providing significantly greater flexibility. They can greatly reduce or eliminate manual labor costs for inspection and enable automated changeovers. This reduces planned and unplanned downtime, providing longer actual runtime production with less waste during scheduled operation for greater product throughput.

Solve Today’s Packaging Challenges

Using machine vision in any stage of the packaging process can provide the flexibility to dramatically reduce planned downtime through a repeatable decrease in product changeover time. It also provides reliable and flexible error proofing that can significantly reduce unplanned downtime and waste, for example through in-line detection and rejection to eliminate jams and prevent product loss. This technology can also help reduce or eliminate product or shipment rejection by customers at delivery. In today's competitive market, with constant pressure to reduce operating costs, increase quality, and minimize waste, look at your process today and see if machine vision can make that difference for your packaging process.

Beyond the Human Eye

Have you ever had to squint, strain, adjust your glasses, or just ask someone with better vision to help read something for you? Now imagine having to adjust your eyesight 10 times a second. This is the power of machine vision. It can adjust, illuminate, filter, focus, read, and relay information that our eyes struggle with. Although the technology is 30 years old, machine vision is still in its early stages of adoption within the industrial space. In the past, machine vision was "nice to have" but not really a "need to have" technology because of costs and the technology still not being refined. As traceability, human error proofing, and advanced applications grow more common, machine vision has found its rhythm within factory automation. It has evolved into a robust technology eager to solve advanced applications.

Take, for example, the accurate reading, validation, and logging of a date located on the concave bottom of an aluminum can. Sometimes nearly impossible to see with the human eye without some straining, it is completely necessary to ensure it is there in order to sell the product. What would be your solution to ensuring the date stamp is there? Having the employee with the best eyes validate each can off the line? Using more ink and taking longer to print a larger code? Maybe adding a step by putting a black-on-white contrasting sticker on the bottom that could fall off? All of these would work, but at what cost? A better solution is using a device easily capable of reading several cans a second, even on a shiny, poorly angled surface, and saving a ton of unnecessary time and steps.

Machine vision is not magic; it is science. By combining high-end image sensors, advanced algorithms, and trained vision specialists, an application like our aluminum can example can be solved in minutes and run forever, all while saving you time and money. In Figure 1 you can see the can's code is lightly printed and washed out by lighting due to hotspots from the angle of the can. In Figure 2 we have filtered out some of the glare, better defined the date through software, and validated that the date is printed and correct.

Take a moment to imagine all the possibilities machine vision can open for your production process and the pain points it can alleviate. The technology is ready, are you?

Figure 1
Figure 2

What Machine Vision Tool is Right for Your Application?

Machine vision is an established term in factory automation, but selecting the most efficient and cost-effective vision product for your project or application can be tricky.

We can look at machine vision from many angles; for example, market segment, application, or image processing each deliver a different perspective. In this article I will focus on the "sensing element" itself, which scans your application.

The sensing element is a product which observes the application, analyzes it, and forwards an evaluation. The PC part of machine vision can be embedded with the imager or separate, as with a controller. We could take many different approaches, but let's look at projects according to the complexity of the application. The basic machine vision hardware comparison is:

  1. smart sensors
  2. smart cameras
  3. vision systems

Each of these products is used in a different way and fits different applications, but what do they all have in common? They must have components like an imager, lens, lighting, software, processor, and output hardware. All major manufacturing companies, regardless of their focus or market segment, use these products, but for what purpose and under what circumstances?

Smart Sensors

Smart sensors are dedicated to basic machine vision applications. There are hundreds of different types on the market, and they must quickly provide standard machine vision performance. Don't get me wrong, this is not necessarily a negative. These sensors are used for simple applications. You do not want to wait seconds to detect a QR code; you need a response time in milliseconds. Smart sensors typically include basic functions like:

  • data matrix, barcode and 2D code reading
  • presence of the object
  • shape, color, thickness, distance

They are typically used in single-purpose processes, and you cannot combine all the features.

Smart Cameras

Smart cameras are used in more complex projects. They provide all the functions of smart sensors, but add more complex capabilities like:

  • find and check object
  • blob detection
  • edge detection
  • metrology
  • robot navigation
  • sorting
  • pattern recognition
  • complex optical character recognition

Due to their complexity, you can find models with higher resolution; however, it is not a requirement. Smart cameras can combine more programs and run several functions in parallel. Image processing is more sophisticated, and limits may occur in processing speed because of the embedded PC.

Vision Systems

Typically, machine vision systems are used in applications where a smart camera is not enough.

A vision system consists of industrial cameras, a controller, and separate lighting and lens systems, so it is important to have knowledge of the different types of lighting and lenses. Industrial cameras provide resolutions from VGA up to 30 megapixels and are easily connected to the controller.

Vision systems are highly flexible. They provide all the functions of smart sensors and smart cameras, bringing complexity as well as flexibility. With a vision system, you are not limited by resolution or speed. Thanks to the controller, you have dedicated, incomparable processing power which accelerates processing many times over.

And the most important information comes at the end: how does the pricing look?

You can be sure that a smart sensor is the least expensive solution. Basic pricing is in the range of $500-$1,500. Smart cameras can cost $2,000-$5,000, while a vision system would start closer to $6,000. It may look like an easy calculation, but you need to take into consideration the complexity of your project to determine which is best for you.

Smart sensor (cost: $)

Pros:
    • Easy integration
    • Simple configuration
    • Included lighting and lenses

Cons:
    • Limited functions
    • Closed SW
    • Limited programs/memory

Smart camera (cost: $$)

Pros:
    • Combines more programs together
    • More available functions

Cons:
    • Limited resolution
    • Slower speed due to embedded PC

Vision system (cost: $$$)

Pros:
    • Connects more cameras (up to 8)
    • Open SW
    • Different resolution options

Cons:
    • Requires a skilled machine vision specialist
    • Requires knowledge of lighting and lenses
    • Increased integration time


Top 5 Insights from 2019

With a new year comes new innovation and insights. Before we jump into new topics for 2020, let’s not forget some of the hottest topics from last year. Below are the five most popular blogs from our site in 2019.

1. How to Select the Best Lighting Techniques for Your Machine Vision Application

The key to deploying a robust machine vision application in a factory automation setting is ensuring that you create the necessary environment for a stable image.  The three areas you must focus on to ensure image stability are: lighting, lensing and material handling.  For this blog, I will focus on the seven main lighting techniques that are used in machine vision applications.

READ MORE>>

2. M12 Connector Coding

New automation products hit the market every day and each device requires the correct cable to operate. Even in standard cable sizes, there are a variety of connector types that correspond with different applications.

READ MORE>>

3. When to use optical filtering in a machine vision application

Industrial image processing is essentially a requirement in modern manufacturing. Vision solutions can deliver visual quality control, identification and positioning. While vision systems have gotten easier to install and use, there isn't a one-size-fits-all solution. Knowing how and when you should use optical filtering in a machine vision application is a vital part of making sure your system delivers everything you need.

READ MORE>>

4. The Difference Between Intrinsically Safe and Explosion Proof

The difference between a product being 'explosion proof' and 'intrinsically safe' can be confusing, but it is vital to select the proper one for your application. Both approvals are meant to prevent a potential electrical equipment malfunction from initiating an explosion or ignition through gases that may be present in the surrounding area. This is accomplished in both cases by keeping the potential energy level below what is necessary to start the ignition process in an open atmosphere.

READ MORE>>

5. Smart choices deliver leaner processes in Packaging, Food and Beverage industry

In all industries, there is a need for more flexible and individualized production as well as increased transparency and documentable processes. Overall equipment efficiency, zero downtime and the demand for shorter production runs have created the need for smart machines and ultimately the smart factory. Now more than ever, this is important in the Packaging, Food and Beverage (PFB) industry to ensure that the products and processes are clean, safe and efficient.

READ MORE>>

We appreciate your dedication to Automation Insights in 2019 and look forward to growth and innovation in 2020!


Traceability in Manufacturing – More than just RFID and Barcode

Traceability is a term that is commonly used in most plants today. Whether it is being used to describe tracking received and shipped goods, tracking valuable assets down to their exact location, or tracking an item through production as it is being built, traceability is usually associated with only two technologies — RFID and/or barcode. While these two technologies are critical in establishing a framework for traceability within the plant, there are other technologies that can help tell the rest of the story.

Utilizing vision along with a data collection technology adds another dimension to traceability by providing physical evidence in the form of an image. While vision cameras have been widely used in manufacturing for a long time, most cameras operate outside of the traceability system: the vision system and the tracking system often operate independently. While they both end up sending data to the same place, that data must be transported and processed separately, which causes a major increase in network traffic.


Current vision technology allows images to be "stamped" with the information from the barcode or RFID tag. The image becomes redundant traceability, providing visual proof that everything happened correctly in the build process. In addition, instead of sending image files over the production network, they are sent through a separate channel to a server that contains all the process data from the tag and associates the images with it. This frees up the production network and provides visual proof that the finished product is what we wanted it to be.
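As a rough illustration of the idea, the Python sketch below stores an image and its tag data under the same ID on a server share. The paths and record layout are assumptions for illustration, not a product API.

```python
import json
from pathlib import Path

def archive_inspection(image_bytes: bytes, tag_id: str, process_data: dict,
                       root: Path = Path("/srv/traceability")) -> None:
    """Illustrative sketch: store the image and the RFID/barcode process
    data under the same ID, so the image provides visual proof for the
    tracking record."""
    record_dir = root / tag_id
    record_dir.mkdir(parents=True, exist_ok=True)
    (record_dir / "image.jpg").write_bytes(image_bytes)
    (record_dir / "process.json").write_text(json.dumps(process_data))

# Hypothetical usage with a tag ID read at the inspection station:
archive_inspection(b"...jpeg bytes...", "PART-000451",
                   {"station": "assembly-3", "result": "OK"})
```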

Used separately, the three technologies mentioned above provide actionable data which allows manufacturers to make important decisions. Used together, they tell a complete story and provide visual evidence of every step along the way. This allows manufacturers to make more informed decisions based on the whole story, not just part of it.

How to Select the Best Lighting Techniques for Your Machine Vision Application

The key to deploying a robust machine vision application in a factory automation setting is ensuring that you create the necessary environment for a stable image.  The three areas you must focus on to ensure image stability are: lighting, lensing and material handling.  For this blog, I will focus on the seven main lighting techniques that are used in machine vision applications.

On-Axis Ring Lighting

On-axis ring lighting is the most common type of lighting because in many cases it is integrated on the camera and available as one part number. When using this type of lighting you almost always want to be a few degrees off perpendicular (Image 1A).  If you are perpendicular to the object you will get hot spots in the image (Image 1B), which is not desirable. When the camera with its ring light is tilted slightly off perpendicular you achieve the desired image (Image 1C).

Off-Axis Bright Field Lighting

Off-axis bright field lighting works by having a separate LED source mounted at about 15 degrees off perpendicular and having the camera mounted perpendicular to the surface (Image 2A). This lighting technique works best on mostly flat surfaces. The main surface or field will be bright, and the holes or indentations will be dark (Image 2B).

Dark Field Lighting

Dark field lighting must be positioned very close to the part, usually within an inch. The mounting angle of the dark field LEDs needs to be at least 45 degrees off perpendicular to create the desired effect (Image 3A).  In short, it has the opposite effect of bright field lighting: the surface or field is dark, and the indentations or bumps appear much brighter (Image 3B).

Back Lighting

Back lighting works by having the camera pointed directly at the back light in a perpendicular mount.  The object you are inspecting is positioned in between the camera and the back light (Image 4A).  This lighting technique is the most robust that you can use because it creates a black target on a white background (Image 4B).

Diffused Dome Lighting

Diffused dome lighting, aka the "salad bowl" light, works by having a hole at the top of the dome where the camera is mounted, with the LEDs mounted at the rim of the dome pointing straight up. The light reflects off the curved inner surface of the dome, creating very uniform illumination (Image 5A).  Diffused dome lighting is used when the object you are inspecting is curved or non-uniform (Image 5B). After applying this lighting technique to an uneven surface or texture, hotspots and other sharp details are deemphasized, and it creates a sort of matte finish in the image (Image 5C).

Diffused On-Axis Lighting

Diffused on-axis lighting, or DOAL, works by having an LED light source pointed at a beam splitter, so the reflected light is parallel with the direction in which the camera is mounted (Image 6A).  DOAL lighting should only be used on flat surfaces where you are trying to diminish very shiny parts of the surface to create a uniform image.  Applications like DVD, CD, or silicon wafer inspection are some of the most common uses for this type of lighting.

Image 6A


Structured Laser Line Lighting

Structured laser line lighting works by projecting a laser line onto a three-dimensional object (Image 7A), resulting in an image that gives you information about the height of the object.  Depending on the mounting angle of the camera and laser line transmitter, the resulting laser line shift will be larger or smaller as you change the angle of the devices (Image 7B).  When there is no object, the laser line will be flat (Image 7C).

Real Life Applications 

The images below (Image 8A and Image 8B) were used for an application that requires the pins of a connector to be counted. As you can see, the bright field lighting on the left does not produce a clear image, but the dark field lighting on the right does.

This next example (Image 9A and Image 9B) was for an application that requires a barcode to be read through a cellophane wrapper.  The unclear image (Image 9A) was acquired by using an on-axis ring light, while the use of dome lighting (Image 9B) resulted in a clear, easy-to-read image of the barcode.

This example (Image 10A), (Image 10B) and (Image 10C) highlights different lighting techniques on the same object. In the (Image 10A) image, backlighting is being used to measure the smaller hole diameter.  In image (Image 10B) dome lighting is being used for inspecting the taper of the upper hole in reference to the lower hole.  In (Image 10C) dark field lighting is being used to do optical character recognition “OCR” on the object.  Each of these could be viewed as a positive or negative depending on what you are trying to accomplish.