Picking Solutions: How Complex Must Your System Be?

Bin-picking, random picking, pick and place, pick and drop, palletization, depalletization—these are all part of the same project. You want a fully automated process that grabs the desired sample from one position and moves it somewhere else. Before you choose the right solution for your project, you should think about how the objects are arranged. There are three picking solutions: structured, semi-structured, and random.

As you can imagine, the basic differences between these solutions are in their complexity and their approach. The distribution and arrangement of the samples to be picked will set the requirements for a solution. Let’s have a look at the options:

Structured picking

From a technical point of view, this is the easiest type of picking application. Samples are well organized and very often in a single layer. Arranging the pieces in a highly organized way requires high-level preparation of the samples and more storage space to hold the pieces individually. Because the samples are in a single layer or are layered at a defined height, a traditional 2-dimensional camera is more than sufficient. There are even cases where the vision system isn’t necessary at all and can be replaced by a smart sensor or another type of sensor. Typical robot systems use SCARA or Delta models, which ensure maximum speed and a short cycle time.

Semi-structured picking

Semi-structured bin picking requires only some predictability in sample placement, so greater flexibility in the robotic system is necessary. A six-axis robot is used in most cases, and the demands on its gripper are more complex, though this depends on the gripping requirements of the samples themselves. A classic 2D area scan camera is rarely sufficient; a 3D camera is usually required instead. Many picking applications also require a vision inspection step, which burdens the system and slows down the entire cycle time.

Random picking

Samples are randomly loaded in a carrier or pallet. On the one hand, this requires minimal preparation of the samples for picking; on the other hand, it significantly increases the demands on the process and makes a 3D vision system a requirement. You also need to account for frequent collisions between the selected samples. This is a factor not only when choosing the right gripper but also when designing the approach of the whole picking process.

Compared to structured picking, the cycle time is extended due to scanning evaluation, robot trajectory, and mounting accuracy. Some applications require the deployment of two picking stations to meet the required cycle time. It is often necessary to limit the gripping points used by the robot, which increases the demands on 3D image quality, grippers, and robot trajectory planning, and can also require an intermediate step to place the sample in the exact position needed for gripping.

In the end, the complexity of the picking solution is set primarily by the way the samples are arranged. The less structured their arrangement, the more complicated the system must be to meet the project’s demands. By considering how samples are organized before they are picked, as well as the picking process, you can design an overall process that meets your requirements the best.
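If it helps to codify the guidance above, here is a minimal sketch in Python (the function and category names are my own, not any vendor's API) that maps the sample arrangement to the typical hardware described in this article:

```python
from dataclasses import dataclass

@dataclass
class PickingSetup:
    camera: str
    robot: str
    notes: str

def suggest_setup(arrangement: str) -> PickingSetup:
    """Map the sample arrangement to the typical hardware discussed above.

    'arrangement' is one of: 'structured', 'semi-structured', 'random'.
    """
    if arrangement == "structured":
        return PickingSetup(
            camera="2D camera or smart sensor",
            robot="SCARA or Delta for maximum speed and short cycle time",
            notes="Samples in a single layer; high-level sample preparation needed.",
        )
    if arrangement == "semi-structured":
        return PickingSetup(
            camera="3D camera (2D rarely sufficient)",
            robot="Six-axis robot with a more complex gripper",
            notes="A vision inspection step may extend the cycle time.",
        )
    if arrangement == "random":
        return PickingSetup(
            camera="3D vision system (required)",
            robot="Six-axis robot; possibly two picking stations",
            notes="Watch for collisions; cycle time extended by scanning and trajectory planning.",
        )
    raise ValueError(f"Unknown arrangement: {arrangement}")

print(suggest_setup("random").camera)  # -> 3D vision system (required)
```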

Machine Vision: 5 Simple Steps to Choose the Right Camera

The machine vision and industrial camera market offers thousands of models with different resolutions, sizes, speeds, color options, interfaces, prices, etc. So, how do you choose? Let's go through 5 simple steps that will ensure easy selection of the right camera for your application.

1.  Defined task: color or monochrome camera  

2.  Amount of information: minimum number of pixels per object detail 

3.  Sensor resolution: formula for calculating the image sensor 

4.  Shutter technology: moving or static object 

5.  Interfaces and camera selector: let's pick the right model 

STEP 1 – Defined task  

Always start with the size of the scanned object (X, Y) and determine the smallest detail (d) that you want to distinguish with the camera.

For an easier explanation, let's use a measurement task as the example; the same basic approach can be applied to any other application.

In this task, the distance (D) between the centers of two holes must be determined with measurement accuracy (d). Using these values, we then derive the parameters for selecting the right image sensor and camera.

Example:
Distance (D) between 2 points with measuring accuracy (d) of 0.05 mm. Object size X = 48 mm (monochrome sensor, because color is not relevant here)

Note: Monochrome or color?
Color sensors use a Bayer color filter, which allows only one basic color to reach each pixel. The missing colors are determined by interpolating the neighboring pixels. Monochrome sensors are twice as light sensitive as color sensors and deliver a sharper image by acquiring more detail within the same number of pixels. For this reason, monochrome sensors are recommended if no color information is needed.
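To illustrate the Bayer note, here is a deliberately simplified sketch in Python/NumPy (an RGGB pattern and crude block-based interpolation, not a production demosaicing algorithm) showing that each pixel of a color sensor measures only one channel and the rest must be estimated from its neighbors:

```python
import numpy as np

def demosaic_rggb_simple(raw: np.ndarray) -> np.ndarray:
    """Very simplified demosaic of an RGGB Bayer mosaic.

    Each 2x2 block contains one R, two G, and one B sample; the missing
    channels of every pixel in the block are filled from those samples.
    Real cameras use more sophisticated interpolation, but the information
    loss per pixel is the same.
    """
    h, w = raw.shape
    rgb = np.zeros((h, w, 3), dtype=raw.dtype)
    for y in range(0, h, 2):
        for x in range(0, w, 2):
            r = raw[y, x]                              # top-left: red
            g = (raw[y, x + 1] + raw[y + 1, x]) / 2    # the two greens
            b = raw[y + 1, x + 1]                      # bottom-right: blue
            rgb[y:y + 2, x:x + 2] = (r, g, b)
    return rgb

# A 4x4 mosaic: every output pixel now carries interpolated values
mosaic = np.arange(16, dtype=float).reshape(4, 4)
print(demosaic_rggb_simple(mosaic)[0, 0])
```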

STEP 2 – Amount of information

Each type of application needs a different amount of information to solve the task. This is defined by the minimum number of pixels per object detail. Let's again use the monochrome option.

Minimum number of pixels per object detail:

  • Object detail measuring / detection: 3
  • Barcode line width: 2
  • Datamatrix code module width: 4
  • OCR character height: 16

Example:
Measuring needs 3 pixels for the necessary accuracy (object detail size d), so the required accuracy (d), which is 0.05 mm in this example, must be imaged onto 3 pixels.

Note:
Each characteristic or application type presupposes a minimum number of pixels. This avoids the loss of information through sampling blur.

STEP 3 – Sensor resolution

We have already defined the object size as well as the required accuracy. As the next step, we determine the resolution of the camera. A simple formula calculates the required image sensor resolution.

S = (N × O) / d = (minimum number of pixels per object detail × object size) / object detail size

Object size (O) can be specified horizontally as well as vertically. Some sensors are square, which eliminates this problem 😊

Example:
S = (3 x 48 mm) / 0.05 mm = 2880 pixels

Looking at the available image sensors, the closest match is a model with a resolution of 3092 x 2080, i.e. a 6.4-megapixel image sensor.

Note:
Pay attention to the format of the sensor.

For a correct calculation, it is necessary to check the resolution not only on the horizontal but also on the vertical axis.
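Putting Steps 2 and 3 together, here is a minimal sketch of the calculation in Python (the dictionary keys and the 36 mm vertical object size are illustrative assumptions; the pixel counts and the 3092 x 2080 sensor come from the text above):

```python
# Minimum pixels per object detail, as listed in Step 2
MIN_PIXELS = {
    "measuring": 3,
    "barcode_line": 2,
    "datamatrix_module": 4,
    "ocr_character_height": 16,
}

def required_resolution(task: str, object_size_mm: float, detail_size_mm: float) -> float:
    """S = (N x O) / d from Step 3."""
    n = MIN_PIXELS[task]
    return n * object_size_mm / detail_size_mm

# Example from the text: O = 48 mm, d = 0.05 mm, measuring task
s_horizontal = required_resolution("measuring", 48, 0.05)
print(s_horizontal)  # 2880.0 pixels -> the 3092 x 2080 sensor covers this horizontally

# Remember the note above: check the vertical axis too.
# 36 mm is an assumed vertical object size for illustration only.
s_vertical = required_resolution("measuring", 36, 0.05)
print(s_vertical <= 2080)  # False -> this sensor would not cover 36 mm vertically at d = 0.05 mm
```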

 

STEP 4 – Shutter technology

Global shutter versus rolling shutter.

These technologies are standard in machine vision, and you can find hundreds of cameras with each.

Rolling shutter: exposes the scene line by line. This procedure results in a time delay for each acquired line, so moving objects appear blurred or distorted in the resulting image because of the generated “object time offset” (compare the images).

Pros:

    • More light sensitive
    • Less expensive
    • Smaller pixel size provides higher resolution with the same image format.

Cons:

    • Image distortion occurs on moving objects

Global shutter: used to get distortion-free images by exposing all pixels at the same time.

Pros:

    • Great for fast processes
    • Sharp images with no blur on moving objects.

Cons:

    • More expensive
    • Larger image format

Note:
The newest rolling shutter sensors have a feature called global reset mode, which starts the exposure of all rows simultaneously (the reset of each row is also released simultaneously). However, the lines are still read out as with a rolling shutter: line by line.

This means the bottom lines of the sensor are exposed to light longer! For this reason, this mode only makes sense if there is no extraneous light and the flash duration is shorter than or equal to the exposure time.
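To get a feel for when rolling shutter distortion becomes a problem, a rough back-of-the-envelope sketch can help (the conveyor speed, optical scale, and readout time below are made-up example values, not figures for any particular sensor):

```python
def rolling_shutter_skew_px(object_speed_mm_s: float,
                            mm_per_pixel: float,
                            frame_readout_time_s: float) -> float:
    """Approximate skew (in pixels) of a moving object caused by line-by-line
    readout: the last line is read 'frame_readout_time_s' after the first,
    so the object has moved in the meantime."""
    speed_px_s = object_speed_mm_s / mm_per_pixel
    return speed_px_s * frame_readout_time_s

# Example values (assumptions for illustration only):
# conveyor at 200 mm/s, optics resolving 0.05 mm per pixel, 20 ms readout
print(rolling_shutter_skew_px(200, 0.05, 0.020))  # 80 pixels of skew
```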

STEP 5 – Interfaces and camera selector

Final step is here:

You must consider the achievable speed (bandwidth) as well as the supported cable length of each camera interface technology.
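A quick way to pre-filter the interfaces is to compare the camera's raw data rate with the nominal interface bandwidth. The sketch below uses the widely published nominal figures (USB 2.0 ≈ 480 Mbit/s, GigE ≈ 1 Gbit/s, USB 3.0 ≈ 5 Gbit/s); real, usable throughput is lower because of protocol overhead, so treat this only as a first check:

```python
NOMINAL_BANDWIDTH_MBIT_S = {
    "USB2": 480,
    "GigE Vision": 1_000,
    "USB3 Vision": 5_000,
}

def required_bandwidth_mbit_s(width_px: int, height_px: int,
                              fps: float, bits_per_pixel: int = 8) -> float:
    """Raw image data rate, ignoring protocol overhead and compression."""
    return width_px * height_px * bits_per_pixel * fps / 1e6

# Example: the 3092 x 2080 monochrome sensor from Step 3 at an assumed 25 fps
rate = required_bandwidth_mbit_s(3092, 2080, 25)
print(f"{rate:.0f} Mbit/s")
for name, limit in NOMINAL_BANDWIDTH_MBIT_S.items():
    print(name, "OK" if rate < limit else "too slow")
```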

USB2
Small, handy, and cost-effective, USB 2.0 industrial cameras have become integral to applications in medicine and microscopy. They are available in a wide range of variants: with or without housing, as board-level or single-board cameras, and with or without digital I/Os.

USB3/GigE Vision
Without standards, every manufacturer does their own thing, and many of the advantages customers have learned to love with the GigE Vision standard would be lost. Like GigE Vision, USB3 Vision also defines:

    • a transport layer, which controls the detection of a device (Device Detection)
    • the configuration (Register Access)
    • the data streaming (Streaming Data)
    • the handling of events (Event Handling)
    • an established interface to GenICam

GenICam abstracts access to the camera features for the user. The features are standardized (name and behavior) by the Standard Feature Naming Convention (SFNC). In addition, vendors can implement specific features beyond the SFNC to differentiate themselves (quality of implementation). In contrast to GigE Vision, the mechanics (e.g. lockable cable connectors) are also part of the USB3 Vision standard, which leads to a more robust interface.

I believe these five points will help you choose the most suitable camera. Are you still unsure? Do not hesitate to contact us, or contact me directly: I will be happy to discuss your project, needs, or any questions.

 

 

Traceability in Manufacturing – More than just RFID and Barcode

Traceability is a term that is commonly used in most plants today. Whether it is being used to describe tracking received and shipped goods, tracking valuable assets down to their exact location, or tracking an item through production as it is being built, traceability is usually associated with only two technologies — RFID and/or barcode. While these two technologies are critical in establishing a framework for traceability within the plant, there are other technologies that can help tell the rest of the story.

Utilizing vision along with a data collection technology adds another dimension to traceability by providing physical evidence in the form of an image. While vision cameras have been widely used in manufacturing for a long time, most cameras operate outside of the traceability system: the vision system and the tracking system often run independently. Although both end up sending data to the same place, that data must be transported and processed separately, which causes a major increase in network traffic.


Current vision technology allows images to be “stamped” with the information from the barcode or RFID tag. The image then becomes a redundant layer of traceability, providing visual proof that everything happened correctly in the build process. In addition, instead of sending image files over the production network, they are sent through a separate channel to a server that holds all the process data from the tag and associates the images with it. This frees up the production network and provides visual proof that the finished product is what we wanted it to be.
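As a concrete illustration of “stamping” an image with the tracking data, here is a minimal sketch in Python (the folder layout, field names, and the JSON sidecar are my own assumptions for illustration, not any vendor's API):

```python
import json
from datetime import datetime, timezone
from pathlib import Path

def store_stamped_image(image_bytes: bytes, tag_id: str, station: str,
                        archive_root: Path) -> Path:
    """Save the inspection image together with the RFID/barcode data that
    identifies the part, so the image can later be looked up by tag ID."""
    timestamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    folder = archive_root / tag_id
    folder.mkdir(parents=True, exist_ok=True)

    image_path = folder / f"{timestamp}_{station}.png"
    image_path.write_bytes(image_bytes)

    # Sidecar record linking the image to the traceability data
    record = {"tag_id": tag_id, "station": station, "timestamp": timestamp,
              "image": image_path.name}
    (folder / f"{timestamp}_{station}.json").write_text(json.dumps(record))
    return image_path

# Hypothetical usage:
# store_stamped_image(camera_frame, tag_id="A1234", station="final_assembly",
#                     archive_root=Path("/mnt/trace_archive"))
```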

Used separately, the three technologies mentioned above provide actionable data which allows manufacturers to make important decisions.  Used together, they tell a complete story and provide visual evidence of every step along the way. This allows manufacturers to make more informed decisions based on the whole story not just part of it.

How to Select the Best Lighting Techniques for Your Machine Vision Application

The key to deploying a robust machine vision application in a factory automation setting is ensuring that you create the necessary environment for a stable image.  The three areas you must focus on to ensure image stability are: lighting, lensing and material handling.  For this blog, I will focus on the seven main lighting techniques that are used in machine vision applications.

On-Axis Ring Lighting

On-axis ring lighting is the most common type of lighting because in many cases it is integrated on the camera and available as one part number. When using this type of lighting you almost always want to be a few degrees off perpendicular (Image 1A).  If you are perpendicular to the object you will get hot spots in the image (Image 1B), which is not desirable. When the camera with its ring light is tilted slightly off perpendicular you achieve the desired image (Image 1C).

Off-Axis Bright Field Lighting

Off-axis bright field lighting works by mounting a separate LED source at about 15 degrees off perpendicular and mounting the camera perpendicular to the surface (Image 2A). This lighting technique works best on mostly flat surfaces. The main surface or field will be bright, and the holes or indentations will be dark (Image 2B).

Dark Field Lighting

Dark field lighting must be positioned very close to the part, usually within an inch. The dark field LEDs need to be mounted at an angle of at least 45 degrees to create the desired effect (Image 3A). In short, it has the opposite effect of bright field lighting: the surface or field is dark, and the indentations or bumps appear much brighter (Image 3B).

Back Lighting

Back lighting works by having the camera pointed directly at the back light in a perpendicular mount.  The object you are inspecting is positioned in between the camera and the back light (Image 4A).  This lighting technique is the most robust that you can use because it creates a black target on a white background (Image 4B).

Diffused Dome Lighting

Diffused dome lighting, aka the salad bowl light, works by mounting the camera over a hole at the top of the “salad bowl”, with the LEDs mounted at the rim and pointing straight up. The light reflects off the curved inner surface of the dome, creating very uniform illumination (Image 5A). Diffused dome lighting is used when the object you are inspecting is curved or non-uniform (Image 5B). When this lighting technique is applied to an uneven surface or texture, hotspots and other sharp details are de-emphasized, creating a sort of matte finish in the image (Image 5C).

Diffused On-Axis Lighting

Diffused on-axis lighting, or DOAL, works by pointing an LED light source at a beam splitter so that the reflected light travels parallel to the direction in which the camera is mounted (Image 6A). DOAL lighting should only be used on flat surfaces where you are trying to suppress reflections from very shiny areas to create a uniform image. Applications like DVD, CD, or silicon wafer inspection are some of the most common uses for this type of lighting.


Structured Laser Line Lighting

Structured laser line lighting works by projecting a laser line onto a three-dimensional object (Image 7A), resulting in an image that gives you information on the height of the object. The resulting laser line shift will be larger or smaller depending on the mounting angle of the camera and the laser line transmitter (Image 7B). When there is no object, the laser line appears flat (Image 7C).
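The height information comes from simple triangulation: the further the laser line shifts in the image, the taller the object at that point. Here is a minimal sketch of that relationship (the calibration values are placeholders you would determine for your own camera and laser geometry):

```python
import math

def height_from_line_shift(shift_px: float, mm_per_pixel: float,
                           laser_angle_deg: float) -> float:
    """Convert the observed shift of the laser line (in pixels) into object
    height, assuming the camera looks straight down and the laser is mounted
    at 'laser_angle_deg' from vertical."""
    shift_mm = shift_px * mm_per_pixel
    return shift_mm / math.tan(math.radians(laser_angle_deg))

# Placeholder calibration: 0.1 mm per pixel, laser mounted 30 degrees off vertical
print(height_from_line_shift(shift_px=25, mm_per_pixel=0.1, laser_angle_deg=30))
```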

Real Life Applications 

The images below, (Image 8A) and (Image 8B) were used for an application that requires the pins of a connector to be counted. As you can see, the bright field lighting on the left does not produce a clear image but the dark field lighting on the right does.

This next example, (Image 9A) and (Image 9B), comes from an application that requires a barcode to be read through a cellophane wrapper. The unclear image (Image 9A) was acquired using an on-axis ring light, while the use of dome lighting (Image 9B) resulted in a clear, easy-to-read image of the barcode.

This example, (Image 10A), (Image 10B), and (Image 10C), highlights different lighting techniques on the same object. In Image 10A, backlighting is used to measure the smaller hole diameter. In Image 10B, dome lighting is used to inspect the taper of the upper hole in reference to the lower hole. In Image 10C, dark field lighting is used to perform optical character recognition (OCR) on the object. Each of these could be viewed as a positive or a negative depending on what you are trying to accomplish.