Picking Solutions: How Complex Must Your System Be?

Bin-picking, random picking, pick and place, pick and drop, palletization, depalletization—these are all part of the same project. You want a fully automated process that grabs the desired sample from one position and moves it somewhere else. Before you choose the right solution for your project, you should think about how the objects are arranged. There are three picking solutions: structured, semi-structured, and random.

As you can imagine, the basic differences between these solutions are in their complexity and their approach. The distribution and arrangement of the samples to be picked will set the requirements for a solution. Let’s have a look at the options:

Structured picking

From a technical point of view, this is the easiest type of picking application. Samples are well organized and very often in a single layer. Arranging the pieces in a highly organized way requires high-level preparation of the samples and more storage space to hold the pieces individually. Because the samples are in a single layer or are layered at a defined height, a traditional 2-dimensional camera is more than sufficient. There are even cases where the vision system isn’t necessary at all and can be replaced by a smart sensor or another type of sensor. Typical robot systems use SCARA or Delta models, which ensure maximum speed and a short cycle time.

Semi-structured picking

Greater flexibility in robotization is necessary here, since semi-structured bin picking offers only some predictability in sample placement. A six-axis robot is used in most cases, and the demands on its grippers are more complex, although this depends on the gripping requirements of the samples themselves. A classic 2D area scan camera is rarely sufficient, and a 3D camera is required instead. Many picking applications also require a vision inspection step, which burdens the system and slows down the entire cycle time.

Random picking

Samples are randomly loaded in a carrier or pallet. On the one hand, this requires minimal preparation of samples for picking, but on the other hand, it significantly increases the demands on the process that will make a 3D vision system a requirement. You need to consider that there are very often collisions between selected samples. This is a factor not only when looking for the right gripper but also for the approach of the whole picking process.

Compared to structured picking, the cycle time is extended due to scanning evaluation, robot trajectory, and mounting accuracy. Some applications require the deployment of two picking stations to meet the required cycle time. It is often necessary to limit the gripping points used by the robot, which increases the demands on 3D image quality, grippers, and robot track guidance planning and can also require an intermediate step to place the sample in the exact position needed for gripping.

In the end, the complexity of the picking solution is set primarily by the way the samples are arranged. The less structured their arrangement, the more complicated the system must be to meet the project’s demands. By considering how samples are organized before they are picked, as well as the picking process, you can design an overall process that meets your requirements the best.

Add Depth to Your Processes With 3D Machine Vision

What comes to mind first when you think of 3D? Cheap red and blue glasses? Paying extra at a movie theater? Or maybe the awkward top screen on a Nintendo 3DS? Industrial machine vision and robot guidance likely don’t come to mind, but they should.

Advancements in 3D machine vision have taken the old method of 2D image processing and added literal depth. You become immersed in the application with a true definition of the target, far from what you get looking at a flat image.

See For Yourself

Let’s do an exercise: Close one eye and try to pick up an object on your desk by pinching it. Did you miss it on the first try? Did things look foreign or off? This is because your depth perception is skewed with only one vision source. It takes both eyes to paint an accurate picture of your surroundings.

Now, imagine what you can do with two cameras side by side looking at an application. This is 3D machine vision; this is human.

How 3D Saves the Day

Robot guidance. The goal of robotics is to emulate human movements while making work safer and more reliable. So, why not give robots the same vision we possess? When a robot is sent in to do a job, it needs to know the x, y, and z coordinates of its target to best control its approach and handle the item(s). 3D does this.

Part sorting. If you are anything like me, you have your favorite parts of Chex mix. Whether it’s the pretzels or the Chex pieces themselves, picking one out of the bowl takes coordination. Finding the right shape and the ideal place to grab it takes depth perception. You wouldn’t use a robot to sort your snacks, of course, but if you need to select specific parts in a bin of various shapes and sizes, 3D vision can give you the detail you need to select the right part every time.

Palletization and/or depalletization. Like in a game of Jenga, the careful and accurate stacking and removing of parts is paramount. Whether it’s for speed, quality, or damage control, palletization/depalletization of material needs 3D vision to position material accurately and efficiently.

I hope these 3D examples inspire you to seek more from your machine vision solution and look to the technology of the day to automate your processes. A picture is worth a thousand words; just imagine what a 3D image can tell you.

How to Choose the Best 4K Camera for Your Application

“I need a 4K resolution USB camera. What would you recommend?”

This is a common question that customers ask me; unfortunately, the answer is not simple.

First, a quick review on the criteria to be a 4K camera. The term “4K” comes from TV terminology and is derived from full HD resolution.

Full HD is 1920 x 1080 = 2,073,600 total pixels
4K is 3840 x 2160 = 8,294,400 total pixels.

This means the minimum camera resolution must be 8.3 Mpix. Meeting that pixel count alone does not guarantee that the camera truly reaches 4K resolution in both axes, but it is a useful baseline. For example, a camera with an IMX546 sensor has a resolution of 2856 x 2848 pixels. While the height of the sensor richly meets the conditions of 4K, the width does not. Even so, I will use this camera in our comparison because for certain types of projects (e.g., observing square objects), it is more efficient than a 10.7 Mpix camera with a resolution of 3856 x 2764 pixels.
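To make the 4K criterion concrete, here is a small sketch that checks a sensor’s resolution per axis and by total pixel count (the function name `meets_4k` and the dictionary output are my own, not from any standard):

```python
FULL_HD = (1920, 1080)   # 2,073,600 pixels
UHD_4K = (3840, 2160)    # 8,294,400 pixels

def meets_4k(width, height):
    """Check a sensor against the 4K (UHD) criterion, per axis and in total."""
    return {
        "width_ok": width >= UHD_4K[0],
        "height_ok": height >= UHD_4K[1],
        "total_ok": width * height >= UHD_4K[0] * UHD_4K[1],
    }

# IMX546 (2856 x 2848): height exceeds 2160, but width falls short of 3840
print(meets_4k(2856, 2848))  # {'width_ok': False, 'height_ok': True, 'total_ok': False}
```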

Of course, 4K resolution isn’t the only parameter to consider when you are choosing a camera. Shutter type, frame rate and sensor size are also incredibly important and dictated by your application. And, of course, you must factor price into your decision.

Basic comparison

Sensor    Mpixel  Shutter                         Size (in)  Width  Height  Framerate (fps)  Pricing
MT9J003   10.7    Rolling Shutter / Global Reset  1/2.35     3856   2764    7.3              $
IMX267    8.9     Global                          1          4112   2176    31.9             $$
IMX255    8.9     Global                          1          4112   2176    42.4             $$$
IMX226    12.4    Rolling Shutter / Global Reset  1/1.7      4064   3044    30.7             $
IMX546    8.1     Global                          2/3        2856   2848    46.7             $$$
IMX545    12.4    Global                          1/1.1      4128   3008    30.6             $$$$

Shutter
Rolling shutter and global shutter are the common shutter types in CMOS cameras. A rolling shutter sensor has a simpler design and can offer a smaller pixel size. That means you can use lower-cost lenses, but keep in mind that use with moving objects is limited. A workaround for moving objects is a rolling shutter with global reset functionality, which helps eliminate image distortion.

Frames Per Second
The newest sensors offer a higher frame rate than the USB interface can handle. Check with the manufacturer; not every camera can deliver the listed frame rate because of technical limitations in the camera itself.
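You can roughly estimate yourself whether a listed frame rate will fit through the interface. A minimal sketch, assuming 8-bit monochrome pixels; the usable payload rates for USB3 and GigE below are ballpark assumptions, not specifications, and real figures vary by host controller and camera:

```python
def required_bandwidth_mb_s(width, height, fps, bytes_per_pixel=1):
    """Raw data rate the interface must sustain, in MB/s (1 MB = 10**6 bytes)."""
    return width * height * bytes_per_pixel * fps / 1e6

# Rough usable payload rates (assumptions):
USB3_MB_S = 400  # ~400 MB/s out of 5 Gbit/s
GIGE_MB_S = 100  # ~100 MB/s out of 1 Gbit/s

rate = required_bandwidth_mb_s(4112, 2176, 42.4)  # IMX255 at full speed
print(f"{rate:.0f} MB/s, fits USB3: {rate <= USB3_MB_S}")
```

At roughly 379 MB/s, the IMX255 at full speed just fits a USB3 link, while a plain GigE link clearly cannot keep up at full resolution and frame rate.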

Sensor Size
Sensor size is a very important parameter. Other qualitative factors should also be considered, not only for the camera but also for the lens used.

Price
Global shutter image sensors are more expensive than rolling shutter ones. For this reason, the prices of global shutter cameras are higher than those of rolling shutter cameras. It is also no secret that the image sensor is the most expensive component, so it is understandable that customers very often base their decision on the sensor requirements.

Advanced comparison

Sensor    Pixel size (µm)  EMVA report  Dynamic range (dB)  SNR (dB)  Preprocessing features
MT9J003   1.67             link         56.0                37.2      *
IMX267    3.45             link         71.0                40.2      **
IMX255    3.45             link         71.1                40.2      ***
IMX226    1.85             link         69.2                40.3      **
IMX546    2.74             link         70.2                40.6      ****
IMX545    2.74             link         70.1                40.3      ****

There are many other advanced features you can also consider based on your project, external conditions, complexity of the scene and so on. These include:

Pixel Size
Sensor size from the basic comparison is in direct correlation with the size of the pixel because the size of the pixel multiplied by the width and height gives you the size of the sensor itself.
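The relationship described above can be sketched in a few lines. This is only an illustration; the pixel pitch and resolution come from the IMX546 row of the comparison table, and the helper name is mine:

```python
def sensor_dimensions_mm(pixel_size_um, width_px, height_px):
    """Active sensor area from pixel pitch and resolution."""
    w = pixel_size_um * width_px / 1000.0   # micrometres -> millimetres
    h = pixel_size_um * height_px / 1000.0
    diag = (w**2 + h**2) ** 0.5
    return w, h, diag

# IMX546: 2.74 um pixels, 2856 x 2848 -> roughly an 11 mm diagonal (2/3" class)
w, h, diag = sensor_dimensions_mm(2.74, 2856, 2848)
print(f"{w:.2f} x {h:.2f} mm, diagonal {diag:.2f} mm")
```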

EMVA Report
EMVA 1288 is a great document for comparing individual sensors and cameras. If you want the best possible image quality and functionality from the whole system, this comparison is an important component in deciding which image sensor will be in your chosen camera. EMVA 1288 is the standard for measurement and presentation of specifications for machine vision sensors and cameras. The standard covers properties like signal-to-noise ratio, dark current, quantum efficiency, etc.

Dynamic Range
Dynamic range is one of the basic features and part of EMVA 1288 report as well. It is expressed in decibels (dB). Dynamic range is the ratio between the maximum output signal level and the noise floor at minimum signal amplification. Simply, dynamic range indicates the ability of a camera to reproduce the brightest and darkest portions of an image.

SNR
Signal-to-noise ratio (SNR) is a linear ratio between the recorded signal and the total root mean square noise. SNR describes the quality of the data found in an image and indicates how precisely machine vision algorithms will be able to detect objects in it.
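EMVA 1288 reports typically express dynamic range and maximum SNR in decibels using 20·log10 of the linear ratio. A minimal sketch of that conversion; the full-well and read-noise numbers below are hypothetical illustrations, not taken from any datasheet:

```python
import math

def to_decibels(linear_ratio):
    """Convert a linear amplitude ratio (signal/noise) to dB: 20 * log10(ratio)."""
    return 20 * math.log10(linear_ratio)

def dynamic_range_db(full_well_e, noise_floor_e):
    """Dynamic range from saturation capacity and noise floor, both in electrons."""
    return to_decibels(full_well_e / noise_floor_e)

# Hypothetical EMVA-style numbers: 11000 e- full well, 3 e- noise floor
print(f"{dynamic_range_db(11000, 3):.1f} dB")  # roughly 71 dB, comparable to the table
```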

 

Preprocessing Features

Do you build a high-end product? Is speed important to you?
Then you need to rely on camera/image sensor features. Every image sensor update comes with more and more built-in features. For example:

  • Dual trigger – you set two different levels of exposure and gain, and each can be triggered separately.
  • Self-trigger – you set two AOIs; the first one triggers the image and the second detects differences in the AOI.
  • Short exposure modes – you can set as little as 2 µs between shutters.

Machine vision components continue to be improved upon and new features are added regularly. So, when you are selecting a camera for your application, first determine what features are required to meet your application needs. Filter to only the cameras that can meet those needs and use their additional features to determine what more you can do.

Ensure Food Safety with Machine Vision

Government agencies have put food manufacturers under a microscope to ensure they follow food safety standards and comply with regulations. When it comes to the health and safety of consumers, quality assurance is a top priority. Despite this, according to the World Health Organization, approximately 600 million people become ill each year after eating contaminated food, and 420,000 die.

Using manual human inspection for quality assurance checks in this industry can be detrimental to the company and its consumers due to human error, fatigue, and subjective opinions. Furthermore, foreign particles that should not be found in the product may be microscopic and invisible to the human eye. These defects can lead to illness, recalls, lawsuits, and a long-term negative perception of the brand itself. Packaging, food, and beverage manufacturers must recognize these potential risks and review the benefits of incorporating machine vision. Although machine vision implementation may sound like a costly investment, it is a small price to pay when compared to the potential damage of uncaught issues. Below I explore a few benefits that machine vision offers in the packaging, food, and beverage industries.

Safety
Consumers expect and rely on safe products from food manufacturers. Machine vision can see through packaging to determine the presence of foreign particles that should not be present, ensuring these products are removed from the production line. Machine vision is also capable of inspecting for cross-contamination, color correctness, ripeness, and even spoilage. For example, bruises on apples can be hard to spot for the untrained eye unless extremely pronounced. SWIR (shortwave infrared) illumination proves effective for the detection of defects and contamination. Subsurface bruising defects become much easier to detect due to the optimization of lighting and these defected products can be scrapped.

Uniformity of Containers
Brand recognition is huge for manufacturers in this industry. Products that have defects such as dents or uneven contents inside the container can greatly affect the public’s perception of the product and/or company. Machine vision can detect even the slightest deformity in a container and ensure it is removed from the line. It can also scan the inside of the container to ensure that the product is uniform for each batch. Vision systems have the ability to optimize lighting intensity, uniformity, and geometry to obtain images with good contrast and signal-to-noise ratio. The ability to alter lighting provides a much clearer image of the point of interest, which can allow you to see inside a container to determine if the fill level is correct for the specific product.

Packaging
Packaging is important because if the products shipped to the store are regularly defective, the store can choose to stop stocking that item, costing the manufacturer valuable business. The seal must last from production to arrival at the store to ensure that the product maintains its safe usability through its marked expiration date. In bottling applications, the conveyors move at high speeds, so the inspection process must quickly and correctly identify defects. A facility in Marseille, France was looking to inspect Heineken beer bottles as they passed through a bottling machine at a rate of 22 bottles/second (roughly 80,000 bottles/hour). Although this is on the faster end of the spectrum, many applications require high-speed quality checks that are impossible for a human operator. A machine vision system can be configured to handle these high-speed applications and taught to detect the specified defect.

Labels

It’s crucial for the labels to be printed correctly and placed on the correct product because of the food allergy threats that some consumers experience. Machine vision can also benefit this aspect of the production process as cameras can be taught to recognize the correct label and brand guidelines. Typically, these production lines move at speeds too fast for human inspection. An intuitive, easy to use, machine vision software package allows you to filter the labels, find the object using reference points and validate the text quickly and accurately.

These areas of the assembly process throughout packaging, food and beverage facilities should be considered for machine vision applications. Understanding what problems occur and the cost associated with them is helpful in justifying whether machine vision is right for you.

For more information on machine vision, visit https://www.balluff.com/local/us/products/product-overview/machine-vision-and-optical-identification/.

Document Product Quality and Eliminate Disputes with Machine Vision

“I caught a record-breaking walleye last weekend,” an excited Joe announced to his colleagues after returning from his annual fishing excursion to Canada.

“Record-breaking? Really? Prove it,” demanded his doubtful co-worker.

“Well, I left my cell phone in the cabin so it wouldn’t get wet on the boat, so I couldn’t take a picture, but I swear that big guy was the main course for dinner.”

“Okay, sure it was Joe.”

We have all been there: spotted a mountain lion, witnessed an amazing random human interaction, or maybe caught a glimpse of a shooting star. These are great stories, but they are so much more believable and memorable with a picture or video to back them up. Nowadays, we all carry a camera within arm’s reach. Capturing life events has never been easier or more common, so why not use cameras to document and record important events and stages within your manufacturing process?

As smart phones become more advanced and common, so does the technology and hardware for industrial cameras (i.e., machine vision). Machine vision can do so much more than pass/fail and measurement-type applications. Taking, storing, and relaying pictures along different stages of a production process could not only set you apart from the competition but also save you costly quality disputes after your product leaves your facility. A picture can tell a thousand words, so what do you want to tell the world? Here are just a couple of examples of how you can back up your brand with machine vision:

Package integrity: We have all seen the reduced rack at a grocery store where a can is dented or missing a label. If this was caused by a large-scale label application defect, someone is losing business. So, before everyone starts pointing fingers, the manufacturer could simply provide a saved image from their end-of-line vision system to prove the cans were labeled when shipped from their facility.

Assembly defects: When you are producing assembled parts for a larger manufacturer, the standards they set are what you live and die by.  If there is ever a dispute, having several saved images from either individual parts or an audit of them throughout the day could prove your final product met their specifications and could save your contract.

Barcode legibility and placement: Show your retail partners that your product’s bar code will not frustrate the cashier by having to overcome a poorly printed or placed barcode.  Share images with them to show an industrial camera easily reading the code along the packaging line ensuring a hassle-free checkout as well as a barcode grade to ensure their barcode requirements are being met.

In closing, pictures always help tell a story and make it more credible.  Ideally your customers will take your word for it, but when you catch the record-breaking walleye, you want to prove it.

Machine Vision: 5 Simple Steps to Choose the Right Camera

The machine vision and industrial camera market offers thousands of models with different resolutions, sizes, speeds, colors, interfaces, prices, etc. So, how do you choose? Let’s go through 5 simple steps that will ensure easy selection of the right camera for your application.

1.  Defined task: color or monochrome camera  

2.  Amount of information: minimum of pixels per object details 

3.  Sensor resolution: formula for calculating the image sensor 

4.  Shutter technology: moving or static object 

5.  Interfaces and camera selector: let’s pick the right model 

STEP 1 – Defined task  

It is always necessary to start with the size of the scanned object (X, Y), or you can determine the smallest possible value (d) that you want to distinguish with the camera.

For easier explanation, let’s choose the option of solving a measurement task. However, the same basic approach can be used for any other application.

In the task, the distance (D) between the centers of both holes is determined with the measurement accuracy (d). Using these values, we then determine the parameter for selecting the right image sensor and camera.

Example:
Distance (D) between 2 points with measuring accuracy (d) of 0.05 mm. Object size X = 48 mm (monochrome sensor, because color is not relevant here)

Note: Monochrome or color?
Color sensors use a Bayer color filter, which allows only one basic color to reach each pixel. The missing colors are determined using interpolation of the neighboring pixels. Monochrome sensors are twice as light sensitive as color sensors and lead to a sharper image by acquiring more details within the same number of pixels. For this reason, monochrome sensors are recommended if no color information is needed.

STEP 2 – Amount of information

Each type of application needs a different amount of information to be solved. This is differentiated by the minimum number of pixels. Let’s again use the monochrome option.

Minimum of pixels per object details

  • Object detail measuring / detection    3
  • Barcode line width                     2
  • Datamatrix code module width           4
  • OCR character height                   16

Example:
The measurement needs 3 pixels for the necessary accuracy (object detail size d). That is, the necessary accuracy (d), which is 0.05 mm in this example, is imaged onto 3 pixels.

Note:
Each characteristic or application type presupposes a minimum number of pixels. It avoids the loss of information through sampling blurs.

STEP 3 – Sensor resolution

We have already defined the object size as well as the resolution accuracy. As a next step, we are going to define the resolution of the camera. There is a simple formula to calculate the required image sensor.

S = (N x O) / d = (min. number of pixels per object detail x object size) / object detail size

Object size (O) can be described horizontally as well as vertically. Some sensors are square, which eliminates this problem 😊

Example:
S = (3 x 48 mm) / 0.05 mm = 2880 pixels

We looked at the available image sensors, and the closest is a model with a resolution of 3092 x 2080 => a 6.4 Mpixel image sensor.
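The STEP 3 formula and the worked example translate directly into code (a sketch; the function name is mine):

```python
def required_pixels(pixels_per_detail, object_size_mm, detail_size_mm):
    """S = (N x O) / d from STEP 3."""
    return pixels_per_detail * object_size_mm / detail_size_mm

# Worked example from the text: 3 px per detail, 48 mm object, 0.05 mm accuracy
s_horizontal = required_pixels(3, 48, 0.05)
print(s_horizontal)  # 2880.0

# Check the candidate sensor (3092 x 2080) against the horizontal requirement
print(3092 >= s_horizontal)  # True
```

Remember to run the same check for the vertical axis with the object’s vertical size.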

Note:
Pay attention to the format of the sensor.

For a correct calculation, it is necessary to check the resolution, not only in the horizontal but also in the vertical axis.

 

STEP 4 – Shutter technology

Global shutter versus rolling shutter.

These technologies are standard in machine vision and you are able to find hundreds of cameras with both.

Rolling shutter: exposes the scene line by line. This procedure results in a time delay for each acquired line. Thus, moving objects are displayed blurred in the resulting image through the generated “object time offset” (compare to the image).

Pros:

    • More light sensitive
    • Less expensive
    • Smaller pixel size provides higher resolution with the same image format.

Cons:

    • Image distortion occurs on moving objects

Global shutter: used to get distortion-free images by exposing all pixels at the same time.

Pros:

    • Great for fast processes
    • Sharp images with no blur on moving objects.

Cons:

    • More expensive
    • Larger image format

Note:
The newest rolling shutter sensors have a feature called global reset mode, which starts the exposure of all rows simultaneously; the reset of each row is also released simultaneously. However, the readout of the lines is the same as with a rolling shutter: line by line.

This means the bottom lines of the sensor will be exposed to light longer! For this reason, this mode only makes sense if there is no extraneous light and the flash duration is shorter than or equal to the exposure time.
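The uneven exposure in global reset mode is easy to quantify: each row stays light-sensitive for an extra interval proportional to its row index. A sketch with hypothetical timing numbers (the 10 µs row readout time is an assumption for illustration, not from any datasheet):

```python
def effective_exposure_us(exposure_us, row_index, row_readout_us):
    """In global reset mode all rows start exposing together but are read out
    line by line, so row i collects light for an extra i * t_row microseconds."""
    return exposure_us + row_index * row_readout_us

# Hypothetical sensor: 10 us per row readout, 2000 us nominal exposure
top = effective_exposure_us(2000, 0, 10)
bottom = effective_exposure_us(2000, 3043, 10)  # last row of a 3044-row sensor
print(top, bottom)  # 2000 32430
```

The bottom row here sees light over 16x longer than the top row, which is why a short flash in a dark environment is required to equalize the effective exposure.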

STEP 5 – Interfaces and camera selector

The final step is here:

You must consider the possible speed (bandwidth) as well as the cable length of the camera technology.

USB2
Small, handy and cost-effective, USB 2.0 industrial cameras have become integral parts in the area of medicine and microscopy. You can get a wide range of different variants, including with or without housings, as board-level or single-board, or with or without digital I/Os.

USB3/GigE Vision
Without standards, every manufacturer does their own thing, and many of the advantages customers have learned to love with the GigE Vision standard would be lost. Like GigE Vision, USB3 Vision also defines:

    • a transport layer, which controls the detection of a device (Device Detection)
    • the configuration (Register Access)
    • the data streaming (Streaming Data)
    • the handling of events (Event Handling)
    • an established interface to GenICam. GenICam abstracts the access to the camera features for the user. The features are standardized (name and behavior) by the standard feature naming convention (SFNC). Additionally, it is possible to create specific features in addition to the SFNC to differentiate from other vendors (quality of implementation). In contrast to GigE Vision, this time the mechanics (e.g. lockable cable connectors) are part of the standard, which leads to a more robust interface.

I believe that these five points will help you choose the most suitable camera. Are you still unclear? Do not hesitate to contact us or contact me directly: I will be happy to consult on your project, needs, or any questions.

Which 3D Vision Technology is Best for Your Application?

3D machine vision. This is such a magical combination of words. There are dozens of different solutions on the market, but they are typically not universal enough, or they are so universal that they are not sufficient for your application. In this blog, I will introduce different approaches to 3D technology and review which principle will be best for future usage.

Bonus:  I created a poll asking professionals what 3D vision technology they believe is best and I’ve shared the results.

Triangulation

One of the most used technologies in the 3D camera world is triangulation, which provides simple distance measurement by angular calculation. The reflected light is incident on a receiving element at an angle that depends on the distance. This standard method relies on a combination of a projector and a camera. There are two basic variants of the projection: a single-line structure and a 2-dimensional geometric pattern.

A single projected line is used in applications where the object is moving under the camera. If you have a static object, then you can use multiple parallel lines that allow the evaluation of the complete scene/surface. This is done with a laser light shaped into a two-dimensional geometric pattern (“structured light”) typically using a diffractive optical element (DOE). The most common patterns are dot matrices, line grids, multiple parallel lines, and circles.
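The angular calculation behind triangulation can be illustrated with a toy example. This sketch assumes the simplest geometry, where the laser fires perpendicular to the camera-laser baseline and the camera observes the spot at an angle theta from that baseline; the function name and numbers are mine:

```python
import math

def triangulation_distance_mm(baseline_mm, spot_angle_deg):
    """Right-angle laser triangulation: with the laser perpendicular to the
    baseline and the spot seen at angle theta from the baseline,
    z = b * tan(theta)."""
    return baseline_mm * math.tan(math.radians(spot_angle_deg))

# Hypothetical setup: 100 mm baseline, spot observed at 60 degrees
print(f"{triangulation_distance_mm(100, 60):.1f} mm")  # ~173.2 mm
```

As the object moves farther away, the observed angle grows toward 90 degrees, which is why measurement resolution degrades with distance in triangulation systems.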

Structured light

Another common principle of 3D camera technology is the structured light technique. The system contains at least one camera (it is most common to use two cameras) and a projector. The projector creates narrow bands of light (patterns of parallel stripes are widely used), which illuminate the captured object. From different angles, the cameras observe the various curved lines produced by the projector.

Projecting also depends on the technology which is used to create the pattern. Currently, the three most widespread digital projection technologies are:

  • transmissive liquid crystal,
  • reflective liquid crystal on silicon (LCOS)
  • digital light processing (DLP)

Reflective and transparent surfaces create challenges.

Time of Flight (ToF)

For this principle, the camera contains a high-power LED which emits light that is reflected from the object and then returns to the image sensor. The distance from the camera to the object is calculated based on the time delay between transmitted and received light.

This is a really simple principle that is used for 3D applications. The most common wavelength used is around 850 nm. This is in the near-infrared range, which is invisible to humans and safe for the eyes.
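The underlying distance calculation is straightforward: light covers the camera-to-object path twice, so the distance is half the round trip. A minimal sketch (real ToF cameras usually infer the delay from the phase shift of modulated light rather than timing it directly):

```python
SPEED_OF_LIGHT_M_S = 299_792_458

def tof_distance_m(round_trip_delay_s):
    """Light travels to the object and back, so distance is half the path."""
    return SPEED_OF_LIGHT_M_S * round_trip_delay_s / 2

# A 10 ns round-trip delay corresponds to roughly 1.5 m
print(f"{tof_distance_m(10e-9):.3f} m")  # 1.499 m
```

The tiny delays involved (nanoseconds per meter) explain why ToF depth resolution is limited to around the centimeter range mentioned below.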

This is especially useful since the camera can typically provide a 2D as well as a 3D picture at the same time.

An image sensor and LED emitter are combined in an all-in-one product, making it simple to integrate and easy to use. However, a drawback is that the maximum resolution is VGA (640 x 480), and for Z resolution expect +/- 1 cm. On the other hand, it is an inexpensive solution with modest dimensions.

Likely applications include:

  • mobile robotics
  • door controls
  • localization of the objects
  • mobile phones
  • gaming consoles (XBOX and Kinect camera) or industrial version Azure Kinect.

Stereo vision

Stereo vision is a quite common 3D camera method that typically includes two area scan sensors (cameras). As with human vision, 3D information is obtained by comparing images taken from two locations.

The principle, sometimes called stereoscopic vision, captures the same scene from different angles. The depth information is then calculated from the image pixel disparities (difference in lateral position).

The matching process, finding the same information with the right and left cameras, is critical to data accuracy and density.
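Once a match is found, depth follows from the classic pinhole stereo relation Z = f·B/d. A sketch with hypothetical rig parameters (the focal length and baseline below are made-up illustration values):

```python
def stereo_depth_mm(focal_px, baseline_mm, disparity_px):
    """Pinhole stereo relation: depth Z = f * B / d, with focal length f in
    pixels, baseline B in millimetres, and disparity d in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

# Hypothetical rig: 1200 px focal length, 60 mm baseline
print(stereo_depth_mm(1200, 60, 24))  # 3000.0 mm -- object 3 m away
print(stereo_depth_mm(1200, 60, 48))  # 1500.0 mm -- halving depth doubles disparity
```

The inverse relationship between depth and disparity is why stereo accuracy falls off quickly at long range: distant objects produce only sub-pixel disparities.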

Likely applications include:

  • Navigation
  • Bin-picking
  • Depalletization
  • Robotic guidance
  • Automated Guided Vehicles
  • Quality control and product classification

I asked my friends, colleagues, professionals, as well as competitors on LinkedIn what the best 3D technology is and which technology will be used in the future. You can see the result here.

As you can see, over 50% of the people believe that there is no one principle that can solve every task in the 3D machine vision world. And maybe that’s why machine vision is such a beautiful technology. Many approaches, solutions, and smart people can bring solutions from different perspectives and approaches.

Machine Vision: A Twenty-first Century Automation Solution

Lasers, scanners, fingerprint readers, and face recognition are not just science fiction anymore.  I love seeing technology only previously imagined become reality through necessity and advances in technology.  We, as a world economy, need to be able to verify who we are, ensure transactions are safe, and track material and goods accurately.  With this need came the evolution of laser barcode readers, fingerprint identification devices, and face ID on your phone.  Similar needs have pushed archaic devices to be replaced within factory automation for data collection.

When I began my career in controls engineering in the 1990s, the high-tech tools were limited to PLCs, frequency drives, and HMIs.  The quality inspection data these devices relied on was collected mostly through limit switches and proximity sensors.  Machine vision was still in its expensive and “cute” stage.  With the need for more information, seriously accurate measurement, machining specs, and speed, machine vision has evolved, just like our personal technology, to fill the needs of the modern time.

Machine vision has worked its way into the automation world as a need-to-have rather than a nice-to-have.  With the ability to stack several tools and validations on top of each other within a fraction-of-a-second scan, we now have the data our era needs to stay competitive.  Imagine an application requiring you to detect several material traits, measure the part, read a barcode for tracking, and validate a properly printed logo screened onto the finished product.  Sure, you could use several individual laser sensors, barcode readers, and possibly even a vision sensor all working in concert to achieve your goal.  Or you could use a machine vision system to do all of the above easily, with room to grow.

I say all of this because there is still resistance in the market to moving to machine vision due to historically high costs and complexity.  Machine vision is here to stay and is ready for your applications today.  Think of it this way: how capable would you think a business is if they took out a carbon-copy credit card machine to run a payment for you?  Keep that in mind before you start trying to solve applications with several sensors.  Take advantage of the technology at your fingertips; don’t hold on to nostalgia.

Buying a Machine Vision System? Focus on Capabilities, Not Cost

Gone are the days when an industrial camera was used only to take a picture and send it to a control PC. Machine vision systems are a much more sophisticated solution. Projects increasingly demand image processing, speed, size, complexity, defect recognition, and so much more.

This, of course, feeds the new approach in the field of software, where deep learning and artificial intelligence play a bigger and bigger role. A lot of effort goes into improved image processing; however, only a few people have realized that part of it can already be handled by that little “dummy” industrial camera.

In the next few paragraphs, I will briefly explain how to achieve this in your application. With it, you will be able to:

  • Reduce the amount of data
  • Relieve the entire system
  • Generate the maximum performance potential
  • Simplify the hardware structure
  • Reduce the installation work required
  • Reduce your hardware costs
  • Reduce your software costs
  • Reduce your development expenses

How to achieve it?  

Use more intelligent industrial cameras that have built-in internal memory, sometimes called a buffer. Together with an FPGA (field-programmable gate array), they can take over a lot of the work your image-processing software would otherwise have to do. These functions are often called pre-processing features.
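To make the data-reduction benefit concrete, here is a minimal sketch (not any specific camera's API) of how two common on-camera pre-processing features, ROI cropping and 2×2 binning, shrink the payload that must cross the USB or Ethernet link. The resolutions are illustrative assumptions:

```python
# Illustrative only: how on-camera ROI cropping and 2x2 binning reduce
# the raw frame payload before it ever reaches the host PC.

def transfer_bytes(width, height, bits_per_pixel=8):
    """Raw payload size of one frame in bytes."""
    return width * height * bits_per_pixel // 8

full = transfer_bytes(2448, 2048)             # full 5 MP frame
roi = transfer_bytes(1024, 768)               # ROI cropped around the part
binned = transfer_bytes(1024 // 2, 768 // 2)  # ROI plus 2x2 binning

print(full, roi, binned)
print(f"reduction: {full / binned}x")         # 25.5x less data to move
```

The same inspection still sees the whole part; the host simply never receives the pixels it was going to throw away anyway.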

What if you have a project where the camera must send images much faster than the USB or Ethernet interface allows?

For simple cameras, this would mean using a much faster interface, which of course makes the complete solution more expensive. Instead, you can use the Smart Frame Recall function in standard USB and GigE cameras. It generates small preview images with reduced resolution (thumbnails) at a greatly accelerated frame rate, which are transferred to the host PC along with frame IDs. At the same time, the corresponding full-resolution image is archived in the camera’s image memory. If the full-resolution image is required, the application sends a request and the image is transferred in the same data stream as the preview images.

The function is explained in this video.
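The host-side logic can be pictured as follows. This is a hypothetical simulation, not a vendor API: the camera object, the 1-D "images", and the brightness pre-check are all stand-ins; real cameras expose the recall mechanism through their own feature set.

```python
# Hypothetical flow for a frame-recall feature: the camera streams
# low-resolution thumbnails tagged with frame IDs, keeps full-resolution
# images in its internal buffer, and the PC requests the full image only
# when a fast pre-check flags a frame.

class SimulatedCamera:
    def __init__(self):
        self.buffer = {}      # frame_id -> full-resolution image
        self.next_id = 0

    def stream_thumbnail(self, full_image):
        frame_id = self.next_id
        self.next_id += 1
        self.buffer[frame_id] = full_image   # archived on-camera
        thumbnail = full_image[::8]          # crude downsample
        return frame_id, thumbnail

    def recall(self, frame_id):
        return self.buffer.pop(frame_id)     # full resolution on demand

def looks_defective(thumbnail):
    # Placeholder pre-check: flag frames whose mean brightness is low.
    return sum(thumbnail) / len(thumbnail) < 100

cam = SimulatedCamera()
recalled = []
for image in ([200] * 64, [30] * 64, [210] * 64):  # fake 1-D "frames"
    frame_id, thumb = cam.stream_thumbnail(image)
    if looks_defective(thumb):
        recalled.append(cam.recall(frame_id))      # fetch full resolution

print(len(recalled))  # only the one flagged frame crossed the link in full
```

The point of the design: the expensive full-resolution transfer happens only for the frames the application actually asks for.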

Is there a simpler option than a line scan camera? Yes!

Many people struggle with line scan cameras, and it is understandable. They are not easy to configure, hard to install, difficult to set up properly, and few people can modify them. Instead, you can use an area scan camera in line scan mode. The biggest benefit is the standard interface: USB3 Vision and GigE Vision instead of CoaXPress and Camera Link. This enables inspection of round/rotating bodies or long/endless materials at high speed (like line scan cameras). Block scan mode acquires an Area of Interest (AOI) block that consists of several lines. The user defines the number of AOI blocks used to create one image. This minimizes the overhead you would otherwise incur when transferring AOI blocks as single images over the USB3 Vision and GigE Vision protocols.

The function is explained in this video.
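The stitching step can be sketched in a few lines. This is a toy model under assumed dimensions (a tiny 8-pixel-wide sensor, 4-line blocks); real cameras do the readout in hardware and the block/image sizes are camera parameters:

```python
# Sketch of block scan acquisition: the sensor reads out small AOI
# blocks of a few lines each, and a fixed number of blocks is stitched
# into one tall image -- one large transfer instead of many tiny ones.

LINES_PER_BLOCK = 4      # height of one AOI block
BLOCKS_PER_IMAGE = 16    # user-defined: blocks stitched into one image
WIDTH = 8                # sensor line width (tiny, for illustration)

def acquire_block(start_line):
    """Stand-in for the camera: one AOI block = LINES_PER_BLOCK lines."""
    return [[start_line + i] * WIDTH for i in range(LINES_PER_BLOCK)]

image = []
for b in range(BLOCKS_PER_IMAGE):
    image.extend(acquire_block(b * LINES_PER_BLOCK))

print(len(image), len(image[0]))  # 64 lines x 8 pixels per stitched image
```

Because 16 blocks arrive as a single USB3 Vision / GigE Vision frame, the per-frame protocol overhead is paid once per 64 lines instead of once per block.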

Polarization has never been easier

Sony came up with a completely new approach: an on-sensor polarized filter. Until this was developed, everyone just used a polarization filter in front of the lens and combined it with polarized lighting. With the on-sensor filter, a polarizer array sits above the pixel array, and each 2×2 block of pixels carries polarizers at 0°, 45°, 90°, and 135°.


What is the best part? It doesn’t matter whether you need a color or monochrome version. There are at least five applications where you want to use it:

  • Remove reflection -> multi-plane surfaces or bruise/defect detection
  • Visual inspection -> detect fine scratches or dust
  • Contrast improvement -> recognize similar objects or colors
  • 3D/stress recognition -> quality analysis
  • People/vehicle detection -> using your phone while driving
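The four polarized pixels of one 2×2 unit are typically combined through the linear Stokes parameters, which yield the degree and angle of linear polarization behind the reflection-removal and stress-inspection uses above. A minimal sketch of that standard calculation:

```python
# Combine the 0/45/90/135-degree pixel intensities of one calculation
# unit into the linear Stokes parameters, then into the degree of linear
# polarization (DoLP) and angle of linear polarization (AoLP).
import math

def linear_polarization(i0, i45, i90, i135):
    s0 = (i0 + i45 + i90 + i135) / 2.0  # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    dolp = math.hypot(s1, s2) / s0      # 0 = unpolarized, 1 = fully polarized
    aolp = 0.5 * math.atan2(s2, s1)     # polarization angle in radians
    return dolp, aolp

# Fully polarized light at 0 degrees: all intensity passes the 0-degree
# pixel, none passes 90 degrees, and the 45/135 pixels each pass half.
dolp, aolp = linear_polarization(200, 100, 0, 100)
print(dolp, math.degrees(aolp))  # 1.0 0.0
```

High DoLP marks specular reflections to suppress; AoLP maps surface orientation and material stress.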

The liquid lens is very popular in smart sensor technology. When and why would you want to use one with an industrial camera?


A liquid lens is a single optical element, like a traditional glass lens, but it also includes a cable to control the focal length. Inside is a sealed cell containing water and oil. The technology uses an electrowetting process to achieve superior autofocus capabilities.

The benefits over traditional lenses are obvious. A liquid lens has no moving mechanical parts, which makes it highly resistant to shocks and vibrations. It is a perfect fit for applications where you need to observe or inspect objects of different sizes and/or working distances and react very quickly. One liquid lens can do the work of multiple imaging systems.

To connect the liquid lens, you need an RS232 port in the camera plus DC power from 5 to 24 volts. An intelligent industrial camera can connect to the lens directly, and the lens uses the camera’s power supply.
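In practice, the host builds a serial command frame and writes it to the camera's RS232 pass-through. The command format below is invented purely for illustration; every lens and camera vendor defines its own serial protocol, so consult the vendor documentation for the real one:

```python
# Hypothetical sketch of driving a liquid lens over RS232.  The ASCII
# "FOCUS" frame is an assumption made up for this example, not a real
# lens protocol.

def focus_command(focal_power_dpt):
    """Encode a 'set focal power' request as an ASCII serial frame."""
    payload = f"FOCUS {focal_power_dpt:+.2f}\r\n"
    return payload.encode("ascii")

# With pyserial, the frame would be written to the camera's serial
# pass-through port, e.g.:
#   import serial
#   with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as port:
#       port.write(focus_command(3.5))
print(focus_command(3.5))
```

Because refocusing is just a serial write with no mechanics to settle, switching between working distances can happen between consecutive frames.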


Reduce Packaging Downtime with Machine Vision

Packaging encompasses many different industries and typically has several stages in its process. Each industry uses packaging to accomplish specific tasks, well beyond just acting as a container for a product. The pharmaceutical industry, for example, typically uses its packaging as a means of dispensing as well as containing. The food and beverage industry uses packaging as a means of preventing contamination and creating differentiation from similar products. Consumer goods typically require unique product containment methods and have a need for “eye-catching” differentiation.

The packaging process typically has several stages. For example, you have primary packaging where the product is first placed in a package, whether that is form-fill-seal bagging or bottle fill and capping. Then secondary packaging that the consumer may see on the shelf, like cereal boxes or display containers, and finally tertiary packaging or transport packaging where the primary or secondary packaging is put into shipping form. Each of these stages require verification or inspection to ensure the process is running properly, and products are properly packaged.


Discrete vs. Vision-Based Error Proofing

With the use of machine vision technology, greater flexibility and more reliable operation of the packaging process can be achieved. Typically, in the past and still today, discrete sensors have been used to look for errors and manage product change-over detection. But these simple discrete sensing solutions bring limitations in flexibility, time-consuming fixture change-overs, and more potential for errors, costing thousands of dollars in lost product and production time. This can translate to more expensive and less competitively priced products on store shelves.

There are two ways machine vision can improve scheduled line time. The first is reducing planned downtime by cutting product changeover and fixturing change time. The other is decreasing unplanned downtime by catching errors right away and dynamically rejecting them, or by bringing attention to line issues that require correction, preventing waste. The greatest benefit vision offers for production line time is in reducing planned downtime for things like product changeovers: this is a repeatable benefit that can dramatically reduce operating costs and increase planned runtime. The opportunities for vision to reduce unplanned downtime include eliminating line jams due to incorrectly fed packaging materials, misaligned packages, or undetected open flaps on cartons. Others include improperly capped bottles causing jams or spills, and improper adjustments or low ink causing illegible labeling and barcodes.
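A back-of-the-envelope calculation shows why the changeover benefit compounds. All numbers here are hypothetical, chosen only to illustrate the arithmetic:

```python
# Illustration with made-up numbers: recovered production time when a
# vision-driven recipe change replaces a manual fixture changeover.

shifts_per_year = 250
changeovers_per_shift = 3
minutes_saved_per_changeover = 20  # fixture swap replaced by recipe change

hours_recovered = (shifts_per_year * changeovers_per_shift
                   * minutes_saved_per_changeover) / 60
print(hours_recovered)  # hours of extra planned runtime per year
```

Even modest per-changeover savings, repeated every shift, add up to weeks of recovered runtime per year, which is why the planned-downtime side usually dominates the payback.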

Cost and reliability of any technology that improves the packaging process should always be proportional to the benefit it provides. Vision technologies today, like smart cameras, offer the advantages of lower costs and simpler operation, especially compared to the older, more expensive and typically purpose-built vision system counterparts. These new vision technologies can also replace entire sensor arrays, and, in many cases, most of the fixturing at or even below the same costs, while providing significantly greater flexibility. They can greatly reduce or eliminate manual labor costs for inspection and enable automated changeovers. This reduces planned and unplanned downtime, providing longer actual runtime production with less waste during scheduled operation for greater product throughput.

Solve Today’s Packaging Challenges

Using machine vision at any stage of the packaging process provides the flexibility to dramatically reduce planned downtime through a repeatable decrease in product changeover time. It also provides reliable, flexible error proofing that can significantly reduce unplanned downtime and waste, with examples like in-line detection and rejection to eliminate jams and prevent product loss. This technology can also help reduce or eliminate product or shipment rejection by customers at delivery. In today’s competitive market, with constant pressure to reduce operating costs, increase quality, and minimize waste, look at your process today and see if machine vision can make that difference for your packaging process.