Buying a Machine Vision System? Focus on Capabilities, Not Cost

Gone are the days when an industrial camera was used only to take a picture and send it to a control PC. Machine vision systems are now far more sophisticated, and projects increasingly demand more image processing, higher speed, smaller size, greater complexity, defect recognition and much more.

This, of course, feeds into the new approach on the software side, where deep learning and artificial intelligence play an ever-larger role. A great deal of effort goes into improved image processing; however, only a few people have realized that part of that work can already be handled by that little “dummy” industrial camera.

In the next few paragraphs I will briefly explain how to achieve this in your application. Doing so can bring you benefits such as:

  • Reduce the amount of data
  • Reduce the load on the entire system
  • Get the maximum performance potential out of the hardware
  • Simplify the hardware structure
  • Reduce the installation work required
  • Reduce your hardware costs
  • Reduce your software costs
  • Reduce your development expenses

How to achieve it?  

Try using more intelligent industrial cameras that have built-in internal memory, sometimes called a buffer. Together with an FPGA (field-programmable gate array), they can take over a lot of work that your image processing software will appreciate. These functions are often called pre-processing features.

What if you have a project where the camera must send images much faster than the USB or Ethernet interface allows?

For simple cameras, this would mean moving to a much faster interface, which of course makes the complete solution more expensive. Instead, you can use the Smart Framer Recall function available in standard USB and GigE cameras. It generates small preview images with reduced resolution (thumbnails) at a greatly increased frame rate and transfers them to the host PC together with image IDs. At the same time, the corresponding full-resolution image is archived in the camera’s image memory. If the full-resolution image is needed, the application sends a request and the image is transferred in the same data stream as the preview images.
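To make the idea concrete, here is a minimal Python sketch of the thumbnail-plus-recall concept. It is a toy simulation, not any vendor’s SDK: the buffer size, resolution, decimation factor and the mean-brightness screening rule are all assumptions chosen for illustration.

```python
from collections import OrderedDict

import numpy as np


class FrameRecallCamera:
    """Toy model of a thumbnail-plus-recall acquisition scheme (not a vendor API)."""

    def __init__(self, buffer_size=64, full_shape=(2048, 2448), scale=8):
        self.buffer = OrderedDict()      # frame ID -> archived full-resolution image
        self.buffer_size = buffer_size
        self.full_shape = full_shape
        self.scale = scale
        self.next_id = 0

    def acquire(self):
        """Capture one frame: archive full resolution, return (ID, thumbnail)."""
        full = np.random.randint(0, 255, self.full_shape, dtype=np.uint8)
        frame_id = self.next_id
        self.next_id += 1
        self.buffer[frame_id] = full
        if len(self.buffer) > self.buffer_size:       # oldest frame falls out of memory
            self.buffer.popitem(last=False)
        thumbnail = full[::self.scale, ::self.scale]  # reduced-resolution preview
        return frame_id, thumbnail

    def recall(self, frame_id):
        """Host requests the archived full-resolution image by its ID."""
        return self.buffer.get(frame_id)


cam = FrameRecallCamera()
ids_of_interest = []
for _ in range(100):                      # fast preview stream on the host
    frame_id, thumb = cam.acquire()
    if thumb.mean() > 127:                # cheap screening on the thumbnail
        ids_of_interest.append(frame_id)

full_images = []
for frame_id in ids_of_interest:
    img = cam.recall(frame_id)            # may be None if it already left the buffer
    if img is not None:
        full_images.append(img)

print(len(ids_of_interest), "candidates,", len(full_images), "full frames recalled")
```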


Is there a simpler option than a line scan camera? Yes!

Many people struggle to use line scan cameras, and that is understandable. They are not easy to configure, hard to install, difficult to set up properly, and few people know how to modify them. Instead, you can use an area scan camera in line scan mode. The biggest benefit is the standard interface: USB3 Vision and GigE Vision instead of CoaXPress and Camera Link. This enables inspection of round/rotating bodies or long/endless materials at high speed, just like a line scan camera. Block scan mode acquires an Area of Interest (AOI) block consisting of several lines, and the user defines how many AOI blocks are used to create one image. This minimizes the overhead you would otherwise have when transferring each AOI block as a single image over the USB3 Vision and GigE Vision protocols.
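As a rough illustration of how block scan mode assembles an image, here is a short Python/NumPy sketch. The block height, block count and line width are assumptions, and the “acquired” blocks are simulated rather than read from a real camera.

```python
import numpy as np

# Illustrative only: dimensions and the simulated "web" content are assumptions,
# not parameters of any specific camera's block scan mode.
LINES_PER_BLOCK = 8          # height of one AOI block in sensor lines
BLOCKS_PER_IMAGE = 128       # how many blocks are stitched into one output image
LINE_WIDTH = 2048            # pixels per line


def acquire_block(position):
    """Stand-in for one AOI block read from a moving material at 'position'."""
    rows = np.arange(position, position + LINES_PER_BLOCK).reshape(-1, 1)
    return ((rows + np.arange(LINE_WIDTH)) % 256).astype(np.uint8)


blocks = [acquire_block(i * LINES_PER_BLOCK) for i in range(BLOCKS_PER_IMAGE)]
image = np.vstack(blocks)    # one tall image transferred as a single frame

print(image.shape)           # (1024, 2048): 128 blocks of 8 lines each
```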


Polarization has never been easier

Sony came up with a completely new approach — an on-sensor polarizing filter. Until this approach was developed, everyone simply used a polarization filter in front of the lens and combined it with polarized lighting. With the on-sensor filter, a polarizer array sits above the pixel array, and each 2×2 pixel block contains polarizers at 0°, 45°, 90°, and 135°.

 

What is the best part of it? It doesn’t matter if you need a color or monochrome version. There are at least five applications where you will want to use it (a short processing sketch follows the list):

  • Remove reflection -> multi-plane surfaces or bruise/defect detection
  • Visual inspection -> detect fine scratches or dust
  • Contrast improvement -> recognize similar objects or colors
  • 3D/Stress recognition -> quality analysis
  • People/vehicle detection -> using your phone while driving
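To show how the four angle channels can be used, here is a minimal NumPy sketch that converts a raw polarization mosaic into degree and angle of linear polarization images. The frame is synthetic and the 2×2 pixel layout is an assumption; check your sensor’s documentation for the actual arrangement.

```python
import numpy as np

# Minimal sketch, assuming a raw frame from a four-angle polarization sensor in
# which each 2x2 block holds the 90, 45, 135 and 0 degree pixels. This layout is
# an assumption, not a statement about any particular sensor.
raw = np.random.randint(0, 4096, (2048, 2448)).astype(np.float64)

i90, i45 = raw[0::2, 0::2], raw[0::2, 1::2]
i135, i0 = raw[1::2, 0::2], raw[1::2, 1::2]

# Stokes parameters for linear polarization
s0 = 0.5 * (i0 + i45 + i90 + i135)          # total intensity
s1 = i0 - i90
s2 = i45 - i135

dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-6)   # degree of linear polarization
aolp = 0.5 * np.arctan2(s2, s1)                         # angle of linear polarization

print(dolp.shape, float(dolp.max()), float(np.degrees(aolp).min()))
```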

The liquid lens is very popular in smart sensor technology. When and why would you want to use one with an industrial camera?

 

A liquid lens is a single optical element, like a traditional lens made from glass. However, it also includes a cable to control the focal length, and it contains a sealed cell with water and oil inside. The technology uses an electrowetting process to achieve superior autofocus capabilities.

The benefits over traditional lenses are obvious. A liquid lens has no moving mechanical parts, which makes it highly resistant to shocks and vibrations. It is a perfect fit for applications where you need to observe or inspect objects of different sizes and/or at different working distances and you need to react very quickly. One liquid lens can do the work of multiple imaging systems.

To connect the liquid lens, you need an RS232 port on the camera plus DC power from 5 to 24 V. An intelligent industrial camera can connect to the lens directly, and the lens can use the camera’s power supply.
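As a rough idea of what driving a liquid lens over the camera’s serial port can look like, here is a hedged Python sketch using pyserial. The port name, baud rate and the ASCII command syntax are placeholders, not a real lens protocol; the actual commands come from the lens or camera manual.

```python
import serial  # pyserial

# Minimal sketch, not a real protocol: the port name, baud rate and the ASCII
# command format below are placeholders. Consult the lens/camera documentation
# for the actual focus-control commands.
PORT = "/dev/ttyUSB0"        # hypothetical serial port exposed by the camera
BAUD = 38400                 # placeholder baud rate


def set_focal_power(ser, diopters):
    """Send a hypothetical focus command to the liquid lens controller."""
    command = f"FOCUS {diopters:+.2f}\r\n".encode("ascii")   # made-up syntax
    ser.write(command)
    return ser.readline()    # many controllers echo a status line


with serial.Serial(PORT, BAUD, timeout=0.5) as ser:
    for d in (-2.0, 0.0, +3.5):          # step through a few working distances
        print(d, set_focal_power(ser, d))
```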

 

Reduce Packaging Downtime with Machine Vision

Packaging encompasses many different industries and typically has several stages in its process. Each industry uses packaging to accomplish specific tasks, well beyond just acting as a container for a product. The pharmaceutical industry, for example, typically uses its packaging as a means of dispensing as well as containing. The food and beverage industry uses packaging to prevent contamination and to differentiate its products from similar ones. Consumer goods typically require unique product containment methods and need “eye-catching” differentiation.

The packaging process typically has several stages. For example, there is primary packaging, where the product is first placed in a package, whether that is form-fill-seal bagging or bottle filling and capping; secondary packaging that the consumer may see on the shelf, like cereal boxes or display containers; and finally tertiary or transport packaging, where the primary or secondary packaging is put into shipping form. Each of these stages requires verification or inspection to ensure the process is running properly and products are properly packaged.


Discrete vs. Vision-Based Error Proofing

With machine vision technology, greater flexibility and more reliable operation of the packaging process can be achieved. Traditionally, and often still today, discrete sensors have been used to look for errors and manage product change-over detection. But these simple discrete sensing solutions bring limited flexibility, time-consuming fixture change-overs and more potential for errors, costing thousands of dollars in lost product and production time. This can translate to more expensive and less competitively priced products on store shelves.

There are two ways machine vision can improve scheduled line time. The first is reducing planned downtime by cutting product change-over and fixturing change time. The other is decreasing unplanned downtime by catching errors right away and dynamically rejecting them, or by bringing attention to line issues that require correction, preventing waste. The greatest benefit vision offers for production line time is in reducing planned downtime for things like product changeovers; this is a repeatable benefit that can dramatically reduce operating costs and increase planned runtime. The opportunities for vision to reduce unplanned downtime include eliminating line jams due to incorrectly fed packaging materials, misaligned packages or undetected open flaps on cartons, as well as improperly capped bottles causing jams or spills and improper adjustments or low ink causing illegible labeling and barcodes.

Cost and reliability of any technology that improves the packaging process should always be proportional to the benefit it provides. Vision technologies today, like smart cameras, offer the advantages of lower costs and simpler operation, especially compared to the older, more expensive and typically purpose-built vision system counterparts. These new vision technologies can also replace entire sensor arrays, and, in many cases, most of the fixturing at or even below the same costs, while providing significantly greater flexibility. They can greatly reduce or eliminate manual labor costs for inspection and enable automated changeovers. This reduces planned and unplanned downtime, providing longer actual runtime production with less waste during scheduled operation for greater product throughput.

Solve Today’s Packaging Challenges

Using machine vision in any stage of the packaging process can provide the flexibility to dramatically reduce planned downtime with a repeatable decrease in product changeover time, while also providing reliable and flexible error proofing that can significantly reduce unplanned downtime and waste with examples like in-line detection and rejection to eliminate jams and prevent product loss. This technology can also help reduce or eliminate product or shipment rejection by customers at delivery. In today’s competitive market with constant pressure to reduce operating costs, increase quality and minimize waste, look at your process today and see if machine vision can make that difference for your packaging process.

Beyond the Human Eye

Have you ever had to squint, strain, adjust your glasses, or just ask someone with better vision to help read something for you? Now imagine having to adjust your eyesight 10 times a second. This is the power of machine vision. It can adjust, illuminate, filter, focus, read, and relay information that our eyes struggle with. Although the technology is 30 years old, machine vision is still in its early stages of adoption within the industrial space. In the past, machine vision was a ‘nice to have’ rather than a ‘need to have’ technology because of its cost and because the technology was not yet refined. As traceability, human error proofing, and advanced applications grow more common, machine vision has found its rhythm within factory automation. It has evolved into a robust technology eager to solve advanced applications.

Take, for example, the accurate reading, validation, and logging of a date located on the concave bottom of an aluminum can. It is sometimes nearly impossible to see with the human eye without straining, yet it is completely necessary to ensure it is there in order to sell the product. What would be your solution to ensuring the date stamp is there? Having the employee with the best eyes validate each can off the line? Using more ink and taking longer to print a larger code? Maybe adding a step by putting a black-on-white contrasting sticker on the bottom that could fall off? All of these would work, but at what cost? A better solution is a device easily capable of reading several cans a second, even on a shiny, poorly angled surface, saving a great deal of unnecessary time and steps.

Machine vision is not magic; it is science. By combining high-end image sensors, advanced algorithms, and trained vision specialists, an application like our aluminum can example can be solved in minutes and run indefinitely, all while saving you time and money. In Figure 1 you can see the can’s code is lightly printed and washed out by lighting due to hotspots from the angle of the can. In Figure 2 we have filtered out some of the glare, better defined the date through software, and validated that the date is printed and correct.
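For readers who want a feel for the kind of processing involved, here is a small OpenCV sketch that boosts local contrast under uneven lighting before handing the image to an OCR/OCV tool. The file name and parameter values are assumptions, and the inspection shown in the figures was done in a vision system’s own software, not with this exact code.

```python
import cv2

# Minimal sketch, assuming a grayscale image of a can bottom ("can_bottom.png"
# is a placeholder path).
image = cv2.imread("can_bottom.png", cv2.IMREAD_GRAYSCALE)
assert image is not None, "replace the placeholder with a real image path"

# Local contrast enhancement helps the lightly printed characters stand out.
clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
enhanced = clahe.apply(image)

# Adaptive thresholding copes with hotspots because each neighborhood gets its
# own threshold instead of one global value (block size 31, offset 10).
mask = cv2.adaptiveThreshold(enhanced, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                             cv2.THRESH_BINARY_INV, 31, 10)

cv2.imwrite("can_bottom_enhanced.png", mask)   # hand off to an OCR/OCV tool
```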

Take a moment to imagine all the possibilities machine vision can open for your production process and the pain points it can alleviate. The technology is ready, are you?

Figure 1
Figure 2

How Cameras Keep Tire Manufacturers From Spinning Their Wheels

Tires being transported between the curing presses and the staging area before their final inspection often become clustered together. This jam up can cause imperfections to the tires and damage to the conveyors. To alleviate this problem, some tire manufacturers have installed vision systems on their conveyors to provide visual feedback to their production and quality teams, and alert them when the tires start to get too close together.

A vision system can send alerts back to your HMI using the inputs and outputs built into the camera, or an IO-Link port on the camera can be used to attach a visual display, for example a SmartLight with audible and flashing alerts enabled. Once an alert appears, the PLC can fix the issue from the program, or a maintenance worker or engineer can quickly respond to it.

Widespread use of smart vision cameras with various pixel options has become a trend in tire manufacturing. In addition to giving an early alert to bunching problems, vision systems can also capture pictures and data to verify that tires were cleared all the way into final inspection. Although tire machine builders are being asked to incorporate vision systems into their machines during the integration process, it is more common for systems to be added in plants at the application level.

Vision systems can improve production throughput, reduce quality issues and record production data about the process for analytics and analysis down the road. Remember, a tire plant usually consists of these processes, each in its own large section of the plant and involving many machines:

  • Mixing
  • Tire Prep
  • Tire Build
  • Curing
  • Final Inspection

Each one of these process areas in a plant can benefit from the addition of vision systems. Here are a few examples:

  • Mixing areas can use cameras as they mill rubber to detect when rubber sheets come off the rollers and to look for engraved information embedded in the rubber material, supporting logistics and material flow to the proper processes.
  • Tire Prep can use cameras to ensure all the different strand colors of steel cords are embedded in or painted on the rubber plies before going to the tire build process.
  • Tire Build can use vision to detect that the side-wall beads are facing the right direction and to read the embedded position arrows on the beads before tire plies are wrapped around them.
  • Curing areas can use vision to monitor tire clusters on conveyors and make sure they are not too close to each other by using the measuring tool in the camera software (a simple sketch of this distance check follows the list).
  • Final Inspection can use vision to read barcodes and QR codes, detect the colors of embossed or engraved serial numbers, and detect the different colors and shapes of markings on the tire.
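As a simple illustration of the curing-conveyor spacing check mentioned above, here is an OpenCV sketch that finds tire centers and flags pairs that are too close. The image file, area threshold, pixel scale and minimum gap are assumptions for illustration only.

```python
import cv2
import numpy as np

# Minimal sketch, assuming a top-down grayscale image "conveyor.png" in which
# tires appear dark on a lighter belt. Scale and limits are made-up values.
MIN_GAP_MM = 150
MM_PER_PIXEL = 2.0

frame = cv2.imread("conveyor.png", cv2.IMREAD_GRAYSCALE)
assert frame is not None, "replace the placeholder with a real image path"

_, mask = cv2.threshold(frame, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

centers = []
for c in contours:
    if cv2.contourArea(c) < 5000:          # ignore small blobs that are not tires
        continue
    m = cv2.moments(c)
    centers.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))

centers.sort()                              # order along the conveyor (x axis)
for (x1, y1), (x2, y2) in zip(centers, centers[1:]):
    gap_mm = np.hypot(x2 - x1, y2 - y1) * MM_PER_PIXEL
    if gap_mm < MIN_GAP_MM:
        print(f"Tires too close: {gap_mm:.0f} mm between centers")
```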

The use of machine vision systems can decrease quality issues by pinpointing errors before they make it through the entire production process without detection.

What Machine Vision Tool is Right for Your Application?

Machine vision is an inherent terminology in factory automation but selecting the most efficient and cost-effective vision product for your project or application can be tricky.

We can look at machine vision from many angles; market segment, application and image processing each offer a different perspective. In this article I will focus on the “sensing element” itself, the device that scans your application.

The sensing element is a product that observes the application, analyzes it and forwards an evaluation. The PC is the part of the machine vision system that can be embedded with the imager or separate, like a controller. We could take many different approaches, but let’s look at the project according to the complexity of the application. The basic machine vision hardware categories are:

  1. smart sensors
  2. smart cameras
  3. vision systems

Each of these products is used in a different way and fits different applications, but what do they all have in common? They must have components like an imager, lens, lighting, SW, processor and output HW. All major manufacturing companies, regardless of their focus or market segment, use these products, but for what purpose and under what circumstances?

Smart Sensors

Smart sensors are dedicated to basic machine vision applications. There are hundreds of different types on the market, and they must quickly deliver standard machine vision performance. Don’t get me wrong, this is not necessarily a negative. These sensors are used for simple applications; you do not want to wait seconds to detect a QR code, you need a response time in milliseconds. Smart sensors typically include basic functions like:

  • data matrix, barcode and 2D code reading
  • presence of the object,
  • shape, color, thickness, distance

They are typically used in a single-purpose process, and you cannot combine all the features.

Smart Cameras

Smart cameras are used in more complex projects. They provide all the functions of smart sensors, plus more complex functions like:

  • find and check object
  • blob detection
  • edge detection
  • metrology
  • robot navigation
  • sorting
  • pattern recognition
  • complex optical character recognition

Due to their complexity, you can use them for products that require higher resolution, although that is not a requirement. Smart cameras can combine multiple programs and run several functions in parallel. Image processing is more sophisticated, and limits in processing speed may occur because of the embedded PC.

Vision Systems

Typically, machine vision systems are used in applications where a smart camera is not enough.

A vision system consists of industrial cameras, a controller, and separate lighting and lens systems, so it is important to have knowledge of the different types of lighting and lenses. Industrial cameras provide resolutions from VGA up to 30 megapixels and are easily connected to the controller.

Vision systems are highly flexible. They provide all the functions of smart sensors and smart cameras, bringing complexity as well as flexibility. With a vision system, you are not limited by resolution or speed; thanks to the controller, you have dedicated processing power that delivers many times the speed of an embedded PC.

And finally, the most important information: what about pricing?

You can be sure that a smart sensor is the least expensive solution, with basic pricing in the range of $500 – $1,500. Smart cameras can cost $2,000 – $5,000, while a vision system would start closer to $6,000. It may look like an easy calculation, but you need to take the complexity of your project into consideration to determine which is best for you.

Pros, cons and cost comparison:

Smart sensor ($)
  Pros: easy integration; simple configuration; included lighting and lenses
  Cons: limited functions; closed SW; limited programs/memory

Smart camera ($$)
  Pros: combines multiple programs; wide range of available functions
  Cons: limited resolution; slower speed due to embedded PC

Vision system ($$$)
  Pros: connects multiple cameras (up to 8); open SW; different resolution options
  Cons: requires a skilled machine vision specialist; requires knowledge of lighting and lenses; increased integration time


Top 5 Insights from 2019

With a new year comes new innovation and insights. Before we jump into new topics for 2020, let’s not forget some of the hottest topics from last year. Below are the five most popular blogs from our site in 2019.

1. How to Select the Best Lighting Techniques for Your Machine Vision Application

The key to deploying a robust machine vision application in a factory automation setting is ensuring that you create the necessary environment for a stable image. The three areas you must focus on to ensure image stability are: lighting, lensing and material handling. For this blog, I will focus on the seven main lighting techniques that are used in machine vision applications.

READ MORE>>

2. M12 Connector Coding

New automation products hit the market every day and each device requires the correct cable to operate. Even in standard cable sizes, there are a variety of connector types that correspond with different applications.

READ MORE>>

3. When to use optical filtering in a machine vision application

Industrial image processing is essentially a requirement in modern manufacturing. Vision solutions can deliver visual quality control, identification and positioning. While vision systems have gotten easier to install and use, there isn’t a one-size-fits-all solution. Knowing how and when you should use optical filtering in a machine vision application is a vital part of making sure your system delivers everything you need.

READ MORE>>

4. The Difference Between Intrinsically Safe and Explosion Proof

The difference between a product being ‘explosion proof’ and ‘intrinsically safe’ can be confusing but it is vital to select the proper one for your application. Both approvals are meant to prevent a potential electrical equipment malfunction from initiating an explosion or ignition through gases that may be present in the surrounding area. This is accomplished in both cases by keeping the potential energy level below what is necessary to start the ignition process in an open atmosphere.

READ MORE>>

5. Smart choices deliver leaner processes in Packaging, Food and Beverage industry

In all industries, there is a need for more flexible and individualized production as well as increased transparency and documentable processes. Overall equipment efficiency, zero downtime and the demand for shorter production runs have created the need for smart machines and ultimately the smart factory. Now more than ever, this is important in the Packaging, Food and Beverage (PFB) industry to ensure that the products and processes are clean, safe and efficient.

READ MORE>>

We appreciate your dedication to Automation Insights in 2019 and look forward to growth and innovation in 2020!

 

 

Tackle Quality Issues and Improve OEE in Vision Systems for Packaging

Packaging industries must operate with the highest standards of quality and productivity. Overall Equipment Effectiveness (OEE) is a scoring system widely used to track production processes in packaging. An OEE score is calculated from data specifying quality (percent of good parts), performance (percent of nominal speed) and availability (percent of planned uptime).
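For reference, OEE is simply the product of those three factors. The short Python sketch below shows the calculation; the production numbers are made up.

```python
# Minimal sketch of an OEE calculation with made-up production numbers.
def oee(availability, performance, quality):
    """OEE = availability x performance x quality, each expressed as a fraction."""
    return availability * performance * quality


planned_time_min = 480          # one shift of planned production time
downtime_min = 45               # changeovers, jams, faults
ideal_rate_ppm = 60             # nominal speed, parts per minute
total_parts = 24_000
good_parts = 23_400

availability = (planned_time_min - downtime_min) / planned_time_min
performance = total_parts / ((planned_time_min - downtime_min) * ideal_rate_ppm)
quality = good_parts / total_parts

print(f"OEE = {oee(availability, performance, quality):.1%}")   # about 81%
```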

Quality issues can directly impact the customer, so it is essential to have processes in place to ensure the product is safe to use and appropriately labeled before it ships out. Additionally, defects to the packaging like dents, scratches and inadequate labeling can affect customer confidence in a product and their willingness to buy it at the store. Issues with quality can lead to unplanned downtime, waste and loss of productivity, affecting all three metrics of the OEE score.


Traditionally, visual inspections and packaging line audits have been used to monitor quality, however, this labor can be challenging in high volume applications. Sensing solutions can be used to partly automate the process, but complex demands, including multiple package formats and product formulas in the same line, require the flexibility that machine vision offers. Machine vision is also a vital component in adding traceability down to the unit in case a quality defect or product recall does occur.


Vision systems can increase productivity in a packaging line by reducing the amount of planned and unplanned downtime for manual quality inspection. Vision can be reliably used to detect quality defects as soon as they happen. With this information, a company can make educated improvements to the equipment to improve repeatability and OEE and ensure that no defective product reaches the customers’ hands.

Some vision applications for quality assurance in packaging include:

  • Label inspection (presence, integrity, print quality, OCV/OCR)
    Check that a label is in place, lined up correctly and free of scratches and tears. Ensure that any printed graphics, codes and text are legible and printed with the expected quality. Use OCR (Optical Character Recognition) to read a lot number, expiration date or product information, and then OCV (Optical Character Verification) to ensure legibility.
  • Primary and secondary packaging inspection for dents and damage
    Inspect bottles, cans and boxes to make sure that their geometry has not been altered during the manufacturing process. For example, check that a bottle rim is circular and has not been crushed so that the bottle cap can be put on after filling with product.
  • Safety seal/cap presence and position verification
    Verify that a cap and/or seal has been placed correctly on a bottle, and/or that the container being used is the correct one for the formula/product being manufactured.
  • Product position verification in packages with multiple items
    In packages of solids, make sure they have been filled adequately and in the correct sequence. In pharmaceutical industries, this can be used to check that blister packs have a pill in each space, and in food industries to ensure that the correct food item is placed in each space of the package.
  • Certification of proper liquid level in containers
    For applications in which it can’t be done reliably with traditional sensing technologies, vision systems can be used to ensure that a bottle has been filled to its nominal volume (a short sketch of this check follows the list).
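As an example of the last item, here is a minimal OpenCV sketch that estimates the fill line of a backlit bottle from a row-brightness profile. The image file, target row and tolerance are assumptions; a production system would use the vision tool’s own level-check functions.

```python
import cv2
import numpy as np

# Minimal sketch, assuming a backlit grayscale image of a single bottle
# ("bottle.png" is a placeholder) in which the liquid column is darker than the
# headspace. The target row and tolerance are made-up values.
TARGET_ROW = 220             # expected fill line, in pixels from the top of the ROI
TOLERANCE_PX = 8

roi = cv2.imread("bottle.png", cv2.IMREAD_GRAYSCALE)
assert roi is not None, "replace the placeholder with a real image path"

profile = roi.mean(axis=1)                   # average brightness of each row

# The fill line is where brightness drops sharply from headspace to liquid.
gradient = np.diff(profile)
fill_row = int(np.argmin(gradient))

if abs(fill_row - TARGET_ROW) <= TOLERANCE_PX:
    print(f"Fill level OK (row {fill_row})")
else:
    print(f"Fill level out of tolerance (row {fill_row})")
```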

The flexibility of vision systems allows for addressing these complex applications and many more with a well-designed vision solution.

For more information on Balluff vision solutions and applications, visit www.balluff.com.

Sensor and Device Connectivity Solutions For Collaborative Robots

Sensors and peripheral devices are a critical part of any robot system, including collaborative applications. A wide variety of sensors and devices are used on and around robots along with actuation and signaling devices. Integrating these and connecting them to the robot control system and network can present challenges due to multiple/long cables, slip rings, many terminations, high costs to connect, inflexible configurations and difficult troubleshooting. But device level protocols, such as IO-Link, provide simpler, cost-effective and “open” ways to connect these sensors to the control system.

Just as the human body requires eyes, ears, skin, nose and tongue to sense the environment around it so that action can be taken, a collaborative robot needs sensors to complete its programmed tasks. We’ve discussed the four modes of collaborative operation in previous blogs, detailing how each mode has special safety/sensing needs, but they have common needs to detect work material, fixtures, gripper position, force, quality and other aspects of the manufacturing process. This is where sensors come in.

Typical collaborative robot sensors include inductive, photoelectric, capacitive, vision, magnetic, safety and other types of sensors. These sensors help the robot detect the position, orientation and type of objects, as well as its own position, so it can move accurately and safely within its surroundings. Other devices around a robot include valves, RFID readers/writers, indicator lights, actuators, power supplies and more.

The table, below, considers the four collaborative modes and the use of different types of sensors in these modes:

Table 1

But how can users easily and cost-effectively connect this many sensors and devices to the robot control system? One solution is IO-Link. In the past, robot users would run cables from each sensor to the control system, resulting in long cable runs, wiring difficulties (cutting, stripping, terminating, labeling) and challenges with troubleshooting. IO-Link solves these issues through simple point-to-point wiring using off-the-shelf cables.

Table 2

Collaborative (and traditional) robot users face many challenges when connecting sensors and peripheral devices to their control systems. IO-Link addresses many of these issues and can offer significant benefits:

  • Reduced wiring through a single field network connection to hubs
  • Simple connectivity using off-the-shelf cables with plug connectors
  • Compatible with all major industrial Ethernet-based protocols
  • Easy tool change with Inductive Couplers
  • Advanced data/diagnostics
  • Parameterization of field devices
  • Faster/simpler troubleshooting
  • Support for implementation of IIoT/Industry 4.0 solutions

IO-Link: an excellent solution for simple, easy, fast and cost-effective device connection to collaborative robots.

When to use optical filtering in a machine vision application

Industrial image processing is essentially a requirement in modern manufacturing. Vision solutions can deliver visual quality control, identification and positioning. While vision systems have gotten easier to install and use, there isn’t a one-size-fits-all solution. Knowing how and when you should use optical filtering in a machine vision application is a vital part of making sure your system delivers everything you need.

So when should you use optical filtering in your machine vision applications? ALWAYS. Image filtering increases contrast, usable resolution, image quality and most importantly, it dramatically reduces ambient light interference, which is the number one reason a machine vision application doesn’t work as expected.

Different applications require different types of filtering. I’ve highlighted the most common.

Bandpass Filtering

Different light spectrums will enhance or de-emphasize certain aspects of the target you are inspecting. Therefore, the first thing you want to do is select the proper color/wavelength that will give you the best contrast for your application. For example, if you are using a red area light that transmits at 617 nm (Figure 1), you will want to select a filter (Figure 3) to attach to the lens (Figure 2) that passes the frequency of the area light and filters out the rest of the color spectrum. This filter technique is called bandpass filtering (Figure 4).

This allows only the light from the area light to pass through while all other light is filtered out. To further illustrate the kinds of effects that can be emphasized or de-emphasized we can look at the following images of the same product but with different filters.
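The effect is easy to see with a back-of-the-envelope model. The sketch below approximates the filter as a Gaussian passband and compares how much of the LED light versus broadband ambient light gets through; every number in it is an assumption chosen only to illustrate the principle.

```python
import numpy as np

# Illustrative model only: the filter is approximated as a Gaussian passband and
# all numbers (center wavelength, bandwidth, light powers) are assumptions.
wavelengths = np.arange(380.0, 781.0)                  # visible range, 1 nm steps

center_nm, fwhm_nm = 620.0, 50.0                       # hypothetical bandpass filter
sigma = fwhm_nm / 2.355
transmission = np.exp(-0.5 * ((wavelengths - center_nm) / sigma) ** 2)

led = np.where(np.abs(wavelengths - 617.0) < 10.0, 1.0, 0.0)   # narrow red area light
ambient = np.full_like(wavelengths, 0.05)                      # broadband ambient light

led_through = np.sum(led * transmission)
ambient_through = np.sum(ambient * transmission)
ambient_unfiltered = np.sum(ambient)

print(f"LED passed: {led_through:.1f}  ambient passed: {ambient_through:.1f} "
      f"(vs {ambient_unfiltered:.1f} without the filter)")
```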

Another example of Bandpass filtering can be seen in (Figure 9), which demonstrates the benefit of using a filter in an application to read the LOT code and best before sell date. A blue LED light source and a blue Bandpass filter make the information readable, whereas without the filter it isn’t.

Figure 9

Narrow Bandpass Filtering

Narrow bandpass filtering, shown in Figure 10, is mostly used for laser line dimensional measurement applications, referenced in Figure 11. This technique creates more ambient light immunity than normal bandpass filtering. It also narrows the band of light reaching the sensor and creates a kind of black-on-white effect, which is the desired outcome for this application.

Shortpass Filtering

Another optical filtering technique is shortpass filtering, shown in (Figure 12), which is commonly used in color camera imaging because it filters out UV and IR light sources to give you a true color image.

Figure 12

Longpass Filtering

Longpass filtering, referenced in (Figure 13), is often used in IR applications where you want to suppress the visible light spectrum.

Figure 13

Neutral Density Filtering

Neutral density filtering is regularly used in LED inspection. Without filtering, light coming from the LEDs completely saturates the image making it difficult, if not impossible, to do a proper inspection. Deploying neutral density filtering acts like sunglasses for your camera. In short, it reduces the amount of full spectrum light the camera sees.
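The “sunglasses” effect can be quantified: a neutral density filter’s transmission is T = 10^-OD, where OD is its optical density. A tiny sketch:

```python
# Minimal sketch: a neutral density filter's transmission follows T = 10**(-OD),
# so a few common optical densities show how much full-spectrum light is removed.
for optical_density in (0.3, 0.6, 1.0, 2.0):
    transmission = 10 ** (-optical_density)
    print(f"OD {optical_density}: {transmission:.1%} of the light reaches the sensor")
```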

Polarization Filtering

Polarization filtering is best to use when you have surfaces that are highly reflective or shiny. Polarization filtering can be deployed to reduce glare on your target. You can clearly see the benefits of this in (Figure 14).

Figure 14

How flexible inspection capabilities help meet customization needs and deliver operational excellence

As the automotive industry introduces more options to meet the growing complexity and demands of its customers (such as an increased variety of trim options), it creates new challenges for automotive manufacturers.

Demands of the market filter directly back to the manufacturing floor of tier suppliers, who must find the means to fulfill market requirements on a flexible industrial network, whether new or existing. The success of their customers depends on the tier supply chain delivering within a tight timeline; if pressure is applied to that ecosystem, meeting JIT (just in time) supply requirements becomes more difficult, resulting in increased operating costs and potential penalties.

Meeting customer requirements creates operational challenges, including production time lost to product varieties and increased tool change time. Finding ways to simplify tool change and to validate that the correct components are placed in the correct assembly or module is now an industry priority for optimizing production. In addition, tracking and traceability are playing a strong role in ensuring the correct manufacturing process has been followed and implemented.

How can manufacturers implement highly flexible inspection capabilities, with direct communication to the process control network and/or MES network, that allow inspection characteristics to be changed on the fly for different product inspections on common tooling?

Smart Vision Inspection Systems

Compact smart vision inspection system technology has come a long way from the temperamental technologies of only a decade ago. Systems offered today have much more robust, simple and intuitive software tools embedded directly in the smart vision inspection device. These effective programming cockpit tools give the end user at the plant the capability to execute fast, reliable solutions with proven algorithm tools. Multi-network protocols such as EtherNet/IP, PROFINET, TCP/IP (Gigabit Ethernet) and IO-Link are now a reality. Having multiple network capabilities delivers the opportunity not just to communicate the inspection result to the programmable logic controller (via the process network) but also to send image data, independent of the process network, via the Gigabit Ethernet network to the cloud or MES system. The ability to overlay relevant information onto the image, such as VIN, lot code or date code, is now achievable. In addition, camera housings have become more industrially robust, for example aluminum housings with an ingress protection rating of IP67.

Industrial image processing is now a fixture within today’s manufacturing processes and is only growing. The technology can bring your company a step closer to enabling IIoT by bringing issues to your attention before they create downtime (predictive maintenance). It aids in reaching operational excellence by uncovering processing errors, reducing or eliminating scrap and providing meaningful feedback so corrective actions can be implemented.